dify/api/core/model_runtime/model_providers
Latest commit: 448a19bf54 by yihong0618 (2024-11-23 23:39:41 +08:00): fix: fish audio wrong validate credentials interface (#11019)

Each entry below lists the item name, its most recent commit message, and that commit's date.
__base feat: Allow using file variables directly in the LLM node and support more file types. (#10679) 2024-11-22 16:30:22 +08:00
anthropic feat: Allow using file variables directly in the LLM node and support more file types. (#10679) 2024-11-22 16:30:22 +08:00
azure_ai_studio chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) 2024-11-15 15:41:40 +08:00
azure_openai fix: Azure OpenAI o1 max_completion_token error (#10593) 2024-11-12 21:40:13 +08:00
baichuan refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
bedrock fix: aws presign url is not workable remote url (#10884) 2024-11-20 14:24:41 +08:00
chatglm chore: refurbish Python code by applying refurb linter rules (#8296) 2024-09-12 15:50:49 +08:00
cohere fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) 2024-11-21 10:34:43 +08:00
deepseek fix: response_format label (#8326) 2024-09-12 23:17:29 +08:00
fireworks refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
fishaudio fix: fish audio wrong validate credentials interface (#11019) 2024-11-23 23:39:41 +08:00
gitee_ai Gitee AI Qwen2.5-72B model (#10595) 2024-11-12 21:40:32 +08:00
google feat: support LLM process document file (#10966) 2024-11-22 19:32:44 +08:00
gpustack feat: add gpustack model provider (#10158) 2024-11-01 17:23:30 +08:00
groq Added Llama 3.2 Vision Models and Speech2Text Models for Groq (#9479) 2024-10-18 18:10:33 +08:00
huggingface_hub refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
huggingface_tei chore: format get_customizable_model_schema return value (#9335) 2024-10-21 19:05:44 +08:00
hunyuan fix: resolve the incorrect model name of hunyuan-standard-256k (#10052) 2024-10-30 15:43:29 +08:00
jina add jina rerank http timeout parameter (#10476) 2024-11-11 13:28:11 +08:00
leptonai chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
localai chore: format get_customizable_model_schema return value (#9335) 2024-10-21 19:05:44 +08:00
minimax add abab7-chat-preview model (#10654) 2024-11-13 19:30:42 +08:00
mistralai add MixtralAI Model (#8517) 2024-09-21 18:08:07 +08:00
mixedbread refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
moonshot fix: moonshot response_format raise error (#9847) 2024-10-25 14:59:55 +08:00
nomic refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
novita chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
nvidia refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
nvidia_nim chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
oci refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
ollama feat: support function call for ollama block chat api (#10784) 2024-11-20 11:15:19 +08:00
openai feat: Allow using file variables directly in the LLM node and support more file types. (#10679) 2024-11-22 16:30:22 +08:00
openai_api_compatible Resolve #8475: support rerank model from infinity (#10939) 2024-11-21 18:03:49 +08:00
openllm fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) 2024-11-21 10:34:43 +08:00
openrouter Support streaming output for OpenAI o1-preview and o1-mini (#10890) 2024-11-20 15:10:41 +08:00
perfxcloud refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
replicate refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
sagemaker chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) 2024-11-15 15:41:40 +08:00
siliconflow feat: add vlm models from siliconflow (#10704) 2024-11-14 20:53:35 +08:00
spark fix: Spark's large language model token calculation error #7911 (#8755) 2024-09-25 14:51:42 +08:00
stepfun chore: format get_customizable_model_schema return value (#9335) 2024-10-21 19:05:44 +08:00
tencent chore: refurbish Python code by applying refurb linter rules (#8296) 2024-09-12 15:50:49 +08:00
togetherai chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
tongyi feat: support LLM process document file (#10966) 2024-11-22 19:32:44 +08:00
triton_inference_server chore: format get_customizable_model_schema return value (#9335) 2024-10-21 19:05:44 +08:00
upstage refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
vertex_ai Fix: Correct the max tokens of Claude-3.5-Sonnet-20241022 for Bedrock and VertexAI (#10508) 2024-11-11 08:41:43 +08:00
vessl_ai fix: [VESSL-AI] edit some words in vessl_ai.yaml (#10417) 2024-11-11 08:38:26 +08:00
volcengine_maas fix: default max_chunks set to 1 as other providers (#10937) 2024-11-21 16:36:05 +08:00
voyage refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) 2024-10-17 19:12:42 +08:00
wenxin add llm: ernie-4.0-turbo-128k of wenxin (#10135) 2024-10-31 21:49:04 +08:00
x feat: add xAI model provider (#10272) 2024-11-05 14:42:47 +08:00
xinference chore: format get_customizable_model_schema return value (#9335) 2024-10-21 19:05:44 +08:00
yi feat: add yi custom llm integration (#9482) 2024-10-18 17:23:21 +08:00
zhinao chore(api/core): apply ruff reformatting (#7624) 2024-09-10 17:00:20 +08:00
zhipuai feat: support LLM process document file (#10966) 2024-11-22 19:32:44 +08:00
__init__.py Model Runtime (#1858) 2024-01-02 23:42:00 +08:00
_position.yaml feat: add voyage ai as a new model provider (#8747) 2024-09-29 16:55:59 +08:00
model_provider_factory.py feat: support pinning, including, and excluding for model providers and tools (#7419) 2024-08-21 11:16:43 +08:00
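For context on the two top-level files: the commit messages above suggest that _position.yaml controls the display order of providers (e.g. "add voyage ai as a new model provider", #8747) and that model_provider_factory.py handles pinning, including, and excluding providers (#7419). The snippet below is only a rough sketch of what such an ordering file could look like; the exact format is an assumption inferred from the file name and commit messages, not confirmed by this listing.

```yaml
# Hypothetical sketch of _position.yaml: a plain ordered list of provider
# directory names, presumed to be read by model_provider_factory.py when
# deciding how providers are ordered (and pinned/included/excluded).
- openai
- anthropic
- azure_openai
- google
- voyage   # e.g. the entry presumably added alongside the voyage provider (#8747)
```

Under that assumption, registering a new provider would amount to adding its directory (like gpustack or x above) and one line to this list; treat this as an illustration, not the project's documented procedure.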