dify/api/core/model_providers/models/llm
Latest commit: 52ebffa857 by wayne.wang, 2023-10-18 05:07:36 -05:00
fix: app config zhipu chatglm_std model, but it still use chatglm_lit… (#1377)
Co-authored-by: wayne.wang <wayne.wang@beibei.com>
__init__.py feat: server multi models support (#799) 2023-08-12 00:57:00 +08:00
anthropic_model.py feat: optimize anthropic connection pool (#1066) 2023-08-31 16:18:59 +08:00
azure_openai_model.py feat: remove llm client use (#1316) 2023-10-11 14:02:53 -05:00
baichuan_model.py fix: prompt for baichuan text generation models (#1299) 2023-10-10 13:01:18 +08:00
base.py feat: advanced prompt backend (#1301) 2023-10-12 10:13:10 -05:00
chatglm_model.py feat: hf inference endpoint stream support (#1028) 2023-08-26 19:48:34 +08:00
huggingface_hub_model.py fix: hf hosted inference check (#1128) 2023-09-09 00:29:48 +08:00
localai_model.py feat: add LocalAI local embedding model support (#1021) 2023-08-29 22:22:02 +08:00
minimax_model.py feat: optimize minimax llm call (#1312) 2023-10-11 07:17:41 -05:00
openai_model.py fix: max tokens of OpenAI gpt-3.5-turbo-instruct to 4097 (#1338) 2023-10-13 02:07:07 -05:00
openllm_model.py feat: hf inference endpoint stream support (#1028) 2023-08-26 19:48:34 +08:00
replicate_model.py feat: hf inference endpoint stream support (#1028) 2023-08-26 19:48:34 +08:00
spark_model.py feat: hf inference endpoint stream support (#1028) 2023-08-26 19:48:34 +08:00
tongyi_model.py fix: compatibility issues with the tongyi model. (#1310) 2023-10-11 05:16:26 -05:00
wenxin_model.py feat: support weixin ernie-bot-4 and chat mode (#1375) 2023-10-18 02:35:24 -05:00
xinference_model.py feat: hf inference endpoint stream support (#1028) 2023-08-26 19:48:34 +08:00
zhipuai_model.py fix: app config zhipu chatglm_std model, but it still use chatglm_lit… (#1377) 2023-10-18 05:07:36 -05:00