DeepSeek-R1's reasoning_content should not be returned to the model for inference #13692
Labels: 🐞 bug (Something isn't working)
Dify version
0.15.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I integrated a DeepSeek-R1 model hosted locally on vLLM, which supports reasoning_content. Dify displays the thinking process correctly inside the details tag. However, in multi-turn conversations, the thinking process from previous turns is also included in the final prompt of the current turn.
DeepSeek's official recommendation is that thinking content should not be carried over into the next turn's prompt.
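
For reference, a minimal sketch of the sanitization I would expect (the helper name `strip_reasoning` is illustrative, not Dify's actual code): prior assistant turns should have their `<think>...</think>` blocks removed before the history is sent back to the model.

```python
import re

# Matches a <think>...</think> block, including newlines inside it.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_reasoning(messages: list[dict]) -> list[dict]:
    """Return a copy of the chat history with reasoning content removed
    from assistant messages, keeping only their final answers."""
    cleaned = []
    for msg in messages:
        if msg.get("role") == "assistant":
            content = THINK_TAG_RE.sub("", msg.get("content", "")).strip()
            cleaned.append({**msg, "content": content})
        else:
            cleaned.append(msg)
    return cleaned
```

A fix would presumably apply something like this in Dify's prompt-assembly step, on the stored conversation history, before each new request to the vLLM endpoint.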
✔️ Expected Behavior
The thinking process between the `<think>` tags of previous turns should not be included in the current prompt; only the final answers should be sent back to the model.
❌ Actual Behavior
The thinking process from all previous turns is included in the current prompt.