
details of deepseek-r1's reasoning_content should not be returned to the model for inference #13692

Open
sinopec opened this issue Feb 13, 2025 · 2 comments · May be fixed by #14372
Labels
🐞 bug Something isn't working

Comments

sinopec commented Feb 13, 2025

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.15.3

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Integrate a DeepSeek R1 model hosted locally on vLLM, with reasoning_content supported. Dify displays the thinking process correctly inside the detail tag. However, in multi-turn conversations, the thinking process from previous turns is also included in the final prompt of the current turn.

DeepSeek's official recommendation is that the thinking content should not be carried over to the next conversation's prompt.
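
For reference, the handling DeepSeek describes looks roughly like the sketch below — a minimal example assuming a vLLM OpenAI-compatible endpoint started with a reasoning parser (e.g. `--reasoning-parser deepseek_r1`); the base URL, API key, and model name are placeholders:

```python
# Minimal sketch of the recommended multi-turn handling. Endpoint, key, and
# model name are placeholders for a locally hosted vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

messages = [{"role": "user", "content": "What is 2 + 2?"}]
resp = client.chat.completions.create(model="deepseek-r1", messages=messages)
msg = resp.choices[0].message

# With a reasoning parser enabled, the chain of thought arrives in a separate
# reasoning_content field; show it to the user, but do not store it in history.
reasoning = getattr(msg, "reasoning_content", None)

# Per DeepSeek's guidance, only the final answer goes back into the history;
# the reasoning must not be carried into the next turn's prompt.
messages.append({"role": "assistant", "content": msg.content})
messages.append({"role": "user", "content": "Now multiply that by 3."})
resp = client.chat.completions.create(model="deepseek-r1", messages=messages)
```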

✔️ Expected Behavior

The thinking process between the <think> tags should not be included in the current prompt.

❌ Actual Behavior

The thinking process from every previous conversation turn is included in the current prompt.

@dosubot dosubot bot added the 🐞 bug Something isn't working label Feb 13, 2025

dosubot bot commented Feb 13, 2025

Hello @sinopec! I'm here to help you with any bugs, questions, or contributions you have for the project. Let me know how I can assist you!

To address the issue where the reasoning content within the <detail> tag is being included in the prompt for subsequent conversation turns, you can consider using Dify's mechanisms for filtering or excluding parts of the conversation. Dify allows you to set Stop Sequences to signal the LLM to stop generating text at specific points, which can help in excluding unwanted content from the prompt. Additionally, in Expert Mode, you can adjust the position of context and chat history within the prompts, providing more control over what is included [1].

If these options do not resolve the issue, you might need to look into customizing the prompt construction logic to ensure that the reasoning content is excluded as per DeepSeek's recommendations.
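
As an illustration of that last option, a hypothetical filter could strip inline <think>…</think> spans from assistant turns before the history is replayed — the function name and message shape below are illustrative, not actual Dify internals:

```python
import re

# Matches an inline reasoning span as emitted by DeepSeek R1, plus any
# trailing whitespace left behind after removal.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_reasoning(history: list[dict]) -> list[dict]:
    """Return a copy of the chat history with reasoning removed from assistant turns."""
    cleaned = []
    for turn in history:
        if turn["role"] == "assistant":
            turn = {**turn, "content": THINK_RE.sub("", turn["content"]).strip()}
        cleaned.append(turn)
    return cleaned

# Usage: clean the stored history before building the next turn's prompt.
# prompt_messages = strip_reasoning(stored_history) + [current_user_turn]
```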




@crazywoola
Member

#13521 (comment)

@just-wang just-wang linked a pull request Feb 26, 2025 that will close this issue