Add support for reasoning_content field in chat completion messages for DeepSeek R1 #925
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master     #925      +/-   ##
==========================================
+ Coverage   98.46%   98.88%   +0.42%
==========================================
  Files          24       27       +3
  Lines        1364     1790     +426
==========================================
+ Hits         1343     1770     +427
+ Misses         15       14       -1
  Partials        6        6
@goodenough227 could you please point to the OpenAI API reference for this one? Or is this field only supported by DeepSeek currently?
I also have a need for this feature.
Sorry for the incorrect description generated by an AI code helper. The "reasoning_content" field is only supported by DeepSeek R1 currently.
Originally, this repository was targeted at the official OpenAI API. But, in the context of the official client popping up, I'm willing to merge this. My only ask here is: can we please clearly indicate that this field is not supported by OpenAI? Ideally, it should be obvious from the variable name and from reading the docs. Thank you!
Got it. I have written a comment on this field.
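For illustration, a minimal sketch of how such a documented field could look; the comment wording, the set of fields shown, and the JSON tag are assumptions for illustration rather than the PR's actual diff:

```go
// ChatCompletionMessage is a single message in a chat completion conversation
// (trimmed here to the fields relevant to this discussion).
type ChatCompletionMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`

	// ReasoningContent is NOT part of the official OpenAI API.
	// It is currently only returned by DeepSeek R1 style endpoints, which
	// expose the model's step-by-step reasoning separately from Content.
	ReasoningContent string `json:"reasoning_content,omitempty"`
}
```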
# Conflicts:
#	chat.go
#	openai_test.go
@sashabaranov I have added comments in a similar way as in the previous code, please review.
Hoping this can be merged and supported as soon as possible. Thank you for your contribution! @sashabaranov @goodenough227
Please review & approve, thank you @sashabaranov @goodenough227
mark
This is exactly what I want, please approve, thank you @sashabaranov @goodenough227
@@ -29,7 +29,7 @@ func setupAzureTestServer() (client *openai.Client, server *test.ServerTest, tea
 // numTokens Returns the number of GPT-3 encoded tokens in the given text.
 // This function approximates based on the rule of thumb stated by OpenAI:
-// https://beta.openai.com/tokenizer/
+// https://beta.openai.com/tokenizer.
Why is this changing?
This is to pass the golint check: golint requires the comment sentence to end with a period ("."), hence this change.
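For context on what that comment describes: OpenAI's rule of thumb is that one token corresponds to roughly four characters of English text. A hedged sketch of that kind of approximation (not necessarily the helper actually used in this repository) might look like:

```go
// approxTokens estimates the number of tokens in text using OpenAI's rule of
// thumb that one token is roughly four characters of English text.
func approxTokens(text string) int {
	return len(text) / 4
}
```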
chat_test.go (outdated)
-Content:          "Hello!",
 Role:             openai.ChatMessageRoleUser,
+Content:          "Hello!",
+ReasoningContent: "hi!",
Does this deserve its own test in isolation?
This is currently targeting a specific niche model.
I have removed the ReasoningContent section here but did not add constants and other test functions for the DeepSeek model, since it seems that, from your perspective, go-openai is primarily intended for OpenAI models. Please share your thoughts on this so I can align my subsequent changes accordingly.
> go-openai is primarily intended for OpenAI models.

This is my personal stance, yes. The addition of support for DeepSeek feels like a knee-jerk reaction. While I appreciate the flexibility your addition provides, it is still a divergence from the spirit of the repo. The o1 models, and the future path of OpenAI's API, may very well introduce a competing field. The compromise, if you will, is to limit the coupling, like here in the tests.
I understand. I have written test functions specifically for DeepSeek R1 using literal model names to minimize invasiveness to the main codebase.
Additionally, I'd like to know whether there are plans to support non-OpenAI models in the future, as many new models (like the DeepSeek open-source models) use OpenAI's data structures. We are currently using our forked branch of go-openai in this way.
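As an illustration of that approach, here is a hedged sketch of a test that keeps the DeepSeek coupling confined to test data by using a literal model name; the test name, the "deepseek-reasoner" literal, and the assumption that the new ReasoningContent field round-trips through JSON are illustrative, not taken from the PR:

```go
package openai_test

import (
	"encoding/json"
	"testing"

	openai "github.com/sashabaranov/go-openai"
)

func TestDeepseekR1ReasoningContent(t *testing.T) {
	// A literal model name keeps DeepSeek-specific strings out of the
	// library's exported constants and confined to this test.
	raw := `{
		"model": "deepseek-reasoner",
		"choices": [{
			"message": {
				"role": "assistant",
				"content": "Hello!",
				"reasoning_content": "The user greeted me, so I reply in kind."
			}
		}]
	}`

	var resp openai.ChatCompletionResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		t.Fatalf("failed to unmarshal response: %v", err)
	}
	if got := resp.Choices[0].Message.ReasoningContent; got == "" {
		t.Fatal("expected reasoning_content to be populated")
	}
}
```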
I see that openai/openai-python already supports ResponseReasoning, and I believe it will be supported in the official client soon. Whether we use the current PR or build another one, I hope we can support this soon. Thank you!
Any other questions? Hoping this can move forward soon.
…ns, including support for reasoning content in responses.
feat: Add support for reasoning_content field in chat completion messages
This PR adds support for the `reasoning_content` field in the ChatCompletionMessage struct to accommodate models that return their reasoning process separately from the main content. This feature is particularly useful for reasoning models such as DeepSeek R1, which return their step-by-step reasoning process through this field.
Changes:
- Added the `reasoning_content` field to the ChatCompletionMessage struct

Example response structure:
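As a hedged illustration (the values below are invented; the field names follow the OpenAI-compatible chat completion format that DeepSeek uses), the reasoning_content field appears alongside content in each choice's message:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "deepseek-reasoner",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "9.11 is smaller than 9.9.",
        "reasoning_content": "Compare the fractional parts: 0.11 is less than 0.9, so 9.11 < 9.9."
      },
      "finish_reason": "stop"
    }
  ]
}
```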