[Feature] Dealing with Formatted responses #7920
Comments
Hey @aiadvantageuser . I'll confess this was a bit long so I couldn't read everything, but should you perhaps use typed outputs in DSPy? You can specify a lot of structure in the output fields and their types, and you can make the types of some or all output fields be Pydantic models.
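For instance, a minimal sketch of a signature with a Pydantic-typed output field (the `Verdict` model and field names here are invented just for illustration):

```python
import dspy
from pydantic import BaseModel

# Hypothetical output model, just to show the pattern.
class Verdict(BaseModel):
    conclusion: str
    final_rating: int  # e.g. 1-5

class RateRequirement(dspy.Signature):
    """Rate how accurately the need is represented in the requirement."""
    question: str = dspy.InputField()
    verdict: Verdict = dspy.OutputField()

predict = dspy.Predict(RateRequirement)
# prediction = predict(question="...")  # prediction.verdict is a parsed Verdict
```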
Thank you for the fast answer @okhat . I tried it out with Pydantic models, specifically with this example:

Code:

```python
import dspy
from pydantic import BaseModel, Field, conint

class DebaterStatement(BaseModel):
    ...

class DebateRound(BaseModel):
    ...

class ModeratorDecision(BaseModel):
    ...

class DebateSchema(BaseModel):
    ...

class RequirementToRating(dspy.Signature):
    ...

predict = dspy.Predict(RequirementToRating)
print(predict(question="Certificates Replacement shall be completed within max 5000 ms"))
```

I replaced TypedPredictor with dspy.Predict as I saw in an earlier issue, but I still got the following error. Any idea what goes wrong?

Error:

```
ValueError                                Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\utils\callback.py:202, in with_callbacks.<locals>.wrapper(instance, *args, **kwargs)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\adapters\chat_adapter.py:85, in ChatAdapter.parse(self, signature, completion, _parse_values)
ValueError: Expected dict_keys(['debate_json']) but got dict_keys([])

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
TypeError: argument of type 'NoneType' is not iterable

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\utils\callback.py:202, in with_callbacks.<locals>.wrapper(instance, *args, **kwargs)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\predict\predict.py:154, in Predict.__call__(self, **kwargs)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\predict\predict.py:188, in Predict.forward(self, **kwargs)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\predict\predict.py:295, in v2_5_generate(lm, lm_kwargs, signature, demos, inputs, _parse_values)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\adapters\base.py:33, in Adapter.__call__(self, lm, lm_kwargs, signature, demos, inputs, _parse_values)
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\dspy\adapters\json_adapter.py:42, in JSONAdapter.__call__(self, lm, lm_kwargs, signature, demos, inputs, _parse_values)
AttributeError: module 'litellm' has no attribute 'UnsupportedParamsError'
```
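In case it helps with diagnosing this: the final AttributeError shows dspy's JSONAdapter referencing litellm.UnsupportedParamsError, which (as far as I understand) only exists in newer litellm releases, so this may just mean the installed litellm is stale. A quick check:

```python
# Print the installed litellm version and whether the attribute dspy
# expects is present; if it is missing, `pip install -U litellm` is the
# usual fix (an assumption based on the traceback, not verified).
from importlib.metadata import version
import litellm

print(version("litellm"))
print(hasattr(litellm, "UnsupportedParamsError"))  # False on older releases
```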
What feature would you like to see?
I am creating a prompt where I am simulating a debate. The prompt for my LM is the following:
This is a multi-round debate with 3 different roles:
- Affirmative debater: who tries to prove the rating is correct
- Negative debater: who tries to prove the rating is incorrect
- Moderator: who will compare the two debaters' arguments for the given requirement and evaluation criteria and draw a conclusion
The parameter the debate is about is the following:
The need is accurately represented in the requirement.
The ratings are from 1-5, 5 means strongly agree.
The debate goes on for three turns. The moderator has to draw a conclusion after the last round and, if there is a chosen side, give the final rating and answer by the end of the third round.
And for this prompt, the following json_schema is given as the response format:
```json
{
"name": "debate_schema",
"strict": true,
"schema": {
"type": "object",
"properties": {
"rounds": {
"type": "array",
"description": "Three rounds of debate.",
"items": {
"type": "object",
"properties": {
"affirmative": {
"type": "object",
"description": "The statement made by the affirmative debater.",
"properties": {
"argument": {
"type": "string",
"description": "Argument proving the rating is correct."
},
"rating": {
"type": "integer",
"description": "Rating given by the affirmative debater, ranging from 1 to 5."
}
},
"required": [
"argument",
"rating"
],
"additionalProperties": false
},
"negative": {
"type": "object",
"description": "The statement made by the negative debater.",
"properties": {
"argument": {
"type": "string",
"description": "Argument proving the rating is incorrect."
},
"rating": {
"type": "integer",
"description": "Rating given by the negative debater, ranging from 1 to 5."
}
},
"required": [
"argument",
"rating"
],
"additionalProperties": false
}
},
"required": [
"affirmative",
"negative"
],
"additionalProperties": false
}
},
"moderator": {
"type": "object",
"description": "The moderator's final conclusion.",
"properties": {
"conclusion": {
"type": "string",
"description": "The summary conclusion drawn by the moderator."
},
"final_rating": {
"type": "integer",
"description": "Final rating decided by the moderator, ranging from 1 to 5."
}
},
"required": [
"conclusion",
"final_rating"
],
"additionalProperties": false
}
},
"required": [
"rounds",
"moderator"
],
"additionalProperties": false
}
}
```
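This schema maps fairly directly onto Pydantic models, roughly as below (a sketch: the class names match the attempt quoted earlier, and the field descriptions and 1-5 bounds come from the schema):

```python
from pydantic import BaseModel, Field, conint

class DebaterStatement(BaseModel):
    argument: str = Field(description="The debater's argument.")
    rating: conint(ge=1, le=5) = Field(description="Rating from 1 to 5.")

class DebateRound(BaseModel):
    affirmative: DebaterStatement = Field(description="The statement made by the affirmative debater.")
    negative: DebaterStatement = Field(description="The statement made by the negative debater.")

class ModeratorDecision(BaseModel):
    conclusion: str = Field(description="The summary conclusion drawn by the moderator.")
    final_rating: conint(ge=1, le=5) = Field(description="Final rating decided by the moderator.")

class DebateSchema(BaseModel):
    rounds: list[DebateRound] = Field(description="Three rounds of debate.")
    moderator: ModeratorDecision = Field(description="The moderator's final conclusion.")
```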
For this I would like to create a DSPy module. I tried with the following code:
```python
import dspy

class RequirementToRating(dspy.Signature):
    # Input
    question: str = dspy.InputField()

class RatingRound(dspy.Module):
    def __init__(self):
        super().__init__()
        self.rating_extractor = dspy.Predict(RequirementToRating)
```
I would really appreciate it if you could give me ideas or any support on how to do this.
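Putting the pieces together, one plausible shape for the module is something like this (an unverified sketch: it assumes the DebateSchema model sketched above and a Pydantic-typed output field, which recent DSPy versions support):

```python
import dspy

class RequirementToRating(dspy.Signature):
    """Simulate a three-round affirmative/negative debate about the given
    requirement and return the rounds plus the moderator's final decision."""
    # Input
    question: str = dspy.InputField()
    # Output: the whole debate, parsed into the DebateSchema model above
    debate: DebateSchema = dspy.OutputField()

class RatingRound(dspy.Module):
    def __init__(self):
        super().__init__()
        self.rating_extractor = dspy.Predict(RequirementToRating)

    def forward(self, question: str):
        return self.rating_extractor(question=question)

# Usage sketch:
# result = RatingRound()(question="Certificates Replacement shall be completed within max 5000 ms")
# result.debate.moderator.final_rating  # -> int in 1..5
```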
Would you like to contribute?
Additional Context
No response