docs(dreamcode): copilotExtension.registerFunction()
#11
Conversation
I'm looking into applying that API to @JasonEtco's existing code for the models app. The app has 4 functions; each function has a definition object and an async `execute` method.
… context provided by SDK
Here is a version for https://github.com/copilot-extensions/github-models-extension/blob/5f100491aefc04c04c387f3b373d8300e0eeccd8/src/functions/list-models.ts

My understanding of that particular function: the user sends a free-text question describing what they want to do with a model, the text is interpreted as a call to the "list_models" skill, and in that skill we want to amend the conversation by setting a system message followed by the user's question, which in turn is sent to the model and the response streamed back to the user.

```typescript
copilotExtension.registerSkill({
  name: "list_models",
  description: "This function lists the AI models available in GitHub Models.",
  async run() {
    const models = await this.modelsAPI.listModels();
    const systemMessage = [
      "The user is asking for you to recommend the right model for their use-case.",
      "Explain your reasoning, and why you recommend the model you choose.",
      "Provide a summary of the model's capabilities and limitations.",
      "Include any relevant information that the user should know.",
      "Use the available models to make your recommendation.",
      "The list of available models is as follows:",
    ];
    for (const model of models) {
      systemMessage.push(
        [
          `\t- Model Name: ${model.name}`,
          `\t\tModel Version: ${model.model_version}`,
          `\t\tPublisher: ${model.publisher}`,
          `\t\tModel Family: ${model.model_family}`,
          `\t\tModel Registry: ${model.model_registry}`,
          `\t\tLicense: ${model.license}`,
          `\t\tTask: ${model.task}`,
          `\t\tSummary: ${model.summary}`,
        ].join("\n")
      );
    }
    return systemMessage.join("\n");
  },
});
```

I guess we need to declare the above text to be a system message? I would assume that the message history would be available implicitly.
This is the line where the function is called with the chat history and the parameters as parsed by our model: My understanding is that all tool functions return an object that can then be used to send instructions to a model, as done here:
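As a sketch of that flow (the `runSkill` callback, the `Message` shape, and `handleSkillInvocation` are my assumptions for illustration, not the real SDK):

```typescript
// Hypothetical sketch: how a skill's return value could be folded into the
// conversation as a system message before the conversation is sent to the model.
type Message = { role: "system" | "user" | "assistant"; content: string };

async function handleSkillInvocation(
  runSkill: () => Promise<string>,
  history: Message[]
): Promise<Message[]> {
  // The skill returns plain text (e.g. the model list from list_models)...
  const skillOutput = await runSkill();
  // ...which is prepended as a system message, followed by the existing
  // conversation (including the user's original question).
  return [{ role: "system", content: skillOutput }, ...history];
}
```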
note to self: I think I now understand what @josebalius meant with "loop", thanks to this video: It's possible that a single prompt triggers multiple function calls, but the result of the first function call is needed before the second function call can be made. In that video @daveebbelaar describes an airport assistant: the user can ask to look up a flight, book flights, and file complaints. I'll ignore the complaints for simplicity. So a prompt like this
would result in
Here is what the code could look like:

```typescript
copilotExtension.registerFunction({
  name: "lookup_flight",
  description: "Look up a flight based on time, origin, and destination",
  parameters: {
    time: {
      type: "string",
      description:
        "The time when the flight should depart as an ISO 8601 date time string",
    },
    origin: {
      type: "string",
      description: "The airport short code for the origin of the flight",
    },
    destination: {
      type: "string",
      description: "The airport short code for the destination of the flight",
    },
  },
  async run({ time, origin, destination }) {
    const result = await myFlightLookupFunction(time, origin, destination);
    return {
      departureTime: result.departureTime,
      timezoneDifference: result.timezoneDifference,
      arrivalTime: result.arrivalTime,
      travelTime: result.travelTime,
      flightNumber: result.flightNumber,
      airline: result.airline,
      originCity: result.originCity,
      originCode: result.originCode,
      destinationCity: result.destinationCity,
      destinationCode: result.destinationCode,
    };
  },
});
```
```typescript
copilotExtension.registerFunction({
  name: "book_flight",
  description: "Book a flight based on flight number and date",
  parameters: {
    flightNumber: {
      type: "string",
      description: "The flight number",
    },
    date: {
      type: "string",
      description: "The date of the flight as an ISO 8601 date string",
    },
  },
  async run({ flightNumber, date }) {
    const result = await myFlightBookingFunction(flightNumber, date);
    return {
      flightNumber,
      departureTime: result.date,
      confirmationNumber: result.confirmationNumber,
      seat: result.seat,
    };
  },
});
```

The way this would work is
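Roughly, I'd expect a loop like the following (a sketch under assumptions: `callModel`, its reply shape, and the function registry are hypothetical names, and the real SDK would own this loop):

```typescript
// Hypothetical orchestration loop: keep calling the model, executing any
// function it requests and appending the result to the history, until the
// model answers with plain text.
type ChatMessage =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "function"; name: string; content: string };

type ModelReply =
  | { type: "text"; content: string }
  | { type: "function_call"; name: string; args: Record<string, unknown> };

type RegisteredFn = (args: Record<string, unknown>) => Promise<unknown>;

async function orchestrate(
  callModel: (history: ChatMessage[]) => Promise<ModelReply>,
  functions: Record<string, RegisteredFn>,
  history: ChatMessage[]
): Promise<string> {
  for (;;) {
    const reply = await callModel(history);
    // Plain text: we're done, stream this back to the user.
    if (reply.type === "text") return reply.content;
    // The model asked for a function (e.g. lookup_flight); run it, append
    // the result, and loop so the next call (e.g. book_flight) sees it.
    const result = await functions[reply.name](reply.args);
    history.push({
      role: "function",
      name: reply.name,
      content: JSON.stringify(result),
    });
  }
}
```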
Does that make sense?
It does, and this is why I said you are essentially implementing an LLM orchestrator. You'll have to attach not only the function message itself, but also the model's function call, so that the model can see in the history that it requested a function execution, i.e.
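For example, following the common chat-completions message shape (the exact field names here are an assumption based on the OpenAI-style API, not necessarily this SDK):

```typescript
// Hypothetical history after a lookup_flight round-trip: the assistant's own
// function-call request stays in the history alongside the function result,
// so the model can see that it asked for the execution.
const history = [
  { role: "user", content: "Book me a flight from AMS to SFO tomorrow morning" },
  {
    role: "assistant",
    content: null,
    function_call: {
      name: "lookup_flight",
      arguments: JSON.stringify({
        time: "2024-07-01T09:00:00Z",
        origin: "AMS",
        destination: "SFO",
      }),
    },
  },
  {
    role: "function",
    name: "lookup_flight",
    content: JSON.stringify({ flightNumber: "KL605", airline: "KLM" }),
  },
];
```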
The proposed API LGTM. Re: including the system prompt inside the skill itself, I think that's wrong. You want the system prompt defined at a higher level that's prevalent across all skill executions. Code should look like:

```typescript
copilotExtension.systemPrompt("You are a helpful travel assistant...");
copilotExtension.registerSkill(...);
copilotExtension.registerSkill(...);
copilotExtension.execute(...);
```
I replaced the `copilotExtension.registerSkill()` API with `copilotExtension.registerFunction()`. I think it makes more sense, unless you can think of a use case where an integrator would want to dynamically register functions based on user interactions? I'll probably go ahead and merge #20 as time is tight, but by its nature dreamcode is always up for discussion/revisions.
/cc @josebalius