- Clone the repo to your local machine.
- Sign up for a free Algolia account and create an application.
- Create environment variables that the application can access called `ALGOLIA_APPLICATION_ID` and `ALGOLIA_API_KEY`, using the Application ID and Search API Key from the API Keys page in Algolia. Note that you can't use the Search API Key to edit or add to an index; the project as-is only reads the data in the indexes we'll be creating in a moment, so we'll be fine. But if you plan to extend the project in a way that requires writing to an index, you'll need the Write API Key.
- Create two indexes in that application in Algolia called `announcements` and `orders`. Seed them with the JSON data from the `/searchable_data` folder of the repo, unless you'd like to experiment with your own data (see the seeding sketch just after this list). The records are super simple to reverse-engineer, and they're only going to get read by the LLM (which doesn't care about structure or consistency), so have fun with it.
- Create an OpenAI account and add the minimum amount of funds on the Billing page. The minimum as of August 2024 is $5 USD. You likely won't use all of it, though, because this application uses the GPT-4o mini model, the most cost-effective one at the time of writing. The entire development and testing process for this repository used up $0.01 USD; the other $4.99 USD will just sit there for future development.
- If you plan on making a publicly available version of this demo, make sure to set usage limits for your OpenAI account to prevent abuse.
- Create an API key for your OpenAI account and save it as an environment variable called `OPENAI_API_KEY` where the application can access it.
- From the root folder of the repo, run `node src/index.js`. It'll start an Express server which you can access at http://localhost:3000/.
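If you'd rather seed the indexes from a script than paste records into the Algolia dashboard, a minimal sketch like the one below works, assuming the v4 `algoliasearch` client (`npm install algoliasearch`) and a key with write permissions. The `ALGOLIA_WRITE_API_KEY` variable name and the `announcements.json` filename are assumptions here; match them to your actual key and whatever files are in `/searchable_data`.

```js
// seed.js: a one-off seeding sketch, not a file in the repo.
// Assumes ALGOLIA_WRITE_API_KEY holds a key with write permissions
// (the app itself only needs the Search API Key).
const algoliasearch = require('algoliasearch');
const fs = require('fs');

const client = algoliasearch(
  process.env.ALGOLIA_APPLICATION_ID,
  process.env.ALGOLIA_WRITE_API_KEY
);

async function seed(indexName, filePath) {
  // Assumes each file contains a JSON array of records.
  const records = JSON.parse(fs.readFileSync(filePath, 'utf8'));
  // autoGenerateObjectIDIfNotExist spares you from adding objectIDs by hand.
  await client
    .initIndex(indexName)
    .saveObjects(records, { autoGenerateObjectIDIfNotExist: true });
  console.log(`Seeded ${records.length} records into "${indexName}"`);
}

(async () => {
  await seed('announcements', './searchable_data/announcements.json');
  await seed('orders', './searchable_data/orders.json');
})();
```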
The `authenticateUser` function on line 104 of `/src/index.js` doesn't actually authenticate the user; it just always returns the same user information. This lines up with the user who made several orders to 321 Maple Street, Metropolis, NY 10001 in the dummy data in `/searchable_data/orders.json`. So if you ask the AI agent about one of those orders, it will respond with the correct information because it can look up that order in the Algolia index. If you ask about an order from another customer without modifying the `authenticateUser` function, it won't find any results.

Exercise: Can you integrate your chosen authentication process right into the chat window so it's almost completely frictionless? Can you modify the `searchOrders` tool function starting on line 75 of `/src/index.js` to return an authentication request if the order exists, but the current customer isn't authorized to access it? (A sketch of that check follows below.)
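For the second part of that exercise, here is one hedged sketch of the shape the check could take. The `ordersIndex` and `currentUser` names, the `customerId` field, and the response format are all illustrative, not the repo's actual code:

```js
// Illustrative only; the real searchOrders on line 75 of /src/index.js differs.
// Assumes `ordersIndex` is the Algolia `orders` index, `currentUser` came from
// authenticateUser(), and each record carries a hypothetical `customerId` field.
async function searchOrders(query, currentUser) {
  const { hits } = await ordersIndex.search(query);
  const visible = hits.filter((hit) => hit.customerId === currentUser.id);

  if (visible.length > 0) return { results: visible };

  if (hits.length > 0) {
    // The order exists but belongs to someone else: rather than a silent
    // "no results", hand the LLM a signal that it should ask the user to
    // verify their identity.
    return {
      results: [],
      authenticationRequired: true,
      message:
        'A matching order exists, but the customer must verify their identity to view it.',
    };
  }

  return { results: [] };
}
```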
The jury is still out on the ethics of putting AI agents into customer service roles without informing the customers that they're communicating with an LLM. This application strikes a compromise by giving the AI a name that it can use if it is asked, while clearly identifying the agent as artificial in the title of the chat window. This makes the agent more personable without misleading anyone.

Exercise: Can you think of a different way to get the AI to identify itself as non-human without making it feel robotic?
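For context, that kind of disclosure usually lives in the system prompt. A sketch, where the name and wording are illustrative rather than the repo's actual prompt:

```js
// Illustrative wording only; the repo's actual system prompt may differ.
const systemPrompt = `You are "Ava," an AI customer service assistant.
If a customer asks your name, you may say "Ava," but if they ask whether
you are human, state clearly that you are an AI assistant. Never claim
to be a person.`;
```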
Currently this application does not handle the edge cases where the LLM's response is cut off for exceeding the length limit or is blocked by the content filter.

Exercise: How would you inform the user when the LLM's response terminated for one of these reasons, and continue the conversation gracefully where appropriate?
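As a starting point: the OpenAI Chat Completions API reports why generation stopped in `choices[0].finish_reason` (`stop`, `length`, `content_filter`, or `tool_calls`), so one approach is to branch on that value before rendering the reply. The user-facing wording below is illustrative:

```js
// Sketch only; adapt to however the app builds its chat responses.
const choice = completion.choices[0];
let reply = choice.message.content ?? '';

if (choice.finish_reason === 'length') {
  // The model hit its token ceiling mid-answer: say so, and invite the
  // user to ask for the rest rather than leaving the reply dangling.
  reply += '\n\n(That answer was cut off for length. Ask me to continue and I will pick up where I left off.)';
} else if (choice.finish_reason === 'content_filter') {
  // The response was withheld by the content filter: replace the empty
  // or truncated text instead of showing a blank message.
  reply = "I'm sorry, but I can't help with that particular request. Is there anything else I can do for you?";
}
```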