- Evaluate the potential TODAY: The market is very busy and still evolving, and many of the tools are not specific to software delivery tasks, or only cover a narrow range of them. If you want to try things today, Haiven offers a simple sandbox to do so.
- Upskilling: A mindset shift is needed to use AI assistants; they don't work like other software, and practical usage is the best way to learn. And yet, many people still lack the inspiration and ideas to try things in their daily software delivery work. Haiven's prompts and features offer a set of ideas to get people started, which they can then extrapolate to tasks more specific to their team or organisation.
- Stop the speculation: AI demos look magical, but many features are still in private preview or beta. At the same time, the technology is accessible enough to experiment with relatively cheaply yourself, find out which of YOUR frequent or difficult tasks AI can meaningfully help with, and then adjust your strategy based on what you're learning.
- Focus on the usage, not the tech: We have found that people get into the technical weeds of setting up AI, and Retrieval-Augmented Generation (RAG) in particular, neglecting the perspective of the users and how AI can actually help in a software delivery team's workflow. A lightweight, low-complexity setup that gets the basic technical challenges out of the way lets a team focus on real-life usage. We also recommend that you don't get too hung up on making the RAG setup ever more sophisticated, as we are already seeing interesting uplift and learnings from simpler technical approaches combined with careful data curation.
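To make the "simple setup plus careful curation" point concrete, here is a minimal sketch of what low-complexity, in-memory retrieval can look like. All names here are hypothetical illustrations, not Haiven's actual code: a handful of curated snippets are ranked with a plain bag-of-words cosine similarity, and the best matches are pasted into the prompt.

```python
import math
import re
from collections import Counter

# A small, hand-curated knowledge base. Curating a few high-quality
# snippets often beats indexing everything indiscriminately.
SNIPPETS = [
    "User stories should describe the who, what and why of a feature.",
    "An architecture decision record captures context, decision and consequences.",
    "Threat modelling asks what can go wrong and what we do about it.",
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, snippets: list[str], top_k: int = 1) -> list[str]:
    # Rank the curated snippets against the query; no vector database needed.
    q = Counter(tokenize(query))
    ranked = sorted(snippets, key=lambda s: cosine(q, Counter(tokenize(s))), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    # The retrieved context is simply prepended to the user's question
    # before sending it to whichever model provider you have access to.
    context = "\n".join(retrieve(query, SNIPPETS))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Real embeddings would of course rank better than word overlap, but the shape of the workflow is the same, and keeping it this simple shifts the effort to where it pays off: deciding what goes into the snippet set.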
- Chatbot builders like OpenAI's GPTs or Dify: You can absolutely cover part of Haiven's functionality with these platforms, e.g. build a GPT for writing user stories, or a Dify chatbot to help with architecture decision records. Their Retrieval-Augmented Generation capabilities are probably even better. Haiven might give you a bit more control over where your data goes, and it offers more flexibility to experiment with less chatbot-like interaction patterns in its "Guided mode".
- AI features being built into existing products like JIRA? We absolutely recommend that you try these new features out as they are released, and compare. The main difference for now is the level of flexibility: most tools currently land somewhere between a chatbot and ready-baked workflows that are hard to customize. With Haiven you can work closer to the raw flexibility of AI and try a wider range of things.
- Cloud providers' AI platforms like Amazon's Bedrock and Q, Google's Vertex AI Studio, or Azure's AI Studio? All of these "studios" offer a simple way to build applications like Haiven, by connecting foundation models and building vector-based knowledge bases for retrieval-augmented generation (RAG). For Haiven to work, you need to use one of these platforms to get access to models. You could even use their knowledge-base capabilities to connect more sophisticated RAG than the simple in-memory retrieval we use in Haiven at the moment.
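The "bring your own model access" relationship between an app like Haiven and these platforms can be sketched as a thin, provider-agnostic interface. This is a hypothetical illustration, not Haiven's actual code; a real implementation would wrap the Bedrock, Vertex AI, or Azure OpenAI client libraries behind the same interface.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The minimal capability the app needs from any of the cloud 'studios'."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    # Stand-in implementation for local experiments. A real one would
    # call boto3 (Bedrock), the Vertex AI SDK, or the Azure OpenAI client.
    def complete(self, prompt: str) -> str:
        return f"[stub response to: {prompt}]"

def write_user_story(model: ChatModel, feature: str) -> str:
    # Application code depends only on the ChatModel interface, so
    # switching cloud providers is a configuration change, not a rewrite.
    return model.complete(f"Write a user story for: {feature}")
```

Keeping the provider behind a seam like this is what makes it cheap to compare models from different platforms while you are still learning which tasks they help with.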
We want to emphasize that we are seeing here what we always see: the actual challenges of AI assistance for software teams are not in deploying the technology, and they are not just about tools. They are also about people and processes.
- Do your teams understand the strengths and weaknesses of AI and use them accordingly? Or are they trying to shoehorn RAG into being a drop-in replacement for any type of search, regardless of context? Software development is a design process: are they thinking of AI as just a glorified artifact generator, or are they actually finding ways to use it as an assistant in that design process?
- Are you opening a firehose of documentation to improve your AI assistance? GenAI amplifies indiscriminately, so you might find that the quality of the assistance depends on the quality of your documentation and data, and that you still need an approach to data curation to improve it.
- How are your teams making the mindset shift to AI assistance? Are they putting the assistant aside as soon as it gives them one bad answer, or are they properly leveraging the non-deterministic nature of AI?