Enhancing the user experience is key to improving any service, whether it’s a website, mobile app, or other digital product. Our plan is to take it to the next level by introducing CoPilot, an OpenAI-powered assistant that allows users to navigate through the system using natural language.
Why is this important?
Providing an AI-powered virtual assistant is crucial to meeting users’ needs and expectations. Let’s consider a scenario where you have a vast collection of recipes but aren’t sure which one to choose based on the ingredients you have. A virtual assistant could solve this problem: users simply list the ingredients they want to use, and the assistant returns the most relevant recipes, organized and ready to browse. Users could then ask the assistant step-by-step questions while making their meals.
How can we build CoPilot in 2023?
Since the end of last year, chatbots like ChatGPT have demonstrated the possibilities of natural language interfaces. However, these models are limited by the maximum context length they can consider for any given query. A recipe service contains far more data than could fit in a single prompt.
This is where our solution comes in: we use the ChatGPT Retrieval Plugin and LangChain, two powerful libraries that allow us to overcome this limitation. By encoding data into vector representations, we can store vast amounts of information and draw on it while generating responses. When a user asks a question, we encode it into a vector and compare it against the knowledge base to find the most relevant information. The assistant’s answer is then built from these knowledge pieces.
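The mechanics of vector-based retrieval can be sketched in a few lines. In production, the embeddings come from a learned model (e.g. OpenAI’s embeddings API) and live in a vector database; the tiny hand-made vectors and recipe names below are illustrative placeholders, not real embeddings.

```python
# Minimal sketch of vector retrieval: embed, compare, return the closest docs.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity of two vectors: close to 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend these are embeddings of recipe descriptions (toy values).
knowledge_base = {
    "tomato pasta": [0.9, 0.1, 0.0],
    "chicken curry": [0.1, 0.9, 0.2],
    "tomato soup": [0.8, 0.2, 0.1],
}

def top_k(query_vector, k=2):
    """Return the k documents whose vectors are closest to the query."""
    scored = sorted(
        knowledge_base.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [doc for doc, _ in scored[:k]]

# A query vector pointing toward "tomato" lands near the tomato recipes.
print(top_k([0.85, 0.15, 0.05]))  # ['tomato pasta', 'tomato soup']
```

The retrieved documents are then passed to the language model as context, so the answer is grounded in the knowledge base rather than in the model’s parameters alone.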
Our approach uses OpenAI API along with two open-source libraries:
- ChatGPT Retrieval Plugin: Encodes and indexes data to provide context for the assistant. It offers an API to encode documents (our knowledge base) and find similar documents for any query.
- LangChain: Wraps language models like ChatGPT to simplify building chatbots. It handles details like setting up the OpenAI API, incorporating external tools (e.g. search), and storing conversation context. We use LangChain to generate answers using the context from ChatGPT Retrieval Plugin and maintain information about past conversations with each user.
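To make the plugin’s query API concrete, here is a sketch of how a client might call a deployed Retrieval Plugin server. The `/query` endpoint and its request shape follow the plugin’s documentation at the time of writing; the URL and bearer token are placeholders you would replace with your own deployment’s values.

```python
# Sketch of querying a running ChatGPT Retrieval Plugin server.
import json

PLUGIN_URL = "http://localhost:8000"   # placeholder: wherever the server runs
BEARER_TOKEN = "YOUR_BEARER_TOKEN"     # placeholder: token set at deploy time

def build_query_request(question, top_k=3):
    """Assemble the URL, headers, and JSON body for a /query call."""
    payload = {"queries": [{"query": question, "top_k": top_k}]}
    headers = {
        "Authorization": f"Bearer {BEARER_TOKEN}",
        "Content-Type": "application/json",
    }
    return f"{PLUGIN_URL}/query", headers, json.dumps(payload)

url, headers, body = build_query_request("recipes with tomatoes and basil")
# A real call would then be: requests.post(url, headers=headers, data=body)
print(url)
print(body)
```

The response contains the most similar indexed documents for each query, which become the context passed to the language model.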
To build this solution, the key steps are:
- Set up a vector database – our recommendation is Pinecone, SmartCat’s official partner
- Deploy the ChatGPT Retrieval Plugin server
- Implement the query interface using LangChain
- Store user conversation history in memory
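The last step above, storing conversation history, is what LangChain’s memory classes handle for you. This plain-Python sketch (a simplified stand-in, not LangChain’s API) illustrates the underlying idea: keep a per-user transcript and fold it into each new prompt so the assistant can refer back to earlier turns.

```python
# Simplified per-user conversation memory, folded into the next prompt.
from collections import defaultdict

class ConversationMemory:
    """Keeps a transcript per user and renders it as prompt context."""

    def __init__(self):
        self.histories = defaultdict(list)

    def add_turn(self, user_id, question, answer):
        """Record one question/answer exchange for a user."""
        self.histories[user_id].append((question, answer))

    def render_context(self, user_id):
        """Format past turns so the model sees the earlier conversation."""
        lines = []
        for question, answer in self.histories[user_id]:
            lines.append(f"User: {question}")
            lines.append(f"Assistant: {answer}")
        return "\n".join(lines)

memory = ConversationMemory()
memory.add_turn("alice", "What can I cook with tomatoes?", "Try tomato pasta.")
memory.add_turn("alice", "How long does it take?", "About 25 minutes.")

# The next prompt includes the history plus the new question.
prompt = memory.render_context("alice") + "\nUser: Do I need basil?"
print(prompt)
```

Because the history is keyed by user, follow-up questions like “How long does it take?” resolve correctly against that user’s own earlier conversation.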
With this approach, we can create an AI-powered virtual assistant that understands natural language and provides relevant responses grounded in a large knowledge base. The assistant becomes more useful over time as new data is indexed and each user’s conversation history accumulates.
Written by: Igor Tica
April 11, 2023