It's a somewhat technical process. We use OpenAI's embedding and ChatGPT APIs, along with a vector database for index storage. All the documentation we ingest is first cleaned up, split into smaller chunks, and tagged with its source. We then create an embedding for each chunk and store it in our vector database index. When a user asks a question, we embed the question and run a hybrid search, combining semantic (embedding) similarity with keyword matching, to find the chunks closest to the query. We select the most relevant chunks, include them as context alongside the original question, and call the ChatGPT API to generate a response in Markdown, which is converted to HTML and shown to the user. Rough sketches of each stage follow.
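To make the ingestion step concrete, here's a minimal Python sketch. The chunk size, the overlap, and the `text-embedding-3-small` model are illustrative choices rather than our exact production settings, and the plain in-memory list stands in for a real vector database index:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split cleaned text into overlapping chunks so no passage is cut mid-thought."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def embed(texts: list[str]) -> list[list[float]]:
    """Create one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

# A plain list stands in for the vector database index in this sketch.
index: list[dict] = []

def ingest(doc_text: str, source: str) -> None:
    """Clean-up is assumed done upstream; chunk, embed, and store with a source tag."""
    chunks = chunk_text(doc_text)
    for chunk, vector in zip(chunks, embed(chunks)):
        index.append({"source": source, "text": chunk, "vector": vector})
```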
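Retrieval is the same idea in reverse: embed the question, then score every indexed chunk. The hybrid scoring below, cosine similarity blended with a simple keyword-overlap score through a hypothetical `alpha` weight, is one plausible way to combine semantic and keyword search; a production system would more likely lean on the vector database's own hybrid query support. It builds on the `embed` and `index` names from the sketch above:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the chunk."""
    terms = set(query.lower().split())
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def search(query: str, k: int = 4, alpha: float = 0.7) -> list[dict]:
    """Blend semantic similarity and keyword overlap, return the top-k chunks."""
    qvec = embed([query])[0]
    scored = [
        (alpha * cosine(qvec, item["vector"])
         + (1 - alpha) * keyword_score(query, item["text"]), item)
        for item in index
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:k]]
```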
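Finally, the top chunks go into the prompt and the Markdown answer is rendered to HTML. The model name and prompt wording here are placeholders, not our exact prompt, and the `markdown` package is one common way to handle the final conversion. This continues the sketches above, reusing `client` and `search`:

```python
import markdown  # pip install markdown

def answer(question: str) -> str:
    """Retrieve context, generate a Markdown answer, and render it as HTML."""
    context = "\n\n".join(f"[{c['source']}]\n{c['text']}" for c in search(question))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any ChatGPT-family model works
        messages=[
            {"role": "system",
             "content": "Answer using only the provided documentation excerpts."},
            {"role": "user",
             "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
        ],
    )
    md = resp.choices[0].message.content
    return markdown.markdown(md)  # Markdown -> HTML for display to the user
```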