In this article, we will explore the following:

- Understanding the need for Retrieval-Augmented Generation (RAG)
- Understanding EmbeddingModel, EmbeddingStore, DocumentLoaders, and EmbeddingStoreIngestor
- Working with different EmbeddingModels and EmbeddingStores
- Ingesting data into an EmbeddingStore
- Querying LLMs with data from an EmbeddingStore

Sample Code Repository
You can find the sample code for this article in the GitHub repository.
LangChain4j Tutorial Series
You can check out the other articles in this series:
- Part 1: Getting Started with Generative AI using Java, LangChain4j, OpenAI and Ollama
- Part 2: Generative AI Conversations using LangChain4j ChatMemory
- Part 3: LangChain4j AiServices Tutorial
- Part 4: LangChain4j Retrieval-Augmented Generation (RAG) Tutorial

Understanding the need for Retrieval-Augmented Generation (RAG)

In the previous articles, we saw how to ask questions and get responses from AI models.