
Chat on CSV

Python Implementation

Importing Components

from genai_stack.stack.stack import Stack
from genai_stack.etl.langchain import LangchainETL
from genai_stack.embedding.langchain import LangchainEmbedding
from genai_stack.vectordb.chromadb import ChromaDB
from genai_stack.prompt_engine.engine import PromptEngine
from genai_stack.model.gpt3_5 import OpenAIGpt35Model
from genai_stack.retriever.langchain import LangChainRetriever
from genai_stack.memory.langchain import ConversationBufferMemory

Initializing Stack Components

ETL

etl = LangchainETL.from_kwargs(name="CSVLoader", fields={"file_path": "/path/sample.csv"})
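Conceptually, a CSV loader of this kind turns each row into a small text document that can later be embedded and indexed. A minimal standard-library sketch of that idea (an illustration of the concept, not genai_stack's or LangChain's actual internals):

```python
import csv
import io

def rows_to_documents(csv_text: str) -> list[str]:
    """Toy ETL step: turn each CSV row into a 'key: value' text document."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return ["\n".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

sample = "name,role\nAda,engineer\nAlan,scientist\n"
docs = rows_to_documents(sample)
print(docs)
```

Each resulting document is a candidate for embedding in the next step.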

Embeddings

config = {
    "model_name": "sentence-transformers/all-mpnet-base-v2",
    "model_kwargs": {"device": "cpu"},
    "encode_kwargs": {"normalize_embeddings": False},
}
embedding = LangchainEmbedding.from_kwargs(name="HuggingFaceEmbeddings", fields=config)
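The `normalize_embeddings` flag controls whether vectors are scaled to unit length; once normalized, a plain dot product equals cosine similarity, which is why some setups enable it. A small numeric illustration of that relationship (toy 2-d vectors, not real sentence embeddings):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [3.0, 4.0], [4.0, 3.0]
# Cosine similarity computed the long way...
cosine = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
# ...equals the dot product of the normalized vectors.
print(cosine, dot(normalize(a), normalize(b)))
```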

VectorDB

chromadb = ChromaDB.from_kwargs()
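A vector database such as ChromaDB stores embedding vectors alongside their source text and, given a query vector, returns the most similar entries. A toy in-memory stand-in for that lookup (illustrative only, not ChromaDB's API):

```python
import math

class ToyVectorStore:
    """In-memory stand-in for a vector DB: store (vector, text) pairs and
    return the texts whose vectors are most cosine-similar to a query."""

    def __init__(self):
        self.items = []  # list of (vector, text)

    def add(self, vector, text):
        self.items.append((vector, text))

    def search(self, query, k=1):
        def cos(a, b):
            num = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return num / (na * nb)

        ranked = sorted(self.items, key=lambda it: cos(it[0], query), reverse=True)
        return [text for _, text in ranked[:k]]

store = ToyVectorStore()
store.add([1.0, 0.0], "row about pricing")
store.add([0.0, 1.0], "row about support")
print(store.search([0.9, 0.1]))
```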

Model

llm = OpenAIGpt35Model.from_kwargs(parameters={"openai_api_key": "your-api-key"})

Prompt Engine

prompt_engine = PromptEngine.from_kwargs(should_validate=False)

Retriever

retriever = LangChainRetriever.from_kwargs()

Memory

memory = ConversationBufferMemory.from_kwargs()
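ConversationBufferMemory keeps the full chat history in order, so later questions can refer back to earlier turns. Conceptually it is just an append-only buffer of (human, ai) pairs (a toy sketch of the idea, not the LangChain class):

```python
class ToyBufferMemory:
    """Append-only conversation buffer: the idea behind ConversationBufferMemory."""

    def __init__(self):
        self.turns = []  # list of (human, ai) pairs in order

    def save(self, human: str, ai: str):
        self.turns.append((human, ai))

    def as_text(self) -> str:
        """Render the history the way it would be injected into a prompt."""
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

mem = ToyBufferMemory()
mem.save("What is in the CSV?", "It lists employees and roles.")
print(mem.as_text())
```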

Initializing Stack

Stack

stack = Stack(
    etl=etl,
    embedding=embedding,
    vectordb=chromadb,
    model=llm,
    prompt_engine=prompt_engine,
    retriever=retriever,
    memory=memory,
)

Performing ETL operations

run() executes the Extract, Transform, and Load operations: it reads the CSV file, embeds the rows, and loads the resulting vectors into the configured vector database.

etl.run()

Now you can start asking queries.

response = retriever.retrieve("your query")
print(response)
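Under the hood, a retrieve call roughly chains the components wired into the Stack: embed the query, fetch similar documents from the vector store, fill a prompt with them, and pass that prompt to the model. A hedged sketch of the flow with stand-in components (the function names and stand-ins below are illustrative, not genai_stack's API):

```python
def answer_query(query, embed, search, build_prompt, generate, k=2):
    """Illustrative RAG flow: embed the query, fetch similar documents,
    fill a prompt with them, and ask the model."""
    docs = search(embed(query), k)
    return generate(build_prompt(query, docs))

# Stand-in components so the sketch runs end to end.
embed = lambda text: [float(len(text))]                    # fake "embedding"
search = lambda vec, k: ["name: Ada\nrole: engineer"][:k]  # fake vector DB hit
build_prompt = lambda q, docs: "Context:\n" + "\n".join(docs) + "\nQuestion: " + q
generate = lambda prompt: "Answer based on " + str(len(prompt)) + "-char prompt"

print(answer_query("Who is the engineer?", embed, search, build_prompt, generate))
```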