
Chat on PDF

Python Implementation

Importing Components

from genai_stack.stack.stack import Stack
from genai_stack.etl.langchain import LangchainETL
from genai_stack.embedding.langchain import LangchainEmbedding
from genai_stack.vectordb.chromadb import ChromaDB
from genai_stack.prompt_engine.engine import PromptEngine
from genai_stack.model.gpt3_5 import OpenAIGpt35Model
from genai_stack.retriever.langchain import LangChainRetriever
from genai_stack.memory.langchain import ConversationBufferMemory

Initializing Stack Components

ETL

etl.json

{
    "name": "PyPDFLoader",
    "fields": {
        "file_path": "/path/to/sample.pdf"
    }
}
etl = LangchainETL.from_config_file(config_file_path="/path/to/etl.json")
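Rather than writing etl.json by hand, you can also generate it from Python before loading it. The sketch below uses only the standard library; `write_etl_config` is a hypothetical helper (not part of genai_stack), and the PDF path is a placeholder you must replace.

```python
import json

def write_etl_config(config_path, pdf_path):
    """Write the ETL config shown above to disk and return it."""
    config = {
        "name": "PyPDFLoader",
        "fields": {"file_path": pdf_path},
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
    return config

config = write_etl_config("etl.json", "/path/to/sample.pdf")
print(config["name"])  # PyPDFLoader
```

The same pattern works for embeddings.json and prompt_engine.json below.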

Embeddings

embeddings.json

{
    "name": "HuggingFaceEmbeddings",
    "fields": {
        "model_name": "sentence-transformers/all-mpnet-base-v2",
        "model_kwargs": { "device": "cpu" },
        "encode_kwargs": { "normalize_embeddings": false }
    }
}
embedding = LangchainEmbedding.from_config_file(config_file_path="/path/to/embeddings.json")

VectorDB

chromadb = ChromaDB.from_kwargs()

Model

llm = OpenAIGpt35Model.from_kwargs(parameters={"openai_api_key": "your-api-key"})

Prompt Engine

prompt_engine.json

{
    "should_validate": false
}
prompt_engine = PromptEngine.from_config_file(config_file_path="/path/to/prompt_engine.json")

Retriever

retriever = LangChainRetriever.from_kwargs()

Memory

memory = ConversationBufferMemory.from_kwargs()
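A conversation buffer memory keeps the raw chat history so earlier turns can be supplied as context for the next query. Conceptually it behaves like the toy class below; this is an illustrative sketch, not genai_stack's actual implementation.

```python
class ToyConversationBuffer:
    """Illustrative stand-in for a conversation buffer memory:
    it appends every (user, assistant) turn and replays the full
    history as context for the next prompt."""

    def __init__(self):
        self.turns = []

    def add_turn(self, user_message, assistant_message):
        self.turns.append((user_message, assistant_message))

    def as_context(self):
        # Flatten the history into text that can precede the next prompt.
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

memory_sketch = ToyConversationBuffer()
memory_sketch.add_turn("What is this PDF about?", "It is a sample document.")
print(memory_sketch.as_context())
```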

Initializing Stack

Stack

Stack(
    etl=etl,
    embedding=embedding,
    vectordb=chromadb,
    model=llm,
    prompt_engine=prompt_engine,
    retriever=retriever,
    memory=memory
)

Performing ETL operations

run() executes the Extract, Transform, and Load steps: the configured loader reads the PDF, the documents are embedded, and the resulting vectors are stored in the vector database.

etl.run()

Now you can start asking queries.

response = retriever.retrieve("your query")
print(response)
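Under the hood, retrieval of this kind ranks the stored chunks by vector similarity to the embedded query. The self-contained sketch below illustrates that idea with toy 3-dimensional vectors; it is not the real sentence-transformer embeddings or ChromaDB.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve_most_similar(query_vec, chunks):
    """chunks: list of (text, vector) pairs, as a vector DB would store them.
    Returns the chunk text whose vector is closest to the query."""
    return max(chunks, key=lambda c: cosine_similarity(query_vec, c[1]))[0]

# Toy "embeddings" standing in for real document-chunk vectors.
chunks = [
    ("Chapter 1: introduction", [1.0, 0.0, 0.0]),
    ("Chapter 2: methods", [0.0, 1.0, 0.0]),
    ("Chapter 3: results", [0.0, 0.2, 1.0]),
]
print(retrieve_most_similar([0.1, 0.1, 0.9], chunks))  # Chapter 3: results
```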