Weaviate


Weaviate:

We recommend this database for larger use cases. You have to deploy Weaviate separately and connect the running Weaviate instance to our stack.

We recommend running Weaviate without any vectorizer module, so that the embedding component is the one that creates embeddings from your documents.

Installation

Prerequisites: docker and docker-compose

Here is the docker-compose configuration:

  • This is a sample docker-compose file for installing weaviate without any vectorizer modules.

version: "3.4"
services:
  weaviate:
    image: semitechnologies/weaviate:1.20.1
    ports:
      - 8080:8080
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: "none"
      CLUSTER_HOSTNAME: "node1"
    volumes:
      - weaviate_db:/var/lib/weaviate

volumes:
  weaviate_db:
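After `docker-compose up`, it can help to confirm the instance is reachable before wiring it into the stack. The helper below is not part of GenAI Stack; it is a small standalone sketch that hits Weaviate's standard readiness endpoint (`/v1/.well-known/ready`) with only the Python standard library:

```python
import urllib.request
import urllib.error


def weaviate_is_ready(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the Weaviate readiness endpoint answers with HTTP 2xx."""
    url = base_url.rstrip("/") + "/v1/.well-known/ready"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: the instance is not up (yet)
        return False
```

Call it with the same `url` you will later pass to `Weaviate.from_kwargs`, e.g. `weaviate_is_ready("http://localhost:8080/")`.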

Supported Arguments:

url: str
text_key: str
index_name: str
auth_client_secret: Optional[AuthCredentials] = None
timeout_config: Optional[tuple] = (10, 60)
additional_headers: Optional[dict] = None
startup_period: Optional[int] = 5
search_method: Optional[SearchMethod] = SearchMethod.SIMILARITY_SEARCH
search_options: Optional[dict] = Field(default_factory=dict)

Supported Search Methods:

  • similarity_search

    • Search Options:

      • k: The number of top results to return.

  • max_marginal_relevance_search

    • Search Options

      • k: Number of Documents to return. Defaults to 4.

      • fetch_k: Number of Documents to fetch to pass to MMR algorithm.

      • lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

Usage

A vector database needs an embedding function, and you connect these two components through a Stack.

from langchain.docstore.document import Document as LangDocument

from genai_stack.vectordb.weaviate_db import Weaviate
from genai_stack.embedding.utils import get_default_embedding
from genai_stack.stack.stack import Stack


embedding = get_default_embedding()
# Connect to the running Weaviate instance
weaviatedb = Weaviate.from_kwargs(url="http://localhost:8080/", index_name="Testing", text_key="test")
weaviate_stack = Stack(model=None, embedding=embedding, vectordb=weaviatedb)

# Add your documents
weaviate_stack.vectordb.add_documents(
    documents=[
        LangDocument(
            page_content="Some page content explaining something",
            metadata={"some_metadata": "some_metadata"},
        )
    ]
)

# Search for your documents
result = weaviate_stack.vectordb.search("page")
print(result)

You can also use different search methods and search options for more complicated use cases:

weaviate_db = Weaviate.from_kwargs(
    url="http://localhost:8080/",
    index_name="Testing",
    text_key="test",
    search_method="max_marginal_relevance_search",
    search_options={"k": 2, "fetch_k": 10, "lambda_mult": 0.3},
)

Note: Weaviate expects the class name (index_name) in PascalCase; otherwise you may run into confusing "index not found" errors.
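If your index names come from user input or configuration, a small normalizer can enforce PascalCase before passing the value as `index_name`. This helper is hypothetical (not part of GenAI Stack or the Weaviate client), shown only as a convenience sketch:

```python
import re


def to_pascal_case(name: str) -> str:
    """Normalize an arbitrary name to PascalCase for use as a Weaviate class name."""
    # Split on any run of non-alphanumeric characters, then capitalize each part
    parts = re.split(r"[^0-9a-zA-Z]+", name)
    return "".join(p[:1].upper() + p[1:] for p in parts if p)
```

For example, `to_pascal_case("my test-index")` yields `"MyTestIndex"`, which Weaviate accepts as a class name.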
