
Default Data Types

By default, GenAI Stack supports the following data types:

CSV

To use a CSV file as a source, pass csv as the data type (the first argument to the add_source() method). For example:

from genai_stack.model import OpenAIGpt35Model

model = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "<your OpenAI API key>"}
)
model.add_source("csv", "valid_csv_path_or_url")

PDF

To use a PDF file as a source, pass pdf as the data type. For example:

from genai_stack.model import OpenAIGpt35Model

model = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "<your OpenAI API key>"}
)
model.add_source("pdf", "valid_pdf_path_or_url")

Web

To use a webpage as a source, pass web as the data type. For example:

from genai_stack.model import OpenAIGpt35Model

model = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "<your OpenAI API key>"}
)
model.add_source("web", "valid_web_url")

JSON

To use a JSON file as a source, pass json as the data type. For example:

from genai_stack.model import OpenAIGpt35Model

model = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "<your OpenAI API key>"}
)
model.add_source("json", "valid_json_path_or_url")

Markdown

To use a Markdown file as a source, pass markdown as the data type. For example:

from genai_stack.model import OpenAIGpt35Model

model = OpenAIGpt35Model.from_kwargs(
    fields={"openai_api_key": "<your OpenAI API key>"}
)
model.add_source("markdown", "valid_markdown_path_or_url")
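Since the data-type string must match the source format, it can be convenient to infer it from the path or URL before calling add_source(). The helper below is a hypothetical convenience function, not part of the GenAI Stack API; it only encodes the type names listed above:

```python
import os

# Extension -> GenAI Stack data-type string (from the list above).
# Illustrative helper only; not part of the GenAI Stack API.
EXTENSION_TO_TYPE = {
    ".csv": "csv",
    ".pdf": "pdf",
    ".json": "json",
    ".md": "markdown",
    ".markdown": "markdown",
}

def infer_data_type(path_or_url: str) -> str:
    """Return the add_source() data-type string for a path or URL."""
    if path_or_url.startswith(("http://", "https://")):
        # Strip any query string, then look at the extension;
        # URLs without a recognised extension are treated as web pages.
        ext = os.path.splitext(path_or_url.split("?")[0])[1].lower()
        return EXTENSION_TO_TYPE.get(ext, "web")
    ext = os.path.splitext(path_or_url)[1].lower()
    try:
        return EXTENSION_TO_TYPE[ext]
    except KeyError:
        raise ValueError(f"Unsupported data type: {ext!r}")
```

With this helper, model.add_source(infer_data_type(src), src) picks the right type string for local CSV, PDF, JSON, and Markdown files and falls back to web for plain URLs.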

To make predictions, execute the code snippet below:

response = model.predict("<Question on top of any of your data>")
print(response)
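The predict() call can also be wrapped in a simple question-and-answer loop. The sketch below works with any object exposing predict(str) -> str, such as the model built above; the loop itself is illustrative, not part of GenAI Stack:

```python
def chat_loop(model, get_input=input, put_output=print):
    """Repeatedly read a question and print model.predict()'s answer.

    Assumes `model` has a predict(str) -> str method, as shown above.
    Type 'quit' to exit the loop.
    """
    while True:
        question = get_input("Question (or 'quit'): ")
        if question.strip().lower() == "quit":
            break
        put_output(model.predict(question))
```

For example, chat_loop(model) starts an interactive session against whatever sources were added with add_source().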
