# LLMs

Model is the component that determines which LLM to run. It serves the model behind an HTTP server so that it can be accessed through an API endpoint, and it handles loading the model together with the preprocessing and postprocessing functions needed to parse the retrieval context and the user prompt into a proper input for inference. The response class can also be customized to fit the model's requirements: GenAI Stack supports a raw Response (strings or bytes) as well as a JsonResponse, with JsonResponse being the default.
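
As a rough illustration of this flow, here is a minimal, self-contained sketch. The class and method names (`EchoModel`, `preprocess`, `predict`, `postprocess`) and the small `JsonResponse` stand-in are assumptions made for this example only, not GenAI Stack's actual interface; see the custom-model guide linked below for the real API.

```python
import json


class JsonResponse:
    """Hypothetical stand-in for the library's JSON response class
    (the default response type described above)."""

    def __init__(self, payload: dict):
        self.body = json.dumps(payload)


class EchoModel:
    """Illustrative model component following the flow described above:
    preprocess merges the retrieval context with the user prompt,
    predict stands in for the actual LLM call, and postprocess
    wraps the raw output in a response class."""

    def preprocess(self, context: str, prompt: str) -> str:
        # Combine the retrieval context and user prompt into one input.
        return f"Context:\n{context}\n\nQuestion: {prompt}"

    def predict(self, model_input: str) -> str:
        # Placeholder for real model inference.
        return f"echo: {model_input}"

    def postprocess(self, raw_output: str) -> JsonResponse:
        # Shape the raw output into the default JSON response.
        return JsonResponse({"result": raw_output})


model = EchoModel()
prepared = model.preprocess("retrieved passage...", "What does Model do?")
response = model.postprocess(model.predict(prepared))
print(response.body)
```

In a real deployment these hooks would run inside the HTTP server mentioned above, with `predict` calling the loaded model.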

GenAI Stack ships with a few models so that some popular LLMs can be tried out of the box.

More models will be added in later releases. Contributions are welcome if you would like a model to be included.

### Supported Models

1. [OpenAI](https://genaistack.aiplanet.com/components/llms/openai)
2. [GPT4All](https://genaistack.aiplanet.com/components/llms/gpt4all)

### Custom Models

Instructions on how to create a custom model can be found [here](https://genaistack.aiplanet.com/components/llms/custom-model).

