Maximum Privacy: Connecting to Your Local LLM
While CSV GPT defaults to operating with the utmost privacy—your raw CSV data is processed locally in the browser using DuckDB WASM—the LLM feature relies on sending your table schema and instructions to an AI model to generate SQL. If you are dealing with heavily regulated data, even sending the schema may be a step too far.
That's why CSV GPT integrates seamlessly with locally hosted AI models: you can get excellent results running tools like Ollama, llama.cpp, or LM Studio right on your own machine.
Setting Up a Local Model
To get started, you'll need an OpenAI-compatible API running locally. Here is how to do it with Ollama, one of the easiest solutions:
- Download and install Ollama for your operating system.
- Open your terminal and run a model suited for code generation, for example: `ollama run qwen2.5-coder:7b`.
- Ollama automatically exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`.
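Before wiring up the app, it can help to sanity-check the endpoint yourself. The sketch below builds a standard OpenAI-style chat-completions request against the URL above using only the Python standard library; the prompt text is just an illustration, and the model tag must match whatever you actually pulled.

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but the OpenAI wire format expects one.
            "Authorization": "Bearer local",
        },
    )

req = build_chat_request("qwen2.5-coder:7b", "Say hello in SQL.")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# Uncomment to actually send (requires Ollama running locally):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any client that speaks the OpenAI wire format will work the same way, which is exactly why CSV GPT can point at Ollama without special-casing it.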
Configuring CSV GPT
Now that your model is running, it's time to connect the CSV GPT interface:
- Open the CSV GPT app and click on the Settings (gear icon).
- Toggle off the default OpenAI configuration if needed.
- In the Base URL field, enter `http://localhost:11434/v1`.
- In the Model field, type the exact name of the model you downloaded (e.g., `qwen2.5-coder:7b`).
- Because you are running locally, you can use any string for the API Key (e.g., `local`).
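The settings above boil down to three values, and the two most common mistakes are a Base URL missing the `/v1` suffix and a model tag that doesn't match what was pulled. Here is a small illustrative check; the field names are assumptions for this sketch, not CSV GPT's actual configuration schema.

```python
# Illustrative mirror of the Settings form; field names are assumptions,
# not CSV GPT's internal configuration schema.
local_llm_config = {
    "base_url": "http://localhost:11434/v1",
    "model": "qwen2.5-coder:7b",
    "api_key": "local",  # any non-empty string works for a local server
}

def validate(config: dict) -> list[str]:
    """Flag the two most common misconfigurations before the first query."""
    problems = []
    if not config["base_url"].rstrip("/").endswith("/v1"):
        problems.append("Base URL should end with /v1 (the OpenAI-compatible root).")
    if not config["model"]:
        problems.append("Model name must exactly match the tag you pulled.")
    return problems

print(validate(local_llm_config))  # an empty list means the settings are consistent
```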
Enjoying 100% Offline Analysis
That's it! When you ask a question about your CSV, the app will query your local Ollama instance to formulate the SQL. The SQL is then executed against your local browser DuckDB. At no point does a single byte of data leave your workstation.
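To make the privacy model concrete, here is a schematic of the kind of prompt such an app could send: only column names and types reach the LLM, never row values. The table name, schema, and prompt wording below are illustrative assumptions, not CSV GPT's actual prompt.

```python
# Schematic of the privacy boundary: column metadata goes to the LLM,
# cell values never do. Prompt wording is an assumption for illustration.
schema = {"order_id": "INTEGER", "country": "VARCHAR", "amount": "DOUBLE"}

def sql_prompt(table: str, schema: dict[str, str], question: str) -> str:
    """Render a schema-only prompt asking the model for one DuckDB query."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in schema.items())
    return (
        f"Table {table}({cols}). "
        f"Write one DuckDB SQL query answering: {question}"
    )

prompt = sql_prompt("orders", schema, "total amount per country")
print(prompt)  # note: no cell values appear, only column metadata
```

The returned SQL is then executed against DuckDB in the browser, so the data itself stays put.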