Run LLMs inside Postgres

Postgres LLM lets you run an LLM directly inside Postgres. Translation, classification, summarization, OCR, web search, and more, all triggered by a write to your table, with no pipeline running outside the database.

v2 makes it async by default. Writes return instantly, jobs flow through a Postgres-native queue, and a background worker calls the LLM and writes the result back into the row.

Get Started

1. Install Requirements

Make sure your Postgres instance has the http, hstore, and pg_cron extensions enabled. On Supabase, all three are one toggle away.
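
If you manage Postgres yourself, the extensions can usually be enabled with plain SQL, assuming the packages are installed on the host (pg_cron also needs to be listed in shared_preload_libraries):

-- Enable the required extensions
CREATE EXTENSION IF NOT EXISTS http;
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION IF NOT EXISTS pg_cron;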

2. Run init.sql

  • Download or copy init.sql from the repo.
  • Replace the API_KEY with your Interfaze API key from the dashboard.
  • Execute the script in your SQL environment.

This creates the llm schema, the llm.queue table, and the llm.call trigger function, and registers a cron worker that drains the queue every 5 seconds.
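
You don't need to write the worker registration yourself, but for orientation it amounts to something like this sketch (llm.process_queue is a stand-in name; the real function is defined in the script):

-- Sketch: a pg_cron job that drains the queue every 5 seconds
-- (second-granularity schedules need pg_cron 1.5+)
SELECT cron.schedule(
  'llm-queue-worker',             -- job name (illustrative)
  '5 seconds',                    -- run every 5 seconds
  $$SELECT llm.process_queue()$$  -- stand-in for the worker defined in init.sql
);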

If you want to use a different provider, swap both the BaseURL (pointing it at any Chat Completions endpoint) and the API_KEY in init.sql. Anything OpenAI-compatible works.

3. Add a trigger

Pick a table, pick a column to watch, and write a prompt with {column_name} placeholders. The LLM result lands in the target column you choose.
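
Every trigger has the same shape. The names below are placeholders; the concrete examples later in this post follow the same pattern:

CREATE TRIGGER my_llm_trigger
AFTER INSERT OR UPDATE OF source_column ON my_table
FOR EACH ROW
WHEN (NEW.source_column IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Your prompt here, referencing {source_column}',
  'target_column'
);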

Picking the Right Model

The model you pick matters more than usual when it sits this close to your database. You don't want verbose filler, hallucinated values, or inaccurate fields landing in production rows.

Highly deterministic models that are great at structured output are your best bet. Interfaze is built for exactly this kind of workload, but any provider with strong structured output and tight instruction following will work.

Example: Sentiment Analysis

Suppose you have a customer review system and want sentiment filled in automatically as each review comes in.

CREATE TABLE public.user_reviews (
  id bigint GENERATED BY DEFAULT AS IDENTITY NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT now(),
  user_id uuid NOT NULL DEFAULT gen_random_uuid(),
  review_text text NOT NULL,
  sentiment text NULL,
  CONSTRAINT user_reviews_pkey PRIMARY KEY (id)
);

A primary key is required since the worker uses it to write results back into the right row.
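
Conceptually, the write-back is just an UPDATE keyed on that primary key (a sketch, not the actual statement from init.sql):

-- Sketch: how a finished job lands back in its row
UPDATE public.user_reviews
SET sentiment = '<llm result>'
WHERE id = <queued row id>;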

DROP TRIGGER IF EXISTS analyze_sentiment ON user_reviews;
CREATE TRIGGER analyze_sentiment
AFTER INSERT OR UPDATE OF review_text ON user_reviews
FOR EACH ROW
WHEN (NEW.review_text IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Analyze the sentiment of this text and respond with only "positive", "negative", or "neutral". Return value in lowercase. Text: {review_text}',
  'sentiment'
);

Now insert a row:

INSERT INTO user_reviews (review_text)
VALUES ('I love this hackathon, I can build anything I want!')
RETURNING *;

The INSERT returns immediately with sentiment still NULL. Within a few seconds the background worker fills it in (e.g. "positive"). Updating review_text later re-runs the trigger and refreshes sentiment automatically.
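
You can watch the result land by re-reading the row:

-- Re-run after a few seconds: sentiment flips from NULL to a value
SELECT id, review_text, sentiment
FROM public.user_reviews
ORDER BY created_at DESC
LIMIT 1;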

Features You Should Know

  • Async by default so writes return instantly and the LLM runs out-of-band.

  • {column_name} placeholders pull any column from the same row as prompt context.

  • Multi-column output in a single LLM call by passing extra target column arguments.

  • Built-in retries with a configurable max-attempts ceiling before a job is marked as errored.

  • Concurrent-safe workers using FOR UPDATE SKIP LOCKED so multiple workers never race on the same job.

  • Deduplicated jobs so rapid updates to the same row collapse into a single LLM call against the latest values.

  • Observable because llm.queue is just a Postgres table: query status, attempts, and last_error with plain SQL (see the query sketch after this list).

  • Provider-agnostic since it works with any Chat Completions-compatible API. Pre-configured for Interfaze.
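
For example, a quick health check on the queue might look like this (the 'done' status value is illustrative; see init.sql for the exact states):

-- Inspect in-flight and failed jobs with plain SQL
SELECT status, attempts, last_error
FROM llm.queue
WHERE status <> 'done';  -- illustrative status value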

llm.call takes the following parameters:

| Parameter | Description | Required |
| --- | --- | --- |
| prompt | The prompt to use for the LLM. Supports {column_name} placeholders that get replaced with values from the row. | Yes |
| target_column(s) | One or more columns to update with the LLM result. Pass additional column names as extra arguments. | Yes |

-- Single column
llm.call('<prompt>','<target_column>');

-- Multiple columns
llm.call('<prompt>','<target_column_1>','<target_column_2>','<target_column_3>');

One gotcha: never reference the same column you're updating inside the prompt. A trigger that watches summary and writes back to summary will loop forever. Read from one column, write to another.
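
As a hypothetical illustration (the articles table and its columns exist only for this example):

-- Infinite loop: the worker's write to summary re-fires this same trigger
CREATE TRIGGER summarize_loop
AFTER INSERT OR UPDATE OF summary ON articles
FOR EACH ROW
WHEN (NEW.summary IS NOT NULL)
EXECUTE FUNCTION llm.call('Summarize: {summary}', 'summary');

-- Safe: watch body, write to summary
CREATE TRIGGER summarize_safe
AFTER INSERT OR UPDATE OF body ON articles
FOR EACH ROW
WHEN (NEW.body IS NOT NULL)
EXECUTE FUNCTION llm.call('Summarize this article: {body}', 'summary');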

More Example Use Cases

Translation

CREATE TRIGGER translate_es
AFTER INSERT OR UPDATE OF user_review_og ON user_reviews
FOR EACH ROW
WHEN (NEW.user_review_og IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Translate the following text to Spanish (es). Only return the Spanish text with no additional text. Text: {user_review_og}',
  'user_review_es'
);

Background Search

CREATE TRIGGER background_search
AFTER INSERT OR UPDATE OF full_name ON user_reviews
FOR EACH ROW
WHEN (NEW.full_name IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Give a summary background on {full_name}.',
  'user_background'
);

Image Vision OCR

CREATE TRIGGER vision_ocr
AFTER INSERT OR UPDATE OF attached_image_url ON user_reviews
FOR EACH ROW
WHEN (NEW.attached_image_url IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Extract all text from this image: {attached_image_url}',
  'image_description'
);

Multi-column: Sentiment Analysis + Translation

A single trigger can populate multiple columns at once from one LLM call:

CREATE TRIGGER analyze_and_translate
AFTER INSERT OR UPDATE OF user_review_og ON user_reviews
FOR EACH ROW
WHEN (NEW.user_review_og IS NOT NULL)
EXECUTE FUNCTION llm.call(
  'Analyze the sentiment and translate the following review to Spanish. Text: {user_review_og}',
  'emotion',
  'user_review_es'
);

More Things You Can Do

Copy any of the triggers above, swap the table and column names, then change the prompt. A few patterns we've seen:

  • OCR pipelines that extract text from any new image URL on insert.
  • User enrichment that pulls a LinkedIn-style background summary from a name or email.
  • Auto-tagging and categorization that fills classification columns the moment content lands.
  • Moderation that flags suspicious user-generated content before it's served.

It's the same small trigger pattern every time. Your database starts behaving like an AI workflow engine.

Postgres LLM v2 brings async LLM execution right to where your data lives. No middleware, no ETL, no app-side schedulers: just a Postgres trigger and your data.

Grab the code and full readme here:
👉 https://github.com/JigsawStack/postgres-llm