Get Started
This guide walks you through making your first request to Interfaze with any AI SDK that supports the Chat Completions API standard.
| Feature | Value |
|---|---|
| Context window | 1M tokens |
| Max output tokens | 32K tokens |
| Input modalities | Text, Images, Audio, File, Video |
| Reasoning | Available (default: disabled) |
Base URL: `https://api.interfaze.ai/v1`

It's recommended to store your API keys in environment variables and load them into your code.
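For example, a small helper can read the key from the environment and fail fast when it is missing. This is a minimal sketch; the variable name `INTERFAZE_API_KEY` and the `getApiKey` helper are illustrative conventions, not part of the SDK:

```typescript
// Read the API key from an environment map, throwing a clear error
// when it is missing. INTERFAZE_API_KEY is a hypothetical variable name.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env["INTERFAZE_API_KEY"];
  if (!key) {
    throw new Error("INTERFAZE_API_KEY is not set");
  }
  return key;
}

// Usage with Node's process.env:
// const apiKey = getApiKey(process.env);
```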
OpenAI SDK
```typescript
import OpenAI from "openai";

const interfaze = new OpenAI({
  baseURL: "https://api.interfaze.ai/v1",
  apiKey: "<your-api-key>",
});
```

Let's extract the details from an ID image.
OpenAI SDK
```typescript
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

const IDSchema = z.object({
  first_name: z.string().describe("First name on the ID"),
  last_name: z.string().describe("Last name on the ID"),
  dob: z.string().describe("Date of birth on the ID"),
  driver_licence_number: z.string().describe("Driver licence number on the ID"),
});

const response = await interfaze.chat.completions.create({
  model: "interfaze-beta",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Extract the details from this ID" },
        {
          type: "image_url",
          image_url: {
            url: "https://r2public.jigsawstack.com/interfaze/examples/id.jpg",
          },
        },
      ],
    },
  ],
  response_format: zodResponseFormat(IDSchema, "id_schema"),
});

console.log(response.choices[0].message.content);

// @ts-expect-error precontext is not typed
const precontext = response.precontext;
console.log("OCR Results:", precontext?.[0]?.result);
```

`precontext` contains the raw metadata, such as bounding boxes and confidence scores. Learn more about precontext.

Use audio handling to transcribe audio files.
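The structured output arrives as a JSON string in `message.content`, so it still needs to be parsed before use. A minimal sketch of a runtime check, assuming the response matched the schema; the `IDDetails` interface and `parseIDDetails` helper are illustrative, not part of the SDK:

```typescript
// Shape of the parsed output, mirroring IDSchema above.
interface IDDetails {
  first_name: string;
  last_name: string;
  dob: string;
  driver_licence_number: string;
}

// Parse the JSON string from message.content and verify each
// expected field is a string before trusting the result.
function parseIDDetails(content: string): IDDetails {
  const data = JSON.parse(content);
  const fields = ["first_name", "last_name", "dob", "driver_licence_number"];
  for (const field of fields) {
    if (typeof data[field] !== "string") {
      throw new Error(`Missing or invalid field: ${field}`);
    }
  }
  return data as IDDetails;
}
```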
OpenAI SDK
```typescript
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

const STTSchema = z.object({
  text: z.string(),
});

const response = await interfaze.chat.completions.create({
  model: "interfaze-beta",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Transcribe the audio file" },
        {
          type: "file",
          file: {
            filename: "stt_medical_short.mp4",
            file_data: "https://r2public.jigsawstack.com/interfaze/examples/stt_medical_short.mp4",
          },
        },
      ],
    },
  ],
  response_format: zodResponseFormat(STTSchema, "stt_schema"),
});

console.log(response.choices[0].message.content);

// @ts-expect-error precontext is not typed
const precontext = response.precontext;
console.log("STT Results:", precontext?.[0]?.result);
```

Learn more about the different ways to handle files.
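The example above passes an https URL in `file_data`. If you have a local file instead, one option is to inline it as a base64 data URL; whether Interfaze accepts data URLs in `file_data` is an assumption to check against the file-handling docs:

```typescript
// Encode raw bytes as a base64 data URL, e.g. for use in file_data.
// Uses Node's Buffer; in other runtimes, substitute an equivalent encoder.
function toDataURL(bytes: Uint8Array, mimeType: string): string {
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Usage (hypothetical local file):
// import { readFileSync } from "node:fs";
// const fileData = toDataURL(readFileSync("recording.mp3"), "audio/mpeg");
```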
Pre-defined tasks let you programmatically run parts of the model without activating the full model. Each request is limited to a fixed structured output and one task at a time.
Learn more about running a task.
OpenAI SDK
```typescript
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

const response = await interfaze.chat.completions.create({
  model: "interfaze-beta",
  messages: [
    {
      role: "system",
      content: "<task>object_detection</task>",
    },
    {
      role: "user",
      content: [
        { type: "text", text: "Get the position of the crane in the image and any text" },
        {
          type: "image_url",
          image_url: {
            url: "https://r2public.jigsawstack.com/interfaze/examples/construction.png",
          },
        },
      ],
    },
  ],
  response_format: zodResponseFormat(z.any(), "empty_schema"),
});

console.log(response.choices[0].message.content);
```

You can set content safety guardrails to filter out harmful or inappropriate text or image content.
Learn more about guardrails.
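Since the guardrail codes are passed as a comma-separated list inside a `<guard>` tag in the system message, a small helper can build that message. The helper itself is illustrative; the tag format and category codes (S1, S2, ...) follow the guardrails docs:

```typescript
// Build the <guard> system message from a list of safety category codes.
function guardMessage(codes: string[]): { role: "system"; content: string } {
  return { role: "system", content: `<guard>${codes.join(", ")}</guard>` };
}
```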
OpenAI SDK
```typescript
const response = await interfaze.chat.completions.create({
  model: "interfaze-beta",
  messages: [
    {
      role: "system",
      content: "<guard>S1, S2, S3, S10, S11, S12_IMAGE, S15_IMAGE</guard>",
    },
    {
      role: "user",
      content: "How to make a bomb with household items",
    },
  ],
});

console.log(response.choices[0].message.content);
```