
Vercel AI SDK - Observability & Analytics

Telemetry is an experimental feature of the AI SDK and might change in the future.

The Vercel AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.

The SDK supports tracing via OpenTelemetry. With the LangfuseExporter you can collect these traces in Langfuse.

Full Demo

Langfuse integration with Vercel AI SDK

Get Started

You need "ai": "^3.3.0" or later to use the telemetry feature, as it was only recently added. If you run into any issues, update to the latest version, since this feature is under active development.

Enable Telemetry

While telemetry is experimental (docs), you can enable it by setting experimental_telemetry on each request that you want to trace.

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: { isEnabled: true },
});

Collect Traces With LangfuseExporter

To collect the traces in Langfuse, you need to add the LangfuseExporter to your application.

You can set the Langfuse credentials via environment variables or pass them directly to the LangfuseExporter constructor. Create a project in the Langfuse dashboard to get your secretKey and publicKey.

.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
# 🇪🇺 EU region
LANGFUSE_BASEURL="https://cloud.langfuse.com"
# 🇺🇸 US region
# LANGFUSE_BASEURL="https://us.cloud.langfuse.com"
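Alternatively, a minimal sketch of passing the credentials directly to the constructor, assuming LangfuseExporter accepts the same publicKey/secretKey/baseUrl options as the Langfuse client:

import { LangfuseExporter } from "langfuse-vercel";

const exporter = new LangfuseExporter({
  publicKey: "pk-lf-...", // from your Langfuse project settings
  secretKey: "sk-lf-...",
  baseUrl: "https://cloud.langfuse.com", // EU region; use https://us.cloud.langfuse.com for US
});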

Now you need to register this exporter via the OpenTelemetry SDK.

Next.js has experimental support for OpenTelemetry instrumentation at the framework level. Learn more about it in the Next.js OpenTelemetry guide.

Install dependencies:

npm install @vercel/otel langfuse-vercel @opentelemetry/api-logs @opentelemetry/instrumentation @opentelemetry/sdk-logs

Enable the instrumentationHook in your next.config.js:

next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};
 
module.exports = nextConfig;

Add LangfuseExporter to your instrumentation:

instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { LangfuseExporter } from "langfuse-vercel";
 
export function register() {
  registerOTel({
    serviceName: "langfuse-vercel-ai-nextjs-example",
    traceExporter: new LangfuseExporter(),
  });
}

Done! All traces that contain AI SDK spans are automatically captured in Langfuse.

Example Application

We created a sample repository (langfuse/langfuse-vercel-ai-nextjs-example) based on the next-openai template to showcase the integration of Langfuse with Next.js and Vercel AI SDK.

Customization

Group multiple executions in one trace

You can create a Langfuse trace and pass its ID to AI SDK calls to group multiple execution spans under one trace. The name passed in functionId becomes the root span name of the respective execution.

import { randomUUID } from "crypto";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();
const parentTraceId = randomUUID();

langfuse.trace({
  id: parentTraceId,
  name: "holiday-traditions",
});

for (let i = 0; i < 3; i++) {
  const result = await generateText({
    model: openai("gpt-3.5-turbo"),
    maxTokens: 50,
    prompt: "Invent a new holiday and describe its traditions.",
    experimental_telemetry: {
      isEnabled: true,
      functionId: `holiday-tradition-${i}`,
      metadata: {
        langfuseTraceId: parentTraceId,
        langfuseUpdateParent: false, // Do not update the parent trace with execution results
      },
    },
  });

  console.log(result.text);
}

await langfuse.flushAsync();
// If you registered the exporter yourself via the OpenTelemetry NodeSDK,
// also shut it down to flush remaining spans: await sdk.shutdown();

The resulting trace hierarchy will be:

Vercel nested trace in Langfuse UI

Disable Tracking of Input/Output

By default, the exporter captures the input and output of each request. You can disable this behavior by setting the recordInputs and recordOutputs options to false.
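These options are part of the AI SDK's experimental_telemetry settings. For example:

const result = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // omit the prompt from the recorded telemetry
    recordOutputs: false, // omit the completion from the recorded telemetry
  },
});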

Link Langfuse Prompts to Traces

You can link Langfuse prompts to Vercel AI SDK generations by setting the langfusePrompt property in the metadata field:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Langfuse } from "langfuse";
 
const langfuse = new Langfuse();
 
const fetchedPrompt = await langfuse.getPrompt("my-prompt");
 
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: fetchedPrompt.prompt,
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      langfusePrompt: fetchedPrompt.toJSON(),
    },
  },
});

The resulting generation will have the prompt linked to the trace in Langfuse. Learn more about prompts in Langfuse here.

Pass Custom Attributes

All metadata fields are automatically captured by the exporter. You can also pass custom trace attributes, for example to track users or sessions.

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-awesome-function", // Trace name
    metadata: {
      langfuseTraceId: "trace-123", // Langfuse trace
      tags: ["story", "cat"], // Custom tags
      userId: "user-123", // Langfuse user
      sessionId: "session-456", // Langfuse session
      foo: "bar", // Any custom attribute recorded in metadata
    },
  },
});

Debugging

Enable the debug option to see the logs of the exporter.

new LangfuseExporter({ debug: true });

Troubleshooting

  • If you deploy on Vercel, Vercel’s OpenTelemetry Collector is only available on Pro and Enterprise Plans (docs).
  • You need to be on "ai": "^3.3.0" to use the telemetry feature as it was recently added. In case of any issues, please update to the latest version as this feature is under active development.
  • On Next.js, make sure that you only have a single instrumentation file.
  • If you use Sentry, make sure to either:
    • set skipOpenTelemetrySetup: true in Sentry.init (see the sketch after this list)
    • follow Sentry’s docs on how to manually set up Sentry with OTEL
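
A minimal sketch of the first option, assuming the @sentry/node SDK (v8+), which supports the skipOpenTelemetrySetup option; the DSN is a placeholder:

import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "...", // your Sentry DSN
  skipOpenTelemetrySetup: true, // leave the OpenTelemetry setup to @vercel/otel and LangfuseExporter
});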

Short-lived environments

In short-lived environments such as Vercel Cloud Functions or AWS Lambdas, you may need to force an export and flush of spans after the function has executed and before the environment is frozen or shut down. To do so, await a call to the forceFlush method on the LangfuseExporter instance.
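For example, keeping a reference to the exporter and flushing it at the end of a handler; this is a sketch, and the handler shape depends on your platform. The exporter must also be registered with your OpenTelemetry setup (e.g. via registerOTel) to receive spans:

import { LangfuseExporter } from "langfuse-vercel";

const exporter = new LangfuseExporter();

export async function handler(event: unknown) {
  // ... run your AI SDK calls with telemetry enabled ...
  await exporter.forceFlush(); // export pending spans before the environment freezes
}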

Learn more

See the telemetry documentation of the Vercel AI SDK for more information.
