Prerequisites
- A Lunary project with its public key copied from Settings → API Keys.
- A Vercel AI SDK app running on Node.js 18+ with access to modify instrumentation files.
- Optional (Next.js ≤14): experimental.instrumentationHook enabled in next.config.mjs.
1. Enable Vercel’s OpenTelemetry instrumentation
Install the instrumentation helper if it is not already included in your project. Then make sure instrumentation.ts exists at the project root (or inside src/ if you use that folder) and registers OpenTelemetry:
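A minimal sketch of that file, using the registerOTel helper from @vercel/otel (the service name 'my-ai-app' is a placeholder; pick one that identifies your deployment):

```typescript
// instrumentation.ts — Next.js loads this automatically on server start
import { registerOTel } from '@vercel/otel';

export function register() {
  // serviceName becomes the service.name attribute on every exported span
  registerOTel({ serviceName: 'my-ai-app' });
}
```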
Once instrumentation is registered, spans are recorded automatically for generateText, streamText, and other AI SDK functions. On Next.js 14 and earlier, also enable the instrumentation hook in next.config.mjs.
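A sketch of that flag in next.config.mjs (only needed on Next.js 14 and earlier; Next.js 15 loads instrumentation.ts by default):

```javascript
// next.config.mjs
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

export default nextConfig;
```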
2. Point OpenTelemetry to Lunary
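A sketch of the exporter configuration for this step using the standard OTLP environment variables — the endpoint URL and auth header below are placeholders; copy the real values from Lunary's documentation and your project's Settings → API Keys page:

```shell
# .env / deployment environment — placeholder values, not Lunary's real endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<lunary-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <LUNARY_PUBLIC_KEY>"
export OTEL_SERVICE_NAME="my-ai-app"   # appears as service.name in Lunary
```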
Configure the OTLP exporter to send traces to Lunary’s managed collector. Using environment variables keeps local and hosted deployments aligned.

3. Emit AI spans with Lunary metadata
Telemetry remains an opt-in experimental flag in the AI SDK. Wrap the calls you want to observe with experimental_telemetry and include metadata that Lunary can use for
filtering and trace grouping:
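A sketch of an instrumented call; the model, prompt, functionId, and metadata keys here are illustrative choices, not required names:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize this ticket for the on-call engineer.',
  experimental_telemetry: {
    isEnabled: true,                 // telemetry stays off unless opted in
    functionId: 'ticket-summary',    // stable id for grouping related traces
    metadata: { userId: 'user_123', env: 'production' }, // filterable in Lunary
  },
});
```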
4. Validate traces inside Lunary
Deploy or start your app locally and trigger the instrumented route. Open the Lunary dashboard and visit Observability → Traces to confirm new spans tagged with your service.name and metadata. From there you can drill into token usage, latency, and
prompt/response payloads per trace.