LLM Analytics
Langfuse analytics derives actionable insights from production traces.
→ Not using Langfuse yet? Explore the dashboard in our interactive demo.
Metrics
- Quality is measured through user feedback, model-based scoring, human-in-the-loop scored samples, or custom scores via the SDKs/API (see scores; a sketch follows this list). Quality is assessed over time as well as across prompt versions, LLMs, and users.
- Cost and Latency are accurately measured and broken down by user, session, geography, feature, model, and prompt version.
- Volume is measured based on the number of ingested traces and tokens used.
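As an example of pushing a custom score through the SDK, the snippet below is a minimal sketch using the v2-style Python client (`langfuse.trace()` and `langfuse.score()`); the trace name, score name, and value are illustrative, and method names may differ in other SDK versions.

```python
from langfuse import Langfuse

# Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY (and optionally
# LANGFUSE_HOST) are set in the environment.
langfuse = Langfuse()

# Create a trace for one unit of work in your application.
trace = langfuse.trace(name="chat-completion")

# Attach a custom score to the trace, e.g. from your own eval pipeline.
# "helpfulness" and 0.9 are illustrative; use your own scoring scheme.
langfuse.score(
    trace_id=trace.id,
    name="helpfulness",
    value=0.9,
)

# Ensure queued events are sent before the process exits.
langfuse.flush()
```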
 
Dimensions
Analytics is incrementally adoptable based on the data you send to Langfuse. The following dimensions are available; the sketch after this list shows how to set them on a trace:
- Trace name: differentiate between use cases, features, etc. by adding a `name` field to your traces.
- User: track usage and cost by user. Just add a `userId` to your traces (docs).
- Tags: filter different use cases, features, etc. by adding tags to your traces.
- Release and version numbers: track how changes to the LLM application affected your metrics.
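The snippet below sketches how these dimensions map onto a trace using the v2-style Python SDK, where `userId` is written as `user_id`; all concrete values (`summarize-article`, `user-123`, etc.) are illustrative.

```python
from langfuse import Langfuse

langfuse = Langfuse()

# All dimensions are optional; add only the ones you want to slice by.
trace = langfuse.trace(
    name="summarize-article",        # trace name: distinguishes this use case/feature
    user_id="user-123",              # user: enables per-user usage and cost breakdowns
    tags=["beta", "summarization"],  # tags: free-form filters for traces
    release="v2.1.0",                # release: ties metrics to a deployment
    version="prompt-v3",             # version: tracks iterations of this feature
)

langfuse.flush()
```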
 
Feedback
We are continuously adding new charts to the dashboard. If you have any feedback or requests, please create a GitHub Issue or share your idea with the community on Discord.