
OpenLIT AI is an open-source platform based on the OpenTelemetry standard, providing observability, monitoring, and evaluation capabilities for generative AI and large language model applications.
It automatically instruments applications to collect LLM request metadata, performance metrics, and cost information, and offers distributed tracing, dashboard visualization, and error analysis.
It supports multiple integration paths: installing the SDK requires only minimal code changes, while the Kubernetes Operator enables zero-code monitoring.
It supports self-hosted deployment, for example via Docker Compose or Kubernetes, and also offers cloud-native deployment options.
The platform includes built-in evaluation frameworks for assessing prompts, models, and end-to-end applications against output-quality metrics.
The project is licensed under Apache-2.0 and is free to use and deploy.
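The per-request metadata and cost figures described above can be sketched as a minimal span record. This is an illustrative sketch only; the model name and per-1K-token prices below are made-up assumptions, not OpenLIT's actual pricing tables or data model.

```python
from dataclasses import dataclass

# Hypothetical per-1K-token prices in USD as (input, output) pairs;
# a real deployment would load vendor pricing data instead.
PRICES_PER_1K = {"example-model": (0.0025, 0.01)}

@dataclass
class LLMSpan:
    """One instrumented LLM request: metadata, latency, and derived cost."""
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float

    @property
    def cost_usd(self) -> float:
        inp, out = PRICES_PER_1K[self.model]
        return (self.prompt_tokens / 1000) * inp + (self.completion_tokens / 1000) * out

span = LLMSpan("example-model", prompt_tokens=1200, completion_tokens=400, latency_ms=850.0)
print(round(span.cost_usd, 4))  # 0.007
```

Keeping cost as a derived property of token counts, rather than a stored field, mirrors how such platforms recompute spend when pricing tables change.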

Langfuse AI is an open-source LLM engineering and operations platform designed to help development teams build, monitor, debug, and optimize applications based on large language models. It enhances AI application development efficiency and observability by providing features such as application tracing, prompt management, quality assessment, and cost analysis.

Evidently AI is an open-source platform focused on evaluating, testing, and monitoring machine learning and large language models, helping data scientists and engineers ensure the quality and reliability of AI systems in production.
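A minimal sketch of the kind of drift check such monitoring performs: flag a feature when its current mean moves too far from the reference distribution. This is a toy heuristic for illustration; Evidently's own drift presets rely on proper statistical tests such as Kolmogorov–Smirnov, and the threshold below is an arbitrary assumption.

```python
import statistics

def mean_shift_drift(reference, current, threshold=0.2):
    """Flag drift when the current mean deviates from the reference mean
    by more than `threshold` reference standard deviations."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.fmean(current) - ref_mean) / ref_std
    return shift > threshold

reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
drifted   = [1.6, 1.7, 1.65, 1.8, 1.75, 1.7]
print(mean_shift_drift(reference, reference))  # False
print(mean_shift_drift(reference, drifted))    # True
```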

Adaline AI is a collaborative platform focused on the development and management of large language model applications, helping teams efficiently build, optimize, and deploy AI solutions powered by LLMs.

Openlayer AI is a unified AI governance and observability platform designed to help enterprises securely and compliantly build, test, deploy, and monitor machine learning and large language model systems, boosting deployment confidence and operational efficiency.

LangWatch AI is an LLMOps platform for AI development teams, focused on providing testing, evaluation, monitoring, and optimization capabilities for AI agents and large language model applications. It helps teams build reliable, testable AI systems, covering the entire lifecycle from development to production.

OpenMeter is an open-source platform for real-time usage measurement and billing that helps AI, API, and SaaS companies implement usage-based pricing to accelerate monetization of their services.
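Usage-based pricing of the kind OpenMeter meters for typically applies tiered unit rates to a metered total. The sketch below shows the general tiered-charge calculation; the tier boundaries and prices are invented examples, not OpenMeter defaults or its API.

```python
def usage_charge(units, tiers):
    """Compute a tiered usage-based charge.

    `tiers` is an ordered list of (upper_bound, unit_price) pairs;
    an upper bound of None means the tier is unbounded.
    """
    total, billed = 0.0, 0
    for upper, price in tiers:
        span = (units if upper is None else min(units, upper)) - billed
        if span <= 0:
            break
        total += span * price
        billed += span
    return total

# First 1,000 API calls at $0.01, the next 9,000 at $0.005, the rest at $0.002.
TIERS = [(1000, 0.01), (10000, 0.005), (None, 0.002)]
print(round(usage_charge(500, TIERS), 2))    # 5.0
print(round(usage_charge(12000, TIERS), 2))  # 59.0  (10 + 45 + 4)
```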

Freeplay AI is a development and operations platform for enterprise AI engineering teams, focused on helping teams efficiently build, test, monitor, and optimize applications powered by large language models. The platform provides collaborative development, production observability, and continuous optimization tools to standardize workflows and improve the reliability and iteration speed of AI applications.

WhyLabs AI is a platform focused on AI observability and security, designed to provide monitoring, protection, and optimization capabilities for machine learning models and generative AI applications in production, helping teams manage the performance and risks of AI systems.

Laminar AI is an open-source AI engineering and observability platform that helps developers build, monitor, evaluate, and optimize applications and agents based on large language models.

Langtrace AI is an open-source observability and evaluation platform that helps developers monitor, debug, and optimize applications built on large language models, turning AI prototypes into reliable enterprise-grade products.
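The evaluation side of such platforms boils down to scoring model outputs against expected behavior. A minimal sketch of an automated substring-match evaluator follows; the generation stub and test cases are hypothetical, and production evaluators on these platforms also use model-graded and semantic scoring, not just exact matching.

```python
def evaluate(cases, generate):
    """Score a generation function against expected-substring test cases,
    returning the fraction of cases that pass."""
    passed = sum(1 for prompt, expected in cases if expected in generate(prompt))
    return passed / len(cases)

# Stub standing in for a real LLM call (for illustration only).
def fake_llm(prompt):
    return "Paris is the capital of France." if "France" in prompt else "I am not sure."

cases = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Peru?", "Lima"),
]
print(evaluate(cases, fake_llm))  # 0.5
```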