The current logging system within the AI module needs significant enhancements to provide more granular logging capabilities and better support for debugging, monitoring, and analysis of AI operations.

Specifically, the system needs to:

  • Distinguish between LLM and Agent operations.
  • Provide different logging levels.
  • Support process chain tracking for calls with child operations.
  • Better integrate with external monitoring systems.
  • Present data in a more user-friendly way for better troubleshooting.

Proposed Resolution

We will enhance the AI Logging module with a new data model and an improved user interface. The goal is to create a robust and comprehensive logging system that provides detailed insights into every AI call and its context.

Enhanced Log Types

The new system will differentiate between log types to better categorize operations:

  • LLM operations logging
  • Agent operations logging

Logging Levels

LLM operations will be logged with one of two distinct severity levels:

  • Error level: For failed or problematic operations.
  • Info level: For successful and routine operations.
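As a sketch, these two levels map naturally onto standard logging severities. The logger name and helper function below are illustrative assumptions, not the module's actual API:

```python
import logging

# Hypothetical logger name; the real module may use a different namespace.
logger = logging.getLogger("ai.llm")

def log_llm_operation(success: bool, message: str) -> None:
    # Info level for successful and routine operations,
    # Error level for failed or problematic ones.
    if success:
        logger.info(message)
    else:
        logger.error(message)
```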

UI Improvements

The log viewer will feature a new tabbed design for a clean, organized display of information. The log details for each call will be broken down into the following tabs:

  • AI-Generated Summary:
    • User query: Summary of the query sent to the LLM.
    • Response of this call: Summary of the LLM's response.
    • Response of the last child call: Summary of the response from the last child call, if one exists.
  • Prompt Sent to LLM:
    • Prompt: The prompt formatted as a chat, with assistant messages indented.
    • Actions: Tables showing which actions were available to the LLM and which were actually called.
    • Child calls chain: A table summarizing any sub-calls.
  • Metadata & Tags:
    • Tags: An unordered list of tags assigned to the LLM call.
    • Prompt metadata:
      • Operation Category: Agent or LLM.
      • Severity Level: Info or Error.
      • Operation Type: chat, code, etc.
      • Provider & Model: The name of the AI provider and model used.
      • Timestamps & Duration: Start and finish times, along with the total duration in milliseconds.
  • Raw Response:
    • Raw Response: The complete JSON raw response from the LLM API.
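Taken together, the metadata and tab contents above suggest a log-entry shape along the following lines. All class and field names here are illustrative assumptions, not the final data model:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class OperationCategory(Enum):
    LLM = "llm"
    AGENT = "agent"

class SeverityLevel(Enum):
    INFO = "info"
    ERROR = "error"

@dataclass
class AILogEntry:
    # Sketch of one log entry; the real schema may differ.
    operation_category: OperationCategory
    severity: SeverityLevel
    operation_type: str            # e.g. "chat", "code"
    provider: str
    model: str
    started_at: datetime
    finished_at: datetime
    tags: list = field(default_factory=list)
    prompt: str = ""
    raw_response: dict = field(default_factory=dict)
    child_calls: list = field(default_factory=list)

    @property
    def duration_ms(self) -> float:
        # Total duration in milliseconds, per the Timestamps & Duration field.
        return (self.finished_at - self.started_at).total_seconds() * 1000
```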

API & Data Model Changes

This resolution will introduce a new log type and new data model fields to support the proposed features. An integration API will also be exposed to allow other modules to leverage the enhanced logging capabilities.
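A minimal sketch of what such an integration API might look like to a consuming module; the class and method names are hypothetical placeholders for whatever interface is actually exposed:

```python
class AILoggingClient:
    """Hypothetical client other modules could use to record and query AI logs."""

    def __init__(self):
        self._entries = []

    def record(self, entry: dict) -> None:
        # Record one AI operation log entry.
        self._entries.append(entry)

    def query(self, category: str = None) -> list:
        # Return entries, optionally filtered by operation category
        # ("llm" or "agent").
        if category is None:
            return list(self._entries)
        return [e for e in self._entries if e.get("category") == category]
```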

Project Status: Archived / Internal Use Only

Summary: This project was initiated to provide a lightweight architectural alternative to existing AI logging solutions, specifically tailored for high-performance decoupled environments.

Reason for Discontinuation: During the initialization phase, it became clear that the ecosystem's appetite for alternative approaches in this domain is limited. Rather than being viewed as an opportunity for architectural diversity or collaboration, this initiative was immediately flagged as "redundant" by proponents of existing solutions.

Conclusion: At Joshi Consultancy Services, we believe Open Source thrives on diversity and the cross-pollination of ideas. However, we also believe in allocating our engineering resources where they are welcomed.

Consequently, we have decided to maintain ailogging as a proprietary internal tool for our enterprise clients. We will not be releasing the source code to the public repository at this time. We wish the maintainers of the existing solutions the best of luck.
