Problem/Motivation

The generic ECA actions provided by this module (e.g., "AI Chat") are an excellent bridge for interacting with various AI provider backends. They successfully abstract the core functionality of sending a prompt and receiving a text-based answer.

However, this abstraction is currently a lossy one. AI provider APIs return a wealth of valuable metadata alongside the text response, most critically the usageMetadata object, which contains token counts. The current implementation of the generic actions discards this metadata and returns only the string answer.

This limitation prevents site builders and developers from implementing essential production-level features, such as:

Cost Tracking: It's impossible to log the totalTokenCount for each call, which is necessary for budgeting and cost analysis.

Usage Monitoring: Without access to token data, we cannot monitor usage against API rate limits or build custom throttling logic.

Advanced Workflows: We cannot create ECA workflows that branch based on metadata. For example: "IF token_usage > 4000, THEN send a notification to an administrator."

A parallel issue has been created for the gemini_provider module to expose this data at the service level (see: https://www.drupal.org/project/gemini_provider/issues/3549099). However, even if an individual provider exposes this data, the generic actions in ai_integration_eca need a mechanism to receive and forward it into the ECA workflow.

Steps to reproduce

Proposed resolution

It is proposed that the generic AI actions in this module be enhanced to provide the full API response and specific metadata as optional outputs. This would require establishing a new, richer contract between ai_integration_eca and the AI provider plugins it calls.

A pragmatic, backward-compatible approach would be:

Establish a New Convention: Define a new, optional method that AI provider services can implement, such as getLastResponse(): ?array. This method would return the full, decoded JSON response from the most recent API call. The gemini_provider issue linked above proposes exactly this implementation.

Enhance the Generic Actions: The generic actions within ai_integration_eca (like the "AI Chat" action) should be updated. After calling the primary method (e.g., generateText()), they should check if the provider's service has a getLastResponse() method.

Provide New Outputs: If the getLastResponse() method exists, the action should call it and make the data available to the ECA workflow as new, optional output contexts (exposed as tokens). For example:

full_response: A JSON string of the complete API response.

token_usage: The integer value of totalTokenCount if it exists in the response.

This approach allows modules that don't need this feature to continue working as-is, while enabling advanced functionality for those that opt-in by implementing the new method. This would make the entire AI integration ecosystem in Drupal significantly more powerful and ready for production use cases.
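The opt-in flow described above can be sketched in plain PHP. This is a hypothetical, framework-free illustration: GeminiClient and run_action() are stand-ins for the provider service and the generic ECA action, and only the method_exists() check mirrors the proposed detection logic.

```php
<?php

// Stand-in for an AI provider service that implements the proposed
// optional getLastResponse() convention. Names are illustrative.
class GeminiClient {

  private ?array $lastResponse = NULL;

  public function generateText(string $prompt): string {
    // A real client would call the provider API here; we fake a response.
    $this->lastResponse = [
      'answer' => 'Hello!',
      'usageMetadata' => ['totalTokenCount' => 42],
    ];
    return $this->lastResponse['answer'];
  }

  // The proposed optional convention: full decoded response of the
  // most recent call, or NULL if no call has been made yet.
  public function getLastResponse(): ?array {
    return $this->lastResponse;
  }

}

// Stand-in for the generic ECA action: always provides 'answer', and
// only providers that opt in expose the extra metadata outputs.
function run_action(object $provider, string $prompt): array {
  $outputs = ['answer' => $provider->generateText($prompt)];
  if (method_exists($provider, 'getLastResponse')) {
    $full = $provider->getLastResponse();
    $outputs['full_response'] = json_encode($full);
    $outputs['token_usage'] = $full['usageMetadata']['totalTokenCount'] ?? NULL;
  }
  return $outputs;
}

$outputs = run_action(new GeminiClient(), 'Say hello');
var_dump($outputs['token_usage']); // int(42)
```

Providers without getLastResponse() simply never trigger the extra outputs, which is what keeps the change backward compatible.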

Remaining tasks

  1. Discuss and agree upon the proposed contract (e.g., the getLastResponse() method).
  2. Update the generic ECA actions in this module to check for and use the new method.
  3. Add the new output contexts (full_response, token_usage, etc.) to the actions.
  4. Update documentation to inform provider module developers on how to support this new feature.

User interface changes

None directly in the action's configuration form. However, users will see the new output tokens ([task_id:full_response], [task_id:token_usage]) become available in the token browser for all subsequent tasks in a workflow, so this should be mentioned in the help text.

API changes

  • This proposes a new, optional, de-facto standard method (getLastResponse) for AI provider services that wish to integrate with this enhanced functionality.

  • The generic ECA actions will gain new, optional output contexts.
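If the convention is later formalized, it could become a small interface instead of a de-facto method, letting consumers use an instanceof check rather than method_exists(). The interface name below is a placeholder, not an existing API:

```php
<?php

// Hypothetical interface formalizing the optional getLastResponse()
// convention. Name and shape are a sketch, not an agreed contract.
interface LastResponseAwareInterface {

  /**
   * Returns the full decoded response of the most recent API call.
   */
  public function getLastResponse(): ?array;

}

// A provider opts in simply by implementing the interface.
class DemoProvider implements LastResponseAwareInterface {

  public function getLastResponse(): ?array {
    return ['usageMetadata' => ['totalTokenCount' => 7]];
  }

}

$provider = new DemoProvider();
// Consumers can detect support via instanceof instead of method_exists().
var_dump($provider instanceof LastResponseAwareInterface); // bool(true)
```

An interface gives static analysis and IDEs something to check, at the cost of a hard dependency on wherever the interface lives; the method_exists() approach avoids that dependency entirely.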

Data model changes

None.

Comments

maxilein created an issue. See original summary.

maxilein’s picture

I have drafted a module for a custom ECA action, as yet untested. If you like the idea, it would probably be much better to include it in your module. I would also like to hear your considerations.

The code is just an idea at this point:

chat_full_data.info.yml:

name: 'AI Chat with Full Data'
type: module
description: 'Provides an ECA Action for AI chat that returns the full API response and token count.'
core_version_requirement: ^10 || ^11
package: 'Artificial Intelligence'
# This dependency ensures this module loads after the modules it uses.
dependencies:
  - ai_integration_eca:ai_integration_eca
  - gemini_provider:gemini_provider

src/Plugin/ECA/Action/ChatWithFullResponse.php:

<?php

namespace Drupal\chat_full_data\Plugin\ECA\Action;

use Drupal\eca\Plugin\Action\ActionBase;
use Drupal\gemini_provider\ApiClient;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Provides the "AI Chat with full response" action.
 *
 * @Action(
 *   id = "chat_full_data_action",
 *   label = @Translation("AI Chat with full response"),
 *   category = @Translation("AI"),
 *   description = @Translation("This action calls an AI model with a prompt and provides multiple outputs.
 *
 * OUTPUTS:
 * The data from this action is available to subsequent tasks as tokens. The token pattern is [task_id:output_name], where 'task_id' is the unique ID you give this task in the BPMN modeler.
 *
 * For example, if you set this task's ID to 'security_check':
 * - Text answer: [security_check:answer]
 * - Full JSON response: [security_check:full_response]
 * - Token usage count: [security_check:token_usage]
 * "),
 *   context_definitions = {
 *     "prompt" = @ContextDefinition("string",
 *       label = @Translation("Prompt"),
 *       description = @Translation("The main text/question for the current turn."),
 *       required = TRUE
 *     ),
 *     "history" = @ContextDefinition("string",
 *       label = @Translation("Chat History (Token Name)"),
 *       description = @Translation("The token name for a variable containing the JSON conversation history."),
 *       required = FALSE
 *     ),
 *     "model" = @ContextDefinition("string",
 *       label = @Translation("Model"),
 *       description = @Translation("The specific AI model to use. Overrides the global setting."),
 *       required = FALSE
 *     ),
 *     "config" = @ContextDefinition("string",
 *       label = @Translation("Specific Configuration (YAML)"),
 *       description = @Translation("YAML for extra settings like temperature or system_prompt."),
 *       required = FALSE
 *     )
 *   }
 * )
 */
class ChatWithFullResponse extends ActionBase {

  /**
   * The Gemini Provider API client.
   *
   * @var \Drupal\gemini_provider\ApiClient
   */
  protected $geminiApiClient;

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    $instance = parent::create($container, $configuration, $plugin_id, $plugin_definition);
    $instance->geminiApiClient = $container->get('gemini_provider.api_client');
    return $instance;
  }

  /**
   * {@inheritdoc}
   */
  public function execute() {
    // Get all inputs from the ECA workflow's input mapping.
    $prompt = $this->getContextValue('prompt');
    $history_json = $this->getContextValue('history');
    $model = $this->getContextValue('model');
    $config_yaml = $this->getContextValue('config');

    if ($prompt) {
      // Pass all inputs to the intelligent service from the patched gemini_provider module.
      $result = $this->geminiApiClient->generateText($prompt, $history_json, $model, $config_yaml);

      if ($result) {
        $fullResponse = $this->geminiApiClient->getLastResponse();
        // Provide all outputs for the next tasks in the workflow.
        $this->setProvidedContext('answer', $result['answer']);
        $this->setProvidedContext('full_response', $fullResponse ? json_encode($fullResponse, JSON_PRETTY_PRINT) : NULL);
        $this->setProvidedContext('token_usage', (int) $result['usage']);
      }
    }
  }

  /**
   * {@inheritdoc}
   */
  public function getProvidedContext() {
    // Context definitions must be real objects here; annotation syntax
    // such as @ContextDefinition is not valid inside a method body.
    return [
      'answer' => new \Drupal\Core\Plugin\Context\ContextDefinition('string', 'Answer'),
      'full_response' => new \Drupal\Core\Plugin\Context\ContextDefinition('string', 'Full Response (JSON)'),
      'token_usage' => new \Drupal\Core\Plugin\Context\ContextDefinition('integer', 'Token Usage'),
    ];
  }

}
jurgenhaas’s picture

Status: Active » Needs work

@maxilein would you mind providing this as an issue fork with an MR? That would allow us to review the code and also let the tests run on it.

maxilein’s picture

Yes, I will try.
Could you suggest which files or locations in your module's structure you would prefer for this?
Thank you.

jurgenhaas’s picture

Well, action plugins go into a specific location. If you create a new one, then just look at the pattern of the existing file names, etc. If you rather want to extend existing action plugins, then it should be easy to find them in the src/Plugin/Action directory.

murz’s picture

As a backward-compatible option, we can consider switching from a plain string to a stringable DTO class: an object that holds the string and all the required metadata. It can easily be converted to a string via something like $value = (string) $response, but can still provide the metadata via something like $tokenUsage = $response->getTokenUsage().

We already started to use DTOs in the AI module (Drupal\ai\Dto), so we can just follow the same approach here.

Relying on the last response is, I believe, not a good idea: with async Guzzle requests and fibers we can have multiple parallel requests, so the last response may not be the expected one.
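The stringable DTO idea can be sketched in plain PHP. Class and method names below are illustrative, not the actual Drupal\ai\Dto API; note that because the metadata travels with each return value, there is no shared "last response" state to race on under parallel requests.

```php
<?php

// Illustrative stringable DTO: holds the text answer plus the full
// response metadata, so no per-service "last response" state is needed.
final class ChatResponse implements \Stringable {

  public function __construct(
    private readonly string $text,
    private readonly array $metadata = [],
  ) {}

  public function __toString(): string {
    // Existing string-based consumers keep working via (string) casts.
    return $this->text;
  }

  public function getTokenUsage(): ?int {
    return $this->metadata['usageMetadata']['totalTokenCount'] ?? NULL;
  }

  public function getMetadata(): array {
    return $this->metadata;
  }

}

$response = new ChatResponse('Hi!', ['usageMetadata' => ['totalTokenCount' => 12]]);
echo (string) $response;              // prints "Hi!"
var_dump($response->getTokenUsage()); // int(12)
```

Callers that only want text are unaffected by the richer return type, while metadata-aware consumers get a per-call value that stays correct even when requests overlap.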