[Tracker]
Update Summary: [One-line status update for stakeholders]
Short Description: [One-line issue summary for stakeholders]
Check-in Date: MM/DD/YYYY
Metadata is used by the AI Tracker. Docs and additional fields here.
[/Tracker]
Problem/Motivation
In #3567784: Tools Function Input should give back an empty json schema skeleton we introduced and fixed a bug where the default parameters were not following proper standards. This caused issues in the Mistral client, Mistral on LiteLLM, and Ollama for certain models. Even with thorough testing, we did not catch that different models on LiteLLM don't behave the same way.
This fix seems to have caused a regression on Claude Bedrock models when using LiteLLM: it raises a validation error.
The idea was to change parameter-less function definitions from:
{
"name": "reindex_content",
"description": "Rebuilds the site search index.",
"parameters": {
"type": "object",
"additionalProperties": false
}
}
to
{
"name": "reindex_content",
"description": "Rebuilds the site search index.",
"parameters": {
"type": "object",
"properties": {},
"additionalProperties": false
}
}
This was introduced in 1.2.6 and caused this regression. It's actually not following correct standards either; the standards-correct solution would instead be
{
"name": "reindex_content",
"description": "Rebuilds the site search index.",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
However, this form breaks certain models (Mistral) off the happy path when there are parameters.
@narendrar suggested this:
{
"name": "reindex_content",
"description": "Rebuilds the site search index.",
"parameters": null
}
We have tested that so far with LiteLLM (Mistral, Claude), OpenAI, Anthropic, Ollama (llama3.1), and Mistral (mistral-medium), and it seems to work well for both functions with parameters and parameter-less functions.
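For clarity, the difference between the variants can be illustrated with a short sketch. Python's `json.dumps` is used here as a stand-in for PHP's `json_encode`; the tool definition is taken from the examples above:

```python
import json

# Variant introduced in 1.2.6 (caused the Claude-on-LiteLLM regression):
# an object schema with an empty "properties" member.
empty_object = {
    "name": "reindex_content",
    "description": "Rebuilds the site search index.",
    "parameters": {"type": "object", "properties": {}, "additionalProperties": False},
}

# Suggested resolution: omit the schema entirely by sending null.
null_params = {
    "name": "reindex_content",
    "description": "Rebuilds the site search index.",
    "parameters": None,
}

# The empty schema serializes as a JSON object, the None as JSON null.
print(json.dumps(empty_object["parameters"]))
print(json.dumps(null_params["parameters"]))  # → null
```

The key point is that some providers validate the `parameters` member strictly when it is present, while a JSON `null` sidesteps schema validation entirely for parameter-less functions.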
Steps to reproduce (required for bugs, but not feature requests)
Set up LiteLLM with Claude (you can use Amazee).
In the AI Test module there is a parameter-less trigger function.
Try to use it; it returns an exception.
Proposed resolution
Pass null for the parameters of parameter-less functions instead of an empty stdClass.
Remaining tasks
Optional: Other details as applicable (e.g., User interface changes, API changes, Data model changes)
AI usage (if applicable)
[ ] AI Assisted Issue
This issue was generated with AI assistance, but was reviewed and refined by the creator.
[ ] AI Assisted Code
This code was mainly generated by a human, with AI autocompleting or parts AI generated, but under full human supervision.
[ ] AI Generated Code
This code was mainly generated by an AI with human guidance, and reviewed, tested, and refined by a human.
[ ] Vibe Coded
This code was generated by an AI and has only been functionally tested.
Issue fork ai-3572765
Comments
Comment #3
marcus_johansson commented
Comment #4
narendrar commented
Tested on my local environment; applying this MR fixes the issue on LiteLLM with Claude 4.5 Sonnet.
Comment #5
marcus_johansson commented
What we need to do is test this against providers that offer tool calling and are "official".
The preparation testing steps are:
1. Install any provider and set it up.
2. Add the following to settings.php (this makes it possible to install test modules):
$settings['extension_discovery_scan_tests'] = TRUE;
3. Install the AI API Explorer module and the AI Test module.
4. Visit /admin/config/ai/explorers/chat_generator
For testing without parameters:
1. Write "Trigger this" in the prompt
2. Open the Advanced -> Function Calling, search for "Trigger" and choose that one function call.
3. Check Execute Function Call.
4. Send; if successful, you should see that it picked and executed the function.
For testing with parameters:
1. Write "what is 12345+12345" in the prompt
2. Open the Advanced -> Function Calling, search for "Calculator" and choose that one function call.
3. Check Execute Function Call.
4. Send; if successful, you should see that it picked and executed the function and returned the result 24690.
Comment #6
marcus_johansson commented
Providers/models I can confirm work:
* OpenAI (gpt-4.1, gpt-5.2)
* Anthropic (5 models)
* LiteLLM/Amazee (Mistral and Claude)
* Mistral (mistral-medium, mistral-large)
* Ollama (llama-3.1)
Comment #7
brunocarvalho commented
Hi marcus_johansson, I noticed that the AI Test module is needed to run the tests, but the module isn't available to be added to the project. Is there a way to add this module externally, or another way to perform the tests?
Comment #8
marcus_johansson commented
@brunocarvalho - it will show up if you follow this instruction:
Add the following to settings.php:
$settings['extension_discovery_scan_tests'] = TRUE;
Comment #9
marcus_johansson commented
Since this is urgent and it is tested on the major providers, I will remove the QA tag. I have since also tested on Azure.
I will switch it to critical as well, so we can prepare a new release based on it urgently.
I have run a script to figure out usage, and it shows that all the major providers we need to test are covered except Gemini; however, Gemini support was only recently added and that module is not stable.
See usage:
Comment #13
marcus_johansson commented
Merged and forward-ported.