[Tracker]
Update Summary: [One-line status update for stakeholders]
Short Description: [One-line issue summary for stakeholders]
Check-in Date: MM/DD/YYYY
Metadata is used by the AI Tracker. Docs and additional fields here.
[/Tracker]

Problem/Motivation

In #3567784: Tools Function Input should give back an empty json schema skeleton we introduced a fix for a bug where the default parameters were not following proper standards. That bug caused issues in the Mistral client, Mistral on LiteLLM, and Ollama for certain models. Even with thorough testing, we did not catch that different models in LiteLLM do not behave the same.

This fix seems to have caused a regression on Claude Bedrock models when using LiteLLM: they now return a validation error.

The idea was to move parameter-less functions from:

{
  "name": "reindex_content",
  "description": "Rebuilds the site search index.",
  "parameters": {
    "type": "object",
    "additionalProperties": false
  }
}

to

{
  "name": "reindex_content",
  "description": "Rebuilds the site search index.",
  "parameters": {
    "type": "object",
    "properties": {},
    "additionalProperties": false
  }
}

This was introduced in 1.2.6 and caused this regression. It is actually not following the correct standards either; instead, the standards-compliant solution would be:

{
  "name": "reindex_content",
  "description": "Rebuilds the site search index.",
  "parameters": {
    "type": "object",
    "properties": {},
    "required": []
  }
}

However, this breaks the happy path for certain models (Mistral) when there are parameters.

@narendrar suggested this:

{
  "name": "reindex_content",
  "description": "Rebuilds the site search index.",
  "parameters": null
}

So far we have tested that with LiteLLM (Mistral, Claude), OpenAI, Anthropic, Ollama (llama3.1) and Mistral (mistral-medium), and it seems to work well for both functions with parameters and parameter-less functions.

Steps to reproduce (required for bugs, but not feature requests)

1. Set up LiteLLM with Claude (you can use Amazee).
2. In the AI Test module there is a parameter-less trigger function.
3. Try to use it; it will return an exception.

Proposed resolution

Send null instead of an empty stdClass as the parameters of parameter-less functions.
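A minimal sketch of the proposed change, in Python for illustration only (the module itself is PHP, and build_tool_definition is a hypothetical helper, not the module's real API). The key point is that a parameter-less function serializes "parameters" as JSON null rather than as an empty object:

```python
import json

def build_tool_definition(name, description, parameters=None):
    # Hypothetical helper: when a function has no parameters, we pass
    # None, which serializes to JSON null; a schema dict is passed
    # through as-is for functions that do take parameters.
    return {
        "name": name,
        "description": description,
        "parameters": parameters,
    }

payload = build_tool_definition(
    "reindex_content", "Rebuilds the site search index."
)
print(json.dumps(payload))
# {"name": "reindex_content", "description": "Rebuilds the site search index.", "parameters": null}
```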

Remaining tasks

Optional: Other details as applicable (e.g., User interface changes, API changes, Data model changes)

AI usage (if applicable)

[ ] AI Assisted Issue
This issue was generated with AI assistance, but was reviewed and refined by the creator.

[ ] AI Assisted Code
This code was mainly generated by a human, with AI autocompleting or parts AI generated, but under full human supervision.

[ ] AI Generated Code
This code was mainly generated by an AI with human guidance, and reviewed, tested, and refined by a human.

[ ] Vibe Coded
This code was generated by an AI and has only been functionally tested.

Issue fork ai-3572765


Comments

marcus_johansson created an issue. See original summary.

marcus_johansson’s picture

Priority: Normal » Major
Status: Active » Needs review
Issue tags: +needs forward port
narendrar’s picture

Version: 1.2.8 » 1.2.6
Issue summary: View changes
Issue tags: +Needs manual testing

Tested on my local environment; applying this MR fixes the issue on LiteLLM with Claude 4.5 Sonnet.

marcus_johansson’s picture

Issue tags: +Needs QA

What we need to do is test this against providers that offer tool calling and are "official":

The preparation testing steps are:
1. Install any provider and set it up.
2. Add the following to settings.php: $settings['extension_discovery_scan_tests'] = TRUE; this makes it possible to install test modules.
3. Install the AI API Explorer module and the AI Test module.
4. Visit /admin/config/ai/explorers/chat_generator

For testing without parameters:
1. Write "Trigger this" in the prompt
2. Open the Advanced -> Function Calling, search for "Trigger" and choose that one function call.
3. Check Execute Function Call
4. Send; if successful, you should see that it picked the function and executed it.

For testing with parameters:
1. Write "what is 12345+12345" in the prompt
2. Open the Advanced -> Function Calling, search for "Calculator" and choose that one function call.
3. Check Execute Function Call
4. Send; if successful, you should see that it picked the function, executed it, and returned the result 24690.

marcus_johansson’s picture

Providers/models I can confirm work:

* OpenAI (gpt-4.1, gpt-5.2)
* Anthropic (5 models)
* LiteLLM/Amazee (Mistral and Claude)
* Mistral (mistral-medium, mistral-large)
* Ollama (llama-3.1)

brunocarvalho’s picture

Hi marcus_johansson, I noticed the AI Test module is needed to run the tests, but the module isn't available to add to the project. Is there a way to add this module externally, or another way to perform the tests?

marcus_johansson’s picture

@brunocarvalho - if you follow this instruction it will show up:

Add $settings['extension_discovery_scan_tests'] = TRUE; to settings.php.

marcus_johansson’s picture

Priority: Major » Critical
Status: Needs review » Reviewed & tested by the community
Issue tags: -Needs manual testing, -Needs QA

Since this is urgent and it has been tested on the major providers, I will remove the QA tags. I have since also tested on Azure.

I will switch it to critical as well, so we can prepare a new release based on it urgently.

I have run a script to figure out usage; the results are below. All the major providers we need to test have been tested except for Gemini, but it was only recently added and the module is not yet stable.

See usage:

provider total_usage
ai_provider_openai 8742
ai_provider_anthropic 5437
ai_provider_amazeeio 2810
gemini_provider 705
ai_provider_litellm 552
ai_provider_azure 371
ai_provider_ollama 244
ai_provider_mistral 125
ai_provider_aws_bedrock 123
ai_provider_deepl 107
ai_provider_deepseek 96
ai_provider_dxpr 84
ai_provider_huggingface 75
ai_provider_groq 67
elevenlabs 66
ai_provider_google_vertex 60
ai_provider_perplexity 37
ai_provider_lmstudio 35
ai_provider_openrouter 29
ai_provider_x 21
fireworksai 15
ai_provider_nanobanana 13
ai_provider_vllm 13
ai_provider_yandex 7
ai_provider_alibabacloud 6
ai_provider_anythingllm 4
ai_provider_acquia 3
ai_provider_aliyun_bailian 3
deepgram 3
writer_ai 3
ai_provider_apertus 2
auphonic 2
ai_provider_xai 2
ai_document_ocr 1
ai_provider_moonshot 1
ai_provider_apple 0
ai_provider_baidu 0
ai_provider_browser 0
ai_provider_bytedance 0
ai_provider_cohere 0
did_ai_provider 0
ai_provider_docker 0
ai_provider_doubao 0
ai_provider_dreamstudio 0
ai_provider_drupal_coder 0
ai_provider_infomaniak 0
ai_provider_mittwald 0
rail_score 0
ai_provider_siliconflow 0
ai_provider_stackit 0
ai_provider_tencent_hunyuan 0
ai_provider_voyage 0
ai_provider_webt 0
workers_ai_provider 0
ai_provider_zhipuai 0

marcus_johansson’s picture

Status: Reviewed & tested by the community » Fixed

Merged and forward ported.


Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.