[Tracker]
Update Summary: [One-line status update for stakeholders]
Short Description: Add Input Length Limit guardrail plugin
Check-in Date: MM/DD/YYYY
[/Tracker]

Problem/Motivation

There is currently no guardrail to limit the length of user input sent to AI providers. Excessively long inputs can be used for context stuffing attacks, denial-of-wallet attacks (consuming expensive API tokens), or to overwhelm the AI model's context window.

A simple input length limit guardrail would provide a configurable safety net that blocks requests exceeding a character or token count before they reach the provider API.

Proposed resolution

  • Create a new InputLengthLimit guardrail plugin that implements processInput().
  • Allow configuring a maximum length in characters, with an option to use the ai.tokenizer service for token-based counting instead.
  • The limit should apply to the last user message by default, with a configurable option to apply to the total conversation length (all messages combined).
  • Return a StopResult when the input exceeds the limit, with a configurable violation message.
  • Add tests covering both character-based and token-based limits.
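
The check described in the bullets above could be sketched as follows. This is a minimal, Drupal-agnostic sketch: the function name, parameters, and result array shape are assumptions for illustration only — the actual plugin would implement processInput() and return a StopResult object, as proposed above.

```php
<?php
// Minimal sketch of the length check at the heart of the proposed
// InputLengthLimit guardrail. Names and the result shape here are
// assumptions; the real plugin would implement processInput() and
// return a StopResult carrying the configured violation message.

/**
 * Checks an input against a configured maximum length.
 *
 * @param string $input
 *   The last user message (or the whole concatenated conversation,
 *   depending on configuration).
 * @param int $max_length
 *   The maximum allowed length, in characters or tokens.
 * @param callable|null $token_counter
 *   Optional token counter, e.g. a wrapper around the ai.tokenizer
 *   service. When NULL, plain character counting is used.
 *
 * @return array
 *   ['stop' => bool, 'message' => string|null].
 */
function check_input_length(string $input, int $max_length, ?callable $token_counter = NULL): array {
  // Count tokens when a counter is supplied, characters otherwise.
  $count = $token_counter ? $token_counter($input) : mb_strlen($input);
  if ($count > $max_length) {
    return [
      'stop' => TRUE,
      'message' => sprintf('Input of %d units exceeds the limit of %d.', $count, $max_length),
    ];
  }
  return ['stop' => FALSE, 'message' => NULL];
}
```

A naive whitespace-based word counter can stand in for the tokenizer service in tests, which keeps the character-based and token-based paths covered by the same function.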

AI usage (if applicable)

[x] AI Assisted Issue
This issue was generated with AI assistance, but was reviewed and refined by the creator.

[ ] AI Assisted Code
[x] AI Generated Code
[ ] Vibe Coded

- This issue was created with the help of AI

Issue fork ai-3582856


Comments

marcus_johansson created an issue. See original summary.

marcus_johansson

Issue summary: View changes

marcus_johansson

While working on this, I also found that the schema was not correctly set, and fixed it. I should probably split that out, though, since it's a bug and should be backported.
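
For context, a Drupal config schema entry for this guardrail's settings would look roughly like the following. This is a hedged sketch of the shape such an entry takes — the schema key, plugin ID, and every setting name here are assumptions, not the committed schema:

```yaml
# Hypothetical config schema entry for the guardrail's settings; the
# key names and plugin ID are illustrative assumptions.
ai.guardrail.input_length_limit:
  type: mapping
  label: 'Input length limit guardrail settings'
  mapping:
    max_length:
      type: integer
      label: 'Maximum input length'
    use_tokens:
      type: boolean
      label: 'Count tokens instead of characters'
    tokenizer_model:
      type: string
      label: 'Tokenizer model'
    apply_to_conversation:
      type: boolean
      label: 'Apply the limit to the whole conversation'
    violation_message:
      type: label
      label: 'Violation message'
```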

ahmad khader

Assigned: Unassigned » ahmad khader
ahmad khader

Status: Needs review » Reviewed & tested by the community

Thanks, @marcus.
Tested the guardrail and it is working as expected. One small bug is the tokenizer_model visibility, which is not working correctly: it should be ':input[name="guardrail_settings[use_tokens]"]' => ['checked' => TRUE] instead of ':input[name="use_tokens"]' => ['checked' => TRUE].

Since it's a small bug, I'll fix it and move to RTBC.
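
The fix above comes down to using the nested input name in the Form API #states selector. The following is a hypothetical excerpt of the settings form illustrating it — everything except the field names quoted in the comment (guardrail_settings, use_tokens, tokenizer_model) is an assumption about the form structure:

```php
<?php
// Hypothetical sketch of the settings form element with the corrected
// #states selector; the surrounding form structure is assumed.
$form['guardrail_settings']['tokenizer_model'] = [
  '#type' => 'select',
  '#title' => 'Tokenizer model',
  '#states' => [
    'visible' => [
      // The checkbox is nested under the guardrail_settings tree, so
      // the selector must use the bracketed parent key in the input
      // name, not the bare field name.
      ':input[name="guardrail_settings[use_tokens]"]' => ['checked' => TRUE],
    ],
  ],
];
```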

ahmad khader

Assigned: ahmad khader » Unassigned
marcus_johansson’s picture

Status: Reviewed & tested by the community » Needs work

I will split this off into a separate bug issue for the missing schema in Guardrails in general, so it can be backported.

I'll assume both are RTBC'd, but I'm setting this back to Needs work so no one merges it by mistake.

ahmad khader

marcus_johansson

Status: Needs work » Reviewed & tested by the community

Setting RTBC again and then merge.

marcus_johansson

Status: Reviewed & tested by the community » Fixed
Issue tags: +AI Initiative Sprint, +AI Innovation, +needs forward port
