AI coding tools for Drupal developers: an introduction

Last updated on 7 April 2026

This documentation needs review. See "Help improve this page" in the sidebar.

AI coding tools have become a standard part of many developers' workflows. You have probably heard colleagues mention them, seen debates about them in the Drupal issue queues, or encountered the term "vibe coding" somewhere on the internet. This page explains what these tools actually are, where they genuinely help, where they fall short, and what is specific about using them with Drupal.

This page is written for developers at any point on the spectrum from "mildly curious" to "deeply unconvinced" — prior experience with AI tools is not assumed, and neither is excitement about them.

If you are looking to add AI capabilities to a Drupal site — chatbots, content generation, semantic search — see the AI module instead.

What are these tools, exactly?

"AI coding tools" covers a wide spectrum — from pasting a question into a chat window to running fleets of autonomous agents working in parallel overnight. The landscape breaks into three broad categories:

  • Chat assistants (ChatGPT, Claude.ai, Gemini) are useful for explaining code, drafting documentation, or thinking through a problem, but they have no access to your actual project unless you paste it in, and no memory between sessions. Most developers have already used something in this category, even if they do not think of it as an "AI coding tool."

  • Single coding agents (Claude Code, Cursor, Aider, Cline, GitHub Copilot) run directly in your development environment. They can read your actual codebase, make multi-file changes, run terminal commands, and iterate on their own output. This is the tier this guide focuses on.

  • Multi-agent systems run multiple agents in parallel with coordination layers between them (tools include Conductor, Claude Code's Agent Teams, Codex Web, and GitHub Copilot Coding Agent). A year ago this sounded like science fiction; in 2026 it is real and in production at many organizations.

Developers who use these tools tend to move through recognizable stages — from approving every single file change an agent makes, to guiding it at a higher level, to eventually running multiple agents in parallel on different parts of a codebase. This guide focuses on the early stages: getting comfortable with an agent running in your project, understanding what it produces, and knowing when to trust it and when not to. If you are curious about the full progression, The Code Agent Orchestra maps it out clearly.


Where they actually help

Used well, coding agents are genuinely useful for:

  • Tedious boilerplate. Module scaffolding, routing files, service definitions, config schema, PHPUnit test structure. An agent can produce a correct first draft in seconds. You still need to review it, but you are reviewing instead of typing.
  • Unfamiliar territory. If you are a backend developer who needs to write a Twig template, or a site builder who needs to write a migration, an agent can get you most of the way there and explain what it is doing. The output will need review, but it is faster than starting from zero.
  • Understanding existing code. Pointing an agent at an unfamiliar module and asking it to explain what the code does is one of the most reliable use cases. It is not generating anything — just reading and summarizing. Hard to go wrong.
  • Repetitive tasks with clear patterns. Adding the same type of field to ten content types. Writing a config update hook. Converting deprecated function calls. Anything with a clear pattern that you would otherwise do by hand, repeatedly.
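As a concrete example of boilerplate you would rather review than type, here is the kind of routing file an agent can draft in seconds. The module, path, and controller names are hypothetical; the permission shown is a real core permission, but permissions are exactly the kind of detail agents guess at, so check them:

```yaml
# my_module.routing.yml — a typical agent first draft: review it, do not trust it.
my_module.report:
  path: '/admin/reports/my-module'
  defaults:
    _controller: '\Drupal\my_module\Controller\ReportController::build'
    _title: 'My module report'
  requirements:
    # Review this line especially: agents often invent permission names.
    _permission: 'access site reports'
```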

Where they fall short

  • They do not know Drupal. Out of the box, coding agents know generic PHP. They have a rough idea of what Drupal looks like from code they have seen in training data, but they do not deeply understand the hook system, the plugin API, the service container, the Config API, or Drupal coding standards. Without explicit guidance, they produce code that looks plausible but is subtly wrong in ways that an experienced Drupal developer will spot immediately — and a newer one might not.
  • They cannot validate their own output. An agent can run your tests — it has full terminal access — but it cannot meaningfully interpret whether a test failure is its fault, whether the tests it wrote are actually testing the right things, or whether passing tests mean the code is correct. Always verify generated code yourself before committing it.
  • They hallucinate confidently. Coding agents will invent function names, module names, and API methods that do not exist, and present them with complete confidence. This is not a bug that will be fixed — it is a fundamental characteristic of how these models work. Treat all generated code as a first draft that requires review, not a finished product.
  • They can generate insecure code. SQL injection vulnerabilities, missing access checks, unsafe use of user input — these appear regularly in AI-generated Drupal code. The Security and contrib considerations page in this guide covers what to look for specifically.
  • "Vibe coding" accumulates debt fast. Accepting generated code you do not fully understand, because it seems to work, is a quick way to accumulate a codebase nobody can maintain. The productivity gains are real, but they compound into problems if you stop reading what the agent produces. An agent that writes code you cannot explain is a liability, not an asset.

All of these concerns reduce to one rule: never submit code you do not understand.
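To make the injection point above concrete, compare an unsafe query of the kind agents sometimes produce with the placeholder-based form Drupal's database API expects. This is an illustrative fragment, not a complete class, and assumes `$title` holds user input:

```php
// UNSAFE: concatenating user input directly into SQL.
// This is an SQL injection vulnerability.
$result = \Drupal::database()->query(
  "SELECT nid FROM {node_field_data} WHERE title = '" . $title . "'"
);

// SAFE: a named placeholder lets the database layer escape the value.
$result = \Drupal::database()->query(
  "SELECT nid FROM {node_field_data} WHERE title = :title",
  [':title' => $title]
);
```

Both versions "work" in casual testing, which is exactly why generated code needs review rather than a quick smoke test.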


What is specific about Drupal

Drupal's conventions are specific enough that generic AI output is reliably suboptimal. A few concrete examples:

  • Drupal uses dependency injection via the service container. Agents default to static calls and procedural code.
  • Configuration is managed through the Config API with schema files. Agents often hardcode values or fall back on patterns from older Drupal versions, such as the long-removed variables system.
  • There are right and wrong base classes to extend depending on what you are building. Agents frequently choose the wrong one.
  • The contrib ecosystem is vast. A module almost certainly exists for what you are about to write from scratch. Agents do not know this.
  • Drupal coding standards are specific and enforced. Agents need to be told about them explicitly.
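The dependency injection point is the most common in practice. A sketch of the difference, shown as fragments rather than a complete class (the surrounding class and use statements are assumed):

```php
// What agents tend to write: a static call inside a class.
// It works, but it is discouraged outside procedural code and is hard to test.
$node = \Drupal::entityTypeManager()->getStorage('node')->load($id);

// What Drupal expects in a class: inject the service via the container.
public function __construct(
  protected EntityTypeManagerInterface $entityTypeManager,
) {}

public static function create(ContainerInterface $container): static {
  return new static($container->get('entity_type.manager'));
}

// Then call the injected service.
$node = $this->entityTypeManager->getStorage('node')->load($id);
```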

The good news is that all of these are fixable with context — telling the agent what Drupal expects before you start. The Setting up AI tools for a Drupal project page covers how to do this practically.
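In practice, "telling the agent what Drupal expects" usually means a project context file the agent reads automatically. The filename depends on the tool — CLAUDE.md, AGENTS.md, and .cursorrules are common conventions — and the content below is only a minimal sketch of the kind of guidance that helps:

```markdown
# Project context for coding agents (illustrative)

- This is a Drupal site. Follow Drupal coding standards.
- Use dependency injection in classes; avoid static \Drupal:: calls.
- Manage configuration through the Config API, with schema files.
- Before writing new functionality, check whether a contrib module
  already provides it.
- Extend the appropriate Drupal base class; ask if unsure which.
```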


Ethical and environmental considerations

These are real concerns and worth taking seriously rather than dismissing.

AI models require significant energy to train and operate, and the environmental cost is largely invisible to users. Most major AI companies do not publicly disclose the carbon cost of their models, which makes it genuinely difficult to know the footprint of your usage. Water consumption for data center cooling is a related concern that receives less attention than energy but is equally real.

Training data provenance is also contested. Large models are trained on code and text scraped from the internet, and there are ongoing legal and ethical debates about whether this constitutes fair use of developers' and writers' work — debates that are currently being resolved in courts and legislatures, not by the companies themselves.

There is no clean individual solution to these systemic problems, but some steps are worth taking if these concerns matter to you:

  • Support meaningful AI policy advocacy. The Electronic Frontier Foundation works on digital rights and AI accountability broadly. The AI Now Institute produces independent research and advocates specifically for AI regulation in the public interest. In the EU, the AI Act represents the most substantive regulatory framework currently in force and is worth following and engaging with through your local representatives.
  • Use local models where feasible. Running models via Ollama paired with an open source agent like Aider or Cline keeps your code on your own hardware. Output quality is currently lower than cloud-hosted frontier models, but the gap is narrowing and the tradeoff may be worth it for your context.
  • Use AI deliberately, not reflexively. Every query has a cost. Reaching for an agent for tasks you could do in two minutes yourself is not neutral — neither environmentally nor in terms of the skill atrophy that comes from outsourcing thinking.
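For the local-model option above, a minimal setup sketch. The model name is only an example, and Aider's model-naming flags have varied between versions, so check its documentation before relying on this:

```shell
# Pull a local model and start the Ollama server.
ollama pull llama3
ollama serve &

# Point Aider at the local Ollama endpoint instead of a cloud API.
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama/llama3
```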

These are individual choices in the context of industry-wide problems. The Drupal community values both sustainability and informed decision-making — knowing these concerns exist and engaging with them thoughtfully is more useful than either ignoring them or letting them prevent you from evaluating useful tools.


Ready to try it?

The next page covers getting set up with Drupal-specific context — the part that actually makes these tools useful for Drupal work, without repeating what each tool's own documentation already covers.
