Problem/Motivation

Drupal currently lacks a built-in, AI-powered solution for detecting toxic content in comments (on blog posts, product reviews, forum posts, and articles) and in form submissions (contact forms, webforms, and node creation forms).

The AI-Powered Toxic Content or Spam Detection module will integrate with AI APIs to detect toxic content automatically, enabling client-side detection that gives users real-time feedback before submission.

Automatically detecting spam or toxic content helps protect a website from abuse, reduces manual moderation work, improves the user experience, and raises content quality.
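As a rough illustration of the real-time feedback idea, the sketch below decides whether a submission should be flagged based on per-label scores returned by a toxicity model. The response shape, label names, and default threshold are assumptions for illustration, not a confirmed API contract.

```javascript
// Default cut-off above which a label is considered a violation
// (0.7 is an assumed value; a real module would make this configurable).
const DEFAULT_THRESHOLD = 0.7;

// Decide whether a submission should be flagged, given per-label scores
// returned by a toxicity model, e.g. { toxicity: 0.92, insult: 0.15 }.
function flagToxic(scores, threshold = DEFAULT_THRESHOLD) {
  const offending = Object.entries(scores)
    .filter(([, score]) => score >= threshold)
    .map(([label]) => label);
  return { flagged: offending.length > 0, labels: offending };
}
```

A client-side handler could call this after receiving model scores and show a warning instead of submitting, for example `flagToxic({ toxicity: 0.92, insult: 0.1 })` flags only the `toxicity` label.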

Project Goal

To implement an AI-based system that detects and filters toxic or spam content, improving content quality and user experience while minimising repetitive manual moderation effort.

Mentor Details

Name: Pooja Sharma

Email / Slack: @pooja_sharma - slack

Project Size

175 hours

Project Difficulty

Intermediate

Project Skills/Prerequisite

  • Proficiency in PHP and JavaScript
  • Familiarity with Drupal module development
  • Experience with AI/ML models and API integration (FASTAPI, Hugging Face)
  • Experience with Git, containerization (Docker, Docksal), and RESTful APIs
  • Basic understanding of UI/UX principles and responsive design

Project Resources

R&D Tasks

  • Research AI models for toxic or spam detection.
  • Develop backend integration with AI APIs.
  • Design and implement a user-friendly warning message shown to users when toxic/spam content is detected.
  • Ensure compliance with accessibility standards (WCAG, ARIA, etc.).
  • Write module documentation in the readme.txt file and conduct testing.
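The backend-integration task above might start from something like the following sketch (shown in JavaScript for brevity), which prepares the HTTP request the module could send to a hosted toxicity model. The endpoint URL and payload shape are illustrative assumptions, not an actual provider contract.

```javascript
// Hypothetical helper that builds the request sent to an external
// moderation service. The URL and the { inputs: ... } payload shape
// are assumptions; a real Hugging Face or FastAPI endpoint may differ.
function buildModerationRequest(text, endpoint = 'https://example.com/api/moderate') {
  return {
    url: endpoint,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ inputs: text }),
  };
}
```

Keeping request construction in one helper makes it easy to swap providers later, which matters since the module is meant to support more than one AI API.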

Issue fork gsoc-3574074


    Comments

    pooja_sharma created an issue. See original summary.

    pooja_sharma’s picture

    Title: Proposal 2026: Toxic Content or Spam Detection » Proposal 2026: AI Powered Toxic Content or Spam Detection
    talhaa’s picture

    Issue tags: +GSoC 2026

    I’m interested in working on this project idea for GSoC 2026 and would love to contribute. I’ll start reviewing the related modules and existing discussions to better understand the scope.

    Please let me know if there are any specific directions or expectations I should keep in mind while preparing my proposal.

    codeguyakash’s picture

    Hi @pooja_sharma,

    I’m Akash (codeguyakash), a GSoC 2026 applicant interested in the AI-Powered Toxic Content and Spam Detection project.

    I’ve started exploring possible approaches for this module and had a few thoughts:

    - Implementing a modular AI integration layer (Detoxify / Hugging Face)
    - Supporting both client-side (real-time feedback) and server-side validation
    - Providing configurable thresholds (toxicity/spam) for admins
    - Extending support across comments, node forms, and webforms

    I’m currently setting up a local Drupal environment and will start working on a small prototype / POC for toxicity detection.
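The configurable-threshold idea above could be sketched roughly as follows; the form types and default values are assumptions for illustration, not settings the module defines yet.

```javascript
// Hypothetical per-form thresholds an admin might configure
// (form IDs and numeric values are illustrative assumptions).
const thresholds = { comment: 0.6, webform: 0.7, node: 0.8 };

// Look up the threshold for a given form type, falling back to a
// sensible default when no override is configured.
function thresholdFor(formType, config = thresholds) {
  return config[formType] ?? 0.7;
}
```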

    Would love your guidance on:
    1. Preferred AI provider (Detoxify vs Hugging Face vs custom FastAPI)
    2. Whether we should prioritize client-side detection or backend validation first
    3. Any existing modules or prior work I should explore

    Looking forward to contributing!

    Thanks,
    Akash

    @_lakshya.pro’s picture

    Hi @pooja_sharma,

    I’m interested in the “AI Powered Toxic Content or Spam Detection” project and have been exploring how such a system can be effectively integrated within Drupal’s content workflows.

    I had a few questions regarding the architecture and scope:

    1. For model integration, is the expectation to rely primarily on external APIs (e.g., Hugging Face/Detoxify), or would hosting a lightweight model via a FastAPI service be preferred for better control and latency?

    2. For real-time validation (client-side), should the detection be synchronous with API calls, or would a debounced/asynchronous approach be better to balance UX and performance?

    3. How should false positives/negatives be handled? Is there interest in a configurable threshold system or admin feedback loop to improve moderation accuracy over time?

    4. Should the module integrate with Drupal’s existing content moderation workflows (e.g., flagging content instead of blocking submission)?

    5. For scalability, especially on high-traffic sites, would queue-based processing be expected for server-side validation?
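The debounced client-side approach raised in question 2 could look like the sketch below, where the toxicity check fires only after the user pauses typing; the delay value is an assumption.

```javascript
// Minimal debounce helper: repeated calls within the delay window
// cancel the pending invocation, so the wrapped check (e.g. an API
// call to a toxicity model) runs only after the user stops typing.
function debounce(fn, delayMs = 400) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

Wiring this around the field's input event keeps the UI responsive while avoiding one API call per keystroke.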

    I’m currently reviewing existing Drupal modules related to spam prevention and moderation, and experimenting with API-based toxicity detection approaches.

    Any guidance on preferred direction or initial contribution areas would be very helpful.

    Thanks!
