SHIELD: System for Harmful explicit-content Identification and Evaluation through LLM-Driven approach

  • Dishant Kapoor
  • Karan Ahuja
  • Deepika Kumar
  • Paanav Puri
  • Srinivas Jangirala*
  • Vedika Gupta
  • Anandadeep Mandal*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The surge in access to explicit content across various platforms has sparked major concerns, yet existing content filtering systems struggle to analyze diverse media formats, allowing harmful content to spread unchecked. To address these shortcomings, the authors propose SHIELD, an optimized end-to-end pipeline that detects and analyzes explicit content using a large-language-model (LLM) driven approach. SHIELD processes multimedia inputs by segregating and pre-processing them, converting all formats into text through advanced models, extracting meaningful textual context, and subjecting the resulting data to two parallel evaluation mechanisms: an LLM-based classifier for contextual analysis and a semantic vector-based scoring system for quantitative measurement. Explicitness classifications are output in JSON format, allowing easy integration into real-world systems. When benchmarked against a manually curated ground-truth dataset, the LLM-based system surpasses the vector-based approach, with an accuracy of 93.32% versus 67.81%. The pipeline shows robustness across all media types and file sizes, confirming its viability as a scalable, context-aware solution.
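The dual-evaluation design described in the abstract — an LLM-based contextual classifier running in parallel with a semantic vector-based scorer, with verdicts emitted as JSON — can be illustrated with a minimal sketch. This is not the authors' implementation: the `llm_classifier` stub, the toy lexicon, the bag-of-words cosine scorer, and the `threshold` parameter are all illustrative stand-ins for the paper's LLM and embedding models.

```python
import json
import math

# Toy lexicon standing in for the LLM's contextual judgement (assumption).
EXPLICIT_TERMS = {"violence", "gore", "nudity"}


def llm_classifier(text: str) -> str:
    """Stub for the LLM-based contextual classifier: in the real system
    this would be a prompt to a large language model."""
    words = set(text.lower().split())
    return "explicit" if words & EXPLICIT_TERMS else "safe"


def vector_score(text: str, reference: str = "graphic violence nudity") -> float:
    """Stub for the semantic vector-based scorer: cosine similarity
    between term-frequency vectors, in place of learned embeddings."""
    def tf(s: str) -> dict:
        counts: dict = {}
        for w in s.lower().split():
            counts[w] = counts.get(w, 0) + 1
        return counts

    a, b = tf(text), tf(reference)
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def shield_pipeline(text: str, threshold: float = 0.2) -> str:
    """Run both evaluators on the extracted text and emit a JSON verdict,
    mirroring the paper's output format in spirit only."""
    result = {
        "input": text,
        "llm_label": llm_classifier(text),
        "vector_score": round(vector_score(text), 3),
    }
    result["explicit"] = (
        result["llm_label"] == "explicit"
        or result["vector_score"] >= threshold
    )
    return json.dumps(result)
```

In this sketch the two signals are combined with a simple OR; the paper instead reports the two mechanisms' accuracies separately (93.32% LLM-based vs. 67.81% vector-based) against a curated ground truth, so any fusion rule here is purely illustrative.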
Original language: English
Article number: Access-2026-05855
Number of pages: 31
Journal: IEEE Access
Early online date: 23 Feb 2026
Publication status: E-pub ahead of print - 23 Feb 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being
  2. SDG 4 - Quality Education
  3. SDG 10 - Reduced Inequalities
  4. SDG 16 - Peace, Justice and Strong Institutions

Keywords

  • Explicit content detection
  • Large Language Model (LLM)
  • Vector embeddings
  • Content moderation
  • Multimodal content analysis
