
FakeReaper: Building Military-Grade Deepfake Detection with One API Call

APR 20, 2026 9 min read By Lexcore AI Team

In November 2025, a synthetic audio clip of a sitting MP was used to spread election misinformation in a state assembly by-election. The clip circulated on WhatsApp for 11 hours before being debunked. By then, it had been forwarded 2.3 lakh times.

This is the problem FakeReaper is built to solve — not in 11 hours, but in 0.3 seconds.

The Detection Challenge

Deepfake detection is hard because generation keeps improving. Every time a new detection model ships, the generation community adapts. This arms race has a structural problem: single-model detectors have a shelf life.

Our core architectural insight: no single detector is reliable across all modalities and all generation methods. The solution is a multi-model ensemble with dynamic weighting — where the confidence score from each constituent model influences how much it contributes to the final verdict.
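As a minimal sketch of dynamic weighting, here is one simple way the idea can work: normalise each model's self-reported confidence into a weight and take the weighted average of the per-model fake probabilities. (This is an illustration of the weighting concept only; the production system fuses models through a learned meta-classifier, and the function name below is hypothetical.)

```python
import numpy as np

def ensemble_verdict(model_probs, model_confidences):
    """Fuse per-model P(fake) scores using confidence-derived weights.

    model_probs:       each specialist model's probability that the media is fake
    model_confidences: each model's self-reported confidence in its own output
    """
    probs = np.asarray(model_probs, dtype=float)
    conf = np.asarray(model_confidences, dtype=float)
    weights = conf / conf.sum()           # normalise confidences into weights
    return float(np.dot(weights, probs))  # confidence-weighted average P(fake)

# Three models: the most confident model contributes the most to the verdict
score = ensemble_verdict([0.91, 0.97, 0.88], [0.80, 0.95, 0.60])
```

Note how the low-confidence third model (0.60) is partially discounted, pulling the fused score toward the two models that are more certain.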

The Architecture

FakeReaper processes images, video frames, audio waveforms, and text through separate specialist models, then fuses their outputs through a learned meta-classifier trained on adversarial examples.

The pipeline for a single media item:

  1. Ingestion: Media file received via API. Format normalised (MP4 → frames + audio track; image → normalised tensor).
  2. Routing: Content-aware router identifies which specialist models are relevant.
  3. Parallel inference: Each relevant model runs simultaneously on GPU.
  4. Feature fusion: 128-dimensional feature vectors from each model are concatenated and passed to the meta-classifier.
  5. Calibration: The output probability is calibrated on a held-out validation set so that reported confidence scores track observed accuracy.
  6. Response: JSON payload with verdict, confidence, and per-model breakdown.
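Steps 4 and 5 above can be sketched in a few lines. This is an illustrative toy, not FakeReaper's implementation: the function names, the linear stand-in for the learned meta-classifier, and the use of temperature scaling for calibration are all assumptions for the sake of the example.

```python
import numpy as np

def fuse_features(per_model_features):
    # Step 4: concatenate each specialist model's 128-d feature vector
    return np.concatenate(per_model_features)   # shape: (n_models * 128,)

def meta_classifier(fused, w, b):
    # A simple linear head + sigmoid stands in for the learned meta-classifier
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

def calibrate(p, temperature=1.5):
    # Step 5: temperature scaling, one common calibration technique
    logit = np.log(p / (1.0 - p))
    return 1.0 / (1.0 + np.exp(-logit / temperature))

rng = np.random.default_rng(0)
feats = [rng.normal(size=128) for _ in range(7)]      # 7 specialist models
fused = fuse_features(feats)                          # 896-d fused vector
p_raw = meta_classifier(fused, rng.normal(size=896) * 0.01, 0.0)
p_cal = calibrate(p_raw)                              # calibrated P(fake)
```

A temperature above 1 softens overconfident raw probabilities toward 0.5, which is why it is a popular post-hoc fix for miscalibrated neural classifiers.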

The Specialist Models

We run seven specialist models in the ensemble, covering images, video frames, audio, and text. At a glance:

  0.28 s average latency
  97.3% accuracy
  7 specialist models
  1 API call

One API Call

The entire pipeline is exposed as a single REST endpoint. A detection request looks like this:

POST https://api.lexcoreai.com/v1/fakereaper/detect
Content-Type: multipart/form-data

{
  "media": [binary file or URL],
  "mode": "full",            // or "fast" for image-only
  "modalities": ["auto"]     // or ["image", "audio", "text"]
}

// Response
{
  "verdict": "synthetic",
  "confidence": 0.94,
  "latency_ms": 281,
  "breakdown": {
    "ViT-Fake-v3": 0.91,
    "FreqNet": 0.97,
    "LipSync-Check": 0.88
  }
}
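A client call against this endpoint might look like the following sketch. It assumes the `requests` library; the bearer-token authorization header is an assumption for illustration and is not documented in this post.

```python
import requests

def detect(media_path, api_key, mode="full"):
    """Send one media file to the FakeReaper endpoint and return the JSON verdict.

    The Authorization header below is a hypothetical auth scheme.
    """
    with open(media_path, "rb") as f:
        resp = requests.post(
            "https://api.lexcoreai.com/v1/fakereaper/detect",
            headers={"Authorization": f"Bearer {api_key}"},  # assumed auth
            files={"media": f},                # multipart upload, per the spec
            data={"mode": mode, "modalities": '["auto"]'},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()   # e.g. {"verdict": ..., "confidence": ..., "breakdown": ...}

# Usage (requires a valid key):
# result = detect("clip.mp4", "YOUR_API_KEY")
# result["verdict"] is "synthetic" or not, result["confidence"] is in [0, 1]
```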

The Indian Deepfake Dataset

One differentiator we're proud of: we built and maintain a proprietary dataset of Indian deepfake samples: synthesised faces, voices, and video clips of Indian public figures. Every major benchmark dataset (DFDC, FaceForensics++, DGM4) consists almost exclusively of Western faces and voices.

A detector trained only on those datasets performs materially worse on Indian faces and Indian accents. Our dataset corrects for this. It currently contains 140,000 synthetic Indian face images, 22,000 synthetic voice clips in 12 Indian languages, and 8,000 synthetic video segments.

What's Next

We are in conversations with two state governments about deploying FakeReaper as a real-time monitoring layer for election-related social media content. We are also building a WhatsApp integration — forward any suspicious media to a number and receive a verdict in seconds. No app download required.
