Invisible Ink: Why Even Real Content Gets Flagged as AI

Posted by Saugata Chatterjee on April 15, 2025 · 6 mins read

Executive Summary:

  • AI detectors under pressure: Tools like DetectGPT are failing in subtle but dangerous ways
  • Text watermarking decoded: Hidden signals baked into LLM output
  • False positives rise: Why polished human writing is now at risk
  • Paraphrasing attacks: How even basic rewording breaks detection tools
  • Regulatory implications: Watermarking may become a legal requirement

Introduction: Can Your Content Still Be Trusted?

What if your content is flagged as synthetic — even when it’s 100% human?

That’s the reality we’re facing today. In Episode 71 of Machine Learning Made Simple, we dive into how modern content moderation tools are misfiring, mislabeling, and quietly rewriting what “authentic” even means in the age of LLMs.

It started with an experiment: I ran a personal academic abstract from 2015 — written years before GPT existed — through a leading AI detector. It got flagged as AI-generated. Meanwhile, ChatGPT-written text passed.

We’re entering a phase where polished = suspicious. And that’s a problem.

DetectGPT and the Probability Trap

DetectGPT was a promising technique from 2023. It relied on the observation that LLM output tends to sit near local maxima of the model's log-probability function (regions of negative curvature). Perturb such a passage slightly and its likelihood drops consistently; perturb human writing and the change is far less predictable. That asymmetry is what the method used to estimate whether a passage was AI-generated.

But that assumption no longer holds.

Modern LLMs, especially those trained with RLHF and instruction tuning, produce more human-like output, blending creativity with fluency. These models are not only harder to detect; their output increasingly falls outside the statistical patterns that zero-shot detectors were built around.

What’s worse: human writing, especially when edited over time (think: research abstracts or published articles), also appears statistically “too perfect” — and gets misclassified.
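
For intuition, here is a minimal sketch of that perturb-and-compare test. It assumes a Hugging Face causal LM for scoring and leaves the perturbation step as a pluggable callable; the model name and number of perturbations are illustrative choices, not the paper's reference implementation (which uses T5 mask-filling for the perturbations).

```python
# Minimal sketch of the DetectGPT idea: compare a passage's likelihood under a
# scoring model against lightly perturbed copies of the same passage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # scoring model (illustrative)
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def log_likelihood(text: str) -> float:
    """Average token log-probability of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()                                  # loss = mean negative log-likelihood

def curvature_score(text: str, perturb, n: int = 10) -> float:
    """DetectGPT-style score: original likelihood minus mean perturbed likelihood.
    `perturb` is any callable that lightly rewrites the text (the paper uses T5 mask-fill)."""
    original = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text)) for _ in range(n)]
    return original - sum(perturbed) / len(perturbed)

# A large positive score means perturbation consistently hurts likelihood, i.e. the
# passage sits near a local maximum, which DetectGPT reads as evidence of machine text.
```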

The Rise of Watermarking in Text

To tackle this fragility, a new approach is gaining traction: text watermarking.

Unlike DetectGPT, which tries to detect AI after the fact, watermarking happens during generation. Tools like MarkLLM embed imperceptible statistical patterns into text by biasing word choices — for example, favoring “rich” over “opulent” in specific contexts.

Think of it as invisible ink for LLMs. These patterns survive light paraphrasing and editing, making them more robust than post-hoc detectors.
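
Here is a rough sketch of the logit-bias flavor of watermarking (the "green list" scheme popularized by Kirchenbauer et al. and available in toolkits like MarkLLM): at each decoding step, the previous token seeds a pseudorandom split of the vocabulary, and the green half gets a small boost. The helper below is illustrative and not MarkLLM's actual API.

```python
# Illustrative green-list watermark bias; not MarkLLM's actual API.
# At each decoding step the previous token seeds a split of the vocabulary,
# and the "green" half of the tokens receives a small logit bonus.
import torch

def watermark_bias(logits: torch.Tensor, prev_token: int,
                   gamma: float = 0.5, delta: float = 2.0) -> torch.Tensor:
    """Boost a pseudorandom 'green list' of tokens derived from prev_token.
    `logits` is the 1-D next-token logit vector for the current step."""
    vocab_size = logits.shape[-1]
    gen = torch.Generator().manual_seed(prev_token)       # keyed by local context
    green_size = int(gamma * vocab_size)
    green_ids = torch.randperm(vocab_size, generator=gen)[:green_size]
    biased = logits.clone()
    biased[green_ids] += delta                             # nudge word choice, don't force it
    return biased

# Detection later re-derives the same green lists from the text itself and counts
# how many tokens landed on them; a simple z-test on that count exposes the watermark.
```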

But this raises new issues:

  • Transparency: Users aren’t told if their content is watermarked.
  • Security: Watermarks can be stripped or mimicked by adversaries.
  • Policy: Regulators may soon mandate watermarking for all AI outputs.

Paraphrasing Attacks: The Detector’s Kryptonite

A major study from UMass Amherst introduced a paraphrase model called DIPPER, trained to take AI-generated content and rephrase it. When DetectGPT analyzed the paraphrased output, its detection accuracy dropped from roughly 70% to 4.6% at a fixed false-positive rate.

Let that sink in: a simple rewording turned a reliable detector into noise.

This is the core vulnerability of all zero-shot detection approaches. They’re brittle. And in the arms race of content authenticity, fragility is a liability.
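
Conceptually, the attack evaluation is simple: score a set of machine-written passages before and after an automatic paraphrase and compare how often the detector still fires. The sketch below uses hypothetical detect() and paraphrase() stand-ins (the UMass study paired DetectGPT with DIPPER); only the structure of the loop is the point.

```python
# Conceptual evaluation loop behind a paraphrasing attack. `detect` and
# `paraphrase` are hypothetical stand-ins for a detector returning a score
# in [0, 1] and a rewriting model, respectively.
def detection_rate(texts, detect, threshold=0.5):
    """Fraction of passages the detector flags as machine-generated."""
    return sum(detect(t) >= threshold for t in texts) / len(texts)

def paraphrase_attack_eval(machine_texts, detect, paraphrase):
    before = detection_rate(machine_texts, detect)
    after = detection_rate([paraphrase(t) for t in machine_texts], detect)
    return before, after    # the study saw roughly 70% collapse to under 5%
```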

False Positives in the Wild: A Demo

Here’s what I tested in the episode:

  • I took a 2015 abstract I personally wrote (pre-GPT).
  • Ran it through a popular AI detection tool.
  • It was flagged as AI.

Meanwhile, content generated by ChatGPT — without watermarking, without human polish — was not.
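
If you want to repeat this kind of spot check on your own writing, one low-friction option is an off-the-shelf classifier from the Hugging Face hub. The model name below is one public example and its label names vary from model to model; treat the score as a noisy signal, not a verdict.

```python
# Quick spot check of a single passage with a public detector model.
# The model name is one example from the Hugging Face hub; outputs are noisy.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")

passage = "Paste the abstract or article you want to test here."
print(detector(passage))    # e.g. [{'label': 'Fake', 'score': 0.93}]; labels differ per model
```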

This raises a critical question: Are AI detectors punishing good writing?

And if so, what does that mean for academic publishing, SEO, job applications, and even journalism?

Commercial Tools: What’s Actually Useful?

Several tools are already on the market:

  • Originality.AI – ML classifiers trained on mixed data; used by SEO agencies.
  • Winston AI – Sentence-level detection and rephrasing suggestions.
  • Vastav AI (India) – A national-level deepfake and media authenticity system, likely powered by GANs.

While helpful, these tools all face the same core problems: paraphrasing attacks, dataset drift, and statistical lookalikes.

The next generation of tools may require a blend of:

  • On-generation watermarking
  • Post-hoc verification
  • Multimodal forensics (text + metadata + behavior)
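
As a rough illustration of how those three layers could be fused, the sketch below combines a watermark check, a post-hoc detector score, and metadata or behavioral forensics into one weighted verdict. Every helper, weight, and threshold here is a hypothetical placeholder, not any shipping product's logic.

```python
# Hypothetical fusion of the three layers listed above. Each signal is a score
# in [0, 1]; the weights and threshold are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Evidence:
    watermark: float    # from an on-generation watermark verifier
    detector: float     # from a post-hoc statistical detector
    forensics: float    # from metadata/behavioral signals (edit history, timing, provenance)

def combined_verdict(e: Evidence,
                     weights=(0.5, 0.3, 0.2),
                     threshold: float = 0.6) -> bool:
    """Weighted vote across the signals; True means 'likely AI-generated'."""
    score = (weights[0] * e.watermark +
             weights[1] * e.detector +
             weights[2] * e.forensics)
    return score >= threshold

# Example: strong watermark hit, weak detector signal, neutral forensics.
print(combined_verdict(Evidence(watermark=0.9, detector=0.4, forensics=0.5)))  # True
```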

Where This Leaves Us

In a world flooded with AI-generated content, authenticity is no longer about intent — it’s about traceability.

But if the tools designed to detect machine content start penalizing high-quality human work, we risk building a system that rewards mediocrity and punishes clarity.

As platforms roll out detection filters and as legislation looms, developers, creators, and regulators need to ask:

Can we verify content origins without compromising real expression?


🎧 Dive Deeper into the Episode

🎧 Listen on Spotify:
https://creators.spotify.com/pod/show/mlsimple/episodes/Ep71-The-AI-Detection-Crisis-Why-Real-Content-Gets-Flagged-e31i96b

📺 Watch on YouTube:
https://youtu.be/PPyjMxG4wFQ

🔄 Share this episode with your team, your CTO, or your favorite writer.
They might be next to get flagged.