The AI "Experts" Are Cheating (And You Have The Advantage)
We continue our deep dive through the trash that's found its way into AI
The biggest AI conference in the world just got caught with its pants down.
On November 27, 2025, researchers discovered something embarrassing about the International Conference on Learning Representations (ICLR): 21% of the peer reviews submitted for its 2026 conference were completely written by AI.
Not edited by AI. Not polished by AI. Fully generated.
These weren’t amateurs. These were AI researchers. PhDs from Carnegie Mellon, Cornell, Stanford. The people literally building the technology couldn’t figure out how to use it responsibly.
Graham Neubig from Carnegie Mellon was one of the first to notice. The reviews his submissions received seemed off - “very verbose with lots of bullet points” and asking for statistical analyses that made no sense for machine learning papers. He posted on X (formerly Twitter) offering a reward for anyone who could scan all the conference submissions for AI-generated text.
Max Spero, CEO of Pangram Labs (an AI detection company), took him up on it. Over 12 hours, his team analyzed all 19,490 papers and 75,800 peer reviews submitted to the conference.
The results? Published in Nature on November 27, 2025: 15,899 reviews were fully AI-generated. Over half showed some AI involvement.
The irony is so thick you could cut it with a knife.
What Is Peer Review (And Why Does This Matter)?
Quick explanation: In science, when researchers want to publish their work, other experts in the field review it anonymously. They’re supposed to read the paper carefully, evaluate the methods, check for errors, and provide thoughtful feedback.
It’s the foundation of scientific integrity. It’s how we separate good research from garbage.
ICLR had clear policies: You could use AI to help polish your writing or fix grammar, but you had to disclose it. Writing an entire review with AI? That potentially violated their Code of Ethics.
Why? Because peer review requires human judgment. Understanding context. Spotting subtle flaws. Connecting ideas across different domains.
AI can’t do that. Not yet, anyway.
But these reviewers - who spend their days working on AI - decided to outsource their thinking to the very tools they were supposed to be evaluating.
The Real Problem: Everyone’s Faking It
Here’s what makes this story bigger than just one conference:
The “experts” don’t have this figured out either.
Bharath Hariharan, the senior program chair for ICLR 2026 and a computer science professor at Cornell, admitted this was the first time the conference faced AI-generated content “at scale.” Translation: They didn’t see it coming.
One researcher, Desmond Elliott from the University of Copenhagen, had a PhD student who suspected one of their reviews was AI-generated just by reading it. When Pangram’s analysis confirmed it, Elliott said it was “deeply frustrating” - especially since that AI review gave them a borderline rating that could’ve kept their paper out of the conference.
The reviews had telltale signs:
Hallucinated citations (references that don’t exist)
Vague, generic feedback
Weirdly formal language
Requests for analyses that made no sense
Missing the actual point of the paper
Sound familiar? It should.
Because you’ve probably received AI-generated proposals, marketing emails, or content that had the same problems.
Why This Happened (And Why It Matters To You)
Multiple sources point to the same root cause: Overwhelm.
ICLR 2026 received nearly 20,000 paper submissions - almost double the previous year. Each reviewer was assigned 5 papers to review in 2 weeks.
Keep in mind: peer review is unpaid volunteer work. These people have full-time jobs.
Abhinav Shukla, an applied scientist at Amazon Robotics, put it bluntly: “It was just not going to work with people having a full-time job that’s also in crunch time. I can see why a lot of people would just write completely AI-generated reviews in that case.”
When you’re overwhelmed, AI looks like a magic solution. Just paste in the prompt, copy the output, done.
The AI experts fell into the same trap you might face.
Your Competitive Advantage: Honesty
Here’s what the AI research community is learning the hard way:
Transparency beats expertise.
The small business owner who says “I used ChatGPT to draft this, but I reviewed and edited it personally” is more trustworthy than the consultant who pretends their AI slop is original thinking.
The barbershop that automates appointment reminders (and tells customers about it) is more honest than the marketing agency sending AI-generated proposals without disclosure.
You don’t need a PhD to use AI well. You need to:
Understand what you’re trying to accomplish (the experts skipped this - they just wanted reviews done)
Know when AI is appropriate and when it’s not (peer review requiring expert judgment? Not appropriate)
Review and verify everything (they didn’t check the outputs)
Be honest about what you used (they violated disclosure requirements)
The experts failed on all four counts.
How To Spot AI BS (Because You’ll See It)
You’re going to receive AI-generated content. Proposals from vendors. Marketing from competitors. Content from contractors.
Here are the red flags, based on what researchers noticed in those 15,899 fake reviews:
1. It’s Way Too Verbose
AI loves bullet points. It loves lists. It never met a sentence it couldn’t split into three.
If a proposal or email is full of numbered lists and subsections for a simple question, that’s a flag.
2. It’s Vague and Generic
AI can’t get specific because it doesn’t actually understand your business. Watch for phrases like:
“Leverage synergies”
“Optimize workflows”
“Drive meaningful engagement”
“Utilize best practices”
Translation: “I asked ChatGPT and didn’t bother customizing it.”
3. It Sounds Weirdly Formal
AI writes like it’s giving a TED talk at a corporate retreat. Real humans are more casual.
If someone who normally texts “sounds good” suddenly sends you “I am writing to express my sincere gratitude for your consideration” - that’s AI.
4. The Facts Are Wrong
The ICLR reviews had “hallucinated citations” - references that don’t exist. AI makes stuff up confidently.
If a proposal mentions a case study you can’t find, or cites statistics without sources, verify before trusting.
5. It Misses The Point
This was the researchers’ biggest complaint. An AI review would run 800 words without ever engaging with the core argument of the paper.
If a response to your question is long but doesn’t actually answer what you asked, that’s probably AI.
Try This Today: The Transparency Test
Take one piece of content you created with AI help this week - an email, a social media post, a client proposal.
Ask yourself:
Could I explain every point in this without looking at it?
Did I verify any facts or statistics?
Would I be comfortable telling the recipient I used AI?
If the answer to any of these is “no,” revise it. Add your actual thoughts. Check the facts. Make it yours.
This is what the AI researchers didn’t do. And that’s why they got caught.
The Golden Nugget
Being honest about AI use is your unfair advantage.
The “experts” tried to hide it. They violated policies. They submitted work they didn’t actually review.
You can win simply by:
Using AI for the boring stuff (not the thinking stuff)
Reviewing everything before you send it
Being transparent about what you used
While consultants are pretending their AI outputs are genius insights, you can be the business owner who says: “I used AI to draft this, but here’s my actual thinking...”
Trust beats polish. Every time.
One More Thing
After this scandal broke, ICLR had another problem. On November 27, 2025 (the same day the AI review story broke), someone found a security flaw that exposed reviewer identities. The supposedly anonymous peer review process became completely public.
The schadenfreude from AI skeptics was, according to Plagiarism Today, “immense.”
Two lessons:
Even the experts are making it up as they go. The biggest AI conference in the world didn’t anticipate this problem.
Technology failure compounds human failure. When you cut corners (AI reviews) and your systems fail (security breach), it gets ugly fast.
The Question
If AI researchers can’t figure out when to use their own technology responsibly, what chance do the rest of us have?
Answer: A better chance than you think.
Because you’re not trying to fake expertise. You’re just trying to save time so you can get home for dinner.
That honesty is more valuable than any PhD.
This newsletter is written by a human (me). I used AI to help research the story and verify facts. I wrote every word myself. See? Transparency.
P.S. If you’re getting value from these newsletters, forward this to anyone else who’s confused about AI. The “experts” aren’t ahead of us. We’re all figuring this out together.