Changes to AI-Generated Content Detection in Talent Insights

Written by Chloe Buswell
Updated today

What's changing

Starting July 2026, we're removing the AI-Generated Content (AGC) flag from individual Talent Insights reports.

You won't see AI flags on individual candidate reports anymore.


Why we're making this change

Two years ago, a candidate using ChatGPT to write their interview responses felt like cheating.

Today, the same candidate might be using Grammarly to tidy their grammar, Microsoft Copilot to organise their thoughts, or ChatGPT to help structure a tricky answer. AI tools are becoming part of how people write, work, and communicate.

Here's what led us to this decision:

Detection isn't a long-term strategy

Our detection model has recently been updated and performs at industry-leading accuracy. But we also know this won't last forever. LLMs are evolving fast, and no tool can reliably tell the difference between AI-assisted and human-written content across every model and use case. That gap will only grow. Detection works well for tracking trends at the aggregate level, but it's not reliable enough to base individual hiring decisions on.

There's a tension we can't ignore

Sapia.ai uses AI to assess candidates. It's in our name. Penalising candidates for using AI to prepare their responses doesn't feel like the right thing to do.

Fairness matters

Flagging individuals based on a signal that isn't always accurate risks unfair outcomes. Fairness has always been core to who we are. This decision reflects that.

What's staying in place

We're not removing all safeguards. Here's what remains:

Aggregate reporting

You'll still see AI usage trends across your candidate pool in your Discover Insights dashboard. This gives you visibility into patterns without labelling individuals.


Authenticity guidance

Candidates see clear messaging at the start of their interview, encouraging them to respond in their own words. This sets expectations and creates a psychological contract around honest self-expression.


Copy-paste blocking

The chat interface blocks candidates from copying and pasting pre-written content into the interview.

What this means for your hiring process

For most customers, nothing changes in how you use Sapia.ai day to day. Candidates are still assessed on their skills and behaviours. You still get Talent Insights reports with scores, rankings, and recommendations. You still see aggregate data on AI usage.

The difference is that individual candidates won't be flagged for suspected AI-assisted responses.

If you've built processes or policies around the AGC flag, your CS partner can help you think through the transition.


Frequently asked questions

Why are you removing the flag?

A few reasons. The data shows the risk is low: less than 1% of candidates use AI across all five responses. Detection is strong today, but it won't stay ahead of LLMs forever. And penalising candidates for using AI when we use AI to assess them doesn't hold up. Flagging individuals based on a signal that isn't always accurate risks unfair outcomes, and fairness has always been core to who we are.

Are you still detecting AI usage?

Yes. We still run detection models internally. The difference is that we're reporting at the aggregate level rather than flagging individuals. You'll see AI usage trends in your Discover Insights dashboard.

How accurate is the detection you're using for aggregate reporting?

Our detection model has recently been updated and performs at industry-leading accuracy: it correctly identifies AI-generated content around 97% of the time, with less than 2% of human-written responses incorrectly flagged. We'll continue to update and monitor accuracy over time.

At the aggregate level, we're tracking trends rather than making decisions about individual candidates. That's an important distinction. We can't predict how accuracy will hold up as LLMs continue to evolve, which is exactly why we're not using it for individual decisions, where a false positive has real consequences for a candidate.
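To see why that distinction matters, it helps to work through the base-rate arithmetic implied by the figures above (roughly 97% detection, under 2% false positives, and fewer than 1% of candidates using AI across all responses). The sketch below is purely illustrative, not a description of our production model:

```python
# Illustrative base-rate calculation: even an accurate detector
# produces many false positives when genuine AI use is rare.

def flag_precision(prevalence, sensitivity, false_positive_rate, pool=10_000):
    """Share of flagged candidates who actually used AI."""
    ai_users = pool * prevalence
    humans = pool - ai_users
    true_flags = ai_users * sensitivity          # AI users correctly flagged
    false_flags = humans * false_positive_rate   # human writers flagged in error
    return true_flags / (true_flags + false_flags)

# Figures from this article: ~1% prevalence, ~97% detection, ~2% false positives.
print(round(flag_precision(0.01, 0.97, 0.02), 2))  # ~0.33
```

At 1% prevalence, roughly two out of every three individual flags would land on candidates who wrote their own responses. Aggregate trends average these errors out across the pool; an individual flag does not.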

If you'd like more details on detection methodology, we can share the full technical report under NDA. Reach out to your CS partner to arrange access.

What if I have concerns about a specific candidate's responses?

If a response raises questions, you can explore it in later stages of your process. Follow-up interviews are a natural place to probe specific examples and test depth of experience. Talent Insights Pro also provides customised follow-up questions to help you dig deeper.

Are candidates still told to write in their own words?

Yes. Candidates see clear guidance at the start of every interview, encouraging them to respond authentically. This hasn't changed.

What if I've built processes around the flag?

Reach out to your CS partner. They can help you think through the transition and make sure you're set up well.

Will I still see AI usage data?

Yes. Aggregate reporting in your Discover Insights dashboard shows AI usage trends across your candidate pool. You get visibility without the risks of labelling individuals.
