
CFO

Growing challenge of recognizing AI-generated content heightens risk


Among the quiver-full of risks presented by AI, don’t overlook the fact that employees do not necessarily recognize it when they see it.

In a survey of 1,006 U.S. workers by Resume Now, in which participants were asked to determine which of two messages was written by AI, about half of those polled (48%) wrongly identified the human-written message as the AI-generated one.

Compounding the problem, 74% said they were confident or extremely confident they had made the correct choice.

Both messages used similar professional language, structure and tone, reflecting a typical workplace scenario. The result underscores the difficulty of differentiating between the two even in such a familiar context, Resume Now noted in its survey report.

Indeed, 66% of the surveyed workers acknowledged they have mistaken AI-generated content for human-written work.

The disparities demonstrated in the results leave companies exposed to risk. “We are entering an era where being ‘fooled’ by AI is inevitable,” Resume Now wrote. “This shift suggests that the primary challenge for teams is no longer just about detection, but also about managing the errors and biases that go unnoticed.”

Nearly half of the workforce now encounters AI-generated content daily, while 27% said they see it a few times a week and 15% a few times a month.

At the moment, workers are widely divergent in their assumptions about the contents of communications. While 58% assume messages received at work are written by a human, 42% assume they involve AI in some capacity.

“The baseline for workplace trust is shifting,” the report said. “The ‘human-by-default’ assumption is disappearing, replaced by lingering doubt about intellectual ownership. This suggests that even purely human messages may now be unfairly scrutinized for AI red flags.”

Among the negative aspects of this environment, the psychological impact of being tricked by a machine shouldn’t be underestimated. For 65% of workers, realizing their intuition has failed to differentiate between human and machine output would have an immediate effect on their self-assurance in the office. One in five said it would “significantly” reduce their confidence.

The lack of transparency regarding AI use can also affect office culture, the report noted. More than half (56%) of workers said their trust in a coworker would decline if they learned that content they believed was written by that person was actually AI-generated.

“When AI transparency is discovered rather than disclosed, it creates a sense of intellectual dishonesty,” Resume Now wrote. “This suggests that the risk isn’t the AI use itself, but a lack of policies governing its use.”
