A new report from a provider of an AI-powered accounts receivable platform has a perhaps curious message: Be careful about placing too much trust in artificial intelligence.
The vendor, Billtrust, surveyed 500 finance professionals and C-suite leaders, 82% of whom registered concern about AI’s potential for misuse. One key piece of advice offered in the report: Make sure to maintain an effective level of human oversight of AI systems.
Finance leaders’ wariness regarding AI “isn’t irrational technophobia,” Billtrust wrote. “It’s based on witnessing what happens when AI is deployed without proper oversight, transparency or ethical guardrails.”
External criminal behavior provides the clearest example of AI’s potential for misuse, and finance teams are encountering the consequences firsthand.
Almost half (45%) of the survey participants reported AI-generated phishing emails sophisticated enough to fool experienced staff. Almost as many (39%) have encountered AI-created emails that perfectly mimic executive or vendor communication styles.
About a third (31%) said their companies have been targets of AI-generated fake invoices with convincing branding and formatting, while 29% have seen AI voice cloning used to impersonate known contacts.
Incidents like these raise fundamental questions about AI’s trustworthiness in general. But, Billtrust stressed, they don’t argue against AI; rather, they argue for responsible AI deployment featuring human oversight, appropriate safeguards and transparency.
The report noted that AI and humans excel at different things. “AI handles speed, volume and pattern recognition across massive datasets,” Billtrust wrote. “Humans provide contextual judgment, ethical reasoning and the ability to understand nuance. Together they create a more capable system than either could achieve alone.”
However, almost half (46%) of those polled expressed concern that AI-generated fraud could become so realistic that human review alone would no longer be sufficient to detect it.
The message there, according to Billtrust, is not that humans should do more work. Rather, humans need better tools that free them for value-creating activities such as building vendor relationships and optimizing cash flow, while reserving human evaluation for “the genuinely suspicious cases that require contextual understanding and judgment.”
The report noted that current fraud-detection practices in the accounts receivable space all leverage human involvement in some way.
Also, more than a quarter (27%) of the survey respondents said their organizations don’t track suspicious invoice activity at all or aren’t sure of their numbers. “This represents a fundamental trust problem — not with AI specifically, but with any system that operates without transparency or accountability,” the report said.