
The CFO automation imperative: Balancing tech and human judgment


The following is a guest post from Diana Mugambi, senior manager – FP&A operations at GE Vernova. Opinions are the author’s own.

Over the past year, nearly every finance conference and webinar I’ve reviewed has featured the same slide decks: sessions on artificial intelligence and automation for finance teams, emphasizing the promise of increased productivity and the urgent steps required to achieve it. CFOs are facing unyielding pressure to embrace automation or risk obsolescence.

For the modern finance leader, the challenge is knowing what to leave out of the automation loop. Those choices rarely stay abstract. They surface downstream during a forecasting refresh, a financial reporting review, an internal controls walkthrough or testing by the assurance team, where no one can point to a single decision owner.

I support automation, but not in the way it’s currently marketed. What I am wary of is how human accountability for financial decisions is being subtly eroded, with outputs accepted automatically because a system produced them.

When speed quietly replaces context

Many AI product pitches and productivity discussions frame human judgment in financial decisions as a flaw: biased, inconsistent and slow, a defect that must, of course, be engineered away. AI is objective, immune to fatigue, never takes a vacation day or calls in sick, and can be scaled at speed. The framing is compelling. It is also incomplete.

Judgment doesn’t disappear when a process is automated. Instead, it shifts from being exercised by a person to being embedded within the program. Design decisions may be made once, but the model applies those judgments daily to produce its outputs, and they become far harder to challenge once they have been embedded for months or even years. In practice, financial decisions such as reserves, impairments and revenue recognition are rarely binary, and the human element relies on contextual evaluation of the inputs, not just computational accuracy.
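
To make that shift concrete, consider a minimal sketch (illustrative only, not drawn from any specific system; the 2% reserve rate is a hypothetical design-time assumption) of how a judgment made once at build time becomes a rule the pipeline applies at every close:

    # Illustrative sketch only: a design-time judgment (the 2% reserve
    # rate, a hypothetical assumption) frozen into an automated pipeline
    # and reapplied every close without being re-exercised.

    RESERVE_RATE = 0.02  # judgment made once at build time

    def daily_bad_debt_reserve(receivables: float) -> float:
        """Deterministic output from a judgment no one revisits at run time."""
        return receivables * RESERVE_RATE

    # Months later, this rate still drives every close, whether or not
    # the credit environment that justified it still holds.
    print(daily_bad_debt_reserve(4_800_000.0))  # 96000.0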

When automation treats these decisions as deterministic, it doesn’t remove the risk; it simply concentrates it. We are locking judgment into AI-driven processes, often without the same level of scrutiny applied to human decisions. Human risk is visible, showing up in management reviews and audit findings. Automated errors, on the other hand, don’t tend to announce themselves at all, and they have the potential to be far more dangerous because of scale.

If the AI applies a flawed assumption, it does so consistently, and the consequences surface later and at scale. Ultimately, the CFO’s signature doesn’t come with an AI or software license disclaimer: if the algorithm hallucinates a forecast or scales a flawed assumption, fiduciary responsibility for the integrity of the financial statements remains human. You can’t delegate your legal responsibility to a vendor’s black-box algorithm.

Is there a downside?

One unintended consequence of aggressive automation that I didn’t fully appreciate early on is the erosion of financial judgment over time, and what that means for developing finance talent. Judgment in finance is not intuitive. It’s built through exposure to repeated cycles of ambiguity, errors and the consequences of wrong decisions.

Earlier in my career, those moments were unavoidable, especially during close and budgeting cycles when forecasts didn’t reconcile cleanly. Estimates were debated. Exceptions triggered uncomfortable conversations during reviews. That friction, while inefficient, was formative. It taught teams to recognize when numbers technically worked but conceptually didn’t make sense.

As more validation moves inside an automated loop with built-in logic, fewer assumptions are questioned. That friction disappears, and this institutional memory is no longer exercised, transferred or challenged. Forecasts converge too neatly, exceptions are filtered out before they become questions, and assumptions go unchallenged because the model has cleared them.

Over time, the finance function becomes a highly efficient machine, excellent at processing data but historically shallow and less capable of recognizing when results don’t align with reality. The cost of that atrophy rarely shows up during steady-state operations. It surfaces during disruptions, crises, market shocks and restructurings, when models trained on stable patterns encounter conditions they were never designed to handle and decisions must be defended to the board, auditors, regulators or markets. No one will ask the algorithm to explain itself. The responsibility, and the judgment required over financial results, remains human.

Defining the hard lines

The debate we should be having as we put together our finance automation strategies isn’t whether a process can be automated, but whether failure can still be detected and challenged. Not all finance processes deserve the same treatment: low-impact, repeatable processes can benefit enormously from the productivity of AI.

Where the impact is material, however, and where we typically lean heavily on human judgment, caution applies. That doesn’t mean AI has no role, but it should be used to identify risks and exceptions, simulate scenarios and strengthen judgment. It’s not about resisting innovation. It’s about preserving the credibility of the finance function and ensuring explicit ownership of decision-making. One way to hold that line is sketched below.
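
As a minimal sketch of that principle (illustrative only; the materiality threshold and field names are hypothetical assumptions, not a real system), automation can clear low-impact outputs while routing material deviations to a named human owner:

    # Illustrative sketch, not a production system: the model surfaces
    # exceptions; a named human owner clears them. The threshold and
    # field names are hypothetical assumptions.

    MATERIALITY_THRESHOLD = 0.05  # hypothetical: a 5% swing triggers review

    def route_forecast(line_item, model_value, prior_value, review_queue):
        """Auto-accept low-impact output; escalate material deviations."""
        deviation = abs(model_value - prior_value) / max(abs(prior_value), 1e-9)
        if deviation > MATERIALITY_THRESHOLD:
            review_queue.append({
                "item": line_item,
                "model_value": model_value,
                "prior_value": prior_value,
                "decision_owner": None,  # must be assigned before posting
            })
            return prior_value  # hold the last approved figure pending sign-off
        return model_value  # low-impact, repeatable case: automation proceeds

    queue = []
    posted = route_forecast("warranty_reserve", 1_250_000.0, 1_000_000.0, queue)
    # The 25% swing lands in the queue with an explicit owner field, so a
    # controls walkthrough can point to who cleared the judgment.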

When slowing down feels risky

In practice, restraint is harder than acceleration. I have seen automation decisions driven less by conviction and more by momentum: the automation roadmap is already approved, consultants arrive with polished decks promising transformation, vendors showcase only the happily-ever-after stories, and industry peers post their gains. Against this backdrop, pausing feels almost irresponsible, especially when the CFO is under pressure to justify headcount reductions and prove a return on these massive tech investments.

The CFO’s role is to ensure that automation enhances the institution’s intelligence rather than quietly erasing it. Execution can and should be automated, but understanding why the numbers behave the way they do cannot be. When the environment shifts, institutions won’t fail from a lack of data but from a lack of the human memory required to interpret it. We have to be intentional about curating the human expertise required to oversee the model’s inputs and outputs.

Execution can be automated. Explanations cannot.
