Global Finance: Can you briefly describe what your model does?
Joanne Horton: Yes. We’ve got what we think is a rather exciting model, which we describe in a working paper, that helps forecast in advance the likelihood that a firm will go on to commit accounting fraud.
What’s the likelihood that fraud will take place in the future? There’s plenty of motivation for asking, because accounting fraud cases are relatively few, but each one is very expensive. In a recent interview, the US Securities and Exchange Commission’s (SEC) enforcement director said they had a record $600 million in penalties in 2024 for just 70 cases. So clearly the penalties are not doing their job as a deterrent, and we need to find something else that can hopefully prevent fraud from happening.
But most of the research into accounting fraud, in fact all of it, has focused on detection rather than prevention. We wanted to examine prevention. So, what can we do before the fraud occurs? Can the board of directors, the auditors, or other gatekeepers do something? Identifying the year in which fraud occurred in the accounts is what all the current models do.
We’re trying to look at data well before the fraud took place and say, “Would we have red-flagged this firm as likely to be committing fraud in the future?” Our model will not tell you it will happen; it simply says there’s a high risk of it happening in the future, which allows us, hopefully, to take corrective action so the fraud doesn’t take place. We don’t include the fraud at all in our model, so we can accurately distinguish firms likely to commit fraud from those unlikely to do so in 87.68% of cases on average: 90.58% one year before the fraud takes place, 83% two years before, and 75% three years before.
GF: What does your model tell you about how accounting fraud happens?
Horton: We know the antecedents to fraud: It is never a cliff edge but always a slippery slope. You start off small, and then it starts escalating. If we think about it, a manager—if facing pressure to beat an analyst forecast, or beat last year’s earnings, or wanting a particular bonus—has enough flexibility in the accounting rules to manage those numbers while staying within the rules. So, they change inventory methodology, or they change their assumptions on revenue recognition, and they make it such that they beat these forecasts.
But eventually, they’ll hit the limit, and then the only thing they can do is either come clean or go on to egregious misreporting. Now, we know from the academic literature that three years before the fraud, they tend to beat earnings benchmarks. And there’s a recent paper that says you’re more likely to round up your earnings-per-share number about five years before. However, the problem with this research is that they already know the fraud has taken place.
So, how are we going to track the slippery slope? Ultimately, what the managers are doing is increasing their human intervention in the accounts—legitimately, within the rules, but then that human intervention has to keep escalating because with accruals reversals, you’ve got to cover the reversal, and then you’ve got to increase the amount to beat any forecast. So, human intervention in the accounts escalates.
GF: So how do you capture that human intervention—that higher risk of fraud?
Horton: We use Benford’s law, which is a mathematical frequency model. And what we know from prior literature is that the data in the financial statements and notes will follow Benford’s law on average—if there is no human intervention in the accounts. Now, some human intervention may be legitimate, so it will change the deviation, and some may be illegitimate. So, we have to infer whether the deviations are legitimate or not, and we do that by seeing whether the deviations increase and escalate over time. That shows the slippery slope.
Even small but consecutive increases in deviation tell us that managers are using human intervention to cover up earlier interventions, and that the accounts are drifting further from what the firm should look like.
The key benefit of Benford’s law is that the kind of firm makes absolutely no difference: public or private, whatever accounting policies it follows, whatever currency it operates in, whether it’s loss-making, a growth company, highly leveraged or not leveraged at all. That makes the model universal: you can apply it to any company, country, or industry. Once we’ve got a probability from the model, we use that to determine a red flag. And we require a red flag twice, so we’re not picking up something that’s just random.
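The interview doesn’t spell out which deviation statistic the researchers use, but the basic Benford check Horton describes can be sketched simply. The snippet below compares the observed first-digit frequencies of a set of reported figures against Benford’s expected frequencies; the choice of mean absolute deviation as the distance measure is an assumption for illustration, not the paper’s method.

```python
import math
from collections import Counter

# Benford's law: the expected frequency of leading digit d is log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number (sign ignored)."""
    s = f"{abs(x):.15e}"  # scientific notation puts the leading digit first
    return int(s[0])

def benford_deviation(values) -> float:
    """Mean absolute deviation of observed first-digit frequencies from
    Benford's expected frequencies (higher = more deviation)."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(abs(counts.get(d, 0) / n - BENFORD[d]) for d in range(1, 10)) / 9
```

On Horton’s account, no single quarter’s deviation is the signal; the model tracks whether this statistic escalates from quarter to quarter.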
GF: What does it take to get that first red flag?
Horton: If they say they made a legitimate change in depreciation, you’ll see an increase in human intervention, but then the deviation shouldn’t escalate.
The model learns from prior frauds what it takes: the point at which, in earlier cases, deviations were associated with a higher likelihood of fraud. The model produces a hazard ratio, which tells us the likelihood, and then we compare that to what we’d expect in the overall population. If it’s higher than the population expectation, the firm is red-flagged.
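The estimation details aren’t given in the interview, but the flagging rule as described, escalating deviations combined with a hazard above the population’s expectation, can be illustrated with a toy function. The function name, parameters, and threshold logic here are hypothetical, not the paper’s specification.

```python
def is_red_flagged(firm_hazard: float, population_hazard: float,
                   deviations: list[float], k: int = 2) -> bool:
    """Toy red-flag rule: flag a firm when its model-implied fraud hazard
    exceeds the population expectation AND its Benford deviation has risen
    for k consecutive quarters. All numbers and thresholds hypothetical."""
    if len(deviations) < k + 1:
        return False  # not enough quarterly history to judge escalation
    recent = deviations[-(k + 1):]
    escalating = all(b > a for a, b in zip(recent, recent[1:]))
    return escalating and firm_hazard > population_hazard
```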
GF: And when does the company get a second red flag?
Horton: So what we’ve actually found out, which is interesting, is that it’s very rare—almost impossible—to stop being red-flagged. The model keeps red-flagging you, and then you either go bankrupt or commit fraud. What we haven’t been able to observe is a firm with a red flag that then suddenly stops.
In firms that commit fraud, there’s a culture where you can be overly optimistic about things and rationalize what you’ve done prior. This is why auditors are hopeless at capturing and identifying fraud: because it’s so incremental. The problem for auditors is that if they agree to one change, it’s quite difficult not to agree to a second change, because you’ve rationalized the agreement on the first change.
GF: How do you know to look for fraud in M&A?
Horton: There’s fraud in a lot of places; and the more opaque, the more fraud. You can hide it more easily in M&A, but it’s more about due diligence. So, you are acquiring another company; and we all know that if it’s a hostile takeover, the company is going to make itself look very expensive. So, more human intervention is needed in accounting. And you see that happening over time. And even if it’s not hostile, you’re going to make yourself look good for a takeover.
The other thing we notice is where most of the fraud takes place. It’s not in the parent company, it’s in the subsidiaries. They’re not under the purview of the top brass. They may have different auditors. The parent may be putting a lot of requirements on their subsidiaries to provide a huge return, and if they can’t do it, how do you alleviate the pressure? You manage your numbers.
GF: Do you have an example?
Horton: Here’s one. HP was under pressure to achieve high revenue targets. Their initial response was to increase their human intervention in 2008: They changed their inventory valuation assumption, their revenue recognition assumptions, and a few other things. But in the end, they couldn’t maintain that. So, they ended up, in 2015 and 2016, creating fictitious revenues, valuing the inventory upward, channel stuffing, and many other things. The SEC announced in 2020 that HP had committed fraud. Our model identified the fraud, and we red-flagged HP in the fourth quarter of 2010. So, we already knew at the end of 2010 that they were likely to commit fraud.
A more recent one is [fitness-beverage maker] Celsius. They committed accounting fraud in the second and third quarters of 2021; it was announced in 2025. We red-flagged it in the fourth quarter of 2019.
GF: How are you making your model available?
Horton: People have offered us quite a lot of money for the model. But being an academic, I think research is a social good; and therefore, we would just like to build up the model so it’s global and then provide the output to anybody who wants it. So, we would like to allow anyone to download our red flags. We will also publish it in detail so our model will be perfectly replicable.
The other thing we’ve noticed in our analysis is that identifying escalating human intervention also exponentially improves bankruptcy risk models, because what do you do before you go bankrupt? You try to delay it, and you will do that through the accounting. So, we think this human intervention measure should be utilized in IPOs and M&As when you’re doing due diligence—all that sort of thing. In that respect, I want it to be a public good.
GF: Would it be possible for fraudsters to use AI to fly under the radar of Benford’s law?
Horton: That is very difficult, because human intervention is human intervention in whatever form it takes. We actually tried to use AI to create a set of accounts that had a huge level of human intervention but followed Benford’s law, and it was practically impossible. Because the trouble is, if you change a few numbers in revenue, it’s going to change a lot of numbers in accounting. It’s going to change your equity, your retained earnings, your profits, your earnings per share, your EBIT, your EBITDA—all these numbers would change. And it’s incredibly difficult. I’m sure someone could spend a lot of time trying to do it, but doing it quarter on quarter on quarter, we believe, is incredibly difficult, because we’ve tried it. But nothing’s impossible.
GF: Who do you foresee using the model besides academics?
Horton: I think auditors, for sure, because they want to know their audit risk, especially if you are taking over from a previous auditor.
I think board members, because it’s their risk as well. I think due-diligence teams in IPOs and M&A, because you’ll notice that a lot of firms that commit accounting fraud do so around an IPO. And short sellers. Regulators could use it, too.
GF: Will there be some technology available using your model?
Horton: I imagine somebody will be capitalizing on that in the future. But we’ve just got money for a postdoc to put this into AI and see what other things we can do. We have used all listed US firms from 1962 till 2020, because that’s when we wrote the paper. We use quarterly data, which we download from Compustat, and we include any number in the notes as long as it’s not a repetition of another number.
Since Benford’s law is indifferent about currency or anything else, we’re going to build the model globally: put India in there, China, the UK, Europe, etc. We’re hoping this might actually improve the accuracy because it’ll have more data to learn. But to date, it’s all listed US firms.
GF: What specific changes do you see that might suggest a company is on the slippery slope?
Horton: We look at all types of misreporting. We also look at fraudulent securities class actions, and at firms that have made restatements: nobody said those were fraud, but nobody said they weren’t, either. We can forecast restatements with quite a high level of accuracy.
GF: Are regulators doing anything to anticipate fraud, or are their efforts all retrospective?
Horton: It’s very difficult because the regulator is going in because something has happened. The Public Company Accounting Oversight Board [PCAOB] looks at companies’ accounts and audit papers and tries to make sure that the accounting is being done correctly. Here in the UK, the Financial Reporting Council looked into audit papers of the FTSE 100 and basically gave them a good health score. So, I think regulators have been trying to do it, but I don’t think they’re as good as they should be.
I think regulation should be about prevention, because the people who win are the people who commit the fraud, and the people who lose—because who pays these fines?—are the shareholders. They price it in. You would hope the PCAOB’s reviews of audit reports would catch this, but you still see failures.
GF: Why is so much fraud connected with IPOs? Because they don’t do enough due diligence?
Horton: Firms doing an IPO tend to be big, and they’ll follow International Financial Reporting Standards or US GAAP. Even while still private, they will already be doing so, because they’re larger firms.
So, some of it is because they’re overly optimistic. If you’re overly optimistic, you’ll make more changes because you think it’s all going to happen. You are going to make those sales, right? You’ve got to look like you’ve got a future. And then, of course, they have to maintain it, even if things don’t turn out as optimistically as they thought.
GF: If your model becomes widely used, could its presence deter people or companies from committing fraud?
Horton: I hope it would. However, let’s say you’re the CEO, and you think, “Well, let’s see if I can just get away with it.” You’re going to do a cost-benefit analysis of just keeping going. Then I hope the auditors are looking at it and asking questions. Our model might improve auditing since it can provide a list of X red flags across all listed companies in the US.
Interestingly, we also find problems like a lack of an internal control system, which is also a prelude to human intervention. If you’ve been found to have poor internal controls, you’re highly likely to have this increasing human intervention.