What the 12% did before anyone knew they were doing anything.
They bought before they were ready. They announced before they had prepared. They handed a tool to a team that did not understand why it was there. And when the results did not arrive on the timeline the business case promised, they had no way of knowing whether to fix it, adjust it, or stop it — because nobody had defined what success looked like before the money left the building.
The 12% who are succeeding did not do anything extraordinary. They did the ordinary things in the right order.
This is that order.
Your leadership team needs to have completed the honest conversation that Part 2 described — the ninety-minute session working through the five readiness dimensions. If you have not done that, do it first. The 90-day sequence below is built around your gap map. Without knowing where your gaps are, you are following a plan without knowing what problem it is solving.
Phase 1 is the first 30 days. No tool. No vendor. No budget beyond this phase. Four things written down.
The first is the problem statement. Not a vision. Not a strategy. One sentence — the kind a non-technical board member would understand in three seconds.
That sentence is worth more than any technology demonstration you will sit through in the next 90 days.
It tells you what you are buying, what success looks like, and what the money is for.
If your leadership team cannot agree on that sentence in the first 30 days, that is important information. It means the organisation is not yet ready to commit capital to an AI initiative — and finding that out now costs you a conversation. Finding it out after the money is spent costs you significantly more.
The second is the monthly cost of that problem. Most organisations have never calculated this. They know the process is slow or expensive or unreliable — but they have never sat down and worked out what that actually costs the business per month.
Do it now. Because without this number, you have no way of knowing whether the AI investment is worth making. And you have no way of knowing, twelve months from now, whether it worked.
This does not require a finance team or a spreadsheet model. It requires one hour, the right people in the room, and a willingness to be honest about what the current situation is actually costing — in staff time, in delays, in decisions made late.
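As an illustration of what that hour's arithmetic can look like, here is a minimal sketch in Python. Every figure is a hypothetical placeholder, not a benchmark; substitute your own.

```python
# Back-of-envelope: what does the problem cost per month?
# All figures below are illustrative placeholders.

hours_lost_per_person_per_week = 5        # staff time spent working around the problem
people_affected = 12
loaded_hourly_cost = 55.0                 # salary plus overheads, in your currency
weeks_per_month = 4.33

staff_time_cost = (hours_lost_per_person_per_week
                   * people_affected
                   * loaded_hourly_cost
                   * weeks_per_month)

delayed_decisions_per_month = 3
cost_per_late_decision = 2_000.0          # missed revenue, rework, penalties

monthly_cost = staff_time_cost + delayed_decisions_per_month * cost_per_late_decision
print(f"Estimated monthly cost of the problem: {monthly_cost:,.0f}")
# With these placeholders: roughly 20,000 a month, the baseline
# everything later in the 90 days is measured against.
```

The point is not precision. A defensible estimate, written down, beats no number at all.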
The third is a single named owner. Not of the technology. Of the outcome.
This is the person whose job it is to make sure the business result improves — not to make sure the tool gets installed, not to make sure the project runs on time, but to make sure the actual problem gets solved.
In most organisations this person does not currently exist for AI initiatives. There is a person who owns the technology. There is a person who approved the budget. Nobody has formally accepted accountability for whether the business result improves.
Name that person in the first 30 days. Make sure they have the authority to change the processes, the team behaviours, and the workflows that will need to change. Without that authority, accountability is a label, not a reality.
The fourth is an honest view of the information you hold. AI works by finding patterns in your existing information. If that information is incomplete, inconsistent, or out of date, the AI will find patterns in the problems, not in the truth.
This is not a technical question. It is a business question: can you trust the information your business runs on?
Ask it plainly. If you pulled the records from the system this AI will use — the customer files, the job histories, the sales logs — would the information in there accurately reflect what is actually happening in your business? Or has it accumulated errors, gaps, and inconsistencies over the years that nobody has formally addressed?
You do not need to fix everything in 30 days. You need to know what the state of play is — so you can make an informed decision about what needs to be addressed before you build on top of it.
Phase 2 is what sits between thinking and technology: preparation. This is where the 12% slow down — and where everyone else speeds up.
That preparation means cleaning up records, standardising how information gets entered, and closing the gaps between systems that should be talking to each other.
This is not glamorous work. It does not make it onto a press release. It is also the kind of work that, according to the research on companies generating real returns from AI, was done first by the organisations now succeeding — and skipped by most of the ones that are not.
Some organisations can clean up the relevant records in a few weeks. Others are carrying years of accumulated problems that take several months to address properly. Be honest about which category you are in, and plan accordingly.
Design the adoption before you select the tool. This is the instruction most organisations reverse — and the reversal is expensive.
The team's experience of a new tool in the first two weeks largely determines whether they continue using it — or quietly revert to the old way.
The first two weeks are shaped entirely by how well the adoption was designed — not by how good the technology is.
Design it first. What will change for the people who will use this tool day to day? What will be easier? What will feel harder, at least initially? Who needs to understand this before it arrives — not just that it is coming, but why it is coming and what it means for their work?
Which manager in your business is going to take personal responsibility for making sure their team actually uses it — not as a general expectation, but as a named accountability with a real target? And critically: when will the old way of doing things formally stop being available? Because if the answer is never, adoption will remain optional. Optional adoption produces optional results.
The frontline team are important. The managers between you and the frontline are more important — and they are the group most consistently ignored in AI adoption planning.
A middle manager uncertain about what AI means for their team — and for their own role — will not stand in the way of the initiative. They will do something quieter and more effective: they will allow the old process to keep running alongside the new one. They will not push their team to adopt. They will not escalate the problems that slow adoption down.
Six months later the initiative will appear to be running while delivering a fraction of what it was supposed to deliver.
Have the conversation early. What is changing? What is not changing? What does this mean for their team's targets, their own role, their day-to-day work?
The managers who understand the answer to those questions become the initiative's strongest internal advocates. The ones who don't become its most effective silent obstacles.
Decide now what success looks like in numbers.
The baseline. The current cost, time, or error rate you're trying to improve. You should have this from Phase 1.
The target. The specific improvement you expect to see, and the timeframe.
The breakeven. The minimum level of adoption required for the investment to pay back. If only half your team uses the tool — does it still make financial sense? The sketch below walks through that arithmetic.
The review trigger. The point at which you formally review whether to continue, adjust, or stop — rather than allowing the initiative to drift indefinitely.
These are not complicated decisions. They are uncomfortable ones. Make them now — with the numbers visible, before the technology is in the building and the sunk cost makes honest assessment harder.
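To make the breakeven question concrete, here is a minimal sketch, again with hypothetical placeholder figures rather than benchmarks:

```python
# Breakeven check: what adoption rate does payback require?
# All figures below are illustrative placeholders.

monthly_cost_of_problem = 20_000.0    # your Phase 1 baseline
improvement_at_full_adoption = 0.30   # assume the tool removes 30% of that cost if everyone uses it
saving_at_full_adoption = monthly_cost_of_problem * improvement_at_full_adoption

total_investment = 60_000.0           # licences, implementation, training
payback_target_months = 12

required_monthly_saving = total_investment / payback_target_months
breakeven_adoption = required_monthly_saving / saving_at_full_adoption

print(f"Breakeven adoption: {breakeven_adoption:.0%}")
# With these placeholders: about 83%. If only half the team adopts,
# this investment never pays back inside the year.
```

If the breakeven comes out above 100%, the investment cannot pay back on those assumptions no matter how well adoption goes; better to learn that now, on paper.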
Only now, in Phase 3, does the technology arrive. It comes last not because it is least important, but because everything that determines whether it delivers a return was decided before it got here.
Not the most impressive tool. Not the one with the best demonstration. Choose the one that most directly addresses the specific business problem you wrote down in your one sentence — at a cost that makes financial sense against the baseline you established.
By this point your team knows it is coming and why. Your information is clean enough to build on. The person responsible for the outcome has the authority to make it work. The adoption plan is ready. The measurement framework is in place.
The technology is the last piece, not the first. In the organisations that are succeeding, it was always the last piece.
Once it is live, measure. Not impressions. Not feedback. Numbers.
Is the time going down? Is the cost going down? Is adoption above the breakeven threshold you defined? Is the person responsible for the outcome seeing what they expected — or are there early signals that something needs to adjust?
The first 30 days of operation will tell you more about whether this initiative will deliver a return than any amount of planning could. The data from this period is the most valuable information in the entire 90-day process. Use it.
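As a hypothetical sketch of what "numbers, not impressions" looks like at the 30-day review, using the framework from Phase 2 (all values are placeholders):

```python
# Hypothetical 30-day operating review against the Phase 2 framework.

baseline_hours_per_job = 6.0     # Phase 1 baseline
target_hours_per_job = 4.0       # Phase 2 target
actual_hours_per_job = 5.1       # measured, not estimated

breakeven_adoption = 0.83        # from the breakeven sketch above
actual_adoption = 0.70           # share of the team using the tool as designed

print("metric improving:", actual_hours_per_job < baseline_hours_per_job)    # True
print("target reached:", actual_hours_per_job <= target_hours_per_job)       # False: not yet
print("adoption above breakeven:", actual_adoption >= breakeven_adoption)    # False: investigate now
```

Three answers, and none of them a matter of opinion.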
At the end of 90 days, one of three things will be true.
The first: early results are on track. Adoption is above breakeven. The baseline metric is moving in the right direction.
Continue, with a clear plan for the next 90 days and a target for when the investment is expected to pay back.
The second: results are below target — but the gap analysis is clear. The reason is visible and fixable.
Adjust: make a specific change, with a specific timeline and a specific person responsible for the adjustment.
The third: results are below target — and the gap analysis is not clear. You don't know why it's underperforming.
Stop. Understand what happened. Apply that understanding to the next attempt. Stopping is not failure. Continuing without understanding why results are not arriving is.
The organisations in the 12% treat every AI initiative as a financial bet with defined return expectations — not as a transformation programme the business is obligated to see through. They are prepared to stop. That willingness is part of what makes them successful — because it means every initiative they continue is one they have actively chosen to continue based on evidence.
The gap between the 12% and everyone else is measured in the percentage of AI investment that delivers no return — across 2,400 enterprise initiatives studied globally.
They asked the right questions before they spent the money.
They fixed the foundation before they built on it.
They named someone responsible for the outcome before the technology arrived.
They designed the adoption before they selected the tool.
They defined success in numbers before they could be tempted to define it in whatever way made the results look acceptable.
None of that is complicated. All of it is uncomfortable.
The discomfort is not a sign that you are doing it wrong. It is a sign that you are doing it right.
End of Series
Start with the leadership conversation, with the readiness questions from Part 2 on the table and the numbers visible. If you would like support running that session — or want to talk through what the right first move looks like for your specific situation — the team at Fuzionest works with mid-size organisations on exactly this.
The Honest AI Series