Most AI failures are not sudden. They are slow accumulations of small structural problems that nobody names until the cost becomes impossible to ignore.

A compliance incident. A brand consistency audit that turns up uncomfortable findings. A budget review where the question “what have we actually got back from this?” does not have a clean answer.

S&P Global’s 2025 survey found that 42% of companies scrapped most of their AI initiatives, up sharply from just 17% the year before.

Most of those were not failed experiments. They were systems that looked functional for months before someone noticed they were not delivering anything that compounded.

The signals are usually there well before the reckoning. They just require the right questions.

Here are five to ask your team this week.

1. Is your review cycle getting longer?

When an organisation adopts AI for content production, the expectation is that everything gets faster. Output volume increases. Time to market shortens. Teams do more with less.

What often happens instead: output volume increases, but review time increases with it. More content means more to check. Compliance teams that were already at capacity are now reviewing three times as much. Senior marketers who were supposed to be freed up for strategy are spending their afternoons correcting AI drafts.

If your approval cycle is taking longer now than it did before AI adoption, the system is not failing at the output stage. It is failing at the architecture stage.

You have accelerated production without redesigning the workflow that production feeds into. The bottleneck was always there. AI made it visible.
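The arithmetic behind this is worth making explicit. A back-of-envelope sketch in Python, using purely illustrative numbers (the capacity and inflow figures are assumptions, not data from the survey cited above):

```python
# Acceleration without redesign does not remove the bottleneck;
# it just grows the queue in front of it. All numbers are illustrative.

review_capacity_per_week = 10   # items the compliance team can clear weekly
inflow_before_ai = 8            # drafts arriving per week before adoption
inflow_after_ai = 24            # three times the volume after adoption

backlog_growth_before = max(0, inflow_before_ai - review_capacity_per_week)
backlog_growth_after = max(0, inflow_after_ai - review_capacity_per_week)

print(backlog_growth_before)  # queue was stable before
print(backlog_growth_after)   # queue now grows every single week
```

The point of the sketch: once inflow exceeds review capacity, the backlog grows without bound, so approval cycles lengthen no matter how fast drafting gets.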

2. Does every campaign start from zero?

Ask your team how much of the approved language, validated claims, and brand standards from the last campaign were automatically available at the start of this one.

If the answer involves searching through previous emails, finding the right version of a brief document, or relying on whoever happened to work on that campaign to remember what was decided — your system has no memory.

Expecting a pre-trained model to understand your proprietary processes or accumulated standards without deliberately encoding them is one of the most common and costly implementation mistakes.

The model does not know what good looks like for your organisation. Unless you have built something that captures and carries forward every approval decision, every correction, every compliance resolution, you are starting each cycle with a blank sheet.

That is not AI-powered marketing operations. It is AI-powered drafting with traditional marketing operations wrapped around it.
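The "memory" described above can start as something structurally simple: a store of approval decisions that is queried before drafting begins. A minimal, hypothetical sketch in Python (the names `ApprovedClaim` and `KnowledgeBank` are illustrative, not a reference to any specific product):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ApprovedClaim:
    """A piece of language that survived review, with its provenance."""
    text: str
    approved_by: str
    campaign: str

@dataclass
class KnowledgeBank:
    """Carries approved decisions forward so no campaign starts from zero."""
    claims: list = field(default_factory=list)

    def record(self, claim: ApprovedClaim) -> None:
        self.claims.append(claim)

    def brief_for(self, new_campaign: str) -> list:
        # Every previously validated claim is available before drafting,
        # regardless of which campaign it was approved for.
        return [c.text for c in self.claims]

bank = KnowledgeBank()
bank.record(ApprovedClaim("Rated #1 by customers in 2024*", "compliance", "spring-launch"))
bank.record(ApprovedClaim("30-day money-back guarantee", "legal", "spring-launch"))

# The next campaign inherits the validated language automatically.
print(bank.brief_for("summer-launch"))
```

The design choice that matters is not the data structure; it is that recording an approval is part of the approval step itself, so carrying knowledge forward requires no one to remember anything.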

3. Where does compliance enter the process?

Draw your content production workflow on paper. Mark the point at which compliance review happens.

If that mark is at the end, after copy has been written, after creative has been developed, after sign-off has been sought from the business — your governance is structural decoration. It is a quality gate that fires after most of the risk has already been taken.

Organisations reporting significant financial returns from AI are twice as likely to have redesigned end-to-end workflows before selecting AI tools.

The ones that redesigned workflows built compliance in at the brief stage and the strategy stage, before a word of copy was written, not as a final check before launch.

In regulated industries, a compliance issue caught at the end of the process is still a problem. In some cases it is a costly one. The question is not whether your compliance team is reviewing. It is whether the system is designed to make their job easier or harder, earlier or later.
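"Compliance at the brief stage" can be sketched as a gate that runs before any copy exists. A minimal illustration in Python, assuming a hypothetical banned-phrase rule (real compliance logic would be far richer):

```python
# The gate runs on the brief, before drafting, so issues surface
# when they are cheapest to fix. Rule contents are illustrative.

BANNED_PHRASES = {"guaranteed returns", "risk-free"}

def check_brief(brief: str) -> list:
    """Return compliance issues found in the brief, before drafting starts."""
    return [p for p in BANNED_PHRASES if p in brief.lower()]

def produce_content(brief: str) -> str:
    issues = check_brief(brief)
    if issues:
        # Fail early: the brief goes back before a word of copy is written.
        raise ValueError(f"Brief blocked by compliance: {issues}")
    return f"DRAFT based on: {brief}"

print(produce_content("Promote the new savings account with transparent fees"))
```

Moving the same check to the end of the pipeline would still catch the phrase, but only after copy, creative, and sign-off effort had already been spent.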

4. Can you answer this in under two minutes?

A regulator, an auditor, or a senior leadership team asks: for a specific piece of content that went out last quarter, who approved it, what version was approved, and what instructions were in place at the time it was produced?

If your honest answer involves checking email threads, asking the person who ran the campaign, or admitting that the records are not held in a single place — you do not have a governed AI system. You have an AI-assisted workflow with a paper trail that was never designed for accountability.
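The two-minute answer is only possible if the record was designed for it. A minimal sketch in Python of what such a record might hold; every field and identifier here is a hypothetical example, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditRecord:
    """One immutable entry per published asset: enough to answer
    an auditor's question in a single lookup."""
    content_id: str
    approved_version: str
    approved_by: str
    instructions_in_force: str  # the brief active when the asset was produced

# Hypothetical log, keyed by content ID.
audit_log = {
    "q3-email-017": AuditRecord(
        content_id="q3-email-017",
        approved_version="v4",
        approved_by="j.smith (compliance)",
        instructions_in_force="brief-2025-07-rev2",
    ),
}

def two_minute_answer(content_id: str) -> str:
    r = audit_log[content_id]
    return f"{r.approved_version} approved by {r.approved_by} under {r.instructions_in_force}"

print(two_minute_answer("q3-email-017"))
```

If assembling that sentence requires trawling email threads instead of one lookup, the trail was never designed for accountability.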

Generic AI tools excel for individuals because of their flexibility, but stall in enterprise use because they do not learn from or adapt to workflows.

They also fail to produce the documented, attributable, retrievable output that enterprise accountability requires.

An audit trail is not an administrative nice-to-have. It is the difference between a governed system and a liability.

5. What are your best people actually spending their time on?

Not what they are supposed to be doing. What they are actually doing, day to day, when AI is in the workflow.

If your senior marketers, brand leads, and compliance specialists are spending the majority of their AI-adjacent time reviewing drafts, correcting errors, and re-explaining standards that should already be encoded — they are augmenting the AI rather than the other way around.

This is the most common and least discussed failure mode in AI adoption. The technology creates volume. The humans absorb the quality burden. The expertise that should be compounding is instead being consumed, review cycle by review cycle, with nothing to show for it.

McKinsey’s 2025 research confirms that organisations reporting meaningful financial returns are those that treat human oversight as a feature of their workflow architecture, not as an emergency valve.

The test is simple: are your best people doing more of the work that only they can do? Or less?

What the answers tell you.

These five questions are not a scorecard. They do not produce a percentage or a grade. What they produce is a structural diagnosis — a clear view of where the operating model is holding the system back.

If your review cycle is growing, your pipeline needs restructuring, not more reviewers. If every campaign starts from zero, your system needs memory — a governed Knowledge Bank that carries forward every approved decision. If compliance enters at the end, it needs to be built into the earlier stages, not bolted on at the back. If you cannot produce an audit trail on demand, you do not yet have a governed system. If your best people are on a treadmill, the architecture is extracting their expertise rather than encoding it.

None of these problems are solved by a better model. None of them are solved by more prompting guidelines or stricter tool policies.

They are workflow problems, which means they require workflow solutions.

The organisations that will compound their advantage over the next three years are not the ones that adopted AI first. They are the ones that built it into a system designed to learn, govern, and remember.

That system does not emerge by accident. It has to be architected.

See how we architect it →