PART TWO: The Hidden Cost of Shit In, Shit Out.
This is the second piece in a series. If you haven't read part one, the short version is this: the quality of your AI output is entirely dependent on the quality of your input, and most organisations are less ready for that reality than they think. This piece is about what happens when that unreadiness scales.
Every prompt has a cost.
Not a metaphorical cost. An actual, measurable, line-on-the-invoice cost. And most organisations aren't paying attention to it, because right now it feels negligible. That's exactly when the pattern becomes a problem, because by the time it feels expensive, the habits that created it are already baked in.
Let me explain what's actually happening.
Tokens, and why they matter more than you think.
When you send a prompt to an AI model, the system processes it in units called tokens. Roughly speaking, a token is about three quarters of a word. Every word you put in costs tokens. Every word you get back costs tokens. Every retry when the output wasn't right costs tokens. Every iteration, every refinement, every loop where an automated workflow runs the same process again because the first attempt didn't land costs tokens.
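The arithmetic above can be sketched in a few lines. This is a back-of-envelope estimator, not a real billing calculator: the 0.75 words-per-token ratio is the rough heuristic from the text, and the per-token prices are illustrative assumptions, since real rates vary by model and provider.

```python
# Back-of-envelope cost estimator for a prompting workflow.
# WORDS_PER_TOKEN is the rough heuristic from the text; the per-token
# prices are ILLUSTRATIVE ASSUMPTIONS, not real provider rates.

WORDS_PER_TOKEN = 0.75


def estimate_tokens(word_count: int) -> int:
    """Rough token count from a word count (a token is ~3/4 of a word)."""
    return round(word_count / WORDS_PER_TOKEN)


def workflow_cost(prompt_words: int, response_words: int, iterations: int,
                  price_in: float = 3e-6, price_out: float = 15e-6) -> float:
    """Total cost in dollars: every retry re-pays for input AND output tokens."""
    per_run = (estimate_tokens(prompt_words) * price_in
               + estimate_tokens(response_words) * price_out)
    return per_run * iterations


# A loose brief that takes 12 iterations to land versus a tight brief
# that lands in 2: same words per run, six times the bill.
sloppy = workflow_cost(400, 800, iterations=12)
tight = workflow_cost(400, 800, iterations=2)
print(f"sloppy: ${sloppy:.4f}  tight: ${tight:.4f}")
```

The point the numbers make is that iteration count is a multiplier on everything else: halving the number of retries does more for the bill than trimming a few words from the prompt.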
Token prices have dropped significantly as the models have matured. That's made the cost feel almost invisible, which has made behaviour loose. Nobody's being careful because nobody's feeling the pain yet. But consumption is surging at the same time as prices are falling, and underneath the efficiency gains that get reported in the press, usage is growing fast.
On the All-In Podcast, Jason Calacanis shared that some AI agents are already costing around $300 a day, close to $100,000 a year, while delivering a fraction of a capable human's output. In creative production there are already documented cases of campaigns requiring tens of thousands of prompt iterations to land a final result. One widely shared example referenced over 70,000 prompts across a single workflow. Not because the work was complex. Because the process wasn't controlled and the inputs weren't disciplined.
That's SISO at industrial scale. And it compounds.
There's a second cost that's being discussed even less.
Every token processed requires compute. Compute requires energy. Estimates vary and the research is still developing, but generating a single AI response can use meaningfully more energy than a standard web search, and large-scale model usage is now being factored into serious discussions about data centre energy demand and carbon impact.
Individually a prompt feels weightless. At scale, across an organisation running multiple tools, multiple workflows, multiple teams all iterating loosely because nobody taught them to prompt with discipline, it isn't weightless at all. Poor prompting doesn't just affect the quality of your output. It affects your costs, your performance, and increasingly your environmental footprint.
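The "weightless individually, not weightless at scale" point is easy to make concrete. Every number in this sketch is an assumption chosen for illustration: the per-response energy figure sits within the wide range of published estimates, and the usage pattern is hypothetical.

```python
# Illustrative scale-up of per-response energy use across an organisation.
# ALL figures below are ASSUMPTIONS for the sketch -- published estimates
# of per-response energy vary widely and the research is still developing.

WH_PER_RESPONSE = 0.3         # assumed watt-hours per AI response
PROMPTS_PER_PERSON_DAY = 50   # assumed daily usage, retries included
PEOPLE = 500                  # assumed headcount using AI tools
WORK_DAYS = 250               # working days per year

annual_kwh = (WH_PER_RESPONSE * PROMPTS_PER_PERSON_DAY
              * PEOPLE * WORK_DAYS) / 1000
print(f"~{annual_kwh:,.0f} kWh per year")
```

Under these assumptions a single prompt costs a fraction of a watt-hour, but the organisation's annual total lands in the thousands of kilowatt-hours, and loose prompting habits that double the retry rate double that figure too.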
Most organisations aren't measuring any of it yet. They will be.
What this means for how you build your AI practice.
The industry conversation right now is almost entirely focused on capability. What can the tools do. Which model is most powerful. How quickly can we automate this workflow. Those are reasonable questions but they're the wrong starting point, because capability without discipline is just a faster way to produce the wrong thing.
We've been on this journey for nearly three years, and one thing has become consistently clear. The quality of the input determines far more than the quality of the output. It shapes how long something takes, how many times it runs, and how much it costs to complete. Organisations that understand this early will build AI practices that are efficient, measurable, and genuinely useful. Organisations that don't will find themselves with impressive-looking stacks, significant costs, and outputs that require as much human intervention as the process they were supposed to replace.
This is part of why we've started building our own platform. Something designed to bring structure to how AI is used: tighter inputs, more controlled workflows, clearer visibility on what's being consumed and why. Because once AI becomes genuinely operational inside a business it needs the same discipline as any other part of the operation. Being a smaller team made this visible to us earlier than it might have otherwise. When there's less margin for waste, every prompt has to earn its place.
The point.
SISO used to be about output quality. Write a lazy brief, get lazy creative back. It was annoying and expensive in its own way, but the cost was measured in rounds of amends and strained client relationships.
Now it carries three implications. How well something performs. How much it costs to get there. And what it consumes along the way.
AI scales capability brilliantly. It scales inefficiency just as well. And as adoption grows across organisations that haven't invested in the human side of this, that inefficiency is going to become a lot easier to measure and a lot harder to ignore.
The knowledge layer is still the critical part. It always was. The difference now is that getting it wrong has a price attached, and that price is only going in one direction.
If you want to talk about what building a disciplined AI practice actually looks like for your team, drop us a message. We've been in the room with enough organisations navigating this to know where the gaps usually are.

