The Five Questions Every Organisation Should Ask Before Building with AI
Modernising IDEO's three-lens framework for AI: add strategic fit and ethics to move from "can we build it?" to "should we?" and "can we win?"
Signal Boost: "Come Into My House" by Queen Latifah
I'm not sure this song relates to the article, but I heard it on BBC Radio 6 on Saturday and wanted to boost awareness!
Most organisations evaluating AI opportunities are still working from the same framework IDEO introduced in the early 2000s: is it desirable, is it feasible, is it viable?
It is a solid foundation. But it is no longer sufficient.
The framework you use to evaluate opportunities determines which ones you see clearly and which ones you miss. As AI capability accelerates, two important additions have entered the conversation. Ignoring either carries real cost.
The Original Framework Still Matters
IDEO's trifecta has endured for good reason. It asks the three questions that separate ideas worth pursuing from ones that sound compelling but collapse under scrutiny.
Is there genuine demand? Can it actually be built? And will it generate sustainable value? Most innovation failures can be traced back to a weak answer on at least one of these. The framework forces honesty early, which is where it belongs.
But completeness is not the same as sufficiency.
Two Additions That Strengthen It
Strategic fit, proposed by BCG X in 2024, asks whether an organisation can actually win with an idea given its assets, positioning, and competitive context. It is a necessary filter. Most innovation fails not because the idea is wrong, but because the organisation backing it has no right to win in that space. Capability, brand, and competitive advantage all shape whether an idea belongs to you or to someone better positioned to execute it.
Desirability itself also deserves closer scrutiny than it typically receives. It is a layered question, not a single one. Is there a real problem? Does the solution actually solve it? And is the experience good enough that people will genuinely use it?
That last layer matters more than most teams admit. An organisation can identify a genuine problem and build a technically sound solution and still fail. Poor experience drives workarounds. Low adoption produces no value. Desirability without usability is an incomplete answer.
Where Frameworks Fail in Practice
Consider a realistic scenario. An organisation builds an AI tool that automates a high-volume internal process. It is technically sound, commercially justified, and aligned to corporate strategy. Four lenses assessed. Four boxes ticked.
But the experience is poor, so people route around it. And no one asked whether the automation displaces roles in ways that damage workforce trust. The reputational cost outweighs the efficiency gain.
Four lenses passed. Two problems missed.
That second problem, the one nobody asked about, is what brings us to the fifth lens.
Why Ethics Is Now a Governance Imperative
Alexandra Almond first made the case for adding ethics as a fourth design dimension in 2020. At the time it was a design community conversation: practitioners asking whether we should build something, not just whether we could.
AI has changed the stakes considerably. Four developments in particular have moved this from a design principle to a board-level necessity.
Bias at scale. AI systems trained on historical data inherit and amplify existing patterns — in hiring, credit allocation, content moderation, and beyond. Human decisions carry the same biases. What is different is the speed and volume at which AI executes them before anyone notices or intervenes.
Data access and privacy risk. AI systems consume personal data at a scale that creates consent, privacy, and regulatory exposure most organisations have not fully mapped. The risk is often structural rather than intentional — data collected for one purpose repurposed for another, with consequences that were never considered at the outset.
Accountability gaps. Many AI systems cannot adequately explain their own decisions. When automated processes affect employment, credit, healthcare, or access to services, the inability to explain why creates both legal and reputational exposure. Regulators are moving quickly to address this, and organisations that have not prepared will find themselves caught.
Regulatory momentum. The EU AI Act, evolving UK frameworks, and sector-specific guidance are creating compliance obligations that are accelerating. Ethics is transitioning from a considered choice to a structured requirement. Organisations that treat it as the former will be overtaken by those who treat it as the latter.
The Right Framing for the Boardroom
In practice, ethical considerations in AI are not performative. They are risk-based, requiring organisations to surface, investigate, and either mitigate or control for potential harms, most acutely reputational ones.
It is similar to the argument for building teams that reflect the customers they serve. You should do it because it is the right thing to do. But in most organisations, risk and governance are the levers that get it taken seriously. Most boards would not frame that as a values conversation. They would frame it as risk.
That is not cynical. That is how change moves through large institutions.
The same logic applies to AI ethics. The moral case is clear. The board-level case needs to be framed in risk, control, and consequence.
The Five Questions
When evaluating any AI opportunity, these are the questions that matter:
- Is it desirable? Is there a real problem, and will people actually use the solution?
- Is it feasible? Can it be built with what we have?
- Is it viable? Will it generate sustainable value?
- Is there strategic fit? Does the organisation have the right to win here?
- Is it ethical? Should we be doing this at all?
That last question used to belong to designers. Now it belongs in the boardroom.
What It All Comes Down To
Modern AI governance requires moving beyond "Can we?" to "Should we?" and "Is it us?" By integrating these five questions into your evaluation process, you transform AI from a speculative experiment into a disciplined strategic asset.
The organisations that thrive won't be those with the most powerful models, but those with the most robust frameworks for deciding which models to build, and which ones to leave on the drawing board.