In a special long read comment piece, Martha Tsigkari and Sherif Tarabishy consider how the need for critical human reflection remains paramount in a profession that might otherwise risk becoming a recycling service

The rise of artificial intelligence feels similar to the emergence of a fictional anti-hero: flawed, unconventional, yet irresistibly compelling. Despite its evident shortcomings, we find ourselves rooting for its success. And yet, AI’s eventual accomplishments will not merely affirm its worth; they will compel us to re-examine our own.
In a future where AI commoditises skill, what should architects charge for? When expertise becomes accessible to anyone, what justifies the professional title? And finally, the most uncomfortable yet necessary question: if automation can do much of what architects once did, what remains distinctly human in the practice of architecture?
Not the production of drawings – that is already being automated. Not the generation of options – machines do that faster. Not even technical coordination – software will stitch that together. What remains is the capacity to ask the right question before any prompt is written, to stand accountable when the algorithm fails and to say no when the optimised solution is the wrong one.
That last capacity is the hardest: saying no. To the client who wants speed over safety, to the platform that promises efficiency without accountability, to the market that will accept adequacy in place of care, saying no is not a skill that AI threatens. It is a skill the profession has always struggled to exercise.
Much of what we have written over the past seven years has focused on integration: how AI might augment existing roles, streamline workflows and fit within current structures. That work remains useful, particularly for practices navigating near-term decisions. But it answers only one question.
This piece asks a different one: not how we adapt, but how we reinvent ourselves if adaptation is not enough. Not as prediction, but as a serious exercise in following the logic to its conclusions.
The stakes
A client arrives with a site and a brief. Within hours, AI tools generate dozens of massing options, rendering them in multiple styles and producing imagery that looks, to an untrained eye, like a resolved design.
It is not resolved. The outputs are plausible, not reliable; suggestive, not buildable. But that distinction is invisible to many clients and increasingly difficult to defend as a basis for professional fees.
Should we assume that the market will reward the difference? It may not. The architecture sector already undervalues expertise – fees have compressed for decades, procurement often selects on price, and many clients treat design as a commodity service rather than a source of strategic value. If AI makes production cheap, why wouldn’t clients simply accept good-enough solutions?
As the groundwork for design migrates to AI-enabled platforms, some projects will inevitably fall short of the expected standards of ethical stewardship and creative synthesis. Much of the work that architects currently provide may shift to these platforms without critical resistance. Architects may then find themselves competing not against other firms, but against software providers and start-ups.
Architects cannot assume demand for judgment exists; they must cultivate it. But first, they must understand what is changing
These could be AI-empowered configurators operating in particular verticals, providing solutions from conception to cost through supply chains geared to bespoke typologies and particular regions. When buildings become products, one doesn’t need to be an expert to run the app – no more than you need to be an expert to use an online configurator to “design” your kitchen.
The uncomfortable possibility, then, is that the architect’s role as we perceive it – the application of critical judgment – is reserved only for the premium tier of practice.
This does not make repositioning futile. It makes it urgent. Architects cannot assume demand for judgment exists; they must cultivate it. But first, they must understand what is changing.
This change is arriving more quickly than anticipated, and it will affect not just isolated components but the very core of the design process as we know it. To glimpse what this may look like, we need only observe the creative branches of the film and games industries over the past couple of years. Their reaction to AI has followed a predictable trajectory – one that maps rather neatly onto the five stages of grief.
First came denial: AI tools dismissed as gimmicks, incapable of any meaningful contribution to production pipelines. As the technology improved, anger surfaced – artists and studios alike voiced legitimate concerns about job losses, intellectual property theft and the hollowing-out of craft.
Then bargaining: unions, studios, and developers began negotiating boundaries, trying to carve out acceptable uses for AI-assisted workflows. More recently, a kind of depression set in, as teams realised AI-driven automation was reconfiguring entire pipelines faster than anyone had anticipated. Now, though, something like acceptance is emerging.
Leading studios are embedding AI into pre-production, asset creation, simulation and optimisation – not as a capitulation, but as a recognition that the technology, handled responsibly, can enlarge creative possibility rather than contract it.
While this trajectory is neither universal nor complete, it reflects a dominant pattern in how large creative systems tend to metabolise disruptive tools. The question is no longer whether AI will reshape these industries, but how we ensure it does so without eroding the human expertise that remains irreplaceable.
From tools to collaborators
CAD and BIM function as extensions of human design capacity but do not challenge design authorship. AI is different: its tools are not passive instruments but active collaborators that generate thousands of options in hours, simulate performance in real time and potentially stitch workflows across disciplines.
From web apps that can optimise site massing against solar exposure and wind patterns, to platforms that can generate concept imagery in minutes and language models that can draft specifications and client narratives – the combined effect is a compression of time and an expansion of possible design permutations.
AI will not make architects redundant overnight. What it will do is rapidly absorb routine, repeatable work – tasks that have historically been the training ground for junior staff and the steady income for certain verticals. The question is what value remains when execution becomes cheap.
Where the profession once rewarded manual execution, time and resources, value will migrate to activities that AI struggles to replicate: problem framing, critical synthesis, and professional accountability. Practices will shift from linear delivery to conversational loops between human intention and machine suggestion.
The architect frames constraints, values and aspirations. The AI then returns ensembles of options annotated with trade-offs – which, from these options, the architect curates, edits and narrates.
This reframing elevates a new craft: the art of posing the right questions. When a firm approaches a headquarters design, the critical work often happens before any geometry is generated: understanding the client’s culture, sensing internal politics, recognising which stated requirements mask unstated anxieties.
AI can generate ten thousand floor plans; it cannot discern that the CEO’s request for “collaborative space” reflects anxiety about a recent merger. Framing the real problem remains human work.
Without critical reflection, the profession risks becoming a recycling service for historical styles, generating endless variations on what has already been built rather than confronting what should be built next
But this reframing is also where the profession is most vulnerable to self-deception. Prompt engineering can seduce practitioners into believing that generating images is equivalent to design.
The architect who spends a morning cycling through generated image variations, tweaking style keywords and adjusting parameters, may feel productive. They have selected from a distribution of forms, recombined and optimised for visual appeal rather than spatial experience or structural logic.
The result looks like architecture. It is not architecture. It is a rendering of probability. The model has no stake in the outcome, no client to disappoint, no community to answer to and no reputation when the project fails to deliver.
Without critical reflection, the profession risks becoming a recycling service for historical styles, generating endless variations on what has already been built rather than confronting what should be built next.
Current AI models operate within the dreamscape of our past creations – at least until artificial general intelligence (AGI) comes along, which will shift the conversation beyond how we define creativity to the far more elusive question of consciousness itself. But until that impossible debate arrives, the hard question we must answer now is not “which option looks best?” but “what problem are we actually solving and for whom?”
This raises another practical question: if AI reduces production costs, how should architects price their services? When cost is no longer proportionate to time and headcount, what becomes the measure of compensation?
The answer requires confronting what creativity means in this era. If machines generate endless permutations, does creativity lie in generation, or in selection, synthesis and narration?
Commoditised packages will compete on speed and cost. Bespoke services must sell what speed cannot deliver – the judgment that determines which option serves the client’s actual interests, not merely their stated brief.
When skill stops being scarce
Early research suggests that AI disproportionately benefits less experienced workers, compressing the gap between novice and expert. Just as GPS rendered encyclopaedic knowledge of city streets unnecessary for taxi drivers, AI threatens to render years of architectural training less decisive.
Routine tasks – drafting, scheduling, compliance checks – are the first to fall. But even higher-order skills, such as schematic design or performance optimisation, are increasingly automated. If anyone with a subscription can generate a plausible house design, what distinguishes the architect?
The answer cannot be technical execution alone. Architects must reposition as custodians of judgment: contextual, human responsibility that resists commoditisation. Whether the market will value that responsibility, or simply accept good-enough, remains to be seen.
But scarcity still governs value. When execution becomes abundant, what remains scarce commands the premium. The market may bifurcate with high-volume automated delivery at one end and high-value human-led practice at the other, leaving the middle ground squeezed.
Between commodity platforms and elite consultancies there may be viable niches: firms combining AI efficiency with deep specialisation in healthcare, education or adaptive reuse, where regulatory complexity creates barriers to automation. Whether such niches can sustain enough practitioners, or whether they too will be absorbed as AI expands, remains uncertain.
The mobility of talent will perhaps increase with architects who can master AI-enabled synthesis finding opportunities beyond traditional practice
This bifurcation could reshape careers. Income distribution within the profession risks polarisation, with entry-level roles tied to commoditised production facing wage compression, and those who synthesise across disciplines while navigating ambiguity commanding premiums.
New roles may emerge. Authorship, bias auditing, and liability allocation would need dedicated expertise. The mobility of talent will perhaps increase with architects who can master AI-enabled synthesis finding opportunities beyond traditional practice, in consultancy, policy, software and business growth.
Professional bodies and educators must be able to accommodate these potential transitions: re-skilling programmes, updated accreditation standards for AI use and guidance on documentation will be critical to protecting the public interest.

What is certain is that complex projects requiring political navigation and deep interdisciplinary thinking will remain premium services. Think of the decision to design an airport. This is not a technical question AI can optimise; it requires weighing impact against infrastructure necessity, articulating a position on mobility, development and professional responsibility. No prompt generates that judgment.
The erosion of tacit knowledge
Much of architecture’s reliability comes from embodied experience, including site judgment, craft intuition and on-the-spot problem solving. A seasoned architect visiting a renovation project senses structural compromise before any engineer confirms it. A site walk reveals neighbour sensitivities and microclimatic conditions that no dataset captures.
Embodied knowledge shapes design in ways data cannot replicate. Take Foster + Partners’ design for Maggie’s Cancer Centre in Manchester, which drew on Norman Foster’s own experience of hospitalisation – an intuition for what makes a space welcoming that no dataset can capture.
Or take the Reichstag dome in Berlin, with its public ramps and observatory – elements that were not asked for in the brief. Bringing light and public access into the heart of parliament became evident only through site visits and an understanding of the context.
At the time, such a move would have been out of distribution, precisely the kind of judgment, rooted in lived experience rather than training corpora, that no model would suggest.

Overreliance on AI risks producing architects fluent in prompts but thin on practical wisdom. If junior staff spend their formative years curating machine outputs rather than learning through direct experience, the profession may lose the embodied expertise that makes buildings work in reality rather than merely in renderings.
Schools and professional bodies may need to respond—mandating site rotations, renovation work, or physical making as prerequisites before AI-enabled practice.
At the same time, tacit knowledge was never evenly distributed. Apprenticeship favoured those who fit dominant professional cultures, and knowledge transmission often reinforced hierarchy rather than merit.
AI may erode embodied wisdom, but it may also expose how selectively that wisdom was shared and force the profession to confront who was excluded from the craft it now fears losing.
Liability and verification
Liability will be a defining battleground. If an AI-assisted design fails – a facade system that traps moisture, a circulation pattern that creates dangerous crowding, a structural assumption drawn from a flawed training set – who bears responsibility? The architect who approved the output? The software vendor whose model generated it? The client who demanded speed over scrutiny?
To understand why this question is so difficult, it helps to recognise that we have seen something like this before. The printing press did not merely make books cheaper; it restructured knowledge itself.
Before printing, truth lived in people: religious leaders, master craftsmen, community elders. You believed the scribe because you had no way to check. Printing made knowledge inspectable, comparable, refutable and in doing so shifted authority from the person who held knowledge to the process by which knowledge could be verified.
Without verification systems, the profession won’t make fewer errors, just more of them, faster
Architecture faces something similar. Design knowledge, long residing in senior practitioners and passed through apprenticeship, is becoming inspectable. The senior architect’s judgment, once accepted on experience alone, can now be compared against thousands of documented precedents, tested against simulation, challenged by anyone with access to the same tools.
AI compresses expertise. Without verification systems, the profession won’t make fewer errors, just more of them, faster.
Consider, too, what printing did to governance. It enabled forms, records, procedures, repeatable decisions that shifted power from people to systems. Building codes, planning permissions and compliance frameworks are all forms of frozen reasoning. AI does not merely accelerate this bureaucracy – it liquefies it.
In some locations, permits are becoming algorithmic; planning decisions could become model-based. This is not a refinement of existing systems but a restructuring of how governance itself operates in the built environment, and architects may find themselves navigating rules that update faster than they can learn them.
Against this backdrop, current professional indemnity frameworks still assume human authorship. AI complicates this. Practices that implement transparent audit trails and verification pipelines will command trust and premiums. Those that rely on opaque models, or lock themselves into closed platforms whose decision-making they cannot audit, expose themselves to risk.
Models trained on narrow datasets may reproduce exclusionary assumptions – architects who deploy them without scrutiny inherit that liability.
Professional bodies must adapt, defining standards for AI use, bias auditing and accountability. Without such frameworks, public trust in architecture could erode.
The limits of professional will
Professional will has limits. Architects can advocate for human oversight, but they cannot mandate it. The actors who will determine how much judgment architecture requires are not architects themselves.
Developers choose procurement frameworks. States set regulatory standards. Insurers define liability thresholds. Platform companies design the tools. The risk architects mitigate is not the risk developers insure against; it is the risk that emerges after handover: litigation, reputational damage, regulatory retrofits and stranded assets. While AI deployments could mostly optimise for delivery, architects absorb responsibility for consequences.
The profession’s future depends less on how well architects articulate their value than on whether external forces create structural demand for it. Regulation could mandate human oversight for projects affecting public safety. Clients may grow more risk averse as AI failures accumulate liability. Public backlash against algorithmic errors – in housing, infrastructure, public space – could expand rather than shrink the premium tier.
Repositioning requires political and economic change, not just professional reform
These are not predictions, but they are plausible. The point is that architects alone cannot determine the outcome. Repositioning requires political and economic change, not just professional reform.
AI will be used. The real debate is where human judgment must remain mandatory: projects involving public safety, irreversible environmental impact, long-term habitation or vulnerable populations. In these cases, speed without accountability is not efficiency; it is deferred cost.
This means educating clients on the risks of AI-only approaches, demonstrating measurable value through post-occupancy outcomes and lifecycle performance, advocating for regulatory frameworks that require human oversight and building public literacy about what architecture actually does.
Procurement systems that select on lowest fee, professional indemnity structures that discourage innovation and accreditation standards designed for a pre-AI era all constrain architects’ capacity to reposition. Collective action – through professional bodies, regulatory reform and client education – will be as important as individual adaptation.
What remains
Workforce-preservation instincts dressed up as strategy will do little to address the foundational changes that AI will bring to our profession – and to every profession. Much of the rhetoric around architecture is service-sector anxiety dressed up as a plan. But disruptive acceleration cannot simply be “managed” while hype outpaces reality. We need to optimise for business-model evolution, not workflow preservation.
The integration of AI into architecture is not a change in tools. It is a redefinition of work, value and responsibility. Over the next decade, software may move from auxiliary aid to active collaborator: systems that generate options, test performance, stitch workflows and execute tasks directly. These shifts will compress timeframes and commoditise skills, but not uniformly.
What remains of the architect is not production, not generation, not coordination. What remains is the capacity to ask the right question before any prompt is written, to stand accountable when the algorithm fails and to say no when the optimised solution is the wrong one.
If architects cannot refuse the client who wants speed over safety, the platform that promises efficiency without accountability, the market that accepts adequacy in place of care, they will not be displaced by machines. They risk becoming the people who just press the button.
Martha Tsigkari is a senior partner and head of the Applied R+D group at Foster + Partners. Sherif Tarabishy is an associate partner in the group. They are both architects, technologists, software developers and researchers. They also teach programming to postgraduate students at the Bartlett, UCL