There are five gates to the prophetic AI Heaven. Those gates are not necessarily open to everyone who makes the AI pilgrimage. Not everyone gets through all five gates.
Every individual or organization has to knock open and pass through one gate after another until they reach the final destination.
Wanting to get through a gate doesn’t mean you’re able to; no gate opens on willingness alone. Physical prowess, formidable skills and impeccable timing are required of lone pilgrims and organizations alike.
Here they are:

Gate 1: The Gate of Interface
The Gate of Interface determines what, how and how much we can access AI’s power.
Prompting is to generative AI what coding is to software. Most people don’t code. Most people will not prompt – at least not as prompting works right now. Some even argue that the chatbot is not the future of the interface (or of AI).
Expressing intent in natural language is fundamentally slower than many of the alternatives.
Would you really bring up a voice assistant and ask it to save a file for you, or would you click on the “save” button? Would you really type into a prompt to ask to save a file, or would you hit “Ctrl + S” to do it? Which is faster and more convenient?
Saving a file may be a naive example, but there’s no prompt or verbal command that cannot be abstracted into a simple button or keyboard shortcut when your intention can be clearly defined. That’s also why the intention engine – which establishes context to predict or guess people’s intentions – is far more important for applying AI.
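To make that concrete, here’s a minimal sketch of what an intention engine might do, written in Python with entirely hypothetical names (the intents, the toy classifier and the confidence threshold are all assumptions for illustration, not a real API): once an intention can be recognized with confidence, it gets bound to a direct action; only ambiguous input falls back to the slower free-form prompt.

```python
# A minimal sketch of an "intention engine": clearly defined intents are
# bound to direct actions (buttons, shortcuts); only ambiguous input falls
# back to a slower free-form prompt. All names here are hypothetical.

from typing import Callable, Optional

def save_file() -> str:
    return "file saved"        # stand-in for the real action behind Ctrl + S

def export_pdf() -> str:
    return "exported as PDF"   # stand-in for another well-defined action

# Clearly defined intents map straight to actions -- the "Ctrl + S" path.
KNOWN_INTENTS: dict[str, Callable[[], str]] = {
    "save": save_file,
    "export_pdf": export_pdf,
}

def classify_intent(utterance: str) -> tuple[Optional[str], float]:
    """Toy classifier; a real engine would use context plus a model."""
    text = utterance.lower()
    if "save" in text:
        return "save", 0.95
    if "pdf" in text:
        return "export_pdf", 0.80
    return None, 0.0

def handle(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if intent in KNOWN_INTENTS and confidence >= 0.75:
        return KNOWN_INTENTS[intent]()   # fast, deterministic path
    # Unclear intention: fall back to the slow, general-purpose prompt.
    return f"(sending to chat model) {utterance!r}"

print(handle("please save this for me"))   # -> file saved
print(handle("summarize chapter two"))     # -> falls back to the prompt
```

The design point: the “Ctrl + S” path stays fast and deterministic, while the chat path remains available for intentions the system can’t yet define clearly.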
Not understanding the basic principles and best practices behind human-computer interaction (HCI) is nobody’s fault. As users, we’re not supposed to be HCI experts.
Just because prompting – inputting text or voice in natural language – is what’s available right now, it doesn’t mean it’s the best interface ever available between us and computers. It almost never was, it isn’t now, and it probably never will be. Guess why we’re still primarily using graphical user interfaces (GUIs) with touch screens, keyboards or mice to do most of what we do on computers? It’s not because we have no alternatives, but because they’re still the most effective and efficient in most cases.
There are many kinds of interfaces between humans and computers. As a matter of fact, typing and voice are among the slowest modes of input. They are suitable for certain use cases and NOT suitable for many others – certainly not for all. In other words, when timing or speed matters in a task, a chatbot is not the best, or even an appropriate, approach to human-computer interaction.
Singing generative AI’s praises doesn’t require being blind to basic HCI principles. This isn’t just theory, because people show their true preferences in real practice. Clicking the “save” button or hitting the “Ctrl + S” keyboard shortcut wins over slower alternatives in most – albeit not all – cases for most people.
That’s the Gate of Interface for applying AI at scale. We can’t really deploy generative AI at scale if we don’t design the most appropriate interfaces for whatever we want to achieve with it, on a case-by-case basis.
Gate 2: The Gate of Reliability
The Gate of Reliability determines what, how and how much we should access AI’s power.
There’s a reliability barrier that divides the use of AI between critical use and casual use.
There are several factors that largely determine to what extent you can deploy and use AI, among them:
- Tolerating errors: To what extent can the system tolerate errors coming from you and other systems? To what extent can you tolerate the system’s errors (e.g. when your generative AI hallucinates)? How would those errors be prevented, handled and resolved? (A minimal sketch of such guardrails follows this list.)
- Managing risks: Given the fact that no system is error-free, how would you manage the respective risks to make sure it’s feasible and viable enough for real work?
- Being reliable: How reliable does the system need to be for you to have confidence that potential errors are properly handled and risks are sufficiently managed?
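To ground those three questions, here’s a minimal guardrail sketch in Python. It is only an illustration under loud assumptions – `call_model` and `looks_grounded` are hypothetical stand-ins, not a real library – but it shows the shape of the pattern: validate the model’s output, retry within a bounded budget, and escalate to a human when confidence can’t be established.

```python
# A minimal guardrail sketch: validate AI output, retry within a budget,
# and escalate to a human reviewer when confidence can't be established.
# `call_model` and `looks_grounded` are hypothetical stand-ins.

MAX_RETRIES = 3

def call_model(prompt: str) -> str:
    # Stand-in for a real generative-AI call.
    return f"draft answer for: {prompt}"

def looks_grounded(answer: str, sources: list[str]) -> bool:
    # Toy validator; real systems check citations, schemas or facts.
    return any(src.lower() in answer.lower() for src in sources)

def answer_with_guardrails(prompt: str, sources: list[str]) -> str:
    for _attempt in range(MAX_RETRIES):
        answer = call_model(prompt)
        if looks_grounded(answer, sources):
            return answer   # validated: risk judged acceptable, ship it
    # Risk management: unvalidated output never ships on its own.
    return f"[needs human review] last draft: {call_model(prompt)}"

print(answer_with_guardrails("summarize Q3 revenue", ["Q3"]))
```

The design choice worth noticing is the last line of the function: when validation keeps failing, the system degrades to a human-in-the-loop path instead of shipping an unverified answer.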
Generative AI has its enemies, and unreliability is chief among them.
For now, deploying and using AI at scale requires not just humans in the loop but strong humans in the loop. The reliability problem demands far more than mere deployment and use:
Work is changing, and we’re only beginning to understand how. What’s clear from these experiments is that the relationship between human expertise and AI capabilities isn’t fixed. Sometimes I found myself acting as a creative director, other times as a troubleshooter, and yet other times as a domain expert validating results. It was my complex expertise (or lack thereof) that determined the quality of the output.
Ethan Mollick, Speaking things into existence
[…]
The current moment feels transitional. These tools aren’t yet reliable enough to work completely autonomously, but they’re capable enough to dramatically amplify what we can accomplish.
Gate 3: The Gate of Utility
The Gate of Utility determines what, how and how much we, as individuals, end up accessing AI’s power in real practice.
When reliability is at risk, would the usage be worth the investment? Would the benefits be worth the usage? How would we calculate and justify the return on investment?
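For a back-of-the-envelope feel of that calculation, here’s the arithmetic in Python. Every figure below is a made-up assumption for illustration, not a benchmark:

```python
# Back-of-the-envelope ROI sketch. Every number is an assumption for
# illustration, not a benchmark.

seats = 200                      # employees given an AI tool
license_cost = 30 * 12 * seats   # $30/seat/month, annualized
training_cost = 50_000           # enablement and change management
rework_cost = 20_000             # fixing unreliable outputs (the reliability tax)
total_cost = license_cost + training_cost + rework_cost

hours_saved_per_week = 0.5       # per employee, if reliability holds
hourly_value = 60.0              # loaded value of an employee hour, in $
annual_benefit = seats * hours_saved_per_week * 48 * hourly_value

roi = (annual_benefit - total_cost) / total_cost
print(f"cost ${total_cost:,.0f}, benefit ${annual_benefit:,.0f}, ROI {roi:.0%}")
# cost $142,000, benefit $288,000, ROI 103%
# Halve the hours saved (reliability at risk) and the ROI collapses to
# roughly break-even -- the case is fragile, not self-evident.
```

The numbers aren’t the point; their fragility is. When reliability falters, the benefit side shrinks while the rework side grows, and a seemingly obvious win can flip to marginal or negative.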
Change management and cultural transformation are at the heart of organizational utility. After all, organization is merely the mobilization of bias.
…urgency alone isn’t enough. [M]essages [of adopting AI] do a good job signaling the ‘why now’ but stop short of painting that crucial, vivid picture: what does the AI-powered future actually look and feel like for your organization? […] …workers are not motivated to change by leadership statements about performance gains or bottom lines, they want clear and vivid images of what the future actually looks like: What will work be like in the future? Will efficiency gains be translated into layoffs or will they be used to grow the organization? How will workers be rewarded (or punished) for how they use AI? You don’t have to know the answer with certainty, but you should have a goal that you are working towards that you are willing to share. Workers are waiting for guidance, and the nature of that guidance will impact how The Crowd adopts and uses AI.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Leadership’s role is not just singing AI’s praises (commonly known as “paying lip service”):
An overall vision is not enough […] because leaders need to start to anticipate how work will change in a world of AI. While AI is not currently a replacement for most human jobs, it does replace specific tasks within those jobs.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Leadership’s critical role is in enabling, supporting and modelling:
Leadership can help. Instead of vague talks on AI ethics or terrifying blanket policies, provide clear areas where experimentation of any kind is permitted and be biased towards allowing people to use AI where it is ethically and legally possible. Leaders also should consider training less an opportunity to learn prompting techniques (which are valuable but getting less important as models get better at figuring out intent), but as a chance to give people hands-on AI experience and practice communicating their needs to AI. And, of course, you will need to figure out how you will reassure your workers that revealing their productivity gains will not lead to layoffs, because it is often a bad idea to use technological gains to fire workers at a moment of massive change. Build incentives, even massive incentives […] for employees who discover transformational opportunities for AI use. Leaders can also model use themselves, actively using AI at every meeting and talking about how it helps them.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Whereas innovation is exciting and often moves at lightning speed, utility is boring and often numbingly slow:
…adoption is about software use, not availability. Even if a new AI-based product is instantly released online for anyone to use for free, it takes time for people to change their workflows and habits to take advantage of the benefits of the new product and to learn to avoid the risks. […] The path to adoption inherently requires demonstrating appropriate behavior in increasingly consequential situations. This is not a lucky accident, but is a fundamental feature of how organizations adopt technology.
AI as Normal Technology
Utility thrives in well-supported ecosystems and requires systems thinking, not just deploying and using:
Realizing the benefits of AI will require experimentation and reconfiguration. Regulation that is insensitive to these needs risks stymying beneficial AI adoption. Regulation tends to create or reify categories, and might thus prematurely freeze business models, forms of organization, product categories, and so forth.
AI as Normal Technology
Just because we have it doesn’t mean it’s useful. More subtly, it doesn’t mean it’s useful enough to justify our investment.
Some might argue that investment is sometimes a matter of faith rather than justification – that’s sometimes true for innovation, but almost never for adoption. Some of the most impactful technological transformations took years and decades to go from innovation to utility, not weeks or months.
Most of us would agree that AI is or will be leading such a transformation. But it’d be naive to mistake innovation for utility.
Gate 4: The Gate of Translation
The Gate of Translation determines what, how and how much organizations earn in deploying and using AI.
Translating personal gain to organizational gain requires a lot more than individual adoption:
AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors who develop generalized approaches that address the issues of many companies at once. That won’t work here, at least for a while. Nobody has special information about how to best use AI at your company, or a playbook for how to integrate it into your organization.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Deployment is not the same as adoption:
Like other general-purpose technologies, the impact of AI is materialized not when methods and capabilities improve, but when those improvements are translated into applications and are diffused through productive sectors of the economy. There are speed limits at each stage.
AI as Normal Technology
Adoption itself is a transformative act rather than a plug-and-play add-on:
Our organizations, from their structures to their processes to their goals, were all built around human intelligence because that’s all we had. AI alters this fundamental fact, we can now get intelligence, of a sort, on demand, which requires us to think more deeply about the nature of work.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Everything-first is the same as nothing-first. By definition, AI-first means human-second. The path to adoption becomes much more difficult when humans are not part of the growth:
The key is treating AI adoption as an organizational learning challenge, not merely a technical one. Successful companies are building feedback loops between Leadership, [experimentation], and [employees] that let them learn faster than their competitors. They are rethinking fundamental assumptions about how work gets done. And, critically, they’re not outsourcing or ignoring this challenge.
Ethan Mollick, Making AI Work: Leadership, Lab, and Crowd
Through the Gate of Translation, AI has a real chance of evolving from being a technology in search of problems to becoming the solution to our problems.
Gate 5: The Gate of Trust
The Gate of Trust determines how we relate to the new synthetic reality of living with AI and how that relationship could evolve from the short-term to the far future.
Both rational and emotional, trust is a deeply human issue. If people don’t trust AI, they won’t use it, regardless of its potential benefits.
We humans inherently trust things that are predictable and reliable. An AI system that consistently performs as expected, provides accurate information, and doesn’t behave erratically builds confidence over time.
Additionally, when we feel we understand how an AI system works, what data it uses, and why it makes certain decisions (even at a high level), our trust increases. Responsiveness to feedback and the ability to correct errors also build a positive user experience that reinforces trust.
With the undeniable trend of our declining trust in institutions, media, and technology, how would we handle our trust issue with AI? What would it mean in each organizational context?
Those are wicked problems with no quick fixes. They won’t go away just by deploying AI with some change management sprinkles. Technological transformation is almost always also organizational transformation.
Last but not least, trust demands resilience:
…resilience requires both minimizing the severity of harm when it does occur and minimizing the likelihood of harm when shocks do occur.
AI as Normal Technology
Through the Gate of Trust, we’re forced to rethink how we relate to ourselves and our societies. Deployment is not the solution but merely the prelude to it.
Conclusion
There’s a mismatch between how innovative technological transformation actually works and how organizational decision makers wish it to work – most organizations are still standing before the Gate of Interface, while their executives have a fervent urge to boast of breaking through the Gates of Translation and Trust into the AI Heaven in one pass.
That’s just not going to happen.
Unlike previous waves of fashion such as design thinking or agile, this wave of AI fashion will see many organizations hit those gate barriers much faster than before, because you can only “fake it” for so long with no “making it” in sight.
Each gate keeps out the majority of aspiring organizations who:
- Don’t invest in designing the appropriate user interfaces;
- Don’t invest in risk management and systems engineering;
- Don’t have and don’t invest in infrastructure that affords AI integration;
- Don’t do proper knowledge management and data management;
- Don’t architect the organizational function, process, structure and culture;
- Don’t invest in change management and organizational transformation;
- Don’t think beyond “deploying AI as a technology.”
So much ado about the fashion, faith and fantasy in the new age of deploying and adopting AI.