Weekend Edition: AI-powered vs AI-empowered
In the breathless race to become an AI-powered society, have we missed the step of becoming AI-literate enough to be AI-empowered without becoming AI-driven?

A social media-powered media and information landscape overpowered a society that still isn't media literate enough to manage and harness its power. That lack of media literacy, of the capacity and willingness to shape how these tools affect us and our civic experiences in particular, has had wide-ranging unintended consequences that we are still unraveling and still working to fully understand.
Into this dysfunctional civic landscape, we are even more rapidly deploying AI-powered tools that we understand even less well.
The current generation of LLM-based AI technologies is wildly more powerful and disruptive than the social media platforms and algorithms that redefined our information and cultural landscape, and we understand it even less well. At least social media was understood by its creators — even if they hid its effects and dissembled about their intentions. The creators of our current generation of AI tools do not fully understand how they work, and we have failed to demand interpretability as a core feature of, and precursor for, our ongoing deployment of these technologies.
And yet the dueling conversations about AI doom and AI utopianism continue apace: new narratives of how an AI-powered society will be a post-human apocalyptic hellscape or a post-work, post-inequality heaven crop up every day. Almost none of these narratives discuss what we need to be, and how we need to be, in order to leverage these AI technologies into a human-centered, AI-empowered society where we harness the opportunity of augmentation over replacement and deploy them differently than the last major technology wave. Both conversations are dominated by discussions of power and effects, but neither asks fundamental questions about us: about the flexibility and resiliency of the social and civic structures these tools will affect, and about how ready, willing, or capable we are, as individuals and communities, to harness these tools rather than be harnessed by them.
What is AI literacy?
How deeply do we need to understand these tools themselves? How they work? What they are based on? Rooted as they are in our current language and systems, will these tools deepen existing inequality and inequity? Lead to deeper patterns of access and exploitation, or to liberation? The abstraction of virtual experience hides all kinds of social, cultural, and resource costs.
How does interacting with AI-powered agents via typical human modes of interaction (conversation) humanize them? Dehumanize us? Should we be anthropomorphizing them as we deploy agentic teammates alongside our human teammates? Or should we explicitly not do that because our whole experience of virtual teammates may be similar but our expectations of them should not be?
What does it mean to understand a discontinuous reality? What new pressures do these tools add to our already dysfunctional information and media systems in terms of our ability to discern authority and credibility? How does the rampant proliferation of impersonation challenge our concept of identity? Online and off? How does an unmoored sense of reality change our trust in our own capacity for discernment? Our trust of others? Reshape our understanding of ourselves, our communities, our country?
How and when do we use these tools? How and when are they being used on us? For us? Against us? Much has been made of the disruption to how we work (including a stark warning from Anthropic's Dario Amodei just this week about the likely reality of a new regular state of 20% unemployment and the elimination of all entry-level white-collar work), but how we react to, assess, and weigh these options versus how we guide the deployment and augmentation of our experiences is completely beyond our current capacities as a society.
If we have no more entry-level human teammates, how will we ever develop mid-level or leadership human teammates? Are we willing to simply allow AI teammates to shape the direction, set the context, and set the ceilings for our organizations in an accidental AI-driven future? Do leaders need to be able to do the work of the teams they lead at all? Or is leadership becoming a philosophical and moral capacity to hold the containers and set the intentions of mixed AI and human teams? If we eliminate the entry level of a whole sector (or an entire team), we are committing to an AI-only sector (or team) in the long run. Do we mean to commit to that? Are we comfortable with the ceiling that might set on our creativity? How might we imagine our teams differently if we embrace the idea that AI teammates might take responsibility for certain functions, or that every human on our teams might be augmented and elevated by a co-pilot of some kind across every function at every level? Expanding our capacity and extending our capabilities, not replacing anyone, in an AI-empowered future?
No one has ready answers to all these questions – or to the many, many others. AI literacy isn't about the answers. What we need to reach for is the willingness and courage to develop the capability and capacity to embrace the catalytic conversations around the questions themselves.
Building literacy in a rapidly evolving, emergent language
This development, deployment, and disruption isn't happening to us: it's happening with us, but with us as largely passive passengers. We're in a boat we didn't build, on a river we didn't choose, with our oars out of the water. We could put our oars in the water. We have choices we aren't exercising and agency we aren't embracing in this process, but first we must find the capacity and capability to join the conversation with clarity and purpose. And that means building AI literacy right now, or the momentum of perceived inevitability will become actual inevitability, and the incentives of profit, not benefit, will make these choices for us.
The path we're on and the paths ahead are actually up to us — but only if we're capable of seeing them and choosing among them. We need to consider deeply the nature of the boat (our social systems, org structures, and resource commitments) and where the river is taking us (a coherent, intentional vision of an AI-empowered future), and get our oars in the water.
Last updated: 31 May 2025