Is AI truly eliminating navigation, or is it simply shifting its agent and form?
Written by
Francisco Nunes

This essay is the result of two encounters. The first took place during a session of It's Book Time, a study group focused on practicing English among Brazilian designers and product enthusiasts. In that meeting, we discussed Andrew Sims' article "Do AI Products Even Need Navigation?", which served as the starting point for this reflection on the navigational paradigm introduced by AI.
The second encounter happened at Friends of Figma Porto Alegre, where we exchanged ideas on how we use AI tools in our daily workflows, and what perspectives we see for a future increasingly shaped by artificial intelligence.
While writing this essay (still a work in progress), I recalled another text I had read in Aeon magazine, written by Bryan Norton, on Bernard Stiegler's philosophy (a reading I believe every designer working with technology should explore, particularly for its account of how our tools shape us). What follows is the result of several days of reflection on the question of navigation in digital products. I hope the reader finds the journey worthwhile.

Black Venus by Mark Bradford, 2005. Mixed media collage; 130 × 196 in © Mark Bradford
To navigate is to read the world in order to move through it, whether it means scanning a crowd to find a familiar face, deciphering the logic of a bookstore’s layout, or following the stars at sea. This ability has always been mediated by tools (many of them disruptive and transformative). Still, the rise of artificial intelligence presents us with a radical promise: a world where we no longer need maps, because the information or the product “comes to us.”
Faced with this transformation, this essay begins with a central question: Is AI truly eliminating navigation, or is it simply shifting its agent and its form?
My central argument is that AI (or more precisely, AI as a product, rather than a pure technology) does not eliminate navigation. Instead, navigation is delegated to a systemic agent that operates invisibly. From this starting point, I want to reflect on how such delegation may impact our navigational abilities and, in some cases, generate a kind of cognitive debt by depriving us of a formative practice.
Finally, I want to leave you, the reader, with a provocation: perhaps this transformation demands more than individual adaptation; it may require a more systemic rethinking of how we engage with these new tools of agency.
In this essay, I use the term navigability (navigation + ability) to define navigation as a human practice that predates the digital world. This clarification matters: I am not referring to navigability as the product's capacity to be navigable, as often seen in product design literature.
To understand AI's impact on interaction and navigation, we must first reclaim navigation from its purely digital sense. At its core, navigation is a form of human technology: a savoir-faire that has always allowed us to move from point A to point B.
Technology here is not understood as something merely “technological,” but rather, as Bryan Norton explains through Bernard Stiegler’s thought, as “[...] an open-ended creative process—a relationship with our tools and with the world.”
It is the craft of moving through the world, a creative, open-ended relationship between ourselves, our tools, and our environments. This practice, which I often refer to as anthropotechnics [1], is not an instinct but a learned ability that emerges from the very experience of “going through”, facing the friction and cognitive effort imposed by the material world.
This understanding opens up a compelling perspective for thinking about design itself. Here, I consider navigation equivalent to cognition: the cognitive operations we perform, whether intentional or not. Therefore, every product inherently has a navigational function. [2]
Historically, we’ve always delegated part of the navigational effort to tools: maps, compasses, astrolabes. It may sound obvious, but one of the defining features of human evolution is our ability to create tools. On this point, Stiegler offers an important insight:
According to Stiegler, the process of making and using technology in its broadest sense is what makes us human. Our unique way of existing in the world, distinct from other species, is shaped by the experiences and knowledge that our tools make possible, whether it is a state-of-the-art brain-computer interface like Neuralink or a prehistoric axe used to fell trees. [3]
The digital era has accelerated this delegation (GPS and the calculator being prime examples), but the fundamental paradigm described by Sims has remained: we are still the agent operating a tool to read a map. The current software structure still offers us a sense of confidence and a sense of place (or at least it should).

Keep Walking by Mark Bradford, Hamburger Bahnhof — Nationalgalerie der Gegenwart © Mark Bradford und Hauser & Wirth / Photo: © Staatlichen Museen zu Berlin, Nationalgalerie / Jacopo LaForgia
The shift introduced by AI runs deeper. Sims observes that AI disrupts the spatial model “not by moving things around, but by removing the need to go looking in the first place.” This is where the vector of the process inverts. AI promises that we no longer need to go through the information; instead, the information will now come to us. But what happens when the need to “go looking” is removed?
What occurs is a transfer of agency. Navigation still happens, but its agent is no longer the user; it is now a system operating invisibly. The interface ceases to function as a map we read, and becomes more like a “genie in a bottle” to which we make requests. The apparent immediacy of the response conceals the entire complex path the system has traced on our behalf. [4]

Structural navigation model by Frank Krausch (Source)
The question is not whether navigation ceases to exist, but what happens to us when we stop practicing it.
If navigation doesn’t disappear but merely becomes hidden, we must ask: Who benefits from the erasure of this process? On a material level, the answer is clear. In today’s digital product ecosystem, removing navigational friction is not a neutral gesture—it is a strategy of optimization with clear commercial goals. A “seamless” interface that anticipates or instantly satisfies a desire is designed to reduce cart abandonment, accelerate the customer journey, and ultimately increase conversion and profit.
In this model, the AI agent functions simultaneously as a facilitator for the user and an efficient data collector for the company’s business intelligence.
For this optimization to be accepted, it must operate under a deeper ideological function: naturalization [5]. It aims to make this technological mediation feel as natural and obvious as the air we breathe, hiding its constructed nature, programmed biases, and underlying interests. Pierre Bourdieu once said, “The strength of ideology lies in not being recognized as such.” Symbolic violence is precisely the kind that becomes normalized and accepted as part of the natural order of things.
This process, the loss of savoir-faire or the unconscious internalization of technical mediation, is what Bernard Stiegler calls cognitive proletarianization. By erasing the traces of the path, the system disables our ability to question it. Indeed, empirical science is beginning to measure this effect. One example comes from a recent MIT Media Lab study that investigated the impact of AI assistants on essay writing. The study found a reduction in participants' neural connectivity and a lower ability to recall content and feel authorship over their own texts. The authors called this effect cognitive debt:
"A growing dependence on AI assistants for cognitively demanding tasks may lead to a decrease in memory retrieval effort, harming the consolidation of learning and the sense of authorship." (Kosmyna et al., "Your Brain on ChatGPT," MIT Media Lab, 2025)
This debt is paid with our capacity to internalize, remember, and feel ownership over thought. It is the long-term cost of short-term convenience. By sparing us the cognitive stress that is essential to the journey, the system also strips us of the journey's formative benefits.
We lose the space for reflection that comes from the act of choosing. We also lose the possibility of serendipity: those unplanned detours that arise during non-linear searches and lead us to unexpected discoveries. More gravely, we risk the atrophy of cognitive skills like researching, comparing, and organizing—which are the foundations of autonomous and critical thinking.
This concern is powerfully explored in Old Man’s War, a novel by John Scalzi. In the story, human soldiers use a brain-integrated AI system that functions like a superbrain, translating languages, performing calculations, and organizing decisions. However, during a conflict with an alien species, this technology is suddenly disabled, and the soldiers cannot act, think, or decide independently.

Some of John Scalzi's books (Photo)
The metaphor is clear: by outsourcing our cognitive faculties, we may also lose the anchor of our autonomy. When technology fails, what remains is not just the silence of the system, but the emptying out of the subject itself.
The analysis thus far confronts us with a fundamental dilemma: the same tool that offers liberating convenience threatens to atrophy a vital cognitive capacity. How can we make sense of this duality without falling into naïve technophobia or uncritical optimism? Stiegler offers us a conceptual tool for this: the notion of pharmakon.
For the philosopher, every technique is a pharmakon: simultaneously a remedy and a poison. Technology is not something external to us: it shapes and constitutes who we are, for better and for worse. Navigational AI is an excellent example of the pharmakon. Its remedy is the promise of a frictionless life, the optimization of tasks, and the freeing of our attention for other pursuits. As we’ve seen, its poison is cognitive debt — the loss of navigational savoir-faire and our growing dependence on systems whose interests are not aligned with our own.
Recognizing AI as a pharmakon prevents us from offering simplistic answers. The question is no longer “Is AI good or bad?” but rather “How can we care for this poison that is also a remedy?” For Stiegler, the answer is not to reject technique, but to develop a therapeutics: a practice of care, a conscious mode of relating to technology that mitigates its toxic effects while amplifying its benefits. This is precisely what we must strive for: a critical response that allows us to pilot the tool, rather than being piloted by it.

A Treatise on Adulteration of Food and Culinary Poisons, 1820 (Source)
Before we delve into therapeutics, it's worth returning to Andrew Sims' vision, which catalyzed much of this discussion. For Sims, traditional navigation has long been the "backbone of the experience"—the element that makes software legible and gives users a sense of predictability and place.
In his view, the disruption introduced by AI lies precisely in the promise of a dynamic, conversational interface, where tools “arrive just in time,” eliminating the need to go looking in the first place.
Sims' description of the user experience is accurate. Indeed, it often feels like we are no longer moving through "rooms in a house." However, our analyses diverge in their diagnosis of the implications. What Sims describes as "the removal of the need to go looking" is, in my view, not an elimination but a delegation.
The search still happens, but an invisible agent carries it out. Sims’ “dynamic interface” is the materialization of a structure that becomes naturalized, and its tools that “arrive just in time” are the very mechanisms that, by optimizing the task, deprive us of reflection and serendipity. In this case, the map does not disappear; it merely becomes hidden. The navigational structure still exists, but it turns invisible, embedded in the algorithmic decisions that determine what we see, when we see it, and how it appears.
This process reflects what some theorists call the algorithmic black box: a system that makes decisions based on data and criteria inaccessible to the user. The interface no longer offers clues about how the information was organized or selected, creating an asymmetrical relationship between subject and tool. We don’t just stop navigating, we stop knowing that navigation is happening at all. [6]

Asvirus 87 by Derek Lerner (Source)
The delegation of navigation to opaque algorithmic systems does not eliminate navigation. It transforms it, shifting its agency and obscuring its criteria. While this transformation is still in its early stages, it foreshadows a deeper reconfiguration: the dissolution of visible maps in favor of systems that guide the user without revealing how (or even where) they are being led.
But what is lost in this process? And what must be reclaimed?
This is not about technical nostalgia or moralistic technophobia. As we've seen with Stiegler, technique is always a pharmakon: remedy and poison, condition of possibility and risk alike. The question is whether we can build therapeutics capable of restoring navigation's formative value, its power as resistance, and its openness to the unexpected.
These therapeutics won’t be found in a new button but perhaps in new ways of teaching, thinking, and practicing design, in approaches that understand that designing an interface also creates a structure of thought. Choosing what to reveal, what to hide, or how to order things is not merely a functional decision—it is a form of discourse, ideology, and power.
In light of this, some urgent questions remain:
If delegating navigation to AI becomes the norm, what forms of navigational resistance or relearning might emerge outside the commercial circuit?
Can interface design be understood as a politically implicated field in reconfiguring agency, or is it structurally subordinated to the logics of efficiency and capital?
How can we incorporate a critical education on tool usage into design processes without falling into moralism or empty idealism?
—
This essay is less about offering a definitive answer than about extending an invitation to continue navigating. After all, like every meaningful journey, it only makes sense if it transforms us along the way.
I would like to thank all my colleagues from "It's Book Time" who helped me think through the theme of this essay.