At its core, an interface is any system that translates intent into action. Think about human language itself: an interface that converts abstract thoughts into audible words and written symbols, enabling complex communication. The invention of the alphabet served as a standardized interface for knowledge, allowing ideas to be encoded, stored, and retrieved. Beyond language, simple mechanical tools have always functioned as mediators for how we interact with the world. A lever translates human strength into amplified force; a steering wheel translates a driver's hand movements into the precise turning of a vehicle. The concept of an interface is timeless: it's a bridge designed to minimize the friction between a person and the world around them.
An interface is a shared boundary across which two components of a computer system exchange information. The exchange can be between software, hardware, peripheral devices, humans, and combinations of these. Whenever there is a human involved, we often use the term “user interface” or UI.
Doug Engelbart at an NLS workstation | Source: Doug Engelbart Institute
Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what's in your head and what the product can do.
Interfaces take the intricate logic, complex algorithms, and all the data of a system and present them in a way that is understandable and actionable for a human being. Could be pixels on a screen. Could be a simple buzz in your pocket. However it shows up, the interface defines how you experience technology. It's the system, made visible and actionable to human senses.
The first computer interfaces were predominantly text-based, built around command lines. The advent of Graphical User Interfaces (GUIs) changed everything with their windows, icons, menus, and pointers, offering direct manipulation and a visual metaphor for actions. For the past few decades, "interface" mostly meant websites and mobile apps, the surfaces that defined our digital lives. More recently, with the rise of Large Language Models (LLMs) and AI Agents, interfaces are once again embracing text-based, command-line-style interaction, now with a whole new level of power. And we should expect interfaces to keep evolving as AI models and devices become increasingly multimodal and ubiquitous. The command line wasn't the end of computers; today's chat interfaces aren't the end of AI.
Design tools come and go. People's habits evolve. New devices are born. The way we designed interfaces five years ago is very different from how we do it now. The only common thread in our work is understanding people. Everything else is just a vehicle for that. The job is to make complex things simple. To make things feel intuitive, no matter what new technology is behind the curtain. That's the one thing that will never change.
Morse code translates language into a dots-and-dashes interface
Interfaces enable users to interact with technology through whatever means feels most natural at a given moment: a tap, a swipe, a voice command, or even a subtle gesture. The interface is just a layer of interaction that adapts to you, not the other way around. It's part of a designer's job to understand user goals, context, and environment to determine which interface modality is most appropriate for the task at hand.
We usually picture interfaces as what a person sees and clicks, but an API (Application Programming Interface) is essentially the same thing: a clear set of rules that lets different software applications talk to each other. It's the point of contact where one program sends requests to another and receives responses, not so different from a human user clicking a button and getting visual feedback. For developers, your API is the "user interface" to all of your product's core data, features, and services. Similarly, MCP (Model Context Protocol) is a set of rules that lets AI Agents interface with external tools, data sources, and other systems without human intervention.
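To make the parallel concrete, here's a minimal sketch in TypeScript. The weather endpoint, its query parameter, and the response shape are all hypothetical, invented for illustration; the point is how the request/response cycle mirrors a button click and the feedback it triggers.

```typescript
// Hypothetical weather API: the URL, parameter, and response shape
// are invented for illustration; this is not a real service.
interface WeatherResponse {
  city: string;
  temperatureC: number;
  conditions: string;
}

async function getWeather(city: string): Promise<WeatherResponse> {
  // Sending the request is the programmatic equivalent of clicking a button...
  const res = await fetch(
    `https://api.example.com/weather?city=${encodeURIComponent(city)}`
  );
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  // ...and the structured response is the equivalent of the visual feedback.
  return (await res.json()) as WeatherResponse;
}

getWeather("Lisbon").then((w) =>
  console.log(`${w.city}: ${w.temperatureC}°C, ${w.conditions}`)
);
```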
With Generative UI (GenUI), computers can imagine and build interfaces on the fly: interfaces that adapt fluidly to users, contexts, and devices. Give it a prompt or some context, and the AI can figure out the best interface elements to use to render its response. This isn't about static designs anymore; it's about highly personalized, fluid experiences, delivered exactly when you need them. LLMs started with language but are quickly expanding into other types of inputs and outputs. In this reality, no two people will ever experience the same product.
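As a thought experiment, here's a minimal sketch of how that could work, assuming the model is constrained to return JSON that matches a small, designer-defined schema. The element vocabulary and renderer below are hypothetical, not a real GenUI library.

```typescript
// A designer-defined vocabulary of allowed UI elements. The model picks
// and arranges them per user, context, and device; it never invents new ones.
type UIElement =
  | { kind: "text"; value: string }
  | { kind: "chart"; title: string; points: number[] }
  | { kind: "button"; label: string; action: string };

// Turn a generated layout into markup. A real renderer would map these
// to vetted, accessible components rather than raw HTML strings.
function render(elements: UIElement[]): string {
  return elements
    .map((el) => {
      switch (el.kind) {
        case "text":
          return `<p>${el.value}</p>`;
        case "chart":
          return `<figure title="${el.title}">${el.points.join(", ")}</figure>`;
        case "button":
          return `<button data-action="${el.action}">${el.label}</button>`;
      }
    })
    .join("\n");
}

// e.g. what a model might return for "show me this week's spending":
const generated: UIElement[] = [
  { kind: "text", value: "You spent less than last week." },
  { kind: "chart", title: "Daily spending", points: [12, 30, 8, 22, 15] },
  { kind: "button", label: "See details", action: "open-details" },
];

console.log(render(generated));
```

The constraint is the design decision here: the model composes only from elements the design team has already vetted, which is what keeps generated interfaces intuitive and on-brand even when no two are alike.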
With interfaces that build themselves and adapt on the fly based on user needs, the focus shifts from meticulously crafting static screens to defining the rules, parameters, and intelligent systems that can generate optimal experiences. We'll set the stage for how information and interactions flow, acting less as an architect and more as a choreographer orchestrating a dynamic environment. The real expertise will be understanding human needs and translating them into flexible frameworks that AI can work within. That way, even when interfaces are generated on the fly, they remain intuitive, effective, and centered on people.
Design is becoming more the work of a choreographer than an architect
We're moving past simple multimodal interaction into an omnimodal experience, where different ways of interacting happen simultaneously. Imagine pointing at a screen and speaking a command, with the system understanding both cues instantly. Or pointing your camera at something while asking a question with your voice. This convergence creates a more natural, efficient, and intuitive dialogue with the products we use. It's about seamlessly merging touch, voice, gesture, haptics, and whatever else comes next, all at once.
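One way to picture what "understanding both cues instantly" entails: somewhere in the system, simultaneous inputs have to be fused into a single intent. Below is a toy sketch of that idea; the event shapes and the time threshold are invented for illustration, not any real input-fusion API.

```typescript
// Toy omnimodal input fusion: pair a pointing event with a voice command
// when they arrive close together in time.
interface PointEvent { x: number; y: number; at: number } // timestamp in ms
interface VoiceEvent { transcript: string; at: number }

// Assumption: cues within this window express a single intent.
const FUSION_WINDOW_MS = 800;

function fuse(point: PointEvent, voice: VoiceEvent): string | null {
  if (Math.abs(point.at - voice.at) > FUSION_WINDOW_MS) return null;
  // Both cues resolve into one command: what the user said,
  // about the thing they pointed at.
  return `${voice.transcript} @ (${point.x}, ${point.y})`;
}

console.log(
  fuse({ x: 120, y: 340, at: 1000 }, { transcript: "translate this", at: 1400 })
); // -> "translate this @ (120, 340)"
```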
Brain-Computer Interfaces (BCI). Sure, looking forward to it.
This technology promises to bypass all conventional interfaces, allowing thoughts and desires to become immediate digital commands without the friction of language or the delays of physical movement. It's the shortest distance between an idea and an action. But the promise of a thought-to-machine connection doesn't come without its own risks. When your mind can be both the input and the output, your thoughts aren't just your own anymore, opening up serious questions around privacy, user control, and security.
The ideal of "no interface" promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It's a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.
When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.