Written by
Caio Barrocal
Disruption seems to be the norm in design, especially when it comes to the techniques and tools we employ in our craft. In 2025, the field faces yet another shift. Techno-optimism is no longer unanimous, and creative professionals are trying to understand their place in a future of economic uncertainty, in which AI seems capable of delivering aesthetic quality at unmatched speed. In more philosophical terms, this topic has once again become a central discussion, raising a set of questions: in a world where machines can create, what is creativity, really? How does generative creation influence our creative processes? And perhaps more interestingly: who is the author of an artifact produced through generative techniques?
My views on technology are those of sociotechnical systems, meaning I don't see how the tools our societies create can be considered apart from their organization, motivations, and human elements in general. In a moment of pressing climate challenges, economic turmoil, and increasing global tension, I cannot help but see the race for AI as a symptom of a logic that pushes human needs aside, seeks profit at all costs, and disregards long-term environmental and social consequences. Yet this is a trend we need to observe, and shaping a critical view of such a complex topic requires a comprehensive set of skills that a single expertise is unlikely to cover. This text is therefore first and foremost a call for multidisciplinarity and collaboration. Equally important is the understanding that sharing authorship with machines is not something new, but rather something that has been intensified. In fact, designers, artists, and intellectual workers in general have been apprehending computational technologies since their conception while seeking (or being pressured) to explore new mediums and perform their tasks more efficiently.
The first efforts to bring computers to the creative domain began in the 1950s, amid an effervescent and diverse artistic scene. Pioneers like Georg Nees, Frieder Nake, Vera Molnár, and Lillian Schwartz in the US and Europe, Hiroshi Kawano in Japan, and Waldemar Cordeiro in Brazil were among the first to explore what would become known as “computer art.” They began experimenting with computers and algorithms at a time when these machines were still very expensive and limited. Yet, driven by the rapid advancements in computer graphics, they—along with other creators—recognized the computer as a potentially rich medium for expression.
In the beginning, explorers of “computer art” were mainly mathematicians, physicists, and engineers — the ones who could operate giant, complex computers and who were interested in how the machines could help visualize the phenomena and models they were studying. In parallel, there was also significant enthusiasm within the community for exploring graphical synthesis per se and for making the use of computers more appealing. Such efforts in improving graphical user interfaces ultimately made computers easy to operate and accessible to a general audience; until very recently, however, one had to be extremely knowledgeable about programming techniques, mathematical principles, and how digital images are manipulated and rendered in order to tap the full expressiveness of the medium. Interestingly, the grammar of creating with computers mixes technical concepts such as logical thinking, algorithms, programming, and resource consumption with design principles such as proportion, balance, hierarchy, rhythm, usability, and perception.
As I was interested in outlining how computers reached the level of aesthetic relevance they enjoy today, I concluded that explaining it solely as a product of developments in computer graphics wouldn't do it justice. There would still be a missing piece: the one capable of clarifying the creative practice and its motivations. With time, I framed the phenomenon in two components: first, as a continuation and extrapolation of the use of mathematics and rationalization as mediums for art, design, and creativity; and second, as a consequence of the improvements in computer graphics made since the last century, which enabled computers to synthesize images with ever-increasing quality.
Left: fresco by Piero della Francesca; Right: della Francesca's sketches on proportion.
While using the computer as a medium for creative production is a somewhat recent phenomenon, mathematics, art, and design have met each other many times across history. After the Middle Ages, for example, Brunelleschi (1377–1446) is credited with having rediscovered the concept of depth and formulated an understanding of linear perspective and the vanishing point. The architect and engineer was one of the pioneers of the Renaissance period and is most famous for designing machines to help build the dome of the Cathedral of Santa Maria del Fiore. Brunelleschi influenced many other artists and mathematicians who were seeking to create convincing depth and realism in paintings. Further steps towards creating an aesthetically convincing perspective system were taken by Piero della Francesca (1416–1492), who wrote many mathematical texts discussing geometry, algebra, and the application of perspective principles to art. In the first volume of his On Perspective In Painting (around the 1460s), della Francesca establishes geometry theorems and then discusses their application to frames. In the other two volumes, the artist examines three-dimensional drawing techniques applied to prisms and deals with more complex shapes such as the human body and architectural ornamentation (Robertson & O’Connor, 2003).
Both artists were deeply invested in their artistic intentions, to the point that they turned to the design of their very creative processes, devising machines and concepts to help them realize what they had in mind. For instance, in the introduction of On Perspective In Painting, della Francesca explains his process — one in which mathematics was not only a tool for creating with greater quality, but also a creativity engine: “First is sight, that is to say the eye; second is the form of the thing seen; third is the distance from the eye to the thing seen; fourth are the lines which leave the boundaries of the object and come to the eye; fifth is the intersection, which comes between the eye and the thing seen, and on which it is intended to record the object.”
Some years later, Leonardo da Vinci (1452–1519) built on top of important contributions made by those before him, enriching the field with further studies on perspective and the optical principles of the eye. The deep understanding of perspective and the physics of shade and light that da Vinci developed throughout his life became his signature and made him the epitome of the Renaissance man: a multidisciplinary creator in sync with artistic sensibilities and attentive to the techniques of creation. In his notebooks, he recorded ideas, sketches, theorems, prototypes, and mathematical concepts as he tried to represent and design the real world. In Leonardo, perhaps more than in any other Renaissance artist, mathematics, art, and creativity were fused into a single concept (Robertson & O’Connor, 2003).
It's important to note that having mathematics and geometry as engines for creativity isn't exclusive to European cultures. In other parts of the world, and in different times, creators relied on such instrumentalization in order to realize their creative intentions. That is the case, for example, of the Japanese "Sangaku" (geometrical problems and theorems carved into wooden tablets and placed as offerings at Shinto shrines or Buddhist temples), and of the abstract mathematical patterns found in Islamic art.
At the dawn of the computational era, in the early 20th century, Europe was going through the consolidation of industrialization and the rise of ideas about modernity and positivism. These times were shaped by technological optimism and an emphasis on human-centered progress, and saw the rise of rationalism as a normalized way of seeing life, with its emphasis on reason, logic, quantification, and systematization. The trends that led to Bauhaus functionalism, for instance, also influenced many creators and scientists who began to seek ways to formalize, describe, and rationalize aesthetic creation. For these intentions, the computer came as a perfect creative partner and a catalyst of motivations that had been around long before.
Using the computer to create and manipulate images became possible in the 1950s when the first displays were made available to exhibit graphics, and the first plotters were created to print them. Although the initial focus was on visualizing and understanding mathematical models and physical phenomena, it didn't take long for artists to see the computer as a potential creative ally and the provider of new languages and means of expression.
The first artworks of this kind were produced using analog computers associated with cathode-ray oscilloscopes, which served as basic displays. The graphics could then be registered on analog film strips or photographed. One example is Electronic Abstractions, a series of computer-generated graphics produced by American Ben F. Laposky in 1952 and credited as one of the first pieces of computer art.
Ben F. Laposky. Electronic Abstractions. 1952
Notably, Laposky recognized the beauty and artistic capabilities of computers at a time when most discussions were focused on practical, mathematical uses, illustrating the first generation of such creators. This is reflected in his descriptions: “Electronic Abstractions are abstract art forms, traced by intricate electrical waves on the screen of a cathode-ray oscilloscope. [...] They are compositions of electrical vibrations in light as pleasing to the eye as compositions of sound vibrations in music are pleasing to the ear. These beautiful visual rhythms and harmonies of electronic abstract art may be recorded by means of photography”.
Although he did not explicitly call it computer art, Laposky intended his oscillons to be received as art, as demonstrated not only by the descriptions above but also by how he published and presented the artifacts as artistic material (Laposky, 1952).
Ben F. Laposky. Electronic Abstractions. 1952
The term "computer art" came only later and, generally speaking, refers to a broad set of artistic procedures, acts, and strategies that artists can employ in association with the computer. Authors in the field also crafted categories for classifying the role of a computer in a given creative process, whether the results remain digital or are transferred to another medium. In short, the computer can be seen either as a tool for creating and synthesizing preconceived ideas or as a creative medium through which the very concept of computing becomes an artistic subject. This distinction is important, as it results in different levels of autonomy and novelty.
Despite this variety of mediums, it is undeniable that the popularity of computer art as a practice, and the level of aesthetic relevance computers gained in the previous decades, cannot be dissociated from the chain of improvements in computer graphics happening since then. Although exciting, the first explorations were limited by modest graphic expression, which changed drastically after Ivan Sutherland developed Sketchpad in 1962 and set the foundations for modern user interfaces and real-time graphics. In another article, “From Computer Graphics to Computer Art”, I discussed in depth how developments in computer graphics enabled high-quality output and sparked the interest of a community of artists who began using computers to create art.
Yet, computer-aided art and design faced a troubled beginning as a valid discipline due to resistance from conservative practitioners and critics who questioned its relevance and legitimacy as an artistic practice. It was argued that the predominantly technological and scientific focus of the first publications on the subject, as well as the difficulty of establishing methodology and definitions for it, should place computationally produced works in a category of “non-art”, unable to find space in exhibitions. The very multidisciplinary nature of making art with a computer also made it hard for the discipline to find its place within the community. From the beginning, the scene was composed of scientists, artists, engineers, designers, physicists, and mathematicians who often had different perspectives on the work to be developed and on which aspects should be emphasized (Taylor, 2014).
The art historian Frank Popper pointed out that many influences sparked what he called “computer and virtual art”. In his book From Technological to Virtual Art, Popper looks back at some of the art movements of the 20th century and outlines their impact on a growing class of creators who were beginning their experiments with the computer. Among the main influences are the luminous aspects of kinetic art, the exploratory nature of Pop Art, and the effervescence of cinema and animation. Mainly, though, during the second half of the 20th century, authors from the fields of philosophy, psychology, aesthetics, science, and art started exploring the concept of information aesthetics, a short-lived but strongly influential movement that sought to create mathematical models capable of evaluating and, thus, quantifying the aesthetic quality of artifacts. Such an effort had Abraham Moles (1920–1992) and Max Bense (1910–1990) as its most notable agitators and resulted in theories that were widely spread among European designers and artists during the 1960s.
The authors of information aesthetics proposed that a "modern aesthetics" should be developed, and they intended to create universal mathematical models capable of describing the aesthetic qualities of all forms. In other words, they aimed to define perception in objective terms, using mathematical principles instead of personal interpretation. The ideas of Moles and Bense ended up finding better adoption in fields of contemporary art that sat closer to mathematical and scientific communities, such as abstract art, concrete art, and the soon-to-be computer art. Despite being a possibly too simplistic and schematic approach to apprehending the vast territory of artistic creation, their ideas undeniably set the scene for the closest of relationships between art and digital technology. As a matter of fact, the computer was the perfect partner for exploring how quantitative concepts and procedures could become aesthetic production.
Bense's collaborators Georg Nees and Frieder Nake are considered the first to put on exhibitions of computer art, during the early 1960s. Both German scientists belonged to a group of pioneering scientist-artists known as The Algorists, a name for those who created their own algorithms to synthesize visual pieces. Max Bense was Nees’ supervisor and the person mainly responsible for introducing him to the still-young medium of computer art. After encountering Nees’ computer graphics experiments in 1964, Bense invited him to exhibit these works at his gallery. At that time, the place was mainly dedicated to concrete art and explorations based on the rationalist approaches Bense had been developing. Nees and Bense were also the publishers of the booklet Rot 19. Computer-Grafik (1965), a small publication that is possibly the first ever made on computer art, and which contains much of Nees’ work accompanied by explanations of the algorithms behind it. Around the same time, Frieder Nake — another scientist-artist influenced by Max Bense — was experimenting with computer art using the famous Graphomat Z64, a high-precision flatbed drawing machine created by engineer Konrad Zuse. Nake became especially known for his series of colored computer drawings produced through matrix multiplication, having contributed to all major exhibitions of computer art (Compart, 2012, 2018).
Left: Georg Nees in 1986 © Alex Kempkens; Center: Frieder Nake; Right: Max Bense in 1964 © Goebel Weyne.
Cover of the booklet Rot 19; Right: Georg Nees. Andreaskreuz. 1965
Although short-lived, the ideas of information aesthetics and their strong association with digital computers influenced experimental artists and designers around the world. In North America, John Whitney (1917–1995) became a pioneering artist for his work on computer-generated animation, while in Latin America the Brazilian-Italian Waldemar Cordeiro (1925–1973) is usually considered the most notable agitator of the field.
The trajectory of Cordeiro is also a good example of how the creative use of computers evolved as a product of the Modernist intentions of the 20th century. The artist was so in sync with the discussions on how technology would impact art and design that it's impossible to discuss Brazilian contemporary art without considering his contributions. As he wrote in 1973, computer art could be seen “as a process of objectifying ideas through images, approaching psychological, ethical, sensory, ideological, sensitive, and intellective variables through arithmetic and logical operations”. Cordeiro identified that this new kind of art would have a tendency to create multidisciplinary works by taking advantage of the scientific research and discoveries of the time, which for him was a continuation of the trends of concrete art “developed in the historical conditions of the first industrial revolution (suprematism, neoplasticism, constructivism, etc.)”. More interestingly, Cordeiro considered the rationalization of the creative process and the employment of computers a way to reflect on our human creativity, which illustrates the mindset of the creators of his time who were excited about this new type of partnership: “In case the artistic issues can be treated by machines or by teams including a ‘partner’ — computer — we will learn more about how man handles artistic issues.”
In 1971, Cordeiro introduced computer art to Latin America through an initiative and exhibition he called Arteônica. It took place at the Fundação Armando Alvares Penteado in São Paulo and was one of the first events worldwide dedicated to art and technology.
Left: Waldemar Cordeiro ©The Mayor Gallery; Right: Waldemar Cordeiro. A mulher que não era BB. 1973.
A similar path was taken by pioneer Vera Molnár (1924–2024), beginning with her studies in 1947 at the Faculty of Plastic Arts in Budapest. There she was trained as a traditional painter, but developed a style based on the already-mentioned rationalist and mathematical trends that were influencing European art in the 1950s. In her early work, Molnár focused on exploring the aesthetic possibilities of combining simple shapes and colors. Over time, she deepened her reflection on the mechanisms of artistic creation, which led her to study the work of people such as Piet Mondrian (1872–1944), Kazimir Malevich (1879–1935), and the artists of concrete art. Ultimately, she approached the scientific community, and mathematicians in particular, which helped her elaborate her signature style.
Molnár’s engagement with mathematics and geometry led her to incorporate methodical patterns into her work and develop an algorithmic mode of creation. However, she found such iterative procedures exceedingly laborious and prone to imprecision when done by hand, which motivated her to seek out mechanical alternatives. In 1968, she discovered the computer and the benefits it could bring to her practice. What is particularly fascinating is that Molnár’s ability to work with computers predates most efforts to make them easier to manipulate and program, evidence of how determined she and the other programming artists of her time had to be.
Vera Molnár in her atelier. 2017. ©Galerie La Ligne, Zürich.
In an interview in 1975, Molnár described her creative process, referring specifically to RESEAUTO, a computer program she created to render the artworks below:
“This program permits the production of drawings starting from an initial square array of sets of concentric squares. The available variables are: the number of sets, the number of concentric squares within a set, the displacement of individual squares, the deformation of squares by changing angles and length of sides, the elimination of lines or entire figures, and the replacement of straight lines by segments of circles, parabolas, hyperbolas and sine curves. Thus, from the initial grid, an enormous variety of different images can be obtained”.
Vera Molnár. Structure de quadrilatères. 1988; Right: Vera Molnár. 144 trapezes. 1975.
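Molnár's description reads almost like pseudocode. Below is a loose modern sketch in Python of the kind of procedure she outlines; the function name, parameter names, and jittering scheme are my own illustration of her stated variables (displacement, deformation, elimination), not RESEAUTO's actual logic:

```python
import random

def reseauto_like(sets_per_side=4, squares_per_set=5,
                  max_displacement=0.15, max_skew=0.1,
                  drop_prob=0.1, seed=1):
    """Generate a grid of jittered concentric squares, loosely inspired by
    Molnár's description of RESEAUTO (all names here are illustrative)."""
    rng = random.Random(seed)
    figures = []  # each figure is a list of four (x, y) corners
    for row in range(sets_per_side):
        for col in range(sets_per_side):
            cx, cy = col + 0.5, row + 0.5            # center of this grid cell
            for k in range(1, squares_per_set + 1):
                if rng.random() < drop_prob:         # "elimination" of figures
                    continue
                half = 0.5 * k / (squares_per_set + 1)   # nested square sizes
                dx = rng.uniform(-max_displacement, max_displacement)
                dy = rng.uniform(-max_displacement, max_displacement)
                corners = []
                for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
                    # "deformation": jitter each corner independently
                    corners.append((cx + dx + sx * half + rng.uniform(-max_skew, max_skew),
                                    cy + dy + sy * half + rng.uniform(-max_skew, max_skew)))
                figures.append(corners)
    return figures
```

Plotting each four-point polygon, with matplotlib, an SVG writer, or a pen plotter, yields a grid of displaced, deformed concentric squares in the spirit of the works above; varying the parameters produces the "enormous variety of different images" she describes.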
Essentially, Molnár believed that the computer could satisfy artists’ desires for innovation and, at the same time, encourage the mind to work in unconventional ways. What is important for our discussion is to acknowledge that for Molnár, Cordeiro, and other like-minded creators of their time, using the computer came from a desire to gain efficiency and precision, but also from a strong push to reflect on their practice and explore new ways to create. Molnár passed away in 2024, and her remarkable career was fondly remembered by major art publications and newspapers.
In the second half of the 1960s, computer art began to gain significance in the art world as the computer itself became an irresistible cultural object (Taylor, 2014). The chain of technical improvements, associated with ever-increasing enthusiasm, culminated in one of the most important events for the field: Cybernetic Serendipity, an exhibition of computer-aided art and creativity held at the Institute of Contemporary Arts in London in 1968. The exhibition, credited as the first of its kind, was curated by Jasia Reichardt and contained not only graphic pieces created with the aid of computers but also music, poetry, dance, and animation. It was a milestone for the dissemination of the artistic qualities of the computer to the world.
In the book published on the occasion of the exhibition, Reichardt wrote two things that caught my attention. First, there is a realization that the potential of the computer was still unknown, as demonstrated by her words:
“Cybernetic Serendipity deals with possibilities rather than achievements, and in this sense, it is prematurely optimistic. There are no heroic claims to be made because computers have so far neither revolutionized music, nor art, nor poetry, the same way that they have revolutionized science”.
But what is more pertinent is what Reichardt considered the biggest impact of computers on art, design, and creativity: "New media, such as plastics, or new systems such as visual music notation and the parameters of concrete poetry, inevitably alter the shape of art, the characteristics of music, and content of poetry. [...] It is very rare, however, that new media and new systems should bring in their wake new people to become involved in creative activity [...]. This has happened with the advent of computers. [The engineers] have occasionally become so interested in the possibilities of this visual output, that they have started to make drawings which bear no practical application, and for which the only real motives are the desire to explore, and the sheer pleasure of seeing a drawing materialize. Thus, people who would never have put pencil to paper, or brush to canvas, have started making images, both still and animated, which approximate and often look identical to what we call 'art' and put in public galleries”.
Reichardt was naturally referring to the advent of computing, but her point is still very much pertinent to the current debate around generative AI.
Computer Dance performance during the exhibition Cybernetic Serendipity in London, 1968; Right: Computer paintings exhibited during the same expo.
It is true that creators had been successfully experimenting with printed computer art since the 1950s, but the field gained another level of expressiveness through the efforts of researchers, engineers, and companies to improve the quality of displays and rendering technologies. What is particularly interesting to observe is that this relationship ran both ways. While many creators were motivated by the ever-increasing improvements and investments in graphic technologies, some were also actively working with universities, laboratories, and companies to push the limits of computer-generated graphics, improving both their quality and expressiveness.
Computer artist Lillian Schwartz, for instance, collaborated with various tech laboratories throughout her career, such as the famous Bell Labs, where she partnered with engineers to experiment with graphics and animation through programming languages such as BEFLIX, EXPLOR, and SYMBOLICS. Her experiments as a programming artist combined elements of hand painting, digital collage, and digital image processing, resulting in pieces that mix traditional artistic techniques and digital technology. Unlike other pioneers, Schwartz transcended the somewhat rational aesthetics of early works, employing the computer as an artistic medium to develop a fun, vivid, and colorful style. The artist was also known for playing with color perception and sound to create interactive installations.
Lillian Schwartz at Bell Labs. ©Lillian Schwartz's Website
Lillian Schwartz and Ken Knowlton. Frames from the Pixillation movie. 1970. ©Lillian Schwartz’s Website
During the 1970s, the field of computer graphics saw most of its formal methodologies developed, as well as a rise in popularity due to efforts that took it from research labs to industry, television, and other mass media. It was also during the 1970s that Thomas A. DeFanti developed the GRASS and ZGRASS programming languages, which made scripting 2D animation easier and thus became a hit in the artistic world. In the years that followed, three-dimensional rendering became more accessible and efficient as researchers and animation companies delivered many improvements. Ultimately, these graphics came to occupy a position of great prominence within the community, aesthetically influencing many practitioners. It was around the 1980s, as well, that 3D computer graphics gained prominence outside laboratories and studios, catching the attention of general audiences through TV, movies, and advertisements (Jankel; Morton; Leach, 1984).
From the 1990s on, the very term “computer art” acquired a somewhat nostalgic character, and it is often used today to describe the initial phase of the discipline, in which equipment was less powerful and the aesthetics of the results simpler. With greater computational power and diversification, a new generation of artists began to explore bolder aesthetics, interaction, and animation as machines evolved to deliver greater expression. Moreover, computers also evolved to allow artistic experiments to assume a distributed scope, not necessarily occurring on a single machine, but rather through the internet. At the same time, it also became possible to deploy experiences onto a diversity of new devices such as wearables, projections, IoT, sensors, actuators, mobile devices, and VR equipment.
Also in the 90s, developments in user interfaces enabled designers and artists to employ computers professionally, making their work more efficient, although mostly through proprietary software. This new ease of manipulation ultimately implied two types of relationships creators could establish with computers: one in which the computer is seen as an executor of the ideas of a human designer who operates it mainly through proprietary software (e.g., Adobe, Macromedia, Sketch, Figma…), and one in which computers are seen as co-creators, sharing part of the creative process with a designer who intentionally employs generative creation.
Over time, and out of a need for greater focus, the term computer art gave way to many other "sub-areas" that saw computers as co-creators, such as net art, generative art, software art, creative coding, algorithmic art, and, finally, AI art — all with a certain element of generative creation at their core. Interestingly, this very distinction is open to debate today, as proprietary tools with features enabled by generative AI have been repackaging the approach of generative creation into ready-to-use user interfaces.
At first, the introduction of computers to the creative community faced resistance, especially because of their mechanical, mathematical, and multidisciplinary nature. The core of the debate was the comprehension that creating artifacts alongside computational systems involved ceding a part of the creative process — which before belonged entirely to humans — to the machine. Such a shift naturally challenged the conventional conception of ownership, as it raised an intuitive question:
Who should be considered the author of a given piece of intellectual production, art, or design when computers are involved? The human, the machine, or both?
With time, and as computers got more popular, such resistance diminished as creators embraced digital techniques, whether by choice or by market pressure. In any case, an important remark for our discussion is the realization that sharing authorship with machines is not something new brought by AI, but rather something that has been unfolding throughout the past decades and was more recently intensified. This is also why we are now seeing a revival of the ownership debate I discussed before.
Nonetheless, the possibility of pairing with computational intelligence has since the beginning motivated professionals to explore these machines as a fruitful creative medium.
For designers, for example, accustomed to apprehending current technologies and repurposing them for the tasks at hand, such exploration came with many intents: to extrapolate the capabilities delimited by available proprietary software (think of Adobe, Macromedia, Sketch, Figma), to obtain novel and unpredictable aesthetics, to work with parameterization and optimization, or to build artifacts that respond more autonomously in real time. In recent years, many designers have been leveraging generative creation quite literally, building or employing systems that can render flexible visual identities, parametric objects, responsive interfaces, and generative fonts.
In many ways, ceding a part of our creative processes to machines allowed us to design more efficiently, accurately, and with greater—or at least novel—expression.
We say something is generative when it is capable of producing an outcome or reproducing itself autonomously. Therefore, designing with generative creation implies intentionally employing an autonomous element that contributes to the achievement of a certain goal or to the synthesis of desired outcomes. Such autonomy can be granted in several ways: by letting systems make choices based on complex models, by relying on sufficiently smart Gen-AI agents, by designing with genetic algorithms, or simply by designing systems to respond to unexpected interactions.
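Among the routes above, genetic algorithms are perhaps the easiest to show in miniature. The Python sketch below is a generic textbook genetic algorithm, not taken from any of the authors cited here; its fitness function is deliberately trivial (count the 1-bits in a genome), where a design application would instead score layouts, palettes, or letterforms:

```python
import random

def evolve(genome_len=20, pop_size=30, generations=60,
           mutation_rate=0.02, seed=0):
    """A minimal genetic algorithm: the system, not the designer, decides
    which candidates survive and how they recombine."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy goal: maximize the number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # random mutation
                     for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

The designer's role here is exactly the "conductor" role discussed below: choosing the fitness function and the parameters, then letting the system search on its own.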
This is why we call generative AI generative: such agents are capable of autonomously generating outputs and synthesizing artifacts, regardless of how predictable the input is. Some even learn from previous interactions to respond in smarter ways and produce better results.
Formally, generative creation means employing systems or processes that are put into execution with a certain degree of autonomy, contributing to or resulting in a complete work. The critical point is that computational intelligence is intentionally used as an active participant in the creative process, and not only to support the decisions made by humans (Groß et al., 2018; Grünberger, 2019; Galanter, 2003).
In design, working with such computational autonomy promotes a fundamental change in the creative process: designers are no longer executors of tasks but conductors, a role that Groß and his collaborators describe as that of an “orchestrator of decision-making processes” in their book Generative Design (2018). Essentially, bringing generative agents into our work means giving up total control, part of which is now held by a form of computational intelligence we need to manage.
To illustrate this fundamental change, Groß proposed a model for the design process around generative creation characterized by an emphasis on abstraction. The main change, according to him, is not only that traditional craft recedes to the background while abstraction and information become the protagonists, but also that designers need to constantly reflect on how to translate their ideas into information that autonomous agents can "understand".
Thus, the relevant question is no longer “How do I draw/sketch/paint?”, but “How do I abstract?”.
Groß's original illustration for the model focused on generative design through coding, but I amended it to highlight the possible role of generative AI agents (in blue).
Unlike the conventional process, where designers implement ideas directly, generative creation involves a process of abstraction that transforms our ideas into pieces that can feed the generative engine. Until recently, to work with such technology, designers would need either to program themselves or to partner with engineers skilled in building systems around generative logic. Tools powered by Gen-AI, however, brought new interfaces capable of streamlining such processes, allowing designers to more easily partner with intelligent agents to render their intentions. Prompting in natural language has become the quasi-standard interface; every popular design tool nowadays either has or is developing its own GenAI features, aiming to cater to how we work (sketching, manipulating images, brainstorming visual ideas…).
Regardless of the approach, bringing generative creation into the design process makes it somewhat “indirect”: designers participate, even if only partially, through activities that happen on a level of abstraction above the craft, such as planning input to AI systems, prompting in natural language, creating algorithms or rules, programming, evaluating generated results, and refining output until it is satisfactory. In fact, when suggesting that ‘How do I abstract?’ is now the most relevant question, Groß is not only pointing to the layer of abstraction that mediates the design process but also marking the continuation and intensification of the approximation between design, art, and computing we have been discussing so far. For designers, this shift poses the need to comprehend concepts from both areas so they can interact with such systems more intentionally, a need for which the trajectory traced in this text can serve as inspiration.
In essence, when abstracting, people express an idea in a specific context while suppressing details irrelevant to that context (Beecher, 2016). The ability to abstract is thus related to choosing the right details to remove from a problem so that it can be better understood or represented. Since designers might no longer elaborate the solution directly, a more prominent layer of abstraction now mediates the creative process, and GenAI tools come as one more layer above it.
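Abstraction in this sense can be made concrete with a small, hypothetical sketch (the grid rules and parameter names are invented for the example): rather than drawing one specific layout, the designer describes what a layout *is*, as rules and parameters, and lets the system derive any concrete instance on demand.

```python
def grid(cols: int, rows: int, gutter: int, cell: int):
    """One abstract description of a layout: a grid of square cells.

    The drawing itself is suppressed detail; only the rules that
    generate cell positions are expressed.
    """
    step = cell + gutter
    return [
        (c * step, r * step)   # (x, y) of each cell's top-left corner
        for r in range(rows)
        for c in range(cols)
    ]

# One abstraction, many concrete layouts:
poster = grid(cols=3, rows=4, gutter=10, cell=50)
thumbnail = grid(cols=3, rows=4, gutter=2, cell=12)
```

The point is the asymmetry: the function is the abstraction the designer reasons about, while each returned list of coordinates is a disposable concrete outcome.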
At a crucial moment for the future of technology, I believe every designer should consider the following: working with generative creation can indeed make our work faster and more efficient, and open our creative processes to paths we wouldn’t consider otherwise. Aesthetically, for example, one of the main strengths of generative creation is its ability to offer new directions to design projects and break with habitual and predictable choices of form and representation, something that occurs because AI agents can be adopted as co-drawers with whom designers need to “negotiate” creation (Agkathidis, 2015).
When it comes to how we design with generative systems, it helps to consider the existence of two possible approaches. In the first approach, which authors Zhang and Funk call concept-based ideation, creative work begins conceptually, in our heads, probably before computational tools are used. Since the main challenge in this approach is to turn an already existing abstract idea into a satisfactory outcome, generative creation comes as a tool that designers employ to materialize concepts or execute tasks according to their expectations.
In the second approach, material-based ideation, these systems are seen as the creative “material” to be experimented with, which will suggest concepts to be elaborated. More practically, this means that the creative work gradually takes a more concrete shape and direction only after a series of experiments with different software tools, AI agents, and systems. In his book Analog Algorithm (2021), Christoph Grünberger acknowledges that this is essentially a practice in which total control is abandoned in favor of results, which can mean greater quality, speed, or aesthetic novelty. Because of this, the author states that designers start behaving mainly as interpreters and curators.
On the other hand, designers should also keep in mind that generative systems, whether powered by AI or not, are not neutral or immaterial entities but rather exist as a complex chain of interests, ownership, capital, and usage of natural resources with very practical and concerning implications. If today’s drive to weave technology into creativity continues a thread that has been unfolding for decades, embracing this paradigm without thorough criticism would mean simply disregarding the urgency of our times. In the words of authors Brain and Levin: “With the fracturing of civic life after social media, the malignant growth of digital authoritarianism, and the looming threat of environmental catastrophe, the sheen has come off Silicon Valley and the folly of technological solutionism has become clear. To the extent that we continue to prototype new futures within the framework of late capitalism, [...] there is a new urgency for artists and designers to have a seat at the tables where technological agendas are set”.
In this sense, the authors argue that “technologically literate” designers have an essential role to play in “checking society’s worst impulses”. Not only because we can ring the alarms when freedom and imagination are threatened, but also because we are in a better position to make space in conversations and organizations for as many perspectives as possible, hopefully including the most critical ones. In fact, many designers and artists have been engaging in political, experimental, and subversive initiatives through generative poetics, aiming to challenge these systems’ main applications and to repurpose their use. An example is Lucas LaRochelle's QT.bot, an AI agent that generates speculative queer futures along with possible scenes, defying the "normalizing character" of classification agents.
Another example is the set of projects Design Against the Machine, developed by Professor Boris Müller's students when challenged to explore "speculative future scenarios through creative, experimental websites, co-created with AI". One of the projects, Luminari, uses GenAI to picture a "Solar Renaissance Eco-Utopia", in which humans beat climate change through miraculous improvements in photovoltaic technologies and manage to establish a healthy relationship with the environment. In another, more provocative project, The User Manual, we are invited to consider a world in which machines are made to be operated not only by humans but also by other machines. Who is the user in this case? Who should we have in mind when planning the User Manual?
And as a final note: designers can and should be active, critical agents in this debate, instigating reflections and pushing for positive change. The last few years have been challenging for the design field, marked by a series of layoffs and a social reality plagued by complex problems. Notably, the leaders of companies that have long shaped our profession have shown explicit and concerning complacency towards right-wing extremism. Not surprisingly, our community lives through a moment of apathy as it seeks out its place once more. Yet, if the future is not being designed by those who care, then we can be sure the outcome will not be a positive one.