Design Against the Machine
Instead of banning AI tools, I made them a requirement for a speculative design class
I don’t particularly like ‘Artificial Intelligence’ (AI) — for all the obvious reasons. The results of AI design tools tend to be derivative, unimaginative and unethical. AI is not going to save us from global warming, AI is not going to eradicate poverty and AI is most certainly not going to make us smarter.
But generative AI is here — and I believe that, as an academic institution, it is our obligation to experiment with new technologies and to explore and evaluate their possibilities. Designers tend to be sceptical adopters. New technologies are always interesting — but the important question is how they can be applied to problem solving or to aesthetic expression.
So the question is: as designers, what exactly can we do with generative AI? As a design professor, I simply asked my students to come up with good answers.
In the summer semester of 2024, I taught a class in the Interface Design programme that I aptly called ‘Design Against the Machine’. Instead of banning AI, I made it a requirement. For better or worse, my students had to use AI tools for every part of the design process: ideation, sketching, storytelling, prototyping, image creation, visual design, typography and coding. But I also told them not to take the results for granted. Instead, they should question, evaluate, modify and even destroy the AI output.
There were three assignments in my class:
- Develop and explore a near-future narrative on digital technology. Think about how tech and society will evolve in the next ten or twenty years.
- Create an experimental website that documents or describes the above scenario. Make use of experimental typography.
- Use generative AI tools in every step of the design process.
This is a fairly long essay. If you want to jump straight to the results of the class, please do: Nonlinear Narratives of the Near Future; Exploring speculative future scenarios through creative, experimental websites, co-created with AI.
Speculative Design and Narratives of the Near Future
Speculative Design (initially termed ‘Critical Design’) is an approach to design developed in the mid-nineties by Fiona Raby and Anthony Dunne at the Royal College of Art in London. Their book ‘Speculative Everything’ is a seminal publication in this particular field of design.
Speculative Design is not aimed at problem solving. Its artefacts are not meant for production, and they are not supposed to be useful or practical. Rather, Speculative Design wants to question the status quo, foster debate and present alternative realities. In the last few years, it has evolved into an influential strategy for negotiating the advancement of digital technology. I was an MA student at the Royal College of Art in the late nineties — so for me it is really interesting to observe how Speculative Design has advanced since then.
To quote from ‘Speculative Everything’:
[The term Critical Design] grew out of our concerns with the uncritical drive behind technological progress, when technology is always assumed to be good and capable of solving any problem. Our definition then was that ‘critical design uses speculative design proposals to challenge narrow assumptions, preconceptions, and givens about the role products play in everyday life.’ It was more of an attitude than anything else, a position rather than a methodology.
Later, Dunne and Raby write:
Critique is not necessarily negative; it can also be a gentle refusal, a turning away from what exists, a longing, wishful thinking, a desire, and even a dream. Critical designs are testimonials to what could be, but at the same time, they offer alternatives that highlight weaknesses within existing normality.
While ‘Design Against the Machine’ was not a speculative design class in a strict sense, it was obviously speculative and experimental. The resulting projects are not ‘designs for production’ but ‘designs for debate’.
I explicitly wanted to keep away from the subject of how AI might be applied to today’s problems. Rather, I asked the students to speculate very broadly about possible technological near-futures. Instead of looking for AI applications, I asked them to develop visual, interactive narratives that consider both social and technological developments. Additionally, I told them not to be afraid of the absurd. In the past, the future has often turned out to be preposterous.
Experimental Web Design
In 2018 I wrote an essay called ‘Why Do All Websites Look the Same?’ — a scathing critique of the visual stagnation in web design. Six years later, most of the criticism still holds up. There is still very little variation, expression, and experimentation in web design. If you don’t know the essay, I suggest you read it.
As I wrote back then: ‘One of the fundamental principles of design is a deep and meaningful connection between form and content.’ This line is the most highlighted quote from the essay and sums up my problem with current web design. The designs are highly interchangeable and unspecific. Many websites are simply the generic exhaust of a content management system. The deep and meaningful connection of form and content is completely absent.
Unsurprisingly, an important inspiration for the class were Flash websites from the nineties and noughties. I have a special reverence for the websites of the movies ‘Requiem for a Dream’ (2000) and ‘Donnie Darko’ (2001), designed by Alexandra Jugovic and Florian Schmitt. Both sites were hauntingly beautiful, dark and scary, and full of mystery. The aim of our websites was to create the same level of atmospheric density and drama.
We treated the browser window as a stage. Visual and textual storytelling entwine. The drama unfolds as you follow the narrative. Unusual and unexpected typography intensifies the experience. Form and content have a deep and meaningful connection.
In order to achieve all this, I had to make sure that the students could utilise the full potential of CSS. So I invited Jonas Pelzer as a co-teacher. Jonas recently graduated from the FHP with an experimental web design project and brought both a strong technical and a strong design angle to the class.
Using and Breaking Generative AI Tools
Let’s not mince words here: Midjourney is scary. The images generated by Midjourney are tremendously vivid, lifelike and photographic. Specific options and parameters allow you to tweak the quality and create a sequence of images that are internally coherent and visually consistent. Just take a look at some images from the student project ‘Luminari’ — a depiction of a fictional solar punk community:
The images quite literally seem to be the result of a photographic documentary, taken by one person at one specific time and location.
And Midjourney gets better every week. When I first used it, the generated images were full of obvious mistakes. Midjourney was especially terrible with typography and calligraphy. It is still not perfect — but it is catching up.
But the great thing about Midjourney is that it allowed the students to quickly create several variations of a visual concept. Here you can see an image sequence from the project ‘Fashion Forge’. The student tried to pin down the best visual appearance for a fictional machine that turns old clothes into new fashion pieces:
But this class was not only about using Midjourney and creating the illusion of real photography. As mentioned above, the students had to use generative AI tools throughout their design process. There were no rules or recommendations — everyone had to figure out which tool worked best for each stage of the process. Consequently, there was a great variety of approaches and experiments. I can only highlight a small selection.
In the ideation process, many students used ChatGPT for evaluating and developing ideas. ChatGPT worked like a sparring partner for bouncing ideas and for creating interesting, coherent narratives.
But the most important role of ChatGPT was obviously writing text: not only short sales pitches for the Fashion Forge (‘A machine humming with purpose and renewal’) but also complete longform stories like the Luminari article for a fictional magazine called ‘NeoGraphics’. Unsurprisingly, this worked really well. The students did a bit of tweaking here and there — but the ChatGPT text was quite convincing.
Copilot and ChatGPT were also used extensively for coding. A lot of the animations and transitions that you see on the websites have their origin in generated code.
However, we noticed that code generation was deceptively easy. It worked really well for clear, concrete assignments (like ‘wrap each word of the following sentence in <span></span> tags’) but became increasingly challenging with more abstract concepts. If you cannot describe a function in plain language, you are not going to end up with usable code. Furthermore, the generated code tended to be bloated. AI code is convenient but will probably not contribute to a leaner web.
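To illustrate what such a ‘clear assignment’ looks like, here is a minimal sketch in JavaScript. The function name and exact output format are my own illustration, not code from the class:

```javascript
// A minimal sketch of the kind of concrete task AI handled well:
// wrap every word of a sentence in <span> tags, e.g. so each word
// can be animated individually with CSS.
function wrapWordsInSpans(sentence) {
  return sentence
    .split(/\s+/)          // split on runs of whitespace
    .filter(Boolean)       // drop empty strings from leading/trailing spaces
    .map((word) => `<span>${word}</span>`)
    .join(' ');
}

console.log(wrapWordsInSpans('Design Against the Machine'));
// → <span>Design</span> <span>Against</span> <span>the</span> <span>Machine</span>
```

A task like this can be specified in a single sentence, which is exactly why the generated code tended to be correct; the abstract, hard-to-verbalise functions were where it fell apart.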
Ethical and Environmental Considerations
The use of generative AI is unethical. There is no way around it. All AIs rely on training data from the web. The only reason ChatGPT works so well is that it was trained on almost every word and every sentence on the web. And although a lot of web content is available for free, this does not mean that it has no creator and no owner. Publishing a text on the web does not automatically mean that it can be re-used without the explicit consent of the author. But this is exactly what is happening right now.
A video with Mira Murati — the former CTO of OpenAI — illustrates this approach to gathering training data very well. When asked how OpenAI trained models for Sora (a text-to-video model), she replied ‘with publicly available data and licensed data’. When the interviewer asked her if she meant YouTube she completely avoided the question and mumbled ‘I’m actually not sure about that’. At least she had the decency to look embarrassed.
The fact that Midjourney allows you to specify certain films, cameras or lenses is an indication that they scraped a large part of Flickr. (I don’t know if they did that with or without consent.)
But the point is: the only reason generative AI works so well is that it was trained on billions of images and texts — including their metadata. And most of the creators of these texts, images and metadata did not see a cent. So when we generate an image with Midjourney, we are indirectly exploiting a photographer. Which is — well — unethical.
Furthermore, AI uses a lot of electricity. The Verge recently published an article on AI energy consumption. In the article, the author states that image generators like Midjourney use about 2.907 kWh per 1,000 inferences. In other words, generating a single image can take roughly as much electricity as fully charging your smartphone.
That does not sound like much — but it adds up. The exact energy consumption of the AI sector is difficult to calculate. But the Verge quotes a paper by Alex de Vries, who assesses the electricity usage of AI technology. De Vries discusses both current energy consumption and future scenarios. He writes that ‘the worst-case scenario suggests Google’s AI alone could consume as much electricity as a country such as Ireland (29.3 TWh per year).’ De Vries points out that this is unlikely to happen soon — but it is a valid scenario, and it gives you an idea of the amount of electricity that is needed to run generative AIs. And not all of that energy comes from renewables.
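To put the two quoted figures in relation to each other, here is a quick back-of-envelope calculation. It uses only the numbers cited above; the resulting image count is purely illustrative, not a claim about what that electricity is actually spent on:

```javascript
// Back-of-envelope arithmetic using only the figures quoted above.
const kWhPer1000Inferences = 2.907;               // The Verge: image generation
const kWhPerImage = kWhPer1000Inferences / 1000;  // ≈ 0.003 kWh per image

// De Vries' worst-case scenario: 29.3 TWh per year (1 TWh = 1e9 kWh)
const worstCaseKWhPerYear = 29.3 * 1e9;

// Purely illustrative: how many generated images would that correspond to?
const equivalentImages = worstCaseKWhPerYear / kWhPerImage;

console.log(equivalentImages.toExponential(1));   // → 1.0e+13, about ten trillion images per year
```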
For many years, the IT giants of the world were on track to become carbon neutral. Every year, they reduced their emissions and they were much more successful in doing this than other sectors like transport. However, this trend has reversed. Microsoft announced that their overall emissions increased by 29.1% in 2023. The reason for this is obviously related to their AI activities.
The use of generative AI is highly problematic from an ethical and environmental perspective. So why did we do an entire class with this technology? Simply put: I believe it is the responsibility of an academic institution to critically evaluate evolving technologies. And you cannot do that from the sidelines — you have to get your hands dirty. In order to fully understand the risks and potentials of emerging technologies, you have to work and design with them.
Student Projects
The student teams produced nine different design projects. You can find an overview of all projects here. For this essay, I have selected four projects that represent the class and the design process very well:
The Fashion Forge
By Hanne Dahlmann
What if sustainable clothing was less about timeless basics and more about imagining bold and experimental fashion styles each season? The Fashion Forge does exactly that. It is a machine that dissolves old garments and turns them into new fashion pieces. Simply toss your old clothes into the machine, prompt whatever you would like to wear, and get new pieces that fit you perfectly.
Burst Day
By KimLi Kaya Balzer, Lilly Stöckle and Enrico Reinhardt
In the near future, life mostly happens online. Remote jobs, virtual reality socialising, and hyper-controlled algorithms are the norm. However, this technological landscape facilitated the proliferation of niche sub-cultures. Young adults especially create distinct groups — called ‘Bubbles’ — that share an unmistakable visual appearance and a special interest, from neo-biologists who reactivate extinct species to theatre lovers enjoying Shakespeare. A special day has been introduced for these communities: ‘Burst Day’ allows you to ‘burst your bubble’ and explore new ideas.
Luminari
By Sascha Hoffmann and Elizaveta Mironova
Mankind has overcome the climate crisis and — with the help of technology — has mastered a harmonious coexistence with nature. Sustainable thinking and action were essential for this incredible progress. Pioneers of this development were the ‘Luminari’ who paved the way for a sustainable future. Back in 2030, the photographer and author James Dann had the special opportunity to witness these early communities of technological innovators and social reformers.
→ Luminari
The User Manual
By Sarah Kiss and Theodor Hillmann
We are on the threshold of a new era: we are dealing with machines that are no longer constructed by humans. Instead, we are facing a technological landscape that was itself created by technology. These machines provide only limited insight into their inner workings and themselves set the framework for our relationship with them. A manifesto — a psychogram.
Conclusion
So — is it worth it? Is AI going to make design better? Is the electricity well spent? Will generative AI tools unleash a new era of creativity?
Yes and no.
The class was very stimulating and insightful — and I am really happy with the resulting projects. The students did amazing work, and the discussions in the class were intense and illuminated the impact of AI. Midjourney is an exceptional AI tool for generating highly convincing visions of a near future. Generative AI images are able to translate abstract concepts into tangible and relatable stories. ChatGPT was great for ideation and for generating the descriptions of the near-future scenarios. Copilot helped the students to quickly create complex websites. Furthermore, the students experimented with many other, more obscure AI tools — which often gave the projects an unusual edge.
So the results from my class are pretty good. But was it worth spending so much electricity to create speculative designs? For an experimental educational exercise, I think it was. But is it worth spending so much energy just to automate everyday tasks? Or to create bland and irrelevant imagery? Or — even worse — for fake images, for misinformation or for stirring up hatred? Obviously not.
The risks and the costs of generative AI tools are high. But as we have demonstrated in this class, there are also opportunities and advantages. In the design process, AI tools can be powerful instruments that provide us with new ways of storytelling and visual expression.
However, I would argue that AI tools should only be used responsibly in a way that reflects the costs and risks. Arbitrary use of AI is pointless and perilous and should be avoided.
Easy to say and impossible to enforce.