Machines might not be ready to replace humans just yet, but there's no escaping the fact that they're playing a growing role in certain fields. Design is one of them. The sector is changing rapidly with the advent of so-called "generative" artificial intelligence, which designs objects based on textual descriptions.
This technology allows anyone to slip into the shoes of a designer and imagine products that could become part of our daily lives. Eric Groza has tried his hand at this. The Dubai-based creative director decided to entrust the design of a fictional collaboration between the Swedish furniture giant IKEA and the California-based outdoor clothing brand Patagonia to Midjourney, an artificial intelligence image generation program. He shared the results of his experiment on LinkedIn, in a post that drew nearly 43,000 reactions. It features fictitious creations such as camping chairs, an outdoor sofa and lamps that combine the visual identities of the two companies.
To come up with this capsule collection, Eric Groza generated 200 different images in Midjourney. In the end, he selected just eight that he believes illustrate both brands' commitment to sustainability and environmental protection. "For IKEA [this collaboration] would push the boundaries of their furniture to a new frontier. For Patagonia, it would showcase how their sustainable multifunctional comfort can be used indoors as well," he explained on LinkedIn.
But creating this imaginary collection was not so straightforward for the creative director. The first images generated by Midjourney were not very convincing, because Eric Groza had not yet mastered the art of the prompt, i.e., the ability to formulate the instructions sent to the artificial intelligence tool so that it can create a visual from scratch. The image generation programs that have flooded the internet in recent months, including DALL-E 2, Imagen, DreamBooth and Stable Diffusion, work on the basis of keywords and textual descriptions. They use language understanding models trained on very large amounts of data to design images from any written request. But you still have to know how to formulate these requests correctly. And that's exactly where the challenge lies in using generative artificial intelligence.
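To give a sense of what "the art of the prompt" involves in practice, here is a minimal sketch of how a structured prompt might be assembled before being sent to a text-to-image tool. The function name, fields and comma-separated template below are illustrative assumptions, not the official syntax of Midjourney or any other program; they simply show the kind of subject-plus-style-plus-details phrasing that tends to steer these models.

```python
def build_prompt(subject, style=None, details=None):
    """Assemble a text-to-image prompt from a main subject plus optional
    style and detail keywords, joined as a comma-separated description.

    This is a hypothetical helper for illustration only; real tools each
    have their own conventions for weighting and ordering keywords.
    """
    parts = [subject]
    if style:
        parts.append(style)
    if details:
        parts.extend(details)
    return ", ".join(parts)


# Example: a prompt in the spirit of Groza's fictional IKEA x Patagonia line.
prompt = build_prompt(
    "camping chair combining IKEA and Patagonia design language",
    style="product photography",
    details=["sustainable materials", "studio lighting"],
)
print(prompt)
```

The point of structuring a prompt this way is that vague requests ("a nice chair") produce unconvincing images, while a subject anchored by style and material keywords narrows the model's interpretation, which is precisely the skill Groza had to develop over his 200 generations.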