British designer Ross Lovegrove and his studio have collaborated with Google DeepMind to co-create a chair using a generative AI trained on the designer’s sketches.
Google’s AI development company worked with Lovegrove, his studio’s creative director Ila Colombo and design practice Modem on the project, which involved generating hundreds of iterations of a chair in the designer’s signature sinuous style.
The team then took one iteration – Seed 6143 – and refined it into a 3D silhouette and CAD model using a mix of AI and industry software, before finally 3D printing the seating design in metal.
Ross Lovegrove has co-designed a chair with AI
The project began at Google DeepMind, where researchers were keen to explore how artists and designers use AI and shape future development of the technology around their input.
Lovegrove’s studio has been experimenting with AI for several years, but the collaboration presented an opportunity to gain a deeper understanding of the workings of AI models and explore the potential for creativity.
“We’re not interested in using AI to do something that exists or something that is a replica,” Colombo told Dezeen. “It had to feel different, but different and structurally intelligent.”
Creative director Ila Colombo led prompt experimentation on the project
DeepMind and the creative team began by fine-tuning Google’s text-to-image generator Imagen on Lovegrove’s sketches, applying a process known as low-rank adaptation (LoRA) that is used to adjust large pre-trained models for specific tasks.
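The article doesn't detail DeepMind's actual training setup, but the LoRA technique it names is well documented: a frozen pre-trained weight matrix is adjusted by adding a trainable low-rank update, so only a small fraction of parameters are learned. A minimal numpy sketch of that idea, with all dimensions and values hypothetical:

```python
import numpy as np

# Illustrative LoRA sketch (not DeepMind's implementation): a frozen
# weight matrix W gains a trainable low-rank correction B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 32, 4             # rank r << min(d_in, d_out)
alpha = 8.0                            # scaling hyperparameter

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection,
                                       # zero-init so the adapter
                                       # starts as a no-op

def lora_forward(x):
    """Adapted layer: base output plus the scaled low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B still zero, the adapted layer matches the frozen base exactly.
print(np.allclose(lora_forward(x), W @ x))  # prints True
```

Training then updates only A and B on the new data (here, sketches rendered as image embeddings upstream), leaving the far larger W untouched.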
Language quickly became a key focus of the project – both on the training side, in articulating a design lexicon appropriate for Lovegrove’s work, and on the prompting side, to get the AI model to break from convention.
“The language wasn’t working…because Google’s training data is coming mostly from what Google is known for, which is Google search or anything available online, and that is extremely generic,” said Colombo.
This meant that when the team prompted with specialised technical or descriptive terms, the model could not generate images that matched them.
“For example, we couldn’t use parametric language,” Colombo said. “We couldn’t use the language used in the studio to describe Ross’s practice.”
“You name it, organic essentialism, parametric, dematerialised – all of the words we would normally use to describe his work were not understood.”
One iteration was chosen for 3D printing
Ultimately, even after enriching the model with a Lovegrove-ian vocabulary, the designers found that they had to think laterally with their prompting to generate images that interested them.
This included not using the word “chair”, even though the collaborators had early on decided that their project would be a chair.
“One thing that I understood is that the chair is a highly spoiled noun,” said Colombo. “Every time I used the noun chair, it was impossible for the machine to go beyond the stereotypical straight back, straight seat, four legs.”
When it comes to the kinds of prompts the team ended up using, Colombo provides the example of “seamless single surface extension, biomorphic form, lateral flows”.
After selecting an iteration they were happy with, the creative team used Google’s Gemini virtual AI assistant to visualise the design across multiple perspectives before creating a CAD model and running simulations using standard industry software. They then fabricated the design in metal using direct robotic-arm 3D printing.
The project marked the first time that Google DeepMind has pushed the Imagen and Gemini models to translate 2D images into 3D models using a specific design language like Lovegrove’s, according to product manager Bea Alessio.
The AI model was trained on Lovegrove’s sketches
DeepMind’s research scientists are thrilled with the project, particularly because it yielded a physical object that was functional and could be sat in, Alessio explained.
“We are eager to learn alongside designers,” she said. “We go through all the comments…We want to understand where the potential and the challenges come from, and how we can best solve them.”
For their part, Lovegrove and Colombo have mixed feelings about the value of AI in design, but see it as essential to actively explore its implications as the industry evolves.
When it came to iteration, Lovegrove felt that the exaggerated, machine-generated results sometimes more closely resembled the work of HR Giger – the concept artist behind Ridley Scott's Alien films – than the designer's own organic style.
“Whether the result through the Google DeepMind project is an improvement is incredibly debatable, for me,” he said.
The research project was a collaboration with Google DeepMind
“Because the way the human mind works – especially after so many years of designing – is that instantaneously, in a flash, you are evaluating material mass, function, ergonomics, weight,” the designer said. “You can tell in an instant whether something has what it takes.”
“It’s not bad,” Lovegrove concluded about Seed 6143. “If I were to step in [more actively], I would change it quite radically. But it’s not far off proportionally or in the height of the seat and those things, which are major factors in the creation of a chair.”
Over the coming years, Lovegrove predicts, collaborations with AI models will bring truly astonishing results in 3D design.
Designers were found to be the creative industry group most sceptical of AI in a recent attitudes survey in the UK, with four in five saying that the technology undermines originality.
The top photo is by Enric Badrinas; all others are by Martina Ferrera.
The post Ross Lovegrove co-creates chair with Google AI trained to emulate his style appeared first on Dezeen.