A robot drawing a picture of flowers

Study participants perceived robots as more creative when they watched the drawing process. (Photo by StudyFinds on Shutterstock AI Generator)

In a nutshell

  • People rated robot-generated drawings as significantly more creative when they could see the drawing being made, especially when they saw the robot itself in action.
  • Despite using two robots with very different appearances (one simple and mechanical, the other more human-like), the type of robot didn’t significantly affect creativity ratings.
  • Participants with more experience using AI tended to rate the robot art as less creative, while those with more familiarity with robotics gave higher creativity scores.

ESPOO, Finland — Is robot art more creative when you see the metallic hand behind it? According to Finnish researchers, the answer is yes. A new study reveals that humans give robot-created art higher creativity scores when they actually witness the mechanical artist at work, rather than just seeing the final product. Creativity ratings climb highest when observers can see both the process and the robot artist itself.

AI-generated art is exploding in popularity and sparking heated debates about creativity and authenticity in the digital age. While much attention has focused on purely digital AI art systems, this research specifically examined physical robots that create tangible artwork: drawing robots that put pen to paper.

The study, published in ACM Transactions on Human-Robot Interaction, found that creativity ratings increased at each stage as participants were shown more of the creative process.

“AI is playing an increasingly large role in creative practice. Whether that means we should call it creative or not is a different question,” says lead study author Niki Pennanen from Aalto University, in a statement.

How Watching Robots Create Art Changes Our Perception

Drawing robot
In the study, participants were asked to evaluate the creativity of robots based on their still life drawings, both after and during the process of making them. (Credit: Matti Ahlgren / Aalto University)

The study involved 60 participants who evaluated simple still-life drawings supposedly created by robots. Participants were shown these drawings under three different conditions: first, seeing only the finished drawing, then watching a video of the drawing being created (without seeing the robot), and finally watching the actual robot draw in person.

Creativity ratings increased with each level of exposure. Seeing the creative process unfold boosted ratings compared to just viewing the final product, and watching the robot artist at work pushed creativity ratings even higher.

The more information they had about the creative process and the creator, the higher participants rated creativity.

The study compared two robots with dramatically different appearances: a mechanical plotter robot called AxiDraw that resembles a simple machine, and a more complex, arm-like robot called xArm that more closely resembles human form. Despite their different appearances, both robots received similar creativity ratings.

The experiment involved some creative deception. While participants believed they were evaluating drawings autonomously created by robots, the drawings were actually made by a human artist first, then precisely reproduced by the robots. This allowed researchers to control for artistic quality and focus specifically on how observers’ perceptions changed based on what they saw of the creative process. Most participants considered the drawings creative, suggesting that perceived creativity isn’t limited to human creators.

People with more experience using AI technology tended to rate the robot art as less creative, perhaps because they had higher expectations, while those more familiar with robotics rated the creativity higher.

A robotic hand painting
Seeing the physical robot behind the artwork increased creativity ratings. (paulista/Shutterstock)

AI companies often deliberately present creative AI in ways that heighten the perception of creativity, like showing a virtual hand mimicking the process of painting or having humans physically perform AI-generated moves on a physical game board. Such staging techniques aren’t just marketing gimmicks; they shape how we perceive artificial creativity, according to this research.

If merely watching a robot draw makes us perceive its art as more creative, what does that tell us about how we assign value and meaning to art? AI-generated art continues to appear in gallery exhibitions and auction houses, and is being used by corporations to demonstrate their AI capabilities.

“Now that we’ve found this about people’s perception of AI creativity… does it also apply to people’s perception of other people?” asks study author Christian Guckelsberger from Aalto University.

AI-generated images and artwork can now be produced with a simple prompt, yet the human desire to witness the artistic journey remains unchanged, even when the artist has gears instead of hands.

Paper Summary

Methodology

The researchers conducted a within-subjects lab experiment with 60 participants who assessed the creativity of robot-created drawings. The experiment used a 3×2 factorial design, manipulating two independent variables: Perceptual Evidence (PE) and Robot Embodiment. PE had three levels: Product only (viewing just the finished drawing), Product+Process (watching a video of the drawing being created), and Product+Process+Producer (watching the actual robot create the drawing in person). Robot Embodiment compared two physical robots with different morphologies: AxiDraw (mechanistic) and xArm (organismoid). The order of drawings and robots was counterbalanced, but PE levels were presented in a fixed order. The researchers used still life drawings of object arrangements consisting of artificial plants, toy animals, and artificial fruits. Participants rated creativity using visual analogue scales from 0–100.
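To make the design concrete, here is a minimal sketch of the 3×2 within-subjects structure in Python. The factor names and levels come from the paper; the exact counterbalancing scheme the researchers used is not specified here, so the simple alternating robot order below is an assumption for illustration only.

```python
from itertools import product

# Factor levels as described in the paper
PERCEPTUAL_EVIDENCE = ["Product", "Product+Process", "Product+Process+Producer"]
EMBODIMENT = ["AxiDraw (mechanistic)", "xArm (organismoid)"]

# Within-subjects: every participant experiences all 3x2 = 6 combinations
conditions = list(product(PERCEPTUAL_EVIDENCE, EMBODIMENT))
assert len(conditions) == 6

def trial_order(participant_id: int) -> list[tuple[str, str]]:
    """PE levels always run in the fixed order above, as in the study.
    Robot order alternates across participants as a stand-in for
    counterbalancing (the study's actual scheme is an assumption here)."""
    robots = EMBODIMENT if participant_id % 2 == 0 else EMBODIMENT[::-1]
    return [(pe, robot) for pe in PERCEPTUAL_EVIDENCE for robot in robots]

# 60 participants, 6 trials each
schedule = {p: trial_order(p) for p in range(60)}
```

This layout makes the key design choice visible: Perceptual Evidence is deliberately not counterbalanced (its fixed order is one of the limitations the authors note), while robot order is varied across participants.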

Results

The study found a significant main effect of Perceptual Evidence on creativity assessment, with creativity ratings increasing as more evidence was revealed. Creativity was rated highest in the Product+Process+Producer condition, followed by Product+Process, and lowest for Product only. There was no statistically significant main effect of Robot Embodiment, meaning the two different robot types didn’t produce significantly different creativity ratings. The researchers also found that participants’ AI usage experience was negatively correlated with creativity ratings, while robotics experience was positively correlated. Additionally, robot likeability and perceived intelligence were positively associated with creativity ratings.

Limitations

The study acknowledged several limitations. The stimuli were simple croquis-style still life drawings, which some participants found difficult to evaluate creatively. The fixed order of PE conditions could have introduced order effects. The lab setting limited ecological validity compared to natural settings like museums. Technical difficulties with the xArm robot occasionally caused malfunctions. The researchers also couldn’t fully isolate embodiment effects from other factors, as different robot embodiments may have subtly influenced the product and process despite efforts to keep them constant.

Funding and Disclosures

The study was financially supported by the Academy of Finland (#328729, CACDAR) and the Helsinki Institute for Information Technology. The researchers declared no conflicts of interest.

Publication Information

The paper titled “From Product to Producer: The Impact of Perceptual Evidence and Robot Embodiment on the Human Assessment of AI Creativity” was published in ACM Transactions on Human-Robot Interaction (Vol. 14, No. 3, Article 41) in April 2025. The authors are Niki Pennanen, Simo Linkola, Anna Kantosalo, Nicolas Hiillos, Tomi Männistö, and Christian Guckelsberger from Aalto University, University of Helsinki, and Queen Mary University of London.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We do not agree nor disagree with any of the studies we post, rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor
