Robots are becoming nearly as common in assembly plants as nutrunners and conveyors. The advent of collaborative robots is only furthering that trend. However, as robots play a greater role on the line, engineers must ensure that workers remain safe around the technology.

A new study conducted by researchers at the University of Southampton in England shows why. The researchers found that robots can encourage people to take greater risks in a simulated gambling scenario than they would if there were nothing to influence their behavior. “We know that peer pressure can lead to higher risk-taking behavior,” explains lead researcher Yaniv Hanoch, Ph.D., associate professor of risk management. “With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”

Published in December, the study involved 180 undergraduate students taking the Balloon Analogue Risk Task, a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank.” A balloon can explode at random, at which point the player loses any money earned for that balloon. Players have the option to cash in before this happens and move on to the next balloon.
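For readers curious about the mechanics, the task’s logic is simple enough to sketch in a few lines of code. The following Python snippet is a minimal, illustrative simulation of one BART balloon, not the researchers’ actual software; the 128-pump ceiling and the uniformly random explosion point are assumptions borrowed from the classic version of the task, and the function name play_balloon is hypothetical.

```python
import random

def play_balloon(max_pumps=128, cents_per_pump=1):
    """Simulate one BART balloon. The explosion point is drawn at
    random and hidden from the player; each pump adds money to a
    temporary bank that is lost if the balloon pops before the
    player cashes in."""
    explosion_point = random.randint(1, max_pumps)
    temporary_bank = 0
    pumps = 0
    while True:
        choice = input("Press ENTER to pump, or type 'cash' to cash in: ").strip().lower()
        if choice == "cash":
            return temporary_bank  # winnings for this balloon are banked
        pumps += 1
        if pumps >= explosion_point:
            print("Bang! The balloon exploded.")
            return 0  # the temporary bank is lost
        temporary_bank += cents_per_pump
        print(f"Balloon inflated. Temporary bank: {temporary_bank} cents")

if __name__ == "__main__":
    # A short three-balloon session; the real task uses many more trials.
    total = sum(play_balloon() for _ in range(3))
    print(f"Total earned: {total} cents")
```

The risk trade-off is built into the payoff structure: each additional pump raises both the potential reward and the chance of losing everything earned on that balloon, which is why how long a player keeps pumping serves as a measure of risk-taking.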

One third of the participants took the test in a room on their own. Another third took the test alongside a talking robot that provided instructions on how to take the test but was otherwise silent. These were the control groups. The remaining third, the experimental group, took the test with the robot providing instructions as well as encouraging statements, such as “Why did you stop pumping?”

The robot was the Pepper humanoid robot from SoftBank Robotics. Just under 4 feet tall, the robot can recognize human faces and emotions and is designed to interact with people.

The results showed that the group encouraged by the robot took more risks, blowing up their balloons significantly more frequently than participants in the other groups. They also earned more money overall. There was no significant difference in behavior between the students accompanied by the silent robot and those with no robot.

“We saw participants in the control condition scale back their risk-taking behavior following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before,” says Hanoch. “So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”

Hanoch says further studies are needed to see whether similar results would emerge from human interaction with artificial intelligence systems, such as digital assistants or on-screen avatars.

“With the widespread use of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community,” he says.