GLASGOW, Scotland--Digital displays on the exteriors of self-driving cars could help cyclists and pedestrians stay safe. Animated virtual drivers, traffic-light-like projections onto the road and emojis displayed on vehicle surfaces could allow autonomous vehicles to give advance warning of their movements.

Engineers at the University of Glasgow are developing technology to replace the complex nonverbal language currently shared between drivers, cyclists and pedestrians. They studied how that language can be designed, displayed and interpreted to help reduce the risk of collisions between cars and bikes.

“Over the years, drivers and cyclists have developed their own language of gestures and other nonverbal cues to help negotiate the roads safely,” says Stephen Brewster, Ph.D., a professor of human-computer interaction at the University of Glasgow’s School of Computing Science. “That language helps both parties decide who has right of way, for example, or signal an intention to merge lanes.

“Currently, self-driving cars lack the ability to communicate with cyclists with anything close to that level of detail or nuance, which could make cycling, scootering and wheeling much more dangerous unless we find a way to reproduce that dialogue,” warns Brewster.

“External human-machine interfaces (eHMIs), like digital displays on the outside of vehicles, are one promising solution to that problem,” explains Brewster. “However, research into what forms these should take has been lagging behind the other technological developments of autonomous vehicles. With this study, we set out to develop some ideas that could lead to a new language for eHMIs to enable communication between road users.”

Brewster and his colleagues grouped the results of their consultations into four concepts for how autonomous vehicles could communicate with cyclists and pedestrians.

The “virtual driver” concept would embed displays in autonomous vehicles’ windscreens, side windows and mirrors. Those displays would show a digital avatar of a human driver. The avatar would gesture with its hands and head, using the social cues that bike riders already exchange with real drivers, so cyclists would not have to learn any new methods of communication.

In the “safe zone” concept, displays on autonomous vehicles’ exteriors would show traffic signs to tell riders whether the car was going to yield or proceed. The cars would also project colors onto the road around them: green areas safe for cyclists to enter, and red areas cyclists should avoid.

The “emoji-car” design would use a roof-mounted display to show emojis to communicate with cyclists and pedestrians. Left and right arrows would echo the car’s turn indicators, and lightning symbols would signal an intent to accelerate.

The “LightRing” would use a band of LEDs wrapped around the body of the car, paired with a sensor on the vehicle’s roof. It would use colors and animations to communicate with pedestrians as well as cyclists. The car could signal proximity awareness by displaying an amber patch that grows as people get closer. Intentions to change speed could be shown by strokes of light that travel faster as the car accelerates and slow as it decelerates.
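The LightRing behavior described above amounts to two simple mappings: pedestrian distance to the size of an amber LED patch, and vehicle speed to the pace of the light strokes. A minimal sketch of that mapping, with hypothetical parameters (sensor range, LED count, stroke timing) that are illustrative assumptions rather than details from the Glasgow study, might look like this:

```python
# Hypothetical sketch of LightRing-style signaling logic (not the
# researchers' implementation). Maps sensed pedestrian distance to an
# amber patch size, and vehicle speed to light-stroke timing.

def amber_patch_size(distance_m: float, max_range_m: float = 10.0,
                     num_leds: int = 60) -> int:
    """Number of LEDs lit amber: the patch grows as a person gets closer."""
    if distance_m >= max_range_m:
        return 0  # nobody within sensing range: no amber patch
    closeness = 1.0 - (distance_m / max_range_m)  # 0.0 = far, 1.0 = adjacent
    return round(closeness * num_leds)

def stroke_interval_s(speed_mps: float, base_interval_s: float = 1.0) -> float:
    """Seconds between light strokes: strokes speed up as the car speeds up."""
    return base_interval_s / (1.0 + speed_mps / 5.0)
```

A stationary car would pulse a stroke every second under these assumed defaults; at 5 m/s the interval halves, giving cyclists a continuous visual cue of acceleration or braking.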