This ‘gazing car’ with robotic googly eyes recognizes pedestrian cues — here’s how

We can finally ‘see’ a future with safe, self-driving autonomous vehicles.

Deena Theresa
The cart was fitted with robotic eyes that could be moved in any direction, controlled by a member of the research team. The windshield was covered to give the impression that there was no driver inside.

Chang et al. 2022 

Picture this. You’re walking down the street, and you’re feeling…watched. No, there are no surveillance cameras, just a bunch of cars with googly eyes on them. It’s not a dream; you’re not in the Pixar movie Cars, but in Takeo Igarashi’s vision of the future.

The professor of computer science and his team of researchers from the Graduate School of Information Science and Technology, University of Tokyo, recently published their findings on autonomous cars with robotic eyes through the Association for Computing Machinery (ACM). Their field of interest? No, not cars or surveillance, but human-computer interaction research.

“Self-driving autonomous cars are the future, but they’re based on forecasts, algorithms, or sensors. The artificial intelligence of the car system does not have much interaction with pedestrians. And that’s a problem. That was our starting point,” Igarashi tells Interesting Engineering (IE) in an interview.

The researchers felt that the more “human” element of self-driving technology had received relatively little attention, though the concept has been explored before.

Robotic eyes were fitted on a golf cart for the experiment.

Enabling effective pedestrian-vehicle interaction

“We noticed that people who drove cars would use gestures, or communicate in some form with the pedestrian who is probably walking down the road. We wanted to do something similar with autonomous cars. Then again, the possibilities for communication, like sound or motion, are numerous,” Igarashi says.

The researchers then considered another factor. The driver becomes more of a passenger in self-driving vehicles, which is a “key difference.” As a result, they either pay little to no attention to the road and look away, or there is no one physically behind the wheel.

This makes it difficult for pedestrians to determine whether or not a self-driving car has registered their presence, as there may be no eye contact or communication from the people inside.

So, how would pedestrians know if a car has noticed their presence? “We decided to use eyes, or gaze, as the mode of communication between the vehicles and people. Our focus was on ‘paying attention,’” Igarashi explains.

As an experiment, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes, and was appropriately called the “gazing car.” The team wanted to test if putting moving eyes on the car would impact pedestrian behavior and if people would still cross in front of the moving vehicle when in a hurry.

The gazing car noticing a pedestrian in a VR scenario.

However, there was a catch. It would be pretty dangerous to ask volunteers to choose whether or not to walk in front of a moving vehicle in real life.

“Currently, the eyes are mostly for display. In this evaluation, the movement of the eyes was controlled manually by someone inside the car,” continues Igarashi.

Behind the scenes with a buggy and googly eyes

Participants played out four scenarios, two where the cart had eyes and two without, using virtual reality. They then had to decide whether to cross a road in front of a moving vehicle or not.

When the virtual vehicle was fitted with robotic eyes, participants were able to make safer or more efficient choices. The eyes conveyed the cart’s intent: looking toward the pedestrian signaled that it had registered their presence and was considering halting, while looking away signaled that it had not noticed them and would keep driving.

In (a) the cart is paying attention to the participant (safe to cross); in (b) the cart is not paying attention to the participant (unsafe to cross); and in (c) and (d) the participant doesn’t know.

The scenarios were recorded using 360-degree video cameras and 18 participants (nine women and nine men, aged 18-49 years, all Japanese) took part in the experiment. They were given three seconds each time to decide whether or not they would choose to cross the road in front of the cart.

The researchers recorded their choices and measured the error rates of their decisions – how often they chose to stop when they could have crossed and how often they crossed when they should have waited.
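The two error types follow a simple tally over trials: crossing when the cart wasn't going to stop (dangerous) versus waiting when crossing was safe (inefficient). A minimal sketch of how such error rates could be computed — the trial data and function below are invented for illustration, not taken from the study:

```python
def error_rates(trials):
    """Compute the two decision-error rates from the experiment's framing.

    Each trial is a pair (cart_will_stop, participant_crossed).
    A 'dangerous' error: crossing when the cart would not stop.
    An 'inefficient' error: waiting when the cart was going to stop.
    (Hypothetical structure, not the study's actual data format.)
    """
    dangerous = sum(1 for stop, crossed in trials if crossed and not stop)
    inefficient = sum(1 for stop, crossed in trials if stop and not crossed)
    n = len(trials)
    return dangerous / n, inefficient / n

# Invented example: one correct cross, one needless wait,
# one dangerous cross, one correct wait.
trials = [(True, True), (True, False), (False, True), (False, False)]
danger_rate, miss_rate = error_rates(trials)
print(danger_rate, miss_rate)  # 0.25 0.25
```

The study's gender finding maps onto exactly these two rates: the male participants' errors showed up in the first tally, the female participants' in the second.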

They noticed a clear difference between genders, which they found “surprising.” Other factors, such as age and background, may also have influenced the participants’ reactions. The findings demonstrated that road users have different behaviors and needs, which would call for different modes of communication in a self-driving world.

Male participants made several dangerous road-crossing decisions, such as choosing to cross when the car wasn’t stopping, but the cart’s eye gaze reduced those errors.

“The female participants didn’t make dangerous choices, but they were too cautious to a point that they stopped even when the car stopped. At the same time, they did say that they felt safe when the car’s eyes were on them. Overall, the results were very interesting,” says Igarashi.

Overall, the experiment suggested that the eyes led to safer crossing decisions for everyone.

Will we ever have cars that can ‘see’ us?

The experiment has its limitations, Igarashi says. For starters, it was done in a VR environment, which makes it difficult to accurately measure data.

“Real life is much different. Our cohort was also arguably small. And most importantly, all our participants were Japanese. The Japanese tend to be very cautious when it comes to road safety. The scenario will be very different in other countries. But I do think our research was a good starting point,” says Igarashi.

“People really like this kind of thing – some said the car was cute, and the rest said it looked creepy. Regardless, I could see that contribution towards safety is appreciated in general,” Igarashi continues as he discusses the responses to his research.

Next, the researchers intend to work on developing automatic control of the robotic eyes connected to the self-driving AI (instead of being manually controlled), which could accommodate different situations.
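Coupling the eyes to the self-driving AI could, in principle, mean steering the gaze from the vehicle's perception and planning outputs. A speculative sketch of one such policy — the class, function names, and coordinate convention here are assumptions for illustration, not the team's actual design:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Pedestrian:
    x: float  # lateral offset from the car, meters (assumed convention)
    y: float  # distance ahead of the car, meters

def gaze_target(detected: Optional[Pedestrian],
                will_yield: bool) -> Tuple[float, float]:
    """Speculative gaze policy: fixate on a detected pedestrian the car
    intends to yield to ("I see you, I'm stopping"); otherwise keep the
    eyes pointed straight down the road ("I'm not stopping")."""
    if detected is not None and will_yield:
        return (detected.x, detected.y)
    return (0.0, float("inf"))  # look straight ahead

print(gaze_target(Pedestrian(2.0, 10.0), True))   # (2.0, 10.0)
print(gaze_target(Pedestrian(2.0, 10.0), False))  # (0.0, inf)
```

This mirrors the manual protocol in the experiment: the operator looked at pedestrians the cart would stop for and away from those it would not.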

“If eyes can actually contribute to safety and reduce traffic accidents, we should seriously consider adding them. But it’s very difficult to predict when it would become a reality. As a person in academia, we try to work on what could become necessary and realistic in 10 years or so, when self-driving technology matures,” he says.

A 3D-generated image of a modern car AI evaluating driving conditions and street elements.

Over the past few years, makers of autonomous vehicles have raised a lot of money based on promises to develop a fully robotic product. But recently, industry leaders and experts have said that the technology will always require human supervision.

“That’s a difficult question. In the short term, producing a fully autonomous car is very difficult. Human support is necessary for most generic situations,” Igarashi argues.

“Regardless,” he continues, “we should think more about pedestrian-vehicle interaction. That was the most important part of my research. It’s futuristic, but this kind of communication technology is quite essential.”

Study abstract:

Various car manufacturers and researchers have explored the idea of adding eyes to a car as an additional communication modality. A previous study demonstrated that autonomous vehicles’ (AVs’) eyes help pedestrians make faster street-crossing decisions. In this study, we examine a more critical question, “can eyes reduce traffic accidents?” To answer this question, we consider a critical street-crossing situation in which a pedestrian is in a hurry to cross the street. If the car is not looking at the pedestrian, this implies that the car does not recognize the pedestrian. Thus, pedestrians can judge that they should not cross the street, thereby avoiding potential traffic accidents. We conducted an empirical study using 360-degree video shooting of an actual car with robotic eyes. The results showed that the eyes can reduce potential traffic accidents and that gaze direction can increase pedestrians’ subjective feelings of safety and danger. In addition, the results showed gender differences in critical and noncritical scenarios in AV-to-pedestrian interaction.