The University of Nottingham has completed the first UK study to investigate the effects of using visual displays to help autonomous cars communicate with pedestrians.

The results showed that pedestrians trust certain visual prompts more than others when deciding whether to cross in front of an autonomous car without someone in the driving seat.

The study used visual and written displays to communicate with pedestrians

The study aimed to understand how pedestrians respond to self-driving vehicles equipped with external human-machine interfaces (eHMIs).

To do so, a car was driven around the university’s Park Campus over several days with a ‘ghost-driver’ concealed in the driver’s seat.

The ‘ghost driver’ was concealed in the car

During the study, a series of different designs was projected onto the eHMI to inform pedestrians of the car’s behaviour and intentions. These included expressive eyes and a face, accompanied by short text-based prompts such as “I have seen you” or “I am giving way”.

This display was controlled by a team member in the back seat, while front and rear dash cam footage was collected to observe pedestrians’ reactions.

Additionally, researchers at four crossing points asked pedestrians to complete a short survey about their experience.

David R. Large, Senior Research Fellow with the Human Factors Research Group at the University of Nottingham, said:

We used three different levels of anthropomorphism: implicit, an LED strip designed to mimic an eye’s pupil; low, a vehicle-centric icon and words such as ‘giving way’; and explicit, an expressive face and human-like language.

An interesting additional discovery was that pedestrians continued to use hand gestures, for example thanking the car, despite most survey respondents believing the car was genuinely driverless – showing that there is still an expectation of some kind of social element in these types of interaction.

The study saw 520 pedestrians interact with the car, and collected 64 survey responses.

A range of dynamic expressive faces were displayed on the front of the test vehicle to communicate with pedestrians as it approached the crossing

Several indicators from the dash cam footage were used to evaluate each pedestrian’s crossing behaviour, including how long they took to cross, how long they looked at the car, and the number of times they glanced at the vehicle.

Combined with the survey results, this data provided significant insights into people’s attitudes and behaviours towards autonomous vehicles and the different eHMI displays.

Professor Gary Burnett, Head of the Human Factors Research Group and Professor of Transport Human Factors in the Faculty of Engineering, said:

We were pleased to see that the external HMI was deemed to be an important factor by a substantial number of respondents when deciding whether or not to cross the road – an encouraging discovery for furthering this type of work.

With regards to the displays, the explicit eyes eHMI not only captured the most visual attention, but it also received good ratings for trust and clarity as well as the highest preference, whereas the implicit LED strip was rated as less clear and invited lower ratings of trust.

Moving forwards, the team intends to consider a broader range of vulnerable road users, such as cyclists and e-scooter riders.

Studies will also need to be carried out over extended periods of time to understand how the public’s response to a driverless car might evolve.

