The Trucks of Tomorrow project asks what the role of the truck driver will be in an era when fully automated driving is available. We collaborated with Scania and the Interactive Institute Piteå to envision future intelligent truck driving environments.

2013 fall | UID | 6 weeks | together with JiaoJiao Xu & Regimantas Vegele


Here we present a multimodal (visual-audio-haptic) truck interface for a future truck driving scenario, in which the driver can spend time on tasks other than driving, thanks to the available automation technologies. What sorts of things can the driver deal with in this 'downtime'? The way we imagine it, future truck drivers might form their own freelance one-person companies, taking on transport jobs that large logistics companies prefer to outsource. To arrange and schedule these efficiently, drivers have to be available all the time, even on the road. When in fully automated driving mode, they can receive and apply for job offerings, manage their schedules, or deal with any practical matters around their trucking business. The driver can freely switch back to manual mode and regain control of driving.

Design process

Researching the context of truck driving

To see with our own eyes what trucking looks like in real life, we spent a day shadowing, observing and interviewing three drivers from a local transportation company. They showed us how they drive the vehicle, how they deal with traffic, and what they transport: we ended up at their client's logistics centre, where we got a glimpse of daily operations like freight loading, unloading and assembly.

To sum up what we learned from the user research: truckers are not concerned with driving per se, but with the freedom, security and future of the profession itself.

Sketching up a future logistics scenario

This hands-on experience led to the next research question: if we want to design the future of truck driving, we have to figure out how its larger context, the transportation and logistics industry, will change. Trend research reports from the logistics industry outlined several large and vague concepts around flexible, organic, real-time distribution of goods. To make these concrete and useful, we sketched up a possible future logistics scenario from the perspective of truck driving. Based on the interviews, we created a persona for the main role of this future driving scenario. The scenario then served as the basis for defining the functionality and information the UI has to present to the driver.

Developing multimodal UI concepts

First iteration

Our approach to ideating multimodal UI concepts was detailed and exploratory. We broke the speculative logistics scenario down into specific steps and defined what kind of information the UI has to show at each one. The aim was to match the necessary content with the right presentation modes.


The first iteration entailed trying out different input and output modalities in various combinations: 2D and 3D graphics on the HUD, audio, tactile output, haptic output, gesture control, and eye-tracking. This phase yielded many wireframes and several learning points about which modalities work with which sorts of information.

Second iteration

During the second iteration, the initial wireframes of the best ideas were refined into one overall concept combining visual graphics, audio cues and tactile information on the dashboard. Prototyping the ideas served as a proof of concept, showing which aspects break down and which stand the test.

Final design

The final version of the concept brought all previous conclusions together by balancing the combination of visual, audio, and haptic information presentation. Visuals on the HUD serve as the main content carrier (schedules, itineraries, navigation). Audio cues raise the user's attention even when she is not looking at the screen; moreover, they reinforce the visual info by expressing personal preferences through subtle differences in musical tone. The user controls the system with a haptic input device; in addition, tactile surfaces on the dashboard and the windscreen show meta information (weather, road conditions) in the periphery.
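The modality balance described above can be sketched as a simple routing table. This is only an illustrative sketch, not the project's implementation; all names and mappings are hypothetical, chosen to mirror the concept's split between HUD content, attention-raising audio cues, and peripheral tactile meta info.

```python
# Hypothetical routing of information items to output modalities,
# following the concept's division of roles between HUD, audio and
# tactile surfaces. Not from the actual project.

MODALITY_RULES = {
    "schedule":       {"hud"},           # main content carried visually
    "itinerary":      {"hud"},
    "navigation":     {"hud", "audio"},  # audio cue raises attention
    "job_offer":      {"hud", "audio"},
    "weather":        {"tactile"},       # peripheral meta information
    "road_condition": {"tactile"},
}

def route(item_kind: str) -> set:
    """Return the set of output modalities for an information item.

    Unknown item kinds default to the HUD, the main content carrier.
    """
    return MODALITY_RULES.get(item_kind, {"hud"})
```

A routing table like this makes the design decision explicit: every item type has a deliberate home, and peripheral information never competes with the main visual channel.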

When we design for fully automated driving, we assume that all sensing technologies and information (traffic, weather, road, navigation, surroundings etc.) are available for the driving system to make decisions. While the computing systems of the future truck will be able to handle this flood of data, the truck driver will still need to be informed about a few things. A well-balanced human-machine interaction can help with that: it can make the incoming information manageable for the driver, whenever she needs it. Information presented to the driver is filtered based on her preferences.
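The preference-based filtering could work along these lines; this is a minimal sketch under assumed names and thresholds, not the project's actual system.

```python
# Illustrative sketch of preference-based information filtering.
# Item kinds, relevance scores and the threshold are all hypothetical.

from dataclasses import dataclass, field


@dataclass
class Item:
    kind: str         # e.g. "job_offer", "weather", "traffic"
    relevance: float  # 0.0 .. 1.0, as estimated by the system


@dataclass
class Preferences:
    muted_kinds: set = field(default_factory=set)
    min_relevance: float = 0.5


def filter_items(items, prefs):
    """Keep only items the driver cares about: not muted, relevant enough."""
    return [i for i in items
            if i.kind not in prefs.muted_kinds
            and i.relevance >= prefs.min_relevance]
```

The point of the sketch is the interaction principle: the system, not the driver, carries the burden of deciding what is worth attention, and the driver shapes that decision once, through preferences, rather than triaging every incoming item on the road.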