Creating a personal sound profile to take everywhere you listen.
To make Mimi the gold standard for personalized audio wherever people listen, we built a mobile SDK that contains all essential features of the sound technology and UX. The Sound Profile is the prime moment to engage users: to adopt personalized sound on their current device, and to reuse their Hearing ID on other devices in the future.
The product challenge was not to build our own pitch-perfect app, but to create a flexible, easily integrated toolkit that enables an ecosystem of diverse audio apps and devices to emerge.
JOURNEY MAPPING USER ENGAGEMENT
Informed by business goals and product strategy, our scope encompassed a robust SDK product supporting both mobile platforms (iOS, Android) and three use cases: personalized smartphones, and companion apps for headphones and for TVs. To find a universal user flow, we journey-mapped the levels of user engagement and used them to define requirements for our product designs.
After a meticulous benchmark of contemporary best practices against our software needs, we arrived at the following principles to drive design work on the Sound Profile:
Consciously limit space and attention footprint.
Aid navigation with informative transitions.
Show simple stats and graphics to motivate engagement.
Alter the UI meaningfully for different devices.
Guided by the universal user journey, all profile features were mapped out to support product planning and feature prioritization. Driven by the design principles, engineering and design jointly decided to build self-contained modules that are configured according to what makes sense for the user at a given moment on their device.
The UX architecture required several new components to be added to our semantic design system, to house all needed functionality flexibly. After several iterations of UI work, we landed on a header–card–bulletin structure that fits all needs while minimizing complexity.
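As a minimal sketch of how such a header–card–bulletin screen could be assembled from self-contained modules, consider the following. All class and module names here are illustrative assumptions, not the Mimi SDK API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a profile screen composed of self-contained modules,
// each declaring which slot of the header–card–bulletin structure it occupies.
// The screen itself only enforces slot ordering, keeping modules independent.
public class ProfileScreen {
    public enum Slot { HEADER, CARD, BULLETIN }

    public static class Module {
        final Slot slot;
        final String name;
        public Module(Slot slot, String name) { this.slot = slot; this.name = name; }
    }

    private final List<Module> modules = new ArrayList<>();

    // Modules can be registered in any order, per device and moment.
    public void add(Module m) { modules.add(m); }

    // Rendering groups modules by slot: header first, cards, then bulletins.
    public List<String> render() {
        List<String> out = new ArrayList<>();
        for (Slot slot : Slot.values())
            for (Module m : modules)
                if (m.slot == slot) out.add(slot + ":" + m.name);
        return out;
    }
}
```

The point of the sketch is the separation of concerns: each module is self-contained, while the screen enforces only the structural constraint.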
To optimize conversion from potential customers to Hearing ID users, the UI transitions limit friction between each key action, guiding users through onboarding almost unnoticeably.
Each touchpoint of the Personalized Sound Ecosystem is unique: users, devices, and needs all differ. To allow for various card configurations, the modular system is set up flexibly, yet with a few meaningful constraints.
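One way such "flexible yet constrained" configurations could look in code is sketched below, assuming (hypothetically) that every configuration must include the Hearing ID card and stay within a small card budget. Integration names and card names are illustrative:

```java
import java.util.List;

// Hypothetical sketch: per-integration card configurations that vary freely,
// while all obeying the same two constraints (required card, card budget).
public class CardConfig {
    static final int MAX_CARDS = 4;           // limits space/attention footprint
    static final String REQUIRED = "HearingId"; // every profile centers on the Hearing ID

    // Device-specific configurations; content differs, constraints do not.
    public static List<String> cardsFor(String integration) {
        switch (integration) {
            case "smartphone": return List.of("HearingId", "Presets", "Stats");
            case "headphones": return List.of("HearingId", "Connection", "Presets");
            case "tv":         return List.of("HearingId", "Presets");
            default:           return List.of("HearingId");
        }
    }

    // Constraint check that a build step or runtime assertion could enforce.
    public static boolean isValid(List<String> cards) {
        return cards.contains(REQUIRED) && cards.size() <= MAX_CARDS;
    }
}
```

Encoding the constraints as a single validation rule keeps per-device flexibility while guaranteeing every configuration stays meaningful.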
The Mimi mobile SDKs are at an MVP stage, in an iterative testing phase; this feature is yet to be released in partner products. I have omitted some graphical details for reasons of confidentiality. Credit to Ruby Bouwmeester and Merrick Sapsford for collaborating on this project.