Refashioning Project Relate
Mobile app design
Key overview
Project Relate, an Android app from Google's Project Euphonia, aims to empower users with non-standard speech patterns through personalized speech recognition. Challenges included high cognitive load, rigid communication flow, missing receptive features, and limited voice-synthesis customization. Research findings shaped the redesign, which offers customizable voice settings, voice-synthesis options, and a "Listen" receptive feature. The revamped UI enhances user engagement without sacrificing aesthetics. Continuous improvement, cohort segmentation, and a dedication to inclusivity drive Project Relate's mission to bridge communication gaps for all.
Background
In the quest to revolutionize human-computer interaction, Project Relate emerged as an Android beta application, born from the noble efforts of Google’s Project Euphonia. Its mission is clear: to empower users with non-standardized speech patterns through personalized automatic speech recognition (ASR) models. The application's three pillars - Listen, Repeat, and Assistant - hold the promise of transforming communication for those who need it most. However, Project Relate faces challenges. Overwhelming cognitive load, rigid communication flow, a lack of receptive features, and limited voice synthesis customization hinder its true potential. These hurdles threaten to depress conversion rates and degrade the user experience.
A bit of a pickle
The conflict lies in bridging the gap between the application's noble mission and its current limitations. Users with non-standardized speech patterns struggle to engage effectively with Project Relate due to a few fixable shortcomings. The challenge was to redesign the application, address these issues head-on, and create a truly user-centric reality.
We took the scenic route
As part of our discovery process, we used questionnaires, surveys, and other research methods to gain the following insights about various neurodivergent populations:
Cognitive load: Rose et al. (2012) found that the largest proportion of forty adults with post-stroke aphasia selected Verdana at 14 point with 1.5 line spacing, and that photographs were highly favored as helpful graphical representations of language.
Communication flow: Montgomery (2005) determined that a slower conversational rate (input and output) allowed younger and older individuals with speech language impairments more time to allocate their attention for comprehension and further processing of information.
Receptive features: Ebbels (2014) found that providing visual cueing improved receptivity of language; more specifically, grammatical comprehension was improved.
Voice synthesis customization: Repova et al. (2021) determined that patients who experienced a total loss of voice did not opt for invasive rehabilitative methods to restore their voice; rather, they preferred personalized speech synthesis, such as voice banks. Mills et al. (2013) created personalized speech synthesis by extracting users’ natural prosodic properties and preserving patients’ vocal identity.
Based on these research insights, I created multiple iterations of interactive designs that included the following points:
Google Assistant should employ short, clear sentences when conveying information to the user.
Google Assistant should use 14-point Verdana with 1.5 line spacing, supporting graphics, and bolded meaningful units when conveying orthographic information to the user.
User should have control over Google Assistant timing and response.
“Receptive” feature should have relevant pictures to enhance receptivity.
User should have control over personalized speech synthesis.
Then, I took on the challenge of reshaping the application's user interface, addressing each of its critical components:
Onboarding
Funnel tracking and value mapping became crucial for onboarding. By analyzing tutorial completion rates, the team gained insights into the importance of these screens for user engagement. Because a series of guided tutorials is essential to showcase the app’s functionality and features to users, it was imperative to avoid design pitfalls that could hinder conversion.
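The funnel analysis described above can be sketched in a few lines. This is a minimal illustration, not the team's actual pipeline; the step names and counts are hypothetical.

```python
# Hypothetical sketch of onboarding funnel tracking: compute the step-to-step
# conversion rate for each transition in the guided tutorial flow.
# Step names and counts are illustrative, not real Project Relate data.

ONBOARDING_STEPS = ["welcome", "mic_permission", "record_phrases", "tutorial_listen", "home"]

def funnel_conversion(step_counts):
    """Return the conversion rate for each consecutive onboarding transition."""
    rates = {}
    for prev, curr in zip(ONBOARDING_STEPS, ONBOARDING_STEPS[1:]):
        reached_prev = step_counts.get(prev, 0)
        reached_curr = step_counts.get(curr, 0)
        rates[f"{prev} -> {curr}"] = reached_curr / reached_prev if reached_prev else 0.0
    return rates

counts = {"welcome": 1000, "mic_permission": 820, "record_phrases": 640,
          "tutorial_listen": 560, "home": 530}
for transition, rate in funnel_conversion(counts).items():
    print(f"{transition}: {rate:.0%}")
```

A sharp drop between two adjacent steps flags the screen where users abandon the tutorial, which is exactly the design pitfall the team wanted to catch.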
Home Screen
The transformation began with the creation of an inviting home screen. It was not just about aesthetics but about creating consistent alignment across the application. The home screen needed to be engaging and invoke simplicity, beckoning users to start their journey. Research and data analysis of user engagement with this screen informed aspects like the size of text, effective iconography, and placement of command buttons.
Voice Synthesis Customization
Understanding user preferences for voice synthesis customization allowed for resource allocation, ensuring that the most favored features received attention. Essentially, determining that users preferred features like voice effects, language style, pronunciation, and tone increased the app’s value and reduced drop-off. Data analysis of user adjustments provided insights into desired voice characteristics, leading to the development of new synthesis options. This kind of data intake can initially be used to create cohorts in order to understand behavior, engagement, and overall performance over time.
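The cohort creation mentioned above could work along these lines: group users by the voice-synthesis feature they adjust most often, then track each cohort's behavior over time. This is a hedged sketch under assumed event names, not the app's real telemetry schema.

```python
# Illustrative cohort segmentation: bucket users by their most-adjusted
# voice-synthesis feature. Event tuples and feature names are assumptions.

from collections import Counter, defaultdict

def segment_by_top_feature(adjustment_events):
    """adjustment_events: iterable of (user_id, feature) tuples."""
    per_user = defaultdict(Counter)
    for user_id, feature in adjustment_events:
        per_user[user_id][feature] += 1
    cohorts = defaultdict(set)
    for user_id, feature_counts in per_user.items():
        # Assign each user to the feature they adjusted most often.
        top_feature, _ = feature_counts.most_common(1)[0]
        cohorts[top_feature].add(user_id)
    return cohorts

events = [("u1", "pronunciation"), ("u1", "tone"), ("u1", "pronunciation"),
          ("u2", "voice_effects"), ("u3", "tone"), ("u3", "tone")]
cohorts = segment_by_top_feature(events)
```

Once users are bucketed this way, engagement and retention can be compared across cohorts to see which customization features actually reduce drop-off.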
Home Assistant Settings
Customizable voice settings, including pitch, speed, and choice of gender, were introduced, aligning with user-centric design and gaining support from stakeholders. Essentially, users are put in control of how the Google Assistant listens and responds to them. Analysis of voice setting choices yielded insights into user preferences, shaping default settings and anticipating the likely preferences of specific user groups.
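Deriving defaults from observed choices, as described above, might look like the following. The setting names and sample values are hypothetical; the idea is simply to take a robust central value for numeric settings and the modal choice for categorical ones.

```python
# Hedged sketch: derive default Assistant voice settings from users' own
# adjustments. Setting names and values are illustrative assumptions.

from collections import Counter
from statistics import median

def derive_defaults(choices):
    """choices: list of dicts like {"pitch": 1.1, "speed": 0.8, "gender": "female"}."""
    pitches = [c["pitch"] for c in choices]
    speeds = [c["speed"] for c in choices]
    genders = Counter(c["gender"] for c in choices)
    return {
        "pitch": median(pitches),                 # robust to outlier adjustments
        "speed": median(speeds),
        "gender": genders.most_common(1)[0][0],   # modal categorical choice
    }

sample = [{"pitch": 1.0, "speed": 0.8, "gender": "female"},
          {"pitch": 1.2, "speed": 0.7, "gender": "female"},
          {"pitch": 0.9, "speed": 0.9, "gender": "male"}]
defaults = derive_defaults(sample)
```

The median is chosen over the mean so that a few extreme adjustments do not skew the defaults that new users inherit.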
"Listen" Receptive Screen
The inclusion of this novel feature was essential for inclusivity and increasing user support. Tracking usage of the receptive feature provides insights into its effectiveness. Data analysis can determine the types of scenarios and environments in which users enable this particular feature. Also, understanding the trade-offs between the sequence of visual cues in visual reception and speech recognition for receptive users can guide future design decisions. For instance, determining whether the inclusion of visual cues actually enhances real-time transcription can alter future design protocols.
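The usage tracking described above could start as a simple tally of "Listen" activations by logged context. This is a minimal sketch under assumed session fields, not the app's real logging format.

```python
# Illustrative sketch: count "Listen" feature activations by the environment
# tag recorded with each session. Tags and sample data are assumptions.

from collections import Counter

def listen_usage_by_environment(sessions):
    """sessions: iterable of dicts, each with an 'environment' key."""
    return Counter(s["environment"] for s in sessions)

sessions = [{"environment": "clinic"}, {"environment": "home"},
            {"environment": "home"}, {"environment": "outdoors"}]
usage = listen_usage_by_environment(sessions)
```

Seeing where the feature is actually enabled (a quiet home versus a noisy clinic) is what would tell the team whether visual cues are pulling their weight in each setting.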
The big showdown
The culmination of these efforts led to a meaningful, user-centric redesign of Project Relate. Through iterative refinement, the beta application gradually came to serve the population’s specific needs. I initially feared that multiple onboarding screens would deter users from exploring the app further, but I ultimately concluded that this kind of deliberate pacing was necessary.
My anxiety was slightly alleviated by the inclusion of a “skip” option.
Overall, the balance between aesthetics and quick access to vital communication services was struck, enhancing the overall user experience.
The calm after the storm
With Project Relate now in its new form, the team looked ahead. Continuous improvement remained the mantra, refining user experiences, expanding personalized offerings, and exploring new avenues for engagement and support. Cohort segmentation further enriched the user experience, offering personalized recommendations and insights.
The closing chapter
The journey with Project Relate continues with unwavering dedication to ongoing improvement. It remains committed to its mission of making communication accessible to all, evolving and growing to meet the ever-changing needs of its users. Project Relate, with its current makeover, stands as a testament to the power of diverse user-centric design and data-driven decision-making, bridging the communication gap for those who need it most.