UX Design, Motion Graphics, Machine Learning
Capstone
2023
We designed a smartwatch app that uses sound recognition and machine learning to identify the non-verbal sounds users make every day, inferring their behavior in real time so they can manage their smart homes more naturally and effortlessly.
We began with background research and ideation, trying to figure out what improves the home IoT experience.
❌ Smart devices passively respond to users' commands
❌ Users must talk to each smart device individually
❌ Current devices rely mainly on voice control
Our research surfaced an interesting point: many people own Alexa, Google Home, and other IoT devices, but rarely use them.
The reasons for this are:
1. Some people find frequent voice interactions exhausting.
2. Voice recognition does not work well for people with different accents.
However, sound is still a very convenient interaction channel, so we began to ask: could another kind of sound be used instead of active voice control?
Inspired by accessibility design for the hearing-impaired community, we saw non-speech sound recognition as an innovative and convenient foundation for the home IoT experience.
We interviewed one of our target users; here is her persona.
Users want a home IoT experience that:
“I would like more agile control at home, instead of walking up to each switch or device.”
“I want to be able to control more devices freely, with devices working together to create a cohesive experience.”
“I want more options than voice commands. Sometimes I don’t feel like speaking.”
“I want an elegant and intelligent home experience.”
Based on these insights, we developed our problem statement:
With the problem framed, we built our first prototype: a cough-triggered humidifier. We collected audio samples of coughing from people around us and trained an ML model on them. When the microcontroller hears someone coughing continuously nearby, it turns on the humidifier for them.
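The trigger logic itself is simple. Here is a minimal sketch, assuming the classifier calls `registerCough` once per detected cough; `HumidifierSwitch` is a hypothetical stand-in for the relay the microcontroller drives, and the window and threshold are illustrative values, not our tuned parameters.

```swift
import Foundation

/// Minimal sketch of the prototype's trigger logic. Assumes the sound
/// classifier calls `registerCough` once per detected cough.
final class CoughTrigger {
    private var recentCoughs: [Date] = []
    private let window: TimeInterval = 30   // assumed look-back window
    private let requiredCount = 3           // assumed "continuous coughing" threshold

    func registerCough(at time: Date = Date()) {
        recentCoughs.append(time)
        // Drop coughs that fell outside the look-back window.
        recentCoughs.removeAll { time.timeIntervalSince($0) > window }
        if recentCoughs.count >= requiredCount {
            HumidifierSwitch.turnOn()       // hypothetical relay wrapper
            recentCoughs.removeAll()
        }
    }
}

enum HumidifierSwitch {
    static func turnOn() { print("Humidifier on") }
}
```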
User testing gave us some important feedback:
From this feedback, we realized:
1. We need a wearable that integrates the sensors and can hear the user at any time.
2. The device should interact with users to handle permissions and privacy settings.
Based on these insights, we decided to iterate on three aspects:
For an effortless and seamless experience, we designed a wearable that integrates the sensors, can hear the user at any time, and asks the user for permission before listening.
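As a sketch of that permission step, assuming the watch app uses the standard microphone-access prompt (the wrapper function name here is our own, hypothetical):

```swift
import AVFoundation

/// Ask for microphone access before the app starts listening for
/// household sounds; the completion handler runs on the main queue.
func requestListeningPermission(completion: @escaping (Bool) -> Void) {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        DispatchQueue.main.async {
            completion(granted)
        }
    }
}
```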
Inspired by the Apple Watch home screen, we found the bubble to be a good visual symbol for our design.
For intelligence, we collected sound data across several categories and trained a classifier with Create ML, Xcode's built-in model-training tool, exporting the result as a Core ML model.
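For live recognition, the natural pairing with a Create ML sound classifier is Apple's SoundAnalysis framework. The sketch below shows the general shape of that pipeline, assuming a model exported under the hypothetical name `HomeSoundClassifier`; the confidence threshold is illustrative.

```swift
import AVFoundation
import CoreML
import SoundAnalysis

final class SoundListener: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?
    private let queue = DispatchQueue(label: "sound.analysis")

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        self.analyzer = analyzer

        // HomeSoundClassifier is the hypothetical class Xcode generates
        // from the exported Create ML model.
        let model = try HomeSoundClassifier(configuration: MLModelConfiguration()).model
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: self)

        // Stream microphone buffers into the analyzer.
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            self.queue.async {
                analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
            }
        }
        try engine.start()
    }

    // Called by the analyzer whenever the model produces a classification.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first,
              top.confidence > 0.8 else { return }
        print("Recognized \(top.identifier) with confidence \(top.confidence)")
    }
}
```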
We picked four sounds from the video and converted them into real-time spectrograms; the machine recognizes each sound by its distinct color pattern. The four sounds: a door opening, keyboard typing, cooking, and a cat meowing :)
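For illustration, each spectrogram column is just the magnitude spectrum of one short audio frame; stacking columns over time and mapping magnitude to color produces the image the model sees. This naive sketch shows the math only; a real pipeline would use an FFT rather than this O(n²) loop.

```swift
import Foundation

/// Naive magnitude spectrum of one audio frame. Stacking these columns
/// over time (and mapping magnitude to color) yields the spectrogram.
func magnitudeSpectrum(_ frame: [Float]) -> [Float] {
    let n = frame.count
    return (0 ..< n / 2).map { k in
        var re: Float = 0
        var im: Float = 0
        for (i, sample) in frame.enumerated() {
            let angle = -2 * Float.pi * Float(k * i) / Float(n)
            re += sample * cos(angle)
            im += sample * sin(angle)
        }
        return (re * re + im * im).squareRoot()
    }
}
```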
We conducted one-on-one interviews and tests with 11 users.
From the testing, we found that:
Based on the insights we gained, we next decided to:
During usability testing, we found that in some scenarios, such as cooking, it is difficult for users to tap the watch screen. Our research showed that the Apple Watch supports gesture interaction, which proved to be a very convenient alternative.
Because the bubble-style UI was well received in user testing, we kept iterating on this design, improving the flow for controlling multiple devices at once as well as the notification experience and UI style.
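As an illustration of the bubble interaction, here is a minimal SwiftUI sketch with hypothetical device names; tapping a bubble toggles that device, so several can be active at the same time:

```swift
import SwiftUI

/// Bubble-style control screen: each device is a tappable circle,
/// and any number of devices can be active simultaneously.
struct BubbleHomeView: View {
    @State private var active: Set<String> = []
    private let devices = ["Lights", "Fan", "Humidifier", "Speaker"]

    var body: some View {
        LazyVGrid(columns: [GridItem(.adaptive(minimum: 64))]) {
            ForEach(devices, id: \.self) { device in
                Text(device)
                    .font(.caption2)
                    .frame(width: 64, height: 64)
                    .background(Circle().fill(active.contains(device) ? Color.blue : Color.gray))
                    .onTapGesture {
                        if active.contains(device) {
                            active.remove(device)
                        } else {
                            active.insert(device)
                        }
                    }
            }
        }
    }
}
```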
We tested with the Garmin design team, an IoT device owner, a graphic design professor, an Apple Watch super user, and a UX designer.
Our takeaways are as follows:
1. For effortless functionality: users can identify and differentiate modes at a glance.
2. For seamless functionality: display one mode at a time, which lowers cognitive load and enables value-conscious design built on information transparency.
Based on the testing feedback, we made the third version of the watch interface.
Based on our third design iteration, we created motion graphic versions of the interfaces.
As next steps, we plan to: