Design for Well-being
Summary
Improving a user’s quality of life through technology has become a trend over the past decade and will continue to be one for the foreseeable future. In particular, face-recognition software has immense potential to improve well-being when integrated into the personal devices people use daily. This website’s goal is therefore to use the Affectiva library to recommend a meal to the user based on the facial expressions they display while watching a video of potential meal ideas, streamlining a part of their day and improving their overall well-being.
Brainstorming
We began the brainstorming process by thinking of ways the data collected by Affectiva’s facial-recognition algorithm could drive a program that streamlines part of a user’s daily life. As seen in Figure 1, we explored several ideas and concepts before settling on meal-selection software.
Originally, my team and I wanted to let the user select a meal type (e.g., breakfast, lunch, dinner) before watching the corresponding video, but due to time constraints our implementation was limited to a single set of food choices.
The Design Process
First, we needed to narrow down the features our website would offer. We were certain we needed a video showing different meal suggestions, each displayed for a fixed time interval, so that we could record the user’s facial expressions over consistent windows of time. We decided that 3-second intervals were an appropriate length, allowing us to display 10 meal suggestions within a 30-second video.
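Because the video is segmented evenly, the meal on screen at any playback time can be recovered with simple arithmetic. The sketch below illustrates this mapping; the constant and function names (INTERVAL_SECONDS, MEAL_COUNT, mealIndexAt) are illustrative choices, with only the 3-second, 10-meal scheme coming from our design.

```javascript
// Each meal occupies one fixed-length segment of the 30-second video.
const INTERVAL_SECONDS = 3; // length of each meal segment
const MEAL_COUNT = 10;      // number of suggestions in the video

// Map a playback time (in seconds) to the index of the meal on screen.
function mealIndexAt(seconds) {
  const index = Math.floor(seconds / INTERVAL_SECONDS);
  return Math.min(index, MEAL_COUNT - 1); // clamp the final boundary
}
```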
To make use of what Affectiva’s library could offer, we needed the user to grant the browser access to their webcam. This, in turn, enabled our Javascript to measure the user’s joy coefficient over each 3-second interval, 10 times (once for every meal suggestion shown in the video). We could then trace each joy coefficient back to the meal suggestion on screen during that interval. However, there was a slight delay between the user clicking to play the YouTube video and the website prompting them for webcam access.
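The sketch below shows roughly how joy readings can be collected with Affectiva’s browser SDK (affdex.js). The CameraDetector setup, detectAllEmotions, and onImageResultsSuccess callback follow the SDK’s documented usage; the container element id and the joyReadings buffer are illustrative assumptions.

```javascript
// Attach Affectiva's camera detector to a container element on the page.
const divRoot = document.getElementById("affdex_elements"); // assumed element id
const detector = new affdex.CameraDetector(
  divRoot, 640, 480, affdex.FaceDetectorMode.LARGE_FACES
);
detector.detectAllEmotions(); // enables emotion metrics, including joy (0-100)

const joyReadings = []; // one { timestamp, joy } entry per processed frame

// The SDK fires this callback for every webcam frame it analyzes.
detector.addEventListener("onImageResultsSuccess", (faces, image, timestamp) => {
  if (faces.length > 0) {
    joyReadings.push({ timestamp, joy: faces[0].emotions.joy });
  }
});

detector.start(); // triggers the browser's webcam-permission prompt
```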
Furthermore, one of our design aims was to make each meal its own object; however, that would have proven too time-consuming given the limited time we had to build a fully functional prototype. So, instead of following the original plan shown on the design sheet in Figure 5, we opted to simply call the function that measures the user’s joy coefficient 10 times, once per interval. After recording that data, we determined which time interval exhibited the greatest joy coefficient.
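A sketch of that reduction follows, reusing joyReadings and mealIndexAt from the sketches above; averaging the readings within each interval is an assumption about how "joy over time" is summarized per meal.

```javascript
// Average the joy readings that fall inside each 3-second interval,
// then return the index of the meal whose interval scored highest.
function pickMeal(joyReadings) {
  const sums = new Array(MEAL_COUNT).fill(0);
  const counts = new Array(MEAL_COUNT).fill(0);

  for (const { timestamp, joy } of joyReadings) {
    const i = mealIndexAt(timestamp);
    sums[i] += joy;
    counts[i] += 1;
  }

  let best = 0;
  let bestAvg = -Infinity;
  for (let i = 0; i < MEAL_COUNT; i++) {
    const avg = counts[i] > 0 ? sums[i] / counts[i] : 0;
    if (avg > bestAvg) {
      bestAvg = avg;
      best = i;
    }
  }
  return best; // index of the recommended meal
}
```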
Prototyping
On demo day, our team received insightful user feedback, which helped us understand how the average user would use our meal-selection website.
The overall impression our website made on testers was positive. Testers pointed out several strengths of the site: the originality of the idea, its simplicity of use, and how engaging it was. We achieved these effects by paring the features down to the minimum our Javascript needed to display a result. This minimalist approach kept the website uncluttered, drawing the user’s attention to the video they had to watch for the algorithm to produce a result.
However, some testers found our minimalist style ambiguous and confusing, as we did not include any directions on how to use the meal selector. This caused some confusion at first, so we resorted to having one of our team members explain the instructions to each user before they tested the site. The testing process was much smoother and more streamlined after that change.
Additionally, one tester suggested showing a ranking of the meals by favorability at the end of the video. This is a feature we would most likely implement in a future version of our website, as it would give the user a more in-depth breakdown of their meal results.
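Since per-interval averages are already computed, such a ranking would be a small extension. The sketch below reuses the assumed helpers from earlier and a hypothetical mealNames list:

```javascript
// Rank all meals by their average joy score, highest first.
function rankMeals(joyReadings, mealNames) {
  const sums = new Array(MEAL_COUNT).fill(0);
  const counts = new Array(MEAL_COUNT).fill(0);
  for (const { timestamp, joy } of joyReadings) {
    const i = mealIndexAt(timestamp);
    sums[i] += joy;
    counts[i] += 1;
  }
  return mealNames
    .map((name, i) => ({ name, avgJoy: counts[i] ? sums[i] / counts[i] : 0 }))
    .sort((a, b) => b.avgJoy - a.avgJoy);
}
```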
Finally, the results shown to our users seemed in line with their expectations, matching the meal at which they had displayed the most joy while on camera. However, one tester disagreed with our website’s output. She suggested the discrepancy could be caused by the lag between when the user plays the video and when they grant access to the webcam, which causes the algorithm to record the user’s facial expressions with a delay of roughly 0.5 to 1 second.
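One possible mitigation, which we did not implement, would be to timestamp each reading against the video’s own playback clock rather than the detector’s clock. The sketch below assumes a YouTube IFrame Player instance named player, using its documented getCurrentTime() method.

```javascript
// Tag each joy reading with the video's playback clock instead of the
// detector's own timestamp, so readings stay aligned with the meal on
// screen even if the webcam starts late. `player` is a YouTube IFrame
// Player instance (assumed to already exist on the page).
detector.addEventListener("onImageResultsSuccess", (faces, image, timestamp) => {
  if (faces.length > 0) {
    joyReadings.push({ timestamp: player.getCurrentTime(), joy: faces[0].emotions.joy });
  }
});
```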
Conclusion
Overall, our user feedback shows there are changes that could be made to our meal selector to improve the overall user experience.
Beyond our algorithm not being totally robust, given the lag between user actions, some discrepancies in the results can also be attributed to facial-recognition software still being in its relative infancy: environmental factors such as lighting, or glasses on the user’s face, have a drastic impact on our program’s output.