Virtual Goggles


A novel approach to conversational support: an interactive Augmented Reality application that provides automatic insights on real-time transcribed speech.

Initial Phase

One day, our group was brainstorming potential research ideas when our professor, Ryo Suzuki, recommended that we refine and submit one of our low-fidelity concepts to a contest for HCI researchers.


We threw together a pitch for a project called PinVoice and sent it to Snap.

Our submission was one of 8 projects selected, with $100,000 of funding split between the teams.

Image by Li Zhang
“a picture is worth a thousand words”

Indeed, relying solely on spoken words to convey information is often harder than communicating the same information through a visual medium.

For example, when someone mentions new or unfamiliar words, ideas, or things, listeners often need visual references or additional descriptions to better understand what has been said.


Today, most people would simply pull out their smartphone and search for a related image or reference. However, the traditional process of looking up information on tablets or smartphones is often multi-stepped and cumbersome.

Because no appropriate user interface exists to aid in-person communication, this lookup process distracts people from the conversation in front of them.


These behaviors end up isolating, interrupting, and disengaging people from their in-person social interactions.

And so we developed our mission statement:

Can Augmented Reality provide us with a platform to create a user interface that keeps us engaged in our in-person social interactions while simultaneously displaying crucial contextual information to the user?



Our research paper featuring PinVoice will be submitted by April 2022 to UIST*

*UIST ’22: The 35th Annual ACM Symposium on User Interface Software and Technology, Oct 16–19, 2022
