<aside> 😎 David Lindlbauer is an Assistant Professor in the HCII and leads the Augmented Perception Lab. Professor Lindlbauer’s research focuses on human perception and behavior when interacting with Augmented and Virtual Reality!

</aside>

😎 Who are you and what do you do?

My name is David Lindlbauer, and I'm an Assistant Professor at the Human-Computer Interaction Institute. I'm a technical HCI researcher, so I build new types of technologies and study them, and deal with ideation, iteration, and exploration. I lead a small research group here at CMU called the Augmented Perception Lab, with three PhD students (Hyunsung, Catarina, and Yi Fei), one postdoc (Yukang), one current intern from Tsinghua University (Zhipeng), and a lot of very fantastic undergrad and master's students. Our main research area is Augmented Reality, Virtual Reality, and Extended Reality, and we try to see whether this technology is going to be the new interaction paradigm in a couple of years.


Augmented Reality vs. Virtual Reality vs. Mixed Reality - An Introductory Guide

😎 What kinds of projects does the Augmented Perception Lab work on?

💫 Understanding Human Perception and Behavior in AR and VR

We are interested in three types of projects. One type is understanding human perception and behavior when it comes to interacting with new types of technologies such as AR and VR. For example, we ran projects on what users' preferences are when it comes to navigation with Augmented Reality. We looked at whether, if people had perfect AR headsets, they would want their navigation instructions to be arrows projected on the ground, a small avatar to follow, or more drastic changes such as desaturating the whole environment except for the path they want to walk on.

🦎 Adaptive User Interfaces

We also work on modeling users. For example, we're interested in where they would be looking if presented with a certain image. That entails a mix of user studies and developing user models through optimization and machine learning. Another area we're interested in is adaptive user interfaces that use optimization and machine learning to control when, where, and how to display virtual user interface elements, for a future where we would all be wearing AR glasses.
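To make the idea of an adaptive interface concrete, here is a minimal, purely illustrative sketch of how such a system might decide what to show: each candidate element gets a task-relevance score and an estimated cognitive load, and a simple optimizer keeps the most useful elements within a load budget. The element names, scores, and greedy strategy are assumptions for illustration, not the lab's actual models.

```python
# Toy sketch of a context-aware adaptive UI: choose which virtual elements
# to display by maximizing task relevance while keeping the user's estimated
# cognitive load under a budget. All names, weights, and the greedy strategy
# are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    relevance: float   # how useful this element is for the current task (0..1)
    load: float        # estimated cognitive/visual load of showing it (0..1)

def adapt_ui(elements: list[Element], load_budget: float) -> list[Element]:
    """Greedy knapsack-style selection: highest relevance per unit load first."""
    ranked = sorted(elements, key=lambda e: e.relevance / max(e.load, 1e-6),
                    reverse=True)
    shown, used = [], 0.0
    for e in ranked:
        if used + e.load <= load_budget:
            shown.append(e)
            used += e.load
    return shown

if __name__ == "__main__":
    # Hypothetical context: the user is walking and navigating.
    elements = [
        Element("navigation arrows", relevance=0.9, load=0.3),
        Element("incoming messages", relevance=0.4, load=0.4),
        Element("music controls",    relevance=0.2, load=0.2),
        Element("calendar preview",  relevance=0.3, load=0.5),
    ]
    for e in adapt_ui(elements, load_budget=0.6):
        print("show:", e.name)
```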

🦸‍♀️ Extending and Augmenting Human Capabilities

The third type of project focuses on trying to extend and augment human capabilities. My PhD student has been working on visualizing events in a user's surroundings that they might have missed. For example, something happens behind you while you're watching your favorite sports game: you turn to your friend to chat, and you miss the game-winning play. With this project, we track the things you might have missed and visualize, in Augmented Reality, what has happened. It's a mix of building demonstrators and studying how people actually interact with this new type of technology.


😎 What are some of the challenges that come with designing these new technologies and interfaces?

One of the main challenges we face is that we are designing for an interaction modality that does not yet exist. There are Augmented Reality glasses like the HoloLens or Magic Leap, and Virtual Reality headsets like the Quest or the HTC Vive, but none of them are anywhere near what I think a future device could look like. As Bill Buxton said, "we are building smart approaches with stupid technology." This is one of the main challenges: we try to work around technological constraints to build approaches that are not bound by those constraints but actually leverage some of the technology's benefits. Developing with and staying up to date on these technologies is definitely a challenge.


😎 During UXA’s Flavors of HCI event, Yi Fei discussed the notion of trust in these kinds of technologies. What are your thoughts on this?

We are building technology that could be even more invasive than our current smartphones. Smartphones already know everything about us and constantly demand our attention, so the question is how we build technology that has more capabilities while being less distracting, less obtrusive, and more privacy-preserving. A lot of the hand-tracking approaches being used now only work through cameras, and cameras are not a great thing to wear on your head in terms of privacy. We have to think about how we can enable these approaches without invading users' privacy. How can we do things that only work with cameras, without using cameras? There are a lot of labs on campus that work on this. Chris Harrison's lab and Mayank Goel's lab work on different types of sensing that have different levels of privacy.

The nice thing about these technologies is that none of them actually work yet. None of the technology is close to being consumer-ready, so by having some of these developments on campus, we can talk to people from the College of Fine Arts, from Tepper, and from policy, and try to figure out how we can create technology that is beneficial for individual users while creating a positive societal impact, rather than just building it because we can.