Reza Habibi

reza.hbi@gmail.com


About Me

DigitalOcean deleted everything. This is my new website. I’m working on it … :)

I’m working on the broader question of how to achieve mutual understanding between humans and machines (LLMs), focusing specifically on how we can create tailored interactions with machines and enhance machine understanding of those interactions through visual communication. I’m also exploring Human-AI alignment and AI education.

I hold Bachelor’s and Master’s degrees in Computer Science, with an emphasis on HCI, NLP, and immersive, interactive environments. My publications at venues such as CHI, CHI PLAY, FDG, HCII, ACII, and AAAI are listed on my Google Scholar profile. Beyond my academic work, I create art pieces that blend music, digital sculpture, and technology, and I have been involved in motorsports as both a driver and an engineer for years.

News

Posts

Projects

  1. Using GenAI and VR for learning (https://youtu.be/nVn7aSTDrkM).

  2. Visualizing complex human behavior models while maintaining readability and interpretability for human analysts is an open problem. The CADE project develops a human-in-the-loop visualization system that depicts AI-generated, probabilistic behavior models.

  3. An artistic and interactive project integrating dance and brain activity.

  4. The Resilience project was developed through an iterative design process aimed at engaging first-year students at UCSC's main campus; its goal was to model and measure their challenges and coping mechanisms. Spanning two years, the project involved 16 postdocs, PhD students, and undergraduates who collaborated to build an interactive experience incorporating AI agents, Wizard of Oz (WoZ) techniques, puzzles, challenges, and AR, along with multiple phases of data collection and sensing to assess the experiences of underrepresented students.

  5. Augmented Reality (AR) and Mixed Reality (MR) enable a new generation of human-computer interfaces. This thesis investigates whether we can detect and distinguish between surface interaction events, such as tapping or swiping, using a wearable microphone, and what advantages new text entry methods offer, such as tapping with two fingers simultaneously to enter capital letters and punctuation. Our results show that a microphone worn on the user’s head can detect and distinguish between these surface interaction events.

Publications

Check my Google Scholar profile.

Services

Reviewer