Researcher's Profile
- Professor
- Masahiko INAMI
- Information Somatics
- E-mail: drinami [at] star.rcast.u-tokyo.ac.jp
- Tel: 03-5452-5368
Biography
| Date | Position |
|---|---|
| March 1999 | PhD, School of Engineering, The University of Tokyo (UTokyo) |
| April 1999 | Research Associate, CCR, UTokyo |
| September 2001 | Research Assistant, Graduate School of Information Science and Technology, UTokyo |
| April 2003 | Lecturer, Faculty of Electro-Communications, Univ. of Electro-Communications |
| October 2003 | PRESTO Researcher, JST |
| April 2005 | Assistant Professor, Faculty of Electro-Communications, Univ. of Electro-Communications |
| March 2005 | Visiting Scientist, CSAIL, MIT |
| April 2006 | Professor, Faculty of Electro-Communications, Univ. of Electro-Communications |
| January 2008 | Group Leader, ERATO IGARASHI Design Interface Project, JST |
| April 2008 | Professor, Graduate School of Media Design, Keio Univ. |
| November 2015 | Professor, Graduate School of Information Science and Technology, UTokyo |
| November 2015 | Visiting Professor, Graduate School of Media Design, Keio Univ. |
| April 2016 | Professor, RCAST, UTokyo |
Research Interests
What are the challenges in creating interfaces that allow users to act intuitively and express their intentions? Today's Human-Computer Interaction (HCI) systems, including virtual and augmented reality, are limited: they exploit only visual and auditory sensations. In daily life, however, we use a wide variety of input and output modalities, and modalities that involve contact with our bodies can dramatically affect our ability to experience and express ourselves in physical and virtual worlds. Using modern physiological understandings of sensation and perception, emerging electronic devices, and agile computational methods, we now have an opportunity to design a new generation of "Human-Computer Integrated" systems.
We have achieved several advances that use multi-/cross-modal interfaces to enhance human I/O. These include the Transparent Cockpit, the Stop-Motion Goggle, Galvanic Vestibular Stimulation, JINS MEME (electrooculography (EOG)-based smart glasses), and Superhuman Sports.
Our challenges include:
(1) Understanding human factors
(2) Enhancing human I/O
(3) Designing new body schema
(4) Experience engineering and entertainment computing
Keywords
Understanding human factors, Enhancing human I/O, Designing new body schema, Experience engineering and entertainment computing