Research

synchroLight

Three-Dimensional Pointing System for Remote Video Communication

Jifei Ou, Sheng Kai Tang, Hiroshi Ishii
Tangible Media Group, MIT Media Lab

Introduction

Although the image quality and transmission speed of remote video communication systems have vastly improved in recent years, their interactions still remain detached from the physical world. This causes frustration and lowers working efficiency, especially when both sides are referencing physical objects and spaces. In this paper, we propose a remote pointing system named synchroLight that allows users to point at remote physical objects with synthetic light. The system extends the interaction of existing remote pointing systems from two-dimensional surfaces to three-dimensional space. The goal of this project is to approach a seamless experience in video communication.

Publication

Ou, J., Tang, S.K. and Ishii, H.: 2013, SynchroLight: Three-Dimensional Pointing System for Remote Video Communication, In Proceedings of CHI 2013 Extended Abstracts, 169-174.

mindSet

Discovering Intuitive Gestures for Smart TV Control Based on Depth Camera Technology

Institute for Information Industry and Sheng Kai Tang

Introduction

Do you think waving your hands in the air to switch TV channels is a good idea? In fact, the body gestures used in TV games, which rely on large muscle groups, are good exercise but poorly suited for control. We therefore developed mindSet, a set of plane-based, vision-based, and affordance-based gestures. Built around small muscle movements, mindSet not only identifies intuitive gestures but also implements them on a popular depth camera system, the Kinect. In our evaluation, users executed commands via these gestures in less time, and with greater comfort, than with conventional methods.

Elder e-Reader

Designing Government Funded Religious E-Readers by Adopting User Experience Methods

Sheng Kai Tang(1), Wen Kang Chen(2), Chih Hao Tsai(2), Yi Ting Chen(3)
Adaptive Artifact(1), AJA Creative Design(2), ASUS Computer(3)

Introduction

The concept of user experience design has been popularized and widely accepted in Taiwan. Many companies have successfully transformed from OEM to ODM, and some have even created their own brands by establishing internal user experience design departments. However, user experience research and design in Taiwan currently focuses too heavily on mainstream products, which risks turning the field into another red ocean. To prevent another price war, the Taiwanese government has invested extensive resources in assisting Taiwan's well-known brand companies with R&D, expecting to establish new product benchmarks through research on special consumer groups. This research introduces an actual government-funded ASUS project focused on developing an electronic reading platform for a special consumer group, Tzu Chi. Through a series of user experience research and design processes, we distilled several valuable findings that could benefit the design of future e-readers.

Publication

Tang, S.K., Chen, W.K., Tsai, C.H. and Chen, Y.T.: 2013, Designing Government Funded Religious E-Readers by Adopting User Experience Methods, HCI International 2013.

Tangible Surface V1

We are very happy that the Actuating Geometry in Design workshop has finished successfully! To demonstrate the concept, and the signal transformation from physical simulation in Grasshopper, to geometry calculation in Processing, to physical actuation via Arduino, TsungHsien, Shih Yuan, and I held a pre-workshop a week before the actual workshop and built the Tangible Surface example.

A dynamic tangible surface is not a brand-new concept, and many projects have already been built, such as Decoi's Hyposurface, Hiroshi Ishii's Recompose, and my own Tangible Pixels. These projects all use a single-axis mechanism to create seemingly free surface movements. However, if you observe them carefully from the top view, their surface grid is actually fixed and the surface area never changes; in other words, these so-called dynamic surfaces are limited illusions. Tangible Surface tries to break this limitation and create a truly dynamic surface by implementing a three-axis actuation mechanism. With this mechanism, Tangible Surface can be deformed to simulate any mathematical surface with physical characteristics.
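
To make the distinction concrete, here is a small Python sketch (an illustration, not the workshop code; the surface and function names are assumptions) of driving each actuator to a full (x, y, z) target sampled from a parametric surface. Because x and y move as well as z, the grid spacing, and hence the covered area, can change, which a single-axis pin display cannot do.

```python
import math

def surface(u, v):
    """Example parametric surface: a wave whose grid also shears in x."""
    return (u + 0.2 * math.sin(v), v, 0.5 * math.sin(u) * math.cos(v))

def actuator_targets(n=4, size=math.pi):
    """Sample an n x n grid of (x, y, z) targets for the actuators."""
    step = size / (n - 1)
    return [[surface(i * step, j * step) for j in range(n)]
            for i in range(n)]

targets = actuator_targets()
x0, y0, z0 = targets[0][0]   # corner actuator target
# with one axis, x and y would stay fixed at the grid point;
# here they move too, so the surface area itself can change
```

The key design point is that each actuator receives a three-component target rather than a single height, which is exactly what the three-axis mechanism provides.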

Tangible Pixels

Although it has been about two months since the end of the 2011 Taiwan Designers’ Week, the Tangible Pixels project is still far from done; many research issues and implementation techniques require more time to address. However, we have made a video from earlier working demos and collected images to share the development and exhibition process with you. We hope you enjoy it and gain a clear understanding of this project!

Florabot

Swarm Robots for the 2010 Taipei International Flora Expo

Sheng Kai Tang, Patrick Chiu, Hunter Luo, Parks Tzeng
User Experience Design, ASUS

Introduction

In this project, we designed and built 438 flower robots for the 2010 Taipei International Flora Expo. Our goal was to equip each individual with an identical, minimal rule set and yet achieve highly intelligent collective behavior; we therefore adopted the idea of “self-organization” to realize “swarm intelligence”. Each Florabot has an MCU, IR transceivers, a tri-color LED, a motor, and a stretch structure. The IR transceiver on top of the flower senses the presence of a visitor. Once a visitor is detected, the tri-color LED in the head changes color and the stretch structure of the head changes its size according to how close the visitor is. Moreover, the Florabot propagates the visitor's presence to its neighbors through the IR transceivers at its base, and the neighbors react further based on the received information.
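
The local rule described above can be sketched in a few lines of Python (a simulation sketch, not the firmware; the class and method names are assumptions). Each flower reacts to a sensed visitor and relays a decayed presence signal to its neighbors, so a reaction wave spreads without any central controller.

```python
class Florabot:
    def __init__(self):
        self.level = 0          # 0 = idle, higher = stronger reaction
        self.neighbors = []     # linked via IR transceivers at the base

    def sense(self, visitor_distance_cm):
        # closer visitor -> stronger reaction (clamped to 0..3)
        self.level = max(0, 3 - visitor_distance_cm // 50)
        self.propagate(self.level - 1)

    def propagate(self, level):
        # relay a weaker signal to neighbors; stop once fully decayed
        if level <= 0:
            return
        for n in self.neighbors:
            if n.level < level:
                n.level = level
                n.propagate(level - 1)

# two flowers in a row: the second reacts one step weaker
a, b = Florabot(), Florabot()
a.neighbors = [b]
a.sense(40)   # visitor 40 cm away: a reacts fully, b one step weaker
```

Because every unit runs the same rule, adding more Florabots only extends the neighbor links; the collective behavior emerges from the decaying relay.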


Virtual Mouse

A Development of a Proximity-Based Gestural Pointing Device

Sheng Kai Tang
User Experience Design, ASUS

Introduction

What if the affordance of a device is removed entirely? Without physical and visual clues, can users still perform tasks as usual? To explore these questions, we propose a new pointing device named “Virtual Mouse”. Unlike a conventional computer mouse, the Virtual Mouse has no physical form for users to manipulate. Instead, by moving a hand and tapping on the desk, users can control the cursor and trigger button functions intuitively. Technically, we implemented a proximity-sensing bar and a pattern recognition algorithm to achieve these goals. The proximity-sensing bar, consisting of ten IR transceivers, collects digital signals representing the contour of a nearby hand. The pattern recognition algorithm and a state machine then recognize the collected signal patterns and their transitions, which are finally mapped to cursor movements and button functions.
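
A hedged sketch of that pattern-to-event mapping (my reconstruction, not the published implementation; all names are assumptions): the ten IR transceivers are modeled as a 10-bit tuple, the centroid of active sensors drives the cursor, and a brief disappearance of the hand followed by its return is read as a tap.

```python
def centroid(bits):
    """Weighted center of the active sensors, or None if no hand."""
    active = [i for i, b in enumerate(bits) if b]
    return sum(active) / len(active) if active else None

class VirtualMouse:
    def __init__(self):
        self.prev = None       # last centroid
        self.state = "idle"    # idle -> tracking -> lifted (tap candidate)

    def update(self, bits):
        c = centroid(bits)
        event = None
        if c is None:
            # hand vanished: remember it if we were tracking
            self.state = "lifted" if self.state == "tracking" else "idle"
        else:
            if self.state == "lifted":
                event = "click"              # hand came back: a tap
            elif self.prev is not None:
                event = ("move", c - self.prev)
            self.state = "tracking"
        self.prev = c
        return event

m = VirtualMouse()
m.update((0, 0, 1, 1, 1, 0, 0, 0, 0, 0))   # hand appears: no event yet
m.update((0, 0, 0, 1, 1, 1, 0, 0, 0, 0))   # contour shifted one sensor right
```

The state machine is what separates an intentional tap from ordinary movement, which is the crux of making a formless device usable.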

Publication

Tang, S.K., Tzeng, W.C., Chiu, K.C., Luo, W.W., Lin, S.T., and Liu, Y.P.: 2011, Virtual Mouse: A Low Cost Proximity Based Gestural Pointing Device. HCI International 2011.

Tang, S.K., Tzeng, W.C., Chiu, K.C., Luo, W.W., Lin, S.T., and Liu, Y.P.: 2011, Virtual Mouse: A Low Cost Proximity Based Gestural Pointing Device. TEI 2011 Workshop.

Seamless Mobility

Realizing “Interface Everywhere” by Object Recognition and Micro-Projection Technologies

Sheng Kai Tang, Patrick Chiu, Hunter Luo and Parks Tzeng
User Experience Design, ASUS

Introduction

How to bring the conventional graphical user interface from the screen into space has been a popular research topic for over a decade. Most approaches focus on discovering and improving technologies related to Augmented Reality: researchers develop high-quality sensing and recognition technologies and even turn them into toolkits for public use. These foundations allow designers, especially in the consumer product industry, to rethink and generate diverse applications. We therefore adopted the Tangible Interface concept proposed by Professor Hiroshi Ishii, together with the reacTIVision toolkit, to demonstrate the idea of a seamless interface for future consumer products. In our preliminary working demonstration at COMPUTEX 2009, users could intuitively perceive and manipulate tangible and graphical interfaces at the same time, with physical and virtual information highly integrated.


Calligraphic Brush

An Intuitive Tangible User Interface for Interactive Algorithmic Design

Sheng Kai Tang
User Experience Design, ASUS

Introduction

The development of better user interfaces (UI) and tangible user interfaces (TUI) for 3D modeling has lasted for decades. With the popularity of free-form styles achieved by algorithmic methods, existing UI/TUI solutions for CAD are gradually becoming insufficient. Even setting aside the steep learning curve of algorithmic design, which requires a solid background in mathematics and programming, the common drawback is the lack of interactivity: all actions rely heavily on mental translation and experimental trial and error. In this research, we try to realize interactive algorithmic design by developing a tangible calligraphic brush with which designers can intuitively apply algorithmic methods to achieve highly creative results.

Publication

Tang, S.K. and Tang, W.Y.: 2009, Calligraphic Brush: An Intuitive Tangible User Interface for Interactive Algorithmic Design, In Proceedings of the Fourteenth Conference on Computer Aided Architectural Design Research in Asia 2009.
<Best Paper Award in CAADRIA 2009>

Storytelling Cubes V2

Developing active cubes representing underlying structure

Sheng Kai Tang, Mark D Gross
Computational Design Program, CMU
Ellen Yi-Luen Do
ACME Lab, Georgia Tech

Introduction

Storytelling is a critical activity for children, whether as listeners or tellers, both for developing narrative skills and for making sense of the world. Generally, by age three, children gradually become skilled at labeling items and recognizing relations among them; these skills are the rudiments of more advanced storytelling later. Proper assistance and guidance, including storybooks and caregivers, are required for these achievements.

In this research, we seek to provide a computationally enhanced tool to assist storytelling. Specifically, we aim to support children's labeling and correlating behaviors before age three, abilities that are critical for their future development. We developed a set of tangible cubes, named Storytelling Cubes (SC), that physically represent the underlying structure of story characters, actively monitor children's behaviors and choices, continuously evaluate their progress, and dynamically adapt accordingly. SC is both an assistive tool for storytelling and an experimental instrument for studying children's learning behavior.


Adaptive Mouse

Toward a discovery of formal and functional adaptabilities

Sheng Kai Tang
User Experience Design, ASUS
Computational Design Program, CMU

Introduction

The Adaptive Mouse (AM) consists of a smart material that is deformable and capable of recognizing its own deformation. The deformation provides a comfortable, ergonomic shape for users' diverse hand postures, and the smart material can dynamically activate any of its areas to serve as conventional buttons or a scroll wheel. The placement of these active areas is predicted from the recognized hand posture. Working with AM, all users have to do is hold it in their preferred, comfortable grip; moving the index and middle fingers intuitively will then always trigger the related button functions, and moving the mouse always yields accurate cursor feedback.
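
The mapping from recognized grip to active button areas can be sketched as follows (a minimal illustration; the zone numbers, threshold, and names are assumptions, not the paper's implementation). The recognized grip selects which sensing zones act as the left and right buttons, so the fingers always land on the right function.

```python
# grip style -> which sensing zones act as left/right buttons
GRIP_BUTTONS = {
    "palm":      {"left": 3, "right": 4},
    "fingertip": {"left": 1, "right": 2},
}

def classify_grip(contact_area):
    # a large contact area suggests a whole-palm grip
    return "palm" if contact_area > 0.5 else "fingertip"

def button_for(zone, contact_area):
    """Which button, if any, a press in this zone triggers."""
    mapping = GRIP_BUTTONS[classify_grip(contact_area)]
    for name, z in mapping.items():
        if z == zone:
            return name
    return None

button_for(3, 0.8)   # palm grip: zone 3 acts as the left button
button_for(3, 0.2)   # fingertip grip: zone 3 triggers nothing
```

The point of this indirection is form-function synchronization: the function table follows the form, rather than the form being fixed around the function.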

Publication

Tang, S.K. and Tang, W.Y.: 2010, Adaptive Mouse: A Computer Mouse Achieving Form-Function Synchronization. In Proceedings of CHI 2010, 2785-2792.

Adaptive Camera

Compensating for removed formal implications through functional adaptability

Sheng Kai Tang
Computational Design Program, CMU

Introduction

Adaptive Camera explores whether it is possible to compensate for the elimination of formal implications, removed for flexible operation, by providing functional adaptability. Concretely, I am going to create a digital camera that has no implicative elements at all, such as a rigid shutter button or a boxlike appearance. With functional adaptability, however, users can still use this camera as intuitively as a normal one: the shutter button is assigned automatically, and the orientation of the camera scene is calibrated, based on how users hold it.
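
The two adaptive behaviors can be sketched like this (an assumption-laden illustration, not the project code: the sensor model, zone labels, and 90-degree steps are all mine). A coarse accelerometer reading rotates the viewfinder to match how the camera is held, and the topmost touched zone, relative to the current "up", becomes the shutter.

```python
def scene_rotation(ax, ay):
    # gravity along +x -> landscape (0), -x -> upside down (180),
    # +y -> portrait (90), -y -> portrait flipped (270)
    if abs(ax) >= abs(ay):
        return 0 if ax >= 0 else 180
    return 90 if ay >= 0 else 270

def shutter_zone(touches):
    """Pick the shutter: the highest touched zone for the current grip.

    touches is a list of (zone_id, height) pairs, height already
    expressed relative to the current 'up' direction.
    """
    return max(touches, key=lambda t: t[1])[0] if touches else None

rot = scene_rotation(0.1, 0.98)                 # held in portrait
zone = shutter_zone([("A", 0.2), ("B", 0.8)])   # "B" becomes the shutter
```

Together these restore, in software, the orientation cues and the fixed shutter position that the boxless form no longer implies.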


Hemisphere

Proximity Sensor Based Gesture Recognition For Social Robot Interaction

Sheng Kai Tang
Computational Design Program, CMU

Introduction

Robot control is a popular issue in the robotics field, and more and more researchers are dedicated to developing easy, intuitive solutions for robot programming and control. The most popular approach is to recognize human gestures through computer vision. However, vision-based recognition is limited by the camera: the user must face it and stay within a specific distance so that the application can capture clear images. Our idea is to provide the user with a mobile device that can itself recognize gesture commands, so the user can move freely in a space while giving commands to a robot.
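
As a small sketch of proximity-based gesture recognition (my illustration under assumed hardware: a ring of eight proximity sensors around the hemisphere; the command names are mine), a directional sweep can be recovered from the order in which neighboring sensors fire, with no camera involved.

```python
def swipe_direction(fired):
    """fired: sensor indices (0..7 around the ring) in firing order."""
    if len(fired) < 2:
        return None
    # signed shortest step between consecutive sensors on the ring
    steps = [((b - a + 4) % 8) - 4 for a, b in zip(fired, fired[1:])]
    total = sum(steps)
    if total > 0:
        return "turn_right"
    if total < 0:
        return "turn_left"
    return None

swipe_direction([1, 2, 3])   # clockwise sweep over the sensors
swipe_direction([3, 2, 1])   # counter-clockwise sweep
```

Because the sensors travel with the device, the recognition works wherever the user stands, which is exactly the advantage over a fixed camera.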

Documentation

Tang, S.K.: 2007, Hemisphere: An Intuitive Tangible User Interface for Controlling Domestic Robot, Final Report for “Computational Beauty of Nature” offered by Prof. Ramesh Krishnamurti.

Video

A Touch Free Microwave Door

Sheng Kai Tang
Computational Design Program, CMU

Introduction

After studying 12 users' behavior with a microwave oven through the Contextual Inquiry method, we found that a touch-free interface is needed: users' oily hands frequently interrupt the cooking process. For cleanliness and safety reasons, we created this touch-free interface for users to test. Compared with the previous ordinary microwave door, our touch-free door does improve the usability of the microwave oven.


Co-Music Room

A Computational Enhanced Space For Children

Sheng Kai Tang, Tsung Hsien Wang, Yu Chang Hu
COmputational DEsign Lab, CMU

Introduction

Co-Music Room is a music space designed for children to explore music by collaborating with each other; the original idea is to let children play music and experience cooperation through the process. The space, an 8×8 cube, consists of two fundamental components: circular tile sensors on the floor and ball-shaped sensors hanging from the ceiling. Both are used to activate sound in the space, including musical pitches and short tunes. In this demonstration, each circular tile sensor is assigned a single pitch, with seven tiles of different sizes at this stage, while the five ceiling ball sensors act as switches for background music tunes.
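
The mapping described above can be sketched in Python (an illustration, not the installation code; the choice of a C major scale and the MIDI note numbers are assumptions): seven floor tiles each trigger one pitch, and each of the five ceiling balls toggles one background tune on or off.

```python
# seven tiles -> one octave of a C major scale (MIDI note numbers)
TILE_PITCH = {i: note for i, note in enumerate([60, 62, 64, 65, 67, 69, 71])}

class CoMusicRoom:
    def __init__(self):
        self.tunes_on = set()       # ceiling balls currently switched on

    def step_on(self, tile):
        return TILE_PITCH[tile]     # pitch to play for this tile

    def touch_ball(self, ball):
        # each ball toggles one background tune on/off
        if ball in self.tunes_on:
            self.tunes_on.remove(ball)
        else:
            self.tunes_on.add(ball)
        return sorted(self.tunes_on)

room = CoMusicRoom()
room.step_on(0)      # stepping on tile 0 plays middle C
room.touch_ball(2)   # tune 2 on
room.touch_ball(2)   # tune 2 off again
```

Splitting the space into momentary triggers (tiles) and latching switches (balls) is what lets several children layer sounds together rather than simply interrupting each other.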

Publication

Tang, S.K., Wang, T.H. and Hu, Y.C.: 2006, Co-Music Room, Final Document for “Architectural Robotics” offered by Prof. Mark D. Gross.

Video

Storytelling Cubes

A Tangible User Interface For Children

Sheng Kai Tang, Ellen Do, Mark D. Gross
COmputational DEsign Lab, CMU

Introduction

Unlike other open-ended storytelling systems that give children unlimited capacity to create unlimited stories, Storytelling Cubes enable children to discover as many combinations as possible from a limited set of cubes in which the elements and underlying structures of stories are embedded. Storytelling Cubes also enhance children's learning experience through a set of wireless tangible devices connected to an animated graphical system. Through play with Storytelling Cubes, we intend for children to observe the relations, similarities, and differences between elements, stories, and ideas, and furthermore to create their own ideas from the learned story structures and elements. It is not only a way for children to perceive ideas but also training that cultivates their creativity.

Publication

Tang, S.K., Do, E. and Gross, M.D.: 2005, Storytelling Cubes: A Tangible Interface for Playing a Story, A poster for HCII 12th Anniversary Celebration.


Tangible User Interface For Modelling

Wen Yan Tang, Sheng Kai Tang
Kun Shan University of Technology

Introduction

Recently, more and more researchers have dedicated themselves to developing human-computer interaction for CAD systems, such as gestural input of three-dimensional coordinates, flexible manipulation of NURBS objects, and the creation of force feedback. These results indicate that the more intuitive the control a device provides in the modeling process, the more creative the solutions that can be generated. Accordingly, this research asks: what kinds of interaction with the computer are necessary for designers while modeling, and how can an intuitive modeling interface be developed that fulfills the resulting criteria? The objective is to develop a tactile modeling interface with which designers can create three-dimensional models as freely as playing with clay.

Publication

Tang, W.Y. and Tang, S.K.: 2006, A Development of Tactile Modeling Interface, In Proceedings of the Eleventh Conference on Computer Aided Architectural Design Research in Asia 2006 (conference poster).


San Diego Visualization

A Visualization of a Sustainable Urban Systems Design

Wilson Lee, Sheng Kai Tang
Center for Design Informatics, Harvard

Introduction

CDI was selected to develop a visualization (animation) of a house and community resource center as part of the San Diego region's 100-year sustainable plan. This visualization was compiled with other animations from the team and presented at the International Competition for Sustainable Urban Systems Design (IC-SUSD) conference in Tokyo in June 2003. I participated in all parts of this project, including concept generation, digital visualization, and final film editing.


Overlapped Spaces

How can people navigate several spaces at the same time?

Sheng Kai Tang
Center for Design Informatics, Harvard

Introduction

Have you ever suddenly wondered why your physical body is always static while your virtual bodies are running with friends in cyberspace? Is it possible to navigate several spaces at the same time? In this project, we are going to realize this idea. There are three key issues to discuss: first, how the movements of bodies in the physical and virtual worlds are synchronized; second, how your perception of space changes under these synchronized movements; and third, how people connect to each other in this situation.