Research

Statement of Adaptive Artifact V1


Belief

In a discussion with Professor Scott Hudson during a class at CMU in 2007, I brought up my first idea for an Adaptive Artifact: an Adaptive Mouse. That discussion sparked my strong interest and enthusiasm for the notion of the Adaptive Artifact. For three years, I have been constantly exploring research topics related to it. Moreover, I plan to grow it into a popular field within the next decade, for I believe that the Adaptive Artifact is the future of artifact design.

In the past, the design of artifacts focused on function and form. As Don Norman, an expert in design and cognition, once argued, good affordances provide better learnability and usability and enable an intuitive user experience; likewise, the architect Ludwig Mies van der Rohe's dialectic between "form follows function" and "function follows form" emphasizes the importance of a seamless mapping between form and function. However, no matter how intuitive the form or how properly arranged the function, the history of artifacts shows that it is impossible to satisfy all of the users' needs. Users' cognition varies, and their needs change over time; therefore, a solid, static, and passive artifact will never fit those needs, no matter how refined its design.

The main issue for the Adaptive Artifact is how to make an artifact intuitively detect the user's status and needs, alter its form to fit ergonomics, and even provide appropriate functions to meet the user's dynamic requirements. This is also the area on which the "Adaptive Artifact" group intends to focus its research, development, and promotion.

Preliminary Studies

We currently propose two approaches, micro and macro, to explore the foregoing ideas, and have built several prototypes. The micro approach treats an artifact as a single smart object capable of detecting the user's condition and fulfilling dynamic formal and functional purposes, while the macro approach achieves the same capabilities by gathering a group of identical smart objects to conduct collective behaviors.

Take the Adaptive Mouse for example: the mouse itself is a smart object consisting of a touch-sensing skin and a pattern recognition mechanism. It dynamically predicts button locations by actively monitoring the user's hand grasp.

Florabot is an implementation of the macro approach: 438 identical Florabots independently sense activities on the field, propagate information, and conduct collective reactions in the form of lighting and shape-changing effects.

With computational mechanisms embedded, both approaches could potentially overturn the existing model of Human-Artifact Interaction. Conventionally, an artifact, such as a Swiss Army knife (Figure 1), provides proper affordances (visual and tangible clues) for users to observe, think about, and manipulate (Figure 2). From the adaptive perspective, however, this mental process is lifted from the user and assigned to the artifact. In other words, artifacts will observe, think, and provide the properties for manipulation. All users have to do is perform their desired action, and the correct formal and functional feedback will always be there (Figure 3).

Figure 1. Typical adaptable artifact

Figure 2. Conventional Model of Human-Artifact Interaction

Figure 3. Adaptive Model of Human-Artifact Interaction

Currently, several parallel adaptive projects are being planned and developed, such as the Adaptive Keyboard and Adaptive Pad for the micro approach, and the Pixelbot for the macro approach. They will be released in the coming year.

Adaptive Typing


A Study Looking for User’s Mental Model to Achieve Intuitive Typing

Sheng Kai Tang
User Experience Design, ASUS

Introduction

A laptop with dual screens is a coming trend. It means typing on an LCD screen that displays dynamic content, instead of on a fixed physical keyboard, is an emerging design possibility. However, the current approach of simply showing a static keyboard graphic on the screen wastes the potential of a touch screen that can both sense and display. In this research, we design an "Adaptive Typing" mechanism that actively detects the user's palms resting on the screen and automatically predicts key locations. The mechanism also provides minimal visual clues to assist novice users. To realize this idea, we designed an experimental desk to collect and observe users' mental models and hand ergonomics while typing. As shown in the picture, the upper camera collects data about keys and fingers; the lower camera collects data about the palms through a transparent part in which a conventional keyboard and a touch pad are embedded. We believe that by overlapping the image data from the upper and lower cameras, the hidden relationships between the palm and typing can be revealed.
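The key-prediction step described above can be sketched as follows. This is a hypothetical illustration, not the project's actual algorithm: it simply anchors the home-row keys to wherever the eight non-thumb fingertips are detected resting, with invented coordinates and key pitch.

```python
# Hypothetical sketch: anchor the QWERTY home row to the detected resting
# fingertips, so key locations follow the hands instead of a fixed layout.

HOME_ROW = ["a", "s", "d", "f", "j", "k", "l", ";"]

def predict_home_keys(fingertips):
    """Map each detected resting fingertip (x, y) to a home-row key.

    `fingertips` is assumed to be the eight non-thumb fingertip points,
    ordered left pinky -> right pinky, as segmented from the camera data.
    """
    if len(fingertips) != len(HOME_ROW):
        raise ValueError("expected one resting point per home-row key")
    return {key: pt for key, pt in zip(HOME_ROW, fingertips)}

# Example: hands resting with an even 19 mm key pitch along one line.
points = [(i * 19.0, 0.0) for i in range(8)]
layout = predict_home_keys(points)
```

Other keys would then be offset relative to this per-hand anchor, which is what lets the predicted keyboard drift with the palms rather than staying fixed on the screen.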


Florabot


Swarm Robots for the 2010 Taipei International Flora Expo

Sheng Kai Tang, Patrick Chiu, Hunter Luo, Parks Tzeng
User Experience Design, ASUS

Introduction

In this project, we designed and built 438 flower robots for the 2010 Taipei International Flora Expo. Our goal is to equip each individual with identical, minimal rules yet achieve highly intelligent behaviors. Hence, we adopt the idea of "Self-Organization" to realize "Swarm Intelligence." Each "Florabot" has an MCU, IR transceivers, a tri-color LED, a motor, and a stretch structure. The IR transceiver on top of the flower senses the presence of a visitor. Once a visitor is detected, the tri-color LED in the head changes color and the stretch structure of the head modifies its size according to how close the visitor is. Moreover, the Florabot propagates the visitor's presence to its neighbors through the IR transceivers at its base, and the neighbors react further based on the received information.
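The two local rules each Florabot follows can be sketched like this. The distance thresholds, colors, and decay factor are illustrative assumptions, not the deployed firmware values:

```python
# Illustrative sketch of the per-robot rules: react to visitor distance,
# and relay a decayed presence signal to neighbours over the base IR link.

def react(distance_cm):
    """Map sensed visitor distance to (led_color, bloom_size)."""
    if distance_cm < 50:
        return ("red", 1.0)      # fully open, warm colour when very close
    if distance_cm < 150:
        return ("purple", 0.6)
    return ("blue", 0.2)         # near-idle when nobody is nearby

def propagate(intensity, decay=0.5):
    """Presence intensity relayed to neighbours, attenuated per hop."""
    relayed = intensity * decay
    return relayed if relayed > 0.1 else 0.0  # below threshold: stop spreading
```

Because the relayed signal decays at each hop, a visitor's presence ripples outward a few robots deep and then dies out, which is enough for collective waves without any global coordinator.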


Virtual Mouse


A development of Proximity Based Gestural Pointing Device

Sheng Kai Tang
User Experience Design, ASUS

Introduction

What if the affordance of a device is removed entirely? Without physical or visual clues, will users still be able to perform tasks as usual? To explore these questions, we propose a new pointing device named the "Virtual Mouse." Unlike a conventional computer mouse, the Virtual Mouse has no physical form for users to manipulate. Instead, by moving a hand and tapping on the desk, users can control the mouse cursor and trigger button functions intuitively. Technically, we implement a proximity-sensing bar and a pattern recognition algorithm to achieve these goals. The proximity-sensing bar, consisting of ten IR transceivers, collects digital signals representing the contour of a nearby hand. The pattern recognition algorithm and a state machine then recognize the collected signal patterns and their transitions. Finally, these patterns and transitions are mapped to cursor movements and button functions.
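The recognition step above can be sketched as follows. The pattern labels and the "contiguous run = hand" heuristic are assumptions for illustration, not the project's published classifier:

```python
# Hedged sketch of the pipeline: the ten IR transceivers yield a 10-bit
# occupancy pattern; a classifier labels it, and the centroid of the active
# bits drives the cursor's horizontal position.

def classify(pattern):
    """Label a 10-bit IR pattern as ('idle'|'hand'|'noise', centroid)."""
    active = [i for i, bit in enumerate(pattern) if bit]
    if not active:
        return ("idle", None)
    # Assume a hand reads as one contiguous run of active sensors.
    if active[-1] - active[0] + 1 == len(active):
        centroid = sum(active) / len(active)
        return ("hand", centroid)
    return ("noise", None)

state, x = classify((0, 0, 1, 1, 1, 0, 0, 0, 0, 0))
```

A state machine on top of this would then watch transitions, e.g. a brief hand-to-idle-to-hand sequence reading as a tap that triggers a button function.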

Publication

Tang, S.K., Tzeng, W.C., Chiu, K.C., Luo, W.W., Lin, S.T., and Liu, Y.P.: 2011, Virtual Mouse: A Low Cost Proximity Based Gestural Pointing Device. HCI International 2011.

Tang, S.K., Tzeng, W.C., Chiu, K.C., Luo, W.W., Lin, S.T., and Liu, Y.P.: 2011, Virtual Mouse: A Low Cost Proximity Based Gestural Pointing Device. TEI2011 Workshop.

 

Tangible Slider


A Capacitive Touch Slider Enabling Intuitive Sidebar Manipulation and Control

Sheng Kai Tang
User Experience Design, ASUS

Introduction

The sidebar is a widely used component in most window-based applications. A sidebar is a container where application shortcuts are located. Currently, a hidden sidebar is called out by pointing the cursor at a hot area or hot edge; users can then point at a shortcut icon and click to trigger it. In this project, this conventional interaction model is replaced by a "Tangible Touch Slider" to increase usability. The Tangible Touch Slider is a horizontal touch-sensing area with lighting feedback on the C part (keyboard deck) of a laptop computer. Touching the area calls out the related sidebar on the screen; sliding on it switches among shortcut icons; releasing it triggers the selected shortcut. In a preliminary test based on GOMS, this new interaction model saved up to 1.5 seconds, especially when users were doing typing-oriented work.
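The touch / slide / release model can be sketched as a small state holder; the shortcut names and event methods here are invented for illustration:

```python
# Minimal sketch of the Tangible Touch Slider interaction model:
# touch calls out the sidebar, sliding switches icons, release triggers.

class TangibleSlider:
    def __init__(self, shortcuts):
        self.shortcuts = shortcuts
        self.index = None          # None means the sidebar is hidden

    def touch(self):
        """Finger lands on the strip: call out the sidebar."""
        self.index = 0

    def slide(self, step):
        """Sliding switches among shortcut icons, clamped to the ends."""
        if self.index is not None:
            self.index = max(0, min(len(self.shortcuts) - 1, self.index + step))

    def release(self):
        """Lifting the finger triggers the selection and hides the bar."""
        chosen = self.shortcuts[self.index] if self.index is not None else None
        self.index = None
        return chosen

# Example: touch, slide to the third icon, release to trigger it.
slider = TangibleSlider(["mail", "web", "music"])
slider.touch()
slider.slide(2)
chosen = slider.release()
```

The GOMS saving comes from collapsing point-to-edge, point-to-icon, and click into a single touch-slide-release stroke that the hands can reach without leaving the keyboard deck.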


Seamless Mobility


Realizing “Interface Everywhere” by Object Recognition and Micro-Projection Technologies

Sheng Kai Tang, Patrick Chui, Hunter Luo and Parks Tzeng
User Experience Design, ASUS

Introduction

How to bring the conventional Graphical User Interface off the screen and into the surrounding space has been a popular research topic for over a decade. Most approaches focus on discovering and improving the related technologies of Augmented Reality. In detail, researchers develop high-quality sensing and recognition technologies and even turn them into toolkits for public use. These foundations make it easy for designers, especially in the consumer product industry, to rethink and generate diverse applications. Hence, we adopt the Tangible Interface concept proposed by Professor Hiroshi Ishii and the reacTIVision toolkit to demonstrate the idea of a seamless interface for the consumer products of the future. In our preliminary working demonstration, shown at COMPUTEX 2009, users can intuitively perceive and manipulate both tangible and graphical interfaces at the same time, with physical and virtual information highly integrated.


Sneak Peeking Bars


Detecting Eye Position to Control the Presence of Side Bars

Sheng Kai Tang
User Experience Design, ASUS

Introduction

The control sidebar is essential for window-based applications. Most sidebars have an auto-hide mechanism to spare more window space; users can bring out a sidebar at will by moving the mouse cursor to the edge where the bar is hidden. In this project, we propose a new way of bringing out sidebars, which we call "sneak peeking sidebars." First, we place all sneak peeking sidebars outside the frame of the window, which means users cannot see them while looking at the window straight on. However, when users move their heads a bit to peek at the edges of the window, they will easily see the sidebars hidden outside it. The sneak peeking mechanism is based on the idea of "dynamic perspective," widely adopted in interactive computer graphics to create realistic 3D scenes: the user's eye position is actively detected, and the perspective scene is generated dynamically from it.
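The parallax behind the dynamic-perspective idea reduces to similar triangles: a sidebar placed at a virtual depth behind the window plane shifts against it as the eye moves sideways. The distances below are hypothetical, chosen only to show the geometry:

```python
# Illustrative parallax calculation: how far a sidebar at a virtual depth
# behind the screen plane appears to shift when the head moves laterally.

def sidebar_reveal(head_offset_cm, eye_to_screen_cm=50.0, bar_depth_cm=10.0):
    """Lateral shift (cm, measured at the screen plane) of the virtual bar.

    Similar triangles: the deeper the bar sits behind the screen, the more
    of it a given sideways head movement reveals.
    """
    return head_offset_cm * bar_depth_cm / (eye_to_screen_cm + bar_depth_cm)
```

With these numbers, a 12 cm head movement reveals a 2 cm strip of sidebar, so modest peeking motions are enough to expose or hide the bars smoothly.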


Calligraphic Brush


An Intuitive Tangible User Interface for Interactive Algorithmic Design

Sheng Kai Tang
User Experience Design, ASUS

Introduction

The development of better User Interfaces (UI) and Tangible User Interfaces (TUI) for 3D modeling has continued for decades. With the popularity of free-form styles achieved by algorithmic methods, existing UI/TUI solutions for CAD are gradually becoming insufficient. Even setting aside the steep learning curve of algorithmic design, which requires a solid background in mathematics and programming, the common drawback is the lack of interactivity: all actions rely heavily on mental translation and experimental trial and error. In this research, we try to realize the idea of interactive algorithmic design by developing a tangible calligraphic brush with which designers can intuitively adopt algorithmic methods to achieve highly creative results.

Publication

Tang, S.K. and Tang, W.Y.: 2009, Calligraphic Brush: An Intuitive Tangible User Interface for Interactive Algorithmic Design. In Proceedings of The Fourteenth Conference on Computer Aided Architectural Design Research in Asia 2009.
<Best Paper Award in CAADRIA 2009>

Storytelling Cubes V2


Developing active cubes representing underlying structure

Sheng Kai Tang, Mark D Gross
Computational Design Program, CMU
Ellen Yi-Luen Do
ACME Lab, Georgia Tech

Introduction

Storytelling is a critical activity for children, whether as listeners or tellers, for developing narrative skills and making sense of the world. Generally, by age three, children gradually become skilled at labeling items and at recognizing the relations among them. These skills are the rudiments of more advanced storytelling later, and proper assistance and guidance, including storybooks and caregivers, are required to reach these milestones.

In this research, we seek to provide a computationally enhanced tool to assist storytelling. Specifically, we aim to support children's labeling and correlating behaviors before age three, a critical ability and moment for their future development. We developed a set of tangible cubes, named Storytelling Cubes (SC), which physically represent the underlying structure of story characters, actively monitor children's behaviors and choices, continuously evaluate their progress, and dynamically adapt to it. SC is both an assistive tool for storytelling and an experimental instrument for studying children's learning behavior.


Adaptive Mouse


Toward a discovery of formal and functional adaptabilities

Sheng Kai Tang
User Experience Design, ASUS
Computational Design Program, CMU

Introduction

The Adaptive Mouse (AM) consists of a smart material that is deformable and capable of recognizing its own deformation. The deformation provides a comfortable, ergonomic shape for users' diverse hand grasps. The smart material can also dynamically activate any of its areas to serve as conventional buttons and a scroll wheel; the prediction of these active areas is based on the recognition of the user's hand grasp. Working with the AM, all users have to do is hold it in their comfortable, preferred grasp; moving their index and middle fingers intuitively will then always trigger the related button functions correctly. Users can also move the mouse freely and always get accurate cursor feedback.
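The active-area prediction can be sketched as follows. This is a hypothetical simplification: once the grasp is recognized, circular button regions are centered wherever the index and middle fingertips rest. The radii, labels, and 2D coordinates are invented for illustration:

```python
# Hypothetical sketch of active-area assignment on the deformable skin:
# button regions follow the detected fingertips instead of fixed locations.

def assign_buttons(fingertips, radius=1.5):
    """Return {button_name: (center, radius)} from detected fingertip points."""
    names = ["left_button", "right_button"]
    return {n: (pt, radius) for n, pt in zip(names, fingertips)}

def hit(buttons, point):
    """Which button, if any, does a press at `point` trigger?"""
    for name, ((cx, cy), r) in buttons.items():
        if (point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r ** 2:
            return name
    return None

# Example: index finger resting at (0, 0), middle finger at (3, 0).
buttons = assign_buttons([(0.0, 0.0), (3.0, 0.0)])
```

Because the regions are re-derived whenever the grasp changes, a press lands on "the button" no matter how the hand happens to hold the mouse.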

Publication

Tang, S.K. and Tang, W.Y.: 2010, Adaptive Mouse: A Computer Mouse Achieving Form-Function Synchronization. In Proceedings of CHI 2010, pp. 2785-2792.

Adaptive Camera


Compensating for the disappeared formal implication by functional adaptability

Sheng Kai Tang
Computational Design Program, CMU

Introduction

The Adaptive Camera explores whether functional adaptability can compensate for the elimination of the formal implications that normally guide operation. In detail, I am going to create a digital camera that has no implicative elements at all, such as a rigid shutter button or a box-like appearance. Even so, with functional adaptability, users could still use this camera as intuitively as a normal one: the camera's button is automatically assigned, and the orientation of the camera scene is calibrated, based on how the user holds it.


Different predispositions of design and scientific thinking


Sheng Kai Tang
Computational Design Program, CMU

Introduction

An important role of design in HCI is providing design thinking to support scientific thinking, especially for generating novel ideas in the concept-forming stage. Why scientific thinking cannot generate novel ideas by itself and needs the assistance of design thinking in HCI research is an interesting question. In this paper, we conduct two experiments on this problem. We assume representation is reliable evidence of human thinking: by studying representations, we can understand the process of human thinking. We propose a classification table as the coding scheme for analyzing representations, based on observation of Leonardo da Vinci's manuscripts and on its correlation with Buchanan's ideas. Finally, we confirm the need for design thinking when generating novel ideas for a problem. We also show that the characteristics of design thinking that scientific thinking lacks, such as reframing problems and invention-oriented problem solving, benefit the search for novelty. We finally propose that the type of a problem is not rigid but a predisposition resulting from design or scientific thinking.

Documentation

Tang, S.K.: 2008, Different predisposition of design and scientific thinking, Final Report for “Design Perspective in HCI” offered by Prof. Jodi Forlizzi.


Hemisphere


Proximity Sensor Based Gesture Recognition For Social Robot Interaction

Sheng Kai Tang
Computational Design Program, CMU

Introduction

Robot control is a popular issue in the robotics field, and more and more researchers are dedicated to developing easy, intuitive solutions for robot programming and control. The most popular approach is to recognize human gestures through computer vision. However, computer-vision-based recognition is limited by the camera: the user must face the camera and stay within a specific distance so that the application can capture clear images. Our idea is to give the user a mobile device that can also recognize gesture commands. With this device, the user can move freely in a space and give commands to a robot at the same time.
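A proximity-based gesture classifier of this kind can be sketched as below. The gesture vocabulary and thresholds are assumptions for illustration; the project's actual recognizer is not specified here:

```python
# Sketch of classifying a time series of proximity readings (cm) from a
# handheld sensor into a coarse gesture, using invented thresholds.

def recognize(readings, delta=5):
    """Classify a series of proximity readings as a gesture label."""
    if max(readings) - min(readings) < delta:
        return "hold"            # readings barely change: hand held steady
    trough = readings.index(min(readings))
    if 0 < trough < len(readings) - 1:
        return "wave"            # readings dip then recover: hand passed over
    return "approach" if readings[-1] < readings[0] else "retreat"
```

Each coarse label would then be mapped to one robot command, so the user can issue commands while moving freely, with no camera in the loop.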

Documentation

Tang, S.K.: 2007, Hemisphere: An Intuitive Tangible User Interface for Controlling Domestic Robot, Final Report for “Computational Beauty of Nature” offered by Prof. Ramesh Krishnamurti.

Video

 


A Touch Free Microwave Door


Sheng Kai Tang
Computational Design Program, CMU

Introduction

After studying 12 users' behaviors with a microwave oven through the Contextual Inquiry method, we found that a touch-free interface is needed, because users' oily hands frequently interrupt the cooking process. For cleanliness and safety, we created this touch-free door for users to test. Compared with the ordinary microwave door, our touch-free door does improve the usability of the microwave oven.


TeleEGGs


A Development of Multi Modal Communication Devices

Sheng Kai Tang
COmputational DEsign Lab, CMU

Introduction

In our daily life, many subtle actions deliver our emotions in unique ways, especially between acquaintances. From a glance, one can perceive not only the other's presence but also their demands and feelings. Through a smile or a blink, one can convey good intentions and concern. Even though no further verbal or physical communication is needed, communication still happens. This is what we define as indirect emotional communication. However, when two people are not in the same space, this kind of communication does not seem to happen. TeleEGGs is a pair of devices we developed to solve this problem; in other words, they assist long-distance indirect emotional communication.

Publication

Tang, S.K.: 2006, TeleEGGs: A Development of Multi Modal Communication Devices, Final Document for "Physical Computing" offered by Prof. Pamela Jennings.

Co-Music Room


A Computational Enhanced Space For Children

Sheng Kai Tang, Tsung Hsien Wang, Yu Chang Hu
COmputational DEsign Lab, CMU

Introduction

The Co-Music Room is a music space designed for children to explore music by collaborating with each other. The original idea is to let children play music and experience cooperation through the process. The space, an 8×8 cube, consists of two fundamental components: circular tile sensors on the floor and ball-shaped sensors hanging from the ceiling. Both parts are used to activate sound in the space, including musical pitches and short tunes. Each circular tile sensor in this demonstration is assigned a single musical pitch, and we have seven tile sensors of different sizes at this stage. The other five ceiling ball sensors act as switches for the background music tunes.
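The sensor-to-sound mapping above can be sketched minimally. The pitch names and the toggle behavior of the ceiling balls are illustrative assumptions:

```python
# Minimal sketch of the mapping: seven floor tiles each trigger one pitch;
# five ceiling balls each toggle one background tune on or off.

PITCHES = ["C", "D", "E", "F", "G", "A", "B"]  # one per floor tile

def step_on(tile_index):
    """Stepping on a floor tile plays its assigned pitch."""
    return PITCHES[tile_index]

def touch_ball(active_tunes, tune_id):
    """Touching a ceiling ball toggles one background tune; returns new set."""
    toggled = set(active_tunes)
    toggled.symmetric_difference_update({tune_id})
    return toggled
```

Because every tile and ball is independent, the music only becomes rich when several children act together, which is what makes the space a collaboration exercise rather than a solo instrument.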

Publication

Tang, S.K., Wang, T.H. and Hu, Y.C.: 2006, Co-Music Room, Final Document for “Architectural Robotics” offered by Prof. Mark D. Gross.

Video

 
 

Storytelling Cubes


A Tangible User Interface For Children

Sheng Kai Tang, Ellen Do, Mark D. Gross
COmputational DEsign Lab, CMU

Introduction

Storytelling Cubes, unlike other open-ended storytelling systems that give children unlimited capacity to create unlimited stories, enable children to discover as many combinations as possible from a limited set of cubes in which the elements and underlying structures of stories are embedded. Storytelling Cubes also enhance children's learning experience through a set of wireless tangible devices connected to an animated graphical system. By playing with Storytelling Cubes, children can observe the relations, similarities, and differences among elements, stories, and ideas. Furthermore, children are expected to create their own ideas from the story structures and elements they have learned. It is not only a way for children to perceive ideas but also training to cultivate their creativity.

Publication

Tang, S.K., Do, E. and Gross, M.D.: 2005, Storytelling Cubes: A Tangible Interface for Playing a Story, A poster for HCII 12th Anniversary Celebration.


Tangible User Interface For Modelling


Wen Yan Tang, Sheng Kai Tang
Kun Shan University of Technology

Introduction

Recently, more and more researchers have dedicated themselves to developing human-computer interaction for CAD systems, such as gestural input of three-dimensional coordinates, flexible manipulation of NURBS objects, and the creation of force feedback. These results indicate that the more intuitive the control a device provides in the modeling process, the more creative the solutions that can be generated. Based on the above, the questions of this research are: what kinds of interaction with the computer are necessary for designers while modeling, and how can an intuitive modeling interface be developed that fulfills those criteria? The objective of this research is to develop a tactile modeling interface with which designers can create three-dimensional models as freely as playing with clay.

Publication

Tang, W.Y. and Tang, S.K.: 2006, A development of tactile modeling interface, In Proceedings of The Eleventh Conference on Computer Aided Architectural Design Research in Asia 2006.


San Diego Visualization


A Visualization of a Sustainable Urban Systems Design

Wilson Lee, Sheng Kai Tang
Center for Design Informatics, Harvard

Introduction

CDI was selected to develop a visualization (animation) of a house and community resource center as part of the San Diego region's 100-year sustainable plan. This visualization was compiled with other animations from the team and presented at the International Competition for Sustainable Urban Systems Design (IC-SUSD) conference in Tokyo in June 2003. I participated in all parts of the project, including concept generation, digital visualization, and final film editing.


Overlapped Spaces


How can people navigate several spaces at the same time?

Sheng Kai Tang
Center for Design Informatics, Harvard

Introduction

Have you ever suddenly become conscious of why your physical body is always static while your virtual bodies are running with friends in cyberspace? Is it possible to navigate several spaces at the same time? In this project, we are going to realize this idea. There are three key issues to discuss: first, how the movements of bodies in the physical and virtual worlds are synchronized; second, how your perception of space changes under synchronized movement; and third, how people connect to each other in this situation.