MAS.131/MAS.531 Computational Camera and Photography

Retrieving 3D Coordinates with the Kinect


Since I have little background in image processing, I decided to use the Kinect for my final project to reconstruct the three-dimensional coordinates of a physical object. My approach is, first, to detect points with designated colors; second, to get the screen coordinates of those points; third, to map those coordinates to the depth image, retrieve the z coordinates, and infer additional points; and last, to map the inferred points from depth coordinates back to screen coordinates. This tedious process eventually let me reach the partial goal of my final project.
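The depth-to-3D step of this pipeline can be sketched with a simple pinhole model: given a depth-image pixel (u, v) and its depth z, the camera-space x and y follow from the camera intrinsics. The intrinsic values below (fx, fy, cx, cy) are placeholder numbers for a Kinect-like depth camera, not calibrated constants, and in practice the Kinect SDK's own coordinate-mapping functions would do this conversion.

```java
public class DepthToWorld {
    // Assumed pinhole intrinsics for a Kinect-like depth camera (placeholders).
    static final double FX = 594.21, FY = 591.04;  // focal lengths in pixels
    static final double CX = 339.5,  CY = 242.7;   // principal point

    // Convert a depth-image pixel (u, v) with depth z (meters)
    // into camera-space coordinates (x, y, z).
    static double[] toWorld(int u, int v, double z) {
        double x = (u - CX) * z / FX;
        double y = (v - CY) * z / FY;
        return new double[]{x, y, z};
    }

    public static void main(String[] args) {
        // A pixel near the principal point maps close to (0, 0, z).
        double[] p = toWorld(339, 242, 1.0);
        System.out.printf("%.4f %.4f %.4f%n", p[0], p[1], p[2]);
    }
}
```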



for HW2 Lightfields by Sheng Kai Tang (Tony)

0. Shift and Add in Processing

The idea of lightfields taught in class is to use a 1D or 2D array of pinhole cameras to capture images. By slightly shifting the images according to the camera positions, adding up the pixel values, and averaging them, we can create a general sense of a lightfield. The code below, written in Processing, implements the 1D-array version:

int imageWidth;        // image width
int imageHeight;       // image height
PImage show;           // final image
PImage[] images;       // source images
String[] fileNames;    // source filenames
int imageAmount;       // number of source images
color c;               // pixel color
float r, g, b;         // r, g, b of a pixel color
int l = 0;             // per-image shift in pixels, controlled by arrow keys

void setup() {
  imageWidth = 420;                      // set image width
  imageHeight = 240;                     // set image height
  imageAmount = 15;                      // set image amount
  size(imageWidth, imageHeight);         // set canvas size
  images = new PImage[imageAmount];      // instantiate source images
  fileNames = new String[imageAmount];   // instantiate image filenames

  // load image filenames
  for (int i = 0; i < fileNames.length; i++) {
    fileNames[i] = "IMG_24" + (i + 10) + ".JPG";
  }
  // load source images
  for (int i = 0; i < images.length; i++) {
    images[i] = loadImage(fileNames[i]);
  }
  // instantiate the final image
  show = createImage(width, height, RGB);
}

void draw() {
  // shift and add
  for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
      // accumulate each image, shifted by l pixels per camera position
      for (int k = 0; k < imageAmount; k++) {
        r = r + red(images[k].get(i + l*k, j));
        g = g + green(images[k].get(i + l*k, j));
        b = b + blue(images[k].get(i + l*k, j));
      }
      // average the accumulated values and write the output pixel
      c = color(r/imageAmount, g/imageAmount, b/imageAmount);
      show.set(i, j, c);
      r = 0;
      g = 0;
      b = 0;
    }
  }
  image(show, 0, 0);
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      l++;   // increase the shift: refocus to a nearer plane
    } else if (keyCode == DOWN) {
      l--;   // decrease the shift: refocus to a farther plane
    }
  }
}
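The shift-and-add kernel can be checked on a tiny synthetic example: a point source that moves one pixel per image averages to a sharp peak when the shift l matches the per-image disparity, and smears out otherwise. A minimal, Processing-free sketch in plain Java (the 1-D arrays stand in for rows of the source photos; out-of-range samples are treated as black, which is an assumption of this sketch):

```java
public class ShiftAdd {
    // Average images[k][i + l*k] over k, mimicking the draw() loop above.
    // Samples that fall outside the image are treated as 0 (black).
    static double[] shiftAndAdd(double[][] images, int l) {
        int w = images[0].length;
        double[] out = new double[w];
        for (int i = 0; i < w; i++) {
            double sum = 0;
            for (int k = 0; k < images.length; k++) {
                int src = i + l * k;
                if (src >= 0 && src < w) sum += images[k][src];
            }
            out[i] = sum / images.length;
        }
        return out;
    }

    public static void main(String[] args) {
        // A point source that shifts one pixel per image (disparity 1).
        double[][] imgs = new double[4][8];
        for (int k = 0; k < 4; k++) imgs[k][2 + k] = 1.0;

        double[] focused = shiftAndAdd(imgs, 1);   // shift matches disparity
        double[] blurred = shiftAndAdd(imgs, 0);   // no shift
        System.out.println(focused[2] + " " + blurred[2]);  // prints 1.0 0.25
    }
}
```

When the shift matches the disparity, all four copies of the point line up and the average recovers the full intensity; with no shift, the point is spread across four output pixels at a quarter of the intensity each, which is exactly the refocusing behavior the UP/DOWN keys expose.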

1. Code Test 

With the code developed above, we conducted a test by importing the 16 photos provided.

1. Focus on the window

2. Focus on the dots board

3. Focus on the Almo

4. Focus on the Pumpkin


2. Refocus and See through


3. Interactive Application

Shift and Add (Mac version, Windows 32, Windows 64); the original photos and source code are included.

Computational Sunglass

for HW1 Color Swap by Sheng Kai Tang (Tony)

1. Motivation 

One afternoon, I was on my way home thinking about the assignment for Computational Camera and Photography. All of a sudden, a bright light shone into my eyes, and I could barely see anything for several seconds. At that moment, however, the idea for the assignment emerged. Yes, a Computational Sunglass!

Because I was walking very slowly, a few seconds of lost vision was not a problem. When driving at night or in a tunnel, however, this kind of temporary vision loss caused by a sudden bright light can be a serious issue.


2. Idea

A sunglass reduces the brightness of the scene: it makes the bright parts darker, but it also makes the dark parts too dark to see. A better way is a computational sunglass. This sunglass constantly takes two photos, one at a normal aperture and one at the minimum aperture. The minimum-aperture photo is used to detect sources of glare; once a bright light is detected, the minimum-aperture photo becomes a mask laid over the normal-aperture photo.
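The masking step can be sketched as a per-pixel blend: only glare survives the minimum-aperture exposure, so wherever that frame is bright, the normal-aperture frame is attenuated. The names and the particular blend rule below are assumptions for illustration, not the exact compositing used in the experiments.

```java
public class SunglassMask {
    // Per-pixel blend: pixels that stay bright even at minimum aperture
    // are treated as glare and attenuated in the normal-aperture frame.
    // Inputs are grayscale intensities in [0, 1].
    static double[] blend(double[] normal, double[] minAperture) {
        double[] out = new double[normal.length];
        for (int i = 0; i < normal.length; i++) {
            // minAperture[i] near 1 means a strong light source -> darken.
            out[i] = normal[i] * (1.0 - minAperture[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] normal = {0.2, 0.9, 1.0};   // scene at normal aperture
        double[] minAp  = {0.0, 0.0, 1.0};   // only the glare survives f/22
        double[] out = blend(normal, minAp);
        System.out.println(out[0] + " " + out[1] + " " + out[2]);
        // prints 0.2 0.9 0.0 -- the glare pixel is suppressed, the rest unchanged
    }
}
```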

3. Experiments

F2.8 / F22


Radial Gradient Transparency / Mix 1


50% Transparency / Mix 2


4. Conclusion

Instead of treating the digital photo as a recording medium, this was my first time applying computational camera and photography concepts to solve a real-world problem. Although this is only a preliminary test of the concept, we can already see the potential benefit. There are still some technical issues, and there may be better ways to solve this problem, but it has been a cool and interesting process of discovery.

Building on Google Glass technology, we believe computational manipulation will not only mix digital information with real-world imagery but also change the way people perceive the world. This computational sunglass idea and practice is just the beginning of my new journey.