Welcome

Some of Kelvin's past work

Computing and Software Systems, UW Bothell

   Interactive Navigation in Video Segments Project

[Figure, left column, top to bottom: Original; Turned Left; Turned and Moved Right]

The Interactive Navigation in Video Segments project supports user navigation within pre-recorded video footage. Navigation in this context means that, while reviewing the footage, a user can alter the camera position and the viewing direction. This is in contrast to the traditional fast-forward and rewind operations, where a user can only move forward or backward along the fixed camera path and viewing direction of the original footage.
For example, consider the three images in the left column. The top image comes from pre-recorded video footage. Starting from this image, the prototype navigation system allows the user to turn the viewing direction to the left, constructing the middle image as a result. The bottom image was constructed by turning the viewing direction to the right and moving the camera position slightly backwards. Note that the middle and bottom images do not actually exist in the pre-recorded footage; both are synthesized by the prototype navigation system. The bulletin board on the right side of the images is a good reference for evaluating the results.

Because of the high demand on computing resources, the prototype system is currently non-interactive. For example, the bottom image was constructed by processing 60 images from the video footage. To support real-time user interactivity, the required processing must be carried out in parallel.
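For a rotation-only camera move such as the "Turned Left" result, synthesizing the new view is, in principle, a planar warp of the original frame: pixels map through the standard homography x' ~ K R K⁻¹ x. The sketch below (Python with NumPy) illustrates this; the intrinsic matrix K, the 10-degree rotation, and the function names are illustrative assumptions, not details of the prototype system. Moves that change the camera position, as in the bottom image, additionally require depth or parallax information recovered from many frames, which is far more expensive.

```python
import numpy as np

def rotation_homography(K, R):
    """Homography relating two views that differ by a pure rotation.

    For a rotation-only camera move, pixels map between the original
    and synthesized views as x' ~ K R K^-1 x, with no depth needed.
    K is the 3x3 camera intrinsic matrix, R the 3x3 rotation; both
    are illustrative values here.
    """
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Map one pixel through the homography H (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example: a 10-degree turn to the left about the vertical (y) axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.radians(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = rotation_homography(K, R)
print(warp_point(H, 320.0, 240.0))  # the principal point shifts horizontally
```

Each output pixel of a rotation-only view is a single lookup through H, which is cheap; the heavy processing the project describes is needed once the camera position itself changes.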

Demo Movies:
Click here (or the top image) to see the original video segment.
Click here (or the middle image) to see the results of turning left. All images in this segment are synthesized by the navigation system based on the original video segment.
Click here (or the bottom image) to see the results of turning right. All images in this segment are synthesized by the navigation system based on the original video segment.

 

   Spatial Temporal AntiAliasing

Simulation of Motion Blur in Computer Graphics. Before joining UWB, I spent quite a bit of time studying how to accurately simulate the effects of finite exposure time when synthesizing images. In the real world, finite exposure time shows up as the blurring of moving objects in photographs. Simulating motion-blur artifacts in computer-generated images diminishes the disturbing jerkiness in animations.

Algorithms for motion blur often make implicit assumptions about the scene database and solve a subset of the general problem. When designing the Maya Rendering System, we attempted to eliminate these implicit assumptions while preserving efficiency. We introduce a framework describing the motion-blur image generation process, formulating the artifacts as the results of weighting visibility and shading functions in the spatial-temporal domain. Studying previous approaches with our formulation, we identified a main area that is not well addressed: scenes that contain relatively high temporal frequencies (e.g., fast-moving objects, or shading that changes quickly over time). We propose solving the spatial-temporal visibility and shading functions separately, with different strategies. Our implementation of the proposed approach is based on newly developed adaptive super-sampling algorithms in the spatial-temporal domain. I am currently summarizing this work.
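The idea of weighting a shading function over the exposure interval can be illustrated with a generic temporal supersampling estimator. The sketch below is not the adaptive spatial-temporal algorithm described above; it is a minimal stratified-sampling version with a simple box shutter filter, and the stand-in render function, shutter interval, and sample count are all illustrative assumptions.

```python
import random

def render(x, t):
    """Stand-in shading function: brightness at pixel x at time t.

    A hypothetical fast-moving edge covers the pixel only during
    part of the exposure, here the interval [0.2, 0.6).
    """
    return 1.0 if 0.2 <= t < 0.6 else 0.0

def motion_blurred_pixel(x, shutter_open, shutter_close, n_samples=64, seed=0):
    """Estimate a motion-blurred pixel by stratified sampling in time.

    Weights the shading function uniformly (a box filter) over the
    open-shutter interval; one jittered sample per equal sub-interval
    reduces variance compared with purely random sampling.
    """
    rng = random.Random(seed)
    total = 0.0
    for i in range(n_samples):
        # Stratified: jitter within the i-th of n_samples equal strata.
        u = (i + rng.random()) / n_samples
        t = shutter_open + u * (shutter_close - shutter_open)
        total += render(x, t)
    return total / n_samples

# A pixel covered by the edge for 40% of a [0, 1] exposure
# averages out to roughly 0.4 grey.
print(motion_blurred_pixel(0, 0.0, 1.0))
```

Scenes with high temporal frequencies are exactly where a fixed sample count like this struggles, which motivates the adaptive sampling mentioned above.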

Demo Movie:
Click here to see a short animation of the cowboy kicking the saloon door (5MB Apple QuickTime Movie)





University of Washington, Bothell
Copyright 2000, UWB. All rights reserved.
Comments to Kelvin Sung: ksung@u.washington.edu

Last updated: May 2007 KS