Keynote and Invited Speakers
- Shmuel Peleg
- School of Computer Science and Engineering, The Hebrew University of Jerusalem
- Non-Chronological Video Editing and Video Synopsis
Powerful effects in video editing can be obtained by relaxing chronological constraints: activities that occurred at different times can be shown simultaneously, and vice versa. This talk will begin with a description of non-chronological video editing effects and simple methods to perform them.
The non-chronological approach to video is also powerful for creating video summaries. In particular, a full day recorded by a surveillance camera can be summarized in a few minutes without loss of any activity. It is estimated that 40 million surveillance cameras are installed annually, yet almost none of the video they record is ever watched: it is too time consuming. The video synopsis approach presented here can provide access to this untapped resource of recorded surveillance video.
Invited Speakers (in alphabetical order):
- Takeo Igarashi
- Associate Professor
- Graduate School of Information Science and Technology, The University of Tokyo
- Interactive "smart" computers
Current user interfaces are not very "smart" in that computers dumbly do what the user explicitly commands them to do via buttons or menus. As computers become more capable and applications become more complicated, "smarter" user interfaces are desired. We are exploring possible "smart" user interfaces in the domains of pen-based computing and interactive 3D graphics. The idea is to allow users to intuitively express their intentions by combining sketching and direct manipulation, and to have the computer take appropriate actions without explicit commands. This talk consists of many live demonstrations that illustrate the idea of interactive "smart" interfaces. I plan to show a 2D geometric drawing program, an electronic whiteboard system, sketch-based 3D modeling, automatic zooming, clothing manipulation interfaces, and other interesting systems.
- Shigeo Morishima
- School of Science and Engineering, Waseda University
- Instant Casting System "Dive Into the Movie"
Our research project, Dive into the Movie (DIM), aims to build a new genre of interactive entertainment that enables anyone to easily participate in a movie by assuming a role and enjoying an embodied, first-hand theater experience. This is accomplished by replacing the original roles of a pre-created traditional movie with user-created, high-realism 3-D CG characters. A DIM movie is in some sense a hybrid entertainment form, somewhere between a game and storytelling. We hope that DIM movies can enhance interaction and offer greater dramatic presence, engagement, and fun for the audience. Our work on DIM is ongoing, but its initial version, the Future Cast System (FCS), is up and running. In this initial version, we focus on creating high-realism 3-D CG characters with each audience member's personal facial characteristics and replacing the original characters' faces in the traditional (background) movie. The FCS has two key features. First, it can fully automatically create a CG character in a few minutes: it captures a user's facial features, generates her or his corresponding CG face, and inserts that face into the movie in real time without causing any discomfort to the participant. Second, the FCS makes it possible for multiple participants, such as a family or a circle of friends, to take part in a movie at the same time in different roles. The FCS is not limited to academic research: 1.6 million people enjoyed an FCS entertainment experience at the Mitsui-Toshiba pavilion at the 2005 World Exposition in Aichi, Japan. I will introduce this ongoing DIM project along with new technologies and many impressive demos.
psivt2009 at nii.ac.jp (replace ' at ' by '@')
© PSIVT 2009 Organizing Committee All rights reserved.