Tutorials

Tutorial 1

Lecturer:
Hiroshi Ishikawa
Associate Professor
Graduate School of Natural Sciences, Nagoya City University
Title:
A Practical Introduction to Graph Cut
Abstract:

Over the past decade, energy-minimization techniques utilizing the s-t min-cut algorithm have become increasingly popular in vision, image processing, and computer graphics. Now generally called "graph-cut" methods, they are used for many low-level problems such as stereo, segmentation, and image stitching. In some cases, graph cuts produce globally optimal solutions. More generally, there are iterative graph-cut based techniques that produce high-quality solutions in practice. In this introductory tutorial, we first describe the major techniques and delineate their applicability and limitations. Then we discuss the design of the energies for some problems that have been successfully solved by graph cuts.

Outline:
  1. Brief history
  2. Energy minimization
  3. Graphs and their minimum cuts
  4. Energy minimization via graph cuts
    1. Global minimization
    2. Approximation methods
  5. Energy design for graph cut
    1. Stereo
    2. Segmentation
    3. Image stitching
    4. Digital tapestry
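The energy-minimization-via-min-cut idea from the abstract can be sketched in its simplest form: binary labeling of a 1-D signal, where terminal edges encode data costs and neighbor edges encode a Potts smoothness term. The intensities, label means, and smoothness weight below are made-up toy values, and the Edmonds-Karp max-flow solver is a plain illustrative implementation, not the tutorial's own code.

```python
# Sketch: binary fg/bg labeling of a 1-D signal by s-t min cut.
# All numeric values are illustrative assumptions.
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    Returns (flow value, set of nodes on the source side of the min cut)."""
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            # No augmenting path: BFS-visited nodes form the source side.
            return flow, set(parent)
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck  # residual (reverse) capacity
            v = u
        flow += bottleneck

pixels = [10, 12, 11, 90, 95, 88]    # toy 1-D "image"
mu_fg, mu_bg, lam = 90.0, 10.0, 5.0  # assumed label means, smoothness weight

cap = defaultdict(lambda: defaultdict(float))
for i, v in enumerate(pixels):
    # Cutting s->i puts pixel i on the sink (background) side, so that
    # edge carries the data cost of labeling i background, and vice versa.
    cap['s'][i] = abs(v - mu_bg)
    cap[i]['t'] = abs(v - mu_fg)
for i in range(len(pixels) - 1):
    # Potts smoothness: pay lam whenever neighbors get different labels.
    cap[i][i + 1] += lam
    cap[i + 1][i] += lam

energy, src_side = max_flow(cap, 's', 't')
labels = ['fg' if i in src_side else 'bg' for i in range(len(pixels))]
```

By the max-flow/min-cut theorem, `energy` equals the minimum of the labeling energy (data costs plus smoothness penalties), and `labels` is the minimizing assignment; here the three low-intensity pixels come out background and the three high-intensity ones foreground.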

Tutorial 2

Lecturers:

Cees G.M. Snoek
Senior Researcher
Faculty of Science, University of Amsterdam
Marcel Worring
Associate Professor
Faculty of Science, University of Amsterdam
Title:
Concept-Based Video Retrieval
Abstract:

The ease with which video can be captured has led to a proliferation of video collections in all parts of society. Getting content-based semantic access to such collections is a difficult task, requiring techniques from image processing, computer vision, machine learning, knowledge engineering, and human-computer interaction. The semantic gap between the low-level information that can be derived from the visual data and the conceptual view the user has of the same data is a major bottleneck in video retrieval systems. It has dictated that solutions to image and video indexing could only be applied in narrow domains using specific concept detectors, e.g., "sunset" or "face". This leads to lexica of at most 10-20 concepts. The use of multimodal indexing, advances in machine learning, and the availability of some large, annotated information sources, e.g., the TRECVID benchmark, have paved the way to increasing lexicon size by orders of magnitude (now 100 concepts, in a few years 1,000). This brings it within reach of research in ontology engineering, i.e., creating and maintaining large (typically 10,000+) structured sets of shared concepts. When this goal is reached, we could search for videos in our home collection or on the web based on their semantic content, develop semantic video editing tools, or develop tools that monitor various video sources and trigger alerts based on semantic events. This tutorial lays the foundation for these exciting new horizons. It will cover basic video analysis techniques and explain the different methods for concept detection in video. From there, it will explore how users can be given interactive access to the data. For both indexing and interactive access, TRECVID evaluations will be considered. Finally, more insight into the challenges ahead and how to meet them will be presented.

Outline:
  1. Detecting Concepts in Video
  2. Concept-Based Video Search
  3. Evaluation
  4. Demonstration

Contact Address: psivt2009 at nii.ac.jp (replace 'at' by '@')

© PSIVT 2009 Organizing Committee All rights reserved.