Friday, September 3

8:00 — 9:20


9:20 — 10:20


10:20 — 10:40


10:40 — 12:00

Oral Session — Techniques and Applications for Smart Cameras (part I)

An Adaptive Method for Energy-Efficiency in Battery-Powered Embedded Smart Cameras

Mauricio Casares and Senem Velipasalar, University of Nebraska-Lincoln

With the introduction of battery-powered wireless embedded smart cameras, it has become viable to deploy large numbers of spatially distributed cameras with more flexibility in camera placement. However, many challenges remain to be addressed to build operational, battery-powered, wireless smart-camera networks. Battery life is limited, and video processing tasks such as foreground detection and tracking consume a considerable amount of energy. Thus, it is essential to design and implement lightweight algorithms and methods that increase the energy efficiency of each camera node, and thus the overall lifetime of the camera network. We present an adaptive method based on tracking that significantly decreases the energy consumption of the embedded camera. The microprocessor on the camera board is sent to an idle state depending on the amount of activity in the scene. The amount of time the camera remains in idle mode is adaptively changed based on the speeds of the tracked objects. Instead of continuously capturing and processing every frame, the camera drops frames during idle mode while preserving the tracking performance, and thus the system's reliability, at the same time. We present experimental results showing the energy efficiency of the proposed method and the gain in battery life. The proposed methodology provides 25% to 37% savings in energy consumption, and a 45.83% to 65% increase in battery life, depending on the number of objects in the scene and their speeds.
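
The speed-dependent idle policy described in the abstract can be sketched as follows. This is a minimal illustration only: the mapping from object speed to idle duration, and all constants, are assumptions, not the authors' actual method.

```python
# Illustrative sketch of an adaptive idle-time policy for a battery-powered
# smart camera: slow or absent objects let the processor idle longer, fast
# objects force frequent wake-ups. The formula and constants are hypothetical.

def idle_duration(speed, min_idle=0.1, max_idle=2.0, ref_speed=50.0):
    """Return idle time in seconds given the fastest tracked object's speed
    (e.g. in pixels/second). An empty scene yields the maximum idle time."""
    if speed <= 0:
        return max_idle
    # Fraction approaches 1 for slow objects, 0 for very fast ones.
    frac = ref_speed / (ref_speed + speed)
    return min_idle + (max_idle - min_idle) * frac
```

A tracking loop would sleep for `idle_duration(...)` after each processed frame, dropping the intervening frames, which is how processing load (and hence energy) is traded against tracking granularity.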

Square Patch Feature Based Face Detection Architecture for High Resolution Smart Camera

Yasir Mohd Mustafah, Abbas Bigdeli, Amelia Azman and Brian Lovell, The University of Queensland, National ICT Australia

Recognizing faces in a crowd in real time is a key feature that would significantly enhance intelligent surveillance systems. Previously, we proposed a high-resolution smart camera system that can be used for crowd surveillance. The challenge is that, with the increasing speed and resolution of image sensors, a fast and robust face detection system is required for real-time operation. In this paper, we propose a face detection architecture suitable for implementation on a smart camera system. The face detection algorithm is based on a new weak classifier type that we call the square patch feature. The targeted platform is a low-cost Spartan-3 FPGA. Simulation results show that the proposed face detection architecture can run up to 12 times faster than an equivalent software-based face detector. Parallelizing the feature classification modules could improve the performance further.

Distributed Real-Time Stereo Matching on Smart Cameras

Christian Zinner and Martin Humenberger, AIT Austrian Institute of Technology

This work introduces a real-time capable realization of an area-based stereo matching algorithm that is distributed across two embedded smart camera platforms. Combining common industrial smart cameras in this way enables real-time stereo vision as a new application domain for these platforms. With the proposed method, the computational load is shared between the two cameras, each equipped with a digital signal processor. This results in efficient processing of a computationally intensive stereo matching algorithm; the processing speed is significantly faster than that of a single-chip solution. Besides that, various optimizations developed especially for digital signal processors further increase the performance. On input images of 450 × 375 pixels and a disparity range of 60, the system achieves a stereo processing performance of 11.8 frames per second. The stereo matching quality is evaluated using the Middlebury stereo database, where it is the only purely embedded algorithm.

Collaborative Sensing via Local Negotiations in Ad Hoc Networks of Smart Cameras

University of Ontario Institute of Technology

The paper develops an ad hoc network of active pan/tilt/zoom (PTZ) and passive wide field-of-view (FOV) cameras capable of carrying out observation tasks autonomously. The network is assumed to be uncalibrated, lacks a central controller, and relies upon local decision making at each node and inter-node negotiations for its overall behavior. To this end, we develop intelligent camera nodes (both active and passive) that can perform multiple observation tasks simultaneously. We also present a negotiation protocol that allows camera nodes to set up collaborative tasks in a purely distributed manner. Camera assignment conflicts that invariably arise in such networks are naturally and gracefully handled through at-node processing and inter-node negotiations. We expect the proposed camera network to be highly scalable due to the lack of any centralized control.

12:00 — 14:00


14:00 — 15:40

Oral Session — Techniques and Applications for Smart Cameras (part II)

Adaptive Color Transformation for Person Re-identification in Camera Networks

Clemens Siebler, Keni Bernardin and Rainer Stiefelhagen, Karlsruhe Institute of Technology

The problem of observing and finding a specific person again in a camera network is referred to as person re-identification or person recognition. It is an important topic in computer vision, because in real-world scenarios many applications can profit from automated re-identification. Large security settings in particular, such as airports or train stations, employ multi-camera systems. There, re-identifying a suspicious person in another camera is often done manually by an operator, which is a tedious, error-prone and expensive task. Therefore, computer-aided assistance is desirable. Matching one person against multiple given targets is still an active topic in recent research. Color features are often used to describe the appearance of a person because of their robustness to changes in pose and viewpoint. However, using color features to re-identify persons can be difficult, because those features are sensitive to the given lighting conditions, and in real-world scenarios those conditions commonly differ between cameras. Real-world applications therefore require that the differing illumination conditions at the camera sites be compensated by automated algorithms. Often, such algorithms employ a fixed training phase in which a mapping between the colors in a pair of cameras is established. While this helps to improve person matching accuracy, performance decreases when the illumination conditions at the camera sites change, because the color mapping learned during the fixed training phase no longer holds. Therefore, rendering the scene illumination more constant can help the pre-trained function provide more consistent results.
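
The kind of inter-camera color mapping mentioned above can be illustrated with a per-channel linear transform fit by least squares on matched color samples. This is purely an assumption for illustration; the paper's adaptive transformation may differ.

```python
# Illustrative per-channel linear color mapping between two cameras, fit by
# ordinary least squares on corresponding intensity samples from matched
# regions. Hypothetical sketch, not the authors' adaptive method.

def fit_channel_map(src, dst):
    """src, dst: equal-length lists of intensities for one color channel
    observed for the same surfaces in camera A and camera B.
    Returns a function mapping camera-A intensities to camera-B's space."""
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(src, dst))
    var = sum((s - mean_s) ** 2 for s in src)
    a = cov / var                 # gain
    b = mean_d - a * mean_s       # offset
    return lambda x: a * x + b
```

A fixed training phase would fit such a map once per camera pair; the failure mode the abstract describes is that `a` and `b` become stale when the illumination later changes.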

Strategies for Maximizing Coverage of Static Targets in Smart Camera Networks

Vikram Munishwar and Nael Abu-Ghazaleh, Binghamton University (State University of New York)

Smart camera networks are becoming increasingly popular in a number of application domains. In many applications, such as habitat monitoring or surveillance, cameras are required to collaboratively track objects. In smart camera networks, camera coverage control is necessary to allow automatic tracking of targets without human intervention, allowing these systems to scale. In this paper, we consider the problem of automatically controlling the cameras to maximize coverage of a set of targets. We formulate an optimization problem with the goal of maximizing the number of covered targets. Since the optimization problem is NP-hard, even for static targets, we propose a computationally efficient heuristic that reaches a near-optimal solution. Centralized solutions achieve excellent coverage and can work well for small-scale networks; however, they incur significant communication cost in large-scale networks. As a result, we propose an algorithm that spatially decomposes the network and computes optimal solutions for the individual partitions. By decomposing the partitions in a way that minimizes the dependencies between them, this approach achieves coverage quality close to that of the centralized optimal solution, with an overhead and reaction time similar to those of distributed solutions.
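
A simple greedy heuristic for this class of coverage problem can be sketched as follows: each camera in turn selects the orientation that covers the most not-yet-covered targets. This is a generic sketch of the problem structure, not necessarily the paper's heuristic.

```python
# Greedy target-coverage heuristic: for each camera, choose the candidate
# orientation (modeled as the set of target ids it would cover) that adds
# the most uncovered targets. Illustrative sketch only.

def greedy_coverage(cameras):
    """cameras: list (one entry per camera) of candidate orientations,
    each orientation given as a set of target ids it covers.
    Returns (chosen orientation per camera, set of all covered targets)."""
    covered = set()
    choices = []
    for orientations in cameras:
        # Pick the orientation with maximum marginal gain.
        best = max(orientations, key=lambda o: len(o - covered))
        choices.append(best)
        covered |= best
    return choices, covered
```

Such greedy schemes are attractive here because the covered-target objective is submodular, so the marginal-gain rule gives solutions with a known approximation quality; the partitioned variant in the paper then trades this centralized view for lower communication cost.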

Uniform Access to the Cameraverse

Gregor Miller and Sidney Fels, University of British Columbia

We introduce a new camera access framework that provides uniform access to many different camera types and includes a novel addressing scheme specifically designed for cameras. Attempts have been made in the past to provide simple access to cameras; however, these are generally OS-specific or lacking in functionality. We present a novel scheme called the Unified Camera Framework, which works across operating systems, provides access to native images through an image descriptor, and defines the cameraverse using a unique addressing protocol. A unified configuration model is presented that allows manipulation of camera parameters to the level each camera supports. The ideas presented are validated in the form of a proof-of-concept implementation called the All Seeing Eye.

A Neuromorphic Smart Camera for Real-time 360° Undistorted Panoramas

Ahmed Nabil Belbachir and Roman Pflugfelder, AIT Austrian Institute of Technology

This paper presents a novel neuromorphic camera system rotating at high speed (1 to 4 rotations/sec) to acquire 360° panoramas in real time. It exploits the high temporal resolution, high dynamic range and sparse visual-information representation of a neuromorphic vision sensor with address-event (AE) signaling, mounted on a high-speed mechanical rotation device. Contrary to state-of-the-art panorama cameras (e.g., rotational or catadioptric cameras), this camera system can deliver several undistorted 360° panoramas per second at constant image resolution, with efficient edge extraction of the scene under real illumination conditions and without any further computation. The system could establish new sensing capabilities in challenging applications such as real-time environmental awareness for robotics and surveillance. After introducing panorama systems and the neuromorphic dual-line dynamic vision sensor, the new camera concept is presented, together with a comparative analysis against state-of-the-art cameras. The concept, the camera design and resulting images obtained with an existing 256-pixel line sensor are presented.

15:40 — 16:00