Real-time multi-view human
activity recognition using a wireless camera network
Real-time recognition of human activities is increasingly important in camera-based surveillance applications, where suspicious behavior must be detected quickly, and in many interactive gaming applications. Our goal in this project is to design middleware services for distributed feature extraction, complemented by fusion strategies that effectively combine data from multiple views for fast and robust human activity recognition.
In recent work,
we have designed a score-based fusion technique for combining information from
multiple cameras that can handle arbitrary orientation of the subject with
respect to the cameras. Our fusion technique does not rely on a symmetric
deployment of the cameras and does not require that the camera network deployment
configuration be preserved between training and testing phases. To classify
human actions, we use motion information characterized by the spatio-temporal shape of a human silhouette over time. By
relying on feature vectors that are relatively easy to compute, our technique
lends itself to an efficient distributed implementation while maintaining a
high frame capture rate. We have evaluated the performance of our system under
different camera densities and view availabilities using an 8-node embedded
wireless camera network. We have also evaluated the performance of our system
in an online setting where the camera network is used to identify human actions
as they are being performed. In order to handle arbitrary orientation of a
subject with respect to the cameras and to handle asymmetric deployment of
cameras, our fusion approach relies on first systematically collecting training
data from all view-angle sets and then using the knowledge of relative camera
orientation during the fusion stage.
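As an illustration, the fusion stage described above might be sketched as follows. This is a simplified model, not the exact formulation used in our system: the discretization into 8 view angles, the dot-product template-matching scores, and the cyclic-shift alignment of per-camera score matrices are all assumptions made for the example.

```python
import numpy as np

NUM_ACTIONS = 4   # number of action classes (illustrative)
NUM_VIEWS = 8     # discretized view angles, 45 degrees apart (assumption)

def camera_score_matrix(feature, templates):
    """Per-camera scores: similarity of an observed feature vector to a
    stored template for every (action, view-angle) pair.
    `templates` has shape (NUM_ACTIONS, NUM_VIEWS, feature_dim)."""
    return np.einsum('avd,d->av', templates, feature)

def fuse(score_matrices, view_offsets):
    """Score-level fusion across cameras. Each camera's score matrix is
    shifted along the view axis by that camera's known angular offset
    (in units of the view discretization), so all cameras vote in a
    common orientation frame; the aligned matrices are summed and the
    best-scoring action is returned."""
    fused = np.zeros((NUM_ACTIONS, NUM_VIEWS))
    for scores, offset in zip(score_matrices, view_offsets):
        fused += np.roll(scores, offset, axis=1)
    action = int(np.argmax(fused.max(axis=1)))
    return action, fused
```

Because the alignment uses only the relative camera orientations known at deployment time, this scheme does not require the training and testing camera configurations to match, which is the property the fusion technique is designed to provide.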
For this study, we collected a significant amount of multi-view data of subjects performing various actions. This data could potentially be useful for related research in human activity recognition. The multi-view action dataset is available here.
Supported by the DoD EPSCoR project on surveillance in urban environments using camera networks.
Collaborators: Natalia Schmidt, Xin Li, Brian Woerner, and Mathew Valenti
Publications
S. Ramagiri, R. Kavi, and V. Kulathumani, "Real-time multi-view action recognition using a wireless camera network", ICDSC 2011
Student members
Srikanth Parupati [M.S. student]
Sricharan Ramagiri [M.S. student]
Rahul Kavi [Ph.D. student]