KAUST Department: Visual Computing Center (VCC)
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical Engineering Program
VCC Analytics Research Group
Permanent link to this record: http://hdl.handle.net/10754/562560
Abstract: This paper addresses the problem of modeling video sequences of dynamic swarms (DSs). We define a DS as a large layout of stochastically repetitive spatial configurations of dynamic objects (swarm elements) whose motions exhibit local spatiotemporal interdependency and stationarity, i.e., the motions are similar in any small spatiotemporal neighborhood. Examples of DSs abound in nature, e.g., herds of animals and flocks of birds. To capture the local spatiotemporal properties of the DS, we present a probabilistic model that learns both the spatial layout of swarm elements (based on low-level image segmentation) and their joint dynamics, which are modeled as linear transformations. To this end, a spatiotemporal neighborhood is associated with each swarm element, in which local stationarity is enforced both spatially and temporally. We assume that the prior on the swarm dynamics is distributed according to a Markov random field (MRF) in both space and time. Embedding this model in a maximum a posteriori (MAP) framework, we iterate between learning the spatial layout of the swarm and its dynamics. We learn the swarm transformations using iterated conditional modes (ICM), which alternates between estimating these transformations and updating their distribution in the spatiotemporal neighborhoods. We demonstrate the validity of our method by conducting experiments on real and synthetic video sequences. Real sequences of birds, geese, robot swarms, and pedestrians demonstrate the applicability of our model to real-world data. © 2012 Elsevier Inc. All rights reserved.
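The ICM-style alternating update described in the abstract can be illustrated with a toy sketch (not the paper's actual model): each element holds a scalar "transformation" observed with noise, and an MRF smoothness prior over a 1-D spatial neighborhood enforces local stationarity. The function name `icm_smooth`, the candidate-label set, and the quadratic energy terms are all illustrative assumptions for this simplified setting.

```python
import numpy as np

def icm_smooth(observed, candidates, lam=1.0, n_iters=10):
    """Toy iterated conditional modes (ICM) on a 1-D chain MRF.

    Each element greedily picks the candidate value minimizing a data
    term (squared distance to its noisy observation) plus a pairwise
    smoothness term with its spatial neighbors -- a simplified analogue
    of enforcing local stationarity on per-element swarm transformations.
    This is an illustrative sketch, not the paper's formulation.
    """
    # Initialize each label to the candidate nearest its observation.
    labels = np.array(
        [candidates[np.argmin(np.abs(candidates - o))] for o in observed],
        dtype=float,
    )
    for _ in range(n_iters):
        for i in range(len(labels)):
            best, best_cost = labels[i], np.inf
            for c in candidates:
                cost = (c - observed[i]) ** 2          # data term
                if i > 0:
                    cost += lam * (c - labels[i - 1]) ** 2  # left neighbor
                if i < len(labels) - 1:
                    cost += lam * (c - labels[i + 1]) ** 2  # right neighbor
                if cost < best_cost:
                    best, best_cost = c, cost
            labels[i] = best  # conditional mode given current neighbors
    return labels
```

Because each site update can only lower the joint energy, ICM converges to a local minimum; in the paper this coordinate-wise scheme is embedded in the larger MAP iteration over layout and dynamics.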
Citation: Ghanem, B., & Ahuja, N. (2013). Modeling dynamic swarms. Computer Vision and Image Understanding, 117(1), 1–11. doi:10.1016/j.cviu.2012.09.002
Sponsors: The support of the Office of Naval Research under Grant N00014-09-1-0017 and the National Science Foundation under Grant IIS 08-12188 is gratefully acknowledged.