About

Welcome to the Video, Image, and Sound Analysis Lab (VISAL) at the City University of Hong Kong! The lab is directed by Dr. Antoni Chan in the Department of Computer Science.

Our main research activities include:

  • Computer Vision, Surveillance
  • Machine Learning, Pattern Recognition
  • Computer Audition, Music Information Retrieval
  • Eye Gaze Analysis

For more information about our current research, please visit the projects and publications pages.

Opportunities for graduate students and research assistants – if you are interested in joining the lab, please check this information.

Latest News [more]

  • [Jan 27, 2021] Congratulations to Qi for defending his thesis!
  • [Sep 11, 2020] Congratulations to Sergio for defending his thesis!
  • [Nov 28, 2019] Congratulations to Weihong for defending his thesis!
  • [Nov 28, 2019] Congratulations to Tianyu for defending his thesis!

Recent Publications [more]

  • Meta-Graph Adaptation for Visual Object Tracking.
    Qiangqiang Wu and Antoni B. Chan,
    In: IEEE International Conference on Multimedia and Expo (ICME), to appear Jul 2021 (oral).
  • A Generalized Loss Function for Crowd Counting and Localization.
    Jia Wan, Ziquan Liu, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), to appear Jun 2021.
  • Cross-View Cross-Scene Multi-View Crowd Counting.
    Qi Zhang, Wei Lin, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), to appear Jun 2021.
  • Progressive Unsupervised Learning for Visual Object Tracking.
    Qiangqiang Wu, Jia Wan, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), to appear Jun 2021 (oral).
  • Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression.
    Yufei Cui, Ziquan Liu, Qiao Li, Antoni B. Chan, and Chun Jason Xue,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), to appear Jun 2021.
  • Do portrait artists have enhanced face processing abilities? Evidence from hidden Markov modeling of eye movements.
    Janet H. Hsiao, Jeehye An, Yueyuan Zheng, and Antoni B. Chan,
    Cognition, 211(104616), Jun 2021.
  • Eye Movement analysis with Hidden Markov Models (EMHMM) with co-clustering.
    Janet H. Hsiao, Hui Lan, Yueyuan Zheng, and Antoni B. Chan,
    Behavior Research Methods, to appear 2021.
  • Applying the Hidden Markov Model to Analyze Urban Mobility Patterns: An Interdisciplinary Approach.
    Becky P.Y. Loo, Feiyang Zhang, Janet H. Hsiao, Antoni B. Chan, and Hui Lan,
    Chinese Geographical Science, 31(1):1-13, Feb 2021.
  • Fine-Grained Crowd Counting.
    Jia Wan, Nikil S. Kumar, and Antoni B. Chan,
    IEEE Trans. on Image Processing (TIP), 30:2114-2126, Jan 2021. [code | data]
  • PRIMAL-GMM: PaRametrIc MAnifold Learning of Gaussian Mixture Models.
    Ziquan Liu, Lei Yu, Janet H. Hsiao, and Antoni B. Chan,
    IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), to appear 2021. [code]

Recent Project Pages [more]

Fine-Grained Crowd Counting

In this paper, we propose fine-grained crowd counting, which differentiates a crowd into categories based on the low-level behavior attributes of the individuals (e.g. standing/sitting or violent behavior) and then counts the number of people in each category. To enable research in this area, we construct a new dataset of four real-world fine-grained counting tasks: traveling direction on a sidewalk, standing or sitting, waiting in line or not, and exhibiting violent behavior or not.
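
At its core, fine-grained counting splits the usual single crowd count into per-category counts. A minimal sketch of that bookkeeping (not the paper's density-map method; the annotation triples below are invented for illustration, with categories borrowed from the paper's standing/sitting example):

```python
from collections import Counter

# Toy annotations: one (x, y, category) triple per person.
annotations = [
    (12, 40, "standing"), (30, 44, "sitting"),
    (55, 41, "standing"), (70, 39, "standing"),
]

def fine_grained_counts(points):
    """Return the total crowd count plus a per-category breakdown."""
    per_category = Counter(cat for _, _, cat in points)
    return sum(per_category.values()), dict(per_category)

total, breakdown = fine_grained_counts(annotations)
print(total, breakdown)  # prints: 4 {'standing': 3, 'sitting': 1}
```

The actual method in the paper predicts per-category density maps from images; this sketch only shows the counting target those maps are trained to reproduce.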

Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets

We propose a new multiple-object tracking (MOT) paradigm, tracking-by-counting, tailored for crowded scenes. Using crowd density maps, we jointly model detection, counting, and tracking of multiple targets as a network flow program, which simultaneously finds the globally optimal detections and trajectories of multiple targets over the whole video.

Modeling Noisy Annotations for Crowd Counting

We model the annotation noise using a Gaussian random variable and derive the pdf of the crowd density value at each spatial location in the image. We then approximate the joint distribution of the density values (i.e., the distribution of density maps) with a full-covariance multivariate Gaussian density, and derive a low-rank approximation for a tractable implementation.
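
The quantity being modeled is easy to probe numerically: a density map places a Gaussian kernel at each annotated head, so jittering the annotation positions induces a distribution over the density value at every pixel. The paper derives this distribution in closed form; the sketch below only estimates its mean and variance by Monte Carlo, with all coordinates, bandwidths, and noise levels invented for illustration:

```python
import math
import random

def gaussian_kernel(x, y, cx, cy, bw=2.0):
    """Isotropic Gaussian kernel used to build a crowd density map."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return math.exp(-d2 / (2 * bw * bw)) / (2 * math.pi * bw * bw)

def density_stats_under_noise(x, y, annotations, sigma=1.0, n=5000, seed=0):
    """Monte Carlo estimate of the mean and variance of the density value
    at (x, y) when each annotation point is perturbed by Gaussian noise
    with standard deviation sigma."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        val = 0.0
        for cx, cy in annotations:
            val += gaussian_kernel(x, y,
                                   cx + rng.gauss(0, sigma),
                                   cy + rng.gauss(0, sigma))
        samples.append(val)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

mean, var = density_stats_under_noise(10.0, 10.0, [(9.0, 10.0), (14.0, 11.0)])
```

A closed-form Gaussian fit to this per-pixel distribution, plus a low-rank joint model across pixels, is what makes the paper's loss tractable, in contrast to the brute-force sampling above.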

Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over Simplex

We propose a generic framework to approximate the output probability distribution induced by a Bayesian NN model posterior with a parameterized model, in an amortized fashion. The aim is to approximate the predictive uncertainty of a specific Bayesian model while alleviating the heavy workload of MC integration at test time.

Compare and Reweight: Distinctive Image Captioning Using Similar Images Sets

To improve the distinctiveness of image captions, we first propose a metric, between-set CIDEr (CIDErBtw), to evaluate the distinctiveness of a caption with respect to those of similar images, and then propose several new training strategies for image captioning based on the new distinctiveness measure.
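
The idea of scoring a caption against the captions of *similar* images can be illustrated with a much simpler stand-in for CIDEr: plain n-gram cosine similarity averaged over the similar-image set (lower = more distinctive). Everything below, including the captions, is invented for illustration and is not the paper's metric:

```python
import math
from collections import Counter

def ngram_counts(caption):
    """Unigram and bigram counts of a caption."""
    toks = caption.lower().split()
    counts = Counter(toks)              # unigrams
    counts.update(zip(toks, toks[1:]))  # bigrams
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def between_set_similarity(caption, similar_captions):
    """Average n-gram similarity of a caption to the captions of similar
    images. Lower means more distinctive (cf. CIDErBtw, which uses CIDEr
    instead of this plain cosine)."""
    c = ngram_counts(caption)
    sims = [cosine(c, ngram_counts(s)) for s in similar_captions]
    return sum(sims) / len(sims)

similar = ["a man riding a horse on a beach",
           "a person riding a brown horse"]
generic = between_set_similarity("a man riding a horse", similar)
distinct = between_set_similarity("a cowboy gallops along the shoreline at sunset", similar)
```

A generic caption scores high against the similar set, while a distinctive one scores low; the paper turns this kind of between-set score into both an evaluation metric and a training signal.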

Recent Datasets and Code [more]

Fine-Grained Crowd Counting Dataset

Dataset for fine-grained crowd counting, which differentiates a crowd into categories based on the low-level behavior attributes of the individuals (e.g. standing/sitting or violent behavior) and then counts the number of people in each category.

Parametric Manifold Learning of Gaussian Mixture Models (PRIMAL-GMM) Toolbox

This is a Python toolbox for learning parametric manifolds of Gaussian mixture models (GMMs).

Eye Movement analysis with Switching HMMs (EMSHMM) Toolbox

This is a MATLAB toolbox for analyzing eye movement data with switching hidden Markov models (SHMMs), designed for cognitive tasks that involve cognitive state changes. It includes code for learning SHMMs for individuals, as well as for analyzing the results.
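
The toolbox itself is MATLAB, but the basic building block it relies on — the likelihood of a fixation sequence under an HMM — is easy to sketch in Python for illustration. Below, the two gaze states and all probabilities are invented, and fixations are reduced to discrete face ROIs (the toolbox instead uses Gaussian emissions over fixation coordinates and adds state switching on top):

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Two hypothetical gaze states (eyes-focused vs. nose-focused viewing),
# emitting one of three face ROIs: 0=eyes, 1=nose, 2=mouth.
pi = [0.6, 0.4]                     # initial state probabilities
A = [[0.8, 0.2], [0.3, 0.7]]        # state transition matrix
B = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]  # per-state ROI emission probabilities
ll = forward_loglik([0, 0, 1, 1, 2], pi, A, B)
```

Comparing such per-individual likelihoods across models is what lets EMHMM-style analyses cluster participants by eye movement pattern.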

EgoDaily – Egocentric dataset for Hand Disambiguation

Egocentric hand detection dataset with variability in people, activities, and places, to simulate daily life situations.

CityStreet: Multi-view crowd counting dataset

Datasets for multi-view crowd counting in wide-area scenes. Includes our CityStreet dataset, as well as counting ground truth and metadata for multi-view counting on PETS2009 and DukeMTMC.