About

Welcome to the Video, Image, and Sound Analysis Lab (VISAL) at the City University of Hong Kong! The lab is directed by Prof. Antoni Chan in the Department of Computer Science.

Our main research activities include:

  • Computer Vision, Surveillance
  • Machine Learning, Pattern Recognition
  • Computer Audition, Music Information Retrieval
  • Eye Gaze Analysis

For more information about our current research, please visit the projects and publication pages.

Opportunities for graduate students and research assistants: if you are interested in joining the lab, please see this information.

Latest News [more]

  • [Jul 1, 2021] Dr. Chan was promoted to Professor!
  • [Apr 29, 2021] Congratulations to Yufei for defending his thesis!
  • [Jan 27, 2021] Congratulations to Qi for defending his thesis!
  • [Sep 11, 2020] Congratulations to Sergio for defending his thesis!

Recent Publications [more]

  • A Comparative Survey: Benchmarking for Pool-based Active Learning.
    Xueying Zhan, Huan Liu, Qing Li, and Antoni B. Chan,
    In: International Joint Conf. on Artificial Intelligence (IJCAI), Survey Track, to appear Aug 2021.
  • Hierarchical Learning of Hidden Markov Models with Clustering Regularization.
    Hui Lan and Antoni B. Chan,
    In: 37th Conference on Uncertainty in Artificial Intelligence (UAI), Jul 2021.
  • Multiple-criteria Based Active Learning with Fixed-size Determinantal Point Processes.
    Xueying Zhan, Qing Li, and Antoni B. Chan,
    In: Subset Selection in Machine Learning: From Theory to Applications, ICML Workshop, Jul 2021.
  • Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations.
    Ziquan Liu, Yufei Cui, and Antoni B. Chan,
    In: ICML Workshop on Adversarial Machine Learning, Jul 2021.
  • Meta-Graph Adaptation for Visual Object Tracking.
    Qiangqiang Wu and Antoni B. Chan,
    In: IEEE International Conference on Multimedia and Expo (ICME), to appear Jul 2021 (oral).
  • A Generalized Loss Function for Crowd Counting and Localization.
    Jia Wan, Ziquan Liu, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2021.
  • Cross-View Cross-Scene Multi-View Crowd Counting.
    Qi Zhang, Wei Lin, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR):557-567, Jun 2021. [dataset]
  • Progressive Unsupervised Learning for Visual Object Tracking.
    Qiangqiang Wu, Jia Wan, and Antoni B. Chan,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2021 (oral).
  • Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression.
    Yufei Cui, Ziquan Liu, Qiao Li, Antoni B. Chan, and Chun Jason Xue,
    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2021.
  • Understanding the collinear masking effect in visual search through eye tracking.
    Janet H. Hsiao, Antoni B. Chan, Jeehye An, Su-Ling Yeh, and Jingling Li,
    Psychonomic Bulletin & Review, Jun 2021.

Recent Project Pages [more]

A Generalized Loss Function for Crowd Counting and Localization

We prove that the pixel-wise L2 loss and the Bayesian loss are special cases of our proposed loss function, and that both are suboptimal solutions to it. Since the predicted density is pushed toward the annotation positions, the predicted density map is sparse and can naturally be used for localization.
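
As a point of reference (this is the standard baseline, not the paper's formulation, and the notation here is ours), the pixel-wise L2 loss compares the predicted density map $\hat{D}$ against a ground-truth map built by smoothing the $n$ annotated positions $y_i$ with a Gaussian kernel:

$$
\mathcal{L}_{2}(\hat{D}, D) \;=\; \sum_{p} \big(\hat{D}(p) - D(p)\big)^{2},
\qquad
D(p) \;=\; \sum_{i=1}^{n} \mathcal{N}(p \mid y_i, \sigma^{2} I).
$$

The proposed generalized loss recovers this baseline (and the Bayesian loss) as special cases, while pushing the predicted density directly toward the annotation points themselves.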

Cross-View Cross-Scene Multi-View Crowd Counting

In this paper, we propose a cross-view cross-scene (CVCS) multi-view crowd counting paradigm, where the training and testing occur on different scenes with arbitrary camera layouts.

Fine-Grained Crowd Counting

In this paper, we propose fine-grained crowd counting, which differentiates a crowd into categories based on the low-level behavior attributes of the individuals (e.g. standing/sitting or violent behavior) and then counts the number of people in each category. To enable research in this area, we construct a new dataset of four real-world fine-grained counting tasks: traveling direction on a sidewalk, standing or sitting, waiting in line or not, and exhibiting violent behavior or not.

Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets

We propose a new multiple-object tracking (MOT) paradigm, tracking-by-counting, tailored for crowded scenes. Using crowd density maps, we jointly model the detection, counting, and tracking of multiple targets as a network flow program, which simultaneously finds the globally optimal detections and trajectories of multiple targets over the whole video.
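
For intuition, a generic min-cost network-flow formulation of MOT (a rough sketch in our own notation, not the paper's exact program) associates a binary flow variable with each candidate detection and with each temporal link between detections:

$$
\min_{f \in \{0,1\}} \;\; \sum_{i} c_{i}\, f_{i} \;+\; \sum_{(i,j) \in E} c_{ij}\, f_{ij}
\qquad \text{s.t.} \qquad
f_{i} \;=\; \sum_{j:(j,i) \in E} f_{ji} \;=\; \sum_{j:(i,j) \in E} f_{ij},
$$

where the unary costs $c_{i}$ score candidate detections, the pairwise costs $c_{ij}$ score links between consecutive frames (with source and sink edges allowing trajectories to start and end), and flow conservation makes every selected detection part of a complete trajectory. Tracking-by-counting, as described above, additionally couples these variables to the crowd density maps, so that detection, counting, and tracking are optimized jointly.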

Modeling Noisy Annotations for Crowd Counting

We model the annotation noise using a random variable with a Gaussian distribution, and derive the pdf of the crowd density value at each spatial location in the image. We then approximate the joint distribution of the density values (i.e., the distribution of density maps) with a full-covariance multivariate Gaussian density, and derive a low-rank approximation for a tractable implementation.
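
A sketch of the setup (the notation here is ours, not necessarily the paper's): each annotated position $y_i$ is perturbed by Gaussian noise, and the density value at pixel $p$ is a sum of kernels centered at the noisy annotations,

$$
D(p) \;=\; \sum_{i=1}^{n} k_{\sigma}\big(p - (y_i + \epsilon_i)\big),
\qquad
\epsilon_i \sim \mathcal{N}(0, \sigma_{n}^{2} I),
$$

so each $D(p)$ becomes a random variable; for a Gaussian kernel, its expectation is $\mathbb{E}[D(p)] = \sum_{i} k_{\sqrt{\sigma^{2} + \sigma_{n}^{2}}}(p - y_i)$, the convolution of the kernel with the noise density. The joint distribution over all pixels is then approximated by a multivariate Gaussian $\mathcal{N}(\mu, \Sigma)$, with the full covariance $\Sigma$ replaced by a low-rank approximation (e.g., diagonal plus low-rank) to keep the loss tractable.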

Recent Datasets and Code [more]

CVCS: Cross-View Cross-Scene Multi-View Crowd Counting Dataset

Examples, comparisons, and the dataset download link are available on the dataset page.

Fine-Grained Crowd Counting Dataset

Dataset for fine-grained crowd counting, which differentiates a crowd into categories based on the low-level behavior attributes of the individuals (e.g. standing/sitting or violent behavior) and then counts the number of people in each category.

Parametric Manifold Learning of Gaussian Mixture Models (PRIMAL-GMM) Toolbox

This is a Python toolbox for learning parametric manifolds of Gaussian mixture models (GMMs).

Eye Movement analysis with Switching HMMs (EMSHMM) Toolbox

This is a MATLAB toolbox for analyzing eye movement data using switching hidden Markov models (SHMMs), which are suited to cognitive tasks involving changes in cognitive state. It includes code for learning SHMMs for individuals, as well as for analyzing the results.
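
As a rough sketch of the model class (our notation, not necessarily the toolbox's), a switching HMM adds a high-level Markov chain over cognitive states $s_t$ on top of an HMM over low-level hidden states $z_t$ (e.g., regions of interest) that generate the observed fixations $x_t$:

$$
s_t \sim p(s_t \mid s_{t-1}),
\qquad
z_t \sim p(z_t \mid z_{t-1}, s_t),
\qquad
x_t \sim p(x_t \mid z_t, s_t),
$$

so a change in the high-level state switches which set of transition and emission parameters governs the eye movements.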

EgoDaily – Egocentric dataset for Hand Disambiguation

An egocentric hand detection dataset with variability in people, activities, and places, simulating daily-life situations.