AI Privacy and Security Seminars


This is the homepage of the AI Privacy & Security (AI-PriSec) Interest Group, which brings together researchers and practitioners from around the world who are broadly interested in this topic. For the time being, it features recurring seminars a couple of times a month, usually on Sundays around 10 AM (HK time).

Get Involved

  • Subscribe to our WeChat Public Account to receive seminar and job announcements
  • Join our WeChat Group – this is particularly useful for students, who maintain an active working group with monthly (virtual) meet-ups
  • Subscribe to our Bilibili Channel, where we live-stream the talks and keep their recordings

Upcoming Seminars

  • 30 April 2022, 20:00 (HK time)
    Bowen Tian (Sun Yat-sen University)
    Anomaly Detection by Leveraging Incomplete Anomalous Knowledge with Anomaly-Aware Bidirectional GANs
    [Recording]

    Abstract: The goal of anomaly detection is to identify anomalous samples among normal ones. In this paper, a small number of anomalies are assumed to be available at the training stage, but they are collected from only a few anomaly types, leaving the majority of anomaly types entirely unrepresented in the collected anomaly dataset. To effectively leverage this kind of incomplete anomalous knowledge, we propose to learn a probability distribution that not only models the normal samples but is also guaranteed to assign low density values to the collected anomalies. To this end, an anomaly-aware generative adversarial network (GAN) is developed, which, in addition to modeling the normal samples as most GANs do, explicitly avoids assigning probability mass to the collected anomalous samples. Moreover, to facilitate the computation of anomaly detection criteria such as reconstruction error, the proposed anomaly-aware GAN is designed to be bidirectional, attaching an encoder to the generator. Extensive experimental results demonstrate that our proposed method effectively makes use of the incomplete anomalous information, leading to significant performance gains over existing methods.

    Bio: Bowen Tian received his B.S. degree in Mathematics from the South China University of Technology, Guangdong, China, in 2020. He is now pursuing an M.S. degree at Sun Yat-sen University. His research interests include statistical machine learning, anomaly detection, and deep learning.
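    The reconstruction-error criterion mentioned in the abstract can be sketched in a few lines. This is a minimal illustration only: the encoder/generator below are hypothetical stand-ins (a toy projection onto one axis), not the paper's trained networks.

```python
import numpy as np

def reconstruction_score(x, encoder, generator):
    """Anomaly score as the reconstruction error ||x - G(E(x))||^2.

    `encoder` and `generator` stand in for the trained networks of a
    bidirectional GAN; higher scores indicate more anomalous samples.
    """
    x_hat = generator(encoder(x))
    return float(np.sum((x - x_hat) ** 2))

# Toy stand-ins: an "autoencoder" that keeps only the first coordinate,
# so samples lying on that axis reconstruct perfectly.
encoder = lambda x: x[:1]
generator = lambda z: np.concatenate([z, np.zeros(1)])

normal = np.array([2.0, 0.0])   # on the modeled manifold
anomaly = np.array([2.0, 3.0])  # off the manifold

print(reconstruction_score(normal, encoder, generator))   # 0.0
print(reconstruction_score(anomaly, encoder, generator))  # 9.0
```

    Samples the model can reconstruct get low scores; samples off the learned manifold get high scores, which is the thresholding criterion used at test time.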

Past Seminars

  • 10 April 2022, 10:00 (HK time)
    Shiyao Ma (Dalian Minzu University)
    Towards Efficient Semantic Communication Systems via Deep Learning Techniques
    [Recording] [Slide]

    Abstract: Communication technology is rapidly evolving to cater to emerging industries such as self-driving cars and the smart Internet of Things. As we enter the era of connected intelligence, a fundamental paradigm shift is necessary to meet the urgent demands for real-time communication, autonomous decision-making, and efficient distributed processing. Semantic communication has been proposed in this context. In this talk, we introduce the relevant background of semantic communication and several semantic communication systems that have been proposed.

    Bio: Shiyao Ma received his B.Eng. degree from Heilongjiang University, China, in 2021. He is currently studying for a master's degree at Dalian Minzu University. His research interests include security and privacy protection, and data mining.

  • 3 April 2022, 10:00 (HK time)
    Lin Zheng (University of Hong Kong)
    Linear Complexity Randomized Self-attention Mechanism
    [Recording] [Slide] [Paper]

    Abstract: The attention mechanism is the core building block of many state-of-the-art models across various domains. It is powerful and expressive in capturing complicated and long-range dependencies among input elements. Nevertheless, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. In this talk, we will first discuss current strategies for reducing the time/space complexity of attention, and then focus on RFA, a particular linear attention variant that uses random feature methods to approximate the softmax function. Finally, we introduce a novel way to understand the approximation bias in RFA from the perspective of self-normalized importance sampling.

    Bio: Lin Zheng received his B.E. degree from Sun Yat-sen University (SYSU) and is now a Ph.D. student at the University of Hong Kong (HKU), supervised by Lingpeng Kong. His research interests include machine learning and probabilistic inference.
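    The linearization trick behind random-feature attention can be stated compactly. This is a generic sketch of the idea, not necessarily the exact formulation presented in the talk: given a randomized feature map φ that approximates the exponential (softmax) kernel, the attention sums can be rearranged so the sequence-length-dependent parts are computed once.

```latex
% Random-feature approximation of the softmax kernel:
\exp(q^\top k) \;\approx\; \phi(q)^\top \phi(k)

% Substituting into softmax attention over a length-N sequence
% and rearranging the sums:
\mathrm{Attn}(q_t)
  = \frac{\sum_{i=1}^{N} \exp(q_t^\top k_i)\, v_i}
         {\sum_{i=1}^{N} \exp(q_t^\top k_i)}
  \;\approx\;
    \frac{\phi(q_t)^\top \sum_{i=1}^{N} \phi(k_i)\, v_i^\top}
         {\phi(q_t)^\top \sum_{i=1}^{N} \phi(k_i)}
```

    Both sums over i are shared across all queries, so time and memory drop from O(N²) to O(N); the ratio form is also what makes this a self-normalized importance-sampling estimator, the viewpoint the talk uses to analyze its bias.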

  • 20 March 2022, 10:00 (HK time)
    Chenhan Zhang (University of Technology Sydney)
    Information Bottleneck in Graph Structured Data
    [Recording] [Slide]

    Abstract: Graph Neural Networks (GNNs) are powerful at fusing information from network structure and node features. However, noise and redundancy in graph data mean that: 1) the prediction results lack interpretability; 2) GNNs are fragile to adversarial attacks. The theory of the information bottleneck (IB) can provide an effective way to optimally balance the expressiveness and robustness of the learned representation of graph data. In this talk, the presenter will introduce enlightening instances of using IB to prune graph data so as to improve robustness while maintaining expressiveness.

    Bio: Chenhan Zhang received his B.E. degree from the University of Wollongong and his M.S. degree from City University of Hong Kong, and is now a Ph.D. candidate at the University of Technology Sydney. His research interests include graph neural networks and the robustness of machine learning.
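    The expressiveness/robustness trade-off the abstract describes is exactly the IB objective. As a generic sketch (standard IB notation, not necessarily the talk's): given input graph data D and prediction target Y, one seeks a representation Z that

```latex
% Information bottleneck objective for a learned representation Z:
% keep what predicts the labels, compress away the rest of the input.
\max_{Z}\; I(Z;\,Y) \;-\; \beta\, I(Z;\,D)
```

    where I(·;·) denotes mutual information and β > 0 controls the trade-off: a larger β compresses more of the noisy or redundant graph structure away, which is what makes IB-pruned graphs both more interpretable and harder to attack.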

  • 6 March 2022, 10:00 (HK time)
    Yi Liu (CityU)
    BadBatch: Practical Data Poisoning Attack against Federated Learning via Manipulating Local Training Batch
    [WeMeet Registration] [Live Stream]

    Abstract: Federated learning (FL), as a privacy-friendly collaborative learning framework, benefits machine-learning-powered systems and services. Despite its advantages, FL is known to be vulnerable to poisoning attacks, in which the adversary controls a set of clients to poison either the local training dataset or the local model update so as to degrade global model performance. Existing poisoning attacks have demonstrated vast damage to FL, but they either require strong adversarial assumptions about the model and dataset knowledge of a certain number of clients, or rely on optimization-based attack methods that are typically expensive in a decentralized environment. In this paper, we propose a new practical poisoning attack against FL, named BadBatch, which can be launched in realistic FL settings and does not rely on optimization. Our key observation is that, by simply manipulating the local training batch, an adversary is capable of influencing global model performance and convergence. We propose gradient-oriented and model-update-oriented attack strategies that focus on increasing the stochastic error of stochastic gradient descent and forcing the model to forget learned generalization features by finding bad batches. We also theoretically analyze the error upper bound and time complexity of our attack. Finally, we perform extensive experiments on two public datasets for convex and non-convex models, and evaluate our attacks against the latest defense, i.e., Byzantine-robust aggregation. Our evaluation results show that BadBatch achieves higher attack performance than existing methods in a practical FL setting, shedding light on data poisoning attacks against practical FL.

    Bio: Yi Liu received the B.Eng. degree in Network Engineering from Heilongjiang University, Harbin, China, in 2019. He is currently pursuing the Ph.D. degree with the Department of Computer Science, City University of Hong Kong, Hong Kong, China. His research interests include security and privacy, federated learning, edge computing, and blockchain. Home: https://yiliucs.github.io/

  • 27 February 2022, 10:00 (HK time)
    Zijing Ou (Tencent AI Lab)
    Model Explanation with Shapley Values
    [Recording] [Slide]

    Abstract: Deep neural networks (DNNs) are becoming increasingly important in many applications, yet they lack explanations for their excellent performance. The Shapley value provides a theoretically grounded and practical explainer for DNNs. In this talk, the presenter will introduce the most recent progress in model explanation with Shapley values, including their estimation, uncertainty, and potential research directions.

    Bio: Zijing Ou recently graduated with a B.E. degree from Sun Yat-sen University and now works as an intern at Tencent AI Lab. His research interests include approximate inference, energy-based models, and interpretable AI. His research has been published at venues including IJCAI, ACL, and EMNLP, and he serves as a reviewer for ICML, IJCAI, ACL, etc. Home: https://j-zin.github.io/
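    The estimation problem mentioned in the abstract is commonly tackled with Monte Carlo permutation sampling. The following is a minimal, generic sketch of that estimator; `value_fn` is a hypothetical stand-in for evaluating a model on a subset of features, not code from the talk.

```python
import random

def shapley_values(players, value_fn, n_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values: sample random
    permutations and average each player's marginal contribution."""
    players = list(players)
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(coalition)
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev   # marginal contribution of p
            prev = cur
    return {p: total / n_samples for p, total in phi.items()}

# Toy additive "game": each feature contributes its weight, so the
# exact Shapley values equal the weights themselves.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_values(weights, v, n_samples=500))
```

    For this additive toy game every permutation gives the same marginal contributions, so the estimate matches the exact values; for a real model the interesting questions are exactly the ones the talk covers: how many samples are needed, and how to quantify the estimator's uncertainty.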

Organizers