Tongliang Liu

Lecturer in Machine Learning
ARC DECRA Fellow
Director of Trustworthy Machine Learning Lab (TML Lab)
School of Computer Science
Faculty of Engineering
The University of Sydney

Visiting Scientist
Imperfect Information Learning Team
RIKEN AIP, Japan

Address: Room 315, J12, 1 Cleveland St, Darlington, NSW 2008, Australia
E-mail: tongliang.liu [at] sydney.edu.au; tliang.liu [at] gmail.com
[Google Scholar] [DBLP]

I am currently a Lecturer and the Director of the Trustworthy Machine Learning Lab (TML Lab) in the School of Computer Science at the University of Sydney. I am also a Visiting Scientist at RIKEN AIP, Japan. I received my PhD from the University of Technology Sydney in August 2016 and my BEng from the University of Science and Technology of China.

I am a recipient of the Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) and of the Cardiovascular Initiative Catalyst Award, and I was named in the Early Achievers Leaderboard of Engineering and Computer Science by The Australian in 2020.

The Trustworthy Machine Learning Lab (TML Lab) at the University of Sydney hosts, attracts, and connects the best global talent to develop trustworthy machine learning techniques and tools that are explainable, robust, fair, causally responsible, and privacy-preserving. Our mission is to make machines trustworthy, a foundation for society to develop and deploy artificial intelligence that improves our lives. We are broadly interested in trustworthy machine learning and its interdisciplinary applications, with a particular emphasis on learning with noisy labels, adversarial learning, transfer learning, unsupervised learning, and statistical deep learning theory.

We are recruiting PhD students and visitors. If you are interested, please send me your CV and transcripts.
We are organizing a special issue of the Machine Learning journal on weakly supervised representation learning (CFP). Submissions are welcome!

We are always looking for highly motivated undergraduate and postgraduate students to join our group. Scholarships are available!

A few visiting positions in machine learning and computer vision are available.


Research Interests

My research interests lie in providing mathematical and theoretical foundations to justify and understand (deep) machine learning models, and in designing efficient learning algorithms for problems in computer vision and data mining, with a particular emphasis on:
  • Learning with noisy labels

  • Deep adversarial learning

  • Deep transfer learning

  • Deep unsupervised learning

  • Image processing

  • Statistical deep learning theory


Top News

  • 11/2021, I accepted the invitation to serve as an Area Chair for UAI 2022.

  • 07/2021, I accepted the invitation to serve as an Area Chair for AAAI 2022.

  • 06/2021, I accepted the invitation to serve as an Area Chair for ICLR 2022.

  • 04/2021, I will serve as a Guest Editor of the Machine Learning Journal.

  • 03/2021, I accepted the invitation to serve as an Area Chair for NeurIPS 2021.

  • 02/2021, we are organising the first Australia-Japan Workshop on Machine Learning.

  • 02/2021, I was an Expert Reviewer for ICML 2021.

  • 10/2020, I was among the top 10% of high-scoring reviewers of NeurIPS 2020.

  • 09/2020, I was named in the Early Achievers Leaderboard by The Australian.

  • 08/2020, I accepted the invitation to serve as an Area Chair for IJCAI 2021.

  • 04/2020, I was appointed as a Visiting Scientist with RIKEN, Japan.

See more previous news here.


Selected Publications on Learning with Noisy Labels

  • Understanding and Improving Early Stopping for Learning with Noisy Labels. [PDF] [CODE]
    Y. Bai, E. Yang, B. Han, Y. Yang, J. Li, Y. Mao, G. Niu, and T. Liu.
    In NeurIPS, 2021.

  • Instance-Dependent Label-Noise Learning under Structural Causal Models. [PDF] [CODE]
    Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
    In NeurIPS, 2021.

  • Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data. [CODE] [Oral]
    Y. Bai and T. Liu.
    In ICCV, 2021.

  • Confidence Scores Make Instance-dependent Label-noise Learning Possible. [PDF] [CODE] [Long Talk]
    A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama.
    In ICML, 2021.

  • Provably End-to-end Label-noise Learning without Anchor Points. [PDF] [CODE]
    X. Li, T. Liu, B. Han, G. Niu, and M. Sugiyama.
    In ICML, 2021.

  • Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels. [PDF] [CODE]
    S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
    In ICML, 2021.

  • A Second-Order Approach to Learning with Instance-Dependent Label Noise. [PDF] [CODE] [Oral]
    Z. Zhu, T. Liu, and Y. Liu.
    In CVPR, 2021.

  • Robust early-learning: Hindering the memorization of noisy labels. [PDF] [CODE]
    X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang.
    In ICLR, 2021.

  • Part-dependent Label Noise: Towards Instance-dependent Label Noise. [PDF] [CODE] [Spotlight]
    X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama.
    In NeurIPS, 2020.

  • Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning. [PDF] [CODE]
    Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
    In NeurIPS, 2020.

  • Learning with Bounded Instance- and Label-dependent Label Noise. [PDF] [CODE]
    J. Cheng, T. Liu, K. Rao, and D. Tao.
    In ICML, 2020.

  • Are Anchor Points Really Indispensable in Label-Noise Learning? [PDF] [CODE]
    X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
    In NeurIPS, 2019.

  • Learning with Biased Complementary Labels. [PDF] [CODE] [Oral]
    X. Yu, T. Liu, M. Gong, and D. Tao.
    In ECCV, 2018.

  • Classification with Noisy Labels by Importance Reweighting. [PDF] [CODE]
    T. Liu and D. Tao.
    IEEE T-PAMI, 38(3): 447-461, 2016.

Selected Publications on Adversarial Learning

  • Probabilistic Margins for Instance Reweighting in Adversarial Training. [PDF] [CODE]
    Q. Wang, F. Liu, B. Han, T. Liu, C. Gong, G. Niu, M. Zhou, and M. Sugiyama.
    In NeurIPS, 2021.

  • Removing Adversarial Noise in Class Activation Feature Space.
    D. Zhou, N. Wang, C. Peng, X. Gao, X. Wang, J. Yu, and T. Liu.
    In ICCV, 2021.

  • Towards Defending against Adversarial Examples via Attack-Invariant Features. [PDF]
    D. Zhou, T. Liu, B. Han, N. Wang, C. Peng, and X. Gao.
    In ICML, 2021.

  • Maximum Mean Discrepancy is Aware of Adversarial Attacks. [PDF] [CODE]
    R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama.
    In ICML, 2021.

  • Learning Diverse-Structured Networks for Adversarial Robustness. [PDF] [CODE]
    X. Du, J. Zhang, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang, and M. Sugiyama.
    In ICML, 2021.

  • Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks. [PDF]
    Y. Zhang, Y. Li, T. Liu, and X. Tian.
    In ICML, 2020.

Selected Publications on Transfer Learning

  • Confident-Anchor-Induced Multi-Source-Free Domain Adaptation. [PDF] [CODE]
    J. Dong, Z. Fang, A. Liu, G. Sun, and T. Liu.
    In NeurIPS, 2021.

  • Domain Generalization via Entropy Regularization. [PDF] [CODE]
    S. Zhao, M. Gong, T. Liu, H. Fu, and D. Tao.
    In NeurIPS, 2020.

  • LTF: A Label Transformation Framework for Correcting Label Shift. [PDF] [CODE]
    J. Guo, M. Gong, T. Liu, K. Zhang, and D. Tao.
    In ICML, 2020.

  • Transferring Knowledge Fragments for Learning Distance Metric from A Heterogeneous Domain. [Paper] [CODE]
    Y. Luo, Y. Wen, T. Liu, and D. Tao.
    IEEE T-PAMI, 41(4): 1013-1026, 2019.

  • Deep Domain Generalization via Conditional Invariant Adversarial Networks. [PDF] [CODE]
    Y. Li, X. Tian, M. Gong, Y. Liu, T. Liu, K. Zhang, and D. Tao.
    In ECCV, 2018.

  • Understanding How Feature Structure Transfers in Transfer Learning. [PDF]
    T. Liu, Q. Yang, and D. Tao.
    In IJCAI, 2017.

  • Domain Adaptation with Conditional Transferable Components. [PDF] [CODE]
    M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf.
    In ICML, 2016.

Selected Publications on Statistical (Deep) Learning Theory

  • On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier. [PDF]
    J. Zhang, T. Liu, and D. Tao.
    IEEE T-NNLS, accepted 2021.

  • Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence. [PDF]
    F. He, T. Liu, and D. Tao.
    In NeurIPS, 2019.

  • Algorithmic Stability and Hypothesis Complexity. [PDF]
    T. Liu, G. Lugosi, G. Neu, and D. Tao.
    In ICML, 2017.

  • Algorithm-Dependent Generalization Bounds for Multi-Task Learning. [Paper]
    T. Liu, D. Tao, M. Song, and S. J. Maybank.
    IEEE T-PAMI, 39(2): 227-241, 2017.

See more publications here.


Sponsors

Australian Research Council Usyd CVI CPA Meituan NSSN InteliCare