Tongliang Liu
The Trustworthy Machine Learning Lab (TML Lab) at the University of Sydney hosts, attracts, and connects the best global talent to develop trustworthy machine learning techniques and tools that are explainable, robust, fair, causally responsible, and privacy-preserving. Our mission is to make machines trustworthy, a foundation for society to develop and deploy artificial intelligence that improves our lives. We are broadly interested in trustworthy machine learning and its interdisciplinary applications, with a particular emphasis on learning with noisy labels, adversarial learning, transfer learning, unsupervised learning, and statistical deep learning theory.
We are recruiting PhD students and visitors. If you are interested, please send me your CV and transcripts.
We are always looking for highly motivated undergraduate and postgraduate students to join our group. Competitive scholarships are available!
A few visiting positions in machine learning and computer vision are available.
Research Interests
My research interests lie in providing mathematical and theoretical foundations to justify and understand (deep) machine learning models, and in designing efficient learning algorithms for problems in computer vision and data mining, with a particular emphasis on:
Learning with noisy labels
Deep adversarial learning
Causal representation learning
Deep transfer learning
Deep unsupervised learning
Statistical deep learning theory
Top News
04/2022, I served as a Session Chair for ICLR 2022.
03/2022, I accepted the invitation to serve as an Area Chair for NeurIPS 2022.
02/2022, my student James Wood got the University Medal! Congrats James!
02/2022, my monograph on learning with noisy labels has been accepted by MIT Press.
01/2022, I will be serving as an Action Editor of TMLR.
12/2021, I received the Faculty Early Career Research Excellence Award, University of Sydney.
12/2021, I accepted the invitation to serve as an Area Chair for ICML 2022.
11/2021, I accepted the invitation to serve as an Area Chair for UAI 2022.
06/2021, I accepted the invitation to serve as an Area Chair for ICLR 2022.
04/2021, we are organising a special issue of the Machine Learning Journal.
03/2021, I accepted the invitation to serve as an Area Chair for NeurIPS 2021.
02/2021, we are organising the first Australia-Japan Workshop on Machine Learning.
09/2020, I was named in the Early Achievers Leaderboard by The Australian.
See more previous news here.
Selected Publications on Adversarial Learning
Adversarial Robustness Through the Lens of Causality. [PDF] [CODE]
Y. Zhang, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
In ICLR, 2022.
Removing Adversarial Noise in Class Activation Feature Space. [PDF] [CODE]
D. Zhou, N. Wang, C. Peng, X. Gao, X. Wang, J. Yu, and T. Liu.
In ICCV, 2021.
Towards Defending against Adversarial Examples via Attack-Invariant Features. [PDF] [CODE]
D. Zhou, T. Liu, B. Han, N. Wang, C. Peng, and X. Gao.
In ICML, 2021.
Probabilistic Margins for Instance Reweighting in Adversarial Training. [PDF] [CODE]
Q. Wang, F. Liu, B. Han, T. Liu, C. Gong, G. Niu, M. Zhou, and M. Sugiyama.
In NeurIPS, 2021.
Maximum Mean Discrepancy is Aware of Adversarial Attacks. [PDF] [CODE]
R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama.
In ICML, 2021.
Learning Diverse-Structured Networks for Adversarial Robustness. [PDF] [CODE]
X. Du, J. Zhang, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang, and M. Sugiyama.
In ICML, 2021.
Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks. [PDF]
Y. Zhang, Y. Li, T. Liu, and X. Tian.
In ICML, 2020.
Selected Publications on Causal Representation Learning
Adversarial Robustness Through the Lens of Causality. [PDF] [CODE]
Y. Zhang, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
In ICLR, 2022.
Fair Classification with Instance-dependent Label Noise. [PDF]
S. Wu, M. Gong, B. Han, Y. Liu, and T. Liu.
In CLeaR, 2022.
Instance-Dependent Label-Noise Learning under Structural Causal Models. [PDF] [CODE]
Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
In NeurIPS, 2021.
Selected Publications on Learning with Noisy Labels
Selective-Supervised Contrastive Learning with Noisy Labels. [PDF] [CODE]
S. Li, X. Xia, S. Ge, and T. Liu.
In CVPR, 2022.
Instance-Dependent Label-Noise Learning With Manifold-Regularized Transition Matrix Estimation. [PDF] [CODE]
D. Cheng, T. Liu, Y. Ning, N. Wang, B. Han, G. Niu, X. Gao, and M. Sugiyama.
In CVPR, 2022.
Rethinking Class-Prior Estimation for Positive-Unlabeled Learning. [PDF] [CODE]
Y. Yao, T. Liu, B. Han, M. Gong, G. Niu, M. Sugiyama, and D. Tao.
In ICLR, 2022.
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. [PDF] [CODE]
X. Xia, T. Liu, B. Han, M. Gong, J. Yu, G. Niu, and M. Sugiyama.
In ICLR, 2022.
Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data. [PDF] [CODE] [Oral]
Y. Bai and T. Liu.
In ICCV, 2021.
Understanding and Improving Early Stopping for Learning with Noisy Labels. [PDF] [CODE]
Y. Bai, E. Yang, B. Han, Y. Yang, J. Li, Y. Mao, G. Niu, and T. Liu.
In NeurIPS, 2021.
Provably End-to-end Label-noise Learning without Anchor Points. [PDF] [CODE]
X. Li, T. Liu, B. Han, G. Niu, and M. Sugiyama.
In ICML, 2021.
Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels. [PDF] [CODE]
S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
In ICML, 2021.
A Second-Order Approach to Learning with Instance-Dependent Label Noise. [PDF] [CODE] [Oral]
Z. Zhu, T. Liu, and Y. Liu.
In CVPR, 2021.
Robust early-learning: Hindering the memorization of noisy labels. [PDF] [CODE]
X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang.
In ICLR, 2021.
Part-dependent Label Noise: Towards Instance-dependent Label Noise. [PDF] [CODE] [Spotlight]
X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama.
In NeurIPS, 2020.
Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning. [PDF] [CODE]
Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
In NeurIPS, 2020.
Learning with Bounded Instance- and Label-dependent Label Noise. [PDF] [CODE]
J. Cheng, T. Liu, K. Rao, and D. Tao.
In ICML, 2020.
Are Anchor Points Really Indispensable in Label-Noise Learning? [PDF] [CODE]
X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
In NeurIPS, 2019.
Learning with Biased Complementary Labels. [PDF] [CODE] [Oral]
X. Yu, T. Liu, M. Gong, and D. Tao.
In ECCV, 2018.
Classification with Noisy Labels by Importance Reweighting. [PDF] [CODE]
T. Liu and D. Tao.
IEEE T-PAMI, 38(3): 447-461, 2015.
Selected Publications on Transfer Learning
Confident-Anchor-Induced Multi-Source-Free Domain Adaptation. [PDF] [CODE]
J. Dong, Z. Fang, A. Liu, G. Sun, and T. Liu.
In NeurIPS, 2021.
Domain Generalization via Entropy Regularization. [PDF] [CODE]
S. Zhao, M. Gong, T. Liu, H. Fu, and D. Tao.
In NeurIPS, 2020.
Transferring Knowledge Fragments for Learning Distance Metric from A Heterogeneous Domain. [Paper] [CODE]
Y. Luo, Y. Wen, T. Liu, and D. Tao.
IEEE T-PAMI, 41(4): 1013-1026, 2019.
LTF: A Label Transformation Framework for Correcting Label Shift. [PDF] [CODE]
J. Guo, M. Gong, T. Liu, K. Zhang, and D. Tao.
In ICML, 2020.
Deep Domain Generalization via Conditional Invariant Adversarial Networks. [PDF] [CODE]
Y. Li, X. Tian, M. Gong, Y. Liu, T. Liu, K. Zhang, and D. Tao.
In ECCV, 2018.
Understanding How Feature Structure Transfers in Transfer Learning. [PDF]
T. Liu, Q. Yang, and D. Tao.
In IJCAI, 2017.
Domain Adaptation with Conditional Transferable Components. [PDF] [CODE]
M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf.
In ICML, 2016.
Selected Publications on Statistical (Deep) Learning Theory
On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier. [PDF]
J. Zhang, T. Liu, and D. Tao.
IEEE T-NNLS, accepted 2021.
Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence. [PDF]
F. He, T. Liu, and D. Tao.
In NeurIPS, 2019.
Algorithmic Stability and Hypothesis Complexity. [PDF]
T. Liu, G. Lugosi, G. Neu, and D. Tao.
In ICML, 2017.
Algorithm-Dependent Generalization Bounds for Multi-Task Learning. [Paper]
T. Liu, D. Tao, M. Song, and S. J. Maybank.
IEEE T-PAMI, 39(2): 227-241, 2017.
See more publications here.