Home
The Trustworthy Machine Learning Lab (TML Lab) at the University of Sydney hosts, attracts, and connects the best global talent to develop trustworthy machine learning techniques and tools that are explainable, robust, fair, causally responsible, and privacy-preserving. Our mission is to make machines trustworthy, a foundation for society to develop and deploy artificial intelligence that improves our lives. We are broadly interested in trustworthy machine learning and its interdisciplinary applications, with a particular emphasis on learning with noisy labels, adversarial learning, transfer learning, unsupervised learning, and statistical deep learning theory.
We are recruiting PhD students and visitors. If you are interested, please send me your CV and transcripts.
Postdoc positions are available with competitive salary packages.
We are always looking for highly motivated undergraduate and postgraduate students to join our group. Full scholarships are available!
A few visiting positions in machine learning and computer vision are available.
Research Interests
My research interests lie in providing mathematical and theoretical foundations to justify and understand (deep) machine learning models and designing efficient learning algorithms for problems in computer vision and data mining, with a particular emphasis on
Learning with noisy labels
Deep adversarial learning
Causal representation learning
Deep transfer learning
Deep unsupervised learning
Statistical deep learning theory
Top News
09/2022, I was selected as an Australian Research Council Future Fellow (only three Future Fellowships were awarded in the field of Information and Computing Sciences across Australia in 2022).
09/2022, I was appointed as a Visiting Professor with the University of Science and Technology of China.
09/2022, I was appointed as a Visiting Associate Professor with Mohammed Bin Zayed University of Artificial Intelligence.
08/2022, I accepted the invitation to serve as an Area Chair for ICLR 2023.
08/2022, I joined the editorial board of JMLR.
08/2022, I was elected to the editorial board of the ML Journal.
08/2022, Two of my PhD students received Google PhD Fellowships. Congrats Xiaobo and Shuo!
07/2022, I accepted the invitation to serve as a Discussant for UAI 2022.
04/2022, I served as a Session Chair for ICLR 2022.
04/2022, I was selected as one of the Global Top Young Chinese Scholars in AI by Baidu Scholar 2022.
03/2022, I will co-organize the IJCAI 2022 Challenge on Learning with Noisy Labels.
03/2022, I accepted the invitation to serve as an Area Chair for NeurIPS 2022.
02/2022, my student James Wood got the University Medal! Congrats James!
02/2022, my monograph on learning with noisy labels has been accepted by MIT Press.
01/2022, I was invited to be an Action Editor of TMLR.
12/2021, I received the Faculty Early Career Research Excellence Award, University of Sydney.
12/2021, I accepted the invitation to serve as an Area Chair for ICML 2022.
11/2021, I accepted the invitation to serve as an Area Chair for UAI 2022.
07/2021, I accepted the invitation to serve as an Area Chair for AAAI 2022.
06/2021, I accepted the invitation to serve as an Area Chair for ICLR 2022.
04/2021, we are organising a special issue of the ML Journal.
03/2021, I accepted the invitation to serve as an Area Chair for NeurIPS 2021.
02/2021, we are organising the first Australia-Japan Workshop on Machine Learning.
09/2020, I was named in the Early Achievers Leaderboard by The Australian.
08/2020, I accepted the invitation to serve as an Area Chair for IJCAI 2021.
See more news here.
Selected Publications on Learning with Noisy Labels
MSR: Making Self-supervised Learning Robust to Aggressive Augmentations. [PDF]
Y. Bai, E. Yang, Z. Wang, Y. Du, B. Han, C. Deng, D. Wang, and T. Liu.
In NeurIPS, 2022.
Estimating Noise Transition Matrix with Label Correlations for Noisy Multi-Label Learning. [PDF]
S. Li, X. Xia, H. Zhang, Y. Zhan, S. Ge, and T. Liu.
In NeurIPS, 2022.
Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization. [PDF]
D. Cheng, Y. Ning, N. Wang, X. Gao, H. Yang, Y. Du, B. Han, and T. Liu.
In NeurIPS, 2022.
Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network. [PDF] [CODE]
S. Yang, E. Yang, B. Han, Y. Liu, M. Xu, G. Niu, and T. Liu.
In ICML, 2022.
Selective-Supervised Contrastive Learning with Noisy Labels. [PDF] [CODE]
S. Li, X. Xia, S. Ge, and T. Liu.
In CVPR, 2022.
Instance-Dependent Label-Noise Learning With Manifold-Regularized Transition Matrix Estimation. [PDF] [CODE]
D. Cheng, T. Liu, Y. Ning, N. Wang, B. Han, G. Niu, X. Gao, and M. Sugiyama.
In CVPR, 2022.
Rethinking Class-Prior Estimation for Positive-Unlabeled Learning. [PDF] [CODE]
Y. Yao, T. Liu, B. Han, M. Gong, G. Niu, M. Sugiyama, and D. Tao.
In ICLR, 2022.
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. [PDF] [CODE]
X. Xia, T. Liu, B. Han, M. Gong, J. Yu, G. Niu, and M. Sugiyama.
In ICLR, 2022.
Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data. [PDF] [CODE] [Oral]
Y. Bai and T. Liu.
In ICCV, 2021.
Instance-Dependent Label-Noise Learning under Structural Causal Models. [PDF] [CODE]
Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
In NeurIPS, 2021.
Understanding and Improving Early Stopping for Learning with Noisy Labels. [PDF] [CODE]
Y. Bai, E. Yang, B. Han, Y. Yang, J. Li, Y. Mao, G. Niu, and T. Liu.
In NeurIPS, 2021.
Provably End-to-end Label-noise Learning without Anchor Points. [PDF] [CODE]
X. Li, T. Liu, B. Han, G. Niu, and M. Sugiyama.
In ICML, 2021.
Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels. [PDF] [CODE]
S. Wu*, X. Xia*, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu.
In ICML, 2021.
A SecondOrder Approach to Learning with InstanceDependent Label Noise. [PDF] [CODE] [Oral]
Z. Zhu, T. Liu, and Y. Liu.
In CVPR, 2021.
Robust early-learning: Hindering the memorization of noisy labels. [PDF] [CODE]
X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang.
In ICLR, 2021.
Part-dependent Label Noise: Towards Instance-dependent Label Noise. [PDF] [CODE] [Spotlight]
X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama.
In NeurIPS, 2020.
Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning. [PDF] [CODE]
Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
In NeurIPS, 2020.
Learning with Bounded Instance- and Label-dependent Label Noise. [PDF] [CODE]
J. Cheng, T. Liu, K. Rao, and D. Tao.
In ICML, 2020.
Are Anchor Points Really Indispensable in Label-Noise Learning? [PDF] [CODE]
X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
In NeurIPS, 2019.
Learning with Biased Complementary Labels. [PDF] [CODE] [Oral]
X. Yu, T. Liu, M. Gong, and D. Tao.
In ECCV, 2018.
Classification with Noisy Labels by Importance Reweighting. [PDF] [CODE]
T. Liu and D. Tao.
IEEE TPAMI, 38(3): 447-461, 2015.
Selected Publications on Adversarial Learning
Modeling Adversarial Noise for Adversarial Defense. [PDF] [CODE]
D. Zhou, N. Wang, B. Han, and T. Liu.
In ICML, 2022.
Improving Adversarial Robustness via Mutual Information Estimation. [PDF]
D. Zhou, N. Wang, X. Gao, B. Han, X. Wang, Y. Zhan, and T. Liu.
In ICML, 2022.
Understanding Robust Overfitting of Adversarial Training and Beyond. [PDF] [CODE]
C. Yu, B. Han, L. Shen, J. Yu, C. Gong, M. Gong, and T. Liu.
In ICML, 2022.
Adversarial Robustness Through the Lens of Causality. [PDF] [CODE]
Y. Zhang, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
In ICLR, 2022.
Removing Adversarial Noise in Class Activation Feature Space. [PDF] [CODE]
D. Zhou, N. Wang, C. Peng, X. Gao, X. Wang, J. Yu, and T. Liu.
In ICCV, 2021.
Towards Defending against Adversarial Examples via Attack-Invariant Features. [PDF] [CODE]
D. Zhou, T. Liu, B. Han, N. Wang, C. Peng, and X. Gao.
In ICML, 2021.
Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks. [PDF]
Y. Zhang, Y. Li, T. Liu, and X. Tian.
In ICML, 2020.
Selected Publications on Transfer Learning
Confident-Anchor-Induced Multi-Source-Free Domain Adaptation. [PDF] [CODE]
J. Dong, Z. Fang, A. Liu, G. Sun, and T. Liu.
In NeurIPS, 2021.
Domain Generalization via Entropy Regularization. [PDF] [CODE]
S. Zhao, M. Gong, T. Liu, H. Fu, and D. Tao.
In NeurIPS, 2020.
Transferring Knowledge Fragments for Learning Distance Metric from a Heterogeneous Domain. [Paper] [CODE]
Y. Luo, Y. Wen, T. Liu, and D. Tao.
IEEE TPAMI, 41(4): 1013-1026, 2019.
LTF: A Label Transformation Framework for Correcting Label Shift. [PDF] [CODE]
J. Guo, M. Gong, T. Liu, K. Zhang, and D. Tao.
In ICML, 2020.
Deep Domain Generalization via Conditional Invariant Adversarial Networks. [PDF] [CODE]
Y. Li, X. Tian, M. Gong, Y. Liu, T. Liu, K. Zhang, and D. Tao.
In ECCV, 2018.
Understanding How Feature Structure Transfers in Transfer Learning. [PDF]
T. Liu, Q. Yang, and D. Tao.
In IJCAI, 2017.
Domain Adaptation with Conditional Transferable Components. [PDF] [CODE]
M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf.
In ICML, 2016.
Selected Publications on Statistical (Deep) Learning Theory
On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier. [PDF]
J. Zhang, T. Liu, and D. Tao.
IEEE TNNLS, accepted 2021.
Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence. [PDF]
F. He, T. Liu, and D. Tao.
In NeurIPS, 2019.
Algorithmic Stability and Hypothesis Complexity. [PDF]
T. Liu, G. Lugosi, G. Neu and D. Tao.
In ICML, 2017.
Algorithm-Dependent Generalization Bounds for Multi-Task Learning. [Paper]
T. Liu, D. Tao, M. Song, and S. J. Maybank.
IEEE TPAMI, 39(2): 227-241, 2017.
See more publications here.
