Tongliang Liu


Selected Research Topics


Learning with noisy labels

Learning with noisy labels has become an increasingly important topic in recent years. The reason is that, in the era of big data, datasets are growing ever larger, and large-scale datasets are often infeasible to annotate accurately given the cost and time required, which naturally yields cheap datasets with noisy labels. However, noisy labels can severely degrade the performance of machine learning models, especially deep neural networks, which easily memorize and eventually fit the label noise. There are typically two ways to deal with label noise. One is to extract confident examples, i.e., examples whose labels are correct with high probability. The other is to model the noise and then remove its side effects, i.e., to obtain the optimal classifier defined by the clean data by exploiting only the noisy data.
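
As a concrete illustration of the second approach, the following is a minimal sketch of the well-known forward loss correction, which multiplies the model's estimated clean-class posterior by a transition matrix T (with T[i, j] = P(noisy label j | clean label i)) so that the corrected output can be fitted to the noisy labels. It assumes PyTorch; the function name and the availability of T are illustrative, not taken from any particular paper.

    import torch
    import torch.nn.functional as F

    def forward_corrected_loss(logits, noisy_labels, T):
        # logits: (batch, num_classes) scores for the *clean* classes
        # T: (num_classes, num_classes) row-stochastic transition matrix,
        #    T[i, j] = P(noisy label = j | clean label = i)
        clean_posterior = F.softmax(logits, dim=1)   # estimated P(clean label | x)
        noisy_posterior = clean_posterior @ T        # implied P(noisy label | x)
        # negative log-likelihood of the observed noisy labels
        return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)

With an accurately estimated T, minimising this loss yields the classifier defined by the clean distribution, which is why estimating the transition matrix (with or without anchor points) is central to several of the topics below.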

Topics:

  • Learning under instance-dependent label noise

  • Estimating the label noise transition matrix without using anchor points

  • Estimating the label noise transition matrix by using anchor points

  • Exploiting the memorization effect of deep neural networks

  • Deep representation learning under label noise

  • Learning with noisy similarity labels

  • Learning with complementary labels

  • Learning with group noise

  • Harnessing side information for classification under label noise

  • Dealing with label noise in the face recognition problem


Robust/Adversarial learning

We are also interested in how to reduce the side effects of noise on instances, which may be caused by sensor failures or even malicious attacks. We humans can correctly recognise objects even in the presence of noise (e.g., we can easily recognise human faces under extreme illumination, partial occlusion, or even heavy makeup), while current machine learning algorithms may not. Recent studies also show that an imperceptible perturbation of an instance can lead machines to make wrong decisions. All of this suggests that humans and machines rely on different feature-extraction mechanisms when making decisions. What are the differences, and how can they be aligned? Answering these questions is essential for building robust and trustworthy machine learning algorithms.
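
To make "imperceptible perturbation" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to craft such adversarial noise. It assumes a differentiable PyTorch classifier and inputs scaled to [0, 1]; the function name and the default budget are illustrative.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # epsilon bounds the L-infinity norm of the perturbation,
        # keeping it (nearly) imperceptible to humans.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # take one step in the direction that increases the loss
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even such a one-step perturbation typically flips the predictions of an undefended network, which motivates the modelling and defence topics listed below.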

Topics:

  • Modelling adversarial noise

  • Towards defending against adversarial examples via attack-invariant features

  • Efficient gradient approximation for black boxes

  • Understanding adversarial attacks via maximum mean discrepancy

  • Learning diverse-structured networks for adversarial robustness

  • Robust non-negative matrix factorisation algorithms

  • Comparing the robustness of different loss functions


Domain adaptation and transfer learning

Just like humans, machines can also discover knowledge common to related tasks and transfer it from one task to another. In machine learning, we can exploit training examples drawn from related tasks (source domains) to improve performance on a target task (target domain). This involves two terms in machine learning, i.e., domain adaptation and transfer learning. Domain adaptation refers to reducing the difference between the distributions of the source- and target-domain data. Transfer learning refers to extracting knowledge from source tasks and applying it to improve the learning performance on a target task. We are interested in studying domain adaptation and transfer learning problems from a causal perspective.
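
As one concrete instance of correcting a distribution difference, the sketch below estimates importance weights under label shift, where only the class priors change between domains, using black-box shift estimation (Lipton et al., 2018): solve C w = mu for w(y) ≈ p_target(y) / p_source(y). It uses NumPy only; the function name and argument layout are illustrative.

    import numpy as np

    def label_shift_weights(src_preds, src_labels, tgt_preds, num_classes):
        # src_preds/src_labels: integer class indices on held-out source data;
        # tgt_preds: the model's predictions on unlabelled target data.
        # C[i, j] = P(model predicts i, true label is j) on the source domain
        C = np.zeros((num_classes, num_classes))
        for p, y in zip(src_preds, src_labels):
            C[p, y] += 1.0
        C /= len(src_labels)
        # mu[i] = P(model predicts i) on the target domain
        mu = np.bincount(tgt_preds, minlength=num_classes) / len(tgt_preds)
        # w[y] estimates p_target(y) / p_source(y); clip away negative solutions
        w = np.linalg.solve(C, mu)
        return np.clip(w, 0.0, None)

Reweighting the source loss by w(y) then corrects for the shift in class priors, which is the idea behind the label transformation topic listed below.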

Topics:

  • Out-of-distribution learning

  • Domain adaptation with conditional transferable components

  • Deep domain generalization via conditional invariant representations

  • Label transformation for correcting label shift

  • Heterogeneous transfer learning

  • Multi-task learning


Statistical (deep) learning theory

Deep learning algorithms have delivered exciting results, e.g., painting pictures, beating Go champions, and driving cars autonomously, showing that they have very good generalisation abilities (small differences between training and test errors). These empirical achievements have astounded yet confounded their human creators. Why do deep learning algorithms generalise so well on unseen data? We still lack a mathematically elegant answer: we do not know the underlying principles that guarantee their success, let alone how to interpret or purposefully strengthen their generalisation ability. We are interested in analysing error bounds, e.g., generalisation error bounds and excess risk bounds, by measuring the complexity of the predefined (or algorithmic) hypothesis class. An algorithmic hypothesis class is a subset of the predefined hypothesis class that a learning algorithm will (or is likely to) output.
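
For reference, a textbook bound of this kind, using Rademacher complexity as the capacity measure for a loss bounded in [0, 1], reads: with probability at least 1 - \delta over an i.i.d. sample of size n, every hypothesis h in the class \mathcal{H} satisfies

    R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},

where R(h) is the expected risk, \widehat{R}_n(h) the empirical risk, and \mathfrak{R}_n(\ell \circ \mathcal{H}) the Rademacher complexity of the loss class. Replacing \mathcal{H} by the smaller algorithmic hypothesis class shrinks the complexity term, which is how the stability-based analyses below can tighten the bound.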

Topics:

  • The relationship between algorithmic stability and algorithmic hypothesis complexity

  • Control batch size and learning rate to generalize well

  • Convergence from surrogate risk minimizers to the Bayes optimal classifier

  • Understanding the generalisation of ResNet

  • Understanding the generalisation of orthogonal deep neural networks

  • Understanding the generalisation of multi-task learning

  • Understanding how feature structure transfers in transfer learning

  • Understanding the generalisation of non-negative matrix factorisation