A/Prof Adrian Wills

Research

BAYESIAN INFERENCE

This research area concerns the development of numerically robust and efficient algorithms for estimating probabilistic state distributions based on observed system behaviour. Such problems arise frequently across all areas of Engineering and Science, from estimating vehicle pose to estimating disease infection rates.

Bayes’ rule is employed as the principal theoretical framework, where its application to more flexible model structures is of primary interest. This work spans the development of algorithms for linear systems with additive Gaussian noise through to much more general non-linear systems.
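For the special case of a linear system with additive Gaussian noise, Bayes’ rule can be evaluated in closed form, which gives the well-known Kalman filter recursion. The following is a minimal sketch of one predict/update cycle, assuming a model x[t+1] = A x[t] + w, y[t] = C x[t] + v with Gaussian noises of covariance Q and R; the function name and matrices are illustrative placeholders rather than code from this research programme.

    import numpy as np

    def kalman_step(m, P, y, A, C, Q, R):
        """One Bayes update for a linear-Gaussian state-space model."""
        # Predict: propagate the Gaussian state distribution through the dynamics.
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Update: condition on the new measurement y (Bayes' rule in closed form).
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        m_new = m_pred + K @ (y - C @ m_pred)    # posterior mean
        P_new = P_pred - K @ S @ K.T             # posterior covariance
        return m_new, P_new

For more general non-linear systems the posterior is no longer Gaussian, and the same update must instead be approximated, for example with sequential Monte Carlo methods.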

MODELLING AND IDENTIFICATION

Mathematical models are essential for describing complex systems across all fields of engineering and the natural sciences. In general terms, a model is used to capture important system behaviour, such as the response to a changed environment and the causal relationships between system components. The remarkable utility of mathematical system modelling stems from the fact that models:

  • Enable prediction of system behaviour under new environmental scenarios;
  • Remove the need for dangerous and/or prohibitively expensive experiments;
  • Accelerate the analysis and design processes;
  • Enable simulation of systems at greatly accelerated time-scales;
  • Are fundamental to detecting faults or changes in the system;
  • Are essential to the design and analysis of advanced feedback control and automation systems.

In this research area, members of the Autonomous Systems Research Centre are developing new theory and supporting algorithms to identify system models based on observed data.
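A classical instance of identifying a system model from observed data is least-squares estimation of a linear difference-equation (ARX) model. The sketch below is a minimal illustration under that assumption; the function name and the model orders na and nb are hypothetical choices for the example.

    import numpy as np

    def identify_arx(u, y, na=2, nb=2):
        """Fit y[t] = a_1*y[t-1] + ... + a_na*y[t-na]
                    + b_1*u[t-1] + ... + b_nb*u[t-nb]
        by least squares from observed input u and output y."""
        n = max(na, nb)
        rows = []
        for t in range(n, len(y)):
            # Regressor: past outputs and past inputs, most recent first.
            rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        Phi = np.asarray(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
        return theta  # [a_1, ..., a_na, b_1, ..., b_nb]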

OPTIMISATION FOR MACHINE LEARNING

In this line of research, we are interested in convex stochastic optimisation problems where we only have access to noisy evaluations of the cost function and its derivatives. The problem has a long history, and an important landmark was the stochastic approximation method introduced by Robbins and Monro almost 70 years ago. In recent years the relevance of this problem has increased enormously, mainly because it arises in at least the following two important situations.

First, when the cost function and its gradients are intractable, but numerical methods can still be used to compute noisy estimates of these quantities.
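This is exactly the setting of the Robbins-Monro scheme: iterate on noisy gradient evaluations with diminishing step sizes whose sum diverges but whose squared sum is finite. Below is a minimal sketch on a toy quadratic cost, where the cost function, noise level and step-size rule are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Minimise f(x) = 0.5 * (x - 3)^2 using only noisy gradient
    # evaluations g(x) = (x - 3) + noise, with Robbins-Monro step
    # sizes a_k = 1/k (sum a_k diverges, sum a_k^2 converges).
    x = 0.0
    for k in range(1, 10_001):
        noisy_grad = (x - 3.0) + rng.normal(scale=1.0)
        x -= (1.0 / k) * noisy_grad
    print(x)  # approaches the minimiser x* = 3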

Second, when the measured dataset is too large to be used in its entirety when evaluating the cost function. We then usually resort to using only a small fraction of the data at each iteration, which is commonly referred to as mini-batching. This situation arises in large-scale applications of supervised machine learning, and in particular in deep learning.
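A mini-batch gradient computed on a random subset of the data is an unbiased but noisy estimate of the full gradient, so the same stochastic approximation machinery applies. The sketch below illustrates this on a least-squares problem; the dataset, batch size and step size are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic regression data y = X @ w_true + noise; in large-scale
    # problems N is too big to touch the whole dataset every iteration.
    N, d, batch = 10_000, 5, 64
    X = rng.normal(size=(N, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=N)

    w = np.zeros(d)
    step = 0.1
    for k in range(2_000):
        idx = rng.integers(0, N, size=batch)    # draw a random mini-batch
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch     # unbiased gradient estimate
        w -= step * grad

    print(np.linalg.norm(w - w_true))  # small residual: w is near w_true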

