PhD Scholarship in Systems for Automated Decision-Making

Develop new approaches to fairness, actionable explainability, or socially considerate evaluation of automated decision-making (ADM) in recommender, search, or other ML-based systems.

The scholarship is valued at $31,260 per annum for three years, with a possible six-month extension (full-time).

Two (2) scholarships are available.

To be eligible for this scholarship you must:

  • have first-class Honours, 2A Honours or equivalent, or a Masters by Research degree in a relevant discipline of computer science
  • be an Australian citizen, Australian permanent resident or an international student meeting the minimum English language requirements
  • provide evidence of good oral and written communication skills
  • demonstrate the ability to work as part of a multi-disciplinary research team
  • meet RMIT’s entry requirements for the Doctor of Philosophy.

To apply, please submit the following documents to Mark Sanderson at mark.sanderson@rmit.edu.au:

  • a cover letter (research statement)
  • a copy of your electronic academic transcripts
  • a CV that includes any publications/awards and the contact details of two referees.

For international applicants, evidence of English proficiency may be required.

Prospective candidates will be invited to submit a full application for admission to the PhD (Computer Science), program code DR221.

A scholarship application will only be successful if the prospective candidate receives an offer of admission.

Applications are open now.

Applications will close once candidates are appointed.

Technical or human-focused solutions are welcome.

Potential projects could address one of the following areas:

  • ADM systems and machines - including search engines, intelligent assistants, and recommender systems - are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically simplified representations of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalise the development process of such systems. A grand open challenge is to make these frameworks more complete by including new aspects, such as fairness, that are as important as the traditional definitions of quality in informing the design, evaluation, and optimisation of such systems; the first sketch after this list illustrates the idea.
  • Creating a next-generation recommender system that enables equitable allocation of constrained resources. Many recommender systems now suggest items or services drawn from resource-constrained environments such as tourist destinations. Unlimited use overwhelms the limited capacity of such resources: hidden locations become tourist destinations and neighbourhoods become hotel complexes. Recent research has addressed the problem of building recommender systems that are fair to their registered users, but this comes at the profound risk of being unfair to others: so-called third parties. The incorporation and modelling of such third-party views is a critical omission in existing systems. Our next-generation recommender system will consider the preferences, tolerances, and social norms of the system's users as well as its third parties and non-users.
  • Studying and developing new approaches that combine fairness, privacy, and legal guarantees for ADM systems, such as recommender and machine-learning-based systems. The project takes a multi-disciplinary approach and, although centred on transportation, is potentially applicable in other areas. It is divided into three work packages, each roughly one year in length. At a mid-point review, the project would aim to demonstrate results from formulating and testing different fair routing policies in route recommendation.
  • ADMs - their software, algorithms, and models - are often designed as “black boxes” with little effort placed on understanding how they actually work. This lack of understanding impacts not only the final users of ADMs but also the stakeholders and developers, who need to be accountable for the systems they are creating. The problem is often exacerbated by inherent bias in the data on which the models are trained. Further, the widespread use of deep learning has led to an increasing number of minimally interpretable models being used, as opposed to traditional models like decision trees, or even Bayesian and statistical machine learning models. Explanations of models are also needed to reveal potential biases in the models themselves and to assist with their debiasing. This project aims to unpack biases in models that may come from the underlying data, or biases in software (e.g., a simulation) that could be designed with a specific purpose and angle from the developers’ point of view. It also aims to investigate techniques for generating actionable explanations across a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor and multimodal data; the second sketch after this list gives a toy example.
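
To make the first project area concrete, here is a minimal sketch in Python of the kind of extended evaluation framework described above: it scores a ranked list on a traditional relevance measure (normalised DCG) and on how evenly rank-discounted exposure is spread across two hypothetical provider groups, then blends the two. The metric definitions, group labels, and alpha weighting are illustrative assumptions, not a prescribed method.

    import math

    # Sketch of a fairness-extended evaluation framework. The group labels
    # and the alpha trade-off are hypothetical, for illustration only.

    def dcg(relevances):
        """Discounted cumulative gain: rank-discounted sum of relevance."""
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

    def exposure_disparity(groups):
        """Normalised gap in rank-discounted exposure between provider groups A and B."""
        exposure = {"A": 0.0, "B": 0.0}
        for rank, group in enumerate(groups):
            exposure[group] += 1.0 / math.log2(rank + 2)
        total = (exposure["A"] + exposure["B"]) or 1.0
        return abs(exposure["A"] - exposure["B"]) / total

    def blended_score(relevances, groups, alpha=0.7):
        """Blend quality (normalised DCG) with fairness; alpha is an assumed trade-off."""
        ideal = dcg(sorted(relevances, reverse=True)) or 1.0
        quality = dcg(relevances) / ideal            # 1.0 = perfectly ordered by relevance
        fairness = 1.0 - exposure_disparity(groups)  # 1.0 = equal exposure for both groups
        return alpha * quality + (1.0 - alpha) * fairness

    # Same relevance, different provider-group layouts: the skewed ranking,
    # where group A monopolises the top ranks, scores lower overall.
    rels = [3, 2, 3, 0, 1]
    print(blended_score(rels, ["A", "B", "A", "B", "B"]))  # interleaved groups
    print(blended_score(rels, ["A", "A", "A", "B", "B"]))  # group A dominates the top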

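As a second sketch, tied to the final project area: one common form of actionable explanation is a counterfactual, i.e., the smallest change to a feature the user can act on that flips the model's decision. The toy loan-style model, feature names, thresholds, and search procedure below are all hypothetical, chosen only to illustrate the idea.

    # Toy counterfactual ("actionable explanation") search. The model and
    # features are hypothetical; a real project would target a learned model.

    def model(income, debt_ratio):
        """Toy black-box classifier: approve when income outweighs debt."""
        return "approve" if income * 0.001 - debt_ratio * 2.0 > 1.0 else "reject"

    def counterfactual(income, debt_ratio, step=0.01, max_steps=100):
        """Lower debt_ratio in small steps until the decision flips, if possible."""
        for k in range(1, max_steps + 1):
            candidate = debt_ratio - k * step
            if candidate < 0:
                break
            if model(income, candidate) == "approve":
                return f"Reduce debt ratio from {debt_ratio:.2f} to {candidate:.2f}."
        return "No actionable change found within the search range."

    print(model(1400, 0.25))           # -> reject
    print(counterfactual(1400, 0.25))  # -> smallest debt-ratio change that approves
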
This scholarship will be governed by RMIT University's Research Scholarship Terms and Conditions.

For further information, please contact Mark Sanderson at mark.sanderson@rmit.edu.au.
