PhD Scholarship in Automated Decision-Making and Information Retrieval

Seeking students to develop new approaches to fairness, actionable explainability, or socially considerate evaluation of automated decision-making (ADM) in recommender, search, and other ML-based systems.

The scholarship is valued at $31,885 per annum for three years (full-time), with a possible extension of six months.

One (1) scholarship is available.

To be eligible for this scholarship you must:

  • Have a first-class Honours degree in Computer Science or equivalent
  • Have strong computational, programming, algorithms, and data analysis skills
  • Provide evidence of adequate oral and written communication skills
  • Demonstrate an ability to work as part of a multi-disciplinary research team
  • Meet RMIT University’s entry requirements for the Higher Degree by Research programs
  • Preferably be an Australian citizen or Australian permanent resident

To apply, please submit the following documents to Damiano Spina (damiano.spina@rmit.edu.au):

  • A cover letter (research statement)
  • An electronic copy of your academic transcripts
  • A CV that includes any publications/awards and the contact details of two referees
  • Thesis or research reports

For international applicants, evidence of English proficiency may be required. 

Once approved, prospective candidates will be required to submit an application for admission to the PhD (Computer Science) program (DR221).   

Scholarship applications will only be successful if the prospective candidate receives an offer of admission.

Applications are open now.

Applications will close once a candidate has been appointed and intends to commence.

Project proposals should address one of the areas below and should demonstrate knowledge of related work published in research venues including, but not limited to: SIGIR, WSDM, CHIIR, WWW, CIKM, and ECIR.

This research involves measuring and quantifying users’ cognitive biases and fairness perceptions when interacting with information access systems, including search engines, intelligent assistants, and recommender systems. These systems are often embedded in Automated Decision-Making (ADM) processes, and are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically simplified representations of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalise the development process of such systems. A grand open challenge is to make these frameworks more complete by incorporating new aspects such as fairness, which are as important as traditional definitions of quality, as well as better mechanisms to measure and quantify cognitive biases, which would inform the design, evaluation, and optimisation of systems embedded in ADM processes.
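For concreteness, the sketch below (illustrative only; the metric choice and parameter values are our assumptions, not part of the project description) shows how an established offline metric, Rank-Biased Precision (Moffat & Zobel, 2008), encodes exactly this kind of simplified user model: a single persistence parameter p stands in for the behaviour of real users inspecting a ranked list top-down.

    # Minimal sketch: Rank-Biased Precision (RBP) as a user-model-based metric.
    # The modelled user reads results from rank 1 downwards and moves on to the
    # next rank with probability p (their "persistence").
    def rank_biased_precision(relevance, p=0.8):
        """relevance: relevance values (in [0, 1]) of results in ranked order.
        p: probability that the modelled user continues to the next rank."""
        return (1 - p) * sum(rel * p**i for i, rel in enumerate(relevance))

    if __name__ == "__main__":
        ranked_run = [1, 0, 1, 1, 0]  # binary relevance of a top-5 ranking
        print(rank_biased_precision(ranked_run, p=0.8))  # ~0.43

Everything the metric "knows" about the user is the single parameter p, which illustrates why such frameworks may need richer notions, such as fairness and measured cognitive biases, of the kind this project targets.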

Software, algorithms, and models used in Automated Decision-Making (ADM) processes are often designed as “black boxes”, with little effort placed on understanding how they actually work. This lack of understanding impacts not only the final users of ADM systems, but also the stakeholders and developers who need to be accountable for the systems they are creating. The problem is often exacerbated by inherent biases in the data on which the models are trained. Further, the widespread use of deep learning has led to an increasing number of minimally interpretable models being deployed, as opposed to traditional models such as decision trees, or Bayesian and statistical machine learning models. Explanations of models are also needed to reveal potential biases in the models themselves and to assist with their debiasing. This project aims to unpack biases in models that may come from the underlying data, as well as biases in software (e.g., a simulation) that could be designed with a specific purpose and angle from the developers’ point of view. The project also aims to investigate techniques for generating actionable explanations across a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor data and multimodal data.
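To make the idea of explaining a black box concrete, here is a small, self-contained sketch of one common model-agnostic technique, permutation importance: shuffle one feature at a time and measure how much the model’s error grows. The model and data below are invented placeholders for illustration; the project itself may pursue very different explanation methods.

    # Hedged sketch: permutation importance for an arbitrary black-box model.
    import numpy as np

    def permutation_importance(predict, X, y, n_repeats=10, seed=0):
        """predict: black-box callable mapping X -> predictions.
        X: (n, d) feature matrix; y: targets.
        Returns the mean increase in squared error per shuffled feature."""
        rng = np.random.default_rng(seed)
        base_err = np.mean((predict(X) - y) ** 2)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])  # break the link between feature j and y
                importances[j] += np.mean((predict(Xp) - y) ** 2) - base_err
        return importances / n_repeats

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))
        y = 3 * X[:, 0] + 0.1 * X[:, 1]  # feature 2 is irrelevant by design
        black_box = lambda X: 3 * X[:, 0] + 0.1 * X[:, 1]  # stand-in "black box"
        print(permutation_importance(black_box, X, y))  # feature 0 dominates

A large importance score flags a feature the model leans on heavily; comparing such scores across, for example, demographic features is one simple way explanations can surface the data biases discussed above.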

This scholarship will be governed by RMIT University's Research Scholarship Terms and Conditions.

Contact: Damiano Spina - damiano.spina@rmit.edu.au

