We are seeking a highly motivated PhD student to work on a project funded by the ARC Centre of Excellence for Automated Decision-Making and Society. 

Value and duration

$31,885 per annum for three years with a possible extension of six months (full time).

Number of scholarships available

One (1)

Eligibility

To be eligible for this scholarship you must:  

  • have First-Class Honours in Computer Science or equivalent
  • have strong computational, programming, algorithmic, and data analysis skills
  • provide evidence of good oral and written communication skills
  • demonstrate an ability to work as part of a multidisciplinary research team
  • meet RMIT University’s entry requirements for the Higher Degree by Research programs
  • preferably be an Australian citizen or Australian permanent resident

How to apply

To apply, please submit the following documents to Flora Salim (flora.salim@rmit.edu.au): 

  • a cover letter (research statement)
  • electronic copies of academic transcripts
  • a CV that includes any publications/awards and the contact details of two referees
  • a thesis or research reports

For international applicants, evidence of English proficiency may be required. 

Once approved, prospective candidates will be required to submit an application for admission to the PhD (Computer Science) program (DR221).   

Scholarship applications will only be successful if the prospective candidate receives an offer of admission.

Open date

Applications are open now

Close date

Applications will close once a candidate is appointed and intends to start.

Further information

The PhD candidate, whose research will focus on Machine Learning and Data Mining, will be based at RMIT University and will work with a multidisciplinary research team from RMIT, in collaboration with other partner institutions in the ARC Centre of Excellence for Automated Decision-Making and Society (ADMS).

This PhD project aims to unpack biases in models that may arise from the underlying data, or biases in software (e.g., a simulation) that may be designed with a specific purpose and angle from the developers’ point of view. The project also aims to investigate techniques for generating actionable explanations of those biases, for a range of problems, deep learning models, and data types and modalities, from large-scale unstructured data to highly varied sensor data and multimodal data.

Software, algorithms, and models used in Automated Decision-Making (ADM) processes are often designed as “black boxes”, with little effort placed on understanding how they actually work. This lack of understanding impacts not only the end users of ADM systems, but also the stakeholders and developers, who must be accountable for the systems they create. The problem is often exacerbated by inherent bias in the data on which the models are trained. Further, the widespread use of deep learning has led to an increasing number of minimally interpretable models being deployed, as opposed to traditional models such as decision trees, or even Bayesian and statistical machine learning models. Explanations of models are also needed to reveal potential biases in the models themselves and to assist with their debiasing.

A second research direction is measuring and quantifying users’ cognitive biases and fairness perceptions when they interact with information access systems, including search engines, intelligent assistants, and recommender systems. These systems are often embedded in ADM processes, and are designed, evaluated, and optimised using frameworks that model the users who will interact with them. Such frameworks typically use a simplified representation of users (e.g., treating the relevance of items delivered to the user as a surrogate for system quality) to operationalise the development process. A grand open challenge is to make these frameworks more complete by including new aspects such as fairness, which are as important as the traditional definitions of quality, as well as better mechanisms to measure and quantify cognitive biases, which would inform the design, evaluation, and optimisation of such systems.

Terms and conditions

This scholarship will be governed by RMIT University's Research Scholarship Terms and Conditions.

Contact

For further information, you can email Flora Salim (flora.salim@rmit.edu.au).