The ecosystem of organisations developing and using artificial intelligence (AI) systems and research is vast and complex. For some, AI comes with great potential to increase profit margins. For others, AI poses serious challenges to their reputation, security and sustainability.
Wherever your company or charity sits on this spectrum, it is key that staff developing AI tools, staff making decisions about tech procurement, and staff exposed to AI in the workplace all have a keen sense of the limitations of AI systems and research.
Below you will find a series of interview questions to assess whether job candidates are aware of the limitations of the systems they develop, implement or use. Crucially, these questions signal to job candidates that your organisation values responsible research and innovation.
The questions are categorised according to your type of organisation or the department you are recruiting for: either innovators, buyers or end users.¹
- Innovators: These are the people and organisations who are developing the latest AI. For example, these questions are useful for engineers, data scientists and product managers.
- Buyers: These are the people and organisations who make decisions about procuring and implementing technologies that impact fellow staff and members of the public. For example, these questions can help recruit for HR, finance, operations and IT departments. Schools, hospitals and other public service organisations can also gain from using these questions.
- End users: These are individuals who use AI technologies – in other words, all employees.
The questions are intentionally broad, so that they apply to many contexts. Nevertheless, each question is paired with a criterion against which you may assess job candidates’ responsible AI readiness. We encourage you to reach out via [email protected] so that we can help you design more tailored questions.
You are encouraged to provide feedback on the questions by commenting on the related issue (requires signing in to GitHub).
If you want support in designing and embedding practices that protect your organisation, and in becoming part of the responsible AI movement, please reach out to us at Kairoi via [email protected].
You are free to copy and adapt these questions. Whilst we don’t expect you to acknowledge that the questions are from this repository during interviews, we encourage you to cite this resource in recruitment and other HR reports as follows: “Based on Responsible AI Interview Questions by Kairoi Ltd (2023) / kairoi.uk / CC-BY 4.0”
### Innovators

Assessed criterion | Question |
---|---|
Keeping abreast of news in responsible AI, and communicating its relevance | What recent “scandal” have you heard of in the AI world, and why does it matter to you? |
Ability to credibly speculate and articulate future risks from AI R&D | The AI systems we develop come with unforeseeable downstream consequences. Do you agree, and why? |
Experience identifying and mitigating social harms from technical systems | Tell us about a time you suggested a change to a product or system you were developing in order to mitigate unjust biases or address public concerns. |
Ability to credibly speculate and articulate potential benefits of AI R&D | If you are successful in this recruitment process, you will be involved in the development of [X systems], which is intended for [Y applications]. Please tell us about potential uses of this system, beyond the aforementioned intended uses. |
### Buyers

Assessed criterion | Question |
---|---|
Keeping abreast of news in responsible AI, and communicating its relevance | What recent “scandal” have you heard of in the AI world, and why is it relevant to this department? |
Experience of credibly articulating risks from AI systems and influencing change | Tell us about a time you identified risks in a technology deployed at your workplace, and how you communicated those risks to colleagues. What was the outcome? |
Ability to identify risks when evaluating AI systems for deployment | When identifying systems to deploy in your department, we have a process that involves the IT team’s oversight to ensure certain safety standards are met. However, standards are constantly evolving and emerging in the realm of artificial intelligence technologies. What do you look out for when evaluating AI-based systems to purchase? |
Ability to plan for the responsible implementation of AI systems | If successful in this process, you will oversee the implementation of AI systems into the department’s processes. This can affect staff across the organisation. How would you ensure that they use these systems knowledgeably and safely? |
### End users

Assessed criterion | Question |
---|---|
Keeping abreast of news in responsible AI, and communicating its relevance | What recent “scandal” have you heard of in the AI world, and why does it matter to you? |
Ability to credibly articulate risks from the use of AI in the workplace | We use various AI systems across the organisation to make tasks easier for many of our staff. If you are successful in this process, you will use [X system], which helps [Y application]. Whilst we are transparent about our usage of AI systems, we understand there might be concerns about working with such technologies. What concerns are you aware of around the use of AI systems in the workplace? (This isn’t a trick question!) |
Ability to credibly articulate potential uses of AI in the workplace | What potential do you see for modern technologies in the job you are applying for? |
Awareness of our publicly communicated AI safety policies² | There are many freely accessible AI systems at the disposal of the general public and, therefore, staff. Please tell us about one of our initiatives to ensure safe and responsible usage of such technologies. |
¹ Kherroubi Garcia, I. (2023) Another Piece of the AI Ethics Puzzle, Kairoi
² For example, you may have a version of our Template ChatGPT Use Policy