Michael Scott, the protagonist of the US version of The Office, is using an artificial intelligence recruiter to hire a receptionist.
Guardian Australia applies.
The text-based system asks applicants five questions that delve into how they responded to past work situations, including dealing with difficult colleagues and juggling competing job demands.
Potential employees type their responses into a chat-style program that resembles a responsive help desk. Then the real, and somewhat bewildering, power of the AI kicks in, sending a score and traits profile to the employer and a personality report to the applicant. (More on our results later.)
This demo, by Melbourne-based startup Sapia.ai, resembles the initial structured interview process used by its clients, which include some of Australia’s largest companies such as Qantas, Medibank, Suncorp and Woolworths.
The process would typically create a shortlist that an employer can track, with information on personality markers including humility, extroversion and conscientiousness.
For customer service roles, it’s designed to help an employer tell if someone is nice. For a manual role, an employer may want to know if an applicant will arrive on time.
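Sapia does not disclose how its model turns answers into those markers. Purely to illustrate the shape of the output (free text in, a handful of trait scores out), here is a toy sketch; it is not Sapia’s method, which is a trained language model, and every trait name and cue word below is invented:

```python
# Toy sketch: score a free-text answer against personality markers by
# counting cue words. NOT Sapia's proprietary model (a trained NLP
# system); all cue-word lists here are invented for illustration.
import re

TRAIT_CUES = {
    "conscientiousness": {"plan", "organised", "deadline", "checklist", "prepare"},
    "extroversion": {"team", "people", "talk", "present", "chat"},
    "humility": {"learn", "feedback", "mistake", "listen", "credit"},
}

def score_answer(text: str) -> dict[str, float]:
    """Return a 0-1 score per marker based on cue-word overlap."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        trait: round(len(words & cues) / len(cues), 2)
        for trait, cues in TRAIT_CUES.items()
    }

answer = "I plan my week with a checklist and ask my team for feedback."
print(score_answer(answer))
# {'conscientiousness': 0.4, 'extroversion': 0.2, 'humility': 0.2}
```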
“Basically you interview the world; everyone gets an interview,” says Sapia founder and CEO Barb Hyman.
The selling points of AI recruitment are clear: it can automate costly and time-consuming processes for businesses and government agencies, especially in large recruitment campaigns for non-managerial roles.
However, Sapia’s biggest claim might be that it’s the only way to give someone a fair interview.
“The only way to eliminate hiring bias is to not use people from the get-go,” Hyman says. “That’s where our technology comes in: it’s blind; it’s untimed; it doesn’t use resume data or social media data or demographic data. All it is using is the text results.”
A spotty track record
Sapia isn’t the only AI company claiming its technology will reduce bias in the hiring process. A host of companies in Australia offer AI-augmented recruiting tools, including not only chat-based models, but also one-way video interviews, automated reference checks, social media analytics and more.
In 2022, a survey of Australian public sector agencies found that at least a quarter had used AI-assisted technology in recruitment that year. Separate research from Diversity Council Australia (DCA) and Monash University suggests that a third of Australian organizations use it at some point in the recruitment process.
However, applicants are often unaware that they will be subject to an automated process, or on what basis they will be evaluated within it.
The Merit Protection Commissioner’s office advises public service agencies that when using artificial intelligence tools for recruitment, there must be “a clear and demonstrated connection between the qualities of the candidate being assessed and the qualities required to perform the duties of the job”.
The commissioner’s office also warns that AI can evaluate candidates on something other than merit, raise ethical and legal concerns about transparency and data bias, produce skewed results, or cause “statistical bias” by misinterpreting socioeconomic markers as indicators of success.
There is a good reason for that warning. AI’s record of bias has been troubling.
In 2017, Amazon quietly scrapped an experimental candidate-ranking tool that had been trained on CVs from the mostly male tech industry, effectively teaching it that male candidates were preferable. The tool systematically downgraded the CVs of women, penalizing those that included phrases like “captain of the women’s chess club”, and upgraded those that used verbs more common on the resumes of male engineers, such as “executed” and “captured”.
Research conducted in the US in 2020 showed that facial analysis technology created by Microsoft and IBM, among others, performed better on lighter-skinned subjects and men, with darker-skinned women more often being misgendered by the programs.
Last year a study from the University of Cambridge showed that AI is not a benign intermediary but, “by constructing associations between words and people’s bodies”, helps to produce the “ideal candidate” rather than merely observing or identifying it.
Natalie Sheard, a lawyer and PhD candidate at La Trobe University whose doctoral research examines the regulation of, and discrimination by, AI-based hiring systems, says this lack of transparency is a major problem for equity.
“Messenger-style apps rely on natural language processing, similar to ChatGPT, so the training data for those systems tends to be the words or voice recordings of people who speak standard English,” Sheard says.
“So if you’re not a native speaker, how does it deal with you? It might say you don’t have good communication skills if you don’t use standard English grammar, or you might have different cultural traits that the system might not recognize because it was trained on native speakers.”
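A toy model makes Sheard’s point concrete. In the sketch below (all training examples, labels and the test answer are invented), a classifier fitted only on standard-English answers has almost no vocabulary for other dialects, so a perfectly capable non-standard answer is judged on the few words the model happens to recognize:

```python
# Toy illustration of the training-data problem Sheard describes:
# a model fitted only on standard-English examples has never seen
# other dialects. All data and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "I resolved the customer complaint calmly and followed up the next day",
    "I prioritised my tasks and communicated the delay to my manager",
    "not sure really it sorted itself out in the end",
    "um i guess i just did stuff when people asked me to",
]
train_labels = [1, 1, 0, 0]  # 1 = "good communicator" per this narrow sample

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

# A capable answer in a non-standard dialect: the model can only judge
# it by the handful of tokens it happens to share with the training set.
test = "me and the customer been talking it through proper and we sorted it good"
known = [t for t in test.split() if t in vec.vocabulary_]
print(f"tokens the model recognises: {len(known)}/{len(test.split())}")
print("p(good communicator):", model.predict_proba(vec.transform([test]))[0, 1])
```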
Another concern is how physical disability is accounted for in something like a chat or video interview. And with a lack of transparency about whether and on what basis assessments are conducted with AI, it is often impossible for candidates to know that they may need reasonable accommodations to which they are legally entitled.
“There are legal requirements for organizations to accommodate disability in the recruitment process. But that requires people to disclose their disability up front, at a point when they have no established trust with this employer. And these systems change traditional hiring practices, so you don’t know what the assessment is about; you don’t know whether or how an algorithm will assess you. You may not know you need a reasonable accommodation,” says Sheard.
Australia does not have laws specifically governing AI recruitment tools. While the industry department has developed an AI ethics framework, which includes principles of transparency, explainability, accountability and privacy, the code is voluntary.
“There are low levels of understanding in the community about AI systems, and because employers are so reliant on these vendors, they implement [the tools] without any governance systems,” says Sheard.
“Employers don’t have bad intentions, they want to do the right thing but have no idea what they should be doing. There are no internal oversight mechanisms in place, and no independent audit systems to ensure there are no biases.”
A matter of diversity
Hyman says that customer feedback and independent research show that the broader community is comfortable with recruiters using AI.
“They need to have an experience that is engaging, inclusive, and attracts more diversity,” Hyman says. She says Sapia’s low-stress, untimed, text-based system fits these criteria.
“You are twice as likely to get women and keep them in the hiring process when you use AI. It is a complete fiction that people do not like or trust it; we see quite the opposite in our data.”
Research from Diversity Council Australia and Monash University is not so enthusiastic, showing a “clear divide” between employers and candidates who are “converted” to or “cautious” about AI recruitment tools, with 50% of employers converted to the technology but only a third of job applicants. First Nations job seekers were among the most likely to be concerned.
DCA recommends that recruiters be transparent about the due diligence protocols they have in place to ensure that AI-powered recruiting tools are “bias-free, inclusive, and accessible.”
In the Sapia demo, the AI quickly generates short personality feedback notes for the interviewee at the end of the application.
This is based on how someone rates on various markers, including conscientiousness and agreeableness, which the AI combines with pre-written phrases resembling something a life coach might say.
A more complete evaluation, not visible to the applicant, would be sent to the recruiter.
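As described, the feedback step amounts to template selection: each marker score picks one of several pre-written phrases. A minimal sketch, with invented thresholds and wording modeled on the demo’s own output:

```python
# Minimal sketch of template-based feedback: each trait score selects a
# pre-written, life-coach-style phrase. Traits, thresholds and wording
# are invented; Sapia's actual templates are proprietary.
FEEDBACK_TEMPLATES = {
    "confidence": {
        "high": "You are self-assured, but not overly confident.",
        "low": "Trust your own judgment a little more.",
    },
    "conscientiousness": {
        "high": "You are organised and follow through on commitments.",
        "low": "Possibly balance that with variety outside of work.",
    },
}

def feedback_note(scores: dict[str, float], cutoff: float = 0.5) -> str:
    """Join one canned phrase per trait, picked by score band."""
    return " ".join(
        FEEDBACK_TEMPLATES[trait]["high" if s >= cutoff else "low"]
        for trait, s in scores.items()
    )

print(feedback_note({"confidence": 0.8, "conscientiousness": 0.3}))
# You are self-assured, but not overly confident. Possibly balance that
# with variety outside of work.
```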
Sapia says its chat interview software tests for language proficiency and includes a profanity detector; the company says these are important considerations for customer-facing roles.
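Sapia has not published how its profanity detector works; the simplest plausible form is a word-list check over tokenized text, as in this sketch (the list itself is a placeholder):

```python
# A minimal word-list profanity check of the kind a chat-interview
# pipeline might run. The list is a placeholder; real systems use
# larger curated lists and catch obfuscated spellings.
import re

PROFANITY = {"damn", "hell"}  # placeholder entries only

def contains_profanity(text: str) -> bool:
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return not tokens.isdisjoint(PROFANITY)

print(contains_profanity("That meeting was hell to schedule."))  # True
```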
Hyman says the language analysis is based on the “billion words of data” collected from responses in the years since the tech company was founded in 2013. The data itself is proprietary.
You are (not) hired!
So could Guardian Australia work for Michael Scott at the fictional Dunder Mifflin paper company?
“You are self-assured, but not overly confident,” says the personality feedback in response to Guardian Australia’s application in the AI demo.
It follows up with a subtle suggestion that this applicant might not be a good fit for the receptionist position, which requires “repetition, routine, and following a defined process”.
But it does offer some helpful advice: “Possibly balance that with variety outside of work.”
It seems we’re not a good fit for the job.