AI in the workplace

Artificial intelligence (AI) is increasingly used by the public on a daily basis, and reliance on AI is also rising rapidly in the workplace. AI is a form of technology in which computers 'learn' from the data they analyse and adapt their 'behaviour' according to what they learn, improving their performance of tasks over time. In other words, it mimics human intelligence in making judgements and reaching conclusions.

A number of organisations and employers now rely on AI to undertake many workplace functions, such as recruiting and hiring, employee onboarding, performance management and productivity, managing employees who work remotely and selecting employees for redundancy, amongst many other workstreams. Many employers find AI invaluable for automating repetitive tasks and increasing efficiency.

In July 2023, Deloitte published a survey which found that over 4 million people had used AI to assist in their work. The survey also found that many of those using AI believed that it always produced factually accurate answers and that it was unbiased. A separate survey by the Institute for Public Policy Research concluded that 70% of tasks associated with knowledge roles could be significantly transformed or even replaced by AI. Other surveys have found that 42% of respondents feared that AI could take their job and that their role may no longer exist within the next decade.

So should AI be considered infallible and free of bias, and does it threaten to replace humans in job roles altogether? It must be remembered that AI is only as reliable as the information supplied to it. AI lacks transparency and, despite some people's beliefs, it can be unwittingly biased and produce discriminatory outcomes in its decision making. For example, Amazon ceased using an AI recruiting tool in 2018 when it found that the algorithm compared candidate data with existing company data to shortlist candidates. The workforce at the time lacked diversity and was predominantly male. This 'taught' the AI tool that male candidates were preferable to female candidates, and it would penalise any CV submitted by a female applicant and decline to shortlist her. It has also been reported that AI-enhanced CVs score significantly higher in screening performed by AI-powered CV screening tools, again showing a form of bias in the decision making.

The lack of transparency in AI tools creates potential risks in recruitment. For example, if a candidate with a protected characteristic is denied an interview by an employer using an AI screening tool and alleges discrimination, how does the employer explain its reasoning for shortlisting candidates? Can the employer feel assured, and explain, that the AI tool has not exercised any bias? It is very improbable that simply attributing shortlisting decisions to AI technology will be sufficient in itself to show that there has been no discrimination in the process.

Not only does AI risk bias arising from the algorithm used and the data available for its analysis, it can also be factually wrong. In Taiwo v Tower Homelets of Bath Ltd, the High Court took a stance against fake legal citations and undisclosed AI assistance, refusing an appeal for fundamental dishonesty and warning that any lawyer involved with fabricated authorities, even if generated by AI, may face a misconduct referral or contempt proceedings. In that case the court found that one cited authority, 'Irani v Duchy Farm Kennels [2020] EWCA Civ 405', was evidently a false case fabricated by AI: when the court asked for a copy of the judgment, none could be produced. The court noted that another citation in the same case, 'Chapman v Tameside Hospital NHS Foundation Trust [2018] EWCA Civ 2085', was also fictitious: whilst there had been a county court case of that name in 2016, there had been no 2018 appeal.

Chief Constable Craig Guildford of West Midlands Police resigned when it was revealed that he had misled the public and MPs by relying on AI-generated information to ban Maccabi Tel Aviv fans from attending a football match. In this instance, AI had generated details of a non-existent football match between Maccabi Tel Aviv and West Ham, implying that fan conduct at that game justified the ban. Following a series of apologies from the police, the Chief Constable took the further step of resigning his post.

Whilst the great potential of AI assistance is abundantly clear to anyone who has used it at length, it is not yet infallible. It requires a 'verify before apply' rule, together with careful consideration of when it can be relied upon as a supportive tool and when it may create a degree of risk in a workforce process. Whilst AI may ultimately prove able to take over some jobs in due course, at the current time only an employer with a real appetite for risk would choose to replace a human worker with an AI tool.