News and Publications

Taking the ‘human’ out of HR: the risks of letting artificial intelligence say ‘you’re fired’

Posted: 13/06/2023


Advances in artificial intelligence (AI) can improve the efficiency and speed of HR processes involving hundreds of employees or candidates. By removing human bias from a decision, an employer may also be making a fairer decision based on objective data (provided the data and criteria themselves are fair). However, humans should not be removed from employment decision making entirely: errors within the software can otherwise lead to significant legal claims against employers.

Assessing candidates for recruitment or redundancy

Using AI to filter CVs is a common way for businesses worldwide to reduce thousands of applicants to those who are most ‘appropriate’, by selecting those whose application or CV contains certain keywords. This is acceptable if the filter is relevant to the job, if what is deemed ‘appropriate’ can be objectively justified (such as holding a driving licence for a driving role), and if there is a legitimate aim being pursued. The method of implementing the filter must also be a proportionate means of achieving that aim; where it is not, there could be an indirect discrimination claim.
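By way of illustration only, a keyword filter of this kind can be very simple. The sketch below is hypothetical (the keywords, role and applicants are invented): a criterion such as a driving licence may be justifiable for a driving role, whereas filtering on something that is not job-relevant is harder to justify and invites challenge.

```python
# Hypothetical sketch of a keyword-based CV filter (keywords and applicants are
# invented for illustration). A filter such as 'driving licence' may be
# justifiable for a driving role; a filter on something not job-relevant could
# disadvantage groups less likely to mention it, risking indirect discrimination.
JOB_RELEVANT_KEYWORDS = {"driving licence", "hgv", "delivery experience"}

def shortlist(cv_text: str, keywords: set[str]) -> bool:
    """Return True if the CV mentions any of the given keywords."""
    text = cv_text.lower()
    return any(keyword in text for keyword in keywords)

applicants = {
    "Applicant A": "Full UK driving licence, five years' delivery experience.",
    "Applicant B": "Extensive logistics background, captain of local rugby club.",
}

for name, cv in applicants.items():
    print(name, "shortlisted:", shortlist(cv, JOB_RELEVANT_KEYWORDS))
```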

The factors that are filtered out must be regularly scrutinised by a human. Reuters reported in 2018 that Amazon had used an algorithm to screen candidates, trained on a data set of hires made over the previous ten years. However, in that period the company had hired more men than women, so the data skewed in favour of male traits. The AI taught itself to prefer male candidates and to deselect CVs containing the word ‘women’. This is likely to be direct discrimination.
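The mechanism can be illustrated with a deliberately simplified, hypothetical sketch (the CVs and term-counting ‘model’ below are invented for illustration, not a description of Amazon’s actual tool): where the historical data skews towards one group, terms associated with that group score well and the tool simply reproduces the pattern it was trained on.

```python
# Hypothetical illustration of how a tool trained on skewed historical hiring
# data reproduces that skew (invented data; not a description of any real system).
from collections import Counter

past_hires = [          # historical hires skew towards CVs mentioning men's activities
    "software engineer men's football captain",
    "software engineer men's rowing club",
    "software engineer men's chess society",
]
past_rejections = [
    "software engineer women's chess society",
    "software engineer women's coding network",
]

hired_terms = Counter(term for cv in past_hires for term in cv.split())
rejected_terms = Counter(term for cv in past_rejections for term in cv.split())

def score(cv_text: str) -> int:
    """Score a CV by how closely its terms match historically hired CVs."""
    return sum(hired_terms[t] - rejected_terms[t] for t in cv_text.split())

# A CV mentioning "women's" scores lower purely because of the historical skew.
print(score("software engineer men's rowing club"))
print(score("software engineer women's chess society"))
```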

AI has also been used in redundancy decisions. MAC (part of Estée Lauder) faced claims in 2022 from three makeup artists after they were required to attend a video interview as part of a redundancy selection procedure. This assessed their technical skill, facial expressions and overall engagement using AI software. The video interview only formed part of the assessment (Estée Lauder said just 1%) and there was also human assessment of their job performance alongside this. However, none of the women were successful in the process and all were made redundant. 

The case settled, but it will be interesting to see where a tribunal draws the line in these types of AI assessment cases in future, particularly if a candidate who is not neurotypical, or whose cultural background means that they present differently, has their engagement levels assessed by an algorithm trained on a narrower focus group. This could give rise to a successful indirect discrimination claim and/or a claim of discrimination arising from disability. Unless care is taken with the data sets, AI may not factor protected characteristics into its assessment. A human, however, would be more likely to do so, minimising the risk of a discriminatory outcome.

Similarly, if someone were selected for redundancy on the basis of poor appraisals in circumstances where the employee had alleged discrimination, there could be a victimisation claim. The same can be said of those who raise protected disclosures and are then marked down for not being a ‘team player’. AI would need to know how, if at all, to apply discretion to the appraisal scores, to avoid or limit the risk of uncapped damages for either whistleblowing or discrimination.

Reviewing performance

AI can be useful in identifying the best performing staff where ‘good performance’ is easily quantifiable, such as the number and value of sales. However, AI-reviewed performance targets may not take account of the physical or mental conditions that can affect someone’s productivity. The new agile world of work has led some employers to install keystroke monitors to check how often previously office-based employees are active. Basing required output on an assumption of short average toilet breaks may mean that disabled or pregnant employees are marked down unfairly, and could give rise to a discrimination claim.
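As a purely hypothetical sketch (the target ratio and figures below are invented), a productivity measure based only on logged ‘active’ time shows how this risk arises: an employee who needs more frequent or longer breaks can miss the target regardless of what they actually deliver.

```python
# Hypothetical sketch: a target based purely on logged 'active' time.
# An employee who needs longer breaks (e.g. for a health- or pregnancy-related
# reason) misses the target regardless of the work they actually complete.
TARGET_ACTIVE_RATIO = 0.92  # invented target derived from workforce averages

def meets_target(active_minutes: int, break_minutes: int) -> bool:
    """Return True if the proportion of the shift logged as 'active' meets the target."""
    return active_minutes / (active_minutes + break_minutes) >= TARGET_ACTIVE_RATIO

print(meets_target(active_minutes=445, break_minutes=35))  # True  (~93% active)
print(meets_target(active_minutes=420, break_minutes=60))  # False (~88% active)
```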

The Trades Union Congress (TUC) has raised concerns that employee surveillance at work is increasing. Royal Mail workers recently challenged the way in which work done on their shifts is tracked, with handheld devices recording deliveries made. Some Amazon workers in Coventry also went on strike in response to what they considered to be unrealistic and ever-changing targets, which they alleged were created and changed by AI. If an employee does not trust how their performance is assessed, they might raise a constructive dismissal claim.

Some call centre workers have even reported AI being used to monitor the tone of a conversation so that managers can step in when a call becomes contentious. Marking down pitch or tone of voice may also give rise to indirect discrimination claims based on gender, race or other protected characteristics.

Evaluating disciplinary issues

Establishing fault in disciplinary investigations or hearings on the basis of AI-driven investigations can also be flawed. Some UK-based Uber drivers have taken the company to court in Amsterdam to seek further details of why certain employment-related decisions were made and whether they were made without human intervention.

Uber maintains that human review was involved in these decisions and is considering appealing some of the cases. However, it has apologised to one driver, Alexandru Iftimie, who was given two disciplinary warnings for alleged ‘fraud’ after the software detected that he had taken a longer route than it deemed necessary. Mr Iftimie had sought to explain that this was required due to an unexpected disruption on the selected route and that he had not charged the passenger extra. However, because the disciplinary warning letters he received were generated automatically by Uber’s AI, there was no easy way to explain this to a human, and he could not readily challenge the fairness of the decision.

When communicating the outcome of a disciplinary hearing, the reason for dismissal must be clear to minimise the risk of an unfair dismissal or discrimination claim. If the employer does not understand why an AI decision was taken, it cannot explain it, and so would struggle to defend such a claim.

Data protection for use of AI in HR tasks 

Employers must also be aware of data protection issues when processing personal data: decisions made with no human intervention are permitted only in very specific scenarios, and there must be transparency for them to be lawful. If considering automated decision making, an employer must undertake a data protection impact assessment to ensure that it has adequately considered the risks to the personal data of its employees. Failure to do so could lead to a substantial fine from the Information Commissioner’s Office. Organisations would be wise to audit their sub-teams on whether, and how, they use AI.

Recommendations for using AI in HR tasks

A discriminatory or unfair decision made by AI is the responsibility of the employer that chooses to use it; that the ‘computer said no’ will not be an excuse. By effectively outsourcing the decision to AI, the company has taken the risk that the algorithm and machine learning will run perfectly. AI will fail if any form of bias is introduced by the humans involved in preparing the software or setting the required filters. It is therefore vital to have human oversight in order to correct errors in decision making that stem from what the program has learned.

AI is here to stay and clearly has a role in HR, pending the further regulation that will inevitably follow. It is therefore vital to understand what you are using AI for, and why. Once that decision has been made, steps need to be put in place to:

  • ensure a human decision maker:
    • reviews any decisions made by AI;
    • provides final outcomes on any HR processes; and
    • checks that all decisions are made transparently;
  • understand what the AI can and, more importantly, cannot do;
  • understand what data sets have been used to develop the AI and be alert to signs of potential bias;
  • ensure that if AI is implemented in teams or sub-teams it is done with the knowledge of the wider organisation, with a reporting line for approval of implementing AI tools; and
  • ensure that technology can only be implemented once a manager trained on the risks of using AI has scrutinised how the data is used.

Fortunately for employment lawyers and HR professionals, then, AI is not ready to take over completely just yet. Artificial intelligence is a qualified intelligence. If the person responsible for writing the rules does not apply any, then the employer’s chances of defending likely claims are artificial at best.


