The Ethical Workplace & Artificial Intelligence

Over the past two decades, technology has transformed our world and our workplaces. COVID-19 has only accelerated the pace of technological change and innovation by employers. One significant development in the workplace is the introduction of artificial intelligence ("AI"), which includes technologies such as automated decision-making ("ADM") and machine learning ("ML").

In light of these capabilities, it is wise for employers to stay abreast of the latest developments and opportunities: being an early adopter of new technology can often mean cost savings and a competitive advantage. It is equally important, however, for employers to be aware of the ethical and legal risks associated with these technologies, which remain a relatively recent and rapidly evolving phenomenon.

This article focuses on the opportunities and risks associated with AI, a broad term for smart systems that can replicate human-like traits including perception, learning and problem-solving. Employers are already implementing AI to automate recruitment processes, analyse workforce performance and organisation, and deploy AI 'chatbots' to interact with customers or employees. These solutions can lighten the administrative burden on employers, allowing their workforces to focus on higher-level tasks.

Opportunities

The growth in AI adoption throughout the COVID-19 pandemic has been driven by several factors, including the need to enforce social distancing and to monitor, maintain and improve productivity in a remote working environment. As with any market shift, this change breeds opportunity for innovation, efficiency and profit.

Most employers expect that the impacts of the pandemic, particularly as they relate to remote working, are here to stay. A recent survey conducted by the BBC found that 43 out of 50 big UK employers will not be bringing their employees back to the office full-time, but will instead adopt a hybrid home-office working arrangement. The UK government's flexible working taskforce has also recommended that flexible working become the new default position for all workers. Similarly, from an international perspective, a Deloitte survey of global executives and business leaders suggests that up to a third of workers may be expected to work remotely post-pandemic, bringing an end to full-time office working for millions of workers globally. Understandably, this shift to remote working creates an even greater reliance on technology.

As a greater proportion of communication becomes virtual, the value of digital performance analysis increases. An AI-driven program could, for example, analyse the way in which employees work and communicate, picking up on patterns specific to a particular employee that can then be addressed to improve performance. Some data even suggests that employees are more receptive to feedback from an AI system than from their manager. Performance data may also be invaluable to employers who have less oversight of their employees' daily activities and productivity levels while working remotely.

Another major challenge for employers with a remote workforce is maintaining engagement and motivation, with studies suggesting that employees working remotely are less likely to remain at their company long term. A number of new AI-driven tools aim to monitor this effect so that HR can respond accordingly. For example, 'Elin.ai' uses AI to analyse morale and engagement based on employees' interactions on the work messaging app 'Slack', while IBM has reported an AI tool that can predict with 95% accuracy which employees will leave their jobs. Additionally, a 2017 United Nations study found that 41% of remote workers reported high stress levels, compared to 25% of non-remote workers. In some organisations, this is being tackled with AI software such as 'Woebot', a digital app that encourages daily conversations with a therapeutic chatbot to monitor mood and support employees' mental health.

Employers also face challenges in providing the learning and development ("L&D") needed to maintain employees' growth and engagement while working remotely. AI programs can suggest appropriate L&D materials for employers to adopt based on their workforce, its aspirations and any areas needing improvement. The shift towards digital L&D tools will also enable employers to tailor L&D programs to employees' individual skills, rather than relying on the traditional broad-based solutions aimed at the workforce as a whole.

Risks

Tackling bias

Since AI is not susceptible to the same subliminal influences as humans, it is thought that AI could alleviate the unconscious bias inherent in human decision-making, for example when choosing the best candidate for a role regardless of race, gender or appearance. While this is a promising development, employers should also be aware of the limitations of AI software and the possibility of other, hidden biases.

Machine learning algorithms commonly used in AI systems 'learn' from the datasets they are given for training, which determines how they treat future information. This can give rise to 'algorithmic bias': where the training data is not sufficiently diverse, the algorithm develops blind spots or produces inaccurate or biased decisions, leading to a self-perpetuating cycle of hidden bias. This issue was put under the spotlight in 2018, when a major tech company was forced to withdraw its recruitment engine after it was shown to be biased against women when scoring job applicants.

While such stories should not deter employers from making use of AI in their businesses, they should serve as a reminder that AI programs can only provide solutions based on the information they are given. They carry out operations without a code of ethics or a moral compass, and they are far from foolproof. Employers across Europe have legal obligations to avoid algorithmic bias under both equality laws and the General Data Protection Regulation ("GDPR"). In the vast majority of jurisdictions, discrimination by employers is prohibited in some shape or form, and local legal requirements will need to be considered carefully when assessing whether an AI algorithm is fit for introduction to the workplace. For example, AI may give rise to direct discrimination where an algorithm relies on protected characteristics or proxy data when making a decision. In addition, employers should be aware of any information and consultation obligations which may apply in their jurisdictions when implementing AI-based technology in the workplace.
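To make the mechanism of algorithmic bias concrete, the short sketch below trains a simple logistic regression model on synthetic hiring data in which historical decisions favoured male candidates. This is a minimal, hypothetical illustration: the data, features and figures are all invented and do not describe any system or employer mentioned in this article.

```python
# Hypothetical sketch of algorithmic bias: a model trained on historically
# skewed hiring decisions reproduces that skew. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: 'skill' is the only legitimate signal,
# but past reviewers also favoured male candidates (the hidden bias).
is_male = rng.integers(0, 2, n)              # 1 = male, 0 = female
skill = rng.normal(0, 1, n)                  # genuinely job-relevant signal
hired = (skill + 0.8 * is_male + rng.normal(0, 1, n)) > 0.5

# Fit a naive logistic regression by gradient descent on BOTH features.
X = np.column_stack([skill, is_male, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))             # predicted hire probability
    w -= 0.1 * X.T @ (p - hired) / n         # gradient step on logistic loss

print(f"learned weight on 'is_male': {w[1]:.2f}")  # clearly positive

# Score two identical candidates who differ only in the protected feature.
for sex, label in [(1, "male"), (0, "female")]:
    score = 1 / (1 + np.exp(-(np.array([0.0, sex, 1.0]) @ w)))
    print(f"equally skilled {label} candidate: hire probability {score:.2f}")
```

Note that simply deleting the protected feature is often not a complete fix: other features correlated with it (the 'proxy data' discussed above) can allow a model to reconstruct the same bias, which is why dataset audits and human oversight are commonly recommended.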

Trust and confidence

Mutual trust and confidence is pivotal to the employment relationship and essential for the contract of employment to be effective. Such is its importance that it is implied into all UK employment contracts. A by-product of this duty is that employers are very often obliged to provide a clear and cogent explanation to employees for decisions made through the exercise of discretion under the contract of employment. It is also well recognised in UK common law that an employer must exercise any discretionary power vis-à-vis its employees in good faith and in a way that is lawful and rational. The use of AI to make decisions that have a fundamental impact on employees (e.g. performance management, disciplinary action or dismissal) will not absolve employers of this obligation.

Workforce perceptions

The notion of sentient AI replacing human jobs through automation is an oft-repeated concern in popular culture. Employers should think carefully about how they can prepare their existing employees for the introduction of AI and whether training should be arranged to make the best use of the workforce.

The use of AI can give rise to fear and distrust on the part of workers. Indeed, there is a prevalent perception that employers' adoption of new monitoring technologies in response to remote working during the pandemic is intrusive, and that it extends into the employee's private sphere and well beyond the usual monitoring an employee would undergo in a workplace. There are also fears that the technology is being deployed without employees' full knowledge and understanding and that AI-powered technology (e.g. automated absence-management systems) can give rise to unfair consequences. 

A recent survey of UK workers by the Trades Union Congress found that 89% of respondents either believed that their employer was using AI-powered technologies in the workplace without their knowledge, or were unsure whether this was the case. Meanwhile, only 28% of workers were comfortable with the use of AI at work. Together, these statistics suggest that employees are unsettled by the prospect of AI entering the workplace and that there are concerns over employer transparency, which commonly leads to worker dissatisfaction and a rise in employee disputes. This should serve to remind employers that clear communication with the workforce is not only likely to be a legal prerequisite in most circumstances, but is also an important tool for maintaining healthy employee relations and high levels of worker satisfaction.

Further, a recent report released by the Office for National Statistics ("ONS") highlights the growing phenomenon of online presenteeism in remote workforces. Indeed, the introduction of new AI monitoring technologies is arguably contributing to this "always-on" culture, in which employees never feel free of work.

The above sentiments give rise to risks in respect of employers' obligations under both equality and data privacy laws, as well as risks to employee satisfaction. Accordingly, for effective implementation, a proper risk and opportunity analysis should be undertaken, and clear communication with employees is advised, to ensure that trust in these systems is established from the outset.

Privacy and data protection

Employers and businesses need to be more careful than ever with how they treat the personal data of employees and customers. The advent of 'big data' has brought with it a host of data privacy concerns, with employers needing to keep up with regulatory obligations and safeguard individuals' rights.

With this in mind, the extent to which AI systems are given access to such sensitive data should be closely monitored, and it is imperative that employers track exactly how that data will be used. This risk has been acknowledged by the UK's Information Commissioner's Office ("ICO"), which published guidance last year on how businesses should navigate regulatory compliance for their AI systems. Similarly, at a European level, the EU's High-Level Expert Group on AI names privacy and data governance as one of seven key requirements that AI systems should meet to be considered trustworthy and ethical.

For breaches of data protection obligations, employees may seek compensation through court action and/or report a breach to their local regulatory body. Such action could lead to heavy fines and reputational damage.

We suspect that, as with the issue of unconscious bias, the key to utilising the efficiencies that AI can bring without losing sight of exactly what is being done, or risking regulatory non-compliance, is adequate risk assessment and human oversight.

Key Takeaways

We can be confident that AI represents a permanent technological shift in the workplace, rather than a passing trend. As such, employers should carefully assess the legal landscape associated with the implementation of AI and employee management on a country-by-country basis.

In addition to tech solution-oriented roles, there is a growing need for individuals who can monitor AI systems for ethical, legal and regulatory compliance. According to the World Economic Forum's 2018 'Future of Jobs Report', while AI is estimated to displace over 75 million jobs over the coming years, 133 million new jobs will be created. Human-specific skills will play an important role in facilitating and monitoring the responsible creation of AI systems.

Beyond raw data-processing and quantitative analytics, the role of an employer – and particularly HR departments – involves managing the workforce on a personal, human level. Employers must remember that the human element of managing a workforce is invaluable and can never be replaced with software in its entirety. Human oversight may be key to avoiding over-reliance on these systems when making business decisions and reaping the benefits of AI whilst appropriately managing the risks.
