Sara Ibrahim and Katharine Bailey have written an article for the International Employment Lawyer on artificial intelligence in the workplace, examining the emerging EU and US AI regulatory landscape. The piece advises employers on how new regulation will affect their businesses and how to mitigate legal exposure when using AI to monitor employees.
Before the Covid-19 pandemic, artificial intelligence (AI) and algorithmic management systems were primarily deployed by “sharing” or gig economy businesses. The pandemic accelerated this kind of digitisation across all sectors as businesses rushed to implement paperless systems and enable mass-scale remote working. With AI increasingly being used to “monitor” employees, employers need to take steps to minimise their legal exposure, particularly in light of emerging European and US regulatory developments.
AI in the workplace
According to a 2019 EU report, about 40% of HR functions in international companies based in the US used AI applications, with European and Asian organisations also beginning to adopt the technology. The AI systems used by businesses in recruitment and task allocation include CV scanners, psychometric testing, and facial recognition scanning during video interviews.
However, in the wake of the pandemic, more AI is now being used to “monitor” employees, particularly in the “remote workplace”. To this end, businesses increasingly deploy “people analytics” – statistical tools, big data, and AI that measure, report on, and interpret employee performance. Using these analytical tools often means implementing intrusive technical infrastructure, such as technology that tracks workers’ emails, screen time, and mouse clicks. More intrusive measures in circulation include GPS tracking, audio recording, and video monitoring via webcam.