
Are we witnessing a watershed moment in AI legislation?

Date posted
02 August 2024
Type
Opinion
Author
Saeed Ahmadi
Estimated reading time
5 minute read

The European Union’s (EU) ground-breaking AI Act takes effect this month (August). This new regulation – the first of its kind – sets a precedent for the way artificial intelligence (AI) may be governed in the future. Saeed Ahmadi, OSH Content Developer at IOSH, explains the Act and considers how it might impact occupational safety and health (OSH) professionals in the future.

Inspired by the General Data Protection Regulation (GDPR), the AI Act aims to encourage the progress of AI technologies while still protecting people’s rights. Its scope covers everyone from developers to users, capturing the need for a human-centred approach. While it applies directly in areas subject to EU law, businesses developing systems for use within the EU will also need to comply.

This is one of several recent developments in the global technology space. This spring, the European Commission (EC) also introduced a directive to improve conditions for platform workers, which aims to address issues related to worker misclassification and algorithmic management.

Meanwhile, a UK-US agreement signed in April to enhance the safety of AI is another indication of increased focus at international level. It raises the question: are these incremental changes, or do they reflect a growing consensus around the need for safeguards and responsible innovation when it comes to AI?

Impacts on technology governance

The EU AI Act classifies AI systems by risk, setting different rules for limited-risk, high-risk, and general-purpose systems.

  • Limited-risk systems mainly need to ensure transparency, essentially letting users know they’re dealing with AI.
  • High-risk systems, meanwhile, which can impact health, safety and human rights, must follow stricter rules, including fundamental rights impact assessments and rigorous data governance. For OSH professionals, this means increased vigilance to ensure AI safety; they may need to be proactive in conducting assessments and ensuring that all safety protocols are met.
  • General-purpose AI systems (think ChatGPT or AlphaStar) get special attention due to their broad use and potential risks. For systems such as these, developers must provide technical documentation, comply with EU copyright laws, and summarise the data used for training. If these systems pose systemic risks, developers must also conduct risk assessments and report serious incidents to the EU Commission, ensuring responsible development and deployment.

For OSH practitioners, understanding these documents and their implications on workplace safety will be important. Additionally, systemic risk assessments and AI-related incident reporting to the EU Commission may become part of their routine.


AI systems that are deemed too risky are simply banned. These include systems that use biometric categorisation based on sensitive characteristics, untargeted facial image scraping, emotion recognition in sensitive environments, social scoring, and systems that manipulate behaviour or exploit vulnerabilities. Law enforcement’s use of AI will have specific safeguards to balance efficiency with rights protection.

The EU plans to establish an AI Office, backed by a scientific panel of independent experts, to enforce these regulations. Non-compliance will lead to heavy fines, so businesses need to start preparing by mapping their AI processes and implementing robust governance strategies. It is likely that OSH professionals will be among those leading on this, so training and education on the Act is essential.

In short, the new legislation aims to harness AI’s power while protecting individual rights and ensuring technology continues to meet human needs and rights. The impact of the Act on workers – particularly in the context of their occupational safety and health – could be profound. At the very least, it makes clear the need for safety consideration and oversight for those interacting with AI systems.

New framework for gig workers

The EC’s new directive on platform workers could be a big step forward for the gig economy. It tackles three main issues: worker-status misclassification, fairness and transparency in algorithmic management, and rule enforcement. Ultimately, this may mean better protection for European platform workers such as food-delivery riders, ride-hailing drivers, domestic workers, and others working for online-only platforms.

One key part of the directive is the presumption of employment. This helps ensure that those who are employees in all but name receive employment benefits, even if their contracts specify otherwise. It is not about automatic reclassification, but a step towards making sure people get the protections they are entitled to.

The directive also focuses on algorithmic management, making sure automated systems that affect work are transparent. Digital platforms must disclose how decisions are made, what is being monitored, and who receives this information. They cannot use automated systems to process sensitive data about workers’ emotional states, bargaining rights, or personal conversations. Workers can request human review and challenge decisions, which is surely a victory in terms of social justice and moral responsibility. The directive aims to protect around 43 million platform workers across the EU, heralding a new era of fairness and accountability in the gig economy.

Is this a watershed moment?

In tandem, the EU AI Act and the new directive on platform work could prove to be game-changers. The Act’s risk-based approach prioritises human oversight, openness, and ethical responsibility, while the new directive on platform work enhances conditions for workers by focusing on transparency, fairness and human control.

These developments originate within Europe, but their effects will be felt further afield. And developments in AI regulation are happening elsewhere too, as the US-UK AI safety testing memorandum of understanding demonstrates. While these are initial steps in a rapidly moving space, the general movement towards more socially responsible approaches to AI could prove to be a transformative moment.

We’d like to understand how our members are using AI, the impact it has on their roles and skills, and what resources they would like IOSH to provide. Take part in our survey conducted by the Professional Associations Research Network (PARN) to share your views. Your responses are invaluable in helping us understand the impact of AI on the profession.

Last updated: 17 December 2024
