Artificial Intelligence (AI) is reshaping workplace dynamics, optimizing processes, and redefining the employer-employee relationship. However, its integration brings significant legal and ethical challenges, particularly in areas such as managerial authority, surveillance, and fairness in human resource management.
To address these concerns, the European Union adopted the Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024. The legislation imposes specific obligations on employers from 2 February 2025 to ensure the responsible use of AI in the workplace.
Companies deploying AI systems for workforce management must comply with stringent regulations designed to enhance transparency and worker protection.
- Mandatory Employee Training on AI
Any employee interacting with AI tools must receive training covering AI fundamentals, practical applications, and the associated risks.
- Prohibition of Certain AI Practices
To protect employee rights and autonomy, the AI Act bans specific AI practices, including emotion recognition in the workplace.
Example: Software that adjusts a salesperson’s targets based on their tone of voice or facial expressions is prohibited.
One of the most profound challenges of AI integration is its impact on workplace authority. Traditionally, employers have exercised direct control over work execution. However, some businesses are now delegating managerial functions to AI, raising critical legal and ethical questions:
- Can managerial authority be legally delegated to AI?
- What are the risks if AI issues unsafe, contradictory, or illegal instructions?
Courts are already addressing these questions. In a recent ruling, the Brussels Labor Court found that a digital platform using an algorithm to assign tasks to independent contractors was effectively exercising employer authority. In any event, employers remain legally responsible for AI-driven decisions, even when those decisions are made autonomously.
From 2 August 2025, new obligations will apply to general-purpose AI models, meaning large-scale AI systems designed to perform a wide range of tasks, such as the models underlying ChatGPT:
- Developers must ensure transparency and regularly update technical documentation.
- Compliance with European copyright laws is mandatory.
- General-purpose AI models presenting systemic risk must undergo periodic assessments to identify and mitigate vulnerabilities.
- Sanctions will be imposed for non-compliance.
To ensure compliance with AI regulations, employers should take proactive measures:
- Identify AI systems in use and classify their risk levels.
- Assess legal obligations according to your role as an AI provider, deployer (user), or distributor.
- Implement AI training programs to educate employees on AI's impact and risks.
- Conduct AI audits to ensure compliance in recruitment, evaluation, and HR processes.
- Enhance transparency by informing employees of AI use and providing appeal mechanisms for AI-driven decisions.
- Engage in social dialogue with worker representatives regarding AI governance.
The AI Act introduces significant responsibilities for employers, requiring swift action to ensure compliance. While AI presents immense opportunities for workplace innovation, its use must be governed by clear ethical and legal standards.
By preparing now, companies can mitigate legal risks and responsibly harness AI’s potential. The future of work is being shaped today—ensure your organization is prepared to integrate AI in a compliant and ethical manner.
***
Our Employment & Benefits Practice is closely monitoring these developments. If you have questions or wish to discuss this topic in further, please contact our team.