Artificial intelligence (AI) has become an integral part of everyday life, both at work and in our personal lives. In order to strengthen confidence in this technology and create clear guidelines for its use, the Council of the 27 EU member states adopted the EU Artificial Intelligence Act (AI Act) on 21 May 2024. This regulation marks the world's first comprehensive set of rules for regulating AI and establishes a standardised framework within the European Union.
The AI Act establishes clear rules for the development, commercialisation and use of AI systems within the EU. It follows a risk-based approach in which the level of regulation depends on the potential risk posed by an AI application: AI systems are divided into three main categories, ranging from prohibited applications that pose an unacceptable risk, through high-risk systems subject to strict requirements, to systems that pose only minimal risk.
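To make the tiered approach more concrete, here is a minimal Python sketch of how a compliance team might map these categories to the kind of obligations the AI Act attaches to them. The category names and the `obligations_for` helper are illustrative assumptions for this article, not terminology or logic taken from the regulation itself, and the listed duties are rough and non-exhaustive.

```python
from enum import Enum


class RiskCategory(Enum):
    """Simplified risk tiers loosely modelled on the AI Act (illustrative only)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict requirements before market entry
    MINIMAL = "minimal"             # few or no additional obligations


def obligations_for(category: RiskCategory) -> list[str]:
    """Return a rough, non-exhaustive list of duties per tier (assumption, not legal advice)."""
    if category is RiskCategory.UNACCEPTABLE:
        return ["Do not develop or place on the EU market"]
    if category is RiskCategory.HIGH:
        return [
            "Risk management and data governance",
            "Technical documentation and logging",
            "Human oversight and conformity assessment",
        ]
    return ["Voluntary codes of conduct, general transparency"]


if __name__ == "__main__":
    for cat in RiskCategory:
        print(cat.value, "->", obligations_for(cat))
```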
For EU countries such as Germany, the AI Act means harmonising regulatory standards for AI, which increases market transparency and facilitates the cross-border use of AI technologies. Germany, as one of the leading economic powers within the EU, sees the clear regulation of AI as an opportunity to both promote innovation and ensure the protection of its citizens. It is not yet clear how exactly the German government intends to transpose the EU's requirements into national law.
With the exception of the AI assistant Isa, few AI systems are currently used in German companies for occupational prevention. In principle, however, AI can help identify risks more reliably, derive more targeted measures, and personalise or automate health programmes.
AI-based systems can collect and analyse data from various sources, for example to identify employee health risks. As with Isa, this can include data on working conditions or health behaviour, and in future it will probably also include mental health parameters. Based on this data, companies can take preventive measures to promote health and safety in the workplace. AI can therefore recognise potential hazards at an early stage and derive targeted preventive measures or initiate them itself. It can also be used to create personalised health programmes for employees based on their individual needs and health goals. Automated health advice systems can offer employees continuous support by responding to their personal health data and behavioural patterns and making appropriate recommendations.
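As a rough illustration of what such data-driven prevention logic could look like, here is a minimal Python sketch. The input fields (hours seated, water intake, self-reported stress) and the threshold values are purely hypothetical assumptions for the example; they are not taken from Isa or any specific product.

```python
from dataclasses import dataclass


@dataclass
class WorkdaySnapshot:
    """Hypothetical per-employee data an assistant might collect (illustrative only)."""
    hours_seated: float    # hours spent sitting without a break
    water_litres: float    # water consumed so far today
    stress_score: int      # self-reported stress, 0 (calm) to 10 (very stressed)


def prevention_recommendations(day: WorkdaySnapshot) -> list[str]:
    """Derive simple, rule-based prevention hints from the snapshot.

    Thresholds are made up for demonstration purposes; a real system would
    rely on validated guidelines and individual health goals.
    """
    hints: list[str] = []
    if day.hours_seated >= 2:
        hints.append("Stand up, stretch, and check your desk and chair ergonomics.")
    if day.water_litres < 1.0:
        hints.append("Drink a glass of water now to stay hydrated.")
    if day.stress_score >= 7:
        hints.append("Take a short breathing break or a brief walk.")
    return hints or ["Keep it up - no action needed right now."]


if __name__ == "__main__":
    print(prevention_recommendations(WorkdaySnapshot(3.0, 0.4, 8)))
```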
The AI Act provides clear guidelines for providers, users and decision-makers alike.
This gives companies and employees who request or use AI offerings more extensive information rights with regard to AI systems and the use of their data. At the same time, the likelihood of data misuse is reduced, which strengthens consumer confidence in AI-based products and services and promotes the acceptance of such technologies.
Our digital AI assistant Isa is categorised as "minimal risk" under the AI Act because it operates locally and in compliance with data protection requirements. Isa is therefore subject to only minor regulatory requirements, which facilitates its acceptance and implementation in occupational health management, occupational health and safety, and health insurance schemes.
Another reason for this categorisation lies in how Isa works. Even in the event of a malfunction or an incorrect decision, Isa poses no immediate danger to users. Isa gives users recommendations to improve ergonomics or to drink enough water and teaches skills for building healthy habits. However, if Isa's analyses influenced a user's medical treatment, this would lead to a higher risk classification.
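The reasoning behind this classification can be sketched as a simple decision rule. The following Python snippet is our own simplified illustration of that logic; the properties it checks (influence on medical treatment, local data processing, wellbeing tips only) are assumptions drawn from the description above, not an official assessment procedure from the AI Act.

```python
from dataclasses import dataclass


@dataclass
class AssistantProfile:
    """Simplified properties relevant to the risk tier (illustrative assumptions)."""
    influences_medical_treatment: bool   # analyses feed into diagnosis or therapy
    processes_data_locally: bool         # data stays on the user's device
    gives_wellbeing_tips_only: bool      # ergonomics, hydration, healthy habits


def estimate_risk_tier(profile: AssistantProfile) -> str:
    """Return a rough risk tier following the article's reasoning, not the legal test."""
    if profile.influences_medical_treatment:
        # Influence on medical treatment would push the system into a higher tier.
        return "high"
    if profile.processes_data_locally and profile.gives_wellbeing_tips_only:
        return "minimal"
    return "needs individual assessment"


if __name__ == "__main__":
    isa_like = AssistantProfile(False, True, True)
    print(estimate_risk_tier(isa_like))  # -> "minimal"
```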
The EU AI Act represents a pioneering step towards promoting the potential of artificial intelligence in Europe without jeopardising the fundamental rights of citizens. It creates a clear framework for the use of AI technologies and at the same time offers the flexibility to drive innovation. Germany and other EU countries face the challenge of integrating the AI Act into their national legal systems in order to ensure the safe and responsible use of artificial intelligence. AI companies will be held more accountable for ensuring the safety of their systems, creating more transparency and complying with clear legal requirements. Provided that the set of rules is well implemented in EU countries, it can create a level playing field for companies that want to achieve high data protection and security standards for their AI systems and adhere to ethical principles.