In 2021, cyber threats trended toward increased ransomware attacks, commodity malware, and heightened Dark Web enablement. INTERPOL reported that the projected worldwide financial loss to cyber crime for 2021 is $6 trillion, twice as much as in 2015, with damages set to cost the global economy $10.5 trillion annually by 2025. Globally, leading tech experts reported that 60% of intrusions incorporated data extortion, with an average of 12 days of operational downtime due to ransomware.
With the acceleration to the cloud, companies are investing in cybersecurity to meet the threat of fast-evolving cyber attacks. AI and machine learning offer a way to stay ahead of criminals, automate threat detection, and respond more effectively than before. At the same time, more sophisticated, centralised security operations centres are being set up to detect and eliminate vulnerabilities.
In April 2021, the European Union published its Proposal for a Regulation on Artificial Intelligence (the “AI Regulation”). At this early stage in the legislative process, these are the key takeaways:
- It is the first legislation of its kind that aims to provide a regulatory AI framework;
- It defines an AI system as: “software that is developed with one or more of the techniques and approaches … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”;
- Some AI systems will be banned completely (“unacceptable risk”) – such as those causing, or likely to cause, physical or psychological harm;
- A risk-based approach to regulation is likely to apply: systems deemed “high-risk” will be expected to comply with extensive obligations, while “limited risk” and “minimal risk” systems will face lighter requirements;
- The regulation will have extra-territorial effect, impacting companies outside the EU that provide services into the EU;
- In line with the penalties available under the General Data Protection Regulation (“GDPR”), the AI Regulation allows for the imposition of fines of up to €30 million or up to 6% of annual global turnover, whichever is higher.
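The penalty cap above is a simple "higher of" calculation. The sketch below is purely illustrative (the function name and example figures are ours, not from the proposal), assuming the cap is the greater of €30 million or 6% of annual global turnover:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Illustrative only: return the proposed penalty cap, i.e. the
    higher of EUR 30 million or 6% of annual global turnover."""
    return max(30_000_000, 0.06 * annual_global_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 60 million,
# while one with EUR 100 million in turnover hits the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
print(max_fine_eur(100_000_000))    # 30000000.0
```

For comparison, the GDPR applies an analogous "whichever is higher" formula, with a €20 million / 4% cap for its most serious infringements.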
As expected, debate around this legislation has already started. On the positive side, the regulation may become the global standard, in the same way the GDPR has. It may also make AI systems more trustworthy and offer extra protections to the public. On the other side, it may stifle innovation and add costs and red tape, which may hinder start-ups from entering the market. We will hear more on this around the world before it becomes law, currently expected in 2023.
How could the AI Regulation improve cyber security?
Cybersecurity AI systems play a crucial role in ensuring IT systems are resilient against malicious actors. The new AI Regulation will undoubtedly affect these systems. Exactly how will depend on the system in question (e.g. law enforcement use of biometrics or facial recognition), and may involve conformity assessments, explainability testing, registration, and more.
Given the speed and agility with which technology is developed today, companies and innovators should consider how the future AI Regulation might affect their technology development.
Matheson’s highly experienced Technology and Innovation Group will be keeping abreast of developments as the legislation progresses. At this stage, we would be very interested to hear from clients on their expectations or questions about these developments.
For further information, please contact Rory O’Keeffe or any member of our Technology and Innovation Group.