Privacy Focus Group: AI and Data Protection
9 November 2021 – Cyber Security Coalition
The second webinar of the Privacy Focus Group on the subject of ‘Artificial Intelligence’ (AI) tackles a major challenge: how can the use of AI be reconciled with the demands of the GDPR, particularly regarding data protection? Getting this right is an absolute must, both for best results and to avoid expensive penalties…
As with the first webinar, the Privacy Focus Group availed itself of the expertise of KU Leuven’s Centre for IT and IP Law (Knowledge Centre Data & Society). This ‘A.I. and the GDPR’ webinar was expertly presented by Brahim Bénichou, Koen Vranckaert and Ellen Wauters.
Know your data
The presentation kicks off with an overview of the GDPR principles (see GDPR art. 5) that AI solutions must abide by, whether they use clear personal data (this is allowed!) or non-personal data that nevertheless enables identification (AI is very clever at detecting patterns). AI solutions must restrict themselves to the minimum amount of relevant data needed, used only for a strict and lawful purpose, in an accountable way, and only for as long as useful. Furthermore, transparency is imperative (no ‘black box’ or opaque nature). Developers and users alike cannot treat these requirements negligently (no ‘check-box’-only approach), but must build them into their data processes ‘by design’. In fact, by selecting data more carefully, performing data cleansing (removing unneeded, outdated, etc. data), and keeping AI solutions ‘explainable’, one can avoid ‘garbage’ results (as in ‘garbage in, garbage out’).
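To make the data-minimization idea concrete, here is a minimal sketch (not from the webinar; the field names and allow-list are hypothetical) of restricting each record to only the fields an AI model actually needs before any processing takes place:

```python
# Hypothetical sketch of GDPR-style data minimization: keep only the
# fields a model actually needs (an allow-list) and drop the rest,
# including direct identifiers that serve no modelling purpose.

ALLOWED_FIELDS = {"age_band", "region", "purchase_category"}

def minimize(record: dict) -> dict:
    """Return a copy of the record restricted to the allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe",           # direct identifier - not needed
    "email": "jane@example.com",  # direct identifier - not needed
    "age_band": "30-39",
    "region": "Flanders",
    "purchase_category": "books",
}

print(minimize(raw))
```

The allow-list approach (rather than a deny-list) matches the ‘by design’ spirit: any newly added field is excluded by default until someone justifies its purpose.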
Data protection in ‘real life’ requires a ‘by design’ and ‘by default’ effort with appropriate measures. The webinar lists several points of attention, such as risk-based modeling (e.g. LINDDUN), explainable processes, the use of anonymization/pseudonymization/encryption, data security, learning methods, and the absolute need to document all efforts (‘if you didn’t document it, you didn’t do it’!). AI solutions venture onto thin ice particularly when applied to ‘profiling’ (allowed if the GDPR principles are respected) and ‘automated decision making’ (prohibited except in three situations). A Data Protection Impact Assessment may also be indicated (cf. the UK ICO’s 9-step approach). Do check out these parts of the webinar. Also of interest are the remarks on the use of AI solutions in scientific research and for statistical purposes. Various exceptions and rules apply, including specific Belgian legislation (e.g., non-pseudonymised data only as a last option).
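As an illustration of one of the measures mentioned above, here is a minimal pseudonymization sketch (an assumption for illustration, not the webinar’s method) using a keyed hash: the same identifier always maps to the same pseudonym, but it cannot be reversed without the secret key. Note that pseudonymised data still counts as personal data under the GDPR, and the key must be stored separately and securely.

```python
# Pseudonymization via HMAC-SHA256: deterministic (so records can still
# be linked across datasets) but not reversible without the secret key.
# The key below is a placeholder; in practice it would come from a
# secrets manager and be kept apart from the data.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

p1 = pseudonymize("jane.doe@example.com")
p2 = pseudonymize("jane.doe@example.com")
assert p1 == p2   # deterministic: the same person gets the same pseudonym
print(p1)
```

A keyed hash is preferred over a plain hash here because, without the key, an attacker cannot rebuild the mapping by hashing a list of guessed identifiers.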
Transparency is a must
Even more than ‘everyday’ applications processing personal data, AI solutions must provide the utmost transparency, both ‘internal’ and ‘external’. Internal transparency relates to an understanding of AI systems by and within the companies themselves (policies, guidelines, accountability). External transparency implies extensive, intelligible and easily accessible (clear and plain language) information on a list of topics (e.g., which categories of personal data are collected, the sources of these data, purpose, consent, data subject rights…) for the people impacted by the AI solutions. This is of particular importance when ‘automated decision-making’ processes are involved! The burden of proving sufficient transparency efforts rests on those responsible for the AI solutions, notwithstanding the probable laziness and negligence of the data subjects involved (an interesting example of this ‘negligence’: on 1 April 2010, GameStation found that 88% of shoppers did not read or care that the ‘terms and conditions’ included the transfer of their immortal soul…).
Need for a European AI Act?
Considering the scope of the GDPR, the question was raised whether there is still a need for a specific European AI Act. Apparently ‘yes’: the AI Act is a necessary complement to the GDPR. The latter regulation covers situations involving personal data, but not, for example, AI solutions without personal data that can nevertheless have a negative impact on people. “The concept of harm and the scope of processes clearly exceed pure personal data. […] Though there is overlap with the GDPR, the AI Act offers a broader coverage.” At the time of the webinar, the AI Act was still a proposal with ‘yet work to be done’. It will be necessary to strike a balance between data protection and the possibility to use AI without infringing the law. The changing nature of AI must also be taken into account.
Artificial Intelligence, in combination with privacy, is still very much unknown territory for developers, users and privacy protection officers. This webinar helps you find your way!
Additional information on this subject by the Knowledge Centre:
- Artificial intelligence and data protection: an exploratory guide (Oct 2021)
- Ethical principles and (non-)existing legal rules for AI (Oct 2021)
The text of the EU’s Artificial Intelligence Act (proposal) (both text and annexes) can be downloaded here.