Practical AI Use Cases in Good Trust

It is easy to drown in the sea of dire warnings about the dangers of AI, particularly to our privacy. The Coalition's Privacy focus group, however, helps with examples of AI done right!

In its third session on Artificial Intelligence (AI), focus group chair Jan Léonard, DPO at Orange, pointed out that beyond the first steps, the impact of AI on privacy requires attention to other aspects and domains, such as compliance. Hence the need for real-life examples of privacy-respecting AI solutions.

Demystifying AI

In his ‘Artificial Intelligence demystified and its relation with cybersecurity and privacy’ presentation, Joeri Ruyssinck, CEO of ML2GROW, kicked off with a definition of AI. It is ‘a new way to solve problems’ by ‘making the system itself intelligent, instead of creating the intelligence yourself.’ Or, “instead of programming a computer, you teach a computer to learn something and it does what you want,” as Eric Schmidt, ex-CEO of Alphabet, put it.

Joeri Ruyssinck illustrated the differences from the traditional way of solving problems (e.g., regarding the use of internal and external data) and pointed out several dangers (e.g., regarding the training and use of AI models). He also presented a classification of ‘privacy-preserving AI’, distinguishing privacy-preserving model training, inference and release.
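
To make the ‘privacy-preserving model training’ category concrete, here is a minimal sketch of one well-known technique in that family: clipping per-example gradients and adding Gaussian noise before each update, in the style of differentially private SGD. This illustration is not taken from the ML2GROW presentation; the toy linear-regression setup and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear regression with a 4-dimensional weight vector.
X = rng.normal(size=(256, 4))
y = X @ np.array([0.5, -1.0, 2.0, 0.0]) + rng.normal(scale=0.1, size=256)

w = np.zeros(4)
clip_norm = 1.0    # maximum L2 norm allowed per example gradient
noise_mult = 1.1   # noise scale relative to the clipping bound
lr = 0.1

for step in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    # Per-example gradients of the squared error: 2 * (x.w - y) * x
    residual = X[idx] @ w - y[idx]
    grads = 2 * residual[:, None] * X[idx]
    # Clip each example's gradient to bound any individual's influence.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Add Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(scale=noise_mult * clip_norm, size=4)
    w -= lr * (grads.sum(axis=0) + noise) / len(idx)

print("learned weights:", np.round(w, 2))
```

The clipping step is what makes the added noise meaningful: once no single example can move the update by more than `clip_norm`, noise of a comparable scale masks any individual's contribution.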

The proof of the pudding was a clear and convincing example of turning that most privacy-invasive device of all, the camera, into an inherently trusted cornerstone of a crowd-monitoring tool (in response to the ruckus caused by plans to use cameras for this purpose on the Belgian coast). By combining a ‘custom all-in-one edge device’ with an inventive data capture and processing approach, the required service was implemented while maintaining privacy. This matters, because, as the discussion pointed out, the public interest often demands solutions with a potentially severe impact on privacy.
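
The article does not detail ML2GROW's actual implementation, but the privacy property it describes, counting people on the device so that only aggregates ever leave it, can be sketched as follows. Both `capture_frame` and `detect_people` are hypothetical stand-ins for the edge device's sensor and detection model.

```python
import time

def capture_frame():
    """Hypothetical camera read; stands in for the edge device's sensor."""
    return [0, 1, 1, 0, 1]

def detect_people(frame) -> int:
    """Hypothetical on-device detector; a real system would run a
    person-detection model here. Returns only a count, never identities."""
    return sum(1 for px in frame if px > 0)  # stand-in logic

def run_crowd_monitor(interval_s: float = 0.5, cycles: int = 3):
    for _ in range(cycles):
        frame = capture_frame()
        count = detect_people(frame)
        del frame  # raw image is never persisted or transmitted
        print("sending aggregate only:", {"timestamp": time.time(), "count": count})
        time.sleep(interval_s)

run_crowd_monitor()
```

The design choice that carries the privacy guarantee is structural: the raw frame exists only inside the processing loop, and the only data crossing the device boundary is an anonymous count.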

Trustworthy AI

Privacy-preserving AI cannot be a one-off, or left to good luck. ‘Trusted AI’ requires a way ‘to instill trust in our AI solutions’. That was the topic of the ‘KBC’s governance framework for responsible use of AI’ presentation by Peter De Coorde and Maarten Cannaerts, both of KBC. The framework has been in the works for the past few years and is vital in ‘convincing stakeholders that our AI modeling, deployment and usage is trustworthy’. It is also a core long-term element of the company’s strategy of ‘responsible behavior and business ethics’, wholeheartedly supported by top management.

The presentation highlighted three aspects of this process (trusted AI, in depth, and in practice) and stressed the importance of involving the whole business side. One must realize that all AI systems are human creations, mimicking people, with a consequent need to beware of (and address) biases. Machines can be trusted to do a person’s job, but, just as for humans, we need to install controls to make sure they behave properly. Therefore, throughout the development cycle of AI projects at KBC, there are several approval checks by a variety of experts.

In depth, trusted AI is considered from five perspectives: data protection and privacy; diversity, fairness and non-discrimination; accountability and professional responsibility; safety and security; and transparency, explainability and human control. All of this is checked by consistently asking ‘what if’ questions. Interestingly, in this way the KBC approach already takes into account many of the concerns and demands of the proposed European ‘AI Act’.
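
As one illustration of how such a ‘what if’ question on the fairness perspective could be made operational (KBC's actual tooling is not described in the article), the snippet below computes a simple demographic-parity gap on a model's decisions and flags it for expert review when it exceeds a bound. The data, the protected attribute and the threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model decisions (1 = approve) and a protected attribute.
decisions = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # e.g. two demographic groups

rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
gap = abs(rate_a - rate_b)

print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, gap {gap:.2f}")
# 'What if' check: what if one group is approved far more often?
if gap > 0.05:
    print("demographic-parity gap exceeds threshold; route to expert review")
```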

In practice, it took some time to integrate these checks into the relevant processes and tools, including a ‘trust as a selling point’ course and a technical fairness framework (a recent addition, ‘leading to interesting business discussions’). Several projects have benefited from these practices, e.g. job application and customer intake processes. The main point is that AI in good trust is possible, but it requires a solid, long-term and well-structured approach. This session of the Privacy focus group offers some crucial insights and welcome examples.


