AI is hot, but beware…

Artificial Intelligence (AI) is hot! Basking in the heat of an ‘AI summer’, the field is nevertheless not without some burning issues regarding ethics and legal controls. This lucid webinar, organized by the Privacy Focus Group, helps you gain much-needed insight.

Conceived in 1956 during the ‘Dartmouth Summer Project’, AI has lived through a series of ‘summers’ of great promise… and ‘winters’ of lingering disinterest. Today, at 65 years of age, it looks like a magical and indispensable tool in every field of human endeavour, though sometimes perhaps too enthusiastically applied. Indeed, there is still the question of ‘what is AI?’ – the starting point of the webinar ‘AI, basic concepts & regulatory/ethical trends’ by Dr. Jan De Bruyne and Thomas Gils, both at the Centre for IT & IP Law (CiTiP) of KU Leuven.

What is AI?

“That’s an easy question to ask, but a hard one to answer,” says Dr. Jan De Bruyne. “There is no single definition of AI accepted by the scientific community.” However, one can distinguish between ‘narrow/weak AI’ (focused on one task, often outperforming humans), ‘general/strong AI’ (exhibiting human-level intelligence) and ‘super AI’ (surpassing human intelligence). In this classification, all of today’s AI applications are ‘narrow’.

Two major ways of approaching AI are the ‘top-down’, knowledge-based approach (a 1956 idea, with knowledge input by experts) and the ‘bottom-up’, data-driven approach (today’s popular one, facilitated by the data tsunami of recent years). The latter is based on ‘machine learning’, of which there are several families. This webinar focuses on supervised learning (labelled data), unsupervised learning (unlabelled data), reinforcement learning (reward/punishment) and deep learning. Learning methods can also have unforeseen side effects: bias in the data set leads to biased results and unwanted societal impact.
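To make the data-set bias point concrete, here is a minimal sketch in pure Python (with made-up numbers, not taken from the webinar) of supervised learning as fitting a simple decision rule from labelled examples, and of how a skewed training sample shifts the learned model:

```python
def fit_threshold(samples):
    """Learn a 1-D classifier: the threshold is the midpoint of the class means.

    samples: list of (value, label) pairs with label 0 or 1 (labelled data,
    i.e. supervised learning).
    """
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(pos) + mean(neg)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

# Balanced training data: classes centred around 2 and 8.
balanced = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
t_fair = fit_threshold(balanced)

# Biased sample: the positive class is under-represented and skewed high,
# so the learned threshold drifts upward.
biased = [(1, 0), (2, 0), (3, 0), (9, 1)]
t_biased = fit_threshold(biased)

print(t_fair, t_biased)          # 5.0 5.5
print(predict(t_biased, 5.2))    # borderline case now classified as 0
```

The toy model is trivial, but the mechanism it shows is the general one: whatever the training data over- or under-represents is baked into the model’s decisions, which is exactly why biased data sets produce biased results.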

Ethical and societal AI

The ubiquitous application of AI raises questions about its ethical aspects. Thomas Gils points out the role of ethics in design (acceptable for all stakeholders), ethics for designers (codes of conduct…) and ethics by design (what should AI do?). And yes, ethics are relevant, because AI is not (yet) robust (e.g., racist pitfalls), controllable (deep fakes), transparent (too much ‘black box’) or reliable (prone to bias)!

However, numerous initiatives have been launched to build regulatory and ethical frameworks for ‘trustworthy’ AI. In the EU, a high-level expert group has already issued several publications, including the Ethics Guidelines for Trustworthy AI (with seven key requirements). UNESCO, too, has a draft text (2021) on AI ethics. In 2020, the European Commission produced a white paper on AI (to both regulate and stimulate AI), followed in 2021 by an EC proposal for a Regulation (the AI Act, with a risk-based approach), one of a set of data-related acts.

Other initiatives have been taken by the Council of Europe, the OECD, the World Intellectual Property Organization and the Global Partnership on AI, as well as by countries such as the US and China. These efforts are welcome and needed, but many challenges remain (ethical impact assessment, digital literacy…), as a concluding ‘product case’ by Dr. Jan De Bruyne illustrates. Clearly, this webinar has only scratched the surface of a field that will grow in importance and urgency in the coming years, but it serves as a splendid starting point.


