We at the Data Protection Institute follow what is happening in our field as closely as possible. In this post, we identify three trends that we have noticed in the privacy landscape recently and explain how you should deal with them as a DPO.
Face recognition per se is, of course, hardly a trend. However, the gradual shift towards widespread use of face recognition in all kinds of consumer applications is one. At European level, efforts are still being made in the AI Act to ban the use of face recognition in the public domain (for example by police forces), but no such ban is intended for consumer applications. Whether you are checking in for the Eurostar, attending a festival or buying goods in a shop, you will come across face recognition!
This is not necessarily a bad thing. The technology does involve risks, but it also brings huge advantages in terms of convenience. As a privacy professional, you might be horrified by any use of face recognition, but if consumers like it, the choice is ultimately theirs. And therein lies the crux of the matter: this must be a transparent and informed choice.
Always make sure that face recognition applications are offered as a fair choice (with an alternative that does not use face recognition and carries no disadvantages) and that users are well informed in advance. Put stringent measures in place to protect face templates (or derivatives of them) against external access, and restrict internal access as much as possible.
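One way to make "restrict internal access" concrete is to gate every read of a stored template behind a role check and log each attempt. The sketch below is purely illustrative: the store, the role names and the log format are our own assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass, field

# Hypothetical "need to know" gate for face templates: only a designated
# service role may read them, and every attempt (granted or not) is logged.
ALLOWED_ROLES = {"biometrics-service"}  # illustrative role name

@dataclass
class TemplateStore:
    _templates: dict = field(default_factory=dict)   # user_id -> template bytes
    access_log: list = field(default_factory=list)   # (who, role, what, granted)

    def get_template(self, user_id: str, requester: str, role: str) -> bytes:
        granted = role in ALLOWED_ROLES
        self.access_log.append((requester, role, user_id, granted))
        if not granted:
            raise PermissionError(f"{requester} ({role}) may not read templates")
        return self._templates[user_id]

store = TemplateStore()
store._templates["alice"] = b"\x01\x02"  # placeholder template bytes
store.get_template("alice", requester="matcher-01", role="biometrics-service")
try:
    store.get_template("alice", requester="jdoe", role="marketing")
except PermissionError:
    pass  # refused, but still recorded in the access log
```

The point of the pattern is that denial alone is not enough: the refused attempt is itself evidence worth keeping.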
“Stating the obvious, DPI!” we hear you shout at your screen. AI is the buzzword in almost every sector linked to the knowledge economy. However, what is mainly meant by AI these days is the subset of AI known as “machine learning”. And the keyword here is learning. The models underlying ChatGPT, for example, learn from huge amounts of data, and the origin, variety and quality of those data are therefore crucial for the technology to be successful.
Pay attention to the basics when you come up against AI/machine learning:
- Where did the data used to train the algorithm come from? Were they processed lawfully? In particular, when special categories of personal data are involved, consent will almost always be required.
- Are the data varied enough, or does the origin of the data give rise to certain biases that lead to less reliable results?
- How reliable is the outcome? How was the quality of the algorithm tested?
- Is AI used in a context that has a major impact on individuals? In that case, don’t forget to make sure that the outcomes are validated by a real person. This has to be more than an employee simply giving blind approval!
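The "varied enough?" question in the checklist above can be given a first, rough check in code: count how each group is represented in the training data and flag anything below a chosen share. This is only a sketch with an illustrative threshold; real bias analysis goes much deeper than group counts.

```python
from collections import Counter

def flag_underrepresented(labels, threshold=0.1):
    """Return groups whose share of the training data falls below `threshold`.

    A crude proxy for the 'are the data varied enough?' question: skewed
    data origins often show up as under-represented groups. The 10% cut-off
    is an assumption for illustration, not a recommended value."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < threshold)

# Example: a dataset collected mostly from one region.
labels = ["EU"] * 800 + ["US"] * 150 + ["APAC"] * 50
flag_underrepresented(labels)  # APAC is only 5% of the data, so it is flagged
```

A flagged group does not prove the model is biased, but it tells you where to look before trusting the outcomes.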
There are also points to watch when you use existing AI applications. Check how such an application handles the (personal) data you feed it as input: if the application reuses that input, for example for further training, this can create risks such as data leaks.
Within security and privacy, we often focus on external threats or internal “errors”: hackers, phishing, ransomware, or e-mails sent by accident. Slowly, however, more attention is being paid to the “insider threat”, in other words an individual within the organisation deliberately trying to gain access to the organisation’s (personal) data for their own benefit. Recent examples come from the police, hospitals and even a dental hygienist. Some aspects worth considering:
- Not all insiders are deliberately malicious; some are simply not sufficiently aware of the rules. So keep focusing on awareness, and use previous incidents as concrete examples (without targeting particular (former) employees).
- If individuals cannot access data, they cannot misuse them. Restrict access to personal data as much as possible and apply the “need to know” principle.
- Always provide adequate logging so that incidents can be traced.
- Introduce a clear disciplinary process and ensure that it is enforced. This also helps to bring about a cultural shift: “the organisation really cares about privacy”!
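The "adequate logging" point above can be as simple as emitting a structured record of who accessed what, and when, on every read of personal data. The sketch below is a minimal illustration; the field names and logger name are our own assumptions.

```python
import datetime
import json
import logging

# Hypothetical audit-logging sketch: each access to personal data produces a
# structured entry so that incidents can be reconstructed afterwards.
audit = logging.getLogger("audit")

def log_access(user: str, record_id: str, action: str) -> dict:
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,       # who
        "record": record_id,  # what
        "action": action,   # read / update / export, etc.
    }
    # In practice, ship these entries to append-only, tamper-evident storage
    # that the logged users themselves cannot modify.
    audit.info(json.dumps(entry))
    return entry

entry = log_access("jdoe", "patient-4711", "read")
```

Note that audit logs are themselves personal data about employees, so retention and access to the log also need a policy.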