Protecting data in use when consent is not enough

By Gary LaFever, Co-Founder, CEO and General Counsel at Anonos

In today’s data-driven world, new technical safeguards are required to balance data innovation with the protection of privacy rights when consent no longer works. While consent may look like a panacea for privacy, it fails across the data ecosystem in numerous ways, including the following:

  • Unpredictability of Operations – Consent exposes data controllers to unpredictable interruptions in processing – e.g. when a data subject revokes consent under the EU General Data Protection Regulation (GDPR) or exercises the Right to Erasure (‘Right to be Forgotten’).
  • Legal Processing Complications – The requirements for consent as a basis for lawful processing under the GDPR may be impossible to satisfy – e.g. when sophisticated iterative processing cannot be described in advance with the detailed specificity the GDPR now requires (generalised consent is no longer legally effective).
  • Self-Selection Issues – Reliance on consent can result in inaccurate and incomplete data – e.g. relying on consent to include personal data in studies has been shown to produce biased data that is not representative of the larger population.
  • Liability Across Data Ecosystem – All stakeholders must ‘live’ with the consequences of inadequate consent secured by other parties in the ecosystem – e.g. controllers and processors involved in joint processing activities are jointly and severally liable under the GDPR when another party along the data pipeline fails to secure lawful consent.

Regulator guidance and enforcement actions by EU Data Protection Authorities recognise the inadequacy of consent as it was used before the GDPR came into effect.

The use of consent as a basis for lawful processing of personal data is now severely restricted, particularly when it comes to secondary uses of personal data such as analytics, machine learning and artificial intelligence. New technical safeguards are now necessary to replace consent as the bulwark for ensuring individual privacy while still enabling innovation using data. 

The GDPR highlights pseudonymisation as a technological solution. Pseudonymisation – legally defined at the EU level for the first time in the GDPR, with a heightened standard relative to past practice – is repeatedly mentioned as a recommended safeguard. In more than a dozen places, the GDPR links pseudonymisation to express statutory benefits.

GDPR Article 25(1), for example, identifies pseudonymisation as an “appropriate technical and organisational measure”, while Article 25(2) requires controllers to “implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed”.
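As a minimal sketch of what pseudonymisation can look like in practice, the example below replaces a direct identifier with a keyed pseudonym using HMAC, so that the secret key plays the role of the “additional information” which, under the GDPR Article 4(5) definition, must be kept separately and protected by technical and organisational measures. The key, record, and function names are hypothetical illustrations, not a description of any particular product; a real deployment would also need key management, access controls, and re-identification governance.

```python
import hmac
import hashlib


def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    The secret key is the 'additional information' that must be held
    separately from the pseudonymised data; without it, the pseudonym
    cannot be attributed back to a specific data subject.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


# Illustrative only: key and record are hypothetical.
key = b"held-separately-under-access-controls"
record = {"name": "Jane Doe", "purchase": "widget"}

# The released dataset carries the pseudonym, not the direct identifier.
pseudonymised_record = {
    "name": pseudonymise(record["name"], key),
    "purchase": record["purchase"],
}

# The same identifier under the same key yields the same pseudonym,
# so analytics can still link records without exposing identity.
assert pseudonymise("Jane Doe", key) == pseudonymised_record["name"]
```

Because the mapping is deterministic under a given key, records belonging to the same individual remain linkable for analytics, while re-identification requires access to the separately held key.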
