AI says, ‘Give me more data!’; GDPR says, ‘Slow down buddy!’
Training an artificial intelligence (AI) algorithm requires data—lots of data. But staying GDPR-compliant while acquiring that data can be almost impossible.
Here’s the problem: To make a decision about someone—e.g., that they like the color blue and should be targeted with blue advertisements—an AI algorithm combines their personal data with other data inside its big black box, and spits out the answer. Under GDPR, a company must obtain consent to use that personal data, tell the person exactly what it will be used for, and guarantee it won’t be used for anything else. But companies have no idea what’s happening inside that black box, so true consent becomes a myth.
Article 22 of GDPR complicates the issue by giving consumers the right not to have an automated process make a decision about them that has legal effects or otherwise “significantly affects” them. It also states that if someone asks for an explanation of how a decision was reached, the company must explain the reasoning. But once again, only the algorithm itself can explain its decision-making.