Resist the robot takeover

Artificial intelligence is coming to your inbox. A Finnish company has rolled out a new product that lets potential employers scan the private emails of job applicants to determine whether or not they would be a good fit for the organization.

The company, Digital Minds, portrays its offering as something innocuous. If applicants give their consent, what’s the harm?

The truth is: We don’t know the answer to that question. And that’s what makes new, potentially intrusive and discriminatory technologies like this one so scary.

The product skims a job applicant’s private conversations to compute an assessment of his or her psychological traits using IBM Watson’s Personality Insights. The result is a profile of the candidate, based on the so-called Big Five personality traits — openness to experience, conscientiousness, extraversion, agreeableness and neuroticism — that gives the potential employer an assessment of whether the person in question would be a good fit at his or her company.
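
To make the mechanism concrete, here is a purely illustrative sketch in Python. It is neither Digital Minds' product nor IBM Watson's actual Personality Insights API; the word lists and scoring rule are invented for illustration, and real systems rely on trained language models rather than simple word counts.

from collections import Counter

# Hypothetical trait vocabularies, invented purely for illustration.
TRAIT_WORDS = {
    "openness": {"curious", "novel", "imagine"},
    "conscientiousness": {"plan", "deadline", "organize"},
    "extraversion": {"party", "team", "talk"},
    "agreeableness": {"thanks", "help", "appreciate"},
    "neuroticism": {"worry", "stress", "afraid"},
}

def big_five_profile(emails):
    """Return a crude 0-1 score per Big Five trait from a list of email bodies."""
    words = Counter(w.lower().strip(".,!?") for text in emails for w in text.split())
    total = sum(words.values()) or 1
    return {trait: sum(words[w] for w in vocab) / total
            for trait, vocab in TRAIT_WORDS.items()}

print(big_five_profile(["Thanks for the help, I will plan around the deadline."]))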

Digital Minds, started by two young Finns in 2017, claims it has carried out thousands of assessments for dozens of clients so far, and that it is fully compliant with the European Union’s strict data protection and privacy legislation, known as GDPR.

There are some obvious advantages to a tool like this. Current hiring methods are far from perfect: reviewing resumés and conducting face-to-face interviews leaves plenty of room for discrimination. Interviews in particular tend to measure how likable a candidate is to the interviewer, rather than his or her ability to work efficiently in an organization.

Research conducted at the University of California, Berkeley — based on monitoring a company’s internal exchanges over six years — also found that the language used in emails could successfully predict an employee’s career path. Those who, based on a linguistic analysis of their emails, did not fit in with the company’s culture were much more likely to leave — whether voluntarily or not — and less likely to be promoted internally.

No studies have yet assessed the predictive power of private emails, but it is plausible that analyzing them could be similarly effective. A tool that replaced face-to-face interviews with something cheaper, more reliable and less susceptible to personal bias could indeed lead to better hiring results.

Of course, no tool would be fool-proof. If the practice of looking at prospective employees’ private inboxes became widespread, it’s likely that online tools would pop up to help candidates adapt their inboxes to fit their target employers’ culture or expectations.

A Finnish company has rolled out a new product that lets potential employers scan the private emails of job applicants | Mike Clarke/AFP via Getty Images

This product could also prove to be illegal. In Finland, employers are barred from collecting personal data about employees or job applicants that is not directly relevant to a specific task, even if the person in question gives his or her consent. Whether using private emails to build a psychometric profile constitutes such a collection of personal data is still unclear, as is how the law treats the arbitrariness of the procedure.

The EU’s data protection regulation also forbids decisions “based solely on automated processing” if they produce “legal effects” on someone — wording that’s convoluted enough to leave too much room for interpretation.

But that’s not the real danger of tools like this one. Digital Minds’ email-scanning tool is an example of a growing reliance on automated decision-making. Increasingly, we are taking power out of the hands of humans and entrusting it to algorithms that make decisions on criteria we haven’t decided upon. The opacity of exactly how they make those decisions is what poses the biggest risk of these new technologies and approaches.

Not even the most data-driven companies can reliably predict the behavior of self-learning systems like the one developed by Digital Minds. Amazon, for example, ditched a recruiting system it had developed in-house because it discriminated against women. To its credit, Amazon’s staff scrutinized their own tool and noticed the problem themselves. But not every company is likely to be as rigorous.

As long as automated decision-making remains a black box, there is no way for employers, or even programmers, to know the basis on which it is making its judgement. Is the algorithm looking for key words related to union activity, family planning, sexual preferences or other traits employers might find undesirable? Has a job seeker been rejected on the basis of a past action, gender, religious belief or political affiliation? We don’t know.

The arbitrariness of tools like this could elicit a significant backlash among citizens, especially as automated decision-making starts to have important effects on people’s lives.

That’s already starting to happen across Europe. France has a vast system of automated speed controls; authorities in Denmark have developed a tool to automatically identify children at risk of neglect; and in Italy, treatment for patients in the public health system is allocated on the basis of automated data analysis.

The Netherlands relies on a system of “risk indication” to detect welfare fraud (it is currently subject to a legal challenge), and in Slovenia, teachers in some schools rely on automated tools to detect “problematic” students. In Poland, the ministry of justice last year introduced a system that, once a day, automatically assigns cases to judges across the country “without emotions” or “bias.”

These tools may be developed with the best intentions in mind. But unless we ensure these products are more transparent, people will lose trust in the companies and governments that use them.

European governments need to make sure they understand, and legislate for, how automatic decision-making is reshaping our world | Dan Kitwood/Getty Images

Citizens also need appropriate recourse to contest these automated decisions and to hold to account those who rely on them, whether they are public administrations or private companies. Dumping source code online, as some governments do, is not enough.

There are examples of how best to scrutinize these systems. New York City, for example, created a commission to identify systems of automated decision-making in use by the administration and review them “through the lens of equity, fairness and accountability.”

It is high time European governments — whether national, regional or local — took a page out of New York’s book and made sure they understand, and legislate for, how automated decision-making is reshaping our world.

Matthias Spielkamp is director of nonprofit AlgorithmWatch. Nicolas Kayser-Bril is a freelance data journalist based in Berlin. They are co-authors of the report “Automating Society: Taking stock of automated decision-making in the EU.”
