Why New Profiling Software Raises Privacy Concerns

I always feel like... somebody's watchin' me

Key Takeaways

  • Software that uses artificial intelligence to profile people is raising privacy concerns. 
  • Cryfe combines behavioral analysis techniques with artificial intelligence.
  • The Chinese company Alibaba recently faced criticism after reportedly saying that its software could detect Uighurs and other ethnic minorities.
An artificial intelligence and facial recognition concept image of faces being scanned in a crowd.
KENGKAT / Getty Images

New artificial intelligence-powered software designed to help employers profile their employees is raising privacy concerns.

One new software platform, called Cryfe, combines behavioral analysis techniques with artificial intelligence. The developer claims that by analyzing minute clues, the software can reveal people’s intentions during interviews. But some observers say that Cryfe and other types of software that analyze behavior can invade privacy. 

"Companies increasingly rely on AI for profiling," AI expert Vaclav Vincale said in an email interview. "But even the humans who code these algorithms, much less a customer support person you reach on the phone, couldn’t tell you why they make any given recommendation."

More Than Words

Cryfe was developed by a Swiss company whose employees were trained by the FBI in profiling techniques. "Cryfe, in all interpersonal communication, does not only listen to words, but identifies other signals emitted by the human such as emotions, micro-expressions, and all gestures," Caroline Matteucci, the founder of Cryfe, said in an email interview.

"During recruitment, for example, this allows us to go and look for the real personality of our interlocutor."

Matteucci said users’ privacy is protected because the company is transparent about how its software works. "The user, before being able to use the platform, must accept the general conditions," she said.

"It is specified there that the user may in no case submit an interview for analysis without having received the written consent of the interlocutor."

Cryfe isn’t the only AI-powered software that purports to analyze human behavior. There’s also Humantic, which claims to analyze consumer behavior. "Humantic's path-breaking technology predicts everyone's behavior without them ever needing to take a personality test," according to the company’s website.

Artificial Intelligence overlays on people in a meeting.
metamorworks / Getty Images

The company claims to use AI to create applicants’ psychological profiles based on the words they use in resumes, cover letters, LinkedIn profiles, and any other piece of text they submit.
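
The article doesn't explain how such text-based profiling works under the hood, but a toy sketch can convey the general shape of the idea. Everything below, from the trait lexicon to the scoring rule, is invented for illustration and is not Humantic's actual method.

```python
# Toy illustration only: a naive keyword-based "personality" scorer.
# The trait lexicon and scoring rule are invented for this sketch and
# bear no relation to any real product's algorithm.
from collections import Counter

TRAIT_LEXICON = {
    "extraversion": {"team", "people", "energetic", "led", "presented"},
    "conscientiousness": {"detail", "organized", "deadline", "thorough"},
    "openness": {"creative", "novel", "curious", "experiment"},
}

def score_traits(text: str) -> dict[str, float]:
    """Return a crude per-trait score: matched keywords per 100 words."""
    words = Counter(w.strip(".,!?;:").lower() for w in text.split())
    total = max(sum(words.values()), 1)
    return {
        trait: 100 * sum(words[w] for w in keywords) / total
        for trait, keywords in TRAIT_LEXICON.items()
    }

resume = "Organized team lead; presented novel, creative experiments on deadline."
print(score_traits(resume))
```

A sketch this crude also hints at the critics' point: a handful of keyword hits ends up driving the entire profile.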

Behavioral software has run into legal challenges in the past. In 2019, Bloomberg Law reported that the Equal Employment Opportunity Commission (EEOC) looked into cases of alleged unlawful discrimination due to algorithm-assisted, HR-related decisions.

"This is all going to have to get worked out because the future of recruiting is AI," lawyer Bradford Newman told Bloomberg. 

Some observers take issue with behavioral tracking software simply because it isn't accurate enough. Nigel Duffy, global artificial intelligence leader at professional services firm EY, told InformationWeek that he's troubled by software that uses social media quizzes and affect detection.

"I think there's some really compelling literature on the potential for affect detection, but my understanding is that the way that's implemented oftentimes is rather naive," he said.

"People are drawing inferences that the science doesn't really support [such as] deciding somebody is a potentially good employee because they're smiling a lot or deciding that somebody likes your products because they're smiling a lot."

Chinese Companies Reportedly Profile Minorities

Behavioral tracking could have more sinister purposes as well, some human rights groups say. In China, online marketplace giant Alibaba recently raised a stir after it reportedly claimed that its software could detect Uighurs and other ethnic minorities.

The New York Times reported that the company's cloud computing business had software that would scan images and videos for the faces of ethnic minorities.

The Washington Post also recently reported that Huawei, another Chinese tech company, had tested software that could alert law enforcement when its surveillance cameras detected Uighur faces.

A 2018 patent application by Huawei reportedly claimed that the "identification of pedestrian attributes is very important" in facial recognition technology. "The attributes of the target object can be gender (male, female), age (such as teenagers, middle-aged, old) [or] race (Han, Uyghur)," the application said. 

A Huawei spokesperson told CNN Business that the ethnicity identification feature should "never have become part of the application."

The burgeoning use of artificial intelligence to sort through vast amounts of data is bound to raise privacy concerns. You might never know who or what is analyzing you the next time you go for a job interview.
