Podcast: Hired by an algorithm

If you’ve applied for a job lately, it’s all but guaranteed that your application was reviewed by software—in most cases, before a human ever laid eyes on it. In this episode, the first in a four-part investigation into automated hiring practices, we speak with the CEOs of ZipRecruiter and CareerBuilder, and one of the architects of LinkedIn’s algorithmic job-matching system, to explore how AI is increasingly playing matchmaker between job seekers and employers. But while software speeds up the sifting of the job market, algorithms have a history of skewing the opportunities they present to people by gender, race…and in at least one case, whether you played lacrosse in high school.

We Meet:

  • Mark Girouard, Attorney, Nilan Johnson Lewis
  • Ian Siegel, CEO, ZipRecruiter
  • John Jersin, former Vice President of Product Management, LinkedIn
  • Irina Novoselsky, CEO, CareerBuilder 

Credits:

This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, and Anthony Green with special thanks to Karen Hao. We’re edited by Michael Reilly.

Transcript:

[TR ID]

Jennifer: Searching for a job can be incredibly stressful, especially when you’ve been at it for a while. 

Anonymous Jobseeker: At that moment in time I wanted to give up, and I was like, all right, maybe this, this industry isn’t for me or maybe I’m just dumb. And I was just like, really beating myself up. I did go into the imposter syndrome, when I felt like this is not where I belong.

Jennifer: And this woman, who we’ll call Sally, knows the struggle all too well. She’s a Black woman with a unique name trying to break into the tech industry. Since she’s criticizing the hiring methods of potential employers, she’s asked us not to use her real name.

Anonymous Jobseeker: So, I use Glassdoor, I use LinkedIn, going to the website specifically, as well as other people in my networks to see, hey, are they hiring? Are they not hiring? And yeah,  I think in total I applied to 146 jobs. 

Jennifer: And… she knows that exact number, because she put every application in a spreadsheet.

Anonymous Jobseeker: I have a tracker in Excel. So every time I apply for a job, I use a tracker. After I apply, I look up recruiters on LinkedIn, I shoot them a quick message. Sometimes I got a reply, sometimes I didn’t.

Jennifer: Tech companies are scrambling to hire more women and people of color. She’s both, and she started to wonder why she wasn’t getting more traction with her job search. 

Anonymous Jobseeker: I’m a military veteran. I was four years active, four years reserve, and I went on two deployments. I’m from the Bronx. I’m a project baby. I completed my bachelor’s degree in information technology where there’s rarely any black people or any black women in general. 

Jennifer:  And, a few weeks ago, she graduated again. Now, she also has a master’s degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design. 

For many of the software developer jobs she applied to, Sally was assessed not by a human but by artificial intelligence—in the form of services like resume screeners or video interviews. 

Anonymous Jobseeker: I’ve been involved in many HireVues, many Cognify gaming interviews, and playing with my resume so that the AI could pick up my resume. Because being a Black woman, you remain a little on the unknown side, so playing with resumes just to get picked up.

Jennifer: Using AI in the hiring process got a huge push during the pandemic, because these tools make it easy to hire candidates without in-person contact.

Anonymous Jobseeker: But it was just weird not having human interaction because it’s like, okay, so who’s picking me, is this robot thing picking me or is a human being picking me? Am I going to be working with robots? Or am I going to be working with humans?

Jennifer: These interactions are almost always one-sided, and she says that added to her doubts. 

Anonymous Jobseeker: For me, being a military veteran, being able to take tests and quizzes or being under pressure is nothing for me. But I don’t know why the cognitive tests gave me anxiety, but I think it’s because I knew that it had nothing to do with software engineering—that’s what really got me. But yeah, so basically you would have to solve each puzzle within a timeframe and if you didn’t get it, that’s where you lose points. So even though I got each one right, because I was a bit slower, it was like, no—reject, reject, reject, reject.

Jennifer: The first place you might find AI in a hiring process is a tool that extracts information from resumes. It tries to predict the most successful applicants and sorts their resumes into a pile for a human to review.

Anonymous Jobseeker: So yeah, it wasn’t later, until maybe about 130 applications, where I met other people who were like 200 applications in, or 50 applications in. And we all were just like, what is this? 

Jennifer: And it’s only the tip of the iceberg. There are also chatbots, AI-based video games, and social media checks, and then come the automated interviews.

These are one-way video interviews where an algorithm analyzes a job candidate’s word choice, voice, and sometimes even their facial expressions.

Anonymous Jobseeker: It’s the tech industry. I don’t understand how the tech industry makes it difficult to get in, but then they complain that they don’t have enough people to hire.

Jennifer: At this point Sally is discouraged after loads of rejection.

But then—she has a realization.

Anonymous Jobseeker: And I was just like, all right, so it’s not me—it’s the AI. And then that’s when I got my confidence back and then I started reapplying to other things. 

Jennifer: It can be hard, or even impossible, to know how or why AI systems make the decisions they do.

But Sally wonders if one reason she wasn’t selected is that Black women, and college students who get a later start, are rarely represented in the training data used for these algorithms. 

Anonymous Jobseeker: Cause if this is me being a non-traditional student, I wonder other people, like if there was others, if they get affected by this. And then it’s like, do you email the company to let them know? Or it’s just like, because they told you no, forget them, like, no! Like, I don’t know, it’s like, like, how do you make something better without, I guess, being defensive.

Jennifer: I’m Jennifer Strong, and with most job applicants now screened by an automated system, we’re launching an investigation into what happens when algorithms try to predict how successful an applicant will be.

In a four-part series we’ll lift the curtain on how these machines work, dig into why we haven’t seen any meaningful regulation, and test some of these tools ourselves.

[TITLES]

Today’s job hunts are a far cry from the past, when the process started by dressing up to go make your best first impression.

SOT

Man: This looks alright. Tell me, why are you interested in this job?

Young Man: I need a steady job Mr. Wiley, with the chance to go places. 

[music up]

Jennifer: These days, many people start the process having to get past a machine.

System: I will pass you to our AI interviewer now. Please wait a second. Hello. I am Kristine. Let’s do a quick test run to get you familiar with the experience. Good luck with your interview.  Just remember, please relax and treat this as a normal conversation.

Hilke: So, I first heard all about this new world of machines in hiring while chatting with a cab driver. 

Jennifer: Hilke Schellmann is an Emmy-award-winning reporter writing a book about AI and hiring, and she’s been investigating this topic with us.

Hilke: So this was in late 2017. I was at a conference in Washington, DC, and needed a ride to the train station. And I always ask how the drivers are doing. But this driver’s reaction was a bit different. He hesitated for a second and then shared with me that he had had a weird day, because he had been interviewed by a robot. That got me interested, and I asked him something like: “Wait, a job interview by a robot? What?” He told me that he had applied for a baggage handler position at an airport, and instead of a human being, a robot had called him that afternoon and asked him three questions. I had never heard of job interviews conducted by robots and made a mental note to look into it.

Jennifer:  Ok, you’ve spent months digging into this. So, what have you learned?

Hilke: Hiring is profoundly changing from human hiring to hiring by machines. At the time, little did I know that phone interviews with machines were just the beginning. When I started to dig in, I learned that there are AI tools that analyze job applicants’ facial expressions and their voices, and that try to gauge your personality from your social media accounts. It feels pretty all-encompassing. A couple of times I actually had to think for a minute about whether I was comfortable running my own information through these systems.

Jennifer:  And who’s using these systems?

Hilke: Well, at this point most of the Fortune 500 companies use some kind of AI technology to screen job applicants. Unilever, Hilton, McDonald’s, IBM, and many, many other large companies use AI in their hiring practices.

To give you an idea of just how widespread this is—I attended an HR Tech conference a few months ago, and it felt like all of the tools for sale now have AI built in. 

Vendors I have been speaking to say that their tools make hiring faster and more efficient, save companies money, and pick the best candidates without any discrimination.

Jennifer: Right, because the computer is supposed to be making objective hiring decisions and not potentially biased ones, like humans do. 

Hilke: Yes. As we know, humans struggle to make objective hiring decisions. We love small talk, and we love finding connections with the people we’re trying to hire, like where they’re from. We often like it if folks went to the same schools we did. And none of that is relevant to whether someone can do a job.

Jennifer:  And what do we know at this point about which tools work and which don’t?

Hilke: We don’t really know which work and which don’t, because these tools don’t have to be licensed or tested in the United States. Jen—you and I could build an AI hiring tool and sell it. Most vendors claim that their algorithms are proprietary black boxes, but they assure us that their tools are tested for bias. That’s mandated by the federal government, but as far as I can tell there isn’t much third-party checking happening.

Jennifer:  So, no one gets to see inside these tools?

Hilke: Only a few get access, like external auditors after an algorithm is already in use. And then there are lawyers and management psychologists, often hired by a company that wants to buy a tool—they have the financial power to strong-arm a vendor into opening up the black box.

So, for example, I spoke with Mark Girouard. He’s an employment lawyer based in Minneapolis and one of the few people who’s ever gotten access. A few years back, he examined a resume screener that was trained on the resumes of successful employees. It looked at what the resumes of high performers in a job had in common, and here’s what he found.

Mark Girouard: Two of the biggest predictors of performance were having played high school lacrosse or being named Jared. Just based on the training data it was fed, those correlated with performance. You know, that was probably a very simple tool where the data set it was fed was: here’s a bunch of resumes, here are the individuals who are strong performers, and here are their resumes. The tool just finds those correlations and says, these must be predictors of performance.

Hilke: So could somebody say, Oh, playing lacrosse in high school, maybe you’re very good at teamwork. Teamwork is something that’s job relevant here.

Mark: Right, or why not field hockey? And I would say it really was, to some degree, a lack of human oversight. There’s no person opening the hood and seeing what the machine is actually doing.
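To make Girouard’s example concrete, here’s a minimal sketch of how a naive resume screener of this kind can latch onto spurious correlates: train a simple classifier on raw resume text, then inspect which tokens it treats as “predictors of performance.” The resumes and labels below are invented for illustration; the actual tool and its data were never made public.

```python
# Toy version of a resume screener trained on "high performer" labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "jared high school lacrosse captain sales associate",  # labeled high performer
    "jared lacrosse team account executive",               # labeled high performer
    "field hockey captain sales associate",                # labeled low performer
    "account executive retail experience",                 # labeled low performer
]
high_performer = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, high_performer)

# The tokens with the largest positive weights are what the model "learned"
# predicts performance -- here, artifacts like "jared" and "lacrosse".
ranked = sorted(zip(model.coef_[0], vec.get_feature_names_out()), reverse=True)
for weight, token in ranked[:3]:
    print(f"{token}: {weight:+.3f}")
```

Nothing in the pipeline asks whether a token is job-relevant; without a human opening the hood, “Jared” and “lacrosse” look exactly like skills.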

Jennifer: Yeah, and that’s why we decided to test some of these systems and see what we’d find.

Hilke: So, in one of these tests I answered every question by reading the Wikipedia entry for psychometrics in German. I’d assumed I’d just get back an error message saying, “Hey, we couldn’t score your interview,” but what actually happened was kind of interesting: the system assessed me speaking German but gave me a competency score for English.

Jennifer: But we begin with a closer look at job sites like LinkedIn and ZipRecruiter, because they’re trying to match millions of people to millions of jobs… and in a weird twist, these platforms are partially responsible for why companies need AI tools to weed through applications in the first place.

They made it possible for job seekers to apply to hundreds of jobs with the click of a button. And now companies are drowning in millions of applications a year and need a solution that scales.

Ian Siegel: Oh, it’s dwarfing humans. I mean, I don’t like to be Terminator-ish in my marketing of AI, but look, the dawn of robot recruiting has come and gone, and people just haven’t caught up to the realization yet.

Ian Siegel: My name is Ian Siegel. I’m the CEO and co-founder of ZipRecruiter.

Jennifer:  It’s a jobs platform that runs on AI.

Ian Siegel: Forget AI, ask yourself what percentage of people who apply to a job today will have their resume read by a human. Somewhere between 75 and a hundred percent are going to be read by software. A fraction of that is going to be read by a human after the software is done with it. 

Jennifer: It fundamentally changes the way a resume needs to be written in order to get noticed, and we’ll get into that later in the series. 

But Siegel says something else is accelerating the shift in how we hire: employers only want to review a handful of candidates.

Ian Siegel: There’s effectively this incredible premium put on efficiency and certainty, where employers are willing to pay up to 25% of the first year of a person’s salary in order to get a handful of quality candidates that are ready to interview. And so, I think, that we’re going to see adoption of, whether it’s machine learning or deep learning or whatever you want to call it, as the norm and the like table stakes to be in the recruiting field, in the literal like next 18 months. Not, I’m not talking five years out, I’m not talking the future of work, I’m talking about the now of work. 

Jennifer: Here’s how he describes his platform.

Ian Siegel: So, an employer posts a job, and we say other employers who have posted a job like this have liked candidates who look like that. And then we also start to learn the custom preferences of every employer who uses our service. So as they start to engage with candidates, we say, oh, okay, there’s tons of quality signal that they’re giving us from how they engage with these candidates. Like, do they look at a resume more than once? Do they give a thumbs up to a candidate? And then we can just start doing a, let’s go find more candidates who look like this candidate exercise, which is another thing that these algorithms are extremely good at. 

Jennifer: In other words, he thinks AI brings organization and structure to the hiring chaos. 

Ian Siegel: You end up with a job market that no longer relies on random chance, the right person happening upon the right job description or the right employer happening upon the right job seeker. But rather you have software that is thrusting them together, rapidly making introductions, and then further facilitating information to both sides along the way that encourages them to go faster, or stay engaged.
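As a rough illustration of the “find more candidates who look like this candidate” exercise Siegel describes, here’s a toy sketch. Production systems like ZipRecruiter’s learn embeddings with deep neural networks; the feature vectors below are made up, and the search is reduced to plain cosine similarity.

```python
# Toy "more candidates like this one" lookup over made-up feature vectors.
import numpy as np

candidates = {
    "cand_a": np.array([0.9, 0.1, 0.4]),  # hypothetical skill/behavior features
    "cand_b": np.array([0.8, 0.2, 0.5]),
    "cand_c": np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def more_like(liked_id, k=2):
    """Rank the other candidates by similarity to one the employer liked."""
    liked = candidates[liked_id]
    scores = {cid: cosine(liked, vec)
              for cid, vec in candidates.items() if cid != liked_id}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(more_like("cand_a"))  # cand_b ranks first: it "looks like" cand_a
```

Every thumbs-up or repeated resume view becomes another liked example to search near, which is how the platform learns each employer’s custom preferences.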

Jennifer: One example of that flow of information: job seekers get notified when someone reads their resume.

Ian Siegel: They get a feeling like there is momentum, something happening, so that everybody has as much information as possible to make the best decisions and take the best actions they can to get the result they’re looking for. 

Jennifer:  The AI also notifies employers if a candidate they like is being considered by another company. 

Ian Siegel: And if you’re wondering like, how good is it? I mean, go to YouTube, pick a video you like, and then look at the right rail, like, look at how good they are at finding more stuff that you are likely to like. That is the wisdom of the crowd. That is the power of AI. We’re doing the exact same thing inside of the job category for both employers and for job seekers. 

Jennifer: Like YouTube’s, their algorithm is a deep neural network.

And like all neural networks, it’s not always clear to humans why an algorithm makes certain decisions. 

Ian Siegel: It’s a black box. The way you measure it is you look at things like satisfaction metrics, the speed at which jobs are filled, the speed at which job seekers find work. You don’t know why it’s doing what it’s doing, but you can see patterns in what it’s doing.

Jennifer: Like when the algorithm learned that job seekers in New York’s tech industry who applied to positions in LA were often hired.

Ian Siegel: We’ve encountered a number of sort of like astute observations or insights that the algorithm was able to derive just by the training data that we fed it. We wouldn’t have said like any job posting in LA, post in LA and post in New York. Like that’s just not something you would think to do. It’s a level of optimization beyond what humans would think to go to.

Jennifer: And he says satisfaction among hiring managers has jumped by more than a third since the company introduced these deep neural networks.

Ian Siegel: So, like you’re getting into a realm of accomplishment and satisfaction that was literally unimaginable five years ago, like this is bleeding edge technology and the awareness of society has not caught up to it. 

Jennifer: But bias in algorithmic systems is something people are becoming more aware of. Going back to that YouTube analogy: the platform got in trouble when its algorithm served more and more radical content to certain people.

Ian Siegel: It is a fundamental problem that affects the job category, and we take it deadly seriously at ZipRecruiter. We’ve been thinking about it since we first introduced these algorithms. We were aware of the potential for bias to permeate our algorithms. You could theoretically be perfecting bias by giving people exactly what they want: you give them, I don’t know, more and more old white men, maybe, for example, whatever the bias would spit out.

Jennifer: That’s because the AI learns as it goes, based on feedback loops. Their solution is to not let the AI analyze specific demographic information, like names, addresses, or gendered terms like “waitress.”

Ian Siegel: So, we strip a bunch of information from the algorithms, and I believe we are as close to a merit-based assessment of people as can currently be done.
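Here’s a minimal sketch of the kind of preprocessing Siegel is describing: strip fields and terms that act as demographic signals before a matching model ever sees the resume. The term list and field handling are assumptions for illustration, not ZipRecruiter’s actual pipeline.

```python
# Toy demographic-redaction pass run before a resume reaches the matcher.
import re

# Illustrative mapping of gendered job terms to neutral ones.
GENDERED_TERMS = {"waitress": "server", "salesman": "salesperson"}

def redact(resume: dict) -> dict:
    """Drop name/address fields and neutralize gendered job titles."""
    text = resume["text"]
    for term, neutral in GENDERED_TERMS.items():
        text = re.sub(rf"\b{term}\b", neutral, text, flags=re.IGNORECASE)
    # Name and address are simply never passed downstream.
    return {"text": text}

print(redact({"name": "Jane Doe", "address": "Bronx, NY",
              "text": "Experienced waitress and team lead"}))
```

The catch, as the next section shows, is that stripping explicit fields doesn’t remove the behavioral signals that can stand in for them.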

Jennifer:  But how can ZipRecruiter and other job sites know for sure if there’s bias on their platforms, without knowing why the algorithm matches specific people to jobs? 

One person asking this question is John Jersin. He’s the former Vice President of Product Management at LinkedIn. And a few years back, he found some unsettling trends when he took a closer look at the data the platform gathers on its users.

And he says it all starts with what the AI is programmed to predict.

John Jersin: What AI does in its most basic form is try to optimize something. So it depends a lot on what that AI is trying to optimize, and then also on whether there are any constraints on that optimization that have been placed on the AI. Most platforms are trying to optimize something like the number of applications per job, or the likelihood that someone will respond to a message. Some platforms, and this was a key focus at LinkedIn, try to go deeper than that and try to optimize for the number of hires. So not just more people applying, but also the right people applying.

Jennifer: The largest platforms rely heavily on three types of data they collect, which get used to make decisions about which opportunities job seekers see and which resumes recruiters see.

John Jersin: The three types of data are the explicit data. What’s on your profile, the things that you can actually read, the implicit data, which is things that you can infer from that data. So, for example, if you wrote down on your profile, you’re a software engineer, and you worked at this particular company, we might be able to infer that you know certain kinds of technologies. That you know how to code, for example, is a pretty obvious one, but it gets a lot more sophisticated than that. The third type of data is behavioral data. What actions you’re taking on the platform can tell us a lot about what kinds of jobs you think are fit for you, or which kinds of recruiters reaching out about opportunities are more relevant to you.
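A schematic way to picture the three layers Jersin describes: explicit, implicit, and behavioral data attached to one candidate. The field names below are hypothetical, not LinkedIn’s actual schema.

```python
# Schematic data model for the three signal layers Jersin describes.
from dataclasses import dataclass, field

@dataclass
class CandidateSignals:
    explicit: dict = field(default_factory=dict)    # what the profile literally says
    implicit: dict = field(default_factory=dict)    # what can be inferred from it
    behavioral: dict = field(default_factory=dict)  # what actions on the platform show

profile = CandidateSignals(
    explicit={"title": "software engineer", "company": "Acme"},
    implicit={"skills": ["coding", "distributed systems"]},  # inferred from title
    behavioral={"applies_per_week": 4, "message_response_rate": 0.6},
)
print(profile.behavioral)
```

It’s the third layer, the behavioral one, that a candidate can’t edit or hide, and that turns out to matter most in what follows.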

Jennifer: This all looks great on paper. The algorithm doesn’t include the gender or names of applicants, their photos, or their pronouns. So, in theory, there shouldn’t be any gender or racial bias. Right? But there are differences in the data.

John Jersin: So we found, for example, that men tend to be a little bit more verbose. They tend to be a little bit more willing to identify skills that they have, maybe at a slightly lower level than women who have those same skills, who would be a little less willing to identify those skills as something that they want to be viewed as having. So, you end up with a profile disparity that might mean there’s slightly less data available for women, or women might put data on their profile that indicates a slightly higher level of skill or higher level of experience for the same statement, versus what a man might put on their profile.

Jennifer: In other words, the algorithm doesn’t get told who’s a man and who’s a woman, but the data gives it away: Many women only add skills to their resumes once they’ve mastered them, but many men add skills much earlier. So, in an automated world, it often appears that men have more skills than women, based on their profiles.  

Women, on average, understating their skills, and men, on average, exaggerating theirs, is of course also a problem in traditional hiring. But Jersin found other signals in the data that the AI picks up on as well.

John Jersin: How often have you responded to messages like this? How aggressive are you when you’re applying to jobs? How many keywords did you put on your profile, whether or not they were fully justified by your experience. And so the algorithm will make these decisions based on something that you can’t hide from the recruiter—you can’t turn off. And to some extent, that’s the algorithm working exactly as it was intended to work. It’s trying to find any difference it can to get this job in front of somebody who’s more likely to apply or to get this person in front of a company who’s more likely to reach out to them. And they’re going to respond as a result. But what happens is these behavioral differences, which can be linked to your cultural identity, to your gender identity, what have you, they drive the difference. So, the bias is a part of the system. It’s built in.

Jennifer: So different genders behave differently on the platform, the algorithm picks up on that, and it has consequences. 

John Jersin: Part of what happens with these algorithms is they don’t know who’s who. They just know, hey, this person is more likely to apply for a job. And so they want to show that job to that person, because that’ll get an apply, that’ll score a point for the algorithm. It’s doing what it’s trying to do. One thing that you might start realizing is that, oh, well, if this group applies to a job a little bit more often than this other group, or this group’s willing to apply to a job that they’re not quite qualified for, where it might be more of a step up for them than for this other group, then the AI might make the decision to start showing certain jobs to one group versus the other.

Jennifer: It means the AI may start recommending more men than women for a job, because men, on average, go after job opportunities more aggressively than women, and the AI may be optimized not just to recommend qualified people for a given job, but to recommend people who are also likely to apply for it.

And on the other side of the marketplace, the same thing is probably happening. The AI may show less-senior roles to qualified women and more-senior roles to qualified men, just because men are more likely to apply to those jobs.

John Jersin: Because of your gender, because of your cultural background, if that entails a certain behavioral difference, you’re going to receive different opportunities that other groups will not receive. Or worse, you might not be receiving opportunities that other groups are receiving simply because you behave a little bit differently on their platform. And we don’t really want our systems to work that way. We certainly shouldn’t want our systems to work that way to pick up on these potentially minor behavioral differences and then drive this radical difference in terms of opportunity and outcome as a result. But that’s what happens in AI.

Jennifer: Before he left LinkedIn, Jersin and his team built another AI to combat these tendencies. It tries to catch the bias before the other AI releases matches to recruiters. 

John Jersin: What representative results can do is rearrange the results so that it actually maintains the composition of people across those two different groups. So instead of the AI, for example, optimizing toward the people in one group and shifting to show 70 percent men and 30 percent women, it’ll make sure that it continues to show 50 percent of each.
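Here’s a simplified sketch of that “representative results” idea: re-rank the model’s output so the list keeps the group mix of the underlying candidate pool instead of drifting toward one group. The interleaving rule below is an assumption for illustration; LinkedIn’s production re-ranker is more sophisticated.

```python
# Toy representative re-ranker for a two-group candidate list.
from collections import deque

def rerank(ranked, target_share=0.5):
    """ranked: list of (candidate_id, group) in model-score order.
    Interleaves groups A and B so every prefix of the output stays
    close to target_share of group A."""
    a = deque(c for c in ranked if c[1] == "A")
    b = deque(c for c in ranked if c[1] == "B")
    out, count_a = [], 0
    while a or b:
        # Pick whichever group is currently under-represented (if available).
        want_a = count_a < target_share * (len(out) + 1)
        pick = a if (want_a and a) or not b else b
        chosen = pick.popleft()
        out.append(chosen)
        count_a += chosen[1] == "A"
    return out

scores = [("m1", "A"), ("m2", "A"), ("m3", "A"), ("w1", "B"), ("w2", "B")]
print(rerank(scores))  # alternates groups instead of listing all of "A" first
```

Within each group, candidates still appear in the order the matching model scored them; only the mix of the overall list is constrained.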

Jennifer:  Basically, he built AI to fight existing AI, to try to make sure everyone has a fair chance to get a job. 

And he says examples like the problem Amazon faced when testing its in-house resume sorter helped pave the way for developers to understand how unintentional bias can creep into the most well-intentioned products.

John Jersin: What they did was they built an AI that worked in recruiting and basically tried to solve this matching problem. And the data set that they were using was from people’s resumes. And so, they would parse through those resumes and they would find certain words that were more correlated with being a fit for a particular job.

Jennifer: The tech industry is predominantly male… and since the algorithm was trained on these mostly male resumes, the AI picked up those preferences.

This led Amazon’s algorithm to downgrade resumes with words that suggested the applicants were female. 

John Jersin: Unfortunately, some of those words were things like she or her or him, which identified something that has absolutely nothing to do with qualification for a job and obviously identified something about gender.

Jennifer: Amazon fixed the program to be neutral to those particular words, but that was no guarantee against bias elsewhere in the tool. So executives decided it was best to just scrap it.
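A toy illustration of why patching specific words is no guarantee: zero out the explicitly gendered tokens a model has learned, and correlated proxy terms from the mostly male training data can still carry the bias. Every weight below is invented; Amazon never published its model.

```python
# Invented token weights standing in for what a resume model might learn
# from mostly male training data.
learned_weights = {
    "executed": +0.8, "captured": +0.6,   # verbs common in the male-heavy data
    "her": -0.7, "women's": -0.6,         # explicit gender markers
    "softball": -0.5, "netball": -0.4,    # proxy terms that also skew by gender
}

GENDERED = {"she", "her", "women's", "he", "him", "men's"}
# The "fix": neutralize the explicitly gendered tokens.
patched = {t: (0.0 if t in GENDERED else w) for t, w in learned_weights.items()}

def score(tokens, weights):
    return sum(weights.get(t, 0.0) for t in tokens)

resume = ["captured", "women's", "softball"]
print(score(resume, learned_weights))  # -0.5: penalized directly and via proxy
print(score(resume, patched))          # +0.1: the explicit penalty is gone...
# ...but "softball" still drags the score relative to an otherwise identical
# resume listing "baseball", so the bias survives the word-level patch.
```

That residual proxy effect is essentially why scrapping the tool was the safer call.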

John Jersin: We’re talking about people’s economic opportunities, their careers, their ability to earn income, and support their families. And we’re talking about these people not necessarily getting the same opportunities presented to them because, they’re in a certain gender group, because they’re in a certain cultural group. 

Jennifer: We also called other job platforms to ask how they’re dealing with this problem, and we’ll get to that in just a moment.

[MIDROLL]

Jennifer: To understand what job platforms are doing to combat the problem John Jersin described tackling during his days at LinkedIn, we reached out to other companies to ask about this gender drift. 

Indeed didn’t provide us with details. LinkedIn confirms it still uses representative results. And Monster’s head of product management says he believes the company isn’t using biased input data, but it isn’t testing for this problem specifically either.

Then we spoke to CareerBuilder, and they told us they aren’t seeing the same problems LinkedIn found, because their AI tries to match people to jobs in a very different way.

They revamped their algorithm a couple of years back, because of a problem unrelated to bias. 

Irina Novoselsky: We really saw that there’s this big gap in the workforce: companies today aren’t going to be able to meet their needs from the current workforce.

Jennifer: Irina Novoselsky is the Chief Executive of CareerBuilder.

Irina Novoselsky: It means that high-paying jobs are going to continue to increase in salary. Low-paying jobs are going to increase too, but it’s going to hollow out the middle class.

Jennifer: She says that’s because supply and demand for these roles will continue to be an issue. And, the company uncovered the problem when analyzing 25 years of data from connecting candidates with jobs. 

Irina Novoselsky: And we used all of that information, that data, and leveraged our AI to create a skills based search. What does that mean? That means that you are matched and you look for jobs based on your skillset, on your transferable skill set. 

Jennifer: She says thinking about the workforce this way could help move employees from troubled sectors, where there are too many people and not enough jobs, to ones that really need workers.

Irina Novoselsky: When COVID happened, the whole airline industry got massively impacted. And when you look at it, flight attendants were out of a job for a significant period of time. But one of the things that our data and our algorithms suggested was that they had a 95% match to customer service roles, which happened to be one of the most sought-after roles with the biggest supply-and-demand imbalance, meaning that for every person looking there were over 10 jobs. And so when you match based on their skills, because they’re used to dealing with problems, they have communication skills, they’re logistics handlers, they’re project managers, and when you look at that high customer satisfaction and customer interaction skill set, they were a perfect match.

Jennifer: But some skill matches are more surprising than others. 

Irina Novoselsky: Prison guards, when you look at their underlying skill set, are a huge match for veterinary technicians: empathy, communication, strength, being able to manage difficult situations. The by-product of this is increased diversity, because if you think about it, you’re now not looking for the same type of person that you’ve been looking for, the one with that experience. You’re widening your net and you’re able to get a very different type of person into that role, and we have seen that play out, where our clients have been able to get a much more diverse skill set using our tools.
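As a rough sketch of skills-based matching the way Novoselsky describes it, here’s a score built on overlap of transferable skills rather than job titles. The skill lists are illustrative, not CareerBuilder’s data, and a real system would presumably weight skills rather than treat them as a flat set.

```python
# Toy skills-based match: compare skill sets, not job titles.
def skill_match(candidate_skills, role_skills):
    """Jaccard-style overlap between candidate and role skill sets."""
    cand, role = set(candidate_skills), set(role_skills)
    return len(cand & role) / len(cand | role)

flight_attendant = {"communication", "problem solving", "logistics", "safety"}
customer_service = {"communication", "problem solving", "logistics", "empathy"}

print(f"{skill_match(flight_attendant, customer_service):.0%}")  # high overlap
```

Because the query is a skill set instead of a prior job title, candidates from unexpected backgrounds can surface, which is the widening-the-net effect she describes.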

Jennifer: Her team also found differences when they took a closer look at the gender data. It turns out that a long list of required skills in a job description keeps many women away. And how a description is written also matters a great deal.

Irina Novoselsky: Women are more likely to respond to the words in a job description. And so if that job description isn’t written in gender-neutral tones, you’re not going to get the same mix of men and women applying.

Jennifer: CareerBuilder also has AI that suggests gender-neutral words for job descriptions, to avoid language like “coding ninja” or “rockstar,” which may deter some women from applying.

The company also found women and people of color, on average, apply to fewer jobs overall. And they built an AI to fix that too.  

Irina Novoselsky: And so this is where we really believe that shift towards skills is so disruptive. Not only because it helps solve this gap, that we just don’t have enough supply for the demand that’s out there, but it’s opening up this net of people that normally wouldn’t have applied. We’re pushing the jobs to them. We’re telling this candidate, we’re applying on your behalf, you don’t have to do anything. 

Jennifer:  But how good are these measures at avoiding unintentional bias? 

Honestly, it’s hard to know. More auditing is needed, and it’s incredibly hard to do from the outside, in part because researchers only ever get to see a tiny fraction of the data these algorithms are built on.

And making sure men and women get served the same opportunities is also a problem on social media. 

Facebook got in trouble for discriminatory job ads a few years back. It settled several lawsuits alleging that the company and its advertisers discriminated against older workers by allowing companies to show job ads only to people of a certain age, in that case excluding older job applicants.

Facebook vowed to fix the problem of direct discrimination in ad targeting. And although it did in theory, three scientists from the University of Southern California recently showed that in practice, the unintentional discrimination Jersin found at LinkedIn is still present on Facebook. The researchers didn’t find the problem on LinkedIn.

It remains to be seen how regulators will deal with this problem. In the US, that’s handled by the Equal Employment Opportunity Commission.

It’s recently taken a closer look at this industry, but has yet to issue any guidelines.

Meanwhile, if you’re wondering how Sally is doing, the woman searching for a job at the start of this episode: after 146 applications, she’s accepted a job. But she was hired the old-fashioned way.

Anonymous Jobseeker: I went straight for the interview, old-fashioned style, face-to-face, and that’s how I got it. They basically hired me off of my projects and what I already did, which is what I like. ’Cause it’s like, I’m showing you I can do the job.

[music]

Jennifer:  Next episode, the rise of AI job interviews, and machines scoring people on the words they use, their tone of voice—sometimes even their facial expressions.

Join us as we test some of these systems. 

Hilke: So… I was scored six out of nine… and my skill level in English is competent. What’s really interesting about this is I actually didn’t speak English. 

[CREDITS]

Jennifer:  This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, and Anthony Green with special thanks to Karen Hao. 

We’re edited by Michael Reilly.

Thanks for listening… I’m Jennifer Strong.
