Speaker 1 (00:13):
Welcome to the High Volume Hiring Podcast. I'm Steven Rothberg, the founder of job search site College Recruiter. At College Recruiter, we believe that every student and recent grad deserves a great career. This podcast features news, tips, case studies, and interviews with the world's leading experts about the good, the bad, and the ugly when it comes to high volume hiring. Thanks for joining us. Today's guest is John Sumser, principal at HR Examiner, an analyst firm that covers HR technology and the intersection of people, tech, and work. As part of that, John examines the insides of hundreds of companies, their products, and their ecosystems each year. Oh, and he does a few podcasts on the side, too. John, welcome to the show.
Speaker 2 (01:01):
Hi, Steve. How are you? It's been a long time since we've had a chance to talk. I'm looking forward to this.
Speaker 1 (01:06):
Yeah, I think the last time we were in the same room at the same time, Ulysses S. Grant had just been inaugurated. It was somewhere around that era.
Speaker 2 (01:15):
No, no. I'm pretty sure that we were together on Armistice Day. <Laugh>
Speaker 1 (01:26):
Such malarkey. Oh, that takes us in a different direction. I think one of the last times we were together was in 2018 or 2019. College Recruiter was having what we called a college recruiting bootcamp, an employer user conference. Our friends at Google hosted it at their campus, and the topic was the use of AI in recruiting, which roughly four years ago was still kind of a new topic. People were trying to figure it out. You were one of the very few people in the world who had done any actual research into it: Is it real? Does it work? What are the problems with it? What are the opportunities? Et cetera. For the listeners who weren't one of the couple hundred people in that room, or haven't seen the video of your presentation, maybe you can fill us in a little bit about what you've found about AI, and any new things you've run across over the last few years as far as the use of AI with volume recruiting.
Speaker 2 (02:34):
So one of the things that I've learned about AI is pretty interesting, and that is that the best way you can define it is: it still doesn't work. And by that I mean we have AI all over the place, all around us, all the time now, and we don't notice it because it works. When Google guesses what your next search is, or Outlook recommends a response, it's always an example of some form of artificial intelligence being applied to some aspect of your life. We're immersed in it, just totally immersed in it. And the stuff that we don't think about, that's not really AI. It's not what people are thinking about when they talk about AI. When they think about AI, they're thinking about the stuff that might happen, the stuff that could happen, the stuff that hasn't quite happened just yet.
There's a bunch of that in the human resources and recruiting world, where the heart of the technical question is: can we have a machine make decisions about people? And, you know, on some level that's particularly abhorrent. I don't want a machine making a decision about me, and if a machine is gonna make a decision about me, I certainly wanna be able to protest it. I wanna have agency. And one of the things that AI systems currently do is deprive the people who are the objects of the AI of agency and process. So what does that mean? Well, in recruiting, AI has populated a bunch of areas. One area is, loosely, matching: improving the quality of the fit between a person and a job.
Another area is sort of guidance, as in: I have a question and the intelligence can answer it. Sometimes that's big; I know of systems that are incredibly detailed and robust. And sometimes it's little, like all of the initial intake questions in a screening interview. So chatbots and conversational intelligence are this piece, and you can have that as a step in a workflow or as a thing adjacent to a workflow. By adjacent to a workflow, I mean you can have the little bobbing head on your website, a chatbot that is prepared to capture your question and hand it off to somebody. That sort of little thing is an AI function. The chatbot that gets you on the phone and starts doing intake of information is that kind of function.
And then there's the matching stuff, which gets really complicated because it includes many systems that derive their data by scavenging it from around the internet. They're building huge databases of information about you and me that they capture from all sorts of places, for all sorts of reasons, and assemble into a profile that is beyond human understanding. Sometimes it's got information that you could read, and sometimes it's got behavioral information from your interaction with the screen that you could never read or understand. All of that is accumulated into profiles. The last time I looked, there were something like 50 companies doing some variation of assembling large data sets about people and making it possible to apply criteria and discover people in those data sets. One of the interesting things that doesn't quite work just yet is the increasing emphasis on skills rather than experience.
Experience tends to deliver biases and tends to be skewed towards privilege, so taking things out of the experience realm and recasting them in terms of skills is a way of leveling the playing field. But there are 15 companies with 15 different ideas about what skills are, at differing levels of detail. If you talk to one of the bigger, more heavily funded companies, they'll tell you there are 600 million or a billion skills, and I have no idea what you do with that. But their theory is: I don't need to have an idea about what you do with that, because the machine will take care of it for me. And that's when I start to get itchy.
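The skills-matching idea John describes can be sketched abstractly. This is a hypothetical toy, not any vendor's actual system: represent the job and each candidate as a set of skill labels and rank by overlap. Note that the whole exercise silently assumes everyone shares one skill vocabulary, which is exactly the taxonomy problem John raises.

```python
def rank_by_skill_overlap(job_skills, candidates):
    """Rank candidates by Jaccard similarity between their skill set and
    the job's required skills. Assumes a single shared skill vocabulary,
    which is the fragile assumption in real skills-matching products."""
    job = set(job_skills)

    def score(skills):
        skills = set(skills)
        union = job | skills
        return len(job & skills) / len(union) if union else 0.0

    return sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)


# Hypothetical candidates and job, purely for illustration.
candidates = {
    "A": ["python", "sql", "communication"],
    "B": ["python", "sql"],
    "C": ["carpentry", "welding"],
}
ranking = rank_by_skill_overlap(["python", "sql", "etl"], candidates)
print([name for name, _ in ranking])  # candidate with no overlap ranks last
```

With a different taxonomy (say, "programming" instead of "python" and "sql"), the same people would score zero, which is why 15 companies with 15 skill ontologies give 15 different answers.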
Speaker 1 (07:55):
Yeah, right. Trust the machine. I mean, it worked just fine with the Matrix. I don't know why we would be concerned.
Speaker 2 (08:02):
Right, right. Well, just take the right pill. And then the last area is sort of automated assessment, which includes facial recognition in video interviewing, a frightening thing that keeps popping up. It doesn't seem to want to go away.
Speaker 1 (08:28):
It's like a cockroach, right? You step on one and 12 more emerge from the wall.
Speaker 2 (08:32):
Well, in this context of volume hiring, the idea that you might be able to tell whether or not somebody's lying to you without ever having to interact with them, that's part of the attractiveness. I'm not from the school that thinks people are generally awful; that's a more puritanical view of things than I carry. But I think it's common in volume hiring environments to believe that the biggest problem you have is figuring out where the bad apples are in the pile of people you're bringing into the company. So there's a lot of emphasis on weeding out bad apples, and facial recognition tools that promise to tell you when somebody's bullshitting you are an attractive proposition. You see that stuff getting sold into places that use lower social status people to perform hourly tasks with some level of efficiency. It's the area in which deep background screening is the norm, even though if you're the CFO, you can steal a whole hell of a lot more money than somebody who works in a retail store. The CFO is rarely dug into in the way that somebody who works at a store is dug into in terms of background reporting. It's fascinating.
Speaker 1 (10:05):
Yeah. If you wanna work as a cashier in a typical retail store, they're gonna run a criminal background check on you that goes back to when you were two years old. And if you had a DWI 12 years ago, sorry, you can't be a cashier here. I fail to understand what those two things have to do with each other. I can certainly understand, if you were convicted of, say, burglary six months ago, why you wouldn't wanna have that person working a cash register. But somebody who committed burglary six months ago is gonna do far more damage to you as a CFO than as a cashier.
Speaker 2 (10:41):
Right. And you might not catch it, because we don't really do as extensive a background check there. So anyhow, the point is that there is a market for the weirdest stuff, and there is a lot of weird stuff that passes as a way of telling whether or not your people are going to make it in your organization. One of the things that I learned way before there was such a thing as AI is that the quality of your hiring process goes up the more consistent you are with it. It doesn't matter if you do bonehead things in the process; the hiring process gets better because you do repeated things. You start having standardized criteria by which you're judging the hiring decision, and that's always going to increase the base level of quality that you get. It will also rule out anything interesting. It narrows your focus and gets you to pick only chicken eggs to put in the thing and leave the duck eggs out.
Speaker 1 (12:08):
I definitely hear from a lot of employers that the worst hiring decisions are typically made by hiring managers who rarely hire people. The hiring managers who are hiring people all the time tend to get very good at it,
Speaker 1 (12:25):
Or you find out pretty quickly: you know what, she's hired 50 people in the last year and 49 of them have been awful. Maybe we shouldn't have her hiring people anymore, right? That's just not a core competency of hers. Does that translate over into software as well? As an organization uses the same piece of AI-driven software more and more, does it get better over time, at least theoretically?
Speaker 2 (13:01):
That's such a great question. Another way of asking it is: what's the difference between a human hiring person and a machine hiring person? The human takes in data with, say, a trillion sensors: every little increment of the shift in your eyebrows, the shift in the color of your cheeks, the pace at which you nod your head, what happens when I nod my head with you, whether you shake your head when I'm shaking mine. All of these things that are very subliminal parts of being a human interacting with another human are completely lost when you have a machine doing it, and the machine can only see what it's been told to look for. So you get a model, and the model of a super employee might have a hundred variables, if it's a really good model. Really good employees have like a billion variables, right? So you run these really interesting risks once you get up to speed: trying to figure out whether or not your criteria have gotten too narrow, and how you undo it when they have. Because if the machine learns that only people who look like Steven and have eyeglasses get hired, you get this incredible narrowing. The machine learns to get more and more precise, because the feedback is: guys like Steve work.
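The narrowing John describes can be shown with a toy simulation (hypothetical code, not any real hiring product): if each round of hiring selects the candidates most similar to everyone already hired, the spread of traits in the hired population keeps shrinking, which is the feedback loop in miniature.

```python
import random
import statistics

random.seed(42)

def simulate_hiring_loop(rounds=10, pool_size=200, hires_per_round=5):
    """Toy model of a self-reinforcing hiring loop: each round hires the
    candidates most similar to past hires, so the hired population's
    variance keeps shrinking round over round."""
    # One numeric "trait" stands in for a whole candidate profile.
    hired = [random.gauss(0.7, 0.05) for _ in range(5)]  # early hires look alike
    spreads = []
    for _ in range(rounds):
        pool = [random.gauss(0.5, 0.2) for _ in range(pool_size)]
        center = statistics.mean(hired)
        # The "AI" scores candidates purely by similarity to past hires.
        pool.sort(key=lambda trait: abs(trait - center))
        hired.extend(pool[:hires_per_round])
        spreads.append(statistics.stdev(hired))
    return spreads

spreads = simulate_hiring_loop()
print(f"hired-pool spread, first round vs last: {spreads[0]:.3f} vs {spreads[-1]:.3f}")
```

The spread of the hired pool falls as the loop runs, even though the candidate pool itself is just as varied every round: the narrowing comes entirely from training the selection on its own past output.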
Speaker 1 (14:55):
It's a self-fulfilling prophecy.
Speaker 2 (14:58):
Or as a friend of mine says, it's a self-licking ice cream cone.
Speaker 1 (15:02):
<Laugh>. Yeah. You hear these stories. There was the famous one, probably four years ago or so now, with Amazon basically feeding into its hiring system the profiles of all of its, I think they were software engineers. And then, lo and behold, the AI favored candidates for future positions who looked the same. They came from the same sorts of schools, they had the same sorts of backgrounds, because that's what the software was told to do. If I remember the story correctly, Amazon's software developers were overwhelmingly male. What a surprise. And I'm not saying that about Amazon specifically; that's just the nature of that industry. It's rare to find an organization with more than a handful of software engineers where most of them aren't male. And so, lo and behold, the software favored males.
That, to me, is an example of where the software was told to look for that kind of stuff. If all of those software engineers went to the same male-dominated schools, if they all belonged to the same male-dominated sports teams, the software's gonna start to think that that's a determining factor. And to an extent it has been for that organization in the past, though the motives moving forward might be very different. Which kind of brings me, John, to the next question I had for you, and that's about the risk of using AI. Maybe you can talk with us a little bit about both the legal and the moral risks of using AI.
Speaker 2 (16:45):
I'm gonna try to get this five-hour conversation squeezed into two minutes. So bear with me.
Speaker 1 (16:52):
Well, hey, it's 2022. Nobody can concentrate for more than 22 minutes, so, you know.
Speaker 2 (17:00):
Good. So the first risk is that what automated decision-making systems do is reinforce the status quo. So you get a narrowing. And what that tells you, as a big ethical question that's a whole radio show in and of itself, is that diversity and automated hiring are mutually opposed ideas.
Speaker 1 (17:31):
I have never heard it put that way, but yeah.
Speaker 2 (17:34):
Right. And it's just the nature of things. If you want to make the machine perform correctly, you have to keep feeding it bad information, because it's gonna focus on the things that work, and the things that work are the way that your culture has always been in your organization. And the problem with not paying attention to diversity today isn't about social justice. It's about the fact that the workforce has changed. <laugh> If you keep hiring the same kinds of people you hired last year, you're gonna run outta people. So you need to broaden your point of view about what constitutes success, what constitutes qualified, and that means struggling against the basic tendencies of AI. But that is a powerful, powerful place to start learning, because AI can only provide you with an opinion. And the temptation of human beings in the 21st century is to take whatever comes out of the computer's mouth and treat it as gospel, when really this is more like your father-in-law's opinion about whether or not you're a good husband. <laugh>
Speaker 1 (18:50):
Well, I would say that over the years, more and more, he decided that I was a good husband. But the reality was, early on, you know, I married his daughter, so there was that.
Speaker 2 (19:03):
But you have to take the father-in-law approach to the output of the AI, which is: treat it with suspicion. And that's the opposite of how the stuff is being sold, which is: treat it as reliable, it will take care of your problems, and you don't have to think about that risky stuff anymore. That sets you up for all sorts of legal and ethical problems. If you don't manage the machine as if it were a junior employee with a narrow view of how things are supposed to be, who might have an interesting opinion, you set yourself up for unintentional discrimination. You set yourself up for hiring the wrong kinds of people. You set yourself up for emphasizing factors that you didn't understand. And so the ethical questions slide into legal questions fairly quickly, because rule number one, when you buy an AI tool of any kind, is that the liability is yours, not the vendor's.
And if it doesn't work and it does stupid stuff, you're on the hook; they're gonna sue you. The vendor can wave their hands and do the Pontius Pilate routine, and you're stuck with the liability. You don't know what that risk is. They don't know what that risk is. But there's some chance that the aggregate risk across a body of employers is big enough that it can take down the company. And there are very, very few companies that have adequate ethics functions to prepare to route around that level of aggregate risk, because ethics is fundamentally risk management. So you need an ethics function. But the ethics function is gonna make you more thoughtful; whether as a vendor or as an HR department, it makes you more thoughtful. It doesn't give you solid answers. Solid answers are legal or moral answers. Ethics is about trying to make sense of what the right thing to do is, here and now. And that kind of thoughtful quality in managing technology is a new skill that is just emerging in the marketplace.
Speaker 1 (21:45):
Well, as we wrap up, I'll just kind of throw my two cents in there with an analogy, if that's okay. Well, hey, it's my podcast, so it's...
Speaker 2 (21:59):
It's not okay. Stop. Stop.
Speaker 1 (22:01):
Yeah, exactly. Complain all you want, John. I pay the editor. So, it seems to me that there's a lot of similarity here to when employers, for quite a while, would try to outsource their discriminatory hiring practices to agencies. Then the employer would say, hey, it wasn't us who chose this slate of candidates that were all white males. It was this staffing company or this executive recruiting company, and all we were doing was choosing from the three finalists; we have nothing to do with them. The courts have come down pretty strongly: you can't outsource illegality. If you choose a vendor that is acting on your behalf, they are your agent, just like an employee who's acting on your behalf is your agent, and you're responsible for their behavior. And it should be no different
with AI. It's just another tool. You're choosing a tool, you're choosing how to use it, and it's up to you to know if that tool is acting properly, for lack of a better word. And I do think it's gonna be interesting moving forward, probably another conversation for you and me another day, whether organizations are going to be successful with what I've seen some of them argue: that we can't be held responsible for what we don't know. We don't know how our AI software is working, or why it's choosing some candidates over others, because the vendor won't tell us; it's a black box. And so if we don't know why it's rejecting some candidates, we can't be held responsible for that. And I call BS on that. You shouldn't have chosen that software to begin with.
Speaker 2 (24:00):
Well, yeah. Ignorance is not an excuse, right? And today, when all you have to do is look at the data that you've already collected, the idea that you don't know is indefensible.
Speaker 1 (24:15):
Yeah. I mean, if you just hired 125 people for your call center and 123 of them are, you know, white, and you've got a couple people who aren't, in most areas you're gonna have a problem. You should just be able to see it; the proof is in the pudding. And if you can't determine why your software is ranking one candidate ahead of another, I would say that's software you shouldn't be using. But so, John, before we wrap up: listeners who wanna learn more about you, your adorable dog Carlos, your much better wife Heather, or anything else, how do they reach you?
Speaker 2 (25:00):
Twitter is pretty useful; I'm @johnsumser on Twitter. If you'd like to email me, I love talking about ethics and AI, which makes me a weirdo, but I have fun doing it. I'm john at hrexaminer.com, that's H R E X A M I N E R dot com. I'd love to talk to you. I have an ill-attended website called HR Examiner, which got ill-attended because I am up to my eyeballs working with companies on ethics so that they manage their AI correctly.
Speaker 1 (25:37):
Plumbers have the leakiest pipes.
Speaker 2 (25:39):
Plumbers have the leakiest pipes. Exactly.
Speaker 1 (25:42):
Well, thank you. Go ahead, John. Sorry.
Speaker 2 (25:45):
Yeah, no, I was gonna say thank you. I was gonna beat you to the gratitude section of the conversation. So thank you for having me on. This was a lot of fun. I miss you.
Speaker 1 (25:54):
I miss you as well, and we'll be in the same room at the same time before we're even older. Thank you so much for joining us today on the High Volume Hiring Podcast. This has been a co-production of Evergreen Podcasts and College Recruiter. Please subscribe for free on your favorite app, review it (five stars are always nice), and recommend it to a couple of people you know who wanna learn more about how best to hire dozens or even hundreds of people. A special thanks to our producer and engineer, Ian Douglas, and hopefully he doesn't ax out the bad parts that John said that we disagreed about. I'm your host, Steven Rothberg of job search site College Recruiter. Each year, we help more than 7 million candidates find great new jobs. Our customers are primarily Fortune 1,000 companies, government agencies, and other employers who hire at scale and advertise their jobs with us. You can reach me at [email protected]. Cheers.