HR Software – Screening for Bias

Many small and middle-market companies, and virtually all large enterprises, use “intelligent” software to screen resumes and to search publicly available information, social media postings, and Internet histories to compile profiles of potential job candidates.

Best practices for HR software use currently include:

  • Retaining a third-party HR firm to screen ethnic, sex, age, disability, religious, and other inappropriate information about potential hires from employers;
  • Conducting due diligence to see whether the vendor has been involved in disparate impact cases;
  • Using only HR screening software that is tailored to the company and to specific jobs;
  • Ensuring job selection criteria are relevant to job requirements and business necessity;
  • Applying recruiting, hiring and promotion policies consistently;
  • Periodically analyzing hiring and promotion data for bias patterns [1];
  • Conducting in-person interviews; not using software tools as decision-makers.

New Screening Technologies

According to recent surveys, a new wave of HR talent-selection technologies is exploding onto the market: software evaluation of behavior (word choice, body language) and personality traits, online video interviews ranked using predictive analytics, and mobile job-applicant screening games [2].

These new tools, the instruments of an emerging 21st-century “workforce science”, categorize people based on characteristics which are soft, subjective, not binary, not either/or. And since the screening algorithms [4] and the databases on which they’re constructed are proprietary and not open to outside review [3], they carry potential class action and reputational risks, as well as society-wide implications. It’s therefore worth taking a closer look at the issue of programmed-in bias, and at discrimination which is the result of data mining itself.

In addition to providing time and cost savings, these platforms are designed to eliminate illegal discrimination from hiring and promotion decisions. Will they?

The vendors of HR screening technologies seek to ensure that the only personality traits screened for are those related to job performance by verifying (e.g., via job audits; “back-testing”) that the traits are predictors of performance, are job-relevant, and impact the company’s bottom line. [5]
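
To make “back-testing” concrete, the Python sketch below is a minimal outline (the file and column names are hypothetical, and it is not any vendor’s actual method) of checking whether screened trait scores predict an already-known performance outcome for past hires.

    # Minimal back-testing sketch: do screened trait scores predict an
    # already-known outcome for past hires? All names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    hires = pd.read_csv("past_hires.csv")                 # hypothetical historical records
    X = hires[["conscientiousness", "problem_solving"]]   # screened trait scores
    y = hires["met_performance_goal"]                     # known outcome (1 = met goal)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # An AUC near 0.5 means the traits carry little signal about known
    # outcomes, undercutting a job-relevance or business-necessity claim.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Back-test AUC on held-out past hires: {auc:.2f}")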

Recruiting – Big Data Insights

However, as big data screening technology becomes increasingly sophisticated, employers searching for “insights” will leverage its speed, granularity, and economies of scale by increasing the number of factors, including soft, hard-to-code personal characteristics, that are considered “important” in candidate selection; i.e., they will data-mine to find hidden correlations without regard to explicit job-performance and retention-related criteria.
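
A toy sketch of that kind of “insight” mining (Python; the file and column names are invented for illustration) simply ranks every available candidate attribute by its correlation with a retention outcome, with no filter for job relevance.

    # Rank every numeric candidate attribute by correlation with retention,
    # job-relevant or not. All data and column names are hypothetical.
    import pandas as pd

    candidates = pd.read_csv("candidate_history.csv")   # hypothetical file
    outcome = candidates["stayed_two_years"]            # 1 = stayed two years, 0 = left

    correlations = (
        candidates.drop(columns=["stayed_two_years"])
        .select_dtypes("number")        # every coded attribute, soft or hard
        .corrwith(outcome)
        .abs()
        .sort_values(ascending=False)
    )

    # The top of this list drives selection even when an attribute (commute
    # distance, social-media activity) is only a proxy for something else.
    print(correlations.head(10))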

This will set up a tension between what technology can increasingly deliver, what talent management vendors and their client companies can legally use, and what government and private plaintiffs can prove (is the process fair? are the results tainted?) under current anti-discrimination laws.

HR screening technology uses machine-learning algorithms, aka artificial intelligence, to detect and learn behavior patterns. These systems build models that are “trained” on the data fed to them. But because AI is designed to mimic humans, AI-based programs can learn human biases from programmers, even though they are not given explicit, legally protected characteristics (e.g., race, age, sex) as attributes in a job description. [6] What “good” performance is defined to be, and how examples of it are labeled, affects the screening and evaluation rules the programs learn and then apply to applicants. [7]
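
A small synthetic example (Python, with entirely invented data) shows the mechanism: the model is never given the protected attribute, yet it absorbs the bias baked into historical “good performer” labels through a proxy feature.

    # Synthetic illustration: biased labels are absorbed via a proxy feature
    # even though the protected attribute is never a model input.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)        # protected attribute (never used as a feature)
    zip_area = group                     # proxy: residential segregation
    skill = rng.normal(0, 1, n)          # genuinely job-relevant signal

    # Biased historical labels: reviewers rated group 1 lower at equal skill.
    label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    X = np.column_stack([skill, zip_area])   # protected attribute excluded
    model = LogisticRegression().fit(X, label)

    # A clearly negative weight on the proxy shows the label bias was learned.
    print("Weight on zip-code proxy:", round(model.coef_[0][1], 2))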

Search terms and datasets have a context and a history, and disregarding them can reinforce narrow views of competence and historical stereotypes. An example – use of the term “digital native” in media, advertising and tech industry job descriptions may screen out older applicants. [8]

Limits of Testing for Adverse Impact

In an attempt to address this, AI software has been developed to test screening programs for unintentionally learned bias – to see, for example, whether it’s possible to predict candidates’ gender or race from data such as names (“ethnic”-sounding?), addresses (zip code demographics), or educational institutions (attendee profile) that a resume scanning program is analyzing. If so, the data in the scanning program can be redistributed so that the data producing the bias cannot be “seen” by the program. [9]
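
A minimal version of that leakage test (Python; the file and feature names are hypothetical) trains a simple classifier to predict the protected attribute from the same inputs the resume scanner uses, and treats accuracy well above chance as a red flag.

    # Leakage test sketch: can the protected attribute be predicted from the
    # scanner's own inputs? Column and file names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    resumes = pd.read_csv("resume_features.csv")   # hypothetical file
    X = resumes[["zip_median_income", "school_selectivity", "name_frequency_score"]]
    protected = resumes["self_reported_gender"]    # held out of the scanner itself

    # Accuracy far above chance means the scanner's inputs encode the
    # protected attribute and can reintroduce bias indirectly.
    leak_accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, protected, cv=5).mean()
    print(f"Protected attribute predictable from scanner inputs: {leak_accuracy:.0%}")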

However, as discussed below, such solutions do not deal with the larger issue of what to do about the bias embedded in historical data generally, which AI draws on, and which is often provided by HR software vendors.

Business Necessity

At issue is the use of AI in employee recruitment and screening that has an unintentional “disparate impact” on legally protected groups. Under Title VII of the 1964 Civil Rights Act, a practice with a disparate impact – use of a particular HR “predictive analytics” screening program that has such an impact – can be legally justified as a business necessity if: (i) it predicts employment outcomes; and (ii) there is no other procedure with less of an adverse impact.

HR data mining is being used to discover such correlations and support a “business necessity” justification even though the correlations may reflect historic patterns of discrimination within a business itself (e.g., in human-graded performance reviews) or underlying flaws in external data sources, rather than actual on-the-job performance. [10]

A major dilemma posed by the growing use of HR screening programs is whether it’s even possible to identify the biases embedded in a dataset. And if so, how are the results of data mining to be adjusted – “affirmative action”, by whose yardstick? [11]

Current government guidance has limited value. The U.S. Equal Employment Opportunity Commission and the Office of Federal Contract Compliance Programs advise firms to follow “The Uniform Guidelines on Employee Selection Procedures” [12] so that their tests and selection methods don’t have an adverse impact on protected groups, but the Guidelines haven’t been updated to reflect the use, and the problematic aspects, of HR data mining technologies.
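
For context, the Guidelines’ basic yardstick for adverse impact, the “four-fifths” (80 percent) rule, is simple arithmetic; the sketch below uses made-up numbers to show the calculation a screening program’s output could be checked against.

    # Four-fifths rule from the Uniform Guidelines: a group's selection rate
    # should be at least 80% of the highest group's rate. Numbers are made up.
    def selection_rate(selected: int, applicants: int) -> float:
        return selected / applicants

    rate_a = selection_rate(selected=48, applicants=100)   # group A
    rate_b = selection_rate(selected=30, applicants=100)   # group B

    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"Impact ratio: {impact_ratio:.2f}")
    if impact_ratio < 0.8:
        print("Below the four-fifths threshold: evidence of adverse impact, "
              "shifting the burden to show business necessity.")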

What Is Acceptable?

The promise of predictive analytics is that it will increase the likelihood that an otherwise qualified applicant is hired and produce a more diverse workforce – not by reproducing social preferences, but by expanding the types of people considered beyond those who would have surfaced through traditional hiring practices such as word of mouth and employee referrals.

The challenge presented is whether and how data mining discrimination can be addressed so that the new screening technologies don’t make worse what they seek to eliminate. Unless and until this is done, we may be left with deciding what degree of AI’s disparate impact is acceptable in American society. [13]

  1. Title VII, Civil Rights Act of 1964 (disproportionate adverse impact on legally protected groups); Age Discrimination in Employment Act of 1967; Title I, Americans with Disabilities Act of 1990.
  2. Kramer, “Does Talent Selection Tech Pose Discrimination Risks?”, Electronic Commerce and Law Report, Apr. 12, 2016, Bloomberg BNA, citing “Sierra-Cedar 2015-16 HR Systems Survey”
    http://www.bna.com/talent-selection-tech-n57982070031/
  3. K. Crawford, “Artificial Intelligence’s White Guy Problem”, Jun. 25, 2016, New York Times
    http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0
  4. An algorithm is a sequence of computational steps for solving a problem. J. Kun, “Big Data Algorithms Can Discriminate”, Aug. 16, 2015, Predictive Analytics Times
    http://www.predictiveanalyticsworld.com/patimes/big-data-algorithms-can-discriminate0816152/6089/
  5. Back-testing involves evaluating a predictive model’s validity by testing how the model predicts already-known results; this permits assessment of possible biases generated by input data or the modeling process. “HR Analytics Should Recognize Potential and Pitfalls in Use of Big Data”, Edgeworth Economics, Jan. 12, 2015. http://www.edgewortheconomics.com/experience-and-news/edgewords-blogs/edgewords-business-analytics-and-regulation/article:01-12-2015-12-00am-hr-analytics-should-recognize-potential-and-pitfalls-in-use-of-big-data/
  6. Id.
  7. S. Barocas and A. Selbst, “Big Data’s Disparate Impact”, 104 California Law Review 671, 680 (2016)
  8. V. Giang, “This Is The Latest Way Employers Mask Age Bias, Lawyers Say”, Fortune, May 4, 2015.
    http://fortune.com/2015/05/04/digital-native-employers-bias/
  9. K. Smith-Strickland, “Computer Programs Can Be As Biased As Humans”, Aug. 16, 2015, Gizmodo. http://gizmodo.com/computer-programs-can-be-as-biased-as-humans-1724436758
  10. See generally, “Big Data’s Disparate Impact”, n. 7, supra.
  11. Id.
  12. http://uniformguidelines.com/
  13. “The question of determining which kinds of biases we don’t want to tolerate is a policy one,” according to Deirdre Mulligan, University of California, Berkeley School of Information. “It requires a lot of care and thinking about the ways we compose these technical systems.” Silicon Valley, however, is known for pushing out new products without necessarily considering the societal or ethical implications. “There’s a huge rush to innovate,” Ms. Mulligan said, “a desire to release early and often — and then do cleanup.”; source: “Analytics & Employment Discrimination”; Jul. 23, 2015, Torylaw.com
    http://torylaw.com/analytics-employment-discrimination-2/