Quality of Hire Blog

Computer as Recruiter? - They Lack Data, Analysis, and Judgment

by Joseph Murphy

Charles Handler wrote a very thought-provoking article on ERE about the future of computers in hiring decisions.

Charles - thanks for continuing to invite us forward.

The decision to hire will most likely always be an act of personal judgment.  However, better data about the variables that affect the quality of that decision is what differentiates downstream outcomes.  And in the case of hiring, the downstream outcome is on-the-job performance.

His first and last bullets are the 'sit up and take notice' points to embrace:

  • Algorithms must be fed quality post-hire performance data to be useful.
  • Our concept of validation will need to be expanded.

The writing on Big Data has captured the imagination of business and is working its way into the business process called staffing.  Extracting value from Big Data requires rigor and discipline.  That is the work of HR analytics.

The discipline of capturing and feeding back post-hire performance data requires a system and resources.  Lack of infrastructure to capture, analyze, and report objective performance metrics is a huge barrier in many organizations; for many jobs, companies simply do not measure performance.  The commitment of resources, e.g., manager completion of behaviorally anchored ratings, is often blocked by attitudes like 'Our managers don't have time for that' or 'That seems like a lot of work.'

The science of servo feedback has long been used to manage process consistency: a feedback loop modifies or controls the outcome.  Organizations unwilling to set up such a data loop eschew, by default, true learning from experience and evidence-based process improvement.  They leave a lot of unrealized potential on the table.
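As a toy illustration of that feedback-loop idea applied to hiring (every number, the screening-cutoff mechanism, and the proportional-control rule here are hypothetical, not anything from Charles' article): each cycle, observed quality of hire is compared against a target, and a screening cutoff is nudged to close the gap.

```python
def update_cutoff(cutoff, observed_quality, target_quality, gain=5.0):
    """Proportional feedback: raise the screening cutoff when observed
    quality of hire falls short of target, lower it when quality
    exceeds target. Gain and scales are arbitrary for illustration."""
    error = target_quality - observed_quality
    return cutoff + gain * error

cutoff = 60.0  # hypothetical assessment cutoff score
for observed in [3.2, 3.4, 3.7, 3.9]:  # quality-of-hire rating per cycle
    cutoff = update_cutoff(cutoff, observed, target_quality=4.0)
    print(f"observed quality={observed:.1f} -> new cutoff={cutoff:.1f}")
```

The point is not the arithmetic but the loop: without post-hire measurement, there is no `observed_quality`, and the cutoff can never be corrected by evidence.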

The concept of validation must indeed change.  Too many assessment publishers assert, "Our test is valid."  This perpetuates a static-state mindset about validation analysis and invites practitioners to believe there is universal value in validation.  In fact, using a pre-employment test under the claim that it is 'valid' is a 'me too' tactic that makes the user more like other companies rather than a driver of their competitive differences.  Call it striving for vanilla.  The low value of 'me too' approaches to HR practices is discussed well in The Differentiated Workforce.  (A book well worth the read.)

Validation may be viewed as the academic term for calibration, and assessment as measurement rigor for hiring process outcomes.  Without in-house, local validation, your measure of raw goods (candidate characteristics) is calibrated to the performance variables of other companies' finished goods (their on-the-job performance).  And I am not sure I have ever heard a staffing or recruiting executive state, "We are just like everybody else."  The assertion is usually the opposite: we are different and unique.

So why do so many companies rely on off-the-shelf 'validated' assessments?  I believe it is because of the barriers posed by the first and last bullets in Charles' article.  Evidence-based staffing process improvement is work, and it requires specialist skills, such as those of I/O psychology.  Not every company has in-house lawyers, yet they hire one when they need expertise to solve a business problem.  Companies hire I/O psychologists for the same reason: to solve a business problem.

Find a job where staffing process improvement will add value, and commit time, resources, and dollars to collecting post-hire data and conducting validation analysis as an ongoing business practice.  It's a great way to document business impact, drive competitive differentiation, and justify resources via ROI reporting.

Some related articles are here:

Alchemy and Algorithms – Recruiting by Ego or Evidence

Validation of a Pre-employment Assessment and Crowdsourcing
Moneyball and Selection Science – Pre-employment Testing