
Archive for the ‘HR Analytics’ Category

December 7, 2012

Shaker Consulting Group Hits Milestone of Ten Years

Shaker was founded on a mission to revolutionize how pre-employment assessment is created and delivered. It has attracted market-leading Fortune 500 clients in the retail, financial services, hospitality, medical services, insurance, manufacturing, digital entertainment, and technology sectors. From its inception in Shaker Heights, OH, the firm has expanded operations into Atlanta, Pittsburgh, and Washington, DC, and employs 12 Ph.D.-level Industrial/Organizational psychologists engaged in selection science and HR analytics.

Brian M. Stern & Joseph P. Murphy

The Virtual Job Tryout is designed to leverage the multimedia capabilities of the internet to deliver a highly engaging, company-branded candidate experience. It educates and evaluates at the same time. “In completing a Virtual Job Tryout, candidates take the job for a test drive and learn a lot about the performance demands. Recruiters obtain information about a candidate that is far more objective and useful than anything found on a resume,” says Brian Stern, Ph.D., president of the firm.

The first Virtual Job Tryout evaluated about 10,000 call center candidates in 2002. This year, the firm has clients with 5,000 candidates a day completing their Virtual Job Tryout. It is seen as an excellent tool to help companies improve their quality of hire.

October 18, 2012

The Journey Into Effective Use of Assessment – How Long Does it Take?

The journey into effective use of assessment must be marked by patience and a long view to the horizon.

What is on your horizon? Better Hiring?

When assessment is viewed as a measurement discipline for a business process called staffing, it brings to mind the time frame required to implement other strategic measurement systems.

It does not happen overnight. But the dawn of a new day, with the prospect of positive business transformation, can be beautiful.

The design-build-validation cycle requires about five to six months.
• Six weeks to collect job analysis data and define the assessment specifications
• Six weeks to build the validation version of the assessment
• Six weeks to have hundreds of incumbents and performance-rating managers complete the data collection
• Six weeks to conduct the analysis, document results, and prepare for implementation

To document tangible results (measurable ROI), several business cycles must occur:
1. Hire a statistically meaningful number of new associates.
2. Allow enough time to pass so that proficient on-the-job performance metrics can be observed, collected, and analyzed.
3. Examine, share, and celebrate the relationship between assessment scores and new-hire performance; a minimal sketch follows this list.
4. Reinforce, or coach and correct, recruiter and hiring manager behaviors with the support of evidence.
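
Step 3 is where the evidence lives. Here is a minimal sketch in Python of that criterion-related validity check, assuming assessment scores and later performance ratings have been matched for the same new hires; all values are hypothetical.

```python
# A minimal sketch, assuming pre-hire assessment scores and post-hire
# performance ratings have been collected for the same new hires.
# The data below are invented for illustration.
from scipy import stats

assessment_scores = [72, 85, 61, 90, 78, 66, 88, 54, 81, 70]            # pre-hire
performance_ratings = [3.1, 4.2, 2.8, 4.5, 3.9, 3.0, 4.1, 2.5, 4.0, 3.4]  # post-hire

# Criterion-related validity: how strongly do pre-hire scores
# track on-the-job performance?
r, p_value = stats.pearsonr(assessment_scores, performance_ratings)
print(f"validity coefficient r = {r:.2f}, p = {p_value:.3f}")
```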

Anecdotal insights and observations begin immediately, both pro and con. Change management must be sustained over time. The consistent caliber of new hires will be evident early through observation-based comparisons; the stream of new hires begins to deliver at a meaningful level as they achieve full competence and proficiency.

For the impact to contribute to business transformation: six months to multiple years.
A new employee selection system will not satisfy a high need for immediate gratification.

Having a partnership that values patience and a long view to what may lie beyond the horizon is essential.

We would be happy to discuss our experience making the journey of staffing process improvement with you.

October 11, 2012

Computers as Recruiters? They Lack Data, Analysis, and Judgment

Charles Handler wrote a very thought-provoking article about the future of computers and hiring decisions on ERE.

Charles – thanks for continuing to invite us forward.

The decision to hire will most likely always be an act of personal judgment. However, better data about the variables that affect the quality of the decision is what differentiates downstream outcomes. And in the case of hiring, that means on-the-job performance.

Data loops provide a means to manage outcomes

His first and last bullets are the ‘sit up and take notice’ elements to embrace.

  • Algorithms must be fed quality post-hire performance data to be useful.
  • Our concept of validation will need to be expanded.

All the writing on Big Data is capturing the imagination of business and working its way into the business process called staffing.  To extract value from Big Data requires rigor and discipline.  This is the work of HR Analytics.

The discipline of capturing and feeding back post-hire performance data requires a system and resources. Lack of an infrastructure to capture, analyze, and report objective performance metrics is a huge barrier in many organizations. There are many jobs for which companies simply do not measure performance. The commitment of resources (i.e., manager completion of behaviorally anchored ratings) is often blocked by an attitude of ‘Our managers don’t have time for that’ or ‘That seems like a lot of work.’

The science of servo-feedback has long been used to manage process consistency: a feedback loop modifies or controls the outcome. By default, organizations unwilling to set up a data loop eschew true learning from experience and evidence-based process improvement. They leave a lot of unrealized potential on the table.
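
To make the servo-feedback analogy concrete, here is a minimal sketch of such a data loop, assuming assessment scores and post-hire ratings are matched per hiring cohort. The cohorts, the 0.20 validity threshold, and the quarterly cadence are illustrative assumptions, not a prescription.

```python
# A minimal sketch of a staffing "data loop", by analogy with servo-feedback:
# each review period, re-estimate the assessment's validity on the latest
# cohort and flag drift for process correction. All data and the threshold
# are invented for illustration.
from scipy import stats

def review_cohort(scores, ratings, min_validity=0.20):
    """Re-calibrate on post-hire evidence and signal when to intervene."""
    r, _p = stats.pearsonr(scores, ratings)
    drifting = r < min_validity
    return r, drifting

cohorts = {
    "2012-Q1": ([70, 82, 65, 91, 77, 58], [3.2, 4.0, 3.0, 4.4, 3.8, 2.7]),
    "2012-Q2": ([68, 88, 74, 60, 83, 79], [3.3, 2.9, 3.1, 3.4, 3.0, 3.2]),
}
for period, (scores, ratings) in cohorts.items():
    r, drifting = review_cohort(scores, ratings)
    flag = "investigate: validity drift" if drifting else "on target"
    print(f"{period}: r = {r:.2f} -> {flag}")
```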

The concept of validation must indeed change. Too many assessment publishers assert “our test is valid.” This perpetuates a static-state mindset about validation analysis and invites the practitioner to believe there is universal value in validation. In fact, using a pre-employment test under claims of it being ‘valid’ is a ‘me too’ tactic that makes the user more like other companies instead of a driver of their competitive differences. It might be called striving for vanilla. The low value of ‘me too’ approaches to HR practices is discussed well in The Differentiated Workforce. (A book well worth the read.)

Validation may be viewed as an academic term for calibration, and assessment as measurement rigor for hiring process outcomes. Without in-house or local validation, your measure of raw goods (candidate characteristics) is calibrated to the performance variables of other companies: their finished goods (on-the-job performance). And I am not sure I have ever heard a staffing or recruiting executive state, “We are just like everybody else.” In fact, the opposite is true. The assertion is more like “We are different and unique.”

So why, then, do so many companies rely on off-the-shelf ‘validated’ assessments? I believe it is because of the barriers posed by the first and last bullets in Charles’ article. Evidence-based staffing process improvement is work, and it requires specialist skills, such as those of I/O psychology. Not every company has in-house lawyers, yet they hire one when they need expertise to solve a business problem. Companies hire I/O psychologists for the same reason: to solve a business problem.

Find a job where staffing process improvement will add value, and commit some time, resources, and dollars to collecting post-hire data and conducting validation analysis as an ongoing business practice. It’s a great way to document business impact, drive competitive differentiation, and justify resources via ROI reporting.

Some related articles are here:

Alchemy and Algorithms – Recruiting by Ego or Evidence
Validation of a Pre-employment Assessment and Crowdsourcing
Moneyball and Selection Science – Pre-employment Testing

August 21, 2012

Christopher Frost to Support Virtual Job Tryout with Expertise in Predictive Analytics

Shaker Consulting Group hired Christopher Frost as Virtual Job Tryout and HR Analytics Scientist. Frost, a native of Michigan with an advanced degree in Industrial/Organizational Psychology, brings a unique mix of assessment research and predictive analytics skills to the firm.

Frost Brings Predictive Analytics Skills

“His experience in machine learning and HR analytics offers great value in supporting design innovation in our virtual job tryouts. His expertise will be extremely valuable as we continue to develop more sophisticated pre-employment testing methods,” said Joseph P. Murphy, vice president of Shaker Consulting Group.

“Today’s candidate expects an engaging and dynamic experience when applying for a job. Machine learning allows assessment design to go beyond yes-no and multiple choice items. Shaker’s Virtual Job Tryout is an exceptional platform for innovation with pre-employment assessment,” said Frost.

When asked for something unique about himself, he offered this: “I am on a mission to see every Major League Baseball team and have only three left to go. In my free time I also like to read (working my way through the second Game of Thrones book right now), play golf, and play cards and poker. And yes, just in case you were wondering, I do have an uncle named Jack Frost.”

For more information, read the full release: Shaker Consulting Group Hires Christopher Frost to Support Virtual Job Tryout with Expertise in Predictive Analytics.

June 7, 2012

Golden Data for the Golden Era – Selection Science

Charles Handler wrote a great article on technology, analytics, and assessment coming together to form the threshold to a Golden Era.

Threshold is indeed the correct word. The factors holding us back from saying we have made it through the door are missing skill sets and limited data access.

HR Analytics in the Golden Era


Assessment is actually a form of measurement rigor for a business process called staffing. When set up with closed-loop analytics, the insights can drive decision making that improves the yield of the process. The greatest challenge to taking advantage of this form of analysis is that HR and recruiting practitioners are hard pressed to access data and organize it into meaningful clusters.

We have been providing closed-loop analytics for our clients for over a decade. The variation in data resources within organizations is significant and meaningful. In our experience, it ranges from instant access and exportability to being hard pressed to know where to begin.

The data needed often lives in various repositories; no one individual has line of sight or access rights. Key client roles on our project teams include a database advisor and a metrics advisor. Organizations with internal skill sets in these two disciplines are in a much better position to walk over the threshold and into the Golden Era of data mining and analytic insights.

If Talent Management is responsible for staffing the organization with productive workers, then it would seem important to know the cost to proficiency. This is a measure of the time and dollars invested in acquiring an individual, from sourcing through on-boarding to the point of self-sufficient performance. In manufacturing terms, that is the cost of finished goods. Understanding it is essential to document and calculate the return on investment (ROI) of staffing process improvement. This is the real value selection science brings to business.

Raw goods that become defects are like new hires who quit or are terminated before achieving proficiency. This is staffing waste, and it causes rework: dollars to proficiency and time to proficiency can double. Assessment is a form of raw goods analysis. The data from assessment, together with the various metrics and performance ratings collected on the journey to proficiency, are the Golden Data for the Golden Era.

The consolidation of data capture and retrieval service providers (e.g., Oracle/Taleo) may lead to easier access to data for analysis. The overarching structure of talent management needs to integrate data from sourcing through on-boarding to proficiency, and even deeper into the employment life cycle.
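
As a simple illustration of that integration, here is a minimal sketch assuming an ATS export, an assessment vendor export, and an HRIS performance extract share a common candidate ID. All table and column names are hypothetical.

```python
# A minimal sketch of pulling "Golden Data" together from separate
# repositories. The frames stand in for exports from an ATS, an
# assessment platform, and an HRIS; names and values are invented.
import pandas as pd

ats = pd.DataFrame({"candidate_id": [1, 2, 3],
                    "source": ["referral", "job board", "referral"]})
assessments = pd.DataFrame({"candidate_id": [1, 2, 3],
                            "tryout_score": [81, 67, 75]})
hris = pd.DataFrame({"candidate_id": [1, 2, 3],
                     "days_to_proficiency": [45, 90, 60]})

# One row per hire, from sourcing through assessment to proficiency,
# the integrated view that analysis needs and silos rarely provide.
golden = ats.merge(assessments, on="candidate_id").merge(hris, on="candidate_id")
print(golden)
```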

May 11, 2012

CSI: Recruiting – Candidate Science Investigation

The popular TV show CSI has created a fascination with the science behind crime scene investigation. It has raised interest in, and awareness of, the science of forensics, or as Webster defines it: the use of science and technology to investigate and establish facts. In law, decisions should be supported with evidence.

Forensics for Recruiting

The result of this TV series has been an explosion in enrollment in criminal justice and law enforcement programs at colleges and technical schools across the country. The outcome will be a flood of graduates imbued with knowledge and skills, hopeful of being hired by leading-edge, crime-fighting police departments that want more science in their prosecutions. In recruiting, hiring decisions should be based on evidence supported by sound data collection and analysis, too.

A similar science-based approach to establishing decision-support evidence is available to recruiters: Industrial/Organizational Psychology (I/O Psych). These professionals are the CSI: Recruiting specialists, the Candidate Science Investigators.

This professional discipline was established in the early 20th century to apply psychological principles and techniques to business and industrial problems, such as the selection of personnel. Before crime-scene forensics hit prime-time TV, forensic specialists worked quietly behind the scenes, improving the quality of the data collected and used to build a case. Industrial/Organizational psychologists are quietly at work around the world, building data collection methods that improve the quality of hire with selection science. These professionals design the data collection methods called assessment.

Every law enforcement department has someone trained in forensics because they know better evidence improves decisions in our justice process. Will it take a fast-paced TV series before companies sit up and take notice of the work being done by CSI: Recruiting?

Doctoral degrees in I/O Psych have been offered for over 80 years. These graduates have been snapped up by leading-edge companies that understand the competitive advantage of more science in their recruiting process. The evidence of I/O Psych’s contribution is compelling.

Here are a few simple examples from our work in selection science and HR Analytics that shed a bit of light on the potential of better candidate data.

A retail operation with thousands of stores had been using “years of experience in a similar industry” as a candidate screening criterion. Intuitively, everyone thought it made sense. However, in a recent study by a team of I/O psychologists, the evidence showed that the longer a candidate had been in the similar industry, the less likely they were to be an above-average contributor at this firm.

Similarly, for a capital equipment field sales representative position, the company had established a screening criterion of hiring people who had worked for the major competitors. After an investigation by a team of I/O psychologists, the evidence demonstrated that the longer a candidate had worked for the competition, the less likely they were to be an above-average performer.

Ironically, “related experience” is often a candidate screening criterion. In both cases, using that factor places positive weight on an evaluation criterion with negative value. The CSI: Recruiting teams had been chasing bad leads: intuitive assumptions proved wrong by evidence.

In both cases, the evidence collection process was dramatically improved across a broad range of factors, which contributed to a better quality of hire through CSI.

More examples can be found on our case study page.

Crime scene investigation is typically more like making one high-stakes hiring decision, where the consequence of the decision can be significant. The approach used to leverage I/O psychology varies with the scale of the staffing process. A once-every-few-years hiring decision requires a different solution than staffing processes that make hundreds or even thousands of hiring decisions every year.

If your company has a job or job family with over 100 employees in it, I/O psychology can begin to add measurable impact on performance with each hiring decision. If you have a job with thousands of employees engaged in fundamentally the same work, not engaging an I/O psychologist could be the basis for a charge of Recruiter Negligence. In high-volume hiring processes, the size of the data set, the frequency of decision making, and the potential for significant performance variation all but mandate Candidate Science Investigation. Click here to review some criteria for determining whether CSI is appropriate for your recruiting situation.

The decision to hire will always be an act of personal judgment. However, every executive knows a decision is only as good as the data behind it. There is a great opportunity for staffing process owners to do real CSI: Candidate Science Investigation. Without selection science, data collection, and analysis, recruiting can be activity without insight.

Watch our movie to learn more.

April 26, 2012

Simulations Plus Other Assessment Modalities: Is the Whole Greater Than the Sum of the Parts?

SIOP 2012, San Diego, CA

Three Shaker Consulting Group psychologists discuss the power of combining simulations with other assessment data for more predictive power.

Throughout the history of psychometric assessment, researchers have focused on discrete measurement modalities such as multiple choice biographical data, scale-based personality, and various types of mental ability measures. Researchers and practitioners have generally studied each modality individually, and attempted to show that individual constructs have a degree of reliability and correlate significantly with some outcome measure. As sample sizes and, correspondingly, statistical computing power have grown, researchers have also been able to examine complex effects such as interactions between constructs, though even these analyses are most often done within a particular assessment modality (e.g., examining the interaction between two personality scales).

Society for Industrial and Organizational Psychology

Now, as computing and networking technologies continue to evolve at an exponential rate, nearly any type of assessment modality imaginable is possible to study and implement (Greene, 2011; Trull, 2007). As a result, novel assessment techniques are being developed faster than ever before. Today, simulations are at the forefront of modern assessment design and, as work samples, are known to be quite predictive of job performance (Schmidt & Hunter, 1998). Yet, as a category they are quite heterogeneous.

This presentation will contribute to the growing literature in this area by examining two aspects of simulation design:

1. Whether measuring respondent behaviors during the simulation rather than just responses to embedded questions might confer a psychometric advantage.
2. How simulation measures may interact with other types of assessment measures to better predict various outcomes.

Ultimately, as computerized simulation of the real world increases in complexity and fidelity, our science can move from creating measures of theoretical constructs to measuring actual behavior. In this manner, in time we may begin to transcend the need to measure proxy variables and theoretical constructs, and instead, use more direct measures of behavioral outcomes.

Simulations offer us a venue to ask many new questions that were not feasible and/or informative before. What can we learn, for example, by observing how a person interacts with an environment, be it virtual or real? For example:

• What do errant mouse clicks tell us?
• What do we learn when a respondent changes her answer?
• Can we learn something about a candidate’s personality by whether she skips over questions she doesn’t know, or spends extra time trying to get each question correct?
• Does repeating an example item during simulation instructions tell us anything about an individual’s personality?

In short, by simulating environments, psychologists can expand the measurement space from the traditional and simple question and answer, to a broad range of non-question-based behavioral measures. As simulations grow in complexity and fidelity, we may be able to move away from actual questions, and more towards measuring how an individual actually interacts with a particular environment; in other words, measure that individual’s choices and behaviors (Hornke & Kersting, 2006). At that point, our measurement focus will be on anything and everything that can be measured, from how a person moves through a space to what they actually do after making a decision.

While one simulation-based measurement advance is thus a gradual move away from construct measurement toward direct behavior measurement, another advance occurs as practitioners combine simulations with other assessment types. This allows researchers to measure interaction and other complex effects across measurement modalities. For example, we may find that a person with a high score on a multitasking simulation is better at a certain job, but only if that person also has a high score on attention to detail. Or, perhaps we will find that multitasking scores are only predictive for people who have a background doing work that involves multitasking. This type of cross-modality research is in its infancy, but it should increase as researchers integrate more measures into online assessment experiences.
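
For readers who want the statistical shape of that idea, here is a minimal sketch of a moderated (interaction) regression on simulated data. The variables and effect sizes are invented purely for illustration.

```python
# A minimal sketch of testing a cross-modality interaction: does a
# multitasking simulation score predict performance more strongly when
# attention to detail is also high? Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
multitask = rng.normal(size=n)   # simulation-based measure
detail = rng.normal(size=n)      # e.g., a personality-based measure
# Performance is driven partly by the product term (the interaction).
performance = (0.2 * multitask + 0.1 * detail
               + 0.4 * multitask * detail
               + rng.normal(scale=1.0, size=n))

# Moderated regression: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), multitask, detail, multitask * detail])
betas, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(f"estimated interaction coefficient = {betas[3]:.2f}")  # near 0.4
```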

Psychological science has a hundred-year history of theory and research, but we feel that it has barely begun to tap its potential to explain and predict human behavior. Traditionally, researchers in our field focus on finding that elusive, “holy grail” measurement construct that can predict uncharted amounts of variance in job performance. We believe that this pursuit is ill-fated. Instead, researchers should explore ways to measure human characteristics with increasing fidelity and, in particular, move beyond paper-and-pencil measures of abstract constructs to direct behavioral measurement, especially in virtual environments. Furthermore, and especially with the vast amounts of data increasingly being captured by modern organizations, our science needs to focus less on the direct effects of measured variables and more on complex and cross-modality effects. The latter topic represents a tremendous area of untapped opportunity for exploration.

References
Greene, R. L. (2011). Some considerations for enhancing psychological assessment. Journal of Personality Assessment, 93(3), 198-203. doi:10.1080/00223891.2011.558879
Hornke, L. F., & Kersting, M. (2006). Optimizing Quality in the Use of Web-Based and Computer-Based Testing for Personnel Selection. In D. Bartram, & R. K. Hambleton (Eds.), Computer-based testing and the Internet: Issues and advances (pp. 149-162). New York, NY: John Wiley & Sons Ltd.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274. doi:10.1037/0033-2909.124.2.262
Trull, T. J. (2007). Expanding the aperture of psychological assessment: Introduction to the special section on innovative clinical assessment technologies and methods. Psychological Assessment, 19(1), 1-3. doi:10.1037/1040-3590.19.1.1

April 2, 2012

Lessons from Lake Wobegon and Quality of Hire

“…and the children are all above average.” Wouldn’t it be great if you could say that about your candidates? Garrison Keillor’s famous closing line to his Lake Wobegon monologues offers us an opportunity to ponder how that idea can be applied to our mental model for HR Analytics and our practices for staffing process improvement.

What is Average?

Let’s begin with the concept of average. Webster offers this definition: the result obtained by adding several quantities together and then dividing this total by the number of quantities. In the context of recruiting, I prefer to describe it as the result of mistakes that occur while trying to hire only top talent. Your worst hires pull your average down.

William Scherkenbach, noted author and former Director of Statistical Methods for Ford Motor Company, reminds us in his book The Deming Route to Quality that “if you believe in the law of averages you will always have above average and below average, and there is not a darn thing you can do about it.” In the simplest of terms, that means half of your candidates are below average. All the time, every time. Think about that.

So what is an above average candidate?

The law of averages surfaces as variation in your process. You hired your best, and you hired your worst, using the same candidate evaluation methods. Average is a place on the continuum between the two hires that bookend it.

When we apply this to a candidate population, two questions emerge:

1. What is the average?

2. How does the average in the candidate population compare to the average in the current employee population?

At Lake Wobegon, the children are all above average. But above average on what? And Scherkenbach would balance that with an assertion that the children in the sister city, Lake Whoa-is-me, are all below average.

Quality of hire carries with it the notion that there are one or more metrics against which a candidate can be evaluated. Enter productivity metrics, competency models, KSAOs, success criteria, performance frameworks, etc. These methodologies for describing and evaluating the on-the-job behaviors of current employees become the metrics for evaluating candidates as well.

Scherkenbach goes on to assert that the real opportunity is to know what your average is and continuously raise it. This is where the notion of above average really comes into play. It is possible for the best candidate to be below average when compared to the average of current employees. And hiring the best candidate from a below-average pool lowers the average of the current employees. Dwell on that for a moment.

Defining Average

It is possible to conduct an analysis that determines the average, or actually a variety of averages, within the existing employee population. Data about their performance exists or can be created. Objective measures of performance can be used to collect data on a group and calculate an average level of productivity. Supervisor ratings of observed performance can be collected using behaviorally anchored rating scales, and that data can be used to calculate the average for each competency as well. These two data sets comprise the metrics for quality of hire.
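
As a simple illustration, here is a minimal sketch of “defining average” from such an extract, with hypothetical productivity numbers and behaviorally anchored competency ratings.

```python
# A minimal sketch of computing incumbent averages, assuming an extract
# of objective productivity figures and supervisor ratings per competency.
# All names and values are hypothetical.
import statistics

productivity = [102, 95, 110, 88, 97, 105]  # e.g., units per shift
ratings = {
    "customer_focus": [3.5, 4.0, 3.0, 4.5, 3.8, 3.2],
    "accuracy":       [4.1, 3.6, 3.9, 2.8, 4.3, 3.5],
}

# These averages become the quality-of-hire benchmarks.
print(f"average productivity: {statistics.mean(productivity):.1f}")
for competency, scores in ratings.items():
    print(f"average {competency}: {statistics.mean(scores):.2f}")
```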

Quality of hire metrics become the standard for differentiating among candidates. How, then, does one gain insight into a candidate’s ability to achieve certain objective measures of performance? And how does one ascertain a candidate’s capacity to deliver behaviors similar to those defined in a competency model?

Over time, new hire performance can be evaluated using the same criteria: objective results and competency ratings. When that data is collected, a quality of hire evaluation can be documented and reported. Muse over the value of that.

Proxy Measures and Evaluating Average

Most forms of candidate evaluation are proxy measures. A proxy measure is a substitute or surrogate measure. The most obvious and common proxy measure is level of education. A diploma or degree is often set as a threshold measure of some functional level of literacy. Assumptions are made about the basic reading, writing, and reasoning levels associated with each level of academic attainment. I wonder if Garrison Keillor is referring to above-average high school achievement in Lake Wobegon?

No doubt you have seen two individuals with similar academic credentials but very different levels of literacy. This is an example of a proxy measure failing to do its job. At issue here is the variation that exists within the proxy measure, or how abstract the proxy measure is in relation to the quality of hire metric.

Behavioral interviewing is a form of proxy measure that assumes storytelling about past behaviors relates to the behaviors we might expect in the future. And there is a large degree of truth to that.

There are two inherent challenges in achieving an effective evaluation with interviewing. One is the interviewer’s skill at probing and documenting responses in a useful manner. The second is the candidate’s ability to articulate how they accomplish results.

Beginning with thoughtfully constructed questions that elicit job-relevant examples of past efforts is considered a best practice. Recruiters who stay on script with competency-based behavioral interviews do get a fairly thorough evaluation of candidate-job fit. Candidate responses can be rated against evaluation criteria, and the ratings can be used to determine whether the candidate is below, at, or above average. Unfortunately, only about 35% of recruiters state that they apply this level of rigor to their interviewing practices. (Ask for a copy of the survey white paper.)

There are more reliable and direct ways to evaluate and identify above average.

Direct Evaluation of Average

The closer the evaluation exercise is to actual job demands, the more accurate it can be in assessing capabilities relative to an average. That is precisely the role of simulations in pre-employment assessment. Job-specific simulations present the candidate with work scenarios and job-relevant work exercises to capture data on how an individual actually handles a range of job demands.

Having a large group of current employees complete a simulation captures the data to document the in-house average. Having all applicants for a job complete the same simulation provides the data to document the candidate pool average, as well as the variation from low to high.

With these two data sets, it is easy to differentiate among candidates. A candidate’s simulation result produces a score that can be compared to other candidates and to existing employees, and it can help identify candidates with above-average capabilities.
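
Here is a minimal sketch of that comparison, assuming incumbents and candidates completed the same simulation; the scores are illustrative.

```python
# A minimal sketch of comparing candidate simulation scores to the
# in-house incumbent average and distribution. All scores are invented.
from statistics import mean

incumbent_scores = [62, 70, 75, 68, 80, 74, 66, 72]
candidate_scores = {"A": 78, "B": 64, "C": 71}

benchmark = mean(incumbent_scores)
for candidate, score in sorted(candidate_scores.items(),
                               key=lambda kv: kv[1], reverse=True):
    below = sum(s < score for s in incumbent_scores)   # crude percentile
    pct = 100 * below / len(incumbent_scores)
    verdict = "above" if score > benchmark else "below"
    print(f"{candidate}: {score} ({verdict} the incumbent average, "
          f"beats {pct:.0f}% of incumbents)")
```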

Handsome, Good Looking and Above Average

Garrison Keillor sends the listener on their way with a closing comment about characteristics of the entire population of Lake Wobegon. I’d like to send you off with a few closing comments about characteristics of above-average recruiting practices.

You can invest your interview time and effort in above-average candidates. But you must first invest in the rigor of defining the metrics, creating an objective evaluation method, and calibrating it against your existing population. Using off-the-shelf evaluation resources may be a good step in the right direction. However, by default that implements a ‘me too’ approach to candidate evaluation when you may in fact be working hard to create a workforce that is distinctive, and it may not contribute to a competitive advantage.

On page 10 of The Differentiated Workforce, authors Beatty, Becker, and Huselid provide a visual model for considering the need for, and strategic impact of, job-specific, company-specific workforce practices. One might draw a sound conclusion that certain jobs in your organization demand an extremely rigorous candidate evaluation experience.

Calibration is the business term for validation analysis. Validation analysis documents the relationships among competency ratings, objective performance metrics, and simulation results. Validation is the measurement rigor that links candidate evaluation to your business drivers.

Recruiting departments that use HR Analytics and conduct in-house validation analysis go beyond above average; they ensure their efforts create a workforce that delivers superior results. That by itself makes them pretty good looking to the CEO. If you want to do that, we can help. Let’s think about that together.

March 6, 2012

Moneyball and Selection Science – Pre-employment Testing

Got Data?

Lance Haun of TLNT wrote a great article about Moneyball.

What Moneyball brings to light is the same discipline I/O psychology brings to staffing process improvement. In Moneyball, the various qualifications of the candidate pool are supported by rigorous data collection and HR analytics.

ATS profile questions often gather proxy data: interesting, but not valuable. CareerBuilder recently posted that the average “look” at a resume lasts less than two minutes. How much hard data was collected? How much data was entered into a database for analysis? What scoring algorithm was used to drive the yes, maybe, no sorting process that took place?

Proxy data are easy to capture but are often substitutes for evaluating the underlying trait or characteristic. In Moneyball, a scout asserted that one player lacked confidence; the proxy measure was the arbitrary rating the scout attributed to the player’s girlfriend. Well-intended recruiters deploy proxy evaluations with little or no substance behind them. I’d be happy to describe a few we have helped organizations debunk – just ask.

It is common for a robust validation analysis to capture and analyze 250,000 data points to examine the relationship between candidate evaluation data and on-the-job performance. Companies that engage in this level of analysis create a workforce that delivers superior results. Better candidate data supports better decision making.

One of the best lines in the movie came in the job offer scene in Boston. It went something like this: “You got just as many wins as New York on 25% of the salary. Anybody who does not take note of that will be watching the Series from their sofa.”

Footnote: Boston went on to win the Series two years later.

One of the hardest lessons to learn from Moneyball is that it is not about hiring the superstar. Quality of hire is all about consistently and objectively raising the average.

Stop in and talk about this at our booth at ERE in San Diego. I will have a gift for you if you mention this post.

March 1, 2012

Shaker Consulting Group Hires Dr. Christie Cox to Support Virtual Job Tryout Design and HR Analytics

To meet client growth and expanding global market demands, Shaker Consulting Group is proud to announce the hiring of Dr. Christie Cox as Virtual Job Tryout HR Analytics Scientist.

“Her experience in global assessment implementation and HR analytics offers great value in supporting our multi-national clients. Her expertise in pre-employment assessment design and HR analytics will prove invaluable as we expand our client base in Asia and Europe,” said Joseph P. Murphy, vice president of Shaker Consulting Group.

Cox, a native of Virginia with a Ph.D. in Industrial/Organizational Psychology, brings a unique mix of HR Analytics and global implementation capabilities to the firm. She is a true selection scientist.

For more information, read the full release: Shaker Consulting Group Hires Dr. Christie Cox to Support Virtual Job Tryout with Global Clients.
