Weirdly is all about making the slow, complicated part of high-volume recruitment super-speedy based on real, reliable data. Customised assessments and instant longlists of pre-qualified candidates are a big part of how we deliver that. The quality of those longlists is crucial, so it’s unsurprising we’ve spent a lot (read: most) of our time and energy getting them right. So, how do we do it? In short, we use recognised Organisational Psychology theory and industry best practice. And in long? We’re glad you asked. Here’s the full rundown on the science behind our clever tech.

Working out what to ask

So, sure, customised quizzes sound great, but first, we need to work out what should be in them. That starts with breaking down your stated values and, crucially, what you really mean when you say “resilience”. Once we’ve got a good handle on that (based on conversations with you or analysis of any relevant internal documents you might share with us), we map your values against our list of traits. So resilience for you might be a mix of traits 1, 2 and 3, while another company’s version of resilience is better defined by traits 4, 5 and 6.
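To make that mapping concrete, here’s a minimal sketch in Python – the company and trait names are purely hypothetical, just to show the shape of the idea:

```python
# Purely hypothetical trait names: two companies decompose the same
# stated value ("resilience") into different underlying traits.
TRAIT_MAP = {
    "company_a": {"resilience": ["persistence", "adaptability", "stress_tolerance"]},
    "company_b": {"resilience": ["optimism", "self_efficacy", "recovery_speed"]},
}

def traits_for(company: str, value: str) -> list[str]:
    """Look up which measurable traits define a stated value for a given company."""
    return TRAIT_MAP[company][value]

print(traits_for("company_a", "resilience"))
# ['persistence', 'adaptability', 'stress_tolerance']
```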

We take those results back to you to confirm, then run a series of workshops with stakeholders to check that the way your wider team - those who will be using the assessments day to day - interprets those values and behaviours matches yours. This is where we can also uncover really interesting variations between geographic markets, which lets us tailor your assessments more accurately.


Building good longlists (fast)

To build a longlist, we measure how well a candidate aligns with the values and soft skills that would make an ideal employee in a particular role. That means when you begin looking at CVs, you’re starting with a group of people you’re pretty sure will be a good fit. Each assessment is customised by our organisational psychology department to your exact requirements, with questions sourced mostly from our question bank. We’ve developed hundreds of questions designed to assess values, soft skills and traits, and they’ve been tested on and used by over 500,000 real job-seekers around the world. That assessment is then tested with your internal teams – across different markets if necessary – to establish benchmarks.
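As a rough illustration of benchmark-based matching (not our actual scoring model – the trait names and numbers here are invented), alignment can be thought of as closeness between a candidate’s trait scores and the benchmark profile:

```python
import math

# Hypothetical benchmark built from your internal teams' results (1-5 scale).
benchmark = {"persistence": 4.2, "adaptability": 3.8, "stress_tolerance": 4.0}

def alignment_score(candidate: dict[str, float]) -> float:
    """Score 0-100: how closely a candidate's trait profile matches the benchmark.
    Uses plain Euclidean distance; real scoring is considerably more nuanced."""
    dist = math.sqrt(sum((candidate[t] - benchmark[t]) ** 2 for t in benchmark))
    max_dist = math.sqrt(len(benchmark) * 4 ** 2)  # worst possible gap on a 1-5 scale
    return round(100 * (1 - dist / max_dist), 1)

print(alignment_score({"persistence": 4.0, "adaptability": 3.5, "stress_tolerance": 4.4}))
# 92.2 - close to the benchmark, so a strong longlist candidate
```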


Validating the assessments 

The resulting assessment is then validated by real people – your candidates and existing employees. We gather data from post-application and/or post-hire phases and reconcile it against the results of successful candidates. This shows us which factors make a successful hire and which could indicate, for example, high performance or longer retention. It delivers a dynamic, ever-improving norm group and gives us confidence that we’re measuring the right things in the right way. Our in-house team uses tried and tested analyses like Cronbach’s alpha – a standard measure of internal consistency – and our assessments typically score between 0.7 and 0.9 (spoiler: those are great scores).
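For the curious, here’s what that check looks like in practice – a minimal Python sketch of the standard Cronbach’s alpha formula, run on made-up toy data rather than real candidate scores:

```python
import numpy as np

def cronbachs_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbachs_alpha(scores):.2f}")  # 0.7+ is generally considered good
```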


Testing for inclusivity and bias 

Be warned, what follows will be a bit TM:DR (too maths, didn’t read) for some. 

If that sounds like you, here’s the summary: We analyse results as they come in to see how scores are impacted by factors like gender, ethnicity, membership of a protected class or any other grouping you care to name. Those numbers are extra handy when we can combine them with more info from your HR stack.

If you’re into the mathsy detail, here’s the breakdown: We measure effect size for group differences with Cohen’s d and compare group means with either a t-test or ANOVA, depending on the number of groups. We use the 4/5ths rule to confirm we have no gender bias, and we weed out bias against protected classes or specific demographic groups by comparing effect sizes.
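And here’s a stripped-down Python sketch of those checks (toy numbers, not client data):

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Effect size for the difference between two groups' mean scores."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def four_fifths_ratio(selected_a: int, total_a: int, selected_b: int, total_b: int) -> float:
    """Adverse-impact ratio: lower group's selection rate over the higher group's.
    A ratio below 0.8 flags potential bias under the 4/5ths rule."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = np.array([3.9, 4.1, 4.3, 3.8, 4.0, 4.2])
group_b = np.array([4.0, 3.9, 4.2, 4.1, 3.7, 4.3])
t, p = stats.ttest_ind(group_a, group_b)  # two groups; use stats.f_oneway (ANOVA) for more
print(f"d = {cohens_d(group_a, group_b):.2f}, p = {p:.2f}")
print(f"4/5ths ratio = {four_fifths_ratio(45, 100, 52, 100):.2f}")  # 0.87: passes the rule
```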

The assessments themselves are also inclusive – we look at things like readability and linguistic gender coding, follow the EEOC's guidelines (the most stringent in the world), and seek (and apply) advice from world-leading advocacy groups on the wording of specific diversity-related questions. This is a constant improvement project for us - making sure our assessments don't discriminate against protected classes and are as inclusive as possible. Assessment designs are also tested for comprehension against the four major variants of colour-blindness and built to work well with major assistive apps for visual impairment.
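As a flavour of the readability and gender-coding side, here’s an illustrative sketch – the word list is a toy stand-in for the research-based gendered-wording lists such tools draw on:

```python
import textstat  # third-party: pip install textstat

# Toy stand-in for a research-based masculine-coded word list.
MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "driven"}

def screen_question(question: str) -> dict:
    """Quick screen of an assessment question: readability plus gender-coded words."""
    words = {w.strip(".,!?").lower() for w in question.split()}
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(question),  # higher = easier
        "masculine_coded": sorted(words & MASCULINE_CODED),
    }

print(screen_question("Describe a time you stayed driven under competitive pressure."))
```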

All of that seems to work pretty well. For example, using Weirdly, Bunnings achieved a 0% gender bias in their frontline recruitment.


Reporting (that’s actually useful) 

Built into everything we do is extremely pretty, extremely useful reporting. People at every level of your team get the reporting that suits their role: Candidate Profiles, On-Demand Assessment Reporting and Deep Analytics. It gives you all the real, sciencey data you need to make better, data-led recruiting decisions.


Science made simple

While it’s easier to judge people by years of experience, academic record or previous work history, those things can be pretty bad indicators of success in an employee if they’re taken alone. Things that do make a difference – like values and soft skills – are too vague and blobby to easily boil down into measurable data.

Which is where we come in (hello!👋).

Translating values into behaviours, behaviours into traits and traits into assessments (all while testing for bias), Weirdly’s science team are measuring the unmeasurable and helping you create the teams of your dreams.