As an Analyst, I’m regularly called upon to measure importance in quantitative data. The question of preference is key, whether we’re evaluating product features, satisfaction drivers or positioning statements.
The challenge here lies as much in how we ask the questions as in how we evaluate the answers. The ideal approach should:
- be easy to use
- have strong discrimination
- offer robust scaling properties
- limit scale use bias
Traditional techniques, including rating (Likert scale), ranking and constant sum questions, can’t always tick those boxes.
While Likert scales are useful, participants tend to be agreeable when rating attributes. In other words, they simply tell us everything is important. There's not much discrimination when all attributes have 4.0–4.5 means on a 5-point scale! Ranking and constant sum questions carry inherent bias as well; both are tedious tasks, so participants often take shortcuts.
That’s where Maximum Difference Scaling (MaxDiff) comes in. The results from this approach typically demonstrate greater discrimination, making it a more refined measurement tool.
With MaxDiff, participants are shown small sets of items and asked to select the most and least preferred item in each set. The technique works best when testing between 15 and 40 attributes. The question sets are designed so that each item appears an equal number of times and each pair of items appears together an equal number of times.
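One way to picture that balance is with a small balanced incomplete block design. The sketch below is a hypothetical 7-item example using the classic (7, 3, 1) design; real MaxDiff software generates comparable designs for larger item lists. It simply verifies the two balance properties described above:

```python
from itertools import combinations
from collections import Counter

# A (7, 3, 1) balanced incomplete block design: 7 items shown in
# sets of 3, with each item appearing 3 times and each pair of
# items appearing together exactly once.
sets_shown = [
    (1, 2, 3), (1, 4, 5), (1, 6, 7),
    (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6),
]

item_counts = Counter(item for s in sets_shown for item in s)
pair_counts = Counter(pair for s in sets_shown
                      for pair in combinations(sorted(s), 2))

print(set(item_counts.values()))  # → {3}: every item shown equally often
print(set(pair_counts.values()))  # → {1}: every pair shown together equally often
```

In practice the design is generated for the study's actual attribute list, but the balance checks are the same.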
The question type is straightforward, so participants of all ages and backgrounds can provide reliable data. And unlike subjective scale ratings, MaxDiff choices are cut and dried. By forcing participants to make discrete choices, there is no opportunity for scale use bias, and the resulting attribute scores are easy to interpret.
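To see why the scores are easy to interpret, here is a minimal sketch of the simplest scoring approach: a best-minus-worst count per item. The response data and item names are invented for illustration; commercial MaxDiff tools typically fit a choice (logit) model instead, but counts convey the idea:

```python
from collections import Counter

# Hypothetical responses: each tuple is (best pick, worst pick)
# from one MaxDiff question screen.
responses = [
    ("price", "warranty"), ("price", "color"),
    ("battery", "warranty"), ("price", "warranty"),
    ("battery", "color"), ("design", "warranty"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)

# Best-minus-worst count score: higher means more preferred.
items = set(best) | set(worst)
scores = {item: best[item] - worst[item] for item in items}

for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, score)
```

The ordering falls straight out of the counts, with no debate over what a "4 out of 5" means to different respondents.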
MaxDiff is applicable to a wide range of research scenarios, making it a valuable addition to our research toolbox. Consider this technique for:
- menu development
- positioning statements
- promotional offers
- product features