Rating wine divides opinion. Some industry types feel squeamish about reducing a bottle to a score out of 100. But, here, our wine commentator, Chris Losh, argues that rating systems are popular with consumers, and should therefore be embraced.

About 18 months ago I got annoyed by the restaurant critic AA Gill. Deeming it inappropriate to reduce a dining out experience to a five-star rating, he stopped scoring his reviews.

Whatever his reasoning, as a non-expert, I was miffed. I liked that he had nailed his colours to the mast, and without a score it felt like his column was only half-complete.

I have similar feelings about scoring in wine. I like critics to rate their recommendations, and I expect magazine panel tastings to have a score. Though, as with AA Gill, not everyone feels this way.

When I asked Twitter what it thought of wine scoring a while back, I was surprised by the heat of some of the responses. There is, it seems, a phalanx of the wine world that finds the idea of rating bottles objectionable in principle.

However, Guy Woodward, editor of Decanter, is not one of them. "Much as wines are precious and we love them, they aren’t the works of Shakespeare," he says pragmatically. "I don’t have a problem with scoring them." 

The five-star system might be deemed lacking in subtlety by most critics, but the 20-point system in Europe and the 100-point scale popularised by The Wine Spectator and Robert Parker in the US and Asia are both going strong.

Indeed, with Decanter’s recent decision to publish average scores from its panel tastings to the 100-point system, alternative rating scales in Europe might soon be as rare as the red squirrel.

These differences, however, are not so great as they might seem. A wine would practically need to kill someone to score less than 80/100 in The Wine Spectator, and it’s rare for anyone scoring out of 20 to give a wine less than 8.

All of which means that scores are skewed upwards. 85/100 might be a terrific exam mark, but it’s little more than average in wine scoring terms. Critics have complained that the numbers are too high to be meaningful, but in fact such is the influence of the 100-point scale now that the vast majority of the trade understand its nuances clearly.

It’s one of the reasons why Decanter’s new tasting panels’ scores out of 20 are not simply multiplied by five to get their 100-point equivalent. 18.25/20, for instance, is not 91/100, but 95. The figure-juggling required to make scores out of 20 match prestige scores out of 100 makes the Libor equations look simple.
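To see why a straight multiplication by five falls short, here is a minimal illustrative sketch in Python. The anchor points are assumptions chosen for demonstration (apart from 18.25 mapping to 95, the example quoted above); they are not Decanter’s published conversion table.

```python
# Illustrative sketch only: why a 20-point score can't simply be multiplied
# by 5 to get its 100-point equivalent. Anchor points are assumed for
# demonstration (except 18.25 -> 95, the example quoted in the article).

def naive_conversion(score_20):
    """Simple linear scaling: 20-point score multiplied by 5."""
    return score_20 * 5

def anchored_conversion(score_20):
    """Piecewise-linear interpolation between assumed anchor points."""
    anchors = [(8.0, 80.0), (18.25, 95.0), (20.0, 100.0)]  # (/20, /100)
    if score_20 <= anchors[0][0]:
        return anchors[0][1]
    for (lo20, lo100), (hi20, hi100) in zip(anchors, anchors[1:]):
        if score_20 <= hi20:
            fraction = (score_20 - lo20) / (hi20 - lo20)
            return lo100 + fraction * (hi100 - lo100)
    return anchors[-1][1]

print(naive_conversion(18.25))     # 91.25 -- "not 91/100"
print(anchored_conversion(18.25))  # 95.0  -- "but 95"
```

The exact anchors any panel uses are a matter of judgment; the point is simply that the mapping between the two scales is non-linear, which is where the figure-juggling comes in.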

There’s the further complication that, as well as different scores meaning different things in different systems, critics have different palates, and different levels of severity. How is the public to make sense of the fact that one critic’s, or merchant’s, four-and-a-half stars is another’s 16/20 and a third’s 93 points? 

It’s no wonder that the public are confused.

"The wine trade needs to do more to communicate the meritocracy of wine and make it simpler – and I’m not sure that scoring does that," says Dan Jago from supermarket chain, Tesco.

For scoring to work, it seems, there needs to be not just impartiality and consistency, but also, crucially, engagement on the part of the consumer, with those who follow a regular critic or writer able to contextualise his or her scores. 

Parker’s en primeur scores are the best example since they put potential investments in the context of competitor releases from the same vintage for a highly engaged audience. 

Parker has been doing that long enough for people to appreciate what he likes – and doesn’t – to the extent that some wineries actively parade their low Parker scores as a badge of honour and proof of their wine’s style.

Much has been made of the power of Parker scores, but there’s a limit to his influence, as the Fladgate Partnership’s MD Adrian Bridge points out. "Taylor and Fonseca have more 100 point wines (WS or RP) than any other company in the world.  This is good for us but still has not yet led to a massive price speculation in our vintage ports," he says.

Merchants, of course, love Parker scores, ignoring those that don’t suit and promoting the hell out of high pointers, which can give rise to the kind of behaviour that the wine trade finds so objectionable.

I’ve heard of top clarets on sale in Singapore retailers whose shelf-barker had nothing by way of recommendation beyond a Parker mark. And such lapses in taste, where a wine’s flavour or back-story has been ignored in favour of a score, are not as uncommon as they should be.

Still, using the occasional misuse of Parker scores as a way of denigrating all wine scoring is unfair. Pointless, too, since, like it or not, the public loves numbers, ratings and winners. 

"Tasting notes are more important than the score," says Woodward, "but I’m sure we’re all guilty of looking at the score first."

Moreover, in the interactive world of digital media, the public are increasingly used to leaving scores and feedback on everything from books to nights out. From Amazon to TripAdvisor, peer-group experience is key, and most specialist online wine retailers include some form of customer rating system.

There might be some squeamishness in the wine trade when it comes to scoring bottles, but the general public, it seems, has no such qualms. And while most of the public might struggle to name, let alone follow, two wine critics, they’re happy to respond to peer-group scores.

And it’s because of this that, while wine might lack a universally popular scoring system, or a global benchmark that can measure everything from Lafite to Liebfraumilch, I can only see scoring growing in popularity. Whatever AA Gill might think.