Hi ISU,
I am really happy that you are concerned about the fairness of the competitions. If we want the sport to grow in popularity and appreciation, fair competition is essential, so it is right to work in this direction. Better technology could be a great help, allowing the judges to correctly evaluate what is being done on the ice. If you are not sure which technology is best, you can always draw on the knowledge of an expert such as Dr. George S. Rossano.
I have seen that your concern for the fairness of the competitions is so broad that it also extends to the identity of the judges and the competition protocol.

I don’t know if you’ve ever watched a tennis match. I’ve seen a few, and I noticed a curious detail. The chair umpire loudly announces the score after every single point, even though the score is always visible to everyone on the boards displayed next to the court. Really, watch a match if you don’t believe me. The chair umpire also calls things like “double fault”, announces when a player takes his second serve, and announces when a first serve that clipped the net is replayed. He says it all, so there may be doubts as to whether a ball landed in or out, but the decisions are clear. And for the point of impact of the ball, technology is used in certain cases, so that the judgments are as fair as possible. And we know the chair umpire’s name. You know what? I haven’t always liked his decisions, but I’ve never doubted his honesty.
So I really don’t understand how anyone could think that not publishing the protocol, publishing it late, or withholding the names of the judges could ensure the integrity of the event. Here are some examples from the time when we did not know which judge had assigned each mark, and I have some doubts. This is the Pairs’ short program at the 2016 World Championships.
I don’t post the official protocol but the SkatingScore version, because it gives me the sums. I only looked at the best pairs, and I noticed a big difference between the scores given by the different judges. The biggest difference, 12.60 points, is in Savchenko/Massot’s score. Did all the judges see the same program? There are a couple of particularly low marks in the components. To understand how strict Judge 2 was, I checked whether the other top pairs had received such low marks.
We must keep in mind that the order in which the marks are published is random, so the judge listed as Judge 2 on one protocol may be Judge 1, or 3, or another on a different protocol. I did a little cut-and-paste job, so that we don’t need a huge screen and can easily compare the sums of the component marks.
On the components alone, Judge 2 was 5.04 points below the average. Did the same happen to everyone? Because if a judge was particularly strict with everyone, we can assume that he simply assigns low marks but, applying the same severe criteria to all, still produces a correct final result. For Sui/Han the biggest difference is 3.37 points, for Duhamel/Radford 1.85 points, for Volosozhar/Trankov 2.16 points, for Stolbova/Klimov 3.00 points, for Tarasova/Morozov 4.51 points. Compared with the other pairs, the difference is really high.
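For anyone who wants to repeat this kind of check, here is a minimal Python sketch of what I do: sum each judge’s component marks and compare the sum with the panel average. The marks below are made up for illustration; they are not the actual 2016 protocol.

```python
# Sketch of the check described above: for each judge, compare the sum of
# the component marks they gave with the average across the panel.
# All numbers are illustrative, not taken from any real protocol.

def judge_deviations(marks_by_judge):
    """marks_by_judge: one list of component marks per judge.
    Returns (total, deviation from panel average) for each judge."""
    totals = [sum(marks) for marks in marks_by_judge]
    panel_avg = sum(totals) / len(totals)
    return [(total, round(total - panel_avg, 2)) for total in totals]

# Three hypothetical judges scoring five components each:
panel = [
    [9.25, 9.00, 9.25, 9.50, 9.25],  # Judge 1
    [8.00, 7.75, 8.00, 8.25, 8.00],  # Judge 2 (notably low)
    [9.25, 9.25, 9.00, 9.50, 9.25],  # Judge 3
]
for i, (total, dev) in enumerate(judge_deviations(panel), start=1):
    print(f"Judge {i}: total {total:.2f}, deviation {dev:+.2f}")
```

With real protocols you would paste in nine judges and the actual marks, but the arithmetic is exactly this simple.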
I did another check: I looked at which marks each pair received. How many 10.00s, how many 9.75s, how many 9.50s, and so on.
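That tally is trivial to reproduce; a sketch like this (again with made-up marks, not a real protocol) is all it takes:

```python
# Count how many times each mark value appears in a set of component marks.
# The marks are illustrative only.
from collections import Counter

marks = [9.75, 9.50, 9.50, 9.25, 9.75, 7.75, 9.50, 9.25, 7.50]
tally = Counter(marks)
for value in sorted(tally, reverse=True):
    print(f"{value:.2f}: {tally[value]}")
```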
Among the first six pairs, only Savchenko/Massot and Tarasova/Morozov received marks below 8.00, but the German pair also received marks above 9.00, while the Russian pair did not. Are we sure those low marks aren’t too harsh? And if, by checking Ashley Wagner’s marks in the free skate at the 2016 World Championships, I showed how even one judge can be decisive in awarding the medals, with Savchenko/Massot two judges assigned low marks; since only the single lowest mark is discarded, the marks of one of the two entered the score and carried even greater weight than Judge 6’s did with Wagner.
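To see why two low judges weigh more than one, here is a small sketch, under the assumption that the panel score is a trimmed mean that discards only the single highest and single lowest mark, which is how I understand the ISU system to work. The marks are invented for illustration.

```python
# Why one harsh judge can be neutralized, but two cannot, assuming the
# panel discards one highest and one lowest mark (illustrative marks).

def trimmed_mean(marks):
    """Average after dropping one highest and one lowest mark."""
    kept = sorted(marks)[1:-1]
    return round(sum(kept) / len(kept), 2)

panel_one_low = [9.25, 9.25, 9.00, 9.25, 7.50]  # one harsh judge
panel_two_low = [9.25, 9.25, 9.00, 7.75, 7.50]  # two harsh judges

print(trimmed_mean(panel_one_low))  # 9.17: the single low mark is discarded
print(trimmed_mean(panel_two_low))  # 8.67: one low mark still enters the score
```

With one harsh judge the score barely moves; with two, one of the low marks survives the trimming and drags the result down.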
I also highlighted, in purple, the overall score given by each judge, and of course there the difference is even greater. The judges for that program were:
Mr. Philippe MERIGUET, FRA
Ms. Nadezhda FIODOROVA, BLR
Ms. Jung-Sue LEE, KOR
Mr. Benoit LAVOIE, CAN
Ms. Jia YAO, CHN
Mr. Volker WALDECK, GER
Ms. Vanessa RILEY, GBR
Ms. Joanna MILLER, AUS
Ms. Lorrie PARKER, USA
It would be nice to be able to ask them the reason for those marks. Since we cannot, we are left to suspect that some judge, the Canadian, or the Chinese, or the Belarusian one (judges from the former Soviet Union often feel a considerable attachment to Russia), may have lowered the German pair’s score to help a rival pair. But these are suspicions that can neither be confirmed nor denied. ISU, do you know that suspicion isn’t a good thing? Everything should be clear and understandable, at least if you want to be taken seriously.
Do you know why I post screenshots with all the data I calculate my averages from, even though taking screenshots is a long and tedious job? Because that way anyone who wants to can check the work I have done, and possibly challenge it. I do my best, but I’m just one person and I can make mistakes. If someone points out a mistake to me, I correct it. But I’m not hiding anything. You may not like what I do, but you cannot say that I cheat. What you do is rather more important than what I do, so transparency should matter even more to you. And it is not transparency that prevents the judges from assessing the competitions with serenity.
For example, when the identity of the judges was secret, at the 2013 World Championship, a judge assigned a +1 to the Axel performed by Javier Fernandez in the short program.
Before anyone thinks the worst: there was no Spanish judge on that panel, so unless someone can prove it was a vote swap, the only thing we can assume is that that judge made a mistake. It would have been nice to be able to ask the judge why he gave that mark, but we can’t. I hope that at least the referee, Mona Jonsson, pointed out his mistake to him.
The fact that judges, protected by anonymity, have given strange marks is nothing recent. For the short program of the 2005 World Championships I limited myself to taking the totals given by SkatingScore, along with the protocols of the skaters whose scores fluctuated most from one judge to another. Are we sure that all the judges saw the same competition?
The identity of the judges is not a problem if they vote fairly; what matters is the fairness of their evaluations. Better technology can help them judge more serenely, so this must be a priority, starting next season. And it is important that everything is checked, otherwise absurd things can happen, things like the 0.75 assigned by Jerome Poulin to Yuma Kagiyama in Skating Skills at the 2020 World Junior Championships.
I am not accusing Poulin of unfairness; I am sure that he simply pressed the wrong button when assigning the mark. But why did referee Robert Rosembluth not notice? Wasn’t it his job to make sure the judges’ marks were correct? And is it possible that no automatic system has been created capable of identifying major anomalies like this? This is not the first time such a mistake has been made.
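The “automatic system” I am asking for would not have to be sophisticated. A sketch like this (with invented marks, where the 0.75 mimics a slip of the finger) would already catch the problem before a score became official:

```python
# Flag any mark that sits far from the rest of the panel.
# Marks are illustrative; the 0.75 mimics a button-press error.

def flag_anomalies(marks, threshold=2.0):
    """Return indices of marks more than `threshold` points away from
    the mean of the other judges' marks."""
    flagged = []
    for i, mark in enumerate(marks):
        others = marks[:i] + marks[i + 1:]
        if abs(mark - sum(others) / len(others)) > threshold:
            flagged.append(i)
    return flagged

skating_skills = [8.75, 9.00, 8.50, 0.75, 8.75, 9.00, 8.50, 8.75, 9.25]
print(flag_anomalies(skating_skills))  # → [3]: the 0.75 stands out immediately
```

The threshold and the comparison rule are my assumptions, of course; the point is only that a mark sitting eight points below the rest of the panel is trivial to detect by machine.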
This is the Ladies’ free skate of Skate America 2003, the first competition in which the ISU Judging System was used.
The mistake was made in the first competition, almost twenty years ago. It seems to me more than enough time to identify a type of mistake and find a way to prevent it from happening again.
While I have this protocol in front of me, I’ve highlighted a couple more details. The sums are shown in green. In the first competition you published those sums that I now only find on SkatingScore or have to calculate myself. Why? Transparency is essential for the fairness of every competition. And in blue I indicated the list of possible deductions. In that program no skater deserved one, but your system provided for that clarity even with the anonymity of the judges. You have rightly removed anonymity; now it is important not to take a step back.