I'm not the user you quoted, but I can give you my two cents about this line of thinking, since I partially agree with it.
Imagine a game as a school essay or work project, or even an exam. Let's say you are the developer of the game or, in this case, the person tasked with writing the essay. You are 30% done. You're still missing 70% of that essay, but you're "competing" against others who have already finished theirs. The person in charge of evaluating the essay reads it and says it's the best essay in the class.
That speaks volumes about the quality of your essay, right? Sure. But 70% of your work still isn't there, and nothing guarantees that all the missing work won't substantially alter the quality of your essay. Maybe your thesis is impeccable, but you end up concluding that the Earth is flat and the sun is actually a light bulb in the sky. Who knows. The potential for greatness is just as real as the potential for disaster.
Now, if you think of games in these terms, you'll see why some people (or at least I) consider it important for a game to have a landmark "official" release: it's an arbitrary cutoff that lets you judge the game as a whole, for what it is at the moment of review, not for what it may become.
Maybe PUBG becomes the greatest game of the decade. Maybe it never gets a full release in the end, as has happened with so many incredibly popular Early Access games: ARK, Rust, 7 Days to Die, DayZ, etc. Again, having a "1.0" allows you to say: "This is the game; I'll judge it as what the developer considered to be a completed, functional, commercial version of it."
tl;dr: It's an arbitrary measure that gives all games a level playing field.