There's some confusion here; I hope I can clear it up.
Both SGSSAA and OGSSAA (which ~= downsampling) are SSAA (supersampling) methods. That means both deal with aliasing in all its forms (normal edge aliasing, subpixel aliasing, alpha aliasing and shader aliasing). And both increase the temporal stability of the picture.
SSAA methods use multiple samples per pixel (instead of 1 sample without AA) and combine them to form the final color of each pixel. The performance hit for all SSAA methods is large and should be roughly the same at the same sample count (that is, 4xOGSSAA has about the same performance hit as 4xSGSSAA).
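To make the "combine them" step concrete, here's a minimal sketch of supersampling a single pixel. The scene here is just a hypothetical hard edge, not any real renderer's API; the point is only that the final pixel color is the average of the sub-pixel samples.

```python
# Hypothetical scene: a hard edge at y = 0.3 (white below the edge, black above).
# This is illustrative only -- real renderers shade far more than a step function.
def shade(x, y):
    """Return the scene color (0.0 or 1.0) at an exact sub-pixel position."""
    return 1.0 if y < 0.3 else 0.0

def supersample_pixel(px, py, offsets):
    """Take one shade() sample per offset and average them into the pixel color."""
    samples = [shade(px + dx, py + dy) for dx, dy in offsets]
    return sum(samples) / len(samples)

# 4x ordered-grid offsets (a 2x2 grid inside the unit pixel):
og4 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

print(supersample_pixel(0, 0, og4))  # edge covers the lower sample row -> 0.5
```

With 1 sample per pixel the result would snap to 0.0 or 1.0; with 4 samples the edge pixel lands on an intermediate gray, which is exactly the smoothing SSAA buys.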
Where OGSSAA and SGSSAA differ is in how they place these samples within each pixel. In OGSSAA, they are placed on a regular grid, while in SGSSAA they are placed on a sparse grid (think N-queens with N being the number of samples: no two samples share a row or a column). Thus, with the same sample count, SGSSAA achieves a better reduction in aliasing artifacts. How much better depends on the angle of the aliased edge, but the difference is most pronounced at almost-horizontal (or almost-vertical) edges, which are usually also the ones with the most visible aliasing, particularly in motion.
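A quick sketch of the two 4x layouts. The sparse pattern below is the common "rotated grid" arrangement; the exact positions vary by GPU, so treat these coordinates as illustrative, not as any vendor's actual pattern.

```python
# Sample offsets within a unit pixel (illustrative coordinates).
og4 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]        # 2x2 ordered grid
sg4 = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]  # sparse (rotated) grid

def is_sparse(offsets):
    """N-queens property: every sample has a unique x AND a unique y."""
    xs = [x for x, _ in offsets]
    ys = [y for _, y in offsets]
    return len(set(xs)) == len(offsets) and len(set(ys)) == len(offsets)

print(is_sparse(og4))  # False: samples share rows and columns
print(is_sparse(sg4))  # True: 4 distinct x positions and 4 distinct y positions
```

The N-queens property is what gives the sparse pattern its edge: an edge sweeping through the pixel crosses each sample's row (or column) at a different point.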
To understand why this is the case, consider a 4-sample pattern and think about how each sample falls on the edge. With an ordered 2x2 grid and a near-horizontal edge, the two upper samples and the two lower samples will almost always be either both covered or both uncovered, so you only get a single intermediate step (neither row covered, one row covered, both rows covered). With a sparse grid, each sample has a different position in the Y dimension, so you get 3 intermediate steps (no sample covered, 1 sample covered, 2 samples covered, 3 samples covered, all samples covered).
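The step-count argument above can be checked numerically: sweep a horizontal edge through the pixel and count how many distinct coverage fractions each pattern can produce. The sample positions are the same illustrative ones as before, not any specific GPU's.

```python
# Illustrative 4x sample offsets within a unit pixel.
og4 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]        # ordered grid
sg4 = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]  # sparse grid

def coverage_levels(offsets, steps=1000):
    """Sweep a horizontal edge y = e across the pixel; a sample is covered
    when its y lies below the edge. Collect every coverage fraction seen."""
    levels = set()
    for i in range(steps + 1):
        e = i / steps
        covered = sum(1 for _, y in offsets if y < e)
        levels.add(covered / len(offsets))
    return sorted(levels)

print(coverage_levels(og4))  # [0.0, 0.5, 1.0] -- only one intermediate step
print(coverage_levels(sg4))  # [0.0, 0.25, 0.5, 0.75, 1.0] -- three intermediate steps
```

So for a near-horizontal edge, the ordered grid wastes half its samples (pairs in the same row always agree), while the sparse grid turns all 4 samples into distinct gradation steps.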
If you google "AA sampling patterns" you can find some illustrations of the sampling patterns used on current GPUs.