Looking @ UC 4 - I think there are a couple of things coming into play here. Looking at some screenshots right before release, and a couple from post-release (that you sent me <3 ), we know the DOF is basically quarter resolution, and you can see this where it intersects overlapping geometry or alpha effects, or overlapping obmb. That said, if there is no obvious intersection or shallow cut-off from DOF, it looks pretty darn smooth in spite of its internal resolution; the upscale in a static image is not too bad or blocky... unlike, say, KZ:SF, where it is really obviously quarter resolution. Based on the serious stair-stepping in the screen above, I would imagine the motion blur is running at that exact same internal resolution as the depth of field (I would be surprised to learn that the various post processes are running at different resolutions, as depth of field and mb are usually conceptually coupled). So you already have a low internal resolution when the effect is rendered in one slice. Combine that with the low number of sample slices (which are visible in the screen you posted as well), and then with how the motion blur is upsampled to native resolution and how it blends between samples. How the samples are blended and weighted determines how visually plausible they look in stills and even in motion.
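To make the "quarter resolution" point concrete - a toy sketch, nothing from the actual game code, and "quarter resolution" here meaning half width x half height as is typical - this is roughly why a low-res effect shows blocks exactly where it meets hard geometry edges:

```python
def quarter_res(full):
    # Hypothetical: the expensive effect (DOF / motion blur) is only
    # evaluated at every other pixel in each dimension, i.e. 1/4 the pixels.
    return [row[::2] for row in full[::2]]

def upsample_nearest(quarter):
    # Naive upsample back to native res: each low-res texel covers a
    # 2x2 block of output pixels. In a smooth gradient you barely see
    # this; along a hard edge (overlapping geometry, alpha effects)
    # you get the blocky stair-stepping described above.
    out = []
    for row in quarter:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out
```

A smarter (e.g. bilateral / depth-aware) upsample is what hides the internal resolution in the good cases; the nearest-neighbour version above is the worst case.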
This presentation from Jorge Jimenez goes into the various steps of making a plausible reconstruction filter for motion blur.
Or this paper from Tiago Sousa, after slide 45.
So 4 things in total if you take the time to look through both those presentations:
1. Lower internal resolution for motion blur (not uncommon, does not need to be a huge detriment)
2. How that is upsampled to native resolution
3. How the motion blur blends between individual samples / slices
4. How it blends over non-motion blurred parts of the screen, or motion blurred areas moving in a different direction
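Points 3 and 4 can be sketched as a per-sample weight, very loosely after the McGuire-style reconstruction filter that the Jimenez talk builds on - all function names, and the soft_z constant, are mine, not from either presentation:

```python
def clamp01(x):
    return min(max(x, 0.0), 1.0)

def cone(dist, velocity):
    # Does a sample with |velocity| pixels of blur reach a pixel
    # `dist` pixels away along the vector? Linear falloff to the
    # end of the blur vector.
    return clamp01(1.0 - dist / max(velocity, 1e-5))

def sample_weight(z_center, z_sample, v_center, v_sample, dist, soft_z=0.05):
    # Soft front/back classification: a sample contributes if it is a
    # moving foreground smearing over the centre pixel, or if the centre
    # pixel's own blur gathers from it. The soft_z transition is what
    # avoids the hard cut-off line between blurred and non-blurred (or
    # differently-moving) regions described in point 4.
    in_front = clamp01(1.0 - (z_sample - z_center) / soft_z)
    behind = clamp01(1.0 - (z_center - z_sample) / soft_z)
    return in_front * cone(dist, v_sample) + behind * cone(dist, v_center)
```

With a hard depth test instead of soft_z, a static pixel next to a fast-moving object gets weight 0 from every blurred sample, and you see exactly the kind of hard edge UC4 shows.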
Points 2 through 4 are the most important IMO, as that is where you can fudge, blur, and sample interestingly enough to create really plausible-looking results. The various looks shown in that Call of Duty paper demonstrate just how different the end result of the same number of motion blur slices can look based upon how it is upscaled and blended along its vector.
Looking @ doom - it just seems to do steps 2 and 4 much better, to the point where its internal resolution is really hard to even notice (it is doubtlessly less than full res, considering it runs on console as well), and to the point where you cannot obviously see the geometric cut-off where obmb begins and where it ends. That is quite unlike UC4, where you can see a hard cut-off line more often than not in the motion blur itself: either between its internal samples, between motion blur on top of non-motion-blurred screen parts, or where motion blur is going in a different direction (look at the rocks, or around the red kerchief on the head of the NPC). Contrast this with doom:
The chainsaw blades are moving and being motion blurred, but you cannot see an obvious cut-off line against the non-motion-blurred areas of the screen, or the individual samples themselves. Although - and this may just be because the console obmb in doom is not as great - you can see some of the individual slices in your screenshot of the mancubus, as part of the motion blur on top of his left arm (as it is in front of a high-contrast red/white light area).