That eDRAM would have to take up space on their die, which means either a) a more expensive APU, or b) a weaker APU like the X1's. The benefits aren't that great tbh, and it runs counter to their design goals anyway.
The Xbox One uses DDR3, so a bandwidth beast it is not. I don't recall the exact numbers; I think it's something like 50-60GB/s? Someone might like to correct me.
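For what it's worth, here's a quick back-of-envelope sketch of the theoretical peak, assuming the commonly reported DDR3-2133 on a 256-bit bus (treat those as assumptions, not confirmed specs):

```python
# Back-of-envelope theoretical peak for the Xbox One's main memory,
# assuming the commonly reported DDR3-2133 on a 256-bit bus.
transfer_rate = 2133e6        # transfers per second (assumed DDR3-2133)
bus_width_bytes = 256 // 8    # 256-bit bus -> 32 bytes per transfer

peak_gb_s = transfer_rate * bus_width_bytes / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")   # ~68.3 GB/s
```

So the raw peak works out to roughly 68GB/s, with sustained throughput lower in practice.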
eDRAM wouldn't have meant a more expensive APU at all, nor would it have implied a weaker one. The point is that you cannot currently manufacture eDRAM on a 28nm process node, as nobody offers it. The smallest node for eDRAM right now is 32nm, which would have implied a daughter die like the 360 setup - perhaps with the GPU ROPs on the daughter die as well.
32MB of eDRAM would have meant a much smaller die area than 32MB of eSRAM, even on a 32nm node. It would have allowed Sony to go up to 64 or possibly even 128MB of eDRAM on the daughter die. That would have been far superior performance-wise to MS' current XB1 setup, provided they had double or more the embedded memory and terabits per second of internal bandwidth to the ROPs. It just would have been more expensive.
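As a rough illustration of why the area gap matters - the cell sizes below are ballpark assumptions, not figures for any particular process, and they ignore peripheral circuitry:

```python
# Rough comparison of 32MB of 6T eSRAM vs 1T1C eDRAM.
# Cell areas below are ballpark assumptions, not vendor figures,
# and ignore peripheral circuitry, redundancy, etc.
bits = 32 * 1024 * 1024 * 8            # 32MB expressed in bits

sram_transistors = bits * 6            # classic 6T SRAM cell
edram_transistors = bits * 1           # 1T1C eDRAM cell (one transistor + capacitor)

sram_cell_um2 = 0.15                   # assumed ~28nm 6T SRAM cell area
edram_cell_um2 = 0.03                  # assumed ~32nm eDRAM cell area

print(f"6T eSRAM  : ~{sram_transistors / 1e9:.2f}B transistors, "
      f"~{bits * sram_cell_um2 / 1e6:.0f} mm^2 of raw cell area")
print(f"1T1C eDRAM: ~{edram_transistors / 1e9:.2f}B transistors, "
      f"~{bits * edram_cell_um2 / 1e6:.0f} mm^2 of raw cell area")
```

Even with generous assumptions, the 6T eSRAM is several times the transistor count and raw cell area of the equivalent eDRAM capacity, which is why doubling up on eDRAM would have been plausible on a daughter die.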
The thing with eDRAM is that it requires special manufacturing steps and thus cannot be fabbed at every major foundry. This ends up making it more expensive in the long run, because it takes away your ability to shop around and get the best deal at the major foundries. MS chose eSRAM because they wanted a solution that would be integrated on die from the outset. That makes your APU more expensive to start with, since it invariably affects yields (on-die, high-density eSRAM adds another level of complexity, making the APU even more prone to defects), but it allows you to cost-reduce your embedded memory alongside your main APU as you transition from one manufacturing process node to the next.
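To make the yield point concrete, here's a toy Poisson yield model; the defect density and die sizes are made-up, illustrative numbers, not anything from AMD or MS:

```python
import math

# Toy Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are illustrative assumptions only,
# and the model ignores redundancy/repair in the memory arrays.
defect_density = 0.002                # assumed defects per mm^2

def poisson_yield(area_mm2):
    return math.exp(-defect_density * area_mm2)

base_apu_mm2 = 320                    # hypothetical APU without embedded memory
esram_mm2 = 45                        # hypothetical extra area for 32MB eSRAM + periphery

print(f"Yield without eSRAM: {poisson_yield(base_apu_mm2):.1%}")
print(f"Yield with eSRAM:    {poisson_yield(base_apu_mm2 + esram_mm2):.1%}")
```

The extra area drags yield down a few points at launch, but the whole thing shrinks together at each node transition, which is the trade-off MS was after.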
The main problem with a hypothetical PS4 that used eDRAM plus a big main pool of DDR3 is that it would have meant a more complex system with greater potential for performance bottlenecks. There would be no real way to get around the slow main memory bandwidth, which could tank performance for any workload that can't find its data in the finite pools of cache and embedded memory.
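A simple way to see how badly spills to the slow pool hurt: if a fraction of the traffic has to go out to main memory, the effective bandwidth tends toward a harmonic mix of the two pools. The figures below are illustrative assumptions, not measured numbers:

```python
# Simple harmonic-mix model of effective bandwidth when a fraction of
# the traffic hits the embedded memory and the rest spills to DDR3.
# All bandwidth figures are illustrative assumptions.
def effective_bandwidth(hit_rate, fast_gb_s, slow_gb_s):
    # Time per byte is the weighted sum of per-pool byte times.
    return 1.0 / (hit_rate / fast_gb_s + (1.0 - hit_rate) / slow_gb_s)

fast = 200.0    # assumed embedded-memory bandwidth, GB/s
slow = 68.0     # assumed DDR3 main-memory bandwidth, GB/s

for hit in (1.0, 0.9, 0.75, 0.5):
    print(f"hit rate {hit:.0%}: ~{effective_bandwidth(hit, fast, slow):.0f} GB/s effective")
```

Even a modest miss rate to DDR3 pulls the effective number way down from the embedded-memory peak, which is exactly the kind of bottleneck a single big GDDR5 pool avoids.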
I think the GDDR5 solution Sony chose was a good, solid, and wise one, and I think it's a system that will pay off for them in the long term, more so than any system Sony has designed to date.