Microsoft outlines performance difference between Xbox One and PS4 – and lies?

In a post on NeoGAF, Xbox exec Albert Penello commented that, though he was “not disparaging Sony… the way people are calculating the differences between the two machines isn’t completely accurate. I think I’ve been upfront I have nothing but respect for those guys, but I’m not a fan of the mis-information about our performance.”

Penello then goes on to clarify key elements of the two consoles that he believes are often misunderstood, including the belief that the lower number of Compute Units in the Xbox One leads to a 50 per cent power disadvantage, and that the Xbox One’s memory is slower.

There are a few issues with his statement, so let’s take a look.

This is what he said:

I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I’m simply creating FUD or spin.

I do want to be super clear: I’m not disparaging Sony. I’m not trying to diminish them, or their launch or what they have said. But I do need to draw comparisons since I am trying to explain that the way people are calculating the differences between the two machines isn’t completely accurate. I think I’ve been upfront I have nothing but respect for those guys, but I’m not a fan of the mis-information about our performance.

So, here are a couple of points about some of the individual parts for people to consider:

• 18 CU’s vs. 12 CU’s =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU’s, so it’s simply incorrect to say 50% more GPU.
• Adding to that, each of our CU’s is running 6% faster. It’s not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 – it’s called Kinect.
• Speaking of GPGPU – we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I’m sure this will get debated endlessly but at least you can see I’m backing up my points.

I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given this continued belief of a significant gap, we’re working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we’ve done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.

Now, a few issues obviously stick out here.

  • We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.

You can’t add bandwidth like that… and “peak on paper” is not an honest way to pitch it; everything looks good on paper.
Also, how exactly are they calculating the 204GB/s value? I’m not sure they know the real figure themselves. Honestly, it’s one thing to beat around the bush, but it’s another to straight up lie: the majority of the data in RAM can only move at 68GB/s, and the ESRAM can only move 32MB at the previously reported 194GB/s peak.
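To put a rough number on why simple addition is nonsense, here’s a quick back-of-the-envelope sketch in Python. The 68GB/s and 204GB/s figures come straight from Penello’s post; the ESRAM hit rates are made-up illustrative values, since real workloads vary wildly. The blended bandwidth of a split memory system is a time-weighted mix of the two pools, not the sum of their peaks:

```python
# Back-of-the-envelope: blended bandwidth of a split memory system.
# Pool figures from Penello's post: 68 GB/s DDR3, 204 GB/s ESRAM peak.
# The hit rate (share of traffic served by the 32 MB ESRAM) is an
# illustrative assumption, not a measured number.

DDR3_BW = 68.0    # GB/s, main memory pool
ESRAM_BW = 204.0  # GB/s, claimed ESRAM read+write peak

def blended_bandwidth(esram_hit_rate: float) -> float:
    """Time-weighted mix: every byte is served by exactly one pool."""
    time_per_byte = esram_hit_rate / ESRAM_BW + (1 - esram_hit_rate) / DDR3_BW
    return 1.0 / time_per_byte

for hit in (0.0, 0.25, 0.5, 0.75):
    print(f"{hit:4.0%} of traffic in ESRAM -> {blended_bandwidth(hit):6.1f} GB/s")

# Even if half of all traffic somehow fit in 32 MB of ESRAM, the blend
# is ~102 GB/s -- nowhere near the 68 + 204 = 272 GB/s being quoted.
```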

Taking the effective transfer rate (5500MT/s) times the bus width (256-bit) gives you the maximum, i.e. theoretical, bandwidth:
5,500MT/s × 32 bytes (256 bits ÷ 8 bits per byte) = 176,000MB/s = 176GB/s
That 176GB/s is the absolute maximum, NOT a reasonable real-world expectation.
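The same arithmetic in a couple of lines of Python, for anyone who wants to check it (5500MT/s effective rate and a 256-bit bus are the standard published GDDR5 figures for the PS4):

```python
# Theoretical peak GDDR5 bandwidth: transfer rate x bus width in bytes.
transfers_per_sec = 5500e6   # 5500 MT/s effective data rate
bus_width_bytes = 256 // 8   # 256-bit bus = 32 bytes per transfer

print(f"Peak: {transfers_per_sec * bus_width_bytes / 1e9:.0f} GB/s")  # 176 GB/s
```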

  • We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 – it’s called Kinect.

I see, but who cares? What does that matter to someone deciding which is the better system? But anyway… the PS4 has six additional compute units. That’s a pretty big difference: it means the PS4 can offload more general-purpose tasks to the GPU than the Xbox One can.
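For scale, here’s the standard shader-throughput arithmetic. The inputs are public figures (18 CUs at 800MHz in the PS4, 12 CUs at 853MHz in the Xbox One; 64 lanes per GCN compute unit, 2 FLOPs per lane per clock from fused multiply-add), and it’s a paper number like everything else here, but it shows what six extra CUs are worth:

```python
# Peak single-precision throughput of a GCN-style GPU:
# CUs x 64 lanes x 2 FLOPs per lane per clock (FMA) x clock (GHz) = GFLOPS.

def peak_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000.0

ps4 = peak_tflops(18, 0.800)   # ~1.84 TFLOPS
xb1 = peak_tflops(12, 0.853)   # ~1.31 TFLOPS, 6% clock bump included
print(f"PS4 {ps4:.2f} TFLOPS vs Xbox One {xb1:.2f} TFLOPS: "
      f"{(ps4 / xb1 - 1) * 100:.0f}% on paper")
```

Even granting the 6 per cent clock advantage Penello mentions, the gap only shrinks from 50 per cent to roughly 40 per cent.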

  • Speaking of GPGPU – we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

A nice little jab at the rival, really, but the PS4 also has 30GB/s of coherent bandwidth through its Onion and Onion+ buses…

I am not really sure why he would say all that; maybe he’s chucking smoke into fanboys’ faces?
Had this discussion been about PowerPC or some other architecture I might believe you, but you have to realize that you are talking about architectures we have been familiar with for over 20 years, and throwing out buzzwords and inaccurate numbers really doesn’t help your argument.

We are all some sort of armchair engineer these days; don’t throw shit around, because people will pick up on it.
How he manages to get those numbers is, well, beyond me, because they are not real.
And if the Xbox is more powerful, which is what he’s trying to say (and of course it isn’t), it’s also more powerful on the price tag: £100 more, in fact.

You can’t simply add up all your bandwidth from different memory pools and say ‘that’s that’. And you can’t claim graphics suffers from a lack of parallel scaling when, 99% of the time, the problem is hugely parallel; that is exactly why graphics cards use so many concurrent SIMD units clocked lower than, say, a CPU.
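Amdahl’s law puts a number on that last point. As a rough sketch, taking the 99% parallel figure from the paragraph above (an assumption, not a measurement), scaling from 12 to 18 units still delivers most of the ideal 50 per cent gain, so “multi-core inefficiency” is a weak excuse:

```python
# Amdahl's law: speedup with n parallel units for a workload whose
# parallel fraction is p. p = 0.99 echoes the "99% of the time the
# problem is hugely parallel" claim above; it is illustrative only.

def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.99
s12, s18 = speedup(p, 12), speedup(p, 18)
print(f"12 units: {s12:.1f}x, 18 units: {s18:.1f}x")
print(f"18 vs 12 units: {(s18 / s12 - 1) * 100:.0f}% faster (ideal: 50%)")
```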
