Lord of the Cards - The return of the king?

When NVIDIA launched the GeForce GTX 480 in Q1 2010, their worst fears became reality. The high-end Fermi part was launched and promptly gutted and slaughtered over three aspects: high power consumption, loud noise levels and a GPU that ran far too hot. The flipside of that coin was the fact that the performance was actually spot on. To date the GeForce GTX 480 is the fastest kid on the DX11 block, offering stunning performance. Yet the dark clouds that started hovering above the Fermi launch were something NVIDIA never got rid of, not until the GeForce GTX 460 launch. That made the GeForce GTX 480 probably the worst selling high-end graphics card series to date for NVIDIA.

Throughout the year we've reviewed a good number of GTX 480 cards and we've always tried to be very fair. We firmly (not Fermi) believe that if NVIDIA had addressed the heat and noise levels from the get-go, the outcome and overall opinion of the GTX 480 would have been much more positive, as enthusiast-targeted end users can live with a somewhat high TDP. Good examples of this are KFA2's excellent GeForce GTX 480 Anarchy, more recently the MSI GTX 480 Lightning, and soon Gigabyte's GTX 480 SOC.

However, the damage was done and NVIDIA needed to refocus, redesign and improve the GF100 silicon. They went back to the drawing board, made the design more efficient and made some significant changes at the transistor level. As a result they were able to slightly lower the TDP, increase the shader processor count and raise the overall clock frequency in both the core and memory domains.

The end result is the product you've all been hearing about for weeks now: the GeForce GTX 580. A product that is quieter than the GTX 280/285/480 you guys are so familiar with, that keeps temperatures under control a little better, and whose noise levels overall are really silent. All that still based on the 40nm fabrication node, while offering over 20% more performance compared to the reference GeForce GTX 480.

Will NVIDIA get it right this time? Well, they'd better hope so, as AMD's Cayman, aka the Radeon HD 6970, is being released really soon as well. These two cards will go head to head with each other in both price and performance, at least that's what we hope. Exciting times with an exciting product; head on over to the next page where we'll start up the review of the product NVIDIA unleashes today, the GeForce GTX 580.
The GeForce GTX 580 graphics processor
So, as we already stated, for the GeForce GTX 580 NVIDIA went back to the drawing board and introduced a new revision of the GF100 ASIC, now labeled GF110.
With this release, NVIDIA now has a full range of products on the market from top to bottom. All the new graphics adapters are of course DirectX 11 ready. With Windows 7 and Vista also being DX11 ready, all we need are some games that take advantage of DirectCompute, multi-threading, hardware tessellation and the new Shader Model 5.0 extensions. DX11 is going to be good, and once tessellation kicks in, games will look much better.
GeForce GTX 580 : 512 SP, 384-bit, 244W TDP
GeForce GTX 480 : 480 SP, 384-bit, 250W TDP
GeForce GTX 470 : 448 SP, 320-bit, 225W TDP
The GPU that powers it all carries small architectural changes: some functionality was stripped away, while additional functional units for tessellation, shading and texturing were added. Make note that the GPU is still big, as the fabrication node remains 40nm; TSMC canceled its 32nm node, which prevented this chip from being smaller.
Both the GF100 and GF110 graphics processors have sixteen shader clusters (called SMs) embedded in them. On the GeForce GTX 480 one such cluster was disabled, and on the GeForce GTX 470 two were disabled. The GTX 580 has the full 512 shader processors activated, which alone is good for a notch more performance. That's 512 shader processors, 32 more than the GTX 480 had.
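Those shader counts follow directly from Fermi's SM layout: each GF100/GF110 SM contains 32 shader cores, so multiplying the enabled SM count by 32 reproduces the figures above. A minimal sketch:

```python
# Minimal sketch: Fermi (GF100/GF110) SMs each contain 32 shader (CUDA) cores,
# so per-card totals follow from the number of enabled SMs.
CORES_PER_SM = 32

enabled_sms = {"GTX 470": 14, "GTX 480": 15, "GTX 580": 16}

for card, sms in enabled_sms.items():
    print(f"{card}: {sms} SMs x {CORES_PER_SM} cores = {sms * CORES_PER_SM} shader processors")
# GTX 470: 448, GTX 480: 480, GTX 580: 512
```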
Finally, to find some additional performance, the card is also clocked a chunk faster, at 772 MHz, whereas the GeForce GTX 480 was clocked at 700 MHz.

Graphics card GeForce 9800 GTX GeForce GTX 285 GeForce GTX 295 GeForce GTX 470 GeForce GTX 480 GeForce GTX 580
Stream (Shader) Processors 128 240 240 x2 448 480 512
Core Clock (MHz) 675 648 576 607 700 772
Shader Clock (MHz) 1675 1476 1242 1215 1400 1544
Memory Clock (effective MHz) 2200 2400 2000 3350 3700 4000
Memory amount 512 MB 1024 MB 1792 MB 1280 MB 1536 MB 1536 MB
Memory Interface 256-bit 512-bit 448-bit x2 320-bit 384-bit 384-bit
Memory Type GDDR3 GDDR3 GDDR3 GDDR5 GDDR5 GDDR5
HDCP Yes Yes Yes Yes Yes Yes
Two dual-link DVI Yes Yes Yes Yes Yes Yes
HDMI No No No Yes Yes Yes
For Fermi, NVIDIA made its memory controllers GDDR5 compatible, which was not the case for the GT200-based GeForce GTX 260/275/285/295, hence their GDDR3 memory.
Memory-wise, NVIDIA ships large (and expensive) memory volumes due to its architecture; 1 GB has become the baseline for most of NVIDIA's series 400 and 500 graphics cards. Each memory partition is tied to one 64-bit memory controller on the GPU, and each controller gets 256 MB of memory attached to it.
The GTX 470 has five memory controllers (5x256MB) = 1280 MB of GDDR5 memory
The GTX 480 has six memory controllers (6x256MB) = 1536 MB of GDDR5 memory
The GTX 580 has six memory controllers (6x256MB) = 1536 MB of GDDR5 memory
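To make that relation concrete, here is a minimal sketch, assuming (per the above) one 64-bit controller per 256 MB partition:

```python
# Minimal sketch of the memory layout described above: each active
# 64-bit memory controller drives one 256 MB partition.
MB_PER_CONTROLLER = 256
BITS_PER_CONTROLLER = 64

controllers = {"GTX 470": 5, "GTX 480": 6, "GTX 580": 6}

for card, n in controllers.items():
    print(f"{card}: {n} x {MB_PER_CONTROLLER} MB = {n * MB_PER_CONTROLLER} MB "
          f"on a {n * BITS_PER_CONTROLLER}-bit bus")
# GTX 470: 1280 MB / 320-bit, GTX 480 and GTX 580: 1536 MB / 384-bit
```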
As you can understand, the massive memory partitions, the bus width and the combination with (quad data rate) GDDR5 memory allow the GPU to work with a very high effective framebuffer bandwidth. Let's put most of the data in a chart to get an overview of the changes:

Graphics card GeForce GTX 470 GeForce GTX 480 GeForce GTX 580
Fabrication node 40nm 40nm 40nm
Shader processors 448 480 512
Streaming Multiprocessors (SM) 14 15 16
Texture Units 56 60 64
ROP units 40 48 48
Graphics Clock (Core) 607 MHz 700 MHz 772 MHz
Shader Processor Clock 1215 MHz 1401 MHz 1544 MHz
Memory Clock / Data rate 837 MHz / 3348 MHz 924 MHz / 3696 MHz 1000 MHz / 4000 MHz
Graphics memory 1280 MB 1536 MB 1536 MB
Memory interface 320-bit 384-bit 384-bit
Memory bandwidth 134 GB/s 177 GB/s 192 GB/s
Power connectors 2x6-pin PEG 1x6-pin PEG, 1x8-pin PEG 1x6-pin PEG, 1x8-pin PEG
Max board power (TDP) 215 Watts 250 Watts 244 Watts
Recommended Power supply 550 Watts 600 Watts 600 Watts
GPU Thermal Threshold 105 degrees C 105 degrees C 97 degrees C
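The bandwidth row of the chart is easy to verify: divide the bus width by eight to get bytes per transfer, then multiply by the effective data rate. A small sketch reproducing the table's numbers:

```python
# Small sketch: peak memory bandwidth = bus width in bytes * effective data rate.
def bandwidth_gb_s(bus_bits: int, effective_mhz: int) -> float:
    return bus_bits / 8 * effective_mhz / 1000  # bytes/transfer * MT/s -> GB/s

print(bandwidth_gb_s(320, 3348))  # GTX 470: ~134 GB/s
print(bandwidth_gb_s(384, 3696))  # GTX 480: ~177 GB/s
print(bandwidth_gb_s(384, 4000))  # GTX 580:  192 GB/s
```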
So we've talked about the core clocks, specifications and memory partitions. Obviously there's a lot more to go through. At the end of the pipeline we run into the ROP (Raster Operation) engine, and the GTX 580 again has 48 units for features like pixel blending and AA.
There is a total of 64 texture filtering units available on the GeForce GTX 580. The math is simple here: each SM has four texture units tied to it.
GeForce GTX 470 has 14 SMs X 4 Texture units = 56
GeForce GTX 480 has 15 SMs X 4 Texture units = 60
GeForce GTX 580 has 16 SMs X 4 Texture units = 64
Though still a 40nm chip, the GF110 GPU comes with almost 3 billion transistors embedded in it. The TDP remains roughly the same at 244 Watts (the GTX 480 was rated at 250 Watts), while performance goes up by roughly 20%.
TDP = Thermal Design Power. Roughly translated: when you stress everything on the graphics card at 100%, the TDP is the maximum power the board is designed to draw.
The GeForce GTX 580 comes with both a 6-pin and an 8-pin power connector to get enough current, with a little to spare for overclocking. This boils down to: 8-pin PEG = 150W, 6-pin PEG = 75W, PCIe slot = 75W, making 300W available (in theory).
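As a sanity check, a minimal sketch of that budget arithmetic (the per-connector limits come from the PCI Express specification):

```python
# Minimal sketch of the (theoretical) power budget: per-connector
# maximums as specified for PCI Express boards.
PCIE_SLOT_W = 75   # power delivered through the x16 slot
PEG_6PIN_W  = 75   # 6-pin PEG connector
PEG_8PIN_W  = 150  # 8-pin PEG connector

available = PCIE_SLOT_W + PEG_6PIN_W + PEG_8PIN_W
board_tdp = 244  # GTX 580 max board power, per the table above
print(f"{available} W available, {available - board_tdp} W left for overclocking")
# 300 W available, 56 W left
```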
Final words and conclusion
Nice; yeah, I certainly like what NVIDIA has done with the GF110 GPU. No matter how you look at it, this is new silicon that runs much more efficiently, and thanks to more shader processors, higher clocks, faster memory and tweaks and optimizations at the transistor level we get extra performance as well. The end result is a product that is, give or take, 20% faster than the GTX 480, a product that was already blazingly fast of course.
Now, that GTX 480 was already the fastest chip on the globe, yet it was haunted by high noise levels and heat issues. We stated it in all our reviews: had the GTX 480 been quieter and run less hot, everybody would have been far milder in their opinion of that product.
The GeForce GTX 580 is exactly that: it is the GTX 480 in a new jacket, now with vapor chamber cooling, an improved PCB and, most of all, a more refined GPU. Don't get me wrong here, the GPU itself is still huge, but who cares about that when the rest is right? And the rest is right... much higher performance at very acceptable noise levels, with a GPU that runs at decent temperatures. The one downside we measured was an increase in power consumption, slightly higher than the GTX 480's. We do need to mention that the board used for this article (an engineering sample) had an older BIOS, and power consumption on this board might be a tad higher as a result.
The cooling performance is better thanks to the vapor chamber cooler, a technology that has already been widely adopted in CPU and VGA cooling solutions. Even with the GPU core overclocked towards 850 MHz and stressed ridiculously hard, we still did not pass 85~87 degrees C, and trust me, we stressed and dominated that GPU with a whip. Mind you, in your overall gaming experience the temperatures will definitely be somewhat lower, as we really gave the GPU a kick in the proverbial nuts here. 80 degrees C on average is a number I can safely state you'll be seeing.
If you take a reference baseline GTX 480 and compare it to this product, we already have 20% faster performance. At an overclocked 850 MHz clock frequency that accumulates to an easy 25% to 30% extra performance over that baseline GTX 480, and in the world of high-end that is a mighty amount of extra performance. Of course, any game to date will play fine at the highest resolutions with a minimum of 4x anti-aliasing enabled and the very best image quality settings. So performance is just not an issue, and neither are heat and noise.
Now, if you come from a factory-overclocked GTX 480 like the KFA2 Anarchy or MSI Lightning, then the difference is really nil, as these cards are clocked faster by default. There's no reason to upgrade whatsoever. However, if you're in the market choosing between a pre-overclocked GTX 480 and this reference 580, then obviously the 580 should have your preference, as that card at default is as fast as the overclocked GTX 480 cards and then has more room left for tweaking. That is certainly a bitter message for KFA2, MSI and Gigabyte, who all recently released heavily overclocked models of the GTX 480. It's the reality though: the GTX 580 is the logical choice here.
The new advanced power monitoring function is, well... disappointing. If the monitoring ICs were there purely as protection against drawing too much power, it would have been fine. But the feature was designed and implemented to detect specific applications such as FurMark and then throttle down the GPU. We really dislike the fact that manufacturers like NVIDIA try to dictate how we as consumers or press should stress the hardware we test. NVIDIA's defense is that ATI has been doing this on the R5000/6000 series as well, yet we think the difference is that ATI does not trigger it on stress tests; theirs is simply a common safety feature for when you go way beyond specifications. We have not seen ATI cards clock down in FurMark recently, unless we clocked, say, the memory too high, after which the card clocked down as a safety measure. No matter how you look at it or try to explain it, this is going to be a sore topic now and in the future. There are, however, many ways to bypass this feature, and I expect that any decent reviewer will do so. Much like any protection, if one application does not work, we'll move on to the next one.
Alright, time to round up this review. Saying that the GeForce GTX 580 is merely a respin would not do NVIDIA justice; this certainly is a newly taped-out revision that has been tweaked and made more efficient. The end result is the card we expected in early 2010, only faster. The only thing that can ruin NVIDIA's all-new release is AMD's upcoming Cayman (Radeon HD 6970); the performance and pricing of that card are still unknown. Anyway, if priced right and if it falls within your budget, then we do like to recommend the GeForce GTX 580, but we are afraid that the 479 EUR (499 USD) price tag will scare away many people. High-end anno 2010 should be 400 EUR tops, imho.
The GeForce GTX 580... well, this is the product that really should have been launched in Q1; it would have made all the difference in the world for NVIDIA. Though we do not see any groundbreaking new stuff, the performance went up by a good enough margin and 'the feel' of the product is just so much better compared to the GeForce GTX 480 launch. Definitely a card I wouldn't mind having in my PC. Now then... Call of Duty: Black Ops, bring it on!