The RV770 Story: Documenting ATI's Road to Success
by Anand Lal Shimpi on December 2, 2008 12:00 AM EST
Posted in: GPUs
The Bet, Would NVIDIA Take It?
In the spring of 2005 ATI had R480 on the market (Radeon X850 series), a 130nm chip that was a mild improvement over R420, another 130nm chip (Radeon X800 series). The R420 to R480 transition is an important one because it's these sorts of trends that NVIDIA would look at to predict ATI's future actions.
ATI was still trying to work through execution on the R520, which was the Radeon X1800, but as you may remember that part was delayed. ATI was having a problem with the chip at the time, specifically with a particular piece of IP. The R520 delay ended up causing a ripple that affected everything in the pipeline, including the R600, which was itself delayed for other reasons as well.
When ATI looked at the R520 in particular, it was a big chip that didn't look like it delivered good bang for the buck, so ATI made an unexpected architectural change in going from the R520 to the R580: it broke the 1:1:1:1 ratio.
The R520 had a 1:1:1:1 ratio of ALUs : texture units : color units : z units, but in the R580 ATI changed this relationship to 3:1:1:1, increasing arithmetic power without increasing texture/memory capabilities. ATI had noticed that the shading complexity of applications was going up while bandwidth requirements weren't, which justified the architectural shift.
This made the R520 to R580 transition a much larger one than anyone would’ve expected, including NVIDIA. While the Radeon X1800 wasn’t really competitive (partially due to its delay, but also due to how good G70 was), the Radeon X1900 put ATI on top for a while. It was an unexpected move that undoubtedly ruffled feathers at NVIDIA. Used to being on top, NVIDIA doesn’t exactly like it when ATI takes its place.
Inside ATI, Carrell made a bet. He bet that NVIDIA would underestimate R580, that it would look at what ATI did with R480 and expect R580 to be a similarly mild improvement. He bet that NVIDIA would be surprised by R580, and that the chip to follow G70 would be huge; NVIDIA wouldn't want to lose again, so G80 would be a monster.
ATI had hoped to ship the R520 in early summer 2005; it ended up shipping in October, almost six months later, and as I already mentioned, it delayed the whole stack. The negative ripple effect made it all the way into the R600 family. ATI speculated that NVIDIA would design its next part (G71, the GeForce 7900 GTX) to be around 20% faster than R520 and not expect much out of R580.
A comparison of die sizes for ATI and NVIDIA GPUs over the years; these boxes are to scale. Red is ATI, green is NVIDIA.
ATI was planning the R600 at the time and knew it was going to be big; it started at 18mm x 18mm, then grew to 19 x 19, then 20 x 20. Engineers kept asking Carrell, "do you think their chip is going to be bigger than this?" "Definitely! They aren't going to lose; after the 580 they aren't going to lose." Whether or not G80's size and power was a direct result of ATI getting too good with R580 is up for debate. I'm sure NVIDIA will argue that it was by design and had nothing to do with ATI, and obviously we know where ATI stands, but the fact of the matter is that Carrell's prediction was correct: the next generation after G70 was going to be a huge chip.
If ATI was responsible, even in part, for NVIDIA's G80 (GeForce 8800 GTX) being as good as it was, then ATI ensured its own demise. Not only was G80 good, but R600 was late, very late. Still impacted by the R520 delay, R600 also had a serious problem with its AA resolve hardware that took a while to work through, and it ended up being a part that wasn't very competitive. Against a very good G80, and without working AA resolve hardware, the R600 had an even tougher time competing. ATI had lost the halo; ATI's biggest chip ever couldn't compete with NVIDIA's big chip, and for the next year ATI's revenues and market share would suffer. While this was going on, Carrell was still trying to convince everyone working on the RV770 that they were doing the right thing, that winning the halo didn't matter...just as ATI was suffering from not winning the halo. He must've sounded like a lunatic at the time.
When Carrell and crew were spec'ing the RV770, the prediction was that not only would it be good against similarly sized chips, but it would also be competitive because NVIDIA would still be in overshoot mode after G80. Carrell believed that whatever followed G80 would be huge, and that RV770 would have an advantage because NVIDIA would have to charge a lot for such a chip.
Carrell and the rest of ATI were in for the surprise of their lives...
116 Comments
Chainlink - Saturday, December 6, 2008
I've followed Anandtech for many years but never felt the need to respond to posts or reviews. I've always used Anandtech as THE source of information for tech reviews and I just wanted to show my appreciation for this article. Following the graphics industry is certainly a challenge; I think I've owned most of the major cards mentioned in this insightful article. But to learn some of the background of why AMD/ATI made some of the decisions they did is just AWESOME.
I've always been AMD for CPU (won an XP1800+ at the Philly zoo!!!) and a mix of the red and green for GPUs. But I'm glad to see AMD back on track in both CPU and especially GPU (I actually have stock in them :/).
Thanks Anand for the best article I've read anywhere, it actually made me sign up to post this!
pyrosity - Saturday, December 6, 2008
Anand & Co., AMD & Co.: Thank you. I'm not too much into following hardware these days but this article was interesting, informative, and insightful. You all have my appreciation for what amounts to a unique, humanizing story that feels like a diamond in the rough (not to say AT is "the rough," but perhaps the sea of reviews, charts, benchmarking--things that are so temporal).
Flyboy27 - Friday, December 5, 2008
Amazing that you got to sit down with these folks. Great article. This is why I visit anandtech.com!

BenSkywalker - Friday, December 5, 2008
Is the ~$550 price point seen on ATi's current high end part evidence of them making their GPUs for the masses? If this entire strategy is as exceptional as this article makes it out to be, and this was an effort to honestly give high end performance to the masses, then why is there no lengthy conversation about how ATi currently offers, by a hefty margin, the most expensive graphics cards on the market? You even present the slide that demonstrates the key to obtaining the high end was scalability, yet you fail to discuss how their pricing structure is the same one nVidia was using; they simply chose to use two smaller GPUs in the place of one monolithic part. Not saying there is anything wrong with their approach at all, but your implication that it was a choice made around a populist mindset is quite out of place, and by a wide margin. They have the fastest part out, and they are charging a hefty premium for it. Wrong in any way? Absolutely not. An overall approach that has the same impact that nV or 3dfx before them had on consumers? Absolutely. Nothing remotely populist about it.

From an engineering angle, it is very interesting how you gloss over the impact that 55nm had for ATi versus nVidia and, in turn, how this current direction will hold up when they are not dealing with a build process advantage. It also was interesting that quite a bit of time was given to the advantages that ATi's approach had over nV's in terms of costs, yet ATi's margins remain well behind nVidia's (not included in the article). All of these factors could have easily been left out of the article altogether and you could have left it as an article about the development of the RV770 from a human interest perspective.
This article could have been a lot better as a straight human interest fluff piece; by half bringing in some elements that are favorable to the direction of the article while leaving out any objective analysis from an engineering or business perspective, this reads a lot more like a press release than journalism.
Garson007 - Friday, December 5, 2008
Never in the article did it say anything about ATI turning socialistic. All it did mention was that they designed a performance card instead of an enthusiast one. How they eventually approach the enthusiast block, and how much it is priced at, is completely irrelevant to the fact that they designed a performance card. This also allowed ATI to bring better graphics to lower priced segments because the relative scaling was much less than what nVidia still has to undertake.

The build process was mentioned. It is completely nVidia's prerogative to ignore a certain process until they create an architecture that works on one they already know; you are bringing up a coulda/woulda/shoulda situation around nVidia's strategy, when it means nothing to the current end-user. The future, after all, is the future.
I'd respectfully disagree about the journalism statement, as I believe this to be a much higher form of journalism than a lot of what happens on the internet these days.
I'd also disagree with the people who say that AMD is any less secretive or anything. Looking in the article there is no real information in it which could disadvantage them in any way; all this article revealed about AMD is a more human side to the inner workings.
Thank you AMD for making this article possible, hopefully others will follow suit.
travbrad - Friday, December 5, 2008
This was a really cool and interesting article, thanks for writing it. :)

However, there was one glaring flaw I noticed: "The Radeon 8500 wasn't good at all; there was just no beating NVIDIA's GeForce4, the Ti 4200 did well in the mainstream market and the Ti 4600 was king of the high end."
That is a very misleading and flat-out false statement. The Radeon 8500 was launched in October 2001, and the Geforce 4 was launched in April 2002 (that's a 7 month difference). I would certainly hope a card launched more than half a year later was faster.
The Radeon 8500 was up against the Geforce3 when it was launched. It was generally as fast/faster than the similarly priced Ti200, and only a bit slower than the more expensive Ti500. Hardly what I would call "not good at all". Admittedly it wasn't nearly as popular as the Geforce3, but popularity != performance.
7Enigma - Friday, December 5, 2008
That's all I have to say. As near to perfection as you can get in an article.

hanstollo - Friday, December 5, 2008
Hello, I've been visiting your site for about a year now and just wanted to let you know I'm really impressed with all of the work you guys do. Thank you so much for this article as I feel I really learned a whole lot from it. It was well written and kept me engaged. I had never heard of concepts like harvesting and repairability. I had no idea that three years went into designing this GPU. I love keeping up with hardware and really trust and admire your site. Thank you for taking the time to write this article.

dvinnen - Friday, December 5, 2008
Been reading this site for going on 8 years now and this article ranks up there with your best ever. As I've grown older and games have taken a back seat, I find articles like this much more interesting. When a new product comes out I find myself reading the forewords and architectural bits of the articles and skipping over all the graphs to the conclusions.

Anyways, just wish I was one of those brilliant programmers skilled enough to do massively parallelized programming.
quanta - Friday, December 5, 2008
While the RV770 engineers may not have had GDDR5 SDRAM to play with during its development, ATI could already use GDDR4 SDRAM, which already doubled the memory bandwidth of GDDR3 SDRAM, AND it was already used in Radeon X1900 (R580+) cores. If there was any bandwidth superiority over NVIDIA, it was because of NVIDIA's refusal to switch to GDDR4, not a lack of technology.