The RV770 Story: Documenting ATI's Road to Success
by Anand Lal Shimpi on December 2, 2008 12:00 AM EST - Posted in GPUs
The Beginning: The Shot Heard Around the World
It all started back in 2001 when ATI, independent at the time, was working on the R300 GPU (Radeon 9700 Pro). If you were following the industry at all back then, you’d never forget the R300. NVIDIA was steadily gaining steam and nothing ATI could do was enough to dethrone the king. The original Radeon was a nice attempt, but poor drivers and no real performance advantage kept NVIDIA customers loyal. The Radeon 8500 wasn’t good at all; there was just no beating NVIDIA’s GeForce4: the Ti 4200 did well in the mainstream market, and the Ti 4600 was king of the high end.
While ATI was taking punches with the original Radeon and Radeon 8500, internally the company decided that in order to win the market - it had to win the halo. If ATI could produce the fastest GPU, it would get the brand recognition and loyalty necessary to not only sell those high end GPUs but also lower end models at cheaper price points. The GPU would hit the high end first, but within the next 6 - 12 months we’d see derivatives for lower market segments. One important takeaway is that at this point, the high end of the market was $399 - keep that in mind.
With everyone at ATI thinking that they had to make the fastest GPU in the world in order to beat NVIDIA, the successor to the Radeon 8500 was going to be a big GPU. The Radeon 8500 was built on a 0.15-micron manufacturing process and had around 60M transistors; R300 was going to be built on the same process, but with 110M transistors - nearly twice that of the 8500 without a die shrink.
Its competition, the GeForce4, was still only a 63M transistor chip, and even NVIDIA didn’t dare to build something so big on the 150nm node; the GF4’s successor would wait for 130nm.
We all know how the story unfolded from here. The R300 was eventually branded the ATI Radeon 9700 Pro and mopped the floor with the GeForce4. What Intel did to AMD with Conroe, ATI did to NVIDIA with R300 - back in 2002.
The success with R300 solidified ATI’s strategy: in order to beat NVIDIA, it had to keep pushing the envelope for chip size. Each subsequent GPU would have to be bigger and faster at the high end. Begun these GPU wars had.
116 Comments
Chainlink - Saturday, December 6, 2008 - link
I've followed Anandtech for many years but never felt the need to respond to posts or reviews. I've always used Anandtech as THE source of information for tech reviews, and I just wanted to show my appreciation for this article. Following the graphics industry is certainly a challenge; I think I've owned most of the major cards mentioned in this insightful article. But to learn some of the background of why AMD/ATI made some of the decisions they did is just AWESOME.
I've always been AMD for CPU (won an XP1800+ at the Philly zoo!!!) and a mix of the red and green for GPUs. But I'm glad to see AMD back on track in both CPU and especially GPU (I actually have stock in them :/).
Thanks Anand for the best article I've read anywhere, it actually made me sign up to post this!
pyrosity - Saturday, December 6, 2008 - link
Anand & Co., AMD & Co., thank you. I'm not too much into following hardware these days, but this article was interesting, informative, and insightful. You all have my appreciation for what amounts to a unique, humanizing story that feels like a diamond in the rough (not to say AT is "the rough," but perhaps the sea of reviews, charts, and benchmarking - things that are so temporal).
Flyboy27 - Friday, December 5, 2008 - link
Amazing that you got to sit down with these folks. Great article. This is why I visit anandtech.com!

BenSkywalker - Friday, December 5, 2008 - link
Is the ~$550 price point seen on ATi's current high end part evidence of them making their GPUs for the masses? If this entire strategy is as exceptional as this article makes it out to be, and this was an effort to honestly give high end performance to the masses, then why no lengthy conversation of how ATi currently offers, by a hefty margin, the most expensive graphics cards on the market? You even present the slide that demonstrates the key to obtaining the high end was scalability, yet you fail to discuss how their pricing structure is the same one nVidia was using; they simply chose to use two smaller GPUs in the place of one monolithic part. Not saying there is anything wrong with their approach at all - but your implication that it was a choice made around a populist mindset is quite out of place, and by a wide margin. They have the fastest part out, and they are charging a hefty premium for it. Wrong in any way? Absolutely not. An overall approach that has the same impact that nV or 3dfx before them had on consumers? Absolutely. Nothing remotely populist about it.

From an engineering angle, it is very interesting how you gloss over the impact that 55nm had for ATi versus nVidia, and in turn how this current direction will hold up when they are not dealing with a build process advantage. It also was interesting that quite a bit of time was given to the advantages that ATi's approach had over nV's in terms of costs, yet ATi's margins remain well behind nVidia's (not included in the article). All of these factors could have easily been left out of the article altogether, and you could have left it as an article about the development of the RV770 from a human interest perspective.
This article could have been a lot better as a straight human interest fluff piece; by half bringing in some elements that are favorable to the direction of the article while leaving out any objective analysis from an engineering or business perspective, this reads a lot more like a press release than journalism.
Garson007 - Friday, December 5, 2008 - link
Never in the article did it say anything about ATI turning socialistic. All it did mention was that they designed a performance card instead of an enthusiast one. How they eventually reach the enthusiast block, and how much it is priced, is completely irrelevant to the fact that they designed a performance card. This also allowed ATI to bring better graphics to lower priced segments, because the relative scaling was much less than what nVidia -still- has to undertake.

The build process was mentioned. It is completely nVidia's prerogative to ignore a certain process until they create the architecture that works on one they already know; you are bringing up a coulda/woulda/shoulda situation around nVidia's strategy - when it means nothing to the current end-user. The future, after all, is the future.
I'd respectfully disagree about the journalism statement, as I believe this to be a much higher form of journalism than a lot of what happens on the internet these days.
I'd also disagree with the people who say that AMD is any less secretive or anything. Looking in the article there is no real information in it which could disadvantage them in any way; all this article revealed about AMD is a more human side to the inner workings.
Thank you AMD for making this article possible, hopefully others will follow suit.
travbrad - Friday, December 5, 2008 - link
This was a really cool and interesting article, thanks for writing it. :)

However, there was one glaring flaw I noticed: "The Radeon 8500 wasn’t good at all; there was just no beating NVIDIA’s GeForce4, the Ti 4200 did well in the mainstream market and the Ti 4600 was king of the high end."
That is a very misleading and flat-out false statement. The Radeon 8500 was launched in October 2001, and the Geforce 4 was launched in April 2002 (that's a 7 month difference). I would certainly hope a card launched more than half a year later was faster.
The Radeon 8500 was up against the Geforce3 when it was launched. It was generally as fast/faster than the similarly priced Ti200, and only a bit slower than the more expensive Ti500. Hardly what I would call "not good at all". Admittedly it wasn't nearly as popular as the Geforce3, but popularity != performance.
7Enigma - Friday, December 5, 2008 - link
That's all I have to say. As near to perfection as you can get in an article.

hanstollo - Friday, December 5, 2008 - link
Hello, I've been visiting your site for about a year now and just wanted to let you know I'm really impressed with all of the work you guys do. Thank you so much for this article, as I feel I really learned a whole lot from it. It was well written and kept me engaged. I had never heard of concepts like harvesting and repairability. I had no idea that three years went into designing this GPU. I love keeping up with hardware and really trust and admire your site. Thank you for taking the time to write this article.

dvinnen - Friday, December 5, 2008 - link
Been reading this site for going on 8 years now, and this article ranks up there with your best ever. As I've grown older and games have taken a back seat, I find articles like this much more interesting. When a new product comes out I find myself reading the forewords and architectural bits of the articles and skipping over all the graphs to the conclusions.

Anyways, just wish I was one of those brilliant programmers skilled enough to do massively parallelized programming.
quanta - Friday, December 5, 2008 - link
While the RV770 engineers may not have had GDDR5 SDRAM to play with during its development, ATI could already use GDDR4 SDRAM, which already doubles the memory bandwidth of GDDR3, AND it was already used in Radeon X1900 (R580+) cores. If there was any bandwidth superiority over NVIDIA, it was because of NVIDIA's refusal to switch to GDDR4, not lack of technology.