Offline News  
#1 Posted : Wednesday, September 30, 2009 5:12:39 PM(UTC)

Rank: Member


Groups: Administrators, Registered
Joined: 9/23/2007(UTC)
Posts: 25,073

Was thanked: 3 time(s) in 3 post(s)

NVIDIA has begun to disclose some information regarding its next generation GPU architecture, codenamed "Fermi". Actual product names and specifics were not disclosed just yet, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance, and its broader compatibility with computational applications was discussed.

The GPU codenamed Fermi will feature over 3 billion transistors and be produced on TSMC's 40nm process. If you remember, AMD's new RV870 is 2.15 billion transistors and is also manufactured at 40nm, so Fermi will be significantly larger and more expensive to produce. Fermi will be outfitted with more than double the number of cores of the GT200, 512 in total. It will also offer 8x the peak double-precision compute performance. In addition, Fermi is the first GPU architecture to support ECC, so it can compensate for some memory errors and potentially scale to higher densities, and it will be able to execute C++ code.

NVIDIA's CEO Jen-Hsun Huang showed off the first Fermi-based, Tesla-branded add-in board at the GPU Technology Conference taking place in San Jose over the next few days.

We will have more information regarding Fermi posted as it becomes available. For now, we've got the official release below, along with some images. Be sure to check out NVIDIA's official site as well for more details...

NVIDIA Unveils Next Generation CUDA GPU Architecture - Codenamed "Fermi"

New Ground-Up Design Gives Rise to the World's First Computational GPUs

SANTA CLARA, Calif. -Sep. 30, 2009 - NVIDIA Corp. today introduced its next generation CUDA GPU architecture, codenamed "Fermi". An entirely new ground-up design, the "Fermi" architecture is the foundation for the world's first computational graphics processing units (GPUs), delivering breakthroughs in both graphics and GPU computing.

"NVIDIA and the Fermi team have taken a giant step towards making GPUs attractive for a broader class of programs," said Dave Patterson, director of the Parallel Computing Research Laboratory at U.C. Berkeley and co-author of Computer Architecture: A Quantitative Approach. "I believe history will record Fermi as a significant milestone."

Presented at the company's inaugural GPU Technology Conference, in San Jose, California, "Fermi" delivers a feature set that accelerates performance on a wider array of computational applications than ever before. Joining NVIDIA's press conference was Oak Ridge National Laboratory, which announced plans for a new supercomputer that will use NVIDIA GPUs based on the "Fermi" architecture. "Fermi" also garnered the support of leading organizations including Bloomberg, Cray, Dell, HP, IBM and Microsoft.

"It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore," said Jen-Hsun Huang, co-founder and CEO of NVIDIA. "The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry."

As the foundation for NVIDIA's family of next generation GPUs - namely GeForce, Quadro and Tesla - "Fermi" features a host of new technologies that are "must-have" features for the computing space, including:

  • C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute. 
  • ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale 
  • 512 CUDA Cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs 
  • 8x the peak double precision arithmetic performance over NVIDIA's last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry 
  • NVIDIA Parallel DataCache - the world's first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand 
  • NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (eg: PhysX fluid and rigid body solvers) 
  • Nexus - the world's first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio
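
As a rough illustration of why the double-precision bullet matters for HPC workloads like linear algebra (a toy sketch, nothing NVIDIA-specific): naive accumulation in single precision drifts in a way double precision largely does not. Python floats are already IEEE 754 doubles, so single precision is simulated below with the `struct` module.

```python
import struct

def to_f32(x):
    """Round a Python float (IEEE 754 double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

N = 100_000
acc32 = 0.0
acc64 = 0.0
for _ in range(N):
    acc32 = to_f32(acc32 + to_f32(0.1))  # every step rounded to a 24-bit significand
    acc64 = acc64 + 0.1                  # rounded to a 53-bit significand

# acc64 stays within a tiny fraction of the exact 10,000.0; acc32 typically
# drifts, because 0.1 is represented less accurately and each add loses bits.
print(acc32, acc64)
```

The same effect, compounded over millions of operations, is why simulation and quantum chemistry codes insist on fast double precision rather than just fast single precision.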

Images, technical whitepapers, presentations, videos and more on "Fermi" can all be found at: www.nvidia.com/fermi.

Offline gibbersome  
#2 Posted : Wednesday, September 30, 2009 6:57:32 PM(UTC)

Rank: Member


Groups: Registered
Joined: 9/22/2009(UTC)
Posts: 1,940

What does Nvidia have in the works? The shine hasn't even worn off the 5870's glittering case and Nvidia releases just enough information to tantalize. While the ATI 5000 series was merely evolutionary in terms of GPU advancement, will the GT 300 be a true revolution?

The details are much too vague, and I don't understand how ECC, double-precision arithmetic performance, etc. will impact game performance and FPS. But it's exciting nonetheless.

So Nvidia's next-gen GPUs will be monstrous no doubt, but at what price? If the cards are priced at $600 or above, Nvidia might as well not bother.

Offline acarzt  
#3 Posted : Wednesday, September 30, 2009 10:39:56 PM(UTC)

Rank: Advanced Member


Groups: Registered
Joined: 8/4/2003(UTC)
Posts: 3,567
United States
Location: Texas

Thanks: 2 times
Was thanked: 19 time(s) in 19 post(s)

Hmmm... these things are becoming more and more CPU-like. L1 and L2 cache? ECC? Ehhhh.... This thing sounds like it's more meant to handle the kind of apps a typical CPU would. I don't like the sound of ECC being on board... that's gonna add an extra cycle or 2 to the memory, which is gonna slow things down. I can see its appeal for image quality.. sort of.. I guess it would get rid of some artifacts and such? Some odd occurrences that might show up? But really most games are pumping along so fast you won't really notice a small error here and there. It's really only good for crunching code that NEEDS to be checked.
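
(A toy aside on that ECC point: the "extra cycle or 2" is essentially a syndrome check - compute parity, locate the flipped bit, repair it. The sketch below is a classic Hamming(7,4) single-error corrector in Python, purely illustrative; NVIDIA hasn't published the actual scheme used in Fermi's memory path.)

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Correct up to one flipped bit, then return the 4-bit data value."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:               # nonzero syndrome names the flipped position
        b[syndrome - 1] ^= 1   # the "extra cycle": repair in place
    d = [b[2], b[4], b[5], b[6]]
    return sum(bit << i for i, bit in enumerate(d))
```

Real ECC DRAM uses a wider SECDED code over 64-bit words, but the latency trade-off acarzt describes is the same idea: every read pays for the check, whether or not a bit flipped.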

This seems more like a piece of hardware that belongs in a server... not in a gaming PC. Of course I am going purely off of the specs on paper. That thing is practically a fully functioning computer on its own lol Hopefully, it all comes together to make a sweet gaming GPU, but I somehow expect this to be only slightly faster than current gen stuff.

Also, no mention of DX11... will it support it? hmmmm... Perhaps they are pushing Cuda so hard they lost focus on gaming?

Offline Xylem  
#4 Posted : Thursday, October 1, 2009 4:13:26 AM(UTC)

Rank: Member


Groups: Registered
Joined: 3/11/2009(UTC)
Posts: 192
Location: Bengalooru (Bangalore), India

IMHO, NVIDIA is trying to build a gaming rig without any dependency on CPUs. Maybe they are doing a gaming "ION" platform.

Dream Gaming "ION" platform Specs:

NVIDIA CUDA with L3 cache & cores rated @ 3GHz, and an Atom 330-ish proc just for handling the OS part. Everything else running outta GPU power. Imagine plonking one of these on a Core i9.. :) Then it'll be a real GPU - CPU cat fight. ;)

Offline 3vi1  
#5 Posted : Thursday, October 1, 2009 8:15:25 AM(UTC)

Rank: Advanced Member


Groups: Moderator, Registered
Joined: 5/12/2008(UTC)
Posts: 5,078
Location: U.S.

I think you've got it Gib.

To summarize: "nVidia says: WAIT! Don't run out and buy an ATI! We're going to release something competitive, or at least faster but ultra expensive, in 3-6 months."

Offline Jeremy  
#6 Posted : Thursday, October 1, 2009 8:33:08 AM(UTC)

Rank: Member


Groups: Registered
Joined: 10/14/2008(UTC)
Posts: 119

Nothing regarding availability or where they are, exactly, with development. Sure, he's holding what looks to be a next gen part, but who's to say there's actually anything inside it? 2 months? 3 months? 6 months? I suppose time will tell.

Given the size of the GPU itself and the specs listed, that certainly sounds like it's going to be one pricey card. My only thought on that is that perhaps things such as ECC will only be found on the Tesla cards, and not on the mainstream, gaming-centric cards.

Offline slugbug  
#7 Posted : Thursday, October 1, 2009 11:00:18 AM(UTC)

Rank: Member


Groups: Registered
Joined: 4/18/2009(UTC)
Posts: 344
Location: Montreal, Canada

I wonder how long it will be until Nvidia releases a GTX 300 X2. I can't imagine how much power that would draw.

Offline kid007  
#8 Posted : Thursday, October 1, 2009 11:15:07 AM(UTC)

Rank: Member


Groups: Registered
Joined: 8/12/2004(UTC)
Posts: 2,050
Location: United States, Michigan

These people don't even let their own hardware be at least a year old... DAMN, talk about helping the economy! Updating your car, for most of us average Americans, is about every 3-6 years; updating your computer, according to Nvidia, ATI, Intel, and AMD, is about every 6 months. No wonder these people are making gazillions of dollars...

Offline ClemSnide  
#9 Posted : Thursday, October 1, 2009 11:53:00 AM(UTC)

Rank: Member


Groups: Registered
Joined: 7/23/2009(UTC)
Posts: 633

Well now, kid007, you don't really HAVE to upgrade-- or at least you don't have to do it very often, only when your current hardware refuses to run your desired applications (read: games). I have a Radeon 3850, which was the fanciest one that would fit into an AGP slot-- yes, my system is kind of old-- and while I'm not getting the performance I'd like, it'll run WoW, which is my primary concern.

One very nice thing about new products is that they drive the price of old products down sharply. If you don't need DX11 support (and who does, at the moment?), a previous-generation board will give you more bang for the buck. I am of the opinion that high-end graphics boards are mostly for bragging rights.

I do still want that Radeon 5870, which has the advantage of being non-vaporware. If my new system build's budget can handle it, I'll go with that-- I doubt if I could afford nVidia, which is traditionally way more expensive for a little more performance.

Offline mimimimi  
#10 Posted : Wednesday, July 23, 2014 4:58:13 AM(UTC)

Rank: Member


Groups: Registered
Joined: 7/23/2014(UTC)
Posts: 1

slugbug wrote:

I wonder how long it will be until Nvidia releases a GTX 300 X2. I can't imagine how much power that would draw.

Indeed, and I searched for Fermi and found it is powerful.

Fermi is the codename for a GPU microarchitecture developed by Nvidia as the successor to the Tesla microarchitecture. It was the primary microarchitecture used in the GeForce 400 Series and GeForce 500 Series.

Fermi Graphic Processing Units (GPUs) feature 3.0 billion transistors and a schematic is sketched in Fig. 1.


