Your point regarding efficiency is well made. Maybe I misunderstood you, however; I thought you were talking about the overall performance between Deneb and Piledriver. Your clock-for-clock comparison only illustrates one aspect of overall performance, namely efficiency. At the end of the day, as your own data shows, Piledriver is still more powerful "overall" than Deneb, though yes, it has its own quirks since it’s a different architecture. I completely agree with what you said about the P3/P4, comparing it to BD and the K10: it's simply not as efficient clock for clock. Deneb also had better FPUs from what I can tell. Assuming the clock efficiency will only be on par with K10, the inclusion of HSA, OpenCL, and Mantle for gaming should, in theory, make up for those serial/parallel processing inefficiencies. What I think many don't understand is that AMD isn't competing with Intel solely on raw x86 core performance, which is essentially what Rory Read said. They believe the innovation they need to break through Moore's ceiling, and to keep up with a much more resource-laden Intel, is to find a way to integrate and leverage multiple system resources so they work more closely in unison. Hence the much-talked-about buzzword "HSA". Intel is doing something similar with Crystal Well and its 128MB eDRAM L4 cache. Personally I find Intel’s solution very interesting and I'd like to see it compared against Kaveri with HT disabled.
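The serial/parallel efficiency point above can be made concrete with Amdahl's law: offloading only helps the fraction of a workload that is actually parallelizable. A minimal sketch (the 60% parallel fraction and 8x offload speedup are illustrative assumptions, not measured figures for any chip):

```python
def amdahl_speedup(parallel_fraction: float, parallel_speedup: float) -> float:
    """Overall speedup when only part of a workload can be accelerated."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / parallel_speedup)

# Hypothetical workload: 60% of the work is data-parallel and could be
# offloaded (e.g. to GPU cores via OpenCL/HSA) at an assumed 8x speedup.
print(round(amdahl_speedup(0.6, 8.0), 2))  # -> 2.11
```

Even with a big GPU-side speedup, the serial 40% caps the overall gain, which is why per-core x86 efficiency still matters alongside HSA-style offload.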
Many people, even within the tech industry, don’t use Linux, which is too bad; I think they’re missing out. I natively boot Ubuntu and Arch, and I run Windows just for development, and maybe a game here and there, in virtualization. Linux, in my opinion, is a much more optimized OS and by far the most advanced. I admire Mac OS and Aqua's GUI layer, but at the end of the day, while it has many novelties, it just doesn’t offer what Linux does... and I just can't justify the cost of a Mac anymore. I am happy that it’s allowed BSD UNIX to return to the spotlight, however. I'm not much of a Windows fan though.
Off subject - I didn't realize you were Joel Hruska from ExtremeTech and [censored]. I read your article regarding the 9590 vs. Ivy-E, which was well written. The 9590 is a bit ridiculous in my opinion though. I’d still go for the Ivy-E at their price points ($500 for the AMD and $550 for the Intel?). What are your thoughts on the speculation surrounding the decline of x86 and the possible ARM saturation of the low-power server market? The latter seems reasonable, but I’d imagine many changes would need to be made, and they'd cost quite a bit of money.
Thanks for the information Joel, especially the explanation regarding efficiencies which I had never considered.
Again, I wrote this at work so I apologize if it’s choppy.
Yeah, you have a point Neil, perhaps we take things too far [H] and it is simply a price/performance issue. Especially since users can't tell the difference for most normal everyday use anyways. It is what it is, but according to another's post: I like taking my super-charged economy car with an Edelbrock manifold and street racing it against Ferraris on the weekend. So simple and normal may have already flown out the Windows... at least since "8" anyways [;)]
Oh and I apparently like pixie dust and magical Linux-Sutra to justify my hardware habits! How magical...[:D]
Whoa Ken... where'd you come from?! Nice addition to the discussion. I had no idea about AWS jumping ship. I deal with them quite a bit and they're a pretty big player in the arena. Many developers, from what I'm seeing, are hyped about HSA and the advantages it holds for future processing. Kaveri's computational power with HSA enabled is impressive. I think you meant public "access" though. It read pretty funny the first time through, regarding their "public assess". [;)]
I'm typing this up on my Nexus so I apologize if it reads sloppy. How did you get a review/marketing sample? I thought AMD wasn't sending those out until after CES, and as far as I know one wasn't on display at APU13; they just showed the video. Can you post a screenshot of Task Manager or CPU-Z?

I agree with you that being the underdog doesn't automatically grant you some level of artificial success. You disputed my claims about Linux, and I posted the real-world benchmarks from Phoronix backing them up, along with several other articles that did the same. Those are legitimate real-world benchmarks, not the theorized synthetic ones that you and I both know are useless outside of trying to convince the uninitiated to buy this over that.

I don't agree that single-core performance between Piledriver and Deneb favors the latter. That's an old claim from the hysteria that went viral when Bulldozer came out. Bulldozer sucked; let's not argue that point. In almost every real-world benchmark I see or run, the Vishera chip wins hands down on multi- and single-threading... finally.

For non-gaming workloads I also disagree with the idea that the 8-core FX is on par with an i5. Did you check out the links I provided? For gaming performance right now, 2 threads is almost all you need, and Intel leads in single-threaded performance. That said, Tek Syndicate did show the FX chip beating the Ivy i5 and checking in just behind the i7-3770; I posted that link above (Logan is an Intel fanboy who openly admits it). But for work-oriented tasks like those benchmarks I shared, the FX chip's 8 physical integer cores are a force to be reckoned with. Under professional-grade software like Blender, Sony Vegas, or even Adobe, those 8 physical integer cores give Intel's (Ivy) 8 virtual cores a run for their money, since those programs are optimized to use many threads and Hyper-Threading isn't that efficient.
Some developers purposely disable HT in the BIOS for performance reasons. For virtual machines those 8 physical cores also shine against the i7's (Ivy) virtual ones; you can actually pass those FX cores through to the VM. Now yes, my 3930K handily beats the FX everywhere, but I paid $169 for the FX and $569 for the 3930K.

At the end of the day, going back to the APU and the article: AMD is saying that even with the 10% clock reduction, SteamrollerB is still 20% more efficient than Piledriver, and the same goes for the iGPU, with a 30% increase, after a clock reduction, over the A10's 8670 "Devastator", which was pre-GCN anyways. Kaveri will not be an answer to Haswell. Some sites are claiming it'll compete with a Haswell i5, but I'm very skeptical. I do think it will beat an Ivy Bridge i5, and perhaps give Ivy's i7 a run, based on those increases over Piledriver and the inclusion of, again, OpenCL, HSA, and Mantle. You can't keep waving the "single core speed" banner when you have variables like those in play.

A large part of the reason Intel is faster is that compilers, Windows, and even some benchmarking software are purposefully optimized to favor Intel's architecture. The lesson AMD has needed to learn is that it's not necessarily the hardware but the software that makes the chip great. If AMD could get the same software-optimization advantage that Intel has, the performance differences between the two would shrink. This is what we see under Linux, since it's a community-driven, neutral OS: behold, Intel's i7-3770(K) has barely any lead on the FX Piledriver, if any at all. This is what AMD is doing in partnering with the HSA Foundation and with Khronos on their OpenCL standard for computation. For gaming, Mantle makes the CPU cores almost irrelevant. TrueAudio is also interesting, as is the ARM co-processor.

Opinions aside, I appreciate the forum we have going on here. I will confess, though, that you're right about the threads, it seems.
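On the physical-vs-virtual core point: on Linux, `/proc/cpuinfo` lists one entry per logical CPU, and Hyper-Threading shows up as multiple logical CPUs sharing the same `(physical id, core id)` pair. A hypothetical helper to tell them apart (the sample text below mimics a generic 2-core/4-thread SMT chip, not any specific CPU):

```python
def count_cpus(cpuinfo_text: str):
    """Return (logical, physical) CPU counts from /proc/cpuinfo-style text."""
    logical = 0
    physical_cores = set()
    pid = cid = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":          # one entry per logical CPU
            logical += 1
        elif key == "physical id":      # which socket/package
            pid = value
        elif key == "core id":          # which physical core on that package
            cid = value
            physical_cores.add((pid, cid))
    return logical, len(physical_cores)

# Sample mimicking a 2-core chip with SMT (4 logical CPUs):
sample = "\n".join(
    f"processor\t: {n}\nphysical id\t: 0\ncore id\t: {n % 2}\n"
    for n in range(4)
)
print(count_cpus(sample))  # -> (4, 2)
```

On an SMT chip the two numbers differ; on an FX with 8 integer cores they match, which is the distinction being argued about above.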
WCCF updated the article and redacted the "4/8" they had on the slide chart. If I can find a way to upload an image to this thread, I'll happily post a screenshot of the original article showing the 8 threads, just to back up my sanity. I think WCCF is about to be dropped from my feed... So on that point I stand corrected.
I typed that up at work, so I know it reads rough.
– I’m going to try to respond to everyone in one post –
To quote Michael Larabel at Phoronix:
"From the initial testing of the brand new AMD FX-8350 "Vishera", the performance was admirable, especially compared to last year's bit of a troubled start with the AMD FX Bulldozer processors.
For many of the Linux computational benchmarks carried out in this article, the AMD FX-8350 proved to be competitive with the Intel Core i7 3770K "Ivy Bridge" processor. Seeing the FX-8350 compete with the i7-3770K at stock speeds in so many benchmarks was rather a surprise since the Windows-focused AMD marketing crew was more expecting this new high-end processor to be head-to-head competition for the Intel Core i5 3570K on Microsoft's operating system. "
So yes, according to Phoronix, under Linux the FX-8350 was competitive with the i7-3770(K), since Linux is better optimized for AMD's architecture. Unfortunately Windows has always favored Intel, which is where the "Wintel" nickname comes from.
"Kaveri + Hyper Threading" When I said Hyper Threading, I was using it as a general term, since most people do not understand the different architectures and think of core multi-threading as "Hyper Threading"; Intel has been far more successful with its endeavors in this regard, so more people are familiar with that term. I should have explained it differently. Yes, I know about CMT, but I appreciate you sharing that info rather than simply blasting me. I never said it'd be an 8-core chip, however. Based on a slide within an article I read, it's supposed to be 2 modules, 4 cores, and handle 8 threads. AMD made many changes to its Steamroller core, so it's now "SteamrollerB". I haven't found much information on the exact differences, but see the following article, which illustrates that it is a 4-core/8-thread chip. ->http://wccftech.com/amd-announces-a107850k-kaveri-apu-specifications-architectural-details-launching-14th-january-512-gcn-cores-28nm-steamroller/ [correction - this article was updated, redacting the "4/8" on the Kaveri slide. Kaveri is NOT able to handle 8 threads]
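To make the CMT-vs-SMT distinction concrete, here is a toy model of how each design arrives at its thread count (topology numbers for the FX-8350 and i7-3770 are the published ones; Kaveri is modeled with the corrected 2-module figure):

```python
def cmt_threads(modules: int, cores_per_module: int = 2) -> int:
    """AMD CMT: each module holds full integer cores that share a front end
    and FPU; every integer core runs one hardware thread."""
    return modules * cores_per_module

def smt_threads(cores: int, threads_per_core: int = 2) -> int:
    """Intel SMT (Hyper-Threading): each physical core exposes extra logical
    threads that share that single core's execution resources."""
    return cores * threads_per_core

print(cmt_threads(modules=4))  # FX-8350: 4 modules -> 8 integer cores/threads
print(smt_threads(cores=4))    # i7-3770: 4 cores   -> 8 logical threads
print(cmt_threads(modules=2))  # Kaveri:  2 modules -> 4 cores, 4 threads
```

Both reach "8 threads" on the desktop flagships, but by different means: CMT adds real integer cores, SMT time-shares one core, which is why the per-thread behavior differs.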
And where did you get the idea that it’d be on par with Deneb as far as single threaded performance goes?
-->http://cpuboss.com/cpus/AMD-FX-4350-vs-AMD-A10-6800K The A10 Piledriver matches the quad core FX-Piledriver for the mysterious reason of it being the exact same CPU architecture.
-->http://cpuboss.com/cpus/AMD-Phenom-II-X4-965-vs-AMD-FX-4350 The quad core FX chip beats the Deneb.
-->http://cpuboss.com/cpus/AMD-Phenom-II-X4-965-vs-AMD-A10-6800K The A10 beats the Deneb quad core in single-threaded performance and matches it in multi-threaded performance.
So if we all accept that Kaveri will be “some measure” better than Richland, then Kaveri will also be “some measure” better than the Piledriver quad core and the Deneb Phenom quad core according to the benchmarks cited on CPUBoss. Truly I’m not a huge CPUBoss fan, but it is flashier than CPU-World.
What you’re missing from my statements is that with the advent of HSA, OpenCL, and Mantle, AMD will be able to leverage its better assets, namely its Radeon cores, to help out its weaker x86 cores. It’s the exact inverse of Intel, which has a weaker iGPU and a stronger CPU core, which is why they invented Crystal Well with its eDRAM L4 cache. So overall, granting that HSA, OpenCL, and Mantle are utilized properly for games and possibly content-creation software, Kaveri could give the stock Ivy Bridge i7 a run for its money under the right conditions. Intel, Nvidia, Adobe, and Apple also have a stake in OpenCL, and HSA has AMD, ARM, Samsung, TI, and Qualcomm. Plus both have various other open-source supporters, along with built-in support under Linux and Mac OS. So it’s safe to say that both open standards will be utilized in the future.
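The "leverage the stronger unit" idea behind HSA-style scheduling can be sketched in a few lines. This is a toy illustration only; the task kinds and throughput numbers are invented for the example and are not benchmarks of any real part:

```python
# Hypothetical relative throughputs (arbitrary units): the CPU wins on
# branchy/serial work, the iGPU on wide data-parallel work.
THROUGHPUT = {
    "cpu":  {"serial": 10.0, "data_parallel": 4.0},
    "igpu": {"serial": 1.0,  "data_parallel": 40.0},
}

def dispatch(task_kind: str) -> str:
    """Pick the device that handles this kind of task fastest."""
    return max(THROUGHPUT, key=lambda dev: THROUGHPUT[dev][task_kind])

print(dispatch("serial"))         # -> cpu
print(dispatch("data_parallel"))  # -> igpu
```

The point of shared memory in HSA is that this kind of routing becomes cheap: no copy across a PCIe bus, so even small data-parallel chunks are worth sending to the GPU cores.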
Also don’t forget that Mantle is supported by all 3 next-gen consoles and the R-X GPUs, so it will be adopted very well in the near future, since it has to be for anyone to enjoy new games on the next-gen consoles. To give you a taste of it, check out this video from APU13 showing Kaveri outdo an i7-4770K paired with a GeForce 630 while playing BF4.
Truthfully, we all know that if you’re using a GeForce 630 you would use an i5 and not an i7, but the point they’re making about threading is valid.
Here's the bottom line - I love my 3930K, and in the past I only used Intel CPUs because I wasn't much of a gamer and was kind of a Mac fan to begin with. After someone at U of M finally talked me into buying an FX-8350 for VMs under Linux for some labs we set up, I found myself questioning a lot of the synthetic benchmarks out there after I played around with it.
Check out these pages:
Then I discovered that not only was Windows specifically optimized for Intel's architecture, but many of the compilers under Windows were as well. The GCC compiler, which Intel and AMD both contribute to quite a bit, is a good example of what can be done with AMD's module approach when fully utilized. In those benchmarks above, the FX-8350 is nearly on par with the i7-3770K under the right conditions. Also, as Michael Larabel shows on Phoronix, the FX-8350 performs great under Linux, on par with the i7-3770(K). These are facts that many try to suppress out of nothing other than bias. I personally use both Intel and AMD processors now, and both have their place depending on how they’re used. Intel definitely has its advantages, and so does AMD. AMD’s price-to-performance ratio is pretty legendary and compelling with many of its products, although not all.
I’ve been working with computers since my first PowerPC back in the early 90s, before the x86 PC took over entirely. I’ve seen both of these companies go at each other time and again, and the different approaches they’ve taken to do so. AMD has been in a pretty dismal position the last 2-3 years, but just as they did in the 90s and early 2000s, they’ll bounce back and bring the competition to Intel, which is good for all of us. In some ways AMD is a tad more innovative than Intel, but they have to be since Intel has a lot more resources to leverage. AMD’s chipsets tend to be more resourceful and last a long time while still being relevant. The 990FX chipset, in my opinion, was more compelling than what Intel used for Sandy Bridge. Plus, how happy have we been with the dismal increases from Sandy Bridge to Ivy and Haswell? I personally feel as though AMD has been working on its "module" pretty aggressively since Bulldozer flopped onto the scene. I was very surprised by the A10-6800K’s performance and floored by Kaveri’s potential. In the mobile arena, AMD without a doubt has given Intel a real run for their money, and both have almost made low- to mid-grade mobile Nvidia dGPUs obsolete.
The undeniable truth is this: we all need AMD and others (ARM) to be successful, no matter what our biases may be, because without real competition Intel has shown that it’s a tyrant that cares little for the consumer or a free-market economy. Without competition, Intel can charge whatever ridiculous price it wishes on its own terms. They’d simply rather “out-buy” their competition and place a stranglehold on the entire industry, as they did in the 90s and early 2000s under Andy Grove's “only the paranoid survive” motto. That was, until the FTC, the European Commission, Japan's FTC, and even the trade commissions in China and some in South America found Intel guilty of violating fair-trade rules and restricting consumer choice by forcing retailers to sell only Intel products, or else Intel wouldn’t sell to them, in exchange for hefty lump-sum bonuses, which is illegal across the globe. Now Intel HAS something to be “paranoid” about.
For those of you in denial about that last fact, here’s a pretty unbiased article lightly chronicling Intel’s many run-ins with the law.
You don’t have to agree with any of my opinions, but facts are simply facts. Don’t let your bias cloud your judgment. Both companies have great products to offer across multiple markets.
I agree. This one article actually flies in the face of many others out there which "show" a 20% CPU performance increase and a 30% GPU performance increase. I honestly think AMD is being conservative here. What he also failed to mention directly is that with Kaveri AMD has developed its own version of Hyper-Threading. You'll notice on some WCCFTech slides that Kaveri is a 4-core/8-thread chip. So you'll get a very decent iGPU, actually the best to date, with a quad core that could handle 8 threads at near FX-8350 performance levels.

I believe, based on the many articles I've read, AMD's slides, and the performance demonstrations at APU13, that Kaveri will be on par with an Ivy Bridge i7, depending on HSA/OpenCL support. Yes, that's still a generation behind in terms of raw CPU performance, but keep in mind this thing will have THE BEST iGPU, combined with Mantle and TrueAudio. Mantle/HSA will leverage the iGPU, and an R7-260X if you'd like, to offload not just the more parallel workloads but also the floating-point math from the CPU cores to the GPU cores, which helps play to the APU's strengths. Not to mention each part will be an equal citizen when it comes to RAM utilization, so we'll actually be able to start utilizing RAM efficiently for the first time this decade! I'm thoroughly excited for Kaveri because it'll be a benchmark in processing history if HSA takes off.

What I really appreciate is that AMD has almost completely embraced the open-source community. They don't have half the resources Intel or Nvidia have, yet they're right there in terms of commitment and support of the Linux kernel. The FX-8350 was a monster under Linux, actually it matched the i7-3770, and Kaveri is slated to be even better. I just hope Wintel... err, I mean Windows, at least tries to fully utilize Kaveri. It's sad when a single partnership can stranglehold an entire industry.