44 CUs is almost perfect gpu utilization when you figure the actual gaming performance difference (not counting compute-intensive perks) between the 7970 and 7950 (or gk104 for all intents and purposes, considering its 256 sfus really make it 1792 ops anyway) per clock, once you account for their bandwidth requirements and excess. That is to say, somewhere around 1877sp per 32 rops is exactly what they would want.
900mhz is perfect for a process that was made for a nominal power/performance/yield voltage of .85-.9v and has characteristics of approximately 1ghz/1v.
fwiw, 900mhz with 2816sp would require 5940mhz memory on a 384-bit bus to keep the same bandwidth ratio; 2560sp at 1ghz would need 6ghz.
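The proportionality behind those figures can be sanity-checked with a minimal sketch, assuming the 2560sp/1GHz/6GHz combination above as the baseline (the function name is just for illustration):

```python
# Hold effective memory bandwidth per unit of shader throughput constant,
# using 2560sp @ 1GHz core with 6GHz (effective) memory on a 384-bit bus
# as the baseline quoted above.

def required_mem_clock(sp, core_mhz, base_sp=2560, base_core_mhz=1000, base_mem_mhz=6000):
    """Effective memory clock (MHz) that preserves the baseline's
    bandwidth-to-throughput ratio on the same bus width."""
    return base_mem_mhz * (sp * core_mhz) / (base_sp * base_core_mhz)

print(required_mem_clock(2816, 900))  # -> 5940.0
```

Both configurations land on the same bandwidth-to-throughput ratio, which is presumably where the 5940MHz figure comes from.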
This sounds very un-ati-like.
I was very much expecting 40-42 CUs (36 for the pro) and 7ghz (6ghz pro) ram, because of the clockspeeds 28nm is capable of (to keep die size down) and the bandwidth available in the said power envelopes this gen, but this would be better for power consumption. Clockspeeds would be lower, but it should yield well at those clocks and consume less power, considering the trade-off of around 10% clockspeed for 10% efficiency (which would be fairly linear with sufficient bw) and around 6-7mm2 per CU.
Just theorizing, but I can't help but wonder about this and read into it a little.
While AIOs/laptops and cooling power-hungry chips can be problematic in certain designs, this issue sounds not unlike some well-known but seldom-reported issues on some amd gpu s[censored].
I am unsure about Barts/Blackcomb, but certainly some early Pitcairn parts had electrical noise problems (because of capacitors; base spec too low from amd?). Some AIBs had cards exhibiting issues very similar, if not identical, to those mentioned in this article (ie the black screen issue). It has since been rectified as far as I know, but given the proximity/power envelope of Barts vs Pitcairn, combined with the fact the mobility 6970 uses an mxm module rather than the notebook standard of bga, I cannot help but wonder if there is some correlation. Seems possible, but it certainly could be something else as well.
Here is a TPU article referencing the issue with Sapphire cards, but it's insinuated this is an issue beyond a single AIB because of amd's involvement and response:
Very nice analysis, Rob. I agree with all the major points...
...except for the xbox one, of course (but I understand an opening paragraph has to come from somewhere).
I think a swing from a 3:1 pre-order/public-opinion deficit to 2:1 (an indication the reversal helped public opinion, but their platform is still behind) isn't exactly spectacular.
I think the interesting thing is that to get back on an even keel, if not pull ahead of the competition, they need to do EXACTLY the same things you mentioned regarding tablets and mobile in general. We are certainly on the same page regarding their problems across new platforms. If they were to drop the price (cheaper than the ps4, without the Kinect most people don't want?) and open up development/publishing of apps/games for the system to at least parity with the competition, while having those people releasing products on the system either out of the gate or very early in the cycle...it could be a winning combo.
Be it corporate ineptness (old-think), arrogance (they've been on top so long), or just the slow turning radius of such a huge ship to adapt to the new reality, things need to change at Microsoft or the world will change to one with them in a significantly smaller role.
Has anyone noticed the majority of these overclocked s[censored] are doing the same thing?
Take a chip like the 760, which is 980/1033/1130-1150 for base, average, and max...where the max is, in reality, the median under load.
The base clock really means nothing...the clock you actually get is the median (the highest clock in this example, where a 760 advertises 980/1033). It's a clever trick to make the chips at stock faster than they appear on paper.
The grand majority of overclocked s[censored] simply use the median clock as the second number...marketed as the boost clock (the more honest 'average' on a stock sku)...and step the median clock up proportionally per the originally quoted ratios, not unlike upping a multiplier on a cpu.
In effect, the real performance gain is far less than it would appear, and this probably gives insight into how the gpus are binned when taking boost into account.
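One way to read that trick as arithmetic (a hedged sketch; the stock numbers are the 760 example above, and the scaling rule is my interpretation of "proportionally per the originally quoted ratios," not a measured fact):

```python
# Model an "overclocked" sku that re-labels the stock card's median clock
# as its advertised boost clock, then only steps the real median up by the
# originally quoted base:boost ratio. Stock numbers are the 760 example:
# 980 base / 1033 boost advertised, ~1150 median observed under load.

stock_base, stock_boost, stock_median = 980, 1033, 1150

# The OC sku re-labels the stock median as its advertised "boost" clock...
oc_boost = stock_median                            # 1150 on the box
# ...and steps its real median up by the quoted base:boost ratio.
oc_median = oc_boost * (stock_boost / stock_base)  # ~1212 under load

apparent_gain = oc_boost / stock_boost - 1         # what the spec sheet implies
real_gain = oc_median / stock_median - 1           # actual median vs. stock median

print(f"apparent: {apparent_gain:.1%}, real: {real_gain:.1%}")
# -> apparent: 11.3%, real: 5.4%
```

Under that model the box implies an 11% bump, while the clocks the card actually holds move about half as much, which is the gap described above.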
Every time a Microsoft employee speaks, a reiterative product is created.
This is similar to the SE principle, which states that every time a game doesn't sell 10 million copies, an FFVII tie-in is subsequently born for a future product.
One works on dudebros, the other on sentimentals.
Be it 'Merica or the Japanese way, product trolling at its finest.
Typically I dig HH content and articles, but this needs some massive work in the editing dept. Also, I generally disagree with a lot of the arguments.
1. AMD didn't announce anything. They were posed the question by a journalist and said, essentially, 'It was shown with Gaming Evolved logos for the first time at GDC; it would make sense for that to happen.' He basically helped the journalist do some deductive reasoning via marketing. Ex: nVIDIA hasn't announced bundling Arkham 3, but if it's TWIMTBP, it's essentially a sure thing. Marketing does not equal a bundle announcement, even if implied.
2. AMD has said, because of NS and NSR, that Q1 sales were similar to Q4 (in their hasty 'stable 7000' conference call). The assumption of the article is only correct if nvidia had equally good sales, not compared to AMD generally, but against their own previous quarter. Given all the generally accurate assumptions about typical Q1 sales, I fail to see the correlation. What would have spurred GPU upgrades more than the two top-selling, if not best-received, titles of the year so far? In short, I think a better interpretation is that if one planned on playing a new PC game in Q1, they likely considered the bundle a value-add.
As I've said before, I agree a big part of the upcoming ecosystem is AMD segmenting good, better, best along with cheap (APU), budget (console), and enthusiast...be that mobile/mom's PC, console, and up-to-date gaming PC, or categorized differently. It's clear gaming bundles will also bridge the gaps between product launches (the 7790, for instance, was planned to launch RIGHT AFTER Bioshock, the last free title, which end-capped Q1 while the 7790 started Q2) to keep momentum. This essentially completes the last argument: you don't need to wait. Buy a new card now for the tech, or wait and get a new game instead of new tech. Buy a console and/or console title, or get perhaps that same title as a value-add to a cheaper and/or better experience on a PC. Pretty clever marketing from multiple directions when you think about it.
I echo that sentiment, and think this is by far the most thought-out and intelligent response I have ever seen by a media company rep. His answer was pitch perfect.
He looks like the good guy, and is accepting reality while harnessing the possible monetary gain (which is really all he can do, other than simply ignoring/abhorring the existence of the matter, which does nothing or even gives people an 'excuse' to keep doing as they are now).
He is also reminding that potential customer base that a 1080p/Dolby Blu-ray is going to provide a much better experience than the way most people who pirate it see the content, which is true.
Give this guy a raise...seriously. If not for this, then because writers, directors, and producers all seem to love him, which speaks volumes in this day and age.
Brilliant response that not only wins support, but shows he is one of the few that truly understands the world we now live in. I have no doubt that quote (and continued mindset) will make HBO a ton of money, as this guy not only gets the situation, but understands what needs to be done to monetize it on the current battlefield.
I absolutely have this problem. I can state I'm fairly certain it's not the mobo/video card audio, as it happens to me when using my WP-350s (Creative bluetooth headphones) with the corresponding USB aptX bluetooth dongle.
The 'displaced noise' that bothers me the most is Elizabeth's footsteps. It drives me bonkers.
Also, what is up with the need to edit the .ini to apparently make this game work right for a ton of folks? It consistently crashed for me after varying play times, and no it wasn't an overclocking issue. It's a problem many are having. Also, apparently high-rez textures not being displayed are an issue for some. Some have solved it through .ini tweaks.
I settled on using RadeonPro (March 30 version) and now it works fine for me without crashing...but kind of disappointed in these issues. I fully understand others are not having them, but it's not something I've witnessed in quite a while. I'm sure through patchwork and drivers it will tighten up...likely right around the time most people finish it (if they haven't already).
I've always believed the problem has been with features layered BEHIND other features; i.e., the mapping of skin has looked good for some time in my opinion, but eyes have always looked odd because they are naturally sunken in. In that respect, this demo looks amazing. The eyes look pretty fabulous.
Now they just need to work on the inside of mouths (teeth etc.)
I feel terrible saying that, because if it's not one thing it will always be another, but still...not quite there yet. That said, these guys did a remarkable job.
"you can't buy this [the system's RAM] for $50. That's why graphics card cost as much as they do."
That's kinda neither here nor there, and almost sounds like an excuse to jack the price over $400, as was obviously planned (with 4GB). Yeah, 8GB (especially the low-voltage stuff they are using, which is probably priced similarly to their 1.6v 7gbps chips) probably costs a bit more at this moment, but we're also talking 4-gigabit chips. While certainly early in the life-cycle (the stuff they are using will likely be hot off the mass-production line), that's still 'only' 16 chips.
I guess what I'm trying to say is, no, it isn't $50 at this moment, but it can't be that far off. Currently, doubling the density from 1GB to 2GB (using eight 2Gb chips instead of 1Gb chips on a graphics card) costs $20 more to the end user, and they're using 2x that chip count at 2x the density for 4x the buffer. The fact is it will indeed become $50 quickly with the new 4Gb processes at Hynix/Samsung (judging by the voltages and speeds planned by both), and it seems these chips will very much be a commodity as we go forward.
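To put rough numbers on the chip count (the per-chip prices below are hypothetical brackets chosen purely for illustration, not sourced figures):

```python
# Count the parts for an 8GB GDDR5 pool built from 4-gigabit chips, then
# bracket the bill with two purely hypothetical per-chip volume prices (USD).

capacity_gb = 8
chip_gbit = 4
chips = capacity_gb * 8 // chip_gbit   # 8 gigabits per GB -> 16 chips
for per_chip in (3.0, 5.0):            # hypothetical pricing, for scale only
    print(f"${per_chip:.0f}/chip -> ${chips * per_chip:.0f} for {chips} chips")
```

At those brackets the bill lands somewhere between roughly $50 and $80, which is consistent with "not $50 yet, but not that far off."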
Also, let's be real. Nvidia put 2GB of the fastest stuff available at the time (6gbps, much of it from Samsung) on the 650 Ti (original) a year ago, and that card can be had for peanuts (and is a low-end card). It doesn't cost THAT much, especially for a company buying at that volume (like Sony), which probably signed an exclusive deal with Hynix (cheaper than Samsung, I would imagine) saving them even more money, not to mention the chips will be plentiful because AMD's upcoming platforms will very much depend on that being the case.
PR guy just doing his job, and the ram is very nice, but let's be real.