
AMD | Bulldozer, Fusion, AM3+, FM1, and What's To Come

Can anybody explain to me why AMD didn't just tape together 8 Phenom II cores (and likely be able to clock them higher on the move to 32nm too), save a few hundred million R&D dollars, and have a better-performing chip to boot?

It may not have been a world beater, but it could have kept them viable. The Phenom II X4 was a good cheap gaming alternative to Intel, now even it is being discontinued leaving nothing worthwhile from AMD. Nothing.

That's what's so mind-boggling about Bulldozer.
 

Hazaro

relies on auto-aim
Can anybody explain to me why AMD didn't just tape together 8 Phenom II cores (and likely be able to clock them higher on the move to 32nm too), save a few hundred million R&D dollars, and have a better-performing chip to boot?

It may not have been a world beater, but it could have kept them viable. The Phenom II X4 was a good cheap gaming alternative to Intel, now even it is being discontinued leaving nothing worthwhile from AMD. Nothing.

That's what's so mind-boggling about Bulldozer.
It was supposed to run at 4GHz+ or something out of the gate and not suck at everything.
They are pushing more cores. Sucks for budget gaming rigs that want to OC.

A $99 quad + $70 mobo is still great for gaming, and I recommended it for a while before AMD needed a new socket and BD turned out to be horrible. Intel pricing is jacking up the lower end, but at least it's fast.
 
But from a little poking around online, Phenom II cores weren't very big. The Phenom II X4 was 258 mm² on 45nm, with 758M transistors, versus the 2B initially claimed for BD (later revised to 1.2B). BD is 315 mm² on 32nm.

According to an article I read, BD may be so bloated because AMD designed it with automated tools, or something. Another mistake, if true.

Phenom II was already clocked up at 3.7GHz, and 32nm plus some tweaking presumably would have bought them more speed. Maybe 4.1GHz to be conservative? Maybe more? Eight Phenom II cores presumably would have competed well with BD on die size at 32nm. And with higher IPC. And it would have saved the presumably mountainous sums they spent on BD development.

People say BD was designed for higher clock speeds, but I've seen no real proof of that; it just seems to be accepted. I don't see how much more clock they could have gotten. Even so, it was a failure either way.

BD really seems like the biggest engineering mistake I can think of. I know Nvidia had some bad cards in the 5000 line.
 

Hazaro

relies on auto-aim
But from a little poking around online, Phenom II cores weren't very big. The Phenom II X4 was 258 mm² on 45nm, with 758M transistors, versus the 2B initially claimed for BD (later revised to 1.2B). BD is 315 mm² on 32nm.

According to an article I read, BD may be so bloated because AMD designed it with automated tools, or something. Another mistake, if true.

Phenom II was already clocked up at 3.7GHz, and 32nm plus some tweaking presumably would have bought them more speed. Maybe 4.1GHz to be conservative? Maybe more? Eight Phenom II cores presumably would have competed well with BD on die size at 32nm. And with higher IPC. And it would have saved the presumably mountainous sums they spent on BD development.

People say BD was designed for higher clock speeds, but I've seen no real proof of that; it just seems to be accepted. I don't see how much more clock they could have gotten. Even so, it was a failure either way.

BD really seems like the biggest engineering mistake I can think of. I know Nvidia had some bad cards in the 5000 line.
Yeah, I don't know what happened. Hopefully some internal papers get leaked down the line and we can find out.
There's some evidence for the higher clock-speed target: the raw GHz it can hit under LN2 is fantastic. The problem is that it uses so much power it's not desktop-viable.
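For what it's worth, the die-size argument in the exchange above can be sanity-checked with a back-of-the-envelope calculation. This is a toy estimate using only the figures quoted in the thread, not AMD data: it assumes perfect (32/45)² area scaling and a straight doubling of the X4 die, and ignores that the uncore (L3, memory controller) wouldn't need to double with the core count.

```python
# Toy estimate only: ideal area scaling from 45 nm to 32 nm applied to a
# doubled Phenom II X4 die. Real shrinks never scale ideally, and the uncore
# (L3 cache, memory controller) wouldn't need to double with the core count.

PHENOM_II_X4_MM2 = 258.0           # 45 nm die size quoted above
IDEAL_SHRINK = (32.0 / 45.0) ** 2  # ~0.51 ideal area scaling factor

hypothetical_x8_mm2 = PHENOM_II_X4_MM2 * 2 * IDEAL_SHRINK
print(f"Hypothetical 8-core Phenom II on 32 nm: ~{hypothetical_x8_mm2:.0f} mm^2")
print("Actual Bulldozer die (quoted above):    315 mm^2")
```

Even under those generous assumptions, a doubled Phenom II lands around 260 mm², which is why the 315 mm² Bulldozer die strikes people in this thread as bloated.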
 
Can anyone give a guess as to how the desktop variant of Trinity will compare with my Phenom II/4850? I'm looking for maybe a 10% upgrade, mainly to play Diablo 3 and the SC2 expansions.

I'll probably go with an i5/7770, but the thought of a capable mini-ATX machine is equally enticing.
As an outright, discrete-GPU-less replacement? That's a tough sell. There haven't been any truly reliable figures, so it's hard to say for certain. For one, the top-end Trinity won't have L3 cache like your Phenom II does. Unlike your PII, Trinity also won't have "fully complete" CPU cores.

Here's a very rough comparison of a 3.1GHz Athlon II X4 645 with an HD 4850, up against a 2.9GHz A8-3850. Keep in mind that some of the tests are done with only the iGPU, while others feature Hybrid CrossFire.

Introducing the AMD Llano APU and Socket FM1
http://icrontic.com/article/introducing-the-amd-llano-apu-and-socket-fm1

Compare that to AMD's marketing-slide estimates of Trinity having a ~50 GFLOPS improvement over Llano. Leaked slides and statements by AMD mention:

- "Up to 20% uplift vs. Llano" CPU cores
- "~30% performance increase vs. Llano" GPU cores

Trinity will also have turbo on the iGPU portion as well as the CPU portion. In addition, there will be the same 65W and 100W APUs, along with new, higher-end 125W versions and unlocked "K" SKUs. So it's wait-and-see until we get hard numbers.




DH Special: all about AMD's Bulldozer-based Fusion processors!
http://www.donanimhaber.com/islemci/videolari/AMD-Trinity-icin-ilk-performans-degerleri.htm

These are supposedly AMD's own internal documents from Q4 2011

Trinity vs. Llano
ypIy5.jpg
LpBHz.jpg

twOjC.jpg
XkO0x.jpg







Rumored A10 Trinity for 2012 APU Lineup (likely to include new top end 125w variants):

AMD to Start Production of Desktop "Trinity" APU in March - Document.
http://www.xbitlabs.com/news/cpu/di...of_Desktop_Trinity_APU_in_March_Document.html

Starting from early and mid-March 2012, AMD intends to mass-produce its A-series "Trinity" accelerated processing units with 65W thermal design power (TDP), according to an AMD document seen by X-bit labs. In early May 2012, the chip designer wants to initiate mass production of A-series "Trinity" APUs with 100W TDP and higher performance.

The 65W chips will belong to A10-5700, A8-5500, A6-5400 and A4-5300 families, whereas 100W microprocessors will only fit into A10-5800 and A8-5600 series.

It is unclear whether AMD will launch all versions of its chips at the same time, or whether the 65W TDP microprocessors that hit mass production two months earlier will also be formally introduced ahead of the more powerful parts.
The chips will be compatible with the new FM2 infrastructure.
According to a slide that resembles those from AMD's presentations published by a web site, AMD projects Trinity's Piledriver x86 cores to offer up to 20% higher performance compared to the Husky x86 cores inside Llano. In addition, the newly architected DirectX 11 graphics core will provide up to 30% higher speed in graphics applications, such as video games. The 20% speed improvement represents AMD's projection "using digital media workload", and the actual performance advantage over currently available Fusion A-series "Llano" chips will vary depending on the applications and usage models.

AMD expects the new Trinity APUs to be not only faster than Llano, but also more available, thanks to improved yields and an increased number of 32nm SOI/HKMG wafer starts beginning in the fourth quarter.








Trinity Desktop info

Virgo Platform With AMD Discrete-Class Graphics
DUCuB.jpg


Lynx = Llano (FM1) vs. Virgo = Trinity (FM2)

Lynx To Virgo Transition Considerations
a6Hmm.jpg


Llano (left), Trinity engineering sample (right) - showing pin differences
gutte.jpg


Trinity ES, next to FM1 socket - showing FM1 socket incompatible with Trinity
LNZqt.jpg


Trinity ES clock speeds, identification info

wStlc.jpg


ZJxiZ.jpg
xkEeZ.jpg
 
Intel's Sandy Bridge vs. AMD's Fusion: A Comparison
http://www.pcmag.com/article2/0,2817,2375574,00.asp

[...]So you might be wondering how the two compare.

The short answer is: They don't.

Though Fusion and Sandy Bridge may share some defining characteristics, right now they are so dissimilar that they can barely be said to be the same thing. This doesn't mean that at some point elements of the two won't converge enough to be comparable. In fact, that's probably even likely. But until that day, which could be six months or even a year in the future, the two need to be assessed in very different ways if you're to figure out which is the most relevant for you. Both, however, are already reshaping the industry as a whole.

How They're Different
In the simplest terms, Sandy Bridge is a technology, and Fusion is a design philosophy.

Sandy Bridge is the most recent "tock" in Intel's tick-tock development strategy, which introduces a new production process one year (the "tick") and major technological redesigns the next (the "tock"). It's a new microarchitecture that's being implemented immediately in desktop and laptop computers of all sizes and configurations, delivering what Intel promises is improved performance and an impressive feature set.

Fusion, on the other hand, is the name of AMD's new family of Accelerated Processing Units (APUs). It will include a variety of chips for both laptop and desktop machines, but the chips will be based on designs specific to their function—unlike Sandy Bridge, where processors differ from each other mainly in degree. AMD sees Fusion as a strategy that explains, with a single word, what the company hopes to accomplish.

more...


AMD's Fusion System Architecture (FSA) becomes Heterogeneous Systems Architecture (HSA)

A6n1W.jpg



AMD ditches Fusion branding
http://www.bit-tech.net/news/hardware/2012/01/19/amd-ditches-fusion-branding/1

AMD has announced that it plans to rebrand its Fusion System Architecture (FSA) to the Heterogeneous Systems Architecture, as it looks to gain more traction in professional environments.

First launched in June 2011 at the AMD Fusion11 Developer Summit, Fusion is the name given to AMD's efforts to meld its CPU and GPU know-how into a single platform offering high performance at a low power draw.

The best-known outcome of the Fusion project, AMD's Accelerated Processing Units (APUs), offer small-form-factor system builders surprisingly powerful graphics and processing capabilities - with corresponding general-purpose GPU (GPGPU) capabilities - in a low-cost, low-power chip.

With AMD supporting languages including C++ AMP and OpenCL for GPGPU offload, the company clearly feels it's time to bring the technology to a new audience under a more professional brand identity.
The move isn't purely a branding exercise, however: Rogers promises to reveal recent advances in the HSA platform design at the company's Financial Analyst Day on the 2nd of February that will offer a clear improvement worthy of the platform's new name.

While AMD isn't the only company looking towards heterogeneous computing platforms, it does have a distinct advantage over its rivals: Intel is able to offer high-performance CPUs but is weak in graphics, while Nvidia offers high-performance GPUs but has no CPU presence outside its mobile-centric Tegra line and the secretive 'Project Denver.'

more...


For those still unsure why Bulldozer exists, or why it is designed the way it is, consider the above (along with all of the info already available in this thread), and look at what AMD has done with its new GCN "Graphics Core Next" GPU architecture in the high-end 7000 series.


Related:

Arctic Planning To Sue AMD Over “Fusion” Branding
http://www.maximumpc.com/article/news/arctic_planning_sue_amd_over_“fusion”_branding


AMD Announces the First AMD Fusion Center of Innovation (at the University of Illinois at Urbana-Champaign)
http://www.marketwatch.com/story/amd-announces-the-first-amd-fusion-center-of-innovation-2012-01-17
 
AnandTech goes a bit into what's been said earlier, by many, about needing a scheduler that can effectively balance CMT with SMT to make the most of the arch. Again, not easy.

When you look at just how many patches and optimizations Microsoft, Intel, and others have worked on over the last 10 years for Intel CPUs (even up 'til today with Windows 7 & 8), it shouldn't come as a surprise how much effort it takes to fine-tune new approaches in tech.

Check out the link for the rest of the comments and other benchmarks, including those which showed zero gains.



The AMD FX (Bulldozer) Scheduling Hotfixes Tested
http://www.anandtech.com/show/5448/the-bulldozer-scheduling-patch-tested

01 - The Hotfixes
02 - Single & Heavily Threaded Workloads Need Not Apply
03 - Mixed Workloads: Mild Gains
04 - Final Words

The basic building block of Bulldozer is the dual-core module, pictured below. AMD wanted better performance than simple SMT (à la Hyper-Threading) would allow, but without resorting to the full duplication of resources we get in a traditional dual-core CPU. The result is a duplication of integer execution resources and L1 caches, but a sharing of the front end and FPU. AMD still refers to this module as dual-core, although it's a departure from the more traditional definition of the word. In the early days of multi-core x86 processors, dual-core designs were simply two single-core processors stuck on the same package. Today we still see simple duplication of identical cores in a single processor, but moving forward it's likely that we'll see more heterogeneous multi-core systems. AMD's Bulldozer architecture may be unusual, but it challenges the conventional definition of a core in a way that we're probably going to face one way or another in the not-too-distant future.

KKJ2X.jpg


This ideal scenario isn't how threads are scheduled on Bulldozer today. Instead of intelligent core/module scheduling based on the memory addresses touched by a thread, Windows 7 currently just schedules threads on Bulldozer in order. Starting from core 0 and going up to core 7 in an eight-core FX-8150, Windows 7 will schedule two threads on the first module, then move to the next module, etc... If the threads happen to be working on the same data, then Windows 7's scheduling approach makes sense. If the threads scheduled are working on different data sets however, Windows 7's current treatment of Bulldozer is suboptimal.
The first update simply tells Windows 7 to schedule all threads on empty modules first, then on shared cores. The second hotfix increases Windows 7's core parking latency if there are threads that need scheduling. There's a performance penalty you pay to sleep/wake a module, so if there are threads waiting to be scheduled they'll have a better chance to be scheduled on an unused module after this update.

Note that neither hotfix enables the most optimal scheduling on Bulldozer. Rather than being thread-aware and scheduling dependent threads on the same module and independent threads across separate modules, the updates simply move to a better default of scheduling on modules first. This should improve performance in most cases, but there's a chance that some workloads will see a performance reduction. AMD tells me that it's still working with OS vendors (read: Microsoft) to better optimize for Bulldozer. If I had to guess, I'd say we may see the next big step forward with Windows 8.
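The two placement policies AnandTech describes are easy to model. Here is a minimal sketch (my own toy model, not the actual Windows scheduler) of pre-hotfix in-order placement versus the hotfix's module-first placement, on an FX-8150's four modules of two cores each:

```python
# Toy model (not the real Windows scheduler) of the two thread-placement
# policies described above, for an FX-8150: 4 modules x 2 cores.
# Cores 0-1 share module 0, cores 2-3 share module 1, and so on.

MODULES = 4
CORES_PER_MODULE = 2

def place_in_order(n_threads):
    """Pre-hotfix behaviour: fill cores 0, 1, 2, ... in order,
    packing two threads onto a module before touching the next."""
    return list(range(n_threads))

def place_module_first(n_threads):
    """Post-hotfix behaviour: one thread per empty module first,
    then fall back to the shared second core of each module."""
    order = [m * CORES_PER_MODULE for m in range(MODULES)]       # cores 0, 2, 4, 6
    order += [m * CORES_PER_MODULE + 1 for m in range(MODULES)]  # cores 1, 3, 5, 7
    return order[:n_threads]

# With 4 threads: the old policy shares two modules, the new one uses all four.
print(place_in_order(4))      # -> [0, 1, 2, 3]: modules 0 and 1 only
print(place_module_first(4))  # -> [0, 2, 4, 6]: one core on every module
```

With four independent threads, the old policy packs two modules (two threads fighting over each shared front end and FPU) and leaves two modules parked, while the patched policy gives each thread a module to itself. That is exactly the mixed-workload case where the charts below show gains.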

AMD was pretty honest when it described the performance gains FX owners can expect to see from this update. In its own blog post on the topic AMD tells users to expect a 1 - 2% gain on average across most applications. Without any big promises I wasn't expecting the Bulldozer vs. Sandy Bridge standings to change post-update, but I wanted to run some tests just to be sure.
Remembering what these two hotfixes actually do, the only hope for performance gains comes from running workloads that are neither single threaded nor heavily threaded.
Although you can argue that CPU performance is most important when utilization is at its highest, most desktops will find themselves somewhere between full utilization of a single core and of all cores. To test those cases, we need to look elsewhere.
The one thing all of the following benchmarks have in common is they feature more varied CPU utilization. With periods of heavy single and all core utilization, we also see times when these benchmarks use more than one core but fewer than all.
SPW9d.png
Q9SZ0.png

~5% (left chart), ~2% (right chart)
"Games are another place we can look for performance improvements as it's rare to see consistent, many-core utilization while playing a modern title on a high-end CPU. Metro 2033 is fairly GPU bound and thus we don't see much of an improvement, although for whatever reason the 51.5 fps ceiling at 19x12 is broken by the hotfixes."

N07su.png
lFUz5.png

~4%

DYAgl.png
dfSWY.png

5%

E9EyV.png

5%

kvsRS.png
vBcot.png

4.9%
It's important to note that the hotfixes for Windows 7 aren't ideal either. They simply force threads to be scheduled on empty modules first, rather than on idle cores in occupied modules. To properly utilize Bulldozer's architecture, we'd need a scheduler that schedules based on available cores/modules but biases its decisions by the data dependencies between threads.
 

Datschge

Member
Can anybody explain to me why AMD didn't just tape together 8 Phenom II cores (and likely be able to clock them higher on the move to 32nm too), save a few hundred million R&D dollars, and have a better-performing chip to boot?

That's what's so mind-boggling about Bulldozer.

As I already wrote earlier in this thread, there's nothing mind-boggling about Bulldozer once you look at AMD's long-term plan, which is merging CPU and GPU into one tightly integrated piece. With such an APU, you need flexibly combinable, reusable, and updateable modules, if only to keep up with the faster progress of GPU technology (though it also helps diversify the portfolio more easily). Traditionally, CPUs are hand-optimized, a process that takes several years for every major change. GPU circuit design has been automated and modularized for much longer, allowing a much faster turnaround for major changes. After the merger, there was a company-wide struggle to move CPU circuit design over to the existing GPU school of design as well. Bulldozer is AMD's first CPU part done this new way.
 
$120 off CPU+motherboard bundle at Micro Center, for anyone who may be interested.


"OFFERS GOOD 01/27/12 - 01/29/12"
http://www.microcenter.com/specials/email/CPlanding_0127.html


If the motherboard is over $120, they reportedly just give you $120 off the board of your choice.

No idea if this deal stacks with their $40 off FX CPU coupon offer, but it does make for more interesting choices than Micro Center's normal $40 off motherboard bundle deal.


As I already wrote earlier in this thread...
*chuckles
 
Well, I was looking at more Bulldozer benches earlier, and the claim that it failed because they were hoping for more clock is just nonsense. Say they had gotten it to 4.4GHz; it still wouldn't have been close to competitive in gaming.

I don't think you could have hoped for much more than 4.4GHz.
 
It's as though the last 1-3 pages aren't even there, let alone everything else...



Various user results, from various forums, on the 2-part Windows 7 patch improving performance, stability, temps, and power consumption.

Salt handy, as always.


diceman2037:

There's an additional file in the newer scheduler patch, and all the contained files have a higher version number.


Power Consumption:

kzone75:

[FX-8120 overclocked]

I might be stupid, but I think the power draw is lower now (when idling) with the patches. It used to be above 200W with all the power-saving features off; it's ~160W now, and I have WMP running. 330W when running the CPU benchie in CB; 390W before. Is this possible? Or is my Belkin just pulling my leg?


mikezachlowe2004:

[FX-8120 ~4.8GHz]

Also, I did want to point out, since somebody mentioned it earlier, that I am getting about 15 watts less power consumption at idle with the MS hotfix for BD. Not sure about under load, but I'll look into it.


Gaming & General Use:

ComputerRestore:

Noticing a bit more performance overall. One thing I notice is better temperature control. I believe this is what the core parking was for: making sure that, under certain circumstances, a module doesn't duplicate a process between its cores, by now knowing how to park one of the cores per module.
I imagine this was creating excess heat and bottlenecking the FPU during single-threaded processes.


noko:

Very interesting! Some of the tests on AIDA64 went up 14%! (Memory copy for example). This also improved Photoworxx and AES and a little bit on others.

I then did a 4-thread Prime95 run plus AIDA64 for some real thrashing of threads, CPU usage, and memory usage. Memory speeds improved with the patch over non-patched in this scenario, but the results are mixed. Will have to plot them out.

Still, the patch does make a difference. Now all this has been taking up the time I meant to spend uninstalling my video card and putting in my 7970. (Stupid testing)


noko:

I saw improvements with AIDA64 benchmark with memory speeds, they went up which affected the AES and Photoworxx part of the benchmark in a positive way. Also 3dMark11 physics score saw a very nice improvement. If anyone is interested I could post the graphs of the AIDA64 benches with the before and after.

There are factors which can probably make this hotfix more noticeable, for example upping the northbridge speed (increasing memory and L3 cache efficiency or speed). 1-2% doesn't really mean much, since for some applications it could be more than 10% while others see nothing, or maybe a loss.


[RIP]Zeus:

Right now I am running the 2 hotfixes on my 8120.

I can't tell much of a difference, but it seems boot times are a bit faster and games are a tad smoother.


jreinlie4:

I'm experiencing similar results: the benchmarks aren't throwing out anything special, but some games just seem smoother, with reduced twitching. Probably due to the better load scheduling. BF3 performs amazingly, super smooth, even on 64-player servers. Now, when those MSI R7970 Lightning cards come out... I may have to pick up a pair of those to replace my R6970's.


jreinlie4:

System Specs:

CPU: AMD FX-8150 (OC1 4.515ghz @1.46V) (OC2 4.620ghz @1.5V)
GPU: 2x AMD 6970 CFX @ 950/1500mhz
RAM: 2x 4gb Corsair XMS DDR3 @ 1632mhz
MoB: MSI 890FXA-GD70 BIOS 1.12 (1.1C)
HDD: 2x WD3000HLFS 300gb Velociraptor
PSU: Corsair AX1200
GPU Driver: Catalyst 11.11b + 12.1 CAP

The benchmark utility I am using is FRAPS.

(BD HF) = hotfix

FarCry 2 (OC1 @ 4.5ghz) (950/1500)
Benchmark (Ranch Long)
Settings: 1920x1080 @120hz,8xAA,DX10
Min: 71.96
Max: 231.09
Avg: 115.57

FarCry 2 (OC1 @ 4.5ghz) (950/1500) (BD HF)
Benchmark (Ranch Long)
Settings: 1920x1080 @120hz,8xAA,DX10
Min: 74.36
Max: 258.81
Avg: 125.92

FarCry 2 (OC2 @ 4.6ghz) (950/1500)
Benchmark (Ranch Long)
Settings: 1920x1080 @120hz,8xAA,DX10
Min: 73.61
Max: 247.40
Avg: 122.74

FarCry 2 (OC2 @ 4.6ghz) (950/1500) (BD HF)
Benchmark (Ranch Long)
Settings: 1920x1080 @120hz,8xAA,DX10
Min: 74.47
Max: 267.92
Avg: 128.73


Batman Arkham Asylum (OC1 @ 4.5ghz) (950/1500)
Benchmark
Settings: 1920x1080 @120hz,8xAA,PHYSX Off
Min: 78
Max: 338
AVG: 209

Batman Arkham Asylum (OC1 @ 4.5ghz) (950/1500) (BD HF)
Benchmark
Settings: 1920x1080 @120hz,8xAA,PHYSX Off
Min: 76
Max: 344
AVG: 210.5

Batman Arkham Asylum (OC2 @ 4.6ghz) (950/1500)
Benchmark
Settings: 1920x1080 @120hz,8xAA,PHYSX Off
Min: 80
Max: 339
AVG: 211

Batman Arkham Asylum (OC2 @ 4.6ghz) (950/1500) (BD HF)
Benchmark
Settings: 1920x1080 @120hz,8xAA,PHYSX Off
Min: 81
Max: 348
AVG: 214


ArmA 2 (OC2 @ 4.6ghz) (950/1500)
Benchmark A
Settings: 1920x1080 @120hz, AA Normal, View Distance 1500m, All Details Very High
Benchmark 01
Avg: 51
Benchmark 02
Avg: 22

ArmA 2 (OC2 @ 4.6ghz) (950/1500) (BD HF)
Benchmark A
Settings: 1920x1080 @120hz, AA Normal, View Distance 1500m, All Details Very High
Benchmark 01
Avg: 56
Benchmark 02
Avg: 23


Balkroth:

I'm seeing some pretty sizeable increases in MATLAB when using 8 threads for operations. Basically, for >4 threads I'm seeing some 11%-18% better performance over Win 7 with no scheduler update, which puts it about 4-6% faster than when I run the same things in the Windows 8 dev preview.

*MATLAB R2007b on the Win8 dev preview showed a large increase over 7 without the scheduler update. (On another note, the old scheduler update that leaked out was slower than using 7 without the update.)


Balkroth:

Just kind of a follow-up to what I said earlier about MATLAB,

comparing to Tom's Hardware's Bulldozer tests, since there aren't exactly a lot of people doing MATLAB tests with it.

10 runs for me; error in parentheses, except for 3D/2D, since those slow down when run in a loop:
- 3D: 230.5 ms (first 196.5 ms, last 275.3 ms)
- FFT: 132.0 ms (+/- 3.7 ms)
- LU: 123.5 ms (+/- 0.7 ms)
- 2D: 347.3 ms (first 298.6 ms, last 363.9 ms)
- ODE: 122.2 ms (+/- 1.3 ms)
- Sparse: 251.4 ms (+/- 8.3 ms)


polyzp:

AIDA64 Benchmarks! Windows 7 FX patch preview!

AMD FX 8150 @ 4.8 ghz



CPU AES :

BEFORE - view @ blog [For all pre-patch results]

AFTER - [Pictured below]

CPU-AES.png


CPU HASH :
CPU-hash.png


CPU PHOTOWORX :
http://4.bp.blogspot.com/-MgkUWDzVG7c/TxcKaBn_XjI/AAAAAAAAAHk/liPb1_V1IAY/s1600/CPU-photoworx.png

CPU QUEEN :
http://3.bp.blogspot.com/-EkOOpVebv-A/TxcKi2F0K4I/AAAAAAAAAHs/IV-KikrOjxg/s1600/CPU-queen.png

CPU ZLIB :
http://4.bp.blogspot.com/-yF5FLlRvcHs/TxcKpiiKC-I/AAAAAAAAAH0/EM_bXWfB3Ys/s1600/CPU-Zlib.png

FPU JULIA :
http://3.bp.blogspot.com/-QIzNd5t0s-8/TxcKyQ4lgsI/AAAAAAAAAH8/MV4eS4H_7Z0/s1600/FPU-julia2.png

FPU SINJULIA :
http://4.bp.blogspot.com/-oIBn0ky6rHI/TxcK-CbslNI/AAAAAAAAAIE/_tuRNHF2gr4/s1600/FPU-Sinjulia.png

FPU VP8 :
http://4.bp.blogspot.com/-eNhoJNhyoLI/TxcLDgqPjlI/AAAAAAAAAIM/XD18BVUMdWo/s1600/FPU-vp8.png

FPU MANDEL :
http://3.bp.blogspot.com/-vCZ9UHDV0tw/TxcLKbblw0I/AAAAAAAAAIU/uUEZEVVq7hk/s1600/FPU-mandel.png

SUMMARY OF RESULTS (Patch vs. No-Patch)

CPU Tests -

AES : +7.3% performance
Hash : +0.2% performance
Photoworx : +3.3% performance
Queen : +0.1% performance
ZLib : +0.1% performance

FPU Tests -

Julia : +0.3% performance
SinJulia : +0.0% performance
VP8 : +1.4% performance
Mandel : +0.3% performance

We can see here that the patch gives a decent boost in performance with AIDA64 across the board, with none of the benchmarks showing worse performance than on pre-patched Windows 7. Overall FX fares fairly well, but the only benchmark where it pulls ahead of all the other CPUs is CPU Hash. The AMD FX-8150 @ 4.8GHz manages a top-2 spot against the other CPUs in 4/8 tests and a top-3 spot in 5/8 tests. Naturally the 3960X @ 3.8GHz Turbo beats FX in most tests, but not nearly as significantly as one would expect.


TechArp H.264 Benchmarks!

With Patch vs. Without Patch

RESULTS:

First Pass - Single Core Performance!
Second Pass - Multi-Core Performance!

h264-PASS1.png
h264-PASS2.png


Single-core performance increases by +2.3% with both Windows 7 patches installed. This isn't hugely significant, but still welcome! At 4.8GHz, the AMD FX-8150 manages to beat an i7-875K @ 4.0GHz by about +4%.
When all cores are active, the Windows 7 patch actually manages an improvement of +2.4%. This pushes the AMD FX-8150 @ 4.8GHz above the i5-2500K @ 5.0GHz by a whopping +21%, and below the i7-2600K @ 5.0GHz by only -1%.


PCMARK 7 benchmarks!

RESULTS:

Pre-Patch VS. Post-Patch

Before Patch installation / After Patch installation

PCMARK7-pre.png
PCmark7-2.png

We can see that PCMARK 7 is very happy with the Windows 7 FX patch. The only performance decrease is in the system storage score, which is probably due to my SSD; garbage collection seems to be doing its job, however. The most notable increase is in the computation score, where the patch shows a +16.6% improvement. An honourable mention to the entertainment score as well, which saw a +4.4% increase.




C6ZR1:

I am using an ATI 4850 GPU, which could affect results in a negative way; if anything, actual results could be better.

FX-8120 stock clocked

347x320px-LL-2edfdf99_nopatchpassmark.png
347x321px-LL-31a7c32e_withpatch.png

Again, the first picture is without the patches and the second is with them.

PassMark Score:

Without patch: 1578.1
With patch: 1730.3
%increase: 9.64

[GPU is bogging down score, 4850 ftw!! ;)]

Finally, the good stuff: Battlefield 3 [using Fraps to get avg. FPS]

WqRDD.jpg


Shown above are the settings I used for BF3; I ran it on the campaign level Swordbreaker.

Without Patch: 31.78 FPS Avg
With patch: 40.94 FPS Avg
% Increase: 28.8

I was actually very surprised by the BF3 test and re-ran it just to verify the avg FPS jump.

So, from my mini-benchmark, I can say the patch did help quite a bit, especially where it counted: BF3.


aszrael1266:
OK, so I ran PassMark before and after the updates, and did the same thing with Fraps in BF3.

DUAwK.jpg
Hai0J.jpg

6fJvB.jpg
B6C1Z.jpg

My BF3 settings
WzpHN.jpg


Before update
avg 43.85

After the update
avg 45.418

The specs for my system are
stock clocked 8120
Patriot 8GB Divison 2 DDR3 1333MHz
MSI 990FXA-GD65 AMD 990FX Socket AM3+
MSI GTX560 TI twin frozr 2


GunnDawg:

Not sure how this was supposed to affect the 4100s, but I gained anywhere from 20-40 FPS in BF3 after applying both patches. I went from about 35-40 FPS up to about 60 FPS average, sometimes spiking into the 70s and 80s. I am running a GTX 295 as well. Great patch, IMO.


kevmatic:

I ran wPrime because it was the only bench I could think of that let me easily change the number of threads:

FX-6100 @ 4.00Ghz:
1 Thread: 61.385 sec
3 Threads: 21.278 sec
6 Threads: 13.219 sec


With both hotfixes:

1 Thread: 59.94 sec
3 Threads: 19.512 sec
6 Threads: 12.652 sec

Do the math:

1 Thread improvement: 2.3%
3 Thread improvement: 9.0%
6 thread improvement: 4.2%


Makes sense when you consider what the hotfix is intended to do. I almost wish I had run the benchmark 6 times... sounds tedious, though.
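Doing the math on kevmatic's timings depends on convention: "improvement" can mean time saved or speedup, and the quoted percentages look like a mix of the two. A quick check, using only the numbers from the post above:

```python
# Quick check of the wPrime numbers quoted above. "Improvement" can be
# computed as time reduction or as speedup; the two differ slightly,
# which likely explains small mismatches with the quoted percentages.

runs = {  # threads: (seconds before hotfixes, seconds after)
    1: (61.385, 59.94),
    3: (21.278, 19.512),
    6: (13.219, 12.652),
}

for threads, (before, after) in runs.items():
    reduction = (before - after) / before * 100  # % less time taken
    speedup = (before / after - 1) * 100         # % faster throughput
    print(f"{threads} thread(s): {reduction:.1f}% less time, {speedup:.1f}% faster")
```

Either way, the 3-thread case shows the biggest gain, which fits the hotfix's purpose: it helps most when there are fewer threads than modules, so spreading them across empty modules actually matters.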


mikezachlowe2004:

I noticed that my boot up time has decreased by a couple seconds. Very smooth gameplay in lightly threaded games.

MW3, Dirt 3, and Black ops play a lot smoother though.

With the patch so far I have seen faster boot up, quicker application startup, and much smoother gameplay with lightly threaded games.


bmgjet:

I'm still doing my gaming benchmarks, but I've noticed Fraps runs better as well now.
It used to drop the frame rate 3-4 FPS; now it drops it 0-1 FPS while recording BF3 on Ultra.


TKFlight:

I've seen some better game play in Civ 5, so the patch did make some improvements. Still waiting for benchmarks.


axipher:

There are a few specific cases where the patch makes a huge difference.

One scenario that I ran into was folding and playing an Emulator or game.

As you know, most games only need 4 threads max, and folding SMP on 4 threads is perfectly fine.

Letting Windows 7 without the patch clumsily schedule the folding threads onto the first 2 modules and the game/emulator onto the last 2 modules is immensely slower than spreading each workload across 1 core per module.

Example:

A great example is the Dolphin Wii emulator I use. Switching from no core scheduling to manually setting affinity to every other core (and doing the same for my SMP client) was the difference between playable and unplayable above 2x native resolution.

This brings me to my final point:

The 3-5% increase that the patch brings is based on running lightly threaded applications when the processor isn't otherwise under load.

Running multiple threaded apps that each take only a few threads benefits much more than "3-5%". I don't have any solid numbers and don't really feel like gathering them, but if need be I can. In my opinion, the patch easily brings a 10-20% improvement in scenarios like the one I explained.
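axipher's manual trick boils down to computing "one core per module" affinity sets. Here's a small illustrative sketch (my own, assuming the FX convention that even/odd core pairs share a module): one set for the game/emulator, the complementary set for the folding client. On Linux you would apply the sets with os.sched_setaffinity; on Windows, Task Manager's "Set affinity" or the SetProcessAffinityMask API does the same job with the bitmask form.

```python
# Sketch of the manual-affinity trick on an 8-core Bulldozer: give one
# workload the first core of every module (even cores) and the other
# workload the second core of every module (odd cores), so the two
# workloads never contend for a module's shared front end and FPU.
# Assumes FX core numbering: cores 2m and 2m+1 belong to module m.

MODULES = 4

game_cores = {m * 2 for m in range(MODULES)}         # cores 0, 2, 4, 6
folding_cores = {m * 2 + 1 for m in range(MODULES)}  # cores 1, 3, 5, 7

# The same sets as bitmasks, the form the Windows affinity API expects:
game_mask = sum(1 << c for c in game_cores)        # 0b01010101 = 0x55
folding_mask = sum(1 << c for c in folding_cores)  # 0b10101010 = 0xAA

print(f"game:    cores {sorted(game_cores)}, mask 0x{game_mask:02X}")
print(f"folding: cores {sorted(folding_cores)}, mask 0x{folding_mask:02X}")
```

The point of the complementary masks is that neither workload can ever land on the second core of a module the other is using, which is exactly what the stock Windows 7 scheduler kept doing.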


Tweety:

Windows CPU score went from 7.4 to 7.5... for free


Tslm:

[FX-8150]

Well I just applied both patches. Windows score went up 0.1


derpy_hooves:

[FX 8120]

Got the update installed; Cinebench went from 6.11 to 6.34 @ 3.9GHz


bmgjet:

[FX-8120 4.7GHz]

Passmark 7 8T = Old Patch = 1.9% increase, New Patch 2.1% increase (it doesn't have any of the performance drops on some tests like the old patch did)
Passmark 7 4T = Old Patch = 5.3% increase, New Patch 11.5% increase.
Passmark 7 2T = Old Patch = 5.8% increase, New Patch 12.1% increase.
Passmark 7 1T = Old Patch = 0.3% increase, New Patch 0.7% increase.

Cinebench 8T = Old Patch = 0.01-0.02 points better, New Patch = 0.04-0.05 points better.
Cinebench 4T = Old Patch = 0.05-0.06 points better, New Patch = 0.09-0.10 points better.
Cinebench 2T = Old Patch = 0.05-0.06 points better, New Patch = 0.09-0.10 points better.
Cinebench 1T = Old Patch = 0.01-0.02 points better, New Patch = 0.01-0.02 points better.

3D Mark 11 Physics Score = No Patch 4.7ghz 7943, Old Patch 7981, New Patch 8025

Primebench mark, 4.5ghz No Patch 9.199, Old Patch 9.144, New Patch 9.128
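A quick note on how "% increase" figures like the ones above are presumably derived: it's just (new - old) / old × 100. A tiny helper (the function name is mine), checked against derpy_hooves' Cinebench numbers from earlier in the thread:

```python
def pct_increase(old, new):
    """Percentage change from an old benchmark score to a new one."""
    return (new - old) / old * 100.0

# derpy_hooves' FX-8120 Cinebench jump after the patch: 6.11 -> 6.34
print(round(pct_increase(6.11, 6.34), 1))  # 3.8 (percent)
```

Note that for Primebench above, where the score is a time, lower is better, so the post-patch drop from 9.199 to 9.128 is actually an improvement.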


kahboom:

The Batman: Arkham City bench picked up a good 10fps in my SLI setup. Did some other registry tweaks too; seems to work much better in games.

Running GTX 570s in SLI; 10fps to 25fps gains in most games, much smoother too.

Used Fraps.


kahboom:

Gained some bandwidth too, with the patch and some registry tweaks.

03408bcf_fx8150patch4iju2.jpeg


Got 113GB/s; previously, without the patch, it was 110GB/s.
Someone got like 113.82 @ 5.4GHz with an FX-8150, and I got 113.062 after the patch, so I started testing various things.


Ajston:

Just did a fresh install of Windows 7 x64 and installed the BD hotfix, and I'm very pleased to say that games are running much more fluidly.
For a moment there I thought spending all my savings on 6990 CF and an 8150 was a mistake.


n0tiert:

My Futuremark 3DMark11 test on FX-8150 / 6990

before:
3dmark11_P.jpg


and after using the patches:

BD_patch.jpg


As you can see, it raised the Physics score a bit, and the total score is around 50 points higher.


screamer980:

MY RIG: http://www.sysprofile.de/id71626

Here's my result. Before patch: 7.15. After: 7.23.

cb115.jpg


pantherx12:

Got the same 7.6% increase in cinebench single core as the leaked patch, this time with no hit to multicore performance : ]


omagic:

Well, I only had time to check the Crysis CPU benchmark:
Before: 47 fps
After: 56 fps

So, quite a nice boost. I'll try some more games after work.

FX-8120 8GB RAM HD6870
1680x1050 all maxed


There were other benches and remarks on gaming, and power consumption. I'll likely add them if/when I come across them again.
 
Globalfoundries On Track For 14nm Process Technology
http://www.cdrinfo.com/Sections/News/Details.aspx?NewsId=32296

Globalfoundries CEO says company is back on track
http://www.eetimes.com/electronics-news/4235397/Globalfoundries-CEO-says-company-is-back-on-track-

Globalfoundries CEO discusses challenges
http://www.youtube.com/watch?v=D7M5DQCnc_w


From the OP:


This article goes into what I was mentioning earlier about the Piledriver cores in Trinity being tweaked further for 2012 FX CPU use.


AMD Quietly Adopting "Tick-Tock" Model for Micro-Architectures.

AMD to Use Slightly Different Micro-Architectures for APUs and CPUs
http://www.xbitlabs.com/news/cpu/di..._Tick_Tock_Model_for_Micro_Architectures.html

Intel Corp.'s so-called "tick-tock" model of transitioning to new manufacturing processes and micro-architectures has proved to be very efficient in making Intel the maker of the highest-performance microprocessors. Apparently, its smaller rival Advanced Micro Devices is also plotting something similar, but a bit differently.

As it appears from AMD's documents observed by an X-bit labs reader (in the comments for this news-story), starting from Piledriver micro-architecture and going forward, AMD's Fusion accelerated processing units (chips that integrate both x86 and stream processing cores) will feature "reduced", or "early" micro-architectural feature-set, whereas central processing units (CPUs) based on new designs will feature "full" or "late" feature-set. As a result, x86 performance of the former will be lower than x86 performance of the latter.

9HRSa.jpg


AMD wants APUs to be released earlier than fully-fledged CPUs since they are aimed at broader segment of the market. Therefore, x86 cores of Fusion chips will sport "reduced" next-generation micro-architecture (and will fully support previous-gen features and capabilities) in order to cut their development time and reduce their die size. CPUs will come to market several months after APUs and will feature more advanced x86 cores that will support more new instructions and therefore will offer better x86 performance.

For example, only fully-fledged "late" Piledriver inside Viperfish (code-name of next-gen server/desktop die design, the successor of Orochi that powers FX and Opteron chips) will be able to execute numerous new instructions as well as will receive instructions per clock (IPC) increase. Even though reduced "early" Piledriver inside code-named Trinity APUs will be more advanced than the original Bulldozer, the x86 cores are projected to be slightly less efficient than those of the full Piledriver.

The "tick-tock"-like approach is expected to allow AMD to reduce time-to-market of its new products and ensure that innovations do not negatively affect yields. On the other hand, it will create difficulties for software makers who will have to take into account that x86 cores within one generation of APUs and CPUs are slightly different. In addition, it should be noted that AMD's "tick-tock" has nothing to do with transitions to newer process technologies and is almost completely about micro-architectures.


Of course, "tick-tock" isn't a secret, or even new. A lot of companies have similar strategies, most just aren't able to execute it as well as Intel. AMD has also previously had a slightly different "tick-tock":


AMD “Pipe”: Platform Innovation Progression

AMD: Still in the Game
http://www.anandtech.com/show/2287/2

AMD - July 2007 Analyst Day Randy Allen
http://www.scribd.com/doc/277093/AMD-July-2007-Analyst-Day-Randy-Allen-



Related:

AMD: Improvements of Next-Generation Process Technologies Start to Wane.

AMD Talks 14nm, 20nm: We Have to Approach New Processes Wisely
http://www.xbitlabs.com/news/cpu/di...ation_Process_Technologies_Start_to_Wane.html

"Now, let's talk about 20nm and 14nm. I think that we really flying hard in the path of subatomic environments. The price advantages as we move down nodes are starting to wane. The ability to [quickly improve] yields and ramp up our products (which have fixed amount of time) is under exceptional pressure. It costs huge amounts of money. I think we have to be strategic and think about how quickly we go down the node," said Rory Read, chief executive officer of AMD, during IT Supply Chain conference organized by Raymond James.

Traditionally, both vertically integrated makers of semiconductors as well as contract makers of semiconductors, introduce new process technologies every 18 to 24 months. In the recent years the cadence changed a bit since development of new manufacturing processes and building new fabs became extremely expensive, but Intel Corp. keeps introducing new fabrication processes every two years and new product families every year proving the financial viability of Moore's law. The world's biggest chipmaker believes that the new process technologies enable it to integrate more functionality into chips while keeping their prices relatively flat. However, Intel is among a few companies who produce so large amounts of chips that it can cover development costs.
"Just go look at the cost of wafers as you move down those technologies, they are not going down, they are going up! If the yield does not go up, how do you get your return? You have to charge bigger prices. We will get there, we will move down [but ultimately there got to be different pricing model]," said the chief executive officer of AMD.
 

Neo C.

Member
Yeah, it's time for a paradigm change. It isn't like we didn't know things were gonna be hard once we reached somewhere in the 10nm area. It seems the limit is even a bit higher than 10nm.
 
The AMD guy's comments have far-reaching ramifications for future consoles if true. How do you plan a console when future cost reductions aren't guaranteed?

That said, it makes me wonder why Intel pursues shrinks so helter-skelter. Clearly Intel finds benefits in pushing the edge.
 

Neo C.

Member
The AMD guy's comments have far-reaching ramifications for future consoles if true. How do you plan a console when future cost reductions aren't guaranteed?
I expect even smaller leaps after next gen, unless graphene processors suddenly become commercially available or cooling tech improves dramatically.

Edit: I think it's an interesting topic in itself, but I'm not sure if GAF can handle it. Some people are already crying because the jump to next gen could probably be smaller than the previous one; how are they going to swallow even less progress (or progress at a much higher price) in the future?
 
AMD analyst day was yesterday. Mostly confirmed their new direction, partly outlined during Q4 2011, fleshed things out, and threw in some surprises. For those who haven't followed all of the developments, I'll throw most of it together when I get a chance.


Yeah, it's time for a paradigm change. It isn't like we didn't know things were gonna be hard once we reached somewhere in the 10nm area. It seems the limit is even a bit higher than 10nm.
Yeah, companies have been well aware of this for years, but relying a fair deal on process/material advances has been easier than chasing efficiency in other areas. We see it in most industries.


The AMD guy's comments have far-reaching ramifications for future consoles if true. How do you plan a console when future cost reductions aren't guaranteed?

That said, it makes me wonder why Intel pursues shrinks so helter-skelter. Clearly Intel finds benefits in pushing the edge.
It works for Intel because they can eat costs, unlike most other companies, who can't amortize things as easily. When you have high profits and market share, in addition to your own fabs that work very closely with your engineers, you're afforded the kind of leeway and efficiency others would kill for. There's a method to how they're pursuing ever-increasing profit (from ever-decreasing margins). Look at Samsung. They dropped their HDD biz to focus on extremely profitable SSDs. It just so happens that Samsung produces most of the components in their SSDs (or licenses IP that they then design around), and has internal fab capability.

Consoles won't be affected in quite the same way as a company like AMD. While they are at the mercy of TSMC/GlobalFoundries/etc., the chip design, the selected process, and the resulting yields all factor into costs.


I expect even smaller leaps after next gen, unless graphene processors suddenly become commercially available or cooling tech improves dramatically.

Edit: I think it's an interesting topic in itself, but I'm not sure if GAF can handle it. Some people are already crying because the jump to next gen could probably be smaller than the previous one; how are they going to swallow even less progress (or progress at a much higher price) in the future?
Did you happen to start up that thread? I'd love to see the reaction.

Shame that graphene is still a ways off. Thankfully, it isn't alone, so we'll see what develops between that, molybdenite, or any other viable avenue.


Physical limits of silicon transistors and circuits
http://www.fisica.unipg.it/~gammaitoni/fisinfo/documenti-informatici/physical-limits-silicon.pdf

The incredible properties of molybdenite
http://sti.epfl.ch/page-61514-en.html

YtWA1.gif



Of course, some of the cooling tech will also take a while, if ever, to become viable at the consumer level.


New material bests silicon at gadget cooling
http://www.msnbc.msn.com/id/4596253.../t/new-material-bests-silicon-gadget-cooling/

"The experimental graphene made by U.S. and Chinese researchers has also proven 60 percent more effective at transferring heat than typical graphene — a carbon sheet just one atom thick. Such a material could eventually become a part of computer chips alongside silicon, as well as whisk heat away from solar panels, radar, security systems and imaging gadgets."


NASA to demonstrate super-cool cooling technology
http://news.cnet.com/8301-17938_105-20066890-1/nasa-to-demonstrate-super-cool-cooling-technology/

New 'cooler' technology offers fundamental breakthrough in heat transfer for microelectronics
http://www.physorg.com/news/2011-07-cooler-technology-fundamental-breakthrough-microelectronics.html

Electrohydrodynamic-based EHD, and Sandia Cooler/Air Bearing Heat Exchanger

 

Nemo

Will Eat Your Children
What's the follow-up on the E-450? Did they cancel it all? I thought Fusion was pretty promising myself...
 
Was puttering around the web and found out about the FX-4170. A quad-core Bulldozer clocked at 4.2(!) GHz and soon to be released. Also supposed to be priced at ~$140, and I'm sure even that could fall.

http://www.cpu-world.com/news_2012/2012020101_Pre-order_prices_for_upcoming_AMD_FX_CPUs.html

Of course, BD performance is so bad it's STILL probably not worth it, but it might be something that can get AMD back a little bit of that cheap value area they used to hold with the Phenom II X4, which Intel didn't really have an answer for.

Also, I believe, the fastest-clocked stock chip ever released.

You could overclock it too, but its turbo clock is only 4.3GHz, so I have to wonder if there's little potential left. However, if it overclocks even decently, that would be great.
 
What's the follow-up on the E-450? Did they cancel it all? I thought Fusion was pretty promising myself...
No, some of the lower power APUs that were slated to transition to 28nm have either been stalled/canceled, or are staying on 40nm with some tweaks. A LOT changed at AMD between Q3, and Q4 2011. However, Fusion is still at the forefront of their plans.

The higher end desktop users might feel they're being abandoned, but it isn't that simple.
 
Sometimes you really do have to pay close attention to presentation slides.


Lots of info incoming.



AMD 2012 Financial Analyst Day
http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-2012analystday <--- multiple presentation .PDFs available

Your new AMD decoder key
http://blogs.amd.com/work/2012/02/02/your-new-amd-decoder-key/ <--- codenames no longer in use/canceled products



compalamdtrinityodmprototype-dantetktk-01.jpg


Compal Trinity ODM reference design eyes-on
http://www.engadget.com/2012/02/02/compal-trinity-odm-reference-design-eyes-on/

Not just a FAD, AMD aims at the market ultra-thin laptop-like device
http://www.pcper.com/news/General-Tech/Not-just-FAD-AMD-aims-market-ultra-thin-laptop-device

AMD Trinity ULV 3DMark scores revealed
http://vr-zone.com/articles/amd-trinity-ulv-3dmark-scores-revealed/14742.html

Inside this particular prototype is one of the lower voltage variants of Trinity (read: either the 17W or 25W part), which enables that svelte 18mm profile.
An estimated starting price of half an Ultrabook (roughly in the $500 to $600 range).
There weren't many details, though Su did state they were hoping that prices in the $600-800 range could put a lot of pressure on Intel.
One of the benefits Trinity will bring is what AMD called 'All day' battery life, with a 12 hour lifespan predicted. Trinity uses half the power of Llano as well as featuring an improved graphics core which they predict to be half again as powerful as Intel's HD Graphics. They also predict the new Bulldozer architecture will increase general computing power.
Compal, a contract maker of electronics, may start producing notebooks using this design for interested parties, whereas other notebook makers may create their own ultra-thin mobile computers powered by AMD Trinity.

"The 18mm reference design from Compal is what many OEMs are looking at."
To counter Intel's ultrabook initiative, AMD is preparing fully enabled quad-core (two module) APUs for ultrathins. The typical AMD ultrathin will be around ~18mm, featuring 17W APUs, while slightly thicker ultrathins may use high performance 25W APUs.
FCwkc.jpg
AMD revealed typical 3D Mark Vantage scores for selected 17W and 25W SKUs, performed on Compal reference designs. An A6 ULV 17W Trinity scores 2355 3D Marks. This is twice as fast as the Sandy Bridge Core i5-2537M, an ultrabook regular, which scores 1158 3D Marks. Of course, Trinity will be competing with Ivy Bridge ultrabooks, but AMD expects the given A6 ULV to still be over 50% faster than competing Ivy Bridge ultrabooks.

The pinnacle of performance/power lies with the A10 Low Voltage APU, however. The 25W LV APU scores a blistering 3600 3D Marks, which sets a new record for integrated graphics in a laptop and puts most entry-level mobile discrete GPUs to shame. Trinity will also be present in performance laptops in 35W and 45W variants, which will further advance integrated graphics in notebooks.

On the CPU front, Ivy Bridge should be dominant in less threaded benchmarks. However, things will be much closer in multi-threaded benchmarks as AMD is offering two-module / quad-core APUs against Intel's dual-core / four hyper-thread CPUs. With the right price points, AMD is well positioned to offer a compelling alternative to Intel's pricey ultrabooks.






AMD Unwraps 2013 Client Roadmap: 28nm Everywhere.
http://www.xbitlabs.com/news/cpu/di...raps_2013_Client_Roadmap_28nm_Everywhere.html

AMD's 2012 - 2013 Client CPU/GPU/APU Roadmap Revealed
http://www.anandtech.com/show/5491/amds-2012-2013-client-cpugpuapu-roadmap-revealed

All product lineups that AMD promises to release next year will be substantially different from what is available today, hence, they will not be just product shrinks or refreshes. 28nm process technology alone is likely to allow AMD to boost performance and feature-set of its chips without increasing power consumption compared to existing solutions that are made using 32nm or 40nm fabrication processes.
The new Steamroller x86 high-performance cores will improve performance by boosting efficiency of execution and will also include design elements from low-power architectures to reduce power consumption.
Going into 2013 AMD will move all mainstream client APUs to 28nm and bring a GCN (Graphics Core Next) based GPU to all of the APUs. Kaveri, the Llano/Trinity follow-on, will use Steamroller cores (evolution of Bulldozer/Piledriver) while Kabini and Temash will use Jaguar. Jaguar is an evolution of the Bobcat core although we don't have any architectural details at this time. Kabini and Temash will also integrate the Fusion Controller Hub (FCH, aka South Bridge) making these two APUs AMD's first true single-chip solutions.

AMD's FX platform will get an update to Piledriver cores this year with Vishera. There's no visibility beyond Vishera unfortunately, although it's probably a safe bet that we'll see a Steamroller based derivative at some point.

AMD's 2013 roadmap is heavily built around HSA. The hope is that with Graphics Core Next on-die, and proper software support, AMD will be able to deliver a compelling heterogenous computing platform that lets you leverage the strengths of both x86 CPU cores and a GPU built for compute. AMD has been chasing the promise of heterogenous compute for a while now, but its roadmap is clearly built around that vision becoming a reality.
Following the cancelation of Wichita and Krishna, Brazos 2.0 will have to hold ground in the low-power market covering C-series and A-series for 2012. Brazos 2.0 is reported to be a minor improvement over Brazos. The main improvements are in the chipset as well as the introduction of Turbo Core. Hondo is the successor to Z-01 (Desna) and will appear in tablets and other ultra low power devices. Like Brazos 2.0, Hondo's improvements are also mostly in the FCH - power optimizations will cut down consumption significantly over Z-01. Hondo is rumoured to be branded Z-03. Both Hondo and Brazos 2.0 are based on the same infrastructure and fabricated by TSMC's 40nm process.

a0ydK.png



vLOMx.png





AMD sets out its plans for 2013, hints at a possible ARM future
http://arstechnica.com/business/new...s-for-2013-hints-at-a-possible-arm-future.ars

AMD Promises "Full" Fusion of CPU and GPU in 2014
http://www.xbitlabs.com/news/cpu/di...mises_Full_Fusion_of_CPU_and_GPU_in_2014.html

Understanding AMD's Roadmap & New Direction
http://www.anandtech.com/show/5503/understanding-amds-roadmap-new-direction

The underlying themes to AMD's plans are faster iteration (a GPU-like 18-24 months between CPU designs, compared to the current 3 or more years), achieved by moving away from custom designs and depending more heavily on synthesized chip layouts, and lower power usage. This in turn will give AMD more flexibility to integrate CPUs and GPUs, and potentially other co-processors too, into what the company calls APUs (accelerated processing units).
Now it's official AMD policy to have shorter design cycles. These shorter design cycles will leverage lower amounts of custom block design and lean more on easily synthesizable architectures. The tradeoff is obviously performance but you do get better time to market. As was the case with Brazos however, if you can bring the right combination of technologies to market at the right time, the tradeoff is worth it.
AMD's goal with HSA is to make mixed workloads that use both the CPU and the GPU easier. The 2013 HSA (Heterogeneous Systems Architecture) processors will give the GPU and CPU a unified, coherent address space, with the GPU able to use the same demand-paged virtual memory that the CPU uses. This means that data will no longer have to be moved from CPU to GPU to allow the GPU to work on it, and that both processors will be able to operate on the same data simultaneously, making mixed CPU/GPU computation seamless.
This year we got Graphics Core Next, but next year we'll see a unified address space that both AMD CPUs and GPUs can access (today CPUs and GPUs mostly store separate copies of data in separate memory spaces). In 2014 AMD plans to deliver HSA compatible GPUs that allow for true heterogeneous computing where workloads will run, seamlessly, on both CPUs and GPUs in parallel. The latter is something we've been waiting on for years now but AMD seems committed to delivering it in a major way in just two years.
Given the limitations of the current architecture, x86 and Radeon cores should use dedicated memory, which tends to be inefficient. Essentially, nowadays the software completely controls which compute resources to use and when.

But after years of evolution, which will involve development of both hardware and software/compilers/tools, accelerated processing units in 2014 will be able to dynamically decide (possibly, when it comes to new programs) which task is better to execute on a particular core thanks to new software as well as special features of the chips.
Dynamic context switching between different types of cores will not only greatly speed up performance of such chips, but will also optimize power consumption as the most efficient hardware will be used to perform an operation.

Earlier AMD expected to release "fully fused" Fusion chips in 2015 or beyond.

44292.png


A mixed ISA future?

With APUs, more easily iterated synthesized designs, and HSA, AMD is taking some big steps toward producing flexible, heterogeneous processors: processors that pack together many different cores, each with their own strengths and weaknesses. AMD wants to extend the APU concept with other processor units, both AMD-originated and third party, to offer customers tailored solutions suitable for different applications. For example, motion video codec accelerators (such as those found in Intel's Medfield system-on-chip) would be attractive in tablets and, if AMD can get power usage down enough, even smartphones (though this is not a market the company is targeting at present).

One particularly intriguing third-party unit would be an ARM processor. AMD mentioned ARM several times during its presentations, and a number of its slides stated that the company wanted to produce SoCs that are "ambidextrous... across ISAs," stating also that the company was "flexible around ISA."

AMD spoke of these mixed-ISA processors in the context of servers and datacenters, so the immediate utility of an ARM processor is not clear. However, ARM Ltd wants to move ARM into the server space, having recently extended the ISA to support 64-bit systems. If ARM were to become a significant force in this market, the ability to natively run both ARM and x86 workloads on a single chip might become attractive.
Looking at TI, Qualcomm, NVIDIA and even Intel, integrating 3rd party IP into an SoC isn't unusual - particularly when competing in the ultra mobile space. AMD wants the same flexibility. Going forward, if AMD is successful, we will see SoCs based on AMD technologies that are combined with 3rd party IP. In theory this could come in the form of anything from a video decoder/encoder block to an ARM based CPU/GPU.



AMD is mostly interested in markets that have high annual growth rates. Looking [below] you can see that pretty much all of those categories with the exception of the client desktop are interesting for AMD. It's about time that AMD focused more on mobile and I don't believe that it's too late for the company.
From a product standpoint, AMD is really focusing on its mainstream and entry level APUs. Rory didn't come out and say it here but no where in AMD's future direction is a focus on the high-end x86 CPU space.

Also note that AMD isn't going to be as focused on delivering high performance products on the absolute latest process node. It views Brazos as one of its biggest successes to date and that architecture was built on a 40nm process with an easily synthesizable architecture. It's likely that the future of AMD is built around more of these easy to manufacture SoCs rather than highly custom, bleeding edge CPUs.

AMD plans on leveraging OEMs to deliver its products but it also wants to explore other routes as well. Rory referenced the game console model, where AMD would sell an ODM a chip solution tailored specifically to their needs. AMD wants to use this model to complement the more traditional route of selling its products. The transition here makes sense if you look at the current tablet space. The SoC players in tablets effectively follow the game console model. You buy a tablet that has an SoC that's custom tailored to its needs rather than buying a system with a myriad of CPU options.

SJG7D.png


Originally AMD had talked about introducing a new G2012 platform and delivering 10 & 20-core solutions called Sepang and Terramar. Those plans have been scrapped for the moment and what we get instead is a drop-in replacement for existing Opteron 6200 CPUs.

Take the current 6200 lineup, upgrade the CPU cores to Piledriver and you get a high level look at AMD's near-term server strategy. The sockets remain the same, as do the core counts, but performance should go up. AMD hasn't given us any more detail as to what Piledriver fixes other than to say that it's a higher IPC version of Bulldozer.





Related:

Solid Third Quarter Has Intel So Giddy, It Praises AMD
http://www.pcmag.com/article2/0,2817,2394923,00.asp

"I hear this argument a lot about ARM coming into the PC segment of the market," Smith said. "That they're going to come into the low-end of the market with a $30 processor to compete with our $100 processor and then they win.

"But we have $30 processors, full kits in fact, that we make a nice profit on. And AMD has a $20 kit."
Still, it seems like the last time Intel execs discussed the company's long-time archenemy this much was back when Intel was handing over $1.25 billion to AMD to settle a bunch of anti-trust lawsuits. The last time Intel talked up AMD this much in a non-legal context was probably back when AMD's K7 architecture was kicking butt and taking names.



The death of CPU scaling: From one core to many - and why we're still stuck
http://www.extremetech.com/computin...rom-one-core-to-many-and-why-were-still-stuck

01 - Intro
02 - The multi-core swerve
03 - The rise (and limit) of Many-Core
 

DonasaurusRex

Online Ho Champ
I expect even smaller leaps after next gen, unless graphene processors suddenly become commercially available or cooling tech improves dramatically.

Edit: I think it's an interesting topic in itself, but I'm not sure if GAF can handle it. Some people are already crying because the jump to next gen could probably be smaller than the previous one; how are they going to swallow even less progress (or progress at a much higher price) in the future?

Something like graphene is going to have to become an alternative. The days of going from 90nm -> 60nm -> 45nm like clockwork are over. It's expensive, and man, that's just the process; add all the tools, the wafers, etc., and who will be able to afford it besides Intel and TSMC?
 

DonasaurusRex

Online Ho Champ
What's the follow-up on the E-450? Did they cancel it all? I thought Fusion was pretty promising myself...

All the Fusion chips will be replaced with Trinity chips... they aren't gonna cancel them; if anything, that will be their main focus: mobile, tablet, and small form factor PCs. I don't think they are going to change the name from Fusion... those have been selling well... BD, however.... Next time, fine-tune the logic gates, AMD, instead of using automatic software.
 

Nemo

Will Eat Your Children
All the Fusion chips will be replaced with Trinity chips... they aren't gonna cancel them; if anything, that will be their main focus: mobile, tablet, and small form factor PCs. I don't think they are going to change the name from Fusion... those have been selling well... BD, however.... Next time, fine-tune the logic gates, AMD, instead of using automatic software.
Was hoping for netbooks and small-screen laptops myself. That's what I haven't been able to find much info on; only ultrabooks. Getting them at $500 is great and all, but I don't find those things very comfy. The netbooks, on the other hand (like the DM1), are sexy beasts, and I want those in a stronger config.
 
Was puttering around the web and found out about the FX-4170. A quad-core Bulldozer clocked at 4.2(!) GHz and soon to be released. Also supposed to be priced at ~$140, and I'm sure even that could fall.

http://www.cpu-world.com/news_2012/2012020101_Pre-order_prices_for_upcoming_AMD_FX_CPUs.html

Of course, BD performance is so bad it's STILL probably not worth it, but it might be something that can get AMD back a little bit of that cheap value area they used to hold with the Phenom II X4, which Intel didn't really have an answer for.

Also, I believe, the fastest-clocked stock chip ever released.

You could overclock it too, but its turbo clock is only 4.3GHz, so I have to wonder if there's little potential left. However, if it overclocks even decently, that would be great.
Yeah, BLT posted the pre-order page a few days ago. Reviewers have been emulating the expected FX-4170 specs since BD launch, with most using OC FX-4100s. About a month or so back, AMD leaked the FX-6200 on their own site, before removing the page. The specs matched up with other leaks, and both CPUs are arriving later than expected.

Unless something changed, neither of these are B3 stepping. At this point, there also isn't any guarantee B3 stepping wasn't shelved, with resources allocated to Trinity, and Vishera. Getting better Opterons out of the current BD could be B3 stepping's best hope.

The FX-6200 really should be an "FX-6150", though.
 
I think FAD 2012 was the first time AMD publicly mentioned the A10 Trinity. A few days before that, Arctic confirmed its existence when they announced their new MC101 HTPC, and media hub.



(PR) The New MC101 Series - Entertainment Always
http://hexus.net/ce/items/audio-visual/34693-the-new-mc101-series-entertainment-always/
The MC101 Series features only the latest high-speed processor AMD APU Trinity A8/A10, extremely advanced 3D HD graphics that are ideal for gaming, HD movies and HD music so that you can enjoy and share unlimited entertainment hassle-free.
No detailed APU specs, though the options list includes:

AMD Trinity A8 + HD7640G
AMD Trinity A10 + HD7660G

 
HSA (Heterogeneous Systems Architecture) processors will give the GPU and CPU a unified, coherent address space, with the GPU able to use the same demand-paged virtual memory that the CPU uses. This means that data will no longer have to be moved from CPU to GPU to allow the GPU to work on it, and that both processors will be able to operate on the same data simultaneously.

 

kitch9

Banned
Can anybody explain to me why AMD didn't just tape together 8 phenom II cores (and likely be able to clock them higher on the move to 32nm too), and save a few hundred million R&D dollars as well as have a better performing chip too boot?

It may not have been a world beater, but it could have kept them viable. The Phenom II X4 was a good cheap gaming alternative to Intel, now even it is being discontinued leaving nothing worthwhile from AMD. Nothing.

That's whats so mind boggling about Bulldozer.

The design is not that modular on the stars cores, whereas BD is.

BD is a decent design in theory, and I can imagine all the simulations AMD had running on their designers' PCs had the thing running at 25-30% higher clocks, with less cache latency due to the higher clocks, and 30-40% less power draw.

If AMD had been able to print that design onto the silicon, and it had worked as intended it would have been an excellent chip and we would have all had a decision to make with our next CPU purchase.

What happened instead is what we have now: GlobalFoundries cannot (at the moment) control power leakage, which means AMD cannot raise the clocks as intended without raising the voltage, so they've had to compromise.

Hopefully they'll sort it but it may take time.
 

jwhit28

Member
The design is not that modular on the stars cores, whereas BD is.

BD is a decent design in theory, and I can imagine all the simulations AMD had running on their designers' PCs had the thing running at 25-30% higher clocks, with less cache latency due to the higher clocks, and 30-40% less power draw.

If AMD had been able to print that design onto the silicon, and it had worked as intended it would have been an excellent chip and we would have all had a decision to make with our next CPU purchase.

What happened instead is what we have now: GlobalFoundries cannot (at the moment) control power leakage, which means AMD cannot raise the clocks as intended without raising the voltage, so they've had to compromise.

Hopefully they'll sort it but it may take time.

Sounds similar to what happened between Phenom and Phenom II
 

kuroshiki

Member
Hope AMD can make a decent comeback. At this point I don't even expect a lot. Just release some cheap, decent core that can rival the i3 series in terms of power and efficiency, and I'll be happy.
 

Datschge

Member
·feist·;34823039 said:
Sometimes you really do have to pay close attention to presentation slides.

Lots of info incoming. (snipped)

It's finally starting, slowly. Imho it's shaping up to be the perfect response to Intel's Ultrabooks, as long as they can actually fulfill all the promises.
 
Any Trinity laptops announced yet besides that Compal one?
Should have some leaks, if not outright announcements, within the next two weeks. Following that, MWC (Mobile World Congress) 2012 runs 2/27-3/1, and there will certainly be some tablet, netbook, and laptop sightings.


Hope AMD can make a decent comeback. At this point I don't even expect a lot. Just release some cheap, decent core that can rival i3 series in terms of power and efficiency and I will be happy.
Unfortunately, we'll need to wait until Q2/Q3 2013 to see just how concerted an effort they're capable of. Trinity will give us a nice preview of Vishera, though it's only when we can directly compare Trinity to Kaveri, and Vishera to its replacement, that we'll get a good picture of how things are truly shaping up. That said, the Zambezi-->Vishera transition should be a fairly good barometer as well; more telling than Llano-->Trinity, anyway.


It's finally starting, slowly. Imho it's shaping up to be the perfect response to Intel's Ultrabooks, as long as they can actually fulfill all the promises.
Exactly. It really is the best strategy to take. Play on their strengths, and take advantage of high growth segments as much as possible. As you said, it's all contingent upon their ability to deliver.

Even some analysts are taking notice:


AMD: Time To Buy? Longbow Boosts To Buy; Sets $10 Target
http://www.forbes.com/sites/ericsav...-to-buy-longbow-boosts-to-buy-sets-10-target/

AMD: Longbow Ups to Buy; Production Looks Fixed
http://markets.hpcwire.com/taborcomm.hpcwire/news/read?GUID=20576059

Longbow Research’s Joanne Feeney this morning raised her rating on Advanced Micro Devices (AMD) to Buy from Hold, with a $10 price target, writing that the company has “finally cleared the supply headwinds that constrained its performance in 2011.”

2012’s chip production capacity for AMD will be 50% higher than it was in 2011, she estimates, as several measures put in place with foundry partner GlobalFoundries appear to have relieved last year’s problems, she writes, citing her own “checks”:
 


Engineers Boost Computer Processor Performance By Over 20 Percent
http://news.ncsu.edu/releases/wmszhougpucpu/

Researchers from North Carolina State University have developed a new technique that allows graphics processing units (GPUs) and central processing units (CPUs) on a single chip to collaborate – boosting processor performance by an average of more than 20 percent.

“Chip manufacturers are now creating processors that have a ‘fused architecture,’ meaning that they include CPUs and GPUs on a single chip,” says Dr. Huiyang Zhou, an associate professor of electrical and computer engineering who co-authored a paper on the research. “This approach decreases manufacturing costs and makes computers more energy efficient. However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren’t as efficient as they could be. That’s the issue we’re trying to resolve.”

GPUs were initially designed to execute graphics programs, and they are capable of executing many individual functions very quickly. CPUs, or the “brains” of a computer, have less computational power – but are better able to perform more complex tasks.

“Our approach is to allow the GPU cores to execute computational functions, and have CPU cores pre-fetch the data the GPUs will need from off-chip main memory,” Zhou says.

“This is more efficient because it allows CPUs and GPUs to do what they are good at. GPUs are good at performing computations. CPUs are good at making decisions and flexible data retrieval.”

In other words, CPUs and GPUs fetch data from off-chip main memory at approximately the same speed, but GPUs can execute the functions that use that data more quickly. So, if a CPU determines what data a GPU will need in advance, and fetches it from off-chip main memory, that allows the GPU to focus on executing the functions themselves – and the overall process takes less time.
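The scheme can be sketched with a toy Python pipeline (my own illustration, not the paper's code): one thread plays the "CPU," fetching data ahead of time into a bounded queue, while the "GPU" drains the queue and does the math, so fetch latency overlaps with compute instead of adding to it.

```python
# Toy sketch of CPU-assisted prefetching: a "CPU" thread fetches data
# ahead of the "GPU" that computes on it, hiding fetch latency.
# Purely illustrative; the actual paper targets fused CPU-GPU hardware.
import queue
import threading
import time

def slow_fetch(i):
    time.sleep(0.01)            # stand-in for an off-chip memory access
    return i

def cpu_prefetcher(n, q):
    for i in range(n):
        q.put(slow_fetch(i))    # runs ahead, keeping the queue warm
    q.put(None)                 # sentinel: no more data

def gpu_compute(q, results):
    while (item := q.get()) is not None:
        results.append(item * item)   # the "GPU" kernel

n, q, results = 8, queue.Queue(maxsize=4), []
t = threading.Thread(target=cpu_prefetcher, args=(n, q))
t.start()
gpu_compute(q, results)
t.join()
print(results)    # [0, 1, 4, 9, 16, 25, 36, 49]
```

The bounded queue is the analogue of the shared cache in the paper: the prefetcher stays only a few items ahead, which is where the ~21% average gain supposedly comes from.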

In preliminary testing, Zhou’s team found that its new approach improved fused processor performance by an average of 21.4 percent.

This approach has not been possible in the past, Zhou adds, because CPUs and GPUs were located on separate chips.

The paper, “CPU-Assisted GPGPU on Fused CPU-GPU Architectures,” will be presented Feb. 27 at the 18th International Symposium on High Performance Computer Architecture, in New Orleans. The paper was co-authored by NC State Ph.D. students Yi Yang and Ping Xiang, and by Mike Mantor of Advanced Micro Devices (AMD). The research was funded by the National Science Foundation and AMD.
Abstract:

[...]

Our experiments on a set of benchmarks show that our proposed preexecution improves the performance by up to 113% and 21.4% on average.
Full abstract, and press release at the above link.





Related, kinda:


Intel TSX (Transactional Synchronization)

Transactional Synchronization in Haswell
http://software.intel.com/en-us/blogs/2012/02/07/transactional-synchronization-in-haswell/

Coarse-grained locks and Transactional Synchronization explained
http://software.intel.com/en-us/blo...-and-transactional-synchronization-explained/

Questions about TSX
http://software.intel.com/en-us/forums/showthread.php?t=102946&p=1

Why is this useful?

With transactional synchronization, the hardware can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required. This lets the processor expose and exploit concurrency that would otherwise be hidden due to dynamically unnecessary synchronization.

At the lowest level with Intel TSX, programmer-specified code regions (also referred to as transactional regions) are executed transactionally. If the transactional execution completes successfully, then all memory operations performed within the transactional region will appear to have occurred instantaneously when viewed from other logical processors. A processor makes architectural updates performed within the region visible to other logical processors only on a successful commit, a process referred to as an atomic commit.

These extensions can help achieve the performance of fine-grain locking while using coarser grain locks. These extensions can also allow locks around critical sections while avoiding unnecessary serializations. If multiple threads execute critical sections protected by the same lock but they do not perform any conflicting operations on each other’s data, then the threads can execute concurrently and without serialization. Even though the software uses lock acquisition operations on a common lock, the hardware is allowed to recognize this, elide the lock, and execute the critical sections on the two threads without requiring any communication through the lock if such communication was dynamically unnecessary.
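For a feel of what "execute optimistically, serialize only on conflict" means, here's a software analogy in Python (NOT real TSX; actual TSX uses the `_xbegin`/`_xend` hardware intrinsics in C): read a version, do the work outside any lock, and commit only if no other writer got in first, retrying otherwise.

```python
# Software analogy for transactional synchronization (not actual TSX):
# optimistic execution with a version check at commit time. A failed
# check plays the role of a hardware transaction abort, and the retry
# loop is the fallback path.
import threading

class VersionedCell:
    def __init__(self, value=0):
        self.value, self.version = value, 0
        self._commit_lock = threading.Lock()  # held only during commit

    def transact(self, fn):
        while True:
            v0, snapshot = self.version, self.value   # optimistic read
            new_value = fn(snapshot)                  # work outside any lock
            with self._commit_lock:                   # brief commit window
                if self.version == v0:                # no conflicting writer
                    self.value, self.version = new_value, v0 + 1
                    return new_value
            # conflict detected: abort and retry, like a TSX abort

cell = VersionedCell(0)
threads = [threading.Thread(target=lambda: [cell.transact(lambda x: x + 1)
                                            for _ in range(100)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)   # 400: every increment committed exactly once
```

When threads rarely touch the same data, almost every "transaction" commits on the first try, which is the same win TSX's lock elision is after.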
 
·feist·;35048170 said:
DH Exclusive: AMD's roadmap 2012; new FX and Fusion series processors
http://www.donanimhaber.com/islemci...itasi-FX-ve-Fusion-serisi-yeni-islemciler.htm

Q2 2012: Trinity ~ A10-5800/A10-5700 + HD 7660D, A8-5500/A8-5600 + HD 7560D

Q2 2012: Brazos 2.0 ~ E2-1800 + HD 7340, E2-1200 + 7310



Q3 2012: Vishera ~ FX-8350/FX-8320, FX-6300, FX-4320

Q3 2012: Trinity ~ A6-5400 + HD 7540D, A4-5300 + HD 7480D


Lol... no Bulldozer revision, going straight to Vishera I see; even within AMD, it seems, Bulldozer is viewed as a lost-cause trainwreck of a CPU.
 

Rolf NB

Member
People say BD was designed for higher clock speeds, but I've seen no real proof of that; it just seems to be accepted. I don't see how much more clock they could have gotten. Even so, it was a fail either way.

BD really seems like the biggest engineering mistake I can think of. I know Nvidia had some bad cards in the 5000 line.
It's AMD's version of the Pentium 4. Built for extreme clocks at the expense of IPC, and can't achieve the clocks in practice because power consumption prevents it.
 

AMD FX-8150 3.60 GHz with Windows Patches
http://www.techpowerup.com/reviews/AMD/FX8150/

There's a double-edged situation in reviewing a product late, after reading everyone else's work and opinions. On one side, it's harder to approach the reviewed product with an objective state of mind; on the other hand, there's a certain motivation to find something everyone else missed, or at least to round up all of the testing methods and aspects in one table and come to an undeniable conclusion. But before we go and bench the hell out of the fastest FX comebacker, we have to go through the architecture so we can get a better grasp of the performance numbers later on and what they mean.
01 - Introduction
02 - Bulldozer Architecture
03 - FX Lineup & FX-8150
04 - Test Setup & Specifications
05 - AIDA64 Extreme
06 - Synthetic Benchmarks
07 - Audio Encoding
08 - Video Encoding
09 - Graphics & Rendering
10 - File Compression and Encryption
11 - Gaming Tests
12 - System Power Consumption
13 - Overclocking
14 - Value & Conclusion
Overclocking

The idea was to try to find weak spots in FX-8150 factory defaults, or performance bottlenecks if you will, and try to improve on that to squeeze out every bit of extra performance while keeping power consumption at acceptable levels.

In the following charts you will find a total of five separate FX-8150 results; to understand what they mean, here's a quick legend:
  • FX-8150 - factory settings, no overclock, Turbo Core enabled
  • FX-8150 northbridge overclock - NB speed set to 2.60 GHz, everything else untouched
  • FX-8150 memory overclock - DDR3 2125 MHz, NB speed 2.27 GHz, HT 228 MHz, HT Link 2.50 GHz and CPU 3.63 GHz
  • FX-8150 CPU overclock - overclocked via multiplier to 4.70 GHz
  • FX-8150 combined overclock - CPU 4.77 GHz, HT 236 MHz, NB 2.60 GHz, DDR3 2201 MHz, HT Link 2.60 GHz
Each adjustable segment of the processor is overclocked separately against stock results, then everything is combined into one overclocking attempt where each segment is pushed to its limits at the same time. You'll also find in the charts a few results for Intel's 2500K and 2600K processors, both at stock and with CPU-only overclocks. This should give overclockers fair insight into the overclocking gains compared to the FX-8150's competitors.
 
I posted direct links earlier, so any owners could avoid the hassle of submitting a request to MS:

Before attempting to install, Windows has to be up to date, with SP1 and the latest updates. Otherwise, you'll get a compatibility error.

The patches should be installed in order: http://www.neogaf.com/forum/showpost.php?p=34198167&postcount=979


If you can, feel free to share any before/after personal results of yours. Or, any general use observations.
 

dionysus

Yaldog
I listened to the entire presentation at AMD's Financial Analyst Day on Feb 2nd.

Some notes:
-There is no real high-end desktop development anymore. According to their VP for global business, they will continue to use their server development as the core of their high-end desktop offerings. (Pretty much exact words.) I'd imagine this explains the lackluster performance of Bulldozer, as it is built for servers with tons of multithreading. In fact, AMD isn't even targeting so-called "mainstream" servers.
-AMD is no longer seeking to be on the bleeding edge, they are focusing on having an "ambidextrous" architecture that can be tailored to a specific purpose by combining their CPU IP, GPU IP, and third party IP in SoC solutions.
-AMD is out of the clock speed race entirely
-AMD is all in on low power across their entire product portfolio, with the possible exception of discrete GPUs
-Not to worry gamers, discrete GPUs are still a focus area for AMD as that is the AMD differentiator/core technology for all their APUs. They ain't throwing high end GPUs away like they appear to be throwing away high end CPU investment
-Focusing on mature processes and good execution. Basically, they aren't hinging their business on being early on a particular process node. 40nm will continue in their product lines through 2012. They don't want to be TSMC and the like's test product for a new manufacturing process.


Biggest news to me is AMD basically saying they aren't trying to compete with Intel except in the server space. Even in the server space, they are targeting clients who can take advantage of their GPU IP and compute IP.
 

Datschge

Member
Biggest news to me is AMD basically saying they aren't trying to compete with Intel except in the server space. Even in the server space, they are targeting clients who can take advantage of their GPU IP and compute IP.

As I mentioned before (haha), the whole APU/HSA integration is AMD's only feasible way left to really compete; they've already overhauled their whole CPU design process to adapt to it. The only reason left for AMD to make CPUs without GPUs is as a test bed for new changes that optimize further for HSA. It's a great thing that the new management seems to be fully on board with this already-ongoing development.

Keep the stuff coming, feist. It's very nice to have a single place to keep track of all this, thanks. ^^
 

dionysus

Yaldog
My i7 2600k @ 4.8 GHz on its stock cooler says otherwise.

AMD makes i7s? It was an AMD investor conference. I am not talking about anything but AMD in an AMD thread.

Edit: On a side note, AMD mentioned consoles a bunch in terms of how they see the market developing. Specifically, they see clients demanding partially customized/optimized solutions, just like console manufacturers do with AMD, Nvidia, and Intel: take IP and designs from AMD, then play with those building blocks for a fit-for-purpose solution.
 