Intel Haswell Processors To Launch In First Half of 2013
Subject: Processors | February 12, 2012 - 06:57 PM | Tim Verry
Tagged: shark bay, Intel, haswell, cpu
Intel's Ivy Bridge processor, the upcoming "tick" in Intel's clock-esque world domination strategy, has yet to be released, and we are already seeing rumors and leaked information about the "tock" that will succeed it: the 22nm Haswell processors, part of the Shark Bay platform. Ivy Bridge processors will bring incremental performance improvements and lower power usage on the same LGA 1155 socket that Sandy Bridge employs.
Haswell, however, will move to (yet another) socket, LGA 1150, on the desktop, and will bring incremental improvements over Ivy Bridge, including much faster integrated processor graphics and the AVX2 instruction set. Unfortunately, Haswell's TDP (thermal design power) is set to increase again, reversing the reduction Intel achieved in moving from Sandy Bridge to Ivy Bridge.
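AVX2's headline change is widening Intel's integer SIMD instructions from 128 to 256 bits, so one instruction can operate on eight 32-bit lanes at once. A minimal Python sketch of the lane-wise semantics (the function name is illustrative, modeled loosely on the `_mm256_add_epi32` intrinsic, and is not real Intel code):

```python
# AVX2-style 256-bit integer add: eight 32-bit lanes per instruction.
LANES = 256 // 32  # = 8

def add_epi32(a, b):
    """Lane-wise 32-bit addition with wraparound, as SIMD hardware does it."""
    assert len(a) == len(b) == LANES
    return [(x + y) & 0xFFFFFFFF for x, y in zip(a, b)]

result = add_epi32([1, 2, 3, 4, 5, 6, 7, 0xFFFFFFFF],
                   [10, 20, 30, 40, 50, 60, 70, 1])
# The last lane wraps around to 0, just as the hardware instruction would.
```

Pre-AVX2 (SSE/AVX), integer operations like this were limited to four 32-bit lanes, so code that vectorizes well can roughly double its integer throughput per instruction.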
According to Domain Haber, which claims to have gotten its hands on a leaked road map, Intel will be launching Ivy Bridge through the end of this year and will then debut its Haswell processors in the first half of 2013. The alleged road map can be seen below.
What I found interesting about the road map is that there is no mention of an Ivy Bridge-E or Haswell-E processor. Instead, the current Sandy Bridge-E chips are shown occupying the high-end and enthusiast segment through at least the first half of 2013 and the launch of Haswell. Whether enthusiasts will continue to choose Sandy Bridge-E processors for that long remains to be seen, however. Also strange is that, according to VR-Zone, Intel will have three tiers of integrated graphics performance: GT1, GT2, and GT3. It will then place the fastest graphics core in the mobile chips and leave the slower graphics cores in the desktop chips. Discrete cards are not dead yet, it seems (unless you're rocking an AMD APU, of course).
Haswell is the codename for a processor microarchitecture to be developed by Intel's Oregon team as the successor to the Ivy Bridge architecture. Using the 22 nm process, Intel is expected to release CPUs based on this microarchitecture around March to June 2013 according to leaked roadmaps. Intel has shown a working Haswell chip at the 2011 Intel Developer Forum. According to Fudzilla, "Intel tells its partners to expect that Haswell should end up at least 10 percent faster than Ivy Bridge based cores at the same clock."
37, 47, 57W TDP mobile processors.
35, 45, 55, 65, 77, and ~100W+ (high-end) TDP desktop processors.
15W TDP processors for the Ultrabook platform (multi-chip package like Westmere).
First Details About Intel Haswell Emerge: 2-4 Cores, New Graphics Core, DDR3, Low Power.
Intel Set to Continue Aggressively Lowering Processor Power Consumption with Haswell
[11/09/2011 11:05 PM]
by Anton Shilov
The very first details about actual microprocessors based on the code-named Haswell micro-architecture for mainstream desktops and notebooks have emerged on the Internet. Instead of increasing the number of cores inside its microprocessors, Intel Corp. will continue to improve efficiency to boost performance while aggressively lowering chip power consumption.
Intel Haswell microprocessors for mainstream desktops and laptops will be structurally similar to existing Core i-series "Sandy Bridge" and "Ivy Bridge" chips and will continue to have two or four cores with Hyper-Threading technology, along with a graphics adapter that shares the last level cache (LLC) with the processing cores and works with the memory controller via the system agent, according to a slide (which resembles those from Intel) published by the ChipHell web-site. On the micro-architectural level the chip will be a lot different: its x86 cores will be based on the brand-new Haswell micro-architecture, and its graphics engine, based on the Denlow architecture, will support new features such as DirectX 11.1, OpenGL 3.2+ and so on.
The processors that belong to the Haswell generation will continue to rely on dual-channel DDR3/DDR3L memory controller with DDR power gating support to trim idle power consumption. The chip will have three PCI Express 3.0 controllers, Intel Turbo Boost technology with further improvements, power aware interrupt routing for power/performance optimizations and other improvements. What is important is that Haswell-generation chips will sport new form-factors, including LGA 1150 for desktops as well as rPGA and BGA for laptops.
The new processors for mobile applications will continue to have thermal design power between 15W and 57W (15W, 37W, 47W and 57W) for ultra-low-voltage through extreme-edition models; desktop chips will have TDPs in the range between 35W and 95W, just like today. However, in a bid to open the doors to various new form-factors, such as ultrabooks, Intel has implemented a number of aggressive measures to trim power consumption even below the levels of Ivy Bridge, including power-aware interrupt routing for power/performance optimizations, configurable TDP and LPM, DDR power gating, power optimizer (CPPM) support, idle power improvements, the latest power states, etc.
The most important improvements of Haswell are on the level of x86 core micro-architecture. It is believed that the new MA will be substantially different from current Nehalem/Sandy Bridge generations, which will enable further scalability and performance increases. Besides, Haswell will support numerous new instructions, including AVX2, bit manipulation instructions, FPMA (floating point multiple accumulate) and others. Denlow graphics core of Haswell will also sport substantially boosted performance and will also be certified to run many professional applications.
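The floating point multiply-accumulate mentioned above is a throughput feature, but fusing the two operations also improves accuracy: a fused a*b+c rounds only once, whereas separate multiply and add instructions round twice. A small Python illustration of the double-rounding loss, using exact rational arithmetic to stand in for the fused result (the specific constants are chosen purely for demonstration):

```python
from fractions import Fraction

a = 1.0 + 2**-27        # exactly representable in double precision
unfused = a * a - 1.0   # a*a is rounded first, discarding its tiny 2**-54 term
exact = Fraction(a) * Fraction(a) - 1  # what a single-rounding FMA would keep

print(unfused == 2**-26)                             # True: the tail was lost
print(exact == Fraction(2**-26) + Fraction(2**-54))  # True: fused-style result
```

The separately rounded result comes out as exactly 2**-26; the fused computation retains the extra 2**-54 term that the intermediate rounding threw away.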
Benchmark comparisons of AMD's upcoming FX-Vishera 8350 against the FX-Bulldozer 8150 have been posted over at OBR-Blog. The FX-Vishera processors are based on the x86 Piledriver architecture, which features improved clock frequencies for better performance, and they launch sometime around Q4 2012.
The FX-8350 features an eight-core design and comes clocked at 4.0GHz stock with a 4.2GHz Turbo Core frequency. For comparison, the first-generation FX-8150 is also an eight-core processor, clocked at 3.6GHz stock and 4.2GHz Turbo Core. Both processors are built on a 32nm fabrication process with a rated TDP of 125W, and each has 8MB of L3 cache.
The 'net has been rife with speculation about the primary sources of Bulldozer's problems. Looks like the shared front end of the dual-core Bulldozer "module" is indeed one of the culprits. Steamroller gets separate, dedicated decoders for each integer core, along with larger instruction caches.
There are some very big numbers in this slide, given what they represent. Branch mispredictions drop by 20%, instruction cache misses by 30%. Per-thread instruction dispatches that use the full width of the execution units are up by a quarter. Overall, these changes add up to a whopping 30% improvement in ops dispatched per clock cycle—and these numbers are based on simulation, not just hopeful estimation. Even more notably, this 30% figure comes from simulated client-focused workloads, including "digital media, productivity and gaming applications," not just the server-class applications for which the original Bulldozer core was so obviously tuned.
Presumably, the revised front end is the single biggest improvement in Steamroller. Provided the rest of the engine can cope with how it's being fed, these changes could result in a formidable boost to overall performance.
Steamroller's cores should be better equipped to cope with the front end's higher dispatch rate thanks to some changes to the schedulers and the memory subsystem. We don't have too many specifics here, but the 5-10% improvement in scheduling efficiency again comes from simulation of client-side workloads like "digital media, productivity and gaming applications."
Zooming back out, this slide offers a look at some power-efficiency provisions baked into Steamroller. The instruction fetch optimization, which detects loops and handles them more efficiently, is a familiar trick. The dynamic L2 cache resizing makes sense, too, since it's a shared resource used by both integer cores and the working data set of different threads can vary. If not all of the L2 cache is needed, portions of it can be powered down.
Moving from architecture to design opens up more opportunity. As you may recall, Bulldozer is relatively large for a 32-nm chip with its transistor count, especially after AMD revised down the transistor count estimate. Apparently, there's plenty of room for improvement even in the same process node.
Shown above is a portion of the chip's FPU. The top image comes from a current Bulldozer chip, which employs the hand-drawn custom logic that's generally used in high-end x86 CPUs. The lower image comes from a potential future chip that uses a more automated high-density cell library. On the same 32-nm process node, the high-density library purportedly crams the same logic into 30% less area, with 30% less power use. As the slide notes, gains on this order would usually come from the transition to a newer, smaller fabrication process. We'd expect the more automated approach to design to reduce AMD's time to market, as well.
What we don't know is when we'll see a product designed using a high-density cell library like this one. AMD tells us the future processor illustrated here is a post-Steamroller design, and it therefore seems likely that any improvements realized by using these tools will happen on a future process node, not at 32 nm.
Intel details 10nm, 7nm, 5nm process roadmap
Published on 14th May 2012 by Gareth Halfacree
Semiconductor giant Intel has revealed its roadmap for process technologies, which will see 10nm, 7nm and 5nm nodes released beginning in 2015.
In semiconductor manufacturing, process is king: the term refers to how chip designs are shrunk from their giant human-viewable schematics down to teeny-tiny production parts with features thousands of times thinner than a human hair. The better a company's process, the smaller the final chip; the smaller the final chip, the better the performance. Smaller process nodes also mean more chips from a given silicon wafer, albeit tempered by the usual yield problems as issues with new process nodes are ironed out.
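The "more chips per wafer" point is easy to quantify with the standard back-of-envelope dies-per-wafer formula. The die sizes below are illustrative round numbers, not figures for any specific Intel product:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Classic first-order estimate: usable wafer area divided by die area,
    # minus an edge-loss term. Ignores scribe lines and defect density.
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Halving die area via a full process shrink roughly doubles the die count:
print(dies_per_wafer(300, 160))  # 389 candidate dies from a 300 mm wafer
print(dies_per_wafer(300, 80))   # 809 dies after halving the die area
```

Because the edge-loss term shrinks relative to the total as dies get smaller, the count slightly more than doubles, which is part of why shrinks are so economically attractive despite early yield pain.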
The majority of the industry is working on a 22nm process at present, including Intel with its recently-launched Ivy Bridge processors. The next step for the semiconductor industry is 14nm, which Intel is planning to introduce with its Broadwell processors - the successors to the Haswell parts, which debut a new microarchitecture on the same 22nm process as Ivy Bridge under Intel's 'tick-tock' development cycle.
All companies in the semiconductor industry are looking beyond 14nm, however - and Intel is no exception. According to slides obtained by X-bit Labs, Intel has already begun research on sub-14nm processes with a view to getting 10nm parts into development for a 2015 roll-out.
According to the research and development pipeline laid out in the company's slides, 2015 will see the production of semiconductor components on a 10nm process, quickly followed by 7nm and 5nm parts.
The company has an uphill struggle ahead of it, however: the smaller the process, the larger the challenge. As component sizes decrease and the gaps between components shrink, numerous issues raise their heads. The biggest of these, current leakage, calls for a radical rethink of how semiconductors are designed in order to smash what has been termed the '10nm physical gate length barrier.'
Some companies have already developed prototype technology for sub-10nm processes, including IBM's carbon nanotube transistor, which is running in the lab at a 9nm process size. Intel, however, isn't detailing the precise route its own research is taking, except to explain that it's researching new lithography, materials and interconnect techniques to address the issues of shrinking component sizes.
Intel's slide deck also explains that its 14nm production, due to begin in 2013 for the Broadwell family, will take place in its D1X Oregon, Fab 42 Arizona and Fab 24 Ireland facilities following 22nm upgrades to the D1D/C Oregon, Fab 32/12 Arizona and Fab 28 Israel plants.
Should Intel hit its schedule of a 10nm part by 2015, it will likely find itself ahead of its rivals in process technology - a move which will do nothing to lessen the company's growing dominance of the mainstream processor industry.
For AMD, with a great product but no high volume 28nm process on which to run it, the situation is frustrating.
AMD first encountered 28nm problems at its GloFo partner-foundry. Now, it seems, there are 28nm problems at its TSMC foundry-partner.
This is all putting pressure back on AMD to give up being fabless and go back to operating fabs again.
How on earth can it afford to do that?
Well, governments are pretty good about putting up money for fabs, while Nvidia, Altera, Xilinx, Qualcomm and others may be getting sufficiently teed off with flaky 28nm availability to consider a wafer-fab consortium.
From Wikipedia, the free encyclopedia
10 µm — 1971
3 µm — 1975
1.5 µm — 1982
1 µm — 1985
800 nm (0.80 µm) — 1989
600 nm (0.60 µm) — 1994
350 nm (0.35 µm) — 1995
250 nm (0.25 µm) — 1998
180 nm (0.18 µm) — 1999
130 nm (0.13 µm) — 2000
90 nm — 2002
65 nm — 2006
45 nm — 2008
32 nm — 2010
22 nm — 2012
14 nm — approx. 2013
10 nm — approx. 2015
7 nm — approx. 2020
5 nm — approx. 2022
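The node list above follows the classic Moore's-law cadence: each full node shrinks linear dimensions by roughly 1/√2 ≈ 0.71, halving the area of a given circuit. A quick sanity check against the recent and projected entries:

```python
# Recent and projected process nodes from the roadmap above, in nm.
nodes = [32, 22, 14, 10, 7, 5]

for old, new in zip(nodes, nodes[1:]):
    linear = new / old
    area = linear ** 2  # area scales with the square of the linear shrink
    print(f"{old} nm -> {new} nm: linear x{linear:.2f}, area x{area:.2f}")
# Every step lands near the ideal 0.71 linear / 0.50 area scaling.
```

Node names drift away from literal feature sizes over time, so this is a naming convention as much as a physical measurement, but the halving-of-area intent behind each step is clear from the numbers.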
The 22 nanometer (22 nm) node is the next CMOS process step following the 32 nm step on the International Technology Roadmap for Semiconductors (ITRS). The typical half-pitch (i.e., half the distance between identical features in an array) for a memory cell using the process is around 22 nm. It was first introduced by semiconductor companies in 2008 for use in memory products, while first consumer-level CPU deliveries started in April 2012.
The ITRS 2006 Front End Process Update indicates that equivalent physical oxide thickness will not scale below 0.5 nm (about twice the diameter of a silicon atom), which is the expected value at the 22 nm node. This is an indication that CMOS scaling in this area has reached a wall at this point, possibly disturbing Moore's law.
On the ITRS roadmap, the successor to 22 nm technology will be 14 nm technology.
On August 18, 2008, AMD, Freescale, IBM, STMicroelectronics, Toshiba, and the College of Nanoscale Science and Engineering (CNSE) announced that they jointly developed and manufactured a 22 nm SRAM cell, built on a traditional six-transistor design on a 300 mm wafer, which had a memory cell size of just 0.1 μm2. The cell was printed using immersion lithography.
The 22 nm node may be the first at which the gate length is not necessarily smaller than the technology node designation. For example, a 25 nm gate length would be typical for the 22 nm node.
On September 22, 2009, during the Intel Developer Forum Fall 2009, Intel showed a 22 nm wafer and announced that chips with 22 nm technology would be available in the second half of 2011. SRAM cell size is said to be 0.092 μm2, smallest reported to date.
On January 3, 2010, Intel and Micron Technology Inc. announced the first in a family of 25 nm NAND devices.
On May 2, 2011, Intel announced its first 22 nm microprocessor, codenamed Ivy Bridge, using a technology called 3-D Tri-Gate.
On October 19, 2011, Intel CEO Paul Otellini confirmed that Ivy Bridge 22 nm processor volume production has already begun.
schnarf314 wrote:Does Intel look like their higher performance releases will be lower consumers of power than AMD for 2013-2014? I'm wondering if it will be possible without OC to get a processor that with or without turbo boost gives 4ghz, and can keep the power consumption to 15-35W?
no.
It is kinda silly fixating on 4GHz clocks, I just can't shake the idea faster is faster.
If that were true then we'd all be using Intel Prescotts and AMD Bulldozers... just sayin'.
Intel's finfet shape a liability, says Asenov
Friday 27 July 2012 16:03
Intel's triangular finfets suffer a severe performance disadvantage compared with rectangular finfets, and a further disadvantage relative to SOI finfets, says Professor Asen Asenov of Glasgow University, who is the CEO of Gold Standard Simulations.
"Intel may have technological reasons for adopting this shape but by doing so you reduce performance by 12-15%," Asenov tells EW.
In a paper on the GSS website, Asenov points out that Intel's triangular bulk FinFETs have 12%-15% less performance than equivalent rectangular bulk FinFETs, while rectangular SOI FinFETs, compared to rectangular bulk FinFETs, have either 5% higher drive current at equivalent threshold voltage and leakage, or 2.5 times less leakage at equivalent on-current.
"Bearing in mind that it is easier to make rectangular SOI FinFETs than rectangular bulk FinFETs, moving from triangular Intel bulk FinFETs to rectangular SOI FinFETs can deliver approximately 20% performance improvement," says Asenov.
Asked why he thinks Intel has adopted this disadvantageous shape, Asenov replies: "We have very little solid knowledge. Maybe there’s a technological reason associated with the deposition of HK gate dielectric over vertical walls."
"IBM can make nice rectangular shaped vertical walls," adds Asenov.
This could be a problem for Intel because IBM has licensed its technology to UMC, the No.2 foundry.
Meanwhile Globalfoundries, the No.3 foundry, is preparing SOI technology.
"There has never been a time in my life, and I’ve been in this business for 35 years, when it’s been so exciting and so complicated," says Asenov, "companies will have to decide on the different technologies and these will be very expensive decisions. It will also mean very difficult decisions for the fabless companies."
GSS has included results for simulations of rectangular cross-section FinFETs with 10-nm and 8-nm widths hinting at where the company thinks Intel must go next. "If you can make them [FinFETs] rectangular you will gain significantly in terms of performance, about a 20 percent gain."
Professor Asenov said that moving from bulk FinFETs to FinFETs constructed on SOI wafers could solve a number of problems. "The buried oxide layer means you don't have the problem of filling trenches. The height of the fin is determined by the depth of the silicon above the oxide."
Professor Asenov added: "I think Intel just survived at 22-nm. I think bulk FinFETs will be difficult to scale to 16-nm or 14-nm. I think that SOI will help the task of scaling FinFETs to 16-nm and 11-nm. Of course, the wafers are more expensive, but you save money with less processing."
Researchers from GSS and the University of Glasgow published a paper at the International Electron Devices Meeting of 2011 that dealt with FinFETs implemented in SOI wafers and how they could meet the low statistical variability requirements of 11-nm CMOS.
ARM grabs TSMC's 3D FinFETs for future 64-bit PC brains
Will it be a RISCy move for the v8 family?
By Timothy Prickett Morgan •
Posted in PCs & Chips, 23rd July 2012 16:32 GMT
ARM says its 64-bit ARMv8 processor architecture is a real contender for servers and PCs. But without an appropriate process from major fab partners to etch the chips, the design doesn't matter all that much.
That's why an agreement between the stewards of the ARM designs and Taiwan Semiconductor Manufacturing Corp is vital if the ARM collective is going to retain its position in smartphones and tablets while giving Intel a run for its money in data centres and desktops.
In a statement, ARM and TSMC said they have inked a deal to extend their silicon die technology beyond 20-nanometer-sized transistor gates. The financial details of the new agreement were not divulged. The two companies had agreed in July 2010 to collaborate on baking wafers with 20nm features.
All they did say was that the renewed partnership would bring future manufacturing processes to ARMv8 derivatives and to the Artisan tools that companies use to modify ARM's blueprints and optimise them for specific foundries and processes.
TSMC will be tuning its three-dimensional double-gate field-effect transistors (FinFETs) so they can be implemented with 64-bit ARMv8 processor designs. As ordinary planar transistors have shrunk, they have become relatively poor switches, leaking current even when they are in the off position.
This current leakage is a very big problem and is one of the reasons why chips suck so much juice and emit so much heat as they operate. This leakage is mitigated by going 3D, as Intel has already done with its 22-nanometer Tri-Gate process used with the current "Ivy Bridge" Core and Xeon processors. In fact, the Tri-Gate approach by Intel is an example of a FinFET design, albeit one with three gate surfaces instead of two.
Every electron out of the water, there's a fin in the sea!
The fin in a FinFET is just that: a vertical fin rising out of the silicon that presents more gate surface area than a planar transistor design and therefore does not leak as much current at 20-ish nanometer sizes; that advantage is very likely to hold for a few more process shrinks.
Both Intel and TSMC have been showing off various FinFET designs for the past decade, and Intel was the first to commercialise the idea. The key point with the various FinFET approaches is that the threshold voltage - the point at which the gate switches from one binary state to another - for the transistor is lower, too. So you get less leakage and lower voltage operation. And as a bonus, the lower you drop the voltage on the chip, the lower the transistor switching gate delay, which can increase processing speed.
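The leakage argument can be made concrete with the textbook subthreshold-current model, I_off ∝ 10^(−Vth/S), where S is the subthreshold swing in volts per decade. The swing values below are typical textbook figures for planar versus FinFET devices, not measured Intel or TSMC data:

```python
def relative_leakage(vth_volts, swing_v_per_decade):
    # Textbook subthreshold model: off-current falls by one decade
    # for every 'swing' volts of threshold voltage above zero.
    return 10 ** (-vth_volts / swing_v_per_decade)

planar = relative_leakage(0.30, 0.100)  # leaky planar device, S ~ 100 mV/dec
finfet = relative_leakage(0.30, 0.070)  # FinFET, nearer the 60 mV/dec limit
print(f"FinFET leaks ~{planar / finfet:.0f}x less at the same threshold")
```

Equivalently, the steeper swing lets designers hit the same leakage target at a lower threshold voltage, which is exactly the lower-voltage, lower-delay benefit described above.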
You can see why TSMC, Intel, and others are keen on adding FinFETs to their processes. If the ARM collective is to keep its lead on Intel in the smartphone and tablet spaces and take on Intel in servers and PCs (or whatever is left of the PC market some years hence), then ARM Holdings is going to need to work with fabs to get processes ramped up to keep pace with Intel's factories.
Back in April 2010, TSMC said it would skip 22-nanometer tech and go right to 20nm in the second half of 2012. That smaller spin was planned to use planar transistor designs with enhanced high-K metal gate, strained silicon and low-resistance copper ultra-low-K interconnects; the company said at the time that FinFETs would be feasible at 20nm, but made no commitments to them.
From the ARM-TSMC partnership, we can now see that FinFETs are not coming out until after the 20nm chip phase. For TSMC, the next jump is to 14nm. TSMC has had volume issues with its 28nm manufacturing processes, impacting shipments for Advanced Micro Devices and NVIDIA among others.
So-called "risk production" of 20nm chips (a kind of beta test for the fab) started at the end of 2011 at TSMC, and the silicon will go into full production in 2013. The expectation is for a second generation of FinFETs to come out during the 14nm phase, which will go into risk production in 2014.
That's about a year behind Intel. So the design benefits between x86 and ARM will have to make up the difference in the manufacturing gap. ®
Hammer_Time wrote:Most welcome!
Your dreams are not unworldly, though; if Intel can make it to 14 or 10 nm then it could be possible, but not at 22 nm... cheers!
Intel wants more power efficient chips
Posted on September 5, 2012 - 13:51 by Trent Nouveau
Intel says it remains on track to reduce the energy consumption of its flagship processor lineup by a whopping 41%.
As Shara Tibken of the Wall Street Journal points out, Intel's power reduction roadmap is being touted as demand slows noticeably for conventional laptops, with Santa Clara's Ultrabooks thus far failing to gain significant traction.
"[Yes], Intel supplies processors for more than 80% of the world's computers but long has struggled to move its technology into smartphones and tablets. Those products are typically powered by chip designs licensed by ARM, in large part because of their lower power consumption," Tibken explained.
"The chip maker, while targeting those products with a low-end line called Atom, also is improving its mainstream PC chips to help blur the lines between the categories."
In an effort to bolster its Ultrabook lineup, Santa Clara plans to showcase a number of new devices at its upcoming developers conference in San Francisco, including Ultrabook "convertibles," along with systems featuring gesture, voice and facial recognition.
In addition, Intel says the fourth iteration of its Core processor - aka Haswell - will boast improved performance, more sophisticated graphics and improved security capabilities. As noted above, the biggest change is related to power consumption, with the new processor expected to sip 10 watts versus 17 watts for comparable existing chips.
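The 41% figure quoted earlier follows directly from those two TDP numbers; a one-line sanity check:

```python
old_tdp, new_tdp = 17, 10  # watts, for comparable mobile parts
reduction = 1 - new_tdp / old_tdp
print(f"{reduction:.0%}")  # prints 41%
```

Note that TDP is a thermal design target rather than a direct measure of average energy drawn, so the real-world battery-life gain depends on workload, but the headline ratio checks out.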
"Basically it means we can make devices even thinner, even lighter and with an even higher battery life while still giving a full PC experience," Intel exec Kirk Skaugen told the WSJ. "Once and for all, you will feel comfortable walking out of your house and not carrying your power brick."
However, Patrick Moorhead, an analyst with Moor Insights & Strategy, said Intel needs to get power consumption down to approximately four watts for tablets without active cooling fans. Nevertheless, Haswell-powered devices will likely boast less-obtrusive fans as well as a thinner form factor compared to current Ultrabooks.
"It's going to be a killer part for convertibles and a killer part for notebooks," Moorhead added.
The article above is only talking about Haswell ULV (ultra-low-voltage) CPUs and other similar Haswell mobile CPUs, NOT the desktop Haswell chips...