Computer Engineering (and Computer Science)

<p>There's far more to computer engineering than the level of integration in the microprocessor. And JoeJoe's correct - the vast majority of computer engineers aren't designing highly integrated processor chips. There are far more engineers who put a microprocessor into their designs than who design the microprocessor itself.</p>

<p>Look around you - microprocessors/microcontrollers, and electronic circuits in general, are ubiquitous today. The phone used to be a simple device, but my iPhone has substantial processing and graphics capabilities. My Hitachi plasma TV runs a Linux OS. So does my cable box. Cars now have quite a number of microprocessors in them. And the list goes on.</p>

<p>Again, it's not just the design of the microprocessor itself but, more broadly, the design of innovative uses of microprocessors and other technologies that employs the majority of CEs.</p>

<p>"There's far more to computer engineering than the level of integration in the microprocessor."</p>

<p>This is, of course, correct. I was just using microprocessors as a familiar example. Saying hardware is dead is like the "640K ought to be enough for anybody" memory quote attributed to Bill Gates.</p>

<p>^^ I was responding to Dr. Horse's implication that hardware is approaching its peak due to the level of integration of an IC. My point is that there's far more to the field of computer engineering than the level of integration of a microprocessor and that far more CEs are employed in areas other than the design of the microprocessor itself.</p>

<p>
[quote]
Those that make nebulous statements say nothing.

[/quote]
</p>

<p>Those that make nebulous statements say nothing.</p>

<p>
[quote]

Where has it been noted as superior? How about a link? Who made 45 nm processors first? Were there remarkable performance per watt gains? Why is AMD using a 5-year-old mobile design? Who is winning 90% of mobile designs? What is AMD's answer to Atom?

[/quote]

Actually Panasonic was first to hit 45nm.<br>
AMD moves to 45-nm process node with Shanghai</p>

<p>Intel to Lose its Lead in Chip Manufacturing Tech in 2009, Sort Of - Tom's Hardware<br>
ATI, an AMD company.</p>

<p>AMD manufacturing plan key to chipmaker's future | Reuters</p>

<p>
[quote]

Intel has been making multicore chips for years. They don't seem to have the yield problems that AMD has. Perhaps it's due to a better process! AMD is going to a foundry model so that will further decouple process from design. It's hard to see how AMD is going to improve process here. AMD is in serious financial trouble - Barcelona was a total bomb in 2007.

[/quote]
</p>

<p>They all have the same yield problems, and so do Panasonic and Mitsubishi. Read the AMD links above.</p>

<p><a href="http://www.gigascale.org/pubs/1404/saeed_shamshiri_poster.pdf">http://www.gigascale.org/pubs/1404/saeed_shamshiri_poster.pdf</a></p>

<p>
[quote]

For AMD, sure. Though they did improve IPC in other ways after X2.

[/quote]

It's not just AMD: if processor performance could still be increased via the traditional methods, there would be no need for multicore processors. It was the inability of AMD and Intel to create more powerful and efficient processors that led to multicore.</p>
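
<p>To make that concrete, here's a toy sketch in C. The model and numbers are my own illustration, not anything from AMD's or Intel's documents: dynamic CMOS power scales roughly as C·V²·f, and sustaining a higher clock generally requires a higher voltage, which is why two cores at a stock clock beat one core at double the clock on power.</p>

[code]
#include <stdio.h>

int main(void) {
    /* Toy model of the "power wall" -- illustrative numbers only.
       Dynamic CMOS power scales roughly as P ~ C * V^2 * f, and a
       higher f typically requires a higher V, so frequency scaling
       burns power much faster than adding a core. Values are
       normalized to the base core. */
    double C = 1.0, V = 1.0, f = 1.0;

    double one_core   = C * V * V * f;                         /* 1.00 */
    double two_x_freq = C * (1.3 * V) * (1.3 * V) * (2.0 * f); /* ~3.38, assumes ~30% more V */
    double two_cores  = 2.0 * one_core;                        /* 2.00 */

    printf("one core  @ f : power %.2f, throughput ~1x\n", one_core);
    printf("one core  @ 2f: power %.2f, throughput ~2x\n", two_x_freq);
    printf("two cores @ f : power %.2f, throughput ~2x (if the work parallelizes)\n", two_cores);
    return 0;
}
[/code]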

<p>If this paper from Intel doesn't say it for you, then I don't know what you want. My entire argument is based on this official document: <a href="http://www.cse.ohio-state.edu/%7Epanda/775/slides/intel_quad_core_06.pdf">http://www.cse.ohio-state.edu/~panda/775/slides/intel_quad_core_06.pdf</a></p>

<p>I still don't know what you mean by IPC; you never stated it.</p>

<p>
[quote]

Is all of your reading four years out-of-date?

[/quote]

Not at all, actually; it seems you are about 30 years out of date if you think OSes still use hardware for IPC.</p>

<p>
[quote]

Why would anyone think about an operating system mechanism in the context of a hardware performance discussion?</p>

<p>Have you ever heard of Macro-Ops Fusion?
Have you seen MMX, SSE, SSE2, SSE3, SSSE3, SSE4? Have you seen the architecture changes to go to 256 bit vector units in x86 processors? Have you heard of x86-64?</p>

<p>IPC improvements from Core to Core 2 were huge. They were small in the move to Penryn. They will again be large in going to Nehalem. IPC. It's not just for breakfast anymore.

[/quote]
</p>

<p>Well, because you used the term IPC, which to me is the standard computing acronym for "inter-process communication," something that hasn't been implemented in hardware for quite some time. I asked you to qualify your use of the term and you didn't.</p>

<p>Um, yeah, I've heard of all of them; I don't really know why you are bringing up instruction sets and decoders. But OK: if you are talking about instructions per cycle, then of course I recognize there have been great improvements. But this really has nothing to do with the processor's physical limitations. I've never said we are at that limit; I just agree with Mr. Moore and say we will reach it.</p>

<p>Transistors ARE reaching a physical limit. It is not a myth. Some believe that this means the end of EE... this is debatable.
Physically, you can only reduce transistor size to about 10 Angstroms, a size we're expected to reach within the next 10-15 years. The thing is that electronics are widely used, and they are used in many, many different ways. What's most likely is that EE will simply not exist in the way it does today (i.e., no more semiconductor advancements). However, EE is very, very broad, and semiconductors are not its only focus. I don't think that EE is a bad choice of major, but a PhD in semiconductor device physics might be.
Research in micro- and nano-electronics, optics, as well as quantum devices and nanotubes, are some of the more potentially fruitful areas that fall partly or completely within EE.</p>
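
<p>As a rough sanity check on that timeline, here's a back-of-the-envelope sketch. The model and starting points are my own assumptions, not from this thread: feature size shrinking ~0.7x per process generation, one generation every ~2 years. With these particular numbers you land in the 18-21 year range; the answer is very sensitive to the assumed cadence and to which feature you measure, which is why published estimates like 10-15 years vary.</p>

[code]
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Crude scaling model -- my own assumptions, not from the posts:
       feature size shrinks ~0.7x per generation, one generation
       roughly every two years. */
    const double limit_nm      = 1.0;  /* ~10 Angstroms */
    const double shrink        = 0.7;
    const double years_per_gen = 2.0;

    /* Two illustrative starting points: the 45 nm node name, and a
       smaller physical gate length (assumed ~25 nm here). */
    double starts_nm[] = { 45.0, 25.0 };

    for (int i = 0; i < 2; i++) {
        double gens = log(starts_nm[i] / limit_nm) / log(1.0 / shrink);
        printf("from %.0f nm: ~%.1f generations, ~%.0f years\n",
               starts_nm[i], gens, gens * years_per_gen);
    }
    return 0;
}
[/code]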

<br>


<br>

<p>Node size is only one factor, though it has been an important one for minimizing costs. The move to different process nodes for their CPUs and GPUs has me wondering what happened to Fusion. Perhaps it's being pushed out.</p>

<p>nVidia has had failure after failure for the last two years, and even Apple accuses them of telling lies about their chips. nVidia's 8xxx and 9xxx chips both have heat and other issues, and I don't think that a shrink is going to solve their problems. As far as AMD and ATI go, why do you think their stock has gone from $44 to $2? They spent 2007 telling lies about when Barcelona would ship. The absolutely crazy thing is why ATI hasn't been able to take advantage of nVidia's blunders in the marketplace. Process node is important, but so are HK/MG and owning the fabs. As Jerry Sanders said, "Real men have fabs." Why is AMD fabbing out ATI parts to Taiwan Semi?</p>

<br>


<br>

<p>Well, this article doesn't actually put AMD in a glowing light and it's from back in August.</p>

<p>"The company has also reported seven straight quarterly net losses in
a row, and it's hard-pressed to afford building a new, next-generation
chip plant, which can cost $3.5 billion, with $5.6 billion in
long-term debt on its books."</p>

<p>"AMD has already raised some money to stem cash flow problems by
selling more than $600 million in AMD stock in December to an Abu
Dhabi investment fund. And the cash from a new deal could go either to
AMD or to fund a new joint venture, analysts said."</p>

<p>They paid, what, $11 per share, so they have an 80% loss. Those guys in Abu Dhabi are pretty bright.</p>

<p>To bring you up to date, an Abu Dhabi company bought AMD's Dresden
fabs and assumed a chunk of AMD's debt. The Abu Dhabi company planned
to contribute billions more to The Foundry Company (the name of the
company owning the fabs). Of course, this was before the price of oil
collapsed. As you may know, Kuwait cancelled a $17.4 billion joint-venture
deal with Dow Chemical, citing the global economic crisis and its
impact on the country. If AMD doesn't get cash infusions in 2009 and
can't hit up Wall St for additional funding, they're probably going to
go belly up.</p>

<p>And then there's the news this afternoon that AMD will cut 600 jobs
and take a $70 million charge. It looks like AMD will report another
annual loss, the third year in a row. I had a look at the total
profits and losses at AMD through its history, and right now it looks
like the company has lost money, net, over its entire existence. Of course
Hector pays himself well, even after running the company into the
ground.</p>

<br>


<br>

<p>You're dead wrong on that one. Intel went with an MCM design for the
first and second generations of Core 2. AMD went with their "True
Native Quad Core" design, which they touted over and over again in
2007. They did this at 65 nm and it turned out to be a disaster.</p>

<p>Intel took a much more conservative approach using multichip modules.
They took dual-core chips and put them together in an MCM, which
resulted in better yields, the flexibility to manufacture both
dual-core and quad-core CPUs depending on market demand, and the
ability to bin-match pairs of chips. This gave them the flexibility to
package a pair of high-binning parts, a pair of low-binning parts, or a
combination. After their 45nm process was proven on Penryn, they
went to a native quad-core design with Nehalem.</p>
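
<p>The yield argument is easy to see with a toy Poisson defect model. This is my own illustration with made-up numbers, not real AMD or Intel data: the yield of a die of area A at defect density D is roughly Y = exp(-A·D), so a monolithic quad with twice the area of a dual-core die yields only the square of the dual's yield, while an MCM only ever discards bad dual-core-sized dies.</p>

[code]
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Toy Poisson defect model -- illustrative numbers only:
       a die of area A at defect density D yields Y = exp(-A * D). */
    double D      = 0.5;  /* defects per cm^2 (made up) */
    double A_dual = 1.0;  /* dual-core die area in cm^2 (made up) */

    double y_dual = exp(-A_dual * D);        /* one dual-core die        */
    double y_quad = exp(-2.0 * A_dual * D);  /* monolithic quad, ~2x area */

    printf("dual-core die yield   : %4.1f%%\n", 100.0 * y_dual);
    printf("monolithic quad yield : %4.1f%%\n", 100.0 * y_quad);
    /* An MCM pairs two known-good dual-core dies, so defective silicon
       is thrown away in dual-sized units rather than quad-sized ones. */
    return 0;
}
[/code]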

<br>


<br>

<p>I didn't say that there weren't gains from multicore. But there are
some pretty big current limitations on the real-world gains you get just
by adding new cores. Intel is working on autoparallelization, but the
problems are still huge. Having 8 cores isn't going to make Word run any faster.</p>

<p>Your statement is just plain wrong. AMD went to multicore as a quick
way to get a boost on the cheap; K10 was a long, long way off. Intel
responded with Core Duo. But there have been major IPC improvements
from both Intel and AMD since then, though the improvements in Intel's
chips dwarf those in AMD's if you compare current offerings. AMD and
Intel did create more powerful and efficient cores - the history is
there if you care to read it.</p>

<p>If you want an example of IPC improvements that result in more work
done per watt, take a look at</p>

<p><a href="http://www.techreport.com/r.x/core-i7-940/cine-power-task-energy.gif">http://www.techreport.com/r.x/core-i7-940/cine-power-task-energy.gif</a></p>

<p>It's a nice chart, as it expresses task energy: the total amount of
energy that it takes to accomplish a particular task. You can see that
Core i7 is more efficient than the second-generation Core 2 chips and
that the second-generation Core 2 chips are more efficient than the
first-generation Core 2 chips. The first- and second-generation
Core 2 chips, along with the Core i7 chips, are all more
efficient than AMD's offerings. Note that all of the chips except for
the two low-end AMD parts are quad-core, for an apples-to-apples
comparison.</p>
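
<p>For anyone unclear on the metric: task energy is just average power multiplied by the time the task takes, which is why a faster chip can draw more watts and still use less total energy. A tiny sketch with made-up numbers (the real measurements are in the linked chart):</p>

[code]
#include <stdio.h>

int main(void) {
    /* Task energy = average power x time to finish the task.
       Watts and seconds below are hypothetical, for illustration. */
    struct { const char *chip; double watts; double seconds; } runs[] = {
        { "fast, power-hungry chip", 200.0,  60.0 },
        { "slow, lower-power chip",  120.0, 110.0 },
    };

    for (int i = 0; i < 2; i++) {
        double joules = runs[i].watts * runs[i].seconds;
        printf("%-24s %5.0f W x %5.0f s = %6.0f J\n",
               runs[i].chip, runs[i].watts, runs[i].seconds, joules);
    }
    /* The faster chip draws more power but finishes sooner,
       so it uses less total energy for the task. */
    return 0;
}
[/code]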

<p>If you really want to see a blowout, take a look at "A PERFORMANCE
EVALUATION OF THE NEHALEM QUAD-CORE PROCESSOR FOR SCIENTIFIC COMPUTING"
at <a href="http://www.worldscinet.com/ppl/mkt/free/S012962640800351X.html">http://www.worldscinet.com/ppl/mkt/free/S012962640800351X.html</a></p>

<p>"We show that Nehalem outperforms Barcelona on memory-intensive codes
by a factor of two for a Nehalem node with 8 cores and a Barcelona
node containing 16 cores. Further optimizations are possible with
Nehalem, including the use of Simultaneous Multithreading, which
improves the performance of some applications by up to 50%."</p>

<p>The whole article is here: A closer look at the Core i7-940 - The Tech Report - Page 1,
and it goes through a battery of performance tests. In general, AMD
chips finish near the bottom compared to the last three generations
of Intel chips.</p>

<br>


<br>

<p>Instructions per clock.</p>
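
<p>That is, instructions retired divided by clock cycles consumed. A trivial sketch, with hypothetical counter readings of the kind a profiler would report:</p>

[code]
#include <stdio.h>

int main(void) {
    /* IPC in the hardware sense: instructions retired / cycles consumed.
       These counts are made up, standing in for performance-counter
       readings over a one-second sample. */
    unsigned long long instructions = 6000000000ULL; /* retired       */
    unsigned long long cycles       = 3000000000ULL; /* 3 GHz x 1 sec */

    printf("IPC = %.2f\n", (double)instructions / (double)cycles);
    return 0;
}
[/code]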

<p>Something as simple as an integrated memory controller can improve
performance by thirty percent. Multicore isn't going to improve
single-threaded application performance by that amount.</p>
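
<p>Amdahl's law makes the contrast plain. This is my own illustration of the point, not something from the posts above: with parallel fraction p on n cores, the speedup is 1/((1-p) + p/n), so a mostly serial program barely moves no matter how many cores you add, while an IPC gain speeds up everything.</p>

[code]
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n) for a program whose
   fraction p parallelizes perfectly across n cores. Workload mixes
   below are made up for illustration. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    printf("10%% parallel, 8 cores: %.2fx\n", amdahl(0.10, 8)); /* ~1.10x */
    printf("90%% parallel, 8 cores: %.2fx\n", amdahl(0.90, 8)); /* ~4.71x */
    printf("a 30%% IPC gain       : 1.30x on everything, threaded or not\n");
    return 0;
}
[/code]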

<br>


<br>

<p>That was your assumption. Most people don't talk about IPC as an OS
construct in a hardware context.</p>

<br>


<br>

<p>You posted both and I just asked you which you thought was relevant. I thought
that you would see the obvious.</p>

<br>


<br>

<p>Why are instruction sets important? In HPC applications, you have to use SIMD
instructions to approach maximum theoretical FP throughput. You can only get
to about 50% using scalar instructions. So there are some big IPC performance
improvements to be had using the newer instructions. AMD is several years behind
in this area - no clue as to why.</p>
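
<p>To see why, consider that one SSE instruction operates on four packed single-precision floats while a scalar instruction handles one. A minimal sketch, my own example using standard SSE intrinsics:</p>

[code]
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

#define N 1024

int main(void) {
    /* Scalar code does one float multiply per instruction; one SSE
       instruction multiplies four packed floats -- the basis of the
       scalar-vs-SIMD throughput gap described above. */
    float a[N], b[N], scalar_out[N], simd_out[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* Scalar: one element per iteration. */
    for (int i = 0; i < N; i++)
        scalar_out[i] = a[i] * b[i];

    /* SSE: four elements per iteration. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&simd_out[i], _mm_mul_ps(va, vb));
    }

    printf("scalar_out[10] = %.1f, simd_out[10] = %.1f\n",
           scalar_out[10], simd_out[10]); /* both print 20.0 */
    return 0;
}
[/code]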

<p>It's not in the near future and is of no concern for those considering majors in
CE and EE. That's the real point.</p>

<p>"Actually Amd's global manufacturing process has been noted as superior
to intels, which reflects in its prices."</p>

<p>This is utter rubbish. AMD's prices reflect the fact that their chips are
uncompetitive.</p>

<br>


<br>

<p>Yeah, they had a lot of hubris going with a huge quad-core monolithic
design. In retrospect, AMD did agree that they would have been better
off doing what Intel did and going with a multichip module. In fact,
AMD announced that they would do exactly that for their 8-core chip,
if they ever get it off the ground.</p>

<p>I hang out with hardware engineers on a forum, and they discuss Intel
and AMD performance and hardware in great detail. There are Intel
employees, process engineers, chip engineers, consultants, and me, the
software guy. I've followed the two companies closely for several
years and know the history of their chips going back to a year after
K8 was released, so I have a decent recall of their battles. For
anything else, I can just ask the hardware guys and they'll provide me
with a link or information from memory. One of the guys there is Paul
DeMone. He's written for several technical publications, has at least
one hardware patent, and is referenced widely on the web as a hardware
expert.</p>

<p>"Transistors ARE reaching a physical limit. It is not a myth. Some believe that this means the end of EE... this is debatable."</p>

<p>I'm not arguing against this - I'm after a practical discussion about majors. There are lots of chips made at 130 nm and 180 nm, and I'd guess that chips will be made at those nodes for some time to come. Fabs cost a lot of money, and many chip designers do not want to revalidate at new nodes.</p>

<p>"Physically, you can only reduce transistor size to about 10 Angstroms, a size we're expected to reach within the next 10-15 years. The thing is that electronics are widely used, and they are used in many many different ways."</p>

<p>Would you suggest that someone choose CS over EE and CE right now because we may one day hit physical limits?</p>

<p>
[quote]
> Actually Panasonic was first to hit 45nm.</p>

<p>No, they weren't. They released samples in October 2007 - Chipworks</p>

<p>Samples of Penryn surfaced late in 2006 along with benchmarks. Intel made Penryn samples broadly available in July 2007.</p>

<p>Operating Systems and Servers News
18 July 2007</p>

<p>Intel makes Penryn samples available early</p>

<p>By Sumner Lemon, IDG news service</p>

<p>Intel will offer manufacturers samples of its Penryn server chips before the planned launch later this year.</p>

<p>"We're now broadly sampling [Penryn] for all the various platforms," said John Antone, vice president and general manager of Intel Asia-Pacific.</p>

<p>Techworld.com - Intel makes Penryn samples available early</p>

<p>Intel ramped up production of their 45 nm chips well before the launch. It appears that Panasonic only released samples in October. Who knows when they actually made it into product. It was mostly a publicity stunt for the gullible.

[/quote]

Panasonic to ship 45nm chip ahead of Intel - Register Hardware<br>
panasonic 45nm - Google Search</p>

<p>To be honest, I don't care enough to rebut anymore. You are disagreeing with official Intel documents and not even taking them in the proper context. So I will just say that we will have to agree to disagree, or I'll say you win.</p>

<p>The website/forum sounds like a nice place; why don't you post a link? Sounds like a great educational experience.</p>

<p>"To best honest I don't care enough to rebut anymore. You are disagreeing with official intel documents and not even taking them in the proper context. So I will just say that we will have to agree to disagree or il say you win."</p>

<p>Intel had 45 nm chips before Panasonic did and they had them in volume. They started producing them about two quarters before Panasonic released samples and Intel released samples in 2006.</p>

<p>Just a stunt by Panasonic.</p>

<p>You wrote:</p>

<p>"It was the inability of AMD and Intel to create more powerful and efficient
processors, that lead to multicore."</p>

<p>This is what I have a major disagreement with. History shows that individual cores by both AMD and Intel have become more powerful and efficient.</p>

<p>The problem is that they didn't have them before. Intel was second. Just follow the links and read.</p>

<p>The third page of the Intel document shows my point. Under "Redefining Performance," it states that the move to multicore was due to those limitations.</p>


<p>It's really anyone's guess at this point, in my opinion. We might be able to say with reasonable certainty that those wishing to work on cutting-edge technology in ~20 years might want to specialize in something other than semiconductors. We can also suppose that microchips will be widely in use for long enough that current engineers need not be concerned. However, if semiconductor advancements slow down or stop, this will adversely affect the number of available jobs in this area... you may no longer need very many design-level engineers, but rather engineers to test and/or maintain product lines that use chips. For example, today, every time the feature size of a chip can physically get smaller, we need engineers to design chips that much smaller; if scaling stops, we will probably need fewer of them, or none.</p>

<p>But it is also possible that microchips will take an entirely new direction (nanotubes?), and chances are that this kind of thrust would be led by those who specialized in semiconductors in the first place.</p>

<p>Also, average EE jobs in electronics (i.e., design, test, and maintenance of electronic devices) will probably not be affected very much by this, for at least our generation (IMO). We might also see greater interaction between EE and other fields like Bio, Materials, Chem, Neuro... a push for green energy, alternative energy sources, embedded sensing networks, increased automation, etc. will also give EE enough to chew on, research or otherwise.</p>

<p>None of the professors I have talked to about this are particularly concerned about the possible slow-down of the semiconductor industry. At least, no one feels that it will, once and for all, destroy EE or anything like that. But there will probably be changes... some good, some bad. But that's just life... it can happen to anything.</p>


<p>"The problem is that they didnt have them before. Intel was second. Just follow the links and read."</p>

<p>This is baloney. Follow the timeline of when chips actually showed up - don't listen to the gamesmanship of companies. Intel launches in huge quantities, so they do product ramps up to two quarters ahead of time. Their 45 nm chips showed up in the wild almost a year before Panasonic was sampling, and Intel was broadly sampling several months before Panasonic was. Intel had the first 45 nm chips. Panasonic just pulled a stunt.</p>

<p>"The 3rd page of the intel document shows my point. Under "Redefining Performance", it states the reasons and due to limitations."</p>

<p>If you look at the actual performance improvements and benchmarks, you'll see that Intel and AMD both improved performance greatly on a per-core basis. You can talk about what's in a document; I'm talking about what has actually been observed. Which do you think is more accurate: what you read in a paper, or what has actually been observed in real life? Do you know why you do labs in university? Think!</p>