
Talk:Harvard architecture


Misc


Will you elaborate on the Harvard architecture: how does it work, and why is this architecture contrasted with the von Neumann architecture?

The best way to understand this is to read both articles. However, notice that both (at the simplest conceptual level) operate on very similar fetch-execute cycles, currently only explained in the von Neumann architecture article; the main difference is where and how each gets its instructions and data. Most modern computers are actually a mixture of the two architectures. -- RTC 22:49 15 Jul 2003 (UTC)
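
To make the fetch-execute comparison concrete, here is a minimal toy sketch in C (my own illustration, not taken from either article): two fetch-execute loops for an invented three-opcode machine, identical except for whether instructions and data live in one memory or in two.

    /* Toy illustration (my own, not from either article): the same
       fetch-execute skeleton, differing only in where instructions
       and data live. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT, OP_LOAD, OP_ADD };  /* an invented 3-opcode machine */

    /* von Neumann: one memory holds both instructions and data. */
    static int run_von_neumann(const uint8_t mem[], int pc) {
        int acc = 0;
        for (;;) {
            uint8_t op = mem[pc++];            /* fetch from the single memory */
            if (op == OP_HALT) return acc;
            uint8_t addr = mem[pc++];
            if (op == OP_LOAD) acc = mem[addr];   /* data read: same memory */
            else               acc += mem[addr];  /* OP_ADD */
        }
    }

    /* Harvard: separate instruction and data memories, fetched independently. */
    static int run_harvard(const uint8_t prog[], const uint8_t data[], int pc) {
        int acc = 0;
        for (;;) {
            uint8_t op = prog[pc++];           /* fetch from instruction memory */
            if (op == OP_HALT) return acc;
            uint8_t addr = prog[pc++];
            if (op == OP_LOAD) acc = data[addr];   /* data read: other memory */
            else               acc += data[addr];  /* OP_ADD */
        }
    }

    int main(void) {
        /* Program: LOAD, then ADD, then HALT; operands are 2 and 3. */
        const uint8_t unified[10] = { OP_LOAD, 8, OP_ADD, 9, OP_HALT, 0, 0, 0, 2, 3 };
        const uint8_t prog[] = { OP_LOAD, 0, OP_ADD, 1, OP_HALT };
        const uint8_t data[] = { 2, 3 };
        printf("%d %d\n", run_von_neumann(unified, 0), run_harvard(prog, data, 0));
        return 0;
    }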

I would assume that the Z3 used a Harvard architecture, since its instructions were read off tape? I don't know if this is worth adding to the main text, and I won't add it myself as I'm not 100% sure.

Isn't this more a variant of the von Neumann architecture than a completely new architecture? The only difference is that the Harvard model can read instructions and data from memory at the same time, while the von Neumann model is still the basis of this model. I see the most important parts of the von Neumann model in the way this architecture handles the distinction between memory and the 'working brain', the CPU. Compare this to the neural network model, where no clear distinction between memory and CPU can be made.

I'm not that well versed in different forms of computer architecture, but I'm reading some information about neural networks; could someone with more architecture knowledge elaborate? --Soyweiser 30 Jan 2005

Actually, the Harvard architecture is older than the von Neumann architecture, not newer. When first introduced, the von Neumann offered one major advantage over the Harvard: it kept the "stored program" in the same memory as data, making a program just another kind of data that programs could load, manipulate, and output. The original Harvard architecture machines used physically separate memories, usually with different and incompatible technologies, making it impossible for programs to access the program memory as data. This difference made von Neumann machines much easier to program and quickly resulted in the decline of Harvard machines. It was only much later (with the introduction of instruction caches, or in special applications like digital signal processing) that the issue of speed reintroduced Harvard architecture features. -- RTC 07:42, 31 Jan 2005 (UTC)
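
To illustrate the "program is just another kind of data" point, here is a small self-contained C sketch (again my own, purely illustrative): a resident loader copies a program image, as ordinary bytes, into the very memory it then executes from, something a pure Harvard machine cannot do.

    /* Toy sketch (my own illustration): in a von Neumann machine a program
       image is just bytes, so a resident loader can copy it into the same
       memory it will then execute from. On a pure Harvard machine the
       memcpy target would be a separate store that instructions cannot
       write. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    enum { OP_HALT, OP_OUT };         /* an invented 2-opcode machine */

    static uint8_t mem[64];           /* one memory for code AND data */

    static void run(int pc) {
        while (mem[pc] != OP_HALT) {
            if (mem[pc] == OP_OUT) printf("%d\n", mem[pc + 1]);
            pc += 2;
        }
    }

    int main(void) {
        /* The "program" arrives as ordinary data (here a byte array; on a
           real machine, a card deck or tape read by I/O instructions)... */
        const uint8_t image[] = { OP_OUT, 42, OP_HALT };
        memcpy(mem, image, sizeof image);  /* ...loaded like any other data */
        run(0);                            /* ...then executed in place */
        return 0;
    }
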
Yes, you're right -- the only difference between Harvard architecture vs. Princeton architecture is "data and code all mixed up in one memory" vs. "data over here, programs over there in that completely distinct memory" [1]. So ... what's the difference between Princeton architecture vs. von Neumann architecture, if any? --DavidCary 02:17, 14 December 2005 (UTC)[reply]

The article doesn't explain the origins or timeframe of the architecture, hence the confusion about before/after von Neumann architecture. I can't add this myself since this is the information I'm looking for! 82.18.196.197 11:12, 14 January 2007 (UTC)[reply]

In the introductory paragraph it states the origin:
The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters (23 digits wide).
Click on the link for the timeframe in which that computer was made. -- RTC 04:03, 15 January 2007 (UTC)[reply]

Ads


It seems to me like the Microchip Technology mentions are a type of free Advertising. Should they really be a part of this article? Mr. Shoeless (talk) 22:51, 11 March 2020 (UTC)[reply]

They don't strike me as advertising - they're just given as examples of Harvard-architecture processors, and even the advantages mentioned are pretty much just "for the niche in which these chips are used, here are some reasons why a Harvard architecture can be an advantage", not "hey, buy these chips, they're good". Guy Harris (talk) 03:28, 12 March 2020 (UTC)[reply]

Article Quality and Expansions


Could someone please put at least one sentence of history here: what year was this (was it prior to von Neumann?) and the source (a citation like the von Neumann article has would be ideal). — Preceding unsigned comment added by 138.38.98.21 (talkcontribs) 14:25, 12 August 2018 (UTC)[reply]

I think this is a good, well-written article. I do think, however, that it could be a little better laid out. Some of the comments on here are pretty valid and should perhaps be incorporated into the article. --Gantlord 13:25, 15 September 2005 (UTC)[reply]

Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome.

Relation between architecture and instruction set


Is there a relation between the instruction set (RISC or CISC) and the architecture of the processor/microcontroller? I find Harvard architecture processors with RISC instruction sets and von Neumann architecture processors with CISC instruction sets (I'm not sure if this is always true). Is there a reason the architecture would be tied to a particular instruction set?

No, there is no relationship (e.g., RISC = Harvard / CISC = von Neumann); you can build either of them either way. The PowerPC, SPARC, and ARM are RISC and von Neumann. Seymour Cray's machines were RISC (although the term had not yet been invented when he designed most of them) and von Neumann. I don't believe the Harvard Mark I was RISC, as it had some rather complex instructions (e.g., interpolate a value from a function tape), and it was clearly Harvard.
The issue gets complex with modern machines that use caches. Both the Pentium and the PowerPC are considered von Neumann from the point of view of the programmer, but internally run as Harvard machines, with separate instruction and data caches. The Pentium is CISC and the PowerPC is RISC. To make it even more complex, current Pentium implementations "translate" the CISC instructions from the instruction cache into RISC-like microinstructions before actually executing them. -- RTC 04:25, 5 August 2006 (UTC)[reply]

Plagiarized?


This whole article is plagiarised from its primary source. —Preceding unsigned comment added by 202.40.139.164 (talk) 09:36, 30 April 2009 (UTC)[reply]

Evidence, please. Tell us where this primary source may be found so we can evaluate your claim. --Guy Macon (talk) 17:46, 12 August 2018 (UTC)[reply]

Contemporary examples?


If there are any currently used examples of the Harvard computer architecture, it might be nice to add such a section. DouglasHeld (talk) 08:11, 18 February 2019 (UTC)[reply]

In what way do you find Harvard architecture#Modern uses of the Harvard architecture to be lacking in currently used examples of the Harvard computer architecture? --Guy Macon (talk) 13:16, 18 February 2019 (UTC)[reply]

Introductory paragraph in re: initialization


The introductory paragraph has a poorly informed sentence "These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself." I can supply the information about why this is mistaken but I'm having no luck finding the right words.

There was no (programmable) read-only memory available for early machines. *All* early computers, of every architecture, relied on an operator to use a manual method to load a handful of instructions into the ubiquitous magnetic-core memory, instructing the machine to read more instructions from an easily controlled I/O device, sometimes followed by more from a harder-to-control I/O device. This process, analogous to pulling itself up by its bootstraps, is the origin of "booting" a machine. Then, as now, the processor's program counter was reset to the same address at power-on, even if done manually. Magnetic-core memory was volatile on a scale of days rather than microseconds. The use of program memory was typically organized to avoid overwriting the manually entered instructions so that the machine could be restarted more easily, at least Tuesday through Friday, and frequently after a two-day weekend, but this wasn't usually reliable after a three-day weekend or any longer period.

The manually entered bootstrap code for a Harvard architecture machine has more instructions than for a von Neumann machine, because data fetched from a device in I/O space must be loaded to a register before being written to data memory, and then, when enough bytes are available, subsequently written to program memory. Since these instructions were typically entered via a bank of toggle switches, with a bank of neon lights to provide visible feedback, each instruction was time-consuming to enter. PolychromePlatypus 22:29, 15 August 2019 (UTC) — Preceding unsigned comment added by PolychromePlatypus (talkcontribs)
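
As a purely illustrative sketch of that extra indirection (all names invented, with arrays standing in for the tape reader and the separate program memory; a real bootstrap was a handful of hand-toggled machine instructions, not C):

    /* Purely illustrative sketch of the extra indirection described
       above. Arrays stand in for the tape reader and for the separate
       program memory of a (modified) Harvard machine. */
    #include <stdint.h>
    #include <stdio.h>

    static const uint8_t tape[] = { 0x12, 0x34, 0x56, 0x78 };  /* "I/O device" */
    static uint16_t prog_mem[2];       /* separate program store */

    int main(void) {
        uint8_t buf[2];                /* staging buffer in DATA memory */
        unsigned n = 0;
        for (unsigned addr = 0; addr < 2; addr++) {
            for (int i = 0; i < 2; i++)
                buf[i] = tape[n++];    /* I/O -> register -> data memory */
            /* The extra Harvard step: staged bytes are copied onward into
               the separate program memory; a von Neumann loader would have
               deposited them directly where they will execute. */
            prog_mem[addr] = (uint16_t)(buf[0] << 8 | buf[1]);
            printf("prog_mem[%u] = 0x%04X\n", addr, (unsigned)prog_mem[addr]);
        }
        return 0;
    }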

"ubiquitous magnetic core memory"
Magnetic-core memory first appeared in 1953. The machine that gave the "Harvard architecture" the "Harvard" in its name, the Harvard Mark I (which is mentioned in the paragraph that contains the sentence to which you're objecting), was finished in 1944, so the first Harvard architecture machine was created at a time when magnetic-core memory was not only not ubiquitous, it was nonexistent.
The instruction memory of the Mark I was a punched paper tape; the instructions operated on a set of separate storage registers (the "electro-mechanical counters" referred to in this article's introductory paragraph), which couldn't contain instructions. There were also dials that could be set to hold constant values.
So, in fact, the "poorly informed" sentence is entirely correct when talking about the Harvard Mark I. The data storage - the registers - was within the processing unit, and the instruction storage was a paper tape that could not, in fact, be read as data by instructions on that paper tape.
Some other examples of those early Harvard architecture machines are the Zuse Z1, Zuse Z2, Zuse Z3, and Zuse Z4 (punched-film reader for instructions, separate memory for data). Guy Harris (talk) 01:20, 16 August 2019 (UTC)[reply]
I.e., the Harvard architecture machines cannot load any instructions, as the instruction memory is read-only to the machine - the machine can't fill in holes or punch holes in the tape put into the machine by the operator. There is no, and can be no, "manually entered bootstrap code for a Harvard Architecture machine", as the instruction memory really is read-only, either because it's physically hard to modify (punched paper tape or punched film, for example), or impossible to modify (core rope memory), or is attached to a memory bus that does not support write operations.
A machine with a modified Harvard architecture of the first type ("Split-cache (or almost-von-Neumann) architecture") obviously can have boot code, as the memory for instructions and data is the same (at most, it might have to do instruction cache flushes or otherwise ensure that nothing from the region of memory into which the code is being read is in the instruction cache). A machine with a modified Harvard architecture of the second type ("Instruction-memory-as-data architecture") could have boot code that writes into the instruction memory, and a machine with a modified Harvard architecture of the third type ("Data-memory-as-instruction architecture") could have boot code that runs in one segment but writes code to another segment, but this page isn't about modified Harvard architecture machines; it's about pure Harvard architecture machines, where the machine is completely incapable of loading or modifying code. Guy Harris (talk) 02:41, 16 August 2019 (UTC)[reply]
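
For the first ("split-cache") type, the cache maintenance alluded to above looks roughly like this on a modern hosted system; a sketch assuming POSIX mmap, the GCC/Clang builtin __builtin___clear_cache, and x86-64 code bytes for "mov eax, 42; ret":

    /* Sketch of the "split-cache" case, assuming POSIX mmap and the
       GCC/Clang builtin __builtin___clear_cache; the code bytes are
       x86-64 for "mov eax, 42; ret". */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        const uint8_t code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        memcpy(buf, code, sizeof code);  /* "load" the code as ordinary data */

        /* The cache-maintenance step: ensure the instruction cache holds
           nothing stale for this region. A no-op on x86, whose hardware
           keeps the split caches coherent, but required on many ARM,
           MIPS, and PowerPC systems. */
        __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);

        int (*fn)(void) = (int (*)(void))(void *)buf;
        printf("%d\n", fn());            /* prints 42 */
        return 0;
    }
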
Well... almost. Definitely incapable of modifying code, and the CPU running the program is definitely incapable of loading code, but that doesn't mean that the machine as a whole is incapable of loading code. It is possible to have dedicated hardware that loads the (read only to the actual processor, writable by the dedicated hardware) code from front panel switches, teletype tape, etc. I am not sure if any of the old computers ever actually did this but it is possible. Perkin Elmer had a special version of their minicomputer that actually had an EPROM programmer and a small UV lamp permanently built into the hardware that could erase and reprogram the (read only to the CPU) EPROM that held the boot code without removing the chip. This was done with the CPU totally shut down. This wasn't actually Harvard, of course, just RAM and ROM in the same address space. --Guy Macon (talk) 04:07, 16 August 2019 (UTC)[reply]
There is a similar situation with some modernish microcontrollers. While they don't actually have a Harvard architecture with separate data paths, often the code is in some kind of (to the CPU) ROM, but you can use external hardware to, say, pull a normally 5V line to 12V and enter a special mode where you can change the contents of the "ROM". And of course many of the newer microcontrollers have the boot code in flash that the CPU can change, but special hardware makes one small part of the flash unwritable except at the factory. This typically holds a tiny bit of code that loads new boot code into flash from USB or serial. Still not Harvard, of course, just writable flash, unwritable flash, and RAM in the same address space.
Change "machine" to "CPU" and the above is 100% accurate in all cases. --Guy Macon (talk) 04:07, 16 August 2019 (UTC)[reply]

Note about STCMicro


I removed a note that some "8051-compatible microcontrollers from STC have dual ported Flash memory" in revision 912725050. In revision 912725050, Guy Macon restored it. In the name of brevity, I might have gotten a little sloppy and not given sufficient rationale in my description of the change. I don't want to start an edit war by reverting the most recent change without a discussion, so here goes again:

  • The most glaring problem was that the {{note}} template should not have been used. This has since been changed to an {{efn}} template by a third editor, but I still don't think it makes sense as a footnote.
  • Another issue that I didn't really touch on in my change description is that it's not clear the note is accurate (and it's certainly not sourced). Actual dual-ported flash is uncommon in micros and nothing I can find indicates that the devices in question use it.
  • The final concern was that the note reads, to me at least, as if this setup (connecting flash to a core's instruction bus, possibly through a cache, as well as to the data bus via a peripheral or SFR) is exclusive to or an innovation of these micros. On the contrary, I don't think they're even a notable example; I only found two references to STCMicro on English Wikipedia: a single line in a disambiguation page and a picture and reference link on Intel MCS-51. Most other examples in the article cite more than a single instance or at least point to something more notable.

A more generic discussion of the instruction and data busses on typical flash-based microcontrollers would be useful and could certainly cite examples, but I don't think this parenthetical belongs in the article in its current form.

--50.239.58.122 (talk) 18:04, 4 September 2019 (UTC)[reply]

Thank you for discussing this, and I am certainly open to having my mind changed.
I think the key question is whether the statement...
"The IAP lines of 8051-compatible microcontrollers from STC have dual ported Flash memory, with one of the two ports hooked to the instruction bus of the processor core, and the other port made available in the special function register region."
...implies, as 50.239.58.122 thinks it does...
"the note reads, to me at least, as if this setup (connecting flash to a core's instruction bus, possible through a cache, as well as the data bus via a peripheral or SFR) is exclusive to or an innovation of these micros."
To me it reads the same as "Philadelphia has an NFL football team, the Eagles", which does not imply that having a team is unique to Philadelphia.
One possible solution would be to say something like "Some cities have NFL teams. Examples include the Philadelphia Eagles, Dallas Cowboys, and Seattle Seahawks." Could we do something similar here, listing multiple 8051-compatible microcontrollers with dual-ported Flash memory? --Guy Macon (talk) 21:19, 4 September 2019 (UTC)
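
For what it's worth, the SFR-side port the note describes is normally driven by a short in-application-programming routine along these lines; a Keil C51 sketch in which the register names, addresses, and magic values follow typical STC datasheet listings but are unverified assumptions here:

    /* Illustrative Keil C51 sketch of driving the SFR-side port the note
       describes. Register names, addresses, commands, and the trigger
       sequence follow typical STC datasheet listings but are UNVERIFIED
       assumptions here; check the specific part's datasheet. */
    sfr IAP_DATA  = 0xC2;   /* data byte seen through the second flash port */
    sfr IAP_ADDRH = 0xC3;   /* flash address, high byte */
    sfr IAP_ADDRL = 0xC4;   /* flash address, low byte */
    sfr IAP_CMD   = 0xC5;   /* 1 = read, 2 = program */
    sfr IAP_TRIG  = 0xC6;   /* magic-sequence trigger */
    sfr IAP_CONTR = 0xC7;   /* enable bit plus wait-state setting */

    unsigned char iap_read(unsigned int addr)
    {
        IAP_CONTR = 0x83;        /* enable IAP; wait states are part-specific */
        IAP_CMD   = 0x01;        /* command: read one byte */
        IAP_ADDRH = addr >> 8;
        IAP_ADDRL = addr & 0xFF;
        IAP_TRIG  = 0x5A;        /* trigger sequence (values vary by family) */
        IAP_TRIG  = 0xA5;
        return IAP_DATA;         /* the byte arrives via the data-side port */
    }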

The Myth of the Harvard Architecture


The claims made about the meaning, origins, and benefits of the so-called 'Harvard architecture' have bothered me for years, from both technical and historical perspectives. Starting in 2019 I spent two years researching this topic, and then a year getting the resulting paper 'The Myth of the Harvard Architecture' through the peer-review process of the IEEE Annals of the History of Computing (considered the 'journal of record' for computing history); it has just been published. If you don't have access to that journal you can download a pre-publication version from my own website here: http://metalup.org/harvardarchitecture/The%20Myth%20of%20the%20Harvard%20Architecture.pdf . Please read it.

I believe this is the only peer-reviewed research paper on the subject of the 'Harvard architecture', and on that basis alone it should be considered a significant resource.

Frankly, based on the findings of my research (the paper lists 40+ references), I would like to rewrite this whole Wikipedia article, which perpetuates several of the myths I exposed. I would prefer it if others would read my paper and make changes, but if I don't see anyone else taking up the baton I will start to make changes. Rpawson (talk) 15:46, 29 September 2022 (UTC)[reply]