12 events
Jul 8, 2020 at 14:09 answer added Raffzahn timeline score: 0
Jul 8, 2020 at 12:58 comment added Raffzahn @NoNameQA you're asking essentially the same question over and over again, now for something like the 3rd time (here, here and now this one). This only leaves two possible reasons. Either you don't have a grasp of the very basics, or we're dancing around a problem created by a language barrier - maybe even a double-sided one. Considering that addressing and word size vs. memory interface is arguably the most fundamental area in computing, I rather believe it's language. Do you agree?
Jul 8, 2020 at 4:30 comment added No Name QA @raffzahn the question is: if IBM made a switch from word-addressable to byte-addressable memory, there should be good reasons behind it. I thought it was made due to attempts to create compatible machines. And I'm just wondering what the real reason was.
Jul 8, 2020 at 2:49 answer added dave timeline score: 4
Jul 7, 2020 at 22:40 comment added davidbak They also previously had a machine with decimal digit addressing: the 1620. Before the 360 there weren't "lines" of computers, at least at IBM. There were only computers addressing various markets and "sizes": e.g., large scientific (7090), small scientific (1620), small business (1401). The 360 was IBM's attempt to make a broad architecture that could support a family of implementations of various sizes and for various markets. So all decisions were made anew, looking to the future (though informed by their previous experiences). (See Brooks's The Mythical Man-Month for more on this.)
Jul 7, 2020 at 22:22 answer added rwallace timeline score: 0
Jul 7, 2020 at 21:56 comment added Michael Graf When your word size is 32 bits, and memory is expensive, chances are that there are smaller units within that word (BCD digits, characters, bytes, half-words) you may want to address individually. At some point in the design process, someone probably sat down and tried to work out which of these sub-units would be most commonly used, and which would profit the most from having hardware / instruction set support. Apparently byte addressability was considered important, but (AIUI) half-word and word access exist, too.
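As a minimal sketch of what addressing a sub-unit within a word means in practice, assuming a 32-bit (4-byte) word as in the comment above: a byte address splits into a word address plus a byte offset within that word. The address value and variable names below are purely illustrative, not anything from the thread or from S/360 itself.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Hypothetical byte address on a machine with 32-bit (4-byte) words. */
    uint32_t byte_addr = 0x1007;

    uint32_t word_addr   = byte_addr >> 2;  /* which 4-byte word the byte lives in */
    uint32_t byte_offset = byte_addr & 0x3; /* which of the 4 bytes within that word */

    printf("byte address 0x%" PRIX32 " -> word 0x%" PRIX32 ", byte %" PRIu32 " of 4\n",
           byte_addr, word_addr, byte_offset);
    return 0;
}
```

With word addressing, only the upper part (the word address) exists as an address; picking out a single byte then takes extra shift/mask instructions instead of a single byte-addressed access.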
Jul 7, 2020 at 20:31 comment added Raffzahn Again, I do not get what you're really asking. Byte addressing allows you to address bytes; that's all. Data paths are independent, can be wider or narrower than a word, and have no relation beyond the obvious. (P.S.: mind adding your location to your bio? Maybe we're hitting a language barrier again?)
Jul 7, 2020 at 20:28 history edited No Name QA CC BY-SA 4.0 (More details)
Jul 7, 2020 at 20:26 comment added No Name QA @Raffzahn my question is about why IBM switched from word-addressable to byte-addressable memory. I thought it was because of compatibility issues.
Jul 7, 2020 at 20:07 comment added Raffzahn I do not understand what your question is about. Byte addressing is about the logical view of memory, nothing else. As such it does not have any implications besides a byte being the smallest directly addressable unit. So, what again is your question?
Jul 7, 2020 at 20:02 history asked No Name QA CC BY-SA 4.0