0
\$\begingroup\$

Well, first allow me to explain what I actually meant to ask. It is often said that without software, a piece of hardware is just a lifeless body, something without a soul. But I cannot understand how this communication/interface between software and hardware actually happens.

Software is nothing but lines of code which, after being compiled, assembled, and linked, are converted to a string of binary digits. But you see, the hardware of the processor doesn't understand ones and zeros; it understands a high voltage level and a low voltage level. So how does this transition from binary digits to voltage levels actually take place? For a really quick analogy: if I want to lift a book, there is actual physical contact between my hand and the book. But what about the interface between hardware and software?

I hope you understand the crux of my question. It is really difficult to put it into straightforward sentences. This may sound like a bizarre question, but trust me, it's been bugging me for a really long time. I've taken courses on Computer Organization and basic processor design, but they failed to provide me with an answer.

A similar question exists on StackOverflow and there is not one convincing answer. https://stackoverflow.com/questions/3043048/how-does-software-code-actually-communicate-with-hardware

EDIT: You see, the standard process for code to be executed by the processor is that it passes through compiler -> assembler -> linker -> loader -> memory. Once the instructions get into memory, it is pretty straightforward how the processing takes place. The only gap I feel in that flow is how the transition from loader -> memory happens. This is the most straightforward way I can put the question. But I would be really grateful if you've understood the soul of the question.

\$\endgroup\$
13
  • \$\begingroup\$ Most hardware interprets a 0V level as 0 and some predefined positive voltage as 1 ... \$\endgroup\$
    – PlasmaHH
    Commented Oct 5, 2016 at 19:48
  • 1
    \$\begingroup\$ Essentially your code just switches the voltage at particular output pins at particular times between logical voltages. The code also may monitor pins for these logical voltages. Its pretty much just binary output * logical threshold voltage. For Arduino boards it maps from {0,1} to {0V, 5V} \$\endgroup\$
    – MichaelK
    Commented Oct 5, 2016 at 19:55
  • 3
    \$\begingroup\$ There is no software - it's an abstraction. Take the red pill and learn about digital logic. Then come back in a year or so ... \$\endgroup\$
    – brhans
    Commented Oct 5, 2016 at 19:59
  • 1
    \$\begingroup\$ "1" and "0" or "High" and "Low" are just convenient notation to allow us to talk about the voltage levels that occur in digital logic. When we store a "High" or "1" in memory, we are really making that memory cell store 5 volts (or whatever Vcc is). \$\endgroup\$ Commented Oct 5, 2016 at 20:04
  • 6
    \$\begingroup\$ "it is clearly known that without software, a piece of hardware is just a lifeless body," not actually true. There are many pieces of hardware - including digital computing hardware - that function perfectly well without a line or a bit of software. \$\endgroup\$
    – user16324
    Commented Oct 5, 2016 at 20:15

12 Answers

4
\$\begingroup\$

I think a powerful tool for helping you understand what is going on is to realize that software has to be implemented, as magnetic patterns on a hard drive or charges on transistors in memory, before it can be run. Your hardware always operates on the implemented/realized version of the software.

We tend to like to talk about software in terms of "information" because that form is convenient for the kind of things we want to do with software. It is convenient that we can say a particular magnetic pattern "is the same as" a particular pattern of charges in a block of RAM. In the physical world, they're fundamentally different media, but we recognize that they are "logically" identical because we assert their "meaning" to be the same.

So when I hand you a CD with my software on it, I don't hand you "a string of 1's and 0's." I hand you a piece of metalized plastic with some pits carefully stamped into it. The "information content" comes from the fact that you and I both agree on how one should translate the geometry of those pits into 1's and 0's. You then may install that software, writing careful magnetic squiggles onto a hard drive. You're okay with saying those pits and those squiggles are "the same thing" because you know that all you really cared about was the information encoded in them, and they encode the same information.

Thus, when hardware boots and "software tells hardware what to do," what that really means is that you have a bunch of hardware components (like hard drives and memory chips and whatnot) that all agree on what things mean. The CPU interacts with fluctuating voltages in the BIOS in a way that you and I agree contains the "information" of the software in the BIOS.

The last key piece is that there is some piece of hardware that is not stable: the CPU clock. The CPU clock is constantly changing voltages, and the other components agree to interpret those changing voltages as marching orders to move one step forward in whatever processing they're doing. And finally, at startup, the CPU is designed to come up with instructions to go get more instructions from the BIOS (and eventually from the harddrive).

The key is that, at the physical layer, all of the components interact with the physical implementation of the information. The "software" is nothing more than a way of thinking about that physical implementation as a bunch of information -- and that we agree on what that information means.

\$\endgroup\$
1
  • \$\begingroup\$ This deserves more upvotes. \$\endgroup\$
    – user147895
    Commented May 1, 2017 at 19:58
2
\$\begingroup\$

Memory effectively controls hardware.
So at a low level, hardware is linked to memory locations. For instance, in a simple microcontroller, the physical transistors that drive the pins are linked to memory bits. That is really what the registers in a hardware chip's data sheet are: memory locations. You set a bit in software, and the physically connected hardware is activated.
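That "a register is just a memory location" idea can be sketched as a toy simulation. The address 0x20 and bit 3 below are invented for illustration; a real part's data sheet defines which address and bit drive which pin.

```python
# Toy model of memory-mapped I/O: a pin driver "watches" one bit of memory.
memory = bytearray(256)      # the MCU's address space, all zeros at power-up

PORT_REG = 0x20              # hypothetical "port output register" address
LED_BIT = 3                  # hypothetical bit wired to an LED's driver transistor

def led_is_on():
    """The 'hardware' side: the transistor conducts iff the bit is high."""
    return bool(memory[PORT_REG] & (1 << LED_BIT))

# The 'software' side: setting a bit is just writing a memory location...
memory[PORT_REG] |= (1 << LED_BIT)
assert led_is_on()           # ...and the physically connected LED lights
```

The real chip does the same thing in silicon: the flip-flop holding that bit is wired directly to the pin's driver transistor.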

Update (Software -> Memory flow):
To address how the code flows from source to memory: at some point a physical device is used to set voltage levels in memory bits based on the machine code generated by the compiler and linker.

E.g., your first byte of code is 0xAA. The software instructs a memory programmer (via JTAG, UART, SPI, etc.) to select an 8-bit-wide memory location (let's call it 0x0001). That memory location is then set to the voltage levels defined by the 0s and 1s (0xAA = '10101010') using hardware. Upon boot, the hardware is hard-wired to load a specific memory address into the CPU register and start running from there. If you were to open the memory chip up and probe the floating-gate transistors that make up the bits in flash memory, you could basically measure the 0xAA.
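The programming step can be sketched the same way. The function name and the byte-by-byte "protocol" below are invented, but the flow (select an address, force each bit cell to the voltage for its 0 or 1) matches what a JTAG/SPI programmer does electrically:

```python
flash = bytearray(16)     # stand-in for the flash memory array

def program_word(address, value):
    """Stand-in for the programmer hardware: drive the address lines,
    then set each selected bit cell to the voltage for its 0 or 1."""
    flash[address] = value & 0xFF

machine_code = [0xAA, 0x55]            # output of the compiler/linker
for offset, byte in enumerate(machine_code):
    program_word(0x0001 + offset, byte)

# "Probing the silicon" now measures the pattern 10101010 at 0x0001:
assert format(flash[0x0001], '08b') == '10101010'
```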

\$\endgroup\$
3
  • \$\begingroup\$ Oh wait, your answer actually clears some air but at the same time adds more doubts! You set a bit in software and the physically connected hardware is activated. This may sound very basic but I'm not able to understand how a code is physically making a change? You see the standard process for the code to be executed by the processor is such that it passes through the compiler -> assembler -> linker -> loader -> Memory. The only inconsistency which I feel is the transition from loader -> memory. \$\endgroup\$
    – vineel13
    Commented Oct 5, 2016 at 21:12
  • \$\begingroup\$ Because once the instructions are in the memory it is pretty straight forward as to how the processing takes place. \$\endgroup\$
    – vineel13
    Commented Oct 5, 2016 at 21:16
  • \$\begingroup\$ @vineel13 See my update, getting from linker/loader to memory is simple, it requires a hardware programmer to select memory locations and physically set the bits to match the machine code. \$\endgroup\$
    – MadHatter
    Commented Oct 5, 2016 at 21:39
2
\$\begingroup\$

For the edited version of the question ... Between the compiled and linked program, and the computer's memory what happens?

Look at your computer's front panel. It might look like this...

[Image: a minicomputer front panel with a long row of toggle switches for the address/data bits and "Load Addr", "DEP", and "Start" keys]

For each bit of the first address in your program, set each switch "up" for '1', "down" for '0'. When all 18 bits are set, press the "Load Addr" key on the right hand section. That key sets the current address.

Now repeat with each data bit in that word (there's only 16 of these). Press the "DEP" key to deposit this word of data into that address, and step to the next address.

Repeat for every other word in your program ... don't worry, it gets easier with practice.

When you're done, press the "Start" key and execution will start from the first address...

Usually you want to keep this program as short as possible, since you have to enter it every time you power the machine on. So it's likely to be the simplest possible program to read a more useful program from punched cards, paper tape, or a disk drive if you can afford one. This is called a "bootstrap" program because it "pulls the computer up by its own bootstraps" by loading something more useful into memory.

Or, there is a Read-Only Memory mapped to the first addresses, where execution starts. Perhaps you programmed it bit by bit in a programmer that looked like this front panel. It may have been an array of fuses: setting the switch "up" blew the fuse, "down" left it intact. Then you plugged it into a ROM socket on your motherboard, so having programmed the ROM once, you can run the bootstrap program every time you start up.

Maybe your computer doesn't have a front panel covered in switches ... but it certainly will have a bootstrap loader, or something else like it, somewhere. In a PC, it's called the BIOS ROM. In some microcontrollers, you load a program via a JTAG port - a serial interface that replaces those switches. Then you can save it to Flash ROM, which, unlike the fuse ROM, you can erase and re-use...

\$\endgroup\$
1
\$\begingroup\$

There are no ones and zeros. It's all just high and low voltages. We choose to interpret those voltages as ones or zeros. We may choose to treat collections of adjacent high and low voltages as numbers (in hex or decimal) or even as text characters. But they are still just stored as groups of high or low voltages.

Your source code is just a bunch of high and low voltages, which we choose to interpret as ASCII or Unicode characters. If the programme is stored to a hard disk, it will turn into a pattern of North and South poles, but it will just become high and low voltages again when the disk is read. The compiler is just a bit of software (more high and low voltages) to instruct the computer to translate one collection of high and low voltages (the source code) into another set (the compiled code).

Running the software simply consists of copying the highs and lows to a suitable place in memory, then telling the computer to treat them as instructions to run.
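The "we choose to interpret" point is easy to demonstrate: the same eight high/low levels are a number, a character, or an instruction depending only on what reads them.

```python
bits = 0b01000001          # eight high/low voltage levels in a memory cell

print(bits)                # read as an unsigned integer: 65
print(hex(bits))           # read as hexadecimal: 0x41
print(chr(bits))           # read as an ASCII character: 'A'
# To a 32-bit x86 CPU fetching it as code, the same byte is even an
# instruction (the opcode for "inc ecx"). The voltages never change;
# only our agreement about their meaning does.
```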

\$\endgroup\$
0
\$\begingroup\$

Hardware is incredibly complex, and was designed so, specifically so that the software could have as much functionality as possible.

Imagine the hardware and software relationship in a very simple device, such as a calculator from the 1970s. There are pushbuttons and some type of display; you press digit buttons, then an operator key, and the answer appears on the display.

In this simple device, pressing the digit buttons inputs numeric data into the system, and the operator key determines which mathematical function is to be performed. Once entered, the "software" uses the data and operator to perform a calculation. Once complete, the result of the calculation is displayed.

In this example, the "hardware" is the buttons, operator keys, and display. The "software" would have been digital logic (AND, NOT, NOR gates, etc.), likely custom-fabricated onto one integrated circuit. The hardware would be "useless" without the "software" to perform the calculations and give us a meaningful result.

Computers today still follow this archaic method of operation but are much, much more complex. There are literally hundreds, if not thousands, of different "operations" a central processing unit can perform, and all of these have been hardware-optimized to complete in as little time as possible.

So today, a CPU (the "brain" of a computer) is an extensive device, built to allow a multitude of possible software "instructions" to execute all as quickly as possible. So it is the infrastructure which is built into the CPU that provides the possible functionality which the software can (opt to, or not) use.

So this is why a computer does little without an operating system (software): the CPU is capable of doing many different calculations, but without software there to direct it in actually doing any of them, it just sits there.

\$\endgroup\$
2
  • 2
    \$\begingroup\$ The calculator example is probably a bad choice. What you are describing is a hardware-only calculator. The "soft" of software means that it can be changed or re-programmed. Your calculator example can't be re-programmed as everything is fixed in hardware. \$\endgroup\$
    – Transistor
    Commented Oct 5, 2016 at 20:28
  • \$\begingroup\$ This example was given to illustrate that "logic" could be hardware and that "software" today is just data which acts on increasingly complex (and capable) hardware. i.e., the 8086/8088 CPU of the 1980's has about 81 software instructions, whereas a modern 64-bit CPU has over three times this. \$\endgroup\$
    – rdtsc
    Commented Oct 6, 2016 at 11:26
0
\$\begingroup\$

As brhans says, you need to take the red pill and learn about digital logic. Here is a PDF (probably a copyright violation) of the Roth textbook that existed when I was a student.

So, first you will learn about combinational logic (from which an Arithmetic Logic Unit can be made). Then you will learn about flip-flops, which are the basic means of memory. Then you will learn about state machines. Finally you can learn what an opcode is, and how the opcode that is loaded from memory and referenced by the state machine then controls the behavior of the state machine. That is where the rubber meets the road: the point where software begins to touch the hardware.

\$\endgroup\$
2
  • \$\begingroup\$ Trust me Robert I've read Tocci, morris mano and a fair share of Roth for digital electronics. You don't need to take the pain of providing me pdfs. Looks like I can clearly assume that you haven't understood the soul of the question. \$\endgroup\$
    – vineel13
    Commented Oct 5, 2016 at 20:55
  • \$\begingroup\$ Cap'n Combover (you may have seen him on television recently) says "Trust me." and i don't. anyway, i am responding to the title of the question: How exactly does the transition between Software and Hardware occur? and that is the answer. or maybe the answer to "Where exactly ..." where the opcode (or, one level lower, the microcode, which is essentially the same thing as machine code in a RISC chip) is input to the state machine, along with the states of the state machine, that is where the software meets the hardware. and i stand by that answer (even in a comment). \$\endgroup\$ Commented Oct 6, 2016 at 0:54
0
\$\begingroup\$

The best advice I can give you would be to take a course on it. MIT has some courses available for free online (search for MIT OpenCourseWare; their classes 'Introductory Digital Systems Laboratory' or 'Computation Structures' may have what you are looking for), or if you are in school, enroll in the basic digital systems course. At my school, designing and programming a four-bit processor was the final project for the course.

Now, to try to answer the question without a class: the CPU is the only part we need concern ourselves with, as it is what decodes and executes most of the instructions in a program.
The second-to-last stage of the program, before it is an executable, is assembly language. This language is unique to every type of processor, because it translates directly to the 1s and 0s that only that processor can understand. For example:

ADD r1, r2
MOV r2, r0
SUB r3, r0

This does nothing in particular and is only for demonstration.

Each command (ADD, MOV, SUB) will translate directly into 1s and 0s, and the "r#"s are memory locations, in this case called registers. These memory locations are also just numbers, so in essence all programming does is move numbers from location to location.

So "ADD r1, r2", once assembled, may translate to 1011 0110, where the first nibble is the code for ADD, and the second nibble contains both the code for r1 (01) and the code for r2 (10). This tells the processor to load the contents of r1 and r2 into the Arithmetic Logic Unit (ALU), the part that does all of the math, and tells the ALU to add them. Depending on processor design, the answer may just stay on the output of the ALU, or it may be moved automatically into one of the memory locations that was used. (The processors I am used to will automatically place the answer into the second memory location, in this case r2.)

Everything in the processor is designed so that a set combination of bits is needed to turn it on; when this combination is met, that part turns on and does its thing with the numbers it was given.
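The hypothetical 8-bit encoding above (an opcode nibble, then two 2-bit register fields) is exactly what the decoder circuitry separates out with wires; in software the same split looks like this. The opcode value is the one invented in the example:

```python
OPCODES = {0b1011: 'ADD'}    # made-up opcode table from the example above

def decode(instruction):
    """Split one 8-bit instruction into its fields, as a decoder would."""
    opcode = (instruction >> 4) & 0xF   # upper nibble selects the operation
    reg_a  = (instruction >> 2) & 0x3   # next two bits: first register
    reg_b  = instruction        & 0x3   # last two bits: second register
    return OPCODES[opcode], f'r{reg_a}', f'r{reg_b}'

assert decode(0b10110110) == ('ADD', 'r1', 'r2')
```

In hardware, each field is just a bundle of wires routed to the unit it selects; no lookup table is "executed" at all.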

Everything that happens in a computer, happens because numbers were moved to locations where they could be used to do something more.

If you decide to take the class (and I encourage anyone who has this question to do so, whether in CS, EE, or even ME), know that it takes at least halfway through the course to understand how anything is really connected to a processor. Most of the course is about how digital logic circuits (AND, OR, etc. gates) work and can be put together to do useful things. Then you will put some of these useful devices (flip-flops, MUXes, etc.) together to do more useful things. Finally you will take some of those parts and put them together to create a processor.

One more thing: the program Logisim is good at simulating small digital circuits. That is the program we implemented our 4-bit processors in (before moving on to hardware); it has a good visual feedback system.

If anyone feels I am unclear, please tell me where at, and what I could do better. I am new to answering questions, and it has been some time since I learned this stuff myself.

\$\endgroup\$
2
  • \$\begingroup\$ Can you please look at the question once again? I've added a few details. \$\endgroup\$
    – vineel13
    Commented Oct 5, 2016 at 21:27
  • \$\begingroup\$ So it seems more like you are interested in what happens in the operating system / ram when you run the program, rather than what happens physically on the cpu, is that correct? (edited for clarity) \$\endgroup\$ Commented Oct 5, 2016 at 21:33
0
\$\begingroup\$

I think what you're looking for, in essence, is a ROM, or in many modern systems, firmware.

You seem to understand that once instructions are in memory, processing is straightforward. Likewise, you also seem to understand how instructions are compiled/converted down to assembly/machine code. With this, I assume that you understand how once software like an operating system is already running, software can modify memory, which will change what the CPU is operating on, etc.

Now, think about it in terms of a regular desktop computer. When you first turn the computer on, you can assume that everything in memory is invalid. So how does the computer get to an operational state? Traditionally, an initialization procedure is loaded from a ROM. The ROM contains the basic instructions to initialize all of the registers and other peripherals in the computer upon power-up, and it also sets up the loading of the next set of instructions (like an operating system stored on the hard disk, etc.). The ROM is "hard-coded": in other words, it contains a fixed set of instructions that are run and cannot be changed. I'm sure you can imagine that once you've created a platform that allows for the modification of memory through software, you can create a ROM quite easily. Once that's done and set up properly, software can "take over" once the hardware is initialized.
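A minimal sketch of that power-up hand-off, with the vector address borrowed from the 6502 (which fetches its start address from ROM location 0xFFFC) purely for concreteness:

```python
ROM = {0xFFFC: 0x8000}    # reset vector: "start executing at 0x8000"
RAM = {}                  # contents are garbage at power-up; nothing here is trusted

def power_on():
    """At reset the program counter is loaded from a fixed ROM location,
    so execution always begins in the ROM's initialization code."""
    pc = ROM[0xFFFC]      # hardware fetches the hard-coded start address
    return pc

assert power_on() == 0x8000
```

From that fixed starting point, the ROM code can initialize peripherals and then load the operating system into RAM.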

\$\endgroup\$
0
\$\begingroup\$

The answer is "input/output interface ports." That is the identifiable place where software (collections of ones and zeroes conspiring together to create high and low voltages inside the processor) ultimately sets values (ones or zeroes) into each bit of the I/O port(s). And the hardware (external to the processor) takes it from there.

Your premise that "without software, a piece of hardware is just a lifeless body" is absolutely NOT TRUE at any level. There are billions of pieces of hardware, both historic and contemporary, that have no dependence whatsoever on software (or firmware or anything else like that).

Perhaps your whole concept of software and hardware is preventing you from seeing the picture properly and making you think that there are no convincing answers.

\$\endgroup\$
0
\$\begingroup\$

I'm assuming you're talking about something like a microcontroller, not a PC. I'm also assuming you know roughly how memories work -- addressing, etc.

The simplest way to go from a compiled program to memory is by building the program directly into the hardware. Instead of using an SRAM cell or a flash transistor for each bit, you have a piece of metal that's either present (for a one) or not (for a zero). This is called a metal ROM.

If you want to use a programmable memory (like flash or RAM), you need a way to access the memory using external IO signals. One common method is to connect a debugger to the CPU using the chip's JTAG port. Using the JTAG protocol, the debugger can take control of the CPU and have it write the program to memory one word at a time.

Another method would be to connect the memory's address and data lines to the IO pins on the chip. This would normally be a special mode that's selected with multiplexers, which frees up the pins for application use.

It's common to use a hybrid approach. You can build a small ROM into a microcontroller that contains a boot loader program. At power-up, the CPU runs the boot loader and uses the on-chip communications peripherals to receive data, which it then writes to memory.
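The boot-loader flow in the last paragraph can be sketched as follows. The one-byte "length, then payload" protocol is invented for the sketch; `receive_byte` stands in for reading the UART data register:

```python
def boot_loader(receive_byte, ram):
    """First-stage loader sketch: read a length byte, copy that many bytes
    from the communications peripheral into RAM, then 'jump' to the start."""
    length = receive_byte()
    for addr in range(length):
        ram[addr] = receive_byte()
    return 0    # entry point: execution continues at ram[0]

# A 3-byte program arriving on the wire, preceded by its length:
incoming = iter([3, 0xAA, 0xBB, 0xCC])
ram = {}
entry = boot_loader(lambda: next(incoming), ram)

assert ram == {0: 0xAA, 1: 0xBB, 2: 0xCC} and entry == 0
```

Real loaders add framing and checksums, but the core loop (receive a byte, write it to the next memory address) is this simple.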

\$\endgroup\$
0
\$\begingroup\$

Algebraic machines were the first step toward today's general-purpose computers. Arithmetic in computers is best described with Boolean algebra (early models also used other numeric bases). It is true that computers do not understand zero and one, or true and false, but there is a relationship between the currents and voltages in the semiconductor and these mathematical concepts.

Non-numeric data is represented with codes (which are essentially tables that give every combination of bits a meaning); ASCII for text is an example. We do not notice because, nowadays, computers translate this code to a glyph on the display. Again, there is a fixed relationship. Without it, the data would have no meaning.

Opcodes are another example. How the CPU executes an instruction is hardwired in silicon (leaving microcode aside for a moment). Programming means placing a sequence of opcodes in memory. In the old days, this was done with dashboard switches or punch cards, and programmers needed to use code tables to find the card holes or switch settings for the wanted instruction.

Don't let high-level languages confuse you. Your abstract textual formulation of an algorithm is translated into steps the hardware can execute. The compiler needs to know your abstractions (the meaning of control structures, statements, and much more) as well as the CPU's capabilities and machine code to do this. Machine code is still considered "software," but it is important to understand that knowledge of the target hardware is embedded in this code just as in the good old days; the compiler just does the lookup for you.
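The "code table" lookup described above is small enough to write out. Here is a toy assembler with a two-instruction table (the opcode values happen to be the 6502's LDA-immediate and JMP, but any fixed agreed-upon numbers would do):

```python
# A tiny opcode table, in the spirit of the old printed code tables.
TABLE = {'LDA': 0xA9, 'JMP': 0x4C}

def assemble(lines):
    """Translate 'MNEMONIC hex-operand' lines into the bytes a CPU fetches."""
    code = []
    for line in lines:
        mnemonic, operand = line.split()
        code += [TABLE[mnemonic], int(operand, 16)]
    return bytes(code)

assert assemble(['LDA 41', 'JMP 10']).hex() == 'a9414c10'
```

A real assembler handles labels, addressing modes, and multi-byte operands, but it is still, at heart, this table lookup.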

\$\endgroup\$
-1
\$\begingroup\$

This is a funny question. Let me try to sort out the relationship between software and hardware.

First, software does not "transition" into hardware; software is one thing, and hardware is a given, separate thing (unless the question refers to designing hardware architecture). The relationship between software and hardware can be simplified to the following:

  1. Software is a collection of lines of alphabetic sentences, representing logical relationships and processing algorithms expressed in a certain language, solely on paper;

  2. The language text gets translated into a coherent sequence of machine instructions and associated data, using mnemonics of commands and the proper format of arguments, all still paper work;

  3. The coherent sequence of instructions and data gets translated into corresponding binary equivalent of command codes and proper data formats, as appropriate for a particular MCU – still on paper, but now in “1” and “0” (binary) form;

  4. Then the binary code needs to be loaded into MCU memory. This is where "1" and "0" are converted into "high" and "low" voltages. Let's simplify the MCU to an embedded microprocessor with externally attached memory, where the processor's sequencer starts its work by fetching the first instruction from address 0000.

  5. The memory load can be done by taking the memory chip out and "burning" the binary code pattern into it. Then plug the memory IC back into the MCU board, power it up, and hit its RESET button. From this point the "paper" 1s and 0s roam through the hardware as "high" and "low" voltages;

  6. Any MCU has internal hardware registers and external I/O (input/output) ports, which provide communication with the external world: lighting up LEDs, turning relays on and off, reading sensors, etc.

  7. After coming out of RESET, the MCU begins fetching instructions and data from the memory, and, if the software is correct and has all addresses and data formats right, the MCU starts accessing registers at their proper addresses, and, say, an LED lights up. Or the loaded software can read data from some register associated with an external communication link and start interpreting the register data as commands, according to some protocol;

  8. If the software gets sophisticated enough that it can update portions of MCU memory with different blocks of instructions and data (and the hardware provides this possibility), it is said that the software can operate on its own resources, and this can be elevated to the rank of an OS, an operating system.
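Steps 4 through 7 together amount to a fetch-execute loop that ends in a register access. A minimal sketch (the two opcodes and the register address are invented for illustration):

```python
# Minimal MCU model: fetch from memory starting at the reset address,
# execute each instruction, and finally touch an I/O register.
memory = {0x0000: ('SET', 0x10, 1),    # write 1 to hypothetical register 0x10
          0x0001: ('HALT',)}
registers = {0x10: 0}                  # 0x10: hypothetical LED control register

pc = 0x0000                            # the sequencer starts fetching at 0000
while True:
    op = memory[pc]
    if op[0] == 'SET':
        registers[op[1]] = op[2]       # the access that makes the LED light
    elif op[0] == 'HALT':
        break
    pc += 1

assert registers[0x10] == 1            # the LED is on
```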

Does this answer the crux of your question convincingly?

\$\endgroup\$
4
  • \$\begingroup\$ "Software" need not be merely "a collection of lines of alphabetic sentences." Opcodes (or machine code) is also software. or if these opcodes be programmed into ROM or flash, it's "firmware". \$\endgroup\$ Commented Oct 6, 2016 at 1:01
  • \$\begingroup\$ These are tiny unimportant details. I was just trying to fit into framework of OP. \$\endgroup\$ Commented Oct 6, 2016 at 1:04
  • \$\begingroup\$ well, the tiny unimportant details simply obviate your steps 1 to 4. where the software meets the hardware is in your step 5. i tried to explain to the OP how "exactly ... the ... software" is "roaming through the hardware" (one needs to understand what a state machine is) and the OP rejected that as an answer to the question. that "transition" of software to hardware happens when machine code (or microcode for a RISC) is input to the state machine along with the states of the state machine. and the output states are a function of the input states and the opcode or microword input. \$\endgroup\$ Commented Oct 6, 2016 at 1:16
  • \$\begingroup\$ well, I have no intent to discuss where exactly FSMs meet microcode, and why it is important to "how the transitioning from loader to memory is happening". \$\endgroup\$ Commented Oct 6, 2016 at 1:34
