Fellows,
I have several computers, some new, some old (I collect them; my first, from 1979, had 2 KB of RAM). The collection is getting huge, but this question comes from the fact that I have always loved the power of supercomputers, or at least of big machines.
I have often thought about joining machines together to get a more powerful one. I run a 1 Gbit LAN (local area network) with 4 Intel i7 2600K machines running at 4.8 GHz under water cooling, each with 16 GB of RAM, an SSD, and regular hard drives, for a total of 30 TB of space across the LAN. Having read articles and watched many videos about virtualization, I wonder whether it is possible to install bare-metal (Type 1) hypervisors on each machine and then create a single virtual machine that spreads across the physical machines, so I could install an operating system like Windows on top and run software that needs a lot of resources: CPU, RAM, disk, etc.
I imagine there must be a way for a virtual machine to "think" it is installed on a single machine when it is actually spread across several nodes (like a cluster). The VM would see the system as one big machine, but underneath the CPU, RAM, and hard drives would be shared.
That way we could install an OS and run, for instance, Adobe After Effects or Adobe Premiere, which need outstanding parallel processing (or CPU power) to render previews in real time, or any complex software that benefits from multiple processors. I know many people would suggest buying a big multi-CPU, multi-core Xeon machine for parallel processing, but that is not the point... I like to think that with current technology there must be a way to join PCs and get more computational power.
I see people on YouTube joining Raspberry Pis into "supercomputers" of around 1 teraflop, so why can't we do it with our own machines, which have LAN, RAM, disks... Isn't it the same thing? Don't we only need the software and the know-how? Is it possible? How would it be done?
Thanks
A teraflop means one trillion floating-point operations per second. Metrics like that can be misleading if you do not understand what they mean, and also misleading if they lack context. That "1 teraflop" could simply mean they were able to run SuperPI on the cluster and achieve that figure with that specific software only.
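To put the number in context, here is a back-of-envelope peak estimate for the four i7 2600K machines described in the question. The figures are assumptions, not measurements: a Sandy Bridge core with AVX can issue roughly 8 double-precision FLOPs per cycle, and real workloads achieve far less than this theoretical peak.

```python
# Back-of-envelope theoretical peak FLOPS for the 4-machine LAN above.
# Assumption (not measured): Sandy Bridge with AVX retires about 8
# double-precision FLOPs per cycle per core (one 256-bit multiply plus
# one 256-bit add). Benchmarks like LINPACK reach only a fraction of this.

def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: cores * clock * FLOPs issued per cycle."""
    return cores * clock_hz * flops_per_cycle

per_machine = peak_flops(cores=4, clock_hz=4.8e9, flops_per_cycle=8)
cluster = 4 * per_machine

print(f"per machine: {per_machine / 1e9:.1f} GFLOPS")  # 153.6 GFLOPS
print(f"4-node LAN : {cluster / 1e12:.3f} TFLOPS")     # 0.614 TFLOPS
```

So even on paper the four machines land a little below the "1 teraflop" Raspberry Pi cluster figure, and the LAN interconnect would reduce the usable fraction further for any job that has to move data between nodes.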