Your computer is already combining and delegating work as much as it can. The computational models, engine calculations, etc. are performed by your CPU, while the GPU renders the scenes that the game submits through the DirectX API.
The GPU is highly optimized for its task and frankly it doesn't need any help from the CPU. Even if what you are suggesting were possible and the GPU could use the CPU's RAM, the transfers would severely limit rendering and be of no avail. Something similar happened earlier this year with Nvidia's GTX 970, which was advertised as a 4GB card but in practice had only 3.5GB of fast memory, with the last 500MB sitting in a much slower segment. Whenever games actually touched that slow segment, performance collapsed.
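To see why reaching outside fast VRAM kills performance, here is a rough back-of-envelope sketch. The bandwidth figures are assumptions (typical GDDR5 and PCIe 3.0 x16 numbers, not measurements from your system), and the 500MB-per-frame figure is purely hypothetical:

```python
# Back-of-envelope: streaming the same data from VRAM vs. over PCIe.
# Bandwidths below are rough, assumed peak figures, not measured values.
vram_bw_gbs = 224.0   # GDDR5 on a GTX 970-class card, GB/s (assumed)
pcie_bw_gbs = 15.75   # PCIe 3.0 x16, one direction, GB/s (assumed)

frame_data_gb = 0.5   # hypothetical 500MB of texture data needed per frame

t_vram = frame_data_gb / vram_bw_gbs * 1000   # ms to stream from VRAM
t_pcie = frame_data_gb / pcie_bw_gbs * 1000   # ms to stream over the bus

print(f"VRAM: {t_vram:.1f} ms, PCIe: {t_pcie:.1f} ms, "
      f"slowdown roughly {t_pcie / t_vram:.0f}x")
```

Even with generous numbers, the bus is an order of magnitude slower than on-card memory, which is why spilling into system RAM collapses frame rates.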
Regarding your integrated Intel HD Graphics... I don't think it's feasible to run one application across two completely different GPU architectures. The application would need to be programmed so loosely coupled and asynchronously, with each GPU waiting on the other's results, that there would be no performance gain. Just look at how many compatibility restrictions the manufacturers place even on SLI and Crossfire; that alone makes the point.
Interesting thought though.
- The only thing I would suggest is to upgrade your HD7750: go get an R9 290X for $200.
- Obviously, if you can upgrade to a current-gen Nvidia card, that would be much better. (PhysX runs on Nvidia GPUs instead of the CPU, and lately Nvidia has been mopping the floor with AMD.)
- Also, if you are still on a mechanical hard drive, go get an SSD. It's the single most useful upgrade for ANY PC.