
are there any downsides in using two Monitors? One over the integrated graphics card (Intel(R) HD Graphics 530 in my case) and the other over the dedicated graphics card (NVIDIA GeForce GTX 1060 6GB)?

I've connected the integrated card to a monitor and it works (so in my case it is not deactivated), but I'm unsure whether this somehow affects my 3D card.

Can this impact my performance and if it could, can I somehow check if it does?

  • 1
    The integrated graphics will be using your system memory, so there is some small possibility for a performance impact there.
    – Mokubai
    Commented Apr 26, 2017 at 17:52
  • @SDsolar "a signal to wakeup" is not even vaguely close to being on the same planet, let alone in the same ballpark, in terms of the amount of work or bandwidth required. Ignoring video decoding (which is probably several round trips to memory at 24/30 FPS), texturing (for the desktop compositor) and then display, a full HD screen framebuffer running at 60FPS will be about 373MB/s (1920 * 1080 * 3 * 60) vs Nothing-At-All (the memory controller will be off for the graphics chip and the chip will have a dedicated monitor sense line waiting to turn on). So yes, the difference will be there.
    – Mokubai
    Commented Apr 26, 2017 at 19:55
  • All I can say is that I often run my computer with both, but usually run HDMI with the Radeon. But then, I'm not a heavy gamer. I use my CPU to plot statistics and such with R and gnuplot, and am adding Python to the mix, with the goal of automating my web page displays in real time based on multiple incoming data streams being fed by my Raspberry Pi & Arduino sensor stations (scp via WiFi on a dedicated subnet). Correlating the data is very I/O and CPU intensive. RAM is not an issue. Radeon does the display just fine. Adding the use of the on-board display is not measurable..
    – SDsolar
    Commented Apr 27, 2017 at 4:24

2 Answers


There will be a (hopefully) minor memory-bandwidth penalty, because the graphics chip integrated into your processor shares the memory bus with the CPU cores.

As an absolute minimum, your on-chip graphics has a framebuffer that it must constantly "copy" out to your monitor for it to display anything. Assuming a 1920 * 1080 display running at 60Hz, that framebuffer needs a fixed amount of scan-out bandwidth.

That bandwidth will be in the ballpark of 373MB/s (1920 * 1080 * 3 bytes (24-bit) * 60Hz).
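As a sanity check, the scan-out figure above is just pixels times bytes-per-pixel times refresh rate. A quick sketch (function name is my own, not from any API):

```python
# Back-of-the-envelope scan-out bandwidth for a framebuffer.
def scanout_bandwidth_mb_s(width, height, bytes_per_pixel=3, refresh_hz=60):
    """Bytes the display engine must read per second, in MB/s (10^6 bytes)."""
    return width * height * bytes_per_pixel * refresh_hz / 1e6

print(scanout_bandwidth_mb_s(1920, 1080))  # 1080p60, 24-bit -> ~373 MB/s
print(scanout_bandwidth_mb_s(3840, 2160))  # 4K60, 24-bit -> ~1493 MB/s
```

Real display engines may scan out 32-bit pixels and add blanking-interval overhead, so treat this as a lower bound.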

On top of that, extra bandwidth is used for

  • video decoding, bandwidth factors include
    1. video resolution
    2. scaling
    3. post processing effects (film grain etc)
  • desktop compositing
  • anything else the graphics might be doing

When watching video I'd go for a "finger in the air" estimate of about 1GB/s total memory bandwidth used.

Which isn't too bad on a modern system. If you have a modern DDR4 system running in single channel then your memory bandwidth is probably in the ballpark of 16GB/s, in dual channel that gets you 32GB/s.
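Those ballpark figures follow from the DDR channel width: each channel is 64 bits (8 bytes) wide, so bandwidth is transfer rate times 8 bytes per channel. A sketch assuming DDR4-2133 (the function name is illustrative):

```python
# Peak theoretical DDR bandwidth: transfers/s * channel width * channel count.
def ddr_bandwidth_gb_s(mt_per_s, channels=1, bus_bytes=8):
    """GB/s (10^9 bytes) for a given transfer rate in MT/s."""
    return mt_per_s * bus_bytes * channels / 1000

print(ddr_bandwidth_gb_s(2133))     # single-channel DDR4-2133 -> ~17 GB/s
print(ddr_bandwidth_gb_s(2133, 2))  # dual-channel -> ~34 GB/s
```

Sustained bandwidth is lower than this theoretical peak, which is why "in the ballpark of 16GB/s" is a fair single-channel estimate.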

That's still 1/16th (or 1/32nd) of your memory bandwidth, though.

With the display to the onboard graphics disconnected I would expect the graphics core to go into a "sleep" mode where it will use next to no bandwidth at all. It will probably have an interrupt set up to monitor the display detect lines.

If you're not doing anything intensive or demanding you probably won't notice the difference. It will have a small penalty in the amount of time taken to copy stuff to your dedicated graphics, but again, probably not noticeable.


Not particularly, no. I have had this same concern and have used Resource Monitor to check whether the PC spends noticeably more CPU time or disk activity handling both monitors versus just one.

Especially since you have a separate GPU handling the other monitor, you shouldn't notice any impact on performance either way. The monitor on the integrated graphics will not slow down the Nvidia card, and vice versa.

Have fun!

