0

Bit of an odd question maybe, but I am considering setting up a server at home to host a couple of websites, a Nextcloud instance, and a Plex server. My websites are pretty simple, and this way I'll save quite a bit of money on hosting and Dropbox fees. I bought an old Dell R710, and looking at the power it uses, it draws around 350 watts. That adds up to around 3000 kWh on a yearly basis, assuming it's up 24/7.
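For reference, a rough sketch of that arithmetic (the electricity tariff below is just an assumed example value, not a real quote):

    # Rough annual energy estimate for a server running 24/7.
    watts = 350                       # average draw assumed in the question
    hours_per_year = 24 * 365         # 8760 h
    kwh_per_year = watts * hours_per_year / 1000
    print(f"{kwh_per_year:.0f} kWh per year")            # ~3066 kWh

    # Example cost at an assumed tariff of 0.22 per kWh (adjust to your rate).
    tariff = 0.22
    print(f"~{kwh_per_year * tariff:.0f} per year at {tariff}/kWh")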

Aside from the fact that this is pretty expensive (I don't actually mind that, since the fees for websites and Dropbox would be around the same), it feels quite wasteful in terms of energy usage. So I'm wondering what the energy consumption of hosting a website or a cloud service actually is; I can't find anything on it online. My most important argument for not hosting everything at home would be if it were much more efficient to host everything somewhere else.

Does anyone know how much energy these things consume? In an era where we should be less wasteful with our resources, I feel this should actually be mentioned alongside online products. We know Bitcoin uses the same amount of energy as some small countries, and Google explains on its website how much energy a search costs, but that's about all I could find.

Thanks

6 Answers

1

You simply won't know unless you know how the hoster does its job. If you are concerned, a simple vhost or plain website hosting might be what you're looking for. Your calculation is also off: it's unlikely that there will be a draw of 350 watts at all times.

Hosters won't make any statements, because someone hosting a popular site is going to have a very different usage profile than someone just hosting a bunch of family pictures. With hosting at home, don't forget that you will probably need to look into some kind of dynamic DNS solution.
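As a minimal sketch of what such a dynamic DNS setup involves (the update URL, hostname and token below are purely hypothetical placeholders; every provider documents its own update API, so check theirs):

    import urllib.request

    # Find the current public IP (api.ipify.org returns it as plain text).
    public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()

    # Push it to the dynamic DNS provider.
    # NOTE: this URL and token are hypothetical placeholders -- substitute
    # whatever update endpoint your provider actually documents.
    update_url = (
        "https://ddns.example.com/update"
        f"?hostname=home.example.com&ip={public_ip}&token=SECRET"
    )
    with urllib.request.urlopen(update_url) as response:
        print(response.status, response.read().decode())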

2
  • Sure, you can say it depends on the hoster, but that's my point. I want to differentiate between hosters on energy consumption, not just price. In my opinion this is something worth taking into consideration. Commented Sep 25, 2019 at 11:43
  • Yes, so go and write e-mails to those hosters. That information isn't discernible from the outside.
    – Seth
    Commented Sep 25, 2019 at 11:46
1

If you're concerned purely with energy efficiency/usage, the cloud will always win for light workloads, due to the economies of scale involved and how densely cloud servers are packed onto the physical hardware. By contrast, your hardware is relatively old and inefficient, and will probably not be running fully utilised.

Cloud generally means virtualised, which means multiple virtual servers inside a hypervisor on a single physical server somewhere. By virtualising many servers onto a single physical host, cloud providers get much better utilisation of their hardware ("cutting up" a larger server into many smaller virtual servers) and need fewer total servers, therefore less hardware and less power. This makes it very difficult to compare a physical server to a cloud server, which is probably why you can't find any useful comparison. Cloud hosts could calculate power usage, or at least give an upper and lower bound based on usage, but it would probably expose information they'd rather keep private.
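To make that concrete, here's a rough back-of-the-envelope comparison. All of these numbers are illustrative assumptions, not figures from any real provider; they just show why packing density matters:

    # Illustrative only: per-tenant power share on a shared cloud host
    # versus a dedicated home server.
    cloud_host_watts = 500      # assumed draw of one busy virtualisation host
    tenants_per_host = 40       # assumed number of small VMs packed onto it
    home_server_watts = 300     # roughly what the asker measured at the wall

    watts_per_tenant = cloud_host_watts / tenants_per_host
    print(f"Cloud share per small VM: ~{watts_per_tenant:.1f} W")   # ~12.5 W
    print(f"Dedicated home server:    ~{home_server_watts} W")
    print(f"Ratio: ~{home_server_watts / watts_per_tenant:.0f}x")   # ~24x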

Given your server is up to 10 years old, and potentially contains 2x 6-core CPUs @ 2.4-3.3 GHz, up to 288 GB of RAM and up to 8 spindle disks, the question you need to ask yourself is: does that seem like overkill? If the answer is yes, it's not going to be efficient. Having a big beefy server doing little or nothing will be eating power, heating the room and not a lot else. In the cloud, the underlying host would still be doing other useful work as well as eating power and heating up a data center somewhere - there's no contest.

In terms of establishing a running cost for your physical server, your best bet is to just measure it, ideally under a realistic load. You can buy a cheap "energy usage monitor" that you plug in between the server and the wall socket, and it will tell you how much power is actually being used. The 350 watts on the PSU is the maximum draw; the actual usage will be lower and depends on the installed hardware, operating system, power management and of course the server config and utilisation.

The ability to size and resize a cloud server appropriately is a key benefit and driver of efficiency - you only pay for what you use (and therefore someone else can use the rest). That goal of paying only for what you use can be achieved to some extent locally with a screwdriver :-). Removing the second CPU would still leave you with probably 6 cores but save 50-100 watts of idle power consumption. Both CPUs are rated up to 120 watts, but the amount they draw depends on how hard they are working. Another big power consumer can be the storage. Your server supports up to 8 disks - how many do you have/need? Reducing the number of disks in the server, or switching out spindle disks for SSDs, would reduce power consumption. Obviously there's a trade-off here between additional cost, running cost and available system resources, but it's worth considering for a greener option.
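A rough sketch of what those savings could add up to over a year (the 75 W figure is the midpoint of the 50-100 W estimate above; the disk/SSD idle draws are assumed typical values, not measurements of this particular server):

    # Rough annual-savings sketch with assumed per-component figures.
    hours_per_year = 24 * 365
    savings_watts = {
        "remove second CPU": 75,                          # midpoint of 50-100 W
        "swap 4 spindle disks for SSDs": 4 * (7 - 2),     # ~7 W spindle vs ~2 W SSD, idle
    }
    for change, watts in savings_watts.items():
        kwh = watts * hours_per_year / 1000
        print(f"{change}: ~{watts} W -> ~{kwh:.0f} kWh/year")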

0

This question can't be accurately answered - using the cloud may save some power, but how much depends on the age and models of the systems, how they are optimised and how heavily they are utilised, where they are located [heat dissipation, number of hops/routers used] and the amount of supporting infrastructure. No one can answer this question for "the general case". If a low energy footprint is your goal, you may be better off using more efficient/low-power devices and buying solar panels (and maybe batteries, depending on how green and how much $$ you wish to go) - or looking for a hosting provider that is located near, and powered by, renewable energy.

You have not mentioned the exact specs of your system, but 300 watts seems awfully high for a system of that age doing what you describe. You should be able to tune the average use to a fraction of that - primarily by reducing core frequency when idle. I would think an average power draw of well under 100 watts is easily achievable, excluding external cooling (e.g. aircon), which may not be necessary and would add about 35% when used.
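As a hedged example of checking whether frequency scaling is actually active on a Linux install (these are the standard cpufreq sysfs paths; they may be absent if the BIOS or kernel has power management disabled):

    from pathlib import Path

    # Inspect the cpufreq scaling governor of each core via sysfs.
    # Writing a different governor (e.g. "powersave" or "ondemand") to
    # scaling_governor requires root and a kernel/BIOS that expose it.
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        governor_file = cpu / "cpufreq" / "scaling_governor"
        if governor_file.exists():
            print(cpu.name, governor_file.read_text().strip())
        else:
            print(cpu.name, "no cpufreq support exposed")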

Depending on storage, you can probably run your basic needs on a much lower-power device (I'm thinking of something ARM-based like a Raspberry Pi) - although I confess I would not classify that as a commercial-grade solution, and performance will suffer.

0

If your home has a modern heating system (with thermostats), then during the cold months your server will almost certainly be greener than the cloud solution, because whatever your server consumes will be offset by the energy the heater no longer has to supply. (Strictly speaking, that offset is roughly one-to-one only with resistive electric heating; with a heat pump or gas heating, only part of the server's draw comes back as "free" heat.) This also needs to be taken into account. So the greenest approach is probably to use the cloud during the summer and the home server during the winter.

0

Your calculation doesn't make sense. Dell PowerEdge R710:

Idle power consumption: ~150 W
Peak power consumption: ~270 W

From another site, one user: "I am running it (R710) with two Xeon(R) CPU X5677 @ 3.47GHz, and 40 gb of ram. I am also running (6) 3.5" 2TB drives on the beast" - his power consumption was 244 W.

Another one: "2 x 5675 3.04, 64GB RAM, 6 X 300GB 15K SAS (1 as dedicated hot swap) H700, 2 x 870w PSU" - power consumption 211 W.

Could it be that your power supply is rated at 350 W (which is the maximum amount of watts it can handle)?

1
  • Yeah, I'm a bit confused too. I am measuring my power usage with a measuring plug in between and it says around 300 W. However, the device itself states it uses only 140 W. Commented Sep 25, 2019 at 23:12
0

Don't forget that on this type of server you have a BMC / IPMI interface that lets you start and shut down the server remotely, so it only runs when you really need it rather than on a 24/7 basis.

That's what I'm doing on my side: start the server remotely, sync my files, do some work, and shut it down.
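A minimal sketch of scripting that with ipmitool (the BMC address and credentials are placeholders, and the exact interface flag depends on your iDRAC/BMC setup):

    import subprocess

    # Placeholders -- substitute your own BMC/iDRAC address and credentials.
    BMC = ["-I", "lanplus", "-H", "idrac.example.lan", "-U", "admin", "-P", "secret"]

    def ipmi(*args):
        """Run an ipmitool command against the BMC and return its output."""
        return subprocess.run(["ipmitool", *BMC, *args],
                              capture_output=True, text=True, check=True).stdout

    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is off"
    ipmi("chassis", "power", "on")              # power the server up remotely
    # ... sync files, do your work, then:
    # ipmi("chassis", "power", "soft")          # ask the OS to shut down cleanly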
