If you're concerned with just energy efficiency/usage, the cloud will always win for light workloads due to the economies of scale involved and how densely cloud workloads are packed onto the physical hardware. By contrast, your hardware is relatively old and inefficient and will probably not be running fully utilised.
Cloud generally means virtualised: multiple virtual servers inside a hypervisor on a single physical server somewhere. By virtualising many servers onto a single physical host, cloud providers get much better utilisation of their hardware ("cutting up" a larger server into many smaller virtual servers) and need fewer total servers, therefore less hardware and less power. This makes it very difficult to compare a physical server to a cloud server, and is probably why you can't find any useful comparison. Cloud hosts could calculate power usage, or at least give upper and lower bounds based on utilisation, but that would probably expose information they'd rather keep private.
Given your server is up to 10 years old, and potentially contains 2x 6-core CPUs @ 2.4-3.3GHz, up to 288GB of RAM and up to 8 spindle disks, the question you need to ask yourself is: does that seem like overkill? If the answer is yes, it's not going to be efficient. A big beefy server doing little or nothing will be eating power, heating the room and not a lot else. In the cloud, the underlying host would still be doing other useful work as well as eating power and heating up a data centre somewhere - there's no contest.
In terms of establishing a running cost for your physical server, your best bet is simply to measure it, ideally under a realistic load. You can buy a cheap "energy usage monitor" that you plug in between the server and the wall socket, and it will tell you how much power is being drawn. The 350 watts on the PSU is the maximum draw; the actual usage will be lower and depends on the installed hardware, operating system, power management, and of course the server's configuration and utilisation.
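Once you have a reading from the monitor, turning it into a running cost is simple arithmetic. A quick sketch - note the 180 W draw and 0.30/kWh tariff below are made-up example figures, not your actual numbers:

```python
# Convert a measured power draw into annual energy usage and cost.
# Both input figures are illustrative assumptions - substitute your
# own meter reading and electricity tariff.

measured_watts = 180      # hypothetical reading from the energy monitor
price_per_kwh = 0.30      # hypothetical tariff in your local currency
hours_per_year = 24 * 365

kwh_per_year = measured_watts / 1000 * hours_per_year
annual_cost = kwh_per_year * price_per_kwh

print(f"~{kwh_per_year:.0f} kWh/year, costing ~{annual_cost:.2f}/year")
```

With those example numbers that works out at roughly 1,577 kWh a year - a useful baseline to compare against a cloud provider's bill, even if the provider won't tell you the watts behind it.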
The ability to size and resize a cloud server appropriately is a key benefit and driver of efficiency - you only pay for what you use (and someone else can use the rest). That goal of paying only for what you use can be achieved to some extent locally with a screwdriver :-). Removing the second CPU would still leave you with probably 6 cores but save 50-100 watts of idle power consumption. Both CPUs are rated up to 120 watts, but the amount they draw depends on how hard they are working. Another big power consumer can be the storage. Your server supports up to 8 disks - how many do you have/need? Reducing the number of disks, or switching out spindle disks for SSDs, would reduce power consumption. Obviously there's a trade-off here between purchase cost, running cost and available system resources, but it's worth considering for a greener option.
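To put rough numbers on the screwdriver approach, here's a sketch. The per-component wattages are illustrative assumptions: 75 W is just the midpoint of the 50-100 W CPU range above, and the ~7 W per idle spindle disk is a typical ballpark, not a measured figure:

```python
# Rough annual savings estimate for removing idle hardware.
# All per-component wattages are illustrative assumptions.

price_per_kwh = 0.30      # hypothetical tariff
hours_per_year = 24 * 365

savings_watts = {
    "remove second CPU": 75,                 # midpoint of the 50-100 W range
    "remove 4 spare spindle disks": 4 * 7,   # ~7 W per idle disk (assumed)
}

for change, watts in savings_watts.items():
    annual_kwh = watts / 1000 * hours_per_year
    print(f"{change}: ~{watts} W saved -> ~{annual_kwh:.0f} kWh/year "
          f"(~{annual_kwh * price_per_kwh:.0f}/year)")
```

Even with conservative assumptions, shedding an idle CPU and a few spare disks adds up to hundreds of kWh a year, which is why right-sizing (whether in the cloud or with a screwdriver) matters.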