I have a similar CPU in a clunky old Core 2 machine (an E6600, first-gen Core 2, even older than yours), and I think Linux reads its CPU temp as 20 degrees(?) higher than it actually is.
Or else for the first couple of years I had it, Linux was reading it 20 degrees too low.
Either way, I wouldn't 100% trust those numbers if that's at idle. I never got to the bottom of whether that change really happened at some point, so I don't know if the readings are real or not.
`sensors` on my Core2 system says:
```
radeon-pci-0100
Adapter: PCI adapter
temp1:        +50.5°C  (crit = +120.0°C, hyst = +90.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +72.0°C  (high = +86.0°C, crit = +100.0°C)
Core 1:       +71.0°C  (high = +86.0°C, crit = +100.0°C)
```
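For what it's worth, the coretemp driver derives these numbers as TjMax minus the chip's on-die DTS reading, and as I understand it, for Core 2 parts the driver has to guess TjMax because Intel never reliably published it for that generation; a kernel update changing that guess would shift every reading by a constant offset, which is exactly the kind of 20-degree jump I remember. If you want to look at the raw values `sensors` is formatting, here's a minimal sketch that reads the same hwmon files from sysfs (assuming the standard Linux hwmon layout; the hwmonN index and labels vary by machine):

```python
#!/usr/bin/env python3
# Read coretemp values straight from sysfs, the same source `sensors` uses.
# Assumes the standard Linux hwmon layout; values are in millidegrees C.
from pathlib import Path

for hwmon in Path("/sys/class/hwmon").iterdir():
    name_file = hwmon / "name"
    if not name_file.exists() or name_file.read_text().strip() != "coretemp":
        continue
    for temp_input in sorted(hwmon.glob("temp*_input")):
        # Each tempN_input usually has a matching tempN_label (e.g. "Core 0").
        label_file = temp_input.with_name(temp_input.name.replace("_input", "_label"))
        label = label_file.read_text().strip() if label_file.exists() else temp_input.stem
        print(f"{label}: {int(temp_input.read_text()) / 1000:.1f}°C")
```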
This seems unreasonably high for idle; the thermal resistance between the silicon and the heatsink isn't huge, and the heatsink doesn't feel anywhere near that hot to the touch, even near the base.
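As a rough sanity check on that intuition: steady-state die temperature is approximately heatsink temperature plus dissipated power times die-to-sink thermal resistance. The numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope estimate: T_die ≈ T_sink + P × θ.
# All three inputs are assumptions for illustration, not measured values.
idle_power_w = 20.0   # plausible idle package power for a Core 2
theta_c_per_w = 0.5   # rough die-to-heatsink thermal resistance, large cooler
sink_temp_c = 35.0    # warm to the touch, but comfortable

die_temp_c = sink_temp_c + idle_power_w * theta_c_per_w
print(f"expected die temp: ~{die_temp_c:.0f}°C")  # ~45°C, well short of 72°C
```

Even with generous assumptions, a heatsink that's merely warm shouldn't be sitting under a 72°C die, which is another reason I suspect a reporting offset rather than bad cooling.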
By comparison, modern CPUs run much cooler at idle: my Skylake i7-6700K is idling at ~33°C, just a few degrees above ambient. (It's a fairly hot day, around 27°C.)
Both systems have fairly large third-party CPU coolers, but the Skylake system runs cool enough that I can configure the BIOS to let the case fans spin all the way down to a stop when the CPU/mobo temps are below about 45°C (I forget exactly what I set).
As other answers have pointed out, much of the improvement in x86 microarchitectures over recent years has gone into power efficiency: allowing higher clock speeds without melting, but also dramatically improving idle / low-load power. (The laptop market is important, and both Intel and AMD use the same basic core design across their laptop, desktop, and server chips.)