I have a directory tree that holds hundreds of thousands of files named hp-temps.txt. (There are also tons of subdirectories.)
The content of these files looks like this for example:
Sensor Location Temp Threshold
------ -------- ---- ---------
#1 PROCESSOR_ZONE 15C/59F 62C/143F
#2 CPU#1 10C/50F 73C/163F
#3 I/O_ZONE 25C/77F 68C/154F
#4 CPU#2 32C/89F 73C/163F
#5 POWER_SUPPLY_BAY 9C/48F 55C/131F
I need to parse all of these files and find the highest temperature in the #1 (PROCESSOR_ZONE) line.
I have a working script, but it takes a very long time, and I was wondering if there is any way to improve it.
Since I'm rather new to shell scripting, I imagine this code of mine is really inefficient:
#!/bin/bash
highestTemp=0    # was misspelled "highesetTemp", so the first comparison read an unset variable
# cut -c 32-33 depends on the temperature always sitting in those exact columns
temps=$(find "$1" -name hp-temps.txt -exec cat {} + | grep 'PROCESSOR' | cut -c 32-33)
for t in $temps
do
    if [ "$t" -gt "$highestTemp" ]; then
        highestTemp=$t
    fi
done
echo "$highestTemp"
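For comparison, the whole scan can be done in a single awk pass: instead of collecting every temperature into a shell variable and looping over it, awk tracks the maximum while the concatenated file contents stream through. This is a sketch, not the asker's script; it builds a small demo fixture with mktemp so it runs as-is, and it parses the number out of the field itself ("15C/59F") rather than relying on fixed character columns.

```shell
#!/bin/bash
# Demo fixture so the sketch is runnable as-is; point `dir` at your real tree.
dir=$(mktemp -d)
mkdir -p "$dir/sub"
printf '#1 PROCESSOR_ZONE 15C/59F 62C/143F\n' > "$dir/hp-temps.txt"
printf '#1 PROCESSOR_ZONE 42C/107F 62C/143F\n' > "$dir/sub/hp-temps.txt"

# One cat over all files, one awk process keeping the running maximum.
max=$(find "$dir" -name hp-temps.txt -exec cat {} + |
      awk '/PROCESSOR_ZONE/ {
               t = $3; sub(/C\/.*/, "", t)   # "15C/59F" -> "15"
               if (t + 0 > max) max = t + 0
           }
           END { print max }')
echo "$max"
```

The shell loop disappears entirely, which is where most of the time goes once there are hundreds of thousands of values.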
EDIT:
An answer already provided very efficient code, but I forgot to mention that I don't need only the biggest value.
I'd like to loop through all the files, because I want to output the file's path and the temperature whenever a new maximum is detected.
So the output could look like this for example:
New MAX: 22 in /path/to/file/hp-temps.txt
New MAX: 24 in /another/path/hp-temps.txt
New MAX: 29 in /some/more/path/hp-temps.txt
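One way to sketch that per-file variant (again with a throwaway demo fixture, so the paths below are illustrative): visit each file with find -print0, extract its PROCESSOR_ZONE temperature with a single awk call, and report whenever the running maximum increases. The -print0 / read -d '' pairing keeps paths containing spaces intact; this needs bash for the process substitution.

```shell
#!/bin/bash
dir=$(mktemp -d)   # demo fixture; use your real directory instead
mkdir -p "$dir/a" "$dir/b"
printf '#1 PROCESSOR_ZONE 22C/71F 62C/143F\n' > "$dir/a/hp-temps.txt"
printf '#1 PROCESSOR_ZONE 29C/84F 62C/143F\n' > "$dir/b/hp-temps.txt"

max=0
while IFS= read -r -d '' f; do
    # First PROCESSOR_ZONE line of this file, numeric part only.
    t=$(awk '/PROCESSOR_ZONE/ { sub(/C\/.*/, "", $3); print $3 + 0; exit }' "$f")
    if [ "${t:-0}" -gt "$max" ]; then
        max=$t
        echo "New MAX: $t in $f"
    fi
done < <(find "$dir" -name hp-temps.txt -print0)
```

The loop runs in the current shell (not a pipeline subshell), so max is still set after the loop ends.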
find does not visit files in any particular order, so it makes no sense to want any output other than the final maximum and its pathname.
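If a deterministic traversal is wanted anyway, one option is to sort the NUL-terminated file list before scanning (GNU sort -z); the "New MAX" lines then always appear in path order. A minimal sketch, again on a demo fixture:

```shell
#!/bin/bash
dir=$(mktemp -d)   # demo fixture; use your real directory instead
mkdir -p "$dir/a" "$dir/b"
printf '#1 PROCESSOR_ZONE 24C/75F 62C/143F\n' > "$dir/a/hp-temps.txt"
printf '#1 PROCESSOR_ZONE 22C/71F 62C/143F\n' > "$dir/b/hp-temps.txt"

# Sorted, NUL-safe file list; the reporting loop runs in a subshell here,
# so only its printed output (captured below) is used afterwards.
out=$(find "$dir" -name hp-temps.txt -print0 | sort -z |
      while IFS= read -r -d '' f; do
          t=$(awk '/PROCESSOR_ZONE/ { sub(/C\/.*/, "", $3); print $3 + 0; exit }' "$f")
          if [ "${t:-0}" -gt "${max:-0}" ]; then
              max=$t
              echo "New MAX: $t in $f"
          fi
      done)
echo "$out"
```

With the fixture above, only the first file produces a "New MAX" line, since the second file's 22 does not beat 24.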