For example, right now I'm using the following to change a couple of files whose Unix paths I wrote to a file:
cat file.txt | while read in; do chmod 755 "$in"; done
Is there a more elegant, safer way?
Because the main purpose of the shell (and/or bash) is to run other commands, there is more than one answer!
0. Shell command-line expansion
1. xargs — the dedicated tool
2. while read — with some remarks
3. while read -u — using a dedicated fd, for interactive processing (sample)
4. Running a shell with an inline generated script
Regarding the OP's request — running chmod on all targets listed in a file — xargs is the indicated tool. But for some other applications, a small number of files, etc., you have alternatives.

0. Shell command-line expansion

If the file is short and the names contain no blanks or glob characters, you could simply use shell command-line expansion:
chmod 755 $(<file.txt)
This command is the simplest one.
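A quick way to see why this form is fragile: unquoted command substitution undergoes word splitting, so any name containing a blank is broken into several arguments. A minimal sketch, using hypothetical file names:

```shell
# Simulate a list of two file names, one containing a space.
# Unquoted $(...) splits "my file.txt" into two words.
set -- $(printf 'plain.txt\nmy file.txt\n')
echo "$#"    # prints 3, not 2
```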
1. xargs is the right tool

For many coreutils commands, like chown, chmod, rm, cp -t ...
xargs chmod 755 <file.txt
It can also be used after a pipe, on files found by find:
find /some/path -type f -uid 1234 -print | xargs chmod 755
If you have special characters and/or a lot of lines in file.txt:
xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)
find /some/path -type f -uid 1234 -print0 | xargs -0 chmod 755
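To convince yourself that the NUL-separated pipeline keeps blanks intact, here is a throwaway sketch; the directory and file names are invented for the demo:

```shell
# Scratch directory containing a file name with a space.
dir=$(mktemp -d)
touch "$dir/a b.txt" "$dir/c.txt"

# NUL separators: "a b.txt" survives as a single argument.
find "$dir" -type f -print0 | xargs -0 chmod 755

ls -l "$dir"    # both files now have mode 755
rm -rf "$dir"
```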
If your command needs to be run exactly once for each entry:
xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)
This is not needed for this sample, since chmod accepts multiple files as arguments, but it matches the title of the question.
For some special cases, you could even define the location of the file argument in the commands generated by xargs:
xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)
Using seq 1 5 as input, try this:
xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..
where your command is executed once per line.
Looping in the shell is generally a bad idea! There are many warnings about shell loops! Before writing a loop, think about parallelisation and dedicated tools! You could use bash to interact with and administrate dedicated tools. Some samples:
for and while loops, as in my Swap usage by process script.

2. while read and variants

For this, make sure the file ends with a newline character.
As OP suggests,
cat file.txt |
while read in; do
chmod 755 "$in"
done
will work, but there are two issues: the cat | is a useless fork, and | while ...; done runs in a subshell whose environment disappears after done.
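The subshell issue is easy to demonstrate in bash: a variable modified inside the piped loop is lost once the loop ends. The input lines here are just placeholders:

```shell
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"    # prints 0: the increments happened in a subshell and are lost
```

Replacing the pipe with a redirection keeps the loop in the current shell, so the variable survives.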
So this could be better written:
while read in; do
chmod 755 "$in"
done < file.txt
But consider $IFS and read's flags; from help read:

    read: read [-r] ... [-d delim] ... [name ...]
    Reads a single line from the standard input... The line is split into fields
    as with word splitting, and the first word is assigned to the first NAME,
    the second word to the second NAME, and so on... Only the characters found
    in $IFS are recognized as word delimiters. ...
    Options: ...
      -d delim  continue until the first character of DELIM is read, rather than newline ...
      -r        do not allow backslashes to escape any characters ...
    Exit Status: The return code is zero, unless end-of-file is encountered...
In some cases, you may need to use
while IFS= read -r in;do
chmod 755 "$in"
done <file.txt
to avoid problems with strange filenames. And maybe, if you encounter problems with UTF-8:
while LANG=C IFS= read -r in ; do
chmod 755 "$in"
done <file.txt
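The effect of IFS= and -r can be seen on a single crafted line: without them, leading blanks are stripped and the backslash is eaten. The file name is invented for the demo:

```shell
# Plain read: word splitting trims leading blanks, backslash escapes the 'w'.
printf '  \\weird name.txt\n' | { read plain; printf '[%s]\n' "$plain"; }
# [weird name.txt]

# IFS= read -r: the line is preserved exactly.
printf '  \\weird name.txt\n' | { IFS= read -r safe; printf '[%s]\n' "$safe"; }
# [  \weird name.txt]
```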
While you use a redirection from standard input for reading file.txt, your script cannot read other input interactively (you cannot use standard input for anything else anymore).
3. while read, using a dedicated fd

The syntax while read ...; done <file.txt redirects standard input to come from file.txt. That means you won't be able to deal with other input until the loop finishes.

Using a dedicated file descriptor lets you work with more than one input simultaneously: you could merge two files (as here: scriptReplay.sh), or, if you plan to create an interactive tool, you have to avoid using standard input and use some alternative file descriptor.
The constant file descriptors are: 0 (standard input), 1 (standard output) and 2 (standard error).
You could see them by:
ls -l /dev/fd/
or
ls -l /proc/$$/fd/
From there, you have to choose unused numbers between 0 and 63 (more, in fact, depending on settings made with the sysctl superuser tool) as your file descriptor.
For this demo, I will use file descriptor 7:
while read <&7 filename; do
ans=
while [ -z "$ans" ]; do
read -p "Process file '$filename' (y/n)? " foo
[ "$foo" ] && [ -z "${foo#[yn]}" ] && ans=$foo || echo '??'
done
if [ "$ans" = "y" ]; then
echo Yes
echo "Processing '$filename'."
else
echo No
fi
done 7<file.txt
If you want to read your input file in several distinct steps, you have to use:
exec 7<file.txt # Without spaces between `7` and `<`!
# ls -l /dev/fd/
read <&7 headLine
while read <&7 filename; do
case "$filename" in
*'----' ) break ;; # break loop when line end with four dashes.
esac
....
done
read <&7 lastLine
exec 7<&- # This will close file descriptor 7.
# ls -l /dev/fd/
Under bash, you can let it choose any free fd for you and store it in a variable: exec {varname}</path/to/input. Then:
while read -ru ${fle} filename;do
ans=
while [ -z "$ans" ]; do
read -rp "Process file '$filename' (y/n)? " -sn 1 foo
[ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
done
if [ "$ans" = "y" ]; then
echo Yes
echo "Processing '$filename'."
else
echo No
fi
done {fle}<file.txt
Or
exec {fle}<file.txt
# ls -l /dev/fd/
read -ru ${fle} headline
while read -ru ${fle} filename;do
[[ -n "$filename" ]] && [[ -z ${filename//*----} ]] && break
....
done
read -ru ${fle} lastLine
exec {fle}<&-
# ls -l /dev/fd/
4. Running a shell with an inline generated script

sed <file.txt 's/.*/chmod 755 "&"/' | sh
This won't optimise forks, but it can be useful for more complex (or conditional) operations:
sed <file.txt 's/.*/if [ -e "&" ];then chmod 755 "&";fi/' | sh
sed 's/.*/[ -f "&" ] \&\& echo "Processing: \\"&\\"" \&\& chmod 755 "&"/' \
file.txt | sh
This can be very useful if the sed input is a feed instead of a file. Practical sample: using rsync log output as sed input to delete the corresponding description file when a project file is deleted. See my answer to Remove file if a file with the same name but different extension doesn't exist in another directory, which differs a lot from what the asker expected.
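A cautious habit with this generated-script approach: inspect the generated commands first, then pipe them to sh only once they look right. A sketch with a made-up file list:

```shell
# Preview the commands that would run (no "| sh" yet).
printf 'a.txt\nb c.txt\n' | sed 's/.*/chmod 755 "&"/'
# chmod 755 "a.txt"
# chmod 755 "b c.txt"

# When satisfied, append "| sh" to the same pipeline to execute it.
```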
xargs was initially built to answer this kind of need. Some of its features — like building command lines as long as the current environment allows, so that chmod in this case is invoked as few times as possible — reduce forks and ensure efficiency. while ...; do ...; done <$file implies one fork per file; xargs can run one fork for a thousand files, in a reliable manner.
cat file.txt | tr \\n \\0 | xargs -0 -n1 chmod 755
tr \\n \\0 <file.txt |xargs -0 [command]
is about 50% faster than the method you described.
Yes.
while read in; do chmod 755 "$in"; done < file.txt
This way you can avoid a cat process.

cat is almost always bad for a purpose such as this. You can read more about Useless Use of Cat.

Avoiding cat is a good idea, but in this case the indicated command is xargs.
Add -n 1 to run one chmod per entry (i.e. really run one command for each line in the file).
if you have a nice selector (for example all .txt files in a dir) you could do:
for i in *.txt; do chmod 755 "$i"; done
or a variant of yours:
while read line; do chmod 755 "$line"; done < file.txt
There is no need for export here. Its purpose is to make the variable visible to subprocesses (so it is useful if you want to change the separator in subshells started from the current one, but not really relevant or useful here).
If you want to run your command in parallel for each line you can use GNU Parallel
parallel -a <your file> <program>
Each line of your file will be passed to the program as an argument. By default parallel runs as many jobs as you have CPU cores, but you can specify the count with -j.
If you know you don't have any whitespace in the input:
xargs chmod 755 < file.txt
If there might be whitespace in the paths, and if you have GNU xargs:
tr '\n' '\0' < file.txt | xargs -0 chmod 755
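The difference is easy to see with printf standing in for chmod (the file name is invented): whitespace-splitting xargs sees two arguments, while the NUL-separated version sees one.

```shell
# Default xargs splits on blanks: two arguments.
printf 'my file.txt\n' | xargs printf '[%s]'
# [my][file.txt]

# NUL-separated: one argument, the name stays whole.
printf 'my file.txt\n' | tr '\n' '\0' | xargs -0 printf '[%s]'
# [my file.txt]
```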
xargs is robust. The tool is very old and its code has been strongly reviewed. Its goal was initially to build command lines within shell limitations (64 K chars per line, or thereabouts). Now it can work with very big files and greatly reduce the number of forks of the final command. See my answer and/or man xargs.
On macOS you can install GNU findutils (brew install findutils) and then invoke GNU xargs as gxargs instead, e.g. gxargs chmod 755 < file.txt
xargs itself is robust, but you have to understand how it handles (or fails to handle) quotes etc. in your input. The workaround with xargs -0 is completely predictable and robust, but regrettably specific to GNU xargs.
Nowadays (on GNU/Linux) xargs is still the answer for this, but... you can now use the -a option to read input directly from a file:
xargs -a file.txt -n 1 -I {} chmod 775 {}
xargs -a is a GNU extension, which means it typically works on Linux out of the box, but not so much anywhere else unless you separately install the GNU versions of many common utilities. The standard solution — reading file names from standard input — continues to work portably across GNU and other versions of xargs.
You can also use AWK, which can give you more flexibility to handle the file:
awk '{ print "chmod 755 "$0"" | "/bin/sh"}' file.txt
If your file has a field separator, like:

field1,field2,field3

then to get only the first field you do:
awk -F, '{ print "chmod 755 "$1"" | "/bin/sh"}' file.txt
You can check more details on GNU Documentation https://www.gnu.org/software/gawk/manual/html_node/Very-Simple.html#Very-Simple
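Note that the generated chmod line above leaves the name unquoted, so it breaks on blanks. One way to harden it is to have awk wrap each name in single quotes (39 is the ASCII code of '); this sketch, with made-up names, still assumes the names contain no single quotes themselves:

```shell
# Generate "chmod 755 'name'" with each name single-quoted.
printf 'a.txt\nb c.txt\n' | awk '{ printf "chmod 755 %c%s%c\n", 39, $0, 39 }'
# chmod 755 'a.txt'
# chmod 755 'b c.txt'
# Pipe the result to /bin/sh to execute it.
```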
I see that you tagged bash, but Perl would also be a good way to do this:
perl -p -e '`chmod 755 $_`' file.txt
You could also apply a regex to make sure you're getting the right files, e.g. to only process .txt files:
perl -p -e '`chmod 755 $_` if /\.txt$/' file.txt
To "preview" what's happening, just replace the backticks with double quotes and prepend print
:
perl -p -e 'print "chmod 755 $_" if /\.txt$/' file.txt
perl -lpe 'chmod 0755, $_' file.txt
-- use -l
for the "auto-chomp" feature
The same logic applies to many other objectives. For example: how would you read the .sh_history of each user under the /home/ filesystem? What if there are thousands of them?
#!/bin/ksh
last |head -10|awk '{print $1}'|
while IFS= read -r line
do
su - "$line" -c 'tail .sh_history'
done
Here is the script https://github.com/imvieira/SysAdmin_DevOps_Scripts/blob/master/get_and_run.sh