0

I have a bunch of files, many of which have mixed line endings (CRLF or LF). I want to write a script that iterates over a list of files, and for each file, converts the file to the line endings that are most prevalent. E.g., if file1 has 23 lines ending in LF, and 15 lines ending in CRLF, I want to run dos2unix on it. If file2 has 2 lines ending in LF, and 16 lines ending in CRLF, I want to run unix2dos on it. Does anyone know how to count the number of line endings of each type in a file? I have tried grep -c $'\r\n' FILE from a Cygwin Bash terminal, but it is matching every line regardless of the line ending type in the file.

1
  • What is the desired outcome if a file contains exactly the same number of both line-ending types? What is the desired result if you have a lone CR (without LF)? And most of all, why do you want to do this? Usually, a certain line ending either is or is not required by the application that processes the file. Even if all the line endings in a file have the same (but wrong) value, you will want to change all of them. Commented Apr 25, 2022 at 10:00

2 Answers

0

The command I am looking for is apparently:

grep -c $'\r$' FILE

This matches only the Windows (CRLF) line endings. Inside `$'…'`, the `\r` becomes a literal carriage return while the `$` is passed through unchanged, and grep then interprets that trailing `$` as an end-of-line anchor — so the pattern matches a CR immediately before the newline.
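Building on that command, a minimal sketch of the full conversion loop from the question (assuming `dos2unix` and `unix2dos` are on `PATH`; file names are passed as arguments):

```shell
#!/usr/bin/env bash
# Convert each file to whichever line-ending style the majority
# of its lines already use. Ties fall through to LF; adjust to taste.
for f in "$@"; do
  crlf=$(grep -c $'\r$' "$f")    # lines ending in CRLF
  total=$(wc -l < "$f")          # all newline-terminated lines
  lf=$(( total - crlf ))         # the rest end in bare LF
  if (( crlf > lf )); then
    unix2dos "$f"
  else
    dos2unix "$f"
  fi
done
```

Note that `grep -c` exits with status 1 when the count is 0, so avoid `set -e` around it (or append `|| true`).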

0

Only a partial answer, but here's a little trick to count the CRs: first convert the file to one ASCII hex code per byte via hexdump, then count the occurrences of "0D" (i.e., \r).

hexdump -v -e '/1 "%02X\n"' yadda | grep -c '^0D$'

Do the same with 0A (for \n), then apply your logic to pick which conversion to run. Keep in mind that every CRLF line contributes to both counts, so the number of LF-only lines is the 0A count minus the 0D count.
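Put together, the byte-counting approach looks like this (the file name `yadda` above is a placeholder; this sketch builds a small sample file instead):

```shell
# Build a sample with two CRLF lines and one LF-only line.
f=$(mktemp)
printf 'one\r\ntwo\nthree\r\n' > "$f"

# Emit one hex byte per line, then count exact "0D" (CR) and "0A" (LF) lines.
cr=$(hexdump -v -e '/1 "%02X\n"' "$f" | grep -c '^0D$')
lf=$(hexdump -v -e '/1 "%02X\n"' "$f" | grep -c '^0A$')

echo "CR=$cr LF=$lf"   # CRLF lines count toward both totals
rm -f "$f"
```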
