Both are O(N), where N is the total number of elements in the array. Assuming you fix the bugs, they're the same loop: they'll compile into similar assembly code (or Java bytecode). The second one is just more confusing (and has bugs), so there's no good reason to write it.
O(n^2) isn't "when you have two nested loops." O(n^2) is when the amount of time your algorithm takes is proportional to the square of the amount of input data (or, more generally, the square of something). The first version runs for arr.length * arr[i].length iterations; so does the second version, once you fix its bugs. It's just more confusing.
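To make that concrete, here's a small sketch (with made-up method names) showing that the shape of the loops doesn't determine the big-O; only the total amount of work does. Two nested loops can be O(n) if the inner bound is a constant, and a single loop can be O(n^2) if its bound is n*n:

```java
// Sketch: loop shape vs. big-O. Method names are hypothetical, for illustration only.
public class LoopShapes {
    // Two nested loops, but the inner bound is a constant 3: O(n), not O(n^2).
    static long nestedButLinear(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < 3; k++) {
                steps++; // runs 3 * n times total
            }
        }
        return steps;
    }

    // One single loop, but the bound is n*n: O(n^2) despite no nesting.
    static long singleButQuadratic(int n) {
        long steps = 0;
        for (int i = 0; i < n * n; i++) {
            steps++; // runs n * n times total
        }
        return steps;
    }

    public static void main(String[] args) {
        // Doubling n doubles the first count but quadruples the second.
        System.out.println(nestedButLinear(100));    // prints 300
        System.out.println(nestedButLinear(200));    // prints 600
        System.out.println(singleButQuadratic(100)); // prints 10000
        System.out.println(singleButQuadratic(200)); // prints 40000
    }
}
```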
CPUs don't understand loops; they only understand goto, if+goto, and other basic instructions like those. Loops were one of the first shortcuts programmers invented to make programs easier to write. When you write
for (int j = 0; j < arr[i].length; j++) {
    arr[i][j] = j;
}
the compiler actually turns it into something like this:
j = 0;
start_of_loop:
    if (j >= arr[i].length) goto end_of_loop;
    arr[i][j] = j;
    j++;
    goto start_of_loop;
end_of_loop:
and when you write this:
if (j == arr[i].length - 1) {
    i++;
    if (i == arr.length) {
        break;
    }
    j = 0;
}
the compiler actually turns it into something like this:
    if (j != arr[i].length - 1) goto end_of_if;
    i++;
    if (i == arr.length) goto end_of_loop;
    j = 0;
end_of_if:
So you can see the CPU isn't going to care which way you write it: it's (approximately) the same code by the time the CPU actually runs it.
The CPU takes some time to run each instruction, so what matters is (approximately) the number of instructions it runs. That's what we're trying to capture with big-O notation: if n is twice as big, does the CPU run the same number of instructions, a fixed amount extra, twice as many, or four times as many? Big-O notation tells us that, in a usefully approximate way, without caring about the exact number of instructions.
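You can see that scaling directly by counting iterations of the array-fill loop at two input sizes. In this sketch (the counting helper is made up for illustration), doubling the number of elements doubles the work, which is exactly what O(N) means:

```java
// Count loop iterations for the 2D array fill at two sizes to observe O(N) scaling.
public class CountSteps {
    // Hypothetical helper: fills a rows x cols array and counts one step per element.
    static long fillAndCount(int rows, int cols) {
        int[][] arr = new int[rows][cols];
        long steps = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr[i].length; j++) {
                arr[i][j] = j;
                steps++; // one unit of work per element
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        long a = fillAndCount(100, 100); // 10000 elements -> 10000 steps
        long b = fillAndCount(200, 100); // twice the elements -> 20000 steps
        System.out.println(b / a);       // prints 2: twice the input, twice the work
    }
}
```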