As far as I understand, projection matrices map $x_\text{eye}$ and $y_\text{eye}$ to NDC in two steps: first, similar triangles give the projection onto the near plane, $x_p = \frac{n\cdot x_e}{-z_e}$ and $y_p = \frac{n\cdot y_e}{-z_e}$; then a linear remap takes $[l,r]\rightarrow [-1,1]$ horizontally and $[b,t]\rightarrow [-1,1]$ vertically. Why isn't the same done for $z_\text{eye}$?
Now, I understand how the mapping that is usually used turns out to be useful: it provides better floating-point precision near the near plane, which helps with z-fighting. What I don't understand is whether that is just a convenient consequence of how the third row of the projection matrix is derived, or the other way around.
I would like a more thorough explanation than the "that is how it is usually done" answers I've found while researching the topic, i.e. "put $A$ and $B$ in $\begin{bmatrix}\frac{2n}{r-l}&0&\frac{r+l}{r-l}&0\\0&\frac{2n}{t-b}&\frac{t+b}{t-b}&0\\0&0&A&B\\0&0&-1&0\end{bmatrix}\cdot\begin{bmatrix}x_e\\y_e\\z_e\\w_e\end{bmatrix}$ and solve a linear system". I want to understand the reasoning and the "methodology" behind this, whether alternative mappings exist, and what their strengths and shortcomings are.
N.B. I would actually try testing a linear mapping for z myself, but I haven't written the proper C code for that yet, as I wanted to have a basic grasp of the concept before starting.