$\newcommand{\bR}{\mathbb{R}}
\newcommand{\one}{\mathbf{1}}
\newcommand{\diag}{\textrm{diag}}
\newcommand{\Id}{\textrm{Id}}$
I assume the random variables $f$, $a$, and $\varepsilon$ are independent (your question seems to imply this); I am going to use this below.
General strategy for Gaussian conditioning in linear models:
Let $a\sim N(\bar{a},Q_{a})$ on $\bR^{n}$ and $z = Aa+\eta$, where $A\in\bR^{m \times n}$ and $\eta \sim N(\bar{\eta},Q_{\eta})$ on $\bR^{m}$ is independent of $a$.
Then consider the joint distribution of $a$ and $z$,
$$
\begin{pmatrix}
a\\ z
\end{pmatrix}
\sim
N\left(
\begin{pmatrix}
\bar{a}\\ A\bar{a}+\bar{\eta}
\end{pmatrix}
,
\begin{pmatrix}
Q_{a} & Q_{a}A^{\top} \\ A Q_{a} & Q_{\eta} + A Q_{a} A^{\top}
\end{pmatrix}
\right)
$$
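The off-diagonal block is where the independence of $a$ and $\eta$ enters; spelled out,
$$
\mathrm{Cov}(a,z) = \mathrm{Cov}(a,Aa+\eta) = \mathrm{Cov}(a,a)A^{\top} + \mathrm{Cov}(a,\eta) = Q_{a}A^{\top},
$$
since $\mathrm{Cov}(a,\eta) = 0$, and similarly $\mathrm{Cov}(z,z) = AQ_{a}A^{\top} + Q_{\eta}$.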
and apply the Gaussian conditioning formula (see e.g. the Wikipedia article on the multivariate normal distribution, section "Conditional distributions") to obtain the conditional mean
$$
(\ast)\qquad
\bar{a}^{z}
=
\bar{a} + (Q_{a}A^{\top}) (Q_{\eta} + A Q_{a} A^{\top})^{-1} (z-A\bar{a}-\bar{\eta}).
$$
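If you want to sanity-check $(\ast)$ numerically, here is a minimal numpy sketch (the function and argument names are mine, chosen only for this illustration):

```python
import numpy as np

def conditional_mean(a_bar, Q_a, A, eta_bar, Q_eta, z):
    """Conditional mean (*) of a given z = A a + eta, for independent Gaussians a, eta."""
    S = Q_eta + A @ Q_a @ A.T        # covariance of z
    r = z - A @ a_bar - eta_bar      # residual
    # (*), with a linear solve in place of an explicit matrix inverse:
    return a_bar + Q_a @ A.T @ np.linalg.solve(S, r)
```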
Answer to your question:
Now let us consider your particular case. Denote by $\one = (1,\dots,1)^{\top} \in \bR^{M}$ the all-ones vector, by $\diag(s_{1},\dots,s_{M})$ the diagonal matrix with the values $s_{1},\dots,s_{M}$ on the diagonal, and by $\diag(s)$ the same matrix if all $s_{j}$ equal $s$.
The first nontrivial step is to identify $\eta$ correctly: $z = A a + \eta$, where $A = \Id_{M}$ and
$$
\eta
=
\underbrace{(\one \ \Id_{M})}_{=: B} \begin{pmatrix}
f\\
\varepsilon_{1}\\
\vdots\\
\varepsilon_{M}
\end{pmatrix}
\sim
N\left(
\bar{f}\one , B \, \diag(\sigma_{f}^2,\sigma_{\varepsilon}^{2},\dots,\sigma_{\varepsilon}^{2}) \, B^{\top}
\right).
$$
(Here we used the fact that $y = Bx$ with $x\sim N(\bar{x},Q_{x})$ implies $y\sim N(B\bar{x},BQ_{x}B^{\top})$; I can provide details on this if required.)
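Writing $B \, \diag(\sigma_{f}^{2},\sigma_{\varepsilon}^{2},\dots,\sigma_{\varepsilon}^{2}) \, B^{\top}$ out explicitly gives
$$
Q_{\eta} = \sigma_{f}^{2}\,\one\one^{\top} + \sigma_{\varepsilon}^{2}\,\Id_{M},
$$
which matches the componentwise picture: $z_{j} = a_{j} + f + \varepsilon_{j}$, i.e. $\eta_{j} = f + \varepsilon_{j}$.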
Then, a simple (but tedious) computation shows that
\begin{align*}
Q_{\eta} + AQ_{a}A^{\top}
&=
\diag(\sigma_{a}^{2} + \sigma_{\varepsilon}^{2}) + \sigma_{f}^{2} \one \one^{\top},
\\
(Q_{\eta} + AQ_{a}A^{\top})^{-1}
&=
\frac{\diag(\sigma_{a}^{2} + \sigma_{\varepsilon}^{2} + M \sigma_{f}^{2}) - \sigma_{f}^{2} \one \one^{\top}}{(\sigma_{a}^{2} + \sigma_{\varepsilon}^{2})(\sigma_{a}^{2} + \sigma_{\varepsilon}^{2} + M \sigma_{f}^{2})}.
\end{align*}
Note that finding the inverse might be hard (here it follows, e.g., from the Sherman–Morrison formula), but verifying that it is the inverse, once you have it, is easy (but tedious): simply multiply the two matrices above to obtain $\Id_{M}$.
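For instance, the following little numpy check does exactly that multiplication (the parameter values are arbitrary, purely for illustration):

```python
import numpy as np

M, s_a2, s_e2, s_f2 = 5, 0.7, 0.3, 1.2   # arbitrary test values
one = np.ones((M, 1))

# Q_eta + A Q_a A^T and its claimed inverse, as in the display above.
C = (s_a2 + s_e2) * np.eye(M) + s_f2 * (one @ one.T)
C_inv = ((s_a2 + s_e2 + M * s_f2) * np.eye(M) - s_f2 * (one @ one.T)) \
        / ((s_a2 + s_e2) * (s_a2 + s_e2 + M * s_f2))

print(np.allclose(C @ C_inv, np.eye(M)))  # True: the product is Id_M
```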
If you plug this into $(\ast)$ and write it out componentwise, you obtain the desired result.
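For concreteness (assuming, as the computation above suggests, that $Q_{a} = \sigma_{a}^{2}\Id_{M}$), the componentwise form of $(\ast)$ reads, with residuals $r_{k} := z_{k} - \bar{a}_{k} - \bar{f}$,
$$
\bar{a}_{j}^{z}
=
\bar{a}_{j} + \frac{\sigma_{a}^{2}}{\sigma_{a}^{2} + \sigma_{\varepsilon}^{2}}
\left(
r_{j} - \frac{\sigma_{f}^{2}}{\sigma_{a}^{2} + \sigma_{\varepsilon}^{2} + M\sigma_{f}^{2}} \sum_{k=1}^{M} r_{k}
\right).
$$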
Hope this helps. If you have any questions, don't hesitate to ask in the comments.
Remarks:
- $(\ast)$ holds analogously if $\bR^{m}$ and $\bR^{n}$ are replaced by arbitrary separable Hilbert spaces (possibly infinite-dimensional); in this case, replace the transpose $A^{\top}$ by the adjoint $A^{\ast}$.
- There is also a formula for the conditional covariance matrix/covariance operator, which, somewhat surprisingly, does not depend on $z$ (a particular property of Gaussian conditioning):
$$
Q_{a}^{z}
=
Q_{a} - Q_{a}A^{\top} (Q_{\eta} + AQ_{a}A^{\top})^{-1} A Q_{a}.
$$
In other words, $a|z \sim N(\bar{a}^{z},Q_{a}^{z})$ is again a normal distribution.
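A companion to the numpy sketch above, with the same caveats (the function name is mine):

```python
import numpy as np

def conditional_cov(Q_a, A, Q_eta):
    """Conditional covariance Q_a^z of a given z = A a + eta; note that z never enters."""
    S = Q_eta + A @ Q_a @ A.T
    return Q_a - Q_a @ A.T @ np.linalg.solve(S, A @ Q_a)
```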