(Following is a major revision based on answers I received to a related question.)
Summary
- For i.i.d. Normal random variables, $p_{b,c}$ is monotone increasing in $c$, with $\lim\limits_{c\to\infty}p_{b,c}=1$ (proof below).
- For i.i.d. non-Normal random variables, $\lim\limits_{c\to\infty}p_{b,c}$ may be any value in the interval $\left[{1\over2},1\right]$ (proof below) and numerical integration indicates that $p_{b,c}$ may be either monotone increasing or decreasing in $c$.
Formula for $\lim_{c\to\infty}p_{b,c}$
In the general case of $X,Y$ i.i.d. with c.d.f. $F$ and p.d.f. $f$, not necessarily Normal, and with $b>0$, we have (letting $\bar F=1-F$):
$$\begin{align}
p_{b,c}&:=P(Y+b>X\mid X>c,Y>c)\\[1ex]
&=1-P(Y +b<X\mid X>c, Y>c)\\[1ex]
&=1-{P(Y+b<X, X>c, Y>c)\over \bar F(c)^2}\\[1ex]
&=1-{\int_{c+b}^\infty f(x) \int_c^{x-b} f(y)\,dy\, dx \over \bar F(c)^2}\\[1ex]
&=1-{\int_{c+b}^\infty f(x)\left(F(x-b)-F(c)\right)\, dx \over \bar F(c)^2}\\[1ex]
&=1-{\int_{c+b}^\infty f(x) F(x-b)\,dx-\bar F(c+b)F(c) \over \bar F(c)^2}\tag{1}\\[1ex]
\end{align}$$
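For reference, Eq. (1) is straightforward to evaluate numerically. Below is a minimal sketch (the function names `p_bc` and `p_bc_mc` are mine, assuming NumPy/SciPy are available) that computes $p_{b,c}$ from Eq. (1) by quadrature and cross-checks it by Monte Carlo sampling from the truncated distribution, here for the standard Normal:

```python
# Numerical evaluation of Eq. (1): p_{b,c} = P(Y + b > X | X > c, Y > c),
# illustrated for X, Y i.i.d. standard Normal.
import numpy as np
from scipy import integrate, stats

def p_bc(b, c, dist=stats.norm):
    """p_{b,c} via Eq. (1): 1 - [int_{c+b}^inf f(x)F(x-b)dx - Fbar(c+b)F(c)] / Fbar(c)^2."""
    f, F, Fbar = dist.pdf, dist.cdf, dist.sf  # sf is the survival function 1 - F
    integral, _ = integrate.quad(lambda x: f(x) * F(x - b), c + b, np.inf)
    return 1 - (integral - Fbar(c + b) * F(c)) / Fbar(c) ** 2

def p_bc_mc(b, c, n=10**6, seed=0):
    """Monte Carlo check: estimate P(Y + b > X | X > c, Y > c) directly."""
    rng = np.random.default_rng(seed)
    # sample X, Y from the Normal truncated to (c, inf) by inverse-c.d.f.
    u = rng.uniform(size=(2, n))
    x, y = stats.norm.isf(u * stats.norm.sf(c))
    return np.mean(y + b > x)

print(p_bc(1.0, 0.0), p_bc_mc(1.0, 0.0))  # the two estimates should agree closely
```

The Monte Carlo sampler conditions exactly by mapping uniforms into the upper tail of the c.d.f., so every draw satisfies $X>c$, $Y>c$.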
Therefore, applying the same approach as in @BJM's answer, we have
$$\begin{align}
\lim_{c\to\infty}p_{b,c}&=1-\lim_{c\to\infty}{\int_{c+b}^\infty f(x) F(x-b)\,dx-\bar F(c+b)F(c) \over \bar F(c)^2}\\[1ex]
&=1-\lim_{c\to\infty}{-f(c+b)F(c) - \left[-f(c+b)F(c) + \bar F(c+b)f(c)\right] \over -2\bar F(c)f(c)}\tag{2}\\[1ex]
&=1-{1\over2}\lim_{c\to\infty}{\bar F(c+b) \over \bar F(c)}\tag{3}\\
&=1-{1\over2}\lim_{c\to\infty}{-f(c+b)\over -f(c)}\tag{4}\\[1ex]
\end{align}$$
where (2) is obtained using L'Hospital's rule (both the numerator and the denominator tend to $0$ as $c\to\infty$), differentiating the integral using Leibniz' integral rule, and (4) follows from another application of L'Hospital's rule to the $0/0$ ratio in (3). Thus,
$$\bbox[10px, border:3px solid lightgrey]{\lim\limits_{c\to\infty}p_{b,c}=1-{1\over2}\lim_{c\to\infty}{f(c+b)\over f(c)}}\tag{5}$$
Examples
If $X,Y$ i.i.d. Normal$(\sigma)$, then $$\lim_{c\to\infty}p_{b,c}=1$$ because
$${f(c+b)\over f(c)}={\exp\left(-{1\over2}{\left({c+b\over\sigma}\right)^2}+{1\over2}{\left({c\over\sigma}\right)^2}\right)}=\exp\left({-2bc-b^2\over2\sigma^2}\right)\to0\ \ \text{as $c\to\infty$}.$$
If $X,Y$ i.i.d. Laplace$(\sigma=\sqrt{2}\,\alpha)$, then $$\lim_{c\to\infty}p_{b,c}=1-{1\over2}\exp\left(-{b\over\alpha}\right)$$ because $${f(c+b)\over f(c)}={\exp\left(-{|c+b|\over\alpha}+{|c|\over\alpha}\right)}=\exp\left(-{b\over\alpha}\right)$$
If $X,Y$ i.i.d. Cauchy$(\sigma=\infty)$, then $$\lim_{c\to\infty}p_{b,c}={1\over2}$$ because $${f(c+b)\over f(c)}={(1+(c+b)^2)^{-1}\over (1+c^2)^{-1}}\to1$$
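These three limits can be checked numerically by evaluating Eq. (1) at a large cutoff $c$ (a sketch assuming SciPy; the helper name `p_bc` is mine):

```python
# Check the limiting values from Eq. (5) against direct evaluation of Eq. (1).
import numpy as np
from scipy import integrate, stats

def p_bc(b, c, dist):
    """p_{b,c} via Eq. (1) for an arbitrary continuous distribution."""
    f, F, Fbar = dist.pdf, dist.cdf, dist.sf
    integral, _ = integrate.quad(lambda x: f(x) * F(x - b), c + b, np.inf)
    return 1 - (integral - Fbar(c + b) * F(c)) / Fbar(c) ** 2

b, alpha = 1.0, 1.0
# Normal: the limit is 1 (p is already near 1 at moderate c)
print(p_bc(b, 5.0, stats.norm))
# Laplace(scale alpha): the limit is 1 - exp(-b/alpha)/2 ~ 0.8161
print(p_bc(b, 5.0, stats.laplace(scale=alpha)), 1 - 0.5 * np.exp(-b / alpha))
# Cauchy: the limit is 1/2, approached slowly from above
print(p_bc(b, 50.0, stats.cauchy))
```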
Here are typical plots for these examples, obtained by numerical integration using Eq.(1) (confirmed also by Monte Carlo simulation):
![Normal-Laplace-Cauchy plots of p vs c](https://cdn.statically.io/img/i.sstatic.net/D8e9Lj4E.png)
NB: In general, $p_{b,c}$ can't be less than ${1\over2}$: because the bivariate p.d.f.s are symmetrical in their arguments (i.e. $f(x,y)=f(y,x)$), we have $P(Y>X\mid X>c,Y>c)={1\over2}$, and adding $b>0$ to $Y$ can only increase this probability. Furthermore, the above three examples suffice to show that $\lim_{c\to\infty}p_{b,c}$ may be any value in the interval $\left[{1\over2},1\right]$.
Proof (sketch) that, in the Normal case, $p_{b,c}$ is monotone increasing in $c$
Defining $(\beta,\gamma):=(b/\sigma, c/\sigma)$ and $p(\beta,\gamma):=p_{b,c}$ (which depends on $b$, $c$, and $\sigma$ only through $\beta$ and $\gamma$), we can prove that ${\partial p(\beta,\gamma) \over \partial \gamma}>0$ for all $\gamma$ by taking the derivative of Eq. (1) using Leibniz' rule, which reduces to
$${\partial p(\beta,\gamma)\over\partial\gamma}={\phi(\gamma)\over\bar\Phi(\gamma)^2}\left(\bar\Phi(\gamma+\beta)-{2\,I(\beta,\gamma)\over \bar\Phi(\gamma)}\right)\tag{6}
$$
where $\phi$ and $\Phi$ are the standard Normal p.d.f. and c.d.f., respectively, and
$$I(\beta,\gamma) :=\int_{\gamma+\beta}^\infty\phi(z_1) \int_\gamma^{z_1-\beta}\phi(z_2)\,dz_2\, dz_1 .$$
Therefore, ${\partial p(\beta,\gamma)\over\partial\gamma}>0$ if
$$I(\beta,\gamma) < {1\over2}\,{\bar\Phi(\gamma+\beta)\,\bar\Phi(\gamma)}.\tag{7}
$$
Now (7) is a consequence of the following fact:
If a continuous bivariate distribution has circular symmetry about the origin, and $A$ and $B$ denote the upper and lower halves, respectively, of the "shifted quadrant" $Q(x',y'):=\{(x,y)\in\Bbb R^2: x>x',\ y>y'\}$ bisected by its diagonal through the point $(x',y')$, then
$$x' \lesseqqgtr y'\implies P(A) \lesseqqgtr P(B)$$
or equivalently,
$$x' \lesseqqgtr y'\implies P(A)\lesseqqgtr{1\over2}P(Q)\implies P(B)\gtreqqless{1\over2}P(Q).$$
To apply this, note that the standardized pair $(Z_1,Z_2)$ is i.i.d. standard Normal, so its joint density has circular symmetry about the origin, and that $I(\beta,\gamma)=P(B)$ is the probability of the lower half ($B$) of the shifted quadrant $Q(x'=\gamma+\beta,\,y'=\gamma)$ bisected by its diagonal. Since $x'=\gamma+\beta>y'=\gamma$, we have the desired result:
$$I(\beta,\gamma)=P(B)< {1\over2}P(Q)={1\over2}{\bar\Phi(\gamma+\beta)\,\bar\Phi(\gamma)} .$$
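As a numerical sanity check (not a substitute for the proof), one can evaluate $I(\beta,\gamma)$ by quadrature and confirm both inequality (7) and that Eq. (6) agrees with a finite difference of $p(\beta,\gamma)$. A sketch assuming SciPy; the function names are mine:

```python
# Spot-checks of Eq. (6) and inequality (7) in the standardized Normal case.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def I(beta, gamma):
    """I(beta,gamma), with the inner integral taken in closed form: Phi(z1-beta) - Phi(gamma)."""
    val, _ = integrate.quad(
        lambda z1: norm.pdf(z1) * (norm.cdf(z1 - beta) - norm.cdf(gamma)),
        gamma + beta, np.inf)
    return val

def p(beta, gamma):
    """p(beta,gamma) via Eq. (1) in standardized units."""
    return 1 - I(beta, gamma) / norm.sf(gamma) ** 2

def dp_dgamma(beta, gamma):
    """Right-hand side of Eq. (6)."""
    return (norm.pdf(gamma) / norm.sf(gamma) ** 2) * (
        norm.sf(gamma + beta) - 2 * I(beta, gamma) / norm.sf(gamma))

beta, gamma = 1.0, 1.0
# inequality (7): I(beta,gamma) should be strictly below the bound
print(I(beta, gamma), 0.5 * norm.sf(gamma + beta) * norm.sf(gamma))
# Eq. (6) vs. a central finite difference of p: both positive, nearly equal
h = 1e-5
print(dp_dgamma(beta, gamma), (p(beta, gamma + h) - p(beta, gamma - h)) / (2 * h))
```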