
I am currently studying Section 3.4.3 of the paper "Error estimates for DeepONets: a deep learning framework in infinite dimensions" by Lanthaler, Mishra, and Karniadakis, where the authors define an operator $ P_N $ related to Fourier projections on the $n$-dimensional torus $ \mathbb{T}^n $. The operator is defined as: $$P_N u = \sum_{|k|_{\infty} \leq N} \hat{u}_k e_k(x),$$

where the $e_k$ are the real Fourier basis functions described in Appendix A of the paper. However, the paper does not state explicitly how the Fourier coefficients $\hat{u}_k$ are computed. I would expect them to be the inner products of $u$ with the $e_k$.

I am also interested in whether there is a reference for the following error bound, which appears below Equation (3.33) in the paper: $$ \lVert P_N u - u \rVert_{L^2(\mathbb{T}^n)} \leq \frac{1}{N^s} \lVert u \rVert_{H^s(\mathbb{T}^n)} $$

where $s$ comes from the assumption that $u \in H^s(\mathbb{T}^n)$.
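(For what it's worth, here is a small numerical sanity check of the bound. This is a hypothetical 1-d illustration, not from the paper, which works on $\mathbb{T}^n$: the function $u$ is synthetic, with prescribed coefficient decay $\hat{u}_k = (1+|k|)^{-p}$, so that $u \in H^s(\mathbb{T})$ exactly for $s < p - 1/2$, and the $L^2$ projection error can be evaluated through Parseval as the tail $\ell^2$ norm of the coefficients.)

```python
import numpy as np

# Hypothetical 1-d check of ||P_N u - u||_{L^2} <~ N^{-s} ||u||_{H^s}.
# We prescribe Fourier coefficients c_k = (1 + |k|)^{-p}.  Then u lies in
# H^s(T) precisely for s < p - 1/2, and by Parseval
#   ||u - P_N u||_{L^2}^2 = sum_{|k| > N} |c_k|^2 ~ N^{-2(p - 1/2)},
# so the projection error should decay like N^{-s} with s = p - 1/2.

p = 2.0                        # decay exponent of the coefficients
K = 4096                       # highest mode kept in the "exact" u
k = np.arange(-K, K + 1)
c = (1.0 + np.abs(k)) ** (-p)  # Fourier coefficients of u

def l2_projection_error(N):
    """||u - P_N u||_{L^2(T)} via Parseval: l^2 norm of the coefficient tail."""
    return np.sqrt(np.sum(c[np.abs(k) > N] ** 2))

# Observed convergence order between N and 2N; it approaches p - 1/2 = 1.5.
for N in (16, 32, 64, 128):
    rate = np.log2(l2_projection_error(N) / l2_projection_error(2 * N))
    print(f"N = {N:4d}   error = {l2_projection_error(N):.3e}   rate = {rate:.2f}")
```

The observed order tends to $p - 1/2$, consistent with the $N^{-s}$ rate for the largest $s$ with $u \in H^s$.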

So, to summarize:

  1. Is $\hat{u}_k$ correctly interpreted as the inner product of $u$ with $e_k$ in this Fourier projection context? If not, what is the correct interpretation?
  2. Is there a reference for the inequality above?
  • Yes for (1). – LSpice, May 7 at 21:54

1 Answer


As pointed out in LSpice's comment on the OP, the answer to 1. is yes, since $\{e_k\ |\ k\in\mathbb{Z}^d\}$ is an orthonormal (topological) basis of $L^2(\mathbb{T}^d)$. As for 2., Equation (3.33) is one of the so-called Bernstein inequalities. These are nowadays probably more widely known in their continuous version (i.e. for the Fourier transform), and for Fourier series they are usually proven for $L^\infty$ norms; see e.g. Theorem 8.2, pp. 49-50 of the book by Y. Katznelson, An Introduction to Harmonic Analysis (third edition, Cambridge University Press, 2002) for the precise version you need (the "reverse Bernstein inequality"), albeit only for $L^\infty$ norms. The proof in the $L^2$ case (which is the relevant one here) is much easier, thanks to the Parseval formula. It can also be inferred e.g. from the Littlewood-Paley dyadic characterization of Sobolev spaces on $\mathbb{T}^d$ (see e.g. Proposition 1.3.1, p. 11 of R. Danchin's lecture notes), but I cannot recall a more precise reference for it right now; I will update the answer when I find one.
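For completeness, here is a sketch of the short $L^2$ argument via Parseval (the exact constant depends on the chosen normalization of the $H^s$ norm): since $|k| \geq |k|_{\infty} > N$ for every mode outside the projection,

$$ \lVert u - P_N u \rVert_{L^2(\mathbb{T}^d)}^2 = \sum_{|k|_{\infty} > N} |\hat{u}_k|^2 \leq \sum_{|k|_{\infty} > N} \frac{|k|^{2s}}{N^{2s}} |\hat{u}_k|^2 \leq \frac{1}{N^{2s}} \lVert u \rVert_{H^s(\mathbb{T}^d)}^2, $$

using that $\lVert u \rVert_{H^s}^2 \geq \sum_k |k|^{2s} |\hat{u}_k|^2$ for the standard (Fourier-side) definition of the $H^s$ norm.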

  • Hello Pedro, thank you for your answer, but I'm still a bit confused. Could you help clarify the relationship between the inequality in the OP and Proposition 1.3.1, p. 11 of R. Danchin's lecture notes that you referred to? They seem to be different results, but I'm not entirely sure. Are they the same result in different contexts, or is there a specific reason why they're linked? I'd appreciate any additional explanation you could provide. – Mohammad A, May 8 at 22:33
  • Sorry for the delay in getting back to you. The Bernstein inequality in the OP essentially follows, in the case of $\mathbb{R}^d$, from the second inequality in Proposition 1.3.1 with $N=2^n$, if we remove the first $n$ terms from the sum on the left-hand side, up to normalization constants. In the case of $\mathbb{T}^d$, the argument is exactly the same, but one replaces the Littlewood-Paley projections $\Delta_q u$ with the Fourier components $\langle e_k,u\rangle e_k=\hat{u}_k e_k$. Prop. 1.3.1 in that case then follows from integration by parts and the Parseval formula. – May 29 at 20:09
