
It is well known that, via the polarization identity, a norm (which captures the notion of length) uniquely specifies an inner product; equivalently, if two inner products induce the same norm then they are the same inner product.

My question: If the above is true then how is it that we say that the inner product encodes angle? In a fuzzy sense, it doesn't seem to me like "angle" should be determined by the notion of "length" in a given space, and yet a norm implies an inner product -- which is to say a notion of length implies a notion of angle?

Edit: I realize I may (I'm not sure) have to specify that I'm talking about real vector spaces.

  • At the most basic level, it's because the side lengths of a triangle determine its angles in the Euclidean plane. – Commented Jan 28, 2023 at 5:55
  • @QiaochuYuan So it seems you would say that length implying angle is not wrong at all. – EE18, Jan 28, 2023 at 5:56
  • This correspondence only holds for norms satisfying the parallelogram law, so there is built-in information about the "angles". – peek-a-boo, Jan 28, 2023 at 5:59
  • Because we have $\theta = \arccos\left(\frac{a \cdot b}{\lVert a \rVert \lVert b \rVert} \right)$, and we can generalize this by replacing the dot product with an inner product. – Commented Jan 28, 2023 at 5:59
  • Related: math.stackexchange.com/a/4319813/96384 (section "angles"), and links from there. – Commented Jan 30, 2023 at 2:34

3 Answers


Firstly:

yet a norm implies an inner product -- which is to say a notion of length implies a notion of angle?

A norm actually need not specify an inner product. There are norms which do not come from an inner product.

Let's be more specific. For $V$ a vector space over a field $F$, an inner product $\langle \cdot,\cdot \rangle$ and a norm $\| \cdot \|$ on $V$ are functions satisfying certain axioms. An inner product is special in that, given one, it induces a norm: $$ \|x\| := \sqrt{\langle x,x \rangle} $$ However, a norm need not define an inner product. (After all, an inner product takes in two vectors, while a norm takes in just one.) One can show, for instance, that a norm is induced by an inner product if and only if the norm satisfies the parallelogram law (MSE post):

$$2 \|x\|^2 + 2 \|y\|^2 = \|x-y\|^2 + \|x+y\|^2$$

Examples would be the so-called $p$-norms; when $p\ne 2$, they are not induced by any inner product. Recall that we define, for $x := (x_i)_{i=1}^n \in \mathbb{R}^n$,

$$\|x\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}$$

(Note that our familiar Euclidean norm is the $p=2$ norm, and is induced by the dot product.)
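To make this concrete, here is a small numerical check (a sketch in Python; the helpers `p_norm` and `parallelogram_gap` are my own names, not library functions): the $2$-norm satisfies the parallelogram law, while the $1$-norm does not.

```python
import numpy as np

def p_norm(x, p):
    """The p-norm of a vector x: (sum |x_i|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def parallelogram_gap(x, y, p):
    """2||x||^2 + 2||y||^2 - (||x-y||^2 + ||x+y||^2); zero iff the law holds."""
    return (2 * p_norm(x, p)**2 + 2 * p_norm(y, p)**2
            - p_norm(x - y, p)**2 - p_norm(x + y, p)**2)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(parallelogram_gap(x, y, 2))  # 0.0 -- the 2-norm obeys the law
print(parallelogram_gap(x, y, 1))  # -4.0 -- the 1-norm does not
```

For these two unit vectors the $1$-norm gives $\|x-y\|_1 = \|x+y\|_1 = 2$, so the two sides of the law differ by $4$.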


Now onto your question: essentially, how do inner products and angles relate?

In $\mathbb{R}^n$, under the usual scenario (Euclidean distance and norm, with the dot product as inner product), we may define the angle $\theta$ between $x,y \in \mathbb{R}^n$ by

$$\theta = \arccos \left( \frac{\langle x,y \rangle}{\|x\| \, \|y\|} \right)$$

This comes from one way of defining the dot product:

$$\langle x,y \rangle := \|x\| \, \|y\| \cos \theta$$

This $\theta$ matches up with the angle we think of in the ordinary geometric sense. We can see why this $\theta$ arises in the following way.

First, take it as given that we define $$ \langle x,y \rangle := \sum_{i=1}^n x_i y_i $$ One can prove a polarization identity for inner products: $$\langle x,y \rangle = \frac{\|x+y\|^2 - \|x-y\|^2}{4}$$ One also has the law of cosines. In the language of vectors, one has that

$$\|x-y\|^2 = \|x\|^2 + \|y\|^2 - 2 \|x\| \, \|y\| \cos \theta$$

for $\theta$ (in the geometric sense) the angle between $x$ and $y$. But, using that this norm $\| \cdot \|$ is induced by $\langle \cdot,\cdot \rangle$, together with various properties of inner products in general, one has that

$$\|x-y\|^2 = \|x\|^2 + \|y\|^2 - 2 \langle x,y \rangle$$

Equating these two thus yields

$$\langle x,y \rangle = \|x\| \, \|y\| \cos \theta$$
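One can sanity-check this chain of identities numerically. A quick sketch in Python with the Euclidean norm and dot product on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

# Polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4
lhs = np.dot(x, y)
rhs = (np.linalg.norm(x + y)**2 - np.linalg.norm(x - y)**2) / 4

# Vector law of cosines: ||x-y||^2 = ||x||^2 + ||y||^2 - 2<x, y>
law = np.linalg.norm(x)**2 + np.linalg.norm(y)**2 - 2 * np.dot(x, y)

print(abs(lhs - rhs))                       # ~0 (floating-point error)
print(abs(np.linalg.norm(x - y)**2 - law))  # ~0
```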


Of course, looking at an inner product in general, how much do we really know? I tell you that $\langle x,y \rangle = 0.35$; does this tell us anything?

It does tell us one key property of very common interest throughout mathematics -- that the vectors are not orthogonal. Two vectors $x,y$ are orthogonal if and only if $\langle x,y \rangle = 0$.

In the Euclidean-$\mathbb{R}^n$ sense, this amounts to meeting at right angles in the plane the vectors span. Of course, we have long since generalized this notion to other spaces, e.g. spaces of functions, on which the axioms of an inner product can still be met even if the notion of "angle" becomes fuzzy. The generalization is worthwhile because orthogonality makes it very easy to represent elements of a vector space in certain bases (bases whose elements are pairwise orthogonal).

Much of the elegance and applicability of Fourier analysis, for instance, comes from the fact that $\{\sin(kx),\cos(kx)\}_{k=1}^\infty$ (together with the constant function) forms an orthogonal basis of the square-integrable functions under the inner product

$$\langle f,g \rangle_{L^2[-\pi,\pi]} := \int_{-\pi}^\pi f(x) g(x) \, \mathrm{d} x$$

In particular, for integers $m,n \ge 1$,

$$\begin{align*} \int_{-\pi}^\pi \sin(mx) \cos(nx) \, \mathrm{d} x &= 0 \\ \int_{-\pi}^\pi \sin(mx) \sin(nx) \, \mathrm{d} x &= \begin{cases} \pi, & m = n \\ 0 , & \text{otherwise} \end{cases} \\ \int_{-\pi}^\pi \cos(mx) \cos(nx) \, \mathrm{d} x &= \begin{cases} \pi, & m = n \\ 0 , & \text{otherwise} \end{cases} \end{align*}$$
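These orthogonality relations are easy to check numerically. A sketch in Python, approximating the integrals by a Riemann sum on a uniform grid (the helper `l2_inner` is my own name):

```python
import numpy as np

def l2_inner(f, g, n_points=100_000):
    """Approximate the L^2[-pi, pi] inner product by a Riemann sum."""
    t = np.linspace(-np.pi, np.pi, n_points, endpoint=False)
    dt = 2 * np.pi / n_points
    return np.sum(f(t) * g(t)) * dt

print(l2_inner(lambda t: np.sin(2 * t), lambda t: np.cos(3 * t)))  # ~0
print(l2_inner(lambda t: np.sin(2 * t), lambda t: np.sin(2 * t)))  # ~pi
print(l2_inner(lambda t: np.cos(3 * t), lambda t: np.cos(5 * t)))  # ~0
```

A plain Riemann sum suffices here because these integrands are smooth and periodic on $[-\pi,\pi]$, for which uniform-grid sums converge very quickly.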

In fact, "nice enough" functions can be written as an infinite sum of scaled sines and cosines owing to this fact: a Fourier series.

---

Here's a nontechnical response to

In a fuzzy sense, it doesn't seem to me like "angle" should be determined by the notion of "length"

In the Euclidean plane, two triangles are congruent if they have sides of the same length. That congruence clearly implies they have the same angles. The law of cosines quantifies that.

What is actually more surprising is that in non-Euclidean geometry angles determine lengths. If two triangles on the sphere (whose edges are arcs of great circles) have the same angles, then they are congruent; that's spherical geometry. The same is true in the hyperbolic plane for hyperbolic triangles.

Gauss proved that Euclid's parallel postulate --- the one that distinguishes Euclidean from non-Euclidean geometry --- is in fact equivalent to the existence of similar triangles that aren't congruent. See https://www.cut-the-knot.org/triangle/pythpar/SimilarityAndFifth.shtml .

---

The dot product of two vectors $\vec{x}$ and $\vec{y}$ in $\mathbb{R}^2$, both based at the origin, has the geometric interpretation of the product of their norms times the cosine of the angle between them: $$\vec{x}\cdot\vec{y} = \|\vec{x}\|\ \|\vec{y}\| \cos(\theta)$$

So you can recover the angle between the vectors from their norms and their dot product: $$\theta = \cos^{-1}\left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\|\, \|\vec{y}\|}\right)$$ Thinking of the norms as scalar quantities that act only as scaling factors, you can see that the information about the angle between the vectors is contained in the dot product.
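As a quick sketch in Python (using NumPy), recovering a known angle this way:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])  # geometrically, 45 degrees from x

cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta_deg = np.degrees(np.arccos(cos_theta))
print(theta_deg)  # ~45.0
```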


An image and a deeper explanation can be found on this Wikipedia page.
