121
$\begingroup$

The dot product of vectors $\mathbf{a}$ and $\mathbf{b}$ is defined as: $$\mathbf{a} \cdot \mathbf{b} =\sum_{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$

What about the following quantity? $$\mathbf{a} \star \mathbf{b} = \prod_{i=1}^{n} (a_{i} + b_{i}) = (a_{1} +b_{1})\,(a_{2}+b_{2})\cdots \,(a_{n}+b_{n})$$

Does it have a name?

"Dot sum" seems largely inappropriate. Come to think of it, I find it interesting that the dot product is named as such, given that it is, after all, a "sum of products" (although I am aware that properties of $\mathbf{a} \cdot{} \mathbf{b}$, in particular distributivity, make it a meaningful name).

$\mathbf{a} \star \mathbf{b}$ is commutative and has the following property:

$\mathbf{a} \star (\mathbf{b} + \mathbf{c}) = \mathbf{b} \star (\mathbf{a} + \mathbf{c}) = \mathbf{c} \star (\mathbf{a} + \mathbf{b})$
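For concreteness, here is a quick numerical check of the definition and the property above (a Python sketch; the helper names `star` and `vadd` are just for illustration):

```python
from math import prod  # Python 3.8+

def star(a, b):
    """a ⋆ b: the product over i of (a_i + b_i)."""
    return prod(x + y for x, y in zip(a, b))

def vadd(u, v):
    """Componentwise vector sum."""
    return tuple(x + y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 9)

# a ⋆ (b + c) = b ⋆ (a + c) = c ⋆ (a + b); each equals prod(a + b + c) = 12 * 15 * 18
assert star(a, vadd(b, c)) == star(b, vadd(a, c)) == star(c, vadd(a, b)) == 3240
```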

$\endgroup$
  • 34
    $\begingroup$ I'm not aware of this operation having a name. It would likely only have a name if it computes something useful or has some nice properties. $\endgroup$ Commented Jan 17, 2018 at 1:38
  • 35
    $\begingroup$ $$\mathbf a\star\mathbf b=\det({\rm diag}(\mathbf a)+{\rm diag}(\mathbf b)).$$ $\endgroup$
    – user856
    Commented Jan 17, 2018 at 3:04
  • 3
    $\begingroup$ The inner product between two matrices is $\operatorname{tr}(A^TB)$. In a similar way, the inverted product could generalize to $\det(A+B)$. $\endgroup$
    – wlad
    Commented Jan 18, 2018 at 15:53
  • 2
    $\begingroup$ I would argue that what you've written is not the definition of the dot product, but rather a convenient formula that works in Cartesian coordinates. The definition is that $a \cdot b = |a||b|\cos(\theta)$, where $\theta$ is the angle between vectors. This definition is independent of the coordinate system. $\endgroup$
    – Brady Gilg
    Commented Jan 21, 2018 at 7:37
  • 3
    $\begingroup$ @BradyGilg No, it is not. Because the notion of angle relies on a coordinate system. Actually, IMHO the definition of the dot product is $(x,y)_{\boldsymbol I}$, where $\boldsymbol I$ is a very specific positive definite matrix, namely the identity matrix. $\endgroup$
    – yo'
    Commented Jan 21, 2018 at 19:08

12 Answers

65
$\begingroup$

Too long for a comment, but I'll list some properties below, in the hope that some idea comes up; a quick numerical spot-check follows the list.

  • ${\bf a}\star {\bf b}={\bf b}\star {\bf a}$;
  • $(c{\bf a})\star (c {\bf b})=c^n ({\bf a}\star {\bf b})$;
  • $({\bf a+b})\star {\bf c} = ({\bf a+c})\star {\bf b} = ({\bf b+c})\star {\bf a}$;
  • ${\bf a}\star {\bf a} = 2^n a_1\cdots a_n$;
  • ${\bf a}\star {\bf 0} = a_1\cdots a_n$;
  • $(c{\bf a})\star {\bf b} = c^n ({\bf a}\star ({\bf b}/c))$;
  • ${\bf a}\star (-{\bf a}) = 0$;
  • ${\bf 1}\star {\bf 0} = 1$, where ${\bf 1} = (1,\ldots,1)$;
  • $\sigma({\bf a}) \star \sigma({\bf b}) = {\bf a}\star {\bf b}$, where $\sigma \in S_n$ acts as $\sigma(a_1,\ldots,a_n) \doteq (a_{\sigma(1)},\ldots,a_{\sigma(n)})$.
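
A numerical spot-check of a few of these properties (a minimal Python sketch; the names `star` and `rand_vec` are ours):

```python
import math
import random

n = 4
star = lambda a, b: math.prod(x + y for x, y in zip(a, b))
rand_vec = lambda: [random.uniform(-2, 2) for _ in range(n)]

a, b = rand_vec(), rand_vec()
c = 3.0

assert math.isclose(star(a, b), star(b, a))                      # commutativity
assert math.isclose(star([c * x for x in a], [c * y for y in b]),
                    c**n * star(a, b))                           # (ca) ⋆ (cb) = c^n (a ⋆ b)
assert math.isclose(star(a, a), 2**n * math.prod(a))             # a ⋆ a = 2^n a_1⋯a_n
assert star(a, [-x for x in a]) == 0                             # a ⋆ (−a) = 0
assert math.isclose(star(a[::-1], b[::-1]), star(a, b))          # permutation invariance
```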
$\endgroup$
  • 6
    $\begingroup$ Say... could we try to characterize $\star$ as the unique operation satisfying such and such? Seems tough. $\star$ being highly non-linear also makes our lives harder. (fun fact: just achieved 40k rep with this, hooray) $\endgroup$
    – Ivo Terek
    Commented Jan 17, 2018 at 2:02
  • $\begingroup$ The property discussed in dezdichado's answer is the one of interest in my case, and I believe it makes $\mathbf{a} \star \mathbf{b}$ somewhat unique: $\mathbf{a} \star \mathbf{b} = 0$ iff $\exists i: a_i = -b_i$. $\endgroup$
    – doc
    Commented Jan 17, 2018 at 2:07
  • 1
    $\begingroup$ Observation: dimensionally, this operator produces a scalar of dimension $[\text{length}]^n$ (in $n$ dimensions) out of two vectors. Also, it's not invariant under rotation: $\mathbf{i}\star\mathbf{j}=1$, $-\mathbf{i}\star\mathbf{j}=-1$ $\endgroup$ Commented Jan 17, 2018 at 13:20
  • $\begingroup$ Maybe adding parentheses is better for clarity, no? ${\bf i}\star {\bf j} =1$, and ${\bf j}\star (-{\bf i}) = -1$. $\endgroup$
    – Ivo Terek
    Commented Jan 17, 2018 at 13:29
  • 2
    $\begingroup$ Almost every identity is immediate if you take Nat's suggestion and replace $a \star b$ with $\text{prod}(a+b)$. These facts are derivable from facts about prod: (1) $\text{prod}(c\vec{v}) = c^n \text{prod}(\vec{v})$; (2) $\text{prod}(\vec{0}) = 0$; (3) $\text{prod}(\vec{1}) = 1$; (4) $\text{prod}(\sigma(\vec{v})) = \text{prod}(\vec{v})$. $\endgroup$ Commented Jan 18, 2018 at 8:08
58
$\begingroup$

I don't know if it has a particular name, but it is essentially a peculiar type of convolution. Note that $$ \prod_{i}(a_{i} + b_{i}) = \sum_{X \subseteq [n]} \left( \prod_{i \in X} a_{i} \right) \left( \prod_{i \in X^{c}} b_{i} \right), $$ where $X^{c} = [n] \setminus X$ and $[n] = \{1, 2, \dots, n\}$. In other words, if we define $f_{a}, f_{b}$ via $$ f_{a}(X) = \prod_{i \in X}a_{i}, $$ then $$ a \star b = (f_{a} \ast f_{b})([n]) $$ where $\ast$ denotes the convolution product $$ (f \ast g)(Y) = \sum_{X \subseteq Y} f(X)g(Y \setminus X). $$ To learn more about this, I would recommend reading about multiplicative functions and Moebius inversion in number theory. I don't know if there is a general theory concerning this, but the notion of convolution comes up in many contexts (see, for instance, the Wikipedia articles on convolution in general and on its role in number theory).
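
A minimal numeric check of the subset expansion above (plain Python, illustrative only; the empty product is taken to be $1$, which `math.prod` gives by default):

```python
from itertools import combinations
from math import prod, isclose

a = [1.5, -2.0, 0.5]
b = [3.0, 1.0, -1.0]
n = len(a)

lhs = prod(ai + bi for ai, bi in zip(a, b))

# Sum over all subsets X of [n]: (prod of a over X) times (prod of b over the complement)
rhs = sum(
    prod(a[i] for i in X) * prod(b[i] for i in range(n) if i not in X)
    for k in range(n + 1)
    for X in combinations(range(n), k)
)

assert isclose(lhs, rhs)
```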

Edit: For what it's worth, the operation is not a vector operation in the linear-algebraic sense. That is, it is not preserved under change of basis. In fact, it is not even preserved under orthogonal change of basis (a.k.a. rotation). For example, consider $a = (3,4) \in \mathbb{R}^{2}$. Note that $a \star a = 48$. Now apply the proper rotation $T$ defined by $T(a) = (5, 0)$. Then we see $T(a) \star T(a) = 0$.
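
The rotation example can be reproduced with numpy (a sketch; the explicit rotation matrix is ours):

```python
import numpy as np

star = lambda u, v: float(np.prod(u + v))

a = np.array([3.0, 4.0])
theta = -np.arctan2(4.0, 3.0)          # angle that rotates (3, 4) onto the x-axis
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(star(a, a))          # 48.0
print(star(T @ a, T @ a))  # ≈ 0.0: T(a) ≈ (5, 0), so one factor of the product vanishes
```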

$\endgroup$
  • 17
    $\begingroup$ Re: your last edit, one thing that might be interesting to look at is whether there is some class of transformations under which this does remain unchanged (like orthogonal transformations for the regular dot product). Such a class of transformations, if it exists, might help give some insight into its significance. $\endgroup$
    – Carmeister
    Commented Jan 17, 2018 at 6:34
  • $\begingroup$ I don't remember downvoting this answer, but apparently I somehow did by accident. Can't undo. Very strange... anyways, it's a good answer. $\endgroup$
    – shalop
    Commented Jan 19, 2018 at 5:12
  • 1
    $\begingroup$ Ask for an edit so you can undo it. $\endgroup$
    – user370967
    Commented Jan 19, 2018 at 12:39
34
$\begingroup$

I'd write$$ \mathbf{a} \star \mathbf{b} = \prod_{i=1}^{n} \left(a_{i} + b_{i}\right)=(a_{1} +b_{1})\,(a_{2}+b_{2})\cdots \,(a_{n}+b_{n}) $$up as$$ \mathrm{prod}\left(\mathbf{a}+\mathbf{b}\right), $$where $\mathrm{prod}\left(\right)$ is the array product function,$$ \mathrm{prod}\left(\mathbf{x}\right)~{\equiv}~\prod_{{\forall}\text{indices}~i}{x_i}. $$

As discussed in @6005's answer, the nice properties of this operation follow from the simplicity of this construction.


Alternatives: "Log-norm" or "product-norm"

There are the $p$-norms,$$ {\left|\left|\mathbf{x}\right|\right|}_p~{\equiv}~{\left(\sum_{{\forall}i}{\left|{x_i}\right|}^p\right)}^{p^{-1}}, $$which can be described more generally as follows:

  1. Some element-wise transform, e.g. ${\left(\right)}^{p}$, is applied.

  2. The transformed elements are summed.

  3. The reverse transform, e.g. ${\left(\right)}^{p^{-1}}$, is applied.

An array product can be written in this form where the element-wise transform is $\ln{\left(\right)}$ and its reverse is $\exp{\left(\right)}$, i.e.$$ {\left|\left|\mathbf{x}\right|\right|}_{\text{log}}~{\equiv}~\exp{\left(\sum_{{\forall}i}{\ln{{x_i}}}\right)}. $$ So, this might be written up as$$ {\left|\left|\mathbf{a}+\mathbf{b}\right|\right|}_{\text{log}}, $$which might be called the "log-norm" by analogy to $p$-norms.
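
In code, the "log-norm" is just prod in disguise. A quick numpy sketch (valid only when the entries of $\mathbf{a}+\mathbf{b}$ are positive, since the logarithm is applied elementwise):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5])
b = np.array([0.5, 3.0, 1.5])

prod_form = np.prod(a + b)
log_form = np.exp(np.sum(np.log(a + b)))   # exp of the summed elementwise logs

assert np.isclose(prod_form, log_form)     # both equal 1.5 * 5.0 * 2.0 = 15.0
```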

This occurs in product topology, so we might call it the "product norm".


Suggestion: Try to work in the log-transformed domain

From a practical perspective, I'd try to work in the log-transformed domain when possible. I mean,$$\mathrm{prod}\left(\mathbf{a}+\mathbf{b}\right)$$is a nice, clean expression, but$$\mathbf{a}+\mathbf{b}$$is really clean.

It'd be easier to describe exactly how to do this if your problem domain were known, but an abstract gist would be like:

  1. Currently, you're basically adding vectors up and multiplying out the sum's elements.

  2. The cleanness of this system is obstructed by its formulation as a web of additions and multiplications; either one or the other would be far more systematic.

  3. You can convert one into the other using exponential/logarithmic transforms.

  4. So, find your inputs/outputs, and $\exp{}$/$\ln{}$ 'em.

For precedents, this sort of technique is widely used in specific cases like Laplace transforms and Fourier transforms. So, it seems like you should basically try to create your own log-scale transform domain appropriate to the problem.

$\endgroup$
  • 2
    $\begingroup$ A funny construction, if we allow element-wise operations, would be $\exp{\left(\mathbf{1}{\cdot}\ln{\left(\mathbf{a}+\mathbf{b}\right)}\right)}$, where $\ln{\left(\right)}$ is an element-wise natural log and $\mathbf{1}$ is a vector of $1$'s. It works because summing the logs, then transforming back to the original domain, effectively multiplies them all. $\endgroup$
    – Nat
    Commented Jan 17, 2018 at 7:40
  • 3
    $\begingroup$ Excellent, @Nat -- I was waiting for someone to express it in a more natural way, and $\text{prod}(\vec{a} + \vec{b})$ is exactly the right way to do so. All of the identities in Ivo Terek's top answer are then derivable directly from facts about prod -- (1) $\text{prod}(c\vec{v}) = c^n \text{prod}(\vec{v})$; (2) $\text{prod}(\vec{0}) = 0$; (3) $\text{prod}(\vec{1}) = 1$; (4) $\text{prod}(\sigma(\vec{v})) = \text{prod}(\vec{v})$. $\endgroup$ Commented Jan 18, 2018 at 8:10
  • 4
    $\begingroup$ So basically, the appearance of it having nice properties is simply because it results from first applying a very nice operation -- vector addition -- and then applying a unary function with a few other nice properties. It inherits commutativity and associativity from vector addition. $\endgroup$ Commented Jan 18, 2018 at 8:13
23
$\begingroup$

Many operations are better expressed as the combination of several simpler operations. For instance, if I were to define $$ \vec{u} \oplus \vec{v} := \frac{\vec{u} + \vec{v}}{3}, \tag{1} $$ this may have some nice properties -- such as $\vec{u} \oplus \vec{v} = \vec{v} \oplus \vec{u}$ (commutativity), $(\vec{u} \oplus \vec{v}) \oplus (\vec{w} \oplus \vec{x}) = (\vec{u} \oplus \vec{w}) \oplus (\vec{v} \oplus \vec{x})$, and so on.

However, in most contexts, it would be better to define only addition and the scaling of a vector by $\frac13$. The operation $\oplus$ is a derived concept and does not need its own name. Its properties are best expressed as consequences of the properties of $+$ and of scaling by $\frac13$.

Nat's answer expresses your operation, $\star$, as the composition of two simpler operations: $$ \vec{a} \star \vec{b} = \mathrm{prod}\left(\vec{a}+\vec{b}\right), $$ where $\mathrm{prod}: \mathbb{R}^n \to \mathbb{R}$ is the product of a vector's coordinates. So far, every property I have seen on this page about $\star$ is derived already from properties of $+$ and properties of prod. In particular, absent any algebraic expression that seems nicer to write in terms of $\star$, a working mathematician will always prefer to express it in terms of $+$ and $\mathrm{prod}$.

Now, it may be that you find an expression or a context where $\star$ is the right way to express things (it would have to be something very different from usual linear algebra). If you do, your operation is useful. But I think $\star$ obscures rather than illuminates so far, while using $+$ and $\mathrm{prod}$ illuminates.

$\endgroup$
  • 3
    $\begingroup$ I think this is spot-on, but could benefit from contrasting this with the regular dot-product which doesn't have a simple decomposition. $\endgroup$ Commented Jan 19, 2018 at 4:25
  • 1
    $\begingroup$ That is a strange example of an operation that is better expressed as a combination of other operations: The average of two n-tuples (and in general the affine combination of k n-tuples) is a very important operation. In a "position" space, adding positions is meaningless, but affine combinations make sense: concretely, adding "location of Earth" to "location of Sol" is meaningless, but the average of the "location of Earth" and "location of Sol" is meaningful. In a space of positions, expressing average as vector operations is awkward (possible I admit, just awkward) $\endgroup$
    – Yakk
    Commented Jan 19, 2018 at 15:02
  • $\begingroup$ @Yakk Your point is well taken, and I agree the example is a bit strange for that reason. Nevertheless, I hope by using the qualifier "in most contexts" I illustrate the broader point, which is that depending on the context, some notation or concept may either be useful or it may be best thought of as derived. I also think that in your example of locations in actual space, addition of locations is not well-defined -- so that's not the same thing as n-tuples of numbers. In that context, average / affine combination is indeed a more fundamental concept. $\endgroup$ Commented Jan 19, 2018 at 22:10
22
$\begingroup$

One obvious problem seems to be that it doesn't behave well under a change of basis. Consider $\vec{a}=\vec{b}=\left(1,0\right)$. Then $\vec a\star \vec b=0$, but after a change to another (still orthonormal) basis, we could have $\vec{a}=\vec{b}=\left(\frac{1}{\sqrt2},\frac{1}{\sqrt{2}}\right)$ and $\vec a\star \vec b=2$.

You can work out the general expression for how the "star product" transforms, but this example already shows that a well-behaved transformation changes the result from zero to nonzero. In a $d$-dimensional space, you could, e.g., freely pick $d-1$ vectors $\vec v_a$ and choose a basis in which all of their last components vanish. Then all products $\vec v_a\star \vec v_b=0$.

Hence, the star product seems to be an operation on $d$-tuples of numbers, but not on vectors.

UPDATE

We can turn this into a proper vector expression if we include some extra structure: assume the vector space is equipped with a scalar product and fix an orthonormal basis $\vec e_{(i)}$. In this basis, define the product as you did. Then the obvious generalisation is $$\vec a\star\vec b= \prod_i\left<\vec a+\vec b,\vec e_{(i)}\right>\,.$$ Geometrically, this measures the (signed) $d$-dimensional volume of the box (hyperrectangle) with $\vec a+\vec b$ as its space diagonal. The catch, of course, is that the volume depends on the orthonormal set $\left\{\vec e_{(i)}\right\}$, so in any other basis the expression will look rather more complicated.
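
A small numpy sketch of this basis dependence (names are illustrative): the same pair of vectors yields different "volumes" with respect to two different orthonormal bases.

```python
import numpy as np

def star_in_basis(a, b, E):
    """prod_i <a + b, e_(i)>, where the rows of E form an orthonormal basis."""
    return float(np.prod(E @ (a + b)))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

E1 = np.eye(2)                                          # standard basis
E2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # rotated orthonormal basis

print(star_in_basis(a, b, E1))  # 1.0
print(star_in_basis(a, b, E2))  # 0.0: a + b = (1, 1) is orthogonal to E2's second basis vector
```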

Of course, as has been mentioned by Nat and 6005, this would be more easily framed in terms of the expression on the rhs applied to the sum of $\vec a$ and $\vec b$. (You may call that $\text{prod}$, but in general it is not just a product of components.) The $\star$ operation itself does not seem to be very 'producty' as far as $\vec a$ and $\vec b$ are concerned.

$\endgroup$
  • 2
    $\begingroup$ The usual dot product can also change dramatically after changing a basis. $\endgroup$
    – lisyarus
    Commented Jan 17, 2018 at 11:23
  • 4
    $\begingroup$ @lisyarus: Well, depends on what you mean -- the functional form may change, but for orthonormal bases, the component expression is $a_i b_i$. The value of the dot product, on the other hand, does not change under any change of basis. $\endgroup$
    – Toffomat
    Commented Jan 17, 2018 at 11:33
  • 2
    $\begingroup$ I mean the formula for the dot product: it is that simple only in an orthonormal basis. My point is that it is not invariant under arbitrary basis change. Probably, for the $\star$-operation some class of bases exists where the operation preserves this simple form. $\endgroup$
    – lisyarus
    Commented Jan 17, 2018 at 12:00
  • 2
    $\begingroup$ @lisyarus: The OP hasn't even singled out a special basis for the definition. Also, I don't think there can be a special subset of bases since the actual result changes. In particular, it can be zero or non-zero, so that is not even the determinant of the transformation. $\endgroup$
    – Toffomat
    Commented Jan 17, 2018 at 14:20
  • 3
    $\begingroup$ @Toffomat: Of course the very definition of an orthonormal basis already requires the dot product. Therefore the observation that the dot product takes a special form in orthonormal bases is somewhat circular. $\endgroup$
    – celtschk
    Commented Jan 17, 2018 at 21:20
18
$\begingroup$

I have never seen this operation so far. However I am having fun exploring its properties and I would like to contribute with some findings.

( i ) It is partially generalizable, up to a choice of basis (as @Ivo pointed out in the comments), to an $n$-dimensional vector space $V$ over a field $K$ as follows: $$ \star : V \times V \to K : (a, b) \mapsto \prod_{i=1}^{n} \left( a_{i} + b_{i} \right) $$ and all of the properties listed by @Ivo still hold.

( ii ) Moreover, the previous item suggests we can study how $\star$ interacts with complex-number operations, such as complex conjugation: $$ \overline{a \star b} = \overline{a} \star \overline{b} $$ $$ \overline{a} \star b = \overline{\left( a \star \overline{b} \right)} $$

( iii ) It has a nice combinatorial attribute.

For $n = 2$ $$ a \star b = a_1 a_2 + b_1 a_2 + a_1 b_2 + b_1 b_2 $$

For $n = 3$ $$ a \star b = a_1 a_2 a_3 + b_1 a_2 a_3 + a_1 b_2 a_3 + a_1 a_2 b_3 + b_1 b_2 a_3 + b_1 a_2 b_3 + a_1 b_2 b_3 + b_1 b_2 b_3 $$ And so on. (This is formalized very well in @JacobMaibach's answer.)

What is also interesting to note here is that $a \star b$ is a sum of $2^n$ monomials of degree $n$, one for each subset of indices, and it inherits its properties from this expansion.

( iv ) The property that $ a \star b = 0 \iff \exists i \in \mathbb{N}_{\le n} : a_i + b_i = 0$ is not sufficient to ensure uniqueness. To see this, simply consider $k \in K\setminus \{0\}$ and $ a \star_2 b \doteq k (a \star b)$. Then the same properties will hold up to the constant $k$.

This happens in a similar way to the determinant of a matrix, which is unique up to a constant if we just require it to be a multilinear alternating form, but becomes unique once we "normalize a unit", that is, set $\det(I) = 1$. So maybe it is necessary to do something analogous by choosing a unit, say ${\bf 1} = (1,\cdots,1)$, and requiring $ {\bf 1} \star {\bf 0} = 1$.

( v ) It interacts with its elementwise analogue, the elementwise product (also known as the Hadamard product, denoted by $a \circ b$ here), somewhat as the dot product does. For example, analogously to $ (a \circ b) \cdot 1 = a \cdot b $ we have $ (a+b) \star 0 = a \star b $, and analogously to $ \exp(a) \cdot \exp(b) = \exp(a+b) \cdot 1 $ we have $ \log(a) \star \log(b) = \log(a \circ b) \star 0 $.

Bonus: $ a \star b = 0 \star (a+b) $ and $ (a \circ b) \star (a \circ c) = (0 \star a)\,(b \star c) $.
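
These identities are easy to confirm numerically (a numpy sketch; `*` below is the elementwise/Hadamard product $\circ$, and positive entries are chosen so the elementwise logarithm is defined):

```python
import numpy as np

star = lambda u, v: float(np.prod(u + v))
zero = np.zeros(3)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 4.0, 1.0])
c = np.array([2.0, 0.5, 5.0])

assert np.isclose(star(a + b, zero), star(a, b))                   # (a+b) ⋆ 0 = a ⋆ b
assert np.isclose(star(np.log(a), np.log(b)),
                  star(np.log(a * b), zero))                       # log(a) ⋆ log(b) = log(a∘b) ⋆ 0
assert np.isclose(star(a * b, a * c), star(zero, a) * star(b, c))  # (a∘b) ⋆ (a∘c) = (0 ⋆ a)(b ⋆ c)
```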

$\endgroup$
  • 1
    $\begingroup$ Re: point (iv); I guess a more natural normalization condition would be ${\bf 1} \star {\bf 0} = 1$, where ${\bf 1}$ is a vector of one's (as in Nat's answer) and ${\bf 0}$ is the zero vector. Also, I guess you can't use an arbitrary vector space, just $K^n$ (unless you're willing to fix a basis for $V$ once and for all, but this is bad since the definition wouldn't be "geometric" anymore). $\endgroup$
    – Ivo Terek
    Commented Jan 17, 2018 at 17:54
  • 1
    $\begingroup$ Nice points @IvoTerek! Actually, the normalization $e_1 \star 0 = 1$ does not even work, because in fact $e_1 \star 0 = 0$. It was an oversight of mine. I will update the answer with your comment. $\endgroup$
    – mucciolo
    Commented Jan 17, 2018 at 18:43
9
$\begingroup$

One of the important properties of the dot product is that it characterizes orthogonal vectors.

On the other hand, your product vanishes whenever the two vectors have at least one pair of coordinates that adds up to zero. In my experience, that is a very loose, basically "useless" property.

$\endgroup$
  • 26
    $\begingroup$ I understand your point. As it turns out, this "useless" property is precisely the one of interest in my application (in secure computation). $\endgroup$
    – doc
    Commented Jan 17, 2018 at 1:59
  • 5
    $\begingroup$ I don't understand your point. Would you call scalar multiplication useless, too? $\endgroup$ Commented Jan 17, 2018 at 8:49
  • 1
    $\begingroup$ @LukasBoersma, I probably would, had it not already been named scalar multiplication. $\endgroup$
    – dezdichado
    Commented Jan 17, 2018 at 9:10
  • $\begingroup$ Wouldn't it be true to say that for any pair of vectors, there would be at least one basis for which the "dot sum" is zero? $\endgroup$
    – Dancrumb
    Commented Jan 19, 2018 at 4:39
9
$\begingroup$

Your question is interestingly tied to a generalization of Powers / Rising / Falling Factorials and Stirling Numbers.

Consider first the monic polynomial $$ p_{\,n} \left( {x,{\bf v}} \right) = \prod\limits_{0\, \le \;k\, \le \,n - 1} {\left( {x - v_{\,k} } \right)} $$ then the product that you indicate corresponds to $$ {\bf a} \star {\bf b} = p_{\,n} \left( {0, - {\bf a} - {\bf b}} \right) $$

This tells us that, apart from simple translations and scaling, the properties of this product cannot be put into a general frame, since there is not a simple relation between polynomials with different roots.

Going a step further, consider the definition of the Rising Factorial: $$ x^{\,\overline {\,n\,} } = x\left( {x + 1} \right) \cdots \left( {x + n - 1} \right) = \prod\limits_{0\, \le \;k\, \le \,n - 1} {\left( {x + k} \right)} $$

Then we can imagine to replace the "exponent" with a vector, and define $$ x^{\,{\bf v}_{\,h} } = \left( {\matrix{ 1 \cr {x - v_{\,0} } \cr {\left( {x - v_{\,0} } \right)\left( {x - v_{\,1} } \right)} \cr \vdots \cr {\left( {x - v_{\,0} } \right)\left( {x - v_{\,1} } \right) \cdots \left( {x - v_{\,h - 1} } \right)} \cr } } \right) = \left\| {\;\prod\limits_{0\, \le \;k\, \le \,n - 1} {\left( {x - v_{\,k} } \right)} \quad \left| {\;0 \le n \le h} \right.\;} \right\| $$ which means that a scalar raised to a vector is a vector, whose components are the mentioned polynomials in $x$ of progressively increasing degree. The vectors are indexed from $0$, and note that the last component of $\bf v$ does not come into play.
(these peculiar choices turn out to provide the best parallel with the "usual" definitions).

We have for instance $$ x^{\,{\bf 0}_{\,h} } = \left( {\matrix{ 1 \cr x \cr \vdots \cr {x^{\,h} } \cr } } \right)\quad \quad {\bf v} = \left( {\matrix{ 0 \cr 1 \cr \vdots \cr h \cr } } \right)\;\quad \Rightarrow \quad \;x^{\,{\bf v}_{\,h} } = \left( {\matrix{ {x^{\,\underline {\,0\,} } } \cr {x^{\,\underline {\,1\,} } } \cr \vdots \cr {x^{\,\underline {\,h\,} } } \cr } } \right) $$

If we put $$ p_{\,n} (z,{\bf v}) = \prod\limits_{0\, \le \,\,k\, \le \,n - 1} {\left( {z - v_{\,k} } \right)} = \sum\limits_{0\, \le \,\,k\, \le \,n} {a_{\,n,\,\,k} ({\bf v})\,z^{\,k} } $$ then Vieta's formulas will give us $$ {\bf A}_{\,h} ({\bf v}) = \left\| {\;a_{n,\,m} \;} \right\|_{\,h} = \left\| {\;\left[ {m \le n} \right]\left( { - 1} \right)^{\,n - m} \sum\limits_{0\, \le \,k_{\,0} \, < \,k_{\,1} \, < \, \cdots \, < \,k_{\,n - m - 1} \, \le \,n - 1\;} {\left( {\prod\limits_{0\, \le \,j\, \le \,n - m - 1} {v_{\,k_{\,j} } } } \right)} \;} \right\|_{\,h} $$

The inverse can be demonstrated to be $$ {\bf A}_{\,h} ({\bf v})^{\, - \,{\bf 1}} = \left\| {\;\left[ {m \le n} \right]\sum\limits_{0\, \le \,k_{\,0} \, \le \,k_{\,1} \, \le \, \cdots \, \le \,k_{\,n - m - 1} \, \le \,m\;} {\left( {\prod\limits_{0\, \le \,j\, \le \,n - m - 1} {v_{\,k_{\,j} } } } \right)} \;} \right\|_{\,h} $$ (note the difference between $<$ and $\le$ in the summation range).

So $$ \;x^{\,{\bf v}_{\,h} } = {\bf A}_{\,h} ({\bf v})\,x^{\,{\bf 0}_{\,h} } \quad x^{\,{\bf 0}_{\,h} } = {\bf A}_{\,h} ({\bf v})^{\, - \,{\bf 1}} \,x^{\,{\bf v}_{\,h} } $$ and it can be seen that ${\bf A}_{\,h} ({\bf v})$ and $ {\bf A}_{\,h} ({\bf v})^{\, - \,{\bf 1}} $ correspond respectively to the (signed) Stirling N. of 1st kind and to those of the 2nd kind.

In conclusion, we are allowed to write $$ {\bf a} \star {\bf b}\quad \buildrel {{\rm one}\,{\rm element}\,{\rm of}} \over \longrightarrow \quad 0^{\, - {\bf a} - {\bf b}} $$ then $$ 0^{\, - {\bf a} - {\bf b}} = {\bf A}_{\,h} ( - {\bf a} - {\bf b})\;\,0^{\,{\bf 0}_{\,h} } = {\rm 1st}\,{\rm col}{\rm .}\,{\rm of}\;{\bf A}_{\,h} ( - {\bf a} - {\bf b}) $$ and from here we can establish various relations, like for instance $$ 0^{\, - {\bf a} - {\bf b}} = {\bf A}_{\,h} ( - {\bf a} - {\bf b})\,0^{\,{\bf 0}_{\,h} } = {\bf A}_{\,h} ( - {\bf a} - {\bf b})\,\;{\bf A}_{\,h} ( - {\bf a})^{\, - \,{\bf 1}} \;0^{\, - {\bf a}} $$

But again, the last matrix product does not translate in general into a simple expression.
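
A small sympy check of the basic correspondence ${\bf a} \star {\bf b} = p_n(0, -{\bf a}-{\bf b})$ (a sketch only; the Stirling-matrix statements above would need more machinery to verify):

```python
import sympy as sp

z = sp.symbols('z')
a = [1, 2, 3]
b = [4, 0, -1]

v = [-(ai + bi) for ai, bi in zip(a, b)]        # v = -a - b
p = sp.prod([z - vk for vk in v])               # p_n(z, v): monic, with roots v_k

star = sp.prod([ai + bi for ai, bi in zip(a, b)])
assert sp.simplify(p.subs(z, 0) - star) == 0    # p_n(0, -a - b) = a ⋆ b  (here both equal 20)
```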

$\endgroup$
7
$\begingroup$

To add to the discussion, I will investigate whether there are transformations that leave this star product unchanged. I will just look at the 2D case here. Define the general vectors $u, v$ and matrix $T$ as follows: $$u=\pmatrix{u_1\\u_2} \quad v=\pmatrix{v_1\\v_2}\quad T=\pmatrix{a&b\\c&d}$$ So we will look for a $T$ such that $Tu\star Tv=u\star v$. Finally, define $s_1=u_1+v_1$, $s_2=u_2+v_2$.

\begin{align} u\star v&=(u_1+v_1)(u_2+v_2)=s_1s_2\\\\ Tu\star Tv&=(au_1+bu_2+av_1+bv_2)(cu_1+du_2+cv_1+dv_2)\\ &=acs_1^2+bds_2^2+(ad+bc)s_1s_2 \end{align} From this we can conclude that $ac=bd=0$ and $ad+bc=1$. Over the reals this looks quite restrictive (only the diagonal and antidiagonal solutions noted in the comments survive), but the split complex numbers come to the rescue.

The split complex numbers are defined in a similar way to the regular complex numbers, by $j^2=1$. In the split complex numbers you can define the numbers $e,e^*$ by: $$e=\frac{1+j}{2}\quad e^*=\frac{1-j}{2}$$ These have the useful properties that $e^2=e,\ e^{*2}=e^*$ and $e e^*=0$. From $ac=bd=0$ I propose a solution of the form $$\cases{a=w e\\b=x e^*\\c=y e^*\\d=z e}$$ From $ad+bc=1$ it then follows that $wz=xy=1$. The general matrix is given by $$T=\pmatrix{we&xe^*\\\frac{1}{x}e^*&\frac{1}{w}e}$$ This result is really interesting because the introduction of split complex numbers hints at a connection with hyperbolic geometry.
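
As a sanity check, the constraints also admit purely real solutions (take $a=d=0$, $bc=1$; see also the comments below). A numpy sketch:

```python
import numpy as np

star = lambda u, v: float(np.prod(u + v))

b = 2.5
T = np.array([[0.0,     b],
              [1.0 / b, 0.0]])   # satisfies ac = bd = 0 and ad + bc = 1

u = np.array([1.0, 3.0])
v = np.array([-2.0, 0.5])

assert np.isclose(star(T @ u, T @ v), star(u, v))   # both equal (1-2)(3+0.5) = -3.5
```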

$\endgroup$
  • 3
    $\begingroup$ Note that you don't need split complex numbers: $a=d=0$, $c=1/b$ is a solution, as is $c=b=0$, $d=1/a$. This gives the $e$-component and the $e^*$-components of your $T$ separately. Of course, both do essentially the same: scale one component by a factor and the other one by the inverse; the first one also rearranges the components. In higher dimensions, you will presumably have $\text{diag}\!\left(\lambda_1,\dotsc,\lambda_d\right)$ with $\prod_i\lambda_i=1$ and its permutations as the only invariances. $\endgroup$
    – Toffomat
    Commented Jan 18, 2018 at 17:39
  • $\begingroup$ @Toffomat somehow completely overlooked those solutions. Thanks for pointing that out. $\endgroup$ Commented Jan 19, 2018 at 17:04
  • 1
    $\begingroup$ @Toffomat Your affirmation is correct. It might be a bit of an overpowered tool for this, but with the theory of reductive groups at hand, it is a matter of seconds to see that the group preserving the star product cannot be bigger than the normaliser of the torus you described. $\endgroup$
  • $\begingroup$ @MichaelLeBarbierGrünewald: Thanks; I don't know much about reductive groups, but it's interesting to know! $\endgroup$
    – Toffomat
    Commented Jan 22, 2018 at 10:02
6
$\begingroup$

I wouldn't see this operation as a first-class citizen, since it can be expressed as the product of the components of the sum, say

$$a\star b=\star(a+b)$$ where the unary $\star$ operator is the more fundamental one.

The latter can be seen as a "homomorphic sum", i.e. the antilogarithm of the sum of logarithms, and it relates to the geometric mean in the same way that ordinary sums relate to the arithmetic mean.
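
A short numeric illustration of this reading (numpy; positive entries assumed, since the logarithm is taken elementwise):

```python
import numpy as np

x = np.array([2.0, 8.0, 4.0])

unary_star = np.prod(x)                                  # the "homomorphic sum"
assert np.isclose(unary_star, np.exp(np.sum(np.log(x))))

# Geometric mean via the same transform, mirroring sums vs. the arithmetic mean:
assert np.isclose(unary_star ** (1 / len(x)), np.exp(np.mean(np.log(x))))  # both 4.0
```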

$\endgroup$
6
$\begingroup$

APL was originally conceived as a new mathematical notation where the inconsistencies and ambiguities of traditional mathematics were harmonised and various concepts were generalised.

For example, ∑ and ∏ were recognised as two instances of a single concept, namely the insertion of a single operator (+ and × in these particular two instances) between the successive terms of a sequence. APL generalised this concept in the form of the higher-order function / which modifies its operand in exactly this manner. Thus, sum and product in APL are written as +/ and ×/, but any dyadic function may be used as /’s operand. E.g. we can express ∃ as ∨/ and ∀ as ∧/.

So too was dot product recognised as a single instance of conjoining two operations (here + and ×) into a new operation. APL generalised this concept in the form of the higher-order function . which conjoins its operands in exactly this manner. Thus, dot product in APL is written as +.×, but any two dyadic functions may be used as .’s operands. E.g. we can express the equivalence of two vectors as ∧.= or indeed your “inverted” dot product as ×.+.
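
A rough Python analogue of APL's `.` combinator (illustrative only; it does not capture full APL semantics such as rank or right-to-left reduction over general arrays):

```python
from functools import reduce
import operator

def dot(f, g):
    """Generalized inner product f.g: reduce with f over the elementwise g."""
    return lambda a, b: reduce(f, map(g, a, b))

dot_product = dot(operator.add, operator.mul)   # APL's +.×
star        = dot(operator.mul, operator.add)   # APL's ×.+ (the operation in question)

a, b = [1, 2, 3], [4, 5, 6]
print(dot_product(a, b))   # 32
print(star(a, b))          # 5 * 7 * 9 = 315
```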

$\endgroup$
1
$\begingroup$

Not a purely mathematical answer, but the equivalent in programming would be zipSumProduct; a sketch in code follows the steps below.

That is, you have two vectors: [1,2,3] and [4,5,6].

  1. zip them together [(1,4), (2,5), (3,6)]

  2. map over the array summing each pair [5,7,9]

  3. reduce the array using the product operator, obtaining 315.
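
A direct Python rendering of these steps (the name `zipSumProduct` is this answer's own coinage, not a standard library function):

```python
from math import prod  # Python 3.8+

def zip_sum_product(a, b):
    pairs = zip(a, b)                   # 1. zip them together
    sums = (x + y for x, y in pairs)    # 2. sum each pair
    return prod(sums)                   # 3. reduce with the product

print(zip_sum_product([1, 2, 3], [4, 5, 6]))   # 315
```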

$\endgroup$
