I was thinking again about the division of vectors, and I came up with an equivalent definition and an alternative notation that might be helpful in some situations. The original definition of vector division I made in the previous post
was pretty much inspired by the concept of dot product and vector projection. As one can easily derive, the projection of a vector y onto a vector x is

pr(y) = ((x·y)/(x·x))·x
Today the game starts by rewriting the above expression into matrix form (remember that vectors are column matrices):

pr(y) = ((xᵀ·y)/(xᵀ·x))·x
The matrix multiplication in the numerator is a 1×n times n×1 matrix multiplication, therefore the result is a 1×1 matrix, i.e., a scalar, the familiar dot product. Now, there is a nice juggling that we can apply to completely reshape the projection formula – we move the x up to the numerator, and drop the y down:

pr(y) = ((x·xᵀ)/(xᵀ·x))·y
Note that now the numerator implies an n×1 times 1×n matrix multiplication, therefore the result is an n×n square matrix! Let's call

P = (x·xᵀ)/(xᵀ·x)

so that pr(y) = P·y.
We can now compute the projection of the vector y onto x by a regular transformation of a vector by a matrix. Doing so is more expensive than applying the direct dot product version we found in the first place, but it can sometimes be more elegant to work with the matrix form.
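The equivalence of the two forms can be checked numerically. Here is a minimal sketch (the function names pr_dot and phase_matrix are mine, and the example vectors are arbitrary):

```python
import numpy as np

def pr_dot(y, x):
    # classic form: pr(y) = ((x.y)/(x.x)) * x
    return (np.dot(x, y) / np.dot(x, x)) * x

def phase_matrix(x):
    # matrix form: P = (x x^T) / (x^T x), an n x n matrix
    return np.outer(x, x) / np.dot(x, x)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.5])

P = phase_matrix(x)
print(np.allclose(P @ y, pr_dot(y, x)))  # the two forms agree: True
```

Note that P depends only on x, so projecting many different vectors onto the same x amortizes the cost of building the matrix.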
Note that the matrix P is square and symmetric, since if x = (a,b,c), then

                       | a²  ab  ac |
P = 1/(a²+b²+c²)  ·    | ab  b²  bc |
                       | ac  bc  c² |
Also, the determinant |P| = 0 as can easily be checked and intuitively deduced (the mapping projects many vectors to one, and so it’s not an invertible transformation).
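Both properties are easy to check numerically; a quick sketch (x is an arbitrary choice of mine):

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0])
P = np.outer(x, x) / np.dot(x, x)

symmetric = np.allclose(P, P.T)           # P equals its transpose
singular = abs(np.linalg.det(P)) < 1e-12  # rank-1 matrix, so |P| = 0
print(symmetric, singular)                # True True
```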
The “antiprojection” of y onto x, or the perpendicular component of y with respect to x, is computed again as ap(y) = y – pr(y):
ap(y) = y − P·y = (I − P)·y

which means that ap(y) = Q·y, or in other words, the antiprojection is also a linear transformation, with

Q = I − P
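A small sketch of the decomposition through Q = I − P, checking that the two components reconstruct y and that the antiprojection is indeed perpendicular to x (example vectors are mine):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, -1.0])

P = np.outer(x, x) / np.dot(x, x)
Q = np.eye(len(x)) - P

reconstructs = np.allclose(P @ y + Q @ y, y)   # y = pr(y) + ap(y)
perpendicular = abs(np.dot(x, Q @ y)) < 1e-12  # ap(y) is perpendicular to x
print(reconstructs, perpendicular)             # True True
```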
I would like to give the “Phase Matrix” name to P and the “Quadrature Matrix” name to Q, with your permission. Of course, the projection of y will be parallel to x and its antiprojection perpendicular to it: x^(P·y) = 0 and x·(Q·y) = 0.
Therefore, if we decide again to allow the division of parallel vectors (which gives a real scalar as result) and the division of perfectly perpendicular vectors (which gives an imaginary scalar as result) by means of y/x = p + i·q, then p and q are the scalars defined by

P·y = p·x  and  |Q·y| = q·|x|
By comparison to the original formulation, we have p = (x·y)/|x|² and q = |x^y|/|x|², but now we can solve the problem by matrix manipulations instead (even though I find it more cumbersome).
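To convince ourselves that the matrix route and the original dot/cross-product route give the same p and q, here is a sketch for n = 3 (variable names and example vectors are my own):

```python
import numpy as np

x = np.array([1.0, 0.0, 2.0])
y = np.array([2.0, 3.0, 1.0])

# original formulation: p = (x.y)/|x|^2, q = |x^y|/|x|^2
p_direct = np.dot(x, y) / np.dot(x, x)
q_direct = np.linalg.norm(np.cross(x, y)) / np.dot(x, x)

# matrix formulation: P.y = p*x and |Q.y| = q*|x|
P = np.outer(x, x) / np.dot(x, x)
Q = np.eye(3) - P
p_matrix = (P @ y)[0] / x[0]  # any nonzero component of x works here
q_matrix = np.linalg.norm(Q @ y) / np.linalg.norm(x)

print(np.isclose(p_direct, p_matrix), np.isclose(q_direct, q_matrix))
```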