Methane in \(\mathbb{R}^4\)?

Geometry Level 5

Consider five nonzero vectors in \(\mathbb{R}^4\) such that any two of them enclose the same nonzero angle \(\theta\). Find \(\cos\theta\), to two significant digits.

Bonus Question: Find the corresponding angle \(\theta_n\) between \(n+1\) nonzero vectors in \(\mathbb{R}^n\), and find \(\displaystyle \lim_{n\to\infty}\theta_n\).

Note that \(\theta_3\) is the bond angle in the methane molecule.


Image Credit: C.A.G.E


The answer is -0.25.


5 solutions

Abhishek Sinha
Mar 12, 2016

Denote the \(n+1\) non-zero unit vectors in \(\mathbb{R}^n\) by \(\{\mathbf{u}_i,\ i=1,2,\ldots,n+1\}\). Define the center of gravity of these vectors by \[\mathbf{g}=\frac{1}{n+1}\sum_{i=1}^{n+1}\mathbf{u}_i.\] If the common cosine of the angle between the vectors \(\{\mathbf{u}_i\}\) is denoted by \(\cos(\theta_n)\), then for all \(i=1,2,\ldots,n+1\) we have \[\mathbf{u}_i\cdot\mathbf{g}=\frac{1}{n+1}\big(1+n\cos(\theta_n)\big).\] Since the vectors \(\{\mathbf{u}_i,\ i=1,2,\ldots,n+1\}\) live in an \(n\)-dimensional space, they must be linearly dependent. Thus there exist scalars \(\lambda_i\), not all zero, such that \[\sum_{i=1}^{n+1}\lambda_i\mathbf{u}_i=0.\] Taking the dot product of this equation with \(\mathbf{g}\), we conclude that \[\big(1+n\cos(\theta_n)\big)\bigg(\sum_{i=1}^{n+1}\lambda_i\bigg)=0,\] which implies \(\cos(\theta_n)=-\frac{1}{n}\), provided \(\sum_{i=1}^{n+1}\lambda_i\neq 0\), i.e., the vectors are affinely independent, i.e., they enclose a strictly positive volume in \(\mathbb{R}^n\). See my comment below for a proof.

Hence, for \(n=4\), the answer is \(\cos\theta_4=-0.25\), and \(\displaystyle\lim_{n\to\infty}\theta_n=\frac{\pi}{2}\).
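The argument above is easy to sanity-check numerically. The sketch below is my addition (not part of the original solution; it assumes numpy is available): it uses the four standard tetrahedral unit vectors for \(n=3\) and confirms both the common cosine \(-1/3\) and the identity \(\mathbf{u}_i\cdot\mathbf{g}=\frac{1}{n+1}(1+n\cos\theta_n)\), which vanishes at \(\cos\theta_n=-1/n\).

```python
# Numerical sanity check for n = 3 (methane), using the standard
# tetrahedral directions scaled to unit length.
import numpy as np

n = 3
u = np.array([[1, 1, 1],
              [1, -1, -1],
              [-1, 1, -1],
              [-1, -1, 1]]) / np.sqrt(3)

# every pairwise dot product of the unit vectors is the common cosine
cosines = np.array([u[i] @ u[j]
                    for i in range(n + 1) for j in range(i + 1, n + 1)])

# center of gravity of the four unit vectors
g = u.mean(axis=0)
dots_with_g = u @ g   # should all equal (1 + n*cos(theta_n))/(n+1) = 0
```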

Yes, this looks like a solid solution (+1)

Otto Bretscher - 5 years, 3 months ago

Here is the proof of the affine independence of \(\{\mathbf{u}_i\}\). For \(i=1,2,\ldots,n\), define the vectors \(\mathbf{v}_i=\mathbf{u}_i-\mathbf{u}_{n+1}\). Then the \((i,j)\)th entry of the Gramian matrix \(\mathbf{G}\) of the vectors \(\{\mathbf{v}_i\}\) can be calculated to be \(2-2\cos(\theta_n)\) if \(i=j\) and \(1-\cos(\theta_n)\) if \(i\neq j\). Thus we can write \[\mathbf{G}=(1-\cos(\theta_n))(\mathbf{J}+\mathbf{I}),\] where \(\mathbf{J}\) and \(\mathbf{I}\) are the \(n\times n\) all-ones and identity matrices, respectively. Avoiding the trivial case \(\theta_n=0\), as per the problem statement, we see that \(\mathbf{G}\) is full rank (in fact, positive definite), and consequently the vectors \(\{\mathbf{u}_i,\ i=1,2,\ldots,n+1\}\) are affinely independent.

Abhishek Sinha - 5 years, 3 months ago
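For readers who want to see the positive definiteness concretely, here is a small numerical check (my addition, assuming numpy): it builds \(\mathbf{G}=(1-\cos\theta_n)(\mathbf{J}+\mathbf{I})\) for the equiangular cosine \(\cos\theta_n=-1/n\) and confirms that all eigenvalues are strictly positive.

```python
# Check that G = (1 - c)(J + I) is positive definite whenever c < 1,
# by computing its eigenvalues for several dimensions.
import numpy as np

def gramian(n, c):
    """Gramian of the difference vectors v_i = u_i - u_{n+1}."""
    J = np.ones((n, n))   # all-ones matrix, PSD with eigenvalues n and 0
    I = np.eye(n)         # identity, PD
    return (1 - c) * (J + I)

# smallest eigenvalue for n = 2..7 with the equiangular cosine c = -1/n
min_eigs = [np.linalg.eigvalsh(gramian(n, -1 / n)).min() for n in range(2, 8)]
```

The smallest eigenvalue of \(\mathbf{J}+\mathbf{I}\) is 1, so the smallest eigenvalue of \(\mathbf{G}\) is \(1-\cos\theta_n=1+\frac1n>0\).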


How do you know that \(\mathbf{G}\) is positive definite?

Otto Bretscher - 5 years, 3 months ago


The matrix \(\mathbf{G}\) is a sum of a PSD matrix and a PD matrix, and hence it is PD.

Abhishek Sinha - 5 years, 3 months ago


@Abhishek Sinha I suspect that most of our young comrades on Brilliant are not very familiar with these terms; how do you convince them that your \(\mathbf{J}\) is positive semidefinite?

Otto Bretscher - 5 years, 3 months ago


@Otto Bretscher Just compute the quadratic form \(\mathbf{x}^T\mathbf{J}\mathbf{x}=\big(\sum_i x_i\big)^2\geq 0\).

Abhishek Sinha - 5 years, 3 months ago
Arjen Vreugdenhil
Mar 12, 2016

My solution is inductive in the number of dimensions \(n\). It is a more general solution than others have given, because it assumes no more than a sequence of nested inner product spaces of increasing dimension.

Suppose that in \(n\)-dimensional space we have \(n+1\) vectors \(u_0,\dots,u_n\) such that \[\begin{cases} u_i^2 = 1 \\ u_i\cdot u_j = -k \end{cases}\] for all \(0\leq i\neq j\leq n\) and some value \(k=-\cos\theta_n\). (Note: \(u_i\cdot u_j\) is the inner product and \(u_i^2=|u_i|^2=u_i\cdot u_i\) is the corresponding square norm.)

Now we go to \((n+1)\)-dimensional space, which contains our original space as a subspace (preserving its inner product); let \(z\) be a unit vector in the new space orthogonal to the original space. Define \[\begin{cases} v_i=\alpha\,u_i+\beta\,z & 0\leq i\leq n \\ v_{n+1}=-z. \end{cases}\] Here, \(\alpha\) and \(\beta\) are parameters to be determined, in such a way that the \(v_i\) obey the same relations as the \(u_i\) (see above), but with a different value for \(k\). It is sufficient to require that \[\begin{cases} v_i^2=1 \\ v_i\cdot v_{n+1}=v_i\cdot v_j=-k' \end{cases}\] for \(0\leq i\neq j\leq n\).

The first condition translates into \(\alpha^2+\beta^2=1\); the second condition gives \[(\alpha\,u_i+\beta\,z)\cdot(-z)=(\alpha\,u_i+\beta\,z)\cdot(\alpha\,u_j+\beta\,z)\\ -\beta=-\alpha^2 k+\beta^2\\ (1+k)\beta^2+\beta-k=0\\ \beta=-1\ \text{or}\ \frac{k}{1+k}.\] The first solution would make all vectors \(v_i\) equal to \(-z=v_{n+1}\), which all make the same angle of \(0^\circ\) with each other; this is not the solution we are looking for. Thus we go with the second solution and have \[k'=\beta=\frac{k}{1+k};\qquad \alpha=\sqrt{1-\beta^2}=\frac{\sqrt{1+2k}}{1+k}.\]

Now we have a way to "upgrade" an \(n\)-dimensional solution to an \((n+1)\)-dimensional solution, and a way to see how \(k=-\cos\theta_n\) behaves under this transformation.

  • For the base case \(n=1\), the only possible nonzero angle between vectors is \(180^\circ\), with \(k=1\).

  • For \(n=2\) we get \(k=1/2\), corresponding to a \(120^\circ\) angle.

  • For \(n=3\) we get \(k=1/3\), corresponding to a \(109.5^\circ\) angle.

  • It is not difficult to show that in general \(k=1/n\). As \(n\to\infty\), \(k\to 0\), so that the angle approaches \(90^\circ\) from above.

The solution to the problem, for four dimensions, is \(\cos\theta_4=-k=-\tfrac14\), or \(\boxed{-0.25}\).
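The "upgrade" recursion \(k'=\frac{k}{1+k}\) can be iterated directly. This short sketch (my addition, not part of the original solution) starts from the base case \(k=1\) in one dimension, reproduces \(k_n=1/n\), and shows the angle \(\arccos(-k_n)\) approaching \(90^\circ\) from above:

```python
import math

k = 1.0              # n = 1: two opposite vectors, k = -cos(180 deg) = 1
ks = {1: k}
for n in range(2, 101):
    k = k / (1 + k)  # upgrade from dimension n-1 to dimension n
    ks[n] = k

# the corresponding angles theta_n = arccos(-k_n), in degrees
angles = {n: math.degrees(math.acos(-ks[n])) for n in (2, 3, 4)}
```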

Why is this solution "more general" than the previous solutions? In other words, what are you generalizing upon? The previous two solutions hold in any inner-product space of appropriate dimension.

Abhishek Sinha - 5 years, 3 months ago


My solution is coordinate-free. In fact, my "\((n+1)\)-dimensional space" may be any space big enough to contain both the original \(n\)-dimensional space and a non-trivial vector that is orthogonal to it.

I now realize that my algebra relies on the fact that \(\alpha\), \(\beta\) are real numbers, so my solution works for real inner product spaces in general.

Of course it can be proven that any finite-dimensional spaces of this kind can be given an orthogonal basis compatible with the sequence of nested subspaces etc. etc. so that one can define a coordinate system; but my problem does not refer to any such coordinate system or basis vectors.

Arjen Vreugdenhil - 5 years, 3 months ago


In my solution too, no such explicit basis vectors are invoked. But I don't understand how it "generalizes" the problem, because any \(n\)-dimensional inner-product space over \(\mathbb{R}\) is isomorphic to \(\mathbb{R}^n\).

Abhishek Sinha - 5 years, 3 months ago


@Abhishek Sinha Sure. I just don't need that (rather profound!) fact :)

Arjen Vreugdenhil - 5 years, 3 months ago


@Arjen Vreugdenhil I think this is a difference in terminology only. My solution is just as valid if I start with "Let \(u_1,\ldots,u_{n+1}\) be unit vectors in an \(n\)-dimensional real inner product space," and likewise for Abhishek's solution.

Otto Bretscher - 5 years, 3 months ago

Yes, this is a nice constructive solution! (+1) One can see what is happening: We take an equilateral triangle in the plane, for example, and place it into three-space, making it the base of a regular tetrahedron etc...

Small typos (just to show that I read your solution): \((1+k)\beta^2\mathbf{+}\beta\mathbf{-}k=0\) and \(\beta=\mathbf{-}1\).

While I would not call your solution "more general", it does have the advantage of proving the existence of a solution, while the other two solutions do not.

All I can say in defense of my solution is that it is brief and elegant ;)

Otto Bretscher - 5 years, 3 months ago

An elementary proof by finding a recursion on \(\theta_n\).

Let \(\theta_n\) denote the desired angle associated with the \(n+1\) unit vectors \(\{\mathbf{x}_1,\ldots,\mathbf{x}_{n+1}\}\) living in \(\mathbb{R}^n\), as mentioned in the question. W.l.o.g. we can assume that \(\mathbf{x}_1=[1,0,\ldots,0]\). This forces the other vectors to have the following form: \[\mathbf{x}_{k+1}=[\cos\theta_n,\ \mathbf{x}_k^{(n-1)}],\quad k=1,2,\ldots,n.\] Consequently, we have a set of \(n\) vectors \(\{\mathbf{x}_1^{(n-1)},\ldots,\mathbf{x}_n^{(n-1)}\}\) such that \(\langle\mathbf{x}_i^{(n-1)},\mathbf{x}_j^{(n-1)}\rangle=\cos\theta_n-\cos^2\theta_n\) for all \(i\neq j\). However, the vectors \(\mathbf{x}_i^{(n-1)}\) thus obtained are not normalized. Dividing them by their norm (which is \(|\sin\theta_n|\) by construction), we get a new set of \(n\) unit vectors with the same angle \(\theta_{n-1}\) enclosed between any two of them, where \[\cos\theta_{n-1}=\frac{\cos\theta_n-\cos^2\theta_n}{1-\cos^2\theta_n}=\frac{\cos\theta_n}{1+\cos\theta_n}.\] Since \(\cos\theta_n\neq 0\), we may rearrange to get the recursion \[\sec\theta_n=\sec\theta_{n-1}-1,\quad n\geq 2.\] Finally, noting that \(\theta_1=\pi\) by definition, we get \(\sec\theta_n=-n\), i.e., \(\cos\theta_n=-1/n\).

Hence, the desired answer is \(\cos\theta_4=\boxed{-0.25}\), and letting \(n\to\infty\), \(\theta_n\to\boxed{\pi/2}\).
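The secant recursion can be checked in a few lines (my addition, not part of the original solution): tracking \(\sec\theta_n\) directly from \(\sec\theta_1=\sec\pi=-1\) gives \(\sec\theta_n=-n\), hence \(\cos\theta_n=-1/n\).

```python
import math

sec = -1.0                 # sec(theta_1) = sec(pi) = -1
cosines = {1: 1 / sec}
for n in range(2, 11):
    sec -= 1               # sec(theta_n) = sec(theta_{n-1}) - 1
    cosines[n] = 1 / sec

theta_4 = math.acos(cosines[4])   # the R^4 answer, in radians
```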

Ameya Daigavane
Mar 14, 2016

Looking at the other solutions here, I'm feeling a little (a lot) overwhelmed. I solved this in a slightly different way (it might be equivalent to @Abhishek Sinha 's solution). I'm not sure if the solution is correct, so please tell me if anything's wrong. Let's go.

Let's make the \(n+1\) vectors all have magnitude 1, with no loss of generality. This doesn't change the angle \(\theta_n\) between any two vectors.
I claim that the sum of these vectors must be the zero vector, because the symmetry of the construction means the sum cannot "point" in any specific direction. (If someone can provide a rigorous proof, I'll be happy to include it.)

Okay, so let's take the entire construction and rotate it so that one of the vectors lies along one of the \(n\) coordinate axes. Let's name this vector \(\hat{i_1}\) because it's neat.
Now, let's name the rest of the vectors too, keeping \(\hat{i_1},\hat{i_2},\ldots,\hat{i_n}\) as the unit vectors of our space, as follows (\(k\) is not a letter index here; it denotes the component of the vector along the \(n^{th}\) dimension):

\[\vec{v_1}=a_1\hat{i_1}+b_1\hat{i_2}+\cdots+k_1\hat{i_n}\\ \vec{v_2}=a_2\hat{i_1}+b_2\hat{i_2}+\cdots+k_2\hat{i_n}\\ \vdots\\ \vec{v_n}=a_n\hat{i_1}+b_n\hat{i_2}+\cdots+k_n\hat{i_n}\]

Note that \(\hat{i_j}\cdot\hat{i_k}=0\) for \(j\neq k\), where \(\cdot\) is the dot product. This is because the basis vectors are mutually perpendicular in the \(n\)-dimensional space. (Rigorous proof encouraged.)
But, by the definition of the dot product, \(\vec{a}\cdot\vec{b}=|\vec{a}||\vec{b}|\cos(\theta)\), where \(\theta\) is the angle between \(\vec{a}\) and \(\vec{b}\).

Back to the question: taking dot products of the vectors with \(\hat{i_1}\), we have \[\cos(\theta_n)=\frac{\hat{i_1}\cdot\vec{v_1}}{|\hat{i_1}||\vec{v_1}|}=\cdots=\frac{\hat{i_1}\cdot\vec{v_n}}{|\hat{i_1}||\vec{v_n}|}.\]

\(|\hat{i_1}|=|\vec{v_1}|=\cdots=|\vec{v_n}|=1\Rightarrow\cos(\theta_n)=a_1=a_2=\cdots=a_n.\)

But \[\hat{i_1}+a_1\hat{i_1}+\cdots+a_n\hat{i_1}=(1+a_1+\cdots+a_n)\,\hat{i_1}=0\]
from the zero-vector condition above (along the first dimension).
This means \(n\cos(\theta_n)+1=0\Rightarrow\cos(\theta_n)=-\frac{1}{n}\).
Here, \(n=4\Rightarrow\cos(\theta_n)=-0.25\).
That's it.
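To close the gap, the configuration can be exhibited explicitly. The sketch below is my construction (not part of the original post; it assumes numpy): it builds \(n+1\) equiangular unit vectors in \(\mathbb{R}^n\) recursively, with the first vector aligned to \(\hat{i_1}\) as in the rotation argument above, and confirms both claims — each remaining vector has first component \(\cos\theta_n=-1/n\), and the whole set sums to the zero vector.

```python
import numpy as np

def simplex_vectors(n):
    """(n+1) unit vectors in R^n with pairwise cosine -1/n; first one = e_1."""
    if n == 1:
        return np.array([[1.0], [-1.0]])
    prev = simplex_vectors(n - 1)   # n equiangular unit vectors in R^(n-1)
    c = -1.0 / n                    # first component of the other vectors
    s = np.sqrt(1.0 - c * c)
    e1 = np.zeros(n)
    e1[0] = 1.0
    rest = np.hstack([np.full((n, 1), c), s * prev])
    return np.vstack([e1, rest])

v = simplex_vectors(4)     # five vectors in R^4
gram = v @ v.T             # should be 1 on the diagonal, -1/4 elsewhere
total = v.sum(axis=0)      # should be the zero vector
```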

As you point out yourself, there is a gap in your solution: you assume that the sum of the vectors is 0. To use Abhishek's terms: you assume that the center of gravity \(\mathbf{g}\) is \(\mathbf{0}\). I don't see a direct way to justify that assumption. Abhishek skillfully works around the issue with his affine argument, while I avoid the issue altogether.

Otto Bretscher - 5 years, 3 months ago


But what's wrong with the symmetry argument? I'm just curious about why it's wrong.

Ameya Daigavane - 5 years, 3 months ago


You need to make the argument more precise. For example, if you take three vectors in \(\mathbb{R}^3\) such that any two of them enclose the same angle (a right angle, for example), their sum need not be 0 (but you have "symmetry" too).

Otto Bretscher - 5 years, 3 months ago


@Otto Bretscher But it's not possible for 4 vectors in \(\mathbb{R}^3\), right? I think the jump goes from \(n\) to \(n+1\) in \(\mathbb{R}^n\), which has to do with the linear dependence of the vectors.

Ameya Daigavane - 5 years, 3 months ago


@Ameya Daigavane Yes, exactly, but you cannot prove it by appealing to symmetry alone. With 5 vectors in 4-space, it is clear that they are linearly dependent, but it is not clear that they are dependent in this special way that their sum is 0.

Otto Bretscher - 5 years, 3 months ago


@Otto Bretscher Got it. But it does seem like there exists a configuration of \(n+1\) vectors in \(\mathbb{R}^n\) such that their sum is zero and the angle between each pair is the same. Could a constructive argument work then?

Ameya Daigavane - 5 years, 3 months ago


@Ameya Daigavane : Once we know that \(\cos(\theta_n)=-\frac{1}{n}\), it is easy to show that the sum of the unit vectors must be zero, since \[(v_0+v_1+\cdots+v_n)\cdot(v_0+v_1+\cdots+v_n)=(n+1)+n(n+1)\cos(\theta_n)=0.\]

Otto Bretscher - 5 years, 2 months ago


@Otto Bretscher Of course, I got that. I was wondering if you could prove that without using the result we have to prove, that is, \(\cos\theta_n=-\frac{1}{n}\).

Ameya Daigavane - 5 years, 2 months ago

@Ameya Daigavane Take a look at this one ... it may clarify the issues.

Otto Bretscher - 5 years, 3 months ago
Otto Bretscher
Mar 12, 2016

The solution I had in mind is similar to that of Abhishek. I assume that the reader has taken a first course in Linear Algebra. If you have not yet done so, you should! ;)

Let \(\mathbf{u}_1,\ldots,\mathbf{u}_{n+1}\) be unit vectors in \(\mathbb{R}^n\) with \(\mathbf{u}_i\cdot\mathbf{u}_j=\cos(\theta_n)\) for \(i\neq j\). Let \(\mathbf{G}=(\mathbf{u}_i\cdot\mathbf{u}_j)_{ij}\) be the associated Gram matrix, which is singular and positive semidefinite, so that the smallest eigenvalue of \(\mathbf{G}\) is 0. Note that the diagonal entries of \(\mathbf{G}\) are 1, while all other entries are \(\cos(\theta_n)\). As we discussed here, the eigenvalues of \(\mathbf{G}\) will be \(1-\cos(\theta_n)>0\) and \(n\cos(\theta_n)+1=0\), so that \(\cos(\theta_n)=-\frac{1}{n}\).

In particular, \(\cos(\theta_4)=\boxed{-0.25}\).
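A quick numerical illustration (my addition, assuming numpy): for the \((n+1)\times(n+1)\) matrix with 1 on the diagonal and \(c\) elsewhere, the eigenvalues are \(1-c\) with multiplicity \(n\) and \(1+nc\) with multiplicity 1, so the matrix becomes singular exactly at \(c=-1/n\).

```python
import numpy as np

def gram_matrix(n, c):
    """(n+1) x (n+1) matrix with 1 on the diagonal and c off the diagonal."""
    m = n + 1
    return (1 - c) * np.eye(m) + c * np.ones((m, m))

n = 4
eigs = np.sort(np.linalg.eigvalsh(gram_matrix(n, -1 / n)))
# expected: one eigenvalue 0 and n eigenvalues 1 - (-1/n) = 1 + 1/n
```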

FYI: If \(\theta_n\) exists, then it is equal to the value calculated. However (if I recall correctly), such vectors need not exist for all \(n\).

Calvin Lin Staff - 5 years, 3 months ago


@Calvin Lin : I believe they do exist, but I may be wrong. Consider the \((n+1)\times(n+1)\) matrix \(A\) with 1's on the diagonal and \(-\frac{1}{n}\) elsewhere, a singular positive semidefinite matrix. Now we have a Cholesky decomposition \(A=LL^T\), and the rows of \(L\) will have the property we seek (since they are linearly dependent, they live in an \(n\)-dimensional subspace).

The Cholesky decomposition is essentially what @Arjen Vreugdenhil does, from first principles.

Otto Bretscher - 5 years, 3 months ago
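This existence construction can be carried out numerically. One caveat (my addition, assuming numpy): since \(A\) is singular, numpy's plain Cholesky routine rejects it, so the sketch below uses an eigendecomposition square root \(L=Q\sqrt{\Lambda}\) instead; the rows of \(L\) then have pairwise dot products equal to the entries of \(A\) and span only an \(n\)-dimensional subspace.

```python
import numpy as np

n = 4
m = n + 1
# A: 1's on the diagonal, -1/n elsewhere; singular positive semidefinite
A = (1 + 1 / n) * np.eye(m) - (1 / n) * np.ones((m, m))

w, Q = np.linalg.eigh(A)          # A = Q diag(w) Q^T
w = np.clip(w, 0.0, None)         # clip tiny negative round-off
L = Q * np.sqrt(w)                # scales column j of Q by sqrt(w_j)

gram = L @ L.T                    # row_i . row_j reproduces the entries of A
rank = np.linalg.matrix_rank(A)   # the row vectors span an n-dim subspace
```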
