On a cube, Markov the ant sits at a vertex, $A$. After every minute, he travels along an edge to one of the three adjacent vertices, with equal probability. Find the probability that after ten minutes (having traveled exactly ten times) Markov is back at vertex $A$. Write the answer as $\frac{m}{3^9}$, and enter $m$.
Good way of tracking the probability changes. The simplicity of expression is motivated by the underlying Markov chain structure.
If we write the probability as a deviation from the equilibrium, $p_n = \frac{1}{4} + d_n$
I'm lost here. What does this mean?
Just a computational technique. If you prefer, I could say "let $d_n = p_n - \frac{1}{4}$." Does that make sense, Comrade Pi?
I am doing this problem with eigenvectors and eigenvalues, but I'm trying to avoid them in the solution I write down.
No. What is the meaning/use of "probability as a deviation from the equilibrium"?
@Pi Han Goh – Maybe I'm lacking in some basics. Does this question have something to do with actual Markov chains? Or Gambler's Ruin, etc.? (I need to brush up on them again.) That's why I didn't understand what this "deviation from equilibrium" line means.
@Pi Han Goh – Of course it's a Markov chain... that's why Comrade Markov is involved. The systematic way to do those is with eigenvalues... 1 is always an eigenvalue in a Markov chain, and the corresponding eigenvector represents the equilibrium state.
@Otto Bretscher – Damn. I thought this is NOT a Markov Chain due to the lack of matrices in your solution. I'm not sure how to proceed though. Let me put my thinking cap on.
@Pi Han Goh – Here you get away with only two compartments: $A$ and (not $A$), so you will have a $2 \times 2$ matrix $M$. Just write the recursive formulas for your probabilities $p_{n+1}$ and $q_{n+1}$ in terms of $p_n$ and $q_n$, and you have your $2 \times 2$ matrix $M$. I simplified things by going directly for $M^2$.
I did the work with matrices and then "wrote the matrices out of my solution"... that's why my solution may sound a bit "forced".
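Since the thread brings up both the $2 \times 2$ matrix and its eigenvalues, here is a minimal sketch (my own illustration, not part of the original discussion) that iterates the two-step matrix $M^2 = \begin{pmatrix} 1/3 & 2/9 \\ 2/3 & 7/9 \end{pmatrix}$ with exact fractions; its entries are read off from the solution's recursion $p_{2n+2} = \frac{1}{3} p_{2n} + \frac{2}{9} q_{2n}$ and the complementary $q_{2n+2} = \frac{2}{3} p_{2n} + \frac{7}{9} q_{2n}$:

```python
# Minimal sketch: iterate the two-step (two-minute) matrix M2 = [[1/3, 2/9], [2/3, 7/9]]
# acting on the state (p_{2n}, q_{2n}), using exact fractions.
from fractions import Fraction as F
import numpy as np

M2 = [[F(1, 3), F(2, 9)],
      [F(2, 3), F(7, 9)]]

p, q = F(1), F(0)                      # at time 0, Markov is at A
for _ in range(5):                     # five two-minute steps = ten moves
    p, q = M2[0][0] * p + M2[0][1] * q, M2[1][0] * p + M2[1][1] * q

print(p)                               # 4921/19683
print(p * 3**9)                        # 4921, the requested m

# Eigen-structure of M2: eigenvalues 1 and 1/9; the eigenvector for eigenvalue 1,
# scaled so its entries sum to 1, is the equilibrium (1/4, 3/4).
vals, vecs = np.linalg.eig(np.array([[1/3, 2/9], [2/3, 7/9]]))
print(vals)                            # approximately [1.  0.11111] (order may vary)
v = vecs[:, np.argmax(vals)]
print(v / v.sum())                     # approximately [0.25 0.75]
```

The eigenvalue $\frac{1}{9}$ is exactly the factor in $d_{n+2} = \frac{1}{9} d_n$, and the eigenvector for eigenvalue $1$ is the equilibrium $\left(\frac{1}{4}, \frac{3}{4}\right)$ mentioned above.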
Interesting to represent computationally.
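For a direct computational take, here is a minimal sketch of my own (not the code originally posted here) that tracks the exact distribution over all eight vertices rather than two compartments:

```python
# Track the exact probability distribution of the walk over all 8 cube vertices.
from fractions import Fraction as F
from itertools import product

vertices = list(product((0, 1), repeat=3))           # the 8 corners of the cube
neighbors = {v: [w for w in vertices
                 if sum(a != b for a, b in zip(v, w)) == 1]
             for v in vertices}                      # edges join vertices differing in one coordinate

prob = {v: F(0) for v in vertices}
prob[(0, 0, 0)] = F(1)                               # Markov starts at A = (0, 0, 0)

for _ in range(10):                                  # ten one-minute moves
    nxt = {v: F(0) for v in vertices}
    for v, p in prob.items():
        for w in neighbors[v]:
            nxt[w] += p / 3                          # each of the 3 neighbors is equally likely
    prob = nxt

print(prob[(0, 0, 0)])            # 4921/19683
print(prob[(0, 0, 0)] * 3**9)     # 4921, the requested m
```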
Let's make the coordinates of the vertices all 0's and 1's, for simplicity, and let's say that Markov is at $A(0,0,0)$ initially. Since the number of 1's changes by one each time he moves along an edge, after an even number of moves he will be at one of the points $A(0,0,0)$, $B(1,1,0)$, $C(1,0,1)$ and $D(0,1,1)$.
Let $p_{2n}$ be the probability that he is at $A$ after $2n$ moves and $q_{2n} = 1 - p_{2n}$ the probability that he is at one of the points $B$, $C$ or $D$. The rules of the game and the geometry of the cube imply that $$p_{2n+2} = \frac{1}{3}\,p_{2n} + \frac{2}{9}\,q_{2n} = \frac{1}{9}\,p_{2n} + \frac{2}{9}$$ (from $A$ he returns in two moves with probability $\frac{1}{3}$, by stepping out and straight back, while from $B$, $C$ or $D$ exactly two of the nine equally likely two-move paths end at $A$).
If we write the probability as a deviation from the equilibrium, $p_n = \frac{1}{4} + d_n$, we find that $d_{n+2} = \frac{1}{9}\,d_n$ and $$p_{2n} = \frac{1}{4}\left(1 + \frac{1}{3^{2n-1}}\right).$$
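Explicitly, since Markov starts at $A$, we have $p_0 = 1$ and $d_0 = \frac{3}{4}$, so iterating the recursion gives $$d_{2n} = \left(\frac{1}{9}\right)^{n} d_0 = \frac{3}{4 \cdot 3^{2n}} = \frac{1}{4 \cdot 3^{2n-1}},$$ which is where the closed form above comes from.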
For $n = 5$ we have $$p_{10} = \frac{(3^9+1)/4}{3^9} = \frac{4921}{3^9}.$$