Stochastic processes sampler pack (I)

Here we will solve a few classic problems in probability using some exotic methods from non-equilibrium statistical mechanics. Though each of these problems can be addressed using more elementary methods, the tools and tricks we'll explore here are powerful, and can be used to great effect in solving more difficult kinds of questions in physics and probability.

What sets stochastic processes apart from the usual questions in probability is the central role of fluctuations. In stochastic settings, it is not enough to reason about the most likely outcome, as it is not necessarily representative of typical states of the system.

For instance, if we're interested in building a temperature control system for a trip to Mars, it isn't enough to know that the average temperature remains 70 deg F, as this could mean anything from an enjoyable, constant 70 deg F to a ship that switches once a day between 0 and 140 deg F, killing all of its passengers.

For stochastic problems, we'd like to know about all the states of the system and thus require the entire probability distribution.

The first problem that we'll talk about is known, in some circles, as Ehrenfest's urn. In this problem, there are two urns, one on the left (\(\mathbf{L}\)) and one on the right (\(\mathbf{R}\)), which together hold a total of \(N_\text{T}\) balls. At each time step, there is some chance per ball of moving from the left urn to the right urn, and some chance of doing the reverse. Part of the inspiration for this model is the second law of thermodynamics, which states roughly that any closed dynamical system will, in the long run, tend to states of highest disorder. If a system is currently in a highly ordered state, it should feel a pressure toward disorder upon rearrangements.

For example, consider a container of liquid just after a partition is lifted between pure and dye-containing water.

Here, despite the lack of any identifiable driving force, the ordered (locally concentrated) dye molecules feel an overwhelming pressure to diffuse. An entropic force of sorts drives the system.

The balls in the urn are subject to the same considerations and can therefore bring out both the entropic effect and the dynamics of the system as it converges to "thermodynamic" equilibrium. The balls can be replaced with dye, heat, or genes; the effect still holds.

One question we can ask about the urns is what distribution the balls will have, on average, between the two urns. Some harder questions of interest are what dynamics the distribution will have on its way to steady state, and what statistics the steady state distribution of balls will exhibit.
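Before doing anything analytic, it helps to simply watch the model run. Below is a minimal Monte Carlo sketch in Python (my own illustration; the function name and parameter values are arbitrary choices, and numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_urn(n_left, n_total, alpha, beta, steps):
    """Each ball in L hops to R with probability alpha per step;
    each ball in R hops to L with probability beta per step."""
    history = [n_left]
    for _ in range(steps):
        to_right = rng.binomial(n_left, alpha)           # L -> R hops
        to_left = rng.binomial(n_total - n_left, beta)   # R -> L hops
        n_left += to_left - to_right
        history.append(n_left)
    return np.array(history)

# start with every ball on the left; symmetric hopping
traj = simulate_urn(n_left=1000, n_total=1000, alpha=0.1, beta=0.1, steps=200)
print(traj[0], traj[-1])   # relaxes from 1000 toward ~500, with fluctuations
```

Running it a few times shows both features we care about: a deterministic-looking relaxation toward an even split, and persistent fluctuations around it.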

Along the way we'll sketch some powerful results that allow us to make statements about uniqueness.

\(\large{\textrm{Urn problem}}\)

In the urn problem, we start out with \(N_\textrm{L}(0)\) balls in the left urn, and \(N_\textrm{R}(0)\) in the right. Each ball in the left urn has a chance \(\alpha\) per unit time of moving to the right. Likewise, each ball in the right urn has a chance \(\beta\) per unit time of moving to the left.

After one time step, the number of balls in the left and right urns has changed to

\[\begin{aligned} N_\text{L}(\Delta t) &= N_\text{L}(0)\left(1-\alpha\,\Delta t\right) + N_\text{R}(0)\,\beta\,\Delta t \\ N_\text{R}(\Delta t) &= N_\text{L}(0)\,\alpha\,\Delta t + N_\text{R}(0)\left(1-\beta\,\Delta t\right) \end{aligned}\]

This can be rewritten in matrix form:

\[\overbrace{\begin{pmatrix}N_\text{L}(\Delta t) \\ N_\text{R}(\Delta t)\end{pmatrix}}^{\displaystyle \lvert N_{\Delta t} \rangle} = \overbrace{\begin{pmatrix}1-\alpha\,\Delta t & \beta\,\Delta t \\ \alpha\,\Delta t & 1-\beta\,\Delta t\end{pmatrix}}^{\displaystyle \mathbb{I} + \mathcal{M}\Delta t}\;\overbrace{\begin{pmatrix}N_\text{L}(0) \\ N_\text{R}(0)\end{pmatrix}}^{\displaystyle \lvert N_0 \rangle}\]

For ease of notation, we write the state of the urns at time \(t\) as the vector \(\lvert N_t\rangle\). We see that the evolution matrix is the sum of \(\mathbb{I} = \begin{pmatrix}1&0\\0&1\end{pmatrix}\) and \(\mathcal{M}\,\Delta t\), where \(\mathcal{M} = \begin{pmatrix}-\alpha & \beta \\ \alpha & -\beta\end{pmatrix}\).

Evidently, to move the system through time we multiply the state of the urn by the matrix \(\mathbb{I} + \mathcal{M}\Delta t\).
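To make the time-stepping concrete, here is a small numerical sketch (the rates here are example values of my own choosing):

```python
import numpy as np

alpha, beta, dt = 0.3, 0.1, 0.01
M = np.array([[-alpha, beta],
              [ alpha, -beta]])           # the generator matrix M
step = np.eye(2) + M * dt                 # one-step evolution I + M*dt

N = np.array([100.0, 0.0])                # |N_0>: all balls start on the left
for _ in range(1000):                      # advance to t = 10
    N = step @ N
print(N)   # approaches (25, 75), i.e. N_T/(alpha+beta) * (beta, alpha)
```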

Here, we have a choice to make. We can proceed in continuous time (the limit \(\Delta t \rightarrow dt\)), or in discrete time steps where \(\Delta t\) is finite. First, let's look at the continuous time case.

\(\large\textrm{Continuous time}\)

For simplicity, let us consider the case where \(\alpha = \beta = \mu\).

Moving to the continuous limit, we take \(\Delta t = dt\).

First, we find \(\displaystyle \frac{d\lvert N_t\rangle}{dt}\):

\[\begin{aligned} \left(\mathbb{I} + \mathcal{M}\,dt \right) \lvert N_0 \rangle &= \lvert N_{dt} \rangle \\ \mathcal{M}\,dt\,\lvert N_{0}\rangle &= \lvert N_{dt} \rangle - \lvert N_{0} \rangle \\ \mathcal{M}\lvert N_{0}\rangle &= \frac{\lvert N_{dt} \rangle - \lvert N_{0} \rangle}{dt} \end{aligned}\]

and in the limit \(dt\rightarrow 0\) we have \(\displaystyle\frac{d\lvert N_t\rangle}{dt} = \mathcal{M}\lvert N_t\rangle\), which has the solution \(\lvert N_t \rangle = e^{\mathcal{M}t}\lvert N_0\rangle\).
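As a numerical sanity check (a sketch of mine, assuming scipy is available), the matrix-exponential solution agrees with fine-grained stepping by \(\mathbb{I} + \mathcal{M}\,dt\):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 0.3, 0.1, 5.0
M = np.array([[-alpha, beta],
              [ alpha, -beta]])

N0 = np.array([100.0, 0.0])
exact = expm(M * t) @ N0                   # |N_t> = e^{Mt} |N_0>

dt, N = 1e-4, N0.copy()
for _ in range(int(t / dt)):               # Euler stepping of d|N>/dt = M|N>
    N = N + (M @ N) * dt
print(exact, N)                            # the two agree to O(dt)
```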

Let's unpack the exponential term.

If we write \(\gamma = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\), we see that \(\mathcal{M} = \mu\left(\gamma - \mathbb{I}\right)\), and so we write

\[\begin{aligned} e^{\mathcal{M}t} &= e^{-\mu\mathbb{I}t}\,e^{\mu\gamma t} \\ &= e^{-\mu\mathbb{I}t}\left(\sum\limits_n \frac{\mu^n t^n}{n!}\,\gamma^n \right) \\ &= e^{-\mu\mathbb{I}t}\left(\mathbb{I} + \mu t\,\gamma + \frac{\mu^2 t^2}{2!}\,\gamma^2 + \frac{\mu^3 t^3}{3!}\,\gamma^3 + \ldots\right) \end{aligned}\]

This expansion might not seem to help our cause; however, we point out that \(\gamma^2 = \begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} = \begin{pmatrix}1&0\\0&1\end{pmatrix} = \mathbb{I},\)

as do all even powers of \(\gamma\). With that identity, we have

\[\begin{aligned} e^{\mu\gamma t} &= \mathbb{I} + \mu t\,\gamma + \frac{\mu^2 t^2}{2!}\,\mathbb{I} + \frac{\mu^3 t^3}{3!}\,\gamma + \ldots \\ &= \left(\mathbb{I} + \frac{\mu^2 t^2}{2!}\,\mathbb{I} + \ldots\right) + \left( \mu t\,\gamma + \frac{\mu^3 t^3}{3!}\,\gamma + \ldots\right) \\ &= \cosh\left( \mu t\right)\mathbb{I} + \sinh \left(\mu t\right)\gamma \\ &= \frac12\left(e^{\mu t}+e^{-\mu t}\right)\mathbb{I} + \frac12\left(e^{\mu t}-e^{-\mu t}\right)\gamma \end{aligned}\]

Now,

\[\begin{aligned} e^{\mathcal{M}t} &= e^{-\mu t}\left[\frac12\left(e^{\mu t}+e^{-\mu t}\right)\mathbb{I} + \frac12\left(e^{\mu t}-e^{-\mu t}\right)\gamma\right] \\ &= \frac12\left(1+e^{-2\mu t}\right)\mathbb{I} + \frac12\left(1-e^{-2\mu t}\right)\gamma \end{aligned}\]

where we used \(e^{-\mu\mathbb{I}t} = e^{-\mu t}\,\mathbb{I}\).

Finally, we find the mean distribution between the urns as a function of time:

\[\begin{aligned} \lvert N_t \rangle &= e^{\mathcal{M}t}\lvert N_0 \rangle \\ &= \left[\frac12\left(1+e^{-2\mu t}\right)\mathbb{I} + \frac12\left(1-e^{-2\mu t}\right)\gamma\right]\begin{pmatrix}N_\text{L}(0) \\ N_\text{R}(0)\end{pmatrix} \\ &= \frac12\left(1+e^{-2\mu t}\right)\begin{pmatrix}N_\text{L}(0) \\ N_\text{R}(0)\end{pmatrix} + \frac12\left(1-e^{-2\mu t}\right)\begin{pmatrix}N_\text{R}(0) \\ N_\text{L}(0)\end{pmatrix} \\ &= \frac12 \begin{pmatrix}N_\text{T} + \left(N_\text{L}(0) - N_\text{R}(0)\right)e^{-2\mu t} \\ N_\text{T} + \left(N_\text{R}(0) - N_\text{L}(0)\right)e^{-2\mu t}\end{pmatrix} \end{aligned}\]

where \(N_\text{T} = N_\text{L}(0) + N_\text{R}(0)\) is the total number of balls.

As a sanity check, we see that \(t=0\) yields the initial condition, as expected.
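We can also check the closed form against the numerical matrix exponential at several times (a quick sketch with arbitrary initial counts):

```python
import numpy as np
from scipy.linalg import expm

mu = 0.5
NL0, NR0 = 80.0, 20.0
NT = NL0 + NR0
M = mu * np.array([[-1.0, 1.0],
                   [ 1.0, -1.0]])          # alpha = beta = mu

for t in [0.0, 0.5, 2.0, 10.0]:
    numeric = expm(M * t) @ np.array([NL0, NR0])
    decay = np.exp(-2 * mu * t)
    analytic = 0.5 * np.array([NT + (NL0 - NR0) * decay,
                               NT + (NR0 - NL0) * decay])
    assert np.allclose(numeric, analytic)
print("closed form matches the matrix exponential")
```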

We can take the long-time limit \(t\rightarrow\infty\) of this expression to find the steady state distribution. We obtain

\[\lvert N_\infty\rangle = \frac{N_\text{T}}{2}\begin{pmatrix}1 \\ 1\end{pmatrix}\]

This shows that the blending process erases all history of the initial distribution between the urns. Moreover, we see that the initial state flows to the steady state at twice the hopping rate, \(2\mu\).

At this point we might reflect on the road to solution. The matrix algebra is elegant and we obtain a clean solution to the problem. On the other hand, it is something of a lucky coincidence that our matrix \(\mathcal{M}\) can be written in terms of \(\gamma\), where \(\gamma^2 = \mathbb{I}\). Further, there are more complicated systems than the Ehrenfest urn.

What can we say we have learned about linear stochastic processes in general from working the urn problem using the matrix exponential? The answer is, unfortunately, not much. For instance, do linear stochastic systems always have a steady state, or can they exhibit oscillations? Can they be multistable? Can the steady state depend on the initial condition? It would be useful to find a more insightful method of solution. We will make a start on this in the next section.

\(\large\text{Discrete time}\)

Before we begin, we note a difference between our work here and in the continuous case: in discrete time, \(\alpha\) and \(\beta\) are genuine probabilities, whereas they were rates in the work above.

We'll pick up at the point where we'd established the one-step update. Writing \(\mathcal{T} = \mathbb{I} + \mathcal{M}\) for the transition matrix over a single step (taking \(\Delta t = 1\)), we have

\[\begin{aligned} \lvert N_{t} \rangle &= \mathcal{T}\lvert N_{t-1} \rangle \\ &= \begin{pmatrix}1-\alpha & \beta \\ \alpha & 1-\beta\end{pmatrix} \lvert N_{t-1} \rangle \end{aligned}\]

We see that this equation implies the recursion \(\lvert N_{t} \rangle = \mathcal{T}^t\lvert N_0 \rangle\). Therefore, we can obtain the dynamics of the average trajectory simply by raising \(\mathcal{T}\) to whatever power we desire. The trouble is that \(\mathcal{T}\) isn't diagonal, and so raising it to a power can result in unwieldy expansions. Luckily, most matrices can be transformed into a diagonal representation.

The idea is that, unlike arbitrary matrices, diagonal matrices are easy to manipulate, so it is preferable to work with them whenever possible. As a diagonal matrix only has entries along the diagonal, any power of the matrix is given by raising each diagonal entry to the desired power. So, if

\[\mathcal{D} = \begin{pmatrix}\lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}\]

then

\[\mathcal{D}^n = \begin{pmatrix}\lambda_1^n & 0 & 0 \\ 0 & \lambda_2^n & 0 \\ 0 & 0 & \lambda_3^n \end{pmatrix}\]

Further, we can sandwich factors of \(\mathbb{I}=\mathcal{P}^{-1}\mathcal{P}\) between the factors of \(\mathcal{D}\) without affecting the expression:

\[\mathcal{D}^n = \mathcal{D}\,\mathcal{P}^{-1}\mathcal{P}\,\mathcal{D}\,\mathcal{P}^{-1}\mathcal{P}\,\mathcal{D}\cdots \mathcal{P}^{-1}\mathcal{P}\,\mathcal{D}.\]

We hope to find such a \(\mathcal{D}\) and \(\mathcal{P}\) for \(\mathcal{T}\) such that \(\mathcal{T} = \mathcal{P}\mathcal{D}\mathcal{P}^{-1}\); the same sandwiching then gives \(\mathcal{T}^n = \mathcal{P}\mathcal{D}^n\mathcal{P}^{-1}\). If we can, then most of the manipulation is carried out with a diagonal matrix, apart from the two factors of \(\mathcal{P}\) buttressing \(\mathcal{D}^n\).

As it turns out, such a \(\mathcal{P}\) is easy to find: it is the matrix whose columns are the eigenvectors of \(\mathcal{T}\), and the diagonal entries of \(\mathcal{D}\) are the corresponding eigenvalues. So long as \(\mathcal{T}\) has a linearly independent set of eigenvectors, a diagonal representation exists.
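Numerically, this whole program is a few lines. The sketch below (with arbitrary example rates) lets numpy find \(\mathcal{P}\) and \(\mathcal{D}\) for our transition matrix and confirms that powers of \(\mathcal{T}\) collapse onto powers of the diagonal:

```python
import numpy as np

alpha, beta = 0.3, 0.1
T = np.array([[1 - alpha, beta],
              [alpha, 1 - beta]])          # one-step transition matrix T

evals, P = np.linalg.eig(T)                # columns of P are the eigenvectors
D = np.diag(evals)
assert np.allclose(T, P @ D @ np.linalg.inv(P))    # T = P D P^{-1}

t = 25
Tt = P @ np.diag(evals ** t) @ np.linalg.inv(P)    # T^t via the diagonal form
assert np.allclose(Tt, np.linalg.matrix_power(T, t))
print(evals)   # one eigenvalue is 1; the other is 1 - alpha - beta = 0.6
```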

Let's find the eigenvalues and eigenvectors of \(\mathcal{T}\). What we'd like to find are two vectors which are left unchanged by \(\mathcal{T}\), apart from overall multiplication by a constant:

\[\begin{aligned} \mathcal{T}\lvert E \rangle &= \lambda \lvert E \rangle \\ \left(\mathcal{T}-\lambda \mathbb{I}\right)\lvert E \rangle &= 0 \end{aligned}\]

However, an equation of the form \(\mathcal{A}\lvert E \rangle = 0\) has a non-zero solution \(\lvert E \rangle\) if and only if the determinant of \(\mathcal{A}\) is zero: \(\det \mathcal{A} = 0\). We therefore solve \(\det\left(\mathcal{T}-\lambda\mathbb{I}\right)=0\):

\[\begin{aligned} 0 &= \det\left(\mathcal{T}-\lambda\mathbb{I}\right) \\ &= \det\begin{pmatrix}1-\alpha-\lambda & \beta \\ \alpha & 1-\beta-\lambda\end{pmatrix} \\ &= \left(1-\alpha-\lambda\right)\left(1-\beta-\lambda\right) - \alpha\beta \\ &= \left(\lambda - 1\right)\left(\lambda-1+\alpha+\beta\right) \end{aligned}\]

From which we see that the two eigenvalues are \(\lambda_+ = 1\) and \(\lambda_- = 1-\alpha-\beta\).

The eigenvectors corresponding to these values are

\[\begin{aligned} \lvert \lambda_+ \rangle &= \frac{1}{\alpha+\beta}\begin{pmatrix}\beta \\ \alpha\end{pmatrix} \\ \lvert \lambda_- \rangle &= \begin{pmatrix}1 \\ -1\end{pmatrix} \end{aligned}\]

As we said, \(\mathcal{P}\) is given by a matrix whose columns are the eigenvectors of \(\mathcal{T}\), hence

\[\mathcal{P} = \begin{pmatrix}\beta & 1 \\ \alpha & -1\end{pmatrix}\]

Inverting a \(2\times 2\) matrix is simple, and we have

\[\mathcal{P}^{-1} = -\frac{1}{\alpha+\beta}\begin{pmatrix} -1 & -1 \\ -\alpha & \beta\end{pmatrix}\]

Then,

\[\begin{aligned} \lvert N_t\rangle = \mathcal{T}^t\lvert N_0\rangle &= \mathcal{P}\mathcal{D}^t\mathcal{P}^{-1} \lvert N_0\rangle \\ &= -\frac{1}{\alpha+\beta}\begin{pmatrix}\beta & 1 \\ \alpha & -1\end{pmatrix}\begin{pmatrix}\lambda_+^t & 0 \\ 0 & \lambda_-^t\end{pmatrix}\begin{pmatrix} -1 & -1 \\ -\alpha & \beta\end{pmatrix} \lvert N_0\rangle \end{aligned}\]

Because \(\lvert\lambda_+\rvert = 1\) and \(\lvert\lambda_-\rvert = \lvert 1-\alpha-\beta\rvert < 1\) (as \(\alpha\) and \(\beta\) are probabilities), we have \(\lambda_+^t = 1\) for all \(t\), while \(\lambda_-^t \rightarrow 0\) as \(t\rightarrow\infty\). Therefore, at very long times, we can replace \(\mathcal{D}^t\) with

\[\begin{pmatrix}1&0 \\ 0&0\end{pmatrix}\]

and

\[\begin{aligned} \lvert N_\infty \rangle &= \frac{1}{\alpha+\beta}\begin{pmatrix}\beta&\beta \\ \alpha&\alpha\end{pmatrix}\lvert N_0 \rangle \\ &= \frac{N_\text{T}}{\alpha+\beta}\begin{pmatrix}\beta \\ \alpha \end{pmatrix} \end{aligned}\]

We can see that the final state of the urns is, like before, independent of the initial conditions. The balls are split according to the relative magnitudes of the probabilities to jump from one urn to the other.

If instead we keep both \(\lambda_+=1\) and \(\lambda_-=1-\alpha-\beta\) at all time scales, we obtain the full dynamics of the system:

\[\begin{aligned}\lvert N_t \rangle &= \frac{1}{\alpha+\beta}\begin{pmatrix}N_\text{L}(0)\left(\beta+\alpha\lambda_-^t\right) + N_\text{R}(0)\,\beta\left(1 -\lambda_-^t\right) \\ N_\text{L}(0)\,\alpha\left(1-\lambda_-^t\right) + N_\text{R}(0)\left(\alpha+\beta\lambda_-^t\right)\end{pmatrix} \\ &= \frac{N_\text{T}}{\alpha+\beta}\begin{pmatrix}\beta \\ \alpha \end{pmatrix} + \frac{\lambda_-^t}{\alpha+\beta}\begin{pmatrix}N_\text{L}(0)\,\alpha-N_\text{R}(0)\,\beta \\ -N_\text{L}(0)\,\alpha + N_\text{R}(0)\,\beta\end{pmatrix} \end{aligned}\]
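As a check, this eigendecomposition formula reproduces brute-force matrix powers (a sketch with arbitrary rates and initial condition):

```python
import numpy as np

alpha, beta = 0.3, 0.1
NL0, NR0 = 100.0, 0.0
NT = NL0 + NR0
T = np.array([[1 - alpha, beta],
              [alpha, 1 - beta]])
lam = 1 - alpha - beta                     # the minor eigenvalue lambda_-

for t in range(0, 60, 10):
    numeric = np.linalg.matrix_power(T, t) @ np.array([NL0, NR0])
    analytic = (NT / (alpha + beta)) * np.array([beta, alpha]) \
        + (lam ** t / (alpha + beta)) * np.array([ NL0 * alpha - NR0 * beta,
                                                  -NL0 * alpha + NR0 * beta])
    assert np.allclose(numeric, analytic)
print("eigendecomposition dynamics confirmed")
```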

Depending on the particular values of the hopping probabilities, the solution can exhibit qualitatively different behaviors. The \(\lambda_+=1\) eigenstate produces the constant urn distribution that the system approaches with time. The minor eigenstate, with eigenvalue \(1-\alpha-\beta\), is more interesting. For \(\alpha+\beta < 1\), the minor mode decays exponentially, and the system approaches the steady state monotonically. When \(\alpha+\beta > 1\), however, \(\lambda_-\) is negative, so the minor mode is a decaying oscillation: the deviation from steady state flips sign at every step, as the short demonstration below shows. In hindsight this should perhaps be obvious, since when \(\alpha=\beta=1\) every ball switches urns at every step, though this kind of behavior might not be expected from such a simple system.
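Here is a quick demonstration of the ringing regime (hopping probabilities chosen, for illustration, so that \(\lambda_- = -0.8\)):

```python
import numpy as np

alpha = beta = 0.9                         # alpha + beta > 1, so lambda_- = -0.8
T = np.array([[1 - alpha, beta],
              [alpha, 1 - beta]])

N = np.array([100.0, 0.0])
for t in range(6):
    print(t, np.round(N, 1))               # left count: 100, 10, 82, 24.4, ...
    N = T @ N
# the deviation from the 50/50 split flips sign and shrinks by 0.8 each step
```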

\(\large\textrm{Broadening the result}\)

Let's reflect on the calculation and ask what lessons we can take from it. Though somewhat less elegant than the matrix algebra of the first calculation, things seem somehow more general. Our calculation didn't depend on clever tricks with \(\gamma\) and series manipulation. Almost all matrices are diagonalizable, and so there is reason to hope that this approach will be more widely applicable.

Another key point is that the large eigenvalue \(\lambda_+\) was exactly 1, while the small eigenvalue \(\lambda_-\) had magnitude less than 1. This ensures that all valid initial conditions are attracted to \(\lvert \lambda_+ \rangle\).

It is reasonable to ask whether or not this is a coincidence. Can we always expect the largest eigenvalue to be 1? Can we always expect the other eigenvalues to satisfy \(\lvert\lambda\rvert < 1\)?

Let's look at the form of \(\mathcal{T} = \mathbb{I} + \mathcal{M}\):

\[\begin{pmatrix} 1-\alpha & \beta \\ \alpha & 1-\beta \end{pmatrix}\]

One thing that stands out is that each column sums to one: \(\sum\limits_i \mathcal{T}_{i0} = \sum\limits_i \mathcal{T}_{i1} = 1\). This represents the conservation of probability: every ball must end up somewhere in the system, and so the columns must sum to one. This will always be true of a linear stochastic process, regardless of the details of the system. Does this imply anything about the eigenvalues?

Square matrices can have left eigenvectors, \(\vec{L}\,\mathcal{A} = \lambda_\text{L}\vec{L}\), as well as right eigenvectors, \(\mathcal{A}\vec{R} = \lambda_\text{R}\vec{R}\). However, for any square matrix, the sets of left and right eigenvalues are the same. Therefore, let's try the row vector \(\vec{1} = \begin{pmatrix}1 & 1\end{pmatrix}\) against our transformation \(\mathcal{T}\):

\[\begin{aligned} \vec{1}\,\mathcal{T} &= \begin{pmatrix}\sum_i\mathcal{T}_{i0} & \sum_i\mathcal{T}_{i1}\end{pmatrix} \\ &= \begin{pmatrix}1 & 1\end{pmatrix} \end{aligned}\]

which shows that \(\vec{1}\) is a left eigenvector of \(\mathcal{T}\) with eigenvalue 1. Note that this will be true for every matrix that describes a linear stochastic process: it is the conservation of probability that guarantees \(\lambda_+=1\) is an eigenvalue of the system, not any detailed considerations.
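This is easy to verify numerically well beyond \(2\times2\). The sketch below builds a random column-stochastic matrix (my own construction) and checks that the row vector of ones is preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 5))
T = A / A.sum(axis=0)                      # normalize columns: column-stochastic

ones = np.ones(5)
assert np.allclose(ones @ T, ones)         # vector of ones is a left eigenvector
print("eigenvalue 1 follows from conservation of probability alone")
```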

Note, however, that the right and left eigenvectors for \(\lambda_+\) do not have equal entries. The right eigenstate corresponding to \(\lambda_+\), which is the steady state itself, depends on the details of the transport between states (i.e. the rates between the urns).

Can we take this any further? Can we summarize the remaining eigenvalues for an arbitrary linear stochastic process? If all remaining eigenvalues have magnitude less than 1, we can be certain of the uniqueness of steady state solutions.

To begin, no eigenvalue of the system can have magnitude greater than 1. Because the columns of \(\mathcal{T}\) are non-negative and sum to one, each entry of \(\vec{v}\,\mathcal{T}\) is a weighted average of the entries of \(\vec{v}\). It is therefore impossible for the largest entry of \(\vec{v}\,\mathcal{T}\) to exceed the largest entry of \(\vec{v}\), so no left eigenvector can be stretched by a factor with magnitude greater than one. Since the left and right eigenvalues coincide, no eigenvalue of \(\mathcal{T}\) has magnitude greater than one.

While it is a bit harder to prove, it is also true that, so long as the process is irreducible (every state can eventually be reached from every other), there is exactly one eigenvector with the maximum eigenvalue. This is the Perron-Frobenius theorem.
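The full theorem takes real machinery to prove, but a numerical experiment is consistent with it. The sketch below draws random column-stochastic matrices (positive entries, so the corresponding processes are irreducible) and checks their spectra:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.random((6, 6))
    T = A / A.sum(axis=0)                  # random column-stochastic matrix
    mags = np.sort(np.abs(np.linalg.eigvals(T)))
    assert np.isclose(mags[-1], 1.0)       # the largest magnitude is exactly 1
    assert mags[-2] < 1.0                  # every other mode strictly decays
print("unique attracting steady state in all 100 random cases")
```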

This is powerful. Any linear stochastic process of this kind, no matter how complicated, will tend toward a single steady state: every initial condition flows to the same place. The blending cleanses the system of all remnants of the non-steady-state modes.

However, this result is also very boring. For instance, there is no chaos. There can be oscillations in the decay to equilibrium (i.e. some \(\lambda \in \mathbb{C}\)), but all states eventually converge on the steady state. What is required to have sustained oscillations in non-deterministic systems? Are such oscillations possible without an external driving force? What is required for bistability? Is it possible in closed systems? We'll address these questions in future notes.

Next time we'll consider the urn again, asking what the statistics of the steady state are, and how fluctuations scale in large urn systems (many balls).

#Combinatorics #Calculus #Mechanics #LinearAlgebra #StochasticProcesses

Note by Josh Silverman
6 years, 11 months ago

Comments

Very well written. Keep it up! :)

Snehal Shekatkar - 6 years, 11 months ago

nice piece of work...keep it up..

balaram tej - 6 years, 11 months ago