Stochastic OR

5.5 \(M/G/1\) Queue Length Distribution

In Section 5.3 we used level-crossing arguments to find a recursion for the stationary distribution \(\pi(n)\) of the \(M^B/M/1\) queue. (You can skip this section if you are not interested.) Here we find a similar recursion to compute \(\pi(n)\) for the \(M/G/1\) queue. However, we cannot simply copy the ideas of Section 5.3 to the present situation, because in the \(M^B/M/1\) queue the service times of the items are exponential, hence memoryless, while in the \(M/G/1\) queue this is not the case.

If we want to characterize the state at all moments in time, we need to keep track of the number of jobs in the system and the remaining service time of the job in service (if any), because service times are not memoryless. But, by sampling at job departure times \(\{D_k\}\), the remaining service time is guaranteed to be \(0\), so we can restrict the state description to just the number in the system. For ease we write \(L_k=L(D_{k})\), i.e., not \(L(D_k-)\), for the number of jobs in the system left behind at the departure epoch of the \(k\)th job.

We now find a useful recursion for \(\{L_k\}\). Assume first that \(L_{k-1}>0\); this implies that \(A_k<D_{k-1}\), i.e., job \(k\) arrived before job \(k-1\) departed. If \(Y_{k}\) jobs arrive during the service \(S_{k}\) of job \(k\), then \(L_k=L_{k-1}-1 + Y_{k}\). The situation is slightly more complicated when \(L_{k-1}=0\): we first have to wait for job \(k\) to arrive, and then, if \(Y_k\) jobs arrive while job \(k\) is in service, \(L_k=Y_{k}\). Thus, in general,

\begin{equation*} L_k=[L_{k-1}-1]^+ + Y_{k}. \end{equation*}
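To see the recursion at work, here is a minimal simulation sketch; it is not part of the text, and the parameter values and variable names are mine. For the \(M/M/1\) queue, \(Y_k\) is geometric on \(\{0,1,\dots\}\) (see Ex 5.5.2 below), so we can estimate \(\delta(n) := \P{L_k=n}\) by running the recursion and compare with the known \(M/M/1\) distribution \((1-\rho)\rho^n\).

Python
import numpy as np

labda, mu = 1, 2
rho = labda / mu
p = mu / (labda + mu)  # P(service completes before the next arrival)
rng = np.random.default_rng(3)

num_steps = 100_000
L = 0
counts = np.zeros(10)
for _ in range(num_steps):
    # arrivals during one service; geometric on {0, 1, ...} for M/M/1
    Y = rng.geometric(p) - 1
    # the recursion L_k = [L_{k-1} - 1]^+ + Y_k
    L = max(L - 1, 0) + Y
    if L < len(counts):
        counts[L] += 1

print(counts[:4] / num_steps)  # estimated delta(n), ignoring burn-in
print([(1 - rho) * rho**n for n in range(4)])  # (1 - rho) rho^n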

With this recursion it is simple to find the level-crossing equations. Write \(\delta_k(n) := \P{L_k=n}\). The rvs \(\{Y_k\}\) form an iid sequence distributed as a common rv \(Y\) with pmf \(f(j) = \P{Y = j}\) and survivor function \(G(j) = \P{Y > j}\).

If \(L_{k}\) down-crosses level \(n\), then \(L_{k-1}=n+1\) and \(Y_k=0\); see Fig. 1 for an example at level \(n=3\). Hence, the probability of down-crossing level \(n\) at time \(D_{k}\) is \(\delta_{k-1}(n+1) f(0)\), since \(Y_{k}\) and \(L_{k-1}\) are independent by assumption for the \(M/G/1\) queue. To up-cross level \(n\), it is necessary that \(L_{k-1}\leq n\) and \(L_k>n\). When \(L_{k-1}=0\), then \(L_k = Y_k\), so we need \(Y_k > n\), giving probability \(G(n)\). When \(L_{k-1}=m\geq 1\), then \(L_k = m-1+Y_k > n\) requires \(Y_k > n+1-m\). The probability of up-crossing is then \(\delta_{k-1}(0) G(n) + \sum_{m=1}^{n} \delta_{k-1}(m) G(n+1-m)\). (Beware off-by-one errors; check!)

It is possible to prove (with some Markov chain theory) that \(\delta(n) = \lim_{k\to \infty} \delta_{k}(n)\) exists for all \(n\). Then, equating the up- and down-crossing probabilities, taking the limit \(k\to\infty\) on both sides, and noting that \(\pi(n) = \delta(n)\), see Eq. (4.6.8), we arrive at a recursion for \(\{\pi(n)\}\):

\begin{equation*} \pi(n+1) f(0)= \pi(0) G(n) + \sum_{m=1}^{n} \pi(m) G(n+1-m). \tag{5.5.1} \end{equation*}
[Figure: mg1_level_crossing.png]
Figure 1: Level crossing at departure moments.

For the evaluation of the above recursion we can just follow the scheme of Section 5.3, but there is an important difference: here the probabilities \(\{f(j)\}\) need to be computed.

We use a conditioning argument to find an expression for \(\P{Y=j}\). Since jobs arrive as a Poisson process with rate \(\lambda\), \(Y\given S=s \sim P(\lambda s)\), i.e.,

\begin{equation*} \P{Y =j\given S=s} = e^{-\lambda s}\frac{(\lambda s)^j}{j!}. \end{equation*}

If \(S\equiv s\), that is, a constant, then this is the distribution we are looking for. When \(F\) has a density \(f_S\) (we add the subscript to avoid confusion with the pmf \(f\) of \(Y\)), then,

\begin{equation*} \P{Y=j} = \int_0^\infty \P{Y =j\given S=s} f_S(s) \d s = \int_{0}^{\infty} e^{-\lambda s}\frac{(\lambda s)^j}{j!} f_S(s) \d s, \end{equation*}

which can be solved by hand in simple cases, for instance when \(S\sim \Exp{\mu}\) (see Ex 5.5.2). When we cannot obtain a closed-form expression for the integral, we need numerical methods. For this we can use the function quad of scipy.

Python
import numpy as np
from math import factorial
from scipy.integrate import quad

labda, mu = 2, 3
j = 5


def f(s):
    # density of S ~ Exp(mu)
    return mu * np.exp(-mu * s)


def g(s):
    # integrand: P(Y = j | S = s) times the density of S
    return np.exp(-labda * s) * (labda * s) ** j / factorial(j) * f(s)


print(quad(g, 0, np.inf))
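As a sanity check: for these parameters the integral has a closed form, since Ex 5.5.2 below gives \(\P{Y=5} = \frac{\mu}{\lambda+\mu}\left(\frac{\lambda}{\lambda+\mu}\right)^5 = 0.6 \cdot 0.4^5 = 0.006144\), which should match the value reported by quad up to its error estimate.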
Ex 5.5.1

If \(L(D_{k-1}) = 0\), why is \(\E{D_{k}-D_{k-1}} = \E{X} + \E{S}\)?

Solution

After job \(k-1\) left, job \(k\) first has to arrive. Hence, \(\E{D_k - D_{k-1}} = \E{X_k + S_k} = 1/\lambda + \E S\), where we use that \(X_k\) is memoryless.
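A quick numerical sanity check, as a sketch (not part of the text; the parameter values are mine): sample \(X\sim\Exp{\lambda}\) and \(S\sim\Exp{\mu}\) and average \(X+S\).

Python
import numpy as np

labda, mu = 2, 3
rng = np.random.default_rng(3)
n = 100_000

X = rng.exponential(1 / labda, n)  # interarrival times
S = rng.exponential(1 / mu, n)     # service times
print((X + S).mean(), 1 / labda + 1 / mu)  # should be close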

Ex 5.5.2

If \(S\sim \Exp{\mu}\), show that

\begin{equation*} f(j) = \P{Y_k = j} = \frac{\mu}{\lambda+\mu}\left(\frac{\lambda}{\lambda+\mu}\right)^j. \end{equation*}
Hint

Use conditional probability or Ex 1.2.10 and Ex 1.2.11.

Solution

Use conditional probability to see that

\begin{align*} \P{Y_k = j} &= \int_0^\infty e^{-\lambda x}\frac{(\lambda x)^j}{j!}\, \d F( x) = \int_0^\infty e^{-\lambda x}\frac{(\lambda x)^j}{j!} \mu e^{-\mu x}\, \d x = \frac{\mu}{j!}\lambda^j \int_0^\infty e^{-(\lambda+\mu) x}x^j\,\d x \\ &= \frac{\mu}{j!}\left(\frac{\lambda}{\lambda+\mu}\right)^j \int_0^\infty e^{-(\lambda+\mu) x}((\lambda+\mu)x)^j\,\d x = \frac{\mu}{j!}\left(\frac{\lambda}{\lambda+\mu}\right)^j \frac{j!}{\lambda+\mu} = \frac{\mu}{\lambda+\mu}\left(\frac{\lambda}{\lambda+\mu}\right)^j. \end{align*}

Method 2. Consider a Poisson process with rate \(\lambda+\mu\), and mark each point as a 'success' with probability \(\mu/(\lambda+\mu)\) (thinning). Then the probability that \(j\) 'failures' occur before the first 'success' is precisely \(\P{Y=j}\).
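The closed form is easy to check against numerical integration along the lines of the quad example above; the comparison below is a sketch with parameter values of my choosing.

Python
import numpy as np
from math import factorial
from scipy.integrate import quad

labda, mu = 2, 3
alpha = labda / (labda + mu)

for j in range(5):
    # integrate P(Y = j | S = s) against the Exp(mu) density
    val, _ = quad(
        lambda s: np.exp(-labda * s) * (labda * s) ** j / factorial(j)
        * mu * np.exp(-mu * s),
        0, np.inf,
    )
    print(j, val, (1 - alpha) * alpha**j)  # numerical vs closed form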

Ex 5.5.3

If \(S\sim \Exp{\mu}\), show that

\begin{equation*} G(j) = \sum_{k=j+1}^\infty f(k) = \left(\frac{\lambda}{\lambda+\mu}\right)^{j+1}. \end{equation*}
Hint

Use Ex 5.5.2.

Solution

Take \(\alpha = \lambda/(\lambda+\mu)\) so that \(f(j) = (1-\alpha) \alpha^j\).

\begin{align*} G(j) &= \sum_{k=j+1}^\infty f(k) = (1-\alpha) \sum_{k=j+1}^\infty \alpha^k = (1-\alpha) \alpha^{j+1}\sum_{k=0}^\infty \alpha^{k} = \alpha^{j+1}. \end{align*}
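A quick numerical check of this tail sum (a sketch; the truncation level and parameter values are mine):

Python
labda, mu = 2, 3
alpha = labda / (labda + mu)

j = 3
# truncated tail sum of f(k) = (1 - alpha) alpha^k for k > j
tail = sum((1 - alpha) * alpha**k for k in range(j + 1, 200))
print(tail, alpha ** (j + 1))  # should agree up to the truncation error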
Ex 5.5.4

Implement the recursion Eq. (5.5.1) in Python (or R) and test it for the \(M/M/1\) queue.

Hint

Use the results of Ex 5.5.2.

Solution

This is the most direct, but not the fastest, implementation.

Python
import numpy as np

num = 50  # truncation level of the pmf of Y


def compute_mg1_pi(pmf):
    cdf = pmf.cumsum()
    sf = 1 - cdf  # survivor function: sf[n] = G(n) = P(Y > n)
    pi = np.zeros(len(pmf))
    pi[0] = 1  # arbitrary start; we normalize at the end
    for n in range(len(pi) - 1):
        res = pi[0] * sf[n]
        res += sum(pi[m] * sf[n + 1 - m] for m in range(1, n + 1))
        pi[n + 1] = res / pmf[0]
    pi /= pi.sum()
    return pi


labda = 1
mu = 2
rho = labda / mu

# pmf of Y for the M/M/1 queue: f(j) proportional to alpha^j, cf. Ex 5.5.2;
# the normalization below supplies the factor 1 - alpha.
pmf = np.ones(num)
for i in range(1, len(pmf)):
    pmf[i] = pmf[i - 1] * labda / (labda + mu)
pmf /= pmf.sum()


pi = compute_mg1_pi(pmf)

# simple checks against the M/M/1 queue length distribution
print(f"{1 - rho=}, {pi[0]=}")
print(f"{(1 - rho) * rho**3=}, {pi[3]=}")
Output
1 - rho=0.5, pi[0]=np.float64(0.4999999999999981)
(1 - rho) * rho**3=0.0625, pi[3]=np.float64(0.06249999999999966)
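The point of the recursion is that it works for general service times. The sketch below is not part of the text: the uniform service distribution, the parameter values, and the reuse of compute_mg1_pi from the listing above are my choices. It computes the pmf of \(Y\) by numerical integration and feeds it into the same function. For \(S\sim U[0,1]\) and \(\lambda = 1\) we have \(\rho = \lambda \E S = 1/2\), so \(\pi(0)\) should equal \(1-\rho\).

Python
import numpy as np
from math import factorial
from scipy.integrate import quad

labda = 1  # with S ~ U[0, 1], E S = 1/2, so rho = 1/2
num = 50

pmf = np.empty(num)
for j in range(num):
    # P(Y = j) = int_0^1 exp(-labda s) (labda s)^j / j! ds,
    # since the density of S is 1 on [0, 1]
    pmf[j], _ = quad(
        lambda s: np.exp(-labda * s) * (labda * s) ** j / factorial(j), 0, 1
    )
pmf /= pmf.sum()  # repair the truncation at num terms

pi = compute_mg1_pi(pmf)  # from the solution above
print(pi[0], 1 - labda / 2)  # pi(0) should be 1 - rho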

The next exercise is a nice test of your algebra skills.

Ex 5.5.5

Check that the queue length distribution \(\{\pi(n)\}\) of the \(M/M/1\) queue satisfies Eq. (5.5.1).

Hint

Solve Ex 5.5.2 and Ex 5.5.3 first. Use the following shorthands: \(\alpha=\lambda/(\lambda+\mu) \implies \mu/(\lambda+\mu) = 1-\alpha \implies \alpha/(1-\alpha) = \lambda /\mu = \rho\).

Solution

Observe that \(f(j)=(1-\alpha)\alpha^j\) and \(G(j) = \alpha^{j+1}\). As the normalization factor cancels on both sides, we drop it and simply write \(\pi(n) = \rho^n\) to simplify the algebra.

For \(n=0\): \(f(0) \pi(1) = \pi(0) G(0) \iff (1-\alpha) \rho = 1\cdot \alpha\), and this checks with the hint, since \(\alpha/(1-\alpha) = \rho\). For \(n\geq 1\):

\begin{align*} (1-\alpha)\rho^{n+1} &= \pi(0) G(n) + \sum_{m=1}^n\pi(m) G(n+1-m) =\alpha^{n+1} + \sum_{m=1}^n \rho^m \alpha^{n-m+2} \\ &= \alpha^{n+1} + \alpha^{n+2}\sum_{m=1}^n (\rho/\alpha)^m = \alpha^{n+1} + \alpha^{n+1}\rho \sum_{m=0}^{n-1} (\rho/\alpha)^m \\ &= \alpha^{n+1} + \alpha^{n+1}\rho \frac{1-(\rho/\alpha)^n}{1-\rho/\alpha}\\ &= \alpha^{n+1} - \alpha^{n+1}(1-(\rho/\alpha)^n), \quad\text{as } 1- \rho/\alpha = -\rho,\\ &= \alpha^{n+1}(\rho/\alpha)^n = \alpha \rho^n. \end{align*}

Since \(\rho=\alpha/(1-\alpha)\), the LHS equals \(\rho^n (1-\alpha)\rho = \rho^n \alpha\), so the LHS and RHS are indeed the same.
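For a purely numerical confirmation of this algebra, the following sketch (parameter values mine) plugs the closed forms into both sides of Eq. (5.5.1):

Python
labda, mu = 1, 2
alpha = labda / (labda + mu)
rho = labda / mu


def f(j):
    return (1 - alpha) * alpha**j  # pmf of Y (Ex 5.5.2)


def G(j):
    return alpha ** (j + 1)  # survivor function (Ex 5.5.3)


def pi(n):
    return (1 - rho) * rho**n  # M/M/1 queue length distribution


for n in range(5):
    lhs = pi(n + 1) * f(0)
    rhs = pi(0) * G(n) + sum(pi(m) * G(n + 1 - m) for m in range(1, n + 1))
    print(n, lhs, rhs)  # the two columns should coincide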