5.3 \(M^B/M/1\) Queue and Rejection Policies
In this section, by using the renewal-reward theorem, we derive for the \(M^{B}/M/1\) queue expressions for \(\rho\), \(\E \W\), and \(\E \L\). To compute more difficult performance measures, such as the loss probability \(\P{\L>n}\), we need expressions for the stationary distribution \(\pi(n)\). We use level-crossing to derive a recursive scheme which, when combined with a policy to reject batches, can be used to compute these probabilities.
Assume that batches arrive as a Poisson process with rate \(\lambda_{B}\), and that the batch sizes are iid \(\sim B\). Let \(S_{k,i}\) denote the service time of the \(i\)th job of batch \(k\); assume that the \(S_{k, i}\) are iid \(\sim S\) and independent of \(B\). By Eq. (4.1.2) the utilization is \(\rho = \lambda_{B} \E B \E S\); as always, we require \(\rho<1\).
An arriving batch joins the end of the queue (if present). Once it arrives at the head of the queue, the batch moves in its entirety to the server,1 This is the same batching model as in Section 6.2. and then the server processes the jobs one after another until the batch is finished. Writing \(\QQ^B\) for the number of batches in queue and \(\Ls^B\) for the number of jobs (if any) at the server, the average time \(\E{\W^{B}}\) a batch spends in queue is2 First the jobs at the server have to be served, then the ones in queue.
\begin{equation*} \E{\W^B} = (\E{\Ls^B} + \E{\QQ^B} \E B) \E S. \tag{5.3.1} \end{equation*}Substituting the equality3 Little's law. \(\E{\QQ^B} = \lambda_{B} \E{\W^B}\) and simplifying yields
\begin{align*} \E{\W^B} &= \frac{\E{\Ls^B}}{1-\rho}\E{S}. \tag{5.3.2} \end{align*}We can relate the average time \(\E \W\) a job spends in queue to \(\E{\W^{B}}\) by observing that \(\E{\Ls^B} + \E{\QQ^B} \E B = \E \L\),4 The total number of jobs in the system is the number at the server plus the number in queue. The latter equals the number of batches in queue times the average batch size. so from Eq. (5.3.1)
\begin{align*} \E{\W^B} &= \E\L \E S \stackrel1= \E{\W}, \end{align*}where 1 follows from the memorylessness of the service time of the job in service (if any). Thus, to compute \(\E \W\) it remains to find an expression for \(\E{\Ls^B}\) in Eq. (5.3.2).
For this, we use the renewal reward theorem. Define5 Much like the proof of Little's law. \(R(t) := \int_0^t \Ls^B(s) \d s\), where \(\Ls^{B}(s)\) is the number of jobs (of the batch in service) at the server at time \(s\), and \(Z_k = R(D_k)-R(D_{k-1})\) where \(D_k\) is the departure time of the \(k\)th batch. Suppose the \(k\)th batch has batch size \(B_{k}\), then
\begin{align*} Z_k|B_{k} &= \int_{D_{k-1}}^{D_k} \Ls^B(s) \d s = B_{k} S_{k,1} + (B_{k}-1)S_{k,2} + \cdots + S_{k, B_{k}}, \end{align*}because \(B_{k}\) jobs are at the server during the service of the first job, \(B_{k}-1\) during the service of the second, \ldots Using that the \(S_{k,i} \sim S\),
\begin{align*} \E{Z_k | B_k} &= B_{k}\E S + (B_{k}-1)\E S + \cdots + \E S = \frac{B_{k}(B_{k}+1)} 2 \E S. \end{align*}As \(B_{k}\sim B\),
\begin{align*} \E{Z_k} & = \E{\E{Z_k| B_k}} = \E{\frac{B(B+1)}2} \E S = \frac{\E{B^2}+\E B}2 \E S. \end{align*}We now have all elements to fill in \(R=\alpha Z\). First, \(R = \lim_{t\to\infty}R(t)/t = \E{\Ls^{B}}\). Second, we sample at batch departure moments, so \(\alpha=\delta_{B}\), which equals \(\lambda_{B}\) by rate stability. Third, \(Z=\E{Z_{k}}\). Combining all this, and using that \(\rho=\lambda_{B} \E B\E S\), we obtain
\begin{align*} \E{\Ls^B} = R = \lambda_{B} Z = \lambda_{B} \frac{\E{B^2}}2\E S + \frac\rho 2. \tag{5.3.3} \end{align*}We can brush this up by realizing that6 Recall that the scv of a rv \(B\) is \(C_B^2 = \V B/ (\E B)^{2}\). We assume that \(\E{B^{2}} < \infty\).
\begin{align*} \frac{\E{B^2}}{(\E{B})^2} =\frac{\E{B^2}-(\E B)^2 + (\E B)^2}{(\E{B})^2} = \frac{\V B + (\E B)^2}{(\E B)^2}= C_B^2+1. \end{align*}Substituting this into the above gives the final result
\begin{equation*} \E{\W} = \frac{1+C_B^2}2 \frac{\rho}{1-\rho} \E B \E S + \frac12\frac\rho{1-\rho} \E S. \tag{5.3.4} \end{equation*}This formula is an important result. It tells us that \(\E \W\) increases when the load \(\rho\) or the variability \(C_{B}^{2}\) increases, and also when \(\E B\) or \(\E S\) increases while the utilization \(\rho\) remains the same. As such, it gives us the most important clues about what to change if \(\E \W\) has to be reduced.7 Hence, by Little's law, also \(\E \L\).
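For a quick numeric impression of Eq. (5.3.4), the snippet below simply evaluates the formula; the parameter values are arbitrary and serve only as an illustration.

# Evaluate Eq. (5.3.4); the parameter values are arbitrary.
labda_B = 1.0  # batch arrival rate
EB, CB2 = 2.0, 0.5  # mean batch size and its scv
ES = 0.25  # mean service time of a job
rho = labda_B * EB * ES  # utilization, must be < 1

EW = (1 + CB2) / 2 * rho / (1 - rho) * EB * ES + rho / (2 * (1 - rho)) * ES
print(EW)  # average time a job spends in queue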
Rather than using the time an entire batch spends at the server, we can concentrate on the expected time spent by a single job at the server. This will provide us with a simple expression for \(\P{\Ls^{B}=i}\), from which we can also derive Eq. (5.3.3).
In this case, let \(R_i(t) = \int_0^t \1{\Ls^{B}(s)=i} \d s\) be the total time there are \(i\) jobs at the server, so that \(R_i(t)/t \to \P{\Ls^{B}=i}\) as \(t\to\infty\). By sampling \(R_i(\cdot)\) at batch departure times \(\{D_k\}\), see Fig. 1, we see that
\begin{align*} Z_k = R_i(D_k) - R_i(D_{k-1}) = S_{k,i} \1{B_{k} \geq i}, \end{align*}because only when \(B_{k}\geq i\) do we see \(i\) jobs at the server for the \(k\)th batch, and then \(S_{k,i}\) is the time this job occupies the server. With this,
\begin{equation*} Z = \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^n S_{k,i} \1{B_{k} \geq i} = \E{S \1{B\geq i}} = \E S G(i-1), \end{equation*}since \(S_{k, i}\) and \(B_{k}\) are independent, and8 Recall: \(\P{A} := \E{\1{A}}\). \(G(i-1) = \E{\1{B\geq i}} = \P{B> i-1}\) is the survivor function of \(B\). With the renewal reward theorem and rate stability, \(\delta_{B} = \lambda_{B}\), we conclude that
\begin{align*} \P{\Ls^{B} = i} &= \lambda_{B} \E{S} G(i-1). \end{align*}This expression provides us with another derivation of Eq. (5.3.3), hence of Eq. (5.3.4). Just note that \(\E{\Ls^{B}} = \sum_{i=0}^\infty i \P{\Ls^{B}=i} = \lambda_{B} \E S \sum_{i=1}^\infty i G(i-1)\) and use Opt 5.3.12 to simplify.
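As a sanity check, summing over \(i\geq 1\) gives \(\P{\Ls^{B}\geq 1} = \lambda_{B}\E S \sum_{i=1}^\infty G(i-1) = \lambda_{B} \E B \E S = \rho\), the fraction of time the server is busy, as it should be.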
The second task is to use level-crossing to find a recursive formula for the state probabilities \(\{\pi(n)\}\).9 Note: \(n\) means the number of jobs in the system, not the number of batches. For this we need to generalize our earlier level-crossing equation \(\lambda p(n) = \mu p(n+1)\) because when a batch arrives, multiple levels can be up-crossed, see Fig. 2. We use the PASTA property to express level-crossing in terms of \(\{\pi(n)\}\) instead of \(\{p(n)\}\).
The down-crossing rate of level \(n\) is easy: since jobs are still served one-by-one, we have \(\mu \pi(n+1)\).
For the up-crossings of level \(n\) we should consider batch arrivals that see \(m\leq n\) jobs in the system. Since batches arrive at a rate \(\lambda_{B}\), PASTA implies that \(\lambda_{B} \pi(m)\) is the arrival rate of batches that see \(m\) upon arrival.10 That is, the total arrival process is thinned with a fraction \(\pi(m)\). When an arriving batch sees \(m\) jobs in the system, and level \(n\) has to be up-crossed, the batch must contain more than \(n-m\) jobs. As \(G(n-m)\) is the probability that the batch size is larger than \(n-m\) and as batch sizes are independent of arrival times, such batches arrive as a Poisson process with rate \(\lambda_{B} \pi(m) G(n-m)\). Finally, to compute the rate at which level \(n\) is up-crossed, we merge all these thinned Poisson processes into a new Poisson process with rate \(\lambda_{B} \sum_{m=0}^n \pi(m) G(n-m)\).
By level-crossing, the up-crossing and down-crossing rates must match, thus,
\begin{equation*} \lambda_{B} \sum_{m=0}^n \pi(m) G(n-m) = \mu \pi(n+1). \tag{5.3.5} \end{equation*}It is an important check11 Opt 5.3.13. to retrieve Eq. (5.3.4) from the expression \(\E\L = \sum_{n} n \pi(n)\) and Little's law.
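As a simple check, take \(n=0\): since every batch contains at least one job, \(G(0)=\P{B>0}=1\), and the recursion reduces to \(\lambda_{B}\pi(0) = \mu \pi(1)\), the level-crossing equation of the \(M/M/1\) queue.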
It remains to find the normalization constant for \(\pi\). However, this is not possible in general because Eq. (5.3.5) does not lead to a closed-form expression for \(\pi(n)\).12 Compare Eq. (5.1.1). Numerically, though, we can start with \(\pi(0)=1\), apply the recursion, and truncate when the (unnormalized) probabilities become negligible. The RV class then normalizes the result. The function mxm1 below implements this idea.
from random_variable import NumericRV as RV

labda_B, mu = 1, 3
B = RV({1: 1, 2: 1, 3: 1})  # batch size, uniform on {1, 2, 3}
rho = labda_B * B.mean() / mu
rho_B = labda_B / mu


def mxm1(eps=1e-10):
    "Compute stationary distribution of number of jobs in the system."
    if rho >= 1:
        print("The load is too high")
        quit()
    pi, n = [1], 0  # unnormalized probabilities, starting from pi(0) = 1
    # The lags i with G(i) = P{B > i} > 0 are the batch sizes minus one.
    batch_sizes = [int(m) - 1 for m in B.support()]
    # For small n, apply the recursion Eq. (5.3.5) term by term.
    while n < batch_sizes[-1]:
        res = sum(pi[m] * B.sf(n - m) for m in range(n + 1))
        pi.append(res * rho_B)
        n += 1
    # For larger n, only the lags in batch_sizes contribute to the sum.
    while pi[-1] > eps:
        res = sum(pi[n - m] * B.sf(m) for m in batch_sizes)
        pi.append(res * rho_B)
        n += 1
    return RV({i: pi[i] for i in range(len(pi))})
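As a numerical sanity check (a sketch; we assume here that the RV returned by mxm1 exposes the same mean() method we used above for B), we can compare \(\E\L = \sum_n n \pi(n)\) with \(\E\W/\E S\) from Eq. (5.3.4), since \(\E\W = \E\L \E S\).

# Compare E L from the recursion with E W / E S from Eq. (5.3.4).
pi = mxm1()
EL = pi.mean()  # E L = sum_n n pi(n); assumes RV.mean() exists

ES, EB = 1 / mu, B.mean()
EB2 = (1 + 4 + 9) / 3  # E B^2 for the uniform batch size on {1, 2, 3}
CB2 = (EB2 - EB**2) / EB**2
EW = (1 + CB2) / 2 * rho / (1 - rho) * EB * ES + rho / (2 * (1 - rho)) * ES
print(EL, EW / ES)  # should agree up to the truncation eps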
A different approach is to completely bypass the normalization problem by blocking the demand above some level \(K\). There are three common policies for deciding which jobs in a batch to accept.
- Complete acceptance: accept all batches that arrive when the system contains \(K\) or fewer jobs, and reject the entire batch when the system contains more than \(K\) jobs. For this case, we assume that the batch size is bounded by a number \(K_{B}\), so that \(L(t) \leq K + K_{B}\).13 Ex 5.3.8.
- Partial acceptance: accept whatever fits of a batch, i.e., up to \(K\), and reject the rest.14 Ex 5.3.9.
- Complete rejection: if a batch does not fit entirely into the system, reject it completely.15 Ex 5.3.10.
In Ex 5.3.8--Ex 5.3.10 we derive recursions similar to Eq. (5.3.5) to compute \(\{\pi(n)\}\) for these blocking policies.
For the \(M^{B}/M/1\) queue, let \(\E{\QQ}\) be the expected number of jobs in queue, \(\E{\QQ^{B}}\) the expected number of batches in the queue, \(\E{\W}\) the expected waiting time of a job in queue, \(\lambda\) the rate at which jobs arrive, and \(\lambda_{B}\) the rate at which batches arrive. Claim: Little's law informs us that \(\E{\QQ^{B}}\E B=\lambda\E{\W}\).
Solution
True. Little's law for batches gives \(\lambda_{B} \E{\W^{B}} = \E{\QQ^{B}}\). Since \(\lambda_{B} = \lambda /\E B\) and \(\E{\W^{B}} = \E \W\), it follows that \(\E{\QQ^{B}}\E B = \lambda \E{\W}\).
For an \(M^B/M/1\) queue we know that: \[ \E{\W}=\frac{1+C_B^2}{2}\frac{\rho}{1-\rho}\E{B}\E{S}+\frac{1}{2}\frac{\rho}{1-\rho}\E{S}. \] Suppose that \(\V B = 0\). Claim: This formula implies that if the batch size increases but \(\E S\) decreases such that \(\rho\) remains the same, the expected waiting time decreases.
Solution
True. \(\V B = 0 \implies C_{B}^{2} = 0\). With \(\lambda_{B}\) fixed, keeping \(\rho = \lambda_{B}\E B \E S\) constant keeps \(\E B \E S\) constant, so the first term on the RHS does not change. The second term becomes smaller because \(\E S\) decreases.
Suppose that the time to process a job has an exponential distribution. Claim: the inter-arrival times of individual jobs in an \(M^B/M/1\) queue are exponentially distributed.
Solution
False. Jobs come in batches; hence, multiple jobs can arrive at the same time, implying that the inter-arrival distribution is no longer exponential.
Claim: This code correctly implements an \(M^{B}/M/1/K\) queue with complete rejection.
def complete_rejection(K):
    pi, n = {}, 0
    pi[0] = 1
    while n < K:
        pi[n + 1] = sum(
            pi[m] * (S.sf(n - m) - S.sf(K - m)) for m in range(n)
        )
        pi[n + 1] *= rho
        n += 1
    return RV(pi)
Solution
False. The range over \(m\) should include \(n\); now it runs up to (but not including) \(n\). (Note also that the code uses S.sf and rho where it should use B.sf and rho_B; compare the code of Ex 5.3.10.)
Claim: In the \(M^B/M/1\) queue, if the batch size \(B\) is deterministic, the waiting time formula reduces to the Pollaczek-Khinchine formula for the \(M/M/1\) queue.
Solution
False. When \(B\) is deterministic, \(C_B^2 = 0\), but the waiting time formula still contains the factor \(\E B\) from the batch structure. Only when \(B \equiv 1\) does the formula reduce to the \(M/M/1\) result.
A company operates a machine that receives batches of various sizes. Management would like to know how a reduction in batch size variability would affect average queueing time. Suppose, for the sake of an example, that batch sizes are such that \(\P{B=1} = \P{B=2} = \P{B=3} = 1/3\). The batches arrive at the rate \(\lambda_B = 1/h\). The average service time for a job is \(25\) minutes. Calculate by how much \(\E\L\) would decrease if \(B\equiv 2\).
Solution
Start with the simple case: \(B\equiv 2 \implies \V B=0 \implies C_B^2 = 0\), and \(\rho=\lambda_B \E B \E S = 1\cdot 2 \cdot 25/60 = 5/6\). Since \(\E\W = \E\L \E S\), dividing Eq. (5.3.4) by \(\E S\) gives
\begin{equation*} \E{\L} = \frac 12 \frac{5/6}{1/6} 2 + \frac 12 \frac{5/6}{1/6} = 5 + \frac52. \end{equation*}Now the other case: \(\E{B^2} = (1+4+9)/3 = 14/3\), hence \(\V B=14/3 - 4=2/3\) and \(C_B^2= \V B/(\E B)^2 = \frac 16\). And thus,
\begin{equation*} \E{\L} = \frac {1+1/6}2 \frac{5/6}{1/6} 2 + \frac 12 \frac{5/6}{1/6} = \frac76 5 + \frac 52. \end{equation*}The ratio of the two values of \(\E{\L}\) is \((50/6)/(45/6)=10/9\): by Little's law, the average waiting time can be reduced by about 10\% by working with fixed batch sizes.
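These numbers are easy to check with a few lines of code; the snippet below evaluates \(\E\L = \E\W/\E S\) for both batch-size distributions.

# Check the two cases numerically: E L = E W / E S.
rho, EB = 5 / 6, 2
EL_fixed = 1 / 2 * rho / (1 - rho) * EB + 1 / 2 * rho / (1 - rho)
CB2 = 1 / 6  # scv of the uniform batch size on {1, 2, 3}
EL_mixed = (1 + CB2) / 2 * rho / (1 - rho) * EB + 1 / 2 * rho / (1 - rho)
print(EL_fixed, EL_mixed, EL_mixed / EL_fixed)  # 7.5, 8.33..., 10/9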
Suppose \(B\) is geometrically distributed16 Sometimes this distribution is called the First Success distribution. There is no consensus on the definition of the geometric distribution, see Wikipedia. with \(\E B = 1/p\).
Compute \(\E\L\) for the \(M^{B}/M/1\) queue when \(B\) is geometrically distributed with \(\E B = 1/p\). Check that if \(\E B =1\) the \(M/M/1\) queue results.
Hint
\(\P{B=k}=q^{k-1}p\) with \(q=1-p\). Look up the expectation and variance of a geometric rv.
Solution
For a geometric rv,
\begin{align*} \E B &= \frac 1 p, & \V B &= \frac1{p^2}-\frac1p, & C_B^2&= \frac{\V B}{(\E B)^2} = p^2 \left(\frac1{p^2}-\frac1p\right)=1-p, \\ (1+C_B^2)/2 &= 1-p/2. \end{align*}With this, fill in Eq. (5.3.4) for \(\E \W\) and use \(\E\L = \E\W/\E S\) to get
\begin{equation*} \E \L = \left(1-\frac p2\right) \frac\rho{1-\rho} \frac 1 p + \frac12\frac\rho{1-\rho} =\frac\rho{1-\rho} \frac 1 p. \end{equation*}For the \(M/M/1\) queue, \(\E B = 1 \implies p = 1 \implies \E\L=\rho/(1-\rho)\).
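As an illustration, the following self-contained snippet (with arbitrarily chosen values for \(\lambda_B\), \(\mu\), and \(p\)) checks this result against the recursion Eq. (5.3.5).

# Verify E L = rho / (p (1 - rho)) with the recursion Eq. (5.3.5).
labda_B, mu, p = 1.0, 4.0, 0.5
q = 1 - p
rho = labda_B / (p * mu)  # rho = labda_B E B E S with E B = 1/p, E S = 1/mu


def G(i):
    "Survivor function P{B > i} of the geometric distribution."
    return q**i


pi = [1.0]  # unnormalized probabilities, pi(0) = 1
while pi[-1] > 1e-12:
    n = len(pi) - 1
    pi.append(labda_B / mu * sum(pi[m] * G(n - m) for m in range(n + 1)))
norm = sum(pi)
EL = sum(n * P for n, P in enumerate(pi)) / norm
print(EL, rho / (p * (1 - rho)))  # both should be (close to) 2.0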
If you cannot find the expectation and variance of a geometric rv, then here is the derivation.
\begin{align*} M_B(s) &= \E{e^{sB}} = \sum_{k=1}^\infty e^{sk} \P{B=k} = \sum_{k=1}^\infty e^{sk} p q^{k-1} = \frac p q \sum_{k=1}^\infty (q e^s)^k = \frac {p e^s}{1-qe^s},\\ \E B &= M_B'(0) = \left.\frac {p e^s}{(1-q e^s)^2}\right|_{s=0}= \frac p{(1-q)^2} = \frac 1 p,\\ \E{B^2} &= M_B''(0) = \left.\frac{p e^s (1+q e^s)}{(1-q e^s)^3}\right|_{s=0} = \frac{1+q}{p^2} = \frac2{p^2} - \frac1p, \\ \V B &= \E{B^2} - (\E B)^2 = \frac2{p^2} - \frac1p - \frac1{p^2} = \frac1{p^2}-\frac1p. \end{align*}
Derive a set of recursions analogous to Eq. (5.3.5) to compute \(\pi(n)\) for the \(M^B/M/1/K\) queue with complete acceptance.
Solution
The complete acceptance policy is quite simple. Any batch is accepted when \(n\leq K\), so the queue length is not bounded; only when the number of jobs in the system exceeds \(K\) do we reject batches.
\begin{equation*} \mu \pi(n+1) = \begin{cases} \lambda_B \sum_{m=0}^n \pi(m) G(n-m), & \text{ for } n< K,\\ \lambda_B \sum_{m=0}^K \pi(m) G(n-m), & \text{ for } n\geq K. \end{cases} \end{equation*}Use the code of Ex 5.3.9 to get a working example.
def complete_acceptance(K):
    pi, n = {}, 0
    pi[0] = 1
    while B.sf(n - K) > 0:  # probabilities are positive up to K + K_B
        pi[n + 1] = sum(pi[m] * B.sf(n - m) for m in range(min(n, K) + 1))
        pi[n + 1] *= rho_B
        n += 1
    return RV(pi)
Derive a set of recursions analogous to Eq. (5.3.5) to compute \(\pi(n)\) for the \(M^B/M/1/K\) queue with partial acceptance.
Solution
In the partial acceptance case, an arriving batch is always admitted, but the system accepts only the jobs that fit, i.e., up to level \(K\), and rejects the rest. A level \(n\in \{0,1,\ldots,K-1\}\) is still up-crossed by any batch of size larger than \(n-m\) arriving when the system is in state \(m\), so the formula for the up-crossing rate is identical to the case without this acceptance policy. Hence, \(\mu \pi(n+1) = \lambda_B \sum_{m=0}^n \pi(m) G(n-m)\), for \(n=0,1,\ldots, K-1\).
This is the code.
def partial_acceptance(K):
    pi, n = {}, 0
    pi[0] = 1
    while n < K:
        pi[n + 1] = sum(pi[m] * B.sf(n - m) for m in range(n + 1))
        pi[n + 1] *= rho_B
        n += 1
    return RV(pi)
Derive a set of recursions analogous to Eq. (5.3.5) to compute \(\pi(n)\) for the \(M^B/M/1/K\) queue with complete rejection.
Solution
Suppose a batch of size \(k\) arrives when the system contains \(m\) jobs. When \(m+k \leq K\), the batch is accepted, since it fits entirely into the system; otherwise it is rejected. Further, level \(n\), \(0\leq n < K\), can only be up-crossed when \(m+k > n\). Thus, for \(n=0,\ldots, K-1\),
\begin{align*} \mu \pi(n+1) &= \lambda_B \sum_{m=0}^n \pi(m) \P{n-m < B \leq K-m} \\ &=\lambda_B \sum_{m=0}^n \pi(m) [G(n-m)-G(K-m)]. \end{align*}Let us check this for some simple cases. First, when \(K\to \infty\), then \(G(K-m)\to 0\), so we recover our earlier result. Second, take \(n=K\). Then \(G(n-m)-G(K-m)=0\) for all \(m\), so that the RHS is 0, as it should be: level \(K\) can never be up-crossed. Third, take \(n=0\) and \(K=1\); then \(\mu \pi(1)= \lambda_B [G(0)-G(1)]\pi(0) = \lambda_B \P{B=1}\pi(0)\), which also makes sense, as only batches of size 1 are accepted.
Use the code of Ex 5.3.9 to get a working example.
def complete_rejection(K):
    pi, n = {}, 0
    pi[0] = 1
    while n < K:
        pi[n + 1] = sum(
            pi[m] * (B.sf(n - m) - B.sf(K - m)) for m in range(n + 1)
        )
        pi[n + 1] *= rho_B
        n += 1
    return RV(pi)
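For a quick comparison of the three policies, we can run them on the running example (this assumes the definitions of B and rho_B used for mxm1, and, as before, that the returned RV exposes mean()).

# Compare the mean number of jobs in the system under the three policies.
K = 5
for policy in (complete_acceptance, partial_acceptance, complete_rejection):
    print(policy.__name__, policy(K).mean())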
Show that when \(B\leq K_{B}\), the probabilities \(\pi(n)\) decrease geometrically fast.
Hint
Use Ex 1.1.6, the recursion for \(\pi(n)\), \(\rho = \lambda_{B}/\mu \sum_{i=0}^{K_{B}-1} \P{B>i}\), and realize that \(\E B \geq 1\) whatever the distribution of \(B\).
Solution
The recursion for \(\pi(n+1)\) states that
\begin{equation*} \pi(n+1) = \lambda_{B}/\mu \sum_{i=0}^{K_{B}-1} \pi(n-i) \P{B>i}. \end{equation*}Start with taking \(\pi(0)=1\), so we consider the unnormalized probabilities. Then, by the recursion and the hint, \(\pi(1) = \lambda_{B}/\mu \cdot \pi(0) \leq \rho\), since \(\E B \geq 1\). Since \(\pi(1) <\pi(0)\), also \(\pi(2) \leq \lambda_{B}/\mu \sum_{i=0}^{1} \P{B>i} \leq \rho\). By similar reasoning, \(\pi(n)\leq \rho\) for all \(n\geq 1\). This in turn implies that
\begin{equation*} \pi(K_{B}+1) = \lambda_{B}/\mu \sum_{i=0}^{K_{B}-1} \pi(K_{B}-i) \P{B>i} \leq \lambda_{B}/\mu \cdot \rho \sum_{i=0}^{K_{B}-1} \P{B>i} = \rho^{2}, \end{equation*}because all the indices \(K_{B}-i\) are at least 1, so that \(\pi(K_{B}-i)\leq\rho\). Continuing like this we see that \(\pi(n)\leq\rho^{2}\) for all \(n \geq K_{B}+1\), then that \(\pi(n)\leq\rho^{3}\) for all \(n \geq 2K_{B}+1\), and so on. Hence the \(\pi(n)\) decrease geometrically fast.
Use Ex 1.1.6 to show that \(\sum_{i=1}^\infty i G(i-1)= ({\E{B^2} + \E B})/{2}\).
Hint
Read the answer of Ex 1.1.6.
Solution
Since \(G(i-1) = \P{B \geq i} = \E{\1{i\leq B}}\),
\begin{equation*} \sum_{i=1}^\infty i\, G(i-1) = \E{\sum_{i=1}^{B} i} = \E{\frac{B(B+1)}2} = \frac{\E{B^2} + \E B}2. \end{equation*}
Use Eq. (5.3.5) in \(\E{\L} = \sum_{n=0}^\infty n \pi(n)\) to derive Eq. (5.3.4).
Hint
Show first that
\begin{equation*} \mu \E \L =\mu \sum_{n=0}^\infty n \pi(n) = \lambda_B \frac{\E{B^2}}2 + \lambda_B \E B \E \L +\lambda_B \frac{\E B}2. \end{equation*}
Solution
We use that \(\mu \pi(n) =\lambda_B \sum_{i=0}^{n-1} \pi(i) G(n-1-i)\) and the result of Opt 5.3.12 to see that
\begin{align*} \mu \E \L &=\sum_{n=0}^\infty n\, \mu \pi(n), \quad \text{now substitute for \(\mu \pi(n)\) the recursion Eq. (5.3.5)}, \\ & =\lambda_B \sum_{n=0}^\infty n \sum_{i=0}^{n-1} \pi(i) G(n-1-i) =\lambda_B \sum_{n=0}^\infty n \sum_{i=0}^\infty \1{i<n} \pi(i) G(n-1-i) \\ & =\lambda_B \sum_{i=0}^\infty \pi(i) \sum_{n=0}^\infty \1{i<n} n G(n-1-i) =\lambda_B \sum_{i=0}^\infty \pi(i) \sum_{n=i+1}^\infty n G(n-1-i) \\ & =\lambda_B \sum_{i=0}^\infty \pi(i) \sum_{n=0}^\infty (n+i+1) G(n) \\ & =\lambda_B \sum_{i=0}^\infty \pi(i) \left[\sum_{n=0}^\infty n G(n) +(i+1) \sum_{n=0}^\infty G(n)\right] \\ &=\lambda_B \sum_{i=0}^\infty \pi(i)\sum_{n=0}^\infty n G(n) +\lambda_B \E B \sum_{i=0}^\infty \pi(i) (i+1) \\ &=\lambda_B \sum_{i=0}^\infty \pi(i) \frac{\E{B^2} -\E B}2 + \lambda_B \E B (\E \L +1) \\ &= \lambda_B \frac{\E{B^2} -\E B}2 + \lambda_B \E B \E \L +\lambda_B \E B \\ &= \lambda_B \frac{\E{B^2} }2 + \lambda_B \E B \E \L +\lambda_B \frac{\E B}2. \end{align*}Dividing both sides by \(\mu\) and using that \(\lambda_B \E B/\mu = \rho\), we obtain
\begin{equation*} \E \L = \lambda_B \frac{\E{B^2}}2 \E S + \rho \E \L + \frac\rho2. \end{equation*}Bringing \(\rho \E \L\) to the LHS gives \((1-\rho) \E\L = \lambda_B \frac{\E{B^2}}2 \E S + \frac\rho2\), whose RHS equals \(\E{\Ls^B}\) by Eq. (5.3.3). Since \(\E\W = \E\L \E S\), this is Eq. (5.3.2), from which Eq. (5.3.4) follows.