8. Comparisons of algorithms

Sun Woong Kim, Steven G. Heeringa and Peter W. Solenberger

Using the four controlled selection problems given in Section 2, we present results from the two methods using ${d}_{2}^{*}$ and ${d}_{\infty }^{*}$ in the new algorithm, and compare their solutions to those generated by the algorithms previously described by Jessen (1970), Jessen (1978), Causey et al. (1985), Huang and Lin (1998) and Winkler (2001). The solutions for the two methods using ${d}_{2}^{*}$ and ${d}_{\infty }^{*}$ were obtained by implementing the SOCSLP, running under version 9.2 of SAS/OR (2008). Solutions for the LP-based algorithm of Sitter and Skinner (1994) were also obtained, using PROC LP in version 9.2 of SAS/OR (2008). Solutions for the other methods are the results as they appeared in the original papers.

The answers to two questions help us compare the algorithms: 1) Do the new methods produce solutions different from those of the previous algorithms described in Section 5? 2) Do the new methods assign higher probabilities of selection to optimum arrays than the previous methods do?

Before comparing the algorithms, we first examine the results in Table 8.1 obtained from the two methods. In the table, the method using ${d}_{2}^{*}$ and the method using ${d}_{\infty }^{*}$ are denoted by ${N}_{2}$ and ${N}_{\infty }$, respectively. When distances are calculated by ${d}_{2}^{*}$ (or ${d}_{\infty }^{*}$), arrays with the same distance value fall into the same group, so the possible arrays are partitioned into distinct groups (see Remark 6.2). Let $G$ denote the number of these distinct groups. Also, let $OF$ be the realized value of the objective function (6.5) or (6.6), and let $t$ be the realized value of $T$, the number of transitions introduced in Section 6.3. All of these are obtained from the SOCSLP; in particular, $t$ is the number of iterations in phases 1 and 2 of PROC LP in the software.

Table 8.1
Values of $G,$ $OF$ and $t$ for the two methods

  Problem 2.1 Problem 2.2 Problem 2.3 Problem 2.4
  ${N}_{2}$ ${N}_{\infty }$ ${N}_{2}$ ${N}_{\infty }$ ${N}_{2}$ ${N}_{\infty }$ ${N}_{2}$ ${N}_{\infty }$
$G$ 4 3 9 2 6 2 157 14
$OF$ 1.336 0.62 1.689 0.64 1.582 0.72 1.661 0.701
$t$ 2 2 8 6 18 15 43 41

As seen in the table, most values of $G$ are much smaller than $L$, the number of all possible arrays given in Table 6.1. The exception is the large value of 157 for Problem 2.4, which arises simply because the ${a}_{ij}$ are given to three decimal places. When ${d}_{2}^{*}$ is used, the values of $OF$ range between 1 and 2, while they are always less than 1 when ${d}_{\infty }^{*}$ is used. Most values of $t$ do not reach the 95% CI of $T$ shown at the bottom of Table 6.1. Thus, the actual computational demands are less than theory would suggest.
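The paper solves the SOCSLP with SAS/OR PROC LP. Purely as an illustration of the kind of allocation problem involved, the sketch below solves a toy $3×3$ controlled selection LP with `scipy.optimize.linprog`: probabilities $p\left({B}_{k}\right)$ are assigned to candidate integer arrays so that their expectation matches the target array cell by cell, minimizing a ${d}_{\infty }^{*}$-weighted objective. The candidate arrays, target, and distances here are our own illustrative choices, not data from the paper's problems.

```python
# Toy controlled-selection LP (illustrative only): given a target array A and
# candidate integer arrays B_k, find probabilities p_k >= 0 with sum_k p_k = 1
# and sum_k p_k * B_k = A cell-wise, minimizing sum_k d_k * p_k, where d_k is
# the d-infinity distance max_ij |b_ij - a_ij| between B_k and A.
import numpy as np
from scipy.optimize import linprog

B = [np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]]),
     np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]]),
     np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])]

# Target array: any convex combination of the B_k gives a feasible target.
A = 0.2 * B[0] + 0.5 * B[1] + 0.3 * B[2]

d = [np.max(np.abs(Bk - A)) for Bk in B]          # d-infinity distances

# Equality constraints: every cell of A matched in expectation,
# plus the probabilities summing to one.
A_eq = np.vstack([np.column_stack([Bk.ravel() for Bk in B]),
                  np.ones((1, len(B)))])
b_eq = np.append(A.ravel(), 1.0)

res = linprog(c=d, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(B), method="highs")
print(np.round(res.x, 3))   # selection probabilities p(B_k)
```

In this toy case the matching constraints already determine the probabilities uniquely; in the larger problems of Sections 2 and 6 many candidate arrays are feasible, and the distance-weighted objective is what steers probability toward optimum arrays.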

The solutions from the different algorithms for the first three problems are presented in Tables 8.2 through 8.4. Results for Problem 2.4 are only summarized below. (The table of solutions to this problem is available on request.) In Table 8.2, the method of Sitter and Skinner (1994) and Jessen’s (1970) methods 2 and 3 are denoted by $SS$, $J2$ and $J3$, respectively. The solutions for $J2$ and $J3$ in the table are from Jessen (1970, page 782). The table shows that all methods except Jessen’s (1970) method 3 yield the same solution for Problem 2.1. In that common solution, the probability of selection for the optimum arrays, denoted by ${\sum }_{{B}_{k}\in {\mathfrak{B}}_{\infty }}p\left({B}_{k}\right)$, is 0.5.

Table 8.2
Comparison of solutions to Problem 2.1
Table summary
This table displays the results of Comparison of solutions to Problem 2.1. The information is grouped by ${B}_{k}$ (appearing as row headers), $p\left({B}_{k}\right)$ (appearing as column headers).
${B}_{k}$ $p\left({B}_{k}\right)$
${N}_{2}$ ${N}_{\infty }$ $SS$ $J2$ $J3$
$\begin{array}{ccc}0& 1& 1\\ 1& 0& 1\\ 1& 1& 0\end{array}$ 0.2 0.2 0.2 0.2 0.1
$\begin{array}{ccc}1& 0& 1\\ 1& 1& 0\\ 0& 1& 1\end{array}$ * 0.5 0.5 0.5 0.5 0.4
$\begin{array}{ccc}1& 1& 0\\ 0& 1& 1\\ 1& 0& 1\end{array}$ 0.3 0.3 0.3 0.3 0.2
$\begin{array}{ccc}0& 1& 1\\ 1& 1& 0\\ 1& 0& 1\end{array}$         0.1
$\begin{array}{ccc}1& 0& 1\\ 0& 1& 1\\ 1& 1& 0\end{array}$         0.1
$\begin{array}{ccc}1& 1& 0\\ 1& 0& 1\\ 0& 1& 1\end{array}$         0.1
Total 1.0 1.0 1.0 1.0 1.0
Total* 0.5 0.5 0.5 0.5 0.4

* Optimum array.

In Table 8.3, Jessen’s (1978) method is denoted by $JS$. The solution for $JS$ in the table is from Jessen (1978, pages 375-376). As shown in the table, the new methods using ${d}_{2}^{*}$ and ${d}_{\infty }^{*}$ give the same solution for the $4×4$ array of Problem 2.2; however, only one-half of the arrays in that solution overlap with the arrays in the solutions from the methods of Sitter and Skinner (1994) and Jessen (1978). Also, the Sitter and Skinner and Jessen methods assign a lower probability of 0.6 to the optimum arrays, whereas the new methods allocate a higher probability of 0.8 to them.

Table 8.3
Comparison of solutions to Problem 2.2
Table summary
This table displays the results of Comparison of solutions to Problem 2.2. The information is grouped by ${B}_{k}$ (appearing as row headers), $p\left({B}_{k}\right)$ (appearing as column headers).
${B}_{k}$ $p\left({B}_{k}\right)$
${N}_{2}$ ${N}_{\infty }$ $SS$ $JS$
$\begin{array}{cccc}0& 0& 1& 1\\ 0& 1& 0& 1\\ 1& 1& 0& 0\\ 1& 0& 1& 0\end{array}$ 0.2 0.2
$\begin{array}{cccc}0& 0& 1& 1\\ 1& 1& 0& 0\\ 0& 0& 1& 1\\ 1& 1& 0& 0\end{array}$ * 0.2 0.2 0.4 0.2
$\begin{array}{cccc}0& 1& 1& 0\\ 1& 0& 0& 1\\ 0& 0& 1& 1\\ 1& 1& 0& 0\end{array}$ * 0.2 0.2
$\begin{array}{cccc}0& 1& 1& 0\\ 1& 0& 1& 0\\ 1& 0& 0& 1\\ 0& 1& 0& 1\end{array}$ * 0.4 0.4 0.2 0.4
$\begin{array}{cccc}0& 1& 1& 0\\ 0& 0& 1& 1\\ 1& 1& 0& 0\\ 1& 0& 0& 1\end{array}$     0.2
$\begin{array}{cccc}0& 1& 1& 0\\ 1& 0& 0& 1\\ 1& 0& 0& 1\\ 0& 1& 1& 0\end{array}$     0.2
$\begin{array}{cccc}0& 1& 1& 0\\ 1& 0& 0& 1\\ 0& 1& 0& 1\\ 1& 0& 1& 0\end{array}$       0.2
$\begin{array}{cccc}0& 0& 1& 1\\ 0& 1& 0& 1\\ 1& 0& 1& 0\\ 1& 1& 0& 0\end{array}$       0.2
Total 1.0 1.0 1.0 1.0
Total* 0.8 0.8 0.6 0.6

* Optimum array.

Problem 2.3, with 141 possible arrays, is considerably larger than the above two problems. The solutions to this problem under the five methods are compared in Table 8.4. In the table, the methods of Causey et al. (1985) and Huang and Lin (1998) are denoted by $CA$ and $HU$ , respectively. The solutions for $CA$ and $HU$ in the table are from Causey et al. (1985, page 906) and Huang and Lin (1998, Figure 3), respectively.

Table 8.4
Comparison of solutions to Problem 2.3
Table summary
This table displays the results of Comparison of solutions to Problem 2.3. The information is grouped by ${B}_{k}$ (appearing as row headers), $p\left({B}_{k}\right)$ (appearing as column headers).
${B}_{k}$ $p\left({B}_{k}\right)$
${N}_{2}$ ${N}_{\infty }$ $SS$ $CA$ $HU$
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 2& 0& 0\\ 1& 1& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$ 0.2 0.2 0.2
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 0& 1\\ 1& 0& 1\\ 0& 0& 0\\ 0& 1& 0\\ 0& 0& 0\end{array}$ 0.1 0.2 0.03
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 1& 0\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$ 0.1
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 0& 0\\ 1& 0& 1\\ 0& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}$ 0.1
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 0& 1\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}$ 0.1
$\begin{array}{ccc}1& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 1& 0& 0\\ 1& 1& 0\\ 0& 0& 1\\ 0& 0& 1\\ 0& 0& 0\end{array}$ * 0.1   0.08
$\begin{array}{ccc}1& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 1& 1& 0\\ 1& 1& 0\\ 0& 0& 1\\ 0& 0& 0\\ 0& 0& 0\end{array}$ * 0.3 0.4 0.2 0.4 0.4
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 0& 0\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 1\end{array}$   0.2
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 0& 0\\ 1& 0& 0\\ 0& 1& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}$     0.11
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 0& 1\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$     0.03
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 0& 1\\ 1& 1& 0\\ 0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}$     0.03
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 1& 0\\ 1& 0& 1\\ 0& 0& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$     0.09
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 1& 0\\ 1& 0& 1\\ 0& 0& 1\\ 0& 0& 0\\ 0& 0& 0\end{array}$     0.08
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 1& 0\\ 1& 1& 0\\ 0& 0& 1\\ 0& 0& 0\\ 0& 0& 0\end{array}$     0.03
$\begin{array}{ccc}1& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 1& 0& 1\\ 1& 0& 0\\ 0& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}$     0.06
$\begin{array}{ccc}1& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 1& 0& 1\\ 1& 1& 0\\ 0& 1& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}$     0.06
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 2& 0& 0\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 1\end{array}$       0.2
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 1& 0& 0\\ 1& 0& 0\\ 1& 0& 1\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$       0.2 0.2
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 0& 1\\ 1& 1& 0\\ 0& 0& 0\\ 0& 1& 0\\ 0& 0& 0\end{array}$       0.2
$\begin{array}{ccc}0& 2& 0\\ 1& 0& 1\\ 0& 0& 0\\ 2& 0& 0\\ 1& 1& 0\\ 0& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}$         0.2
$\begin{array}{ccc}0& 2& 0\\ 2& 0& 1\\ 0& 0& 0\\ 1& 0& 1\\ 1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}$         0.2
Total 1.0 1.0 1.0 1.0 1.0
Total* 0.4 0.4 0.28 0.4 0.4

* Optimum array.

We note that all these methods provide different solutions, and only about half of the arrays overlap between the new methods and the method of Sitter and Skinner (1994). Moreover, the solutions from the methods of Causey et al. (1985) and Huang and Lin (1998) are quite unlike the solution from the method using ${d}_{\infty }^{*}$. The method using ${d}_{2}^{*}$ and Sitter and Skinner’s method distribute the probability of selection across two optimum arrays, whereas the other three methods allocate probability to only one optimum array. Sitter and Skinner’s method appears to be less effective in selecting optimum arrays, since it gives them a total probability of only 0.28, while the other methods give the higher probability of 0.4.

The solutions to Problem 2.4, the largest of the given problems, are compared under four methods ( ${N}_{2}$, ${N}_{\infty }$, $SS$, and Winkler’s (2001) method). Only two arrays, including one optimum array, overlap in the solutions, and the two new methods give these arrays the same probabilities (0.127 and 0.483). The solution from the method using ${d}_{\infty }^{*}$ is also very different from the solutions of Sitter and Skinner (1994) and Winkler (2001). Moreover, the new methods give the optimum array the same probability of selection, 0.483, whereas those two previous methods give the lower probabilities of 0.385 and 0.104, respectively.

In summary, the new methods appear to successfully achieve criteria S1 and S2 for optimal solutions. Note that the new methods consistently give higher probabilities of selection to optimum arrays, and that the totals of those probabilities are always the same for the two methods. When the controlled selection problems are not small, the solutions from the new methods differ greatly from those obtained using previous methods. This implies that the solutions from the previous methods may be far from optimal under criteria S1 and S2 (R1 and R2).
