
Lesson 11: Response Surface Methods and Designs

We are now going to shift from screening designs, where the primary focus of previous lessons was factor screening (two-level factorials and fractional factorials being widely used), to trying to optimize an underlying process and look for the factor level combinations that give us the maximum yield and minimum cost. In many applications, this is our goal. However, in some cases we are trying to hit a target or to match some given specifications, but this brings up other issues which we will get to later.

Here the objective of Response Surface Methods (RSM) is optimization, finding the best set of factor levels to achieve some goal. This lesson aims to cover the following goals:

The text has a graphic depicting a response surface method in three dimensions, though actually it is a four-dimensional space that is being represented, since the three factors lie in 3-dimensional space and the response is the 4th dimension.

Instead, let's look at 2 dimensions - this is easier to think about and visualize. There is a response surface, and we will imagine the ideal case where there is actually a 'hill' with a nice centered peak. (If only reality were so nice, but it usually isn't!) Consider the geologic ridges that exist here in central Pennsylvania: the optimum, or highest part of the 'hill', might be anywhere along a ridge. There is no clearly defined high point or peak that stands out. In that case there would be a whole range of values of (x_{1}) and (x_{2}) that all describe the same 'peak' - the points lying along the top of the ridge. This type of situation is quite realistic; often there does not exist a single predominant optimum.

But for our purposes let's think of this ideal 'hill', where the problem is that you don't know where it is and you want to find factor level values where the response is at its peak. This is your quest: to find the values (x_{1}^{optimum}) and (x_{2}^{optimum}) where the response is at its peak. You might have a hunch that the optimum exists in a certain location. This would be a good area to start - some set of conditions, perhaps the way that the factory has always been doing things - and then perform an experiment at this starting point.

The actual variables in their natural units of measurement are used in the experiment. However, when we design our experiment we will use our coded variables, (x_{1}) and (x_{2}) which will be centered on 0, and extend +1 and -1 from the center of the region of experimentation. Therefore, we will take our natural units and then center and rescale them to the range from -1 to +1.
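The centering-and-rescaling step can be sketched in a few lines of Python; the function name and the 30-40 example range are just for illustration:

```python
def to_coded(natural, low, high):
    """Map a natural-units value onto the coded -1..+1 scale,
    centered on the middle of the region of experimentation."""
    center = (low + high) / 2
    half_range = (high - low) / 2
    return (natural - center) / half_range

# Example: a factor run from 30 to 40 in natural units
assert to_coded(35, 30, 40) == 0.0    # center codes to 0
assert to_coded(40, 30, 40) == 1.0    # high level codes to +1
assert to_coded(30, 30, 40) == -1.0   # low level codes to -1
```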

Our goal is to start somewhere using our best prior or current knowledge and search for the optimum spot where the response is either maximized or minimized.

Here are the models that we will use.

**Screening Response Model**

(y = beta_{0} + beta_{1} x_{1} + beta_{2} x_{2} + beta_{12} x_{1} x_{2} + varepsilon)

The screening model that we used for the first order situation involves linear effects and a single cross product factor, which represents the linear x linear interaction component.

**Steepest Ascent Model**

If we ignore the cross-product term, which provides an indication of curvature of the fitted response surface, and fit just the first-order model, we have what is called the steepest ascent model:

(y=beta_{0} + beta_{1} x_{1} + beta_{2} x_{2} + varepsilon)

**Optimization Model**

Then, when we think that we are somewhere near the 'top of the hill', we will fit a second-order model. In addition to the screening model terms, this includes the two quadratic terms.

(y=beta_{0} + beta_{1} x_{1} + beta_{2} x_{2} + beta_{12} x_{1} x_{2} + beta_{11} x_{1}^2 + beta_{22} x_{2}^2 + varepsilon)

Let's look at the first-order situation - the method of steepest ascent. Now, remember, in the first place we don't know if the 'hill' even exists, so we will start somewhere where we think the optimum exists. We start somewhere in terms of the natural units and use the coded units to do our experiment. Consider Example 11.1 in the textbook. We want to start in the region where (x_{1} = ) reaction time (30 - 40 minutes) and (x_{2} = ) temperature (150 - 160 degrees), and we want to look at the yield of the process as a function of these factors. In a sense, for the purpose of illustrating this concept, we can superimpose this region of experimentation onto our plot of our unknown 'hill'. We obviously conduct the experiment in its natural units, but the designs will be specified in the coded units so we can apply them to any situation.

Specifically, here we use a design with four corner points, a (2^2) design and five center points. We now fit this first-order model and investigate it.
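As a sketch, the nine coded design points just described (four corners of the (2^2) factorial plus five replicated center points) can be generated as:

```python
from itertools import product

# 2^2 factorial corner points in coded units
corners = list(product([-1, 1], repeat=2))   # 4 corner runs

# Five replicated runs at the center of the region
centers = [(0, 0)] * 5

design = corners + centers
assert len(design) == 9                      # 4 corners + 5 centers
```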

We put in the actual data for A and B and the response measurements Y.

We fit a full model first (see Ex-11-1-output.doc).

We fit the surface. The model has two main effects, one cross-product term, and then one additional parameter as the mean for the center points. The residuals in this case have four df, which come from replication of the center points: with five center points, there are four df among them. This is a measure of pure error.

We start by testing for curvature. The question is whether the mean of the center points is different from the value at ((x_{1}, x_{2}) = (0, 0)) predicted from the screening response model (main effects plus interaction). We are testing whether the mean of the points at the center lies on the plane fit by the four corner points. If the p-value had been small, this would have told you that the mean of the center points is above or below the plane, indicating curvature in the response surface. The fact that, in this case, it is not significant indicates there is no curvature: the center points fall essentially on the plane that fits the corner points.

There is just one degree of freedom for this test because the design only has one additional location in terms of the *x*'s.
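The single-df curvature test compares the mean of the corner points with the mean of the center points, judged against pure error. A minimal sketch, using illustrative response values rather than the textbook data:

```python
# Hypothetical yields, for illustration only
corner_y = [39.3, 40.0, 40.9, 41.5]          # 2^2 factorial runs
center_y = [40.3, 40.5, 40.7, 40.2, 40.6]    # replicated center runs

nF, nC = len(corner_y), len(center_y)
ybarF = sum(corner_y) / nF
ybarC = sum(center_y) / nC

# Single-df sum of squares for curvature
ss_curv = nF * nC * (ybarF - ybarC) ** 2 / (nF + nC)

# Pure error from the replicated center points (nC - 1 df)
ss_pe = sum((y - ybarC) ** 2 for y in center_y)
ms_pe = ss_pe / (nC - 1)

f_curvature = ss_curv / ms_pe   # compare to an F(1, nC - 1) distribution
```

A small F statistic (as with these illustrative numbers) means the center points lie essentially on the plane fitted by the corner points, i.e., no detectable curvature.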

Next we check for significant effects of the factors. We see from the ANOVA that there is no interaction. So let's refit this model without the interaction term, leaving just the A and B terms. We still have the average of the center points, and our ANOVA now shows 5 df for residual error. One of these is lack of fit of the additive model, and there are 4 df of pure error as before. We have 1 df for curvature, and lack of fit in this case is just the interaction term dropped from the model.

What do we do with this? See the Minitab analysis and reproduce these results using EX11-1.mpx and Ex11-1.csv.

Our estimated model is: (hat{y} = 40.34 + 0.775x_{1} + 0.325x_{2})

So, for any (x_{1}) and (x_{2}) we can predict (y). This fits a flat surface and it tells us that the predicted (y) is a function of (x_{1}) and (x_{2}) and the coefficients are the gradient of this function. We are working in coded variables at this time so these coefficients are unitless.

If we move 0.775 in the direction of (x_{1}) and then 0.325 in the direction of (x_{2}) this is the direction of steepest ascent. All we know is that this flat surface is one side of the 'hill'.

The method of steepest ascent tells us to do a first-order experiment, find the direction in which the 'hill' goes up, and start marching up the hill, taking additional measurements at each ((x_{1}, x_{2})) until the response starts to decrease. If we start at 0, in coded units, then we can do a series of single experiments on this path of steepest ascent up the 'hill'. If we do this with a step size of (x_{1} = 1), then:

(1 / 0.775 = x_{2} / 0.325 rightarrow x_{2} = 0.325 / 0.775 = 0.42 )

and thus our step size of (x_{1} = 1) determines that (x_{2} = 0.42), in order to move in the direction determined to be the steepest ascent. If we take steps of 1 in coded units, this would be five minutes in terms of the time units. And for each step along that path, we would go up 0.42 coded units in (x_{2}) or approximately 2º on the temperature scale.
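The step calculation above can be sketched in Python; the variable names and the 13-step path length are illustrative:

```python
# Coefficients of the fitted first-order model (coded units)
b1, b2 = 0.775, 0.325

step_x1 = 1.0                  # chosen step size in coded x1
step_x2 = b2 / b1 * step_x1    # keeps each move along the gradient

# Converting coded steps to natural units for this example:
# 1 coded unit of x1 = 5 minutes, 1 coded unit of x2 = 5 degrees
time0, temp0 = 35, 155         # center of the original region
path = [(time0 + 5 * k * step_x1, temp0 + 5 * k * step_x2)
        for k in range(13)]    # origin plus 12 steps up the hill
```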

Steps | Coded (x_1) | Coded (x_2) | Natural (xi_1) | Natural (xi_2) | Response (y)
---|---|---|---|---|---
Origin | 0 | 0 | 35 | 155 |
(Delta) | 1.00 | 0.42 | 5 | 2 |
Origin + (Delta) | 1.00 | 0.42 | 40 | 157 | 41.0
Origin + (2Delta) | 2.00 | 0.84 | 45 | 159 | 42.9
Origin + (3Delta) | 3.00 | 1.26 | 50 | 161 | 47.1
Origin + (4Delta) | 4.00 | 1.68 | 55 | 163 | 49.7
Origin + (5Delta) | 5.00 | 2.10 | 60 | 165 | 53.8
Origin + (6Delta) | 6.00 | 2.52 | 65 | 167 | 59.9
Origin + (7Delta) | 7.00 | 2.94 | 70 | 169 | 65.0
Origin + (8Delta) | 8.00 | 3.36 | 75 | 171 | 70.4
Origin + (9Delta) | 9.00 | 3.78 | 80 | 173 | 77.6
Origin + (10Delta) | 10.00 | 4.20 | 85 | 175 | 80.3
Origin + (11Delta) | 11.00 | 4.62 | 90 | 179 | 76.2
Origin + (12Delta) | 12.00 | 5.04 | 95 | 181 | 75.1

Table 11-3: Steepest Ascent Experiment for Example 11-1

Here is the series of steps, in increments of five minutes of time and 2º of temperature. The response is plotted and shows an increase that drops off toward the end.

This is a pretty smooth curve and in reality, you probably should go a little bit more beyond the peak to make sure you are at the peak. But all you are trying to do is to find out approximately where the top of the 'hill' is. If your first experiment is not exactly right you might have gone off in the wrong direction!

So you might want to do another first-order experiment just to be sure. Or, you might wish to do a second order experiment, assuming you are near the top. This is what we will discuss in the next section. The second order experiment will help find a more exact location of the peak.

The point is, this is a fairly cheap way to 'scout around the mountain' to try to find where the optimum conditions are. Remember, this example is being shown in two dimensions but you may be working in three or four-dimensional space! You can use the same method, fitting a first-order model and then moving up the response surface in *k* dimensional space until you think you are close to where the optimal conditions are.

If you are in more than 2 dimensions, you will not be able to get a nice plot. But that is OK. The method of steepest ascent tells you where to take new measurements, and you will know the response at those points. You might move a few steps and you may see that the response continued to move up or perhaps not - then you might do another first order experiment and redirect your efforts. The point is, when we do the experiment for the second order model, we hope that the optimum will be in the range of the experiment - if it is not, we are extrapolating to find the optimum. In this case, the safest thing to do is to do another experiment around this estimated optimum. Since the experiment for the second order model requires more runs than experiments for the first order model, we want to move into the right region *before* we start fitting second-order models.

(y = beta_{0} + beta_{1} x_{1} + beta_{2} x_{2} + beta_{12} x_{1} x_{2} + beta_{11} x_{1}^2 + beta_{22} x_{2}^2 + varepsilon)

This second-order model includes linear terms, cross-product terms, and a second-order (quadratic) term for each of the x's. If we generalize this to (k) x's, we have (k) first-order terms, (k) quadratic terms, and then all possible pairwise interactions of the linear terms. The linear terms have one subscript; the quadratic terms have two subscripts. There are (k(k-1)/2) interaction terms. To fit this model, we are going to need a response surface design that has more runs than the first-order designs used to move close to the optimum.
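As a quick check of this term count, a small helper (hypothetical, for illustration) totals the parameters of the full quadratic model:

```python
def n_quadratic_terms(k):
    """Intercept + k linear + k pure quadratic + k(k-1)/2 interactions."""
    return 1 + k + k + k * (k - 1) // 2

assert n_quadratic_terms(2) == 6    # matches the 6-parameter model above
assert n_quadratic_terms(3) == 10
```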

This second order model is the basis for response surface designs under the assumption that although the hill is not a perfect quadratic polynomial in *k* dimensions, it provides a good approximation to the surface near the maximum or a minimum.

Assuming that we have 'marched up this hill', we re-specify the region of interest in our example: we are now between 80 and 90 in terms of time and 170 and 180 in terms of temperature. We would now translate these natural units into our coded units, and if we fit the first-order model again, hopefully we can detect that the middle is higher than the corner points, so we would have curvature in our model and could now fit a quadratic polynomial.

After using the steepest ascent method to find the approximate optimum location in terms of our factors, we can now go directly to the second-order response surface design. A favorite design that we consider is sometimes referred to as a central composite design. The central composite design is shown in Figure 11.3 above and in more detail in the text in Figure 11.10. The idea is simple: take the (2^k) corner points, add a center point, and then create a star by drawing a line through the center point orthogonal to each face of the hypercube. Pick a radius along each such line and place a new point at that radius. The effect is that each factor is now measured at 5 levels: the center, the 2 corners, and the 2 star points. This gives us plenty of unique treatments to fit the second-order model, with treatment degrees of freedom left over to test the goodness of fit. Replication is still usually done only at the center point.
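A minimal sketch of generating CCD points, assuming coded units and treating the axial radius alpha as a user choice:

```python
from itertools import product

def ccd_points(k, alpha, n_center=5):
    """Central composite design in k factors (coded units):
    2^k corners, 2k axial (star) points at radius alpha, n_center centers."""
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    star = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s            # one factor at +/- alpha, rest at 0
            star.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + star + centers

design = ccd_points(2, alpha=2 ** 0.5)   # rotatable alpha for k=2
assert len(design) == 4 + 4 + 5
```

Note that each factor indeed takes five distinct levels: -alpha, -1, 0, +1, +alpha.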

Upon successful completion of this lesson, you should be able to:

- Understand Response Surface Methodology and its sequential nature for optimizing a process
- Fit first-order and second-order response surface models and find the direction of steepest ascent (or descent) to maximize (or minimize) the response
- Deal with several responses simultaneously (Multiple Response Optimization)
- Use Central Composite Designs (CCD) and Box-Behnken Designs, two of the major response surface designs, and generate them using Minitab
- Design and analyze mixture designs for cases where the sum of the factor levels equals a constant, i.e., 100% or the totality of the components
- Gain an introductory understanding of designs for computer models

RSM dates from the 1950s. Early applications were found in the chemical industry. We have already talked about Box; Box and Draper have some wonderful references about building and analyzing response surface models which are very useful.

11.1 - Multiple Responses

In many experiments more than one response is of interest to the experimenter. Furthermore, we sometimes want to find a setting of the controllable factors which results in the best possible value for each response. This is the context of multiple response optimization, where we seek a compromise between the responses; it is not always possible to find a setting of the controllable factors which optimizes all of the responses simultaneously. Multiple response optimization has an extensive literature in the context of multiple objective optimization, which is beyond the scope of this course. Here, we will discuss the basic steps in this area.

As expected, multiple response analysis starts with building a regression model for each response separately. For instance, in Example 11.2 we can fit a separate regression model for each of the three responses, Yield, Viscosity, and Molecular Weight, based on two controllable factors: Time and Temperature.

One of the traditional ways to analyze and find the desired operating condition is **overlaid contour plots**. This method is mainly useful when we have two or maybe three controllable factors; in higher dimensions it loses its efficiency. It simply consists of overlaying the contour plot for each of the responses, one over another, in the controllable-factor space and finding the region which gives the best possible value for each of the responses. Figure 11.16 (Montgomery, 7th Edition) shows the overlaid contour plots for Example 11.2 in Time and Temperature space.

The unshaded area is where *yield* > 78.5, 62 < *viscosity* < 68, and *molecular weight* < 3400. This area may be of special interest to the experimenter because points within it satisfy the given conditions on all three responses.

Another common approach for dealing with multiple response optimization is to form a **constrained optimization problem**. In this approach we treat one of the responses as the objective of a constrained optimization problem and the other responses as the constraints, where the constraint boundaries are set by the decision maker (DM). The Design-Expert software package solves this approach using a direct search method.

Another important procedure that we will discuss here, also implemented in Minitab, is the **desirability function** approach. In this approach the value of each response for a given combination of controllable factors is first translated to a number between zero and one known as the **individual desirability**. Individual desirability functions differ by objective type, which might be Maximize, Minimize, or Target. If the objective is to maximize the response, the desirability function is defined as

\(d=\begin{cases} 0 & y<L \\ \left(\dfrac{y-L}{T-L}\right)^r & L\leq y\leq T \\ 1 & y>T \end{cases}\)

When the objective is to minimize the response, the individual desirability is defined as

\(d=\begin{cases} 1 & y<T \\ \left(\dfrac{U-y}{U-T}\right)^r & T\leq y\leq U \\ 0 & y>U \end{cases}\)

Finally, the two-sided desirability function for the target-the-best objective is defined as

\(d=\begin{cases} 0 & y<L \\ \left(\dfrac{y-L}{T-L}\right)^{r_1} & L\leq y\leq T \\ \left(\dfrac{U-y}{U-T}\right)^{r_2} & T\leq y\leq U \\ 0 & y>U \end{cases}\)

where (r_1), (r_2), and (r) define the shape of the individual desirability function (Figure 11.17 in the text shows the shape of the individual desirability for different values of the shape parameter). The individual desirabilities are then used to calculate the overall desirability using the following formula:

(D=(d_1 d_2 ldots d_m)^{1/m})

where m is the number of responses. Now, the design variables should be chosen so that the overall desirability will be maximized. Minitab’s `Stat` > `DOE` > `Response Surface` > `Response Optimizer` routine uses the desirability approach to optimize several responses, simultaneously.
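The desirability calculations above can be sketched as follows; the function names and the example numbers are illustrative, not Minitab's implementation:

```python
def d_maximize(y, L, T, r=1.0):
    """Larger-is-better desirability: 0 below L, 1 at or above target T."""
    if y < L:
        return 0.0
    if y > T:
        return 1.0
    return ((y - L) / (T - L)) ** r

def d_minimize(y, T, U, r=1.0):
    """Smaller-is-better desirability: 1 at or below target T, 0 above U."""
    if y < T:
        return 1.0
    if y > U:
        return 0.0
    return ((U - y) / (U - T)) ** r

def overall_desirability(ds):
    """Geometric mean of the m individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative values only (not the Example 11.2 solution):
D = overall_desirability([d_maximize(79, L=78.5, T=80),
                          d_minimize(65, T=62, U=68)])
```

Because the overall desirability is a geometric mean, any single response with zero desirability drives the overall score to zero, which is exactly the intended behavior for an unacceptable compromise.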

