Engineering Optimization

Applications, Methods and Analysis
Wiley-ASME Press Series, 1st edition

By: R. Russell Rhinehart

101,99 €

Publisher: Wiley
Format: PDF
Published: 14.03.2018
ISBN/EAN: 9781118936313
Language: English
Number of pages: 776

DRM-protected eBook: to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

An Application-Oriented Introduction to Essential Optimization Concepts and Best Practices

Optimization is an inherent human tendency that gained new life after the advent of calculus; now, as the world grows increasingly reliant on complex systems, optimization has become both more important and more challenging than ever before. Engineering Optimization provides a practically focused introduction to modern engineering optimization best practices, covering fundamental analytical and numerical techniques throughout each stage of the optimization process.

Although essential algorithms are explained in detail, the focus is on the human side of optimization: how to create an appropriate objective function, choose decision variables, identify and incorporate constraints, define convergence, and handle the other critical issues that determine the success or failure of an optimization project.

Examples, exercises, and homework throughout reinforce the author's "do, not study" approach to learning, underscoring an application-oriented discussion that provides a deep, generic understanding of the optimization process applicable to any field.

An excellent reference for students and professionals, Engineering Optimization:

- Describes and develops a variety of algorithms, including gradient-based methods (such as Newton's and Levenberg-Marquardt), direct search methods (such as Hooke-Jeeves, Leapfrogging, and Particle Swarm), and surrogate functions for surface characterization (see the illustrative sketch after this description)
- Provides guidance on optimizer choice by application, and explains how to determine appropriate optimizer parameter values
- Details current best practices for critical stages of specifying an optimization procedure, including choosing decision variables, defining constraints, and modeling relationships
- Provides access to software and Visual Basic macros for Excel on the companion website, along with solutions to examples presented in the book

Clear explanations, explicit equation derivations, and practical examples make this book ideal for use in a class or for self-study, assuming a basic understanding of statistics, calculus, computer programming, and engineering models. Anyone seeking best practices for "making the best choices" will find value in this introductory resource.
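To give a flavor of the univariate direct search techniques the book develops (the golden section method appears in Section 4.4.2 of the contents below), here is a minimal, illustrative Python sketch. It is not the book's code, which is supplied as Visual Basic macros for Excel on the companion website, and the quadratic test function is an assumption chosen purely for demonstration.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Shrink the bracket [a, b] around the minimum of a unimodal f,
    placing interior points at golden ratios so one evaluation is
    reused at every iteration."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    x1 = b - invphi * (b - a)              # lower interior point
    x2 = a + invphi * (b - a)              # upper interior point
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

# Hypothetical objective function: minimum of (x - 2)^2 + 1 is at x = 2.
print(golden_section_minimize(lambda x: (x - 2.0)**2 + 1.0, 0.0, 5.0))
```

Each iteration discards about 38% of the bracket while reusing one of the two interior evaluations, which is why this class of method is efficient for a single decision variable (DV) and needs no derivatives.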
Contents

Preface xix
Acknowledgments xxvii
Nomenclature xxix
About the Companion Website xxxvii

Section 1 Introductory Concepts 1

1 Optimization: Introduction and Concepts 3
1.1 Optimization and Terminology 3
1.2 Optimization Concepts and Definitions 4
1.3 Examples 6
1.4 Terminology Continued 10
1.4.1 Constraint 10
1.4.2 Feasible Solutions 10
1.4.3 Minimize or Maximize 11
1.4.4 Canonical Form of the Optimization Statement 11
1.5 Optimization Procedure 12
1.6 Issues That Shape Optimization Procedures 16
1.7 Opposing Trends 17
1.8 Uncertainty 20
1.9 Over- and Under-specification in Linear Equations 21
1.10 Over- and Under-specification in Optimization 22
1.11 Test Functions 23
1.12 Significant Dates in Optimization 23
1.13 Iterative Procedures 26
1.14 Takeaway 27
1.15 Exercises 27

2 Optimization Application Diversity and Complexity 33
2.1 Optimization 33
2.2 Nonlinearity 33
2.3 Min, Max, Min–Max, Max–Min, … 34
2.4 Integers and Other Discretization 35
2.5 Conditionals and Discontinuities: Cliffs, Ridges/Valleys 36
2.6 Procedures, Not Equations 37
2.7 Static and Dynamic Models 38
2.8 Path Integrals 38
2.9 Economic Optimization and Other Nonadditive Cost Functions 38
2.10 Reliability 39
2.11 Regression 40
2.12 Deterministic and Stochastic 42
2.13 Experimental w.r.t. Modeled OF 43
2.14 Single and Multiple Optima 44
2.15 Saddle Points 45
2.16 Inflections 46
2.17 Continuum and Discontinuous DVs 47
2.18 Continuum and Discontinuous Models 47
2.19 Constraints and Penalty Functions 48
2.20 Ranks and Categorization: Discontinuous OFs 50
2.21 Underspecified OFs 51
2.22 Takeaway 51
2.23 Exercises 51

3 Validation: Knowing That the Answer Is Right 53
3.1 Introduction 53
3.2 Validation 53
3.3 Advice on Becoming Proficient 55
3.4 Takeaway 56
3.5 Exercises 57

Section 2 Univariate Search Techniques 59

4 Univariate (Single DV) Search Techniques 61
4.1 Univariate (Single DV) 61
4.2 Analytical Method of Optimization 62
4.2.1 Issues with the Analytical Approach 63
4.3 Numerical Iterative Procedures 64
4.3.1 Newton's Methods 64
4.3.2 Successive Quadratic (A Surrogate Model or Approximating Model Method) 68
4.4 Direct Search Approaches 70
4.4.1 Bisection Method 70
4.4.2 Golden Section Method 72
4.4.3 Perspective at This Point 74
4.4.4 Heuristic Direct Search 74
4.4.5 Leapfrogging 76
4.4.6 LF for Stochastic Functions 79
4.5 Perspectives on Univariate Search Methods 82
4.6 Evaluating Optimizers 85
4.7 Summary of Techniques 85
4.7.1 Analytical Method 86
4.7.2 Newton's (and Variants Like Secant) 86
4.7.3 Successive Quadratic 86
4.7.4 Golden Section Method 86
4.7.5 Heuristic Direct 87
4.7.6 Leapfrogging 87
4.8 Takeaway 87
4.9 Exercises 88

5 Path Analysis 93
5.1 Introduction 93
5.2 Path Examples 93
5.3 Perspective About Variables 96
5.4 Path Distance Integral 97
5.5 Accumulation along a Path 99
5.6 Slope along a Path 101
5.7 Parametric Path Notation 103
5.8 Takeaway 104
5.9 Exercises 104

6 Stopping and Convergence Criteria: 1-D Applications 107
6.1 Stopping versus Convergence Criteria 107
6.2 Determining Convergence 107
6.2.1 Threshold on the OF 108
6.2.2 Threshold on the Change in the OF 108
6.2.3 Threshold on the Change in the DV 108
6.2.4 Threshold on the Relative Change in the DV 109
6.2.5 Threshold on the Relative Change in the OF 109
6.2.6 Threshold on the Impact of the DV on the OF 109
6.2.7 Convergence Based on Uncertainty Caused by the Givens 109
6.2.8 Multiplayer Range 110
6.2.9 Steady-State Convergence 110
6.3 Combinations of Convergence Criteria 111
6.4 Choosing Convergence Threshold Values 112
6.5 Precision 112
6.6 Other Convergence Criteria 113
6.7 Stopping Criteria to End a Futile Search 113
6.7.1 N Iteration Threshold 114
6.7.2 Execution Error 114
6.7.3 Constraint Violation 114
6.8 Choices! 114
6.9 Takeaway 114
6.10 Exercises 115

Section 3 Multivariate Search Techniques 117

7 Multidimension Application Introduction and the Gradient 119
7.1 Introduction 119
7.2 Illustration of Surface and Terms 122
7.3 Some Surface Analysis 123
7.4 Parametric Notation 128
7.5 Extension to Higher Dimension 130
7.6 Takeaway 131
7.7 Exercises 131

8 Elementary Gradient-Based Optimizers: CSLS and ISD 135
8.1 Introduction 135
8.2 Cauchy's Sequential Line Search 135
8.2.1 CSLS with Successive Quadratic 137
8.2.2 CSLS with Newton/Secant 138
8.2.3 CSLS with Golden Section 138
8.2.4 CSLS with Leapfrogging 138
8.2.5 CSLS with Heuristic Direct Search 139
8.2.6 CSLS Commentary 139
8.2.7 CSLS Pseudocode 140
8.2.8 VBA Code for a 2-DV Application 141
8.3 Incremental Steepest Descent 144
8.3.1 Pseudocode for the ISD Method 144
8.3.2 Enhanced ISD 145
8.3.3 ISD Code 148
8.4 Takeaway 149
8.5 Exercises 149

9 Second-Order Model-Based Optimizers: SQ and NR 155
9.1 Introduction 155
9.2 Successive Quadratic 155
9.2.1 Multivariable SQ 156
9.2.2 SQ Pseudocode 159
9.3 Newton–Raphson 159
9.3.1 NR Pseudocode 162
9.3.2 Attenuate NR 163
9.3.3 Quasi-Newton 166
9.4 Perspective on CSLS, ISD, SQ, and NR 168
9.5 Choosing Step Size for Numerical Estimate of Derivatives 169
9.6 Takeaway 170
9.7 Exercises 170

10 Gradient-Based Optimizer Solutions: LM, RLM, CG, BFGS, RG, and GRG 173
10.1 Introduction 173
10.2 Levenberg–Marquardt (LM) 173
10.2.1 LM VBA Code for a 2-DV Case 175
10.2.2 Modified LM (RLM) 176
10.2.3 RLM Pseudocode 177
10.2.4 RLM VBA Code for a 2-DV Case 178
10.3 Scaled Variables 180
10.4 Conjugate Gradient (CG) 182
10.5 Broyden–Fletcher–Goldfarb–Shanno (BFGS) 183
10.6 Generalized Reduced Gradient (GRG) 184
10.7 Takeaway 186
10.8 Exercises 186

11 Direct Search Techniques 187
11.1 Introduction 187
11.2 Cyclic Heuristic Direct (CHD) Search 188
11.2.1 CHD Pseudocode 188
11.2.2 CHD VBA Code 189
11.3 Hooke–Jeeves (HJ) 192
11.3.1 HJ Code in VBA 195
11.4 Compare and Contrast CHD and HJ Features: A Summary 197
11.5 Nelder–Mead (NM) Simplex: Spendley, Hext, and Himsworth 199
11.6 Multiplayer Direct Search Algorithms 200
11.7 Leapfrogging 201
11.7.1 Convergence Criteria 208
11.7.2 Stochastic Surfaces 209
11.7.3 Summary 209
11.8 Particle Swarm Optimization 209
11.8.1 Individual Particle Behavior 210
11.8.2 Particle Swarm 213
11.8.3 PSO Equation Analysis 215
11.9 Complex Method (CM) 216
11.10 A Brief Comparison 217
11.11 Takeaway 218
11.12 Exercises 219

12 Linear Programming 223
12.1 Introduction 223
12.2 Visual Representation and Concepts 225
12.3 Basic LP Procedure 228
12.4 Canonical LP Statement 228
12.5 LP Algorithm 229
12.6 Simplex Tableau 230
12.7 Takeaway 231
12.8 Exercises 231

13 Dynamic Programming 233
13.1 Introduction 233
13.2 Conditions 236
13.3 DP Concept 237
13.4 Some Calculation Tips 240
13.5 Takeaway 241
13.6 Exercises 241

14 Genetic Algorithms and Evolutionary Computation 243
14.1 Introduction 243
14.2 GA Procedures 243
14.3 Fitness of Selection 245
14.4 Takeaway 250
14.5 Exercises 250

15 Intuitive Optimization 253
15.1 Introduction 253
15.2 Levels 254
15.3 Takeaway 254
15.4 Exercises 254

16 Surface Analysis II 257
16.1 Introduction 257
16.2 Maximize Is Equivalent to Minimize the Negative 257
16.3 Scaling by a Positive Number Does Not Change DV∗ 258
16.4 Scaled and Translated OFs Do Not Change DV∗ 258
16.5 Monotonic Function Transformation Does Not Change DV∗ 258
16.6 Impact on Search Path or NOFE 261
16.7 Inequality Constraints 263
16.8 Transforming DVs 263
16.9 Takeaway 263
16.10 Exercises 263

17 Convergence Criteria 2: N-D Applications 265
17.1 Introduction 265
17.2 Defining an Iteration 265
17.3 Criteria for Single TS Deterministic Procedures 266
17.4 Criteria for Multiplayer Deterministic Procedures 267
17.5 Stochastic Applications 268
17.7 Takeaway 269
17.8 Exercises 269

18 Enhancements to Optimizers 271
18.1 Introduction 271
18.2 Criteria for Replicate Trials 271
18.3 Quasi-Newton 274
18.4 Coarse–Fine Sequence 275
18.5 Number of Players 275
18.6 Search Range Adjustment 276
18.7 Adjustment of Optimizer Coefficient Values or Options in Process 276
18.8 Initialization Range 277
18.9 OF and DV Transformations 277
18.10 Takeaway 278
18.11 Exercises 278

Section 4 Developing Your Application Statements 279

19 Scaled Variables and Dimensional Consistency 281
19.1 Introduction 281
19.2 A Scaled Variable Approach 283
19.3 Sampling of Issues with Primitive Variables 283
19.4 Linear Scaling Options 285
19.5 Nonlinear Scaling 286
19.6 Takeaway 287
19.7 Exercises 287

20 Economic Optimization 289
20.1 Introduction 289
20.2 Annual Cash Flow 290
20.3 Including Risk as an Annual Expense 291
20.4 Capital 293
20.5 Combining Capital and Nominal Annual Cash Flow 293
20.6 Combining Time Value and Schedule of Capital and Annual Cash Flow 296
20.7 Present Value 297
20.8 Including Uncertainty 298
20.8.1 Uncertainty Models 301
20.8.2 Methods to Include Uncertainty in an Optimization 303
20.9 Takeaway 304
20.10 Exercises 304

21 Multiple OF and Constraint Applications 305
21.1 Introduction 305
21.2 Solution 1: Additive Combinations of the Functions 306
21.2.1 Solution 1a: Classic Weighting Factors 307
21.2.2 Solution 1b: Equal Concern Weighting 307
21.2.3 Solution 1c: Nonlinear Weighting 309
21.3 Solution 2: Nonadditive OF Combinations 311
21.4 Solution 3: Pareto Optimal 311
21.5 Takeaway 316
21.6 Exercises 316

22 Constraints 319
22.1 Introduction 319
22.2 Equality Constraints 320
22.2.1 Explicit Equality Constraints 320
22.2.2 Implicit Equality Constraints 321
22.3 Inequality Constraints 321
22.3.1 Penalty Function: Discontinuous 323
22.3.2 Penalty Function: Soft Constraint 323
22.3.3 Inequality Constraints: Slack and Surplus Variables 325
22.4 Constraints: Pass/Fail Categories 329
22.5 Hard Constraints Can Block Progress 330
22.6 Advice 331
22.7 Constraint-Equivalent Features 332
22.8 Takeaway 332
22.9 Exercises 332

23 Multiple Optima 335
23.1 Introduction 335
23.2 Solution: Multiple Starts 337
23.2.1 A Priori Method 340
23.2.2 A Posteriori Method 342
23.2.3 Snyman and Fatti Criterion A Posteriori Method 345
23.3 Other Options 348
23.4 Takeaway 349
23.5 Exercises 350

24 Stochastic Objective Functions 353
24.1 Introduction 353
24.2 Method Summary for Optimizing Stochastic Functions 356
24.2.1 Step 1: Replicate the Apparent Best Player 356
24.2.2 Step 2: Steady-State Detection 357
24.3 What Value to Report? 358
24.4 Application Examples 359
24.4.1 GMC Control of Hot and Cold Mixing 359
24.4.2 MBC of Hot and Cold Mixing 359
24.4.3 Batch Reaction Management 359
24.4.4 Reservoir and Stochastic Boot Print 361
24.4.5 Optimization Results 362
24.5 Takeaway 365
24.6 Exercises 365

25 Effects of Uncertainty 367
25.1 Introduction 367
25.2 Sources of Error and Uncertainty 368
25.3 Significant Digits 370
25.4 Estimating Uncertainty on Values 371
25.5 Propagating Uncertainty on DV Values 372
25.5.1 Analytical Method 373
25.5.2 Numerical Method 375
25.6 Implicit Relations 378
25.7 Estimating Uncertainty in DV∗ and OF∗ 378
25.8 Takeaway 379
25.9 Exercises 379

26 Optimization of Probable Outcomes and Distribution Characteristics 381
26.1 Introduction 381
26.2 The Concept of Modeling Uncertainty 385
26.3 Stochastic Approach 387
26.4 Takeaway 389
26.5 Exercises 389

27 Discrete and Integer Variables 391
27.1 Introduction 391
27.2 Optimization Solutions 394
27.2.1 Exhaustive Search 394
27.2.2 Branch and Bound 394
27.2.3 Cyclic Heuristic 394
27.2.4 Leapfrogging or Other Multiplayer Search 395
27.3 Convergence 395
27.4 Takeaway 395
27.5 Exercises 395

28 Class Variables 397
28.1 Introduction 397
28.2 The Random Keys Method: Sequence 398
28.3 The Random Keys Method: Dichotomous Variables 400
28.4 Comments 401
28.5 Takeaway 401
28.6 Exercises 401

29 Regression 403
29.1 Introduction 403
29.2 Perspective 404
29.3 Least Squares Regression: Traditional View on Linear Model Parameters 404
29.4 Models Nonlinear in DV 405
29.4.1 Models with a Delay 407
29.5 Maximum Likelihood 408
29.5.1 Akaho's Method 411
29.6 Convergence Criterion 416
29.7 Model Order or Complexity 421
29.8 Bootstrapping to Reveal Model Uncertainty 425
29.8.1 Interpretation of Bootstrapping Analysis 428
29.8.2 Appropriating Bootstrapping 430
29.9 Perspective 431
29.10 Takeaway 431
29.11 Exercises 432

Section 5 Perspective on Many Topics 441

30 Perspective 443
30.1 Introduction 443
30.2 Classifications 443
30.3 Elements Associated with Optimization 445
30.4 Root Finding Is Not Optimization 446
30.5 Desired Engineering Attributes 446
30.6 Overview of Optimizers and Attributes 447
30.6.1 Gradient Based: Cauchy Sequential Line Search, Incremental Steepest Descent, GRG, Etc. 447
30.6.2 Local Surface Characterization Based: Newton–Raphson, Levenberg–Marquardt, Successive Quadratic, RLM, Quasi-Newton, Etc. 448
30.6.3 Direct Search with Single Trial Solution: Cyclic Heuristic, Hooke–Jeeves, and Nelder–Mead 448
30.6.4 Multiplayer Direct Search Optimizers: Leapfrogging, Particle Swarm, and Genetic Algorithms 448
30.7 Choices 448
30.8 Variable Classifications 449
30.8.1 Nominal 449
30.8.2 Ordinal 450
30.8.3 Cardinal 450
30.9 Constraints 451
30.10 Takeaway 453
30.11 Exercises 453

31 Response Surface Aberrations 459
31.1 Introduction 459
31.2 Cliffs (Vertical Walls) 459
31.3 Sharp Valleys (or Ridges) 459
31.4 Striations 463
31.5 Level Spots (Functions 1, 27, 73, 84) 463
31.6 Hard-to-Find Optimum 466
31.7 Infeasible Calculations 468
31.8 Uniform Minimum 468
31.9 Noise: Stochastic Response 469
31.10 Multiple Optima 471
31.11 Takeaway 473
31.12 Exercises 473

32 Identifying the Models, OF, DV, Convergence Criteria, and Constraints 475
32.1 Introduction 475
32.2 Evaluate the Results 476
32.3 Takeaway 482
32.4 Exercises 482

33 Evaluating Optimizers 489
33.1 Introduction 489
33.2 Challenges to Optimizers 490
33.3 Stakeholders 490
33.4 Metrics of Optimizer Performance 490
33.5 Designing an Experimental Test 492
33.6 Takeaway 495
33.7 Exercises 496

34 Troubleshooting Optimizers 499
34.1 Introduction 499
34.2 DV Values Do Not Change 499
34.3 Multiple DV∗ Values for the Same OF∗ Value 499
34.4 EXE Error 500
34.5 Extreme Values 500
34.6 DV∗ Is Dependent on Convergence Threshold 500
34.7 OF∗ Is Irreproducible 501
34.8 Concern over Results 501
34.9 CDF Features 501
34.10 Parameter Correlation 502
34.11 Multiple Equivalent Solutions 504
34.12 Takeaway 504
34.13 Exercises 504

Section 6 Analysis of Leapfrogging Optimization 505

35 Analysis of Leapfrogging 507
35.1 Introduction 507
35.2 Balance in an Optimizer 508
35.3 Number of Initializations to be Confident That the Best Will Draw All Others to the Global Optimum 510
35.3.1 Methodology 511
35.3.2 Experimental 512
35.3.3 Results 513
35.4 Leap-To Window Amplification Analysis 515
35.5 Analysis of α and M to Prevent Convergence on the Side of a Hill 519
35.6 Analysis of α and M to Minimize NOFE 521
35.7 Probability Distribution of Leap-Overs 522
35.7.1 Data 526
35.8 Takeaway 527
35.9 Exercises 528

Section 7 Case Studies 529

36 Case Study 1: Economic Optimization of a Pipe System 531
36.1 Process and Analysis 531
36.1.1 Deterministic Continuum Model 531
36.1.2 Deterministic Discontinuous Model 534
36.1.3 Stochastic Discontinuous Model 535
36.2 Exercises 536

37 Case Study 2: Queuing Study 539
37.1 The Process and Analysis 539
37.2 Exercises 541

38 Case Study 3: Retirement Study 543
38.1 The Process and Analysis 543
38.2 Exercises 550

39 Case Study 4: A Goddard Rocket Study 551
39.1 The Process and Analysis 551
39.2 Pre-Assignment Note 554
39.3 Exercises 555

40 Case Study 5: Reservoir 557
40.1 The Process and Analysis 557
40.2 Exercises 559

41 Case Study 6: Area Coverage 561
41.1 Description and Analysis 561
41.2 Exercises 562

42 Case Study 7: Approximating Series Solution to an ODE 565
42.1 Concepts and Analysis 565
42.2 Exercises 568

43 Case Study 8: Horizontal Tank Vapor–Liquid Separator 571
43.1 Description and Analysis 571
43.2 Exercises 576

44 Case Study 9: In Vitro Fertilization 579
44.1 Description and Analysis 579
44.2 Exercises 583

45 Case Study 10: Data Reconciliation 585
45.1 Description and Analysis 585
45.2 Exercises 588

Section 8 Appendices 591

Appendix A Mathematical Concepts and Procedures 593
Appendix B Root Finding 605
Appendix C Gaussian Elimination 611
Appendix D Steady-State Identification in Noisy Signals 621
Appendix E Optimization Challenge Problems (2-D and Single OF) 635
Appendix F Brief on VBA Programming: Excel in Office 2013 709

Section 9 References and Index 717

References and Additional Resources 719
Index 723
R. Russell Rhinehart is an Emeritus Professor and Amoco Chair in the School of Chemical Engineering at Oklahoma State University. He was named one of InTECH's 50 Most Influential Industry Innovators in 2004 and was inducted into the Automation Hall of Fame for the Process Industries in 2005. His research focuses on process improvement through modeling, optimization, and control, and on product improvement through modeling and design.

You might also be interested in these products:

The IMO Compendium
By: Dusan Djukic, Vladimir Jankovic, Ivan Matic, Nikola Petrovic
PDF eBook
66,99 €
The Fast Solution of Boundary Integral Equations
By: Sergej Rjasanow, Olaf Steinbach
PDF eBook
96,29 €