New Optimization Techniques in Engineering, edited by Godfrey C. Onwubolu and B.V. Babu, surveys general-purpose optimization techniques such as simulated annealing and ant colony optimization (the latter contributed by Vittorio Maniezzo, Luca Maria Gambardella, and Fabio de Luigi).


Part I of the book covers the new optimization techniques themselves; Chapter 2, "Introduction to Genetic Algorithms for Engineering Optimization", is contributed by Kalyanmoy Deb. Many of the optimization algorithms available to engineers proceed iteratively: a new point (a second trial solution) is generated and the objective function is evaluated again at that point.

Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be made. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that cannot be improved in one criterion without worsening another is known as the Pareto set. The curve created by plotting the weight against the stiffness of the best designs is known as the Pareto frontier.
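As a concrete illustration, the Pareto set of a finite list of candidate designs can be found with a simple non-dominance filter. The sketch below uses made-up (weight, compliance) pairs, where compliance is the inverse of stiffness so that both objectives are minimized:

```python
# Minimal Pareto-set filter for two objectives, both minimized:
# weight and compliance (inverse stiffness). Data are hypothetical.
def pareto_front(designs):
    """Return the designs not dominated by any other design."""
    front = []
    for i, (w1, c1) in enumerate(designs):
        dominated = any(
            w2 <= w1 and c2 <= c1 and (w2 < w1 or c2 < c1)
            for j, (w2, c2) in enumerate(designs)
            if j != i
        )
        if not dominated:
            front.append((w1, c1))
    return front

designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 2.0), (3.5, 2.5), (5.0, 1.0)]
print(pareto_front(designs))  # (3.5, 2.5) is dominated by (3.0, 2.0)
```

Every design that survives the filter is a distinct trade-off between the two objectives; plotting the survivors traces out the Pareto frontier.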

In addition, we sometimes have to optimize several objective functions instead of just one, which in turn makes the problem more challenging to solve. The objective function is sometimes called the cost function, loss function, or energy function in the literature.

All the feasible points or vectors form the feasible region in the search space. In a rare but extreme case, there is no objective at all and only constraints remain. Such a problem is called a feasibility problem, and any feasible solution is an optimal solution to it. If all the problem functions are linear, the problem becomes a linear programming (LP) problem.
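For instance, a small LP can be solved directly with SciPy's `linprog` routine; the coefficients below are made up for illustration:

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated. Hypothetical data.
res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[4],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at x = 0, y = 4, objective -8
```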

Thus, mathematical optimization is also called mathematical programming. If the variables can only take integer values, the problem becomes an integer programming (IP) problem; if they can only take the two values 0 or 1, it is called a binary IP.
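With only a handful of binary variables, such a problem can even be solved by brute-force enumeration of all 0/1 assignments. The sketch below treats a small 0/1 knapsack as a binary IP; all data are hypothetical:

```python
from itertools import product

# 0/1 knapsack as a binary IP, solved by exhaustive enumeration.
# Fine for a handful of variables; all data here are hypothetical.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

feasible = (x for x in product((0, 1), repeat=len(values))
            if sum(w * xi for w, xi in zip(weights, x)) <= capacity)
best = max(feasible, key=lambda x: sum(v * xi for v, xi in zip(values, x)))
print(best)  # (0, 1, 1): take items 2 and 3 for a total value of 220
```

Enumeration grows as 2^n, which is exactly why the branch-and-bound and metaheuristic methods discussed later matter for larger instances.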

If there are no constraints, the problem is called an unconstrained optimization problem; otherwise, it is a constrained problem. If there is no randomness in the formulation, the problem is called deterministic, and in fact all of the above problems are essentially deterministic.

However, if there is uncertainty in the variables or in the function forms, then optimization involves probability distributions and expectations; such problems are often called stochastic optimization or robust optimization.

We summarize most of these terms in Figure I. Whether an optimization problem is considered easy or hard can depend on many factors and on the actual mathematical formulation.

In fact, three factors that make a problem more challenging are the nonlinearity of the objective function, the high dimensionality of the problem, and the complex shape of the search domain.

In most cases, algorithms for solving such problems are more likely to get trapped in local modes. In some cases, the feasible region can be split into multiple disconnected regions with isolated islands, which makes it harder for algorithms to search all of the feasible regions and can cause them to miss the true optimum.

Other factors, such as the evaluation time of an objective, are also important. In many applications, such as protein folding, bioinformatics, aerospace engineering, and deep machine learning (ML), the evaluation of a single objective can take a long time (from a few hours to days or even weeks), so the computational costs can be very high.

Algorithms for solving optimization problems tend to be iterative, and thus multiple evaluations of objectives are needed, typically hundreds, thousands, or even millions of evaluations. If the objective is not smooth or has a kink, then the Nelder-Mead simplex method can be used: it is a gradient-free method and can work well even for problems with discontinuities, but it can become slow and can get stuck in a local mode.
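A short sketch of this with SciPy, on a hypothetical objective that has a kink at its minimum and is therefore awkward for gradient-based methods:

```python
from scipy.optimize import minimize

# Non-smooth objective with a kink at its minimum (1, 1); Nelder-Mead
# only compares function values, so the missing gradient is no problem.
f = lambda x: abs(x[0] - 1) + abs(x[1] - 1)

res = minimize(f, x0=[3.0, -2.0], method="Nelder-Mead")
print(res.x)  # should end up close to [1, 1]
```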

Algorithms for solving nonlinear optimization problems are diverse, including the trust-region method, the interior-point method, and others, but they are mostly local search methods. Quadratic programming (QP) and sequential quadratic programming use convexity properties to their advantage. But if an LP problem has integer variables, the simplex method will not work directly; it has to be combined with branch and bound to solve IP problems.
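The combination can be sketched in a few lines: solve the LP relaxation with `linprog`, and if the relaxed solution is not integral, branch on a fractional variable by tightening its bounds. The small IP at the bottom is hypothetical:

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub):
    """Minimize c @ x subject to A_ub @ x <= b_ub, x >= 0 and integer,
    by solving LP relaxations and branching on fractional variables."""
    best_val, best_x = math.inf, None
    stack = [[(0, None)] * len(c)]            # per-variable (lo, hi) bounds
    while stack:
        bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success or res.fun >= best_val:
            continue                           # infeasible or pruned by bound
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                           # integral: new incumbent
            best_val, best_x = res.fun, np.round(res.x)
            continue
        i, v = frac[0], res.x[frac[0]]
        lo, hi = bounds[i]
        left, right = list(bounds), list(bounds)
        left[i] = (lo, math.floor(v))          # branch: x_i <= floor(v)
        right[i] = (math.ceil(v), hi)          # branch: x_i >= ceil(v)
        stack += [left, right]
    return best_val, best_x

# Hypothetical IP: maximize x + y (minimize -x - y)
# subject to 2x + 3y <= 12 and 3x + 2y <= 12, x, y >= 0 integer.
val, x = branch_and_bound([-1, -1], [[2, 3], [3, 2]], [12, 12])
print(val, x)  # optimal objective is -4, i.e. x + y = 4
```

The pruning test (`res.fun >= best_val`) is what keeps the tree small: any branch whose relaxation cannot beat the incumbent is discarded without further work.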

As traditional methods are usually local search algorithms, one of the current trends is to use heuristic and metaheuristic algorithms.

However, recent trends tend to name all stochastic algorithms that combine randomization with local search as metaheuristics.

Here, we will also use this convention. Randomization provides a good way to move away from local search toward search on a global scale. Therefore, almost all metaheuristic algorithms are intended to be suitable for global optimization, though global optimality may still be challenging to achieve for most problems in practice.

Most metaheuristic algorithms are nature-inspired, as they have been developed based on some abstraction of nature. Nature has evolved over millions of years and has found effective solutions to almost all the problems it has encountered. Consequently, such algorithms are often said to be biology-inspired, or simply bio-inspired.

Two major components of any metaheuristic algorithm are selection of the best solutions and randomization. Among the topics covered in the book are foundations and advanced designs, ant colony optimization, differential evolution, and applications in mass transfer, fluid mechanics, and reaction engineering.
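The interplay of these two components can be sketched with a minimal simulated-annealing loop; the one-dimensional multimodal objective below is hypothetical:

```python
import math
import random

random.seed(0)

def simulated_annealing(f, x0, n_iter=5000, t0=1.0, cooling=0.999):
    """Minimize f(x) for scalar x: random moves provide the
    randomization; the acceptance rule provides the selection."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + random.gauss(0.0, 0.1)              # random move
        fc = f(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature t cools down.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < best_f:                                # keep best so far
            best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Hypothetical multimodal objective: global minimum near x = 2.
f = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(10.0 * x)
print(simulated_annealing(f, x0=-3.0))
```

Early on, the high temperature lets the search hop over the sinusoidal ripples (escaping local modes); as the temperature decays, acceptance tightens and the search settles into the best basin found.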

Other chapters address the improvement of the search process in genetic algorithms, genetic algorithms in irrigation planning, and VLSI design. Ghosh presents an upper-bound solution for the bearing capacity of a shallow strip footing, considering composite failure mechanisms by the pseudodynamic approach. The results are compared with the available literature. At the end of the study, the authors conclude that the results obtained from the analytical analysis agree well with the numerical solutions.

A study by Daloglu, M. Artar, K. Ozgan, and A. applies the TLBO (teaching-learning-based optimization) algorithm, but the authors state that TLBO requires a longer time for the analysis. Another study is presented by A. Kangrang, H. Prasanchum, and R. Their study applied the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA) in connection with a reservoir simulation model in order to search for optimal reservoir rule curves.

The results obtained from their study show that the new rule curves obtained from CTSA are more suitable for reservoir operation than the existing rule curves, and that the approach is an effective method for finding optimal reservoir rule curves. An experimental study on the optimization of calcareous fly-ash-added cement containing grinding aids and strength-improving additives is presented by G. Kaplan, S.