In real-world applications, the order of magnitude of each objective varies, whereas most fitness evaluation methods in many-objective solvers are scale-dependent. Objective space normalization therefore has a large effect on the performance of each algorithm (i.e., on its practical applicability to real-world problems). To put equal emphasis on each objective, a normalization mechanism is usually incorporated into the framework of the algorithm. Decomposition-based algorithms have become increasingly popular in many-objective optimization, and MOEA/D is a representative decomposition-based algorithm. Recently, some negative effects of normalization have been reported, which may deteriorate the practical applicability of MOEA/D to real-world problems. In this paper, to remedy the performance deterioration introduced by normalization in MOEA/D, we propose the idea of using two types of normalization methods in MOEA/D simultaneously (denoted as MOEA/D-2N). The proposed idea is compared with the standard MOEA/D and MOEA/D with normalization (denoted as MOEA/D-N) on two widely used test suites (as well as their variants) and a real-world optimization problem. Experimental results show that MOEA/D-2N can effectively evolve a more diverse set of solutions and achieves robust and comparable performance relative to the standard MOEA/D and MOEA/D-N.
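To make the scale-dependence issue concrete, the following minimal Python sketch shows min-max objective normalization with ideal and estimated nadir points, and how a Tchebycheff scalarizing value (the scalarizing function commonly used in MOEA/D) differs between the raw and normalized objective spaces. The function names and the side-by-side evaluation of both spaces are illustrative assumptions; the paper's exact MOEA/D-2N mechanism for combining two normalization methods is more involved.

```python
import numpy as np

def normalize_objectives(F, z_ideal=None, z_nadir=None):
    """Min-max normalization of an objective matrix F (n_solutions x n_objectives).

    Uses the ideal point (componentwise minimum) and an estimated nadir
    point (componentwise maximum of the current population), a common
    choice in decomposition-based EMO algorithms.
    """
    z_ideal = F.min(axis=0) if z_ideal is None else z_ideal
    z_nadir = F.max(axis=0) if z_nadir is None else z_nadir
    span = np.maximum(z_nadir - z_ideal, 1e-12)  # guard against zero range
    return (F - z_ideal) / span

def tchebycheff(f, weight, z_ideal):
    """Weighted Tchebycheff scalarizing function used in MOEA/D."""
    return np.max(weight * np.abs(f - z_ideal))

# Objectives on very different scales (roughly 1 vs. 1000).
F = np.array([[1.0, 2000.0], [0.5, 3500.0], [0.8, 1500.0]])
F_norm = normalize_objectives(F)
w = np.array([0.5, 0.5])
z_raw, z_norm = F.min(axis=0), F_norm.min(axis=0)
g_raw  = [tchebycheff(f, w, z_raw)  for f in F]       # scale-dependent
g_norm = [tchebycheff(f, w, z_norm) for f in F_norm]  # scale-free
print(g_raw, g_norm)
```

In the raw space the Tchebycheff value is dominated by the large-magnitude second objective, which is precisely the scale dependence that normalization is meant to remove.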
Solvents are widely used in chemical manufacturing processes. When involved in liquid homogeneous-phase kinetic reactions, they can have significant impacts on the reaction product yield. In this paper, an optimization-based framework is developed for reaction solvent design. The framework first identifies a reaction kinetic model using a hybrid method consisting of three steps. In step one, a rigorous thermodynamic derivation based on Conventional Transition State Theory (CTST) is performed to formulate a primary reaction kinetic model. In step two, a knowledge-based method is used to select additional solvent properties as supplementary descriptors that quantitatively correct the model and thereby improve its prediction accuracy. In step three, model identification is performed to obtain the best regressed reaction kinetic model. This hybrid modelling method is tested through two case studies, the Diels-Alder and Menschutkin reactions, and an impressive consistency of the results is observed when the infinite-dilution activity coefficients (calculated by the COSMO-SAC model), hydrogen-bond donor, hydrogen-bond acceptor, and solvent surface tension are selected as descriptors in the final reaction kinetic model. The GC-COSMO and Group Contribution (GC) methods are combined for the prediction of these descriptors. Finally, the Computer-Aided Molecular Design (CAMD) technique is integrated with the derived kinetic model for reaction solvent design by formulating and solving a Mixed-Integer Non-Linear Programming (MINLP) model. A decomposition-based solution algorithm is employed to manage the complexity introduced by the nonlinear COSMO-SAC equations. Promising reaction solvents are identified and compared with those reported by others, indicating the wide applicability and high accuracy of the developed optimization-based framework.
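As a rough illustration of the model identification step (step three), the sketch below regresses a linearized kinetic model, an Arrhenius/CTST-style temperature term plus linear solvent-descriptor corrections, by ordinary least squares. All numerical values, the exact model form, and the descriptor coding are placeholder assumptions, not data or equations from the paper.

```python
import numpy as np

# Placeholder data (not from the paper): rate constants k for five
# hypothetical solvents measured at two temperatures, with four solvent
# descriptors: ln of the infinite-dilution activity coefficient
# (e.g., from COSMO-SAC), hydrogen-bond donor/acceptor counts, and
# surface tension in mN/m.
T        = np.array([298.15]*5 + [318.15]*5)                     # K
ln_gamma = np.tile([0.8, 0.2, 0.5, 1.1, 0.1], 2)
hbd      = np.tile([1.0, 0.0, 1.0, 2.0, 0.0], 2)
hba      = np.tile([2.0, 1.0, 1.0, 3.0, 2.0], 2)
sigma    = np.tile([22.0, 28.0, 25.0, 30.0, 20.0], 2)
k        = np.array([1.2e-4, 3.4e-4, 2.1e-4, 0.8e-4, 4.0e-4,
                     3.1e-4, 8.8e-4, 5.5e-4, 2.0e-4, 9.9e-4])    # rate constants

# Linearized kinetic model:
#   ln k = a0 + a1/T + a2*ln_gamma + a3*hbd + a4*hba + a5*sigma
# fitted by ordinary least squares on the log-transformed rate constants.
X = np.column_stack([np.ones_like(T), 1.0 / T, ln_gamma, hbd, hba, sigma])
coef, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
print("regressed coefficients a0..a5:", coef)
print("max |ln k residual|:", np.abs(np.log(k) - X @ coef).max())
```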
Blood shortages may lead to immeasurable losses, but the perishable nature of blood products limits the possibility of storing them in large quantities, and the quality of blood products degrades rapidly with transportation time. In China specifically, the management of blood products is even more complicated because of the significant demand for clinical blood, which increases every year owing to the reform of the health system and the resulting scale expansion of hospitals. In this research, we optimize the blood product scheduling scheme by constructing a vendor-managed inventory routing problem (VMIRP) for blood products, which balances supply and demand such that the relevant operational cost is minimized. A decomposition-based algorithm is then developed to solve the proposed mathematical model efficiently. Based on a series of numerical experiments on platelets, we obtain and examine the distribution plan and optimal transportation paths over the planning horizon. In addition to the demonstrated high algorithm efficiency, the computational results show that the VMIRP scheme can considerably decrease the operational cost of the blood supply chain.
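A toy two-phase sketch of the decompose-then-route idea is given below: phase one sets per-period delivery quantities with a simple order-up-to rule, and phase two builds each period's route with a nearest-neighbour heuristic. The hospital data, the order-up-to policy, and the routing heuristic are all illustrative assumptions; the paper's VMIRP model and its decomposition-based algorithm are exact-optimization counterparts of this structure.

```python
import math

hospitals = {  # id: (x, y, demand per period, initial inventory) -- placeholder data
    "H1": (2.0, 5.0, 6, 4),
    "H2": (6.0, 1.0, 4, 1),
    "H3": (9.0, 7.0, 5, 7),
}
depot = (0.0, 0.0)   # blood center
horizon = 3          # planning periods

def allocation_phase():
    """Phase 1: per-period delivery quantities from a simple
    order-up-to rule (deliver enough to cover next-period demand)."""
    plan = {}
    inv = {h: hospitals[h][3] for h in hospitals}
    for t in range(horizon):
        plan[t] = {}
        for h, (_, _, demand, _) in hospitals.items():
            qty = max(0, demand - inv[h])     # order-up-to the demand level
            plan[t][h] = qty
            inv[h] = inv[h] + qty - demand
    return plan

def routing_phase(stops):
    """Phase 2: nearest-neighbour route over hospitals receiving a delivery."""
    route, pos, remaining, cost = ["depot"], depot, set(stops), 0.0
    while remaining:
        nxt = min(remaining, key=lambda h: math.dist(pos, hospitals[h][:2]))
        cost += math.dist(pos, hospitals[nxt][:2])
        pos = hospitals[nxt][:2]
        route.append(nxt)
        remaining.remove(nxt)
    cost += math.dist(pos, depot)             # return to the blood center
    return route + ["depot"], cost

plan = allocation_phase()
for t, deliveries in plan.items():
    stops = [h for h, q in deliveries.items() if q > 0]
    route, cost = routing_phase(stops)
    print(f"period {t}: deliveries={deliveries}, route={route}, cost={cost:.2f}")
```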
ISBN: (Print) 9783319681559; 9783319681542
With the growing popularity of the IoT, a tremendous amount of high-resolution time series data is being generated, transmitted, stored, and processed by modern sensor networks in different application domains, which naturally incurs extensive storage and computation costs in practice. Data compression is the key to resolving this challenge, and various compression techniques, both lossless and lossy, have been proposed and widely adopted in industry and academia. Although existing approaches are generally successful, we observe a unique characteristic in certain time series data, namely significant periodicity combined with strong randomness, which leads to poor compression performance with existing methods and hence calls for a specifically designed compression mechanism that can utilise the periodic and stochastic patterns at the same time. To this end, we propose a decomposition-based compression algorithm which divides the original time series into several components reflecting periodicity and randomness respectively, and then approximates each component accordingly to guarantee the overall compression ratio and maximum error. We conduct an extensive evaluation on a real-world dataset, and the experimental results verify the superiority of our proposal compared with current state-of-the-art methods.
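The sketch below illustrates the decomposition idea under simplifying assumptions: the period is known, the periodic component is the per-phase mean, and the residual (random) component is coded with a piecewise-constant approximation that guarantees a per-point maximum error. The function names and the PMC-style residual coder are our assumptions, not necessarily the components used in the paper.

```python
import numpy as np

def decompose(x, period):
    """Split x into a periodic profile (per-phase mean) and a residual."""
    phases = np.arange(len(x)) % period
    profile = np.array([x[phases == p].mean() for p in range(period)])
    return profile, x - profile[phases]

def pmc_compress(residual, max_err):
    """Piecewise-constant approximation of the residual: each segment is
    stored as (start, end, value) with |residual[i] - value| <= max_err."""
    segments, start = [], 0
    lo = hi = residual[0]
    for i in range(1, len(residual)):
        new_lo, new_hi = min(lo, residual[i]), max(hi, residual[i])
        if new_hi - new_lo > 2 * max_err:      # point i would break the bound
            segments.append((start, i - 1, (lo + hi) / 2.0))
            start, lo, hi = i, residual[i], residual[i]
        else:
            lo, hi = new_lo, new_hi
    segments.append((start, len(residual) - 1, (lo + hi) / 2.0))
    return segments

def reconstruct(profile, segments, n):
    rec = profile[np.arange(n) % len(profile)].copy()
    for s, e, v in segments:
        rec[s:e + 1] += v
    return rec

# Synthetic periodic-plus-noise signal.
rng = np.random.default_rng(0)
n, period, max_err = 10_000, 100, 0.3
x = np.sin(2 * np.pi * np.arange(n) / period) + rng.normal(0, 0.2, n)

profile, residual = decompose(x, period)
segments = pmc_compress(residual, max_err)
x_hat = reconstruct(profile, segments, n)
print("stored values:", period + 3 * len(segments), "of", n)
print("max abs error:", np.abs(x - x_hat).max())  # <= max_err by construction
```

Storing the periodic profile once and only segment triples for the residual is what lets a series with "significant periodicity and strong randomness" compress well under a hard error bound.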
In this paper, we develop a method for analyzing a serial production line with a single part type and multiple parallel-machine workstations, a configuration that has received little attention in the existing literature. First, we analyze a short line with one parallel-machine workstation and two buffers by constructing its probability balance equations and evaluating performance measures such as throughput and work-in-process (WIP) level. Then, a decomposition-based algorithm is developed for the analysis of a long line with multiple parallel-machine workstations, using the short-line analysis as a building block. Our method differs from traditional decomposition-based methods in that our decomposition is workstation-centered whereas the traditional one is buffer-centered. Numerical experiments show that the performance evaluation results obtained with the proposed method are quite close to those obtained by simulation, while our method is much less time-consuming.
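For context, a simulation baseline of the kind such analytic methods are validated against can be written in a few lines. The sketch below simulates a discrete-time serial line in which each workstation consists of several identical parallel machines with Bernoulli reliability, and reports throughput and average WIP; the parameter names and the Bernoulli machine model are our assumptions, not the paper's exact line model.

```python
import random

def simulate_line(machines, p_up, buffer_cap, steps=100_000, seed=1):
    """Discrete-time simulation of a serial line: workstation j has
    machines[j] identical parallel machines, each completing one part per
    time slot with probability p_up[j]. Buffers between workstations have
    capacity buffer_cap[j]. The first workstation is never starved and
    the last is never blocked."""
    rng = random.Random(seed)
    n = len(machines)
    buffers = [0] * (n - 1)          # buffer j sits after workstation j
    produced = wip_acc = 0
    for _ in range(steps):
        # sweep downstream-to-upstream so moves within a slot don't cascade
        for j in range(n - 1, -1, -1):
            working = sum(rng.random() < p_up[j] for _ in range(machines[j]))
            supply = buffers[j - 1] if j > 0 else working       # never starved
            space = buffer_cap[j] - buffers[j] if j < n - 1 else working
            moved = min(working, supply, space)
            if j > 0:
                buffers[j - 1] -= moved
            if j < n - 1:
                buffers[j] += moved
            else:
                produced += moved
        wip_acc += sum(buffers)
    return produced / steps, wip_acc / steps   # throughput, average WIP

tp, wip = simulate_line(machines=[2, 3, 2], p_up=[0.9, 0.6, 0.85],
                        buffer_cap=[4, 4])
print(f"throughput ~ {tp:.3f} parts/slot, average WIP ~ {wip:.2f}")
```

A workstation-centered decomposition, as described above, would replace this Monte Carlo estimate with an analytic evaluation of each one-workstation, two-buffer building block, which is why it runs far faster than simulation.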