Aiming at the decision problem of safety input in assembly building construction, a secondary decomposition of the construction risk factors is carried out using an actual engineering project as an example, three functional relationships are introduced for fitting, and a nonlinear programming model is established based on mathematical programming theory. The model is solved in Matlab using a particle swarm optimization algorithm to obtain a relatively better solution. The results show that a large safety improvement can be achieved with a small increase in certain risk indicators, which provides a reference basis for safety input in actual engineering.
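As a rough illustration of the solution approach described above, the sketch below shows a minimal particle swarm optimization loop in Python (the paper solves its model in Matlab). The objective, budget constraint, penalty weight, and variable bounds are placeholders, not the paper's fitted risk model.

```python
# Minimal PSO sketch with a penalty for a budget constraint.
# Objective and constraint are illustrative placeholders only.
import numpy as np

def objective(x):
    # Hypothetical risk level: decreasing in the safety inputs x.
    return np.sum(1.0 / (1.0 + x))

def penalty(x, budget=10.0):
    # Penalize allocations whose total safety input exceeds the budget.
    return 1e3 * max(0.0, np.sum(x) - budget) ** 2

def pso(dim=5, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 5.0, (n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)                          # particle velocities
    pbest = pos.copy()
    pbest_val = np.array([objective(p) + penalty(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 5.0)
        vals = np.array([objective(p) + penalty(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, objective(gbest)

if __name__ == "__main__":
    best_x, best_risk = pso()
    print("best safety-input allocation:", best_x.round(3), "risk:", round(best_risk, 4))
```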
The growing concern about climate change has led to the rise of carbon cycle research. Forest cutting planning affects the carbon cycle due to the carbon sequestration function of forests. In this work, we propose a planning model for determining the regeneration cutting age of forests to optimize carbon sequestration and improve the associated economic and ecological benefits. We first built a model based on the carbon sequestration consumption of forest products and forest carbon sequestration to predict the change in forest carbon sequestration over time. The accuracy of the model was verified with forest data from the Great Khingan Mountains. Furthermore, we added economic and ecological factors to build an improved model, which was also applied to the Great Khingan forest. The improved regeneration cutting ages were calculated as 65, 134, 123, 111 and 73 years for white birch, larch, Scots pine, oak, and poplar in natural forests, whereas the ages were 34, 65, 64, 77 and 37 years, respectively, in planted forests. It is predicted that the total carbon sequestration in the Great Khingan forests will accumulate to 974.80 million tons after 100 years. The results of this study can provide useful guidance for local governments in developing a sustainable timeline for forest harvesting that optimizes carbon sequestration and improves the associated economic and ecological benefits.
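To make the cutting-age decision concrete, the sketch below searches over candidate regeneration cutting ages to maximize a rotation-averaged carbon benefit. The logistic stand-growth curve, the product retention fraction, and the averaging criterion are illustrative assumptions, not the paper's fitted equations.

```python
# Illustrative search for a cutting age that maximizes rotation-averaged carbon.
# Growth curve and product retention are assumed, not taken from the paper.
import numpy as np

def stand_carbon(t, c_max=150.0, k=0.08, t_mid=40.0):
    # Assumed logistic carbon stock (tC/ha) of a stand at age t.
    return c_max / (1.0 + np.exp(-k * (t - t_mid)))

def rotation_average_carbon(cut_age, product_retention=0.4):
    # Average standing stock over one rotation, plus the fraction assumed to
    # remain sequestered in forest products, spread over the rotation length.
    ages = np.arange(1, cut_age + 1)
    standing = stand_carbon(ages).mean()
    harvested = product_retention * stand_carbon(cut_age)
    return standing + harvested / cut_age

candidate_ages = np.arange(20, 151)
best_age = max(candidate_ages, key=rotation_average_carbon)
print("illustrative optimal cutting age:", int(best_age))
```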
In this work we consider the hybrid Data-Driven Computational Mechanics (DDCM) approach, in which a smooth constitutive manifold is reconstructed to obtain a well-behaved nonlinear optimization problem (NLP) rather th...
This paper studies the effect of perturbations on the gradient flow of a general nonlinear programming problem, where the perturbation may arise from inaccurate gradient estimation in the setting of data-driven optimi...
We study solution sensitivity for nonlinear programs (NLPs) whose structures are induced by graphs. These NLPs arise in many applications such as dynamic optimization, stochastic optimization, optimization with partial differential equations, and network optimization. We show that for a given pair of nodes, the sensitivity of the primal-dual solution at one node against a data perturbation at the other node decays exponentially with respect to the distance between these two nodes on the graph. In other words, the solution sensitivity decays as one moves away from the perturbation point. This result, which we call exponential decay of sensitivity, holds under the strong second-order sufficiency condition and the linear independence constraint qualification. We also present conditions under which the decay rate remains uniformly bounded; this allows us to characterize the sensitivity behavior of NLPs defined over subgraphs of infinite graphs. The theoretical developments are illustrated with numerical examples.
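A schematic form of the exponential decay of sensitivity described above is given below; the constants $\Gamma$ and $\rho$ and the notation are generic, not the paper's exact statement.

```latex
\[
  \bigl\| z_i^{*}(p) - z_i^{*}(p') \bigr\|
  \;\le\;
  \Gamma\,\rho^{\,d_G(i,j)}\,\bigl\| p_j - p'_j \bigr\|,
  \qquad \rho \in (0,1),
\]
where $z_i^{*}$ collects the primal-dual solution components at node $i$, the data vectors
$p$ and $p'$ differ only at node $j$, and $d_G(i,j)$ is the distance between nodes $i$ and $j$
on the graph; the bound holds under the strong second-order sufficiency condition and the
linear independence constraint qualification.
```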
In this work, we consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera. In contrast to expe...
Proper identification of critical source areas (CSAs) is crucial for improving the efficiency of pollutant reduction and the economic viability of best management practices (BMPs) aimed at controlling non-point source pollution (NPSP). Different identification criteria, referring to different identification ratios, scales, and methods, affect the determination of CSAs and, in turn, the spatial layout of BMPs. Nevertheless, few studies have optimized BMP placement in CSAs determined by different identification criteria and selected appropriate identification factors by comparing the effectiveness of BMP schemes. To address this issue, a simulation-optimization method called the simulation-based mixed-integer multi-objective non-linear programming (SMIMONLP) model was proposed by coupling the Soil and Water Assessment Tool (SWAT) with mixed-integer multi-objective non-linear programming (MIMONLP). The method optimizes BMP placement in CSAs identified by different combinations of identification ratios, methods, and scales, while appropriate identification factors are selected by comparing the effectiveness of these BMP optimization schemes. The effectiveness of the schemes was evaluated by pollutant reduction, economic cost, and the decision maker's satisfaction level with the objective function. The developed method was applied to a real case study in the Luan River Basin of North China. By comparing the economic cost and pollutant reduction of BMP optimization schemes based on CSAs determined by different identification criteria, the results revealed that the economic cost difference between BMP schemes could reach up to 5.9 times, 5.7 times, and 5.3 times due to the impact of the CSA identification ratio, scale, and method, respectively. When other identification factors were the same, with the increase of pollutant reduction, the scenarios with a larger identification ratio, or using the Load Per Region Area Index (LPRAI) iden
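As a toy illustration of the placement decision, the sketch below treats BMP placement over a handful of CSAs as a small mixed-integer problem solved by brute-force enumeration under a weighted-sum scalarization. The loads, costs, reduction efficiencies, budget, and weights are invented for illustration and are not outputs of the SMIMONLP model or SWAT.

```python
# Toy BMP-placement problem: binary placement decisions over a few CSAs,
# enumerated exhaustively. All numbers below are illustrative assumptions.
from itertools import product

# Hypothetical CSAs: (baseline load in kg, BMP cost in $, BMP reduction efficiency)
csas = [(120.0, 800.0, 0.45), (90.0, 500.0, 0.35), (200.0, 1500.0, 0.55), (60.0, 300.0, 0.25)]
budget = 2500.0                   # assumed cap on total BMP cost
w_reduction, w_cost = 1.0, 0.05   # assumed weights for the two objectives

best_plan, best_score = None, float("-inf")
for plan in product([0, 1], repeat=len(csas)):        # 1 = place a BMP in this CSA
    cost = sum(x * c for x, (_, c, _) in zip(plan, csas))
    if cost > budget:
        continue
    reduction = sum(x * load * eff for x, (load, _, eff) in zip(plan, csas))
    score = w_reduction * reduction - w_cost * cost    # weighted-sum scalarization
    if score > best_score:
        best_plan, best_score = plan, score

print("BMP placement per CSA:", best_plan, "score:", round(best_score, 2))
```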
We propose a novel method for joint estimation of shape and pose of rigid objects from their sequentially observed RGB-D images. In sharp contrast to past approaches that rely on complex non-linear optimization, we pr...
In recent years, several applications have been proposed in the context of distribution networks. Many of these can be formulated as an optimal power flow problem, a mathematical optimization program which includes a model of the steady-state physics of the electricity network. If the network loading is balanced and the lines are transposed, the network model can be simplified to a single-phase equivalent model. However, these assumptions do not apply to low-voltage distribution networks, so the network model should model the effects of phase unbalance correctly. In many parts of the world, the low-voltage distribution network has four conductors, i.e. three phases and a neutral. This paper develops OPF formulations for such networks, including transformers, shunts and voltage-dependent loads, in two variable spaces, i.e. current-voltage and power-voltage, and compares them for robustness and scalability. A case study across 128 low-voltage networks also quantifies the modelling error introduced by Kron reductions and its impact on the solve time. This work highlights the advantages of current-voltage formulations over power-voltage formulations for four-wire networks.
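A schematic current-voltage (IV) model of a four-wire line segment is sketched below to show why this variable space is attractive: the series voltage drop and Kirchhoff's current law are linear in the complex voltage and current variables, and nonlinearity enters only through the load model. The notation is generic and not taken verbatim from the paper's formulation.

```latex
\[
  U_i - U_j = Z_{ij}\, I_{ij},
  \qquad U_i, U_j \in \mathbb{C}^{4},\;\; Z_{ij} \in \mathbb{C}^{4\times 4},
\]
\[
  \sum_{j \in \mathcal{N}(i)} I_{ij} + I^{\mathrm{shunt}}_{i} + I^{\mathrm{load}}_{i} = 0,
  \qquad
  S^{\mathrm{load}}_{i,\phi} = \bigl(U_{i,\phi} - U_{i,n}\bigr)\,\overline{I^{\mathrm{load}}_{i,\phi}},
\]
where the $4\times 4$ series impedance matrix $Z_{ij}$ couples the three phases $a,b,c$ and the
neutral $n$, and the phase-to-neutral power definition for voltage-dependent loads is the only
nonlinear constraint in this variable space.
```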
Traditionally, nonlinear data processing has been approached via polynomial filters, which are straightforward extensions of many linear methods, or through neural network techniques. In contrast to linear approaches, which often provide algorithms that are simple to apply, nonlinear learning machines such as neural networks demand more computation and are more likely to encounter nonlinear optimization difficulties, which are harder to solve. Kernel methods, a more recently developed technology, are strong machine learning approaches that have a less complicated architecture and give a straightforward way of transforming nonlinear optimization problems into convex optimization problems. Typical analytical tasks in kernel-based learning include classification, regression, and clustering, all of which are encompassed. For image processing applications, semisupervised deep learning, which is driven by a small amount of labeled data and a large amount of unlabeled data, has shown excellent performance in recent years. Today's semisupervised learning methods operate on the assumption that labeled and unlabeled data follow a similar distribution, and their performance depends heavily on how well that assumption holds. When the unlabeled data contains out-of-class samples, the system's performance is adversely affected. In real-world applications, it is difficult to verify that unlabeled data contains no samples from other categories, and this is especially true for synthetic aperture radar (SAR) image identification. Using threshold filtering, this work addresses the problem of unlabeled input containing out-of-class data having a detrimental influence on model performance when it is used for training in a semisupervised learning setting. When the model is being trained, unla
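The sketch below shows one common form of confidence-threshold filtering for unlabeled samples, in which only samples whose top predicted class probability exceeds a threshold are retained and pseudo-labeled. The model outputs, threshold value, and data are placeholders; this is not the paper's exact SAR training procedure.

```python
# Minimal confidence-threshold filtering of unlabeled samples for semisupervised training.
# Softmax outputs and the threshold below are placeholders, not the paper's settings.
import numpy as np

def filter_unlabeled(probs, threshold=0.9):
    """Keep unlabeled samples whose top class probability exceeds the threshold.

    probs: (n_samples, n_classes) softmax outputs of the current model on unlabeled data.
    Returns indices of retained samples and their pseudo-labels.
    """
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, pseudo_labels[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs for 6 unlabeled samples over 3 in-class categories.
    logits = rng.normal(size=(6, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    keep, labels = filter_unlabeled(probs, threshold=0.6)
    print("retained unlabeled samples:", keep, "pseudo-labels:", labels)
```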