Background: Good cognitive function is important for school-age children. Although essential fatty acids play a major role in cognitive function, their intakes are assumed to be inadequate in developing countries, including Myanmar. However, there is still a lack of evidence showing whether they are problem nutrients. Objective: This study aimed to determine the problem nutrients in the diets of Myanmar primary schoolchildren and to formulate food-based recommendations (FBRs) to optimize the intake of these micronutrients. Methods: A cross-sectional study was conducted at 3 primary schools in Nyaungdon Township, Myanmar. A 1-week dietary intake assessment was done on 7- to 9-year-old primary schoolchildren (n = 100). A linear programming approach using the World Health Organization Optifood software was used to assess nutrient intake and to develop the FBRs. Results: The prevalences of stunting, wasting, and underweight among the students were 28%, 18%, and 28%, respectively. Intakes of calcium, vitamin B-1, folate, iron, omega-3 fatty acids, eicosapentaenoic acid, and docosahexaenoic acid were insufficient. Locally available nutrient-dense foods, including water spinach, carp fish, duck egg, garden pea, and shrimp, were selected to develop FBRs that increase the intake of the problem nutrients. Conclusion: The linear programming analysis showed that the primary schoolchildren have difficulty meeting nutrient recommendations with locally available foods, especially for iron and the essential fatty acids that are important for schoolchildren's cognitive performance.
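The food-based recommendation step rests on a classical diet-style linear program of the kind tools such as Optifood solve internally. The sketch below sets up such a problem with scipy.optimize.linprog as a rough illustration only; the food list, costs, nutrient values, and targets are made-up placeholders, not the study's Nyaungdon data or Optifood's actual model.

```python
# A minimal diet-style LP sketch (illustrative only; invented numbers, not the study's data).
# Minimize the "cost" of a day's portions while meeting lower bounds on selected nutrients.
import numpy as np
from scipy.optimize import linprog

foods = ["water spinach", "carp fish", "duck egg", "garden pea", "shrimp"]
cost = np.array([0.2, 1.5, 0.8, 0.5, 2.0])          # arbitrary cost per 100 g portion

# Rows: iron (mg), calcium (mg), omega-3 (g) per 100 g portion -- placeholder values.
nutrients = np.array([
    [2.1, 0.9, 1.8, 1.5, 2.4],    # iron
    [77,  41,  64,  25,  70 ],    # calcium
    [0.0, 0.9, 0.1, 0.0, 0.3],    # omega-3 fatty acids
])
requirement = np.array([10.0, 700.0, 0.9])           # daily targets (placeholders)

# linprog expects A_ub x <= b_ub, so flip signs to express nutrients @ x >= requirement.
res = linprog(c=cost,
              A_ub=-nutrients, b_ub=-requirement,
              bounds=[(0, 5)] * len(foods),          # at most 5 portions of any food
              method="highs")

if res.success:
    for name, portions in zip(foods, res.x):
        print(f"{name}: {portions:.2f} portions/day")
```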
We introduce a new quantum optimization algorithm for dense linear programming problems, which can be seen as the quantization of the interior point predictor-corrector algorithm [1] using a quantum linear system algorithm [2]. The (worst case) work complexity of our method is, up to polylogarithmic factors, $O(L\sqrt{n}\,(n+m)\,\overline{\|M\|_F}\,\bar{\kappa}\,\epsilon^{-2})$, for $n$ the number of variables in the cost function, $m$ the number of constraints, $\epsilon^{-1}$ the target precision, $L$ the bit length of the input data, $\overline{\|M\|_F}$ an upper bound to the Frobenius norm of the matrices of the linear systems of equations solved during the algorithm, and $\bar{\kappa}$ an upper bound to the condition number $\kappa$ of those systems of equations. This represents a quantum speed-up in the number $n$ of variables in the cost function with respect to comparable classical interior point algorithms when the initial matrix $A$ of the problem is dense: if we substitute the quantum part of the algorithm by classical methods such as conjugate gradient descent, the whole algorithm would have complexity $O(L\sqrt{n}\,(n+m)^{2}\,\bar{\kappa}\,\log(\epsilon^{-1}))$, or, with exact methods, at least $O(L\sqrt{n}\,(n+m)^{2.373})$. Also, in contrast with any quantum linear system algorithm, the algorithm described in this article outputs a classical description of the solution vector and the value of the optimal solution.
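For orientation, the classical bottleneck that a quantum linear system algorithm is meant to replace is the Newton-type linear solve inside each predictor-corrector iteration. The sketch below shows one affine (predictor) step for a generic standard-form LP in my own notation; it is not the authors' quantized algorithm, only the classical step it builds on.

```python
import numpy as np

def predictor_step(A, b, c, x, y, s):
    """One affine (predictor) Newton step for min c^T x s.t. Ax = b, x >= 0.
    The dense solve against M = A D^2 A^T is the part a quantum linear system
    algorithm would replace; its condition number and Frobenius norm play the
    roles of kappa and ||M||_F in the complexity bounds quoted above."""
    r_p = b - A @ x               # primal residual
    r_d = c - A.T - s if False else c - A.T @ y - s   # dual residual
    d2 = x / s                    # diagonal of D^2 = X S^{-1}
    M = A @ (d2[:, None] * A.T)   # normal-equations matrix (dense when A is dense)
    rhs = r_p + A @ (x + d2 * r_d)
    dy = np.linalg.solve(M, rhs)  # classical dense solve; the QLSA would plug in here
    ds = r_d - A.T @ dy
    dx = -x - d2 * ds
    return dx, dy, ds
```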
The problem of maximizing a linear function subject to linear and quadratic constraints is considered. The solution of the problem is obtained in constructive form using the Lagrange function and the optimality conditions. Many optimization problems can be reduced to a problem of this type. In this paper, as an application, we consider an improper linear programming problem, formalized as maximization of the original linear criterion subject to a bound on the Euclidean norm of the correction vector of the right-hand side of the constraints, or on the Frobenius norm of the correction matrix of both sides of the constraints.
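One plausible reading of that formalization, written out for concreteness (the bound $r$ and the correction symbols $\Delta b$, $\Delta A$ are my own notation, not necessarily the paper's), is:

\[
\text{RHS correction:}\qquad
\max_{x,\;\Delta b}\; c^{\top}x
\quad\text{s.t.}\quad Ax \le b + \Delta b,\qquad \|\Delta b\|_{2} \le r ,
\]
\[
\text{two-sided correction:}\qquad
\max_{x,\;\Delta A,\;\Delta b}\; c^{\top}x
\quad\text{s.t.}\quad (A+\Delta A)\,x \le b + \Delta b,\qquad
\big\|[\,\Delta A \mid \Delta b\,]\big\|_{F} \le r .
\]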
Neutrosophic set theory is a generalization of the intuitionistic fuzzy set and can be considered a powerful tool to express the indeterminate and inconsistent information that commonly arises in engineering applications and real-world scientific activities. In this paper, an interval neutrosophic linear programming (INLP) model is presented in which the parameters are represented by triangular interval neutrosophic numbers (TINNs). Afterward, using a ranking function, we present a technique to convert the INLP problem into a crisp model, which is then solved by standard methods.
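As a toy illustration of the "rank, then solve a crisp LP" pipeline, the sketch below converts a triangular coefficient with neutrosophic membership degrees into a single crisp number. The score formula is a hypothetical placeholder, not the ranking function proposed in the paper, and the degrees are kept single-valued for simplicity, whereas TINNs carry interval degrees.

```python
from dataclasses import dataclass

@dataclass
class TriangularNeutrosophic:
    """A triangular number (a1 <= a2 <= a3) with truth/indeterminacy/falsity degrees."""
    a1: float
    a2: float
    a3: float
    T: float   # truth degree in [0, 1]
    I: float   # indeterminacy degree in [0, 1]
    F: float   # falsity degree in [0, 1]

def crisp_rank(n: TriangularNeutrosophic) -> float:
    """Hypothetical ranking function: the centroid of the triangle, scaled by a
    score (2 + T - I - F)/3 that rewards truth and penalizes indeterminacy and
    falsity. Any such ranking reduces the INLP to an ordinary crisp LP."""
    centroid = (n.a1 + n.a2 + n.a3) / 3.0
    score = (2.0 + n.T - n.I - n.F) / 3.0
    return centroid * score

# Example: a fuzzy cost coefficient collapses to one crisp coefficient.
c1 = TriangularNeutrosophic(2.0, 3.0, 4.0, T=0.8, I=0.2, F=0.1)
print(crisp_rank(c1))   # crisp value usable in a standard LP solver
```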
We develop a novel method of constructing confidence bands for nonparametric regression functions under shape constraints. This method can be implemented via linear programming, and it is thus computationally appeal...
This paper deals with the problem of linear programming with inexact data represented by real intervals. We introduce the concept of the duality gap to interval linear programming. We give characterizations of strongly and weakly zero duality gap in interval linear programming and in its special case where the coefficient matrix is real. We show the computational complexity of testing weakly and strongly zero duality gap for commonly used types of interval linear programs.
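For readers new to the interval setting, the usual reading of these two notions, stated informally here (the paper's precise definitions may differ in detail), is that the gap is zero for some admissible realization of the data versus for all of them:

\[
\text{weakly zero gap:}\quad
\exists\,(A,b,c)\in(\mathbf{A},\mathbf{b},\mathbf{c}):\;
\min\{c^{\top}x : Ax\ge b\} \;=\; \max\{b^{\top}y : A^{\top}y=c,\ y\ge 0\},
\]
\[
\text{strongly zero gap:}\quad
\forall\,(A,b,c)\in(\mathbf{A},\mathbf{b},\mathbf{c}):\;
\min\{c^{\top}x : Ax\ge b\} \;=\; \max\{b^{\top}y : A^{\top}y=c,\ y\ge 0\},
\]
where $(\mathbf{A},\mathbf{b},\mathbf{c})$ denote the given interval data.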
A support vector machine (SVM) classifier corresponds in its most basic form to a quadratic programming problem. Various linear variations of support vector classification have been investigated, such as minimizing the L1-norm of the weight vector instead of the L2-norm. In this paper, we introduce a classifier in which we minimize the boundary (lower envelope) of the epigraph generated over a set of functions, which can be interpreted as a measure of distance or slack from the origin. The resulting classifier appears to provide generalization performance similar to that of SVMs while displaying a more advantageous computational complexity. The discussed formulation can also be extended to handle imbalanced data.
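For context on the LP-based variant mentioned above (not the new lower-envelope classifier itself), here is a rough sketch of the L1-norm soft-margin formulation posed as a linear program, using scipy's linprog on invented toy data.

```python
# Sketch of an L1-norm (linear-programming) soft-margin classifier:
#   minimize  sum(w+ + w-) + C * sum(xi)
#   s.t.      y_i * ((w+ - w-) . x_i + b) >= 1 - xi_i,   w+, w-, xi >= 0, b free.
# Illustrative of the LP variant referenced in the abstract, not the paper's method.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 2.0], [1.5, 2.5], [0.0, 0.5], [-1.0, 0.0]])
y = np.array([1, 1, -1, -1])
n, d = X.shape
C = 1.0

# Variable layout: [w+ (d), w- (d), b (1, free), xi (n)]
c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])

# Margin constraints rewritten for linprog's A_ub x <= b_ub convention:
#   -y_i * (x_i . (w+ - w-) + b) - xi_i <= -1
A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)

bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
print("weights:", w, "bias:", b)
```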
The story of linear programming is one with all the elements of a grand historical drama. The original idea, testing whether a polyhedron is non-empty by using variable elimination to project down one dimension at a time until a tautology emerges, dates back to a paper by Fourier in 1823. It gets re-invented in the 1930s by Motzkin. The real interest in linear programming arises during World War II, when mathematicians ponder the best ways of utilising resources at a time when they are constrained. The problem of optimising a linear function over a set of linear inequalities becomes the focus of the effort. Dantzig's Simplex Method is announced, and the RAND Corporation becomes a hotbed of computational mathematics. The range of applications of this modelling approach grows, and the powerful machinery of numerical analysis and numerical linear algebra becomes a major driver for the advancement of computing machines. In the 1970s, constructs of theoretical computer science indicate that linear programming may in fact define the frontier of tractable problems that can be solved effectively on large instances. This raises a series of questions and answers: is the Simplex Method a polynomial-time method, and if not, can we construct novel polynomial-time methods? And that is how the Ellipsoid Method from the Soviet Union and the Interior Point Method from Bell Labs make their way into this story, as the heroics of Khachiyan and Karmarkar. We have called this paper a primer on linear programming since it only gives the reader a quick narrative of the grand historical drama. Hopefully it motivates a young reader to delve deeper and add another chapter.
ISBN (digital): 9781665467612
ISBN (print): 9781665467629
In tabular multi-agent reinforcement learning with an average-cost criterion, a team of agents sequentially interacts with the environment and observes local incentives. We focus on the case where the global reward is a sum of local rewards, the joint policy factorizes into the agents' marginals, and the state is fully observable. To date, few global optimality guarantees exist even for this simple setting, as most results yield convergence to stationarity for parameterized policies in large, possibly continuous, spaces. To solidify the foundations of MARL, we build upon linear programming (LP) reformulations, for which stochastic primal-dual methods yield a model-free approach that achieves optimal sample complexity in the centralized case. We develop multi-agent extensions whereby agents solve their local saddle-point problems and then perform local weighted averaging. We establish that the sample complexity for obtaining near-globally optimal solutions matches tight dependencies on the cardinality of the state and action spaces and exhibits classical scalings with respect to the network, in accordance with multi-agent optimization. Experiments corroborate these results in practice.
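The centralized LP reformulation being referred to is, in its standard average-reward form (written here generically; the paper's multi-agent decomposition builds on its saddle-point Lagrangian version rather than this primal statement):

\[
\max_{\mu \ge 0}\;\sum_{s,a}\mu(s,a)\,r(s,a)
\quad\text{s.t.}\quad
\sum_{a'}\mu(s',a') \;=\; \sum_{s,a} P(s'\mid s,a)\,\mu(s,a)\;\;\forall s',
\qquad
\sum_{s,a}\mu(s,a) = 1,
\]
where $\mu$ is the stationary state-action occupancy measure and an optimal policy is recovered as $\pi(a\mid s)=\mu(s,a)/\sum_{a'}\mu(s,a')$.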
It has been verified that linear programming (LP) is able to formulate many real-life optimization problems, whose optimum can be obtained by resorting to corresponding solvers such as OptVerse, Gurobi and CPLEX. I...
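As a minimal illustration of handing an LP to one of the named solvers, the sketch below uses Gurobi's Python interface (assuming gurobipy is installed and licensed); the two-variable model itself is a throwaway example, unrelated to the snippet's application.

```python
# Toy LP handed to Gurobi via gurobipy (assumes a working Gurobi install/license).
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("toy_lp")
x = m.addVar(lb=0.0, name="x")
y = m.addVar(lb=0.0, name="y")

m.setObjective(3 * x + 2 * y, GRB.MAXIMIZE)
m.addConstr(x + y <= 4, name="capacity")
m.addConstr(x + 3 * y <= 6, name="budget")

m.optimize()
if m.Status == GRB.OPTIMAL:
    print(x.X, y.X, m.ObjVal)
```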