Cellular vehicle-to-everything (C-V2X) has been continuously evolving since Release 14 of the 3rd Generation Partnership Project (3GPP) for future autonomous vehicles. Apart from automotive safety, 5G NR further brings...
In this paper, we propose and study the parity-constrained k-supplier (PAR k-supplier) problem, generalizing the classical (unconstrained) k-supplier problem. In the PAR k-supplier problem, we are given a set of facil...
Given an undirected graph G=(V,E), a vertex v∈V is edge-vertex (ev) dominated by an edge e∈E if v is either incident to e or incident to an adjacent edge of e. A set Sev⊆E is an edge-vertex dominating set (referred ...
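The ev-domination condition above can be checked directly from the definition. Below is a minimal Python sketch (function and variable names are my own, not from the paper) that tests whether a candidate edge set ev-dominates every vertex of a graph given as an edge list:

```python
def ev_dominates(vertices, edges, candidate):
    """Check whether `candidate` (a set of edges) edge-vertex dominates every
    vertex: v is ev-dominated by e if v is incident to e, or incident to an
    edge adjacent to e (i.e., one sharing an endpoint with e)."""
    incident = {v: [] for v in vertices}
    for e in edges:
        u, w = e
        incident[u].append(e)
        incident[w].append(e)

    def dominated(v):
        for (u, w) in candidate:
            if v == u or v == w:
                return True                          # v incident to e itself
            if any(u in f or w in f for f in incident[v]):
                return True                          # v incident to an edge adjacent to e
        return False

    return all(dominated(v) for v in vertices)

# path on 4 vertices: the single middle edge ev-dominates every vertex,
# but an end edge leaves the far endpoint undominated
P4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
```

On the path P4, `{(2, 3)}` is an ev-dominating set of size 1, while `{(1, 2)}` is not (vertex 4 is only incident to (3, 4), which is not adjacent to (1, 2)).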
In real-world applications, not all instances in multiview data are fully represented. Incomplete multiview learning (IML) has emerged to deal with such incomplete data. In this article, we propose the joint embedding learning and low-rank approximation (JELLA) framework for IML. The JELLA framework approximates the incomplete data by a set of low-rank matrices and learns a full and common embedding by linear transformation. Several existing IML methods can be unified as special cases of the framework. More interestingly, some linear-transformation-based complete multiview methods can be adapted to IML directly under the guidance of the framework. Thus, the JELLA framework improves the efficiency of processing incomplete multiview data and bridges the gap between complete multiview learning and IML. Moreover, the framework can guide the development of new algorithms. For illustration, within the framework, we propose the IML with block-diagonal representation (IML-BDR) method. Assuming that the sampled examples have an approximate linear subspace structure, IML-BDR uses a block-diagonal structure prior to learn the full embedding, which leads to more accurate clustering. A convergent alternating iterative algorithm with the successive over-relaxation (SOR) optimization technique is devised for optimization. Experimental results on various datasets demonstrate the effectiveness of IML-BDR.
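The low-rank approximation step at the heart of such frameworks can be illustrated generically. The sketch below is not the JELLA/IML-BDR algorithm itself; it is a standard hard-impute-style iteration (SVD truncation alternated with restoring the observed entries) showing how an incomplete view can be approximated by a low-rank matrix. All names and the toy data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_complete(X, mask, rank, n_iters=200):
    """Approximate an incomplete matrix by a rank-`rank` matrix: alternate
    between truncated-SVD projection and restoring the observed entries."""
    Z = np.where(mask, X, 0.0)                       # missing entries start at 0
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # best rank-r approximation of Z
        Z = np.where(mask, X, L)                     # keep observed entries fixed
    return L

# toy incomplete view: rank-2 ground truth with ~30% of entries missing
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 8))
mask = rng.random(A.shape) > 0.3
A_hat = low_rank_complete(A, mask, rank=2)
```

In a multiview setting, one such low-rank matrix per view would then be mapped into a common embedding by per-view linear transforms, which is the part the actual framework learns jointly.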
Federated learning (FL) emerges to mitigate the privacy concerns in machine learning-based services and applications, and personalized federated learning (PFL) evolves to alleviate the issue of data heterogeneity. How...
In this article, two new techniques for approximating the reliability of a two-terminal network are developed, based on the constructive theory of functions and related methods. Two methods of generating an approximating cubic spline are used: Lagrange-type interpolation procedures and the Bernstein approximation operator. A possibility of minimizing the total approximation error, based on keeping certain properties invariant, is described for a large class of pairs of dual two-terminal networks. Simulations are included, showing that the approximation error is negligible for some special initial data.
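Of the two spline-generating tools mentioned, the Bernstein operator is the easier to illustrate in isolation. The sketch below (not the paper's construction) applies the degree-n operator B_n(f; x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) to the reliability R(p) = p^2 of a two-edge series network (both edges, each working with probability p, must work):

```python
from math import comb

def bernstein(f, n):
    """Return the degree-n Bernstein approximation B_n(f; .) of f on [0, 1]."""
    samples = [f(k / n) for k in range(n + 1)]       # f sampled at the nodes k/n
    def B(x):
        return sum(samples[k] * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

# two-terminal reliability of a 2-edge series network with edge reliability p
R = lambda p: p * p
B20 = bernstein(R, 20)
```

The operator reproduces endpoint values exactly (B20(0) = 0, B20(1) = 1), and the classical identity B_n(x^2; x) = x^2 + x(1-x)/n gives B20(0.5) = 0.25 + 0.25/20 = 0.2625, showing the O(1/n) approximation error the paper's error analysis has to contend with.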
In an era of sustainable development, considerable emphasis has been placed on energy saving, environmental friendliness, and social welfare, as well as on productivity, in the manufacturing sector. In this work, an unrelated parallel-machine manufacturing setting with time-of-use (TOU) electricity pricing is explored, with the aim of reducing electricity cost and increasing productivity simultaneously. A nonlinear mathematical programming model is formulated to exploit the special structure of the scheduling problem: the quadratic constraints are reformulated as second-order-cone (SOC) constraints, and several tailored cutting planes are introduced to further tighten the feasible region. The original scheduling problem is then decomposed into several single-machine scheduling problems with TOU electricity pricing, which can be relaxed to a single-objective programming problem and solved rapidly by commercial solvers such as CPLEX. Based on the optimal solution of the relaxed problem, an approximation algorithm is proposed, in which a special rounding technique assigns jobs to the unrelated parallel machines in a local-search manner. Furthermore, a lower-bound model is constructed by eliminating the non-preemption constraint, and an iteration-based algorithm is devised to obtain its optimal solution. Meanwhile, a dispatch-rule-based approach is proposed to provide an upper bound for the scheduling problem under the TOU constraint. In the numerical analysis, the proposed approximation algorithm is validated through extensive testing on instances of various scales, with different emphases on productivity versus electricity price, and under two typical TOU electricity pricing policies. The gap between the proposed approximation algorithm and CPLEX is mostly within 4%, and the lower/upper-bound methods obtain a relaxed/feasible solution within 0.01 s. Note to Practitioners-Energy saving together with prod
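The dispatch-rule idea behind the upper bound can be sketched on a toy instance. The greedy rule below (longest job first, placed in its cheapest feasible time window) is a generic illustration, not the paper's actual rule; the instance data are made up, and unit power per job is assumed so that a slot's cost equals the tariff:

```python
# toy TOU tariff: unit electricity price per time slot (off-peak / peak / off-peak)
price = [1, 1, 1, 3, 3, 3, 3, 1, 1, 1, 1, 1]
# proc[m][j]: processing time of job j on (unrelated) machine m
proc = [[2, 3, 1, 4],
        [3, 2, 2, 3]]
n_machines, n_jobs, horizon = len(proc), len(proc[0]), len(price)

busy = [set() for _ in range(n_machines)]        # occupied slots per machine
schedule = {}                                    # job -> (machine, start, cost)

# dispatch rule: longest job first, cheapest feasible non-preemptive window
for j in sorted(range(n_jobs),
                key=lambda j: -min(proc[m][j] for m in range(n_machines))):
    best = None
    for m in range(n_machines):
        p = proc[m][j]
        for start in range(horizon - p + 1):
            window = range(start, start + p)
            if any(t in busy[m] for t in window):
                continue                         # machine already occupied here
            cost = sum(price[t] for t in window)
            if best is None or cost < best[0]:
                best = (cost, m, start)
    cost, m, start = best
    busy[m].update(range(start, start + proc[m][j]))
    schedule[j] = (m, start, cost)

total_cost = sum(c for _, _, c in schedule.values())
```

On this instance the rule pushes the long jobs into the off-peak windows at both ends of the horizon, which is exactly the behaviour a TOU-aware upper-bound heuristic is meant to exhibit.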
Among the most important graph parameters is the diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [Proceedings of SODA 2014, Portland, OR, 2014, pp. 1041--1052] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [Proceedings of STOC '13, New York, ACM, 2013, pp. 515--524] showed that unless the strong exponential time hypothesis (SETH) fails, no O(n^{2-ε}) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails, no O(m^{3/2-ε}) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters is the eccentricities. The eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [Proceedings of SODA 2014, Portland, OR, 2014, pp. 1041--1052] showed that the eccentricities of all vertices can be approximated within a factor of 5/3 in O(m^{3/2}) time, and Abboud, Vassilevska W., and Wang [Proceedings of SODA 2016, Arlington, VA, 2016, pp. 377--391] showed that no O(n^{2-ε}) algorithm can achieve better than a 5/3-approximation in sparse graphs. We show that the runtime of the 5/3-approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2-ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the eccentricities. This is the first lower bound
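For context, the folklore near-linear baseline these lower bounds are measured against is a single BFS: for any vertex v, ecc(v) <= D <= 2*ecc(v), so one BFS gives a 2-approximation of the diameter in O(m + n) time. A minimal sketch:

```python
from collections import deque

def bfs_ecc(adj, s):
    """Eccentricity of s (max BFS distance); adj maps vertex -> neighbour list.
    Assumes the graph is connected and unweighted."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

# path 0-1-2-3-4: diameter 4; BFS from the middle vertex returns
# ecc = 2, and indeed 2 <= 4 <= 2 * 2
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

The results in the abstract say that, under SETH, essentially nothing better than this factor is possible in near-linear time for eccentricities, and nothing better than 5/3 for the diameter.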
The mathematical foundation of deep learning is the theorem that any continuous function can be approximated to any specified accuracy by a neural network with certain non-linear activation functions. However, this theorem does not tell us what the network architecture should be or what the values of the weights are; one must train the network to estimate the weights, and there is no guarantee that the optimal weights will be reached after training. This paper develops an explicit architecture of a universal deep network using the Gray code order, together with an explicit formula for the weights of this deep network. The architecture is target-function independent: once the target function is known, the weights are calculated by the proposed formula, and no training is required, so there is no concern about whether training reaches the optimal weights. This deep network gives the same result as the shallow piecewise linear interpolation function for an arbitrary target function.
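The shallow baseline in the last sentence, a one-hidden-layer ReLU network that exactly reproduces the piecewise linear interpolant, has closed-form weights. The sketch below is my own rendering of that standard fact, not the paper's Gray-code deep architecture: with knots x_0 < ... < x_n and slopes s_i, the interpolant equals f(x_0) + sum_i (s_i - s_{i-1}) * ReLU(x - x_i) on [x_0, x_n] (with s_{-1} = 0):

```python
import numpy as np

def relu_interpolant(xs, ys):
    """One-hidden-layer ReLU network equal to the piecewise linear
    interpolant of (xs, ys) on [xs[0], xs[-1]]; weights are closed-form,
    so no training is involved."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    slopes = np.diff(ys) / np.diff(xs)
    coeffs = np.diff(slopes, prepend=0.0)    # slope change at each left knot
    def net(x):
        x = np.asarray(x, float)
        # hidden layer: ReLU(x - x_i); output layer: linear combination
        return ys[0] + np.maximum(x[..., None] - xs[:-1], 0.0) @ coeffs
    return net

# interpolate sin(pi x) on 9 equispaced knots in [0, 1]
knots = np.linspace(0.0, 1.0, 9)
net = relu_interpolant(knots, np.sin(np.pi * knots))
```

By construction the network matches the target exactly at every knot (e.g., net(0.5) = sin(pi/2) = 1), mirroring the abstract's claim that the explicit-weight network reproduces the shallow piecewise linear interpolant.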
Recent years have seen great progress in the approximability of fundamental clustering and facility location problems on high-dimensional Euclidean spaces, including k-Means and k-Median. While they admit strictly bet...