Fast-acting reactive power support from distributed generators (DGs) is a promising approach for tackling rapid voltage fluctuations in distribution networks. However, the voltage regulation range achievable via the reactive power of DGs alone is narrow, especially in distribution networks with a high resistance-to-reactance ratio. In this paper, a randomized algorithm is proposed to improve the voltage profile in distribution networks via coordinated regulation of the active and reactive power of DGs. To this end, the variables of the proposed quadratically constrained quadratic programming (QCQP) problem on voltage control are first partitioned into disjoint subsets, each of which corresponds to a unique low-dimensional subproblem. These subsets are then updated serially in a randomized manner by solving their corresponding subproblems, which removes the need for system-wide coordination among participating agents and guarantees an optimal solution. Compared with existing algorithms, the proposed algorithm is resilient to network reconfigurations and achieves a wider voltage regulation range. The effectiveness and convergence performance of the proposed algorithm are validated by case studies.
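The serial randomized block updates described in the abstract can be sketched on a toy problem. The sketch below is an assumed illustration, not the paper's algorithm: the voltage-control QCQP is replaced by a plain least-squares objective, and each randomly chosen block of variables is updated by solving its low-dimensional subproblem while the others are held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (assumed, not the paper's model): minimize ||A x - b||^2,
# where x collects the set-points of all DGs. The variables are partitioned
# into disjoint blocks; a randomly chosen block is updated by solving its
# low-dimensional subproblem while the rest stay fixed.
n, m, blk = 12, 20, 3
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
blocks = [np.arange(i, i + blk) for i in range(0, n, blk)]
x = np.zeros(n)

for _ in range(300):
    j = blocks[rng.integers(len(blocks))]              # randomized block choice
    r = b - A @ x + A[:, j] @ x[j]                     # residual without block j
    x[j] = np.linalg.lstsq(A[:, j], r, rcond=None)[0]  # solve the subproblem
    # (the paper's subproblems are QCQPs that also enforce per-DG power limits)

print(np.linalg.norm(A @ x - b))
```

On this convex toy problem the serial randomized updates drive the residual to the same objective value a centralized least-squares solve would reach, which is the sense in which no system-wide coordination is needed.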
In this paper we propose an efficient method to compress a high dimensional function into a tensor ring format, based on alternating least squares (ALS). Since the function has size exponential in d, where d is the number of dimensions, we propose an efficient sampling scheme to obtain O(d) important samples in order to learn the tensor ring. Furthermore, we devise an initialization method for ALS that allows fast convergence in practice. Numerical examples show that to approximate a function with similar accuracy, the tensor ring format provided by the proposed method has fewer parameters than the tensor-train format and also better respects the structure of the original function.
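The parameter-count comparison between the two formats can be made concrete. Assuming uniform mode size n and uniform rank r (a simplification; actual ranks may vary per core), a tensor train has two boundary cores of size n·r and d−2 interior cores of size r·n·r, while a tensor ring closes the chain so all d cores have size r·n·r:

```python
def tt_params(n: int, r: int, d: int) -> int:
    """Parameter count of a tensor train with uniform mode size n, rank r."""
    return 2 * n * r + (d - 2) * n * r * r

def tr_params(n: int, r: int, d: int) -> int:
    """Parameter count of a tensor ring with uniform mode size n, rank r."""
    return d * n * r * r

# a tensor ring with a smaller rank can undercut a tensor train of larger rank
print(tt_params(10, 8, 6), tr_params(10, 5, 6))
```

This is the arithmetic behind the abstract's claim: if the ring structure lets the same accuracy be reached at a lower rank, the total parameter count drops despite every core carrying two rank indices.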
Consider the class of discrete-time, general state space Markov chains which satisfy a "uniform ergodicity under sampling" condition. There are many ways to quantify the notion of "mixing time", i.e., the time to approach stationarity from a worst initial state. We prove results asserting equivalence (up to universal constants) of different quantifications of mixing time. This work combines three areas of Markov theory which are rarely connected: the potential-theoretical characterization of optimal stopping times, the theory of stability and convergence to stationarity for general-state chains, and the theory surrounding mixing times for finite-state chains.
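One standard quantification of mixing time, illustrated on a finite chain (the paper's equivalences concern general state spaces, but the finite case shows the quantity being compared): the mixing time at level ε is the first t at which the worst-case total-variation distance to stationarity falls below ε.

```python
import numpy as np

# Lazy random walk on a 3-state path
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def mixing_time(P, pi, eps=0.25, t_max=1000):
    """First t with max_x ||P^t(x, .) - pi||_TV < eps (worst initial state)."""
    Pt = np.eye(len(pi))
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if tv < eps:
            return t
    return None

print(mixing_time(P, pi))
```

The paper's results say that this and several other natural definitions (e.g. via optimal stopping times) agree up to universal constant factors.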
We analyze a compression scheme for large data sets that randomly keeps a small percentage of the components of each data sample. The benefit is that the output is a sparse matrix, and therefore, subsequent processing, such as principal component analysis (PCA) or K-means, is significantly faster, especially in a distributed-data setting. Furthermore, the sampling is single-pass and applicable to streaming data. The sampling mechanism is a variant of previous methods proposed in the literature combined with a randomized preconditioning to smooth the data. We provide guarantees for PCA in terms of the covariance matrix, and guarantees for K-means in terms of the error in the center estimators at a given step. We present numerical evidence to show both that our bounds are nearly tight and that our algorithms provide a real benefit when applied to standard test data sets, as well as providing certain benefits over related sampling approaches.
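The core sampling step can be sketched in a few lines. This is a simplification of the scheme: each entry of each sample is kept independently with probability p and rescaled by 1/p so the sparse output is an unbiased estimate of the input; the randomized preconditioning that smooths the data beforehand is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(X, p=0.1):
    """Keep each entry independently with probability p, rescaling kept
    entries by 1/p for unbiasedness. Single-pass and row-by-row, so it
    applies directly to streaming or distributed data."""
    mask = rng.random(X.shape) < p
    return np.where(mask, X / p, 0.0)

X = rng.normal(size=(1000, 50))
Xs = sparsify(X, p=0.2)
print("kept fraction:", (Xs != 0).mean())
```

Downstream steps such as PCA or K-means then operate on the sparse matrix `Xs`, which is where the speedup comes from.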
We consider two new variants of online integer programs that are duals. In the packing problem we are given a set of items and a collection of knapsack constraints over these items that are revealed over time in an online fashion. Upon arrival of a constraint we may need to remove several items (irrevocably) so as to maintain feasibility of the solution. Hence, the set of packed items becomes smaller over time. The goal is to maximize the number, or value, of packed items. The problem originates from a buffer-overflow model in communication networks, where items represent information units broken into multiple packets. The other problem considered is online covering: there is a universe to be covered. Sets arrive online, and we must decide for each set whether we add it to the cover or give it up. The cost of a solution is the total cost of sets taken, plus a penalty for each uncovered element. The number of sets in the solution grows over time, but its cost goes down. This problem is motivated by team formation, where the universe consists of skills, and sets represent candidates we may hire. The packing problem was introduced in Emek et al. (SIAM J Comput 41(4):728-746, 2012) for the special case where the matrix is binary; in this paper we extend the solution to general matrices with non-negative integer entries. The covering problem is introduced in this paper; we present matching upper and lower bounds on its competitive ratio.
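The online covering model can be made concrete with a toy decision rule. The rule below (accept a set iff its cost is at most the total penalty of the elements it newly covers) is offered only as an illustration of the model's accept/reject mechanics; it is not the competitive algorithm analyzed in the paper.

```python
def online_cover(universe, penalty, stream):
    """Process sets online; each (name, elements, cost) triple gets an
    irrevocable accept/reject decision. Illustrative rule: accept iff the
    cost does not exceed the penalties the set would save."""
    covered, taken, cost = set(), [], 0
    for name, elems, c in stream:
        saved = sum(penalty[e] for e in elems - covered)
        if c <= saved:
            taken.append(name)
            covered |= elems
            cost += c
    cost += sum(penalty[e] for e in universe - covered)  # pay for the rest
    return taken, cost

universe = {1, 2, 3}
penalty = {1: 2, 2: 2, 3: 2}
stream = [("A", {1, 2}, 3), ("B", {2, 3}, 3)]
print(online_cover(universe, penalty, stream))
```

In the team-formation reading, elements are skills, penalties are the price of leaving a skill uncovered, and each arriving set is a candidate whose hiring cost must be weighed immediately.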
A few iterations of alternating least squares with a random starting point provably suffice to produce nearly optimal spectral- and Frobenius-norm accuracies of low-rank approximations to a matrix; iterating to convergence of the matrix entries is unnecessary. Such good accuracy is in fact well known for the low-rank approximations calculated via subspace iterations and other well-known methods that happen to produce mathematically the same low-rank approximations as alternating least squares, at least when starting all the methods with the same appropriately random initializations. Thus, software implementing alternating least squares can be retrofitted via appropriate setting of parameters to calculate nearly optimally accurate low-rank approximations highly efficiently, with no need for convergence of the matrix entries. (Even so, convergence could still be helpful for some applications, say to ensure that the approximations are strongly rank-revealing.)
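The claim is easy to check numerically. The sketch below builds a matrix with a known spectrum (so the optimal rank-k Frobenius error is known exactly), runs a few ALS sweeps from a random start, and compares against the SVD optimum; the specific matrix and iteration count are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test matrix with a known spectrum: optimal rank-k error is determined
# exactly by the trailing singular values.
m, n, k = 60, 40, 5
s_true = np.array([10, 8, 6, 4, 2, 0.5, 0.4, 0.3, 0.2, 0.1])
U, _ = np.linalg.qr(rng.normal(size=(m, s_true.size)))
V, _ = np.linalg.qr(rng.normal(size=(n, s_true.size)))
A = (U * s_true) @ V.T

# ALS from a random start: alternate two least-squares solves for A ~ B C^T.
B = rng.normal(size=(m, k))
for _ in range(4):                                 # a few sweeps suffice
    C = np.linalg.lstsq(B, A, rcond=None)[0].T     # fix B, solve for C
    B = np.linalg.lstsq(C, A.T, rcond=None)[0].T   # fix C, solve for B

err_als = np.linalg.norm(A - B @ C.T)
err_opt = np.sqrt((s_true[k:] ** 2).sum())         # SVD's optimal rank-k error
print(err_als, err_opt)
```

Each full sweep acts like a step of subspace iteration on the row and column spaces, which is why the entries of B and C need not converge for the product B @ C.T to be nearly optimal.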
We consider a communication channel in which the only available mode of communication is transmitting beeps. A beep transmitted by a station attached to the channel reaches all the other stations instantaneously. Stations are anonymous, in that they do not have any individual identifiers. The algorithmic goal is to assign names to the stations in such a manner that the names form a contiguous segment of positive integers starting from 1. We develop a Las Vegas naming algorithm, for the case when the number of stations n is known, and a Monte Carlo algorithm, for the case when the number of stations n is not known. Both randomized algorithms are provably optimal with respect to the expected time O(n log n), the expected number of random bits used O(n log n), and the probability of error.
Even though the widespread use of social platforms brings convenience to our daily life, it also has harmful consequences. For example, misinformation and personal attacks can spread easily on social networks, which motivates the study of how to block the spread of misinformation effectively. Unlike the classical rumor blocking problem, we study how to protect targeted users from being influenced by a rumor, called targeted protection maximization (TPM). It aims to block the fewest edges such that the expected ratio of nodes in the targeted set influenced by the rumor is at most beta. Under the IC-model, the objective function of TPM is monotone non-decreasing, but neither submodular nor supermodular, which makes it difficult to solve with existing algorithms. In this paper, we propose two efficient techniques to solve the TPM problem, called Greedy and General-TIM. Greedy uses a simple hill-climbing strategy and achieves a theoretical bound, but its time complexity is prohibitive. The second algorithm, General-TIM, is built on randomized sampling by Reverse Shortest Path (Random-RS-Path), which reduces the running time significantly. General-TIM does not come with a precise approximation ratio, but it obtains good results in practice. Considering the community structure in networks, both Greedy and General-TIM can be improved by first removing unrelated communities. Finally, the effectiveness and efficiency of our algorithms are evaluated on several real datasets.
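The hill-climbing idea behind Greedy can be sketched on a deterministic simplification. The paper works under the probabilistic IC diffusion model; in the sketch below, plain reachability over a fixed directed graph stands in for expected influence, and edges are removed one at a time, each time picking the edge whose removal protects the most targeted nodes.

```python
def reachable(edges, seeds):
    """Nodes reachable from the seed set along directed edges."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def greedy_block(edges, seeds, targets, beta):
    """Hill-climbing: remove edges until the fraction of targeted nodes
    reachable from the rumor seeds is at most beta."""
    edges, blocked = set(edges), []
    while len(reachable(edges, seeds) & targets) / len(targets) > beta:
        best = min(edges,
                   key=lambda e: len(reachable(edges - {e}, seeds) & targets))
        edges.discard(best)
        blocked.append(best)
    return blocked

edges = [(0, 1), (1, 2), (0, 3), (3, 4), (2, 4)]
print(greedy_block(edges, {0}, {2, 4}, beta=0.0))
```

Under the IC-model, each reachability query would be replaced by Monte Carlo influence estimation, which is exactly why the exact Greedy is expensive and sampling-based acceleration such as General-TIM is attractive.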
The online car-sharing problem finds many real-world applications. The problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and k total cars. Each request, which specifies its pick-up time and pick-up location (among 0 and 1, with the other being the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e. pick-up) time. The time between the booking (i.e. release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called kS2L-F, assumes that requests in each stage arrive sequentially regardless of the same booking time and that the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. In spite of there being only two locations, the analysis does not seem easy, and the (tight) competitive ratio (CR) was previously known only to be 2 for k = 2 and 1.5 for a restricted class of k, namely multiples of three. In this paper, we close all the remaining gaps of unknown CRs; namely, we prove that the CR is 2k/(k + ⌊k/3⌋) for all k >= 2. Furthermore, if the algorithm can delay its decision until all requests have arrived in each stage, the CR is improved to roughly 4/3. We can take this advantage even further; precisely, we can achieve a CR of (2 + R)/3 if the number of requests in each stage is at most Rk, 1 <= R <= 2, where we do not have to know the value of R in advance. Finally, we demonstrate that randomization also helps to obtain (slightly) better CRs, and prove some lower bounds to show tightness.
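The closed-form ratio can be checked against the two previously known cases, a small sanity computation rather than anything from the paper itself:

```python
from fractions import Fraction

def cr(k: int) -> Fraction:
    """Tight competitive ratio 2k / (k + floor(k/3)) for kS2L-F, k >= 2."""
    return Fraction(2 * k, k + k // 3)

# k = 2 recovers the known ratio 2; multiples of three recover 3/2;
# the intermediate values of k fill in the formerly unknown cases
for k in range(2, 10):
    print(k, cr(k))
```

As k grows, ⌊k/3⌋ approaches k/3 and the ratio tends to 3/2 from above, so the multiples-of-three cases are the best in the family.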
Point forecasting suffers from poor interpretability when the data carry uncertainty or the system is unstable. Prediction intervals (PIs) can cope with these deficiencies and can quantify the level of uncertainty associated with point predictions. In this paper, the well-known adaptive neuro-fuzzy inference systems (ANFIS) are employed as learner models to construct PIs with a randomized algorithm. Two ANFIS models are independently built to produce the lower bound and upper bound of the PIs, respectively. The results obtained, with comparisons over six datasets, demonstrate that the proposed algorithm performs well in terms of both coverage rate and specificity. The proposed algorithm is also applied to a real-world application in energy science, and the experimental results show its applicability for constructing PIs with satisfactory performance.
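Two standard interval-quality measures make the evaluation criteria concrete. It is assumed here that "coverage rate" corresponds to the PI coverage probability (PICP) and that interval width serves as a proxy for specificity; the helper below is an illustration, not the paper's evaluation code.

```python
def pi_metrics(y, lower, upper):
    """PI coverage probability (PICP: fraction of observations inside their
    interval) and mean PI width (narrower = more specific)."""
    covered = sum(l <= t <= u for t, l, u in zip(y, lower, upper))
    picp = covered / len(y)
    mpiw = sum(u - l for l, u in zip(lower, upper)) / len(y)
    return picp, mpiw

# two of three observations fall inside their intervals
print(pi_metrics([1.0, 2.0, 3.0], [0.0, 2.5, 2.0], [2.0, 3.0, 4.0]))
```

The two separately trained bound models correspond to the `lower` and `upper` sequences: good PIs keep PICP near the target confidence level while keeping the mean width small.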