The optimum design and development of a high-frequency transformer (HFT) is a key requirement in developing a solid state transformer (SST) for incorporation in a smart grid environment. This paper proposes an iteration-based algorithm for the optimum design of an HFT. The algorithm generates the optimum design by evaluating an objective function that minimizes the total owning cost (TOC). The unique features of the algorithm include the following: it iterates eight design variables from their minimum to maximum values and applies four design constraints to select valid designs. The algorithm can work with three different core materials and can select a suitable AC test voltage based on the HFT voltage rating. A case study is conducted on an HFT incorporated in a 1000 kVA, 11 kV/415 V, Dyn11 three-phase SST, enabling the optimum design parameters of the HFT to be determined. In this case study, the algorithm is iterated with 121,500 design data inputs, generating 19,873 designs that satisfy all design constraints. The optimum design with minimum TOC is selected from these 19,873 designs and validated using finite element analysis in ANSYS software. The finite element results are comparable with the analytical results, and hence the algorithm is validated.
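The constrained sweep described above can be sketched as a small grid search; the variable names, ranges, constraint, and TOC model below are hypothetical placeholders, not the paper's actual eight design variables or cost model.

```python
import itertools

def total_owning_cost(design):
    # Placeholder TOC model (stand-in for core cost + winding cost).
    return 3.0 * design["core_area"] + 2.0 * design["turns"]

def satisfies_constraints(design):
    # Placeholder constraint (stand-in for flux density, temperature
    # rise, efficiency, and regulation limits).
    return design["core_area"] * design["turns"] >= 10.0

ranges = {
    "core_area": [1.0, 2.0, 3.0],  # illustrative values
    "turns": [5, 10, 15],
}

valid = []
for values in itertools.product(*ranges.values()):
    design = dict(zip(ranges.keys(), values))
    if satisfies_constraints(design):
        valid.append(design)

# Pick the valid design with minimum total owning cost.
best = min(valid, key=total_owning_cost)
print(best)
```

The same pattern scales to eight variables by adding entries to `ranges`; the Cartesian product then produces the full set of candidate design inputs.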
Twin support vector machine (TSVM) is a practical machine learning algorithm, but traditional TSVM can be limited on data with outliers or noise. To address this problem, we propose a novel TSVM with the symmetric LINEX loss function (SLTSVM) for robust classification. Our method has several advantages: (1) the symmetric LINEX loss function improves performance on data with outliers or noise; (2) the introduction of a regularization term effectively improves the generalization ability of the model; (3) an efficient iterative algorithm is developed to solve the optimization problems of the SLTSVM; (4) the convergence and time complexity of the iterative algorithm are analyzed in detail. Furthermore, our model does not involve a loss function parameter, which makes our method more competitive. Experimental results on synthetic, benchmark and image datasets with label noise and feature noise demonstrate that the proposed method slightly outperforms other state-of-the-art methods on most datasets.
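For illustration, here is the classical LINEX loss together with one simple symmetrization (averaging the loss with its reflection); the paper's exact definition of the symmetric LINEX loss may differ from this sketch.

```python
import math

def linex(u, a=1.0):
    # Classical LINEX loss: convex, asymmetric, zero at u = 0.
    return math.exp(a * u) - a * u - 1.0

def symmetric_linex(u, a=1.0):
    # Symmetrized variant: equal penalty for errors of either sign.
    return 0.5 * (linex(u, a) + linex(-u, a))

print(symmetric_linex(1.0) == symmetric_linex(-1.0))  # True: symmetric
```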
With the explosive growth of online alternative objects, a credible rating system can play a valuable role for users who are evaluating and/or choosing items. Because of spammers, many methods employing a reputation-based mechanism have been proposed in the last few years. However, these methods consider only spammers' behaviours and neglect the effect of thorny objects, which are difficult to evaluate because of their intrinsic uncertainty and complexity with respect to the problem context, knowledge gaps, and varying user criteria. To solve this problem, we propose a new reputation iterative algorithm based on Z-statistics (ZS), which eliminates the effect of thorny objects and strengthens the ability to deal with spammers. In this article, the proposed method is compared for effectiveness with other typical methods using a rating example and a simulated rating system, together with additional effectiveness and complexity analyses of the results. Finally, the experimental results demonstrate that the ZS algorithm outperforms other methods in simultaneously handling the effects of spammers and thorny objects.
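The reputation-iteration idea can be sketched with a generic scheme in which each rater's weight shrinks with their disagreement from the weighted consensus; the ZS algorithm's Z-statistic treatment of thorny objects is omitted here, and all ratings are illustrative.

```python
import numpy as np

# ratings[u, o] = rating user u gives object o; the last rater
# deviates strongly from the consensus (spammer-like behaviour).
ratings = np.array([
    [5.0, 4.0, 3.0],
    [5.0, 4.0, 3.0],
    [1.0, 1.0, 5.0],
])

weights = np.ones(ratings.shape[0])
for _ in range(50):
    quality = weights @ ratings / weights.sum()      # weighted object quality
    dev = np.mean((ratings - quality) ** 2, axis=1)  # each rater's disagreement
    weights = 1.0 / (dev + 1e-6)                     # trust falls with deviation

print(np.round(quality, 2))  # consensus recovers the honest ratings
```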
The stochastic volatility inspired (SVI) model is widely used to fit the implied variance smile. Currently, most optimization algorithms for SVI models depend strongly on the input starting point. In this study, we develop an efficient iterative algorithm for the SVI model based on a fixed-point least-squares optimizer, and further present convergence results for this novel iterative algorithm under certain conditions. Experimental evaluation using market data demonstrates the advantages of the proposed fixed-point iterative algorithm over the Quasi-explicit SVI method.
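For reference, the raw SVI parameterization of total implied variance (Gatheral) that such calibrations fit; the parameter values below are purely illustrative.

```python
import math

def svi_total_variance(k, a, b, rho, m, sigma):
    # Raw SVI smile: w(k) = a + b * (rho*(k - m) + sqrt((k - m)^2 + sigma^2)),
    # with k the log-moneyness and w the total implied variance.
    return a + b * (rho * (k - m) + math.sqrt((k - m) ** 2 + sigma ** 2))

w_atm = svi_total_variance(0.0, a=0.04, b=0.1, rho=-0.5, m=0.0, sigma=0.1)
print(w_atm)  # a + b * sigma at k = m
```

Fitting the five parameters (a, b, rho, m, sigma) to quoted smiles is the least-squares problem the fixed-point iteration addresses.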
Pairwise comparison matrix (PCM) is an important tool for ranking items by deriving priorities and has been used in various applications. Although large-scale sparse PCMs appear frequently in today's big data environment, existing prioritization methods struggle to handle them efficiently because of the curse of dimensionality. The goal of this article is to propose a new algorithm, the bipartite graph iterative method (BGIM), to derive priorities from large-scale sparse PCMs. We first extend graph representations of PCMs to bipartite graphs. A transition matrix is induced by resource allocation on the bipartite graph. Finally, an iterative algorithm is designed to calculate priorities. The theoretical properties of the BGIM are analyzed to show its ability to derive priorities from large-scale sparse PCMs. Two experiments were conducted to validate the proposed approach. The numerical examples indicate that the BGIM can deal with traditional decision problems and derive reliable priorities with minimum Euclidean distance (ED) and minimum violation (MV) among the tested methods. The simulation examples suggest that the BGIM can not only derive reliable priorities from large-scale sparse PCMs but also requires the least computation time compared with eight prioritization approaches. To demonstrate its applicability to real-world large-scale problems, we applied the BGIM to rank movies using the MovieLens dataset with more than 100,000 ratings for 9125 movies. The results show that the BGIM was the fastest approach and obtained the best ranking, relative to the average ratings, among the five prioritization methods compared.
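As a baseline for the task BGIM solves, the classical eigenvector prioritization can be sketched with power iteration on a small complete PCM; BGIM itself operates on a bipartite-graph transition matrix and targets sparse, incomplete matrices, which this sketch does not attempt.

```python
import numpy as np

# A consistent 3x3 PCM: item 1 is judged twice item 2 and four
# times item 3, so A[i, j] = w[i] / w[j] for priorities w.
A = np.array([
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
])

w = np.ones(3)
for _ in range(100):
    w = A @ w
    w /= w.sum()  # normalize so priorities sum to 1

print(np.round(w, 3))  # approx [0.571, 0.286, 0.143], i.e. 4:2:1
```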
ISBN: (Print) 9781510667716; 9781510667723
Spectrally efficient frequency-division multiplexing (SEFDM) is a promising solution for increasing communication spectral efficiency, as it can pack more sub-carriers into a given bandwidth than orthogonal frequency-division multiplexing. The cost, however, is inter-carrier interference (ICI), which increases the difficulty of signal reception and demodulation. When SEFDM signals are carried over a microwave photonic link, the nonlinear interference introduced by the link's nonlinearity must be considered in addition to ICI. In this work, an iterative algorithm for microwave photonic SEFDM transmission systems is proposed to compensate for the inherent ICI of the SEFDM signal and reduce the third-order intermodulation distortion (IMD3) introduced by the microwave photonic link. In the digital algorithm, the received 16 quadrature-amplitude modulation (QAM) SEFDM signal undergoes several iterations; in each iteration, the input SEFDM signal is modified, demodulated, and forward error correction (FEC) decoded into a bit sequence, which is remapped to QAM symbols to reconstruct the interference signals for cancelling the distortion. Experimental results show that 16-QAM SEFDM signals with a bandwidth compression factor of 0.85 and high nonlinearity are successfully recovered from a microwave photonic link. The proposed method integrates the demodulation of SEFDM signals with the elimination of IMD3. Owing to the FEC, the signal demodulation capability is improved compared with the traditional iterative ICI compensation (IIC) algorithm, which performs SEFDM demodulation only, and this improvement in turn greatly aids the elimination of IMD3.
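The reconstruct-and-subtract loop can be illustrated with a toy iterative interference canceller; BPSK stands in for 16-QAM, the coupling matrix is artificial, and the FEC decoding stage of the proposed method is omitted.

```python
import numpy as np

# Toy model: the received vector is C @ s, where the off-diagonal
# entries of C represent ICI leakage between sub-carriers.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=8)                 # BPSK stand-in symbols
C = np.eye(8) + 0.1 * (np.ones((8, 8)) - np.eye(8)) # diagonal + weak ICI
r = C @ s

est = np.sign(r)                             # initial hard decision
for _ in range(5):
    interference = (C - np.eye(8)) @ est     # rebuild off-diagonal ICI
    est = np.sign(r - interference)          # cancel, then re-decide

print(np.array_equal(est, s))  # True: all symbols recovered
```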
Nonnegative matrix factorization, which decomposes a target matrix into the product of two matrices with nonnegative elements, has been widely used in various fields of science, engineering and technology. In this paper, we consider the more general Q-weighted nonnegative matrix factorization (QWNMF) problem. Using the additive representation of the Q-weighted norm, the QWNMF problem is transformed into an unconstrained optimization problem, and a new iterative algorithm is designed to solve it. A numerical analysis of this algorithm is also given. Numerical examples show that the new method is feasible and effective.
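For contrast with the Q-weighted iteration, the classical unweighted multiplicative-update NMF (Lee and Seung) can be sketched as follows; the data here are random, and the paper's Q-weighted norm and iteration are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((6, 5))   # nonnegative target matrix
W = rng.random((6, 2))   # nonnegative factors, rank 2
H = rng.random((2, 5))

eps = 1e-12              # guards against division by zero
for _ in range(500):
    # Multiplicative updates keep W and H elementwise nonnegative
    # and monotonically decrease the Frobenius reconstruction error.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(X - W @ H))  # small residual after convergence
```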
This paper proposes an iterative algorithm to solve the inverse displacement for a hyper-redundant elephant's trunk robot (HRETR). In this algorithm, each parallel module is regarded as a geometric line-segment-and-point model. Following the forward approximation and inverse pose adjustment principles, the iteration process is divided into forward and backward iterations. The algorithm thereby transforms the inverse displacement problem of the HRETR into that of the parallel modules. Taking the mechanical joint constraints into account, multiple iterations are carried out to ensure that the robot meets the required position error. Simulation results show that the algorithm is effective in solving the inverse displacement problem of the HRETR.
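The forward/backward iteration principle resembles FABRIK-style reaching for a serial chain of line segments; this sketch treats the robot as a planar chain and omits the parallel-module geometry and joint constraints described in the paper.

```python
import numpy as np

def fabrik(joints, lengths, target, tol=1e-4, max_iter=50):
    joints = [np.asarray(j, float) for j in joints]
    base = joints[0].copy()
    for _ in range(max_iter):
        # Backward pass: place the end effector on the target and
        # adjust each joint toward the base, preserving segment lengths.
        joints[-1] = np.asarray(target, float)
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
        # Forward pass: re-anchor the base and adjust toward the end.
        joints[0] = base.copy()
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + lengths[i] * d / np.linalg.norm(d)
        if np.linalg.norm(joints[-1] - target) < tol:
            break   # position error requirement met
    return joints

chain = fabrik([[0, 0], [1, 0], [2, 0]], [1.0, 1.0], target=[1.0, 1.0])
print(np.round(chain[-1], 3))  # end effector at the reachable target
```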
An iterative algorithm for the decomposition of data series into trend and residual (including seasonal) components is proposed. The algorithm builds on approaches proposed by the authors in several previous studies and yields unbiased estimates of the trend and seasonal components for data with a strong trend containing different periodic (including seasonal) variations, as well as observational gaps and omissions. The main idea of the algorithm is that both the trend and the seasonal components should be estimated from a signal maximally cleaned of any other variations, which are treated as noise: when estimating the trend component, seasonal variation is noise, and vice versa. The iterative approach allows a priori information to be used more completely in optimizing the models of both the trend and seasonal components. The approximation procedure provides maximum flexibility and is fully controllable at all stages of the process. In addition, it naturally handles missing observations and defective measurements without filling these dates with artificially simulated values. The algorithm was tested on data on changes in the concentration of CO2 in the atmosphere at four stations belonging to different latitudinal zones. These data were chosen because of features that complicate the use of other methods, namely high interannual variability, high-amplitude seasonal variations, and gaps in the observed series. The algorithm made it possible to obtain trend estimates (which are of particular importance for studying the characteristics and searching for the causes of global warming) for any time interval, including intervals that are not multiples of an integer number of years. The rate of increase in the atmospheric CO2 content has also been analyzed. It has been reliably established that, around 2016, the rate of CO2 accumulation in the atmosphere became stabilized and
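The "estimate each component on a signal cleaned of the other" idea can be sketched as an alternating loop; the linear trend model, monthly-mean seasonal model, and synthetic data below are illustrative stand-ins for the paper's optimized component models and gap handling.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120.0)                             # 10 "years" of monthly data
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12)  # trend + seasonal signal
y = y + 0.1 * rng.standard_normal(t.size)        # measurement noise

seasonal = np.zeros_like(y)
for _ in range(10):
    # Trend fitted to a signal cleaned of the seasonal component.
    trend = np.polyval(np.polyfit(t, y - seasonal, 1), t)
    # Seasonal cycle estimated from the detrended residual.
    resid = y - trend
    cycle = np.array([resid[m::12].mean() for m in range(12)])
    seasonal = np.tile(cycle, t.size // 12)

slope = np.polyfit(t, y - seasonal, 1)[0]
print(round(slope, 3))  # close to the true trend slope 0.05
```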
Considering the coordinate errors of both control points and non-control points, as well as the different weights of control and non-control points, this contribution proposes an extended weighted total least squares (WTLS) iterative algorithm for 3D similarity transformation based on the Gibbs vector. It treats the transformation parameters and the target coordinates of the non-control points as unknowns, and is thus able to recover the transformation parameters and compute the target coordinates of the non-control points simultaneously. It can also assess the accuracy of the transformation parameters and of the target coordinates of the non-control points. This clearly differs from traditional algorithms, which first recover the transformation parameters and then compute the target coordinates of the non-control points from the estimated parameters. Moreover, it uses a Gibbs vector to represent the rotation matrix. This representation introduces neither additional unknowns nor transcendental functions such as sine or cosine. As a result, the presented algorithm does not depend on initial values of the transformation parameters, which makes it suitable for big rotation angles. Two numerical cases with big rotation angles, including a real-world case (LiDAR point cloud registration) and a simulated case, are tested to validate the presented algorithm.
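The Gibbs (Rodrigues) vector g = tan(theta/2) * axis yields the rotation matrix through the Cayley transform R = (I - [g]x)^(-1) (I + [g]x), which is rational in the components of g, with no sine or cosine. A minimal sketch of that mapping (the WTLS estimation itself is not reproduced here):

```python
import numpy as np

def gibbs_to_rotation(g):
    # Skew-symmetric cross-product matrix [g]x.
    gx = np.array([[0.0, -g[2], g[1]],
                   [g[2], 0.0, -g[0]],
                   [-g[1], g[0], 0.0]])
    I = np.eye(3)
    # Cayley transform: solve (I - [g]x) R = (I + [g]x) for R.
    return np.linalg.solve(I - gx, I + gx)

theta = np.pi / 3
R = gibbs_to_rotation(np.tan(theta / 2) * np.array([0.0, 0.0, 1.0]))
print(np.round(R, 3))  # rotation by 60 degrees about the z-axis
```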