This paper presents a topic-enhanced recurrent autoencoder model to improve the accuracy of sentiment classification of short texts. First, the concept of recurrent autoencoder is proposed to tackle the problems in re...
Non-convex models, like deep neural networks, have been widely used in machine learning applications. Training non-convex models is a difficult task owing to their saddle points. Recently, stochastic normalized gradient descent (SNGD), which updates the model parameter by a normalized gradient in each iteration, has attracted much attention. Existing results show that SNGD can escape saddle points better than classical training methods such as stochastic gradient descent (SGD). However, none of the existing studies has provided a theoretical proof of the convergence of SNGD for non-convex problems. In this paper, we first prove the convergence of SNGD for non-convex problems. Moreover, we prove that SNGD achieves the same computation complexity as SGD. In addition, based on our convergence proof, we find that SNGD must adopt a small constant learning rate to guarantee convergence, which makes SNGD perform poorly when training large non-convex models in practice. Hence, we propose a new method, called stagewise SNGD (S-SNGD), to improve the performance of SNGD. Unlike SNGD, in which a small constant learning rate is necessary for the convergence guarantee, S-SNGD can adopt a large initial learning rate and reduce it stage by stage. The convergence of S-SNGD can also be proved theoretically for non-convex problems. Empirical results on deep neural networks show that S-SNGD outperforms SNGD in terms of both training loss and test accuracy.
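The normalized update that distinguishes SNGD from SGD can be sketched in a few lines; the function names and the geometric stagewise decay rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sngd_step(w, grad, lr):
    """One stochastic normalized gradient descent update: the step
    direction is the mini-batch gradient scaled to unit norm, so the
    step length is governed by the learning rate alone."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return w  # stationary point of this mini-batch loss
    return w - lr * grad / norm

def stagewise_schedule(lr0, decay, stage):
    """Stagewise learning rate in the spirit of S-SNGD: a large initial
    rate, reduced by a constant factor at each stage (assumed rule)."""
    return lr0 * (decay ** stage)
```

Because the step length is fixed at the learning rate, a small constant rate is what forces slow progress for plain SNGD, which is exactly what the stagewise schedule relaxes.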
ISBN (print): 7543909405
A speech coding/decoding algorithm that combines the MBE and LPC speech models is proposed. In this model, the spectral envelope is represented by linear prediction coefficients, which are coded as line spectrum frequencies (LSFs). The coder operates at 2.4 kbps with much higher synthesized-speech quality than LPC-10e and lower computational complexity than CELP, VSELP, and similar coders. It is therefore particularly attractive for VLSI implementation.
Adaptive beamforming with a large-scale sensor array mainly suffers from two limitations. One is an insufficient number of training snapshots, which usually results in an ill-posed sample covariance matrix in many real applications. The other is the beamformer's high computational complexity, which severely restricts online processing. To overcome these two limitations, two fast and robust adaptive beamforming algorithms are proposed in this paper, which draw on linear kernel approaches and formulate the weight vector as a linear combination of the training samples and the signal steering vector. The proposed algorithms only need to calculate a low-dimensional combination vector instead of the high-dimensional adaptive weight vector, which remarkably reduces the computational complexity. Moreover, regularization techniques are utilized to suppress the excessive variation of the combination vector caused by an underdetermined estimate of the Gram matrix. Experimental results show that the proposed algorithms achieve better performance and lower computational complexity than algorithms in the literature. In particular, like kernel approaches, the proposed algorithms perform well in the small-sample case. (c) 2021 Published by Elsevier B.V.
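As a rough illustration of the reduced-dimension idea (solving for a (K+1)-dimensional combination vector rather than an N-dimensional weight vector), here is a minimal MVDR-style sketch; the variable names and the simple ridge regularizer are assumptions, not the paper's algorithms:

```python
import numpy as np

def reduced_dim_beamformer(X, a, lam=1e-2):
    """Sketch: constrain the weight vector to the span of the K training
    snapshots plus the steering vector, so only a (K+1)-dimensional
    combination vector is solved for. X: (N, K) snapshots, a: (N,)
    steering vector; lam is an assumed ridge regularizer on the reduced
    (K+1) x (K+1) matrix, standing in for the paper's regularization."""
    N, K = X.shape
    B = np.column_stack([X, a])                    # (N, K+1) basis
    R = X @ X.conj().T / K                         # sample covariance
    G = B.conj().T @ R @ B + lam * np.eye(K + 1)   # regularized reduced matrix
    c = np.linalg.solve(G, B.conj().T @ a)         # low-dimensional solve
    w = B @ c
    return w / (a.conj() @ w)                      # enforce w^H a = 1
```

The linear solve is (K+1)-dimensional rather than N-dimensional, which is where the complexity reduction for large arrays comes from.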
We provide evidence that it is computationally difficult to approximate the partition function of the ferromagnetic q-state Potts model when q > 2. Specifically, we show that the partition function is hard for the complexity class #RHΠ1 under approximation-preserving reducibility. Thus, it is as hard to approximate the partition function as it is to find approximate solutions to a wide range of counting problems, including determining the number of independent sets in a bipartite graph. Our proof exploits the first-order phase transition of the "random cluster" model, a probability distribution on graphs that is closely related to the q-state Potts model.
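For concreteness, the partition function in question can be written down directly. The brute-force enumeration below (exponential in the number of vertices) just defines the object whose approximation is shown to be hard; it is of course not an approximation algorithm:

```python
import itertools
import math

def potts_partition(n, edges, q, beta):
    """Exact partition function of the q-state Potts model on a graph
    with n vertices and the given edge list: a sum over all q^n spin
    assignments, weighting each by exp(beta * #monochromatic edges).
    Ferromagnetic for beta > 0."""
    Z = 0.0
    for sigma in itertools.product(range(q), repeat=n):
        mono = sum(1 for u, v in edges if sigma[u] == sigma[v])
        Z += math.exp(beta * mono)
    return Z
```

At beta = 0 every assignment has weight 1, so Z = q^n; a single edge with q = 2 gives Z = 2e^beta + 2, which is a quick sanity check on the definition.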
For a cognitive radio network (CRN) in which a set of secondary users (SUs) competes for a limited number of channels (spectrum resources) belonging to primary users (PUs), channel allocation is a challenge that dominates the throughput and congestion of the network. In this paper, the channel allocation problem is first formulated as a 0-1 integer programming optimization that accounts for the overall utility of both the primary and secondary systems. Inspired by matching theory, a many-to-one matching game is then used to remodel the channel allocation problem, and the corresponding PU-proposing deferred acceptance (PPDA) algorithm is proposed to yield a stable matching. We compare the performance and computational complexity of these two solutions. Numerical results demonstrate the efficiency of the proposed scheme and quantify its communication overhead.
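A generic proposer-side deferred acceptance routine for many-to-one matching can be sketched as follows; the data layout and tie-handling are illustrative assumptions, not the paper's PPDA specification:

```python
def deferred_acceptance(pu_prefs, su_prefs, quota):
    """Proposer-side deferred acceptance for many-to-one matching.
    pu_prefs[p]: SUs in channel p's preference order (p proposes).
    su_prefs[s]: PUs in SU s's preference order (lower index = better).
    quota[p]: max number of SUs channel p can admit.
    Returns a stable matching as a dict su -> pu."""
    rank = {s: {p: i for i, p in enumerate(prefs)} for s, prefs in su_prefs.items()}
    next_prop = {p: 0 for p in pu_prefs}   # next preference index per PU
    matched = {}                           # su -> pu (tentative)
    admitted = {p: set() for p in pu_prefs}
    free = [p for p in pu_prefs if quota[p] > 0]
    while free:
        p = free.pop()
        while len(admitted[p]) < quota[p] and next_prop[p] < len(pu_prefs[p]):
            s = pu_prefs[p][next_prop[p]]
            next_prop[p] += 1
            if p not in rank[s]:
                continue                   # s finds p unacceptable
            if s not in matched:           # s tentatively accepts p
                matched[s] = p
                admitted[p].add(s)
            elif rank[s][p] < rank[s][matched[s]]:
                old = matched[s]           # s trades up; old PU re-proposes
                admitted[old].discard(s)
                matched[s] = p
                admitted[p].add(s)
                if len(admitted[old]) < quota[old]:
                    free.append(old)
    return matched
```

As in any deferred acceptance scheme, acceptances stay tentative until the run ends, which is what guarantees the final matching is stable.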
Deep neural networks have achieved great success in many pattern recognition tasks. However, large model size and high computational cost limit their application in resource-limited systems. In this paper, our focus is to design a lightweight and efficient convolutional neural network architecture by directly training a compact network for image recognition. To achieve a good balance among classification accuracy, model size, and computational complexity, we propose a lightweight convolutional neural network architecture named IIRNet for resource-limited systems. The new architecture is built from Intensely Inverted Residual blocks (IIR blocks) to decrease the redundancy of the convolutional blocks. By utilizing two new operations, intensely inverted residuals and multi-scale low-redundancy convolutions, the IIR block greatly reduces model size and computational cost while matching the classification accuracy of state-of-the-art networks. Experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate IIRNet's superior trade-offs among classification accuracy, computational complexity, and model size compared to mainstream compact network architectures. (C) 2019 Elsevier B.V. All rights reserved.
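The parameter savings from factorizing a convolution into an inverted-residual form can be made concrete with simple weight counts. The block structure below follows the generic MobileNetV2-style pattern (pointwise expand, depthwise, pointwise project), used only as an assumed stand-in; the IIR block's exact design differs:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def inverted_residual_params(c, expand, k):
    """Weights in a generic inverted residual block on c channels:
    pointwise expand (c -> c*expand), depthwise k x k, pointwise
    project (c*expand -> c). Illustrates why inverted-residual designs
    shrink model size; not the exact IIR block."""
    hidden = c * expand
    return c * hidden + hidden * k * k + hidden * c
```

For 64 channels with expansion 6, the factorized block carries 52,608 weights, versus 1,327,104 for a single standard 3x3 convolution at the same expanded width of 384 channels.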
The self-organizing map (SOM) is a traditional neural network algorithm used for feature extraction, clustering, visualization, and data exploration. However, the traditional SOM's search for the winner neuron is computationally expensive, especially when treating high-dimensional data. In this paper, we propose a novel hierarchical SOM search algorithm that significantly reduces this cost. It is shown here that the computational cost of the proposed approach to search for the winner neuron is reduced to O(D_1 + D_2 + ... + D_N) instead of O(D_1 × D_2 × ... × D_N), where D_j is the number of neurons along dimension d_j of the feature map. At the same time, the new algorithm maintains all the merits and qualities of the traditional SOM. Experimental results show that the proposed algorithm is a good alternative to the traditional SOM, especially in high-dimensional feature space problems.
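The complexity claim is easy to make concrete: exhaustive winner search touches every neuron in the map, while a search that resolves one map dimension at a time touches only the sum of the dimension sizes. A two-line sketch (function names are ours):

```python
from functools import reduce
from operator import mul

def flat_search_cost(dims):
    """Distance evaluations for exhaustive winner search: one per
    neuron, i.e. the product D_1 * D_2 * ... * D_N."""
    return reduce(mul, dims, 1)

def hierarchical_search_cost(dims):
    """Evaluations when the winner is located one map dimension at a
    time, as in the proposed hierarchical search: D_1 + D_2 + ... + D_N."""
    return sum(dims)
```

For a 10 x 10 x 10 map the gap is already 1000 evaluations versus 30, and it widens multiplicatively with each added dimension.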
Hyperspectral imaging has been attracting considerable interest as it provides spectrally rich acquisitions useful in several applications, such as remote sensing, agriculture, astronomy, geology, and medicine. Hyperspectral devices based on compressive acquisition have appeared recently as an alternative to conventional hyperspectral imaging systems and allow data sampling with fewer acquisitions than classical imaging techniques, even below the Nyquist rate. However, compressive hyperspectral imaging requires a reconstruction algorithm to recover all the data from the raw compressed acquisition. The reconstruction process is one of the limiting factors for the spread of these devices, as it is generally time-consuming and carries a high computational burden. Algorithmic and hardware acceleration with embedded and parallel architectures (e.g., GPUs and FPGAs) can considerably speed up image reconstruction, making hyperspectral compressive systems suitable for real-time applications. This paper provides an in-depth analysis of the required performance in terms of computing power, data memory, and bandwidth, taking a compressive hyperspectral imaging system and a state-of-the-art reconstruction algorithm as an example. The analysis shows that real-time operation is possible by combining several approaches, namely exploiting system-matrix sparsity and reducing bandwidth by appropriately tuning data value encoding.
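To see how system-matrix sparsity reduces the arithmetic load, compare a dense matrix-vector product (m * n multiply-adds) with a compressed sparse row (CSR) product, whose work scales with the number of nonzeros. This plain-Python sketch is illustrative only and not tied to the paper's algorithm or hardware:

```python
import numpy as np

def dense_matvec_flops(m, n):
    """Multiply-adds for a dense m x n matrix-vector product."""
    return m * n

def csr_matvec(data, indices, indptr, x):
    """CSR sparse matrix-vector product: data holds the nonzeros,
    indices their column positions, and indptr the per-row extents.
    Total work is proportional to len(data), i.e. to the nonzeros,
    which is how reconstruction exploits system-matrix sparsity."""
    m = len(indptr) - 1
    y = np.zeros(m)
    for i in range(m):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y
```

Real implementations use optimized sparse kernels, but the operation count argument is the same: a matrix that is 1% dense cuts both arithmetic and memory traffic by roughly two orders of magnitude.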
Increasing population and water use, rising pollution of water resources, and climate change affect the quantity and quality of water resources. Reservoir operation is an important tool for water supply that can be optimized by simulation-optimization considering the impact of climate change on water quality. This study presents a simulation-optimization approach linking the CE-QUAL-W2 hydrodynamic model with a firefly algorithm k-nearest neighbor (FA-KNN) model to obtain optimal reservoir discharges that achieve water quality objectives under climate change. The developed algorithm overcomes the computational burden of CE-QUAL-W2: the FA-KNN hybrid algorithm optimizes total dissolved solids (TDS) with computational efficiency beyond what could be achieved with CE-QUAL-W2 simulations alone. The approach is evaluated on the Aidoghmoush Reservoir (East Azerbaijan, Iran). Overall, 36 simulation-optimization scenarios for dry and wet years under baseline and climate change conditions are evaluated, considering three initial reservoir water levels (minimum, average, and normal) and three thresholds for assessing the hybrid algorithm. The TDS released from the reservoir in wet years would be acceptable for agricultural use; in dry years, on average, the TDS would not be acceptable for 24 days per year under climate change. The reservoir undergoes complete mixing in winter, becomes stratified in spring and summer, and is close to complete mixing in autumn. The highest TDS in the reservoir would occur during summer in dry years under climate change, reaching approximately 2,645 g/m^3. (C) 2021 American Society of Civil Engineers.
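The surrogate idea, predicting a water quality metric for a candidate release schedule from previously simulated schedules instead of re-running the hydrodynamic model, can be sketched with a plain k-nearest-neighbor regressor. This generic sketch does not reproduce the paper's FA-KNN coupling, features, or thresholds:

```python
import numpy as np

def knn_surrogate(X_train, y_train, x_query, k=3):
    """Predict a quality metric (e.g. TDS) for a candidate operating
    schedule x_query as the mean over its k nearest previously
    simulated schedules, avoiding one expensive simulation run.
    X_train: (n, d) simulated schedules; y_train: (n,) their metrics."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(y_train[nearest]))
```

In a simulation-optimization loop, the optimizer evaluates most candidates through such a surrogate and reserves full CE-QUAL-W2 runs for the few candidates that need verification.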