In recent years, deep learning methods, especially convolutional neural networks (CNNs), have shown powerful discriminative ability in the field of remote sensing change detection. However, the multi-temporal remote s...
In the complex context of 5G ultra-dense networks, the unprecedented growth in traffic and computing demand has made the interaction between edge devices and edge servers extremely intricate due to the time-varying na...
Authors:
Du, Sichun; Zhu, Haodi; Zhang, Yang; Hong, Qinghui
Hunan University, College of Computer Science and Electronic Engineering, Changsha 418002, China
Shenzhen University, Computer Vision Institute, School of Computer Science and Software Engineering, National Engineering Laboratory for Big Data System Computing Technology, Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
Address event representation (AER) object recognition task has attracted extensive attention in neuromorphic vision processing. The spike-based and event-driven computation inherent in the spiking neural network (SNN)...
In this article, the power control and interference pricing problems between the macro base station (MBS) and small cell base stations (SBSs) in Heterogeneous networks (HetNets) are investigated. To optimally allocate...
In computer science, substring search or string matching is a challenging problem when text resources are very large. The productivity of diverse scraping applications depends on the effectiveness of searching algorithms. P...
The utilization of multi-agent systems (MAS) has become increasingly prevalent across various sectors, owing to their notable efficacy in execution. However, numerous hazards exist that can undermine the integrity of an agent and threaten the overall security of the system. It is therefore crucial to carefully evaluate and address security vulnerabilities when building MAS. This survey assesses different models that aim to guarantee the security of MAS, developed from principles related to agent functionality and communication. The study offers a comprehensive examination and categorization of the prevailing threats targeting MAS. Following this, we analyze and assess the diverse security strategies described in the literature, classifying them into prevention, detection, and resiliency approaches based on their established credibility and reliability. Finally, we recommend the optimal security approach for particular types of threats, considering the most recent breakthroughs in the field.
When identifying facial expressions using a set of salient features, reversible neural network plays a crucial role. In order to create a prominent feature set, these salient features are extracted from a face image u...
Handwritten digit recognition is a significant challenge in the field of machine learning, particularly for pattern recognition and computer vision applications. It has found applications in various areas, such as identifying digits on utility maps, postal zip codes, bank check amounts, and more. Offline handwritten digits possess distinct characteristics, including size, orientation, position, and thickness. Each person's handwriting is unique, which makes the classification process more difficult. Additionally, the high similarity between certain digits and the overfitting problem with high-dimensional data can increase computational time and cost. Consequently, numerous researchers have developed and implemented different machine learning algorithms to address the issue of recognizing handwritten digits. This paper introduces a novel approach to improve the classification performance and accuracy of handwritten digit recognition. The proposed method utilizes an ensemble majority-vote classifier, which combines three classifiers in each experiment drawn from a range of algorithms: Naïve Bayes (NB), Adaptive Boosting (AdaBoost), K-Nearest Neighbors (K-NN), Multi-Layer Perceptron (MLP), eXtreme Gradient Boosting (XGBoost), and Decision Tree (DT). To validate the approach, we used the free public MNIST handwritten dataset to identify the classifier combination with the highest accuracy. The experimental results demonstrated that the ensemble of MLP, XGBoost, and DT outperforms AdaBoost, K-NN, and NB, achieving the highest accuracy of 98%. Compared with previous works, the recognition accuracy was improved.
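For orientation, the following is a minimal sketch of the kind of hard-voting ensemble this abstract describes, combining MLP, XGBoost, and Decision Tree (the best-performing trio reported) with scikit-learn's VotingClassifier. The hyperparameters and the use of scikit-learn's small digits dataset in place of MNIST are assumptions made purely to keep the example self-contained; they are not the authors' settings.

```python
# Illustrative majority-vote ensemble (hard voting) over MLP, XGBoost, and
# Decision Tree. Hyperparameters and the small digits dataset (standing in
# for MNIST) are assumptions for a self-contained example.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                              random_state=42)),
        ("xgb", XGBClassifier(n_estimators=200, max_depth=4,
                              eval_metric="mlogloss", random_state=42)),
        ("dt", DecisionTreeClassifier(max_depth=20, random_state=42)),
    ],
    voting="hard",  # each classifier casts one vote per sample
)

ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```

Swapping in NB, AdaBoost, or K-NN as estimators reproduces the other three-classifier combinations the paper compares.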
ISBN (Print): 9798350323481
Emerging cross-point memory can in-situ perform vector-matrix multiplication (VMM) for energy-efficient scientific computation. However, parasitic-capacitance-induced row charging and discharging latency is a major performance bottleneck of subarray VMM. We propose a memory-timing-compliant bulk VMM processing-using-memory design with row-access and column-access co-optimization, derived from rethinking read-access commands and μ-op timing. We propose a row-level-parallelism-adaptive timing-termination mechanism that reduces the tail latency of tRCD and tRP by exploiting nonlinear row charging, and a bulk-interleaved row-column-cooperative VMM access mechanism that reduces tRAS and overlaps CL without increasing column ADC precision. Evaluations show that our design achieves a 5.03× performance speedup over an aggressive baseline.
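As background for readers unfamiliar with in-situ VMM, the sketch below models the basic crossbar operation the abstract builds on: input voltages on the word lines multiply programmed cell conductances, and each bit line accumulates a dot product that a column ADC then quantizes. The array size, value ranges, ADC precision, and names are assumptions for illustration only; the paper's tRCD/tRP/tRAS/CL timing co-optimization is a device-level mechanism not modeled here.

```python
# Illustrative in-situ VMM in a cross-point array (not the paper's design):
# column current j is the analog dot product sum_i x_i * G[i, j].
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 256, 64                            # subarray dimensions (assumed)
G = rng.uniform(0.0, 1.0, size=(rows, cols))    # programmed cell conductances
x = rng.uniform(0.0, 1.0, size=rows)            # input voltages on word lines

# Ideal analog result: Kirchhoff current summation on each bit line.
i_col = x @ G

# Column ADCs quantize the accumulated currents to a fixed precision.
adc_bits = 8
step = i_col.max() / (2 ** adc_bits - 1)
i_digital = np.round(i_col / step) * step

print("max quantization error:", np.max(np.abs(i_digital - i_col)))
```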
ISBN (Digital): 9798350350579
ISBN (Print): 9798350350586
The memristive Computing-in-Memory (CIM) system can efficiently accelerate matrix-vector multiplication (MVM) operations through in-situ computing. The data layout has a significant impact on the communication performance of CIM systems. Existing software-level communication optimizations aim to reduce communication distance by carefully designing static data layouts, while wear-leveling (WL) and error mitigation methods use dynamic scheduling to enhance system reliability, resulting in randomized data layouts and increased communication overhead. Besides, existing CIM compilers directly map data to physical crossbars and generate instructions, which causes inconvenience for dynamic scheduling. To address these challenges of balancing communication performance and reliability while coordinating existing CIM compilers and dynamic scheduling, we propose a disorder-resistant computation translation layer (DRCTL), which improves system lifetime and communication performance through co-optimization of data layout and dynamic scheduling. It consists of three parts: (1) We propose an address conversion method for dynamic scheduling, which updates the addresses in the instruction stream after dynamic scheduling, thereby avoiding recompilation. (2) Dynamic scheduling strategy for reliability improvement: we propose a hierarchical wear-leveling (HWL) strategy, which reduces communication by increasing scheduling granularity. (3) Communication optimization for dynamic scheduling: we propose data layout-aware selective remapping (LASR), which helps dynamic scheduling methods improve communication locality and reduce latency by exploiting data dependencies. The experiments demonstrate that HWL extends lifetime by 100.3-205.9× compared to not using WL. Even with a slight lifetime decrease compared to the state-of-the-art WL (TIWL), it still supports continuous neural network training for 7 years. After applying LASR to HWL, the number of execution cycles, energy consumption
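To make the address-conversion idea in point (1) concrete, here is a toy sketch of the general technique: the compiler emits instructions against logical crossbar IDs, a translation table maps them to physical crossbars, and a wear-leveling swap only updates the table rather than triggering recompilation. All names (Instr, RemapTable, swap) are hypothetical and do not reflect DRCTL's actual interfaces.

```python
# Toy sketch of address conversion for dynamic scheduling: instructions refer
# to logical crossbars; a logical-to-physical table absorbs wear-leveling
# moves, so the compiled instruction stream stays valid without recompiling.
# All names are hypothetical, not DRCTL's API.
from dataclasses import dataclass

@dataclass
class Instr:
    op: str            # e.g. "MVM" or "MOV"
    logical_xbar: int  # crossbar ID as seen by the compiler

class RemapTable:
    def __init__(self, num_xbars: int):
        # Identity mapping at first: logical i -> physical i.
        self.log2phys = list(range(num_xbars))

    def swap(self, a: int, b: int) -> None:
        """Exchange the physical crossbars assigned to logical IDs a and b
        (data is migrated accordingly); only the mapping changes."""
        self.log2phys[a], self.log2phys[b] = self.log2phys[b], self.log2phys[a]

    def translate(self, instr: Instr) -> int:
        return self.log2phys[instr.logical_xbar]

program = [Instr("MVM", 0), Instr("MVM", 1), Instr("MOV", 0)]
table = RemapTable(num_xbars=4)

table.swap(0, 2)  # wear-leveling relocates logical crossbar 0
for ins in program:
    print(ins.op, "-> physical crossbar", table.translate(ins))
```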