The use of machine learning and deep learning algorithms has driven the rise of Artificial Intelligence (AI). AI aims to automate tasks by limiting human intervention. It is widely used in IT, healthcare, finance,...
Every single day, the number of users is increasing as the world embraces technology, and ever more big data is being produced; big data and cloud technologies are the driving forces behind the scenes, because big data and cloud computing ar...
Deep neural networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications. With the rapid development of DNNs, efficient hardware architectures for deploying DNN-based applications on edge devices have been extensively studied. Emerging nonvolatile memories (NVMs), with their better scalability, nonvolatility, and good read performance, are found to be promising candidates for deploying DNNs. However, despite the promise, emerging NVMs often suffer from reliability issues, such as stuck-at faults, which decrease the chip yield/memory lifetime and severely impact the accuracy of DNNs. A stuck-at cell can be read but not reprogrammed; thus, stuck-at faults in NVMs may or may not result in errors depending on the data to be stored. By reducing the number of errors caused by stuck-at faults, the reliability of a DNN-based system can be enhanced. This article proposes CRAFT, i.e., criticality-aware fault-tolerance enhancement techniques to enhance the reliability of NVM-based DNNs in the presence of stuck-at faults. A data block remapping technique is used to reduce the impact of stuck-at faults on DNN accuracy. Additionally, by performing bit-level criticality analysis on various DNNs, the critical bit positions in network parameters that can significantly impact the accuracy are identified. Based on this analysis, we propose an encoding method which effectively swaps the critical bit positions with those of noncritical bits when more errors (due to stuck-at faults) are present in the critical bits. Experiments on the CRAFT architecture with various DNN models indicate that the robustness of a DNN against stuck-at faults can be enhanced by up to 105 times on the CIFAR-10 dataset and up to 29 times on the ImageNet dataset with only a minimal amount of storage overhead, i.e., 1.17%. Being orthogonal, CRAFT can be integrated with existing fault-tolerance schemes to further enhance the robustness of DNNs aga
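The key observation above, that a stuck-at fault causes an error only when the stored bit disagrees with the stuck value, can be illustrated with a minimal sketch. This is not CRAFT's actual encoding; it models an NVM word with a hypothetical inversion-based scheme (one flag bit per word), where the data is stored as-is or bit-inverted, whichever disagrees with fewer stuck cells.

```python
# Minimal illustrative model of stuck-at faults in an NVM word.
# A stuck cell always reads its stuck value, so an error occurs only
# when the data bit differs from the stuck value.

def count_errors(data_bits, stuck):
    """stuck: dict {bit_position: stuck_value}; count mismatches."""
    return sum(1 for pos, val in stuck.items() if data_bits[pos] != val)

def encode(data_bits, stuck):
    """Store data as-is or inverted, choosing the version with fewer
    stuck-at mismatches; a flag bit records the choice (hypothetical
    scheme, not the paper's encoding)."""
    inverted = [1 - b for b in data_bits]
    if count_errors(inverted, stuck) < count_errors(data_bits, stuck):
        return inverted, 1   # flag = 1: stored inverted
    return data_bits, 0

data = [1, 0, 1, 1, 0, 0, 1, 0]
stuck = {0: 0, 3: 1, 5: 1}            # cells 0, 3, 5 are stuck
raw_errors = count_errors(data, stuck)        # 2 errors as-is
stored, flag = encode(data, stuck)            # inverted form has only 1
print(raw_errors, count_errors(stored, stuck), flag)   # -> 2 1 1
```

The same error-counting viewpoint underlies the remapping and critical-bit swapping ideas: placing the bits that matter most where the memory happens to agree with them.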
Feature selection (FS) identifies the most important features during preprocessing to enhance classification accuracy. The objective of FS techniques is to reduce the number of input variables and the computational load to ...
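As a concrete illustration of the filter-style FS idea described above (this is a generic sketch, not the specific technique from the abstract), features can be ranked by a Fisher-like separation score and only the top-k retained, reducing the input dimension:

```python
import numpy as np

# Rank features by |mu0 - mu1| / (s0 + s1) between the two classes and
# keep the top-k. Features 2 and 4 below are made informative on purpose.

def fisher_scores(X, y):
    X0, X1 = X[y == 0], X[y == 1]
    num = np.abs(X0.mean(axis=0) - X1.mean(axis=0))
    den = X0.std(axis=0) + X1.std(axis=0) + 1e-12
    return num / den

def select_top_k(X, y, k):
    idx = np.argsort(fisher_scores(X, y))[::-1][:k]
    return np.sort(idx)

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5))
X[:, 2] += 3.0 * y          # informative feature
X[:, 4] -= 2.0 * y          # informative feature
print(select_top_k(X, y, 2))   # -> [2 4]
```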
This research introduces real-time monitoring and localization of product stock using the First-In-First-Out (FIFO) method with radio frequency identification (RFID) pressure-sensing tags. The proposed FIFO system has RFID...
ISBN: (Print) 9798331543952
This study investigates the application of clustering algorithms to data collected from color sensors providing frequency signals during the cultivation process. Continuous sampling allowed for real-time data collection, and variations in temperature and humidity were observed to influence signal reception. The dataset's large volume and varying sample sizes necessitated the evaluation of several clustering models for efficient data processing. The experiment tested K-Means, K-Means++, Mini-Batch K-Means, and Bisecting K-Means algorithms. The results reveal that the computation time for each model increased with the number of groups. Specifically, the K-Means, K-Means++, and Mini-Batch K-Means models exhibited shorter computation times, ranging from 0.02 to 0.03 seconds for two-group scenarios, while the Bisecting K-Means algorithm took significantly longer, approximately 0.10 seconds. Despite the variation in processing times, all models performed similarly in terms of clustering quality, with an average Silhouette Score of 0.82. This score indicates that the algorithms effectively separated data points into well-defined groups while maintaining high accuracy in clustering. Additionally, the analysis highlighted the importance of balancing computation time and clustering quality, particularly as the number of clusters increases. These findings suggest that while computational efficiency may vary across models, the clustering quality remains robust, providing valuable insights for data analysis in agricultural sensor networks. The study's results demonstrate that each clustering model is capable of handling large datasets with high accuracy in grouping, making them applicable to real-world agricultural data processing scenarios. In conclusion, the experiment confirms that all tested models, despite differing in computation time, successfully partitioned the data with high-quality results, offering promising applications for data analysis and optimization in agricult
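The trade-off reported above, similar Silhouette Scores but different computation times across K-Means variants, can be reproduced on synthetic data (the original sensor dataset is not available here, so this is a hedged sketch using scikit-learn's `KMeans` and `MiniBatchKMeans`):

```python
import time
import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Two well-separated synthetic blobs stand in for the color-sensor data.
X, _ = make_blobs(n_samples=3000, centers=2, cluster_std=0.8,
                  random_state=42)

for Model in (KMeans, MiniBatchKMeans):
    t0 = time.perf_counter()
    labels = Model(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    dt = time.perf_counter() - t0
    score = silhouette_score(X, labels)
    print(f"{Model.__name__}: {dt:.3f}s, silhouette={score:.2f}")
```

As in the study, clustering quality (Silhouette) should be nearly identical across models, while wall-clock time depends on the algorithm and the number of groups.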
Rice is an important food crop for human beings. Accurately distinguishing different varieties and sowing methods of rice on a large scale can provide more accurate information for rice growth monitoring, yield estimation, and phenological monitoring, which has significance for the development of modern agriculture. Compact polarimetric (CP) synthetic aperture radar (SAR) provides multichannel information and shows great potential for rice monitoring and classification. However, the use of machine learning methods to build classification models is a controversial topic. In this paper, the advantages of CP SAR data, the powerful learning ability of machine learning, and the important factors of the rice growth cycle were taken into account to achieve high-precision and fine classification of rice paddies. First, CP SAR data were simulated by using the seven temporal RADARSAT-2 C-band data sets. Then, twenty-two CP SAR parameters were extracted from each of the seven temporal CP SAR data sets. In addition, we fully considered the degree of change of CP SAR parameters on a time scale (ΔCP_(DoY)). Six machine learning methods were employed to carry out the fine classification of rice paddies. The results show that machine learning classification methods based on multitemporal CP SAR data can obtain better results in the fine classification of rice paddies by considering the ΔCP_(DoY) parameters. The overall accuracy is greater than 95.05%, and the Kappa coefficient is greater than ***. Among them, the random forest (RF) and support vector machine (SVM) achieve the best results, with overall accuracies reaching 97.32% and 97.37%, respectively, and Kappa coefficient values reaching 0.965 and 0.966, respectively. For the two types of rice paddies, the average accuracy of the transplant hybrid (T-H) rice paddy is greater than 90.64%, and the highest accuracy is 95.95%. The average accuracy of the direct-sown japonica (D-J) rice paddy is greater than 92.57%, and the highest accuracy is 96.13%.
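The evaluation workflow described above, training a random forest and reporting overall accuracy and the Kappa coefficient, can be sketched as follows. The multitemporal CP SAR parameters are not reproduced here, so synthetic class-separable features stand in for the twenty-two CP parameters; only the pipeline shape mirrors the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_feat = 600, 22                         # 22 stand-in "CP parameters"
y = rng.integers(0, 2, n)                   # two paddy types (e.g., T-H vs D-J)
X = rng.normal(size=(n, n_feat)) + 1.5 * y[:, None]   # class-dependent shift

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)
print(f"OA={accuracy_score(yte, pred):.3f}, "
      f"kappa={cohen_kappa_score(yte, pred):.3f}")
```

On real CP SAR features the separability, and hence the OA/Kappa values, would of course depend on the parameters chosen and on including the ΔCP_(DoY) terms.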
With the extensive penetration of distributed renewable energy and self-interested prosumers, the emerging power market tends to enable user autonomy through bottom-up control and distributed optimization. This paper is devoted to solving the specific problems of distributed energy management, autonomous bidding, and peer-to-peer (P2P) energy sharing among prosumers. A novel cloud-edge-based We-Market is presented, where the prosumers, as edge nodes with independent control, balance electricity cost and thermal comfort by formulating a dynamic household energy management system (HEMS). Meanwhile, autonomous bidding is initiated by prosumers via the modified Stone-Geary utility function. In the cloud center, a distributed convergence bidding (CB) algorithm based on a consistency criterion is developed, which promotes faster and fairer bidding through interactive iteration with the edge nodes. Furthermore, the proposed scheme is built on top of a commercial cloud platform with sufficiently secure and scalable computing resources. Simulation results show the effectiveness and practicability of the proposed We-Market, which achieves a 15% cost reduction with shorter running time. Further analysis indicates better scalability, making the scheme more suitable for larger-scale We-Market implementation.
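The consistency-criterion idea behind the distributed CB algorithm can be illustrated with a plain consensus iteration. This is an assumption for illustration, not the paper's algorithm: each edge node holds a local price estimate and repeatedly averages with its neighbours; with a doubly stochastic weight matrix over a connected graph, all estimates converge to a common value.

```python
import numpy as np

def consensus(prices, W, iters=200):
    """Repeated weighted averaging; W doubly stochastic => converge
    to the mean of the initial bids (hypothetical toy model)."""
    x = np.asarray(prices, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Ring of 4 prosumers with Metropolis-style doubly stochastic weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
final = consensus([10.0, 14.0, 8.0, 12.0], W)
print(final)   # every entry approaches the average bid, 11.0
```

The actual CB algorithm additionally iterates with the cloud center and enforces fairness constraints; this sketch only shows why a consistency criterion drives the edge nodes to agreement.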
Museum ecosystems need digital transformation to provide added value to museums. This study discusses a conceptual model of the digital museum ecosystem based on artificial intelligence (AI) technology and the Internet of ...
The flourishing of deep learning frameworks and hardware platforms has created demand for an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. Among the existing deep learning compilers, TVM is well known for its efficiency in code generation and optimization across diverse hardware platforms. Meanwhile, the Sunway many-core processor renders itself a competitive candidate for its attractive computational power in both scientific computing and deep learning applications. This paper combines the trends in these two directions. Specifically, we propose swTVM, which extends the original TVM to support ahead-of-time compilation for architectures requiring cross-compilation, such as Sunway. In addition, we leverage architecture features during compilation, such as the core group for massive parallelism, DMA for high-bandwidth memory transfer, and local device memory for data locality, in order to generate efficient code for deep learning workloads on Sunway. The experimental results show that the code generated by swTVM achieves a 1.79x improvement in inference latency on average compared to the state-of-the-art deep learning framework on Sunway, across eight representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap between deep learning and the Sunway processor, particularly with productivity and efficiency in mind. We believe this work will encourage more people to embrace the power of deep learning and the Sunway many-core processor.
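The local-device-memory optimization mentioned above can be illustrated by classic loop tiling, a generic sketch of the kind of transformation a compiler applies, not swTVM's generated code: the computation is restructured into small blocks whose working set would fit in fast local memory before DMA brings in the next block.

```python
# Tiled matrix multiply: each (tile x tile) block of A, B, and C is the
# working set that would reside in local device memory on a core group.

def matmul_tiled(A, B, n, tile=4):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                # inner loops touch only one tile of each operand
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + tile, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C

n = 8
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i * j % 5) for j in range(n)] for i in range(n)]
C = matmul_tiled(A, B, n)
```

In Python the tiling brings no speedup, of course; the point is the access pattern, which on Sunway lets the compiler stage each tile through local memory via DMA instead of streaming from main memory.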