We propose a novel icing prediction model consisting of a heterogeneous information embedding layer, a direction-aware information aggregation layer, and a detection layer. In particular, we represent transmission lin...
Air pollution is a significant problem in numerous cities worldwide, impacting public health and the environment. We study three significant locations in the Dhaka division, including Kuril Bishow Road, Uttara, and T...
Reversible data hiding techniques in encrypted images (RDH-EI) have been extensively researched to safeguard the privacy of images. To enhance the embedding capability and security of RDH-EI algorithms, this work leverages the topological coherence within the input plaintext image to create additional space for embedding confidential information prior to image encryption. Initially, the prediction error of the input image is computed using a pixel prediction technique. Subsequently, a bit-plane serpentine rearrangement mechanism and a bit-stream compression method are employed to further enhance the process. This compression technique efficiently compresses the most significant bit-planes, resulting in a high-capacity space for embedding. Finally, the end user extracts the data and restores the input image using the corresponding keys. The results clearly show that the proposed approach not only attains a superior embedding capacity, with a maximum average embedding rate of 3.624 bpp, but also obtains commendable reconstructed image quality, with PSNR = +∞ and SSIM = 1.
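To make the reserving-room-before-encryption idea concrete, here is a minimal Python sketch, not the authors' exact scheme: it computes prediction errors with a simple left-neighbor predictor and estimates how much room run-length coding of the most significant bit-plane could free. The predictor, the 8-bit error mapping, and the run-length cost model are assumptions for illustration; the paper's serpentine rearrangement and bit-stream coder are not reproduced.

```python
# Illustrative sketch only: prediction-error image with a left-neighbor
# predictor, plus a rough estimate of how compressible its most significant
# bit-plane is via run-length coding. The coder format is an assumption.
import numpy as np

def prediction_errors(img: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbor; first column predicts itself."""
    pred = np.empty_like(img, dtype=np.int16)
    pred[:, 0] = img[:, 0]
    pred[:, 1:] = img[:, :-1]
    return img.astype(np.int16) - pred

def msb_plane(errors: np.ndarray) -> np.ndarray:
    """Most significant bit of the 8-bit magnitude-mapped error (0/1 array)."""
    mapped = np.clip(np.abs(errors), 0, 255).astype(np.uint8)
    return (mapped >> 7) & 1

def run_length_bits(bits: np.ndarray) -> int:
    """Rough cost (in bits) of run-length coding a flattened bit-plane."""
    flat = bits.ravel()
    runs = 1 + int(np.count_nonzero(np.diff(flat)))
    return runs * 9  # 1 value bit + 8-bit run length per run (assumed format)

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
plane_bits = img.size                      # raw size of one bit-plane
coded_bits = run_length_bits(msb_plane(prediction_errors(img)))
print(f"freed room (bits): {plane_bits - coded_bits}")
```

On natural images the prediction-error MSB plane is mostly zero, so the coded size is far below the raw plane size; the difference is the kind of space the scheme reuses for embedding.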
The goal of this paper is to provide an Internet of Things (IoT)-based approach for automated electricity metering. When measuring power using standard methods, inaccuracies and erroneous readings are possible, putting the data's integrity and authenticity at risk. The suggested method employs IoT-connected sensors and machine learning algorithms to detect abnormalities and inconsistencies in real-time energy-use statistics, and it offers several benefits over power metering systems in current use. The testing findings demonstrated that the proposed system is more reliable and effective, achieving a success rate of 0.957, accuracy of 0.943, recall of 0.967, an F1 score of 0.956, and an AUC of 0.982.
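The abstract does not name the detector, so the sketch below assumes an IsolationForest over synthetic hourly consumption readings, purely to illustrate the kind of real-time anomaly flagging described.

```python
# Minimal sketch of the anomaly check described above, assuming an
# IsolationForest (the paper does not specify its ML model) and synthetic
# hourly kWh readings standing in for IoT sensor data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=1.2, scale=0.1, size=(500, 1))    # typical hourly kWh
tampered = rng.normal(loc=0.2, scale=0.05, size=(10, 1))  # suspiciously low readings
readings = np.vstack([normal, tampered])

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = detector.predict(readings)          # -1 = anomaly, +1 = normal
print("flagged readings:", int((flags == -1).sum()))
```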
Approximate computing finds application in image processing, machine learning, data mining and multimedia data processing. The paper proposes a scalable and lightweight $4\times 4$ approximate multiplier, which is designed to efficiently utilize FPGA resources. We divide the partial product matrix and selectively choose the input variables for effective usage of the LUTs and carry chain. The error introduced by the approximation is confined to lower bit positions by assigning the affected lower bits to '1', which reduces the maximum error magnitude from 16 to 4. The proposed $4\times 4$ approximate multiplier achieves reductions in area, power and latency of 26.7%, 57.6% and 60.1%, respectively, compared to the Xilinx multiplier IP. The proposed multiplier is used as a building block to implement $8\times 8$ approximate multipliers for image sharpening applications, achieving a high PSNR value of 69.4 dB.
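The exact LUT and carry-chain design cannot be reconstructed from the abstract, but the error metrics it quotes can be evaluated exhaustively for any candidate 4x4 approximate multiplier with a short harness like the one below; the placeholder approximation inside is an assumption, not the proposed circuit.

```python
# Hedged sketch: exhaustive error-analysis harness for a 4x4 approximate
# multiplier. The placeholder approximation (forcing the two product LSBs
# to 1) is for illustration only, not the paper's LUT/carry-chain design;
# swap in any candidate function to reproduce its metrics.
def exact_mul(a: int, b: int) -> int:
    return a * b

def approx_mul(a: int, b: int) -> int:
    return (a * b) | 0b11  # assumed placeholder approximation

def error_metrics(mul):
    errs = [abs(mul(a, b) - exact_mul(a, b)) for a in range(16) for b in range(16)]
    return max(errs), sum(errs) / len(errs), sum(e > 0 for e in errs) / len(errs)

max_err, mean_err, err_rate = error_metrics(approx_mul)
print(f"max error={max_err}, mean error={mean_err:.3f}, error rate={err_rate:.3f}")
```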
In the context of Industry 4.0, Autonomous Guided Vehicles (AGVs) and other types of Multi-Agent Systems (MAS) have been widely used in large-scale warehouses and unmanned factories. Coordination of AGVs in shared space is drawing increasing attention from researchers, especially in the field of Multi-Agent Path Finding (MAPF). The MAPF problem has been studied extensively and many kinds of solutions exist, but not all MAPF solvers can be applied to AGVs, and little of the literature has focused on which MAPF solutions are suitable for AGV navigation. This review categorizes existing MAPF algorithms into five branches: Conflict-Based Search (CBS) algorithms, rule-based algorithms, priority-based algorithms, numerical-optimization-based algorithms and learning-based algorithms. Furthermore, we evaluate these algorithms in terms of their optimality, lifelong property, scalability and computational cost. Finally, we analyze which MAPF solvers are suitable for AGV navigation.
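As a concrete example of the priority-based branch, the sketch below plans agents one at a time with a space-time BFS against a reservation table; the toy grid and the omission of edge-conflict and goal-parking checks are simplifications for illustration.

```python
# Compact sketch of priority-based MAPF: each agent is planned in turn with
# BFS in (cell, time) space while avoiding cells already reserved by
# higher-priority agents. Edge conflicts and post-goal parking are ignored.
from collections import deque

def plan(start, goal, grid, reserved, horizon=50):
    """BFS in space-time; 'reserved' maps time -> set of occupied cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
            nr, nc, nt = r + dr, c + dc, t + 1
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and (nr, nc) not in reserved.get(nt, set())
                    and ((nr, nc), nt) not in seen and nt <= horizon):
                seen.add(((nr, nc), nt))
                frontier.append(((nr, nc), nt, path + [(nr, nc)]))
    return None

grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]          # 1 = obstacle
agents = [((0, 0), (2, 2)), ((2, 0), (0, 2))]     # (start, goal), in priority order
reserved, paths = {}, []
for start, goal in agents:
    path = plan(start, goal, grid, reserved)
    paths.append(path)
    for t, cell in enumerate(path):
        reserved.setdefault(t, set()).add(cell)
print(paths)
```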
Credit-lending organizations have resorted to the use of machine learning (ML) algorithms in the recent past to predict the probability of default of a business. The explainability of decisions made by traditional statistical algorithms like Logit models brings transparency to every stakeholder involved in the process. On the other hand, machine learning models like XGBoost and neural nets have achieved better accuracy scores, but their decisions are not easily comprehensible. In this paper, we propose a graph-based variable clustering (GVC) method as a filter-based approach to select prominent features while retaining as much variance as possible. Our experiments show that our GVC approach is not only almost 40 times faster than existing variable clustering methods but also retains 5% more variance than them. The feature set from the GVC approach performed better, with an average accuracy increase of 6%. The predictions on the GVC feature set were 98% accurate using the XGBoost algorithm.
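The GVC algorithm itself is not specified in the abstract; the following sketch only illustrates the general idea of graph-based variable clustering as a filter method: connect strongly correlated features, cluster by connected components, and keep one representative per cluster. The correlation threshold and representative-selection rule are assumptions.

```python
# Hedged illustration of graph-based variable clustering (not the authors'
# GVC): an edge joins features whose absolute correlation exceeds a
# threshold; each connected component becomes a cluster, represented by its
# highest-variance column. Threshold and selection rule are assumed.
import numpy as np
import pandas as pd
import networkx as nx

def graph_variable_clustering(df: pd.DataFrame, threshold: float = 0.8):
    corr = df.corr().abs()
    g = nx.Graph()
    g.add_nodes_from(df.columns)
    for i, a in enumerate(df.columns):
        for b in df.columns[i + 1:]:
            if corr.loc[a, b] >= threshold:
                g.add_edge(a, b)
    selected = []
    for cluster in nx.connected_components(g):
        selected.append(max(cluster, key=lambda c: df[c].var()))  # representative
    return selected

rng = np.random.default_rng(1)
x = rng.normal(size=500)
df = pd.DataFrame({"x1": x, "x2": x + rng.normal(scale=0.1, size=500),
                   "noise": rng.normal(size=500)})
print(graph_variable_clustering(df))   # keeps one of x1/x2 plus noise
```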
In this paper, we propose an automatic segmentation and fitting method for hand-drawn sketches based on a greedy strategy. This method first resamples and smoothly filters the input sketch stroke sequence, and then autom...
Phonetics is a crucial branch of linguistics that studies human speech sounds and is essential for language learning, speech therapy, and speech technology development. However, current Arabic speech systems cannot instantly analyze and detect mispronunciation of Arabic phonemes, offering only a recording feature for self-evaluation. This paper therefore presents a state-of-the-art approach using a Long Short-Term Memory (LSTM) model for detecting and discriminating between Arabic phonemes. The model was tested on a collected dataset for the Arabic phonemes /s/ 'sīn' and /sˤ/ 'ṣād' and demonstrated excellent performance in discriminating between them despite their close articulation. The model achieved an accuracy of 97.33% and outperformed other techniques proposed for small datasets. The model can be utilized either independently or integrated into a computer-assisted language learning (CALL) system. Furthermore, our findings suggest that there is significant room for improvement in this area, particularly in collecting large audio datasets for all the Arabic phonemes, refining the algorithms, and optimizing the data processing pipeline.
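A minimal sketch of a binary LSTM phoneme discriminator of this kind is shown below; the MFCC front end, sequence length and layer sizes are assumptions and may differ from the paper's model.

```python
# Sketch of a binary /s/ vs. /sˤ/ LSTM classifier over MFCC sequences.
# Feature extraction (librosa), sequence length and layer sizes are assumed.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_sequence(wav_path: str, n_mfcc: int = 13, max_frames: int = 100):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)
    mfcc = mfcc[:max_frames]
    pad = np.zeros((max_frames - len(mfcc), n_mfcc))
    return np.vstack([mfcc, pad])              # zero-padded to a fixed length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 13)),
    tf.keras.layers.Masking(mask_value=0.0),   # ignore zero padding
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # /s/ vs. /sˤ/
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=20, validation_split=0.1)  # X: (N, 100, 13)
```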
With the advancement of cloud computing technology, research on cloud computing task scheduling has become a hot topic. This paper mainly studies cloud computing scheduling methods for information systems based on swarm intelligence algorithms, including a cloud computing scheduling model based on particle swarm optimization (PSO), an improved Invasive Tumor Growth Optimization (ITGO) model structure, and multi-objective cloud computing task scheduling based on an improved artificial bee colony algorithm. This paper concludes that by introducing multi-objective optimization terms, replacing multi-objective optimization terms with fitness functions, initializing tumor cell set functions, and improving the search and selection strategies, the efficiency and quality of cloud computing task scheduling can be improved, realizing optimal utilization of cloud computing resources and meeting the needs of users.