Fog computing is an extension of cloud computing that introduces additional devices, called fog devices, near IoT devices to provide services on behalf of cloud servers. Although fog computing brings several advantages, the rapid growth of data generated by IoT devices increases communication and computational costs. Because data sensed by IoT devices may be correlated, there is a high probability of duplicate copies in the sensed data. Data deduplication is a compression technique that reduces communication and storage overhead by skipping the upload and storage of duplicate copies of data. However, deduplication in a fog-enabled IoT system introduces new security issues. In this paper, we propose a novel secure deduplication approach with dynamic key management for a fog-enabled IoT system. We introduce a multilayer encryption scheme with dynamic key management to prevent revoked users from accessing the data. We implement the proposed scheme in a realistic scenario using a Raspberry Pi and Firebase cloud services. The performance analysis shows that our approach achieves confidentiality and forward secrecy along with lower storage, computational, and communication costs.
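The skip-duplicate-uploads idea in this abstract can be sketched in a few lines. The hash-derived key and XOR keystream below are illustrative stand-ins for the paper's multilayer encryption scheme (a convergent-encryption-style toy, not real cryptography), and all names are invented for the sketch:

```python
import hashlib

# Toy sketch: deriving the key from the content itself means identical
# plaintexts yield identical ciphertexts, so the server can detect
# duplicates without reading the data. NOT secure -- illustration only.

def content_key(data: bytes) -> bytes:
    """Deterministic key from the data (convergent-encryption idea)."""
    return hashlib.sha256(data).digest()

def keystream_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR with a hash-based keystream (demo cipher, not secure)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class DedupStore:
    """Server-side store keyed by ciphertext fingerprint."""
    def __init__(self):
        self.blobs = {}
        self.uploads_skipped = 0

    def upload(self, ciphertext: bytes) -> str:
        tag = hashlib.sha256(ciphertext).hexdigest()
        if tag in self.blobs:
            self.uploads_skipped += 1   # duplicate: skip the upload/storage
        else:
            self.blobs[tag] = ciphertext
        return tag

store = DedupStore()
reading = b"temp=21.5C humidity=40%"
ct = keystream_encrypt(reading, content_key(reading))
tag1 = store.upload(ct)
tag2 = store.upload(ct)   # a second IoT device sensed the same value
```

Because XOR with the same keystream is its own inverse, decryption reuses `keystream_encrypt` with the same content-derived key.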
Details
ISBN (digital): 9798350330861
ISBN (print): 9798350330878
Phishing is a well-known cybercrime that is increasing rapidly day by day. It involves sending emails containing suspicious links or attachments designed to harvest the credentials or personal account information of the victim. These emails harm recipients, cause financial loss, and enable identity theft. Most fraudulent websites replicate the exact interface and uniform resource locator (URL) of legitimate websites, so security practitioners are searching for trustworthy and stable methods of detecting phishing URLs. In this paper, the focus is on using machine learning to identify phishing URLs. The approach analyzes a range of features present in both legitimate and phishing URLs. To identify phishing website URLs, we use Decision Tree, Random Forest, Support Vector Machine, and other machine learning algorithms. The results of our studies show that the considered methods detect suspicious URLs more effectively than more recent methods.
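The lexical-feature analysis this abstract describes can be sketched as follows. The feature set is typical of phishing-URL work, but the rule-based scorer and its thresholds are invented stand-ins for the trained classifiers (decision tree, random forest, SVM), not the paper's models:

```python
from urllib.parse import urlparse
import re

# Hand-rolled lexical URL features plus a tiny rule-based scorer.
# Thresholds below are illustrative assumptions, not learned values.

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),
        "num_dots": host.count("."),
        "has_at": "@" in url,
        "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}",
                                         host.split(":")[0])),
        "uses_https": parsed.scheme == "https",
        "has_hyphen_host": "-" in host,
    }

def suspicion_score(f: dict) -> int:
    score = 0
    if f["length"] > 75:      score += 1   # very long URLs are a red flag
    if f["num_dots"] > 3:     score += 1   # many subdomains
    if f["has_at"]:           score += 2   # '@' hides the real host
    if f["has_ip_host"]:      score += 2   # raw IP instead of a domain
    if not f["uses_https"]:   score += 1
    if f["has_hyphen_host"]:  score += 1
    return score

def is_phishy(url: str, threshold: int = 3) -> bool:
    return suspicion_score(url_features(url)) >= threshold
```

In a real pipeline these feature vectors would be fed to the learned classifiers rather than a hand-set threshold.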
When natural disasters such as floods, earthquakes, terrorist attacks, and industrial accidents occur, first responders, news agencies, and victims increasingly use social media platforms as primary communication channels for disseminating reliable situational information to the public. Although social media is a powerful tool for spreading news, it may also facilitate the spread of fake news, which poses a threat to societal security. Traditional verification methods require a great deal of human and social resources and cannot keep pace with the rate at which news is disseminated. To determine the authenticity of news articles, we propose a multi-modal approach that analyzes different modalities of information. In this paper, we employ an image captioning model to generate textual descriptions of news images, which provides supplementary information for verification. Our experimental evaluations on a real-world dataset demonstrate that the proposed method achieves higher performance than baseline methods.
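One way to use a generated image caption for verification, as this abstract suggests, is to score how well the caption agrees with the article text. The sketch below uses placeholder caption strings (a real system would produce them with the captioning model) and a simple bag-of-words cosine similarity as the consistency signal:

```python
import math
from collections import Counter

# Cross-modal consistency sketch: compare the article text with a caption
# generated from the news image. The captions here are hard-coded
# placeholders; only the similarity scoring is actually implemented.

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

article = "flood waters cover the main street as rescue boats arrive"
caption_real = "rescue boats on a flooded street"    # image matches the story
caption_fake = "a crowd at a music festival stage"   # image does not match

consistency_real = cosine(bow(article), bow(caption_real))
consistency_fake = cosine(bow(article), bow(caption_fake))
```

A low consistency score would flag the image/text pair for further checking; a deployed system would of course use learned text embeddings rather than raw word counts.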
Breathing difficulties, such as shortness of breath or rapid breathing, can significantly harm health and diminish quality of life. Breathing exercises are widely recognized for their effectiveness in resolving such difficulties: they improve lung function, ease discomfort, and alleviate stress. However, current breathing-assistance devices, such as the incentive spirometer, do not accurately track the duration and frequency of training despite their advantages. Furthermore, performing breathing exercises becomes tedious over time, making users feel bored and reducing their motivation. Thus, we present Breathing+, a breathing video game designed to address these limitations by providing a series of game challenges. Breathing+ incorporates sensors to detect breathing signals and provides users with real-time feedback on a screen. According to the experimental results, Breathing+ achieved a breathing recognition accuracy of 95%, and the response time to the user's breathing was around 0.23 s.
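The breath-detection loop behind such a game can be sketched with simple threshold crossing on a sampled airflow signal. The signal values, sample rate, and threshold below are invented for illustration and are not the sensor parameters used by Breathing+:

```python
# Threshold-based breath detection on a sampled airflow signal -- the kind
# of real-time feedback loop a breathing game could build on. All numeric
# values here are made-up illustrative assumptions.

SAMPLE_RATE_HZ = 10          # samples per second (assumed)
THRESHOLD = 0.5              # airflow level counted as an active exhale

def count_breaths(signal, threshold=THRESHOLD):
    """Count rising crossings of the threshold (one per exhale)."""
    breaths = 0
    above = False
    for x in signal:
        if x >= threshold and not above:
            breaths += 1
            above = True
        elif x < threshold:
            above = False
    return breaths

def first_response_time(signal, threshold=THRESHOLD, rate=SAMPLE_RATE_HZ):
    """Seconds until the game first reacts to the user's breathing."""
    for i, x in enumerate(signal):
        if x >= threshold:
            return i / rate
        # keep scanning until the first exhale shows up
    return None

# Two simulated exhales separated by rest
demo = [0.0, 0.1, 0.6, 0.9, 0.7, 0.2, 0.1, 0.8, 1.0, 0.3]
```

Tracking crossings rather than raw samples makes both the duration/frequency logging and the on-screen feedback straightforward to derive.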
Few-shot learning is a challenging task in which a classifier must quickly adapt to new classes. These classes are unseen in the training stage, and only very few samples (e.g., five images) are provided for learning each new class in the testing stage. When existing methods learn from such a small number of samples, they can easily be affected by outliers. Moreover, the category center calculated from those few samples may deviate from the true center. To address these issues, we propose a novel approach called the Current Task Variational Auto-Encoder (CTVAE) for few-shot learning. In our framework, a trained feature extractor first produces the features of the current task, and these features are used to repeatedly train the generator in CTVAE. After that, we can use CTVAE to generate additional sample features and then find a new center of the category based on these newly generated features. Compared with the original center, the new center tends to be closer to the true center in vector space. CTVAE breaks the limitation of traditional few-shot learning methods, which can only fine-tune the model with very few samples in the testing stage. Moreover, by generating features directly without first producing images, the training process of the generator in CTVAE is simplified and becomes more efficient, and the features can be generated faster and more precisely. According to experiments on benchmark datasets (i.e., Mini-ImageNet, CUB, and CIFAR-FS), our proposed framework outperforms the state-of-the-art methods and improves accuracy by 1–4%. We also conduct experiments on cross-domain tasks, and the results show that the proposed framework brings 1–5% accuracy improvements.
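The generate-then-recenter step can be illustrated with a stand-in generator: instead of a trained variational auto-encoder, the sketch below fits a per-dimension Gaussian to the few support features, samples synthetic features from it, and recomputes the class center from the enlarged set. This is the shape of the idea only, not CTVAE itself:

```python
import random
import statistics

# Stand-in for the CTVAE decoder: a per-dimension Gaussian fitted to the
# support features. All numbers below are illustrative.

random.seed(0)

def fit_gaussian(features):
    """Per-dimension mean and stdev of the support features."""
    dims = list(zip(*features))
    return [(statistics.mean(d), statistics.pstdev(d)) for d in dims]

def sample_features(params, n):
    """Generate n synthetic feature vectors from the fitted model."""
    return [[random.gauss(mu, sigma) for mu, sigma in params]
            for _ in range(n)]

def center(features):
    dims = list(zip(*features))
    return [statistics.mean(d) for d in dims]

# Five support samples (5-shot) of a 3-dimensional feature, one outlier
support = [
    [1.0, 2.0, 0.0],
    [1.1, 1.9, 0.1],
    [0.9, 2.1, -0.1],
    [1.0, 2.0, 0.0],
    [3.0, 5.0, 2.0],   # outlier
]
params = fit_gaussian(support)
augmented = support + sample_features(params, 50)
new_center = center(augmented)
```

With many generated samples, the recomputed center is an estimate from 55 points rather than 5, which is the intuition behind the abstract's claim that the new center tends to lie closer to the true one.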
Cattle are susceptible to the extremely contagious viral infection known as Lumpy Skin Disease (LSD), which results in skin lesions and substantial financial losses for the livestock industry. Visual Geometry Group 16 (VGG16), Visual Geometry Group 19 (VGG19), and DenseNet121 are examples of Convolutional Neural Networks (CNNs) that have shown success in correctly diagnosing LSD lesions. Transfer learning is frequently used to fine-tune previously trained models on specific datasets. Model performance is measured using evaluation metrics such as accuracy, precision, recall, and F1-score. In comparison to VGG16 and VGG19, the CNN and DenseNet121 models provide a balance between accuracy and computational cost. To implement effective control measures and lessen the financial burden on cattle populations, it is important to classify LSD lesions accurately. The proposed system with the DenseNet121 model enables early detection and intervention. DenseNet121 achieved the highest accuracy of 99.10% on an approximately balanced dataset of 4759 images. The proposed system also includes an attractive user interface to display and compare the results of all models for a given input image. A classification report, a confusion matrix, and a test accuracy calculation are obtained as results.
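The evaluation metrics this abstract lists (accuracy, precision, recall, F1-score) all derive from a binary confusion matrix. The counts below are invented for illustration and are not the paper's results:

```python
# Accuracy, precision, recall, and F1 from a binary confusion matrix,
# e.g. for a "lesion vs. healthy" classifier. Counts are made up.

def metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives
    recall    = tp / (tp + fn) if tp + fn else 0.0   # of actual positives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative confusion matrix (not the paper's numbers)
acc, prec, rec, f1 = metrics(tp=90, fp=10, fn=5, tn=95)
```

Reporting all four together matters on medical-image tasks: an imbalanced dataset can make accuracy alone look deceptively high while recall stays poor.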
To further reduce the forward gate current of Schottky-type $p$-GaN gate HEMTs, inadequate Mg activation in $p$-GaN is deployed in this work, which tends to convert the conventional $p$-GaN into insulating GaN with a high concentration of Mg passivated by hydrogen. The free hole concentration in $p$-GaN is reduced, and so is the hole-deficiency effect that is the main cause of dynamic threshold voltage ($V_{\text{TH}}$) shift in commercial Schottky-type $p$-GaN gate HEMTs. However, the plentiful electron traps left in $p$-GaN lead to a more significant dynamic $V_{\text{TH}}$ shift (up to 6 V) under reverse gate bias ($V_{\text{GSQ}}$, down to $-13$ V), as revealed by the $V_{\text{TH}}$ recovery processes under different conditions of light illumination and forward gate bias. Fortunately, under forward $V_{\text{GSQ}}$, the fully depleted $p$-GaN layer facilitates electron acceleration by the electric field, suppressing the electron trapping and the consequent dynamic $V_{\text{TH}}$ shift. In addition, deeper-level electron trapping in AlGaN may account for the slight dynamic $V_{\text{TH}}$ shift under $V_{\text{GSQ}} \geq 7~\mathrm{V}$.
A distance measure can quantify the difference between intuitionistic fuzzy sets. Owing to the indeterminacy of steganographic communication, the statistical features of the cover and stego image are regarded as two intui...
Modern network administrators desire early detection of DDoS traffic before damage occurs. Nevertheless, as Internet traffic has grown over the years, it has become more challenging to detect DDoS traffic efficiently and effectively. A survey of the existing literature shows that the random forest classifier, when applied to DDoS detection, yields strong performance at low learning cost. This work is a case study of implementing and using a random forest classifier to detect DDoS traffic generated by the well-known DDoS tool HULK. Our results indicate that, when used with a good feature selection mechanism, the random forest classifier can achieve high detection accuracy with fast training time.
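The random-forest idea can be illustrated in miniature as an ensemble of randomized one-feature classifiers voting by majority. The flow feature names and thresholds below are invented for the sketch; a real pipeline would learn them from labeled traffic such as HULK captures:

```python
import random

# Miniature random-forest-style ensemble for flow classification:
# randomized decision stumps plus majority voting. Features and
# thresholds are illustrative assumptions, not learned values.

FEATURES = ["pkts_per_sec", "bytes_per_pkt", "syn_ratio", "uniq_dst_ports"]

def make_stump(rng):
    """A one-feature classifier with a randomized threshold."""
    feat = rng.choice(FEATURES)
    base = {"pkts_per_sec": 800, "bytes_per_pkt": 120,
            "syn_ratio": 0.7, "uniq_dst_ports": 50}
    t = base[feat] * rng.uniform(0.8, 1.2)
    if feat == "bytes_per_pkt":
        return lambda flow: flow[feat] < t     # floods use tiny packets
    return lambda flow: flow[feat] > t

def forest_predict(stumps, flow):
    """Majority vote across the ensemble -> True means 'looks like DDoS'."""
    votes = sum(1 for s in stumps if s(flow))
    return votes > len(stumps) / 2

rng = random.Random(42)
stumps = [make_stump(rng) for _ in range(25)]

ddos_flow   = {"pkts_per_sec": 5000, "bytes_per_pkt": 60,
               "syn_ratio": 0.95, "uniq_dst_ports": 200}
benign_flow = {"pkts_per_sec": 40, "bytes_per_pkt": 900,
               "syn_ratio": 0.1, "uniq_dst_ports": 3}
```

A real random forest grows full decision trees on bootstrap samples of the training data, but the stump ensemble shows why the method is cheap to train and tolerant of individual weak learners.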
This is the era of modern technology, where we are all surrounded and influenced by technology. The essay highlights the significance of recommendation systems in our technology-driven era and specifically focuses on a movie recommendation system developed using the collaborative filtering approach. The system utilizes user information, analyzes it, and suggests movies based on the user's taste. The recommended movies are sorted according to the ratings given by the system. The system was developed using the Python programming language and MySQL Workbench for database connectivity. To address the cold-start problem, the system utilized the MovieLens database. Various methods were employed, including user-similarity, content-based, collaborative filtering, and hybrid models, with different algorithms to enhance the accuracy of recommendations. Each algorithm was critically examined to determine its performance and identify the most accurate among them. The system takes into account distinct features such as user interests, history, and location. It recommends movies based on user interests, ratings, and reviews given by other users with similar item interests. The system operates on a user-similarity model and also incorporates the Louvain algorithm to identify communities among users based on their preferences. We analyze the performance of the Louvain algorithm, which is used for community detection. The algorithm employs a two-step process: it first creates clusters of communities based on user similarity and then incorporates collaborative filtering techniques to enhance the recommendation system's functionality.
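The two-step pipeline (community grouping by rating similarity, then within-community collaborative filtering) can be sketched as below. The greedy grouping is a simplified stand-in for Louvain, which actually maximizes modularity on the user graph, and all ratings data is invented:

```python
# Step 1: group users into communities by rating similarity (greedy
# stand-in for Louvain). Step 2: recommend each user the unseen movie
# their community rates highest. All data below is invented.

def similarity(a, b):
    """Fraction of co-rated movies on which two users agree (within 1 star)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    return sum(1 for m in common if abs(a[m] - b[m]) <= 1) / len(common)

def group_users(ratings, threshold=0.5):
    """Greedy grouping: join the first community whose founder you resemble."""
    communities = []
    for user, r in ratings.items():
        for com in communities:
            if similarity(r, ratings[com[0]]) >= threshold:
                com.append(user)
                break
        else:
            communities.append([user])
    return communities

def recommend(user, ratings, communities):
    """Top unseen movie by average rating inside the user's community."""
    com = next(c for c in communities if user in c)
    scores = {}
    for peer in com:
        if peer == user:
            continue
        for movie, rating in ratings[peer].items():
            if movie not in ratings[user]:
                scores.setdefault(movie, []).append(rating)
    if not scores:
        return None
    return max(scores, key=lambda m: sum(scores[m]) / len(scores[m]))

ratings = {
    "ann": {"Dune": 5, "Up": 4},
    "bob": {"Dune": 5, "Up": 5, "Heat": 5},
    "eve": {"Dune": 1, "Up": 1, "Jaws": 4},
}
communities = group_users(ratings)
```

Restricting the collaborative-filtering step to a user's own community is what keeps recommendations aligned with like-minded users, which is the motivation for running community detection first.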