Author:
G. Ramkumar, Department of ECE
Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai
Details
ISBN:
(Digital) 9798350379945
ISBN:
(Print) 9798350379952
Phishing attacks pose a serious threat to online security and therefore demand advanced detection techniques to mitigate the damage they cause. This paper introduces a sophisticated model for the detection of phishing websites, named the Blended ResNet-EfficientNet Model (BREM), which unifies the advantages of the ResNet and EfficientNet architectures. To address this challenge, BREM combines the rich hierarchical pattern-recognition ability of ResNet-50 with the efficient feature-extraction capability of EfficientNet-B3 to achieve strong classification performance in phishing detection. In overall assessment, BREM outperforms both traditional machine learning models and standalone deep learning models, with an accuracy of 96%, precision of 94%, recall of 95%, and F1 score of 94.5%. The authors further validate its high specificity (97%), negative predictive value (95%), and Matthews correlation coefficient of 0.92, underlining the robustness and reliability of BREM. This approach not only improves detection accuracy but also provides much better security against phishing campaigns. Future research directions include real-time deployment, further experiments on different feature sets, adversarial robustness, and transfer learning from heterogeneous datasets.

Such deep learning models are able to draw on an immense amount of data and detect minor deviations, which improves the AI's discrimination of genuine versus fraudulent websites, thereby strengthening user cyber protection and preventing breaches [3] [4]. At the heart of phishing-website identification is the use of powerful AI capabilities to scan for and recognize phishing convincingly. This AI utilizes different methods, including natural language processing (NLP), image recognition, and behavioral analysis, to monitor web content and its design. Behavioral analysis tracks user interactions and web behavior to detect abnormal operations, which co...
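The abstract reports seven linked metrics (accuracy, precision, recall, F1, specificity, NPV, MCC). As a reference for how these quantities relate, here is a minimal helper that derives all of them from raw confusion-matrix counts; the example counts in the usage are illustrative, not the paper's data.

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Derive the standard binary-classification metrics from
    confusion-matrix counts (true/false positives and negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)                     # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "specificity": specificity, "npv": npv, "mcc": mcc}
```

For example, `classification_metrics(8, 2, 9, 1)` yields precision 0.80 and accuracy 0.85 on a toy 20-sample split.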
Details
ISBN:
(Digital) 9798350355925
ISBN:
(Print) 9798350355932
During radiofrequency ablation procedures for atrial fibrillation patients, physicians can locate ablation targets by analyzing the conduction pathways and low-voltage areas of the intracardiac electrogram signals. The feature points of intracardiac electrogram signals serve as criteria for assessing excitation patterns and low-voltage areas; therefore, extracting feature points from intracardiac signals is of significant value. To identify key points in intracardiac electrogram signals through deep learning, this experiment collected intracardiac electrogram signals from 21 clinical patients, which were then filtered using wavelet transformation. Subsequently, each excitation was treated as a signal group, with the key feature points within it annotated. In total, 36,183 groups of intracardiac electrogram signals annotated with key points were obtained. The dataset consisted of data from 17 patients allocated for training, data from 2 patients reserved for testing, and data from the remaining 2 patients designated for validation. Training was conducted using a DenseNet network incorporating both convolutional attention and feature pyramid modules. The final network successfully identifies the feature points of intracardiac electrogram signals. It outputs six feature points (the starting point, first rising point, first falling point, peak point, valley point, and termination point) with average absolute errors of only 0.34, 0.27, 0.31, 0.01, 0.27, and 0.42 ms, respectively. The final detection results are highly satisfactory, indicating the potential utility of the findings in assisting the analysis of intracardiac electrogram signals. The research outcomes presented in this paper lay a foundation for the analysis of intracardiac electrogram signals.
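The keypoint errors above are quoted in milliseconds, while a detector typically outputs sample indices. A minimal sketch of that conversion, assuming a sampling rate `fs_hz` (the abstract does not state the acquisition rate, so the value here is hypothetical):

```python
def keypoint_mae_ms(pred_idx, true_idx, fs_hz):
    """Mean absolute error of detected keypoints, converted from
    sample indices to milliseconds given the sampling rate."""
    errs = [abs(p - t) / fs_hz * 1000.0 for p, t in zip(pred_idx, true_idx)]
    return sum(errs) / len(errs)
```

At an assumed 1 kHz rate, a 5-sample miss on one of two keypoints gives a 2.5 ms average error.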
Details
ISBN:
(Digital) 9798350355925
ISBN:
(Print) 9798350355932
In Natural Language Processing (NLP) tasks, target-domain data is usually limited and of poor quality. Text data augmentation is one of the effective ways to address insufficient sample sizes in many NLP tasks. At present, text data augmentation methods struggle to achieve both diversity and authenticity in the generated text structure, and screening out low-quality samples from the generated data remains a challenge. Therefore, this paper proposes a scene-aware Chinese text data augmentation method based on a large language model (Scene-Aware Prompt Template, SAPT). The SAPT method first uses prompt learning to construct a Scene-Aware Prompt (SAP) template. Secondly, combined with the text understanding and generation ability of the large language model, the SAP is refined by analyzing the semantic and contextual features of the original data to generate scene prompt words. Then, the original data is re-expressed into multiple data samples with different text structures but similar semantics by using the refined SAP and the large language model. Further, the quality of generated data samples is evaluated by fusing the Rouge score and the cosine similarity score, and low-quality data samples are filtered out. The final generated data is used to train the target-domain task model. Experimental results on the Chinese news dataset THUCNews show that the overall performance of the classification model trained on data generated by the proposed SAPT method reaches 89.89%, which is 2.79% higher than the baseline model, and is superior to existing text data augmentation methods.
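The filtering step fuses a Rouge score with a cosine similarity score. The abstract does not give the exact variant or weighting, so the sketch below assumes Rouge-1 F-measure, bag-of-words cosine similarity, and an equal-weight fusion with a hypothetical threshold; all three choices are illustrative.

```python
import math
from collections import Counter

def rouge1_f(ref, hyp):
    """Rouge-1 F-measure: unigram-overlap precision/recall harmonic mean."""
    r, h = Counter(ref.split()), Counter(hyp.split())
    overlap = sum((r & h).values())
    if not overlap:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def cosine_sim(a, b):
    """Cosine similarity over bag-of-words term counts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def keep_sample(original, generated, threshold=0.5, w=0.5):
    """Keep a generated sample only if the fused quality score passes."""
    score = w * rouge1_f(original, generated) + (1 - w) * cosine_sim(original, generated)
    return score >= threshold
```

A faithful paraphrase passes the filter; an unrelated sentence scores near zero and is discarded.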
Details
ISBN:
(Digital) 9798350391749
ISBN:
(Print) 9798350391756
Identifying plant Deoxyribonucleic acid (DNA) barcodes has gained substantial importance for biodiversity conservation and understanding evolutionary relationships. However, classifying these species remains challenging due to the complex nature of the DNA sequences. Convolutional Neural Networks (CNNs) have proven effective for pattern-recognition tasks in DNA sequence classification; however, standard pooling techniques often result in the loss of important information. This study introduces a modified CNN model with an enhanced pooling technique called Horizontal Sequence Pooling, designed to retain important features and improve accuracy. The method is applied to classify the Magnoliophyta plant DNA barcodes of 75 genera and evaluated against CNN models with standard max pooling and average pooling techniques. Results show that the proposed technique achieved the highest accuracy of 0.9801, the lowest validation loss of 0.0393, the best Area Under the Precision-Recall Curve (AUC-PR) of 0.9979, and a Matthews Correlation Coefficient (MCC) of 0.9814, outperforming the standard pooling techniques. These results indicate that Horizontal Sequence Pooling effectively extracts relevant features from DNA sequences, enhancing classification accuracy, precision, recall, and balance across both positive and negative classes, demonstrating its robustness in handling imbalanced datasets.
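The abstract does not specify how Horizontal Sequence Pooling works, so it is not reproduced here. For context, these are the two standard 1-D pooling baselines it is compared against, which illustrate the information loss the paper targets: max pooling keeps one value per window and discards the rest, while average pooling blurs all values together.

```python
def max_pool1d(xs, window):
    """Standard max pooling: one survivor per non-overlapping window."""
    return [max(xs[i:i + window]) for i in range(0, len(xs) - window + 1, window)]

def avg_pool1d(xs, window):
    """Standard average pooling: each window collapsed to its mean."""
    return [sum(xs[i:i + window]) / window
            for i in range(0, len(xs) - window + 1, window)]
```

On the toy feature map `[1, 3, 2, 8]` with window 2, max pooling yields `[3, 8]` and average pooling `[2.0, 5.0]`; in both cases positional detail inside each window is lost.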
Details
ISBN:
(Digital) 9798331528553
ISBN:
(Print) 9798331528560
The expanding healthcare sector requires creative solutions to connect patients with vital information. Language barriers exacerbate the challenge, as many chatbots are predominantly English-based, limiting accessibility for non-English speakers. Therefore, this work introduces OYEN, a Large Language Model (LLM)-based chatbot designed to give patients bilingual (Mandarin and English) healthcare guidance. Early chatbot architectures relied on statistical Natural Language Processing (NLP) methods and keyword pattern recognition. However, the rise of LLMs with learning capabilities has revolutionized chatbot development since 2018, and Transformer-based LLMs became dominant due to their exceptional performance in modeling natural text. Developed for healthcare institutions in Malaysia offering Traditional Chinese Medicine (TCM) services, OYEN accurately responds to open-ended TCM-related questions. To keep OYEN user-language centric, responding in the language of the user's input (if a user writes in Mandarin, the response is in Mandarin, and likewise for English), a similarity search mechanism over a vector database, known as Retrieval Augmented Generation (RAG), is integrated. This technique enhances OYEN's ability to retrieve and present relevant TCM information.
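The RAG step described above boils down to ranking stored document embeddings by similarity to the query embedding and handing the top hits to the LLM. A minimal sketch of that retrieval core, with toy 2-D embeddings standing in for a real embedding model and vector database (both assumptions, not OYEN's actual stack):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_vec, store, k=2):
    """Return the k stored texts whose embeddings are most similar
    to the query embedding. store: list of (text, embedding) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved passages would then be prepended to the LLM prompt, which also lets the model answer in the language of the user's query.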
Details
ISBN:
(Digital) 9781728161365
ISBN:
(Print) 9781728161372
In natural environments, bird sounds are often accompanied by background noise, so denoising is crucial to automated bird sound recognition. Recently, thanks to neural network embeddings, the deep clustering method has achieved better performance than traditional denoising methods, such as filter-based methods, due to its ability to handle noise lying in the same frequency range as the bird sounds. In this paper, we propose a generalized denoising method based on deep clustering, which can process more complex recordings with less distortion. We also optimize the original affinity loss function into a novel loss function, named Joint Center Loss (JCL), which ensures that embedding vectors with the minimum distance belong to the same source and can both increase the inter-class variance and decrease the intra-class variance of the embeddings. Experiments are conducted on a gated convolutional neural network architecture and a bidirectional long short-term memory architecture, respectively, with different loss functions. At a signal-to-noise ratio of -3 dB, the recognition accuracy increases relatively by 9.5% with the proposed denoising method in the best case, and the Relative Root Mean Square Error (RRMSE) increases relatively by 14.2% by using JCL, compared with the original affinity loss (AL) function.
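The abstract characterizes JCL only by its effect: pulling embeddings toward their class centers while pushing the centers apart. The exact formulation is not given, so the sketch below is one plausible reading of that objective (intra-class distance to centers, plus a hinged separation term with a hypothetical `margin`), not the paper's actual loss.

```python
import math

def _dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def center_style_loss(embeddings, labels, margin=2.0):
    """Mean distance of each embedding to its class center (intra-class
    term), plus a hinge penalizing class centers closer than `margin`
    (inter-class term). Lower is better-clustered."""
    classes = sorted(set(labels))
    centers = {}
    for c in classes:
        pts = [e for e, l in zip(embeddings, labels) if l == c]
        centers[c] = [sum(x) / len(pts) for x in zip(*pts)]
    intra = sum(_dist(e, centers[l])
                for e, l in zip(embeddings, labels)) / len(embeddings)
    inter, pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            inter += max(0.0, margin - _dist(centers[a], centers[b]))
            pairs += 1
    return intra + (inter / pairs if pairs else 0.0)
```

Well-separated tight clusters score lower than a mixed assignment, which is the gradient signal a network trained with such a loss would follow.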
Details
ISBN:
(Print) 9781450384155
In the era of big data, analyzing vessel patterns using massive trajectory data has become the main method of mining activity patterns. The trajectory shape feature, as one of the important features of vessel trajectory data, can be used to identify vessel activity patterns. However, most research has focused only on features such as the standard deviation of latitude and longitude and the navigation heading when analyzing vessel trajectories. Therefore, considering the spatio-temporal features of vessel data, we propose a method based on the Sevcik fractal dimension to extract shape features for identifying vessel activity types. Firstly, we segment the vessel trajectories into sub-trajectories according to speed and temporal thresholds. Secondly, we construct the feature vector of the trajectory shape using an improved Sevcik fractal dimension algorithm. Then, we select the standard deviation of latitude and longitude and the shape features extracted by the Sevcik fractal dimension as comparison features, and observe their performance in the K-means and GMM algorithms respectively to verify the effectiveness of the proposed shape feature vectors. Finally, we select simulated data and two real data sets for experimental analysis. The results show that the shape feature extraction algorithm can extract the shape features of trajectories, and its performance in classification algorithms is better than the standard deviation and the plain Sevcik fractal dimension. Thus, the proposed method can realize vessel pattern recognition and abnormal-trajectory analysis.
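The paper's improvement to the Sevcik algorithm is not described in the abstract, but the standard Sevcik fractal dimension it builds on is well defined: normalize the series into the unit square, measure the length L of the normalized curve, and take D = 1 + ln(L) / ln(2(N-1)). A sketch for a 1-D series (a trajectory coordinate sampled over time):

```python
import math

def sevcik_fd(y):
    """Standard Sevcik fractal dimension of a 1-D series: a smooth
    series scores near 1, a jagged one scores closer to 2."""
    n = len(y)
    # Normalize ordinate to [0, 1]; abscissa is i/(n-1) by construction.
    ymin, ymax = min(y), max(y)
    yr = (ymax - ymin) or 1.0
    ys = [(v - ymin) / yr for v in y]
    xs = [i / (n - 1) for i in range(n)]
    # Length of the normalized curve.
    L = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(n - 1))
    return 1.0 + math.log(L) / math.log(2 * (n - 1))
```

A straight ramp scores just above 1, while a zigzag series scores much higher, which is exactly the shape contrast used here to separate vessel activity types.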
Details
ISBN:
(Digital) 9798331542559
ISBN:
(Print) 9798331542566
Many retailers face the challenge of understanding their customers' buying habits, which makes stocking products, arranging items on shelves, and encouraging customers to return difficult tasks. Without understanding what the customer typically wants to buy, retail stores can miss opportunities to increase sales and improve customer satisfaction. To address this, this study developed a system using the Apriori algorithm to analyze past shopping data from a supermarket. The developed system identifies patterns among items that customers frequently buy together, which allows the business to predict future purchases. Using this method, stores can manage their stock better, create better marketing strategies, and improve their customers' overall shopping experience. The results obtained from this study also show that the system can help businesses stay competitive by understanding customer needs and acting on them more effectively.
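The Apriori algorithm finds itemsets whose support (fraction of transactions containing them) meets a minimum threshold, growing candidates level by level from frequent smaller sets. A minimal sketch (candidate generation without the full subset-prune step, so a simplification of the textbook algorithm, not the study's implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return a dict mapping frequent itemsets (frozensets) to their
    support, for all itemsets with support >= min_support."""
    n = len(transactions)
    tx = [set(t) for t in transactions]
    # Level 1: candidate single items.
    current = {frozenset([i]) for t in tx for i in t}
    frequent, k = {}, 1
    while current:
        counts = {c: sum(1 for t in tx if c <= t) for c in current}
        freq_k = {c: cnt / n for c, cnt in counts.items()
                  if cnt / n >= min_support}
        frequent.update(freq_k)
        # Join frequent k-itemsets into (k+1)-itemset candidates.
        keys = list(freq_k)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == k + 1}
        k += 1
    return frequent
```

On a four-basket toy dataset with support threshold 0.5, all item pairs that appear in at least two baskets survive, while the three-item combination (seen only once) is pruned; the surviving pairs are the "frequently bought together" patterns described above.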
Details
ISBN:
(Digital) 9798350355925
ISBN:
(Print) 9798350355932
Objective: Chest X-rays are a non-invasive and cost-effective diagnostic method; however, they are not commonly used for diagnosing atrial septal defects (ASD). Typically, the diagnosis of ASD requires more expensive tests. We have discovered that an enlarged heart shadow can be an initial indicator of ASD, and we have developed and validated a deep learning model to determine whether a patient has an atrial septal defect based on their chest X-ray. Method: This study involved chest X-ray examinations of 1459 patients diagnosed with atrial septal defects at West China Hospital of Sichuan University from January 8, 2009, to June 25, 2022, as well as 1580 individuals who underwent general health check-ups at the West China Hospital's Examination Center from July 21, 2022, to August 13, 2022. Results: This study included chest X-ray examinations of 3,039 patients (44.3% male; median age 33 years with an interquartile range of 28–42 years). For the classification task, the deep learning model achieved an accuracy of 0.96, sensitivity of 0.97, specificity of 0.96, and an AUC (Area Under the Curve) of 0.9922. Conclusion: Current research indicates that chest X-ray analysis based on deep learning can determine the presence of atrial septal defects, a congenital heart condition. These findings suggest that deep learning methods can offer an objective diagnosis for atrial septal defects in routine health screenings, providing a low-cost and non-invasive approach.
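The AUC reported above for a binary screening model like this has a simple rank-statistic reading: the probability that a randomly chosen ASD case receives a higher model score than a randomly chosen healthy case. A minimal sketch of that definition (the scores in the usage are toy values, not the study's outputs):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive case outscores
    a random negative case, counting ties as 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation of the two score populations gives an AUC of 1.0; indistinguishable scores give 0.5, so a value of 0.9922 indicates near-perfect ranking.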