ISBN (digital): 9798350390155
ISBN (print): 9798350390162
Recent speech synthesis technology can generate high-quality speech indistinguishable from human speech, thus introducing various security and privacy risks. Numerous recent studies have focused on fake voice detection to address these risks, with many claiming to achieve ideal performance. However, is this really the case? A recent study introduced Speaker-Irrelative-Features (SiFs): features unrelated to the information carried in the speech itself but still capable of influencing fake-voice detectors. This means that existing detectors may rely on SiFs to a certain extent to distinguish real from fake speech. In this paper, we introduce a framework to evaluate in depth the influence of SiFs on existing fake voice detectors. We evaluate three SiFs, namely background noise, the silent segments before and after the voiced content, and the sampling rate, on the ASVspoof2019 and FoR datasets. Our results confirm the substantial influence of SiFs on fake voice detection performance, and we analyze the underlying mechanisms.
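As a concrete illustration of how such an evaluation can be set up, here is a minimal sketch that perturbs one utterance along the three SiFs studied (background noise, the leading/trailing silence, and the sampling rate) and scores every variant with the same detector. The detector callable, the SNR level, and the silence threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import resample_poly

def add_noise(wav: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix white noise into the waveform at a target SNR (dB)."""
    signal_power = np.mean(wav ** 2) + 1e-12
    noise = np.random.randn(len(wav))
    scale = np.sqrt(signal_power / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return wav + scale * noise

def strip_silence(wav: np.ndarray, threshold: float = 1e-3) -> np.ndarray:
    """Remove the mute parts before and after the voiced region."""
    voiced = np.where(np.abs(wav) > threshold)[0]
    return wav if len(voiced) == 0 else wav[voiced[0]:voiced[-1] + 1]

def change_sampling_rate(wav: np.ndarray, orig_sr: int, new_sr: int) -> np.ndarray:
    """Resample the waveform to a different sampling rate."""
    return resample_poly(wav, new_sr, orig_sr)

def evaluate_sifs(detector, wav: np.ndarray, sr: int) -> dict:
    """Score the same utterance under each SiF perturbation."""
    variants = {
        "original": wav,
        "noisy_10dB": add_noise(wav, snr_db=10),
        "no_silence": strip_silence(wav),
        "resampled_8k": change_sampling_rate(wav, sr, 8000),
    }
    return {name: detector(v) for name, v in variants.items()}
```

Comparing the scores across variants indicates how strongly a detector depends on each SiF rather than on the speech content itself.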
The rise of mobile devices with abundant sensory data and local computing capabilities has driven the trend of federated learning (FL) on these devices. Personalized FL (PFL) has emerged to train specific deep models ...
ISBN (digital): 9798350383867
ISBN (print): 9798350383874
Optimization in ad hoc networks is a highly specialized task because the network structure is loosely formed and each node is independently responsible for its own operation. To obtain better results from such self-organized networks, this report offers a comprehensive Dynamic Topology Management (DTM) method. Its main objectives are dynamic routing, load balancing, energy management, reliability guarantees, cross-layer design, security, and the timely monitoring and discovery of topology changes. Real-time location monitoring and topology discovery are made possible by employing adaptive routing protocols, namely Dynamic Source Routing (DSR) and Ad-Hoc On-Demand Distance Vector (AODV). The algorithms can also adjust routes and resource allocations in advance based on forecast mobility patterns, which is vital. Load balancing methods that distribute traffic evenly across the network keep the system running efficiently and avoid problems caused by overloading. Energy-efficient algorithms that optimize the topology around each node's remaining energy level are introduced to streamline energy use. Fault-tolerance techniques that recognize and handle node failures and network partitions keep the network continuously connected. Through QoS awareness, the network topology can automatically be tailored to the required quality criteria without restricting the choice of applications. The proposed method also includes defenses against adversarial nodes and potential DDoS attacks, which enhances network protection. A further advantage of an architecture that promotes cross-layer collaboration is improved topology management, the goal around which the functions of the various levels of the protocol stack revolve.
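To make these trade-offs concrete, below is a toy next-hop selection rule in the spirit of the DTM objectives above, weighing hop distance against residual node energy and queue load. The Node fields, cost weights, and example values are assumptions for exposition rather than part of the proposed method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    node_id: str
    residual_energy: float  # remaining battery, 0.0 .. 1.0
    queue_load: float       # current buffer occupancy, 0.0 .. 1.0

def link_cost(hop_distance: int, neighbor: Node,
              w_hops: float = 1.0, w_energy: float = 2.0, w_load: float = 1.5) -> float:
    """Lower is better: penalize long routes, drained batteries, and busy nodes."""
    return (w_hops * hop_distance
            + w_energy * (1.0 - neighbor.residual_energy)
            + w_load * neighbor.queue_load)

def pick_next_hop(neighbors: dict) -> Node:
    """Choose the neighbor with the smallest combined cost (dict values are hop counts)."""
    return min(neighbors, key=lambda n: link_cost(neighbors[n], n))

# Example: B is closer but overloaded and low on energy, so C is preferred.
candidates = {
    Node("B", residual_energy=0.3, queue_load=0.9): 2,
    Node("C", residual_energy=0.8, queue_load=0.2): 3,
}
print(pick_next_hop(candidates).node_id)  # -> "C"
```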
Price hikes have always been a substantial concern for people all over the world. The crisis gets more conspicuous, and people find themselves more confounded when even the bare minimum of expenses still exceeds the amo...
Recent progress in generative artificial intelligence (gen-AI) has enabled the generation of photo-realistic and artistically-inspiring photos at a single click, catering to millions of users online. To explore how pe...
ISBN (digital): 9798400702174
ISBN (print): 9798350382143
Log parsing, which involves log template extraction from semi-structured logs to produce structured logs, is the first and the most critical step in automated log analysis. However, current log parsers suffer from limited effectiveness for two reasons. First, traditional data-driven log parsers solely rely on heuristics or handcrafted features designed by domain experts, which may not consistently perform well on logs from diverse systems. Second, existing supervised log parsers require model tuning, which is often limited to fixed training samples and causes sub-optimal performance across the entire log source. To address these limitations, we propose DivLog, an effective log parsing framework based on the in-context learning (ICL) ability of large language models (LLMs). Specifically, before log parsing, DivLog samples a small amount of offline logs as candidates by maximizing their diversity. Then, during log parsing, DivLog selects five appropriate labeled candidates as examples for each target log and constructs them into a prompt. By mining the semantics of the examples in the prompt, DivLog generates a target log template in a training-free manner. In addition, we design a straightforward yet effective prompt format to extract the output and enhance the quality of the generated log templates. We conducted experiments on 16 widely-used public datasets. The results show that DivLog achieves (1) 98.1% Parsing Accuracy, (2) 92.1% Precision Template Accuracy, and (3) 92.9% Recall Template Accuracy on average, exhibiting state-of-the-art performance.
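A minimal sketch of this in-context-learning flow follows: a handful of labeled (log, template) pairs are chosen for the target log, packed into a prompt, and the template is read off the model's completion. The token-overlap selection heuristic, the prompt wording, and the `call_llm` placeholder are simplifying assumptions and do not reproduce DivLog's actual example-selection or prompt design.

```python
def similarity(a: str, b: str) -> float:
    """Rough token-overlap similarity used here to rank labeled candidates."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_prompt(target_log: str, candidates: list, k: int = 5) -> str:
    """Pick k labeled (log, template) examples and format a parsing prompt."""
    examples = sorted(candidates, key=lambda c: similarity(target_log, c[0]), reverse=True)[:k]
    lines = ["Extract the log template; replace variable parts with <*>.", ""]
    for log, template in examples:
        lines += [f"Log: {log}", f"Template: {template}", ""]
    lines += [f"Log: {target_log}", "Template:"]
    return "\n".join(lines)

def parse_log(target_log: str, candidates: list, call_llm) -> str:
    """Training-free parsing: the template is read off the LLM completion."""
    return call_llm(build_prompt(target_log, candidates)).strip()
```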
ISBN (digital): 9798331509675
ISBN (print): 9798331509682
CT of the urinary system, abdomen, and kidneys appears to improve kidney stone screening and identification. Segmentation and categorization are essential for kidney stone identification and management. This work developed a semi-automatic kidney stone detection system using geometric concepts and image processing. The main steps include preprocessing, embossing, segmentation, feature extraction, and model training. Preprocessing involves trimming the input image to remove unnecessary regions and focus on the critical locations of the CT scan. Relief embossing enhances image details. Segmentation accurately delineates veins and arteries to define kidney stone boundaries. CNN-RELM served as the training model. Compared to CNN and ELM models, the recommended system performed better. Its 93.87% accuracy in kidney stone detection and localization demonstrates its reliability. The semi-automatic system's reliable kidney stone detection and segmentation method may improve clinical decision-making. Combining CNN-RELM modeling with enhanced image processing could lead to further medical imaging applications.
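A rough sketch of the early image-processing stages described above (cropping to a region of interest, relief embossing, and a simple intensity-based candidate segmentation) is shown below. The crop window, emboss kernel, and percentile threshold are generic choices for illustration, and the CNN-RELM classifier itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

# Classic 3x3 emboss kernel used here as a stand-in for the relief step.
EMBOSS_KERNEL = np.array([[-2, -1, 0],
                          [-1,  1, 1],
                          [ 0,  1, 2]], dtype=float)

def crop_roi(ct_slice: np.ndarray, row_frac=(0.25, 0.75), col_frac=(0.2, 0.8)) -> np.ndarray:
    """Trim the slice to a central window so later steps focus on the kidneys."""
    h, w = ct_slice.shape
    return ct_slice[int(h * row_frac[0]):int(h * row_frac[1]),
                    int(w * col_frac[0]):int(w * col_frac[1])]

def emboss(ct_slice: np.ndarray) -> np.ndarray:
    """Relief embossing to accentuate edges and fine detail."""
    return convolve(ct_slice.astype(float), EMBOSS_KERNEL, mode="reflect")

def segment_candidates(embossed: np.ndarray, percentile: float = 99.0) -> np.ndarray:
    """Binary mask of the brightest structures as kidney-stone candidates."""
    return embossed > np.percentile(embossed, percentile)
```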
ISBN (digital): 9798350379716
ISBN (print): 9798350379723
Extensive research and methodological improvements over time attest to the necessity of brain tumour detection and analysis for any indication system. Accurate tumour detection is essential for this study; hence, an efficient automated method must be implemented. Numerous segmentation algorithms have been developed to classify brain tumours with greater precision. Medical image processing experts agree that brain image segmentation presents unique difficulties. A novel automatic detection and classification system is proposed in this work. The suggested method is broken down into several stages, including preprocessing of MRI images, segmentation of those images, feature extraction, and classification. The noise in the MRI image is reduced during the preprocessing stage by applying an adaptive filter. The grey level co-occurrence matrix (GLCM) is used for feature extraction, while the improved Means clustering (IMC) technique is used for image segmentation. After features are extracted from the scans, a deep learning algorithm categorizes them as benign lesions, gliomas, meningiomas, or pituitary tumours. Recurrent convolutional neural networks (RCNN) were utilised to accomplish the classification task. For a given input dataset containing images of the brain, the proposed technique yields superior classification results. The investigations made use of MRI images from Kaggle datasets, comprising 3264 training images and 464 testing images. The outcomes show that the proposed strategy outperforms its predecessors. The suggested RCNN method is then compared to three of the most popular classification techniques currently in use: BP, VG-Net, and RCNN. The suggested classifier successfully identified 96.21 percent of brain tumour tissues in magnetic resonance imaging scans.
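To make the feature-extraction step concrete, the sketch below computes a few standard GLCM texture descriptors (contrast, energy, homogeneity) with plain NumPy for a single pixel offset. The quantization level and offset are illustrative assumptions; the adaptive filtering, IMC segmentation, and RCNN classifier are not shown.

```python
import numpy as np

def glcm(img: np.ndarray, levels: int = 8, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Normalized grey-level co-occurrence matrix for a single pixel offset."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        q = np.zeros(img.shape, dtype=int)
    else:
        q = np.clip(((img - lo) / (hi - lo) * (levels - 1)).astype(int), 0, levels - 1)
    mat = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            mat[q[i, j], q[i + dy, j + dx]] += 1
    return mat / max(mat.sum(), 1.0)

def glcm_features(p: np.ndarray) -> dict:
    """Standard texture descriptors derived from the co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }
```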
Link prediction is a crucial issue in opportunistic networks routing research. Static link prediction methods ignore the historical information of network evolution, which affects the prediction accuracy. In this pape...
Business intelligence (BI) encompasses the tools and practices used to gather, combine, examine, and display data about a specific firm. Using historical and current efficiency comparisons, these systems assist in formulatin...