The potential for identifying individuals at high disease risk solely based on genotype data has garnered significant attention. While widely applied, traditional polygenic risk scoring methods fall short, as they are built on additive models that fail to capture the intricate associations among single nucleotide polymorphisms (SNPs). This presents a limitation, as genetic diseases often arise from complex interactions between multiple SNPs. To address this challenge, we developed DeepRisk, a biological knowledge-driven deep learning method for modeling these complex, nonlinear associations among SNPs, to provide a more effective method for scoring the risk of common diseases with genome-wide genotype data. We demonstrated that DeepRisk outperforms existing PRS-based methods in identifying individuals at high risk for four common diseases: Alzheimer's disease, inflammatory bowel disease, type 2 diabetes, and breast cancer.
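To make the contrast concrete, here is a minimal sketch of the standard additive PRS that the abstract argues falls short: a purely linear weighted sum with no SNP-SNP interaction terms. The genotype matrix and effect sizes are illustrative placeholders, not data or code from the paper.

```python
# Minimal sketch of a standard additive polygenic risk score (PRS).
# All inputs are synthetic placeholders for illustration only.
import numpy as np

def additive_prs(genotypes: np.ndarray, effect_sizes: np.ndarray) -> np.ndarray:
    """Score each individual as a weighted sum of allele counts.

    genotypes: (n_individuals, n_snps) matrix of 0/1/2 allele counts.
    effect_sizes: (n_snps,) GWAS effect sizes (e.g., log odds ratios).
    """
    # Purely linear: no SNP-SNP interaction terms, which is the
    # limitation a nonlinear deep model like DeepRisk targets.
    return genotypes @ effect_sizes

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(5, 100))    # 5 individuals, 100 SNPs
beta = rng.normal(0.0, 0.05, size=100)   # per-SNP effect sizes
print(additive_prs(G, beta))
```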
The convergence of the Internet of Things (IoT) and Artificial Intelligence (AI) is significantly transforming the landscape of future networking. IoT is a technological paradigm that encompasses embedded systems, wireless sensors, and automation, enabling applications ranging from smart homes to wearable devices. The advent of AI amplifies this influence by providing data-driven analytics, optimising processes, and presenting novel opportunities for growth. Nevertheless, the widespread adoption of devices within IoT networks raises concerns about increased energy consumption. To ensure the longevity of network operations, it is imperative to employ energy-efficient protocols for sensor nodes with limited power resources. One such protocol is the Low Energy Adaptive Clustering Hierarchy (LEACH), which divides networks into clusters and dynamically rotates the cluster heads to optimise the transmission of data to the base stations. Our study enhances the LEACH protocol by incorporating digital twin simulation, thereby improving the efficiency of IoT systems. Virtual network models and AI analytics are employed to assess energy consumption and performance. Cache nodes play a crucial role within this framework, collecting data from cluster heads for transmission to the base station. By leveraging AI and simulation techniques, we improve the energy efficiency and reliability of IoT systems. The findings indicate an 83% reduction in non-functioning nodes and a 1.66-fold increase in node energy levels compared to conventional approaches. This study highlights a promising direction for energy-efficient, AI-enhanced IoT networking through digital twin simulation.
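For readers unfamiliar with LEACH, the sketch below shows its classic randomized cluster-head rotation, the mechanism this study builds on. The node structure and desired head fraction `P` are assumptions for illustration; the paper's digital twin, AI analytics, and cache-node extensions are not modeled here.

```python
# Hedged sketch of LEACH's randomized cluster-head election.
import random

P = 0.05  # assumed desired fraction of cluster heads per round

def leach_threshold(round_no: int) -> float:
    """Standard LEACH threshold T(n) for nodes eligible this epoch."""
    return P / (1 - P * (round_no % int(1 / P)))

def elect_cluster_heads(nodes, round_no):
    """Each eligible node self-elects as cluster head with probability T(n)."""
    t = leach_threshold(round_no)
    heads = []
    for node in nodes:
        # Nodes that already served as head within the current
        # 1/P-round epoch sit out, so the role rotates evenly.
        if node["last_ch_round"] is None or \
           round_no - node["last_ch_round"] >= int(1 / P):
            if random.random() < t:
                node["last_ch_round"] = round_no
                heads.append(node["id"])
    return heads

nodes = [{"id": i, "last_ch_round": None} for i in range(100)]
print(elect_cluster_heads(nodes, round_no=0))
```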
Load balancing is vital for the efficient and long-term operation of cloud data centers. With virtualization, post (reactive) migration of virtual machines (VMs) after allocation is the traditional way for load balancing and consolidation. However, it is not easy for reactive migration to achieve predefined load balance objectives, and it may interrupt services and bring instability. Therefore, we provide a new approach, called Prepartition, for load balancing. It partitions a VM request into a few sub-requests sequentially with start time, end time and capacity demands, and treats each sub-request as a regular VM request. In this way, it can proactively set a bound for each VM request on each physical machine and lets the scheduler get ready before VM migration to achieve the predefined load balancing goal, which supports resource allocation in a fine-grained manner. Experiments with real-world traces and synthetic data show that our proposed approach with offline scheduling (PrepartitionOff) achieves 10%–20% better performance than the existing load balancing baselines under several metrics, including average utilization, imbalance degree, makespan and Capacity_makespan. We also extend Prepartition to online load balancing. Experimental results show that our proposed approach also outperforms state-of-the-art online algorithms.
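A minimal sketch of the partitioning idea described above: a reservation with start time, end time, and capacity demand is cut into bounded-length sub-requests that the scheduler can place like regular VM requests. The fixed partition length `k` is an assumed parameter for illustration, not necessarily the paper's exact rule.

```python
# Sketch of splitting a VM reservation into placeable sub-requests.
from dataclasses import dataclass

@dataclass
class SubRequest:
    start: int
    end: int
    capacity: int

def prepartition(start: int, end: int, capacity: int, k: int):
    """Cut [start, end) into consecutive sub-requests of length <= k,
    each carrying the original capacity demand."""
    subs = []
    t = start
    while t < end:
        subs.append(SubRequest(t, min(t + k, end), capacity))
        t += k
    return subs

# A 10-unit-long request becomes three independently placeable pieces,
# bounding how long any one physical machine is committed to it.
for sr in prepartition(start=0, end=10, capacity=4, k=4):
    print(sr)
```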
Fuzzing is a widely used software vulnerability discovery technique, and many fuzzers are optimized using coverage feedback. Recently, some techniques have proposed training deep learning (DL) models to predict the branch coverage of an arbitrary input, using the models' always-available gradients as a guide. These techniques have proved successful in improving coverage and discovering bugs under different experimental settings. However, DL models, usually treated as a magic black box, notoriously lack explainability. Moreover, their performance can be sensitive to the runtime coverage information collected for training, indicating potentially unstable behavior. In this work, we conduct a systematic empirical study on 4 types of DL models across 6 projects to (1) revisit the performance of DL models on predicting branch coverage, (2) demystify what specific knowledge the models actually learn, (3) study the scenarios where the DL models can outperform and underperform traditional fuzzers, and (4) gain insight into the challenges of applying DL models to fuzzing. Our empirical results reveal that existing DL-based fuzzers do not perform as well as expected, which is largely due to the dependencies between branches, unbalanced sample distribution, and limited model expressiveness. In addition, the estimated gradient information tends to be less helpful in our experiments. Finally, we pinpoint research directions based on our summarized challenges.
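To illustrate the class of models under study, the sketch below trains a small network to map a program input (byte vector) to a branch-coverage bitmap, treating coverage prediction as multi-label classification. The architecture, sizes, and toy data are assumptions for illustration, not the models or datasets evaluated in the paper.

```python
# Sketch of a coverage-prediction model of the kind DL-based fuzzers use.
import torch
import torch.nn as nn

INPUT_LEN, N_BRANCHES = 512, 1024  # assumed sizes

model = nn.Sequential(
    nn.Linear(INPUT_LEN, 256), nn.ReLU(),
    nn.Linear(256, N_BRANCHES),   # one logit per program branch
)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label: each branch hit or not
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: normalized input bytes and observed coverage bitmaps.
x = torch.rand(32, INPUT_LEN)
y = (torch.rand(32, N_BRANCHES) < 0.1).float()

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()  # trains the model; at fuzzing time, gradients taken
opt.step()       # w.r.t. the input would guide mutation instead
print(float(loss))
```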
Exclusion analysis in life insurance stands out as a crucial task in minimizing customer risks. Numerous insurance companies have implemented artificial intelligence (AI) solutions to streamline processes and mitigat...
The proposed models can design the airfoil by cuckoo search with Levenberg-Marquardt. The neural network framework has impediments due to over-fitting. This paper proposes a modified cuckoo search. Here the aerodynami...
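As a rough illustration of the cuckoo search component only (the Levenberg-Marquardt coupling and the airfoil objective are omitted), here is a minimal Lévy-flight cuckoo search on a placeholder objective; all parameters and the search domain are illustrative assumptions.

```python
# Plain cuckoo search with Levy flights (Mantegna's algorithm).
import numpy as np
from math import gamma, sin, pi

RNG = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = RNG.normal(0.0, sigma, dim)
    v = RNG.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=4, n_nests=15, iters=200, pa=0.25, alpha=0.01):
    """Minimize f over [-1, 1]^dim with a basic cuckoo search."""
    nests = RNG.uniform(-1, 1, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        # New solutions via Levy flights biased toward the current best.
        for i in range(n_nests):
            cand = nests[i] + alpha * levy_step(dim) * (nests[i] - best)
            if f(cand) < fitness[i]:
                nests[i], fitness[i] = cand, f(cand)
        # Abandon a fraction pa of the worst nests (host finds the egg).
        worst = fitness.argsort()[-max(1, int(pa * n_nests)):]
        nests[worst] = RNG.uniform(-1, 1, (len(worst), dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    return nests[fitness.argmin()], fitness.min()

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)))
print(best_x, best_f)
```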
This research presents an analysis of smart grid units to enhance connected units' security during data transmission. The major advantage of the proposed method is that the system model encompasses multiple aspects such as network flow monitoring, data expansion, control association, throughput, and security. In addition, all the above-mentioned aspects are handled with neural networks and adaptive optimizations to enhance the operation of smart grid units. Furthermore, the quantitative analysis of the optimization algorithm is discussed for two case studies, thereby achieving early convergence at reduced complexity. The suggested method ensures that each communication unit has its own distinct channels, maximizing the possibility of accurate transmission. This results in the provision of only the original data values, hence enhancing security. The power and line values are individually observed to establish control in smart grid-connected channels, even in the presence of adaptive settings. A comparison analysis is conducted to showcase the results, using simulation studies involving four scenarios and two case studies. The proposed method exhibits reduced complexity, resulting in a throughput gain of over 90%.
Cross-domain recommendation (CDR) offers a promising solution to the data sparsity problem by enabling knowledge transfer between source and target domains. However, many recent CDR models overlook crucial issues such...
ISBN (print): 9798331506476
Neural Vector Search (NVS) has exhibited superior search quality over traditional key-based strategies for information retrieval tasks. An effective NVS architecture requires high recall, low latency, and high throughput to enhance user experience and cost-efficiency. However, implementing NVS on existing neural network accelerators and vector search accelerators is sub-optimal due to the separation between the embedding stage and the vector search stage at both the algorithm and architecture levels. Fortunately, we unveil that Product Quantization (PQ) opens up an opportunity to break this separation. However, existing PQ algorithms and accelerators still focus on either the embedding stage or the vector search stage, rather than both simultaneously. Simply combining existing solutions still follows the beaten track of separation and suffers from insufficient parallelization, frequent data access conflicts, and the absence of scheduling, thus failing to reach optimal recall, latency, and throughput. To this end, we propose a unified and efficient NVS accelerator dubbed NeuVSA based on an algorithm-architecture co-design philosophy. Specifically, on the algorithm level, we propose a learned PQ-based unified NVS algorithm that consolidates the two separate stages into the same computing and memory access paradigm. It integrates an end-to-end joint training strategy to learn the optimal codebook and index for enhanced recall and reduced PQ complexity, thus achieving smoother acceleration. On the architecture level, we customize a homogeneous NVS accelerator based on the unified NVS algorithm. Each sub-accelerator is optimized to exploit all parallelism exposed by the unified NVS algorithm, incorporating a structured index assignment strategy and an elastic on-chip buffer to alleviate buffer conflicts for reduced latency. All sub-accelerators are coordinated using a hardware-aware scheduling strategy for boosted throughput. Experimental results show that the joint training strategy improves recall.
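To ground the discussion, the sketch below shows vanilla Product Quantization with asymmetric distance computation (ADC): vectors are split into subspaces, each subspace is quantized against its own codebook, and query-to-database distances are computed by table lookup. The codebooks here are trained with plain k-means rather than the paper's end-to-end joint training, and all sizes are illustrative assumptions.

```python
# Minimal Product Quantization (PQ) with asymmetric distance search.
import numpy as np

M, K, D = 4, 256, 64      # assumed: subspaces, centroids/subspace, dim
SUB = D // M

def kmeans(x, k, iters=10, rng=np.random.default_rng(0)):
    """Tiny Lloyd's k-means; stands in for proper codebook training."""
    cent = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                cent[j] = x[assign == j].mean(0)
    return cent

def pq_train(db):
    """One codebook per subspace, trained independently."""
    return [kmeans(db[:, m*SUB:(m+1)*SUB], K) for m in range(M)]

def pq_encode(db, books):
    """Compress each vector to M one-byte centroid indices."""
    return np.stack([
        np.argmin(((db[:, m*SUB:(m+1)*SUB][:, None] - books[m][None]) ** 2)
                  .sum(-1), axis=1)
        for m in range(M)], axis=1)                  # (n, M) codes

def adc_search(q, codes, books, topk=5):
    """Asymmetric distance: per-subspace lookup tables, then sum."""
    tables = np.stack([((books[m] - q[m*SUB:(m+1)*SUB]) ** 2).sum(-1)
                       for m in range(M)])           # (M, K)
    dist = tables[np.arange(M), codes].sum(-1)       # (n,)
    return np.argsort(dist)[:topk]

rng = np.random.default_rng(1)
db = rng.normal(size=(1000, D)).astype(np.float32)
books = pq_train(db)
codes = pq_encode(db, books)
print(adc_search(db[0], codes, books))  # db[0] should rank first
```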
Malignant lymphoma is one form of cancerous tumor that can be fatal. Histopathological examination of lymphoma tissue images is a diagnostic technique for detecting malignant lymphomas. Differentiating lympho...