This paper explores the problem of boundary-data classification ambiguity that arises when machine learning techniques are applied to intrusion detection. The features and attributes of the boundary data ...
The exponential growth in data volume and the complexity of Machine Learning (ML) algorithms have increased computational limitations, which affects Artificial Intelligence (AI). This research seeks to establish whe...
Machine learning techniques such as Natural Language Processing (NLP) play a key role in a context where mining social media data could add great value to governments around the world. The posts and tweets share...
ISBN (Digital): 9798331532970
ISBN (Print): 9798331532987
Nowadays, Generative Artificial Intelligence (GenAI) is increasingly making inroads in data centers, helping to improve latency and throughput. Data center-level network infrastructures must manage large volumes of data using high-speed protocols such as InfiniBand, which allows low latency and parallel processing, together with switches deployed at different levels of the network architecture. In this article, we examine the data-communication protocols that GenAI uses to optimize and guarantee data flow between nodes, as well as the design of reference network architectures for GenAI. We also analyze the components of the GenAI networks in use. Finally, a case study is proposed in which the performance of Graphics Processing Units (GPUs) is analyzed using a set of metrics such as memory access, memory bandwidth, throughput, energy consumption, temperature, and clock speed; through the simulation of neural networks, specifically the Radial Basis Function (RBF) and Multilayer Perceptron (MLP) algorithms, the study examines how these metrics behave and suggests possible solutions for data centers.
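As a rough illustration of the benchmark workload described in this abstract, the sketch below implements the two model types named there, an RBF network and an MLP, in plain NumPy and times them to estimate throughput (samples per second). It is not the authors' code: the layer sizes and data are arbitrary assumptions, and the remaining GPU metrics (memory bandwidth, energy, temperature, clock speed) would come from separate monitoring tools rather than from this script.

# Minimal sketch (not the authors' code): the two benchmark model types named in the
# abstract, an RBF network and an MLP, implemented with NumPy and timed to estimate
# throughput in samples per second.
import time
import numpy as np

def rbf_forward(X, centers, gamma, w):
    # Radial Basis Function layer: Gaussian kernel against the centers,
    # followed by a linear output layer.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

def mlp_forward(X, W1, b1, W2, b2):
    # Two-layer perceptron with ReLU hidden units.
    h = np.maximum(X @ W1 + b1, 0.0)
    return h @ W2 + b2

rng = np.random.default_rng(0)
X = rng.standard_normal((4096, 64)).astype(np.float32)
centers = rng.standard_normal((128, 64)).astype(np.float32)
w_rbf = rng.standard_normal((128, 10)).astype(np.float32)
W1, b1 = rng.standard_normal((64, 256)).astype(np.float32), np.zeros(256, np.float32)
W2, b2 = rng.standard_normal((256, 10)).astype(np.float32), np.zeros(10, np.float32)

for name, fn in [("RBF", lambda: rbf_forward(X, centers, 1e-2, w_rbf)),
                 ("MLP", lambda: mlp_forward(X, W1, b1, W2, b2))]:
    t0 = time.perf_counter()
    for _ in range(20):
        fn()
    dt = time.perf_counter() - t0
    print(f"{name}: {20 * len(X) / dt:,.0f} samples/s")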
Receivers play a vital role in various electronic countermeasure systems and have always been a hot research topic. Based on the traditional receiver digital channelization structure, this paper further researches a...
In recent years, pre-trained language models based on the Transformer have brought significant breakthroughs to natural language processing (NLP) tasks. Their outstanding performance in general text understanding enables ...
ISBN (Digital): 9798350373820
ISBN (Print): 9798350373837
Accurately tracking dynamic objects in video sequences is an ongoing challenge in target tracking, especially in the face of occlusion, fast motion, and drastic changes in target scale and appearance. Although significant progress has been made in deep learning-based target tracking algorithms, these approaches often overlook the potential value of the target's historical feature information for enhancing tracking stability and accuracy. To this end, we propose a novel target tracking framework that integrates both long-term and short-term historical feature information of the target into the tracker. This structure not only simplifies the processing flow but also dramatically improves operational efficiency through parallel processing mechanisms, enabling the algorithm to achieve fast and accurate target tracking in complex dynamic environments. Experimental results on several publicly available target tracking datasets show that our approach yields significant improvements in tracking accuracy and robustness compared to existing algorithms.
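The abstract does not give the network details, but the sketch below shows one simple way such long- and short-term history could be fused: a deque of recent frame features serves as the short-term memory, an exponential moving average serves as the long-term template, and candidates are scored against both. The class name, window length, and decay factor are illustrative assumptions, and feature extraction (e.g. a CNN backbone) is assumed to happen elsewhere.

# Illustrative sketch only (the paper's actual architecture is not shown in the
# abstract): fusing short-term and long-term target history for matching.
from collections import deque
import numpy as np

class HistoryTemplate:
    def __init__(self, dim, short_len=5, long_decay=0.95):
        self.short = deque(maxlen=short_len)   # short-term: features from recent frames
        self.long = np.zeros(dim)              # long-term: exponential moving average
        self.decay = long_decay

    def update(self, feat):
        self.short.append(feat)
        self.long = self.decay * self.long + (1.0 - self.decay) * feat

    def score(self, candidates, w_short=0.5):
        # Cosine similarity of each candidate against both templates, then a
        # weighted fusion; the candidate with the best fused score is the match.
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        short_t = np.mean(self.short, axis=0)
        return [w_short * cos(c, short_t) + (1 - w_short) * cos(c, self.long)
                for c in candidates]

# Typical use: update() with the matched feature each frame, then score() candidates.
tracker = HistoryTemplate(dim=128)
feats = np.random.default_rng(0).random((3, 128))
tracker.update(feats[0])
print(tracker.score(list(feats[1:])))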
ISBN (Print): 9783031488023; 9783031488030
Beyond-5G and 6G networks are foreseen to be highly dynamic. These networks are expected to support and accommodate temporary activities and to leverage continuously changing infrastructures from the extreme edge to the cloud. In addition, the increasing demand for applications and data in these networks necessitates the use of geographically distributed Multi-access Edge Computing (MEC) to provide reliable services with low latency and energy consumption. Service management plays a crucial role in meeting this need. Research indicates widespread acceptance of Reinforcement Learning (RL) in this field due to its ability to model unforeseen scenarios. However, it is difficult for RL to handle the exhaustive changes in requirements, constraints, and optimization objectives that are likely to occur in widely distributed networks. Therefore, the main objective of this research is to design service management approaches that handle changing services and infrastructures in dynamic distributed MEC systems, utilizing advanced RL methods such as Distributed Deep Reinforcement Learning (DDRL) and Meta Reinforcement Learning (MRL).
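As a toy illustration of RL-based service management (far simpler than the DDRL/MRL methods named above), the sketch below runs tabular Q-learning on a made-up service-placement problem: the agent picks which of three MEC nodes should host the next request, and the reward penalizes latency and energy. All node latencies, energy figures, and state definitions are invented for the example.

# Toy sketch, not the thesis' DDRL/MRL agents: tabular Q-learning for choosing which
# MEC node hosts the next service request, with a latency + energy penalty as reward.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_nodes = 4, 3          # state = coarse load level, action = target MEC node
Q = np.zeros((n_states, n_nodes))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, node):
    # Made-up environment: farther/slower nodes have higher latency but lower energy.
    latency = rng.normal(loc=[5, 10, 20][node] * (1 + 0.2 * state), scale=1.0)
    energy = [3, 2, 1][node]
    reward = -(latency + 0.5 * energy)          # lower latency and energy is better
    next_state = rng.integers(n_states)         # load evolves (here: randomly)
    return reward, next_state

state = 0
for _ in range(5000):
    node = rng.integers(n_nodes) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, node)
    Q[state, node] += alpha * (reward + gamma * Q[nxt].max() - Q[state, node])
    state = nxt

print("Greedy placement per load level:", Q.argmax(axis=1))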
ISBN (Digital): 9798350387568
ISBN (Print): 9798350387575
This paper focuses on developing algorithms for parallel determinant processing, a crucial task in linear algebra and computational mathematics. The aim is to improve efficiency in high-performance computing environments by designing and analyzing algorithms that use parallel processing to expedite determinant computation for matrices of various sizes. The research explores methods such as Laplace expansion, LU decomposition, eigenvalue decomposition, Gaussian elimination, and cofactor expansion, assessing their efficiency, scalability, and applicability in different computational environments. The study employs advanced parallel programming techniques and architectures, utilizing multi-core processors, with a particular focus on Chio's method for processing determinants in parallel. The research also investigates the mathematical underpinnings of parallel determinant algorithms, addressing challenges such as load balancing, data distribution, and synchronization. The results show significant improvements in the efficiency of determinant calculations, reducing computation times for large matrices.
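Chio's method, highlighted in the abstract, condenses an n x n determinant to an (n-1) x (n-1) one in which every entry is an independent 2 x 2 minor, which is what makes each condensation step easy to parallelize. The NumPy sketch below is an illustrative vectorized version of that condensation, not the paper's parallel implementation; pivoting is handled only by a simple row swap.

# Sketch of Chio's condensation: det(A) = det(B) / a00^(n-2), where each entry of B
# is an independent 2x2 minor b_ij = a00*a_ij - a_i0*a_0j (i, j >= 1).
import numpy as np

def chio_det(A):
    A = np.array(A, dtype=float)
    n = len(A)
    sign, scale = 1.0, 1.0
    while n > 1:
        if A[0, 0] == 0.0:                      # swap in a row with a nonzero pivot
            nz = np.flatnonzero(A[:, 0])
            if nz.size == 0:
                return 0.0
            A[[0, nz[0]]] = A[[nz[0], 0]]
            sign = -sign
        # All 2x2 minors at once; each entry of B is independent of the others.
        B = A[0, 0] * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
        scale *= A[0, 0] ** (n - 2)
        A, n = B, n - 1
    return sign * A[0, 0] / scale

M = np.random.default_rng(2).random((6, 6))
print(chio_det(M), np.linalg.det(M))            # should agree up to rounding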
ISBN (Print): 9781450392815
Tensor decomposition (TD) is an important method for extracting latent information from high-dimensional (multi-modal) sparse data. This study presents a novel framework for accelerating fundamental TD operations on massively parallel GPU architectures. In contrast to prior work, the proposed Blocked Linearized CoOrdinate (BLCO) format enables efficient out-of-memory computation of tensor algorithms using a unified implementation that works on a single tensor copy. Our adaptive blocking and linearization strategies not only meet the resource constraints of GPU devices, but also accelerate data indexing, eliminate control-flow and memory-access irregularities, and reduce kernel launching overhead. To address the substantial synchronization cost on GPUs, we introduce an opportunistic conflict resolution algorithm, in which threads collaborate instead of contending on memory access to discover and resolve their conflicting updates on the fly, without keeping any auxiliary information or storing non-zero elements in specific mode orientations. As a result, our framework delivers superior in-memory performance compared to the prior state of the art, and is the only framework capable of processing out-of-memory tensors. On the latest Intel and NVIDIA GPUs, BLCO achieves a 2.12-2.6x geometric-mean speedup (with up to 33.35x speedup) over the state-of-the-art mixed-mode compressed sparse fiber (MM-CSF) format on a range of real-world sparse tensors.
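For readers unfamiliar with the kernel being accelerated, the sketch below shows a plain COO-based first-mode MTTKRP in NumPy, the kind of operation that formats such as MM-CSF and BLCO optimize. It does not reproduce BLCO's blocking, linearization, or GPU conflict-resolution scheme; np.add.at simply stands in for the conflicting scatter-add updates that the paper resolves on the fly.

# Reference sketch only: mode-0 MTTKRP over a COO list of nonzeros for a 3-way tensor.
import numpy as np

def mttkrp_mode0(coords, vals, dims, B, C):
    """coords: (nnz, 3) int array of (i, j, k); vals: (nnz,); B: J x R; C: K x R."""
    I = dims[0]
    R = B.shape[1]
    M = np.zeros((I, R))
    # One contribution row per nonzero: val * (B[j, :] * C[k, :]).
    contrib = vals[:, None] * B[coords[:, 1]] * C[coords[:, 2]]
    # Scatter-add by output row i; on a GPU these are the conflicting updates.
    np.add.at(M, coords[:, 0], contrib)
    return M

rng = np.random.default_rng(3)
dims, nnz, R = (50, 40, 30), 200, 8
coords = np.column_stack([rng.integers(d, size=nnz) for d in dims])
vals = rng.random(nnz)
B, C = rng.random((dims[1], R)), rng.random((dims[2], R))
print(mttkrp_mode0(coords, vals, dims, B, C).shape)   # (50, 8)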