Many methods have been applied to image segmentation, including unsupervised, supervised, and even deep learning-based models. Semantic and instance segmentation are the two most widely researched forms of segmentatio...
Gait has unique physiological characteristics and supports long-distance recognition, making gait recognition well suited to areas such as home security and identity detection. Methods using graph convolutional networks usually extract features in the spatial and temporal dimensions by stacking GCNs and TCNs, but different joints are interconnected at different moments, so splitting the spatial and temporal dimensions can cause the loss of gait information. Focusing on this problem, we propose a gait recognition network, Multi-scale Spatio-Temporal Gait (MST-Gait), which can learn multi-scale gait information simultaneously from the spatial and temporal dimensions. We design a multi-scale spatio-temporal groups Transformer (MSTGT) to model the correlation of intra-frame and inter-frame joints simultaneously, together with a multi-scale segmentation strategy to capture the periodic and local features of gait. To fully exploit the temporal information of gait motion, we design a fusion temporal convolution (FTC) to aggregate temporal information at different scales along with motion information. Experiments on the popular CASIA-B gait dataset and the OUMVLP-Pose dataset show that our method outperforms most existing skeleton-based methods, verifying the effectiveness of the proposed modules.
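The difference between a stacked GCN/TCN pipeline and joint spatio-temporal modeling can be sketched with a toy example. Everything here is illustrative (shapes, the adjacency matrix, the attention step) and is not MST-Gait's actual architecture; it only shows why splitting dimensions prevents a joint at one frame from directly interacting with a different joint at another frame, while a joint attention over all (frame, joint) tokens allows it:

```python
import numpy as np

# A gait sequence as (T frames, J joints, C channels).
T, J, C = 4, 5, 8
rng = np.random.default_rng(0)
x = rng.random((T, J, C))

# Split pipeline: spatial mixing per frame (GCN-like), then temporal
# mixing per joint (TCN-like). Cross-joint, cross-frame pairs never
# interact in a single step.
A = np.eye(J) + rng.random((J, J)) * 0.1          # toy joint adjacency
spatial = np.einsum("ij,tjc->tic", A, x)           # mix joints within a frame
temporal = np.cumsum(spatial, axis=0) / np.arange(1, T + 1)[:, None, None]

# Joint pipeline: one attention over all T*J tokens couples any joint at
# any frame with any other -- the correlation MSTGT aims to preserve.
tokens = x.reshape(T * J, C)
scores = tokens @ tokens.T / np.sqrt(C)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
joint_st = (weights @ tokens).reshape(T, J, C)

print(temporal.shape, joint_st.shape)  # both (4, 5, 8)
```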
Most existing computational tools for assumption-based argumentation (ABA) focus on so-called flat frameworks, disregarding the more general case. Here, we study an instantiation-based approach for reasoning in possib...
To cope with the surge of cloud computing services, the energy problem, and carbon emission constraints, more and more geo-distributed hybrid-powered data centers are being built around the world. These data centers are located in different places and equipped with different types of green energy sources. In this work, we propose a periodic request dispatcher, consisting of a periodic model and a spatial balance algorithm, to minimize brown energy use. The periodic model divides the long-term period into several time slots, and the spatial balance algorithm achieves energy optimization by balancing user requests within each time slot. In addition, a research case is simulated on the CloudSim platform. We conduct several comparative analyses of energy utilization, electricity cost, and carbon emission. The experimental results show that the dispatcher effectively reduces the electricity cost and achieves a reasonable request distribution balance across geo-distributed hybrid-powered data centers.
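The slot-by-slot balancing idea can be sketched in a few lines. This is a minimal greedy illustration of the concept, not the paper's actual dispatcher or the CloudSim simulation; the function name, the unit energy cost per request, and the green-capacity table are all assumptions made for the example:

```python
# Hedged sketch: the horizon is split into time slots; in each slot,
# requests are routed to the data center with the most unused green
# energy, and any request that no green budget can cover is charged
# to brown energy.
def dispatch(requests_per_slot, green_capacity, cost_per_request=1.0):
    """Return total brown energy consumed over all slots."""
    brown = 0.0
    for slot, demand in enumerate(requests_per_slot):
        # Remaining green budget of each data center in this slot.
        budget = [green_capacity[dc][slot] for dc in range(len(green_capacity))]
        for _ in range(demand):
            dc = max(range(len(budget)), key=lambda d: budget[d])  # greenest DC
            if budget[dc] >= cost_per_request:
                budget[dc] -= cost_per_request      # served by green energy
            else:
                brown += cost_per_request           # falls back to brown energy
    return brown

# Two data centers over three slots: solar-heavy DC0, wind-heavy DC1.
green = [[5, 0, 2], [1, 4, 2]]
print(dispatch([4, 4, 6], green))  # → 2.0
```

Balancing toward the greenest data center per slot is what lets the first two slots run entirely on green energy here; only the last, under-provisioned slot spills over to brown energy.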
Link prediction on dynamic heterogeneous graphs has attracted much attention in recent years, because it is an ideal abstract model of many problems in real application, such as the prediction of customer's consum...
ISBN:
(Print) 9798350323085
Global Navigation Satellite System (GNSS) is widely used today for both positioning and timing purposes. Many distinct receiver chips are available off the shelf, each tailored to match various applications' requirements. Being implemented as Application-Specific Integrated Circuits, these chips provide good performance and low energy consumption but must be treated as "black boxes" by customers. This prevents modification, research into enhancing the GNSS processing chain (e.g., applying Approximate Computing techniques), and design-space exploration for finding the optimal receiver implementation for each use case. In this paper, we review the development of SyDR, an open-source Software-Defined Radio oriented towards benchmarking of GNSS algorithms. Specifically, our goal is to integrate certain components of the GNSS processing chain into a Field-Programmable Gate Array and manage their operation with a Python program using the Xilinx PYNQ flow. We present the early steps of converting parts of SyDR to C, which will later be converted to Hardware Description Language descriptions using High-Level Synthesis. We demonstrate the successful conversion of the tracking process and discuss the resulting benefits and drawbacks, before outlining the next steps in preparation for hardware implementation.
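The tracking process mentioned above centers on correlating incoming samples against shifted replicas of the local spreading code. The sketch below is not SyDR's actual code; the toy ±1 sequence stands in for a real PRN code, and the early/prompt/late correlator spacing is illustrative:

```python
import numpy as np

# Hedged sketch of a GNSS tracking correlator: the received samples are
# correlated with early, prompt, and late replicas of the local code;
# the early-late imbalance steers the code phase in a tracking loop.
def correlate(samples, code, spacing=1):
    prompt = np.dot(samples, code)
    early = np.dot(samples, np.roll(code, -spacing))
    late = np.dot(samples, np.roll(code, spacing))
    return early, prompt, late

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)               # toy PRN-like sequence
samples = code + 0.1 * rng.normal(size=1023)            # aligned, noisy signal

e, p, l = correlate(samples, code)
# With an aligned replica, the prompt correlator dominates the early and
# late ones, and the early-late discriminator (e - l) stays small.
print(p, e, l)
```

A tight inner loop like this, dominated by multiply-accumulate operations, is exactly the kind of component that benefits from being moved into FPGA fabric via High-Level Synthesis.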
Increasing attention has been garnered by LiDAR points and multi-view images fusion based on Transformer for supplementing another modality in 3D object detection. However, challenges persist for cross-modal fusion me...
Diverse opinion summarization aims to generate a summary that captures multiple opinions in texts. Although large language models (LLMs) have become the main choice for this task, the performance is highly dependent on p...
Graphics Processing Units (GPUs) are used to accelerate computing-intensive tasks such as neural networks, data analysis, and high-performance computing. Over the past decade or so, researchers have done a lot of work on GPU architecture and proposed a variety of theories and methods to study the microarchitectural characteristics of various GPUs. In this study, the GPU serves as a co-processor and works together with the CPU in an embedded real-time system to handle computationally intensive tasks. The study models the architecture of the GPU, building on existing models, and analyzes the SIMT mechanism and cache-miss behavior to give a more detailed account of the GPU architecture. To verify the GPU architecture model proposed in this article, 10 GPU kernel tasks and an Nvidia GPU device were used to perform experiments. The experimental results showed that the error between the kernel task execution time predicted by the model and the actually measured execution time ranged from a minimum of 3.80% to a maximum of 8.30%.
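The shape of such an analytical timing model, and the relative-error metric behind the 3.80%-8.30% figures, can be illustrated with a toy calculation. All coefficients below (warps per SM, cache-miss penalty, CPI, clock rate) are made up for the example and are not the paper's model:

```python
# Hedged toy version of an analytical GPU timing model: predicted kernel
# time combines SIMT issue cycles with a cache-miss penalty, and accuracy
# is reported as the relative error against the measured time.
def predict_time(warps, instr_per_warp, warps_per_sm, miss_rate,
                 miss_penalty=400, cpi=1.0, clock_ghz=1.0):
    waves = warps / warps_per_sm                    # warp batches per SM
    issue_cycles = waves * instr_per_warp * cpi     # SIMT issue cost
    miss_cycles = waves * instr_per_warp * miss_rate * miss_penalty
    return (issue_cycles + miss_cycles) / (clock_ghz * 1e9)  # seconds

def relative_error(predicted, measured):
    return abs(predicted - measured) / measured * 100.0

pred = predict_time(warps=1024, instr_per_warp=2000, warps_per_sm=32,
                    miss_rate=0.02)
print(round(relative_error(pred, measured=6.0e-4), 2))  # → 4.0
```

Fitting the free coefficients per device and then checking the relative error against measured kernel runtimes is the same validation pattern the study reports.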