The traditional domain adaptation task is not sufficient for mining inter-domain image context. We propose a solution from the perspective of image semantic-level retrieval. Image semantic-level retrieval is based on t...
Pre-trained language models for code (PLMCs) have gained attention in recent research. These models are pre-trained on large-scale datasets using multi-modal objectives. However, fine-tuning them requires extensive su...
Worker activity recognition is an important aspect of smart factory construction. The development of deep neural networks and the widespread deployment of sensors in the smart factory have brought opportuniti...
Entity alignment (EA) aims to identify entities across different knowledge graphs that represent the same real-world objects. Recent embedding-based EA methods have achieved state-of-the-art performance in EA yet face...
Deep learning has achieved tremendous success in various fields, but training deep neural networks (DNNs) is very compute-intensive, which has given rise to numerous deep learning frameworks that aim to offer better usability and higher performance to practitioners. TensorFlow and PyTorch are the two most popular frameworks: TensorFlow is more prominent in industry, while PyTorch is more appealing in academia. However, the two frameworks differ greatly owing to opposite design philosophies: static vs. dynamic computation graphs. TensorFlow is regarded as more performance-friendly because it has more opportunities to perform optimizations with a full view of the computation graph. However, there are also claims that PyTorch is sometimes faster than TensorFlow, which confuses end-users choosing between them. In this paper, we carry out analytical and experimental analysis to unravel the mystery of the single-GPU training-speed comparison between TensorFlow and PyTorch. To make our investigation as comprehensive as possible, we carefully select seven popular neural networks covering computer vision, speech recognition, and natural language processing (NLP). The contributions of this work are two-fold. First, we conduct detailed benchmarking experiments on TensorFlow and PyTorch and analyze the reasons for their performance differences, providing guidance for end-users choosing between the two frameworks. Second, we identify some key factors that affect performance, which can direct end-users to write their models more efficiently.
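The benchmarking methodology the abstract describes rests on a standard pattern: run untimed warm-up iterations (to absorb graph compilation, JIT tracing, and cache effects), then time many repetitions of a single training step and average them. The sketch below illustrates that pattern with a pure-Python stand-in for a training step; `benchmark_step` and `dummy_step` are hypothetical names for illustration, not the paper's actual harness.

```python
import time
import statistics

def benchmark_step(step_fn, warmup=5, iters=20):
    """Time a training-step callable: discard warm-up runs
    (graph/JIT compilation, cache effects), then report the
    mean and standard deviation of the timed iterations."""
    for _ in range(warmup):
        step_fn()  # warm-up: executed but not timed
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        step_fn()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical stand-in for one framework's training step.
def dummy_step():
    sum(i * i for i in range(10_000))

mean_s, std_s = benchmark_step(dummy_step)
print(f"mean {mean_s * 1e3:.3f} ms  +/- {std_s * 1e3:.3f} ms")
```

One caveat for GPU benchmarks of the kind the paper performs: frameworks launch GPU kernels asynchronously, so a device synchronization (e.g. `torch.cuda.synchronize()` in PyTorch) is needed before reading the clock, otherwise the timer measures only kernel launch latency.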
The Vision Transformer (ViT), which benefits from utilizing self-attention mechanisms, has demonstrated superior accuracy compared to CNNs. However, due to the expensive computational costs, deploying and inferring Vi...
X-armed bandits have achieved state-of-the-art performance in optimizing unknown stochastic continuous functions, which can model many machine learning tasks, especially in big-data-driven personalized recommendati...
Semantic reasoning techniques based on knowledge graphs have been widely studied since they were proposed. Previous studies are mostly based on closed-world assumptions, which cannot reason about unknown facts. To thi...
Commonsense knowledge (CSK) is the information that people use in daily life but rarely state explicitly. It summarizes practical knowledge about how the world works. Existing machines have knowledge but lack commons...
Reliable hand mesh reconstruction (HMR) from commonly used color and depth sensors is challenging, especially under scenarios with varied illumination and fast motion. The event camera is a highly promising alternative f...