Although numerous high-pressure electride (HPE) superconductors have been reported, their superconducting transition temperatures (Tc) are low; no HPE superconductor with a Tc exceeding 100 K has been reported. Herein, we predicted an HPE superconductor, Li4Rh, with a high Tc = 108.2 K at 300 GPa, making it the first HPE superconductor with a Tc exceeding 100 K. Li4Rh features strong hybridization between nonnuclear attractors (NNAs) and atoms near the Fermi level and a large, deformed cylindrical Fermi sheet. This Fermi sheet induces strong electron-phonon coupling (EPC) and allows electrons and phonons spanning a wide range of q vectors to participate in EPC, resulting in a high Tc. Unlike in other HPE superconductors, the Tc of Li4Rh does not fall as the EPC weakens but remains stable, because the logarithmic average phonon frequency increases as the EPC strength decreases; as a result, Tc stays nearly constant as pressure increases. The results indicate that HPEs with strong hybridization between NNAs and atoms near the Fermi level and with large, closed Fermi surfaces are more likely to exhibit high Tc, offering deep insights into HPE superconductivity and valuable guidance for future research into high-Tc electrides.
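For context, the compensation the abstract describes between the EPC strength λ and the logarithmic average phonon frequency ω_log is the trade-off that appears in the standard McMillan-Allen-Dynes expression for Tc; that the authors use this particular form is an assumption here, but it is the conventional one in EPC studies:

```latex
T_c = \frac{\omega_{\log}}{1.2}\,
      \exp\!\left[\frac{-1.04\,(1 + \lambda)}{\lambda - \mu^{*}(1 + 0.62\,\lambda)}\right]
```

Here λ is the EPC constant and μ* the Coulomb pseudopotential. If a pressure-induced drop in λ is offset by a rise in ω_log, Tc stays roughly unchanged, which matches the pressure behavior the abstract reports.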
News text processing is an important branch of natural language processing. Compared with ordinary text, news text has significant economic and scientific value. The characteristics of news text include structural hierarchy, dive...
Aspect-based sentiment analysis (ABSA) is a natural language processing (NLP) technique for determining the distinct sentiments a customer expresses toward different aspects within a single comment. The increasing online data con...
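To make the task concrete, a minimal lexicon-based toy sketch of aspect-level sentiment extraction follows. It is purely illustrative: the aspect set, the polarity lexicon, and the nearest-opinion-word heuristic are all invented for the example, while real ABSA systems rely on learned models.

```python
# Toy ABSA: assign a polarity to each aspect term by looking at the
# nearest opinion word in the comment. Lexicons are illustrative only.
ASPECTS = {"food", "service", "price"}
POLARITY = {"great": 1, "tasty": 1, "cheap": 1, "slow": -1, "rude": -1}

def toy_absa(comment: str) -> dict:
    tokens = comment.lower().replace(",", " ").replace(".", " ").split()
    result = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            # Score the aspect by the closest sentiment-bearing word.
            nearest = min(
                (j for j, t in enumerate(tokens) if t in POLARITY),
                key=lambda j: abs(j - i),
                default=None,
            )
            if nearest is not None:
                result[tok] = "positive" if POLARITY[tokens[nearest]] > 0 else "negative"
    return result

print(toy_absa("The food was great but the service was slow"))
# -> {'food': 'positive', 'service': 'negative'}
```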
Storage and inference of deep neural network models are resource-intensive, limiting their deployment on edge devices. Structured pruning methods can reduce the resource requirements for model storage and inference by...
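The abstract is truncated here; as general background, one common structured pruning recipe scores whole convolution filters by their L1 norm and drops the lowest-scoring ones, so the saved model is genuinely smaller and faster rather than merely sparse. A minimal numpy sketch of that idea (the layer shape and keep ratio are arbitrary illustrative choices):

```python
import numpy as np

def prune_conv_filters(weight: np.ndarray, keep_ratio: float = 0.5):
    """Structured (filter-level) pruning by L1 norm.

    weight: conv weights of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weights and the indices of the kept filters.
    """
    n_out = weight.shape[0]
    n_keep = max(1, int(n_out * keep_ratio))
    # Score each output filter by the L1 norm of its weights.
    scores = np.abs(weight).reshape(n_out, -1).sum(axis=1)
    keep = np.sort(np.argsort(scores)[-n_keep:])  # largest-norm filters
    return weight[keep], keep

w = np.random.randn(64, 32, 3, 3)           # a 64-filter conv layer
w_pruned, kept = prune_conv_filters(w, 0.5)
print(w_pruned.shape)                        # (32, 32, 3, 3)
```

In a real network the next layer's input channels must be sliced with the same indices, and the model is usually fine-tuned afterward; that bookkeeping is omitted here.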
Synthetic aperture imaging (SAI) methods aim to see through dense occlusions and reconstruct the target scene behind them. Traditional frame-based SAI methods, e.g., DeOccNet [1], take the occluded light field images captured by a camera array as input and fuse them to achieve image de-occlusion.
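The classical operation underlying synthetic aperture imaging is a shift-and-average over the camera-array views: points on the chosen focal plane align across views and stay sharp, while occluders in front of that plane are smeared out. A minimal numpy sketch (the per-camera baselines and the focal-plane disparity are placeholder inputs, and border wrap-around from np.roll is ignored):

```python
import numpy as np

def synthetic_aperture_refocus(views, offsets, disparity):
    """Shift-and-average refocusing over a camera array.

    views:     list of equally sized images from the array cameras.
    offsets:   per-camera (dx, dy) baselines relative to a reference view.
    disparity: disparity of the focal plane; larger values focus nearer.
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dx, dy) in zip(views, offsets):
        # Shift each view so the focal plane aligns across all cameras.
        sx, sy = int(round(dx * disparity)), int(round(dy * disparity))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(views)
```

Learned methods such as DeOccNet replace the plain average with a network that fuses the shifted views, but the aperture-synthesis geometry is the same.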
Raw point clouds are extensively used in 3D object detection due to their detailed spatial positioning information, which surpasses that of voxelized point clouds. However, point-based 3D object detection networks oft...
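The positional detail lost by voxelization, which this abstract alludes to, is easy to demonstrate: voxelizing snaps every point to the center of its cell, collapsing nearby points together. A minimal numpy sketch (the voxel size is an arbitrary assumption):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """Snap each 3D point to the center of its voxel.

    points: (N, 3) array of raw xyz coordinates.
    Returns deduplicated voxel-center coordinates, illustrating the
    positional detail lost relative to the raw point cloud.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)
    return (np.unique(idx, axis=0) + 0.5) * voxel_size

pts = np.random.rand(1000, 3)        # raw points in a unit cube
vox = voxelize(pts, 0.1)
print(pts.shape, "->", vox.shape)    # many raw points collapse per voxel
```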
Deep learning has become an important computational paradigm in our daily lives, with a wide range of applications from authentication using facial recognition to autonomous driving in smart vehicles. The quality of deep learning models, i.e., neural architectures with parameters trained over a dataset, is therefore crucial to daily life and the economy.
As Deep Neural Networks (DNNs) continue to increase in complexity, the computational demands of their training have become a significant bottleneck. Low-precision training has emerged as a crucial strategy, wherein full-precision values are quantized to lower precisions, reducing computational overhead while aiming to maintain model accuracy. While prior research has primarily focused on minimizing quantization noise and optimizing performance for specific models and tasks, a comprehensive understanding of the general principles governing low-precision computations across diverse DNN architectures has been lacking. In this paper, we address this gap by systematically analyzing the factors that influence low-precision matrix computations, which are fundamental to DNN training. We investigate three critical factors, namely accumulation in matrix calculations, the frequency of element usage, and the depth of matrices within the model, and their impact on low-precision training. Through controlled experiments on standard models, as well as customized experiments designed to isolate individual factors, we derive several key insights: layers with higher accumulation and matrices with lower usage frequencies tolerate low-precision noise better, without significantly compromising the stability of model training. Additionally, while the depth of matrices influences the stability of matrix operations to some extent, it has no noticeable effect on the overall training outcome. Our findings contribute to generalizable principles for low-precision training, offering a systematic framework applicable across various DNN architectures. We provide empirical evidence supporting the strategic allocation of training bit-widths based on the analyzed factors, thereby enhancing the efficiency and effectiveness of DNN training in resource-constrained environments.
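One way to start reproducing the accumulation analysis described above is to fake-quantize matmul inputs and compare against the full-precision product while the inner (accumulation) dimension grows. The numpy sketch below uses uniform symmetric quantization; the bit-width, shapes, and accumulation depths are arbitrary illustrative choices, and the accumulator itself is kept in full precision.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform symmetric fake-quantization to the given bit-width."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
for k in (16, 256, 4096):                  # inner/accumulation dimension
    a = rng.standard_normal((64, k))
    b = rng.standard_normal((k, 64))
    exact = a @ b
    approx = quantize(a) @ quantize(b)     # quantized inputs, fp accumulate
    rel = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"k={k:5d}  relative error = {rel:.4f}")
```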
With the development of large language models (LLMs), detecting whether text is generated by a machine becomes increasingly challenging in the face of malicious use cases like the spread of false information, protecti...
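The abstract is cut off, but a widely used baseline for this detection task, not necessarily the approach taken in this paper, scores a passage by its likelihood under a language model: machine-generated text tends to look less surprising to the model than human-written text. A hedged sketch with Hugging Face transformers (gpt2 is a placeholder scoring model, and the threshold would have to be tuned on labeled data):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_nll(text: str) -> float:
    """Average per-token negative log-likelihood under the scoring model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean NLL over the sequence
    return loss.item()

def looks_machine_generated(text: str, threshold: float = 3.0) -> bool:
    # Lower NLL (less surprising text) is weak evidence of machine origin.
    return avg_nll(text) < threshold
```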
With the rapid advancement of artificial intelligence, chips have become increasingly important. The emerging RISC-V instruction set gradually provides powerful computing support for this field. In this context, along...