We demonstrate Purcell enhancement of a single T center integrated in a silicon photonic crystal cavity, increasing the fluorescence decay rate by a factor of 6.89 and achieving a photon outcoupling rate of 73.3 kHz. ...
Exceptional point (EP)-based optical sensors exhibit exceptional sensitivity but poor detectivity. Operating slightly off the EP boosts detectivity with little loss in sensitivity. We experimentally demonstrate a high-de...
Quantum repeaters are proposed to overcome the exponential photon loss over distance in fibers. One-way quantum repeaters eliminate the need for two-way classical communication and can potentially outperform quantum m...
The amount of data processed in the cloud, the development of Internet-of-Things (IoT) applications, and growing data privacy concerns force the transition from cloud-based to edge-based processing. Limited energy and...
Efficient quantum repeaters are needed to combat photon losses in fibers in future quantum networks. A single atom coupled to a photonic cavity offers an excellent platform for a photon-atom gate. Here I propose a quantum repea...
ISBN (print): 9798400713965
FPGAs are a compelling substrate for supporting machine learning inference. Tools such as High-Level Synthesis and hls4ml can shorten the development cycle for deploying ML algorithms on FPGAs, but they can struggle to handle the large on-chip storage needed by many of these models. In particular, the high BRAM usage found in many of these flows can cause Place & Route failures during synthesis. In this paper we propose a Simulated-Annealing-based flow that performs BRAM-aware quantization. This approach trades inference accuracy against BRAM usage to provide a high-quality inference engine that still meets on-chip resource constraints. We demonstrate this flow for Transformer-based machine learning algorithms, including Flash Attention in a Stream-based Dataflow architecture. Our system imposes minimal accuracy drops, yet reduces BRAM usage by 20%-50% and improves power efficiency by 264%-812% compared to existing Transformer-based accelerators on FPGAs.
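The abstract does not spell out the annealing loop itself, so the following is only a minimal sketch of how such a BRAM-aware quantization search could look. The functions estimate_bram and evaluate_accuracy are hypothetical stand-ins for the synthesis resource estimate and the quantized-model accuracy, and the cost weighting, move set, and cooling schedule are illustrative assumptions, not the paper's actual flow.

import math
import random

# Hypothetical stand-ins: a real flow would query the hls4ml model and the
# FPGA synthesis estimates for these quantities.
def estimate_bram(widths):
    # crude proxy: wider weights need more BRAM blocks
    return sum(widths.values()) // 2

def evaluate_accuracy(widths):
    # crude proxy: accuracy degrades as layers drop below 8 bits
    return 0.95 - 0.002 * sum(8 - w for w in widths.values() if w < 8)

def anneal_bitwidths(layers, bram_budget, steps=2000, t0=1.0, cooling=0.999):
    """Simulated-annealing search over per-layer bit widths, trading
    inference accuracy against BRAM usage under an on-chip budget."""
    def cost(w):
        over = max(0, estimate_bram(w) - bram_budget)   # penalize budget overruns
        return (1.0 - evaluate_accuracy(w)) + 0.01 * over

    widths = {layer: 8 for layer in layers}             # start from a uniform 8-bit model
    best, best_cost, temp = dict(widths), cost(widths), t0
    for _ in range(steps):
        cand = dict(widths)
        layer = random.choice(layers)
        cand[layer] = min(16, max(2, cand[layer] + random.choice([-1, 1])))
        delta = cost(cand) - cost(widths)
        # accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            widths = cand
            if cost(widths) < best_cost:
                best, best_cost = dict(widths), cost(widths)
        temp *= cooling
    return best

# Example: eight Transformer layers and a budget of 30 BRAM blocks (arbitrary numbers).
print(anneal_bitwidths([f"layer{i}" for i in range(8)], bram_budget=30))

In practice the accuracy term would come from re-running validation inference on the quantized model and the BRAM term from the toolflow's resource estimate, with the penalty weight tuned so the accepted solutions stay within the device's on-chip budget.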
In this paper, we consider the problem of characterizing a robust global dependence between two brain regions where each region may contain several voxels or channels. This work is driven by experiments to investigate...
Quantum memory devices with high storage efficiency and bandwidth are essential elements for future quantum networks. Solid-state quantum memories can provide broadband storage, but they primarily suffer from low stor...