Wide adoption of Internet of Things solutions in the industry requires IoT platforms to support long-running, stateful protocols with numerous connections to ingest the measurements, while at the same time providing the highest possible throughput to achieve service level objectives (SLOs). The challenge of providing sufficient throughput can be addressed via automated horizontal scaling of IoT platforms' gateways. First, a study of the state-of-the-art solutions and approaches adopted in the industry to ensure predictable throughput at IoT gateways was conducted. Second, three gateway autoscaling metrics were studied on the in-house developed cloud-native IoT Platform: CPU utilization, the number of concurrent active connections, and the absolute throughput per gateway. The results of the experimental evaluation suggest the combination of rate limiting and active connection monitoring as a prerequisite for high throughput at IoT gateways.
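The abstract does not give the evaluation setup, so as a purely illustrative sketch of how the suggested combination of rate limiting and active connection monitoring could drive a horizontal-scaling decision, consider the following Python fragment; all thresholds, metric names, and the scaling rule itself are assumptions for illustration, not values or code from the paper.

# Illustrative sketch only: a horizontal-scaling decision for IoT gateways
# driven by active connections and ingest throughput, with a per-gateway
# rate limit. All names and thresholds are assumptions, not from the paper.
import math
from dataclasses import dataclass

MAX_ACTIVE_CONNECTIONS = 5_000   # assumed connection budget per gateway
MAX_THROUGHPUT_MSGS_S = 20_000   # assumed rate limit enforced per gateway

@dataclass
class GatewayMetrics:
    active_connections: int
    throughput_msgs_s: float     # measurements ingested per second

def desired_replicas(fleet: list[GatewayMetrics]) -> int:
    """Scale so that no gateway exceeds its connection or throughput budget."""
    total_conns = sum(g.active_connections for g in fleet)
    total_tput = sum(g.throughput_msgs_s for g in fleet)
    by_conns = math.ceil(total_conns / MAX_ACTIVE_CONNECTIONS)
    by_tput = math.ceil(total_tput / MAX_THROUGHPUT_MSGS_S)
    # Long-running, stateful connections make scale-in disruptive, so the
    # connection-driven replica count acts as a floor.
    return max(by_conns, by_tput, 1)

fleet = [GatewayMetrics(5_500, 15_000), GatewayMetrics(5_600, 18_000)]
print(desired_replicas(fleet))   # -> 3, driven by the connection count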
Containerization approaches based on namespaces offered by the Linux kernel have seen increasing popularity in the HPC community, both as a means to isolate applications and as a format to package and distribute the...
As quantum computing systems mature and move from laboratories to production computing environments, corresponding software stacks are becoming key for their successful utilization. In particular, the expected use of quantum systems as HPC accelerators requires a deep integration with the existing and widely deployed HPC software stacks. Additionally, new requirements such as dynamic compilation and new challenges for tools and programming models must be considered. We tackle these challenges by developing the Munich Quantum Software Stack—a comprehensive initiative by the Munich Quantum Valley to offer a flexible, efficient, and user-oriented software environment. In this poster, we describe the core components and workflows, and how they will enable this transformation from quantum experiments to quantum accelerators.
In this paper we propose a novel way to integrate time-evolving partial differential equations that contain nonlinear advection and stiff linear operators, combining exponential integration techniques and semi-Lagrang...
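The preview is cut off here, but the kind of combination it describes usually starts from splitting the right-hand side into a stiff linear operator L and a nonlinear advective remainder N; a standard exponential-integration starting point (not necessarily the exact scheme of the paper) is the variation-of-constants formula:

% Generic exponential-integration background, not the paper's specific scheme;
% the paper combines this idea with a semi-Lagrangian treatment of advection.
\begin{align}
  \frac{\partial u}{\partial t} &= L\,u + N(u),\\
  u(t_n + \Delta t) &= e^{\Delta t\,L}\, u(t_n)
    + \int_0^{\Delta t} e^{(\Delta t - s)\,L}\, N\big(u(t_n + s)\big)\,\mathrm{d}s .
\end{align}

Exponential integrators approximate the remaining integral, while a semi-Lagrangian treatment evaluates the advective contribution along upstream characteristics.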
The increasing demand for cloud computing drives the expansion in scale of datacenters and their internal optical networks, in pursuit of higher bandwidth, high reliability, and lower latency. Optical transceivers are essential elements of optical networks, yet their reliability has not been well studied compared to other hardware components. In this paper, we leverage large quantities of monitoring data from optical transceivers and OS-level metrics to provide statistical insights into the occurrence of optical transceiver failures. We estimate transceiver failure rates and normal operating ranges for monitored attributes, correlate early-observable patterns with known failure symptoms, and finally develop failure prediction models based on our analyses. Our results enable network administrators to deploy early-warning systems and enact predictive maintenance strategies, such as replacement or traffic re-routing, reducing the number of incidents and their associated costs.
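As a minimal sketch of the last step, failure-prediction models of this kind can be trained on labelled telemetry snapshots; the feature names, label column, input file, and the gradient-boosting classifier below are assumptions for illustration, not the attributes or models used in the paper.

# Minimal sketch of a transceiver failure-prediction model. The feature
# names, label column, input file, and classifier choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("transceiver_telemetry.csv")          # hypothetical dataset
features = ["rx_power_dbm", "tx_power_dbm", "temperature_c",
            "bias_current_ma", "crc_error_rate"]
X, y = df[features], df["failed_within_7d"]             # hypothetical label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))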
As quantum computers mature, they migrate from laboratory environments to HPC centers. This movement enables large-scale deployments, greater access to the technology, and deep integration into HPC in the form of quantum acceleration. In laboratory environments, specialists directly control the systems' environments and operations at any time with hands-on access, whereas HPC centers require remote and autonomous operation with minimal physical contact. All current quantum systems rely on calibration to maximize their coherence times and fidelities and, with that, to reach their best performance, so automating the calibration process is essential. It is therefore of great significance to establish a standardized and automatic calibration process, alongside unified evaluation standards for quantum computing performance, to assess the success of the calibration and operation of the system. In this work, we characterize our in-house superconducting quantum computer, establish an automatic calibration process, and evaluate its performance through quantum volume and an application-specific algorithm. We also analyze readout errors and improve the readout fidelity using error mitigation.
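To make the last point concrete, one common readout-error-mitigation recipe builds an assignment (confusion) matrix from calibration circuits and applies its inverse to the measured distribution; the single-qubit numbers below are made up, and this is only a generic sketch, not the exact procedure used in this work.

# Generic sketch of matrix-inversion readout-error mitigation for one qubit.
# The calibration values are invented for illustration.
import numpy as np

# Column j holds the measured-outcome distribution when state |j> is prepared.
A = np.array([[0.97, 0.06],
              [0.03, 0.94]])

counts = np.array([480.0, 544.0])        # hypothetical raw counts
p_raw = counts / counts.sum()

# Solve A @ p_true = p_raw, then clip and renormalize, since the inversion
# can yield slightly negative entries.
p_mitigated = np.linalg.solve(A, p_raw)
p_mitigated = np.clip(p_mitigated, 0.0, None)
p_mitigated /= p_mitigated.sum()
print(p_mitigated)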
ISBN (digital): 9798331541378
ISBN (print): 9798331541385
Quantum computing is a promising technology that requires a sophisticated software stack to connect end users to the wide range of possible quantum backends. However, current software tools are usually hard-coded for single platforms and lack a dynamic interface that can automatically retrieve and adapt to the changing physical characteristics and constraints of different platforms. With new hardware platforms frequently introduced and their performance changing on a daily basis, this constitutes a serious limitation. In this paper, we showcase a concept and a prototypical realization of an interface, called the Quantum Device Management Interface (QDMI), that addresses this problem by explicitly connecting the software and hardware developers, mediating between their competing interests. QDMI allows hardware platforms to provide their physical characteristics in a standardized way, and software tools to query that data to guide the compilation process accordingly. This enables software tools to automatically adapt to different platforms and to optimize the compilation process for the specific hardware constraints. QDMI is a central part of the Munich Quantum Software Stack (MQSS), a sophisticated software stack connecting end users to the wide range of possible quantum backends. QDMI is publicly available as open source at https://***/Munich-Quantum-Software-Stack/QDMI.
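QDMI itself is specified as a low-level interface, and the sketch below is not its real API; it only illustrates, in Python, the idea the paper describes: a compilation pass queries device properties reported by the backend instead of hard-coding them. All names and numbers are hypothetical.

# Conceptual illustration only; not the actual QDMI API. A toy compiler pass
# reads queried device properties to place a two-qubit gate on the least
# noisy edge of the coupling map.
from dataclasses import dataclass

@dataclass
class DeviceProperties:
    num_qubits: int
    coupling_map: list[tuple[int, int]]             # pairs supporting 2q gates
    two_qubit_error: dict[tuple[int, int], float]   # per-edge error rates

def pick_qubit_pair(props: DeviceProperties) -> tuple[int, int]:
    """Choose the coupling-map edge with the lowest reported error."""
    return min(props.coupling_map, key=lambda edge: props.two_qubit_error[edge])

backend = DeviceProperties(                          # hypothetical query result
    num_qubits=3,
    coupling_map=[(0, 1), (1, 2)],
    two_qubit_error={(0, 1): 0.015, (1, 2): 0.009},
)
print(pick_qubit_pair(backend))                      # -> (1, 2)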
IT systems of today are becoming larger and more complex, rendering their human supervision more difficult. Artificial Intelligence for IT Operations (AIOps) has been proposed to tackle modern IT administration challe...
The modeling of atmospheric processes in the context of weather and climate simulations is an important and computationally expensive challenge. The temporal integration of the underlying PDEs requires a very large nu...
Spike detection plays a central role in neural data processing and brain-machine interfaces (BMIs). A challenge for future-generation implantable BMIs is to build a spike detector that features both low hardware cost ...
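As a point of reference for what even a minimal spike detector involves, the sketch below thresholds the signal at a multiple of a median-based noise estimate; this is a common baseline, not the detector proposed in the paper, and all values are illustrative.

# Baseline amplitude-threshold spike detector; not the paper's design.
import numpy as np

def detect_spikes(signal: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Return indices where |signal| first crosses k times the noise level."""
    sigma = np.median(np.abs(signal)) / 0.6745      # robust noise estimate
    above = np.abs(signal) > k * sigma
    # Keep only the first sample of each contiguous threshold crossing.
    return np.flatnonzero(above & ~np.roll(above, 1))

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 30_000)
trace[[5_000, 12_000, 21_000]] += 12.0              # inject synthetic spikes
print(detect_spikes(trace))                          # includes the injected indices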