We propose MoveFeel, a movement computing framework that leverages vision-based analysis to compute meaningful metrics for assessing expressive dance movement. Our system is a multi-component workflow which extracts a...
With the continuous increase of resident space objects (RSOs) in near-Earth space, it has become challenging to safeguard our space assets with the available ground resources. Computation of collision probability is an essential part of the safety assessment of active satellites, which requires estimating the position and velocity probability density function (PDF) of the active satellite itself and of other candidate RSOs that could cause a collision. Tracking, i.e., estimating an RSO's position and velocity, is generally performed from ground-based observations, for example range, range rate, azimuth, and elevation measurements. A 3-σ position estimation accuracy of better than 200 m is recommended for an operational collision probability threshold of 0.0001. It is well known that 200 m of 3-σ tracking accuracy is often unattainable due to multiple resource constraints, such as the limited number of sensors and the physical constraints of the measurement system itself. In this article, we analyze the ground-based measurement accuracy and the number of ground stations required to achieve 200 m of 3-σ accuracy. This analysis is performed by posing the RSO position and velocity estimation problem in a maximum a posteriori (MAP) estimation framework and calculating the measurement standard deviation and the number of observations required to achieve the 200 m 3-σ accuracy. It should be noted that observations may only be available at intervals, and the orbit uncertainty is propagated during these intervals. We compute the maximum propagation time before the 200 m accuracy threshold is exceeded for various orbits, which provides insight into the maximum allowable propagation interval. It can be deduced that maintaining the 200 m 3-σ threshold requires a trade-off between the number of ground stations, the propagation time, and the accuracy of the measurement systems. This analysis will aid sensor tasking and ground station allocation for space situational awareness.
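As a rough illustration of the MAP accuracy calculation described in this abstract, the sketch below computes a posterior 3-σ position bound from a linearized measurement model: the posterior covariance is the inverse of the Fisher information, H^T R^{-1} H + P0^{-1}. All matrices and noise levels here are invented for illustration; the article's actual orbit dynamics and measurement models are not reproduced.

```python
import numpy as np

def three_sigma_position(H, R, P0):
    """MAP posterior covariance for one linearized orbit-determination step.

    H  : (m, 6) measurement Jacobian w.r.t. the position/velocity state
    R  : (m, m) measurement noise covariance
    P0 : (6, 6) prior state covariance

    Returns the 3-sigma bound on the position error norm (first three
    state components), i.e. 3 * sqrt(trace of the position block).
    """
    info = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P0)  # Fisher information
    P = np.linalg.inv(info)                                # posterior covariance
    return 3.0 * np.sqrt(np.trace(P[:3, :3]))

# Toy example: four scalar range-like measurements with 10 m noise and a
# loose 1 km / 1 m/s prior. All numbers are illustrative only.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6))
R = (10.0 ** 2) * np.eye(4)
P0 = np.diag([1e6, 1e6, 1e6, 1.0, 1.0, 1.0])
print(f"3-sigma position bound: {three_sigma_position(H, R, P0):.1f} m")
```

Adding more observations grows the Fisher information term, shrinking the bound, which is the mechanism behind the trade-off between station count and measurement accuracy discussed above.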
The measurement range of a ceilometer depends both on its optical system characteristics and on the properties of its electronic circuitry, including the detector. The optical system of the ceilometer is required to have t...
A significant concern in using electric vehicles (EVs) is the range variability, despite vehicles being from the same manufacturers and driven under similar conditions. This variance often stems from cell-to-cell imp...
ISBN (print): 9798350355543
With an ever-growing compute advantage over CPUs, GPUs are often used in workloads with ample BLAS computation to improve performance. However, several factors, including the data-to-compute ratio, the amount of data re-use, and the shape of the data structures, can all impact performance. Hence, using a GPU is not a guarantee of better BLAS performance. In this work, we introduce the GPU BLAS Offload Benchmark (GPU-BLOB), a novel and portable benchmark that measures CPU and GPU compute performance of different BLAS kernels and problem configurations. From the GPU offload threshold (a BLAS kernel's minimum dimensions, for a given configuration, beyond which using a GPU is guaranteed to yield improved performance), we evaluate the per-node performance of three in-production HPC systems. We show that the offload threshold for GEMM is highly dependent on problem shape and the number of consecutive BLAS calls, and that, contrary to conventional wisdom, GEMV can benefit from GPU acceleration, especially on SoC-based systems.
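A minimal sketch of how an offload threshold can be estimated empirically, in the spirit of (but not reproducing) GPU-BLOB: time the same GEMM on both backends across increasing sizes and report the smallest size from which the GPU stays ahead. The GPU backend is left as a pluggable callable (e.g., a CuPy matmul that synchronizes the device before returning); only a NumPy CPU path is exercised here.

```python
import time
import numpy as np

def median_time(matmul, make_operand, n, reps=5):
    """Median wall-clock time of an n x n GEMM for one backend.
    For a GPU backend, `matmul` must synchronize before returning,
    otherwise only the kernel launch is timed."""
    a, b = make_operand(n), make_operand(n)
    matmul(a, b)  # warm-up: lazy init, JIT, page-in
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        matmul(a, b)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

def offload_threshold(cpu_time, gpu_time, sizes):
    """Smallest size from which the GPU is faster for every larger size."""
    for i, n in enumerate(sizes):
        if all(gpu_time(m) < cpu_time(m) for m in sizes[i:]):
            return n
    return None  # GPU never consistently wins in the tested range

# CPU-only demonstration; a GPU timing callable would be substituted
# as the second argument to offload_threshold in practice.
cpu = lambda n: median_time(np.matmul, lambda k: np.ones((k, k)), n)
print([f"n={n}: {cpu(n) * 1e3:.2f} ms" for n in (256, 512, 1024)])
```

Requiring the GPU to win at every size above the candidate threshold, rather than at a single point, avoids declaring a threshold inside a noisy crossover region.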
Computation on architectures that feature fine-grained parallelism requires algorithms that overcome load imbalance, inefficient memory access, serialization, and excessive synchronization. In this paper, we explore a...
ISBN (print): 9781450384469
As Alibaba's business expands across the world and across industries, higher standards are imposed on the service quality and reliability of the big data cloud computing platforms that constitute the infrastructure of Alibaba Cloud. However, root cause analysis in these platforms is non-trivial due to the complicated system architecture. In this paper, we propose a root cause analysis framework called CloudRCA which makes use of heterogeneous multi-source data, including Key Performance Indicators (KPIs), logs, and topology, and extracts important features via state-of-the-art anomaly detection and log analysis techniques. The engineered features are then utilized in a Knowledge-informed Hierarchical Bayesian Network (KHBN) model to infer root causes with high accuracy and efficiency. An ablation study and comprehensive experimental comparisons demonstrate that, compared to existing frameworks, CloudRCA 1) consistently outperforms existing approaches in F1-score across different cloud systems; 2) can handle novel types of root causes thanks to the hierarchical structure of KHBN; 3) performs more robustly with respect to algorithmic configurations; and 4) scales more favorably with data and feature sizes. Experiments also show that a cross-platform transfer learning mechanism can be adopted to further improve accuracy by more than 10%. CloudRCA has been integrated into the diagnosis system of Alibaba Cloud and deployed on three typical cloud computing platforms: MaxCompute, Realtime Compute, and Hologres. It has saved Site Reliability Engineers (SREs) more than 20% of the time spent on resolving failures over the past twelve months and improves service reliability significantly.
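To make the Bayesian-network inference step concrete, here is a toy posterior-inference sketch: a single root cause generates binary anomaly features (a KPI spike, an error-log burst, a topology change), and Bayes' rule ranks candidate causes given the observed features. The causes, features, and probabilities below are all invented; CloudRCA's KHBN model is hierarchical and learned from data, which this flat sketch does not attempt to reproduce.

```python
# Hypothetical toy network: a root cause C generates three binary
# anomaly features. None of these numbers are CloudRCA's parameters.
CAUSES = ["network", "storage", "compute"]
PRIOR = {"network": 0.2, "storage": 0.3, "compute": 0.5}
P_FEATURE = {  # P(feature_i = 1 | cause), for (KPI spike, log burst, topology change)
    "network": (0.9, 0.2, 0.7),
    "storage": (0.3, 0.8, 0.1),
    "compute": (0.5, 0.6, 0.4),
}

def posterior(observed):
    """P(cause | observed features) by direct enumeration (Bayes' rule),
    assuming features are conditionally independent given the cause."""
    joint = {}
    for c in CAUSES:
        p = PRIOR[c]
        for f_obs, p1 in zip(observed, P_FEATURE[c]):
            p *= p1 if f_obs else (1.0 - p1)
        joint[c] = p
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# A KPI spike and a topology change were observed, but no log burst.
print(posterior((1, 0, 1)))
```

The "knowledge-informed" aspect of KHBN corresponds to fixing parts of the network structure from SRE domain knowledge rather than learning everything from data; here that role is played by the hand-written conditional probability tables.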
This article deals with the problem of harmonic resonance in power systems, which causes harmonic amplification and can worsen existing harmonic problems in the system. Therefore, performing a frequency response analy...
ISBN (print): 9798400702341
Edge computing and Function-as-a-Service are two emerging paradigms that enable timely analysis of data directly in the proximity of cyber-physical systems and users. Function-as-a-Service platforms deployed at the edge require mechanisms for resource management and allocation to schedule function execution and to scale the available resources in order to ensure the proper quality of service to applications. Large-scale deployments will also require mechanisms to control the energy consumption of the overall system, to ensure long-term sustainability. In this paper, we propose a technique for scheduling function invocations on edge resources that powers down idle edge nodes during periods of low demand. In doing so, our technique aims to reduce the overall energy consumption without incurring service level agreement violations. Experimental evaluations on synthetic and real-world datasets demonstrate that, with respect to different baselines, the proposed approach reduces service level agreement violations by at least 78.1% and energy consumption by at least 62.5% on average.
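A minimal sketch of the kind of energy-aware placement policy this abstract describes, under assumed simplifications (uniform nodes, unit-sized invocations, a fixed idle timeout); the paper's actual scheduling algorithm and SLA model are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int            # max concurrent function instances
    load: int = 0
    awake: bool = True
    idle_since: float = 0.0  # timestamp when the node last became idle

def place_invocation(nodes):
    """First-fit placement that prefers already-awake nodes and wakes a
    sleeping node only when no awake node has spare capacity."""
    for node in nodes:
        if node.awake and node.load < node.capacity:
            node.load += 1
            return node
    for node in nodes:
        if not node.awake:
            node.awake = True  # in practice this incurs a wake-up latency
            node.load = 1
            return node
    return None  # all nodes saturated: queue or reject (SLA risk)

def complete_invocation(node, now):
    node.load -= 1
    if node.load == 0:
        node.idle_since = now

def power_down_idle(nodes, now, idle_timeout=30.0):
    """Power down nodes idle longer than idle_timeout, keeping at least
    one node awake to absorb incoming invocations."""
    for node in nodes:
        awake_count = sum(n.awake for n in nodes)
        if (node.awake and node.load == 0 and awake_count > 1
                and now - node.idle_since > idle_timeout):
            node.awake = False
```

The idle timeout is the knob that trades energy against SLA risk: a short timeout saves more energy but makes cold wake-ups (and thus latency violations) more likely.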
ISBN (print): 9781450392662
Shared caches in multi-core processors seriously complicate the timing verification of real-time software tasks due to the task interference occurring in the shared caches. Explicitly calculating the amount of cache interference among tasks and cache partitioning are the two major approaches to enhancing schedulability performance in the context of multi-core processors with shared caches. The former approach suffers from pessimistic cache interference estimations that result in suboptimal schedulability performance, whereas the latter may increase task execution times due to reduced cache usage, also degrading schedulability. In this paper, we propose a heuristic partitioned scheduler, called TCPS, for real-time non-preemptive multi-core systems with partitioned caches. To achieve a high degree of schedulability, TCPS combines the benefits of partitioned scheduling, which relieves the computing resources from contention, and of cache partitioning, which mitigates cache interference, in conjunction with exploiting task characteristics. A series of comprehensive experiments was performed to evaluate the schedulability performance of TCPS and compare it against a variety of global and partitioned scheduling approaches. Our results show that TCPS outperforms all of these scheduling techniques in terms of schedulability, and yields more effective cache usage and more stable load balancing.
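As a generic illustration of combining partitioned scheduling with cache partitioning (not the TCPS heuristic itself), the sketch below assigns tasks to cores worst-fit by utilization while sizing each core's share of the shared cache to its most demanding task; since tasks on one core run sequentially under non-preemptive partitioned scheduling, they can share the core's partition. Task parameters are hypothetical.

```python
def partition_tasks(tasks, num_cores, total_ways):
    """Worst-fit-decreasing heuristic: place each task on the least-loaded
    core; a core's cache partition must cover its most demanding task, and
    the partitions of all cores together must fit in the shared cache.
    tasks: list of (name, utilization, cache_ways_needed)."""
    cores = [{"util": 0.0, "ways": 0, "tasks": []} for _ in range(num_cores)]
    for name, util, ways in sorted(tasks, key=lambda t: -t[1]):
        best = min(cores, key=lambda c: c["util"])  # worst fit by utilization
        new_ways = max(best["ways"], ways)          # tasks share the core's partition
        ways_total = sum(c["ways"] for c in cores) - best["ways"] + new_ways
        if best["util"] + util > 1.0 or ways_total > total_ways:
            return None  # deemed unschedulable by this heuristic
        best["util"] += util
        best["ways"] = new_ways
        best["tasks"].append(name)
    return cores

# Hypothetical task set: (name, utilization, cache ways needed).
tasks = [("t1", 0.6, 4), ("t2", 0.5, 2), ("t3", 0.3, 2), ("t4", 0.2, 1)]
print(partition_tasks(tasks, num_cores=2, total_ways=8))
```

Because each core owns a private slice of the shared cache, no inter-core cache interference term enters the schedulability test; the cost is the cache-way budget constraint, which is exactly the trade-off the abstract describes.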