In light of inherent trade-offs regarding fairness, privacy, interpretability and performance, as well as normative questions, the machine learning (ML) pipeline needs to be made accessible for public input, critical ...
This paper presents intermediate-level knowledge in the form of a taxonomy that highlights 12 different ways in which interactive tech might support forest-related experiences that are joyful for humans. It can inspir...
Mean field models are a popular tool used to analyse load balancing policies. In some exceptional cases the waiting time distribution of the mean field limit has an explicit form. In other cases it can be computed as the solution of a set of differential equations. In this paper we study the limit of the mean waiting time E[W_λ] as the arrival rate λ approaches 1 for a number of load balancing policies in a large-scale system of homogeneous servers which finish work at a constant rate equal to one and exponential job sizes with mean 1 (i.e. when the system gets close to instability). As E[W_λ] diverges to infinity, we scale with -log(1-λ) and present a method to compute the limit lim_{λ→1-} -E[W_λ]/log(1-λ). We show that this limit has a surprisingly simple form for the load balancing algorithms considered. More specifically, we present a general result that holds for any policy for which the associated differential equation satisfies a list of assumptions. For the well-known LL(d) policy, which assigns an incoming job to the server with the least work left among d randomly selected servers, these assumptions are trivially verified. For this policy we prove the limit is given by 1/(d-1). We further show that the LL(d,K) policy, which assigns batches of K jobs to the K least loaded servers among d randomly selected servers, satisfies the assumptions and the limit is equal to K/(d-K). For a policy which applies LL(d_i) with probability p_i, we show that the limit is given by 1/(Σ_i p_i d_i - 1). We further indicate that our main result can also be used for load balancers with redundancy or memory. In addition, we propose an alternate scaling -log(p_λ) instead of -log(1-λ), where p_λ is adapted to the policy at hand, such that lim_{λ→1-} -E[W_λ]/log(1-λ) = lim_{λ→1-} -E[W_λ]/log(p_λ), where the limit lim_{λ→0+} -E[W_λ]/log(p_λ) is well defined and non-zero (contrary
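To make the scaled heavy-traffic limit concrete, the following is a minimal simulation sketch (not taken from the paper) of the LL(d) policy: N homogeneous servers draining work at rate 1, Poisson arrivals at rate Nλ, exponential job sizes with mean 1, and each job joining the server with the least work left among d uniformly sampled servers. The system size, warmup length, and job counts are illustrative choices; for λ close to 1 the estimate -E[W_λ]/log(1-λ) should approach 1/(d-1), i.e. 1 for d = 2.

```python
import numpy as np

def scaled_mean_wait(lam, d, n_servers=1000, n_jobs=400_000, warmup=100_000, seed=1):
    """Estimate -E[W_lambda] / log(1 - lambda) for the LL(d) policy by simulation."""
    rng = np.random.default_rng(seed)
    work = np.zeros(n_servers)            # work left at each server
    total_wait, n_recorded = 0.0, 0
    for i in range(n_jobs):
        # All servers drain work at rate 1 between consecutive arrivals.
        gap = rng.exponential(1.0 / (n_servers * lam))
        np.maximum(work - gap, 0.0, out=work)
        # LL(d): probe d distinct servers, join the one with the least work left.
        cand = rng.choice(n_servers, size=d, replace=False)
        j = cand[np.argmin(work[cand])]
        if i >= warmup:
            total_wait += work[j]         # waiting time = work ahead of the new job
            n_recorded += 1
        work[j] += rng.exponential(1.0)   # add the job's own (mean-1) service time
    return -(total_wait / n_recorded) / np.log(1.0 - lam)

if __name__ == "__main__":
    for lam in (0.90, 0.95):
        est = scaled_mean_wait(lam, d=2)
        print(f"lambda={lam}: -E[W]/log(1-lambda) ~ {est:.2f}   (lambda->1 limit: 1/(d-1) = 1.00)")
```

Convergence toward the limit is slow near λ = 1, so the estimates at moderate loads only roughly approach 1/(d-1).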
Since the single-pixel imaging technology based on compressive sensing was proposed, single-pixel three-dimensional (3D) imaging has attracted a lot of attention in the research community, because it also has th...
Glitches - moments when technologies do not work as desired - will become increasingly common as industrially-designed robots move into complex contexts. Taking glitches to be potential sites of critical ethical refle...
ISBN: (Print) 9798350355543
GPUs are known to be power-hungry, and due to the boom in artificial intelligence, they are currently the major contributors to the high power demands of upcoming datacenters. Most GPU usage in these popular workloads consists of large general matrix-matrix multiplications (GEMMs), which have therefore been optimized to achieve high utilization of hardware. In this work, we show that modifying the input data to GEMMs, while maintaining the matrix shapes and sizes, can notably change the power consumption of these kernels. We experiment with four kinds of input variations: value distribution, bit similarity, placement, and sparsity, across different data types. Our findings indicate that these variations can change the GPU power usage during GEMM by almost 40%. We hypothesize that input-dependent power usage variations occur due to changes in the number of bit flips in the GPUs. We propose leveraging this property through compiler and scheduler optimizations to manage power and reduce energy consumption.
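As a rough illustration of the kind of experiment described, the sketch below (not the paper's measurement harness) runs a GEMM of fixed shape on inputs with different value distributions, bit entropy, and sparsity while sampling GPU power through NVML. It assumes an NVIDIA GPU with PyTorch and pynvml installed; the matrix size, data type, and input variants are illustrative choices.

```python
import torch
import pynvml

def measure_gemm_power(a, b, iters=200, samples=20):
    """Run a @ b repeatedly and return the mean NVML power reading in watts."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    for _ in range(20):                       # warm up so clocks stabilise
        torch.matmul(a, b)
    torch.cuda.synchronize()
    readings = []
    for _ in range(samples):
        for _ in range(iters // samples):
            torch.matmul(a, b)
        torch.cuda.synchronize()
        readings.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W
    pynvml.nvmlShutdown()
    return sum(readings) / len(readings)

if __name__ == "__main__":
    n, dev, dt = 8192, "cuda", torch.float16
    variants = {
        "gaussian": torch.randn(n, n, device=dev, dtype=dt),
        "all-zero": torch.zeros(n, n, device=dev, dtype=dt),
        # ~90% of entries zeroed out: same shape, much sparser input.
        "sparse":   torch.randn(n, n, device=dev, dtype=dt)
                    * (torch.rand(n, n, device=dev) < 0.1).to(dt),
        # Low bit entropy: every element identical, so few bit flips per multiply-add.
        "constant": torch.full((n, n), 0.5, device=dev, dtype=dt),
    }
    b = torch.randn(n, n, device=dev, dtype=dt)
    for name, a in variants.items():
        print(f"{name:>9}: {measure_gemm_power(a, b):6.1f} W")
```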
ISBN: (Print) 9798350355543
In a series of related works developing an ensemble consistency testing approach for multiple popular global climate models (GCMs), one test scenario has repeatedly stood out. Why does the use of the Fused Multiply-Add (FMA) operation result in model configurations getting flagged as failures, while changes to compiler choice, optimization level, processor type and number, etc. are passed as expected? This work explores the impacts of FMA on GCM simulation output from a distributional perspective and provides directions for future work to enable model developers and users to use numerical optimization techniques with confidence.
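A small, self-contained illustration of why FMA changes bit-level output (not drawn from the GCM experiments): an FMA computes a*b + c with a single rounding, whereas separate multiply and add round twice. The snippet below emulates a float32 FMA by forming the product and sum in float64 and rounding once, then counts how often that differs from the two-rounding version.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000).astype(np.float32)
b = rng.standard_normal(1_000_000).astype(np.float32)
c = rng.standard_normal(1_000_000).astype(np.float32)

# Two roundings: round(a*b) in float32, then round that sum with c.
two_rounds = (a * b + c).astype(np.float32)

# One rounding (emulated FMA): the float32 product is exact in float64,
# so only the final cast back to float32 rounds.
fused = (a.astype(np.float64) * b.astype(np.float64) + c.astype(np.float64)).astype(np.float32)

diff = fused != two_rounds
print(f"results differ in {diff.mean():.1%} of elements")
print(f"largest absolute difference: {np.abs(fused - two_rounds).max():.3e}")
```

Even these last-bit differences are enough to make an FMA-enabled build of a chaotic climate model diverge from a non-FMA build, which is why the ensemble test treats the two as distinct configurations.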
ISBN: (Print) 9781450384612
The proceedings contain 154 papers. The topics discussed include: opportune moments for the multi-stage notification responding process: a preliminary investigation;occurrence and triggers of mobile news exposure: a screenshot-based ESM study;killing-time detection from smartphone screenshots;TechLifeProbe: a technology prototype to probe how data sharing with adolescents’ parents improves their technology abuse through mobile phones;detecting single-hand riding with integrated accelerometer and gyroscope of smartphone;vision-based scene analysis toward dangerous cycling behavior detection using smartphones;on the predictability of parking preferences;challenges and opportunities in developing systems to assist patients in recovering from gambling addiction;exploring user contexts and needs for context-aware smartphone distraction management;estimating patient independence with sleep sensors;and a participatory workshop to design vibrothermal haptics of a smart fidget surface for remote workers.
ISBN: (Print) 9781728186764
The proceedings contain 6 papers. The topics discussed include: GenMAT: a general-purpose machine learning-driven auto-tuner for heterogeneous platforms;OSCAR parallelizing and power reducing compiler and API for heterogeneous multicores;a python-based high-level programming flow for CPU-FPGA heterogeneous systems;a holistic systems approach to leveraging heterogeneity;survival of the fittest amidst the Cambrian explosion of processor architectures for artificial intelligence;and designing heterogeneous systems: large scale architectural exploration via simulation.
We explore the capabilities of Large Language Models (LLMs) to assist or substitute for devices (i.e., firewalls) and humans (i.e., security experts), respectively, in the detection and analysis of security incidents. We leverage transformer-based technologies, from relatively small to foundational sizes, to address the problem of correctly identifying the attack severity (and, accessorily, identifying and explaining the attack type). We contrast a broad range of LLM techniques (prompting, retrieval augmented generation, and fine-tuning of several models) using state-of-the-art machine learning models as a baseline. Using proprietary data from a commercial deployment, our study provides an unbiased picture of the strengths and weaknesses of LLMs for intrusion detection.
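As a hedged sketch of the two approaches being contrasted, the snippet below pairs a classical machine-learning baseline (TF-IDF features plus a linear classifier) for alert-severity classification with a hypothetical prompt template of the kind used for LLM-based classification. The alert texts, labels, and field names are invented stand-ins; the paper's proprietary data and actual models are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data: (alert text, severity label). Real training data would be
# the deployment's labelled alerts.
alerts = [
    ("repeated failed ssh logins from single external ip", "medium"),
    ("outbound connection to known c2 domain after office hours", "high"),
    ("port scan across /24 subnet from internal host", "medium"),
    ("benign scheduled vulnerability scanner traffic", "low"),
]
texts, labels = zip(*alerts)

# Classical ML baseline: bag-of-words features + linear classifier.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)
print(baseline.predict(["ssh brute force followed by successful login"]))

# LLM prompting path (hypothetical template; the model call itself is omitted).
def severity_prompt(alert_text: str) -> str:
    return (
        "You are a security analyst. Classify the severity of the following "
        "network alert as one of: low, medium, high, critical. Then briefly "
        "explain the likely attack type.\n\n"
        f"Alert: {alert_text}\nSeverity:"
    )

print(severity_prompt("outbound connection to known c2 domain after office hours"))
```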