ISBN: (Print) 9781450390712
This lightning talk describes the effort to expand access to computer science at IDEA Public Schools, with a goal of 33% of students enrolling in computer science before graduation. The primary driver of this effort is a 30-school randomized control trial (RCT) studying the impact of the Advanced Placement (AP) Computer Science Principles curricula from *** and The Beauty and Joy of Computing (BJC). Each school has been assigned, at random, either *** or BJC and will use the assigned curriculum for a period of 3 years while the impact is evaluated. The 2021-2022 school year is the first year of this program. As a means of providing support to teachers and schools, a district-level computer science manager is creating and curating resources, including pacing guides, unit plans, exam reviews, and topic videos. In addition to the provider-facilitated summer trainings, the district provides a 3-day new-teacher content training, quarterly full-day course collaboration sessions, and bi-weekly one-hour support webinars, all focusing on AP Computer Science Principles. Further, four course leaders, current teachers with proven track records of success, have been assigned to facilitate professional development and provide informal coaching to our cohort. The need for schools to competently implement the *** or BJC curriculum as part of the 30-school RCT has been the key factor in deciding the depth of support to provide for the AP CS Principles program.
ISBN: (Print) 9781450370523
Transient cloud servers, such as Amazon Spot instances, Google Preemptible VMs, and Azure Low-priority batch VMs, can reduce cloud computing costs by as much as 10x, but can be unilaterally preempted by the cloud provider. Understanding preemption characteristics (such as frequency) is a key first step in minimizing the effect of preemptions on application performance, availability, and cost. However, little is understood about temporally constrained preemptions, wherein preemptions must occur within a given time window. We study temporally constrained preemptions by conducting a large-scale empirical study of Google's Preemptible VMs (which have a maximum lifetime of 24 hours), developing a new preemption probability model and new model-driven resource management policies, and implementing them in a batch computing service for scientific computing workloads. Our statistical and experimental analysis indicates that temporally constrained preemptions are not uniformly distributed, but are time-dependent and have a bathtub shape. We find that existing memoryless models and policies are not suitable for temporally constrained preemptions. We develop a new probability model for bathtub preemptions, and analyze it through the lens of reliability theory. To highlight the effectiveness of our model, we develop optimized policies for job scheduling and checkpointing. Compared to existing techniques, our model-based policies can reduce the probability of job failure by more than 2x. We also implement our policies as part of a batch computing service for scientific computing applications, which reduces cost by 5x compared to conventional cloud deployments and keeps performance overheads under 3%.
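The bathtub-shaped, time-dependent preemption behavior the abstract describes can be illustrated with a minimal sketch. The hazard rates below are made-up placeholders, not values from the paper; the point is only that a VM's hourly preemption risk is elevated shortly after launch and again near the 24-hour lifetime cap, so survival probability is not what a memoryless (exponential) model would predict.

```python
def bathtub_hazard(age_h: float, lifetime_h: float = 24.0) -> float:
    """Illustrative bathtub-shaped hourly preemption hazard for a
    transient VM: high early ("infant mortality"), low in the stable
    middle region, and high again near the maximum lifetime.
    All rates are invented for illustration."""
    if age_h < 2.0:
        return 0.10          # elevated risk shortly after launch
    if age_h > lifetime_h - 2.0:
        return 0.20          # forced preemptions near the 24 h cap
    return 0.01              # stable middle region

def survival_prob(hours: float, step: float = 1.0) -> float:
    """P(VM is still running after `hours`) under the hazard above,
    computed by stepping through discrete time slices."""
    p = 1.0
    t = 0.0
    while t < hours:
        p *= 1.0 - bathtub_hazard(t) * step
        t += step
    return p
```

A checkpointing policy built on such a model would checkpoint more aggressively in the high-hazard regions and relax in the stable middle, which is where a memoryless model (constant hazard) gives the wrong answer.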
With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem ...
详细信息
The constantly growing number of Internet of Things (IoT) devices and their resource-constrained nature makes them particularly vulnerable and increasingly attractive for exploitation by cyber criminals. Current estim...
详细信息
As the gap between GPU computing capability and that of other components (CPU, PCIe bus, and communication network) widens, it is increasingly challenging to design high-performance parallel algorithms for large CPU-GPU hete...
详细信息
ISBN: (Print) 9781450375146
Knowledge work increasingly spans multiple computing surfaces. Yet in status quo user experiences, content as well as tools, behaviors, and workflows are largely bound to the current device: running the current application, for the current user, and at the current moment in time. SurfaceFleet is a system and toolkit that uses resilient distributed programming techniques to explore cross-device interactions that are unbounded in these four dimensions of device, application, user, and time. As a reference implementation, we describe an interface built using SurfaceFleet that employs lightweight, semi-transparent UI elements known as Applets. Applets appear always on top of the operating system, application windows, and (conceptually) above the device itself. But all connections and synchronized data are virtualized and made resilient through the cloud. For example, a sharing Applet known as a Portfolio allows a user to drag and drop unbound Interaction Promises into a document. Such promises can then be fulfilled with content asynchronously, at a later time (or multiple times), from another device, and by the same or a different user.
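The "Interaction Promise" idea, a placeholder dropped into a document now and fulfilled with content later, possibly by another device or user, can be sketched with a future-based abstraction. The class name follows the paper's terminology, but the API below is purely illustrative; in SurfaceFleet the fulfillment would arrive through cloud-synchronized state rather than a local callback.

```python
import asyncio

class InteractionPromise:
    """Illustrative sketch of a SurfaceFleet-style Interaction Promise:
    a named placeholder that is fulfilled with content asynchronously,
    potentially from a different device or user. API is hypothetical."""

    def __init__(self, label: str):
        self.label = label
        self._future = asyncio.get_running_loop().create_future()

    def fulfill(self, content, fulfilled_by=None):
        # In the real system this would be triggered by cloud sync.
        if not self._future.done():
            self._future.set_result((content, fulfilled_by))

    async def wait(self):
        """Block until some device/user fulfills the promise."""
        return await self._future

async def demo():
    promise = InteractionPromise("drop target in document")
    # Simulate another device fulfilling the promise a moment later.
    asyncio.get_running_loop().call_later(
        0.01, promise.fulfill, "chart.png", "second device")
    content, by = await promise.wait()
    return content, by

result = asyncio.run(demo())
```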
ISBN: (Print) 9781665433266
When using parallel computing to run large-scale simulations, the parts of the system simulated in different cores or threads often interact and exchange information, constraining the threads to be synchronized. When simulating wireless networks with mobility, if a user equipment (UE) stops being served by one Base Station (BS) and is handed over to a new one, a synchronization point may be required if the new BS is simulated in another thread. In a large-scale distributed wireless network with high mobility, the simulation speed-up obtained from multi-threading could be lost to the overhead of synchronizing the threads. We propose a heuristic approach that assigns BSs to threads so as to minimize the number of synchronization points. Within a time interval of the simulation, accumulated interactions are interpreted as growing graphs. Advancing through the simulation time until the number of disconnected graphs equals the number of desired threads proved to be a good strategy for determining the longest intervals that can be simulated without synchronization points while still taking advantage of multi-threading. By means of simulation tests, we show reductions of up to 100.0% in the number of synchronization points compared to those required for the same simulation times when BSs are assigned to threads in a random and balanced way.
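The growing-graph heuristic above can be sketched with a union-find structure: merge BSs as their interactions accumulate over time, and stop just before the number of connected components would drop below the desired thread count. The event format and function names are illustrative assumptions, not taken from the paper.

```python
class DSU:
    """Union-find over BS indices, tracking connected components."""

    def __init__(self, n: int):
        self.parent = list(range(n))

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a: int, b: int):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def components(self) -> int:
        return len({self.find(i) for i in range(len(self.parent))})

def longest_sync_free_interval(num_bs, events, num_threads):
    """Advance through time-stamped interaction events (t, bs_a, bs_b),
    merging interacting BSs, and stop at the first event that would
    reduce the component count below `num_threads`. Returns the end
    time of the sync-free interval (None if never reached) and a
    per-BS component label usable as a thread assignment."""
    dsu = DSU(num_bs)
    end_t = None
    for t, a, b in sorted(events):
        if dsu.find(a) != dsu.find(b) and dsu.components() - 1 < num_threads:
            end_t = t  # merging here would leave too few components
            break
        dsu.union(a, b)
    assignment = [dsu.find(i) for i in range(num_bs)]
    return end_t, assignment
```

For example, with 4 BSs, events `[(1, 0, 1), (2, 2, 3), (3, 0, 2)]`, and 2 desired threads, the first two interactions merge {0, 1} and {2, 3}; the event at t=3 would collapse everything into one component, so the interval ends there and the two components become the two threads.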