ISBN (print): 9781450392839
This paper contains the research proposal of Juan Antonio De Rus that was presented at the MMSys 2022 doctoral symposium. The use of virtual reality (VR) is growing every year. With the normalization of remote work, it is to be expected that the use of immersive virtual environments to support tasks such as online meetings, education, etc., will grow even more. VR environments typically include multimodal content formats (synthetic content, video, audio, text) and even multi-sensory stimuli to provide an enriched user experience. In this context, affective computing (AC) techniques assisted by artificial intelligence (AI) become a powerful means to determine the user's perceived Quality of Experience (QoE). In the field of AC, we investigate a variety of tools to obtain accurate emotional analysis by applying AI techniques to physiological data. In this doctoral study we have formulated a set of open research questions and objectives on which we plan to generate valuable contributions and knowledge in the fields of AC, spatial audio, and multimodal interactive virtual environments. One objective is the creation of tools to automatically evaluate QoE, even in real time, which can provide valuable benefits to both service providers and consumers. For data acquisition we use sensors of different quality grades to study the scalability, reliability, and replicability of our solutions, as clinical-grade sensors are not always within the reach of the average user.
ISBN (print): 9781450368667
Traditionally, reducing complexity in Machine Learning promises benefits such as less overfitting. However, complexity control in Genetic Programming (GP) often means reducing the sizes of the evolving expressions, and past literature shows that size reduction does not necessarily reduce overfitting. In fact, whether size consistently represents complexity is itself debatable. Therefore, this paper proposes the evaluation time of an evolving model - the computational time required to evaluate a model on data - as the estimate of its complexity. Evaluation time depends upon the size, but crucially also on the composition of an evolving model, and can thus distil its underlying complexity. To discourage complexity, this paper takes an innovative approach that asynchronously evaluates multiple models concurrently. These models race to their completion; thus, the models that finish earlier join the population earlier to breed further in a steady-state fashion. The computationally simpler models, even if less accurate, therefore get further chances to evolve before the more accurate yet expensive models join the population. Crucially, since evaluation times vary from one execution to another, this paper also shows how to significantly minimise this variation. The paper compares the proposed method on six challenging symbolic regression problems with both standard GP and GP with an effective bloat control method. The results demonstrate that the proposed asynchronous parallel GP (APGP) indeed produces individuals that are smaller, faster and more accurate than those in standard GP. While GP with bloat control (GP+BC) produced smaller individuals, it did so at the cost of lower accuracy than APGP on both training and test data, thus questioning the overall benefits of bloat control. Also, while APGP took the fewest evaluations to match the training accuracy of GP, GP+BC took the most. These results, and the portability of evaluation time as an estimate of complexity, encourage f
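The racing mechanism the abstract describes can be sketched with Python's standard concurrency tools. This is a hypothetical illustration, not the paper's implementation: the three "models" and their evaluation costs are invented, and sleeping stands in for evaluating a model on data. It shows how asynchronous completion order lets cheaper models re-enter a steady-state population before expensive ones.

```python
import concurrent.futures as cf
import time

def evaluate(model):
    """Pretend evaluation whose wall-clock cost reflects model complexity."""
    time.sleep(model["eval_cost"])          # stand-in for evaluating on data
    return model["name"]

# Invented models: evaluation cost grows with (hypothetical) complexity.
models = [
    {"name": "big",    "eval_cost": 0.30},
    {"name": "medium", "eval_cost": 0.15},
    {"name": "small",  "eval_cost": 0.05},
]

arrival_order = []
with cf.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(evaluate, m) for m in models]
    for fut in cf.as_completed(futures):    # asynchronous completion order
        arrival_order.append(fut.result())  # cheaper models "breed" first

print(arrival_order)  # → ['small', 'medium', 'big']
```

In a real steady-state GP loop, each arrival would immediately be inserted into the population and become eligible for selection, which is what gives simpler models their head start.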
ISBN (print): 9781450390712
The 2019 ABET computer science criteria require that all computing students learn parallel and distributed computing (PDC) as undergraduates, and CS2013 recommends at least fifteen hours of PDC in the undergraduate curriculum. Consequently, many educators are looking for easy ways to integrate PDC into courses at their institutions. This hands-on workshop introduces Message Passing Interface (MPI) basics in Python or C/C++ using clusters of Raspberry Pi single-board computers. MPI is a multi-language, platform-independent, industry-standard library for PDC. Raspberry Pis are an inexpensive and engaging hardware platform for studying PDC as early as CS1. Participants will experience how to teach distributed computing essentials with MPI by means of reusable, effective "parallel patterns," including single program multiple data (SPMD) execution, send-receive message passing, and parallel loop patterns. No prior experience with MPI, PDC, or the Raspberry Pi is expected; participants will explore short programs designed to help students understand MPI basics, plus longer "exemplar" programs that use MPI to solve significant applied problems. The workshop includes: (i) personal experience with the Raspberry Pi (clusters provided); (ii) instructions on how to deploy Raspberry Pi clusters quickly in the classroom; (iii) self-paced hands-on experimentation with MPI programs; and (iv) a discussion of how to use Raspberry Pi clusters to align courses with CS2013 and ABET. All materials from this workshop are available from ***; participants should bring a laptop to access materials and connect to the Raspberry Pi clusters.
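The "parallel loop" pattern the workshop names can be illustrated without a cluster. In a minimal sketch, assuming the common block decomposition taught with MPI (this is a generic illustration, not the workshop's actual materials), each process (rank) computes only its own contiguous slice of the loop iterations; in real mpi4py code, `rank` and `num_procs` would come from `MPI.COMM_WORLD`.

```python
def block_range(rank, num_procs, n):
    """Half-open [start, stop) block of iterations 0..n owned by this rank.

    Early ranks absorb the remainder when n does not divide evenly,
    so block sizes differ by at most one.
    """
    base, rem = divmod(n, num_procs)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop

# 10 loop iterations split over a hypothetical 4-node Raspberry Pi cluster:
slices = [block_range(r, 4, 10) for r in range(4)]
print(slices)  # → [(0, 3), (3, 6), (6, 8), (8, 10)]
```

The slices cover every iteration exactly once, which is the invariant students verify when moving from a sequential loop to its SPMD equivalent.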
As neural networks (NNs) are being widely deployed in many cloud-oriented systems for safety-critical tasks, the privacy and security of NNs become significant concerns to users in the cloud platform that shares the c...
ISBN (print): 9781450380621
In January 2004, we organized the second SIGCSE Committee ("Expanding the Women-in-Computing Community"). The SIGCSE Board approved the charter because the underrepresentation of women in computing is an international problem and an embarrassment for our profession. A BOF, advertised in the SIGCSE program, creates a large audience for disseminating information about successful projects on gender issues and provides time for discussion and brainstorming. We select projects to highlight through listserv communication and through our connections with ACM-W, CRA-W, NCWIT, etc. The forum provides an important annual meeting for a large group of people who work to increase the representation of women in computing in their separate organizations and who do not customarily have an opportunity to share ideas face-to-face.
ISBN (print): 9781450370479
Modern hardware is becoming increasingly susceptible to silent data corruptions. As general methods for detection and recovery from errors are time- and energy-consuming, selective detection and recovery are promising alternatives for applications that have the freedom to produce results with a variable level of accuracy. Several programming languages have provided specialized constructs for expressing detection and recovery operations, but the existing static analyses of safety and quantitative analyses of programs do not properly support such language constructs. This work presents Aloe, a quantitative static analysis of the reliability of programs with recovery blocks - a construct that checks for errors and, if necessary, applies the corresponding recovery strategy. The analysis supports reasoning about both reliable and potentially unreliable detection and recovery mechanisms. It implements a novel precondition generator for recovery blocks, built on top of Rely, a state-of-the-art quantitative reliability analysis for imperative programs. Aloe can reason about programs with scalar and array expressions, if-then-else conditionals, and bounded loops without early exits. The analyzed computation is idempotent and the recovery code re-executes the original computation. We implemented Aloe and applied it to a set of eight programs previously used in approximate computing research. Our results show significantly higher reliability, and the analysis scales better than the existing Rely analysis. Moreover, the end-to-end accuracy of the verified computations exhibits only small accuracy losses.
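The recovery-block construct the abstract analyzes can be sketched as follows. This is a hypothetical simulation, not Aloe's input language: `unreliable_square`, the bit-flip fault, and the failure probability are all invented to illustrate the pattern of an idempotent computation, a detection check, and re-execution of the original computation as the recovery strategy.

```python
import random

def unreliable_square(x, fail_p, rng):
    """Idempotent computation that may silently corrupt its result."""
    y = x * x
    if rng.random() < fail_p:
        y ^= 1 << 4                       # simulated silent bit flip
    return y

def recovery_block(x, fail_p, rng, retries=3):
    """Detect a wrong result and re-execute; idempotence makes this safe."""
    for _ in range(retries):
        y = unreliable_square(x, fail_p, rng)
        if y == x * x:                    # detection (assumed reliable here)
            return y
    return x * x                          # last-resort reliable re-execution

rng = random.Random(0)
results = [recovery_block(7, 0.2, rng) for _ in range(100)]
print(all(r == 49 for r in results))  # → True
```

Aloe's contribution is to bound, statically, the probability that such a block yields a correct result when the detection check itself may also be unreliable; the sketch above assumes a perfectly reliable check.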
High-Level Synthesis (HLS) of approximate computing circuits generates circuits based on functional units such as adders and multipliers. An approximate library containing approximate functional units is firstly built...
ISBN (print): 9781450351911
The proceedings contain 307 papers. The topics discussed include: discovering, selecting and exploiting feature sequence records of study participants for the classification of epidemiological data on hepatic steatosis; a parallel framework for HCC detection in DCE-MRI sequences with wavelet-based description and SVM classification; an adaptive neuro-fuzzy inference system for improving data quality in disease registries; deep metric learning for multi-labelled radiographs; a TF-IDF based topic model for identifying lncRNAs from genomic background; a multi-criteria decision making approach for predicting cancer cell sensitivity to drugs; interpretable CNV-based tumour classification using fuzzy rule based classifiers; student research abstract: estimating the impacts of HIV intervention programs among females who sell sex in Nigeria: a modelling approach; student research abstract: simulation study of transient receptor potential current in urinary bladder overactivity; ordering stakeholder viewpoint concerns for holistic enterprise architecture (the W6H framework); using enterprise architecture model analysis and description logics for maturity assessment; conceptual modeling of inter-dependencies between processes and data; transparent IO access control for application-level tenant isolation; Gelly-scheduling: distributed graph processing for service placement in community networks; and bounding the cost of virtual machine migrations for resource allocation in cloud data centers.
Spin-transfer torque random access memory (STT-RAM) is gaining momentum as a promising technology for high density and embedded nonvolatile memory. To enable the design of STT-RAM circuits for memory and computing, th...
ISBN (print): 9781728142074
Quantization is a widely used technique for deploying deep learning models on embedded systems, since it can reduce model size and computation dramatically. Many quantization approaches have been proposed in recent years. Some approaches are aggressive and reduce model size and computation substantially, but the accuracy can decrease significantly. To address this issue, some research groups have proposed smoother approaches that reduce the accuracy loss; however, smoother approaches use many more resources than aggressive ones. In our work, we propose a quantization approach that reduces resource utilization dramatically without losing much accuracy. We have successfully applied our quantization approach to a reservoir computing (RC) system. Compared to the RC system using floating-point numbers, our proposed RC system reduces the resource utilization of BRAM, DSP, Flip-Flops (FF) and Lookup Tables (LUT) by 47%, 93%, 93%, and 87%, respectively, while losing only 0.08% accuracy on the NARMA10 dataset. Meanwhile, our proposed RC system uses approximately 45%, 14%, and 21% less BRAM, FF, and LUT, respectively, than a quantized RC system using another popular quantization approach.
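The general trade-off the abstract describes - fewer bits mean less hardware but more error - can be illustrated with a minimal uniform quantization sketch. This is a generic textbook scheme, not the paper's actual method; the bit width, scale choice, and example weights are all assumptions for illustration.

```python
def quantize(values, bits=8):
    """Uniform symmetric quantization to signed `bits`-bit integers.

    A single scale maps the largest magnitude to the top of the
    integer range; every value is then rounded to the nearest step.
    """
    qmax = (1 << (bits - 1)) - 1           # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to approximate floats."""
    return [qi * scale for qi in q]

weights = [0.50, -0.25, 0.10, -1.00]       # invented example weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
print(q)  # → [64, -32, 13, -127]
```

On an FPGA, the payoff is that the multiply-accumulate datapath then operates on narrow integers instead of floating point, which is what drives the DSP, FF, and LUT savings the abstract reports.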