ISBN (Print): 9783319208039
The proceedings contain 70 papers. The special focus in this conference is on developing intelligent environments, natural interaction, and the development of distributed, ambient and pervasive interactions. The topics include: visualizing human-environment interactions; integrating concepts and techniques from HCI, human factors and media psychology; using the GQM method to evaluate calmness in ubiquitous applications; distributable interface design for web applications; enabling programmability of smart learning environments by teachers; unlocking hidden talents through sharing and making; a framework for navigating human behavior through gameful digital rhetoric; evaluating ubiquitous computing environments using 3D simulation; the transformative potential of making in teacher education; developing and evaluating two gestural-based virtual environment navigation methods for large displays; immersing users in landscapes using large scale displays in public spaces; a gesture recognition method for proximity-sensing surfaces in smart environments; developing intuitive gestures for spatial interaction with large public displays; AR coloring jigsaw puzzles with texture extraction and auto-UV mapping algorithm; smart kiosk with gait-based continuous authentication; gesture-based configuration of location information in smart environments with visual feedback; auditory browsing interface of ambient and parallel sound expression for supporting one-to-many communication; immersiveness of ubiquitous computing environments prototypes; employing virtual humans for interaction, assistance and information provision in ambient intelligence environments; and a spatial interaction design in a sensible space for connecting family.
ISBN (Print): 9783319158952; 9783319158945
Collective Adaptive Systems support the interaction and adaptation of virtual and physical entities towards achieving common objectives. For these systems, several challenges arise at the modeling, provisioning, and execution phases. In this position paper, we define the necessary underpinning concepts and identify requirements for ensuring high availability in such systems. More specifically, based on a scenario from the EU Project ALLOW Ensembles, we identify the necessary requirements and derive an architectural approach that aims to ensure high availability by combining active workflow replication, service selection, and dynamic compensation techniques.
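A minimal sketch of how these three mechanisms could be combined for a single task, assuming hypothetical service objects with a quality score, a run method, and a compensating action on the task; this illustrates the general idea, not the architecture proposed in the paper:

    import concurrent.futures

    def execute_with_high_availability(task, candidate_services, replicas=2):
        # Service selection: rank candidate services by an assumed quality score
        # and pick the best ones as active replicas.
        selected = sorted(candidate_services, key=lambda s: s.score, reverse=True)[:replicas]
        # Active replication: run the task on all selected services concurrently
        # and return the first successful result.
        with concurrent.futures.ThreadPoolExecutor(max_workers=replicas) as pool:
            futures = [pool.submit(s.run, task) for s in selected]
            for fut in concurrent.futures.as_completed(futures):
                try:
                    return fut.result()
                except Exception:
                    continue  # this replica failed, wait for the others
        # Dynamic compensation: if every replica failed, undo partial effects.
        return task.compensate()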
ISBN (Print): 9781467372206
Hyperspectral remote sensing is one of the frontier techniques in remote sensing research. Applying sparse coding models to hyperspectral remote sensing image processing is a hot topic in hyperspectral information processing. To improve the accuracy of hyperspectral image classification, we propose a classification method based on spatial-spectral joint contextual sparse coding. First, a dictionary is trained using samples selected from the ground-truth reference data. Then, the sparse coefficients of each pixel are calculated based on the learned dictionary. Finally, the sparse coefficients are input to the classifier to obtain the classification result. The visible and near-infrared hyperspectral remote sensing image collected by Tiangong-1 over the Chaoyang District of Beijing is used to evaluate the performance of the proposed approach. Experimental results show that the proposed method yields the best classification performance, with an overall accuracy of 95.74% and a Kappa coefficient of 0.9476, in comparison with other classification methods.
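As a hedged illustration of this pipeline (not the authors' implementation), the sketch below uses scikit-learn's dictionary learning and sparse coding, with a simple neighborhood-mean feature standing in for the joint spatial-spectral context; the atom count, window size, and sparsity level are assumptions:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode
    from sklearn.svm import SVC

    def classify_hyperspectral(cube, train_mask, train_labels, n_atoms=64, win=3):
        # cube: (height, width, bands) array; train_mask marks labeled pixels.
        h, w, bands = cube.shape
        pad = win // 2
        padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
        # Spatial-spectral feature: each pixel's spectrum concatenated with the
        # mean spectrum of its win x win neighborhood (a stand-in for the joint context).
        feats = np.zeros((h * w, 2 * bands))
        for i in range(h):
            for j in range(w):
                neigh = padded[i:i + win, j:j + win].reshape(-1, bands)
                feats[i * w + j] = np.hstack([cube[i, j], neigh.mean(axis=0)])
        train_idx = train_mask.reshape(-1)
        # Learn a dictionary from the labeled pixels, then sparse-code every pixel.
        dico = DictionaryLearning(n_components=n_atoms).fit(feats[train_idx])
        codes = sparse_encode(feats, dico.components_, algorithm="omp", n_nonzero_coefs=5)
        # The sparse coefficients feed a standard classifier.
        clf = SVC().fit(codes[train_idx], train_labels)
        return clf.predict(codes).reshape(h, w)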
We explore four local learning versions of regularization networks. While global learning algorithms create a global model for all testing points, the local learning algorithms use neighborhoods to learn local paramet...
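Although the full abstract is not available here, the local-learning idea it describes can be sketched under the assumption that a regularization network is approximated by kernel ridge regression with an RBF kernel; the neighborhood size and regularization parameters below are illustrative:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.neighbors import NearestNeighbors

    def local_predict(X_train, y_train, X_test, k=30, gamma=0.5, alpha=1e-2):
        # Global learning would fit one model on all of X_train; the local
        # variant fits a small model on each test point's k nearest neighbors.
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        preds = np.empty(len(X_test))
        for i, x in enumerate(X_test):
            _, idx = nn.kneighbors(x.reshape(1, -1))
            local = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
            local.fit(X_train[idx[0]], y_train[idx[0]])
            preds[i] = local.predict(x.reshape(1, -1))[0]
        return preds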
ISBN (Print): 9781450334143
The proceedings contain 41 papers. The topics discussed include: technological singularities; profile resolution across multilayer networks through smartphone camera fingerprint; an empirical method for discovering tax fraudsters: a real case study of Brazilian fiscal evasion; utilizing a NoSQL data store for scalable log analysis; big data techniques for supporting accurate predictions of energy production from renewable sources; a security-assured accuracy-maximized privacy preserving collaborative filtering recommendation algorithm; towards archiving-as-a-service: a distributed index for the cost-effective access to replicated multi-version data; automatic vs. crowdsourced sentiment analysis; asymptotic-efficient algorithms for skyline query processing over uncertain contexts; shortest average-distance query on heterogeneous neighboring objects; and a data mining-based blocks placement optimization for distributed data warehouses.
ISBN (Print): 9781450337243
Model calibration is a major challenge faced by the plethora of statistical analytics packages that are increasingly used in Big Data applications. Identifying the optimal model parameters is a time-consuming process that has to be executed from scratch for every dataset/model combination, even by experienced data scientists. We argue that the incapacity to evaluate multiple parameter configurations simultaneously and the lack of support to quickly identify sub-optimal configurations are the principal causes. In this paper, we develop two database-inspired techniques for efficient model calibration. Speculative parameter testing applies advanced parallel multi-query processing methods to evaluate several configurations concurrently. Online aggregation is applied to identify sub-optimal configurations early in the processing by incrementally sampling the training dataset and estimating the objective function corresponding to each configuration. We design concurrent online aggregation estimators and define halting conditions to stop the execution accurately and in a timely manner. We apply the proposed techniques to distributed gradient descent optimization - batch and incremental - for support vector machines and logistic regression models. We implement the resulting solutions in GLADE PF-OLA - a state-of-the-art Big Data analytics system - and evaluate their performance over terascale-size synthetic and real datasets. The results confirm that as many as 32 configurations can be evaluated concurrently almost as fast as one, while sub-optimal configurations are detected accurately in as little as 1/20th of the time. Copyright 2015 ACM.
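A minimal sketch of the combined idea, assuming scikit-learn's SGDClassifier (a linear SVM trained incrementally) as the learner; the candidate set, chunk size, and the slack-based elimination rule are illustrative assumptions, not the paper's estimators or halting conditions:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def speculative_calibration(X, y, alphas, chunk=10_000, slack=1.1):
        # One incremental model per candidate regularization value (configuration).
        models = {a: SGDClassifier(alpha=a) for a in alphas}
        active = set(alphas)
        classes = np.unique(y)
        losses = {}
        for start in range(0, len(X), chunk):
            Xc, yc = X[start:start + chunk], y[start:start + chunk]
            for a in list(active):
                # Incremental update, analogous to sampling more of the training set.
                models[a].partial_fit(Xc, yc, classes=classes)
                losses[a] = 1.0 - models[a].score(Xc, yc)  # cheap running error estimate
            best = min(losses[a] for a in active)
            # Early elimination of clearly sub-optimal configurations.
            active = {a for a in active if losses[a] <= slack * best}
        return min(active, key=lambda a: losses[a])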
ISBN (Print): 9781450337236
In-situ analytics has lately been shown to be an effective approach to reduce both I/O and storage costs for scientific analytics. Developing an efficient in-situ implementation, however, involves many challenges, including parallelization, data movement or sharing, and resource allocation. Based on the premise that MapReduce can be an appropriate API for specifying scientific analytics applications, we present a novel MapReduce-like framework that supports efficient in-situ scientific analytics and address several challenges that arise in applying the MapReduce idea to in-situ processing. Specifically, our implementation can load simulated data directly from distributed memory, and it uses a modified API that helps meet the strict memory constraints of in-situ analytics. The framework is designed so that analytics can be launched from the parallel code region of a simulation program. We have developed both time-sharing and space-sharing modes for maximizing performance in different scenarios, with the former even avoiding any copying of data from the simulation to the analytics program. We demonstrate the functionality, efficiency, and scalability of our system by using different simulation and analytics programs, executed on clusters with multi-core and many-core nodes.
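The programming model can be illustrated with a toy in-memory map/reduce step over simulation chunks; this is a sketch of the idea, not the authors' framework or its modified API:

    from collections import defaultdict

    def in_situ_map_reduce(simulation_chunks, map_fn, reduce_fn):
        # The chunks are the simulation's own in-memory arrays; nothing is
        # written to disk before the analytics step runs.
        groups = defaultdict(list)
        for chunk in simulation_chunks:
            for key, value in map_fn(chunk):       # map emits (key, value) pairs
                groups[key].append(value)
        return {k: reduce_fn(vals) for k, vals in groups.items()}

    # Toy analytics launched from inside the simulation loop (time-sharing style):
    # a coarse histogram of field values across the locally held chunks.
    def histogram_map(chunk):
        for value in chunk:
            yield round(value, 1), 1

    result = in_situ_map_reduce([[0.11, 0.12, 0.53], [0.49, 0.52]], histogram_map, sum)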
ISBN (Print): 9783319271613; 9783319271606
To address the problem of energy- and time-efficient execution of spatial queries in directional sensor networks, an efficient hybrid spatial query processing algorithm called SQPDSN is proposed in this paper. In the majority of studies on query processing in wireless sensor networks, sensors are assumed to have an isotropic sensing and transmission model. However, in certain applications the sensors have a directional sensing and transmission model. SQPDSN requires each node within the query region to send a data message only once, which reduces the number of data messages. To achieve minimal energy consumption and minimal response time, our query processing model ensures that only the nodes relevant to the correct execution of a query are involved in the query execution. Each sector has a node that collects the sensory data within it, aggregates the data to derive a partial query result, and sends it to the next sector. Compared with other techniques, the experimental results demonstrate that the proposed technique achieves energy-efficient query coverage with lower communication cost.
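A hedged sketch of this sector-by-sector flow: within each sector, one collector node aggregates the readings of the relevant nodes into a partial result and forwards it to the next sector. The data layout and the aggregate (a running average) are illustrative, not the SQPDSN protocol itself:

    def process_spatial_query(sectors, in_query_region):
        # `sectors` is an ordered list of sectors along the forwarding path;
        # each sector is a list of node dicts with a "reading" field (assumed layout).
        partial_sum, partial_count = 0.0, 0        # partial result carried between sectors
        for sector in sectors:
            relevant = [n for n in sector if in_query_region(n)]
            if not relevant:
                continue                           # irrelevant sectors stay out of the query
            # Each relevant node sends its reading exactly once to the sector's collector.
            partial_sum += sum(n["reading"] for n in relevant)
            partial_count += len(relevant)
        return partial_sum / partial_count if partial_count else None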
ISBN (Print): 9783319201191; 9783319201184
Improving the energy efficiency of software running in a data center is a challenging task. Several application-specific techniques, such as energy-aware heuristics, controlled approximation and energy-conserving I/O, have been proposed to tackle this problem. In this paper, we introduce data sparsing with artifacts, a novel approach to increase the energy efficiency of applications that are robust to input variations, such as speech and image processing. Data sparsing with artifacts aims to reduce the processing time, and thus the energy consumption, of such applications while preserving the quality of the results by replacing a random subset of the original data with application-specific artifacts. In contrast to previous work, the proposed approach introduces artifacts at the data layer, without application-layer modifications and with general-purpose hardware. Data sparsing with artifacts has been integrated into a prototypical Filesystem in Userspace (FUSE) and the Hadoop Distributed File System (HDFS). Experiments with MapReduce-based face detection, face recognition and speech recognition algorithms show promising energy savings of up to 10% with moderate accuracy losses for different data sparsing rates and artifacts.
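The core mechanism can be illustrated with a short sketch: a random fraction of the input records is replaced by a cheap application-specific artifact (here a constant silent frame), so less real data is processed downstream. The sparsing rate and the artifact itself are illustrative assumptions, not values from the paper:

    import random

    def sparse_with_artifacts(records, artifact, rate=0.2, seed=42):
        rng = random.Random(seed)
        # Replace roughly `rate` of the records with the artifact.
        return [artifact if rng.random() < rate else r for r in records]

    frames = [b"\x01" * 320 for _ in range(100)]   # stand-in for real audio frames
    silence = b"\x00" * 320                        # artifact: one silent frame
    sparsed = sparse_with_artifacts(frames, silence, rate=0.1)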
ISBN (Print): 9781450332866
Key grouping is a technique used by stream processing frameworks to simplify the development of parallel stateful operators. Through key grouping, a stream of tuples is partitioned into several disjoint sub-streams depending on the values contained in the tuples themselves. The operator instance that is the target of a sub-stream is guaranteed to receive all the tuples containing a specific key value. A common solution to implement key grouping is through hash functions that, however, are known to cause load imbalances on the target operator instances when the input data stream is characterized by a skewed value distribution. In this paper we present DKG, a novel approach to key grouping that provides near-optimal load distribution for input streams with skewed value distribution. DKG starts from the simple observation that with such inputs the load balance is strongly driven by the most frequent values; it identifies such values and explicitly maps them to sub-streams together with groups of less frequent items to achieve a near-optimal load balance. We provide theoretical approximation bounds for the quality of the mapping derived by DKG and show, through both simulations and a running prototype, its impact on stream processing applications. Copyright 2015 ACM.
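A simplified sketch of this style of key grouping (not the actual DKG algorithm or its approximation guarantees): the most frequent keys, estimated from a sample, are placed greedily on the least-loaded operator instance, while all remaining keys fall back to plain hashing. The sample and the number of heavy keys are assumptions:

    from collections import Counter

    def build_mapping(sample_keys, n_instances, n_heavy=10):
        freqs = Counter(sample_keys)
        heavy = [k for k, _ in freqs.most_common(n_heavy)]
        load = [0] * n_instances
        explicit = {}
        # Greedy assignment: put each frequent key on the least-loaded instance.
        for k in heavy:
            target = min(range(n_instances), key=lambda i: load[i])
            explicit[k] = target
            load[target] += freqs[k]
        return explicit

    def route(key, explicit, n_instances):
        # Frequent keys use the explicit mapping; the long tail is hashed.
        return explicit.get(key, hash(key) % n_instances)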