The paper presents an approach to the design and implementation of web-based environments for practical exercises in parallel and distributed computing (PDC). The presented approach incurs minimal development and operational costs by relying on Everest, a general-purpose platform for building computational web services. The flexibility of the proposed service-oriented architecture enables the development of different types of services targeting various use cases and PDC topics. The generic execution services support the execution of different types of parallel and distributed programs on the corresponding computing systems, while the assignment evaluation services implement the execution and evaluation of solutions to programming assignments. As demonstrated by teaching two introductory PDC courses, the presented approach helps to enhance students' practical experience while avoiding low-level interfaces, reducing grading time and providing the level of automation necessary for scaling a course to a large number of students. In contrast to other efforts, the exploited Platform as a Service model enables other PDC educators to quickly reuse this approach without installing the Everest platform. (C) 2018 Elsevier Inc. All rights reserved.
We consider the scheduling of a real-time application that is modeled as a collection of parallel and recurrent tasks on a multicore platform. Each task is a directed acyclic graph (DAG) having a set of subtasks (i.e., nodes) with precedence constraints (i.e., directed edges) and must complete the execution of all its subtasks by some specified deadline. Each task generates a potentially infinite number of instances, where the releases of consecutive instances are separated by some minimum inter-arrival time. Each DAG task and each subtask of that DAG task is assigned a fixed priority. A two-level preemptive global fixed-priority (GFP) scheduling policy is proposed: a task-level scheduler first determines the highest-priority ready task, and a subtask-level scheduler then selects its highest-priority subtask for execution. To our knowledge, no earlier work considers a two-level GFP scheduler for scheduling recurrent DAG tasks on a multicore platform. We derive a schedulability test for the proposed two-level GFP scheduler. If this test is satisfied, then it is guaranteed that all the tasks will meet their deadlines under GFP. We show that our proposed test is not only theoretically better but also performs much better empirically than the state-of-the-art test in scheduling randomly generated parallel DAG task sets.
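The two-level dispatch rule described in this abstract can be sketched as follows. All names (`Task`, `Subtask`, `pick_next`) are hypothetical, and lower numeric values are taken here to mean higher priority; this is an illustrative convention, not necessarily the paper's.

```python
# Sketch of one dispatch step of a two-level global fixed-priority
# (GFP) scheduler: pick the highest-priority task that has a ready
# subtask, then that task's highest-priority ready subtask.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    priority: int                     # lower value = higher priority
    done: bool = False
    preds: list = field(default_factory=list)  # precedence constraints

    def ready(self):
        # A subtask is ready once all its DAG predecessors finished.
        return not self.done and all(p.done for p in self.preds)

@dataclass
class Task:
    priority: int
    subtasks: list

    def ready_subtasks(self):
        return [s for s in self.subtasks if s.ready()]

def pick_next(tasks):
    """Task-level scheduler selects the highest-priority ready task;
    subtask-level scheduler selects its highest-priority ready subtask."""
    ready_tasks = [t for t in tasks if t.ready_subtasks()]
    if not ready_tasks:
        return None
    task = min(ready_tasks, key=lambda t: t.priority)
    return min(task.ready_subtasks(), key=lambda s: s.priority)
```

Preemption in the full policy would simply mean re-running this selection whenever a subtask completes or a new task instance is released.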
ISBN (print): 9781728140698
The feature selection effect directly affects the classification accuracy of text. This paper introduces a new text feature selection method based on bat optimization. The method uses a traditional feature selection technique to pre-select the original features, then uses the bat algorithm to optimize the pre-selected features in binary-coded form, with classification accuracy as the individual fitness. However, when the amount of text is large, single-machine execution time is long. To address this shortcoming, the bat algorithm is combined with the Spark parallel computing framework, and the text feature selection algorithm SBATFS is proposed. The algorithm couples the good search performance of the bat algorithm with the efficiency of distributed computation to solve the text feature selection optimization model efficiently. The results show that, compared with traditional feature selection methods, classification accuracy is effectively improved after feature optimization with SBATFS.
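The binary encoding and fitness evaluation described above can be sketched as follows. This is a minimal single-machine illustration, not SBATFS itself: the classifier is passed in as a stand-in callback, the bit-flip move is one common binarisation of a bat/velocity update, and the Spark parallelisation of fitness evaluations is not shown.

```python
# Sketch: a feature subset is a binary mask; fitness is the accuracy
# of a classifier trained on the selected columns only.
import random

def fitness(mask, X, y, accuracy_fn):
    """Accuracy of a classifier using only the features selected by mask.
    accuracy_fn is a placeholder for training + evaluating a classifier."""
    selected = [i for i, bit in enumerate(mask) if bit]
    if not selected:
        return 0.0  # selecting nothing is worthless
    X_sel = [[row[i] for i in selected] for row in X]
    return accuracy_fn(X_sel, y)

def flip_step(mask, rate=0.1, rng=random):
    """One candidate 'move': flip each bit with a small probability
    (a common way to binarise a continuous velocity update)."""
    return [bit ^ (rng.random() < rate) for bit in mask]
```

In a Spark version, the expensive `fitness` calls for the whole bat population would be mapped over the cluster in parallel each iteration.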
This paper presents VelocityOLAP (VOLAP), a distributed real-time OLAP system for high-velocity data. VOLAP makes use of dimension hierarchies, is highly scalable, exploits both multi-core and multi-processor parallelism, and can guarantee serializable execution of insert and query operations. In contrast to other high-performance OLAP systems such as SAP HANA or IBM Netezza that rely on vertical scaling or special-purpose hardware, VOLAP supports cost-efficient horizontal scaling on commodity hardware or modest cloud instances. Experiments on 20 Amazon EC2 nodes with TPC-DS data show that VOLAP can bulk-ingest data at over 600 thousand items per second, and process streams of interspersed insertions and aggregate queries at a rate of approximately 50 thousand insertions and 20 thousand aggregate queries per second with a database of 1 billion items. VOLAP is designed to support applications that perform large aggregate queries, and provides similarly high performance for aggregations ranging from a few items to nearly the entire database.
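The dimension-hierarchy aggregation such a system serves can be illustrated with a toy, single-process sketch; this is not VOLAP's API, merely the query semantics. Each item carries hierarchical coordinates (e.g. country > city > store), and an aggregate query sums every item under a given prefix of the hierarchy.

```python
# Toy hierarchical aggregate: sum the values of all items whose
# coordinate tuple starts with the query prefix. A real distributed
# OLAP system shards and indexes this across many nodes.
def aggregate(items, prefix):
    """items: list of (coords_tuple, value); prefix: tuple of levels."""
    n = len(prefix)
    return sum(v for coords, v in items if coords[:n] == prefix)
```

An empty prefix aggregates the entire database, matching the abstract's point that aggregations can span a few items or nearly all of them.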
In our rapidly growing big-data era, the big sensory data from the Internet of Things (IoT) often cannot be sent directly to a distant data center efficiently because of limitations in the network infrastructure. Fog computing, which has increasingly gained popularity for real-time applications, uses local mini data centers near the sensors to relieve the burden on the main data center and to exploit the full potential of cloud-based IoT. In this paper, a high-performance approach based on the Max-Min Ant System (MMAS), an efficient variant in the family of ant colony optimization algorithms, is proposed to tackle static task-graph scheduling in homogeneous multiprocessor environments, the predominant technology used as mini-servers in fog computing. The main duty of the proposed approach is to manipulate the priority values of tasks so that a near-optimal task order can be achieved. Leveraging background knowledge of the problem as heuristic values makes the proposed approach robust and efficient. Random task graphs with different shape parameters have been used to evaluate the proposed approach, and the results show its efficiency and superiority over traditional counterparts from a performance perspective.
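The quantity the ants tune in such an approach is the task priorities; given priorities, a standard list scheduler produces the actual schedule. A minimal sketch of that inner list-scheduling step, under assumed names and a homogeneous platform, might look like this (the MMAS pheromone search itself is not shown):

```python
# List scheduling on a homogeneous multiprocessor: dispatch tasks in
# priority order (higher value first) onto the earliest-free processor,
# respecting DAG precedence constraints.
def list_schedule(durations, preds, priority, n_procs):
    """durations: {task: time}; preds: {task: [predecessors]};
    priority: {task: value}. Returns (finish_times, makespan)."""
    free_at = [0.0] * n_procs   # when each processor becomes idle
    finish = {}
    remaining = set(durations)
    while remaining:
        # Tasks whose predecessors have all finished are ready.
        ready = [t for t in remaining
                 if all(p in finish for p in preds.get(t, []))]
        t = max(ready, key=lambda t: priority[t])
        proc = min(range(n_procs), key=lambda p: free_at[p])
        start = max([free_at[proc]] +
                    [finish[p] for p in preds.get(t, [])])
        finish[t] = start + durations[t]
        free_at[proc] = finish[t]
        remaining.remove(t)
    return finish, max(finish.values())
```

The ant colony's job is then to search the space of `priority` assignments for one that minimises the returned makespan.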
To enable the systematic evaluation of complex technical systems by engineers from various disciplines, advanced 3D simulation environments that model all relevant aspects are used. In these Virtual Testbeds real-time ...
Virtual reality (VR) surgical training and presurgical planning require the creation of 3D virtual models of patient anatomy from medical scan data. Real-time head tracking in VR applications allows users to navigate the virtual anatomy from any 3D position and orientation. The process of interactively rendering highly detailed 3D volumetric data of anatomical models from a dynamically changing observer's perspective is extremely demanding of computational resources. Parallel computing offers a solution to this problem: a distributed volume graphics rendering system composed of multiple nodes concurrently working on different portions of the output stream, which are later integrated to form the final view. This paper presents a distributed graphics rendering system consisting of multiple GPU-based heterogeneous nodes running a best-effort rendering scheme. Experiments show promising results in terms of efficiency and performance for rendering medical volumes in real time.
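The partition-then-composite pattern described here can be sketched in miniature: split the output frame into horizontal stripes, render each stripe concurrently, and stitch the results. This is a toy illustration only; `render_stripe` stands in for the per-node GPU rendering work, and the paper's system distributes across machines rather than threads.

```python
# Toy sort-first partitioning: each worker renders a stripe of rows,
# and the stripes are concatenated into the final frame.
from concurrent.futures import ThreadPoolExecutor

def render_frame(height, width, n_nodes, render_stripe):
    """render_stripe(y0, y1, width) -> list of rows for [y0, y1)."""
    bounds = [(i * height // n_nodes, (i + 1) * height // n_nodes)
              for i in range(n_nodes)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        stripes = list(pool.map(
            lambda b: render_stripe(b[0], b[1], width), bounds))
    # Composite: concatenate stripes in order to form the full image.
    return [row for stripe in stripes for row in stripe]
```

A best-effort scheme like the paper's would additionally let slow nodes return reduced-quality stripes so the frame deadline is still met.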
Distribution and automation of machine learning, relatively new concepts occupying the scene of data science and its applications, in an era of massive data that is growing by the second, requiring complex process...
ISBN (digital): 9781728144030
ISBN (print): 9781728144047
Nowadays, autonomous driving and driver assistance applications are being developed at an accelerated pace. This rapid growth is primarily driven by the potential of such smart applications to significantly improve safety on public roads and offer new possibilities for modern transportation concepts. Such applications typically require wireless connectivity between the vehicles and their surroundings, i.e., roadside infrastructure and cloud services. Nevertheless, such connectivity to external networks exposes the internal systems of individual vehicles to threats from remotely launched attacks. In this realm, it is crucial to identify any misbehavior of software components which might occur owing to either these threats or software/hardware malfunctioning. In this paper, we introduce AutoSec, a host-based anomaly detection algorithm which relies on observing four timing parameters of the executed software components to accurately detect malicious behavior at the operating system level. To this end, AutoSec formulates the task of detecting anomalous executions as a clustering problem. Specifically, AutoSec devises a hybrid clustering algorithm for grouping a set of timing traces collected from executing the legitimate code. At runtime, AutoSec classifies an execution as an anomaly if its timing parameters are sufficiently distant from the boundaries of the predefined clusters. To show the effectiveness of AutoSec, we collected timing traces from a testbed composed of a set of real and virtual control units communicating over a CAN bus. We show that, compared to baseline methods, AutoSec produces up to 21% fewer false positives and 18% fewer false negatives.
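The runtime classification step described above can be sketched as follows. This is an illustrative simplification, not AutoSec's actual model: the hybrid cluster-learning phase is omitted, and each learned cluster is represented here as a simple (centroid, radius) pair standing in for its boundary.

```python
# Sketch: flag an execution as anomalous when its timing-parameter
# vector lies outside the boundary of every learned cluster.
import math

def is_anomaly(timing, clusters):
    """timing: vector of timing parameters (e.g. the four observed ones);
    clusters: list of (centroid, radius) pairs learned from benign runs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Anomalous iff the point is farther than the radius from every centroid.
    return all(dist(timing, c) > r for c, r in clusters)
```

A margin could be added to each radius to trade false positives against false negatives, which is exactly the trade-off the paper's evaluation measures.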