This paper reports our experience using AspectJ, a general-purpose aspect-oriented extension to Java, to implement distribution and persistence aspects in a web-based information system. This system was originally implemented in Java and restructured with AspectJ. Our main contribution is to show that AspectJ is useful for implementing several persistence and distribution concerns in the application considered, and in other similar applications. We have also identified a few drawbacks in the language and suggest some minor modifications that could significantly improve similar implementations. Despite the drawbacks, we argue that the AspectJ implementation is superior to the pure Java implementation. Some of the aspects implemented in our experiment are abstract and constitute a simple aspect framework; the others are application-specific, but we suggest that different implementations might follow the same aspect pattern. The framework and the pattern allow us to propose architecture-specific guidelines that provide practical advice for both restructuring and implementing certain kinds of persistent and distributed applications with AspectJ.
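As a hedged sketch of the kind of reusable persistence aspect such a framework suggests, the following AspectJ fragment separates an abstract aspect (the framework part) from an application-specific subaspect (the pattern part); all names here (PersistenceAspect, stateChange, saveState, Account, Database) are our own illustration, not taken from the paper.

    // Abstract framework aspect: subaspects say WHICH state changes to persist and HOW.
    public abstract aspect PersistenceAspect {
        protected abstract pointcut stateChange(Object entity);
        protected abstract void saveState(Object entity);

        // After every matched state change, write the affected object to storage.
        after(Object entity) returning: stateChange(entity) {
            saveState(entity);
        }
    }

    // Application-specific subaspect binding the framework to one persistent class.
    aspect AccountPersistence extends PersistenceAspect {
        protected pointcut stateChange(Object entity):
            execution(void Account.set*(..)) && this(entity);

        protected void saveState(Object entity) {
            Database.update((Account) entity);   // hypothetical storage helper
        }
    }

    class Account { private int balance; public void setBalance(int b) { balance = b; } }
    class Database { static void update(Account a) { /* write to stable storage */ } }

The persistence concern stays out of Account itself, which is what makes a restructured implementation easier to maintain than scattering save calls through the Java code.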
This paper presents information on the content and resources of the internationally certified Master's program "High-Performance and Distributed Information Processing Systems" in the field of Applied Mathematics and Computer Science. The paper examines the scientific achievements of the academic staff, the distinctive features of the courses currently offered, the available facilities and resources, the prospects for the program's research activities at a high scientific level worldwide, and the opportunities to pursue postgraduate studies and to defend a PhD thesis.
The Image Processing Research Laboratory (INTI-Lab) of the Universidad de Ciencias y Humanidades hosts several computer science research projects that require substantial computational resources. Some of these projects involve climate prediction, molecular modeling, and physical simulations, among others; these applications generate significant amounts of data and raise big data concerns. Despite excellent hardware, final results are obtained only after hours or days of computation, depending on the algorithm's complexity, so optimal solutions cannot be delivered in an acceptable time. In this work, we propose the virtualization and configuration of a high-performance computing (HPC) cluster, known commercially as a "supercomputer": several computers connected by a high-speed network that behave as a single computer. The virtualized cluster is used to run a scientific algorithm, and performance tests on four virtual computers demonstrate that running time decreases as more machines are added, so the approach can be deployed in the laboratories of the institution.
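The performance tests described run a scientific algorithm on one to four virtual computers; the following minimal Java sketch (our construction, not the project's code) shows the same measurement idea on a single machine, timing a fixed CPU-bound workload as the number of workers grows.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class SpeedupDemo {
        // A fixed CPU-bound workload standing in for the scientific algorithm.
        static double busyWork(long iterations) {
            double acc = 0;
            for (long i = 1; i <= iterations; i++) acc += Math.sqrt(i);
            return acc;
        }

        public static void main(String[] args) throws Exception {
            final long total = 200_000_000L;
            for (int workers = 1; workers <= 4; workers++) {
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                long start = System.nanoTime();
                List<Future<Double>> parts = new ArrayList<>();
                long chunk = total / workers;              // split the work evenly
                for (int w = 0; w < workers; w++)
                    parts.add(pool.submit(() -> busyWork(chunk)));
                for (Future<Double> f : parts) f.get();    // wait for every chunk
                pool.shutdown();
                System.out.printf("workers=%d time=%.2fs%n",
                        workers, (System.nanoTime() - start) / 1e9);
            }
        }
    }

On the virtualized cluster the workers would be virtual machines reached over the high-speed network rather than threads, but the expected observation is the same: elapsed time falls as machines are added.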
This paper introduces an implementation of resilient objects based solely on the conversation mechanism, in the framework of the CSP scheme. Every transaction over the object is defined as a conversation involving one or more external processes and one internal process that monitors the state of the object. The proposed implementation facilitates the distribution of transaction code over a computer network, as well as automatic concurrency control by enforcing an adequate "readers and writers" policy. Transactions are invoked by procedure calls, but only message exchanges actually take place between different processors.
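A hedged Java approximation of the concurrency control described above: a read-write lock enforces the "readers and writers" policy around transactions, and a consistency check plays the role of the internal process that monitors the object's state (all names are our own; the paper's design uses CSP conversations rather than locks).

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ResilientObject {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private int state = 0;                         // the monitored internal state

        // Read-only transaction: any number may proceed concurrently.
        public int readTransaction() {
            lock.readLock().lock();
            try { return state; }
            finally { lock.readLock().unlock(); }
        }

        // Updating transaction: exclusive, and the new state is validated
        // before commit, mirroring the internal monitoring process.
        public void writeTransaction(int newState) {
            lock.writeLock().lock();
            try {
                if (newState < 0)                      // hypothetical consistency check
                    throw new IllegalStateException("transaction aborted");
                state = newState;
            } finally { lock.writeLock().unlock(); }
        }
    }

As in the paper, callers see plain procedure calls; in the distributed setting those calls would be realized as message exchanges between processors.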
Recent advances in hardware technology have made it economically feasible to construct microcomputer-controlled subsystems and to connect them into distributed real-time control systems. However, it is difficult to integrate subsystems and develop their potential without understanding or changing the existing systems. In this paper, we introduce a new language concept and a management method to overcome these difficulties. First, a concurrent programming concept, the guarding process (GP), is introduced. Under this concept a program consists only of modules, each of which defines objects and servers. Programs are described in a structured manner that allows the modules to detect incorrect sequences of interactions. Second, a management method that guarantees timing and response to modules, the deadline monitor (DM), is introduced. Servers may be specified with timing and response requirements; the DM fulfills these requirements and monitors the servers' behavior. Finally, to deal with the exceptions raised by the GP and the DM, an exception handler is introduced. In the testing phase, the GP reports to the handler why the algorithm is wrong and how to use the module. The concepts are presented in a programming language and demonstrated through the implementation of a distributed robot system.
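To make the deadline-monitor idea concrete, here is a minimal Java sketch (our construction; the paper defines the DM as a language-level mechanism): a server request is given a response deadline, and a missed deadline raises an exception for the handler to deal with.

    import java.util.concurrent.*;

    public class DeadlineMonitor {
        private final ExecutorService pool = Executors.newCachedThreadPool();

        // Run a server request, enforcing the given deadline in milliseconds.
        public <T> T callWithDeadline(Callable<T> server, long deadlineMs) throws Exception {
            Future<T> result = pool.submit(server);
            try {
                return result.get(deadlineMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException missed) {
                result.cancel(true);                   // stop the late server task
                throw new Exception("deadline of " + deadlineMs + " ms missed", missed);
            }
        }

        public static void main(String[] args) throws Exception {
            DeadlineMonitor dm = new DeadlineMonitor();
            System.out.println(dm.callWithDeadline(() -> 42, 100)); // meets its deadline
            dm.pool.shutdown();
        }
    }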
The emergence of massive datasets in clinical settings presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework, and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g., grid computing and the graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing, achieved by replicating computing tasks and cloning data chunks on different computing nodes across the cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. The paper concludes by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields.
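For readers unfamiliar with the framework, the canonical word-count example below shows the two tasks in the standard Hadoop Java API; a clinical application would keep this Map/Reduce shape and substitute its own domain logic (counting diagnosis codes instead of words, for instance).

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {
        // Map task: emit (word, 1) for every token in an input line.
        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : line.toString().split("\\s+")) {
                    if (token.isEmpty()) continue;
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }

        // Reduce task: sum the counts gathered for each word.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text word, Iterable<IntWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) sum += c.get();
                ctx.write(word, new IntWritable(sum));
            }
        }
    }

Hadoop schedules many mapper and reducer instances across the cluster, rereading HDFS replicas and rerunning failed tasks, which is the fault-tolerant, high-throughput behavior described above.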
Parallel computing represents the only plausible way to continue to increase the computational power available to scientists and engineers. Parallel computers, however, are not likely to be widely successful until they are easy to program. A major component in the success of vector supercomputers is the ability of scientists to write Fortran programs in a 'vectorizable' style and to expect vectorizing compilers to produce efficient code automatically [8, 35]. The resulting programs are easily maintained, debugged, and ported across different vector machines.
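As a rough illustration of 'vectorizable' style in Java terms (ours, not the paper's): a loop whose iterations are independent can be compiled to vector instructions, while a loop-carried dependence blocks that transformation.

    public class VectorStyle {
        // Vectorizable: y[i] = a*x[i] + y[i]; every iteration is independent,
        // so an auto-vectorizing compiler can process several elements at once.
        static void axpy(double a, double[] x, double[] y) {
            for (int i = 0; i < x.length; i++)
                y[i] = a * x[i] + y[i];
        }

        // Not vectorizable as written: each iteration reads the previous result.
        static void prefixSum(double[] x) {
            for (int i = 1; i < x.length; i++)
                x[i] = x[i - 1] + x[i];
        }
    }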
The Image Processing Research Laboratory (INTI-Lab) of the Universidad de Ciencias y Humanidades obtained permission to use the embedded systems laboratory. INTI-Lab researchers will use this laboratory for research on large-scale video processing, climate prediction, climate change, and physical simulations, among other topics. Projects of this kind involve highly complex computations that, when carried out on ordinary computers, take an unacceptably long time for the researcher. For this reason, we opted to implement a high-performance cluster architecture: a set of computers interconnected over a local network that behaves as a single system and solves complex problems using parallel computing techniques. The intention is to reduce running time roughly in proportion to the number of machines, approximating a low-cost supercomputer. Performance tests were run while scaling from 1 to 28 computers to measure the reduction in time. The results will show whether it is feasible to use this architecture in future projects that demand highly complex scientific processing.
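The expectation that time falls in direct proportion to the machine count holds only for the perfectly parallel part of a job; Amdahl's law (our addition, not from the paper) gives the standard bound on the speedup S with p machines when a fraction f of the work can be parallelized:

    S(p) = \frac{1}{(1 - f) + f/p}

With f = 0.95, for example, even the full 28-computer configuration yields at most S(28) ≈ 11.9, which is why measured scaling tests like those reported here matter more than the idealized linear model.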
In the past years, multi-core processors and clusters of multi-core processors have emerged as promising approaches to meet the growing demand for computing performance. They deliver scalable performance, albeit at the cost of tedious and complex parallel programming. Due to a lack of high-level abstractions, developers of parallel applications have to deal with low-level details such as coordinating threads or synchronizing processes; thus, parallel programming remains a difficult and error-prone task. In order to shield the programmer from these low-level details, algorithmic skeletons have been proposed. They encapsulate typical parallel programming patterns and have emerged as an efficient and scalable approach to simplifying the development of parallel applications. In this paper, we present a Java binding of our skeleton library Muesli. We point out strengths and weaknesses of Java with respect to parallel and distributed computing. A matrix multiplication benchmark demonstrates that Java Generics deliver poor performance, so the Java implementation is unable to compete with the C++ implementation in terms of performance.
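The root cause of the Generics result is that Java type parameters range only over reference types, so a generic skeleton computes on boxed values; the sketch below (our illustration, not Muesli code) contrasts a generic reduction over Double with its primitive double[] counterpart.

    public class GenericsCost {
        interface Fold<T> { T combine(T a, T b); }

        // Generic skeleton-style reduction: every step boxes and unboxes.
        static <T> T reduce(T[] xs, T init, Fold<T> f) {
            T acc = init;
            for (T x : xs) acc = f.combine(acc, x);
            return acc;
        }

        // Primitive equivalent: flat double[] access, no allocation per element.
        static double reduce(double[] xs) {
            double acc = 0;
            for (double x : xs) acc += x;
            return acc;
        }

        public static void main(String[] args) {
            Double[] boxed = {1.0, 2.0, 3.0};
            System.out.println(reduce(boxed, 0.0, (a, b) -> a + b)); // boxed path
            System.out.println(reduce(new double[]{1, 2, 3}));       // primitive path
        }
    }

C++ templates are instantiated per element type, so the C++ Muesli skeletons compute directly on unboxed values, which explains the benchmark gap.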