This paper considers defects in Python program code. It is shown that these defects differ from those in C/C++ code; hence, there is a need to study defects in large-scale open-source projects. A classification of the detected defects is presented, based on whether type inference is required to find the error. It is shown that only a small portion of the defects are "simple"; detecting the majority requires type inference. The question of which constructs of the Python language must be supported in type inference to find real defects is discussed.
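The distinction the abstract draws can be illustrated with hypothetical snippets (ours, not taken from the paper): a "simple" defect is found by name resolution alone, while the second defect is invisible to an analyzer until it infers the types involved.

```python
# Hypothetical examples illustrating the two defect classes described above.

def simple_defect():
    # A "simple" defect: a reference to an undefined name. A plain
    # name-resolution pass finds it; no type information is needed.
    return undefined_helper()  # NameError at runtime


def needs_type_inference(items):
    # A defect only type inference reveals statically: len() returns int,
    # and str + int raises TypeError in Python, but the analyzer must
    # first infer the type of len(items) to see the mismatch.
    return "count: " + len(items)
```

Both functions parse and import cleanly; only at runtime (or with type inference) do the defects surface, which is why the abstract argues most real defects need the latter.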
Prospects for applying virtualization technology to high-performance computations on x64 systems are studied. Principal reasons for performance degradation when parallel programs run in virtual environments are considered. The KVM/QEMU and Palacios virtualization systems are considered in detail, with the HPC Challenge and NAS Parallel Benchmarks suites used as benchmarks. A modern computing cluster built on the InfiniBand high-speed interconnect is used in testing. The results of the study show that, in general, virtualization is reasonable for a wide class of high-performance applications. Fine tuning of the virtualization systems involved made it possible to reduce overheads from 10-60% to 1-5% on the majority of tests from the HPC Challenge and NAS Parallel Benchmarks suites. The main bottlenecks of virtualization systems are reduced performance of the memory system (which is critical only for a narrow class of problems), costs associated with hardware virtualization, and the increased noise caused by the host operating system and hypervisor. Noise can have a negative effect on the performance and scalability of fine-grained applications (applications with frequent small-scale communications). The influence of noise increases significantly as the number of nodes in the system grows.
Static analysis is a popular tool for detecting vulnerabilities that cannot be found by ordinary testing. The main problem in the development of static analyzers is their low speed. Methods for accelerating such analyzers are described, including incremental analysis, lazy analysis, and header-file caching. These methods make it possible to considerably accelerate defect detection and to integrate static analysis tools into the development environment. As a result, defects in a file edited in the Visual Studio development environment can be detected in 0.5 s or less, which means that in practice they can be detected after each keystroke. Therefore, critical vulnerabilities can be detected and corrected at the coding stage.
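The incremental-analysis idea mentioned above can be sketched as follows. This is our minimal illustration under assumed names (`IncrementalAnalyzer`, `analyze_fn`), not the authors' implementation: only files whose content hash changed since the previous run are re-analyzed, and results for unchanged files come from a cache.

```python
import hashlib

# Sketch of incremental analysis: re-run the expensive per-file analysis
# only for files whose content hash has changed; reuse cached results
# otherwise. Names here are illustrative, not from the paper.

class IncrementalAnalyzer:
    def __init__(self, analyze_fn):
        self.analyze_fn = analyze_fn   # expensive per-file analysis pass
        self.hashes = {}               # path -> last-seen content hash
        self.results = {}              # path -> cached defect list

    def run(self, files):
        """files: dict mapping path -> source text; returns all results."""
        for path, text in files.items():
            digest = hashlib.sha256(text.encode()).hexdigest()
            if self.hashes.get(path) != digest:   # new or edited file
                self.results[path] = self.analyze_fn(path, text)
                self.hashes[path] = digest
        return self.results
```

On a repeated run over an unchanged project, `analyze_fn` is never called, which is the effect that makes per-keystroke analysis feasible.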
In our previous paper [1], a new model of a Labeled Transition System (LTS)-type implementation was proposed. In ordinary LTSs, transitions are labeled by actions; therefore, they can be called LTSs of actions. The new model is an LTS of observations; in this model, observations and test actions (buttons) are used instead of actions. This model generalizes many testing semantics that are based on the LTS of actions but use additional observations (refusals, ready sets, etc.). Moreover, systems with priority, which are not described by the LTS of actions, are simulated uniformly. In the present paper, we develop this approach by focusing on the composition of systems. The point is that, on observation traces, one cannot define a composition with respect to which a composition of LTSs would possess the property of additivity: the set of traces of a composition of LTSs coincides with the set of all pairwise compositions of traces of the LTS operands. This is explained by the fact that an observation in a composition state is not computed from the observations in the operand states. In this paper, we propose an approach that eliminates this drawback. To this end, we label the transitions of LTSs by symbols (events) that, on the one hand, can be composed so as to guarantee the property of additivity and, on the other hand, can be used to generate observations under testing: a transition by an event gives rise to an observation related to this event. This model is called an LTS of events. In this paper, we define (1) a transformation of an LTS of events into an LTS of observations that conforms with the principles of our previous paper [1]; (2) a composition of LTSs of events; (3) a composition of specifications that preserves conformance: a composition of conformal implementations is conformal to the composition of the specifications; and (4) a uniform simulation of LTSs of actions in terms of LTSs of events, which allows one to consider an implementation in any interaction semantics admissible…
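The additivity property at the center of the abstract can be seen in a toy setting (our illustration of the general idea, not the paper's formal model): represent an LTS as a dict `state -> {event: next_state}`, compose synchronously on shared events, and observe that the trace set of the composition is exactly the pairwise combination (here, intersection) of the operands' trace sets.

```python
# Toy LTS-of-events sketch. An LTS is a dict: state -> {event: next_state}.
# Synchronous composition pairs states and fires an event only when both
# operands can fire it. Under full synchronization, traces(A || B) equals
# the intersection of traces(A) and traces(B) -- an additivity property.

def compose(lts1, s1, lts2, s2):
    comp, stack = {}, [(s1, s2)]
    while stack:
        a, b = stack.pop()
        if (a, b) in comp:
            continue
        comp[(a, b)] = {}
        # shared events fireable in both operand states
        for e in set(lts1.get(a, {})) & set(lts2.get(b, {})):
            nxt = (lts1[a][e], lts2[b][e])
            comp[(a, b)][e] = nxt
            stack.append(nxt)
    return comp

def traces(lts, start, depth):
    """All event traces of length <= depth from the start state."""
    out, frontier = {()}, {(): start}
    for _ in range(depth):
        new = {}
        for tr, st in frontier.items():
            for e, nx in lts.get(st, {}).items():
                new[tr + (e,)] = nx
        out |= set(new)
        frontier = new
    return out
```

This is only the easy case the paper goes beyond: for observation traces such additivity fails, which is what motivates the LTS-of-events construction.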
Innovation and engineering are closely related concepts. Innovation is one of the key competences of engineers, who apply their creativity and knowledge to the problems they must solve for the improvement of humanity and for social evolution. In this special section, we have selected four papers from three research events (CINAIC 2013, TEEM 2013, and ISELEAR 2013) that empower the innovation and research cycles in engineering from different perspectives.
Data normalization is a laborious and costly process in the development of master data management software in enterprises. We analyze the subtasks of normalization and propose an approach to automating the most laborious of them. We also describe a software system that implements the proposed approach and automatically learns expert skills.
Distributed event-based systems are used to detect meaningful events with low latency in high-data-rate event streams that occur in surveillance, sports, finance, etc. However, both known approaches to dealing with the predominant out-of-order event arrival at the distributed detectors have their shortcomings: buffering approaches introduce latencies for event ordering, and stream revision approaches may overload the system through unbounded retraction cascades. This article presents an adaptive speculative processing technique for out-of-order event streams that enhances typical buffering approaches. In contrast to other stream revision approaches developed so far, our novel technique encapsulates the event detector, uses the buffering technique to delay events but also speculatively processes a portion of the buffered events, and adapts the degree of speculation at runtime to fit the available system resources so that detection latency becomes minimal. Our technique outperforms known approaches on both synthetic data and real sensor data from a real-time locating system (RTLS) with several thousand out-of-order sensor events per second. Speculative buffering exploits system resources and reduces latency by 40% on average.
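The buffering baseline that the article enhances can be sketched as a K-slack buffer (our minimal illustration, not the adaptive speculation algorithm itself): events wait in a min-heap keyed by timestamp and are released only once they are at least K time units older than the newest event seen, so late arrivals within the slack K are reordered correctly at the cost of added latency.

```python
import heapq

# K-slack reordering buffer: holds out-of-order events and releases them
# in timestamp order once they are safely older than max_seen - K.
# This is the ordering-latency trade-off the article's speculative
# technique is designed to reduce.

class KSlackBuffer:
    def __init__(self, k):
        self.k = k
        self.heap = []                  # (timestamp, payload)
        self.max_ts = float("-inf")

    def push(self, ts, payload):
        """Insert one possibly out-of-order event; return the events that
        are now safe to deliver, in timestamp order."""
        heapq.heappush(self.heap, (ts, payload))
        self.max_ts = max(self.max_ts, ts)
        ready = []
        while self.heap and self.heap[0][0] <= self.max_ts - self.k:
            ready.append(heapq.heappop(self.heap))
        return ready
```

Feeding an out-of-order stream through the buffer yields a correctly ordered stream, delayed by up to K; the article's contribution is to speculate on buffered events instead of always paying that delay.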
This paper describes the two-stage compilation system based on LLVM compiler infrastructure and the performance optimizations made possible by this deployment technique.
ISBN:
(print) 9781479924608
This paper proposes a decomposition of generic software security problems, mapping them to smaller problems of static and dynamic binary code analysis.
ISBN:
(print) 9781479924608
ISBN:
(print) 9783662452318; 9783662452301
Healthcare research data is typically produced, curated, and used by scientists, physicians, and other experts who have little or no professional affinity to programming and IT system design. In the context of evidence-based medicine or translational medicine, however, the production, reliability, and long-term availability of high-quality and high-assurance data is of paramount importance. In this paper we reflect on the data management needs we encountered in our experience as associated partners of a large interdisciplinary research project coordinated at the Cancer Metabolism Research Group, Institute of Biomedical Sciences at the University of Sao Paulo in Brazil. Their research project involves extensive collection of detailed sample data within a complicated environment of clinical and research methods; medical, assessment, and measurement equipment; and the regulatory requirements of maintaining privacy, data quality, and security. We use this example as an illustrative case of a category of needs, and a diversity of professional and skill profiles, that is representative of what happens today in any large-scale research endeavor. We derive a catalogue of requirements that an IT system for the definition and management of data and processes should have, relate this to the IT development and XMDD philosophy, and briefly sketch how the DyWA + jABC combination provides a foundation for meeting those needs.