Efforts have been expended to formulate guidelines for applying analysis and design principles, to draw on the many successes in specifying large-scale projects, and to provide "cures" for commonly encountered problems. Educators and professionals have recognized viewpoint-based analysis as a powerful means of supporting the requirements analysis process. This paper reports on an initiative to bring viewpoint analysis and dataflow-based methods closer together. The topic has been of significant interest to us because a viewpoint approach is far superior to its dataflow counterpart for requirements discovery and gathering in medium- and large-size student projects. The paper describes a framework that attempts to: (1) bridge the gap between viewpoint-based requirements analysis and the computational aspects of dataflow-oriented models; and (2) provide a systematic approach to structuring the system under consideration and managing its complexity. These goals have been achieved by introducing a more generalized notion of viewpoints, called interfaces, and their compositions. The adequacy of the framework and its implementation are presented.
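To make the notion of interfaces as generalized viewpoints more concrete, here is a minimal Python sketch, assuming a hypothetical Interface class holding the data items a viewpoint requires and provides, and a compose operation that turns matched items into internal flows; the names and the composition rule are illustrative assumptions, not taken from the paper.

class Interface:
    """A generalized viewpoint: the data items it requires and provides."""
    def __init__(self, name, requires, provides):
        self.name = name
        self.requires = set(requires)
        self.provides = set(provides)

def compose(a, b):
    """Compose two interfaces; matched provides/requires become internal flows."""
    internal = (a.provides & b.requires) | (b.provides & a.requires)
    return Interface(
        a.name + "+" + b.name,
        (a.requires | b.requires) - internal,
        (a.provides | b.provides) - internal,
    )

customer = Interface("customer", ["invoice"], ["order"])
sales = Interface("sales", ["order"], ["invoice", "shipment_request"])
system = compose(customer, sales)
print(system.requires, system.provides)   # set() {'shipment_request'}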
In this paper, we present methodology, which enables designing and profiling macro dataflow graphs that represent computation and communication patterns for the FDTD (finite difference time domain) problem in irregular computational areas. Optimized macro dataflow graphs (MDFG) for FDTD computations are generated in three main phases: generation of initial MDFG based on wave propagation area partitioning, MDFG nodes merging with load balancing to obtain given number of macro nodes and communication optimization to minimize and balance internode data transmissions. The computation efficiency for several communication systems (MPI, RDMA RB, SHMEM) is discussed. Relations between communication optimization algorithms and overall FDTD computation efficiency are shown. Experimental results obtained by simulation are presented
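As a rough illustration of the second phase (node merging with load balancing), the sketch below greedily merges the lightest macro node with its lightest neighbour until a target node count is reached; the data structures and the greedy rule are assumptions made for illustration, not the paper's algorithm.

def merge_nodes(weights, edges, target):
    """weights: {node: compute load}, edges: iterable of node pairs."""
    weights = dict(weights)
    edges = {frozenset(e) for e in edges}
    while len(weights) > target:
        a = min(weights, key=weights.get)                       # lightest macro node
        nbrs = [n for e in edges if a in e for n in e if n != a]
        if not nbrs:
            break
        b = min(nbrs, key=weights.get)                          # its lightest neighbour
        weights[a] += weights.pop(b)                            # merge b into a
        edges = {frozenset(a if n == b else n for n in e)
                 for e in edges if e != frozenset((a, b))}
        edges = {e for e in edges if len(e) == 2}               # drop self-loops
    return weights, edges

w = {0: 4, 1: 1, 2: 2, 3: 5}
e = [(0, 1), (1, 2), (2, 3)]
print(merge_nodes(w, e, 2))   # two macro nodes with loads 7 and 5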
Algorithms for generating an optimal finite state machine (FSM) implementation of pipelined data path controllers are presented. The states are partitioned into two groups and encoded, and each partition is mapped onto one PLA to form a two-PLA, Moore-style FSM state sequencer. The experimental results show that substantial savings in layout area can be achieved compared with previously published FSM optimization approaches.
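The following Python sketch conveys the flavour of a two-group state partition with per-group encoding; the balancing heuristic, data structures, and names are hypothetical illustrations, not the paper's algorithms.

def partition_and_encode(transitions):
    """transitions: {state: {input_symbol: next_state}}."""
    states = sorted(transitions, key=lambda s: len(transitions[s]), reverse=True)
    group_a, group_b = states[0::2], states[1::2]        # simple balanced split
    width = max(1, (max(len(group_a), len(group_b)) - 1).bit_length())
    encoding = {}
    for pla_id, group in enumerate((group_a, group_b)):
        for i, s in enumerate(group):
            encoding[s] = (pla_id, format(i, f"0{width}b"))  # (PLA id, state code)
    return encoding

fsm = {"IDLE": {"start": "FETCH"},
       "FETCH": {"ok": "EXEC", "stall": "FETCH"},
       "EXEC": {"done": "IDLE"}}
print(partition_and_encode(fsm))
# {'FETCH': (0, '0'), 'EXEC': (0, '1'), 'IDLE': (1, '0')}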
In this paper, we propose to develop prototypes of dataflow diagrams using a logical framework based on an extension of logic programming that performs abductive reasoning (abductive logic programming). Within this framework, we discuss how to represent a dataflow diagram declaratively as a set of logical sentences and outline a proof procedure. Given the declarative representation of a dataflow diagram, the proof procedure, which combines forward and backward chaining in a structured manner, can be applied. Unlike conventional logic programming, the computed answers are abducible atoms that directly represent the outputs. When the representation is restricted to pure Prolog, we provide the semantics of the outputs under the abductive logical framework and show the soundness and completeness of the proof procedure. We compare our approach with conventional backward chaining and finally discuss some further enhancements.
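A minimal sketch of the general idea, kept in Python for consistency with the other examples rather than in a logic language: a dataflow diagram written as declarative rules "output <- inputs", evaluated by backward chaining from a requested output, while external inputs are collected in a way loosely analogous to abduced atoms. The diagram, rules, and names are hypothetical.

RULES = {   # process outputs and the inputs they depend on (illustrative only)
    "invoice": (["order", "price_list"], lambda o, p: f"invoice({o},{p})"),
    "order":   (["request"], lambda r: f"order({r})"),
}

def solve(goal, facts, abduced):
    if goal in facts:                       # already derived
        return facts[goal]
    if goal not in RULES:                   # external input: "abduce" it
        abduced.add(goal)
        return f"<{goal}>"
    inputs, fn = RULES[goal]
    values = [solve(i, facts, abduced) for i in inputs]   # backward chaining
    facts[goal] = fn(*values)               # forward step: derive and cache
    return facts[goal]

abduced = set()
print(solve("invoice", {}, abduced))    # invoice(order(<request>),<price_list>)
print("abduced inputs:", abduced)       # {'request', 'price_list'}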
ISBN (print): 0769522092
Web applications often rely on server-side scripts to handle HTTP requests, generate dynamic content, and interact with other components. Server-side scripts are usually mixed with HTML statements and are difficult to understand and test. In particular, these scripts undergo no compile-time checking and can be error-prone. It is therefore critical to test server-side scripts to ensure the quality and reliability of Web applications. We adapt traditional dataflow testing techniques to the context of Java Server Pages (JSP), a very popular server-side scripting technology for developing Web applications with Java. We point out that JSP implicit objects and action tags introduce several unique dataflow test artifacts that need to be addressed. A test model is presented to capture the dataflow information of JSP pages, taking the various implicit objects and action tags into account. Based on the test model, we describe an approach to computing the intraprocedural and interprocedural dataflow test paths for uncovering data anomalies in JSP pages.
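A minimal sketch of computing intraprocedural def-use pairs over a simplified, straight-line model of a JSP page, with implicit objects such as session treated as ordinary variables; the page model and names are assumptions for illustration, not the paper's test model.

def def_use_pairs(statements):
    """statements: list of (line, defs, uses) in straight-line order."""
    pairs, last_def = [], {}
    for line, defs, uses in statements:
        for v in uses:
            if v in last_def:
                pairs.append((v, last_def[v], line))   # (var, def line, use line)
        for v in defs:
            last_def[v] = line
    return pairs

page = [
    (3, {"user"}, set()),             # <% String user = request.getParameter("u"); %>
    (5, {"session.cart"}, {"user"}),  # session attribute set from user
    (8, set(), {"session.cart"}),     # cart rendered into the response
]
print(def_use_pairs(page))   # [('user', 3, 5), ('session.cart', 5, 8)]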
Scanning tools are commonly used by intruders to identify vulnerable hosts and applications in a network. From a security perspective, detecting scanning activity is therefore important for identifying an attack in its initial stage and minimizing its impact. We mainly consider TCP flows, because most Internet applications use TCP as their transport protocol. Traditionally, TCP scan traffic is detected using either the flag values in the TCP packet header or statistical properties of connection parameters, such as the number of failed connection attempts. In this paper, we present a novel behaviour analysis of TCP traffic in which flow characteristics are used to identify anomalies and scanning activity targeting a network or host. The proposed method provides a generic solution to SYN (half-open) scans, connect scans, FIN scans, Xmas scans, and null scans. The results obtained with our method demonstrate its detection capability and accuracy.
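For illustration, the sketch below shows the traditional flag-based classification that the abstract contrasts with; the flow-level behaviour analysis proposed in the paper (needed, for example, to catch connect scans that complete the handshake) is not reproduced here.

def classify_probe(flags):
    """flags: set of TCP flag names present in a probe segment."""
    if flags == {"SYN"}:
        return "SYN (half-open) scan probe"
    if flags == {"FIN"}:
        return "FIN scan probe"
    if flags == {"FIN", "PSH", "URG"}:
        return "Xmas scan probe"
    if not flags:
        return "null scan probe"
    if {"SYN", "ACK"} <= flags:
        # a connect scan completes the handshake, so it looks normal here
        # and requires flow-level statistics to detect
        return "normal handshake traffic"
    return "unclassified"

for probe in [{"SYN"}, {"FIN"}, {"FIN", "PSH", "URG"}, set(), {"SYN", "ACK"}]:
    print(sorted(probe), "->", classify_probe(probe))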
ISBN (print): 9781424445257
Programs using a service-oriented architecture (SOA) often feature ultra-late binding among components. These components have well-defined interfaces and are known as Web services. Messages between every pair of Web services conform both to the output interface of the sender and to the input interface of the receiver. Unit testing of Web services should therefore not only test the logic of the services but also assure their correctness during the input, manipulation, and output of messages. There is, however, little software testing research in this area. In this paper, we study the unit testing problem for components written in orchestration languages, WS-BPEL in particular. We report an empirical study of the effectiveness of the Frankl-Weyuker dataflow testing criteria (particularly the all-uses criterion) on WS-BPEL subject programs. Our study shows that conventional dataflow testing criteria can be much less effective at revealing faults in interface artifacts (WSDL documents) and message manipulations (XPath queries) than at revealing faults in BPEL artifacts.
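A minimal sketch of how all-uses coverage can be measured once the required def-use associations are known (here simply given as data rather than extracted from a BPEL process); the structures and names are hypothetical.

def covers(path, defs_at, var, d, u):
    """True if the path reaches use u from def d with no intervening redefinition."""
    try:
        i, j = path.index(d), path.index(u)
    except ValueError:
        return False
    return i < j and all(var not in defs_at.get(n, set()) for n in path[i + 1:j])

def all_uses_coverage(du_pairs, defs_at, test_paths):
    covered = {p for p in du_pairs
               if any(covers(t, defs_at, *p) for t in test_paths)}
    return len(covered) / len(du_pairs)

defs_at = {1: {"x"}, 3: {"x"}}                 # node -> variables defined there
du_pairs = [("x", 1, 4), ("x", 3, 4)]          # (variable, def node, use node)
tests = [[1, 2, 4], [1, 3, 4]]
print(all_uses_coverage(du_pairs, defs_at, tests))   # 1.0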
During the past ten years, several variants of an analysis technique called program slicing have been developed. Program slicing has applications in maintenance tasks such as debugging, testing, program integration, and program verification, and can be characterized as a type of dependence analysis. A program slice can loosely be defined as the subset of a program needed to compute a certain variable value at a certain program position. A novel method for interprocedural dynamic slicing, which is more precise than interprocedural static slicing methods and is useful for dependence analysis at the procedural abstraction level, was given by M. Kamkar et al. (1992, 1993). It is demonstrated here how interprocedural dynamic slicing can be used to increase the reliability and precision of interprocedural dataflow testing. The work on dataflow testing reported by E. Duesterwald et al. (1992), a novel method for dataflow testing through output influences, is thereby generalized.
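A simplified, statement-level sketch of dynamic slicing over a single execution trace; the paper's method works interprocedurally at the procedural abstraction level, which this illustration does not capture.

def dynamic_slice(trace, criterion_var):
    """trace: list of (stmt_id, defs, uses) in execution order."""
    needed, slice_ids = {criterion_var}, set()
    for stmt, defs, uses in reversed(trace):    # walk the trace backwards
        if defs & needed:                       # this event defines a needed variable
            slice_ids.add(stmt)
            needed = (needed - defs) | uses     # its uses become needed in turn
    return slice_ids

trace = [
    ("s1", {"a"}, set()),     # a = input()
    ("s2", {"b"}, set()),     # b = input()
    ("s3", {"c"}, {"a"}),     # c = a * 2
    ("s4", {"d"}, {"b"}),     # d = b + 1  (irrelevant to c)
]
print(dynamic_slice(trace, "c"))   # {'s1', 's3'} (set order may vary)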
Many mature development processes use structural coverage metrics to monitor the quality of testing. Studies suggest that commonly used control-flow testing criteria poorly address the state-based behavior of object-oriented software. This paper presents DaTeC, a tool that provides useful coverage information about Java object states by implementing a novel contextual dataflow testing approach.
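A small sketch of the idea behind contextual def-use pairs, where each definition and use of an object field is recorded together with the calling context in which it occurred; this illustrates the concept only and is not DaTeC's actual model.

def contextual_pairs(events):
    """events: list of ('def' or 'use', field, context tuple) in execution order."""
    pairs, last_def = [], {}
    for kind, field, context in events:
        if kind == "def":
            last_def[field] = context
        elif field in last_def:
            pairs.append((field, last_def[field], context))  # (field, def ctx, use ctx)
    return pairs

events = [
    ("def", "cart.items", ("Cart.<init>",)),
    ("def", "cart.items", ("Checkout.run", "Cart.add")),
    ("use", "cart.items", ("Checkout.run", "Cart.total")),
]
print(contextual_pairs(events))
# [('cart.items', ('Checkout.run', 'Cart.add'), ('Checkout.run', 'Cart.total'))]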
ISBN (print): 9781457705397
CAL is a dataflow-oriented language for writing high-level specifications of signal processing applications. The language has recently been standardized and selected for the new MPEG Reconfigurable Video Coding standard. Application specifications written in CAL can be transformed into executable implementations through development tools. Unfortunately, the present tools provide no way to schedule the CAL entities efficiently at run time. This paper proposes an automated approach to analyze specifications written in CAL and produce run-time schedules that perform on average 1.45x faster than implementations relying on default scheduling. The approach is based on quasi-static scheduling, which reduces conditional execution in the run-time system.
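A toy sketch of the quasi-static idea: firing sequences that are fixed at compile time are replayed without run-time tests, so the scheduler makes only one decision per data-dependent choice; the actor and sequence names are invented and not derived from the paper's analysis of CAL programs.

STATIC_SEQUENCES = {                 # hypothetical sequences, e.g. derived offline
    "luma_block":   ["parse", "idct", "idct", "motion_comp"],
    "chroma_block": ["parse", "idct", "motion_comp"],
}

def run(block_stream, fire):
    for block in block_stream:                    # one run-time decision per block
        for actor in STATIC_SEQUENCES[block]:     # fixed, branch-free firing sequence
            fire(actor)

run(["luma_block", "chroma_block"], lambda a: print("fire", a))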