ISBN (print): 076952611X
In today's fast-paced business world, where competition and technology are at their zenith, software development companies need to improve their quality standards while also reducing operating costs. Various approaches have emerged to meet these challenging objectives. In the recent past, agile methodologies have become some of the most effective practices in the software development arena. In particular, eXtreme Programming (XP) incorporates the test-first approach, from which, as recent research shows, Test Driven Development (TDD) emerged: a requirement is first formalized as a test, and code is then written to pass that test. Our research provides a mechanism to reduce the cost of testing, which stems mainly from troublesome tests that fail again and again. We used TDD, analyzed the problem, and proposed a workable solution. Test Driven Development is a technique that encourages less documentation, which creates considerable difficulties for developers compared with traditional methods. To reduce this burden on developers, we have proposed some documentation steps.
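The test-first cycle the abstract describes can be illustrated with a minimal sketch: the requirement is written as a test first, and only then is the code written to make it pass. The function and requirement below are hypothetical examples, not taken from the paper.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount.

    Written *after* the tests below, with just enough logic to pass them,
    in the spirit of the TDD cycle.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountRequirement(unittest.TestCase):
    # Step 1 of TDD: the requirement, formalized as tests first.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Running the test suite before the function exists yields the "red" failure that drives the implementation; the code above is the minimal "green" step.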
ISBN (print): 9780769527017
Peculiar data are objects that are relatively few in number and significantly different from the other objects in a data set. In this paper we propose the PDD framework for detecting multiple categories of peculiar data. This framework provides an extensible set of perspectives for viewing data, currently including viewing data as a set of records, attributes, frequencies, intervals, sequences, or sequences of changes. By using these six views of the data, multiple categories of peculiar data can be detected to reveal different aspects of the data. For each view, the framework provides an extensible set of peculiarity measures to detect outliers and other kinds of peculiar data. The PDD framework has been implemented for Oracle and Access. Experiments are reported for data sets concerning Regina weather and NHL hockey.
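As a rough illustration of one possible record-view peculiarity measure, the sketch below scores each value by its summed distance to all other values and flags values whose score exceeds the mean by some multiple of the standard deviation. The exponent, threshold, and data are illustrative assumptions, not the paper's exact formulation.

```python
import statistics

def peculiarity_scores(values, alpha=0.5):
    # Score each value by its summed (dampened) distance to every value.
    return [sum(abs(v - w) ** alpha for w in values) for v in values]

def detect_peculiar(values, k=2.0, alpha=0.5):
    # Flag values whose score exceeds mean + k * population std deviation.
    scores = peculiarity_scores(values, alpha)
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    return [v for v, s in zip(values, scores) if s > mean + k * std]

readings = [10, 11, 9, 10, 12, 11, 95]
print(detect_peculiar(readings))  # [95]
```

Because each view (records, attributes, frequencies, and so on) would supply its own distance, a framework like PDD could plug different measures into the same detection loop.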
ISBN (print): 0780370805
This paper proposes a new mechanism for document similarity search that uses an indexing structure called signature tables. The signature-table mechanism was originally invented for similarity search over market basket data, and in this paper we apply it to document data. Since the characteristics of document data are quite different from those of market basket data, the performance of similarity search is not satisfactory when the mechanism is naively applied to documents. We describe why the naive application decreases efficiency and propose several techniques for improving performance. The results of simulations using a real document data set show that the proposed mechanism achieves good performance.
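The general idea behind signature-based indexing can be sketched as follows: each document's term set is hashed into a small bit signature, documents are bucketed by signature, and a similarity query skips buckets whose signatures share no bits with the query. The bit width, hashing, and similarity threshold below are illustrative assumptions, not the paper's actual design.

```python
import zlib
from collections import defaultdict

BITS = 16  # illustrative signature width

def signature(terms):
    # OR together one bit per term, chosen by a deterministic hash.
    sig = 0
    for t in terms:
        sig |= 1 << (zlib.crc32(t.encode()) % BITS)
    return sig

class SignatureTable:
    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, doc_id, terms):
        self.buckets[signature(terms)].append((doc_id, set(terms)))

    def search(self, terms, min_jaccard=0.3):
        q = set(terms)
        qsig = signature(q)
        hits = []
        for sig, docs in self.buckets.items():
            # Prune buckets whose signatures share no bits with the query.
            if sig & qsig == 0:
                continue
            for doc_id, tset in docs:
                if len(q & tset) / len(q | tset) >= min_jaccard:
                    hits.append(doc_id)
        return hits
```

The abstract's point about document data is visible even here: documents have far larger and more skewed term sets than market baskets, so a naive signature saturates with set bits and prunes few buckets, which is the inefficiency the paper sets out to fix.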
ISBN (print): 9781450371223
Understanding an unfamiliar program is always a daunting task for any programmer, experienced or inexperienced. Many studies have shown that even an experienced programmer who is already familiar with the code may still need to rediscover it frequently during software maintenance. The difficulty of program comprehension is much more intense when a system is completely new. One well-known solution to this notorious problem is to create effective technical documentation to make up for the lack of knowledge. The purpose of technical documentation is to achieve the transfer of knowledge. However, creating effective technical documentation has been impeded by many problems in practice [1]. In this paper, we propose a novel tool called GeekyNote to address the major challenges in technical documentation. The key ideas GeekyNote proposes are: (1) documents are annotated onto versioned source code transparently; (2) formal textual writing is discouraged and screencasts (or other forms of documents) are encouraged; (3) the up-to-dateness between documents and code can be detected, measured, and managed; (4) documentation that works like a debugging trace is supported; (5) couplings can be easily created and managed for future maintenance needs; (6) how well a system is documented can be measured. A demo video can be accessed at https://***/cBueuPVDgWM.
ISBN (print): 0818622709
The requirements and specifications documents which initiate and control design and development projects typically employ a variety of formal and informal notational systems. The goal of the research reported here has been to automatically interpret requirement documents expressed in a variety of notations and to integrate the interpretations in order to support requirements analysis and synthesis from them. Because the source notations include natural language, a form of semantic net called conceptual graphs has been adopted as the intermediate knowledge representation for expressing interpretations and integrating them. The focus of this paper is to describe the interpretation or mapping of a few requirements notations to conceptual graphs, and to indicate the process of joining these interpretations.
ISBN (print): 3540252959
This paper addresses the problem of performing supervised classification on document collections that also contain junk documents. By "junk documents" we mean documents that do not belong to the topic categories (classes) we are interested in. This type of document typically cannot be covered by the training set; nevertheless, in many real-world applications (e.g. classification of web or intranet content, focused crawling, etc.) such documents occur quite often and a classifier has to make a decision about them. We tackle this problem by using restrictive methods and ensemble-based meta methods that may decide to leave out some documents rather than assigning them to inappropriate classes with low confidence. Our experiments with four different data sets show that the proposed techniques can eliminate a relatively large fraction of junk documents while dismissing only a significantly smaller fraction of potentially interesting documents.
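The "restrictive" idea described above can be sketched as a classifier with a reject option: it assigns the best-matching class only when its confidence clears a threshold, and otherwise abstains, dismissing the document as junk. The nearest-centroid classifier, threshold value, and training snippets below are illustrative assumptions, not the paper's actual methods.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def centroid(docs):
    # Sum term frequencies over a class's training documents.
    total = Counter()
    for d in docs:
        total.update(d.split())
    return total

def classify_restrictive(doc, centroids, threshold=0.3):
    vec = Counter(doc.split())
    best_class, best_sim = None, 0.0
    for label, c in centroids.items():
        sim = cosine(vec, c)
        if sim > best_sim:
            best_class, best_sim = label, sim
    # Reject option: below the confidence threshold, dismiss as junk.
    return best_class if best_sim >= threshold else None

centroids = {
    "sports": centroid(["football match goal score", "team win league goal"]),
    "finance": centroid(["stock market price shares", "bank interest rate price"]),
}
print(classify_restrictive("goal in the football match", centroids))   # sports
print(classify_restrictive("quantum entanglement photons", centroids)) # None (junk)
```

Raising the threshold eliminates more junk at the cost of dismissing more genuinely interesting documents, which is exactly the trade-off the abstract's experiments quantify.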
ISBN (print): 354046302X
Computer technology enables the creation of detailed documentation about the processes that create or affect entities (data, objects, etc.). Such documentation of the past can be used to answer various kinds of questions regarding the processes that led to the creation or modification of a particular entity. The answers to such questions are known as an entity's provenance. In this paper, we derive a number of principles for documenting the past, grounded in work from philosophy and history, which allow provenance questions to be answered within a computational context. These principles lead us to argue that an interaction-based model is particularly suited to representing high-quality documentation of the past.
ISBN (print): 0780389409
This paper describes the CDA authoring tool (CDA Studio), which can be used to develop CDA (Clinical Document Architecture) documents in clinical environments. It provides an easy-to-use interface for editing CDA templates and connecting to legacy information systems. In this paper, we describe the architecture and functions of CDA Studio and its three modules: the designer, the mapper, and the generator. The designer provides functions to display the CDA schema in tree format and generates a sample CDA document. The main role of the mapper is to map between CDA elements and the fields of legacy information tables. The generator produces CDA documents for local clinical environments by integrating real data from the existing information system.
ISBN (print): 0780314662
About three years ago I was in the fortunate position (reclining slightly) of finally finishing a long and prosaic user's guide written for a complicated software program, when out of the blue the lead programmer came around to my office and asked, 'Are you done with the help text yet?' Rebuffed, and feeling like I had just written War and Peace, I responded, 'No, I thought you were going to do that.' But before those words had even dried on my lips, I knew that I had been negligent, and I was now destined to rewrite the user's guide and publish War and Peace: The Next Generation of help text. These days I have been exploring new approaches, and this paper describes how a new concept, Object-Oriented Writing (OOW), is helping me reuse more information as well as helping me provide easier-to-use information for my customers.
ISBN (print): 081867119X
A temporal event analysis approach to program understanding is described. Program understanding is viewed as a sequence of episodes in which the programmer concludes that an informal event occurs that corresponds to some part of the code. This can be viewed as accepting that the code is an adequate definition of the meaning of the informal event. Often, such a definition is contingent upon working hypotheses that describe other informal program properties that should be verified in order to confirm the validity of the understanding process. Verification of working hypotheses may depend on the formulation of additional definitions or working hypotheses. The understanding process can be assisted through the use of a documentation language for describing events and hypotheses, and a hypothesis verification tool. This paper describes a temporal event language in which hypotheses are formulated in terms of expected event sequences. A hypothesis verification tool was built, and experimentation was carried out on a set of programs. The tool was found to be very useful in understanding the detailed, control-oriented aspects of a program. Program faults were discovered in every program that was analyzed, indicating that the approach facilitates a deep level of understanding.
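A hypothesis phrased as an expected event sequence can be checked with a simple subsequence test: the hypothesis holds if the expected events occur in the trace in order, possibly with other events interleaved. The event names and trace below are illustrative assumptions, not the paper's actual language or tool.

```python
def hypothesis_holds(trace, expected_sequence):
    """Return True if the expected events occur in the trace in order,
    possibly interleaved with other events (a subsequence check)."""
    it = iter(trace)
    # `event in it` advances the iterator, so order is enforced.
    return all(event in it for event in expected_sequence)

trace = ["init", "open_file", "read_block", "read_block", "close_file", "exit"]
print(hypothesis_holds(trace, ["open_file", "read_block", "close_file"]))  # True
print(hypothesis_holds(trace, ["close_file", "open_file"]))                # False
```

A real temporal event language would add richer operators (repetition, alternatives, negation), but ordered occurrence is the core check such a verification tool performs against a program's event trace.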