ISBN: (print edition) 0769519865
Based on advances in cognitive informatics and related fields, this paper attempts to develop a layered reference model of the brain that explains the functional mechanisms and cognitive processes of natural intelligence. A variety of life functions and cognitive processes have been identified in cognitive informatics, cognitive science, neuropsychology, and neurophilosophy. To formally and rigorously describe a comprehensive and coherent set of mental processes and their relationships, an integrated reference model of the brain is established, encompassing 37 cognitive processes at six layers, known from the bottom up as the sensation, memory, perception, action, meta-cognitive, and higher cognitive layers. The reference model may be applied to explain a wide range of physiological, psychological, and cognitive phenomena in cognitive informatics, particularly the relationships between inherited and acquired life functions, as well as between subconscious and conscious cognitive processes.
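As a reading aid, a minimal Python sketch of how the six-layer structure named in the abstract could be represented. The layer names follow the abstract; the example processes and all identifiers are illustrative assumptions, not the paper's formal model.

```python
from enum import IntEnum


class BrainLayer(IntEnum):
    """Six layers of the reference model, ordered bottom-up as in the abstract."""
    SENSATION = 1
    MEMORY = 2
    PERCEPTION = 3
    ACTION = 4
    META_COGNITIVE = 5
    HIGHER_COGNITIVE = 6


# Illustrative only: the abstract reports 37 cognitive processes across the six
# layers but does not enumerate them here, so this mapping holds hypothetical examples.
example_processes = {
    BrainLayer.SENSATION: ["vision", "audition"],
    BrainLayer.HIGHER_COGNITIVE: ["problem solving", "decision making"],
}

for layer in BrainLayer:
    names = ", ".join(example_processes.get(layer, [])) or "..."
    print(f"Layer {layer.value} ({layer.name.lower()}): {names}")
```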
Efficient star query processing is crucial for a performant data warehouse (DW) implementation, and much work is available on physical optimization (e.g., indexing and schema design) and logical optimization (e.g., pre-aggregated materialized views with query rewriting). One important step in the query processing phase, however, is still a bottleneck: the residual join of results from the fact table with the dimension tables, in combination with grouping and aggregation. This phase typically consumes between 50% and 80% of the overall processing time. In typical DW scenarios, pre-grouping methods have only a limited effect, as the grouping is usually specified on the hierarchy levels of the dimension tables and not on the fact table itself. We suggest a combination of hierarchical clustering and pre-grouping, as implemented in the relational DBMS Transbase. Exploiting hierarchy semantics for the pre-grouping of fact table result tuples is several times faster than conventional query processing, because hierarchical pre-grouping significantly reduces the number of join operations. With this method, even queries covering a large part of the fact table can be executed within a time span acceptable for interactive query processing.
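The following Python sketch illustrates the general idea of pre-grouping fact-table result tuples on a hierarchy level before the residual join with a dimension table. The table contents, the surrogate-key encoding, and all names are assumptions chosen for illustration, not Transbase internals.

```python
from collections import defaultdict

# Hypothetical fact tuples: (time_key, amount). In a hierarchically clustered
# schema the surrogate key encodes the dimension hierarchy, so a prefix of the
# key identifies the grouping level (here: the first component plays the role of the year).
fact_rows = [((2023, 1, 5), 10.0), ((2023, 2, 7), 4.0), ((2024, 1, 3), 7.5)]

# Pre-group on the fact side using hierarchy semantics (group by the year prefix)
# BEFORE joining with the dimension table.
pre_grouped = defaultdict(float)
for key, amount in fact_rows:
    pre_grouped[key[0]] += amount

# The residual join now touches one row per group instead of one per fact tuple.
dimension = {2023: "FY 2023", 2024: "FY 2024"}   # assumed year_key -> label mapping
result = {dimension[year]: total for year, total in pre_grouped.items()}
print(result)   # {'FY 2023': 14.0, 'FY 2024': 7.5}
```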
The Level 1 Muon Trigger subsystem for BTeV will be implemented using the same architectural building blocks as the BTeV Level 1 Pixel Trigger: pipelined field programmable gate arrays feeding a farm of dedicated processing elements. The muon trigger algorithm identifies candidate tracks and is sensitive to the muon charge (sign); candidate dimuon events are identified by complementary-charge track pairs. To ensure that the trigger operates effectively, the trigger development team is actively collaborating in an independent multi-university research program for reliable, self-aware, fault-adaptive behavior in real-time embedded systems (RTES). Key elements of the architecture, algorithm, performance, and engineered reliability are presented.
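A minimal Python sketch of the pairing step described above, selecting dimuon candidates from complementary-charge track pairs. The track tuples, the momentum cut, and the function name are illustrative assumptions, not the BTeV trigger implementation.

```python
from itertools import combinations

# Hypothetical candidate tracks: (track_id, charge, momentum_GeV).
tracks = [(1, +1, 4.2), (2, -1, 3.8), (3, +1, 5.1)]

def dimuon_candidates(tracks, min_momentum=3.0):
    """Pair tracks of complementary (opposite) charge, as described in the abstract.

    The momentum cut is an illustrative assumption, not part of the BTeV algorithm.
    """
    pairs = []
    for (id_a, q_a, p_a), (id_b, q_b, p_b) in combinations(tracks, 2):
        if q_a * q_b < 0 and min(p_a, p_b) >= min_momentum:
            pairs.append((id_a, id_b))
    return pairs

print(dimuon_candidates(tracks))   # [(1, 2), (2, 3)]
```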
UML has become the de facto standard for object-oriented modelling. Currently, UML comprises several different notations with no formal semantics attached to the individual diagrams or their integration, thus preventi...
This paper focuses on credit card fraud in Multimedia Products, which are soft-products. By soft-products, we mean intangible products that can be used and consumed without having them shipped physically, such as soft...
System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.
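As an illustration of the kind of derivation described, the following Python sketch maps use-case scenarios (e.g., individual paths through sequence diagrams with OCL-like preconditions) to functional test requirements. The data structures and field names are assumptions for illustration, not the paper's methodology.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseScenario:
    """A single path through a use case (e.g., one sequence diagram variant)."""
    use_case: str
    name: str
    preconditions: list = field(default_factory=list)  # OCL-like constraints, as strings
    steps: list = field(default_factory=list)

def derive_test_requirements(scenarios):
    """Map each scenario to a functional system test requirement (illustrative)."""
    return [
        {
            "id": f"TR-{i + 1}",
            "covers": f"{s.use_case}/{s.name}",
            "setup": s.preconditions,
            "expected_interactions": s.steps,
        }
        for i, s in enumerate(scenarios)
    ]

withdraw_ok = UseCaseScenario("Withdraw", "sufficient balance",
                              ["account.balance >= amount"],
                              ["validate card", "debit account", "dispense cash"])
print(derive_test_requirements([withdraw_ok]))
```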
ISBN: (print edition) 1581135564
We present here an improved strategy to devise optimal integration test orders in object-oriented systems. Our goal is to minimize the complexity of stubbing during integration testing, as this has been shown to be a major source of expenditure. Our strategy is based on the combined use of inter-class coupling measurement and genetic algorithms: the former is used to assess the complexity of stubs, and the latter is used to minimize complex cost functions based on coupling measurement. Using a precisely defined procedure, we investigate this approach in a case study involving a real system. Results are very encouraging, as the approach clearly helps in obtaining systematic and optimal results.
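A small Python sketch of the combined idea: a coupling-based stub cost over class integration orders, minimized by a toy genetic algorithm. The coupling values, the dependency direction, and the GA operators are illustrative assumptions, not the paper's actual measures or parameters.

```python
import random

# Hypothetical dependencies: (client, supplier) -> coupling-based stub cost.
# If the client is integrated before its supplier, the supplier must be stubbed.
classes = ["A", "B", "C", "D"]
coupling = {("A", "B"): 3, ("B", "C"): 2, ("C", "A"): 4, ("D", "B"): 1}

def stubbing_cost(order):
    """Sum the cost of every dependency whose supplier comes later in the order."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(w for (client, supplier), w in coupling.items()
               if pos[client] < pos[supplier])

def genetic_search(generations=200, pop_size=20, seed=0):
    """Tiny GA over permutations: two-way tournament selection plus swap mutation
    (a sketch, not the paper's exact operators)."""
    rng = random.Random(seed)
    pop = [rng.sample(classes, len(classes)) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                 # tournament of two parents
            child = list(min(a, b, key=stubbing_cost))
            i, j = rng.sample(range(len(child)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=stubbing_cost)

best = genetic_search()
print(best, stubbing_cost(best))
```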
ISBN: (print edition) 1581135564
Software engineering is not only a technical discipline of its own. It is also a problem domain where technologies coming from other disciplines are relevant and can play an important role. One important example is knowledge engineering, a term that I use in the broad sense to encompass artificial intelligence, computational intelligence, knowledge bases, data mining, and machine learning. I see a number of typical software development issues that can benefit from these disciplines and, for the sake of clarifying the discussion, I have divided them into four categories: (1) planning, monitoring, and quality control of projects; (2) quality and process improvement of software organizations; (3) decision-making support; (4) automation. First, the planning, monitoring, and quality control of software development is typically based, unless it is entirely ad hoc, on past project data and/or expert opinion. As discussed below, several techniques coming from machine learning, computational intelligence, and knowledge-based systems have been shown to be useful in this context. Second, software organizations are inherently learning organizations that need to improve, based on experience and project feedback, the way they develop software in changing and volatile environments. Large amounts of data, numerous documents, and other forms of information are typically gathered on projects. The question then becomes how to enable the intelligent storage and use of such information in future projects. Third, during the course of a project, software engineers and managers have to face important, complex decisions. They need decision models to support them, especially when project pressure is intense. Techniques originally developed for building risk models based on expert elicitation or optimization heuristics can play a key role in such a context. The last category of applications concerns automation. Many automation problems, such as test data generation, can be formulated as const