ISBN (print): 9781624100932
Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems; and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamically allocating processor hardware to perform new operations or to maintain functionality in the presence of malfunctions or hardware faults. Advances in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Higher computation speeds and accuracy are envisioned, together with tremendous hardware flexibility, to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.
ISBN (print): 0889863415
Clusters and distributed systems offer fault tolerance and high performance. When all computers are up and running, we would like the load to be evenly distributed among the computers. When a computer breaks down, the load on this computer must be redistributed to the other computers in the cluster. Most cluster systems are designed to tolerate a single fault, and one can thus distinguish between two modes of operation: normal operation, when all computers are up and running, and worst-case operation, when one computer is down. The performance during these two modes of operation is determined by the way work is allocated to the computers in the cluster or distributed system. It turns out that the same allocation cannot, in general, achieve both optimal normal-case and optimal worst-case performance, i.e. there is a trade-off. In this paper we put an optimal upper bound on the loss of normal-case performance when optimizing for worst-case performance, and an optimal upper bound on the loss of worst-case performance when optimizing for normal-case performance. We also provide a heuristic algorithm for making engineering trade-offs between worst-case and normal-case performance.
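The trade-off can be made concrete with a small sketch. The following is not the paper's heuristic; it is a hypothetical greedy placement in which each work item gets a primary and a backup node, a failed node's items move to their backups, and a weight alpha in [0, 1] steers the placement between normal-case balance (alpha = 1) and worst-case balance (alpha = 0). All names and the scoring rule are illustrative assumptions.

    # Hypothetical greedy placement illustrating the normal-case vs. worst-case trade-off.
    def greedy_allocate(weights, n_nodes, alpha=0.5):
        load = [0.0] * n_nodes                                  # normal-case load per node
        moved = [[0.0] * n_nodes for _ in range(n_nodes)]       # moved[f][i]: weight shifted to i if f fails
        placement = []

        def worst_case():
            # Heaviest surviving node over all single-node failures.
            return max(
                max(load[i] + moved[f][i] for i in range(n_nodes) if i != f)
                for f in range(n_nodes)
            )

        for w in sorted(weights, reverse=True):                 # place heavy items first
            best = None
            for p in range(n_nodes):
                for b in range(n_nodes):
                    if p == b:
                        continue
                    load[p] += w
                    moved[p][b] += w
                    score = alpha * max(load) + (1 - alpha) * worst_case()
                    if best is None or score < best[0]:
                        best = (score, p, b)
                    load[p] -= w
                    moved[p][b] -= w
            _, p, b = best
            load[p] += w
            moved[p][b] += w
            placement.append((w, p, b))
        return placement, max(load), worst_case()

For example, greedy_allocate([5, 4, 3, 3, 2, 1], n_nodes=3, alpha=1.0) optimizes only the normal-case maximum load, alpha=0.0 optimizes only the post-failure maximum, and intermediate values expose the kind of trade-off the paper bounds.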
In the 21st century, industrial automation will benefit greatly from advances in electronics, information systems, and process technology. However, these technological advances are still separate islands of aut...
ISBN (print): 0769520332
E-Business requires cooperation and open, standard-based information/knowledge exchange between all participants of the global business information environment (e-Business environment) in real time. The scientific direction of knowledge logistics addresses this problem. This direction is closely related to "Web intelligence", a new direction for research and development in the Artificial Intelligence and Information Technology areas. The paper describes an approach addressing the knowledge logistics problem with regard to Web intelligence. The paper discusses the main principles underlying the approach, describes the system "KSNet" based on the approach, and presents an application of the approach to a case study inspired by a coalition-based Binni scenario.
ISBN (print): 0889863415
The approach described in this paper addresses fast prototyping of complex media processing systems. The method provides a high-level means to emulate the streaming subsystems of complex consumer electronics systems. This includes connection management for high-throughput signal processing elements, data buffering and routing. Moreover, it offers a high-level abstraction for the configuration and control of such streaming subsystems. As a result, the essential characteristics of stream processing can be modeled and analyzed, while at the same time, complex middleware and application software can be developed and tested, which will be independent of the underlying streaming technology. Prototyping is sped up by using standard implementation technology such as PCs with off-the-shelf PCI cards, Ethernet and TCP/IP networking. All low-level media processing components have adequate software interfaces that can remain the same when implemented in real embedded products.
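To make this kind of abstraction concrete, here is a minimal Python sketch (not the authors' implementation): streaming elements expose a uniform interface and are wired together by a separate connection manager, with in-process queues standing in for the Ethernet/TCP/IP transport. All class and method names are assumptions made for illustration.

    import queue
    import threading

    class ProcessingElement:
        # A streaming component with a uniform interface; subclasses override process().
        def __init__(self, name):
            self.name = name
            self.inbox = queue.Queue(maxsize=64)     # bounded buffer models data buffering
            self.outputs = []                        # downstream inboxes, wired by the manager

        def process(self, item):
            return item                              # identity transform by default

        def run(self):
            while True:
                item = self.inbox.get()
                if item is None:                     # termination marker propagates downstream
                    for out in self.outputs:
                        out.put(None)
                    return
                result = self.process(item)
                for out in self.outputs:             # routing to every connected consumer
                    out.put(result)

    class ConnectionManager:
        # Owns topology, configuration and lifecycle, independently of the elements.
        def connect(self, src, dst):
            src.outputs.append(dst.inbox)

        def start(self, *elements):
            for e in elements:
                threading.Thread(target=e.run, daemon=True).start()

Swapping the in-process queues for TCP sockets would leave the element interfaces unchanged, which is the decoupling between the streaming technology and the application software that the abstract emphasizes.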
"Smart" sensors with embedded microprocessors and wireless communication links have the potential to change fundamentally the way civil infrastructure systems are monitored and maintained. Indeed, a 2002 Nat...
A critical problem in wide-issue superscalar processors is the limit on cycle time imposed by the central register file and operand bypass network. Here, a distributed register file architecture that employs fully distributed functional unit clusters is presented. It utilizes a local register mapping table and a dedicated register transfer network to support distributed register operations. In addition, an eager transfer mechanism is developed to reduce the penalties caused by an incomplete operand transport interconnection. Distributed register files can be employed to reduce operand access time by a factor of two, with associated average IPC penalties of 14% and 21% on 4- and 8-way superscalar architectures across a broad range of symbolic, scientific, and multimedia applications. The IPC penalties are only 3% and 10% for SpecINT2000 applications.
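As a rough illustration of the local-mapping idea (a software model only, with assumed names and an assumed fixed transfer penalty, not the paper's microarchitecture), a cluster first consults its own mapping table and only falls back to a transfer over the register network when the operand lives in another cluster:

    class Cluster:
        # Toy model of one functional-unit cluster with a local register mapping table.
        def __init__(self, cluster_id, transfer_penalty=2):
            self.cluster_id = cluster_id
            self.local_map = {}                      # architectural register -> locally held value
            self.transfer_penalty = transfer_penalty

        def write_result(self, reg, value):
            self.local_map[reg] = value              # results are mapped locally first

        def read_operand(self, reg, remote_clusters):
            # Returns (value, cycles): a local hit costs 1 cycle, a remote operand
            # pays the register-transfer-network penalty and is then cached locally.
            if reg in self.local_map:
                return self.local_map[reg], 1
            for other in remote_clusters:
                if reg in other.local_map:
                    value = other.local_map[reg]
                    self.local_map[reg] = value
                    return value, 1 + self.transfer_penalty
            raise KeyError(f"register {reg} has no producer yet")

An eager transfer mechanism, as described in the abstract, would push a result toward its likely consumers at write time instead of waiting for such a demand miss; the sketch omits that to stay short.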
ISBN (print): 3540401245
The proceedings contain 35 papers. The special focus in this conference is on General Session, Web Security and Web Authoring and Design. The topics include: Toward web intelligence; artificial intelligence techniques in retrieval of visual data semantic information; a framework to integrate business goals in web usage mining; clustered organized conceptual queries in the internet using fuzzy interrelations; evaluating the informative quality of web sites by fuzzy computing with words; mining association rules using fuzzy inference on web data; using case-based reasoning to improve information retrieval in knowledge management systems; a soft computing approach; content-based methodology for anomaly detection on the web; designing reliable web security systems using rule-based systems approach; secure intelligent agents based on formal description techniques; a natural language interface for information retrieval on semantic web documents; conceptual user tracking; coping with web knowledge; formalization of web design patterns using ontologies; a multi-agent system for web document authoring; on the vague modelling of web page characteristics regarding usability; semantic interoperability based on ontology mapping in distributed collaborative design environment; the use of data mining techniques to classify styles and types of web-based information; building topic profiles based on expert profile aggregation; practical evaluation of textual fuzzy similarity as a tool for information retrieval; a machine learning based evaluation of a negotiation between agents involving fuzzy counter-offers and feature selection algorithms to improve documents classification performance.
This paper presents a distributed and scalable admission control scheme to provide end-to-end statistical QoS guarantees in Differentiated Services (DiffServ) networks. The basic idea of the scheme is that the ingress routers make admission decisions according to network status information obtained by sending probing packets from the ingress to the egress of the network. Each router passively monitors the arriving traffic and marks the probing packets with its network status. The performance of our scheme is evaluated with a variety of traffic models, QoS metrics, and network topologies. The simulation results show that the proposed scheme can accurately control the admissible region and effectively improve the utilization of network resources.
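The decision logic can be sketched as follows (an illustrative Python model, not the authors' implementation; the load metric, threshold, and field names are assumptions): the ingress sends a few probe packets toward the egress, every router on the path stamps them with its locally monitored status, and the flow is admitted only if the most loaded hop can still absorb it.

    from dataclasses import dataclass, field

    @dataclass
    class Probe:
        marks: list = field(default_factory=list)    # per-hop status reported in the probe

    @dataclass
    class Router:
        name: str
        load: float                                  # passively monitored utilization in [0, 1]

        def forward(self, probe):
            probe.marks.append(self.load)            # mark the probe with local network status
            return probe

    def admit(path, requested_share, n_probes=5, threshold=0.95):
        # Ingress-side decision for one flow request along an ingress-to-egress path.
        worst = 0.0
        for _ in range(n_probes):
            probe = Probe()
            for router in path:                      # probe travels ingress -> egress
                router.forward(probe)
            worst = max(worst, max(probe.marks))     # egress echoes the marks back
        return worst + requested_share <= threshold

For instance, admit([Router('r1', 0.40), Router('r2', 0.70)], requested_share=0.10) accepts the flow, while a 0.30 share on the same path would be rejected under the assumed 0.95 threshold.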
What will an archive of language resources look like in the future? It is to be expected that developments in computer technology will have an impact on the nature of the language resources that will be created in the future. Projecting current trends into the future suggests that there will be more multimedia and multilingual resources. It is also likely that increasing internet bandwidth will lead to a more distributed architecture whereby resources are accessed remotely rather than held locally. This will also facilitate the development of virtual corpora, whereby temporary, ad hoc collections of texts can be assembled for a specific analysis. Increasingly it will become the norm to extract information from resources held in the archive, rather than downloading a corpus, installing software to analyse it, and getting the two to work together. It can therefore be predicted that although archives will continue to have an important role in the preservation of resources, other roles will develop or grow in importance, as archives adapt to allow the creation of virtual corpora and online access to resources, and become centres of resource creation expertise, metadata validation and resource discovery. This paper discusses the new directions envisaged by the Oxford Text Archive (OTA), and in particular its current initiatives to improve the service provided to the community of academic linguistics researchers in the UK.