An electronic neurocomputer, Neuro Turbo, has been implemented, using the recently developed general-purpose, 24-bit floating-point digital signal processor (DSP), MB86220. The Neuro Turbo is a MIMD (multiple-instruction-multiple-data) type parallel processor with four ring-coupled DSPs and four dual-port memories. The performance of the Neuro Turbo has been evaluated for several kinds of three-layer neural network. An operational speed of 2 MCPS for the learning process and 11 MCPS for the rewarding process has been achieved.
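The MCPS figures count synaptic connections processed per second. As a minimal sketch of how such a rate is computed for a fully connected three-layer network, the network size and timing below are illustrative assumptions, not figures from the paper:

```python
def connections(n_in, n_hidden, n_out):
    """Connection count of a fully connected three-layer (input-hidden-output) network."""
    return n_in * n_hidden + n_hidden * n_out

def mcps(n_in, n_hidden, n_out, seconds_per_pass):
    """Million connections processed per second for one pass over the network."""
    return connections(n_in, n_hidden, n_out) / seconds_per_pass / 1e6

# Illustrative numbers only (not taken from the paper): a 256-64-10 network
# evaluated in 150 microseconds corresponds to roughly 113 MCPS.
print(round(mcps(256, 64, 10, 150e-6), 1))
```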
As an innovative distributed computing technique for sharing memory resources over a high-speed network, RAM Grid exploits distributed free nodes and provides remote memory for nodes that are short of memory. One RAM Grid system, named DRACO, tries to provide cooperative caching to improve the performance of a user node that has heavy disk I/O but lacks local memory. However, the performance of DRACO is constrained by the network communication cost. In order to hide the latency of remote memory access and improve caching performance, we propose push-based prefetching, which enables the caching providers to push potentially useful memory pages to the user nodes. Specifically, each caching provider employs sequential pattern mining techniques, adapted to the characteristics of memory page access sequences, to locate useful memory pages for prefetching. We have verified the effectiveness of the proposed method through system analysis and trace-driven simulations.
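As a rough sketch of how a caching provider might mine sequential patterns from an observed page-access trace and decide which pages to push, the function names (`mine_patterns`, `pages_to_push`) and the order-2 prefix model below are illustrative assumptions, not DRACO's actual implementation:

```python
from collections import defaultdict

def mine_patterns(access_trace, order=2, min_support=3):
    """Count how often each length-`order` page sequence is followed by a given
    page; keep only prefixes observed at least `min_support` times."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(access_trace) - order):
        prefix = tuple(access_trace[i:i + order])
        counts[prefix][access_trace[i + order]] += 1
    return {
        prefix: max(followers, key=followers.get)
        for prefix, followers in counts.items()
        if sum(followers.values()) >= min_support
    }

def pages_to_push(recent_accesses, patterns, order=2):
    """Given the user node's most recent accesses, return the page (if any)
    the provider should push ahead of the next request."""
    predicted = patterns.get(tuple(recent_accesses[-order:]))
    return [predicted] if predicted is not None else []

# Example: a provider observing a mostly cyclic scan of pages 1-4
trace = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 7, 8]
patterns = mine_patterns(trace)
print(pages_to_push([3, 4], patterns))  # pushes page 1, the most likely successor
```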
Machine learning is currently shifting from a centralized paradigm to decentralized ones in which machine learning models are trained collaboratively. In fully decentralized learning algorithms, data remains where it was produced, models are trained locally, and only model parameters are exchanged among participating entities along an arbitrary network topology and aggregated over time until convergence. Not only does this limit the cost of exchanging data, it also exploits the growing capabilities of users' devices while mitigating privacy and confidentiality concerns. Such systems are significantly challenged by a potentially high level of heterogeneity, both at the system level, as participants may have differing capabilities (e.g., computing power, memory, and network connectivity), and at the data level (a.k.a. non-IIDness). The adoption of fully decentralized learning systems requires designing frugal systems that limit communication and energy while still ensuring convergence. Several avenues are promising, from adapting the network topology to compensate for data heterogeneity to exploiting the high levels of redundancy, both in data and in computations, of ML algorithms to limit data and model sharing in such systems.
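A minimal sketch of one round of fully decentralized training, in which each node takes a local gradient step on its own data and then averages parameters only with its neighbors along the topology; the function names and the toy quadratic objectives are illustrative assumptions, not a specific algorithm from the text:

```python
import numpy as np

def local_step(params, grad_fn, lr=0.1):
    """One local SGD step on the node's own (private) data."""
    return params - lr * grad_fn(params)

def gossip_round(params_by_node, neighbors, grad_fns, lr=0.1):
    """Each node trains locally, then averages parameters with its neighbors
    along the (arbitrary) communication topology."""
    trained = {n: local_step(p, grad_fns[n], lr) for n, p in params_by_node.items()}
    mixed = {}
    for n, p in trained.items():
        group = [p] + [trained[m] for m in neighbors[n]]
        mixed[n] = np.mean(group, axis=0)  # only parameters are exchanged, never raw data
    return mixed

# Toy example: 3 nodes, each minimizing a different quadratic (p - t)^2
targets = {0: 1.0, 1: 2.0, 2: 3.0}
grad_fns = {n: (lambda p, t=t: 2 * (p - t)) for n, t in targets.items()}
params = {n: np.array(0.0) for n in range(3)}
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(50):
    params = gossip_round(params, topology, grad_fns)
print({n: round(float(p), 3) for n, p in params.items()})  # all near the consensus value 2.0
```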
This paper attempts to consolidate the theoretical foundation of relaxation labeling processes, explore the connections between the relaxation labeling model and the Hopfield associative memory model, and seek their unification. We start by defining a new labeling assignment space and then formulate the relaxation labeling process as a dynamic system of Lyapunov type, equipped with a well-defined energy function and described by a naturally fitted updating rule. We present a consistency condition and show that each ω-limit point of the dynamic system gives a consistent labeling. We finally reconcile multilabel and one-label relaxation labeling and reveal an interesting result: in the one-label case, the newly formulated relaxation labeling model reduces to the Hopfield associative memory model.
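For orientation, a commonly used relaxation-labeling energy and support function are sketched below; the paper's exact assignment space, energy function, and updating rule may differ from this standard average-local-consistency form.

```latex
% p_i(\lambda): weight of label \lambda at object i; r_{ij}(\lambda,\mu): compatibility coefficient.
% Average-local-consistency energy (a Lyapunov function for standard relaxation labeling):
E(p) = -\sum_{i}\sum_{\lambda} p_i(\lambda)\, q_i(\lambda),
\qquad
q_i(\lambda) = \sum_{j}\sum_{\mu} r_{ij}(\lambda,\mu)\, p_j(\mu).
% With exactly one label per object, encoded as states s_i, and symmetric compatibilities
% w_{ij}, E reduces to the Hopfield energy  E = -\tfrac{1}{2}\sum_{i,j} w_{ij}\, s_i s_j.
```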
The Utah Retrieval System Architecture (URSA) was initially developed in 1981 as an alternative to central-processor-based information retrieval systems. It combined distributed processing and a windowed user interface with a hardware-based search server and the use of document surrogates such as partially inverted files. The authors have now started the development and testing of a medium-scale (about 10 gigabyte) parallel backend search server to demonstrate its operation and to gather data on the use of such a backend processor in actual operation, including information about query complexity and arrival rates. This searcher, based on a hardware-augmented RISC processor, builds on their experience developing and operating the custom VLSI FSA-based search engine. The use of a programmable processor allows the easy implementation of complex search patterns, such as numeric range matching, while the special hardware augmentation provides considerably better performance than would be available from a standard RISC processor server.
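To illustrate the kind of complex pattern a programmable searcher handles easily, here is a rough sketch of evaluating a query that mixes term matching with numeric range matching over textual document surrogates; the `matches` function and the "field:value" surrogate format are assumptions for illustration, not URSA's interface:

```python
import re

def matches(surrogate_text, terms=(), numeric_ranges=()):
    """Evaluate a compound query against one document surrogate: every term
    must appear, and for each (field, lo, hi) range some 'field:<number>'
    occurrence must fall inside [lo, hi]."""
    text = surrogate_text.lower()
    if not all(t.lower() in text for t in terms):
        return False
    for field, lo, hi in numeric_ranges:
        values = [float(v) for v in re.findall(rf"{field}:(\d+(?:\.\d+)?)", text)]
        if not any(lo <= v <= hi for v in values):
            return False
    return True

# Example: retrieve surrogates mentioning "titanium" with year between 1985 and 1990
docs = [
    "surrogate 17 | titanium alloy fatigue | year:1987",
    "surrogate 42 | aluminium alloy | year:1988",
]
hits = [d for d in docs if matches(d, terms=["titanium"], numeric_ranges=[("year", 1985, 1990)])]
print(hits)  # only surrogate 17 qualifies
```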
To support future command and control (C2) concepts, military telecommunications and information processing must change significantly. Highly automated and integrated communications and processing systems will be required, which must be able to manage and reconstitute themselves at the network level and the internetwork level if the survivable C2 architectures required by the military are to be developed. Emerging technologies in automated communication-resource management, distributed processing, and artificial intelligence (AI) will provide the foundation for building such systems. With these technologies, a battlefield information-management environment can be developed that will readily support distributed, self-managing and self-reconstituting, and, hence, survivable, C2. The series of papers presented in this MILCOM '87 session, entitled "Distributed Systems", was invited to address the theme of survivable command, control and communication (C3). Although one could anticipate a myriad of papers and topics related to such a broad theme, I have selected a series of papers that focus on advanced research and development of technologies related to a conceptual model that I developed as a result of several years of involvement with distributed processing and communications research for tactical and strategic survivable C3. This model and its relevance to survivable C3 are presented in my paper, which introduces the session. This model provides a framework for the six papers that will be presented, which are shown below. 1. "Multi-Destination Protocols for Tactical Radio Networks" by Edgar Caples, Rockwell International. 2.
ISBN (digital): 9781665497473
ISBN (print): 9781665497480
The rapid growth of Internet of Things (IoT) and autonomous systems has led to the deployment of edge devices close to the sensing data source for low-latency computation.
The authors study multimedia electronic mail together with the issues of concern surrounding it and give an overview of a user-friendly multimedia electronic mail system. Based on this study, the authors present a multimedia electronic mail system that allows the originator to write a scenario to coordinate the parts of the message it sends and to adapt the restitution of the message to the receiver's messaging system.
The Midcourse Space Experiment (MSX) is the premier space technology experiment of the Ballistic Missile Defense Organization (BMDO). The primary objective of the experiment is to collect and analyze data on target and background phenomenology using three multi-spectral (ultraviolet, visible, infrared) imaging sensor subsystems. The MSX program made an early investment in a well-organized Data Management System to ensure full exploitation of the unique data collected during the life of the program. The requirements-driven design features a distributed processing concept and a dual data flow path to provide quality data to researchers as quickly as possible while ensuring traceability in understanding the data and sensor characteristics. Data quality and traceability goals are achieved through the application of a formal, disciplined data processing development and certification methodology that includes an in-depth technical evaluation and approval process as well as careful configuration management of the software and algorithms used to verify and calibrate the data. The MSX data is carefully catalogued and archived at the BMDO Backgrounds Data Center (BDC) to provide a legacy to future programs. The MSX Data Release Policy determines how the non-MSX community can access the available data products.
We have previously proposed a graph analysis method that shortens analysis time by reconstructing a web graph. In our proposed method, the web graph is clustered and then reconstructed into a Compression Graph and Cluster Graphs so that the resulting graphs can be processed in a parallel and distributed fashion. The Compression Graph represents the relationships between clusters, whereas each Cluster Graph contains the nodes belonging to one cluster. When analyzing the Compression Graph and the Cluster Graphs, they can be processed in parallel because there is no dependency between them. Extending our previous study, which considered only a small web graph, in the present paper we evaluate the performance of the proposed method on a large-scale web graph through experiments.
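A minimal sketch of the reconstruction step, assuming cluster assignments are already available from the clustering stage; the function name and the toy graph are illustrative, not the authors' implementation:

```python
from collections import defaultdict

def reconstruct(edges, cluster_of):
    """Split a web graph into per-cluster 'Cluster Graphs' (edges inside a cluster)
    and one 'Compression Graph' (edges between clusters, with each cluster
    collapsed into a single node)."""
    cluster_graphs = defaultdict(list)
    compression_graph = set()
    for src, dst in edges:
        cs, cd = cluster_of[src], cluster_of[dst]
        if cs == cd:
            cluster_graphs[cs].append((src, dst))
        else:
            compression_graph.add((cs, cd))
    return dict(cluster_graphs), compression_graph

# Toy web graph: pages 0-2 form cluster A, pages 3-4 form cluster B
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (2, 3)]
cluster_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
cluster_graphs, compression_graph = reconstruct(edges, cluster_of)
# The Compression Graph and each Cluster Graph share no edges, so an analysis
# can run on each of them in parallel.
print(compression_graph)        # {('A', 'B')}
print(sorted(cluster_graphs))   # ['A', 'B']
```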