Increasing the performance of application-specific processors by exploiting application-resident parallelism is often prohibited by costs, especially in the case of low-volume production. The flexibility of horizontal...
We present a necessary and sufficient condition for an arbitrary matrix A to be totally unimodular. The matrix A is interpreted as the adjacency matrix of a bipartite graph G(A). The total unimodularity of A correspon...
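To make the property concrete, here is a minimal brute-force check of total unimodularity: every square submatrix must have determinant in {-1, 0, +1}. This is only an illustration of the definition (exponential in the matrix size), not the bipartite-graph characterization the paper itself develops.

```python
from itertools import combinations

def det(M):
    # Integer determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def is_totally_unimodular(A):
    # Check every k-by-k submatrix for k = 1 .. min(m, n).
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if abs(det(sub)) > 1:
                    return False
    return True

# An interval (consecutive-ones) matrix is totally unimodular:
print(is_totally_unimodular([[1, 1, 0], [0, 1, 1], [0, 0, 1]]))  # True
# A submatrix with determinant 2 breaks the property:
print(is_totally_unimodular([[1, 1], [-1, 1]]))                  # False
```

The interest of the property, as the abstract indicates, is that it can be decided far more efficiently through the structure of the associated bipartite graph G(A) than by enumerating submatrices.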
ISBN: (Print) 9780897913195
Two major limitations concerning the design of cost-effective application-specific architectures are the recurrent costs of system-software development and hardware implementation, in particular VLSI implementation, for each application. The SCalable architecture Experiment (SCARCE) aims to provide a framework for application-specific processor design. The framework allows scaling of functionality, implementation complexity, and performance. The SCARCE framework consists, and will consist, of: an architecture framework defining the constraints for the design of application-specific architectures; tools for synthesizing architectures from an application or application area; VLSI cell libraries and tools for quick generation of application-specific processors; and a system-software platform which can be retargeted quickly to fit the application-specific architecture. This paper concentrates primarily on the architecture framework of SCARCE, but also briefly presents some software issues and outlines the process of generating VLSI processors.
Authors:
MENSH, D.R.; KITE, R.S.; DARBY, P.H.

Dennis Roy Mensh: is currently the task leader of the Interoperability Project with the MITRE Corporation in McLean, Va. He received his B.S. and M.S. degrees in applied physics from Loyola College in Baltimore, Md., and the American University in Washington, D.C. He has also completed his course work towards his Ph.D. degree in computer science, specializing in the fields of systems analysis and computer simulation. He has been employed by the Naval Surface Warfare Center, White Oak Laboratory, Silver Spring, Md., for 20 years in the areas of weapon system analysis and the development of weapon systems simulations. Since 1978 he has been involved in the development of tools and methodologies that can be applied to the solution of shipboard combat system/battle force system architecture and engineering problems. Mr. Mensh is a member of ASNE, MORS, IEEE, the U.S. Naval Institute, MAA, and the Sigma Xi Research Society.

Robert S. Kite: is a systems engineer with the Naval Warfare Systems Engineering Department of the MITRE Corporation in McLean, Va. Mr. Kite received his B.S. degree in electronic engineering from The Johns Hopkins University in Baltimore, Md. Mr. Kite retired from the Federal Communications Commission in 1979 and served as project manager of the J-12 Frequency Management Support Project for the Illinois Institute of Technology Research Institute in Annapolis, Md., before joining MITRE. Mr. Kite is presently a member of ASNE and the Military Operations Research Society, and an associate member of Sigma Xi.

Paul H. Darby: has worked in the field of interoperability, both in the development of interoperability concepts and systems, since joining the Department of the Navy in 1967. He was the Navy's program manager for the WestPacNorth TACS/TADS and IFFN systems. He is currently head of the Interoperability Branch, Warfare Systems Engineering Office, Space and Naval Warfare Systems Command. He holds a B.S. from the U.S. Naval Academy.
JCS Pub 1 defines interoperability as “The ability of systems, units or forces to provide services to and accept services from other systems, units or forces and to use the services so exchanged to enable them to operate effectively together.” With JCS Pub 1 as a foundation, interoperability of systems, units or forces can be factored into a set of components that can quantify interoperability. These components are: media, languages, standards, requirements, environment, procedures, and human factors. The concept described in this paper uses these components as an analysis tool to enable specific detailed analyses of the interoperability of BFC3 systems, units, or forces for the purpose of uncovering and resolving interoperability issues and problems in the U.S. Navy, Joint, and Allied arenas. Also, as a management tool, the components can help determine potential interoperability characteristics of future U.S. Navy BFC3 systems for compliance with battle force systems architectures. The approach selected for the quantification of interoperability was the development of a set of measures of performance (MOPs) and measures of effectiveness (MOEs). The MOPs/MOEs were integrated with a candidate set of components, which were used to partition the totality of interoperability into measurable entities. The methodology described employs basic truth table theory in conjunction with logic equations to evaluate the interoperability components in terms of MOPs that were aggregated to MOEs. It is believed that this concept, although elementary and based on fundamental principles, represents an operationally significant approach rather than a theoretical approach to the quantification of interoperability. The vehicle used as a means to measure the MOPs and MOEs was the Research Evaluation and Systems Analysis (RESA) computer modeling and simulation capability at the Naval Ocean Systems Center (NOSC), San Diego, Calif. Data for the measurements were collected during a Tactical I
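The truth-table aggregation of MOPs into an MOE can be sketched as follows. This is a minimal illustration of the general technique, assuming hypothetical MOP names and an all-components-must-hold logic equation; it is not the component set or the equations used in the RESA study.

```python
from itertools import product

# Hypothetical MOPs for one system-to-system link; names are
# illustrative, not taken from the paper.
MOPS = ["media_available", "language_compatible", "standard_followed"]

def moe(mop_values):
    """Logic equation aggregating boolean MOPs into one MOE:
    here, the link counts as interoperable only if every MOP holds."""
    return all(mop_values.values())

# Enumerate the full truth table for the logic equation above.
for combo in product([False, True], repeat=len(MOPS)):
    row = dict(zip(MOPS, combo))
    print(row, "->", moe(row))
```

In practice each MOP would itself be evaluated from measured data (as the paper does with RESA simulation output), and different MOEs would use different logic equations over overlapping MOP subsets.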
A methodology for the structural life assessment of a ship's structure is suggested. The methodology is based on probabilistic analysis using reliability concepts and the statistics of extremes. In this approach, the estimation of structural life expectancy is based on selected failure modes. All possible failure modes of the ship must be investigated and the most likely paths to structural failure identified. For the purpose of illustration two failure modes are considered in this study. They are plate plastic deformation and fatigue cracking. Structural life based on these two failure modes is determined for an example vessel. The methodology determines the probability of failure of the ship's structural components according to the identified failure modes as a function of time. The results can be interpreted as the cumulative probability distribution function (CDF) of structural life. Due to the unknown level of statistical correlation between failure modes, limits or bounds on the CDF of the structural life are established. The limits correspond to the extreme cases of fully correlated and independent failure modes. The CDFs of structural life are determined for two inspection strategies; namely, inspection every year and inspection every two years with a warranty inspection at the end of the first year. The meaning of the results for the case investigated in this study is that, for example, given an inspection strategy of two years and a desired life of 15 years, there is a 72% chance that the vessel will not experience enough partial damage in the failure modes identified to constitute reaching the "end of structural life" as defined.
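The two correlation extremes translate into simple bounds on the structural-life CDF. The sketch below assumes a series formulation (either failure mode ends structural life) with F1 and F2 the individual failure-mode CDFs evaluated at the same time t; the actual CDFs in the paper come from the reliability analysis, not from this fragment.

```python
def life_cdf_bounds(F1, F2):
    """Bounds on the system failure CDF at time t, given the CDFs of
    two failure modes (e.g. plate plastic deformation and fatigue
    cracking) at that time. Fully correlated modes give the lower
    bound; statistically independent modes give the upper bound."""
    lower = max(F1, F2)                     # fully correlated modes
    upper = 1.0 - (1.0 - F1) * (1.0 - F2)   # independent modes
    return lower, upper

# Illustrative (made-up) mode CDFs at one point in time:
lo, hi = life_cdf_bounds(0.20, 0.15)
print(lo, hi)  # 0.2 <= F_system <= 0.32
```

Any true level of correlation between the two modes yields a system CDF between these bounds, which is exactly why the paper reports limits on the CDF rather than a single curve.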
ISBN: (Print) 9780818619199
Traditionally, microcoded computers have been the ideal machines for implementing scalable architectures. These machines easily implement application-specific functionality in microcode, and they allow architecturally transparent variation of cost/performance by trading off application code, microcode, and hardware. In contrast, hardwired machines are intrinsically incapable of implementing scalability, because they implement only a single level of interpretation. Recent RISC designs have introduced architectural features which partly resolve the scalability issues. They implement architectural open-endedness to allow application-specific functionality to be added to the architecture (by means of coprocessors and special function units). Additionally, they define functions which, depending on application, cost, and performance, can be implemented in hardware or, by means of emulation, in software. While identical from an abstract point of view, scalability by means of microprogramming and by means of emulation on a hardwired machine are significantly different. This paper describes the emulation facility provided in SCARCE (SCalable architecture Experiment), a streamlined architecture specifically designed for a wide range of embedded applications requiring high performance. While architecturally transparent, this emulation facility operates with little overhead (8 cycles), adds three control registers, and is always interruptible. By increasing the hardware investment, the overhead could be decreased to 4 cycles per trap.
In this paper we present a project management support tool, named PMST, which is designed according to feedback-loop techniques. PMST consists of three components: the first component supports the management of a single project. The second component adapts the first component according to the results of terminated projects, so that the support of further projects is improved. Both components have been designed as feedback loops. The third component is a project database which interconnects several projects.