The implementation details of a Lisp environment built on top of a distributed operating system are presented. The system provides transparent distribution, protected large-grain persistent heaps, concurrency within each environment, and seamless sharing of Lisp data structures between separate environments.
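The abstract contains no code, but the persistent-heap idea it names can be pictured with a small sketch. The following Python toy is purely illustrative and not from the paper; the names (PersistentHeap, heap.db) are hypothetical. It shows the general notion of a protected persistent heap: stored values survive process restarts because the heap image is written to durable storage, and a lock serializes concurrent mutation within one environment.

# Illustrative sketch only: a toy "persistent heap" that survives restarts by
# pickling its contents to disk, with a lock for concurrency inside one
# environment. Names are hypothetical, not taken from the paper.
import pickle
import threading
from pathlib import Path

class PersistentHeap:
    def __init__(self, path="heap.db"):
        self._path = Path(path)
        self._lock = threading.Lock()
        # Restore the heap image if one was saved by a previous session.
        self._cells = pickle.loads(self._path.read_bytes()) if self._path.exists() else {}

    def put(self, name, value):
        with self._lock:                    # serialize mutation within the environment
            self._cells[name] = value
            self._path.write_bytes(pickle.dumps(self._cells))  # persist eagerly

    def get(self, name):
        with self._lock:
            return self._cells[name]

heap = PersistentHeap()
heap.put("config", {"replicas": 3})
print(heap.get("config"))                   # still available after a restart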
Clouds LISP distributed environments (CLIDE) is a distributed, persistent, object-based symbolic programming system being implemented on the Clouds distributed operating system. LISP environment instances are stored as large-grained persistent objects, enabling users on many machines to share the contents of these environments through inter-environment evaluations. CLIDE provides a comprehensive research environment for distributed symbolic language, invocation and consistency semantics, and an implementation vehicle for the construction of the symbolic processing portions of complex megaprogrammed systems.
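As a rough illustration of what an inter-environment evaluation might look like, here is an assumed Python sketch (not the CLIDE implementation; every name in it is hypothetical): each "environment" owns a binding table, and eval_in() evaluates an expression against the bindings of another, named environment.

# Illustrative sketch only (not CLIDE): a registry of named environments
# stands in for the operating system's object store, and eval_in() performs
# a toy inter-environment evaluation.
class Environment:
    registry = {}                       # name -> Environment

    def __init__(self, name):
        self.name = name
        self.bindings = {}
        Environment.registry[name] = self

    def define(self, symbol, value):
        self.bindings[symbol] = value

    def eval_in(self, target_name, expression):
        """Evaluate `expression` using the bindings of the target environment."""
        target = Environment.registry[target_name]
        return eval(expression, {}, dict(target.bindings))   # toy evaluator, no sandboxing

producer = Environment("producer")
producer.define("dataset", [1, 2, 3, 4])

consumer = Environment("consumer")
print(consumer.eval_in("producer", "sum(dataset)"))   # -> 10

In a real system of the kind the abstract describes, the registry lookup would be distribution-transparent and the environments themselves would be persistent objects; this sketch only shows the evaluation-against-another-environment's-bindings idea.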
Authors:
KING, JF; BARTON, DE
J. Fred King is the manager of the Advanced Technology Department for Unisys in Reston, Virginia. He earned his Ph.D. in mathematics from the University of Houston in 1977. He has been principal investigator of research projects in knowledge engineering, pattern recognition, and heuristic problem solving. His efforts include the development of a multi-temporal, multispectral classifier for identifying grain crops using LANDSAT satellite imagery data for NASA. Also, as a member of the research team for an NCI study with Baylor College of Medicine and NASA, he helped develop techniques for the detection of carcinoma using multispectral microphotometer scans of lung tissue. He established and became technical director of the AI Laboratory for Ford Aerospace, where he developed expert scheduling, modeling, and knowledge acquisition systems for NASA. Since joining Unisys in 1985 he has led the development of object-oriented programming environments, blackboard architectures, data fusion techniques using neural networks, and intelligent database systems.
Douglas E. Barton is manager of Logistics Information Systems for Unisys in Reston, Virginia. He earned his B.A. degree in computer science from the College of William and Mary in 1978 and did postgraduate work in London as a Drapers Company scholar. Since joining Unisys in 1981 his work has concentrated on program management and software engineering of large-scale database management systems, and on the design and implementation of knowledge-based systems in planning and logistics. As chairman of the Logistics Data Subcommittee of the National Security Industrial Association (NSIA), he led an industry initiative which examined concepts in knowledge-based systems in military logistics. His responsibilities also include the evaluation, development, and tailoring of software engineering standards and procedures for database and knowledge-based systems. He is currently program manager of the Navigation Information Management System, which provides support to the Fleet Ballistic Missile Program.
A valuable technique during concept development is rapid prototyping of software for key design components. This approach is particularly useful when the optimum design approach is not readily apparent or when several known alternatives need to be evaluated rapidly. A problem inherent in rapid prototyping is the lack of a "target system" with which to interface. Some alternatives are to develop test driver libraries, to integrate the prototype with an existing working simulator, or to build a simulator for the specific problem. This paper presents an approach that combines rapid prototyping for concept development with scenario-based simulation for concept verification. The rapid prototyping environment, derived from artificial intelligence technology, is based on a blackboard architecture. The rapid-prototype simulation capability is provided through an object-oriented modeling environment. It is shown how simulation and blackboard technologies are used together to rapidly gain insight into a tenacious problem. A specific example is discussed in which this approach was used to evolve the logic of a mission controller for an autonomous underwater vehicle.
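To make the blackboard idea concrete, here is a minimal, assumed Python sketch of a blackboard control loop in the spirit the abstract describes. The knowledge sources, fact names, and the obstacle-avoidance example are illustrative only and are not taken from the authors' mission controller.

# Hedged sketch of a blackboard-style control loop (illustrative, not the
# paper's system): knowledge sources inspect the shared blackboard and post
# new facts until no knowledge source can fire.
class Blackboard:
    def __init__(self):
        self.facts = {}

def detect_obstacle(bb):
    # Fires once when a sonar ping is present but no obstacle has been posted.
    if "sonar_ping" in bb.facts and "obstacle" not in bb.facts:
        bb.facts["obstacle"] = {"bearing": bb.facts["sonar_ping"]["bearing"]}
        return True
    return False

def plan_avoidance(bb):
    # Fires once an obstacle exists and proposes a new heading.
    if "obstacle" in bb.facts and "new_heading" not in bb.facts:
        bb.facts["new_heading"] = (bb.facts["obstacle"]["bearing"] + 90) % 360
        return True
    return False

def run(bb, knowledge_sources):
    progress = True
    while progress:                      # simple control strategy: fire any applicable knowledge source
        progress = any(ks(bb) for ks in knowledge_sources)
    return bb.facts

bb = Blackboard()
bb.facts["sonar_ping"] = {"bearing": 45}
print(run(bb, [detect_obstacle, plan_avoidance]))   # posts 'obstacle' and 'new_heading' = 135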
Microservice architectures are increasingly used to modularize IoT applications and deploy them in distributed and heterogeneous edge computing environments. Over time, these microservice-based IoT applications are susceptible to performance anomalies caused by resource hogging (e.g., CPU or memory), resource contention, etc., which can negatively impact their Quality of Service and violate their Service Level Agreements. Existing research on performance anomaly detection for edge computing environments focuses on model training approaches that either achieve high accuracy at the expense of a time-consuming and resource-intensive training process or prioritize training efficiency at the cost of lower accuracy. To address this gap, while considering the resource constraints and the large number of devices in modern edge platforms, we propose two clustering-based model training approaches: (1) intra-cluster parameter transfer learning-based model training (ICPTL) and (2) cluster-level model training (CM). These approaches aim to find a trade-off between the training efficiency of anomaly detection models and their accuracy. We compared the models trained under ICPTL and CM to models trained for specific devices (most accurate, least efficient) and a single general model trained for all devices (least accurate, most efficient). Our findings show that ICPTL’s model accuracy is comparable to that of the model per device approach while requiring only 40% of the training time. In addition, CM further improves training efficiency by requiring 23% less training time and reducing the number of trained models by approximately 66% compared to ICPTL, yet achieving a higher accuracy than a single general model.
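A minimal sketch of the cluster-level model training (CM) idea follows, under assumed details that the abstract does not specify: devices are grouped by their resource-usage profiles with k-means, and one anomaly detector (here an IsolationForest, an illustrative choice) is trained per cluster rather than per device.

# Hedged sketch of cluster-level training (CM): one anomaly detection model
# per cluster of similar devices. Feature layout and model choice are
# assumptions for illustration, not the paper's configuration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-device metrics: rows = devices, columns = e.g. mean CPU %, mean memory %.
device_profiles = np.vstack([
    rng.normal([20, 30], 5, size=(10, 2)),   # lightly loaded edge devices
    rng.normal([70, 80], 5, size=(10, 2)),   # heavily loaded edge devices
])

# Group similar devices, then train a single model per cluster.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(device_profiles)
models = {}
for c in np.unique(clusters):
    models[c] = IsolationForest(random_state=0).fit(device_profiles[clusters == c])

# Score a new observation with the model of the device's cluster (1 = normal, -1 = anomaly).
device_id, observation = 3, np.array([[95.0, 97.0]])   # resource-hogging reading
print(models[clusters[device_id]].predict(observation))

The ICPTL variant described in the abstract would instead train a model per device but initialize each from parameters learned within its cluster; that refinement is not shown in this sketch.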
ISBN:
(digital) 9783540356295
ISBN:
(print) 9783540356288
On behalf of the Organizing Committee I am pleased to present the proceedings of the 2006 Symposium on Component-Based Software Engineering (CBSE). CBSE is concerned with the development of software-intensive systems from reusable parts (components), the development of reusable parts, and system maintenance and improvement by means of component replacement and customization. CBSE 2006 was the ninth in a series of events that promote a science and technology foundation for achieving predictable quality in software systems through the use of software component technology and its associated software engineering practices. We were fortunate to have a dedicated Program Committee comprising 27 internationally recognized researchers and industrial practitioners. We received 77 submissions and each paper was reviewed by at least three Program Committee members (four for papers with an author on the Program Committee). The entire reviewing process was supported by Microsoft's CMT technology. In total, 22 submissions were accepted as full papers and 9 submissions were accepted as short papers. This was the first time CBSE was not held as a co-located event at ICSE. Hence special thanks are due to Ivica Crnkovic for hosting the event. We also wish to thank the ACM Special Interest Group on Software Engineering (SIGSOFT) for their sponsorship of CBSE 2005. The proceedings you now hold were published by Springer and we are grateful for their support. Finally, we must thank the many authors who contributed the high-quality papers contained within these proceedings.
ISBN:
(digital) 9789811562020
ISBN:
(print) 9789811562013; 9789811562044
This book features a collection of high-quality research papers presented at the International Conference on Intelligent and Cloud Computing (ICICC 2019), held at Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, India, on December 20, 2019. Including contributions on system and network design that can support existing and future applications and services, it covers topics such as cloud computing system and network design, optimization for cloud computing, networking, and applications, green cloud system design, cloud storage design and networking, storage security, cloud system models, big data storage, intra-cloud computing, mobile cloud system design, real-time resource reporting and monitoring for cloud management, machine learning, data mining for cloud computing, data-driven methodology and architecture, and networking for machine learning systems.
ISBN:
(digital) 9783642038648
ISBN:
(print) 9783642038631
GECON - Grid Economics and Business Models. Cloud computing is seen by many people as the natural evolution of Grid computing concepts. Both, for instance, rely on the use of service-based approaches for provisioning compute and data resources. The importance of understanding business models and the economics of distributed computing systems and services has generally remained unchanged in the move to Cloud computing. This understanding is necessary in order to build sustainable e-infrastructure and businesses around this paradigm of sharing Cloud services. Currently, only a handful of companies have created successful businesses around Cloud services. Among these, Amazon and Salesforce (with their offerings of Elastic Compute Cloud and force.com, among other offerings) are the most prominent. Both companies understand how to charge for their services and how to enable commercial transactions on them. However, whether a widespread adoption of Cloud services will occur has yet to be seen. One key enabler remains the ability to support suitable business models and charging schemes that appeal to users outsourcing (part of) their internal business functions. The topics that have been addressed by the authors of accepted papers reflect the above-described situation and the need for a better understanding of Grid economics. The topics range from market mechanisms for trading computing resources, capacity planning, tools for modeling economic aspects of service-oriented systems, and architectures for handling service level agreements, to models for economically efficient resource allocation.