ISBN:
(Print) 9780897916745
The rapid advances in computer and communication technologies, and their merger, are leading to a global information marketplace. It will consist of federations of very large numbers of information systems that cooperate to varying extents to support the users' information needs. We discuss an approach to information brokering in this environment and two of its tasks: information resource discovery, which identifies relevant information sources for a given query, and query processing, which involves the generation of appropriate mappings from relevant but structurally heterogeneous objects. Query processing consists of information focusing and information correlation. Our approach is based on: semantic proximity, which represents semantic similarities based on the context of comparison, and schema correspondences, which are used to represent structural mappings and are associated with the context. The context of comparison of the two objects is the primary vehicle to represent the semantics for determining semantic proximity. Specifically, we use a partial context representation to capture the semantics in terms of the assumptions in the intended use of the objects and the intended meaning of the user query. Information focusing is supported by subsequent context comparison. The same mechanism can be used to support information resource discovery. Context comparison leads to changes in schema correspondences that are used to support information correlation.
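The context-comparison idea in the abstract can be sketched briefly. Everything below is an illustrative stand-in, not the paper's formal model: contexts are modeled as plain dicts of assumptions about an object's intended use, and the proximity labels are simplified placeholders.

```python
# Illustrative sketch only: a "context" is a dict of assumptions about an
# object's intended use; key names, values, and proximity labels are
# hypothetical, not the paper's formal definitions.
def semantic_proximity(ctx1, ctx2):
    """Classify how close two objects are, given their contexts of comparison."""
    shared = set(ctx1) & set(ctx2)
    agreeing = sum(ctx1[k] == ctx2[k] for k in shared)
    if set(ctx1) == set(ctx2) and agreeing == len(shared):
        return "equivalent"    # all contextual assumptions match
    if agreeing > 0:
        return "relevant"      # some assumptions match
    return "incompatible"      # contexts do not overlap usefully

def schema_correspondence(obj1, obj2, mapping):
    """Attach a structural attribute mapping to the pair of objects it relates."""
    return {"objects": (obj1, obj2), "mapping": mapping}

# A query about annual employee salaries in USD vs. two candidate sources.
query_ctx = {"domain": "employee", "period": "annual", "currency": "USD"}
source_a  = {"domain": "employee", "period": "annual", "currency": "USD"}
source_b  = {"domain": "inventory", "period": "daily", "currency": "JPY"}

# Information focusing: keep only sources whose context is close to the query's.
print(semantic_proximity(query_ctx, source_a))
print(semantic_proximity(query_ctx, source_b))
```

In this toy version, source_a would survive the focusing step while source_b would be discarded; the surviving pair could then carry a schema correspondence used during information correlation.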
Microservice architectures are increasingly used to modularize IoT applications and deploy them in distributed and heterogeneous edge computing environments. Over time, these microservice-based IoT applications are susceptible to performance anomalies caused by resource hogging (e.g., CPU or memory), resource contention, etc., which can negatively impact their Quality of Service and violate their Service Level Agreements. Existing research on performance anomaly detection for edge computing environments focuses on model training approaches that either achieve high accuracy at the expense of a time-consuming and resource-intensive training process or prioritize training efficiency at the cost of lower accuracy. To address this gap, while considering the resource constraints and the large number of devices in modern edge platforms, we propose two clustering-based model training approaches: (1) intra-cluster parameter transfer learning-based model training (ICPTL) and (2) cluster-level model training (CM). These approaches aim to find a trade-off between the training efficiency of anomaly detection models and their accuracy. We compared the models trained under ICPTL and CM to models trained for specific devices (most accurate, least efficient) and a single general model trained for all devices (least accurate, most efficient). Our findings show that ICPTL’s model accuracy is comparable to that of the model per device approach while requiring only 40% of the training time. In addition, CM further improves training efficiency by requiring 23% less training time and reducing the number of trained models by approximately 66% compared to ICPTL, yet achieving a higher accuracy than a single general model.
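The cluster-level training idea (CM) can be sketched in a few lines. Everything below is an illustrative stand-in for the paper's pipeline, not the authors' implementation: device names and traces are synthetic, a simple 1-D threshold grouping stands in for the clustering step, and a z-score detector stands in for the anomaly detection model.

```python
# Illustrative sketch of cluster-level model training (CM): instead of one
# anomaly detector per device, group similar devices and train one detector
# per cluster. All names and numbers here are synthetic.
import random
from statistics import mean, stdev

random.seed(0)

# Hypothetical per-device CPU-usage traces; the six devices fall into two
# natural groups (means near 20% and near 70%).
devices = {
    f"edge-{i}": [random.gauss(mu, 5) for _ in range(200)]
    for i, mu in enumerate([20, 22, 21, 70, 72, 68])
}

def cluster_devices(devices, gap=20):
    """Greedy 1-D clustering by mean usage (stand-in for the clustering step)."""
    clusters, centers = {}, []
    for name, trace in devices.items():
        m = mean(trace)
        for c in centers:
            if abs(m - c) < gap:
                clusters[c].append(name)
                break
        else:
            centers.append(m)
            clusters[m] = [name]
    return clusters

def train_cluster_model(traces):
    """One model per cluster: pool the member traces and fit mean/stddev."""
    pooled = [x for t in traces for x in t]
    return mean(pooled), stdev(pooled)

def is_anomaly(model, value, k=3.0):
    """Flag values more than k standard deviations from the cluster mean."""
    mu, sigma = model
    return abs(value - mu) > k * sigma

clusters = cluster_devices(devices)
models = {c: train_cluster_model([devices[d] for d in members])
          for c, members in clusters.items()}
print(len(models), "cluster models instead of", len(devices), "per-device models")
```

The point of the sketch is the trade-off the abstract describes: fewer trained models (here 2 instead of 6) at the cost of a coarser fit than per-device training.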
ISBN:
(Digital) 9783540247142
ISBN:
(Print) 9783540212386
Where is system architecture heading? The special interest group on computer and systems architecture (Fachausschuss Rechner- und Systemarchitektur) of the German computer and information technology associations GI and ITG addressed this question and discussed it during two Future Workshops in 2002. The result in a nutshell: everything will change, but everything else will remain. Future systems technologies will build on a mature basis of silicon and IC technology, on well-understood programming languages and software engineering techniques, and on well-established operating systems and middleware concepts. Newer and still exotic but exciting technologies like quantum computing and DNA processing are to be watched closely, but they will not be mainstream in the next decade. Although there will be considerable progress in these basic technologies, is there any major trend which unifies these diverse developments? There is a common denominator, according to the result of the two Future Workshops, which marks a new quality. The challenge for future systems technologies lies in the mastering of complexity. Rigid and inflexible systems, built under a strict top-down regime, have reached the limits of manageable complexity, as has become obvious by the recent failure of several large-scale projects. Nature is the most complex system we know, and she has solved the problem somehow. We just haven't understood exactly how nature does it. But it is clear that systems designed by nature, like an anthill or a beehive or a swarm of birds or a city, are different from today's technical systems that have been designed by engineers and computer scientists.
ISBN:
(Digital) 9783642285400
ISBN:
(Print) 9783642285394
This book constitutes the refereed proceedings of the 16th International GI/ITG Conference on Measurement, Modeling and Evaluation of Computing Systems and Dependability and Fault Tolerance, MMB & DFT 2012, held in Kaiserslautern, Germany, in March 2012.

The 16 revised full papers presented together with 5 tool papers and 5 selected workshop papers were carefully reviewed and selected from 54 submissions. MMB & DFT 2012 covers diverse aspects of performance and dependability evaluation of systems, including networks, computer architectures, distributed systems, software, and fault-tolerant and secure systems.
ISBN:
(Digital) 9783319735153
ISBN:
(Print) 9783319735146; 9783030088033
This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking.

To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions.

Providing an updated overview of methods, technologies and systems related to Linked Data, this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.