ISBN (Digital): 9781728189468
ISBN (Print): 9781728189475
In the big data era, creating self-managing, scalable platforms for running big data applications is a fundamental task. Such self-managing and self-healing platforms involve a proper reaction to hardware (e.g., cluster node) and software (e.g., big data tool) failures, as well as dynamic resizing of the allocated resources based on overload and underload situations and scaling policies. The distributed and stateful nature of big data platforms (e.g., Hadoop-based clusters) makes the management of these platforms a challenging task. This paper aims to design and implement a scalable, cloud-native, Hadoop-based big data platform using MiCADO, an open-source, highly customisable multi-cloud orchestration and auto-scaling framework for Docker containers orchestrated by Kubernetes. The proposed MiCADO-based big data platform automates deployment and enables automatic horizontal scaling (in and out) of the underlying cloud infrastructure. The empirical evaluation of the MiCADO-based big data platform demonstrates how easily, efficiently, and quickly Hadoop clusters of different sizes can be deployed and undeployed. Additionally, it shows how the platform can be scaled automatically based on user-defined policies (such as CPU-based scaling).
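The CPU-based scaling policy mentioned above can be illustrated with a minimal threshold sketch. This is a hypothetical example of the general idea, not MiCADO's actual policy API; the function name, thresholds, and limits are all illustrative assumptions.

```python
# Hypothetical sketch of a CPU-based horizontal scaling policy of the kind
# the abstract describes (user-defined, scale in/out). Names and thresholds
# are illustrative, not taken from MiCADO.

def desired_node_count(current_nodes, avg_cpu, min_nodes=1, max_nodes=10,
                       scale_out_at=0.8, scale_in_at=0.3):
    """Return the node count a simple threshold policy would request,
    given the cluster's average CPU utilisation (0.0-1.0)."""
    if avg_cpu > scale_out_at:            # overload: scale out by one node
        return min(current_nodes + 1, max_nodes)
    if avg_cpu < scale_in_at:             # underload: scale in by one node
        return max(current_nodes - 1, min_nodes)
    return current_nodes                  # within the band: no change

print(desired_node_count(3, 0.9))  # -> 4
print(desired_node_count(3, 0.1))  # -> 2
```

Real orchestrators typically add a cooldown period between scaling actions to avoid oscillation; that detail is omitted here for brevity.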
Iterative methods for solving large sparse systems of linear equations are widely used in many HPC applications. Extreme scaling of these methods can be difficult, however, since global communication to form dot produ...
The Partitioned Global Address Space memory model has been popularised by a number of languages and applications. However this abstraction can often result in the programmer having to rely on some in built choices and...
Recognizing toponyms and resolving them to their real-world referents is required to provide advanced semantic access to textual data. This process is often hindered by the high degree of variation in toponyms. Candid...
Microorganisms, such as bacteria and algae, thrive in complex environments, and their behavior in fluids holds significant importance for various medical and industrial applications. By conducting Lattice Boltzmann (LB) simulations, the transport and rotational properties of a generic squirmer are investigated in solutions embedded with flexible polymer chains at different Reynolds numbers. The interplay of activity and heterogeneously distributed polymers has profound influences on these properties. Remarkable enhancements of up to three orders of magnitude in the rotational motion, along with apparent decays in self-propelling velocities, are observed for squirmers with nonzero active stresses. These extraordinary phenomena stem from the squirmer-polymer mechanical and hydrodynamic interactions. Specifically, polymer wrapping occurs in front of a pusher, while numerous polymers are absorbed in the rear of a puller. Both mechanisms enhance the rotational motion and simultaneously impede translation through forces and torques arising from direct contacts or asymmetric local flows induced by polymers. The source dipole flow fields generated by a neutral swimmer rapidly advect polymers to the rear, leaving no apparent impact on its rotational and transport properties. The influences of Reynolds number Re (Re = 0.8 and 0.04) and squirmer-polymer boundary conditions (no-slip and repulsive) on the dynamics are addressed. In short, the no-slip boundary condition results in more profound effects on both rotational and translational properties at Re = 0.8. However, at Re = 0.04, the disparity between the two boundary conditions diminishes due to the heightened fluid viscous drag, which impedes direct contacts between squirmers and polymers. Our results reveal the relevance of system heterogeneity and highlight the essential role of squirmer-polymer mechanical and hydrodynamic interactions in shaping the behavior of swimmers in viscoelastic fluids. These findings offer valua…
Exploration using borehole drilling is a key activity in determining the most appropriate locations for the petroleum industry to develop oil fields. However, estimating the amount of Oil In Place (OIP) relies on computing with a very significant number of geological models which, due to the ever-increasing capability to capture and refine data, is becoming infeasible. As such, data reduction techniques are required to reduce this set to a much smaller yet still fully representative ensemble. In this paper we explore different approaches to identifying the key groupings of models based on their most important features, and then use this information to select a reduced set which we can be confident fully represents the overall model space. The result of this work is an approach that enables us to describe the entire state space using only 0.5% of the models, along with a series of lessons learnt. The techniques we describe are applicable not only to oil and gas exploration but also, more generally, to the HPC community, as we are increasingly forced to work with reduced data sets due to the rapid growth in data collection capability.
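The general pattern of selecting a small representative subset from a large model ensemble can be sketched as: cluster the models by their feature vectors, then keep one medoid per cluster. This is an illustrative sketch of that pattern, not the paper's actual method; the feature vectors, the farthest-point seeding, and the cluster count are all assumptions.

```python
# Illustrative sketch of ensemble reduction: group models by feature
# similarity and keep one medoid (most central member) per group. This is
# a generic stand-in for the approaches the paper compares, not its method.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reduce_models(features, k):
    """Seed k clusters by farthest-point sampling, assign each model to
    its nearest seed, and return one medoid index per cluster."""
    seeds = [0]
    while len(seeds) < k:
        # add the model farthest from every current seed
        seeds.append(max(range(len(features)),
                         key=lambda i: min(dist(features[i], features[s])
                                           for s in seeds)))
    clusters = {s: [] for s in seeds}
    for i, f in enumerate(features):
        nearest = min(seeds, key=lambda s: dist(f, features[s]))
        clusters[nearest].append(i)
    # medoid: the member minimising total distance to its own cluster
    reps = [min(members,
                key=lambda i: sum(dist(features[i], features[j])
                                  for j in members))
            for members in clusters.values()]
    return sorted(reps)

# five toy "models" described by two features; reduce to 3 representatives
models = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
print(reduce_models(models, 3))  # -> [0, 2, 4]
```

At real scale one would use a library clustering routine and domain-specific model features, but the reduce-to-representatives structure is the same.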
ISBN (Digital): 9781728152004
ISBN (Print): 9781728152011
The modernization that stems from Industry 4.0 has started populating the manufacturing sector with networked devices, complex sensors, and a significant proportion of physical actuation components. However, new capabilities in networked cyber-physical systems demand more complex infrastructure and algorithms and often lead to new security flaws and operational risks that increase the attack surface exponentially. The interconnected nature of Industry 4.0-driven operations and the pace of digital transformation mean that cyberattacks can have far more extensive effects than ever before. The core ideas of this paper are therefore driven by the observation that cybersecurity is one of the key enablers of Industry 4.0. With this in mind, we propose CryptoFactory, a forward-looking, layered architecture that can be used as a starting point for building secure and privacy-preserving smart factories. CryptoFactory aims to change the security outlook in smart manufacturing by discussing a set of fundamental requirements and functionality that modern factories should support in order to be resistant to both internal and external attacks. To this end, CryptoFactory first focuses on how to build trust relationships between the hardware devices in the factory. Then, we look at how to use several cryptographic approaches to allow IoT devices to securely collect, store, and share their data, while we also touch upon the emerging topic of secure and privacy-preserving communication and collaboration between manufacturing environments and value chains. Finally, we look into the problem of how to perform privacy-preserving analytics by leveraging Trusted Execution Environments and the promising concept of Functional Encryption.
The Julia programming language has evolved into a modern alternative to fill existing gaps in scientific computing and data science applications. Julia leverages a unified and coordinated single-language and ecosystem...
Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system. The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.
ISBN (Digital): 9781728194998
ISBN (Print): 9781728195001
The amount of simulation experimentation that can be performed in a project can be restricted by time, especially if a model takes a long time to simulate and many replications are required. Cloud computing presents an attractive proposition for speeding up, or extending, simulation experimentation, as computing resources can be hired on demand rather than requiring investment in costly infrastructure. However, it is not common practice for simulation users to take advantage of this and, arguably, rather than speeding up simulation experimentation, users tend to compromise by applying unnecessary model simplification techniques. This may be due to a lack of awareness of what Cloud computing can offer. Based on several years' experience of innovation in this area, this article presents our experiences in developing Cloud computing applications for simulation experimentation and discusses what future innovations might be created for the widespread benefit of the simulation community.
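The speed-up the abstract alludes to comes from the fact that simulation replications are independent and can run in parallel. The sketch below illustrates the idea with a local process pool standing in for hired cloud workers; the toy model and all names are hypothetical, not from the article.

```python
# Minimal sketch of parallel simulation replications: because replications
# are independent, 20 of them can run across 4 workers instead of serially.
# A local multiprocessing pool stands in for on-demand cloud resources.

import random
from multiprocessing import Pool

def replicate(seed):
    """One replication of a toy stochastic model: the mean of 1000
    uniform random draws, seeded for reproducibility."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000

if __name__ == "__main__":
    with Pool(4) as pool:                         # 4 workers ~ 4 hired nodes
        results = pool.map(replicate, range(20))  # 20 independent replications
    grand_mean = sum(results) / len(results)      # combine as if run serially
    print(round(grand_mean, 1))                   # close to 0.5 for this model
```

Because each replication is seeded independently, the combined result is identical to a serial run; only the wall-clock time changes, which is exactly what hiring cloud capacity buys.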