For pt. 1 see ibid., p. 46 (1996). Greg Wilson started a discussion on what topics computer scientists should teach, given just a week, that would most benefit the physical scientist or engineer. The present paper provides three more opinions, and Wilson's response.
The von Neumann architecture, which is based upon the principle of one complex processor that sequentially performs a single complex task at a given moment, has dominated computing technology for the past 50 years. Recently, however, researchers have begun exploring alternative computational systems based on entirely different principles. Although emerging from disparate domains, the work behind these systems shares a common computational philosophy, which the author calls cellular computing. This philosophy promises to provide new means for doing computation more efficiently in terms of speed, cost, power dissipation, information storage, and solution quality. Simultaneously, cellular computing offers the potential of addressing much larger problem instances than previously possible, at least for some application domains. Cellular computing has attracted increasing research interest, and work in this field has produced results that hold prospects for a bright future. Yet questions must be answered before cellular computing can become a mainstream paradigm. What classes of computational tasks are most suited to it? How do we match the specific properties and behaviors of a given model to a suitable class of problems?
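The cellular-computing philosophy the abstract describes replaces one complex sequential processor with many simple, locally connected cells updated in parallel. A minimal sketch of that idea (not from the paper itself) is an elementary cellular automaton, here Rule 110 on a ring of cells:

```python
# Many simple cells, each seeing only its immediate neighbours,
# updated in lockstep: the opposite of the von Neumann model.

def step(cells, rule=110):
    """Advance a 1-D binary automaton one generation (periodic boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell's next state depends only on (left, self, right).
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighbourhood) & 1)
    return out

# Start from a single live cell and iterate a few generations;
# global behaviour emerges purely from local interactions.
cells = [0] * 16
cells[8] = 1
for _ in range(5):
    cells = step(cells)
```

Every cell update is independent of the others within a generation, so the whole step can in principle be performed simultaneously, which is where the promised gains in speed and power dissipation come from.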
ISBN:
(print) 9780769528335
Experimental performance studies on computer systems, including Grids, require a deep understanding of their workload characteristics. The need arises from two important and closely related topics in performance evaluation, namely, workload modeling and performance prediction. Both topics rely heavily on representative workload data and draw their arsenal from statistics and machine learning. Nevertheless, their goals and the nature of the research differ considerably. Workload modeling aims at building mathematical models to generate workloads that can be used in simulation-based performance evaluation studies. A model should statistically resemble the original real-world data; therefore, marginal statistics and second-order properties such as autocorrelation and power spectrum are important matching criteria. Performance prediction, on the other hand, intends to provide real-time forecasts of important performance metrics (such as application run time and queue wait time) which can support Grid scheduling decisions. From this perspective, prediction accuracy as well as performance should be considered when evaluating candidate techniques. My PhD research focuses primarily on these two topics in space-shared, data-intensive Grid environments. Starting from a comprehensive workload analysis with emphasis on the correlation structures and the scaling behavior, several basic job arrival patterns such as pseudo-periodicity and long-range dependence are identified. Models are further proposed to capture these important arrival patterns, and a complete workload model including run time is being investigated. The strong autocorrelations present in run time and queue wait time series inspire the research for performance prediction based on learning from historical data. Techniques based on an Instance-Based Learning algorithm and several improvements are proposed and empirically evaluated. Research plans are proposed to use the results of workload modeling and performance prediction in
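One of the second-order matching criteria the abstract names is autocorrelation: a synthetic workload should reproduce the correlation structure of the real trace. A sketch of the sample autocorrelation function, using an illustrative series rather than any data from the thesis:

```python
# Sample autocorrelation r(k), a standard second-order statistic used
# as a matching criterion between a real trace and a workload model.

def autocorrelation(series, max_lag):
    """Return sample autocorrelation r(k) for k = 0..max_lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    acf = []
    for k in range(max_lag + 1):
        # Lag-k autocovariance, normalised by the variance.
        cov = sum((series[i] - mean) * (series[i + k] - mean)
                  for i in range(n - k)) / n
        acf.append(cov / var)
    return acf

# A trending series keeps high r(k) at small lags; long-range dependent
# job arrivals show slowly decaying r(k), which a good model must match.
arrivals = [1, 2, 3, 4, 5, 6, 7, 8]
acf = autocorrelation(arrivals, 3)
```

By definition r(0) = 1, and for this monotone series r(1) stays well above zero, the kind of structure that marginal statistics alone would miss.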
ISBN:
(print) 0769519210
The mobile computing domain presents major new challenges for middleware to overcome. In particular, the mobile environment is characterised by frequent changes and often poor network QoS. Therefore, a number of middleware platforms and paradigms have been put forward to solve these issues. This, in turn, has generated a new problem: middleware heterogeneity now exists within this domain. As a consequence, mobile client applications developed upon one type of middleware are unable to interoperate with and utilise services implemented on an alternative. In this paper, we examine the issue of middleware heterogeneity and propose a configurable and dynamically reconfigurable middleware platform, named ReMMoC (Reflective Middleware for Mobile Computing), which allows mobile client applications to be developed independently of the underlying middleware technology.
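The core idea, as the abstract describes it, is that the client programs against an abstract interface while the concrete middleware binding can be swapped at run time. A minimal sketch with hypothetical binding names (not ReMMoC's actual API):

```python
# Illustrative sketch: client code stays fixed while the middleware
# binding underneath it is reconfigured dynamically.

class Binding:
    """Abstract middleware binding the client programs against."""
    def invoke(self, service, operation):
        raise NotImplementedError

class SoapBinding(Binding):
    def invoke(self, service, operation):
        return f"SOAP call: {service}.{operation}"

class RpcBinding(Binding):
    def invoke(self, service, operation):
        return f"RPC call: {service}.{operation}"

class ReconfigurableClient:
    """Unchanged client logic, even when the environment forces a new binding."""
    def __init__(self, binding):
        self.binding = binding

    def reconfigure(self, binding):
        # In ReMMoC's setting this would be triggered by discovering that
        # a required service uses a different middleware technology.
        self.binding = binding

    def call(self, service, operation):
        return self.binding.invoke(service, operation)

client = ReconfigurableClient(SoapBinding())
first = client.call("weather", "today")
client.reconfigure(RpcBinding())   # same client, different middleware
second = client.call("weather", "today")
```

The indirection through the `Binding` interface is what lets one client application reach services implemented on heterogeneous middleware.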
ISBN:
(print) 0769522564
DIRAC (Distributed Infrastructure with Remote Agent Control) has been developed by the CERN LHCb physics experiment to facilitate large scale simulation and user analysis tasks spread across both grid and non-grid computing resources. It consists of a small set of distributed stateless Core Services, which are centrally managed, and Agents which are managed by each computing site. DIRAC utilizes concepts from existing distributed computing models to provide a lightweight, robust, and flexible system. This paper will discuss the architecture, performance, and implementation of the DIRAC system, which has recently been used for an intensive physics simulation involving more than forty sites, 90 TB of data, and in excess of one thousand 1 GHz processor-years.
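The split between central stateless Core Services and site-managed Agents suggests a pull model: each site's agent asks the central service for work rather than having jobs pushed to it. The sketch below illustrates that pattern with invented names; it is not DIRAC's actual API:

```python
# Pull-model sketch: a central job queue and site agents that each
# fetch work when they have capacity. All names are illustrative.
import queue

class CoreService:
    """Central job store; holds pending tasks until an agent pulls them."""
    def __init__(self):
        self.pending = queue.Queue()

    def submit(self, task):
        self.pending.put(task)

    def request_work(self):
        """Called by an agent; returns a task, or None when idle."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

class SiteAgent:
    """Runs at a computing site and pulls work on its own schedule."""
    def __init__(self, name, service):
        self.name, self.service, self.done = name, service, []

    def poll(self):
        task = self.service.request_work()
        if task is not None:
            self.done.append(task)   # stand-in for running the job locally
        return task

service = CoreService()
for t in ["sim-001", "sim-002", "sim-003"]:
    service.submit(t)

agents = [SiteAgent("site-A", service), SiteAgent("site-B", service)]
work_remaining = True
while work_remaining:
    work_remaining = False
    for agent in agents:
        if agent.poll() is not None:
            work_remaining = True
```

Because the agents initiate every interaction, the central services can stay stateless and lightweight, and sites behind restrictive networks can still participate.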
ISBN:
(print) 0780378555
In hard computing for engineering applications, we use explicit models derived from physical principles and implement them on a computer as purely syntactic Turing-equivalent structures. When such a direct attack is not feasible, we resort to soft computing, using techniques arising from artificial intelligence to ferret out the secrets of a process based on implicit models derived from observed data. Again, we implement them on a computer as purely syntactic Turing-equivalent structures. As the interest of engineers moves toward problems in biomedical engineering and human-machine interaction, it is apparent that there are problems intractable even by the methods of soft computing. Processes of life and mind include internal semantics, including inherent semantic ambiguity, that are indispensable to their operation, but these semantics are totally missed by the purely syntactical strategies of both hard and soft computing. For engineers to make responsible decisions about systems that involve naturally occurring processes of life and mind, a new modeling strategy is required. It needs semantic models that can account for internal ambiguity and has so high a degree of flexibility that we may think of it as softer than "soft computing".
The development of digital computing machines is described and illustrated, from the mechanical devices of the seventeenth century to the electronic systems of today, in three chronological sets of sketches. Only a fe...
ISBN:
(print) 9780769551685
The Italian Institute for Nuclear Physics (INFN) has long experience in the field of distributed scientific computing, mainly in the framework of Grid computing. In the last two years, an interest in the cloud computing paradigm has arisen within the INFN scientific and technological communities, leading to the growth of new activities aimed at creating a new distributed computing environment that takes advantage of the flexibility offered by cloud technologies. In this contribution we will give an overview of the activities carried out by the INFN IT community in this direction and highlight the key aspects still under evaluation in view of a possible adoption of the cloud for scientific computing.