Lack of machine intelligibility in human/computer interactions can be traced to present approaches in software design. By the duality principle, software involved in human/computer interaction contains two distinct layers: a declarative knowledge-level layer and a lower-level functional or procedural-knowledge layer. The former supports simple deductive and inductive inferences and can be used to facilitate human-comprehensible communication between programs, while the latter consists of ordinary, procedurally encoded computer programs. Adopting the duality principle in software design is necessary to meet the demands of wide-scale computer use in the information age, and realizing it is an urgent goal for computer science research.
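The two-layer split described above can be illustrated with a minimal sketch. All names here are hypothetical: a normal procedural routine sits in the lower layer, while a declarative layer of facts and simple rules about that routine supports deduction that another program could query.

```python
# Procedural layer: an ordinary, procedurally encoded routine.
def convert_celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

# Declarative layer: facts about the procedural layer, stated as tuples,
# plus simple rules that derive new facts (supporting basic deduction).
facts = {("converts", "convert_celsius_to_fahrenheit", "celsius", "fahrenheit")}
rules = [
    # If f converts unit a to unit b, deduce that f's output is b-typed.
    lambda fs: {("output_type", t[1], t[3]) for t in fs if t[0] == "converts"},
]

def deduce(fs, rs):
    # Forward chaining to a fixed point: apply rules until no new facts appear.
    derived = set(fs)
    changed = True
    while changed:
        new = set().union(*(r(derived) for r in rs)) - derived
        changed = bool(new)
        derived |= new
    return derived

kb = deduce(facts, rules)
assert ("output_type", "convert_celsius_to_fahrenheit", "fahrenheit") in kb
```

A second program can answer "what does this routine produce?" by querying the declarative layer alone, without inspecting the procedural code, which is the kind of human-comprehensible inter-program communication the abstract describes.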
Performance prediction is useful in helping parallel programmers answer questions such as speedup scalability. Performance prediction for parallel simulation requires first working out the performance analyzer algorit...
The safety of software is becoming increasingly important as computers pervade control systems on which human life depends. Whilst hardware has become significantly more reliable over the years, the same cannot be said of software. Software has become more complex, and methods to ensure its correctness have been slow to develop. One approach is to mathematically verify the software in such systems. This paper investigates the industrial use of these techniques, their advantages and disadvantages, and the introduction of standards and their recommendations concerning formal methods in this area. The cost of safety is also considered.
A type flaw attack on a security protocol is an attack where a field that was originally intended to have one type is subsequently interpreted as having another type. A number of type flaw attacks have appeared in the academic literature. In this paper we prove that type flaw attacks can be prevented using a simple technique of tagging each field with some information indicating its intended type.
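The tagging technique can be sketched in a few lines. This is an illustrative encoding, not the paper's own formalism: each field is serialized together with a tag naming its intended type, and the receiver rejects any field whose tag does not match the type it expects, so a value sent as (say) a nonce cannot later be accepted where a key is expected.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    tag: str     # intended type of the field, e.g. "nonce", "key", "agent"
    value: bytes

def encode(field: Tagged) -> bytes:
    # Length-prefixed "tag then value" wire format (illustrative only).
    tag_bytes = field.tag.encode()
    return bytes([len(tag_bytes)]) + tag_bytes + field.value

def decode(expected_tag: str, data: bytes) -> bytes:
    # Receiver checks the tag before interpreting the value.
    n = data[0]
    tag = data[1:1 + n].decode()
    if tag != expected_tag:
        raise ValueError(f"type flaw: expected {expected_tag!r}, got {tag!r}")
    return data[1 + n:]

msg = encode(Tagged("nonce", b"\x01\x02\x03"))
assert decode("nonce", msg) == b"\x01\x02\x03"   # intended interpretation
try:
    decode("key", msg)      # attempted reinterpretation as a key
except ValueError:
    pass                    # rejected: the tag does not match
```

The point of the paper is that this simple discipline, applied uniformly to every field, suffices to rule out the whole class of type flaw attacks, assuming tags are bound to fields under the protocol's cryptographic protection.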
That the influence of the PRAM model is ubiquitous in parallel algorithm design is as clear as the fact that it is technologically infeasible for the foreseeable future. The current generation of parallel hardware prom...
The next phase of LHC Operations – High Luminosity LHC (HL-LHC), which is aimed at a ten-fold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2027-2028 and will deliver an unprecedented scientific data volume of multi-exabyte scale. This amount of data has to be stored, and the corresponding storage system should ensure fast and reliable data delivery for processing by scientific groups distributed all over the world. The present LHC computing and data processing model will not be able to provide the required infrastructure growth, even taking into account the expected evolution of hardware technology. To address this challenge, new state-of-the-art computing infrastructure technologies are now being developed and are presented here. The possibilities of applying the HL-LHC distributed data handling technique to other particle and astroparticle physics experiments dealing with large-scale data volumes, such as DUNE, LSST, Belle-II, JUNO, and SKAO, are also discussed.