Our goal is to develop intelligent service robots that operate in standard human environments, automating common tasks. In pursuit of this goal, we follow the ubiquitous robotics paradigm, in which intelligent perception and control are combined with ubiquitous computing. By exploiting sensors and effectors in its environment, a robot can perform more complex tasks without becoming overly complex itself. Following this insight, we have developed a service robot that operates autonomously in a sensor-equipped kitchen. The robot learns from demonstration and performs sophisticated tasks in concert with the network of devices in its environment. We report on the design, implementation, and usage of this system, which is freely available to the research community for use and improvement. (C) 2008 Elsevier B.V. All rights reserved.
Despite ongoing efforts to enhance the processes and techniques used in the development of software projects at all stages, software development projects continue to suffer problems in meeting user expectations, schedule, and budget. The purpose of this paper is to address the issues of management and control in large development projects and to present the results of our independent study on the development of the OS/400 R.1 development project, a very large software development project at IBM Corporation, Rochester, MN. The results of a field survey of software development professionals are summarized and compared with those of the OS/400 development. Furthermore, experience gained from the OS/360 development project is revisited and new insights are discussed. The paper concludes with lessons learned and project success factors.
Modern mobile devices are sufficiently powerful to execute computationally intensive mathematical problems. This study presents an implementation of a two-dimensional transmission line matrix (TLM) method solver executing on a smartphone. Software development and architectural design are discussed, focusing on object-oriented strategies for modular and reusable code. Optimisation strategies are also discussed, with large variations in performance observed depending on the data-caching method used. Swapping between data buffers using pointers was shown to be the most effective method, offering significant performance gains over the original software. For a mesh of 246 by 370 nodes running on an iPhone 4, an update rate of 9.37 frames per second was achieved.
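The buffer-swapping optimisation the abstract credits with the largest gains can be sketched as follows. This is an illustrative double-buffering pattern under assumptions, not the paper's actual TLM code: the toy update rule and function names are invented, but the key point, trading buffer roles by reassigning references instead of copying data each time step, is the same.

```python
# Double buffering via reference swap: the solver reads from one buffer and
# writes to the other, then the two trade roles in O(1) with no data copy.
# The smoothing update below is a placeholder for the real TLM scatter step.

def step(src, dst):
    """Write a smoothed copy of the interior of `src` into `dst` (toy update)."""
    rows, cols = len(src), len(src[0])
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dst[i][j] = 0.25 * (src[i - 1][j] + src[i + 1][j]
                                + src[i][j - 1] + src[i][j + 1])

def run(initial, steps):
    a = [row[:] for row in initial]   # front buffer (read)
    b = [row[:] for row in initial]   # back buffer (write)
    for _ in range(steps):
        step(a, b)
        a, b = b, a                   # pointer swap, no per-step allocation or copy
    return a
```

The alternative, copying the back buffer into the front buffer after every step, costs a full mesh traversal per iteration, which is why the pointer swap dominates on a memory-bandwidth-limited device.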
Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB h⁻¹) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (***) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (***) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units.
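Because each diffraction image can be analyzed independently, the multiprocessing pattern the abstract alludes to is essentially a process pool mapped over images. The sketch below shows this pattern in plain Python; `count_spots` is an invented stand-in for the real Bragg-spot analysis, not the API of the packages named in the abstract.

```python
# Minimal per-image parallelism sketch: a worker pool distributes independent
# image-analysis tasks across CPU cores. The thresholding "analysis" is a toy.
from multiprocessing import Pool

def count_spots(image):
    """Toy analysis: count pixels above a fixed intensity threshold."""
    return sum(1 for row in image for px in row if px > 100)

def analyze(images, workers=4):
    """Fan the images out to `workers` processes; results come back in order."""
    with Pool(workers) as pool:
        return pool.map(count_spots, images)   # one task per image
```

In practice the expensive spot-finding would run in each worker while a coordinating process aggregates results for real-time feedback to the beamline.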
We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ecological processes by providing a growing library of reusable, interchangeable components and automating many modeling tasks. To build a model, a user selects components from the library, and then writes new components as needed. Some of these components represent specific ecological processes, such as how environmental factors influence the growth of individual trees. Other components provide simulation support, such as reading and writing files in various formats to allow interoperability with other software. The framework manages components and variables, the order of operations, and spatial interactions. The framework provides only simulation support; it does not include ecological functions or assumptions. This separation allows biologists to build models without becoming computer scientists, while computer scientists can improve the framework without becoming ecologists. The framework is designed to operate on multiple platforms and be used across networks via a World Wide Web-based user interface. ECLPSS is designed for use both with single-processor computers for small models and with multiple processors to simulate large regions with complex interactions among many individuals or ecological compartments. To test Version 1.0 of ECLPSS, we created a model to evaluate the effect of tropospheric ozone on forest ecosystem dynamics. This model is a reduced-form version of two existing models: TREGRO, which represents an individual tree, and ZELIG, which represents forest stand growth and succession. This model demonstrates key features of ECLPSS, such as the ability to examine the effects of cell size and model structure on model predictions. (C) 2002 Elsevier Science B.V. All rights reserved.
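The separation the abstract emphasizes, a framework that only sequences components and routes shared state while all ecology lives in the components, can be sketched minimally as below. All class and variable names here are invented for illustration; this is not ECLPSS's actual API.

```python
# Sketch of a component framework: the framework owns the order of operations
# and the shared state; components carry the domain (ecological) logic.

class Framework:
    """Domain-neutral scheduler: runs registered components in a fixed order."""
    def __init__(self):
        self.components = []
        self.state = {}

    def add(self, component):
        self.components.append(component)

    def run(self, steps):
        for _ in range(steps):
            for c in self.components:      # framework fixes the order of operations
                c.update(self.state)
        return self.state

class TreeGrowth:
    """Toy ecological component: biomass growth scaled down by an ozone factor."""
    def update(self, state):
        ozone = state.get("ozone", 0.0)            # 0 = no stress, 1 = full stress
        state["biomass"] = state.get("biomass", 1.0) * (1.0 + 0.1 * (1.0 - ozone))
```

A biologist would only ever write classes like `TreeGrowth`; a computer scientist improving `Framework` (e.g. parallelizing the inner loop over grid cells) never touches the ecology.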
ISBN (print): 9780889865952
Scenario Based Programming (SBP) builds upon Autonomous Evolution of sensory and actuator Driver layers through Environmental Constraints (AEDEC) [1] to provide a simple yet versatile coding approach. SBP provides automatic abstractions of the sensors and actuators, eliminating the need for a programmer to understand the robot hardware. SBP reduces complex robot programming to scenario-list creation and the association of appropriate action primitives with the elements in the scenario list. Since SBP code is written for the actuator and sensory driver layers, the high-level code is portable and reusable. The properties of SBP are demonstrated and verified on two physically different autonomous mobile robots (Talrik and Mantaray) by implementing obstacle-avoidance and wall-following behaviours.
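The scenario-list idea can be sketched as a table mapping abstract sensor scenarios to action primitives, with a thin classifier standing in for the abstracted sensor layer. The scenario names, primitives, and thresholds below are invented for illustration and are not taken from the AEDEC work.

```python
# Obstacle-avoidance sketch in the scenario-list style: the high-level code
# touches only scenario labels and action primitives, never raw hardware.

SCENARIOS = {
    "clear":          "forward",
    "obstacle_left":  "turn_right",
    "obstacle_right": "turn_left",
    "wall_ahead":     "turn_right",
}

def classify(left_range, right_range, threshold=0.3):
    """Abstracted sensor layer: collapse raw range readings into a scenario label."""
    if left_range < threshold and right_range < threshold:
        return "wall_ahead"
    if left_range < threshold:
        return "obstacle_left"
    if right_range < threshold:
        return "obstacle_right"
    return "clear"

def act(left_range, right_range):
    """High-level, hardware-independent control step."""
    return SCENARIOS[classify(left_range, right_range)]
```

Porting this behaviour to a physically different robot would mean replacing only the driver layer behind `classify` and the primitive implementations, leaving `SCENARIOS` and `act` untouched.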
Aims: Gulf War illness (GWI), a chronic symptom-based disorder, affects up to 30% of Veterans who served in the 1990-1991 Gulf War [1]. Because no diagnostic test or code for GWI exists, researchers typically determine case status using self-reported symptoms and conditions according to the Kansas [2] and CDC [3] criteria. No validated algorithm has been published, and case definitions have varied slightly by study. This paper aims to standardize the application of the original CDC and Kansas case definitions by defining a framework for writing reliable code for complex case definitions, implementing this framework on a sample of 1343 Gulf War Veterans (GWVs), and validating the framework by applying the code to a sample of 41,077 GWVs. Main methods: Methods were drawn from software engineering: write pseudocode, write test cases, and write code; then test the code. The code was examined for accuracy, flexibility, replicability, and reusability. Key findings: The pseudocode promoted understanding of the planned algorithm, encouraging discussion and leading to agreement on the case definition algorithms among all team members. The completed SAS code was written for and tested in the Gulf War Era Cohort and Biorepository (GWECB) [4]. This code was adapted and tested in the Million Veteran Program (MVP) [5]. The code was documented for reproducibility and reusability. Significance: Ease of reuse suggests that this method could be used to standardize the application of other case definitions, reducing the time and resources spent by each study team. Documentation, code, and test cases are available through the Department of Veterans Affairs (VA) Phenomics catalog [6].
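The pseudocode-then-tests-then-code workflow the abstract describes can be illustrated on a deliberately simplified stand-in criterion. The rule below (case-positive when symptoms are endorsed in at least three distinct domains) is hypothetical and is NOT the actual Kansas or CDC case definition; only the workflow shape mirrors the paper's method.

```python
# Workflow sketch: (1) pseudocode agreed by the team, (2) test cases written
# first, (3) code, (4) tests run against the code.
#
# Pseudocode (hypothetical criterion, not Kansas/CDC):
#   case-positive if symptoms from at least MIN_DOMAINS distinct
#   symptom domains are endorsed.

MIN_DOMAINS = 3

def is_case(symptoms):
    """`symptoms` maps a domain name to the list of endorsed symptoms in it."""
    endorsed_domains = [d for d, items in symptoms.items() if items]
    return len(endorsed_domains) >= MIN_DOMAINS
```

Writing the test cases before the code, as the paper recommends, pins down edge cases (e.g. a domain present but with no endorsed symptoms) so that independent reimplementations in other cohorts must agree on them.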