Development, testing and maintenance of software for embedded systems is a complex task. Analysis of the traceability between different software artifacts (e.g., source code, test code and requirements) is an enabling capability for better development, testing and maintenance of software systems. However, there is a general lack of tool support for automating traceability analysis for embedded systems. We demonstrate in this paper how to extend an existing unit test and test coverage framework to produce a hardware-assisted tool framework capable of automatically deriving and visualizing traceability links between source code and test code artifacts. To demonstrate the applicability and usefulness of the framework, we report its application to realistic embedded software for vehicle gear transmission control built and deployed on the Analog Devices Blackfin® ADSP-5XX family of DSP processors.
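The abstract does not detail how the traceability links are derived; a minimal sketch of the general idea, assuming per-test coverage records (test name mapped to the source functions it exercised) as a hypothetical stand-in for the data the hardware-assisted coverage collector would emit:

```python
# Minimal sketch (not the paper's framework): deriving source-to-test
# traceability links by inverting per-test coverage records. The data layout
# below is a hypothetical stand-in for the collector's actual output.
from collections import defaultdict

# Hypothetical coverage records: which source functions each unit test executed.
coverage = {
    "test_shift_up":   {"gear_select", "clutch_engage"},
    "test_shift_down": {"gear_select", "clutch_release"},
    "test_idle":       {"gear_select"},
}

def derive_links(coverage):
    """Invert coverage records into source-function-to-test traceability links."""
    links = defaultdict(set)
    for test, functions in coverage.items():
        for fn in functions:
            links[fn].add(test)
    return links

if __name__ == "__main__":
    for fn, tests in sorted(derive_links(coverage).items()):
        print(f"{fn} <- {sorted(tests)}")
```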
Modern medical equipment produces huge amounts of data that need to be archived for long periods and efficiently transferred over networks. Data compression plays an essential role in reducing the amount of medical imaging data. Medical images can usually be compressed by a factor of three before any degradation appears. Higher compression levels are desirable but can only be achieved with lossy compression, thus sacrificing image quality. The diagnostic value of compressed medical images has been studied, and recommendations about maximum acceptable compression ratios have been provided based on qualitative visual analysis. It has been suggested, without further investigation, that CT images with slice thicknesses below five mm cannot undergo lossy compression if their diagnostic value is to be preserved. In this paper, we present an objective quantitative quality assessment of compressed CT images using the Visual Signal-to-Noise Ratio. Our results show that visual fidelity can be significantly affected by two factors, slice thickness and exposure time, for images compressed using the same compression ratio.
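The full Visual Signal-to-Noise Ratio metric models wavelet-domain visual thresholds and is not reproduced here; as a rough illustration of full-reference fidelity measurement on compressed CT data, the sketch below computes PSNR (a simpler stand-in, not the paper's metric) and the compression ratio, with synthetic data in place of real CT slices:

```python
# Minimal sketch, not the paper's VSNR implementation: PSNR is used as a
# simpler full-reference stand-in, and the 12-bit peak value (4095) and the
# synthetic slices are illustrative assumptions.
import numpy as np

def psnr(original, compressed, peak=4095.0):
    """Peak signal-to-noise ratio in dB, assuming 12-bit CT data."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """Ratio of uncompressed to compressed size."""
    return raw_bytes / compressed_bytes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    slice_orig = rng.integers(0, 4096, size=(512, 512))
    slice_lossy = slice_orig + rng.normal(0, 5, size=slice_orig.shape)
    print(f"PSNR: {psnr(slice_orig, slice_lossy):.1f} dB")
```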
This paper describes a virtual instrument for PQ (power quality) assessment. Conceived to detect transients, sags and swells, its computational nucleus is based on higher-order statistics, which enhance the statistical characterization of the raw data, with a consequent improvement of the instrument's performance. The measurement application is intended to be used by plant operators previously trained in PQ analysis. The normal operation limit is established beforehand from empirical expert knowledge. The instrument also allows online visualization of the statistical parameters (variance, skewness and kurtosis). The repeatability is roughly 85%. The results convey the idea of an easily transferable technology, due to its low cost and optimized computational nucleus.
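A minimal sketch of the statistical core described above, computing variance, skewness and kurtosis per mains cycle of a sampled waveform; the sampling rate, mains frequency and the synthetic sag are illustrative assumptions, not the instrument's actual parameters:

```python
# Per-cycle higher-order statistics of a power-line waveform (illustrative only).
import numpy as np
from scipy.stats import skew, kurtosis

FS = 10_000                     # sampling rate in Hz (assumed)
F_LINE = 50                     # mains frequency in Hz (assumed)
SAMPLES_PER_CYCLE = FS // F_LINE

def hos_features(signal):
    """Variance, skewness and kurtosis for each full mains cycle."""
    n_cycles = len(signal) // SAMPLES_PER_CYCLE
    feats = []
    for k in range(n_cycles):
        cycle = signal[k * SAMPLES_PER_CYCLE:(k + 1) * SAMPLES_PER_CYCLE]
        feats.append((np.var(cycle), skew(cycle), kurtosis(cycle)))
    return np.array(feats)

if __name__ == "__main__":
    t = np.arange(FS) / FS
    waveform = np.sin(2 * np.pi * F_LINE * t)
    waveform[3000:3200] *= 0.5           # synthetic sag for demonstration
    print(hos_features(waveform)[:5])
```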
This paper presents a novel approach for combining optical flow into enhanced 3D motion vector fields for human action recognition. Our approach detects motion of the actors by computing optical flow in video data captured by a multi-view camera setup with an arbitrary number of views. Optical flow is estimated in each view and extended to 3D using 3D reconstructions of the actors and pixel-to-vertex correspondences. The resulting 3D optical flow for each view is combined into a 3D motion vector field by taking the significance of local motion and its reliability into account. 3D Motion Context (3D-MC) and Harmonic Motion Context (HMC) are used to represent the extracted 3D motion vector fields efficiently and in a view-invariant manner, while considering differences in the anthropometry of the actors and variations in their movement style. The resulting 3D-MC and HMC descriptors are classified into a set of human actions using normalized correlation, taking into account variations in the performing speed of different actors. We compare the performance of the 3D-MC and HMC descriptors, and show promising experimental results for the publicly available i3DPost Multi-View Human Action dataset.
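A minimal sketch of the per-vertex fusion step only: per-view 3D flow vectors are combined with weights reflecting motion significance (here taken as flow magnitude) and a per-view reliability score. The weighting scheme is an illustrative assumption, not the paper's exact formulation:

```python
# Combine per-view 3D flow into one 3D motion vector field (illustrative only).
import numpy as np

def fuse_views(flows, reliability):
    """flows: (n_views, n_vertices, 3) 3D flow per view.
    reliability: (n_views, n_vertices) confidence values in [0, 1]."""
    magnitude = np.linalg.norm(flows, axis=-1)              # motion significance
    weights = magnitude * reliability                        # (n_views, n_vertices)
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    return (weights[..., None] * flows).sum(axis=0)          # (n_vertices, 3)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    flows = rng.normal(size=(4, 1000, 3))      # 4 views, 1000 mesh vertices
    rel = rng.uniform(size=(4, 1000))
    print(fuse_views(flows, rel).shape)
```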
Augmented reality applications have received considerable interest in recent years with the advent of specialized applications and data fusion from different visual sources. In this paper, we present a novel mobile augmented reality system for artificial optical radiation (AOR) identification and measurement. The system has been designed to help experts (both medical doctors and engineers) deal with AOR-related problems in their fields according to the European Directive 2006/25/EC (occupational health and safety). An innovative tracking system, based on image retrieval paradigms, is used for image registration.
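The abstract does not specify the retrieval-based tracker; a minimal sketch of a generic building block for this kind of registration, feature matching of a query frame against a reference image with OpenCV, under the assumption of hypothetical file paths:

```python
# Minimal sketch (not the system's actual tracker): ORB feature matching plus
# a RANSAC homography, the basic step behind image-retrieval-style registration.
import cv2
import numpy as np

def register_frame(reference_path, frame_path, min_matches=10):
    """Estimate a homography mapping the reference image into the live frame."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography
```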
Our aim is to predict the stability of a grasp from the perceptions available to a robot before it attempts to lift and transport an object. The percepts we consider consist of the tactile imprints and the object-gripper configuration, read before and up to the moment the robot's manipulator is fully closed around an object. Our robot is equipped with multiple tactile sensing arrays and is able to track the pose of an object during the application of a grasp. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability, which we train on grasp data collected by letting the robot experience the effect of teacher-suggested grasps on the tactile and visual signals, and by letting it verify which grasps can be used to rigidly control the object. We consider models defined on several subspaces of our input data, e.g., using tactile perceptions or pose information only. Our experiments demonstrate that joint tactile and pose-based perceptions carry valuable grasp-related information, as models trained on both hand poses and tactile parameters perform better than models trained exclusively on one perceptual input.
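A minimal sketch, not the authors' model: an RBF-kernel logistic-regression-style classifier for grasp success, approximated with a Nystroem feature map plus linear logistic regression in scikit-learn. The feature layout (a flattened tactile imprint concatenated with an object-gripper pose) and the synthetic labels are assumptions:

```python
# Kernelized logistic regression on joint tactile + pose features (sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_tactile = rng.uniform(size=(200, 64))    # e.g. flattened tactile sensor arrays
X_pose = rng.uniform(size=(200, 6))        # e.g. object-gripper pose parameters
X = np.hstack([X_tactile, X_pose])
y = rng.integers(0, 2, size=200)           # grasp succeeded / failed (synthetic)

model = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
print("P(success) for first grasp:", model.predict_proba(X[:1])[0, 1])
```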
While the Arctic possesses significant information of scientific value, surprisingly little work has focused on developing robotic systems to collect these data. For Arctic robotic data collection to be a viable solution, a method for navigating in the Arctic, and thus of assessing glacial terrain, must be developed. Segmenting the ground plane from the rest of the image is one common aspect of a visual hazard detection system. However, the properties of glacial images, namely low contrast, an overcast sky, and clouds, mountains, and snow sharing common colors, pose difficulties for most visual algorithms. A horizon line detection scheme is presented that uses multiple visual cues to rank candidate horizon segments and then constructs a horizon line consistent with those cues. Weak cues serve to reinforce a selected path, while strong cues have the ability to redirect it. Further, the system infers the horizon location in areas that are visually ambiguous. The performance of the proposed system has been tested on multiple data sets collected on two different glaciers in Alaska, and compares favorably, both in terms of time and classification performance, to representative segmentation algorithms from several different classes.
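A minimal sketch of the cue-combination idea only: candidate horizon segments are scored by a weighted sum of cue responses, with strong cues weighted so that they can override an accumulation of weak ones. The cue names and weights are illustrative assumptions, not the paper's:

```python
# Rank candidate horizon segments by weighted visual cues (illustrative only).
CUE_WEIGHTS = {"edge_strength": 1.0, "sky_color_above": 3.0, "texture_below": 0.5}

def score_segment(cues):
    """cues: dict of cue responses in [0, 1] for one candidate segment."""
    return sum(CUE_WEIGHTS[name] * value for name, value in cues.items())

def best_horizon(candidates):
    """Pick the highest-scoring candidate segment."""
    return max(candidates, key=lambda c: score_segment(c["cues"]))

if __name__ == "__main__":
    candidates = [
        {"row": 120, "cues": {"edge_strength": 0.9, "sky_color_above": 0.2, "texture_below": 0.8}},
        {"row": 180, "cues": {"edge_strength": 0.6, "sky_color_above": 0.9, "texture_below": 0.7}},
    ]
    print("chosen horizon row:", best_horizon(candidates)["row"])
```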
Parallel computational scientific applications have been described by their computation and communication patterns. From a storage and I/O perspective, these applications can also be grouped into separate data models based on the way data is organized and accessed during simulation, analysis, and visualization. Parallel netCDF is a popular library used in many scientific applications to store scientific datasets and provide high-performance parallel I/O. Although the metadata-rich netCDF file format can effectively store and describe regular multi-dimensional array datasets, it does not address the full range of current and future computational science data models. In this paper, we present a new storage scheme in Parallel netCDF to represent a broad variety of data models used in modern computational scientific applications. This scheme also allows concurrent metadata construction for different data objects by multiple groups of application processes, an important feature for obtaining a high degree of I/O parallelism for data models exhibiting irregular data distribution. Furthermore, we employ non-blocking I/O functions to aggregate irregularly distributed data requests into large, contiguous requests and thereby achieve high-performance I/O. Using an adaptive mesh refinement data model as an example, we demonstrate that the proposed scheme produces scalable performance for both data and metadata creation and access.
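For orientation, a minimal sketch using the serial netCDF4 Python binding (not Parallel netCDF itself, and not the new scheme described above) of the regular multi-dimensional array data model that the netCDF format handles well and that the proposed scheme extends toward irregular, AMR-style data; dimension and variable names are illustrative:

```python
# Baseline netCDF data model: named dimensions, typed variables, attributes.
import numpy as np
from netCDF4 import Dataset

with Dataset("example.nc", "w") as ds:
    ds.createDimension("time", None)           # unlimited record dimension
    ds.createDimension("x", 64)
    ds.createDimension("y", 64)
    temp = ds.createVariable("temperature", "f4", ("time", "x", "y"))
    temp.units = "K"                            # self-describing metadata
    temp[0, :, :] = np.random.rand(64, 64)      # one regular 2-D field per step
```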
Media consumption has become more individualized and customized due to the development of digital media and of more efficient portable devices. Among other fields, language learning may benefit greatly from this trend if new types of content are created specifically for the requirements of this new generation of learners: suitable for individual usage, portable, customized, flexible, interactive and entertaining. In this paper we present a solution that has its roots in the edutainment (education and entertainment) field and consists in automatically generating language comprehension quizzes based on foreign-language audiovisual content (TV series, documentaries, news). The proposed approach turns any foreign-language video into a truly interactive teaching material by combining different fields of expertise, such as video analysis and metadata extraction, information search, and automatic language processing.
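A minimal sketch of the quiz-generation step only: turning a subtitle line into a fill-in-the-blank comprehension question by removing one content word and offering distractors. The word-selection heuristic and the distractor pool are crude stand-ins for the language-processing pipeline described above:

```python
# Generate a cloze quiz item from one subtitle line (illustrative only).
import random

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "it"}

def make_cloze(subtitle, distractor_pool, rng=random.Random(0)):
    words = subtitle.split()
    candidates = [w for w in words
                  if w.lower().strip(".,!?") not in STOPWORDS and len(w) > 3]
    if not candidates:
        return None
    answer = rng.choice(candidates)
    question = " ".join("_____" if w == answer else w for w in words)
    options = rng.sample(distractor_pool, 3) + [answer.strip(".,!?")]
    rng.shuffle(options)
    return {"question": question, "options": options, "answer": answer.strip(".,!?")}

if __name__ == "__main__":
    quiz = make_cloze("The storm destroyed the old harbour overnight.",
                      ["forest", "bridge", "village", "castle"])
    print(quiz)
```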
Cardiac Magnetic Resonance (CMR) image segmentation is a crucial step before physicians proceed to patient diagnosis, image-guided surgery, or medical data visualization. Most existing algorithms are effective only under certain circumstances. The Random Walk approach, on the other hand, is robust for image segmentation in every condition. The weighting function plays an important role in achieving a successful segmentation with this approach. In this work, an attempt has been made to study the behavior of the weighting function with respect to the intensity distribution in the object to be segmented. We present a weighting function, namely the derivative of Gaussian, that is shown to yield better segmentation results when applied to ischemic CMR images, where objects are obscure. The encouraging results on CMR images demonstrate the potential of the weighting function.
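For reference, a minimal sketch of the baseline only: the standard Gaussian weighting used in random-walker segmentation, w_ij = exp(-beta * (g_i - g_j)^2), mapping intensity differences between neighbouring pixels to edge weights. The derivative-of-Gaussian weighting proposed above is not specified in the abstract and is therefore not reproduced here; beta and the synthetic slice are assumptions:

```python
# Standard Gaussian edge weights for random-walker segmentation (baseline sketch).
import numpy as np

def gaussian_edge_weights(image, beta=90.0):
    """Edge weights between horizontally and vertically adjacent pixels."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)            # normalise intensities
    w_h = np.exp(-beta * (img[:, 1:] - img[:, :-1]) ** 2)     # horizontal edges
    w_v = np.exp(-beta * (img[1:, :] - img[:-1, :]) ** 2)     # vertical edges
    return w_h, w_v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cmr_slice = rng.random((128, 128))                         # stand-in for a CMR slice
    wh, wv = gaussian_edge_weights(cmr_slice)
    print(wh.shape, wv.shape)
```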