This paper introduces a video processing method designed to give researchers access to players' behaviors within any commercially available off-the-shelf PC game. Insights into players' experiences with videogame violence are obtained by automatically processing significant elements of the audio-visual communication/feedback (i.e. moving image and sound) over long periods of uninhibited game play. The method exploits the wealth of significant and behaviorally influential information that is conveyed to the player as they progress through a game. Such information can relate to virtual progress (e.g. maps, mission logs), player reserves (e.g. acquisitions) or vitality (e.g. health or stamina levels), to name but a few. By analyzing what is transmitted to the player during play, feedback-based game metrics are produced that utilize the same content the player perceives and processes during gameplay, thus tying the method directly to individual 'player experiences'. The method was developed for a project that addressed the persistent concerns over the impact of game violence and its translation into societal violence. It provided the project with an alternative approach to understanding the use and function of violence within game systems, taking research in a new direction from the approaches and research designs employed by experimental psychology over the last few decades.
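One way such feedback-based metrics could be extracted is by measuring HUD elements directly from captured frames. The sketch below is a hypothetical illustration (the function name, region coordinates, and colour-matching scheme are assumptions, not the paper's method): it estimates how full an on-screen health bar is by colour-matching pixels in a fixed screen region.

```python
import numpy as np

def health_ratio(frame, bar_region, bar_color, tol=40):
    """Estimate how full an on-screen health bar is.

    frame      : H x W x 3 uint8 RGB image of one gameplay frame
    bar_region : (row0, row1, col0, col1) crop containing the full bar
    bar_color  : expected RGB colour of the filled part of the bar
    tol        : per-channel tolerance for colour matching
    """
    r0, r1, c0, c1 = bar_region
    bar = frame[r0:r1, c0:c1].astype(int)
    # A pixel counts as "filled" if every channel is close to the bar colour.
    match = np.abs(bar - np.array(bar_color)).max(axis=2) <= tol
    # Fraction of bar columns that contain at least one filled pixel.
    return match.any(axis=0).mean()

# Synthetic frame: a red bar occupying 70 of the 100 columns of its region.
frame = np.zeros((120, 200, 3), dtype=np.uint8)
frame[10:20, 20:90] = (200, 0, 0)
ratio = health_ratio(frame, (10, 20, 20, 120), (200, 0, 0))  # -> 0.7
```

Sampling such a ratio once per second over a long play session would yield exactly the kind of vitality time series the abstract describes, without instrumenting the game itself.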
The floating-point unit is one of the most common building blocks in any computing system and is used in a huge number of applications. By combining two state-of-the-art techniques of imprecise hardware, namely Gate-Level Pruning and the Inexact Speculative Adder, and by introducing a novel Inexact Speculative Multiplier architecture, three different approximate FPUs and one reference IEEE-754 compliant FPU have been integrated in a 65 nm CMOS process within a low-power multi-core processor. Silicon measurements show up to 27% power, 36% area and 53% power-area product savings compared to the IEEE-754 single-precision FPU. Accuracy loss has been evaluated with a high-dynamic-range image tone-mapping algorithm, resulting in small, visually imperceptible errors and an image PSNR of 90 dB.
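The PSNR figure quoted above can be reproduced with the standard definition, PSNR = 10 log10(peak^2 / MSE). A minimal sketch (the function name and the synthetic 1e-4 perturbation are illustrative assumptions, not the paper's data):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
# Tiny uniform perturbation mimicking the error of an approximate FPU.
approx = ref + 1e-4
value = psnr(ref, approx)   # a uniform 1e-4 error on a unit-range image -> 80 dB
```

For reference, errors yielding 90 dB PSNR on a unit-range image correspond to an RMS error of about 3e-5, which explains why they are invisible in the tone-mapped output.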
The high-precision ball bearing is fundamental to the performance of complex mechanical systems. As the speed increases, the cage behavior becomes a key factor influencing the bearing performance, especially life and reliability. This paper develops a high-precision instrument for analyzing the nonlinear dynamic behavior of the bearing cage. The trajectory of the rotational center and the non-repetitive run-out (NRRO) of the cage are used to evaluate the instability of cage motion. The instrument uses an aerostatic spindle to support and spin the test bearing, reducing the influence of system error. A high-speed camera then captures images while the bearing operates at high speeds, and the 3D trajectory tracking software TEMA Motion is used to track a spot marked on the cage surface. Finally, a MATLAB program was developed to evaluate the nonlinear dynamic behavior of the cage at different speeds using Lissajous figures. The trajectory of the rotational center and the NRRO of the cage at various speeds are analyzed. The results can be used to predict initial failure and to optimize cage structural parameters. In addition, the repeatability of the instrument is validated. In the future, a motorized spindle will be applied to increase the testing speed, and image processing algorithms will be developed to analyze the trajectory of the cage. Published by AIP Publishing.
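NRRO separates the part of the centre trajectory that does not repeat from one revolution to the next. A minimal sketch of that decomposition, assuming evenly sampled whole revolutions (the function name and the synthetic wobble are illustrative, not the instrument's actual processing):

```python
import numpy as np

def nrro(x, y, samples_per_rev):
    """Non-repetitive run-out of a centre trajectory.

    x, y            : centre coordinates, concatenated over whole revolutions
    samples_per_rev : number of samples captured per cage revolution
    Returns the peak radial deviation from the revolution-averaged orbit.
    """
    xs = x.reshape(-1, samples_per_rev)    # one row per revolution
    ys = y.reshape(-1, samples_per_rev)
    # Repetitive component: the orbit averaged over all revolutions.
    xr, yr = xs.mean(axis=0), ys.mean(axis=0)
    dev = np.hypot(xs - xr, ys - yr)       # per-sample radial deviation
    return dev.max()

# Synthetic orbit: 5 revolutions with a small non-repetitive wobble.
t = np.linspace(0, 10 * np.pi, 5 * 360, endpoint=False)
x = np.cos(t) + 0.01 * np.cos(0.3 * t)
y = np.sin(t) + 0.01 * np.sin(0.3 * t)
peak = nrro(x, y, 360)   # small: only the wobble fails to repeat
```

Plotting the per-revolution orbits (xs, ys) on top of each other is exactly the Lissajous-figure view the paper uses: a stable cage traces one tight curve, an unstable one smears.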
ISBN (print): 9781509007752
Synthetic Aperture Radars (SAR) provide high-resolution ground images by utilizing the aircraft motion to synthesize a large aperture. Residual phase error in SAR images causes defocusing. Autofocus algorithms compensate for these phase errors to obtain a focused image. In most scenarios, a priori information about the type of phase error is known, which can be effectively utilized in autofocus techniques. This paper proposes a new autofocus algorithm that is based on Wiener filter theory and implements a multistage Wiener filter to compensate the phase error in SAR images.
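Whatever estimator produces the phase-error vector (the paper's multistage Wiener filter is not reproduced here), the compensation step itself is a phase multiplication in the azimuth frequency domain. A minimal sketch of that final step only, with a known quadratic error standing in for the estimated one (all names are assumptions):

```python
import numpy as np

def compensate_phase(az_line, phi_est):
    """Remove an estimated azimuth phase error from one range bin.

    az_line : complex azimuth samples of a single range bin
    phi_est : estimated phase-error vector (radians), same length
    """
    spectrum = np.fft.fft(az_line)
    return np.fft.ifft(spectrum * np.exp(-1j * phi_est))

# Defocus a synthetic point target with a quadratic phase error, then undo it.
n = 256
target = np.zeros(n, dtype=complex)
target[n // 2] = 1.0
phi = 2.0 * np.linspace(-1, 1, n) ** 2          # quadratic phase error
blurred = np.fft.ifft(np.fft.fft(target) * np.exp(1j * phi))
focused = compensate_phase(blurred, phi)        # point target restored
```

With a perfect estimate the point target is restored exactly; in practice the quality of the focused image is limited by how well the filter estimates phi.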
This paper presents a new motion-vector-based moving object detection system for use in Advanced Driver Assistance Systems. By sharing motion vectors with a video encoder, the computational cost of motion estimation via the optical flow method is saved, making the system more cost-effective. Moving objects are detected by evaluating the planar parallax residuals of macroblocks in the video. The proposed algorithm introduces new "APD" constraints on hypothesis generation and uses template matching for hypothesis verification. Test results show that moving objects can be detected effectively in scenarios that pose danger to the ego vehicle.
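The core idea, detecting objects whose motion vectors do not fit the global (ego-motion-induced) motion, can be sketched with a least-squares affine model over the encoder's macroblock vectors. This is a generic illustration of residual-based detection, not the paper's planar-parallax formulation or its APD constraints (function names and the threshold are assumptions):

```python
import numpy as np

def fit_global_motion(points, vectors):
    """Least-squares affine model v = A p + b for macroblock motion vectors."""
    X = np.hstack([points, np.ones((len(points), 1))])      # N x 3 design matrix
    coeffs, *_ = np.linalg.lstsq(X, vectors, rcond=None)    # 3 x 2 parameters
    return coeffs

def moving_blocks(points, vectors, thresh=1.5):
    """Flag blocks whose motion vector deviates from the global (ego) motion."""
    coeffs = fit_global_motion(points, vectors)
    pred = np.hstack([points, np.ones((len(points), 1))]) @ coeffs
    residual = np.linalg.norm(vectors - pred, axis=1)
    return residual > thresh

# 99 background blocks follow a simple translation; one block moves on its own.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (100, 2))
vecs = np.tile([2.0, 0.0], (100, 1))
vecs[7] = [10.0, 5.0]                 # independently moving object
flags = moving_blocks(pts, vecs)      # only block 7 is flagged
```

Reusing the encoder's vectors, as the paper proposes, makes this outlier test essentially free compared with computing dense optical flow.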
ISBN (print): 9781509006274
This paper provides a mathematical analysis that shows how the crisp output of an IT2 FLS that is obtained by using the Begian-Melek-Mendel (BMM) formula compares to the one obtained by using center-of-sets type-reduction followed by defuzzification (COS TR + D). This is made possible by reformulating the structural solutions of the two optimization problems that are associated with COS TR, and then expanding each of them using a Maclaurin series expansion. As a result, we show that BMM is the zero-order approximation to COS TR + D. Additionally, by retaining the zero-order and first-order terms from the Maclaurin series expansions, we provide a new Enhanced BMM, one that is non-iterative, has a closed form, and is much faster than using the EKM algorithms for COS TR. Although the Enhanced BMM formula is slower than BMM, we demonstrate, by means of extensive simulations, that it is 5% to 50% more accurate than BMM at matching the numerical solution obtained from COS TR + D, and it is at least 94% faster than when EKM is used for COS TR + D, which makes the Enhanced BMM a very strong candidate for use in real-time applications of IT2 FLSs.
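For context, the BMM formula is commonly given as a weighted sum of two centroids, one computed with the lower and one with the upper rule firing strengths. A minimal sketch of that closed form (the Enhanced BMM's first-order terms are not reproduced here, and the example firing strengths are invented):

```python
import numpy as np

def bmm(f_lower, f_upper, y, m=0.5, n=0.5):
    """Begian-Melek-Mendel closed-form crisp output of an IT2 FLS.

    f_lower, f_upper : lower/upper rule firing strengths
    y                : rule consequent centroids
    m, n             : design coefficients (commonly m = n = 0.5)
    """
    lower = np.dot(f_lower, y) / np.sum(f_lower)   # centroid with lower strengths
    upper = np.dot(f_upper, y) / np.sum(f_upper)   # centroid with upper strengths
    return m * lower + n * upper

fl = np.array([0.2, 0.6, 0.1])
fu = np.array([0.5, 0.9, 0.4])
yc = np.array([-1.0, 0.0, 2.0])
out = bmm(fl, fu, yc)   # -> 1/12
```

Because this is a single pass of dot products, with no iteration as in EKM, its speed advantage for real-time IT2 FLSs is clear; the paper's contribution is quantifying, via the Maclaurin expansion, how far this output sits from the exact COS TR + D result.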
ISBN (print): 9781467398893
While current computers have proven particularly useful for arithmetic and logic implementations, their accuracy and efficiency for applications such as face, object, and speech recognition are not that impressive, especially when compared to what the human brain can do. Machine learning algorithms have been useful for these types of applications, as they operate in a way similar to the human brain, learning the data provided and storing it for future recognition. Until now, there has been a strong focus on developing the process of data storage and retrieval, largely neglecting the value of the provided information and the amount of data required to store it. Hence, currently all provided information is stored, because it is difficult for the machine to decide which information needs to be kept. Consequently, large amounts of data are stored, which then affects the processing of the data. Thus, this paper investigates the opportunity to reduce data storage through the use of differentiation and combines it with an existing similarity detection algorithm.
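The storage-reduction idea can be illustrated with the simplest form of differentiation: store the first sample plus successive differences, so that slowly varying or repeated data produces runs of zero deltas that need not be stored at all. This is a generic sketch, not the paper's specific scheme or its similarity detection algorithm (function names are assumptions):

```python
import numpy as np

def delta_encode(signal):
    """Store the first sample plus successive differences."""
    return signal[0], np.diff(signal)

def delta_decode(first, deltas):
    """Reconstruct the original signal from its differentiated form."""
    return np.concatenate([[first], first + np.cumsum(deltas)])

# Slowly varying data: most deltas are zero and compress away.
data = np.array([5, 5, 5, 6, 6, 7, 7, 7], dtype=np.int64)
first, deltas = delta_encode(data)
stored = np.count_nonzero(deltas)          # only 2 non-zero deltas to keep
restored = delta_decode(first, deltas)     # round-trips to the original
```

The decode step shows the trade-off the abstract hints at: differentiation is lossless and cheap, but reconstruction is sequential, so the saving in storage is paid for at retrieval time.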
This article presents an original approach to processing low-altitude aerial sequences taken from a helicopter (or drone) and showing road traffic. The proposed system attempts to extract vehicles from the acquired sequences. Our approach begins by detecting the primitives of the sequence images. During this segmentation step, the system computes the dominant motion for each pair of images. This motion is computed using wavelet analysis of the optical flow equation together with robust techniques. Interesting areas (areas not affected by the dominant motion) are detected with a hierarchical Markov model. Primitives stemming from segmentation and the interesting areas are used to build a graph on which a partitioning process is executed. This graph gathers only the primitives (considered as nodes) which belong to the interesting areas. Nodes are interconnected by perceptive criteria. To extract the important elements of the sequence (vehicles), a bi-partition of this graph using the Normalized Cuts technique takes place. Finally, the parameters of the proposed algorithm are chosen through a learning stage using genetic algorithms.
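The Normalized Cuts bi-partition used in the final step is typically computed from the relaxed spectral formulation: solve (D - W) z = lambda D z and threshold the second-smallest generalized eigenvector. A minimal dense sketch of that relaxation, with an invented affinity matrix standing in for the perceptive-criteria graph:

```python
import numpy as np

def ncut_bipartition(W):
    """Approximate normalized-cut bi-partition of a weighted graph.

    W : symmetric affinity matrix (here, nodes would be detected primitives).
    Thresholds the second-smallest generalized eigenvector at zero.
    """
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian form of the generalized eigenproblem.
    L = np.eye(len(W)) - D_isqrt @ W @ D_isqrt
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = D_isqrt @ vecs[:, 1]           # second-smallest eigenvector
    return fiedler > 0

# Two dense clusters joined by a single weak edge.
W = np.array([[0.0, 5, 5, 0.1, 0, 0],
              [5, 0.0, 5, 0,   0, 0],
              [5, 5, 0.0, 0,   0, 0],
              [0.1, 0, 0, 0.0, 5, 5],
              [0, 0, 0, 5, 0.0, 5],
              [0, 0, 0, 5, 5, 0.0]])
labels = ncut_bipartition(W)   # splits nodes {0,1,2} from {3,4,5}
```

Restricting the graph to primitives inside the interesting areas, as the article does, keeps this eigendecomposition small enough to be practical per frame.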
A large portion of image processing applications come with stringent requirements regarding performance, energy efficiency, and power. FPGAs have proven to be among the most suitable architectures for algorithms that can be processed in a streaming pipeline. Yet, designing imaging systems for FPGAs remains a very time-consuming task. High-Level Synthesis, which has improved significantly due to recent advancements, promises to overcome this obstacle. In particular, Altera OpenCL is a handy solution for employing an FPGA in a heterogeneous system, as it covers all device communication. However, to obtain efficient hardware implementations, extreme code modifications, contradicting OpenCL's data-parallel programming paradigm, are necessary. In this work, we explore the programming methodology that yields significantly better hardware implementations for the Altera Offline Compiler. We furthermore designed a compiler back end for a domain-specific source-to-source compiler to raise the algorithm description to a higher level and generate highly optimized OpenCL code. Moreover, we extended the compiler to support arbitrary bit-width operations, which are fundamental to hardware designs. We evaluate our approach by discussing the resulting implementations across an extensive application set and comparing them with example designs provided by Altera. In addition, as we can derive multiple implementations for completely different target platforms from the same domain-specific language source code, we present a comparison of the achieved implementations with GPU implementations.