In this study, the author analyses target range estimation errors in matched filtering-based detection performed in high range resolution (HRR) radars. Conventional radar signal processors use point target detectors, where extended target responses are put through a point detection process by windowing and thresholding. The author demonstrates through simulations that the performance degradation under the point target assumption can be significant for HRR radars, where targets extend across several detection cells. The author modelled the reflections for stationary and moving extended target scenarios by using three target signal models (TSMs). A non-homogeneous Poisson process (NHPP) is proposed to model the signal at the output of the target detector, which includes reflections from targets and clutter. The corresponding maximum likelihood (ML) estimator is derived as a range estimation technique. The author simulated a target detection process and compared the classical and NHPP-based peak estimator performances for each of the TSMs. Furthermore, the ML estimation (MLE) algorithm is extended to multiple targets. The author demonstrates that the ML estimator significantly reduces target range estimation errors compared with the classical point target estimators.
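As a rough illustration of the NHPP idea (our own sketch, not the paper's simulation setup: the Gaussian target template, rates, and window size below are assumptions), the detector output can be drawn as a non-homogeneous Poisson process and the target range recovered by maximising the Poisson log-likelihood over a grid; the integral term of the likelihood is nearly independent of the hypothesised centre, so only the sum of log-intensities matters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not from the paper): a Gaussian
# target template of extent sigma centred at the true range r0, on top
# of a uniform clutter rate over the observation window [0, R].
R, r0, sigma = 100.0, 42.0, 3.0
lam_clutter, lam_target = 0.05, 4.0        # detections per range unit

def intensity(r, centre):
    """NHPP intensity: uniform clutter plus a Gaussian target lobe."""
    return lam_clutter + lam_target * np.exp(-0.5 * ((r - centre) / sigma) ** 2)

# Simulate one scan of detector output by thinning a homogeneous process.
lam_max = lam_clutter + lam_target
cand = rng.uniform(0.0, R, rng.poisson(lam_max * R))
detections = cand[rng.uniform(0.0, lam_max, cand.size) < intensity(cand, r0)]

# ML range estimate by grid search over the sum of log-intensities.
grid = np.linspace(0.0, R, 2001)
loglik = [np.sum(np.log(intensity(detections, c))) for c in grid]
r_hat_ml = grid[int(np.argmax(loglik))]

# Classical point-target baseline: the single densest range cell.
hist, edges = np.histogram(detections, bins=int(R))
r_hat_peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

print(f"true r0={r0:.1f}  ML={r_hat_ml:.2f}  peak={r_hat_peak:.2f}")
```

The NHPP estimate pools every detection the extended target produces, whereas the peak estimator commits to a single cell; that gap is what the abstract quantifies.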
Managing computing resources in a hypercube entails two steps. First, a job must be chosen to execute from among those waiting (job scheduling). Next, a particular subcube within the hypercube must be allocated to that job (processor allocation). Whereas processor allocation has been well studied, job scheduling has been largely neglected. The goal of this paper is to compare the roles of processor allocation and job scheduling in achieving good performance on hypercube computers. We show that job scheduling has far more impact on performance than does processor allocation. We propose a new family of scheduling disciplines, called Scan, that have particular performance advantages. We show that performance problems that cannot be resolved through careful processor allocation can be solved by using Scan job-scheduling disciplines. Although the Scan disciplines carry far less overhead than is incurred by even the simplest processor allocation strategies, they are far more able to improve performance than even the most sophisticated strategies. Furthermore, when Scan disciplines are used, the abilities of sophisticated processor allocation strategies to further improve performance are limited to negligible levels. Consequently, a simple O(n) allocation strategy can be used in place of these complex strategies.
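The Scan family is described here only at the level of the abstract, so the following Python sketch is our guess at the core mechanism, modelled on the disk SCAN policy (all class and method names are assumptions): jobs are queued by requested subcube dimension, and the scheduler sweeps through the dimensions, draining one queue at a time, so same-sized jobs are batched and the hypercube stays easy to pack.

```python
from collections import deque

class ScanScheduler:
    """Sketch of a Scan-style job scheduler: one FIFO queue per
    requested subcube dimension, served by a cyclic sweep."""

    def __init__(self, cube_dim):
        self.queues = [deque() for _ in range(cube_dim + 1)]
        self.current = 0                 # dimension the scan points at

    def submit(self, job, subcube_dim):
        self.queues[subcube_dim].append(job)

    def next_batch(self):
        """Return all queued jobs of the next non-empty dimension."""
        for step in range(len(self.queues)):
            d = (self.current + step) % len(self.queues)
            if self.queues[d]:
                batch = list(self.queues[d])
                self.queues[d].clear()
                self.current = (d + 1) % len(self.queues)
                return d, batch
        return None, []

sched = ScanScheduler(cube_dim=4)        # a 4-cube: subcubes of dim 0..4
sched.submit("A", 2); sched.submit("B", 2); sched.submit("C", 3)
print(sched.next_batch())                # -> (2, ['A', 'B'])
print(sched.next_batch())                # -> (3, ['C'])
```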
The working set model for program behavior was invented in 1965. It has stood the test of time in virtual memory management for over 50 years. It is considered the ideal for managing memory in operating systems and caches. Its superior performance was based on the principle of locality, which was discovered at the same time; locality is the observed tendency of programs to use distinct subsets of their pages over extended periods of time. This tutorial traces the development of working set theory from its origins to the present day. We will discuss the principle of locality and its experimental verification. We will show why working set memory management resists thrashing and generates near-optimal system throughput. We will present the powerful, linear-time algorithms for computing working set statistics and applying them to the design of memory systems. We will debunk several myths about locality and the performance of memory systems. We will conclude with a discussion of the application of the working set model in parallel systems, modern shared CPU caches, network edge caches, and inventory and logistics management.
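The linear-time algorithms mentioned above are not reproduced in the abstract, but a minimal one-pass sketch (ours, simplified from the standard sliding-window formulation) shows why working set measurement is cheap: each reference is enqueued and dequeued at most once, so the whole trace is processed in linear time.

```python
from collections import deque

def working_set_sizes(trace, tau):
    """After each reference at time t, report |W(t, tau)|: the number
    of distinct pages referenced in the window (t - tau, t]."""
    last_use = {}                 # page -> time of most recent reference
    window = deque()              # (time, page) events inside the window
    sizes = []
    for t, page in enumerate(trace):
        last_use[page] = t
        window.append((t, page))
        # Drop events that fell out of the window; a page leaves the
        # working set only if it was not referenced again since.
        while window and window[0][0] <= t - tau:
            old_t, old_page = window.popleft()
            if last_use[old_page] == old_t:
                del last_use[old_page]
        sizes.append(len(last_use))
    return sizes

trace = list("aabacbbdeeedba")
print(working_set_sizes(trace, tau=4))
```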
The transfer matrix technique is an efficient tool for calculating sound transmission through multilayered structures. However, due to the assumption of infinite size layers, important discrepancies may be found between predicted and experimental data. The spatial windowing technique introduced by Villot et al. [Predicting the acoustical radiation of finite size multi-layered structures by applying windowing on infinite structures, Journal of Sound and Vibration 245 (2001) 433-455] has been shown to give data much closer to measurement results than other measures, such as limiting the maximum angle of incidence when integrating to obtain the sound reduction index for diffuse incidence. Using a two-dimensional spatial window, which also includes the azimuth angle, implies, however, that two double numerical integrations must be performed. As predicted results are compared with laboratory data, where the aspect ratio of the test object is required to be less than 1:2, a simplified procedure may be applied involving two single integrals only. It is shown that the accuracy of the end result may in practice be maintained by this simplified procedure. (C) 2009 Elsevier Ltd. All rights reserved.
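For concreteness, here is a short sketch of the diffuse-incidence integration that the limiting-angle measure refers to (our code; the 78-degree cut-off and the single-leaf mass-law wall are illustrative assumptions, not values from the paper):

```python
import numpy as np

def diffuse_R(tau_theta, theta_max_deg=78.0, n=2000):
    """Diffuse-incidence sound reduction index by numerical quadrature:
    R = -10 log10( int tau sin cos dtheta / int sin cos dtheta ),
    with the integration truncated at theta_max (the limiting-angle
    shortcut the abstract contrasts with spatial windowing)."""
    th = np.linspace(0.0, np.radians(theta_max_deg), n)
    w = np.sin(th) * np.cos(th)
    tau_d = np.trapz(tau_theta(th) * w, th) / np.trapz(w, th)
    return -10.0 * np.log10(tau_d)

# Hypothetical single-leaf mass-law panel as the infinite-layer model.
rho_c, m, f = 415.0, 10.0, 1000.0      # air impedance, kg/m^2, Hz
omega = 2 * np.pi * f
tau = lambda th: 1.0 / (1.0 + (omega * m * np.cos(th) / (2 * rho_c)) ** 2)
print(f"R_diffuse = {diffuse_R(tau):.1f} dB at {f:.0f} Hz")
```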
The abrupt boundary truncation of an image introduces artifacts in the restored image. The traditional solution is to smooth the image data using special window functions such as Hamming or trapezoidal windows. This is followed by zero-padding and linear convolution with the restoration filter. This method improves the results but still distorts the image, especially at the margins. Instead of the above method, we propose a different procedure. This procedure is simple and exploits the natural property of "circular" or periodic convolution of the discrete Fourier transform (DFT). Instead of padding the image by zeros, it is padded by a reflected version of itself. This is followed by "circular" convolution with the restoration filter. This procedure is shown to lead to better restoration results than the windowing and linear convolution techniques. The computational effort is also reduced, since our method requires half the number of computations required by the conventional linear deconvolution method.
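A minimal sketch of the reflect-and-circularly-convolve idea (our code, not the authors'; the toy averaging kernel stands in for a real restoration filter such as an inverse or Wiener filter):

```python
import numpy as np

def restore_reflect_circular(image, filt):
    """Pad the image with a mirrored copy of itself instead of zeros,
    apply circular convolution with the restoration filter via the DFT,
    then crop back to the original support.  The mirrored extension
    makes the DFT's periodic wrap-around continuous at the borders,
    suppressing the ringing that zero-padding introduces."""
    h, w = image.shape
    padded = np.pad(image, ((0, h), (0, w)), mode="symmetric")
    H = np.fft.fft2(filt, s=padded.shape)          # filter, zero-padded
    out = np.real(np.fft.ifft2(np.fft.fft2(padded) * H))
    return out[:h, :w]

img = np.random.default_rng(1).random((64, 64))
kernel = np.ones((3, 3)) / 9.0   # toy stand-in for a restoration filter
print(restore_reflect_circular(img, kernel).shape)   # (64, 64)
```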
We give a simple interpretation of a recently noted phenomenon, namely, the resemblance between the axial intensity of an apertured Bessel beam and the squared profile of the windowing function. This effect can be used to control the axial behavior of the beam, and we present examples for the case of a flattened Gaussian profile as the aperturing function. (C) 1997 Optical Society of America.
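In symbols (our hedged paraphrase of the geometric-ray picture, not the paper's derivation): a Bessel beam J_0(k_r r) is a cone of plane waves with half-angle theta_0, and the ray leaving the aperture at radius rho crosses the axis at z = rho / tan(theta_0), so, up to slowly varying geometric factors,

```latex
I(0, z) \;\propto\; \bigl|\, w(z \tan\theta_0) \,\bigr|^{2},
\qquad \theta_0 = \arcsin\!\left(\frac{k_r}{k}\right),
```

which is why the axial intensity traces the squared profile of the window w.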
Two main purposes of synthetic aperture radar (SAR) speckle filtering are the preservation of edge sharpness and the suppression of speckle noise. However, most existing filtering techniques suppress the speckle noise while smearing edges. This problem is partly solved by using nonsquare windows, but these can also generate large errors in many applications. A new windowing algorithm is proposed that overcomes the defects of the nonsquare window by discriminating edges more precisely using a preliminary terrain classification result.
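The abstract does not give the algorithm itself, so the following Python fragment is only a sketch of the general idea of class-guided windowing (the window size, speckle model, and crude two-class labelling are our assumptions): the smoothing window at each pixel is restricted to neighbours of the same terrain class, so it never averages across a classified edge.

```python
import numpy as np

def class_guided_mean(image, labels, half=2):
    """Average each pixel over the neighbours in its (2*half+1)^2
    window that share its terrain class, keeping the window from
    straddling a classified edge."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - half), min(h, i + half + 1)
            j0, j1 = max(0, j - half), min(w, j + half + 1)
            patch = image[i0:i1, j0:j1]
            mask = labels[i0:i1, j0:j1] == labels[i, j]
            out[i, j] = patch[mask].mean()
    return out

rng = np.random.default_rng(2)
row = np.where(np.arange(32) < 16, 1.0, 4.0)         # vertical edge
truth = np.repeat(row[None, :], 32, axis=0)
speckled = truth * rng.gamma(4.0, 0.25, (32, 32))    # multiplicative speckle
labels = (truth > 2.0).astype(int)                   # crude two-class map
print(class_guided_mean(speckled, labels).round(2)[0, 14:18])
```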
A review of computer systems simulation is presented. Existing and proposed programs for simulating computer systems are described and recommendations are made about the desirable characteristics of a simulator. Attention is primarily focused upon those simulators that may be used to study time-sharing, multiprogramming, and multiprocessing systems. It appears that a higher order, special purpose simulation language is needed for simulating computer systems. CSS and LOMUSS II seem to be representative of the most promising type of computer system simulators.
The paper describes the internal structure of a large operating system as a set of cooperating sequential processes. The processes synchronize by means of semaphores and extended semaphores (queue semaphores). The number of parallel processes is carefully justified, and the various semaphore constructions are explained. The system is proved to be free of “deadly embrace” (deadlock). The design principle is an alternative to Dijkstra's hierarchical structuring of operating systems. The project management and the performance are discussed, too. The operating system is the first large one using the RC 4000 multiprogramming system.
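As a small illustration of the extended-semaphore concept (our rendering in Python threads, not the RC 4000 implementation): a queue semaphore is a counting semaphore whose signals carry messages, so a waiting process is released together with the datum it waited for.

```python
import threading
from collections import deque

class QueueSemaphore:
    """Sketch of a queue semaphore: signal() deposits a message and
    releases one waiter; wait() blocks until a message is available."""
    def __init__(self):
        self._items = deque()
        self._sem = threading.Semaphore(0)
        self._lock = threading.Lock()

    def signal(self, message):
        with self._lock:
            self._items.append(message)
        self._sem.release()

    def wait(self):
        self._sem.acquire()
        with self._lock:
            return self._items.popleft()

qs = QueueSemaphore()

def consumer():
    for _ in range(3):
        print("received:", qs.wait())

t = threading.Thread(target=consumer)
t.start()
for msg in ("open file", "read block", "close file"):
    qs.signal(msg)
t.join()
```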