A basic function of array processing is to estimate the spatial location of distributed or discrete radiating and scattering sources. We show that classical beamformers, optimum array processors, matched field and match...
We consider the design of an array processor for space-time coded multi-antenna systems. As an alternative to the previously proposed zero-forcing method, in this paper, the maximum signal-to-noise ratio (SNR) criterion is used to obtain a balance between interference suppression and noise enhancement. Although similar in concept, this work differs from the conventional minimum mean-squared error method in that there is more than one desired signal dimension, each corresponding to one of the space-time coded streams. It will be shown that the number of linear filters required by the maximum SNR array processor is no more than the dimension of the signal space or the number of collaborating transmit antennas. The advantages of this design are highly improved performance and reduced decoding complexity.
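The design principle in this abstract can be illustrated with a small numerical sketch. This is not the paper's algorithm; the channel matrices, interferer count, and noise level are invented for illustration. It shows how a maximum-SNR filter bank falls out of a generalized eigenvalue problem, and why the number of useful filters is bounded by the number of collaborating transmit antennas (the rank of the desired-signal covariance).

```python
# A minimal sketch, not the paper's method: a maximum-SNR linear filter bank
# obtained from the generalized eigenvalue problem R_s w = lambda * R_in w,
# keeping no more filters than there are collaborating transmit antennas.
# Channel matrices, interferer count, and noise level are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_rx, n_tx, n_int = 4, 2, 3    # receive antennas, space-time coded streams, interferers

# Rayleigh-fading channels for the desired streams (H) and the interferers (G).
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
G = (rng.standard_normal((n_rx, n_int)) + 1j * rng.standard_normal((n_rx, n_int))) / np.sqrt(2)
noise_var = 0.1

R_s = H @ H.conj().T                                # desired-signal covariance
R_in = G @ G.conj().T + noise_var * np.eye(n_rx)    # interference-plus-noise covariance

# Generalized Hermitian eigendecomposition; the n_tx dominant eigenvectors are
# the maximum-SNR receive filters, one per coded-stream dimension. R_s has rank
# n_tx, so only n_tx eigenvalues are nonzero, matching the filter-count bound.
eigvals, eigvecs = eigh(R_s, R_in)
W = eigvecs[:, -n_tx:]
print("output SNRs of the retained filters:", eigvals[-n_tx:].round(2))
```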
This paper provides detailed analysis and performance evaluation of the holographic array processing (HAP) algorithm. HAP is a source localization method that is based on medium calibration. Conventional array processing algorithms, such as matched-field processing (MFP), require precise knowledge of the medium between the source and the receiving array, but the HAP method relaxes this stiff requirement. It calibrates the integrated effect of a large portion of the medium, and geoacoustic parameter estimation is needed only for a small portion of the ocean between the unknown source and the virtual array. The virtual array is constructed by moving a reference source to incremental depths of the water column near the target. Theoretical analysis is provided using the WKB approximation for a range-dependent ocean. The numerical simulation is performed using a high-order parabolic equation (PE) code for a range-dependent analytical sound-speed profile (SSP) and measured sound-speed data from the North Pacific Ocean. The results of the analysis and simulation show the possibility of localizing a source at large distance with great accuracy.
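For context, below is a minimal sketch of the conventional Bartlett matched-field processor that HAP is contrasted with. The free-space replica model, array geometry, and search grid are illustrative assumptions, not the paper's ocean environment or PE model.

```python
# A minimal sketch of a conventional (Bartlett) matched-field processor:
# the measured pressure vector on a vertical array is correlated against
# modelled replica vectors over a grid of candidate source ranges and depths.
# The free-space Green's-function replicas below are purely illustrative.
import numpy as np

f, c = 100.0, 1500.0                        # source frequency (Hz), nominal sound speed (m/s)
k = 2 * np.pi * f / c
rx_depths = np.arange(10.0, 210.0, 10.0)    # 20-element vertical array

def replica(src_range, src_depth):
    """Unit-norm replica vector for a candidate source position."""
    r = np.hypot(src_range, rx_depths - src_depth)
    p = np.exp(1j * k * r) / r
    return p / np.linalg.norm(p)

# Synthetic "measured" field from a source at 3 km range and 60 m depth.
d = replica(3000.0, 60.0)

ranges = np.arange(500.0, 6000.0, 100.0)
depths = np.arange(5.0, 200.0, 5.0)
ambiguity = np.array([[np.abs(np.vdot(replica(r, z), d)) ** 2 for r in ranges]
                      for z in depths])

iz, ir = np.unravel_index(ambiguity.argmax(), ambiguity.shape)
print(f"Bartlett peak at range {ranges[ir]:.0f} m, depth {depths[iz]:.0f} m")
```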
Continuous recordings of ambient seismic noise across large seismic arrays allow a new type of processing using the cross-correlation technique on broadband data. We propose to apply double beamforming (DBF) to cross correlations to extract a particular wave component of the reconstructed signals. We focus here on the extraction of the surface waves to measure phase velocity variations with great accuracy. DBF acts as a spatial filter between two distant subarrays after cross correlation of the wavefield between each single receiver pair. During the DBF process, horizontal slowness and azimuth are used to select the wavefront on both subarray sides. DBF increases the signal-to-noise ratio, which improves the extraction of the dispersive wave packets. This combination of cross correlation and DBF is used on the Transportable Array (USArray) for the central U.S. region. A standard model of surface wave propagation is constructed from a combination of the DBF and cross correlations at different offsets and for different frequency bands. The perturbation (phase shift) between each beam and the standard model is inverted. High-resolution maps of the phase velocity of Rayleigh and Love waves are then constructed. Finally, the addition of azimuthal information provided by DBF is discussed, to construct curved rays that replace the classical great-circle path assumption.
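The core DBF step described above can be sketched as follows. The subarray geometry, the synthetic correlation gathers, and the restriction of the slowness scan to the inter-array azimuth are simplifying assumptions for illustration, not the processing applied to the USArray data.

```python
# A minimal sketch of double beamforming on noise cross-correlations, under
# simplifying assumptions (plane-wave moveout, synthetic correlation gathers,
# slowness scanned only along the inter-array azimuth).
import numpy as np

fs = 10.0
t = np.arange(0.0, 120.0, 1 / fs)
rng = np.random.default_rng(1)

# Two small subarrays roughly 200 km apart (local east-north coordinates, km).
sub_a = rng.uniform(-10, 10, size=(5, 2))
sub_b = rng.uniform(-10, 10, size=(5, 2)) + np.array([200.0, 0.0])
c_true = 3.0                                            # km/s phase velocity

def synth_corr(xa, xb):
    """Synthetic correlation: a Ricker-like arrival at the inter-station travel time."""
    arg = (np.pi * 0.2 * (t - np.linalg.norm(xb - xa) / c_true)) ** 2
    return (1 - 2 * arg) * np.exp(-arg) + 0.05 * rng.standard_normal(t.size)

corr = np.array([[synth_corr(xa, xb) for xb in sub_b] for xa in sub_a])

def dbf_power(u_a, u_b):
    """Delay-and-sum over both subarrays: remove the plane-wave moveout on the
    source side (u_a) and receiver side (u_b), stack all pairs, return the peak."""
    stack = np.zeros_like(t)
    for xa, row in zip(sub_a, corr):
        for xb, tr in zip(sub_b, row):
            shift = u_b @ (xb - sub_b.mean(0)) - u_a @ (xa - sub_a.mean(0))
            stack += np.interp(t, t - shift, tr)
    return np.max(np.abs(stack))

# Scan horizontal slowness along the inter-array azimuth on both subarray sides.
slows = np.arange(0.25, 0.45, 0.01)                     # s/km
best = max(((sa, sb, dbf_power(np.array([sa, 0.0]), np.array([sb, 0.0])))
            for sa in slows for sb in slows), key=lambda p: p[2])
print(f"best slowness: {best[0]:.2f} s/km (subarray A), {best[1]:.2f} s/km (subarray B)")
```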
The detection and location capability of the International Monitoring System for small seismic events in the continental and oceanic regions surrounding the Sea of Japan is determined mainly by three primary seismic arrays: USRK, KSRS, and MJAR. Body wave arrivals are coherent on USRK and KSRS up to frequencies of around 4 Hz, and classical array processing methods can detect and extract features for most regional signals on these stations. We demonstrate how empirical matched field processing (EMFP), a generalization of frequency-wavenumber or f-k analysis, can contribute to calibrated direction estimates which mitigate bias resulting from near-station geological structure. It does this by comparing the narrowband phase shifts between the signals on different sensors, observed at a given time, with corresponding measurements on signals from historical seismic events. The EMFP detection statistic is usually evaluated as a function of source location rather than slowness space, and the size of the geographical footprint valid for EMFP templates is affected by array geometry, the available signal bandwidth, and Earth structure over the propagation path. The MJAR array has similar dimensions to KSRS but is sited in far more complex geology, which results in poor parameter estimates with classical f-k analysis for all signals lacking energy at 1 Hz or below. EMFP mitigates the signal incoherence to some degree, but the geographical footprint valid for a given matched field template on MJAR is very small. Spectrogram beamforming provides a robust detection algorithm for high-frequency signals at MJAR. The array aperture is large enough that f-k analysis performed on continuous AR-AIC functions, calculated from optimally bandpass-filtered signals at the different sites, can provide robust slowness estimates for regional P-waves. Given a significantly higher SNR for regional S-phases on the horizontal components of the 3-component site of MJAR, we would expect incoherent detect...
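The EMFP idea of replacing plane-wave steering delays with empirically measured inter-sensor phase structure can be sketched as below. The waveforms, sensor delays, and frequency band are invented placeholders, not calibrated templates from the stations named above.

```python
# A minimal sketch of the empirical matched-field idea: instead of plane-wave
# (f-k) steering delays, the narrowband inter-sensor phase structure of a
# historical "master" event is used as the steering template, and new data are
# scored against it with a Bartlett-type statistic. Waveforms, delays, and the
# frequency band below are illustrative assumptions.
import numpy as np

fs, nsensor, nsamp = 40.0, 9, 2048
t = np.arange(nsamp) / fs
rng = np.random.default_rng(2)

def narrowband_template(waveforms, fmin=1.0, fmax=4.0):
    """Unit-norm complex sensor vectors, one per frequency bin in the band."""
    spec = np.fft.rfft(waveforms, axis=1)
    freqs = np.fft.rfftfreq(nsamp, 1 / fs)
    v = spec[:, (freqs >= fmin) & (freqs <= fmax)]
    return v / np.linalg.norm(v, axis=0, keepdims=True)

def emfp_statistic(template, waveforms):
    """Bartlett-type match between the template and the data phase structure."""
    d = narrowband_template(waveforms)
    return np.mean(np.abs(np.sum(np.conj(template) * d, axis=0)) ** 2)

# Master-event recordings define the template; a later event with the same
# inter-sensor delays (plus noise) should score higher than pure noise.
delays = rng.uniform(0.0, 0.5, nsensor)
def event(noise_level):
    return np.array([np.sin(2 * np.pi * 2.0 * (t - d)) * np.exp(-(t - d - 10.0) ** 2)
                     + noise_level * rng.standard_normal(nsamp) for d in delays])

template = narrowband_template(event(0.05))
print("repeat event score:", emfp_statistic(template, event(0.2)))
print("noise-only score:  ", emfp_statistic(template, rng.standard_normal((nsensor, nsamp))))
```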
Author:
WETHERELL, C.
Computer Science Group, Department of Applied Science, Hertz Hall, University of California at Davis, P.O. Box 808, Livermore, CA 94550, U.S.A.
The Department of Energy (DoE) has a long history of large-scale scientific calculation on the most advanced ‘number-crunching’ computers. Recently, an effort to improve communications and software sharing among DoE laboratories has been underway. One result of this sharing is a project to design and implement a common language. That language turns out to be FORTRAN 77 significantly extended with new data structures, control structures and array processing. The data used to design the array processing feature is surprising and likely to be of use to others working in scientific language design; it is reported here so that others may profit from DoE's experience.
In this paper we present array processing techniques used to discriminate between propagating wideband biological signals. Velocity and frequency responses for a circular array designed to enhance nerve signal from a ...
A method of space-time array processing is introduced that is based on the model-based approach. The signal and measurement systems are placed into state-space form, thereby allowing the unknown parameters of the model, such as signal bearings, to be estimated by an extended Kalman filter. A major advantage of the model-based approach is that there is no inherent limitation to the degree of sophistication of the models used, and therefore it can deal with other than plane-wave models, such as cylindrically or spherically spreading propagation models, as well as more sophisticated representations such as the normal mode and the parabolic equation propagation models. Since the processor treats the parameters of interest as unknown parameters to be estimated, there is no explicit beamformer structure, and therefore no accuracy limitations such as fixed beam bin sizes and a predetermined number of preformed beams. After an exposition of the underlying theory, the performance of the processor is evaluated with synthesized data sets. The results indicate that the method is a highly effective approach that is capable of significantly outperforming conventional array processors. (C) 1997 Acoustical Society of America.
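A minimal sketch of the model-based idea, assuming a single plane-wave source and a scalar bearing state, is given below. It is not the processor evaluated in the paper; the array geometry, source frequency, and noise levels are illustrative assumptions.

```python
# A minimal sketch of the model-based approach: the plane-wave measurement
# model is written in state-space form and an extended Kalman filter estimates
# the bearing directly, with no explicit beamformer. A single bearing state is
# tracked; geometry, frequency, and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(3)
c, f, fs = 1500.0, 50.0, 1000.0               # sound speed (m/s), source (Hz), sampling (Hz)
omega, k_wav = 2 * np.pi * f, 2 * np.pi * f / c
x = np.arange(8) * 10.0                       # 8-element line array, 10 m spacing
true_bearing = np.deg2rad(35.0)

def h(theta, t):
    """Predicted array snapshot for a unit-amplitude plane wave from bearing theta."""
    return np.cos(omega * t - k_wav * x * np.sin(theta))

def h_jac(theta, t):
    """Derivative of the measurement model with respect to the bearing."""
    return np.sin(omega * t - k_wav * x * np.sin(theta)) * k_wav * x * np.cos(theta)

# Extended Kalman filter with a scalar state (bearing) and random-walk dynamics.
theta_hat = np.deg2rad(30.0)                  # initial guess
P = np.deg2rad(10.0) ** 2                     # initial state variance
Q, R = 1e-8, 0.1 * np.eye(x.size)             # process and measurement noise

for n in range(2000):
    t_n = n / fs
    z = h(true_bearing, t_n) + np.sqrt(0.1) * rng.standard_normal(x.size)
    P = P + Q                                 # predict (random-walk state)
    H = h_jac(theta_hat, t_n)                 # linearized measurement matrix (1 x N)
    S = P * np.outer(H, H) + R                # innovation covariance
    K = P * H @ np.linalg.inv(S)              # Kalman gain (length-N row)
    theta_hat += K @ (z - h(theta_hat, t_n))  # state update
    P = (1.0 - K @ H) * P                     # variance update

print("estimated bearing (deg):", round(float(np.degrees(theta_hat)), 2))
```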
Precise determination of hypocentral depth remains one of the most relevant problems in earthquake seismology. It is well known that using depth phases allows for significant improvement in event depth determination; however, routinely and systematically picking such phases, for teleseismic or regional arrivals, is problematic due to poor signal-to-noise ratios around the pP and sP phases. To overcome this limitation, we have taken advantage of the additional information carried by seismic arrays. We use velocity spectral analysis to precisely measure pP-P times. The individual estimates obtained at different subarrays, for all pairs of earthquakes, are combined using a double-difference algorithm, in order to precisely map seismicity in regions where it is tightly clustered. We illustrate this method by relocating intermediate-depth earthquakes in the Nazca subducting plate, beneath northern Chile, where we confirm the existence of a narrowly spaced double seismic zone, previously imaged using a local dedicated deployment. As a second example we relocate the aftershock sequence of the 2014 Mw 7.9 intermediate-depth Rat Islands earthquake, and provide evidence of a subvertical fault plane for the main shock. Finally, we show that the resulting relative depth errors are typically smaller than 2 km.
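The velocity spectral analysis step can be sketched as below. The station offsets, noise level, and the 8 s pP delay are synthetic placeholders, not the measurements or relocation results reported here.

```python
# A minimal sketch of velocity spectral analysis (a vespagram): array traces
# are delayed and stacked over trial slownesses, enhancing P and the weaker
# depth phase pP so the pP-P time can be read from the best beam. All traces
# and geometry below are synthetic placeholders.
import numpy as np

fs = 20.0
t = np.arange(0.0, 60.0, 1 / fs)
rng = np.random.default_rng(4)
offsets = rng.uniform(-15.0, 15.0, 12)        # station offsets (km) along the great circle

def wavelet(t0):
    """Ricker wavelet (1 Hz centre frequency) arriving at time t0."""
    arg = (np.pi * 1.0 * (t - t0)) ** 2
    return (1 - 2 * arg) * np.exp(-arg)

# Synthetic teleseismic P at 20 s and a weaker pP 8 s later, common slowness.
slow_true = 6.8 / 111.19                      # s/deg converted to s/km
traces = np.array([wavelet(20 + slow_true * dx) + 0.4 * wavelet(28 + slow_true * dx)
                   + 0.3 * rng.standard_normal(t.size) for dx in offsets])

def beam(slowness):
    """Linear slant stack of all traces at a trial horizontal slowness."""
    return np.mean([np.interp(t, t - slowness * dx, tr)
                    for dx, tr in zip(offsets, traces)], axis=0)

# Pick the slowness with maximum beam power, then read the pP-P time off the beam.
slows = np.arange(0.02, 0.10, 0.002)
b = beam(slows[int(np.argmax([np.sum(beam(s) ** 2) for s in slows]))])
win_p, win_pp = (t > 15) & (t < 25), (t > 25) & (t < 35)
t_p = t[win_p][np.argmax(np.abs(b[win_p]))]
t_pp = t[win_pp][np.argmax(np.abs(b[win_pp]))]
print(f"measured pP-P time: {t_pp - t_p:.2f} s")
```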
In this demonstration, we present AscotDB, a new tool for the analysis of telescope image data. AscotDB results from the integration of ASCOT, a Web-based tool for the collaborative analysis of telescope images and their metadata, and SciDB, a parallel array processing engine. We demonstrate the novel data exploration supported by this integrated tool on a 1 TB dataset comprising scientifically accurate simulated telescope images. We also demonstrate novel iterative-processing features that we added to SciDB in order to support this use case.