Multi-Input Multi-Output (MIMO) wireless communication systems commonly employ beamforming techniques based on the Singular Value Decomposition (SVD). In such systems, if no channel coding is employed, the full diversity order provided by the channel is achieved when a single symbol is transmitted over multiple channels; however, this property is lost whenever multiple symbols are transmitted simultaneously. The full diversity order can be restored when channel coding is added to such a system. For example, when Bit-Interleaved Coded Modulation (BICM) is combined with this technique, the full diversity order of NM in an M × N MIMO channel transmitting S parallel streams is achievable, provided SR_c ≤ 1, where R_c is the rate of the BICM convolutional code. In this paper, we present multiple beamforming with constellation precoding, which can achieve the full diversity order with both uncoded and BICM-coded SVD systems. An analytical proof of this property is provided. In addition, to reduce the computational complexity of Maximum Likelihood (ML) decoding, we introduce a Sphere Decoding (SD) technique. This technique achieves a reduction in computational complexity of several orders of magnitude, not only with respect to conventional ML decoding but also with respect to conventional SD.
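As background for SVD-based multiple beamforming, the sketch below is a minimal Python/NumPy illustration of how the channel's singular vectors serve as transmit precoder and receive combiner so that S parallel streams see independent scalar subchannels. It does not include the constellation precoding or sphere decoding proposed in the paper; the antenna counts, SNR, and QPSK mapping are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# M transmit antennas, N receive antennas, S parallel streams (S <= min(M, N)).
M, N, S = 4, 4, 2

# Rayleigh-fading MIMO channel: an N x M matrix of unit-variance complex Gaussians.
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

# SVD of the channel: H = U diag(sigma) V^H.  The columns of V are the transmit
# beamformers; the columns of U are the receive combiners.
U, sigma, Vh = np.linalg.svd(H)
V = Vh.conj().T

# S unit-energy QPSK symbols, one per stream.
bits = rng.integers(0, 2, size=(2, S))
x = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

# Precode onto the S strongest singular directions and send through the channel.
snr_db = 20.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
noise = noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ (V[:, :S] @ x) + noise

# Receive combining with U^H diagonalizes the channel: stream k sees a scalar
# subchannel with gain sigma[k], so equalization is a per-stream division.
z = U.conj().T[:S, :] @ y
x_hat = z / sigma[:S]
print("sent:     ", np.round(x, 2))
print("recovered:", np.round(x_hat, 2))
```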
ISBN (Print): 9781605588186
In this paper we lay out the computational challenges involved in effectively simulating complex phenomena such as sequestering CO2 in oil and gas reservoirs. The challenges arise at multiple levels: (i) the computational complexity of simulating the fundamental processes; (ii) the resource requirements of the computationally demanding simulations; (iii) the need for integrating real-time, data-intensive and computationally intensive simulations; and (iv) the need to implement all of these in a robust, scalable, and extensible approach. We outline the architecture and implementation of the solution we developed in response to these requirements, and discuss results that validate the claim that our solution scales to effectively solve the desired problem sizes and thus provides the capability to generate novel scientific insight.
Patent classification is a large-scale hierarchical text classification (LSHTC) task. Though comprehensive comparisons, of both learning algorithms and feature selection strategies, have been made in the text cate...
This paper describes a statistical machine translation system for our participation in the WMT10 shared task. Based on MOSES, our system is capable of translating German, French and Spanish into English. Our main con...
Engineering data, especially bills of material (BOMs), can be represented as unordered trees. Since BOMs tend to be proprietary, it is very difficult to obtain BOM datasets for testing data mining algorithms such as clustering and classification. Clustering in particular is a useful method for determining groups of similar BOMs in cases where intelligent product numbering or product families do not exist. This research develops a program to generate suitable datasets of unordered trees subject to a user's specification of tree depth and number of leaf nodes.
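As a rough illustration of such a generator, the hypothetical Python sketch below builds one random unordered tree constrained by a user-specified depth and leaf count; the paper's actual construction rules and parameters are not given in the abstract, so the spine-based scheme here is an assumption.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of an unordered tree (children carry no significant order)."""
    label: int
    depth: int
    children: list = field(default_factory=list)

def generate_tree(max_depth: int, num_leaves: int, seed: int = 0) -> Node:
    """Generate a random unordered tree with its deepest leaf at `max_depth`
    and exactly `num_leaves` leaves.  Simplified construction: one spine of
    internal nodes guarantees the depth; the remaining leaves are attached to
    randomly chosen spine nodes, so no added leaf exceeds the target depth."""
    assert max_depth >= 1 and num_leaves >= 1
    rng = random.Random(seed)
    root = Node(label=0, depth=0)
    spine, node, label = [root], root, 1
    for d in range(1, max_depth + 1):
        child = Node(label=label, depth=d)
        node.children.append(child)
        if d < max_depth:
            spine.append(child)          # internal spine node
        node, label = child, label + 1
    # The spine end is the first leaf; attach the remaining leaves to the spine.
    for _ in range(num_leaves - 1):
        parent = rng.choice(spine)
        parent.children.append(Node(label=label, depth=parent.depth + 1))
        label += 1
    return root

def count_leaves(node: Node) -> int:
    return 1 if not node.children else sum(count_leaves(c) for c in node.children)

if __name__ == "__main__":
    tree = generate_tree(max_depth=4, num_leaves=10)
    print(count_leaves(tree))            # -> 10
```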
We propose a prefetch cache sizing module for use with any sequential prefetching scheme and evaluate its impact on the hit rate. Disk array caches perform sequential prefetching by loading data contiguous to I/O requ...
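The abstract is truncated here, but to make the idea of sequential prefetching and its effect on hit rate concrete, the Python sketch below simulates an LRU block cache that, on each miss, also loads the next few contiguous blocks. The cache size and prefetch degree are illustrative assumptions, and the sketch does not implement the paper's sizing module.

```python
from collections import OrderedDict

def simulate(trace, cache_blocks=256, prefetch_degree=4):
    """Simulate an LRU block cache with simple sequential prefetching:
    on every demand miss, the next `prefetch_degree` contiguous blocks
    are also loaded.  Returns the demand hit rate."""
    cache = OrderedDict()                  # block id -> None, ordered by recency
    hits = 0

    def touch(block):
        cache[block] = None
        cache.move_to_end(block)
        while len(cache) > cache_blocks:
            cache.popitem(last=False)      # evict the least recently used block

    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            for b in range(block, block + prefetch_degree + 1):
                touch(b)                   # demand block plus prefetched blocks
        # (a sizing module would adapt cache_blocks / prefetch_degree here)
    return hits / len(trace)

# A toy trace: two interleaved sequential streams.
trace = [b for i in range(1000) for b in (i, 100_000 + i)]
print(f"hit rate: {simulate(trace):.2f}")
```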
AWM-Olsen is a widely used ground motion simulation code based on a parallel finite difference solution of the 3-D velocity-stress wave equation. This application runs on tens of thousands of cores and consumes severa...
Color and shape descriptions of an image are the most widely used visual features in content-based image retrieval systems. Feature vectors for shape and color can be combined to improve the performance of such systems. In this paper, a novel image retrieval method integrating HSV color quantization and the curvelet transform is proposed. By analyzing the properties of the HSV (Hue, Saturation, Value) color space, a new dividing method that quantizes the HSV color space into 24 non-uniform bins based on HSV soft decision is introduced and used for color histogram generation. The digital curvelet transform is employed for extracting shape features from images, as the curvelet transform has been shown to provide an almost optimal sparse representation of objects with edges. The generated HSV color histogram and the curvelet feature are then combined and weighted for image retrieval, using the Manhattan distance metric as the similarity measure. Experiments on an image database of 565 images show that the combined feature performs well in terms of precision and adaptability.
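To make the color part of the pipeline concrete, the Python sketch below quantizes HSV pixels into 24 non-uniform bins, builds a normalised histogram, and compares histograms with the Manhattan distance. The paper's exact soft-decision thresholds are not given in the abstract, so the bin boundaries used here are illustrative assumptions, and the curvelet shape features are omitted.

```python
import colorsys
import numpy as np

# A hypothetical non-uniform 24-bin HSV partition (illustrative thresholds only):
# 3 achromatic bins (black, grey, white) plus 7 hue ranges x 3 saturation/value
# combinations = 24 bins in total.
HUE_EDGES = [45, 75, 155, 210, 270, 330, 380]    # degrees; red wraps past 360

def hsv_bin(h, s, v):
    """Map one HSV pixel (h in [0, 360), s and v in [0, 1]) to a bin in [0, 24)."""
    if v < 0.2:                          # black
        return 0
    if s < 0.15:                         # achromatic: grey or white, split by value
        return 1 if v < 0.75 else 2
    hue = h if h >= 20 else h + 360      # fold low reds onto the wrap-around range
    hue_idx = next(i for i, edge in enumerate(HUE_EDGES) if hue < edge)
    if s < 0.5:
        sv_idx = 0                       # weakly saturated
    elif v < 0.5:
        sv_idx = 1                       # saturated but dark
    else:
        sv_idx = 2                       # saturated and bright
    return 3 + hue_idx * 3 + sv_idx

def hsv_histogram(rgb_image):
    """Normalised 24-bin colour histogram of an (H, W, 3) uint8 RGB image."""
    hist = np.zeros(24)
    for r, g, b in rgb_image.reshape(-1, 3) / 255.0:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hist[hsv_bin(h * 360.0, s, v)] += 1
    return hist / hist.sum()

def manhattan(h1, h2):
    """Manhattan (L1) distance used as the similarity measure between histograms."""
    return float(np.abs(h1 - h2).sum())

if __name__ == "__main__":
    img_a = np.random.default_rng(1).integers(0, 256, (32, 32, 3), dtype=np.uint8)
    img_b = np.random.default_rng(2).integers(0, 256, (32, 32, 3), dtype=np.uint8)
    print(manhattan(hsv_histogram(img_a), hsv_histogram(img_b)))
```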
The 2009 Copenhagen climate conference concluded that we have to change the way we live. Currently, several carbon management systems are being developed to meet enterprises' requirements. In addition, a number of carbon footprint calculation tools have been developed to fulfill the carbon calculation needs of industries, households, and individuals. However, how to take advantage of the latest IT technologies to improve carbon calculations with comprehensive functionality remains a challenging issue. In this paper, we extend Carbon 2.0, a concept proposed in previous studies, and introduce a new carbon framework based on cloud computing. This new framework is massively scalable and can configure and deliver carbon-related services dynamically and on demand. Moreover, the framework itself is low-carbon.