The cloud provides low-cost and flexible IT resources (hardware and software) over the Internet. As cloud providers seek to drive greater business outcomes and cloud environments become more complex, it is evident that the era of the intelligent cloud has arrived. The intelligent cloud faces several challenges, including optimizing economical cloud service configurations and adaptively allocating resources. In particular, there is a growing trend toward using machine learning to improve the intelligence of cloud management. This article discusses an architecture for intelligent cloud resource management with deep reinforcement learning. Deep reinforcement learning enables the cloud to automatically and efficiently negotiate the most appropriate configuration directly from complex cloud environments. Finally, we give an example that evaluates and demonstrates the capability of the intelligent cloud with deep reinforcement learning.
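As a rough illustration of the idea only (not the architecture from the article, which uses deep reinforcement learning), the sketch below uses plain tabular Q-learning to let an agent pick a virtual-machine count from an observed load level; the state and action spaces, reward function, and workload model are all invented for the example.

```python
import numpy as np

# Minimal tabular Q-learning sketch for cloud resource allocation.
# States: discretized workload level (0..4). Actions: number of VMs (1..5).
# Reward: served demand minus a per-VM cost. All values are illustrative.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 5
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(load, vms):
    served = min(load, vms)          # each VM handles one unit of load
    return 2.0 * served - 0.5 * vms  # revenue for served load, cost per VM

state = rng.integers(n_states)
for step in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = rng.integers(n_actions)
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, action + 1)
    next_state = rng.integers(n_states)  # toy i.i.d. workload process
    # Q-learning update toward the one-step bootstrapped target
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# Learned policy: VMs to provision for each workload level
print([int(np.argmax(Q[s])) + 1 for s in range(n_states)])
```

In a deep variant, the Q-table would be replaced by a neural network over a richer state description of the cloud environment, which is the direction the article takes.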
Automatic target recognition (ATR) is the process where computer algorithms are used to detect and classify objects or regions of interest in sensor data [1]. ATR algorithms have been developed for a broad range of sensors, including electro-optic (EO), infrared (IR), and microwave (radar) sensors. A key advantage of microwave radar is its utility in all weather conditions and at any time of day compared with other types of sensors. One limitation of many radar systems' ATR performance is poor spatial resolution in the range and cross-range dimensions. While the range resolution of a microwave radar system can be straightforwardly improved by increasing the system's radio-frequency bandwidth, superior cross-range (or azimuth) resolution requires an increase in antenna size. In order to achieve resolution similar to EO or IR systems, an enormous antenna must be used for transmission and reception, hindering the microwave radar system's general applicability to ATR problems. However, by operating a synthetic aperture radar (SAR), the large antenna requirement can be overcome, and very fine cross-range resolution can be obtained with a modest "real" aperture antenna. Indeed, SAR has been used to generate high-resolution 2-D or 3-D object images at a variety of different microwave frequencies. In particular, wide-angle SAR images can contain important features of an object from a diverse set of observation angles collected by a radar system orbiting around a target or scene. These feature-rich wide-angle SAR images are useful for ATR due to their high resolution and comprehensive coverage of the target. Interference, jamming, or data dropouts, however, are commonplace in practical SAR environments and result in an incomplete data set. These missing and corrupted data cause substantial degradation in the generated SAR imagery, hampering their utility for subsequent ATR processing, especially when data-independent image formation algorithms are used. A prime example of these difficulties…
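The resolution trade-off described above can be made concrete with the standard textbook relations; this is a back-of-the-envelope sketch, and the carrier frequency, bandwidth, range, and antenna length are arbitrary example values, not numbers from the abstract. Range resolution is set by bandwidth, real-aperture azimuth resolution by wavelength, range, and antenna length, and stripmap SAR azimuth resolution by roughly half the physical antenna length.

```python
# Back-of-the-envelope radar resolution estimates (standard textbook relations).
# Example numbers are arbitrary and chosen only to illustrate the trade-off.
c = 3.0e8          # speed of light, m/s
B = 600e6          # RF bandwidth, Hz
f0 = 10e9          # carrier frequency, Hz (X-band)
R = 10e3           # range to target, m
D = 2.0            # physical ("real") antenna length, m

wavelength = c / f0
range_res = c / (2 * B)                  # improves with bandwidth
real_azimuth_res = wavelength * R / D    # real-aperture cross-range resolution
sar_azimuth_res = D / 2                  # stripmap SAR cross-range resolution

print(f"range resolution:            {range_res:.2f} m")
print(f"real-aperture azimuth res.:  {real_azimuth_res:.1f} m")
print(f"SAR azimuth resolution:      {sar_azimuth_res:.2f} m")
```

With these illustrative numbers the real aperture yields roughly 150 m of cross-range resolution at 10 km, while the synthesized aperture reaches about 1 m with the same 2 m antenna, which is the point the abstract is making.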
The rich computational dynamics coupled with simple training in reservoir computing (RC) algorithms make them a natural choice for spatiotemporal learning systems on embedded platforms. In this article, we study three variants of the reservoir algorithm: the echo-state network (ESN), the liquid-state machine (LSM), and time-delay reservoirs (TDRs). A digital reservoir architecture is used as a baseline test bed for benchmarking epileptic seizure detection. An average accuracy of 85% is observed across the three variants of the algorithm, and the effectiveness of the reservoir is assessed with different metrics.
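As context for the ESN variant, a minimal echo-state network can be written in a few lines of NumPy: a fixed random reservoir with a trained linear readout. The reservoir size, spectral radius, ridge parameter, and toy task below are illustrative and unrelated to the seizure-detection study.

```python
import numpy as np

# Minimal echo-state network (ESN): fixed random reservoir, trained linear readout.
rng = np.random.default_rng(1)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)            # reservoir state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]

# Ridge-regression readout: the only trained part of the ESN.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The LSM and TDR variants differ mainly in the reservoir itself (spiking neurons or a single delayed nonlinear node), while the cheap linear readout training is what makes all three attractive for embedded platforms.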
The main concern of any space mission operation is to ensure the health and safety of the spacecraft. The worst case is probably the loss of a mission, but the more common interruption of spacecraft functionality can also compromise mission objectives. Spacecraft telemetry holds information related to the state of health (SOH) of the spacecraft's subsystems. Each parameter carries information that represents a time-variant property (i.e., a status or a measurement) to be checked. Moreover, the tremendous increase in telemetry data volume and complexity drives the need for more efficient and scalable data processing systems. As a consequence, telemetry monitoring applications are continuously improved in order to reduce the time required to respond to changes in a spacecraft's SOH; quickly grasping the current state of the spacecraft is thus very important for responding to failures. Furthermore, the progressive growth in spacecraft leads to an increasing level of standardization in spacecraft operations.
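A simple way to picture the per-parameter checking the abstract describes is an out-of-limit scan over telemetry samples; the parameter names, limits, and readings below are made up purely for illustration and are not from any mission.

```python
# Toy out-of-limit check over spacecraft telemetry samples.
# Parameter names, limits, and readings are purely illustrative.
limits = {
    "battery_voltage": (24.0, 33.0),   # volts
    "panel_temp":      (-80.0, 90.0),  # degrees C
    "wheel_speed":     (0.0, 6000.0),  # rpm
}

telemetry = [
    {"t": 0, "battery_voltage": 28.1, "panel_temp": 15.0, "wheel_speed": 3200.0},
    {"t": 1, "battery_voltage": 23.4, "panel_temp": 16.2, "wheel_speed": 3185.0},
    {"t": 2, "battery_voltage": 27.9, "panel_temp": 95.5, "wheel_speed": 3190.0},
]

for sample in telemetry:
    for name, (lo, hi) in limits.items():
        value = sample[name]
        if not lo <= value <= hi:
            print(f"t={sample['t']}: {name} = {value} outside [{lo}, {hi}]")
```

Real monitoring systems layer trend analysis and anomaly detection on top of such limit checks, which is where the scalability concerns raised above come in.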
In this article, the authors present a framework for offloading collective operations to programmable logic for use in applications using the Message Passing Interface (MPI). They evaluate their approach on the Xilinx Zynq system on a chip and the NetFPGA, a network interface card based on a field-programmable gate array. Results are presented from microbenchmarks and a benchmark scientific application.
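For context on what is being offloaded, the fragment below issues a standard MPI collective from Python via mpi4py; it shows only the ordinary host-side call and is not the authors' FPGA offload framework.

```python
# Host-side view of an MPI collective (run with: mpiexec -n 4 python allreduce.py).
# This is the kind of operation the article offloads to programmable logic;
# the snippet itself is plain mpi4py, not the authors' framework.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = rank + 1                           # each rank contributes one value
total = comm.allreduce(local, op=MPI.SUM)  # collective sum across all ranks

print(f"rank {rank}: global sum = {total}")
```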
The papers in this special section focus on new developments in the area of distributed coordination control and industrial applications.
This special issue introduces current developments in bioinformatics, the use of computational techniques in the service of the life sciences, in the specific context of the analysis and interpretation of DNA molecules, where DNA is the primary informational molecule of all life. To fully appreciate the papers in this issue, some background knowledge in biology is required. The DNA in a cell is organized into a genome consisting of one or more chromosomes, each of which is a DNA molecule that can be thought of as a long string over the alphabet of nucleotides A, C, G, and T. Within a DNA string, or sequence, are substrings called genes, which normally encode the recipes for the functional proteins in the cell. The human genome is organized into 23 pairs of chromosomes, one maternal and one paternal chromosome set, containing a total of approximately six billion nucleotides. Amazingly, all ~1 trillion cells in one's body, of diverse form and function, contain the same "molecular blueprint", a near-perfect copy of the genome.
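Since the abstract frames a chromosome as a string over {A, C, G, T}, the toy snippet below makes that concrete; the sequence and the "gene" motif are invented, and the code simply scans for the motif and reports GC content.

```python
# A chromosome as a string over the nucleotide alphabet {A, C, G, T}.
# The sequence and the "gene" motif below are invented for illustration.
chromosome = "ATGCGTACGTTAGCATGGCATCGATCGTACGATCGTAGCTAGCTAGGCTA"
gene = "ATGGCATCG"

position = chromosome.find(gene)                            # -1 if absent
gc = sum(base in "GC" for base in chromosome) / len(chromosome)

print(f"gene found at position: {position}")
print(f"GC content: {gc:.2%}")
```

Much of sequence bioinformatics builds on exactly this string view, scaled to chromosomes hundreds of millions of characters long.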
Visual tracking is an important problem in the field of robotics, especially for cameras mounted on agile flying robots. The information generated from these images by the tracking algorithm can be used for autonomous vehicle navigation [1] or human-robot interaction [2], [3]. Image-based tracking algorithms are categorized as point tracking, kernel tracking, or silhouette tracking [4]. First, distinguishing features, such as color, shape, and region, are selected to identify objects for visual tracking. Then, the tracked object is modeled on the basis of the selected features. Finally, a correspondence or similarity measure between the target and the candidate across frames is constructed, such as the sum of squared differences of pixel intensity values [5], mutual information (MI) [6], [7], normalized cross-correlation, or the Bhattacharyya coefficient [8]. Supervised or unsupervised online learning algorithms are also used for visual tracking.
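To make the similarity measures named above concrete, the short sketch below computes the sum of squared differences, normalized cross-correlation, and Bhattacharyya coefficient between a template patch and a candidate patch; the random patches are placeholders for real image regions.

```python
import numpy as np

# Patch similarity measures commonly used in visual tracking.
# The two 16x16 "patches" are random placeholders for real image regions.
rng = np.random.default_rng(0)
template = rng.random((16, 16))
candidate = np.clip(template + 0.05 * rng.standard_normal((16, 16)), 0.0, 1.0)

# Sum of squared differences (lower = more similar).
ssd = np.sum((template - candidate) ** 2)

# Normalized cross-correlation (closer to 1 = more similar).
t = template - template.mean()
c = candidate - candidate.mean()
ncc = np.sum(t * c) / np.sqrt(np.sum(t ** 2) * np.sum(c ** 2))

# Bhattacharyya coefficient between intensity histograms (closer to 1 = more similar).
p, _ = np.histogram(template, bins=16, range=(0, 1))
q, _ = np.histogram(candidate, bins=16, range=(0, 1))
p = p / p.sum()
q = q / q.sum()
bc = np.sum(np.sqrt(p * q))

print(f"SSD = {ssd:.3f}, NCC = {ncc:.3f}, Bhattacharyya = {bc:.3f}")
```

A tracker evaluates such a measure over many candidate locations per frame and follows the best match, which is why cheap, robust measures matter on agile flying platforms.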
The papers in this special section focus on multimedia data retrieval and classification via large-scale systems. Today, large collections of multimedia data are being created explosively in different fields and have attracted increasing interest in the multimedia research area. Large-scale multimedia data provide unprecedented opportunities to address many challenging research problems, e.g., enabling generic visual classification that bridges the well-known semantic gap by exploring large-scale data, offering a promising path toward in-depth multimedia understanding, and discerning patterns and making better decisions by analyzing the large pool of data. Therefore, techniques for large-scale multimedia retrieval, classification, and understanding are highly desired. At the same time, the explosion of multimedia data creates an urgent need for more sophisticated and robust models and algorithms to retrieve, classify, and understand these data. Another interesting challenge is how traditional machine learning algorithms can be scaled up to millions or even billions of items with thousands of dimensions; this has motivated the community to design parallel and distributed machine learning platforms, exploit GPUs, and develop practical algorithms. In addition, it is important to exploit the commonalities and differences between tasks, e.g., image retrieval and classification have much in common, while different indexing methods evolve in a mutually supporting way.
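One common answer to the scaling question above is approximate indexing. As a hedged illustration not tied to any paper in the section, the sketch below hashes high-dimensional vectors with random hyperplanes (locality-sensitive hashing), so candidate neighbors can be pulled from matching buckets instead of scanning the whole collection; the data, dimensionality, and bit count are arbitrary.

```python
import numpy as np

# Random-hyperplane locality-sensitive hashing (LSH) for approximate retrieval.
# Dataset size, dimensionality, and number of hash bits are illustrative.
rng = np.random.default_rng(0)
n_items, dim, n_bits = 50_000, 256, 16

data = rng.standard_normal((n_items, dim)).astype(np.float32)
planes = rng.standard_normal((n_bits, dim)).astype(np.float32)

def hash_codes(x):
    """Map vectors to n_bits-bit integer codes via signs of random projections."""
    bits = (x @ planes.T) > 0
    return bits.astype(np.int64) @ (1 << np.arange(n_bits))

codes = hash_codes(data)

# Query: inspect only items whose code matches, instead of all n_items.
query = data[42] + 0.01 * rng.standard_normal(dim).astype(np.float32)
q_code = hash_codes(query[None, :])[0]
candidates = np.flatnonzero(codes == q_code)
print(f"scanned {candidates.size} candidates out of {n_items}")
```

Production systems combine many such hash tables (or tree- and graph-based indexes) and shard them across machines, which is the kind of platform engineering the section's scaling discussion points to.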