Relation detection plays a crucial role in Knowledge Base Question Answering (KBQA) because of the high variance of relation expressions in questions. Traditional deep learning methods follow an encoding-comparing p...
Nowadays, the digital earth not only relates to the technologies of surveying and mapping geography, but also includes the analysis and cross-application of various scientific data related to geographic information. It requires a computing and management environment for the corresponding thematic application models that supports online analysis on the digital earth. We therefore propose an integrated deployment method for digital earth thematic application models based on a container cloud, which provides a flexible and scalable information integration and analysis architecture for the digital earth. Firstly, a Docker image building module is designed so that users can quickly package an application model into an image. Then Docker is used to containerize the application model and provide it with a stable running environment. Finally, Kubernetes is used to dynamically manage the cluster resources and realize rapid expansion of computing resources. In addition, HDFS is deployed, and the image files and the various scientific data are stored in HBase and the Apache HDFS file system, respectively, which ensures the read and write speed for the application models and the related data and log files. The Zabbix framework is also used to manage the monitoring indices of the container cloud cluster and thus ensure the execution quality of the application models. Tests on typical application models verify that the method can deploy and apply cross-domain, hetero-architecture, multi-version, and multi-class application models on the digital earth in an integrated manner.
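As a rough illustration of the build-containerize-deploy-scale workflow described above, the following is a minimal sketch, assuming the thematic application model ships with its own Dockerfile and the cluster is reachable via kubectl; the image tag, directory, manifest file, and deployment name are hypothetical, and the sketch does not reproduce the paper's HDFS, HBase, or Zabbix components.

```python
"""Minimal sketch of the image-build and deployment steps (hypothetical names)."""
import subprocess

IMAGE_TAG = "registry.example.com/digital-earth/thematic-model:1.0"  # hypothetical

def build_image(context_dir: str = "./thematic_model") -> None:
    # Step 1: build the application model into a Docker image.
    subprocess.run(["docker", "build", "-t", IMAGE_TAG, context_dir], check=True)

def push_image() -> None:
    # Step 2: push the image so the Kubernetes cluster can pull it.
    subprocess.run(["docker", "push", IMAGE_TAG], check=True)

def deploy(manifest: str = "deployment.yaml") -> None:
    # Step 3: let Kubernetes schedule the containerized model.
    subprocess.run(["kubectl", "apply", "-f", manifest], check=True)

def scale(deployment: str = "thematic-model", replicas: int = 4) -> None:
    # Rapid expansion of computing resources via the replica count.
    subprocess.run(["kubectl", "scale", f"deployment/{deployment}",
                    f"--replicas={replicas}"], check=True)

if __name__ == "__main__":
    build_image()
    push_image()
    deploy()
    scale(replicas=4)
```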
Multi-channel sliding spotlight SAR can achieve high-resolution and wide-swath imaging, and sparse reconstruction algorithms can improve the imaging quality. By applying a sparse reconstruction algorithm to multi-channel sliding spotlight SAR imaging, azimuth ambiguities, noise, and clutter can be suppressed effectively. In this paper, an l_1-regularization-based multi-channel sliding spotlight SAR imaging method is proposed. The proposed method combines the DPCA imaging operators with an l_1 regularization scheme to solve the non-uniform sampling and azimuth ambiguity problems. The proposed method suppresses azimuth ambiguities more effectively than DPCA technology based on the reconstruction filter algorithm in the case of a lower PRF. The experimental results verify the effectiveness of the proposed method.
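The paper's DPCA imaging operator is not given here, so the following is only a generic sketch of the l_1 regularization scheme itself: the standard iterative soft-thresholding (ISTA) solver for min_x 0.5*||y - A x||_2^2 + lam*||x||_1, with a placeholder random matrix standing in for the imaging operator A.

```python
"""Generic ISTA sketch for l_1-regularized reconstruction (placeholder operator A)."""
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    # Proximal operator of the l_1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A: np.ndarray, y: np.ndarray, lam: float, n_iter: int = 200) -> np.ndarray:
    # Step size from the Lipschitz constant of the data-term gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)              # gradient of 0.5*||y - A x||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((128, 256))       # stand-in for the imaging operator
    x_true = np.zeros(256)
    x_true[rng.choice(256, 10, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(128)
    x_hat = ista(A, y, lam=0.1)
    print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```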
Airborne laser scanning (ALS) is a technique used to obtain Digital Surface Models (DSM) and Digital Terrain Models (DTM) efficiently, and filtering is the key procedure used to derive a DTM from point clouds. Generating seed points is the initial step of most filtering algorithms, but existing algorithms usually define a regular window size to generate seed points. This may lead to an inadequate density of seed points and further introduce type I errors, especially in steep terrain and forested areas. In this study, we propose the use of object-based analysis to derive surface complexity information from ALS datasets, which can then be used to improve seed point generation. We assume that an area is complex if it is composed of many small objects and contains no buildings. Using these assumptions, we propose and implement a new segmentation algorithm based on a grid index, which we call the Edge and Slope Restricted Region Growing (ESRGG) algorithm. Surface complexity information is obtained by a statistical analysis of the number of objects derived by segmentation in each area. Then, for complex areas, a smaller window size is defined to generate seed points. Experimental results show that the proposed algorithm greatly improves the filtering results in complex areas, especially in steep terrain and forested areas.
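A minimal sketch of the complexity-adaptive seed-point idea follows: the lowest point of each grid window is taken as a seed, with a smaller window used where the surface is complex. The ESRGG segmentation and object statistics are not reproduced; complexity is supplied as a user callback, and the 20 m / 5 m window sizes are hypothetical values for illustration.

```python
"""Sketch: lowest-point-per-window seed generation with an adaptive window size."""
import numpy as np

def lowest_per_window(xyz: np.ndarray, size: float) -> np.ndarray:
    # Lowest point in each size x size window of the point cloud (N, 3).
    x_min, y_min = xyz[:, 0].min(), xyz[:, 1].min()
    ix = ((xyz[:, 0] - x_min) // size).astype(int)
    iy = ((xyz[:, 1] - y_min) // size).astype(int)
    seeds = []
    for key in set(zip(ix.tolist(), iy.tolist())):
        cell = xyz[(ix == key[0]) & (iy == key[1])]
        seeds.append(cell[np.argmin(cell[:, 2])])
    return np.asarray(seeds)

def adaptive_seeds(xyz: np.ndarray, is_complex, coarse: float = 20.0,
                   fine: float = 5.0) -> np.ndarray:
    # is_complex(x, y) -> bool comes from the surface-complexity analysis.
    coarse_seeds = lowest_per_window(xyz, coarse)
    fine_seeds = lowest_per_window(xyz, fine)
    keep_coarse = ~np.array([is_complex(x, y) for x, y, _ in coarse_seeds])
    keep_fine = np.array([is_complex(x, y) for x, y, _ in fine_seeds])
    # Coarse windows in simple areas, fine windows in complex areas.
    return np.vstack([coarse_seeds[keep_coarse], fine_seeds[keep_fine]])
```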
Domain generalization aims to apply knowledge gained from multiple labeled source domains to unseen target domains. The main difficulty comes from the dataset bias: Training data and test data have different distribut...
ISBN (Print): 9781509060689
Low-poly style illustrations, which have an abstract 3D appearance, have recently become a popular style. Most previous methods require specialized knowledge of 3D modeling and tedious interaction. We present an interactive system that allows non-expert users to easily create low-poly style illustrations. Our system consists of two parts: vertex sampling and mesh rendering. In the vertex sampling stage, we extract a set of candidate points from the image and rank them according to their importance for structure preservation using adaptive thinning. Based on the pre-ranked point list, the user can select an arbitrary number of vertices for the triangle mesh construction. In the mesh rendering stage, we optimize triangle colors to create stereo-looking low-polys. We also provide three tools for flexible modification of the vertex number, color contrast, and local region emphasis. The experimental results demonstrate that our system outperforms the state-of-the-art method via simple user interactions.
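To make the vertex-sampling/mesh-rendering pipeline concrete, here is a minimal sketch that replaces the paper's adaptive-thinning ranking and color optimization with the simplest stand-ins: uniform random vertex sampling, Delaunay triangulation, and flat triangle colors taken from the triangle centroids. It only illustrates the pipeline structure, not the paper's quality.

```python
"""Low-poly sketch: random vertices + Delaunay triangulation + centroid colors."""
import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection

def low_poly(image: np.ndarray, n_vertices: int = 300, seed: int = 0):
    h, w = image.shape[:2]
    rng = np.random.default_rng(seed)
    # Random interior vertices plus the four image corners to cover the frame.
    pts = rng.uniform([0, 0], [w - 1, h - 1], size=(n_vertices, 2))
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
    pts = np.vstack([pts, corners])
    tri = Delaunay(pts)
    verts = pts[tri.simplices]                   # (T, 3, 2) triangle corners
    centroids = verts.mean(axis=1).astype(int)   # (T, 2) in (x, y) order
    colors = image[centroids[:, 1], centroids[:, 0]] / 255.0
    return verts, colors

if __name__ == "__main__":
    img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # placeholder image
    verts, colors = low_poly(img)
    fig, ax = plt.subplots()
    ax.add_collection(PolyCollection(verts, facecolors=colors, edgecolors="none"))
    ax.set_xlim(0, img.shape[1]); ax.set_ylim(img.shape[0], 0); ax.set_aspect("equal")
    plt.savefig("low_poly.png", dpi=150)
```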
Digitizing the land surface temperature (T_s) and surface soil moisture (m_v) is essential for developing the intelligent Digital Earth. Here, we developed a two-parameter, physically based passive microwave remote sensing model for jointly retrieving T_s and m_v from the dual-polarized brightness temperature (T_b) of the Aqua satellite Advanced Microwave Scanning Radiometer (AMSR-E) C-band (6.9 GHz), based on a simplified radiative transfer equation. Validation using in situ T_s and m_v in southern China showed that the average root mean square errors (RMSE) of the T_s and m_v retrievals reach 2.42 K (R^2 = 0.61, n = 351) and 0.025 g cm^-3 (R^2 = 0.68, n = 663), respectively. The results were also validated using global in situ T_s (n = 2362) and m_v (n = 1657) from the International Soil Moisture Network; the corresponding RMSE are 3.44 K (R^2 = 0.86) and 0.039 g cm^-3 (R^2 = 0.83), respectively. The monthly variations of the model-derived T_s and m_v are highly consistent with those of the Moderate Resolution Imaging Spectroradiometer T_s (R^2 = 0.57; RMSE = 2.91 K) and the ECV_SM m_v (R^2 = 0.51; RMSE = 0.045 g cm^-3), respectively. Overall, this paper indicates an effective way to jointly model T_s and m_v using passive microwave remote sensing.
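The retrieval is a two-observation / two-unknown inversion (H- and V-polarized T_b for T_s and m_v). The paper's simplified radiative-transfer model is not reproduced below; the toy forward model (emissivity falling linearly with soil moisture, T_b = e * T_s) and its coefficients are hypothetical placeholders used only to show the joint inversion structure.

```python
"""Toy joint inversion of (T_s, m_v) from dual-polarized brightness temperatures."""
import numpy as np
from scipy.optimize import least_squares

# Hypothetical emissivity coefficients for H and V polarization: e_p = a_p - b_p * m_v.
COEF = {"H": (0.95, 0.60), "V": (0.98, 0.40)}

def forward(ts: float, mv: float) -> np.ndarray:
    # Simulated brightness temperatures [T_b_H, T_b_V] in kelvin.
    return np.array([(a - b * mv) * ts for a, b in COEF.values()])

def retrieve(tb_obs: np.ndarray) -> tuple:
    # Solve the two-equation system for (T_s, m_v) by nonlinear least squares.
    res = least_squares(lambda p: forward(p[0], p[1]) - tb_obs,
                        x0=[290.0, 0.20],
                        bounds=([250.0, 0.0], [330.0, 0.6]))
    return res.x[0], res.x[1]

if __name__ == "__main__":
    tb = forward(303.0, 0.25) + np.random.default_rng(1).normal(0.0, 0.5, 2)
    ts_hat, mv_hat = retrieve(tb)
    print(f"T_s = {ts_hat:.1f} K, m_v = {mv_hat:.3f} g/cm^3")
```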
In this paper, we focus on the use of multi-modal data to achieve a semantic segmentation of aerial imagery. Here, the multi-modal data comprise a true orthophoto, the Digital Surface Model (DSM), and further representations derived from these. Taking data of the different modalities separately and in combination as input to a Residual Shuffling Convolutional Neural Network (RSCNN), we analyze their value for the classification task defined by a benchmark dataset. The derived results reveal an improvement when different types of geometric features extracted from the DSM are used in addition to the true orthophoto.
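A brief sketch of how the modalities can be combined into one network input follows. The RSCNN architecture itself is not reproduced; the particular derived channel (a normalized DSM obtained by subtracting a provided ground elevation) and the per-channel standardization are assumptions for illustration only.

```python
"""Sketch: assembling a multi-modal input tensor (orthophoto + DSM-derived channels)."""
import numpy as np

def multimodal_input(orthophoto: np.ndarray, dsm: np.ndarray,
                     ground: np.ndarray) -> np.ndarray:
    """orthophoto: (H, W, 3) uint8; dsm, ground: (H, W) elevations in metres.
    Returns a float32 (H, W, 5) tensor: RGB + DSM + normalized DSM (nDSM)."""
    ndsm = dsm - ground                                  # height above ground
    stack = np.dstack([orthophoto.astype(np.float32), dsm, ndsm]).astype(np.float32)
    # Per-channel standardization so radiometric and geometric channels are comparable.
    mean = stack.reshape(-1, stack.shape[-1]).mean(axis=0)
    std = stack.reshape(-1, stack.shape[-1]).std(axis=0) + 1e-6
    return (stack - mean) / std
```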
ISBN (Print): 9781509060689
Vehicle re-identification (re-id) plays an important role in the automatic analysis of the drastically increasing volume of urban surveillance video. Similar to other image retrieval problems, vehicle re-id suffers from difficulties caused by the various poses of vehicles, diversified illumination, and complicated environments. Triplet-wise training of a convolutional neural network (CNN) has been studied to address these challenges, where the CNN automates feature extraction from images, and the training uses triplets of (query, positive example, negative example) to capture their relative similarity and learn representative features. Traditional triplet-wise training is weakly constrained and thus fails to achieve satisfactory results. We propose to improve triplet-wise training in two respects: first, a stronger constraint, namely a classification-oriented loss, is added to the original triplet loss; second, a new triplet sampling method based on pairwise images is designed. Our experimental results demonstrate the effectiveness of the proposed methods, which outperform the state of the art on two vehicle re-id datasets derived from real-world urban surveillance videos.
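A minimal sketch of the combined objective (triplet loss augmented with a classification-oriented softmax cross-entropy term) is given below. The CNN, the pairwise triplet sampling, and the weighting are not taken from the paper; the embeddings and logits are placeholder arrays and `alpha` is a hypothetical trade-off weight.

```python
"""Sketch: triplet loss plus a classification-oriented cross-entropy term."""
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Hinge on squared Euclidean distances between embeddings.
    d_ap = np.sum((anchor - positive) ** 2, axis=1)
    d_an = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

def classification_loss(logits, labels):
    # Softmax cross-entropy over vehicle-identity classes.
    z = logits - logits.max(axis=1, keepdims=True)
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

def combined_loss(anchor, positive, negative, logits, labels, alpha=1.0):
    # Stronger constraint: identity classification added to the triplet term.
    return triplet_loss(anchor, positive, negative) + alpha * classification_loss(logits, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = lambda: rng.standard_normal((8, 128))
    logits, labels = rng.standard_normal((8, 10)), rng.integers(0, 10, 8)
    print(combined_loss(emb(), emb(), emb(), logits, labels))
```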
ISBN (Print): 9781509060689
Recently, numerous salient object detection methods have been proposed for different data types. A reliable method that can accurately extract complete salient objects is beneficial to various vision tasks. However, existing methods may fail to highlight the entire salient object uniformly. In this work, we propose a simple and universal framework aiming to improve the detection results of existing methods. To remove inaccurate salient regions, we apply a location prior and adaptive de-noising to the prior saliency maps extracted from existing methods in a pre-processing step. Then, an iterative optimization algorithm considering local smoothness and global similarity is introduced to refine the pre-processed saliency map. The experimental results show that the proposed framework universally enhances the performance of state-of-the-art salient object detection methods for 2D, 3D, and light field data.
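As a rough illustration of an iterative refinement that balances local smoothness against consistency with the prior map, here is a minimal sketch. This is not the paper's optimization; the update rule and the weights `smooth_w` and `prior_w` are illustrative assumptions.

```python
"""Sketch: iterative saliency refinement with smoothness and prior-fidelity terms."""
import numpy as np

def refine(prior: np.ndarray, n_iter: int = 50,
           smooth_w: float = 0.5, prior_w: float = 0.5) -> np.ndarray:
    s = prior.astype(np.float64).copy()
    for _ in range(n_iter):
        # Local smoothness: mean of the four axis-aligned neighbours.
        neigh = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                 np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
        # Global consistency: stay close to the pre-processed prior map.
        s = smooth_w * neigh + prior_w * prior
        s = np.clip(s, 0.0, 1.0)
    return s
```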