Moving object segmentation based on LiDAR is a crucial and challenging task for autonomous driving and mobile robotics. Most approaches explore spatio-temporal information from LiDAR sequences to predict moving object...
In the context of the discrepancies between the early and late universe, we emphasize the importance of independent measurements of the cosmic curvature in the late universe. We present a model-independent measurement of the cosmic curvature parameter Ωk in the late universe with the latest Hubble parameter H(z) measurements and type Ia supernovae (SNe Ia) data. For this, we use two reconstruction methods, the Gaussian process (GP) and artificial neural network (ANN) methods, to reconstruct distances from the H(z) data. Our analysis reveals that the GP method provides the most precise constraint on Ωk, with a constraint precision of ξ(Ωk)=0.13, surpassing recent estimations using similar methods. The GP method consistently indicates a preference for a flat universe at the 2σ confidence level. Moreover, we find that the choice of reconstruction method influences the estimation of Ωk. The ANN reconstruction method is more sensitive to the addition of BAO H(z) data, which brings its constraint precision close to that of the GP method. A discrepancy between the best-fit values obtained by the two methods further indicates this dependence on the reconstruction approach. However, we anticipate that with larger samples and more precise observational H(z) data, the estimation of Ωk with this approach will become more robust and reliable.
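The GP reconstruction step described above can be sketched in miniature. The snippet below is a toy Gaussian-process regression with a fixed squared-exponential kernel applied to synthetic H(z) points; the data values, kernel hyperparameters, and fiducial cosmology are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def gp_reconstruct(z_obs, h_obs, sigma_obs, z_grid, length=1.0, amp=100.0):
    """Minimal zero-mean GP regression with a squared-exponential kernel.

    A toy stand-in for the H(z) reconstruction stage; the hyperparameters
    (length, amp) are fixed here rather than optimized.
    """
    def kernel(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    # Predictive mean: K_* (K + Sigma_n)^(-1) y
    K = kernel(z_obs, z_obs) + np.diag(sigma_obs**2)
    K_s = kernel(z_grid, z_obs)
    alpha = np.linalg.solve(K, h_obs)
    return K_s @ alpha

# Hypothetical H(z) points roughly following a flat-LCDM expansion history
z_obs = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.8])
h_obs = 70.0 * np.sqrt(0.3 * (1 + z_obs)**3 + 0.7)
sigma = np.full_like(z_obs, 3.0)
z_grid = np.linspace(0.1, 1.8, 50)
h_rec = gp_reconstruct(z_obs, h_obs, sigma, z_grid)
```

The reconstructed H(z) curve on `z_grid` is what would subsequently be integrated to obtain distances for the curvature estimate.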
Evolutionary transfer optimization (ETO) serves as "a new frontier in evolutionary computation research", avoiding the zero reuse of experience and knowledge from solved problems in traditional evolutiona...
As "a new frontier in evolutionary computation research", evolutionary transfer optimization (ETO) will overcome the traditional paradigm of zero reuse of related experience and knowledge from solved past pro...
With the development and increasing variety of 3D sensors, cross-source point cloud registration has become an emerging research topic in 3D reconstruction in recent years, with wide applications in the 3D reconstruction of spatial scenes and the construction of maps for unmanned systems. To address these problems, the main contributions of this paper are as follows: we propose a point cloud down-sampling algorithm based on adaptive voxel grid filtering and build a point cloud feature extraction model based on a deep network. Unlike traditional point cloud registration, which searches for correspondences, we build a relationship model between point cloud features and the transformation matrix, and iteratively solve for the transformation matrix with an improved LK algorithm. Meanwhile, we propose an unsupervised loss function based on bidirectional Euclidean distance. Experimental comparison shows that the 3D reconstruction quality of the proposed cross-source point cloud registration method surpasses that of traditional point cloud registration.
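The voxel-grid filtering stage can be illustrated with a plain (non-adaptive) voxel-grid down-sampler that keeps one centroid per occupied voxel; the paper's adaptive voxel sizing is not reproduced here, and the point cloud below is synthetic.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Plain voxel-grid down-sampling: one centroid per occupied voxel.

    Simplified sketch of the down-sampling stage; the adaptive variant
    (voxel size varying with local density) would choose voxel_size per
    region instead of globally.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group into a centroid
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))
down = voxel_downsample(cloud, 0.25)  # 4x4x4 grid -> at most 64 centroids
```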
Accurately and quickly grasping unknown objects in an unstructured environment remains an unsolved problem for robots. To describe the pose information of objects more accurately, a rotational Gaussian encoding method is proposed in this paper. Unlike the traditional 2-D Gaussian encoding of the ground truth, this method adds rotation information to the 2-D Gaussian. Treating the grasp pose as a rotated bounding box in the image plane makes grasping more accurate and robust. In addition, a balanced loss function is introduced. This loss function not only addresses the positive-negative sample imbalance of the traditional cross-entropy loss but also reduces the penalty on points around the Gaussian center. The paper designs a pixel-level, single-stage grasping algorithm in an end-to-end manner without the intermediate step of anchor points, which saves time and improves grasping accuracy simultaneously. The proposed model is evaluated on two standard datasets, the Cornell and Jacquard datasets, achieving 96.8% and 94.7% accuracy, respectively. Finally, we use a 6-DoF UR5e robotic arm for real-world grasping experiments. The success rate of single-object grasping is 94.7%, and generating a grasp frame takes 8 ms. Experiments demonstrate that the algorithm is equally effective in real-world environments and has a faster detection time.
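The rotation-aware encoding can be sketched as a rotated 2-D Gaussian heatmap: the covariance is a rotated diagonal matrix, so the heatmap's principal axis follows the grasp angle. The window size, center, scales, and angle below are illustrative, not the paper's parameters.

```python
import numpy as np

def rotated_gaussian_heatmap(h, w, cx, cy, sx, sy, theta):
    """Render an h-by-w heatmap of a rotated 2-D Gaussian at (cx, cy).

    sx, sy are the standard deviations along the Gaussian's own axes,
    and theta rotates those axes in the image plane.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    # Rotate pixel offsets into the Gaussian's own frame
    u = np.cos(theta) * dx + np.sin(theta) * dy
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    return np.exp(-0.5 * ((u / sx) ** 2 + (v / sy) ** 2))

# Elongated Gaussian tilted 45 degrees, peaking at the center pixel
hm = rotated_gaussian_heatmap(64, 64, 32, 32, sx=10, sy=3, theta=np.pi / 4)
```

Along the 45-degree axis the response decays slowly (the sx direction); perpendicular to it, quickly (the sy direction), which is what lets the encoding carry orientation.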
Security inspection is an indispensable aspect of contemporary life, playing a crucial role in ensuring personal safety at all times. In this regard, the accurate and prompt detection of prohibited objects is imperative. To further improve the detection accuracy of CenterNet on prohibited-object images while maintaining its high-speed detection capability, we make the following improvements. First, to tackle the challenges posed by complex and overlapping X-ray images, the Efficient Channel Attention module (ECA-Net) is introduced to strengthen CenterNet's feature extraction for prohibited objects. Second, the Feature Pyramid Network (FPN) is employed to improve feature acquisition for small prohibited objects. Finally, the Complete-IoU (CIoU) loss is adopted to achieve faster convergence by minimizing the distance between the predicted bounding box and the corresponding ground truth while ensuring scale invariance between the two boxes. The experimental results demonstrate that EFC-CenterNet effectively balances accuracy and speed in real-time detection of contraband items, achieving 84.27% mAP at 52.48 FPS.
X-ray prohibited items detection is an effective and crucial measure in various security inspection scenarios. However, the overlapping phenomenon in X-ray images exacerbates the foreground-background class imbalance, and the imaging principle of X-ray images results in missing texture features. To address these challenges, we propose an end-to-end X-ray Prohibited Items Detector (PID-YOLOX) based on YOLOX-Tiny, which offers fast detection speed and high accuracy. Specifically, we introduce the Generalized Label Assignment (GLA) scheme to tackle the foreground-background class imbalance problem, and the Multi-Cardinality Attention (MCA) mechanism to alleviate the issue of missing texture features. Our experimental results show that PID-YOLOX achieves 54.9% average precision (AP) on the PIXray dataset, surpassing YOLOX-Tiny by 2.2% AP. Furthermore, extensive experiments demonstrate that PID-YOLOX is superior to the state-of-the-art methods, indicating its potential applications in the prohibited items detection field.
ISBN: 9798331509712 (digital); 9798331509729 (print)
This paper proposes a rumor control model based on community immunization. Building on community division and a trust network inference algorithm, the model redefines the standard for measuring the importance of nodes in the network. First, the model discovers network communities with a Louvain clustering algorithm based on the Ochiai coefficient and then presents the trust network inference algorithm. By analyzing the key factors that affect trust transfer between nodes, the trust between unfamiliar nodes is inferred, and important nodes with a high degree of trust in each community are identified. Finally, combining the in-degree and out-degree centrality of nodes within the community, five types of important nodes are screened out. To avoid selecting nodes repeatedly, the paper identifies a group of key nodes in each community for local immunization by deduplicating and intersecting these sets, thereby achieving effective control of rumors in the network.
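The Ochiai coefficient underlying the community discovery step is the cosine-like overlap of two nodes' neighbor sets, |A ∩ B| / sqrt(|A| · |B|). A minimal sketch on a toy graph (the graph and node labels are hypothetical, and the Louvain step itself is not reproduced):

```python
import math

def ochiai(neigh_a, neigh_b):
    """Ochiai coefficient between two neighbor sets:
    |A intersect B| / sqrt(|A| * |B|)."""
    if not neigh_a or not neigh_b:
        return 0.0
    return len(neigh_a & neigh_b) / math.sqrt(len(neigh_a) * len(neigh_b))

# Toy adjacency: two tight triangles {1,2,3} and {4,5,6} joined by edge 3-4
graph = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}
s_inside = ochiai(graph[1], graph[2])  # same triangle, share neighbor 3
s_bridge = ochiai(graph[3], graph[4])  # across the bridge, no shared neighbor
```

Edges inside a community score higher than the bridge edge, which is the signal a Louvain-style clustering would exploit when weighting edges by this coefficient.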
With the rapid development of 3D vision technology, existing passive binocular cameras can no longer meet the practical needs of depth perception. This paper therefore proposes a binocular active stereo matching method based on a multi-scale random forest. First, a binocular active vision system consisting of a near-infrared random speckle projector and a binocular camera is constructed and calibrated with Zhang's calibration method. Second, gamma image enhancement and image differencing are applied to reduce the impact of ambient light on the measurement results. Next, points of interest are extracted, and the multi-scale random forest algorithm matches the windows containing them to generate a globally sparse structured-light anchor disparity map. Finally, using the Census transform as the matching cost, the disparity is iteratively refined to obtain a dense disparity map. The experimental results show that the system achieves good depth perception accuracy and robustness under complex indoor lighting conditions.
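The Census transform used as the matching cost compares each pixel's neighbors to the center pixel and encodes the comparisons as a bit string; the matching cost between two pixels is then the Hamming distance of their bit strings. A minimal sketch (the window radius and the tiny test image are illustrative):

```python
import numpy as np

def census_transform(img, r=1):
    """Census transform with a (2r+1)x(2r+1) window.

    Each neighbor darker than the center contributes a 1-bit; border
    pixels wrap around via np.roll, which a production version would
    instead pad or skip.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Matching cost: number of differing bits between two census codes."""
    return bin(int(a) ^ int(b)).count("1")

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.float64)
c = census_transform(img)
```

Because the transform depends only on intensity ordering, the cost is robust to the gain and offset changes that ambient light introduces, which is why it suits the refinement stage described above.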