In recent years, local feature-based point cloud registration methods have attracted increasing attention for their high robustness. The state-of-the-art local feature-based pipeline consists of local feature extraction, feature matching, and outlier rejection. However, the accuracy of this pipeline degrades significantly on low-overlap point clouds. In this article, we propose a novel hand-crafted framework that realizes efficient low-overlap point cloud registration through point cloud partitioning, semiglobal point cloud block matching, and best-transformation selection with refinement. Experimental results on benchmark datasets show that the proposed algorithm outperforms previous methods on low-overlap scenes under different features and achieves very competitive accuracy on non-low-overlap data.
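The best-transformation selection step can be illustrated with a minimal inlier-counting sketch. This is not the paper's actual criterion; `count_inliers`, the tolerance `tau`, and the candidate list are assumptions for illustration.

```python
import numpy as np

def count_inliers(src, dst, R, t, tau):
    # Number of source points a candidate rigid transform (R, t)
    # brings within distance tau of their correspondences in dst.
    residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
    return int((residuals < tau).sum())

def best_transform(src, dst, candidates, tau=0.05):
    # Keep the candidate transform with the largest inlier count.
    return max(candidates,
               key=lambda Rt: count_inliers(src, dst, Rt[0], Rt[1], tau))
```

In a full pipeline, each candidate (R, t) would come from matching one point cloud block, and the winner would then be refined (e.g., by ICP) over the whole overlap.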
In this paper, we present an algorithm that addresses the challenge of dividing a workspace among multiple UAVs. The workspace can be any convex or non-convex polygon and may contain holes of various shapes that represent no-fly zones. The UAVs can be heterogeneous, with different levels of autonomy, speed, and range. The goal of the workspace division is to obtain areas whose sizes are best matched to the capabilities of the UAVs while maximizing compactness. The algorithm decomposes the polygon representing the workspace into a triangular grid, followed by an iterative process of accumulating adjacent triangles while maximizing the compactness of the resulting regions. The performance of the algorithm and the quality of the partitions generated by the algorithm are compared to existing methods. Results show that this approach outperforms others in several metrics, achieving a 5% to 10% improvement in compactness, while maintaining reasonable performance, with a time overhead of up to approximately two seconds when splitting a polygon into ten parts.
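The greedy accumulation step can be sketched as follows, assuming a precomputed triangle adjacency graph; the shared-neighbour proxy stands in for the paper's true compactness objective, and the Polsby-Popper score is one common compactness measure, not necessarily the one used by the authors.

```python
import math

def compactness(area, perimeter):
    # Polsby-Popper score: 4*pi*A / P^2, equal to 1.0 for a circle.
    return 4.0 * math.pi * area / perimeter ** 2

def grow_region(areas, adjacency, seed, target_area):
    """Greedily absorb adjacent triangles until the region reaches
    target_area (the UAV's capability-matched share of the workspace).

    areas:      {triangle_id: area}
    adjacency:  {triangle_id: set of neighbouring triangle ids}
    """
    region = {seed}
    region_area = areas[seed]
    while region_area < target_area:
        # Triangles adjacent to the current region boundary.
        frontier = {n for t in region for n in adjacency[t]} - region
        if not frontier:
            break
        # The full algorithm picks the triangle that maximizes region
        # compactness; shared-neighbour count is a cheap stand-in.
        best = max(frontier, key=lambda n: len(adjacency[n] & region))
        region.add(best)
        region_area += areas[best]
    return region, region_area
```

For heterogeneous UAVs, `target_area` would be scaled per vehicle by its speed and range before growing each region.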
In machine learning, k-means clustering is an unsupervised learning technique that partitions the data into k clusters that are homogeneous within a cluster and heterogeneous between clusters. The k-means algorithm assigns equal importance to all features in a single view. Several multi-view clustering techniques have been developed over the last few decades, including multi-view k-means (MVKM). In MVKM, patterns can be detected more accurately, and clustering performance improves, by applying each view individually for pattern discovery. The demand for highly accurate and flexible methods in machine learning and artificial intelligence has increased the use of multi-view datasets. These datasets are complicated and pose challenges for researchers in real-life situations. A novel sparse k-means clustering algorithm, sparse MVKM (S-MVKM), is proposed in this study for multi-view data. The main aim of the proposed S-MVKM algorithm is to achieve sparsity in features and shrink unimportant features to exactly zero using the Lasso (least absolute shrinkage and selection operator). To analyze the theoretical behavior of the S-MVKM algorithm, we establish a convergence theorem ensuring that any convergent subsequence in S-MVKM tends to an optimal solution. The proposed S-MVKM algorithm is then compared with several existing algorithms, such as traditional MVKM, feature-reduction MVKM (FR-MVKM), MVKM with adaptive sparse membership weight allocation (M-VASM), and redundant and sparse feature learning (RSF-MVKM). Based on three clustering performance measures, accuracy rate, Rand index, and normalized mutual information, we compare them on several synthetic and real-world datasets. The comparative results demonstrate that the proposed S-MVKM clustering algorithm effectively surpasses these competing clustering algorithms in terms of all three performance measures. Finally, we offer concluding remarks.
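The Lasso shrinkage at the heart of S-MVKM is the soft-thresholding operator. The sketch below shows it together with a hypothetical per-view feature-weight update; the dispersion-based weighting is an illustrative assumption, not the paper's exact update rule.

```python
import numpy as np

def soft_threshold(w, lam):
    # Lasso proximal operator: shrinks each weight toward zero and
    # sets weights with |w| <= lam exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def sparse_view_weights(dispersion, lam):
    # Hypothetical per-view update: features with low within-cluster
    # dispersion get larger raw weights, the Lasso step zeroes out the
    # unimportant ones, and the survivors are renormalized to sum to 1.
    raw = dispersion.max() - dispersion
    w = soft_threshold(raw, lam)
    return w / w.sum() if w.sum() > 0 else w
```

In a full S-MVKM iteration this shrinkage would alternate with the usual assignment and centroid updates, once per view.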
As a cutting-edge field of global research, 3D video technology faces the dual challenges of large data volumes and high processing complexity. Although the most recent video coding standard, VVC, surpasses HEVC in coding efficiency, dedicated research on 3D video coding remains relatively scarce. Building on existing research, this study aims to develop a 3D video coding algorithm based on VVC that lowers the complexity of the encoding procedure. We focus specifically on the depth maps in 3D video content and introduce an extreme forest model from machine learning to optimize intra-frame coding. This paper proposes a novel CU partitioning strategy implemented through a two-stage extreme forest model. First, the initial model predicts the CU partitioning type: no partition, quadtree partitioning, multi-type tree (MTT) horizontal partitioning, or MTT vertical partitioning. For the latter two cases, a second model further refines the partitioning into a binary or ternary tree. Through this two-stage prediction mechanism, we effectively bypass CU partitioning types with low probability, significantly reducing the coding complexity. The experimental results demonstrate that the proposed algorithm saves 47.46% in encoding time while maintaining coding quality, with only a 0.26% increase in Bjøntegaard delta bitrate. This provides an effective low-complexity solution for the 3D video coding field.
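The two-stage cascade can be sketched as plain control flow, with `stage1` and `stage2` standing in for the trained extreme-forest models; the function name and label strings are assumptions for illustration.

```python
def predict_partition(features, stage1, stage2):
    """Two-stage cascade: stage1 predicts the coarse CU partition type;
    only for MTT splits does stage2 refine the split into a binary
    tree ('bt') or ternary tree ('tt')."""
    coarse = stage1(features)            # 'none' | 'qt' | 'mtt_h' | 'mtt_v'
    if coarse in ('mtt_h', 'mtt_v'):
        return coarse, stage2(features)  # 'bt' | 'tt'
    # No refinement needed; the remaining split types are skipped,
    # which is where the encoding-time saving comes from.
    return coarse, None
```

The encoder would then evaluate rate-distortion cost only for the predicted partition instead of exhaustively trying all five split types.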
The Digital Twin Network (DTN) establishes a real-time virtual mirror of physical networks. Data collection plays an essential role in DTN: it gathers the status data of the physical network for building highly consistent digital twins. In this paper, we present a network-wide data collection scheme based on In-band Network Telemetry (INT). To build a lifelike mirror of the physical network, the probing path set is required to cover all links so that the network topology, traffic load, and port-level device information are captured. We present a Latency-aware High-degree Replicated First (LHRF) vertex-cut graph partitioning algorithm that partitions the network into several balanced subgraphs while replicating the high-degree vertices among partitions first. LHRF aims to balance the length and accumulated latency of the probing paths. With shorter and more stable probing latencies, the information received by the digital twin can reflect the latest, consistent network-wide status. To prevent packets from being fragmented by overlong paths, a deep limited search (DLS) based path planning algorithm is employed to generate non-overlapping probing paths covering all edges in the separated subgraphs. Simulation results demonstrate that the proposed scheme generates more balanced INT paths with constrained path length and shorter, more stable probing delay.
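A minimal vertex-cut sketch in the spirit of LHRF follows. This is not the paper's algorithm: the degree-based edge ordering and the load-then-affinity tie-breaking are illustrative assumptions.

```python
from collections import defaultdict

def vertex_cut_partition(edges, k):
    """Greedy vertex-cut edge partitioning: each edge goes to one of k
    parts; a vertex whose edges land in several parts is replicated.
    Edges touching high-degree vertices are placed first, so those
    vertices tend to be replicated across parts."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Highest combined endpoint degree first.
    ordered = sorted(edges, key=lambda e: degree[e[0]] + degree[e[1]],
                     reverse=True)
    parts = [[] for _ in range(k)]
    replicas = defaultdict(set)  # vertex -> parts holding a copy of it
    for u, v in ordered:
        # Least-loaded part first (balanced subgraphs), ties broken
        # toward parts already holding an endpoint (fewer new replicas).
        hits = lambda i: (i in replicas[u]) + (i in replicas[v])
        i = min(range(k), key=lambda i: (len(parts[i]), -hits(i)))
        parts[i].append((u, v))
        replicas[u].add(i)
        replicas[v].add(i)
    return parts, replicas
```

Each balanced part would then be handed to the DLS path planner to generate non-overlapping probing paths over its edges.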
In distributed workloads involving joins and aggregations, skewed attribute values often cause load balancing issues, leading to stragglers and increased execution times. Existing solutions often rely on cost-based models, require extensive parameter tuning, or necessitate modifications to distributed execution engines, limiting their usability and generality. To address this challenge, we present a novel patch-based repartitioning algorithm that eliminates load imbalances while minimizing network overhead. Our approach extends the subset-replicate technique by leveraging data distribution and location statistics to optimize data locality and reduce unnecessary data movement. Unlike traditional hash-based methods, our technique is skew-insensitive, requires no parameter tuning, and integrates seamlessly into existing distributed execution engines as a drop-in replacement for the shuffle mechanism. The proposed method operates in three distinct stages: (1) statistics collection and load threshold computation, (2) patch-based subgroup assignment to ensure optimal load balancing with minimal replication, and (3) informed data shuffling and join execution. This structured process ensures even workload distribution across workers while reducing network I/O. Theoretical analysis proves the scalability, skew robustness, and load-balancing guarantees of our approach, establishing bounds on maximum worker load and network data movement. Experimental evaluations demonstrate that our method achieves perfectly balanced workloads and reduces execution time by up to 81% compared to conventional hash-based joins under moderate to high skew, while introducing negligible overhead at low skew levels. These improvements are attributed to reduced data movement, optimized worker utilization, and the algorithm's robust theoretical foundation. Our research provides a versatile and practical solution for skewed data processing, significantly advancing the efficiency of distributed data management.
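Stage (1)'s threshold computation and stage (2)'s subgroup counts can be sketched as follows. This is a simplification of the patch-based scheme: the ceiling-based splitting rule and the function name are assumptions.

```python
def plan_partitions(key_counts, num_workers):
    """Compute a per-worker load threshold and split heavy keys into
    subgroups so no single subgroup exceeds the threshold. Rows of the
    other join side matching a split key would be replicated to all of
    its subgroups (the subset-replicate idea)."""
    total = sum(key_counts.values())
    threshold = -(-total // num_workers)  # ceil(total / num_workers)
    plan = {}
    for key, cnt in key_counts.items():
        # A key heavier than the threshold is split into enough
        # subgroups that each fits under the threshold.
        plan[key] = -(-cnt // threshold)
    return threshold, plan
```

A light key keeps one subgroup and behaves exactly like a hash partition, which is why the scheme adds negligible overhead at low skew.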
In this study, we present a simple and intuitive method for accelerating optimal Reeds-Shepp path computation. Our approach uses geometrical reasoning to analyze the behavior of optimal paths, resulting in a new partitioning of the state space and a further reduction of the minimal set of viable paths. We revisit and reimplement classic methodologies from the literature, which lack contemporary open-source implementations, to serve as benchmarks for evaluating our method. In addition, we address the underspecified Reeds-Shepp planning problem, in which the final orientation is left unspecified. We perform exhaustive experiments to validate our solutions. Compared to the modern C++ implementation of the original Reeds-Shepp solution in the Open Motion Planning Library, our method demonstrates a 15x speedup, while the classic methods achieve a 5.79x speedup. Both approaches exhibit only machine-precision differences in path lengths compared to the original solution. We release our C++ implementations for both the accelerated and underspecified Reeds-Shepp problems as open-source code.
Systems of different types of agents coupled through high-order interactions can exhibit complex synchronization patterns. This article presents an analysis framework to study the group synchronization of heterogeneous multiagent systems with hierarchical structure and directed interactions of arbitrary order. First, the directed hypergraph is introduced to describe the arbitrary-order interactions among the agents. Then, to explore the possible group synchronization patterns, the agent partition methods commonly used on dyadic graphs are extended to directed hypergraphs. Meanwhile, to verify the admissibility of the group synchronization patterns, two types of equitable agent partitions are defined from the perspective of hyperedge partitioning. Next, using the simultaneous block diagonalization method, the stability problem of the group synchronous solution is decoupled into several independent subblocks. These subblocks are associated with the perturbations along and transverse to the synchronous manifold, with the transverse subblocks related to loss of synchronization. Therefore, the stability of the group synchronous solution can be investigated by evaluating the stability of each transverse subblock via the master stability function method, significantly reducing the computational complexity of the stability analysis. Finally, the effectiveness of the analysis framework is illustrated via simulations.
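For the dyadic-graph special case, the equitable-partition condition that underpins admissibility can be checked directly: every vertex in a cell must receive the same number of edges from each cell. This sketch covers only that special case; the hyperedge-partition generalization in the article is more involved, and the in-neighbour adjacency convention here is an assumption.

```python
def is_equitable(in_neighbors, cells):
    """Check whether a cell partition of a directed graph is equitable.

    in_neighbors: {v: iterable of u such that there is an edge u -> v}
    cells:        list of disjoint vertex sets covering all vertices
    """
    cell_of = {v: i for i, cell in enumerate(cells) for v in cell}
    for cell in cells:
        counts = None
        for v in cell:
            # Edges received by v, tallied per source cell.
            c = [0] * len(cells)
            for u in in_neighbors.get(v, ()):
                c[cell_of[u]] += 1
            if counts is None:
                counts = c
            elif counts != c:  # two vertices in one cell disagree
                return False
    return True
```

Only the admissible (equitable) partitions correspond to group synchronization patterns whose stability is then analyzed blockwise.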
Clustering has long been a hot topic in machine learning, where it has made great progress and been widely applied in different scenarios. Driven by the characteristics and requirements of some application scenarios, the branch of balanced clustering algorithms has developed. The goal of these algorithms is to obtain clusters containing approximately the same number of samples. However, when data points lie on the boundary between different clusters, with different membership probabilities, hard-partitioned balanced clustering may not handle these boundary data well, limiting its performance. Motivated by this, we propose a Fuzzy Min-Cut with Soft Balancing Effects (FCBE) method in this article. Specifically, the FCBE model utilizes fuzzy constraints to simultaneously enhance the ability of the balanced algorithm to capture boundary data members and the advantage of directly obtaining the partitioning results of the graph-cut problem without postprocessing. In addition, a sparse regularization is introduced to avoid trivial solutions and maintain the separability of the relationship matrix. Furthermore, the proposed FCBE method can be viewed as a flexibly adjustable generalized model that not only has clear interpretability but also reduces to special cases with clear physical meanings under different parameter values. The feasibility of FCBE has been verified on real datasets.
Unmanned aerial vehicle (UAV)-assisted communications are critical in regional wireless networks. Using reconfigurable intelligent surfaces (RISs) can significantly improve UAVs' throughput and energy efficiency. Due to limited communication resources, the data transfer rate of ground terminals (GTs) can be low, limiting throughput; RIS-assisted UAVs can effectively address these limitations. This paper focuses on optimizing the three-dimensional (3D) trajectory of the UAV and the scheduling order of the GTs, and on designing the phase shift of the RIS, to maximize energy efficiency while meeting the limited-energy and fair-service constraints on the GTs. To address the non-convexity of this problem, we propose a triple deep Q-network (TDQN) algorithm, which better avoids the overestimation problem during optimization. We also propose an improved k-density-based spatial clustering of applications with noise (K-DBSCAN) algorithm, which, after completing the partition deployment, outputs the initial movement range of the UAV and uses that range to prune the deep reinforcement learning (DRL) state space, speeding up DRL training. A fair screening mechanism is proposed to satisfy the fairness constraint. The results show that the TDQN algorithm is 2.9% more energy efficient than the baseline, the K-DBSCAN algorithm speeds up TDQN training by 59.4%, and the fair screening mechanism reduces the average throughput variance from 114099.9 to 46.9.
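The state-space pruning idea, deriving the UAV's initial movement range from the clustered GT positions, can be sketched as a padded bounding box. This is an illustration, not the paper's K-DBSCAN: the `margin` parameter and the label convention (noise points labeled -1, as in standard DBSCAN output) are assumptions.

```python
def movement_range(points, labels, margin=0.0):
    """Bounding box of all clustered GTs (label >= 0), optionally
    padded by margin. The DRL agent's state space would be restricted
    to this region, so noise points far from any cluster do not
    inflate the search area."""
    clustered = [p for p, lab in zip(points, labels) if lab >= 0]
    xs = [p[0] for p in clustered]
    ys = [p[1] for p in clustered]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

During training, any candidate UAV position outside this box could simply be clipped or rejected, shrinking the exploration needed before TDQN converges.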