Background The redirected walking (RDW) method for multi-user collaboration requires maintaining the relative position between users in a virtual environment (VE) and physical environment (PE). A chasing game in a VE is a typical virtual reality game that entails multi-user collaboration. When a user approaches and interacts with a target user in the VE, the user is expected to approach and interact with the target user in the corresponding PE as well. Existing methods of multi-user RDW mainly focus on obstacle avoidance, which does not account for the relative positional relationship between the users in both the VE and PE. Methods To enhance the user experience and facilitate potential interaction, this paper presents a novel dynamic alignment algorithm for multi-user collaborative redirected walking (DA-RDW) in a shared PE where the target user and other users are moving. This algorithm adopts improved artificial potential fields, where the repulsive force is a function of the relative position and velocity of the user with respect to dynamic obstacles. For the best alignment, the algorithm sets the alignment-guidance force in several cases and then converts it into a constrained optimization problem to obtain the optimal direction. Moreover, the algorithm introduces a potential interaction object selection strategy for a dynamically uncertain environment to speed up the subsequent alignment. To balance obstacle avoidance and alignment, the algorithm uses dynamic weightings of the virtual and physical distances between users and the target to determine the resultant force direction. Results The efficacy of the proposed method was evaluated using a series of simulations and live-user studies. The experimental results demonstrate that our novel dynamic alignment method for multi-user collaborative redirected walking can reduce the distance error in both the VE and PE to improve alignment with fewer collisions.
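To make the force composition concrete, here is a minimal Python sketch of one artificial-potential-field step in the spirit of the abstract: a repulsive force that depends on relative position and velocity with respect to a dynamic obstacle, an alignment-guidance force toward the target's physical position, and a distance-based dynamic weighting of the two. The function names, gains, and the specific weighting rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def repulsive_force(p_user, v_user, p_obs, v_obs, influence_radius=2.0, k_rep=1.0):
    """Repulsive force that grows as the user closes in on a dynamic obstacle.

    Depends on both relative position and relative velocity, as in the
    improved artificial potential field described in the abstract.
    """
    rel_p = p_user - p_obs
    dist = np.linalg.norm(rel_p)
    if dist >= influence_radius or dist == 0.0:
        return np.zeros(2)
    rel_v = v_user - v_obs
    # Closing speed along the user-obstacle line (positive when approaching).
    closing = max(0.0, -np.dot(rel_v, rel_p) / dist)
    magnitude = k_rep * (1.0 / dist - 1.0 / influence_radius) * (1.0 + closing)
    return magnitude * rel_p / dist

def alignment_force(p_user_phys, p_target_phys, k_align=1.0):
    """Alignment-guidance force pulling the user toward the target's physical position."""
    diff = p_target_phys - p_user_phys
    dist = np.linalg.norm(diff)
    return np.zeros(2) if dist == 0.0 else k_align * diff / dist

def resultant_force(f_rep, f_align, d_virtual, d_physical):
    """Blend obstacle avoidance and alignment with dynamic distance-based weights."""
    w_align = d_physical / (d_virtual + d_physical + 1e-6)
    return (1.0 - w_align) * f_rep + w_align * f_align
```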
We show that the different-mode (waveguide-connected) power splitter [(W)PS] can provide different-mode testing points for optical devices. With the PS or WPS providing two different-mode testing points, the measured insertion losses (ILs) of the three-channel and dual-mode waveguide crossing (WC) for both the fundamental transverse electric (TE0) and TE1 modes are less than 1.8 dB or 1.9 dB from 1540 nm to 1560 nm. At the same time, the crosstalks (CTs) are lower than -17.4 dB or -18.2 dB. The consistent test results indicate the accuracy of the (W)PS-based testing method. Moreover, combining the tunable tap couplers, the (W)PS can provide multiple testing points with different modes and different transmittances.
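For readers outside integrated photonics, the reported figures follow the standard dB definitions of insertion loss and crosstalk; the short sketch below shows the arithmetic. The variable names and the reference-power convention are assumptions used only for illustration.

```python
import math

def insertion_loss_db(p_out_mw, p_in_mw):
    """Insertion loss in dB: IL = -10*log10(P_out / P_in)."""
    return -10.0 * math.log10(p_out_mw / p_in_mw)

def crosstalk_db(p_unwanted_mw, p_wanted_mw):
    """Crosstalk in dB relative to the wanted-channel power (negative = suppressed)."""
    return 10.0 * math.log10(p_unwanted_mw / p_wanted_mw)

# Example: a hypothetical TE0 measurement through one (W)PS testing point.
il = insertion_loss_db(p_out_mw=0.68, p_in_mw=1.0)      # ~1.7 dB, within the reported <1.8 dB
ct = crosstalk_db(p_unwanted_mw=0.015, p_wanted_mw=1.0)  # ~-18.2 dB, near the reported level
print(f"IL = {il:.2f} dB, CT = {ct:.2f} dB")
```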
Recently, bio-inspired event cameras have seen increased use for object detection in autonomous driving due to their advantages of high temporal resolution and high dynamic range. However, how to leverage the high-speed and asynchronous characteristics of event streams to achieve accurate and robust detection at low end-to-end latency remains a key unresolved issue. Prior methods not only struggle with high latency but also have difficulty robustly detecting objects moving at varying speeds. In this paper, we propose a novel dense-to-sparse event-based object detection framework called DTSDNet. An event temporal image is first proposed to preserve motion and temporal information in the event stream. Then, rich spatial features from the dense pathway are integrated into the sparse pathway through an attention-based dual-pathway aggregation module. To assess the speed robustness of the model and event representation, we propose a simple yet effective relative speed estimation method. The experimental results demonstrate that the model and event representation achieve state-of-the-art (SOTA) detection performance and superior speed robustness on event-based object detection datasets. Moreover, this dense-to-sparse framework can reduce the accumulation time of the event stream by a factor of 5 (from 50 ms to 10 ms) while maintaining SOTA detection performance, meeting the low-latency requirements of real-time driving perception.
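The abstract does not spell out how the event temporal image is formed, but one common way to keep motion and timing information is to store normalized timestamps per pixel and polarity within an accumulation window. The sketch below is such an assumed construction, not the DTSDNet definition.

```python
import numpy as np

def event_temporal_image(events, height, width):
    """Build a 2-channel temporal image from an event window.

    events: (N, 4) array of (x, y, t, polarity in {0, 1}), assumed sorted by timestamp.
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    if len(events) == 0:
        return img
    t = events[:, 2]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)  # map timestamps to [0, 1]
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(int)
    # Later events overwrite earlier ones at the same pixel, keeping the most recent timing.
    img[p, y, x] = t_norm
    return img
```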
Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, roads in many remote sensing images are densely distributed with a very small proportion of road information against a complex background, significantly impacting the integrity and connectivity of the extracted road network structure. To address this issue, we propose a method named StripUnet for dense road extraction from remote sensing images. The designed Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the designed Multi-Scale Feature Fusion Module (MSFF) is used for extracting global and contextual information from deep feature maps; the designed Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps transmitted through skip connections; and the designed Multi-Scale Snake Decoder (MSSD) utilizes dynamic snake convolution to aid the model in better reconstructing roads. The designed model is tested on the public datasets DeepGlobe and Massachusetts, achieving F1 scores of 83.75% and 80.65%, and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, F1 scores improve by 1.07% and 1.11%, and IoUs increase by 1.28% and 1.07%, respectively. Experiments demonstrate that StripUnet is highly effective in dense road network extraction.
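As an illustration of how a strip-oriented attention block might bias a network toward long, thin road structures, here is a hypothetical PyTorch sketch in the spirit of the SALM; the pooling and gating choices are assumptions, since the abstract does not give the module's design.

```python
import torch
import torch.nn as nn

class StripAttention(nn.Module):
    """Gate features with context pooled along horizontal and vertical strips."""

    def __init__(self, channels):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Pool along each spatial axis to capture long, thin (strip-like) structures.
        strip_h = x.mean(dim=3, keepdim=True)   # (B, C, H, 1): row-wise context
        strip_w = x.mean(dim=2, keepdim=True)   # (B, C, 1, W): column-wise context
        attn = self.sigmoid(self.conv_h(strip_h) + self.conv_w(strip_w))  # broadcasts to (B, C, H, W)
        return x * attn

# Usage example:
# feat = StripAttention(256)(torch.randn(1, 256, 64, 64))
```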
While reinforcement learning has shown promising abilities to solve continuous control tasks from visual inputs, it remains a challenge to learn robust representations from high-dimensional observations and generalize...
In the development of Ethernet passive optical networks (EPONs), quality of service (QoS) support and fairness per optical network unit (ONU) are crucial issues. However, making an elaborate analysis of the existing p...
Social robot accounts controlled by artificial intelligence or humans are active in social networks, bringing negative impacts to network security and social order. Existing social robot detection methods based on graph neural networks suffer from the problem of many social network nodes and complex relationships, which makes it difficult to accurately describe the differences between the topological relations of nodes, resulting in low detection accuracy of social robots. This paper proposes a social robot detection method using an improved neural network. First, social relationship subgraphs are constructed by leveraging the user's social network to disentangle intricate social relationships. Then, a linear modulated graph attention residual network model is devised to extract the node and network topology features of the social relation subgraph, thereby generating comprehensive social relation subgraph features; the feature-wise linear modulation module of the model can better learn the differences between the nodes. Next, user text content and behavioral gene sequences are extracted to construct social behavioral features, which are combined with the social relationship subgraph features. Finally, social robots can be more accurately identified by combining user behavioral and relationship features. By carrying out experimental studies on the publicly available datasets TwiBot-20 and Cresci-15, the proposed method achieves detection accuracies of 86.73% and 97.86%, respectively. Compared with existing mainstream approaches, the accuracy of the proposed method is 2.2% and 1.35% higher on the two datasets, respectively. The results show that the method proposed in this paper can effectively detect social robots and help maintain a healthy ecological environment of social networks.
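Feature-wise linear modulation (FiLM) scales and shifts each feature dimension with coefficients predicted from a conditioning signal. The hypothetical sketch below applies this idea to graph node features; the conditioning input and layer sizes are assumptions, as the abstract does not describe the module in detail.

```python
import torch
import torch.nn as nn

class GraphFiLM(nn.Module):
    """Feature-wise linear modulation of graph node features."""

    def __init__(self, node_dim, cond_dim):
        super().__init__()
        # Predict a per-feature scale (gamma) and shift (beta) from the conditioning vector.
        self.to_gamma = nn.Linear(cond_dim, node_dim)
        self.to_beta = nn.Linear(cond_dim, node_dim)

    def forward(self, node_feats, cond):
        # node_feats: (num_nodes, node_dim); cond: (num_nodes, cond_dim), e.g. relation embeddings.
        gamma = self.to_gamma(cond)
        beta = self.to_beta(cond)
        return gamma * node_feats + beta  # feature-wise linear modulation

# Usage example:
# out = GraphFiLM(64, 16)(torch.randn(200, 64), torch.randn(200, 16))
```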
Software, hardware, data, and computing power can be abstracted and encapsulated as services authorised to users in a paid or free manner for on-demand deployment. Service composition combines multiple existing servic...
Amid the global shift of smart manufacturing towards greener and more intelligent paradigms, the spatiotemporal coupling characteristics of dynamic heat conduction networks pose significant challenges for optimizing t...
Object detection is an important task in drone vision. Since the number of objects and their scales vary greatly in drone-captured video, small-object-oriented features become the bottleneck of model performance, and most existing object detectors tend to underperform in drone-vision scenes. To solve these problems, we propose a novel detector named YOLO-Drone. In the proposed detector, the backbone of YOLO is first replaced with ConvNeXt, a state-of-the-art backbone that extracts more discriminative features. Then, a novel scale-aware attention (SAA) module is designed in the detection head to address the large scale-disparity problem. A scale-sensitive loss (SSL) is also introduced to put more emphasis on object scale and enhance the discriminative ability of the proposed detector. Experimental results on the latest VisDrone 2022 test-challenge dataset (detection track) show that our detector achieves an average precision (AP) of 39.43%, tied with the previous state of the art, while reducing the computational cost by 39.8%.
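One way a loss can put more emphasis on object scale is to weight the regression error by the inverse of the object's relative size, so small objects contribute more. The sketch below illustrates this idea; it is an assumed weighting for illustration, not necessarily the SSL formulation used in YOLO-Drone.

```python
import torch

def scale_sensitive_l1(pred_boxes, gt_boxes, image_area, eps=1e-6):
    """Scale-weighted L1 box regression loss.

    pred_boxes, gt_boxes: (N, 4) tensors in (x1, y1, x2, y2); image_area: scalar.
    """
    wh = (gt_boxes[:, 2:] - gt_boxes[:, :2]).clamp(min=0)
    box_area = wh[:, 0] * wh[:, 1]
    # Weight inversely related to the object's relative scale: smaller boxes get larger weights.
    weight = 1.0 / torch.sqrt(box_area / image_area + eps)
    l1 = (pred_boxes - gt_boxes).abs().sum(dim=1)
    return (weight * l1).mean()
```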