Contact micromanipulation of cells is an important branch of micromanipulation. Because of sensor size constraints, it is difficult to integrate the sensors used at the macroscopic scale into a micromanipulator, so in most operations images are the only reliable source of information. The first prerequisite for automated micromanipulation is the ability to extract key information from images. Occlusion inevitably occurs in contact micromanipulation. Although it is possible to design a specific algorithm that identifies target edge information under occlusion according to the characteristics of the operating environment and the end-effector, there is still no universal image processing method for this problem. In this paper, we propose an image processing method based on a composite deep learning network structure to address it. Our algorithm consists of two steps. In the first step, we feed the original image into an object detection network to obtain the positions of the target and the end-effector, and crop the regions of interest from the original image accordingly. In the second step, we preprocess these candidate sub-images containing the key foreground information and input them into an image segmentation network to obtain the contours of the end-effector and the target. We designed a cell aspiration experiment based on a digital holographic microscope imaging system to validate our algorithm. In future work, we will continue to improve the algorithm's robustness and generalization.
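The two-step pipeline described above (detect, crop regions of interest, preprocess, then segment) can be sketched as follows. This is a minimal illustration only: `detect_targets` and `segment` are hypothetical stand-ins for the paper's trained detection and segmentation networks, and the preprocessing here (normalization and zero-padding) is an assumption, not the authors' actual procedure.

```python
import numpy as np

def detect_targets(image):
    # Hypothetical stand-in for the object detection network, which would
    # return bounding boxes (x, y, w, h) for the target cell and the
    # end-effector. Fixed boxes are used here for illustration only.
    return [(10, 10, 32, 32), (50, 40, 40, 30)]

def crop_rois(image, boxes):
    # Step one: cut the candidate sub-images (regions of interest)
    # out of the original image using the detected boxes.
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def preprocess(roi, size=(64, 64)):
    # Minimal assumed preprocessing: scale intensities to [0, 1] and
    # zero-pad each ROI to a fixed input size for the second network.
    roi = roi.astype(np.float32) / 255.0
    padded = np.zeros(size, dtype=np.float32)
    h = min(roi.shape[0], size[0])
    w = min(roi.shape[1], size[1])
    padded[:h, :w] = roi[:h, :w]
    return padded

def segment(roi):
    # Hypothetical stand-in for the segmentation network: a simple
    # threshold producing a binary mask from which contours could be taken.
    return (roi > 0.5).astype(np.uint8)

image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
boxes = detect_targets(image)
masks = [segment(preprocess(r)) for r in crop_rois(image, boxes)]
print(len(masks), masks[0].shape)
```

Keeping the two stages separate, as in the paper, means the segmentation network only ever sees small sub-images dominated by foreground, which is what makes segmentation under occlusion tractable.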
Gaze-based control is a promising and effective form of human-robot interaction in assistive robotic systems. However, current gaze-based assistive systems mainly help users with basic grasping actions, offering limited support. ...
Authors:
Zhang, Peng; He, Xing; Yu, Junzhi
Southwest University, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Chongqing 400715, China
Peking University, State Key Laboratory for Turbulence and Complex Systems, Department of Advanced Manufacturing and Robotics, College of Engineering, Beijing 100871, China
Peking University, Nanchang Innovation Institute, Nanchang 330224, China
In this paper, a distributed fixed-time neurodynamic algorithm (DFxTNA) is designed for solving a distributed optimization problem with a time-varying (TV) objective function and constraints. The DFxTNA consists of consen...
With the rapid advances in computer vision, human action recognition has gradually received attention, but the current methods still exhibit some problems in indoor environments. The human skeleton, as the framework o...
3D scene flow characterizes how the points at the current time flow to the next time in the 3D Euclidean space, which possesses the capacity to infer autonomously the non-rigid motion of all objects in the scene. The ...
Intracortical brain-machine interfaces (iBMIs) aim to establish a communication path between the brain and external devices. However, in the daily use of iBMIs, the non-stationarity of recorded neural signals necessit...
Authors:
Wang, Guangming; Feng, Zhiheng; Jiang, Chaokang; Wang, Hesheng
Department of Automation, Key Laboratory of System Control and Information Processing of Ministry of Education, Key Laboratory of Marine Intelligent Equipment and System of Ministry of Education, Shanghai Engineering Research Center of Intelligent Control and Management, Shanghai Jiao Tong University, Shanghai 200240, China
Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, School of Information and Control Engineering, Advanced Robotics Research Center, China University of Mining and Technology, Xuzhou 221116, China
Scene flow represents the 3D motion of each point in the scene, which explicitly describes the distance and the direction of each point’s movement. Scene flow estimation is used in various applications such as autono...
Recognizing fault types of machinery systems is a fundamental but challenging task in industrial application. Although remarkable progress has been attained by learning fault features and predicting the corresponding fa...
Authors:
Wenhua Wu; Guangming Wang; Jiquan Zhong; Hesheng Wang; Zhe Liu
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
Department of Automation, Key Laboratory of System Control and Information Processing of Ministry of Education, Key Laboratory of Marine Intelligent Equipment and System of Ministry of Education, Shanghai Engineering Research Center of Intelligent Control and Management, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
Depth estimation is one of the most important tasks in scene understanding. In existing joint self-supervised approaches to depth-pose estimation, the depth estimation and pose estimation networks are independent of each other: they use only adjacent image frames for pose estimation and make little use of the estimated geometric information. To strengthen the depth-pose association, we propose a monocular multi-frame unsupervised depth estimation framework named PLPE-Depth. It comprises a depth estimation network and two pose estimation networks, one taking image input and one taking pseudo-LiDAR input. The main idea of our approach is to use the pseudo-LiDAR point cloud reconstructed from the depth map to estimate the pose between adjacent frames. We propose depth re-estimation with the better of the image-based and pseudo-LiDAR-based poses to improve estimation accuracy. In addition, we improve the reconstruction loss and design a pseudo-LiDAR pose enhancement loss to facilitate joint learning. Our approach enhances the use of the estimated depth information and strengthens the coupling between depth estimation and pose estimation. Experiments on the KITTI dataset show that our depth estimation achieves state-of-the-art performance at low resolution. Our source code will be released at https://***/IRMVLabIPLPE-Depth.
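The pseudo-LiDAR input mentioned above is a point cloud back-projected from the estimated depth map. A minimal sketch of that reconstruction, assuming a standard pinhole camera model with hypothetical intrinsics `fx`, `fy`, `cx`, `cy` (the paper's exact preprocessing may differ):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    # Back-project each pixel (u, v) with depth d into camera coordinates:
    #   X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # u: column, v: row
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    # Stack into an (N, 3) point cloud, one point per pixel.
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map with a constant depth of 2 m and toy intrinsics.
depth = np.full((4, 4), 2.0)
points = depth_to_pseudo_lidar(depth, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(points.shape)
```

Feeding this point cloud to a pose network gives the geometry an explicit 3D representation, which is what lets the pseudo-LiDAR pose branch exploit the estimated depth rather than relying on image frames alone.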