In order to achieve intelligent and flexible welding, complete the extraction and path planning of 3-D complex weld seams of large steel weldments, and solve the problems that, in the vision-guided welding process, the scanning path depends on manual teaching and offline programming depends on the machining and assembly accuracy of the weldment, a vision-guided method for welding robots based on a line structured-light sensor is proposed. First, the 3-D computer-aided design (CAD) model of the weldment is used to build an offline welding knowledge library, and the scanning path is planned in combination with point cloud alignment; the sensor then collects the point cloud on the surface of the weldment along the scanning path. Second, different weld seam extraction algorithms are proposed according to the weld type: the random sample consensus (RANSAC) algorithm is used to identify the position of weld feature points, and the three-frame method is used to solve the pose of weld feature points. Finally, the weld seam is generated to guide the welding robot to complete the welding operation. The experimental results show that the proposed method can be used for different types and dimensions of weldments and provides flexible welding operations with better versatility and higher intelligence.
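The RANSAC step mentioned in this abstract can be illustrated with a minimal sketch: fit a line to noisy 2-D points by repeatedly sampling pairs and keeping the hypothesis with the most inliers. The tolerance and iteration count below are illustrative values, not parameters from the paper.

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=0.05, seed=0):
    """Fit a 2-D line to noisy points with RANSAC (illustrative sketch)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two samples: a*x + b*y + c = 0.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        # Points within the distance tolerance vote for this hypothesis.
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Twenty points on y = x plus one gross outlier.
pts = [(i * 0.1, i * 0.1) for i in range(20)] + [(0.5, 3.0)]
inliers = ransac_line(pts)
```

In the paper's setting the same voting idea runs on laser-stripe point-cloud profiles rather than toy 2-D points, but the consensus mechanism is identical.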
Computer vision focuses on optimizing computers to understand and interpret visual data from photos or videos, while image recognition specializes in detecting and categorizing objects or patterns in photographs. Tech...
Although deep learning has achieved satisfactory performance in computer vision, a large volume of images is required. However, collecting images is often expensive and challenging. Many image augmentation algorithms have been proposed to alleviate this issue. Understanding existing algorithms is, therefore, essential for finding suitable methods and developing novel ones for a given task. In this study, we perform a comprehensive survey of image augmentation for deep learning using a novel informative taxonomy. To examine the basic objective of image augmentation, we introduce challenges in computer vision tasks and the vicinity distribution. The algorithms are then classified into three categories: model-free, model-based, and optimizing-policy-based. The model-free category employs methods from image processing, whereas the model-based approach leverages image generation models to synthesize images. In contrast, the optimizing-policy-based approach aims to find an optimal combination of operations. Based on this analysis, we believe that our survey enhances the understanding necessary for choosing suitable methods and designing novel algorithms. (c) 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license ( http://***/licenses/by-nc-nd/4.0/ )
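The model-free category this survey describes covers classic image-processing operations. A minimal sketch of two such augmentations, on a toy nested-list "image" (no real image library assumed):

```python
import random

def hflip(img):
    """Horizontal flip: one of the simplest model-free augmentations."""
    return [row[::-1] for row in img]

def random_crop(img, size, rng):
    """Crop a size x size window at a random offset, showing how
    model-free methods perturb images without any generative model."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

# A 4x4 toy image with distinct pixel values 0..15.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
flipped = hflip(img)
crop = random_crop(img, 2, random.Random(0))
```

Model-based and optimizing-policy-based methods replace these fixed operations with learned generators and searched operation schedules, respectively.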
Human Activity Recognition (HAR) is a challenging task in computer vision that involves analyzing and detecting human actions for various applications such as healthcare and security. HAR can be divided into two types: sensor-based and video-based. Sensor-based HAR uses sensors and machine learning algorithms to recognize human activities from data collected by wearable devices or other sources such as accelerometers and magnetometers, while video-based HAR refers to the process of recognizing human activities from video data using machine learning and computer vision techniques. There are different approaches to video-based HAR, but one common method is to use machine learning algorithms to classify the activity based on features extracted from the video frames. In general, HAR consists of two phases: data collection and processing, and activity classification. In this paper, a new method is presented for recognizing human activities using body articulations, which are connections between bones in the skeletal system, such as the hand and shoulder; these joints allow for various degrees and types of movement. As contributions, we used the MediaPipe algorithm in the first phase to extract key-point coordinates from the human skeleton, and in the second phase (activity classification) we employed the transfer-learning concept to classify the extracted coordinates. The newly developed method was applied to the KTH, Weizmann, and Olympic Sports datasets and demonstrated higher performance in recognizing human activities, surpassing previous approaches.
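The two-phase pipeline can be sketched with a toy stand-in: here the transfer-learning classifier of the second phase is replaced by nearest-template matching, and `keypoints` plays the role of the MediaPipe output (a flat list of joint coordinates). Template poses and joint layout are invented for illustration.

```python
def classify_pose(keypoints, templates):
    """Nearest-template classification of one skeleton frame.

    Toy stand-in for the paper's second phase: Euclidean matching
    against labelled template poses instead of a learned classifier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda label: dist(keypoints, templates[label]))

# Hypothetical templates: (x, y) for head, hip, foot, flattened.
templates = {
    "standing": [0.5, 0.1, 0.5, 0.5, 0.5, 0.9],
    "lying":    [0.1, 0.8, 0.5, 0.8, 0.9, 0.8],
}
label = classify_pose([0.48, 0.12, 0.5, 0.52, 0.51, 0.88], templates)
```

The actual method feeds sequences of such coordinate vectors into a pretrained network; the sketch only shows the shape of the data flowing between the two phases.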
To make robotic welding more flexible and intelligent, artificial intelligence-based systems are one of the most important developments. This paper introduces a computer vision-based algorithm for weld path detection, gap measurement, and weld length calculation. The proposed approach employs various image processing techniques and mathematical operations, accurately determining weld attributes at seam points. Using the YOLO-based object detection algorithm, the model attains a remarkable average precision of 99.5% in identifying atypical weld regions. The study also introduces an efficient boundary line elimination method based on the probabilistic Hough transform and mathematical logic. A methodology for classifying weld lines with or without significant gaps is proposed, followed by distinct sets of algorithms adapted for weld line identification and gap measurement. Rigorous testing on butt joints of diverse shapes (e.g., straight, zig-zag, and curved) and sizes verifies the robustness of the algorithm, with errors well within +/- 1 mm for length measurements. In testing conducted at three different points along individual weld profiles, the maximum error in estimating the weld gap was 0.11 mm. Weld seam information can be extracted effectively with the proposed algorithm, which proves its viability for industrial applications.
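The gap-measurement idea can be sketched geometrically: once the two weld boundary lines are extracted (the paper uses the probabilistic Hough transform; here they are simply given as point pairs), the gap at a seam point is the sum of its perpendicular distances to both boundaries. This is a simplified assumption about the measurement, not the paper's exact formula.

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = ((by - ay) ** 2 + (bx - ax) ** 2) ** 0.5
    return num / den

def gap_profile(seam_pts, edge_a, edge_b):
    """Gap at each sampled seam point, assuming the point lies between
    the two (near-parallel) boundary lines edge_a and edge_b."""
    return [point_line_distance(p, *edge_a) + point_line_distance(p, *edge_b)
            for p in seam_pts]

# Two parallel boundaries 1 mm apart, sampled at three seam points,
# mirroring the three-point gap test described in the abstract.
edge_a = ((0.0, 0.0), (10.0, 0.0))
edge_b = ((0.0, 1.0), (10.0, 1.0))
gaps = gap_profile([(2.0, 0.5), (5.0, 0.5), (8.0, 0.5)], edge_a, edge_b)
```

Sampling the gap at several points along the profile is what lets the method handle zig-zag and curved joints, where a single width value would not suffice.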
Interest in autonomous robots has grown significantly in recent years, motivated by the many advances in computational power and artificial intelligence. Space probes landing on extraterrestrial celestial bodies and vertical take-off and landing on unknown terrains are two examples of the high levels of autonomy being pursued. These robots must be endowed with the capability to evaluate the suitability of a given portion of terrain for the final touchdown. In these scenarios, the slope of the terrain where a lander is about to touch the ground is crucial for a safe landing, so the capability to measure the slope of the terrain underneath the vehicle is essential for missions where landing on unknown terrain is desired. This work attempts to develop algorithms to assess the slope of the terrain below a vehicle using monocular images in the visible spectrum. A lander takes these images with a camera pointing in the landing direction during the final descent before touchdown. The algorithms are based on convolutional neural networks, which classify the perceived slope into discrete bins. To this end, three convolutional neural networks were trained using images taken from multiple types of surfaces, extracting features that indicate the inclination of the photographed surface. The experimental metrics show that it is feasible to identify the inclination of surfaces, along with their respective orientations. Our overall aim is that if a hazardous slope is detected, the vehicle can abort the landing and search for another, more appropriate site.
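Framing slope estimation as classification into discrete bins, as this abstract describes, requires a mapping from continuous angle to class index. One plausible binning scheme (5-degree bins up to 30 degrees — illustrative values, not the paper's) looks like this:

```python
def slope_to_bin(slope_deg, bin_width=5.0, max_slope=30.0):
    """Map a terrain slope angle (degrees) to a discrete class index.

    Angles are clamped to [0, max_slope]; the top bin absorbs anything
    at or beyond max_slope. Bin edges here are assumptions for
    illustration, not values from the paper."""
    clamped = min(max(slope_deg, 0.0), max_slope)
    n_bins = int(max_slope / bin_width)
    return min(int(clamped // bin_width), n_bins - 1)

# A 12-degree slope falls in the third bin (10-15 degrees), index 2.
bin_idx = slope_to_bin(12.0)
```

The CNN then outputs a softmax over these indices, and a "hazardous slope" abort decision reduces to checking whether the predicted bin exceeds a safety threshold.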
ISBN: (Print) 9798350355291; 9798350355284
The control and movement of automated robots constitutes a fundamental field in mobile robotics. The system uses a set of algorithms and techniques to guide the precise movement and navigation of ***, which refers to the robot's ability to follow and approach a desired trajectory accurately over time. This concept implies that any initial deviation from the planned trajectory must be corrected progressively, so that the robot adjusts and maintains its course towards the destination.
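The progressive correction of an initial deviation described above can be sketched with a minimal proportional controller: at each step the robot steers back by a fraction `gain` of the remaining cross-track offset. The gain and step count are illustrative, not from the source.

```python
def track(start_offset, gain=0.5, steps=20):
    """Simulate proportional correction of a cross-track deviation.

    Returns the offset history; with 0 < gain < 1 the deviation
    shrinks geometrically toward the reference trajectory."""
    offset = start_offset
    history = [offset]
    for _ in range(steps):
        offset -= gain * offset   # steer back toward the path
        history.append(offset)
    return history

# A robot starting 1 m off the planned trajectory converges to it.
path = track(1.0)
```

Real trajectory-tracking controllers add heading dynamics, velocity limits, and often derivative/integral terms, but the monotone decay of the error is the defining behavior.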
ISBN: (Print) 9781510673854; 9781510673847
Images and videos captured in poor illumination conditions are degraded by low brightness, reduced contrast, color distortion, and noise, rendering them barely discernible to human perception and ultimately degrading computer vision system performance. These challenges are exacerbated when processing video surveillance camera footage, using this unprocessed video data as-is for real-time computer vision tasks across varying environmental conditions within Intelligent Transportation Systems (ITS), such as vehicle detection, tracking, and timely incident detection. The inadequate performance of these algorithms in real-world deployments incurs significant operational costs. Low-light image enhancement (LLIE) aims to improve the quality of images captured in these unideal conditions. Groundbreaking advancements in LLIE have been achieved with deep-learning techniques; however, the resulting models and approaches are varied and disparate. This paper presents an exhaustive survey establishing a methodical taxonomy of state-of-the-art deep learning-based LLIE algorithms and their impact when used in tandem with other computer vision algorithms, particularly detection algorithms. To thoroughly evaluate these LLIE models, a subset of the BDD100K dataset, a diverse real-world driving dataset, is used with suitable image quality assessment and evaluation metrics. This study aims to provide a detailed understanding of the dynamics between low-light image enhancement and ITS performance, offering insights into both the technological advancements in LLIE and their practical implications in real-world conditions. The project GitHub repository can be accessed here.
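The classic non-deep baseline that LLIE methods are measured against is simple tone-curve adjustment; a minimal sketch is gamma correction on 8-bit intensities, where gamma < 1 brightens dark regions. The gamma value below is illustrative.

```python
def gamma_correct(pixels, gamma=0.45):
    """Brighten 8-bit intensities with a gamma curve via a lookup table.

    A traditional low-light enhancement baseline; deep LLIE models
    learn far richer, spatially varying mappings than this global curve."""
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [lut[v] for v in pixels]

# Dark pixels are lifted strongly; black and white endpoints are fixed.
bright = gamma_correct([0, 16, 64, 255])
```

Comparing detection accuracy on frames enhanced this way versus deep LLIE output is exactly the kind of downstream ITS evaluation the survey describes.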
Intelligent optimization algorithms are an advanced computing technology that simulates the biological evolution process in nature or the logical thinking of human beings to find a solution to a problem. In computer...
This work aims to explore a robot automatic navigation model under computer intelligent algorithms and machine vision, so that mobile robots can better serve all walks of life. In view of the current situation of high cost and poor work flexibility of intelligent robots, this work innovatively researches and improves the image processing algorithm and control algorithm. In the navigation line edge detection stage, aiming at the low efficiency of the traditional ant colony algorithm, the Canny algorithm is combined with it, and a Canny-based ant colony algorithm is proposed to detect trajectory edges. In addition, the Single Shot MultiBox Detector (SSD) algorithm is adopted to detect obstacles in the navigation trajectory of the robot. The performance is analyzed through simulation. The results show that the navigation accuracy of the Canny-based ant colony algorithm proposed in this work is basically stable at 89.62%, and its running time is the shortest. Further analysis of the proposed SSD neural network through comparison with other neural networks suggests that its feature recognition accuracy reaches 92.90%. The accuracy is at least 3.74% higher than other neural network algorithms, the running time is stable at about 37.99 s, and the packet loss rate is close to 0. Therefore, the constructed mobile robot automatic navigation model can achieve high recognition accuracy while keeping errors within acceptable bounds, and the data transmission effect is ideal. It can provide an experimental basis for the later promotion and adoption of mobile robots in various fields.
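The gradient-magnitude step that Canny contributes to the hybrid Canny/ant-colony detector can be sketched on a toy image grid. Full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; this sketch keeps only the gradient threshold, and the threshold value is illustrative.

```python
def edge_map(img, thresh=1.0):
    """Mark interior pixels whose central-difference gradient magnitude
    meets the threshold — the core gradient step of Canny-style edge
    detection, here without smoothing or non-maximum suppression."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[r][c] = 1
    return edges

# A vertical step edge between columns 1 and 2 of a 4x4 image.
img = [[0, 0, 5, 5] for _ in range(4)]
edges = edge_map(img)
```

In the hybrid scheme, such an edge map would seed or guide the ant colony's pheromone search so that ants concentrate on plausible trajectory boundaries instead of the whole image.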