Instance segmentation, an important image processing operation for automation in agriculture, is used to precisely delineate individual objects of interest within images, which provides foundational information for various automated or robotic tasks such as selective harvesting and precision thinning. This study compares the one-stage YOLOv8 and the two-stage Mask R-CNN machine learning models for instance segmentation under varying orchard conditions across two datasets. Dataset 1, collected in the dormant season, includes images of dormant apple trees, which were used to train multi-object segmentation models delineating tree branches and trunks. Dataset 2, collected in the early growing season, includes images of apple tree canopies with green foliage and immature (green) apples (also called fruitlets), which were used to train single-object segmentation models delineating only immature green apples. The results showed that YOLOv8 performed better than Mask R-CNN, achieving good precision and near-perfect recall across both datasets at a confidence threshold of 0.5. Specifically, for Dataset 1, YOLOv8 achieved a precision of 0.90 and a recall of 0.95 for all classes. In comparison, Mask R-CNN demonstrated a precision of 0.81 and a recall of 0.81 for the same dataset. For Dataset 2, YOLOv8 achieved a precision of 0.93 and a recall of 0.97. Mask R-CNN, in this single-class scenario, achieved a precision of 0.85 and a recall of 0.88. Additionally, the inference times for YOLOv8 were 10.9 ms for multi-class segmentation (Dataset 1) and 7.8 ms for single-class segmentation (Dataset 2), compared to 15.6 ms and 12.8 ms for Mask R-CNN, respectively. These findings show YOLOv8's superior accuracy and efficiency compared to two-stage models, specifically Mask R-CNN, which suggests its suitability for developing smart and automated orchard operations, particularly where real-time performance is necessary, as in robotic harvesting and robotic thinning of immature green fruit.
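The precision and recall figures reported above come from matching model detections to ground-truth instances at fixed confidence and IoU thresholds. As a minimal illustrative sketch (not the authors' evaluation code), the instance-level computation can be done by greedy IoU matching; box IoU is used here in place of mask IoU for brevity, and all values are made up:

```python
# Minimal sketch: instance-level precision/recall via greedy IoU matching.
# Boxes are (x1, y1, x2, y2); mask IoU would follow the same matching logic.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, conf_thresh=0.5, iou_thresh=0.5):
    """preds: list of (box, confidence); gts: list of ground-truth boxes."""
    # Keep confident predictions, most confident first (greedy matching).
    kept = sorted((p for p in preds if p[1] >= conf_thresh),
                  key=lambda p: -p[1])
    matched, tp = set(), 0
    for box, _ in kept:
        best, best_iou = None, iou_thresh
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:       # matched an unclaimed ground truth
            matched.add(best)
            tp += 1
    fp = len(kept) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if kept else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall

# Toy example: two ground-truth instances, three detections (one spurious).
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [((1, 1, 10, 10), 0.9), ((20, 20, 29, 29), 0.8), ((50, 50, 60, 60), 0.7)]
p, r = precision_recall(preds, gts)
```

Raising the confidence threshold trades recall for precision, which is why the threshold must be stated alongside any precision/recall comparison.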
Flexible pressure sensors hold significant potential for applications in health monitoring, human-machine interaction, electronic skin, and artificial intelligence due to their high sensitivity, flexibility, light weight, and ease of signal acquisition. In recent years, extensive research into sensor materials, structures, and manufacturing technologies has led to the development of various high-performance flexible pressure sensors. Currently, optimizing sensing performance involves selecting appropriate functional materials, designing deformable structures, and employing high-precision manufacturing techniques. This paper reviews recent advancements in flexible pressure sensors, focusing on sensing mechanisms, functional materials, microstructure design, manufacturing technologies, and application fields. First, the sensing mechanisms of pressure sensors operating in different modes are introduced, and several widely used functional materials are discussed. Particular attention is given to the role of geometric microstructure design in enhancing sensing performance. Next, the influence of various manufacturing technologies on sensing performance is analyzed and summarized. In addition, emerging applications of flexible pressure sensors in health monitoring, human-machine interaction, electronic skin, and artificial intelligence are presented. Finally, the paper concludes by highlighting the development prospects and major challenges in achieving high-performance flexible pressure sensors.
When teaching robotics, instructors face the challenge of finding an effective approach to bridge theoretical concepts and practical applications. Both computer simulations and hands-on laboratory experiments provide learners with opportunities for active, immersive, and experiential learning. As students progress from introductory to advanced topics and from theory to practice, their performance is contingent upon earlier knowledge and may increase, remain unchanged, or decrease. The question that arises is whether computer simulation can serve as a viable foundation for fostering an understanding of theory that enables the subsequent grasp of advanced practical concepts in robotics. Put another way, when students are introduced to the field of robotics through computer simulation, how will they perform when presented with advanced hands-on tasks involving the construction of physical robots to solve problems in physical space? To answer this question, we examined undergraduate student performance (n = 107) across two robotics courses: an introductory course using computer simulation (Robot Operating System, Rviz, and GAZEBO) and an advanced course using physical hardware (Puzzlebot), leveraging the hardware's capability for AI tasks such as machine vision (Nvidia Jetson Nano development kit). Our findings suggest that student performance increased as they progressed from using computer simulation to engaging with hardware in the physical environment, further suggesting that teaching with computer simulations provides an adequate foundation to learn and complete more advanced tasks.
ISBN:
(Print) 9789819607853; 9789819607860
Machine learning models suffer serious performance degradation when facing out-of-distribution datasets. In recent years, numerous studies on domain generalization (DG) have been conducted to address this issue and improve model generalization. With the development of multimodal models, an increasing number of works consider utilizing large vision-language models to achieve DG. In this paper, we propose Semantic Information Extraction and Target Alignment (SIETA) to perform alignment in both the training and testing phases. We choose pre-trained CLIP as the teacher to guide our model in learning to extract semantic information, minimizing the distance between the representations of CLIP's text encoder and those of our encoder during training, and use knowledge distillation to transfer CLIP's abundant prior knowledge into our model. In the inference phase, we leverage Test-Time Adaptation (TTA) to slightly align our model with the target domain to further enhance generalization. We conducted experiments on four DG benchmark datasets, and the results show that our method significantly improves generalization with a smaller model size than CLIP and combines readily with other DG methods.
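The training-phase idea above, minimizing the distance between student and teacher representations, can be sketched with a simple distillation-style loss. The feature vectors and the exact loss form (1 minus cosine similarity) are illustrative assumptions, not the paper's code:

```python
import math

# Sketch of the feature-alignment idea: a distillation-style loss that pulls
# the student encoder's representation toward the frozen CLIP text encoder's
# representation. Vectors below are tiny made-up placeholders.

def alignment_loss(student, teacher):
    """1 - cosine similarity between two feature vectors (0 when aligned)."""
    dot = sum(a * b for a, b in zip(student, teacher))
    ns = math.sqrt(sum(a * a for a in student))
    nt = math.sqrt(sum(b * b for b in teacher))
    return 1.0 - dot / (ns * nt)

teacher_feat = [0.2, -1.3, 0.7, 1.1]    # stand-in for a CLIP text feature
student_feat = [0.25, -1.2, 0.65, 1.0]  # nearly aligned student feature

loss_same = alignment_loss(teacher_feat, teacher_feat)  # ~0: fully aligned
loss_near = alignment_loss(student_feat, teacher_feat)  # small positive value
```

Minimizing such a loss over training batches transfers the teacher's semantic structure into the student without requiring the student to match CLIP's size.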
ISBN:
(Print) 9798331511166
The proceedings contain 53 papers. The topics discussed include: understanding Internet usage patterns and behaviors among college students: a machine learning approach; applications of wireless sensor networks in biomedical research and healthcare; development of obstacle navigation in robots using the linear regression method; estimation of rock compressibility for Indonesian limestones by artificial neural network; leaf based grape variety identification using computer vision; implementation of convolutional neural network in medical robot facial recognition system; recent applications using explainable artificial intelligence for data analytics; and development of an autonomous navigation system using Google Maps API for a package delivery device.
Artificial intelligence (AI) and machine learning have changed the nature of scientific inquiry in recent years. Of these, the development of virtual assistants has accelerated greatly in the past few years, with Chat...
Machine vision has been extensively incorporated into the fields of robotics and automation in a variety of ways. The various uses of machine vision and the revolutionary impact it has on the capabilities of ro...
Sensor fusion is vital for many critical applications, including robotics, autonomous driving, aerospace, and beyond. Integrating data streams from different sensors enables us to overcome the intrinsic limitations of each sensor, providing more reliable measurements and reducing uncertainty. Moreover, deep learning-based sensor fusion unlocked the possibility of multimodal learning, which utilizes different sensor modalities to boost object detection. Yet, adverse weather conditions remain a significant challenge to the reliability of sensor fusion. However, introducing the Transformer deep learning model in sensor fusion presents a promising avenue for advancing its sensing capabilities, potentially overcoming that challenge. Transformer models proved powerful in modeling vision, language, and numerous other domains. However, these models suffer from high latency and heavy computation requirements. This paper aims to provide: 1) an extensive overview of sensor fusion and Transformer models; 2) an in-depth survey of the state-of-the-art (SoTA) methods for Transformer-based sensor fusion, focusing on camera-LiDAR and camera-radar methods; and 3) a quantitative analysis of the SoTA methods, uncovering research gaps and stimulating future work.
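The core operation behind the Transformer-based fusion methods surveyed here is cross-attention between modalities, e.g., camera features acting as queries over LiDAR keys and values. A minimal sketch with toy 2-D features and no learned projections (real models apply learned Q/K/V projections per head, which are omitted here):

```python
import math

# Sketch: scaled dot-product cross-attention fusing two sensor modalities.
# The camera feature is the query; LiDAR features supply keys and values.

def cross_attention(query, keys, values):
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Fused feature: attention-weighted sum of the values.
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

camera_feat = [1.0, 0.0]                  # query from the camera branch
lidar_feats = [[1.0, 0.0], [0.0, 1.0]]    # keys from the LiDAR branch
lidar_vals  = [[5.0, 0.0], [0.0, 5.0]]    # values from the LiDAR branch

fused, w = cross_attention(camera_feat, lidar_feats, lidar_vals)
# The LiDAR feature most similar to the camera query dominates the fusion.
```

The softmax over all keys is also where the latency cost mentioned in the abstract comes from: attention scales with the product of query and key counts.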
Despite massive development in aerial robotics, precise and autonomous landing in various conditions is still challenging. This process is affected by many factors, such as terrain shape, weather conditions, and the presence of obstacles. This paper describes a deep learning-accelerated image processing pipeline for accurate detection and relative pose estimation of the UAV with respect to the landing pad. Moreover, the system provides increased safety and robustness by implementing human presence detection and error estimation for both landing target detection and pose computation. Human presence detection and landing pad localization are performed by estimating the presence probability via segmentation. This is followed by the landing pad keypoint location regression algorithm, which, in addition to coordinates, provides the uncertainty of presence for each defined landing pad landmark. To perform the aforementioned tasks, a set of lightweight neural network models was selected and evaluated. The resulting measurements of the system's performance and accuracy are presented for each component individually and for the whole processing pipeline. The measurements are performed using onboard embedded UAV hardware and confirm that the method can provide accurate, low-latency feedback information for safe landing support.
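One way the per-keypoint uncertainty described above can be exploited downstream is by weighting keypoints inversely to their variance when aggregating them, so unreliable landmarks contribute little. A minimal sketch with made-up keypoints and uncertainties (the paper's pose computation itself is more involved):

```python
# Sketch: fusing per-keypoint uncertainty into a landing-pad center estimate
# via inverse-variance weighting. Keypoints and sigmas are illustrative.

def weighted_center(keypoints, sigmas):
    """Inverse-variance weighted mean of 2-D keypoints."""
    weights = [1.0 / (s * s) for s in sigmas]
    total = sum(weights)
    cx = sum(w * p[0] for w, p in zip(weights, keypoints)) / total
    cy = sum(w * p[1] for w, p in zip(weights, keypoints)) / total
    return cx, cy

# Three detected pad landmarks; the last one is very uncertain and thus
# contributes almost nothing to the final estimate.
pts    = [(10.0, 10.0), (10.2, 9.8), (30.0, 30.0)]
sigmas = [0.5, 0.5, 10.0]
cx, cy = weighted_center(pts, sigmas)
```

The same idea extends to pose solvers: a keypoint with high predicted uncertainty can be down-weighted or dropped before the relative pose is computed.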
High tech to feed the world! This slogan was launched some 10 years ago in The Netherlands to direct research on high tech for agricultural production. It responded to an urgent concern about the needs of a growing world population for food, feed, fibers, and basic chemical components. By the way, make no mistake, agriculture does not only provide food! Anyway, the slogan represented the vibrant optimism about the potential of technology in support of this quest. And for good reasons! Advances in robotics and artificial intelligence (AI) in the recent past allowed us to go beyond far-fetched visions and speculations and put these technologies to the test on the fields, in the greenhouses, in the orchards, and in livestock barns. Definitely, this trend is not limited to The Netherlands. When monitoring academic and professional literature, one finds that the rapidly growing number of publications in the past 10 years addressing robotics and AI in the agricultural context is clear proof of a worldwide trend.