ISBN:
(Print) 9789639799080
A major barrier to advancing modern wireless networking research is the lack of an effective wireless network simulation platform that simultaneously offers high fidelity, scalability, reproducibility, and ease of use. MiNT [8], [7] is an innovative wireless network emulation platform specifically designed to satisfy all of these desirable properties. To support reconfigurable network topology and wireless node mobility, MiNT is built on a networked robot system that carries wireless networking equipment and is designed to be completely tetherless, capable of 24x7 operation, and low-cost. This paper describes the design, implementation, and evaluation of this networked robot system. Each robot node in MiNT is an iRobot Roomba, modified to house an embedded PC equipped with multiple wireless networking interfaces and to recharge the embedded PC through the Roomba's built-in self-charging mechanism. For robot navigation and movement, MiNT's networked robot system provides a computer-vision-based robot positioning mechanism and a collision-avoidance-driven trajectory planning component. Finally, MiNT offers an interactive control interface and a visualization interface that give users real-time visibility into, and full control over, the MiNT testbed.
This paper addresses the problem of object identification from multiple 3D partial views collected from different view angles, with the objective of disambiguating between similar objects. We assume a mobile robot equipped with a depth sensor that autonomously collects observations of an object from different positions, following no previously known pattern. The challenge is to efficiently combine the set of observations into a single classification. We approach the problem with a multiple-hypothesis filter that combines information from a sequence of observations given the robot's movement. We further innovate by learning, offline, neighborhoods between possible hypotheses based on the similarity of observations. Such neighborhoods directly capture the ambiguity between objects and allow knowledge about one object to be transferred to another. In this paper we introduce our algorithm, Multiple Hypothesis for Object Class Disambiguation from Multiple Observations, and evaluate its accuracy and efficiency.
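The sequential-combination idea in this abstract can be sketched as a minimal Bayesian multiple-hypothesis update. The class names, likelihood values, and the flat likelihood model below are illustrative assumptions, not taken from the paper, whose filter also accounts for robot motion and learned hypothesis neighborhoods.

```python
# A minimal Bayesian multiple-hypothesis filter for object class
# disambiguation from a sequence of observations. All names and
# numbers are illustrative; the paper's actual likelihood model and
# neighborhood learning are more elaborate.

def update(posterior, likelihoods):
    """One Bayes update: multiply the prior by per-class observation
    likelihoods, then renormalize."""
    unnorm = {c: posterior[c] * likelihoods[c] for c in posterior}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

# Two visually similar classes start with a uniform prior.
posterior = {"mug": 0.5, "pitcher": 0.5}

# Each new viewpoint yields class likelihoods for the observed partial
# view; ambiguous views barely move the posterior, while a distinctive
# view (e.g. one showing a spout) resolves it.
for obs in [{"mug": 0.5, "pitcher": 0.5},   # ambiguous view
            {"mug": 0.2, "pitcher": 0.8},   # spout partially visible
            {"mug": 0.1, "pitcher": 0.9}]:  # spout clearly visible
    posterior = update(posterior, obs)

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # pitcher 0.973
```

The posterior concentrates only when a view is actually discriminative, which is why choosing the next viewpoint matters in this setting.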
ISBN:
(Print) 9781450388054
This paper proposes a programming class for young children. To reduce the burden on teachers and attract students' interest, a robot teaching assistant is used to explain programming knowledge and to verify and run programs. Unlike writing programs on a computer, physical board programming is employed to develop young children's logical thinking while protecting them from the eye strain of facing computer screens for long periods and from the loss of reality that comes with immersion in a virtual world. Based on the core knowledge points of programming, we have designed a course with 16 lessons. The course has been successfully applied in many kindergartens and elementary schools. We used questionnaires for students and teachers to evaluate the course, and the experimental results show that it is effective.
ISBN:
(Print) 9781538694091; 9781538694084
Due to loss of vision, locating and grasping a target object is a challenging task for a visually impaired individual. This paper presents a hand-worn assistive device that may assist a visually impaired person in detecting a target object and maintaining alignment with it while approaching it. The device consists of a sensing module and a guiding module. The sensing module uses an RGB-D camera to detect and track the target object. The guiding module computes the hand-object misalignment, determines the desired hand movement (DHM) for hand-object alignment, and uses a cable-driven exoskeleton mechanism to guide the user's hand into alignment with the target object, maintaining that alignment while the hand approaches the object. A prototype of the device was developed, and its usability was validated through experiments with human subjects.
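The misalignment-to-guidance step described here can be sketched as follows: compare the target object's image centroid with the image center (the camera travels with the hand) and map the offset to a coarse desired hand movement. The deadband threshold, frame size, and command names are illustrative assumptions, not taken from the paper.

```python
# Sketch of mapping hand-object misalignment to a desired hand
# movement (DHM). Thresholds and command labels are hypothetical.

DEADBAND = 20  # pixels; within this band the hand counts as aligned

def desired_hand_movement(obj_cx, obj_cy, frame_w=640, frame_h=480):
    dx = obj_cx - frame_w // 2   # +dx: object is right of the hand
    dy = obj_cy - frame_h // 2   # +dy: object is below the hand
    cmds = []
    if dx > DEADBAND:
        cmds.append("move right")
    elif dx < -DEADBAND:
        cmds.append("move left")
    if dy > DEADBAND:
        cmds.append("move down")
    elif dy < -DEADBAND:
        cmds.append("move up")
    # Once aligned, the remaining cue is to close the distance.
    return cmds or ["aligned: move forward"]

print(desired_hand_movement(400, 240))  # object right of center
print(desired_hand_movement(320, 100))  # object above center
print(desired_hand_movement(330, 250))  # within the deadband
```

In the actual device this directional decision would drive the cable-driven exoskeleton rather than a printed command, but the alignment logic is the same in spirit.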
ISBN:
(Digital) 9798350361780
ISBN:
(Print) 9798350361797
In computer vision, human action recognition plays an important role in the modern era: a sensor must accurately acquire human actions and interactions from a previously unseen data sequence. Human activity identification in video sequences is a hotspot of computer vision research owing to its practical implications. It encompasses security, surveillance, healthcare, robotics, animation, sports analysis, smart home automation, and behavioral analysis. The goal of the AI community is to create a system that can observe and understand human behavior and actions completely independently. For example, a robot assistant might aid a patient undergoing home monitoring by analyzing the most effective method of exercise and so avoiding additional injuries, thereby increasing the robot's usefulness to society. This kind of smart technology will be immensely useful since it eliminates unnecessary doctor visits, lowers healthcare costs, and allows for constant remote monitoring of the patient. Many feature-based methods, both manually designed and automatically learned, have emerged over the past two decades for identifying human actions in video footage. Traditional methods of human activity identification rely on meticulously designed characteristics that capture the most fundamental motions. Over the last several years, deep learning algorithms have made great progress in a number of domains, including human location prediction, object recognition, segmentation, audio analysis, object tracking, and super-resolution. The deep learning model is also crucial in visual recognition tasks. Instead of manually extracting features, deep-learning-based methods provide a more efficient and time-saving alternative. Handcrafted-feature solutions were shown to be effective; however, they over-relied on feature descriptors when attempting action categorization, a problem that demanded additional man-hours and specialized knowledge.
ISBN:
(Digital) 9798350355338
ISBN:
(Print) 9798350355345
The goal of this project is to develop a human detection robot that may be used as a backup mechanism to save lives in an emergency. Developing dependable surveillance systems is essential for both safety and security, and the human detection model is a fundamental part of any surveillance system. Recent developments in embedded systems and hardware make it feasible to create a low-cost, real-time person detection system. This study investigates the architecture of human detection and tracking techniques over non-overlapping fields of view, and presents a search technique to compensate for the resulting shortcomings. In an experimental setting, the rate and accuracy of human detection using the proposed technique were evaluated.
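The cross-camera tracking problem mentioned here, following a person across non-overlapping fields of view, can be illustrated with a toy handoff search: when a person exits one camera, candidates appearing on other cameras within a plausible time window are ranked by appearance-feature distance. The feature vectors, time window, and distance threshold below are all invented for illustration; they are not the paper's search technique.

```python
# Toy cross-camera handoff search for non-overlapping fields of view.
# Features, gap window, and threshold are hypothetical.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def handoff_match(exit_event, candidates, max_gap=10.0, max_dist=0.5):
    """exit_event: (time, feature) for the person leaving a view.
    candidates: list of (camera_id, time, feature) detections.
    Returns the best-matching camera_id, or None if no candidate is
    both temporally plausible and visually close enough."""
    t_exit, feat = exit_event
    best_id, best_d = None, max_dist
    for cam_id, t, f in candidates:
        if 0 < t - t_exit <= max_gap:      # causally plausible gap
            d = euclidean(feat, f)
            if d < best_d:
                best_id, best_d = cam_id, d
    return best_id

exit_event = (100.0, [0.2, 0.7, 0.1])               # leaves camera A
candidates = [("camB", 103.0, [0.25, 0.68, 0.12]),  # similar appearance
              ("camC", 104.0, [0.9, 0.1, 0.4]),     # different person
              ("camB", 150.0, [0.2, 0.7, 0.1])]     # too late
print(handoff_match(exit_event, candidates))  # camB
```

Real systems would use learned re-identification embeddings and camera-topology priors in place of these toy vectors, but the search structure is similar.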
ISBN:
(Digital) 9798331533557
ISBN:
(Print) 9798331533564
Self-driving cars are an innovative transportation technology that seeks to eliminate human error, save lives, and increase the efficiency of traffic flow. In this work, a self-driving prototype built around a Raspberry Pi 4 Model B was proposed and assessed. Its sensor components include ultrasonic and grayscale sensors, a Raspberry Pi camera, and a SunFounder robot HAT, all managed via a mobile application from EzBlock Studio. To ensure real-time control and decision-making, the algorithms were incorporated into the EzBlock system. The prototype was tested in various scenarios, employing multiple lighting levels and path types, including straight and curved paths with sharp corners. The system performed well in good lighting, detecting lanes, signs, and obstacles and avoiding the obstacles. However, some difficulties were observed during tests: sharp corners, wide paths, and low-light conditions caused problems for camera-based detection. This work demonstrates the viability of low-cost, Raspberry Pi-based self-driving prototypes for autonomous vehicle applications.
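The grayscale-sensor lane keeping that such a prototype performs can be sketched as a three-sensor line-following decision. The threshold value, steering labels, and sensor layout are generic illustrations, not the EzBlock implementation; they do, however, show why sharp corners and low light (which push all readings to one side of the threshold) break this kind of logic.

```python
# Generic three-sensor grayscale line-following decision. Threshold
# and steering labels are illustrative, not from the EzBlock system.

THRESHOLD = 500  # ADC reading below this counts as "on the dark line"

def steer(left, center, right):
    on_line = [v < THRESHOLD for v in (left, center, right)]
    if on_line == [False, True, False]:
        return "forward"           # line centered under the robot
    if on_line[0]:
        return "turn left"         # line drifted under the left sensor
    if on_line[2]:
        return "turn right"        # line drifted under the right sensor
    return "stop"                  # line lost: sharp corner or bad light

print(steer(800, 300, 820))  # centered -> forward
print(steer(300, 400, 900))  # line under left sensor -> turn left
print(steer(900, 900, 900))  # no sensor sees the line -> stop
```

The "stop" branch is where the abstract's failure cases surface: a sharp corner or dim lighting can leave no sensor confidently on the line, forcing a recovery behavior.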
Deep learning models can perform well when evaluated on images from the same distribution as the training set. However, applying small perturbations in the form of noise, artifacts, occlusions, blurring, etc. to a model's input image, i.e., feeding the model out-of-distribution (OOD) data, can significantly drop the model's accuracy, making it inapplicable to real-world scenarios. Data augmentation is one of the well-practiced methods to improve model robustness against OOD data; however, which augmentation type to choose and how it affects OOD robustness remains understudied. There is a growing belief that data augmentations that bias a model toward shape-based rather than texture-based features increase OOD robustness for Convolutional Neural Networks trained on the ImageNet-1K dataset. This is usually stated as “an increase in the model's shape bias results in an increase in its OOD robustness”. Based on this hypothesis, some works in the literature aim to find augmentations with stronger effects on model shape bias and use those for data augmentation. By evaluating 39 types of data augmentations on a widely used OOD dataset, we demonstrate the impact of each data augmentation on the model's robustness to OOD data and further show that the above hypothesis does not hold: an increase in shape bias does not necessarily result in higher OOD robustness. By analyzing the results, we also find some biases in the ImageNet-1K dataset that can easily be reduced using proper data augmentation. Our evaluation results further show that there is not necessarily a trade-off between in-domain accuracy and OOD robustness; choosing the proper augmentations can help increase both simultaneously.
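The kind of check this abstract describes, asking whether a higher shape-bias score predicts higher OOD accuracy across augmentations, amounts to computing a correlation over per-augmentation results. The augmentation names and all numbers below are fabricated purely to illustrate the computation (the paper evaluates 39 real augmentations); here they are chosen so that higher shape bias does not track higher OOD accuracy.

```python
# Correlating per-augmentation shape-bias scores with OOD accuracy.
# All values are made up for illustration; they are NOT the paper's
# measurements.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (shape_bias, ood_accuracy) per augmentation -- illustrative values
# in which stronger shape bias does not imply better OOD accuracy.
results = {
    "baseline":       (0.20, 0.40),
    "style_transfer": (0.65, 0.42),
    "gaussian_noise": (0.25, 0.55),
    "color_jitter":   (0.30, 0.50),
    "cutout":         (0.40, 0.38),
}
biases, accs = zip(*results.values())
r = pearson(list(biases), list(accs))
print(round(r, 3))  # negative here: shape bias fails to predict OOD gains
```

A near-zero or negative coefficient over the full set of augmentations is exactly the kind of evidence against the "shape bias implies OOD robustness" hypothesis that the abstract reports.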