Author Affiliations: Imam Mohammad Ibn Saud Islamic Univ, Fac Comp & Informat Sci, Riyadh 11564, Saudi Arabia; King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia; Univ Salamanca, BISITE Res Grp, Salamanca 37007, Spain; Air Inst, IoT Digital Innovat Hub, Salamanca 37188, Spain; Osaka Inst Technol, Fac Engn, Dept Elect Informat & Commun, Osaka 5358585, Japan; Queensland Univ Technol, Sch Architecture & Built Environm, 2 George St, Brisbane, Qld 4000, Australia; King Abdulaziz Univ, High Performance Comp Ctr, Jeddah 21589, Saudi Arabia
Publication: SENSORS
Year/Volume/Issue: 2022, Vol. 22, Issue 19
Pages: 7435-7435
Core Indexing:
Subject Classification: 0710 [Science - Biology]; 071010 [Science - Biochemistry & Molecular Biology]; 0808 [Engineering - Electrical Engineering]; 07 [Science]; 0804 [Engineering - Instrument Science & Technology]; 0703 [Science - Chemistry]
Funding: Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia [RG-11-611-38]
Keywords: visually impaired; smart mobility; sensors; LiDAR; ultrasonic; deep learning; obstacle detection; obstacle recognition; assistive tools; edge computing; green computing; sustainability; Arduino Uno; smart app
Abstract: Over a billion people around the world are disabled, among whom 253 million are visually impaired or blind, and this number is increasing greatly due to ageing, chronic diseases, and poor environments and health. Despite many proposals, the current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required in order to encourage the development, commercialization, and widespread acceptance of low-cost and affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach that uses a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We adopted this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that communicates with it via Bluetooth. The Arduino operates the sensors on the smart glasses, gathers data, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from the Arduino, detects and classifies objects in the spatial environment, and gives spoken feedback to the user on the detected objects. In comparison with image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles, working from simple LiDAR data consisting of a small set of integer distance measurements. We comprehensively describe the proposed system's hardware and software design, having constructed prototype implementations and tested them in real-world environments. Using the open platforms WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than USD 80. Essentially, we provide designs o
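The abstract describes a sense-detect-transmit pipeline: a servo-panned LiDAR plus an ultrasonic sensor driven by an Arduino Uno, on-device threshold-based obstacle alerts through a buzzer, and Bluetooth streaming of raw integer distance frames to a smartphone app that runs the trained classifier and speaks the result. The sketch below is a minimal illustration of that loop, not the authors' implementation: the pin assignments, the 45-step sweep resolution, the 100 cm alert threshold, and the readLidarCm() helper are all assumptions introduced for this example.

```cpp
// Minimal Arduino Uno sketch illustrating the sense-detect-transmit loop from the
// abstract. All pin numbers, thresholds, and the sweep resolution are assumptions.
#include <Servo.h>
#include <SoftwareSerial.h>

const uint8_t SERVO_PIN  = 9;    // servo that pans the LiDAR (assumed wiring)
const uint8_t TRIG_PIN   = 6;    // ultrasonic trigger (HC-SR04-style module assumed)
const uint8_t ECHO_PIN   = 7;    // ultrasonic echo
const uint8_t BUZZER_PIN = 8;    // buzzer for on-device obstacle alerts
const uint8_t BT_RX      = 10;   // Bluetooth module TX -> Arduino RX
const uint8_t BT_TX      = 11;   // Bluetooth module RX <- Arduino TX

const uint8_t  SWEEP_STEPS = 45;   // integer distance readings per sweep (assumed)
const uint16_t OBSTACLE_CM = 100;  // buzz if anything is closer than this (assumed)

Servo panServo;
SoftwareSerial bt(BT_RX, BT_TX);   // serial link to the smartphone app
int sweepCm[SWEEP_STEPS];          // one sweep frame of integer distances

int readUltrasonicCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  return us == 0 ? 999 : (int)(us / 58);                // microseconds -> cm
}

// Hypothetical helper: replace with the read routine of the actual LiDAR module.
// Stubbed here with the ultrasonic reading so the sketch compiles and runs.
int readLidarCm() {
  return readUltrasonicCm();
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  panServo.attach(SERVO_PIN);
  bt.begin(9600);
}

void loop() {
  bool obstacle = false;

  // Pan the LiDAR across the field of view, one integer distance per step.
  for (uint8_t i = 0; i < SWEEP_STEPS; i++) {
    panServo.write(i * (180 / SWEEP_STEPS));
    delay(20);                            // let the servo settle before reading
    sweepCm[i] = readLidarCm();
    if (sweepCm[i] < OBSTACLE_CM) obstacle = true;
  }

  // Simple on-device detection: buzz immediately, no smartphone required.
  digitalWrite(BUZZER_PIN, obstacle ? HIGH : LOW);

  // Ship the raw sweep frame to the smartphone app as one CSV line over Bluetooth;
  // the app classifies the frame with the trained model and speaks the object class.
  for (uint8_t i = 0; i < SWEEP_STEPS; i++) {
    bt.print(sweepCm[i]);
    bt.print(i + 1 < SWEEP_STEPS ? ',' : '\n');
  }
}
```

In this sketch the microcontroller only does the cheap work (sweeping, thresholding, buzzing), while the frame of integer distances is handed to the phone for classification, which matches the abstract's claim that simple LiDAR data keeps on-device processing time and energy low compared with image-processing-based glasses.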