This research proposes a system that leverages stereo vision and monocular depth estimation to form a depth map from which a 3D point cloud scene is extracted. The emergence of competitive neural networks for depth ma...
The new era of technology is being greatly influenced by the field of artificial intelligence. Computer vision and deep learning have become increasingly important due to their ability to process vast amounts of data and provide insights and solutions across a variety of fields. Computer vision, deep learning, and signal analysis are used in a growing number of applications and services, including smart devices, image and speech recognition, and healthcare. One such device is an infant monitoring system, which monitors the daily activities of the infant such as sleeping patterns, sounds, and movements. In this paper, deep learning and computer vision libraries were used to develop algorithms that detect whether the infant is in an uncomfortable or unsafe situation, such as sleeping on its back, having its face covered, or being awake. The smart infant monitoring system detects the infant's unsafe resting situation in real time and sends immediate alerts to the caretaker's device. This paper presents the design flow of a smart infant monitoring system consisting of a night-vision camera, a Jetson Nano, and a Wi-Fi internet connection. The pose estimation and awake detection algorithms were developed and tested successfully on different infant resting/sleeping situations. The smart infant monitoring system provides significant benefits for safety and an improved understanding of infants' sleep patterns and behavior.
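Pose-based safety checks of the kind the abstract describes can be sketched as simple rules over keypoints from an off-the-shelf pose estimator. The keypoint names, confidence threshold, and rules below are illustrative assumptions, not the paper's trained models:

```python
def resting_alerts(keypoints, conf_thresh=0.3):
    """Toy safety rules over 2-D pose keypoints, given as a dict
    name -> (x, y, confidence). Returns a list of alert strings."""
    alerts = []
    # Face covered: no facial keypoint detected with enough confidence.
    face = ("nose", "left_eye", "right_eye")
    if not any(keypoints.get(k, (0, 0, 0.0))[2] >= conf_thresh for k in face):
        alerts.append("face_covered")
    # Lying on the back: both shoulders and the nose visible together
    # is a crude proxy for a face-up posture.
    torso = ("left_shoulder", "right_shoulder", "nose")
    if all(keypoints.get(k, (0, 0, 0.0))[2] >= conf_thresh for k in torso):
        alerts.append("on_back")
    return alerts
```

In a real system these rules would be replaced by trained classifiers, with the rule layer kept only as a fallback.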
This study explores the integration of HL7 Fast Healthcare Interoperability Resources (FHIR) into a Clinical Decision Support System (CDSS) utilizing a Pepper robot to enhance doctor visits in a hospital setting. We e...
The concept of reward is fundamental in reinforcement learning, with a wide range of applications in the natural and social sciences. Designing an interpretable reward for decision-making that largely shapes the system's behavior has always been a challenge in reinforcement learning. In this work, we explore a discrete-time reward for reinforcement learning in continuous time and action spaces that represent many phenomena captured by applying physical laws. We find that the discrete-time reward leads to the extraction of the unique continuous-time decision law and improved computational efficiency by dropping the integral operator that appears in classical results with integral rewards. We apply this finding to solve output-feedback design problems in power systems. The results reveal that our approach removes an intermediate stage of identifying dynamical systems. Our work suggests that the discrete-time reward is efficient in the search for the desired decision law, which provides a computational tool to understand and modify the behavior of large-scale engineering systems using the optimal learned decision.
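The contrast between a classical integral reward and a discrete-time reward can be written out explicitly; the notation below is a generic sketch, not the paper's exact formulation:

```latex
% Classical continuous-time objective with an integral reward:
J_c = \int_0^\infty e^{-\gamma t}\, r\big(x(t), u(t)\big)\, \mathrm{d}t
% Discrete-time reward over samples t_k = k\,\Delta t, which drops
% the integral operator from the learning update:
J_d = \sum_{k=0}^{\infty} e^{-\gamma k \Delta t}\, r(x_k, u_k)\, \Delta t
```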
Voice activity detection (VAD) is a signal processing technique used to determine whether a given speech signal contains voiced or unvoiced segments. VAD is used in applications such as speech coding, voice-controlled systems, and speech feature extraction. For example, in Adaptive Multi-Rate (AMR) speech coding, VAD enables efficient coding of different speech frames at different bit rates. In this paper, we implement a Zero-Phase Zero Frequency Resonator (ZP-ZFR) as a VAD on hardware. The ZP-ZFR is an Infinite Impulse Response (IIR) filter that requires a lower filter order, making it well suited to hardware implementation. The proposed system was evaluated on the TIMIT database and implemented on the Nexys Video Artix-7 FPGA board. The hardware design was carried out in Vivado 2021.1, a popular tool for FPGA development, with Verilog as the Hardware Description Language (HDL). The proposed system achieves good performance with low complexity, making the hardware implementation suitable for a variety of applications.
We propose a distributed system based on low-power embedded FPGAs designed for edge computing applications, focused on exploring distributed scheduling optimizations for Deep Learning (DL) workloads to obtain the best performance in terms of latency and power efficiency. Our cluster remained modular throughout the experiments, with implementations consisting of up to 12 Zynq-7020-based boards and 5 UltraScale+ MPSoC FPGA boards connected through an Ethernet switch; the cluster evaluates the configurable Deep Learning Accelerator (DLA) Versatile Tensor Accelerator (VTA). This adaptable distributed architecture is distinguished by its capacity to evaluate and manage neural network workloads in numerous configurations, enabling users to conduct multiple experiments tailored to their specific application needs. The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the computation graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
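Arranging a computation graph as a pipeline across boards, with heavier layers given more resources, amounts to partitioning an ordered list of layer costs into contiguous stages. A simple greedy heuristic (not the paper's scheduler; layer costs and board counts are placeholders) could look like:

```python
def pipeline_stages(layer_costs, n_boards):
    """Split an ordered list of per-layer compute costs into contiguous
    pipeline stages, one per board, keeping stage costs roughly balanced.
    Greedy: close a stage once it reaches the ideal share of total cost,
    or when the remaining layers are just enough to fill the remaining
    boards one layer each."""
    target = sum(layer_costs) / n_boards
    stages, current, acc = [], [], 0.0
    for i, c in enumerate(layer_costs):
        current.append(i)
        acc += c
        remaining = len(layer_costs) - i - 1
        stages_left = n_boards - 1 - len(stages)
        if stages_left > 0 and (acc >= target or remaining == stages_left) \
                and remaining >= stages_left:
            stages.append(current)
            current, acc = [], 0.0
    stages.append(current)
    return stages
```

For example, with costs [1, 1, 8, 1, 1] over two boards, the heavy middle layer anchors the first stage and the tail layers form the second.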
Hands-on learning environments and cyber ranges are popular tools in cybersecurity education. These resources provide students with practical assessments to strengthen their abilities and can assist in transferring material from the classroom to real-world scenarios. Additionally, virtualization environments such as Proxmox provide scalability and network flexibility that can be adapted to newly discovered threats. However, due to the increasing demand for cybersecurity skills and experience, learning environments must support an even greater number of students each term. Manual provisioning and management of environments for large student populations can consume valuable instructor time. To address this challenge, we developed an Environment Provisioning and Management Tool for cybersecurity education. Our solution interacts with the exposed Proxmox API to automate user creation, server provisioning, and server destruction for a large set of users. Remote access is managed by a pfSense firewall. Based on our testing, a six-machine user environment could be provisioned in 14.96 seconds and destroyed in 15.06 seconds.
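The automation workflow described above can be sketched as a dry-run builder that emits the Proxmox REST API calls for one student: create a user, clone VMs from a template, and grant the user access. The endpoint paths follow the Proxmox VE REST API, but the realm, node name, VM ids, and role are placeholders, and this is not the paper's tool:

```python
def provision_user_env(user, node="pve", template_vmid=9000,
                       base_vmid=100, count=6):
    """Build the sequence of Proxmox API calls (method, path, payload)
    needed to create one student account, clone `count` VMs from a
    template, and grant the student access to each clone."""
    ops = [("POST", "/api2/json/access/users",
            {"userid": f"{user}@pve"})]
    for i in range(count):
        vmid = base_vmid + i
        # Linked clone of the template VM for this student.
        ops.append(("POST",
                    f"/api2/json/nodes/{node}/qemu/{template_vmid}/clone",
                    {"newid": vmid, "name": f"{user}-vm{i}", "full": 0}))
        # Grant the student a VM-user role on the new machine.
        ops.append(("PUT", "/api2/json/access/acl",
                    {"path": f"/vms/{vmid}", "users": f"{user}@pve",
                     "roles": "PVEVMUser"}))
    return ops
```

A thin HTTP client (with an API token) would then execute each tuple against the cluster; destruction would mirror this with DELETE calls.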
Computer vision has proven capable of accurately detecting and classifying objects within images, including cases where images are used to represent data rather than actual photographs. In cybersecurity, computer vision is rarely used; however, it has been applied successfully to botnet detection. We applied computer vision to determine how well it could detect and classify a large number of attacks, and whether it could run at a practical rate on a Jetson Nano. This was accomplished by training a convolutional neural network on data publicly available in the IoT-23 database, which contains packet captures from IoT devices with and without various malware infections. The neural network was evaluated on an RTX 3050 and a Jetson Nano to assess its suitability for IoT deployments.
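Representing network data as images typically means rendering raw packet bytes as grayscale pixels so an image CNN can consume them. A minimal sketch of that preprocessing step (the 32x32 size and zero-padding are assumptions, not IoT-23 or the paper's specifics):

```python
import numpy as np

def packet_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    """Render raw packet bytes as a square grayscale image, a common
    trick for feeding network traffic to an image classifier.
    Truncates or zero-pads the payload to exactly side*side bytes."""
    buf = payload[:side * side].ljust(side * side, b"\x00")
    return np.frombuffer(buf, dtype=np.uint8).reshape(side, side)
```

Stacking one such image per packet (or per flow window) yields the input tensor for a standard convolutional network.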
According to the WHO's 2021 report, drowning is the third leading cause of unintentional death worldwide. The use of autonomous drones for drowning recognition can increase the survival rate and help lifeguards and rescuers in their life-saving mission. This paper presents a real-time drowning recognition model and algorithm for ocean surveillance that can be implemented on a drone. The presented model was trained using two different approaches and achieves 88% accuracy. Compared to contemporary drowning recognition models designed for swimming pools, the presented model is better suited to outdoor applications in the ocean.
Due to the rapid adoption of Internet of Things (IoT) technologies, many networks are composed of a patchwork of devices designed by different software and hardware developers. In addition to the heterogeneity of IoT networks, the general rush to market has produced products with poor adherence to core cybersecurity principles. Together, these weaknesses leave organizations vulnerable to attack by botnets such as Mirai and Gafgyt. Infected devices pose a threat to both internal and external devices as they attempt to add new devices to the collective or to perpetrate targeted attacks within the network or against third parties. Artificial Intelligence (AI) tools for intrusion detection are popular platforms for detecting indicators of botnet infiltration. However, when training AI tools, the heterogeneity of the network hampers detection and classification accuracy due to differences in device architecture and network layout. To investigate this challenge, we explored the application of a Neural Network (NN) to the N-BaIoT dataset. The NN achieved 94% classification accuracy when trained using data from all devices in the network. Further, we examined the model's transferability by training on a single device and applying it to data from all devices. This resulted in a noticeable decline in classification accuracy; however, for cyberattack detection the model retained a very high true positive rate of 99.6%.
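The train-and-evaluate loop underlying such experiments can be illustrated with a minimal gradient-descent classifier on synthetic traffic-like features. This is a toy stand-in, not the paper's NN or the N-BaIoT features (which number 115 per sample); the cluster positions and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for benign vs. botnet traffic: two well-separated
# clusters in a 2-D feature space.
X_benign = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
X_botnet = rng.normal([3.0, 3.0], 0.5, size=(200, 2))
X = np.vstack([X_benign, X_botnet])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Minimal logistic-regression "network" trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)   # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))) > 0.5
acc = np.mean(pred == y)
```

Transferability experiments like the paper's would train this on one device's samples and evaluate on another's, where distribution shift degrades accuracy.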