Hands-on learning environments and cyber ranges are popular tools in cybersecurity education. These resources provide students with practical assessments to strengthen their abilities and can assist in transferring material from the classroom to real-world scenarios. Additionally, virtualization environments, such as Proxmox, provide scalability and network flexibility that can be adapted to newly discovered threats. However, due to the increasing demand for cybersecurity skills and experience, learning environments must support an even greater number of students each term. Manual provisioning and management of environments for large student populations can consume valuable instructor time. To address this challenge, we developed an Environment Provisioning and Management Tool for cybersecurity education. Our solution interacts with the exposed Proxmox API to automate user creation, server provisioning, and server destruction for a large set of users. Remote access is managed by a pfSense firewall. Based on our testing, a six-machine user environment could be provisioned in 14.96 seconds and destroyed in 15.06 seconds.
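The provisioning flow described above can be sketched as request builders against the Proxmox VE REST API. The endpoint paths follow Proxmox's documented API; the helper names, realm, VMID scheme, and the clone-from-template workflow are illustrative assumptions, not the paper's actual tool.

```python
# Sketch of batch provisioning against the Proxmox VE REST API, assuming
# token auth and a clone-from-template workflow. Endpoint paths follow the
# documented API; helper names and parameters are hypothetical.
BASE = "https://proxmox.example.edu:8006/api2/json"

def user_request(userid, realm="pve"):
    """Build the (path, payload) pair for creating one student user."""
    return (f"{BASE}/access/users", {"userid": f"{userid}@{realm}"})

def clone_request(node, template_vmid, new_vmid, name):
    """Build the (path, payload) pair for cloning a template VM."""
    return (f"{BASE}/nodes/{node}/qemu/{template_vmid}/clone",
            {"newid": new_vmid, "name": name, "full": 1})

def provision_environment(student, node, templates, base_vmid):
    """Return the ordered request list for one student's environment:
    one user-creation request, then one clone per template VM."""
    reqs = [user_request(student)]
    for i, tmpl in enumerate(templates):
        reqs.append(clone_request(node, tmpl, base_vmid + i,
                                  f"{student}-vm{i}"))
    return reqs
```

In a real deployment these pairs would be POSTed with an authenticated HTTP client; building them separately keeps the batch logic testable without a live cluster.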
We propose a distributed system based on low-power embedded FPGAs, designed for edge computing applications and focused on exploring distributed scheduling optimizations for Deep Learning (DL) workloads to obtain the best performance in terms of latency and power efficiency. Our cluster remained modular throughout the experiments, with implementations consisting of up to 12 Zynq-7020-based boards as well as 5 UltraScale+ MPSoC FPGA boards connected through an Ethernet switch; the cluster evaluates the configurable Deep Learning Accelerator (DLA) Versatile Tensor Accelerator (VTA). This adaptable distributed architecture is distinguished by its capacity to evaluate and manage neural network workloads in numerous configurations, which enables users to conduct multiple experiments tailored to their specific application needs. The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the computation graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
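The idea of pipelining a computation graph while giving heavier layers more of the cluster can be illustrated with a toy scheduler: a greedy contiguous partition of ordered layer costs across devices. The function and cost model are hypothetical, not the paper's scheduler.

```python
def allocate_pipeline(layer_costs, num_devices):
    """Greedily split an ordered NN computation graph into contiguous
    pipeline stages so each device receives roughly equal total cost.
    Layer order is preserved (pipeline structure); computationally
    heavier layers naturally claim more of a device's budget."""
    total = sum(layer_costs)
    target = total / num_devices
    stages, current, acc = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        current.append(i)
        acc += cost
        # Close the stage once its budget is met, keeping enough layers
        # in reserve so every remaining device gets at least one layer.
        remaining_layers = len(layer_costs) - i - 1
        if (acc >= target and len(stages) < num_devices - 1
                and remaining_layers >= num_devices - 1 - len(stages)):
            stages.append(current)
            current, acc = [], 0.0
    stages.append(current)
    return stages
```

For costs `[1, 1, 8, 1, 1]` on two devices, the expensive middle layer pulls the split so the first device takes layers 0-2 and the second takes 3-4.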
According to the WHO's 2021 report, drowning is the third leading cause of unintentional death worldwide. The use of autonomous drones for drowning recognition can increase the survival rate and help lifeguards and rescuers with their life-saving mission. This paper presents a real-time drowning recognition model and algorithm for ocean surveillance that can be implemented on a drone. The presented model has been trained using two different approaches and achieves 88% accuracy. Compared to contemporary drowning recognition models designed for swimming pools, the presented model is better suited for outdoor applications in the ocean.
This paper presents a real-time prototype system for monitoring a driver's distraction level. Given the high-traffic conditions common today, accidents are highly likely to occur because drivers cannot always recognize their own exhaustion levels. We utilize a single low-cost camera facing the driver connected to a single-board computer; a series of frame captures from the camera is fed to a neural network, and a pattern-detection algorithm predicts the driver's distraction level. All training is conducted on personalized training sets to increase accuracy and to match an individual's driving patterns as closely as possible. This system is designed to serve as a baseline for further system development, and many vital sub-components can be changed, including the input data type and the choice of machine learning algorithms.
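The pattern-detection stage over per-frame NN outputs can be sketched as a rolling-window pass; the window size, threshold, and function name are illustrative assumptions, not details from the paper.

```python
from collections import deque

def distraction_monitor(frame_scores, window=5, threshold=0.6):
    """Toy pattern-detection pass over per-frame NN distraction scores:
    flag the driver as distracted whenever the rolling mean over the
    last `window` frames exceeds `threshold`. Both parameters are
    illustrative, not taken from the paper."""
    buf = deque(maxlen=window)
    alerts = []
    for score in frame_scores:
        buf.append(score)
        # Only alert once the window is full, to avoid startup noise.
        alerts.append(len(buf) == window and sum(buf) / window > threshold)
    return alerts
```

Averaging over a window rather than thresholding single frames suppresses one-off misclassifications from the per-frame network.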
This study explores the integration of HL7 Fast Healthcare Interoperability Resources (FHIR) into a Clinical Decision Support System (CDSS) utilizing a Pepper robot to enhance doctor visits in a hospital setting. We e...
Flaw detection using the ultrasonic imaging technique has been widely used for Structural Health Monitoring (SHM). Ultrasonic testing has the advantages of one-sided measurement, high penetration depth, and inspection accuracy. Artificial intelligence (AI) methods such as Deep Learning (DL) can automate the inspection process with high reliability for imaging and SHM. Neural Networks (NNs) and DL methods can detect flaws in ultrasonic images with high accuracy but incur extensive computational costs for training and deployment. In this study, we propose a lightweight transformer NN, UFDCTNet: Ultrasonic Flaw Detection Convolutional Transformer Neural Network (TNN), optimized with a data-efficient Convolutional NN (CNN) for flaw detection using ultrasonic imaging. The TNN utilizes a self-attention network architecture that learns global representations and allows high parallelism in computation, resulting in reduced training time. CNNs learn local representations with fewer parameters because of their inherent spatial inductive bias. UFDCTNet exploits this advantage of CNNs to learn spatially local representations, resulting in fewer model parameters and faster training. For performance analysis, we trained the data-efficient TNN and CNN on ultrasonic images to detect flaws. The NNs are trained on the USimgAIST dataset, consisting of 7,000 experimental B-scan images representing flaw-free and flawed cases. A pulsed laser ultrasonic scanning system was used to collect these B-scan images from 17 stainless steel specimen plates with various types of flaws, as well as some plates without any damage.
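The global-representation claim rests on self-attention; a minimal single-head scaled dot-product attention in NumPy shows the mechanism. Shapes and names here are illustrative, not UFDCTNet's actual configuration.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a sequence of
    B-scan patch embeddings x of shape (seq_len, d_model). Weight
    matrices and dimensions are illustrative placeholders."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # pairwise patch affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v  # every output mixes information from ALL patches
```

The last line is the "global" part: each output embedding is a weighted sum over every input patch, whereas a convolution only mixes a local neighborhood.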
Decomposition and characterization of backscattered ultrasonic multiple interfering echoes is a critical step for analyzing the ultrasonic propagation path, propagation modes, and coupling condition of the transducers, and for detecting defects within the propagation path. Chirplet Signal Decomposition (CSD) is an efficient way of analyzing ultrasonic echoes. However, CSD is computationally expensive and time-consuming for real-time signal-processing applications. To address this problem, we present a System-on-Chip (SoC) implementation of the CSD algorithm, with the goal of speed optimization and real-time execution without sacrificing the accuracy of the signal decomposition and reconstruction. The implementation was tested on a Zynq UltraScale+ series FPGA and achieved an echo estimation within two milliseconds.
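CSD models an echo as a sum of Gaussian chirplet atoms; a minimal NumPy sketch of one such atom, using the conventional parameterization (the paper's exact parameterization and estimator may differ), illustrates what the SoC must evaluate repeatedly:

```python
import numpy as np

def chirplet(t, tau, f_c, alpha, beta, amp=1.0, phase=0.0):
    """Gaussian chirplet atom: a Gaussian envelope centered at arrival
    time tau modulating a linearly chirped carrier. Parameters are
    conventional (bandwidth alpha, center frequency f_c, chirp rate
    beta), not lifted from the paper's implementation."""
    dt = t - tau
    envelope = np.exp(-alpha * dt**2)
    carrier = np.cos(2 * np.pi * (f_c * dt + beta * dt**2) + phase)
    return amp * envelope * carrier
```

Decomposition then amounts to iteratively estimating the parameter set of the dominant atom, subtracting it, and repeating on the residual, which is the computationally expensive loop the FPGA accelerates.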
Due to the rapid adoption of Internet of Things (IoT) technologies, many networks are composed of a patchwork of devices designed by different software and hardware developers. In addition to the heterogeneity of IoT networks, the general rush to market has produced products with poor adherence to core cybersecurity principles. Coupled together, these weaknesses leave organizations vulnerable to attack by botnets such as Mirai and Gafgyt. Infected devices pose a threat to both internal and external devices as they attempt to add new devices to the collective or to perpetrate targeted attacks within the network or against third parties. Artificial Intelligence (AI) tools for intrusion detection are popular platforms for detecting indicators of botnet infiltration. However, when training AI tools, the heterogeneity of the network hampers detection and classification accuracy due to differences in device architecture and network layout. To investigate this challenge, we explored the application of a Neural Network (NN) to the N-BaIoT dataset. The NN achieved 94% classification accuracy when trained using data from all devices in the network. Further, we examined the model's transferability by training on a single device and applying it to data from all devices. This resulted in a noticeable decline in classification accuracy. However, when considering cyberattack detection, the model retained a very high true positive rate of 99.6%.
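As a much-reduced stand-in for the paper's NN, a logistic-regression detector trained by gradient descent on synthetic traffic features illustrates the detection setup; the features, cluster placement, and threshold are invented for this sketch.

```python
import numpy as np

def train_detector(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression stand-in for the paper's NN:
    X holds per-device traffic features, y marks benign (0) vs
    botnet (1) samples. Plain gradient descent on cross-entropy;
    far simpler than the actual model."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid attack probability
        grad = p - y                        # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def detect(X, w, b):
    """Flag samples whose predicted attack probability exceeds 0.5."""
    return (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
```

The transferability experiment in the abstract corresponds to training this on one device's feature distribution and evaluating `detect` on another's.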
Principal Component Analysis (PCA) is a versatile Unsupervised Learning (UL) technique for reducing the dimensionality of datasets. As a result, PCA is widely used in consumer and research applications as a preprocessing tool for identifying important features prior to further analysis. In instances where on-site personnel or developers do not have the expertise to apply UL techniques, third-party processors are frequently retained. However, the release of client or proprietary data poses a substantial security risk. This risk increases the regulatory and contractual burden on analysts when interacting with sensitive or classified information. Homomorphic Encryption (HE) cryptosystems are a novel family of encryption algorithms that permit approximate addition and multiplication on encrypted data. When applied to UL models such as PCA, experts may apply their expertise while maintaining data privacy. To evaluate the potential application of HE, we implemented PCA using the Microsoft SEAL HE library. The resulting implementation was applied to the MNIST handwritten digit dataset for feature reduction and image reconstruction. Based on our results, HE considerably increased the time required to process the dataset. However, the HE algorithm is still viable for non-real-time applications, as it had an average pixel error of near zero for all image reconstructions.
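The plaintext arithmetic underlying this pipeline can be sketched as covariance-eigendecomposition PCA in NumPy. In the encrypted version the same matrix products would run under SEAL's CKKS scheme; this sketch only mirrors the arithmetic, not the encryption.

```python
import numpy as np

def pca_reduce(X, k):
    """Plaintext PCA reference: center the data, take the top-k
    eigenvectors of the covariance matrix, and project onto them.
    The HE variant would perform the same products on ciphertexts."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]           # top-k principal components
    return Xc @ W, W, mu

def pca_reconstruct(Z, W, mu):
    """Map reduced coordinates back to the original feature space,
    as done for the MNIST image reconstructions."""
    return Z @ W.T + mu
```

On data that is genuinely low-rank, projecting onto as many components as the true rank reconstructs the input exactly, which is the plaintext analogue of the near-zero pixel error reported above.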
Ultrasonic Nondestructive Evaluation (NDE) has been extensively used to characterize the microstructure of metallic structures for early assessment of material integrity. However, industrial NDE requires the processing, storage, and real-time transmission of large volumes of ultrasonic data. Therefore, it is indispensable to compress ultrasonic data with high fidelity. In this study, we explore the development of Unsupervised Learning (UL) based Neural Network (NN) models for massive ultrasonic data compression, and an innovative multilayer perceptron residual autoencoder, the Ultrasonic Residual Compressive Autoencoder (URCA), is introduced to compress ultrasonic data with high compression performance. URCA can be fast-trained and utilizes a sparsity penalty with residual connections to optimize compression performance. UL-based NNs allow for memory-efficient training and rapid online augmentation of the model. To examine the results, a high-performance ultrasonic signal acquisition system was assembled to automatically collect ultrasonic data from heat-treated 1018 steel blocks for microstructure characterization. Compression performance is analyzed based on compression ratio, reconstruction accuracy, and model training time. Reconstruction accuracy was measured using the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR). By training the URCA NN to a high reconstruction performance of 0.96 SSIM, we obtained 91.25% memory space-saving. For a higher compression performance of 0.80 SSIM, we obtained 96.04% memory space-saving.
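Two of the reported metrics, PSNR and memory space-saving, can be written as minimal NumPy helpers. The definitions are standard; the peak value and byte accounting are assumptions here, not taken from the paper.

```python
import numpy as np

def psnr(x, x_hat, peak=1.0):
    """Peak Signal-to-Noise Ratio between an original signal and its
    reconstruction, in dB. `peak` is the assumed maximum signal value."""
    mse = np.mean((x - x_hat) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def space_saving(original_bytes, compressed_bytes):
    """Memory space-saving percentage: how much storage the compressed
    representation avoids relative to the original."""
    return 100 * (1 - compressed_bytes / original_bytes)
```

Under this definition, the reported 91.25% space-saving corresponds to a compressed representation occupying 8.75% of the original storage.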