Remote Photoplethysmography (rPPG) is a non-invasive approach for monitoring Heart Rate (HR) that can be used in various applications in healthcare and biometrics. rPPG measurements acquired from facial videos have become very popular, and one of the main steps of this technique is facial tracking and Region of Interest (ROI) extraction. This research paper investigates four widely used face tracking algorithms, namely MediaPipe Face Mesh (MPFM), Haar Cascade, Multi-task Cascaded Convolutional Network (MTCNN), and Dlib, with respect to their ROI extraction capabilities for rPPG HR measurements. Using evaluation metrics such as accuracy, processing time, and ease of ROI extraction, this work recommends the most suitable of these face tracking algorithms for rPPG measurements and presents a prioritized list of ROIs ranked by their sensitivity for rPPG measurements. Experimental results showed that the MPFM algorithm and cheek ROIs provided the best HR measurements.
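The ROI-to-HR pipeline the abstract describes can be illustrated with a minimal sketch: given the mean green-channel value of a cheek ROI per frame, an FFT peak in the plausible heart-rate band yields a BPM estimate. This is a generic rPPG baseline in numpy, not the paper's implementation; the function name and band limits are assumptions.

```python
import numpy as np

def estimate_hr_bpm(green_trace, fps):
    """Estimate heart rate (BPM) from a mean-green-channel ROI trace via FFT."""
    signal = green_trace - np.mean(green_trace)        # remove DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency bins in Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Restrict to a plausible HR band, 0.7-4.0 Hz (42-240 BPM)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0

# Synthetic check: a 1.2 Hz pulse (72 BPM) sampled at 30 fps for 10 s
fps = 30
t = np.arange(0, 10, 1 / fps)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 100.0  # weak pulse on a skin baseline
print(round(estimate_hr_bpm(trace, fps)))           # → 72
```

In practice the trace would be band-pass filtered and detrended first; the bare FFT peak is only the simplest version of the idea.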
Vision based solutions for the localization of vehicles have become popular recently. We employ an image retrieval based visual localization approach. The database images are kept with GPS coordinates and the location...
The Industrial Internet of Things (IIoT) cites the usage of classical Internet of Things (IoT) in different applications and industrial fields. Smart grids, smart homes, supply chain management, connected cars, and sm...
Accurate weather forecasting is indispensable for countries like Bangladesh because of their reliance on agriculture and vulnerability to frequently occurring natural disasters such as floods, cyclones, and riverbank erosion. Bangladesh has a fairly regular annual weather pattern where the weather features, such as temperature, humidity, and rainfall, are highly correlated. Leveraging this inter-feature correlation between temperature and rainfall, we propose a flexible transformer-based neural network that can forecast monthly temperature or rainfall by analyzing the past few data points of one or both of these weather features. We evaluated the proposed method using a public dataset called the Bangladesh Weather Dataset, which contains 115 years of Bangladesh weather data comprising month-wise average temperature and rainfall measurements. Our method demonstrates substantial improvements over previously proposed approaches for this task in both metrics: mean squared error and mean absolute error. Our proposed transformer network is also significantly more lightweight, computationally efficient, and accurate. This transformer-aided method paves the way to understanding and leveraging the complex relationship between weather features and opens up possibilities for even more robust methods of forecasting weather data.
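The forecasting setup the abstract describes, predicting a month's value from the past few data points of one or both features, can be sketched as a windowing step over the monthly series. This is a hypothetical data-preparation sketch in numpy, not the paper's transformer; the window length k=12 and the toy seasonal series are assumptions.

```python
import numpy as np

def make_windows(temp, rain, k=12):
    """Build (past k months of [temp, rain]) -> next-month-temperature pairs."""
    X, y = [], []
    for i in range(len(temp) - k):
        X.append(np.stack([temp[i:i + k], rain[i:i + k]], axis=-1))  # shape (k, 2)
        y.append(temp[i + k])                                        # next month's temperature
    return np.array(X), np.array(y)

# Toy series: a seasonal temperature cycle and a correlated rainfall cycle
months = np.arange(240)                            # 20 years of monthly data
temp = 25 + 5 * np.sin(2 * np.pi * months / 12)
rain = 200 + 150 * np.sin(2 * np.pi * months / 12 - 0.5)
X, y = make_windows(temp, rain, k=12)
print(X.shape, y.shape)                            # → (228, 12, 2) (228,)
```

Each `X[i]` would be fed to the sequence model as a length-12 sequence of 2-dimensional tokens; dropping one column gives the single-feature variant the abstract mentions.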
E-learning is one of the favorite jobs where all the learners can learn through different online sites to update their knowledge acquisition. Though online learning supports several benefits, there are many challenges...
ISBN (digital): 9798331518981; ISBN (print): 9798331518998
Accurate estimation of agricultural crop acreage is crucial for effective planning, resource management, and policymaking. In recent years, satellite remote sensing technologies like OPTISAR have become valuable tools for mapping and monitoring agricultural fields. This paper provides a comprehensive review of various machine learning techniques, particularly Random Forest and Hierarchical Decision Rule Classification, employed for the acreage estimation of Rabi pulses and Kharif onions using OPTISAR observations. Random Forest, as an ensemble learning method, has gained popularity due to its robustness in handling high-dimensional data, while Hierarchical Decision Rule Classification offers a structured approach for decision-making through layered rules. The review analyses the performance, accuracy, and limitations of these methods, highlighting their roles in addressing challenges such as mixed cropping patterns, varied phenological stages, and spectral variability in satellite data. The findings suggest that the integration of machine learning techniques with remote sensing data holds significant potential for improving agricultural monitoring and decision-making processes.
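Hierarchical Decision Rule Classification, as named in the review, layers simple threshold rules so that each level refines the previous one. A toy sketch of that structure follows; all thresholds, feature names, and class boundaries here are illustrative assumptions, not the reviewed methods' actual rules.

```python
def classify_pixel(ndvi, vv_backscatter_db):
    """Toy layered decision rules for crop mapping (thresholds are illustrative)."""
    # Layer 1: separate vegetation from bare soil / water using an optical index
    if ndvi < 0.2:
        return "non-crop"
    # Layer 2: among vegetated pixels, split crop types using SAR backscatter
    if vv_backscatter_db > -10.0:
        return "kharif_onion"   # hypothetical: rougher canopy, stronger return
    return "rabi_pulse"         # hypothetical: smoother canopy, weaker return

print(classify_pixel(0.1, -8.0))   # → non-crop
print(classify_pixel(0.6, -8.0))   # → kharif_onion
print(classify_pixel(0.6, -14.0))  # → rabi_pulse
```

The appeal of the layered form is interpretability: each rule can be audited against field knowledge, which is harder with an ensemble such as Random Forest.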
ISBN (digital): 9798350387490; ISBN (print): 9798350387506
AI-driven precision agriculture is revolutionising the agricultural business by using advanced techniques to monitor crop health and optimize output. By integrating data from sensors, drones, and satellite photos with artificial intelligence, farmers may get up-to-date information on crop condition, soil health, and environmental factors. Through meticulous analysis of vast amounts of data, these artificial intelligence systems are capable of detecting the first signs of pests, diseases, and nutritional deficiencies. This enables timely interventions to enhance the overall health of crops. AI-driven predictive analytics manage the timing of irrigation, fertilisation, and harvesting to enhance productivity and guarantee optimum resource use. This approach addresses the growing global demand for food while also improving agricultural sustainability and productivity. The Python script demonstrates the integration of AI and IoT technologies into a precision agriculture system to optimize agricultural yields and monitor crop health. It generates a synthetic dataset with real-world parameters and evaluates a Random Forest Regressor model on training and testing data. The simulation provides visual insights and performance metrics, highlighting the potential of AI and IoT in precision agriculture for enhanced decision-making and productivity.
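The described workflow, a synthetic dataset with sensor-like parameters, a fitted model, and held-out evaluation, can be sketched without external dependencies. The paper's script uses a Random Forest Regressor; to keep this sketch numpy-only, an ordinary least-squares model stands in, and every feature name, coefficient, and unit below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic sensor readings (ranges and units are illustrative)
soil_moisture = rng.uniform(10, 40, n)   # percent
temperature = rng.uniform(15, 35, n)     # degrees C
ndvi = rng.uniform(0.2, 0.9, n)          # vegetation index
# Hypothetical yield response plus measurement noise
yield_t_ha = (2.0 + 0.05 * soil_moisture - 0.03 * temperature
              + 3.0 * ndvi + rng.normal(0, 0.1, n))

# Train/test split and a least-squares fit (stand-in for the Random Forest)
X = np.column_stack([np.ones(n), soil_moisture, temperature, ndvi])
train, test = slice(0, 400), slice(400, None)
coef, *_ = np.linalg.lstsq(X[train], yield_t_ha[train], rcond=None)
pred = X[test] @ coef
mae = np.mean(np.abs(pred - yield_t_ha[test]))
print(f"test MAE: {mae:.3f} t/ha")  # small, since the model family matches the data
```

A Random Forest would be substituted for the least-squares step when nonlinear responses (e.g. yield plateaus) are expected, which is presumably why the script described in the abstract uses one.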
ISBN (print): 9781665454315
Students must be attentive in the classroom to improve their learning. Further, it is important for faculty to know whether students are paying complete attention in the classroom. If not, faculty can slow down and modify their mode of delivery, making classes more interesting to regain students' attention. The aim of this work is to monitor students' attention in the classroom based on their physical state, which is the result of biochemical and electrical signals in the brain. Approaches such as electrodermal activity (EDA), electroencephalography (EEG), and electrocardiography (ECG) rely on physiological signals that are hard to extract and analyze, and headbands for capturing EEG signals may themselves distract a student's attention in class. The proposed work makes use of a Deep Convolutional Neural Network (DCNN) with Histogram of Gradient (HoG) features for face detection and the FaceNet algorithm for face recognition. To evaluate the attentiveness of the students, a Convolutional Neural Network model is proposed. The trained model is tested by deploying it on a Jetson Nano. To identify the attentiveness of a student, the physical state of the face is considered, such as head movement and whether the eyes and mouth are open or closed. Furthermore, the open eye region is analyzed to determine whether it is partially or fully open. By using a CNN to train the model instead of classical machine learning algorithms, the accuracy of the system is found to increase.
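One common way to decide whether an eye region is open, partially open, or closed is the eye aspect ratio (EAR) over six eye landmarks. The abstract does not specify this formula, so it is offered only as an illustrative proxy for the eye-state check described, with toy landmark coordinates.

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks (p1..p6 in the usual ordering):
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).  A low EAR suggests a closed eye."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Open eye: large vertical landmark spread; closed eye: vertical pairs collapse
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
print(round(eye_aspect_ratio(open_eye), 2))    # → 1.33
print(round(eye_aspect_ratio(closed_eye), 2))  # → 0.13
```

A threshold between the two regimes (often with a band for "partially open") would drive the attentiveness decision, while the CNN in the paper learns such distinctions directly from pixels.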
ISBN (digital): 9798350379037; ISBN (print): 9798350379044
Learning to generate motions of thin structures such as plant leaves in dynamic view synthesis is challenging. This is because thin structures usually undergo small but fast, non-rigid motions as they interact with air and wind. When given a set of RGB images or videos of a scene with moving thin structures as input, existing methods that map the scene to its corresponding canonical space for rendering novel views fail, as the object movements are too subtle compared to the background. Disentangling objects with thin parts from the background scene is also challenging when the parts show rapid motions. To address these issues, we propose a Neural Radiance Field (NeRF)-based framework that accurately reconstructs thin structures such as leaves and captures their subtle, fast motions. The framework learns the geometry of a scene by mapping the dynamic images to a canonical scene in which the scene remains static. We propose a ray masking network to further decompose the canonical scene into foreground and background, thus enabling the network to focus more on foreground movements. We conducted experiments using a dataset containing thin structures such as leaves and petals, which includes image sequences collected by us and one public image sequence. Experiments show superior results compared to existing methods. Video outputs are available at https://***/.
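The canonical-scene rendering step builds on standard NeRF volume rendering, in which per-sample densities along a ray are composited into a single color. A minimal numpy sketch of that compositing follows; the densities and colors are toy values, and the paper's ray masking network would additionally down-weight background samples, which is not shown here.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Standard NeRF volume rendering along one ray:
    alpha_i  = 1 - exp(-sigma_i * delta_i)
    T_i      = prod_{j<i} (1 - alpha_j)          (accumulated transmittance)
    weight_i = T_i * alpha_i;  color = sum_i weight_i * c_i
    """
    alpha = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return weights @ colors, weights

densities = np.array([0.0, 0.5, 3.0, 0.1])  # sigma per sample (empty, leaf edge, leaf, behind)
colors = np.array([[0.0, 0.0, 0.0],
                   [0.2, 0.8, 0.2],
                   [0.1, 0.6, 0.1],
                   [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)                   # sample spacing along the ray
rgb, w = render_ray(densities, colors, deltas)
print(w.sum() <= 1.0)                       # → True (weights never exceed 1)
```

A learned foreground mask effectively reshapes these weights so that thin, fast-moving samples dominate the composite instead of being averaged away into the background.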
In the age of cloud computing, data stored and processed in the cloud must be secure. This study examines how powerful machine learning can secure cloud computing data. Three trials assessed different machine learning models in this setting. Experiment 1 used a Random Forest model and achieved 95% accuracy, 0.92 precision, 0.96 recall, and a 0.94 F1 score. This shows that the model can accurately categorize security threats with a good balance of true and false positives. A Deep Neural Network (DNN) improved accuracy to 97% in Experiment 2. Precision, recall, and F1 score values of 0.94, 0.98, and 0.96 demonstrate the DNN's ability to discriminate threats from normal activity. The model captures complicated patterns well, making it a powerful cloud security tool. Security analysis using reinforcement learning, specifically Q-learning, was introduced in Experiment 3. The model's 88% detection rate showed its capacity to identify threats, but its 0.05 false positive rate created a trade-off between true and false positives, and its 0.12 false negative rate indicates room for improvement in threat detection accuracy. These results indicate that state-of-the-art machine learning is capable of protecting cloud data. The Random Forest and Deep Neural Network models achieve very high accuracy balanced with reasonable precision-recall trade-offs, whereas reinforcement learning through Q-learning shows promise but needs modification to improve both accuracy and false positive rates. While rising threats demand constant adaptation and learning, the model must also continue to fulfill security needs. This study helps formulate a secure cloud computing infrastructure that can withstand evolving threats.
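The reported precision/recall/F1 triples are internally consistent: F1 is the harmonic mean of precision and recall, which a two-line check confirms for Experiments 1 and 2.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Experiment 1 (Random Forest): precision 0.92, recall 0.96
print(round(f1_score(0.92, 0.96), 2))  # → 0.94
# Experiment 2 (DNN): precision 0.94, recall 0.98
print(round(f1_score(0.94, 0.98), 2))  # → 0.96
```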