With the emergence of AI for good, there has been increasing interest in building inclusive, computer vision data-driven deep learning AI solutions. Sign Language Recognition (SLR) has gained attention recently. It is an essential component of a sign-to-text translation system to support the Deaf and Hard-of-Hearing population. This paper presents a computer VISIOn data-driven deep learning framework for Sign Language video Recognition (VisioSLR). VisioSLR provides a precise measurement of sign translation performance for developing an end-to-end computational translation system. Considering the scarcity of sign language datasets, which hinders the development of an accurate recognition model, we evaluate the performance of our framework by fine-tuning the well-known YOLO models, which are pretrained on a collection of images and videos unrelated to signs, using a small-sized sign language dataset. Gathering a sign language dataset for training would involve an enormous amount of time to collect and annotate videos across different environmental setups and multiple signers, in addition to the training time of a model. Numerical evaluations of VisioSLR show that our framework recognizes signs with a mean average precision of 97.4%, 97.1%, and 95.5% and a recognition time of 11, 12, and 12 milliseconds on YOLOv8m, YOLOv9m, and YOLOv11m, respectively.
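As a hedged illustration of the fine-tuning setup the abstract describes, the sketch below adapts a COCO-pretrained YOLO model to a small sign dataset using the ultralytics package; the dataset config file "signs.yaml" and the training hyperparameters are assumptions, not the paper's settings.

```python
# Sketch: fine-tune a pretrained YOLO model on a small sign language
# dataset, then read back the mAP reported at validation time.
from ultralytics import YOLO

# Start from COCO-pretrained weights (trained on sign-unrelated images).
model = YOLO("yolov8m.pt")

# Fine-tune on the small sign language dataset (hypothetical config file).
model.train(data="signs.yaml", epochs=50, imgsz=640)

# Validate: metrics include mAP@0.5, comparable to the paper's mAP figures.
metrics = model.val()
print(f"mAP@0.5: {metrics.box.map50:.3f}")
```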
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Self-supervised time series anomaly detection (TSAD) demonstrates remarkable performance improvements by extracting high-level data semantics through proxy tasks. Nonetheless, most existing self-supervised TSAD techniques rely on manual or neural-based transformations when designing proxy tasks, overlooking the intrinsic temporal patterns of time series. This paper proposes LTPAD, a local temporal pattern learning-based time series anomaly detection method. LTPAD first generates sub-sequences. Pairwise sub-sequences naturally manifest proximity relationships along the time axis, and such correlations can be used to construct supervision and train neural networks to facilitate the learning of temporal patterns. The time interval between two sub-sequences serves as the label for each sub-sequence pair. By classifying these labeled data pairs, our model captures the local temporal patterns of time series, thereby modeling temporal pattern-aware "normality". Anomaly scores for test data are obtained by evaluating their conformity to these learned patterns shared by the training data. Extensive experiments show that LTPAD significantly outperforms state-of-the-art competitors.
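A minimal sketch of the proxy task described above, assuming illustrative window, stride, and interval settings rather than the paper's: slide a window over the series, pair sub-sequences, and label each pair with the discretized time interval between them. A classifier trained on these pairs learns local temporal patterns, and low classification confidence at test time can then signal anomalies.

```python
# Build the self-supervised pair dataset: sub-sequences paired and
# labeled by the (discretized) time interval between the two windows.
import numpy as np

def make_pairs(series, win=16, stride=8, max_gap=4):
    # Sliding-window sub-sequences.
    starts = np.arange(0, len(series) - win + 1, stride)
    subs = np.stack([series[s:s + win] for s in starts])
    pairs, labels = [], []
    for i in range(len(subs)):
        for j in range(i + 1, min(i + 1 + max_gap, len(subs))):
            pairs.append(np.stack([subs[i], subs[j]]))
            labels.append(j - i - 1)  # interval class: 0 .. max_gap-1
    return np.stack(pairs), np.array(labels)

# Toy series standing in for real training data.
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * np.random.randn(1000)
X, y = make_pairs(series)
print(X.shape, y.shape)  # pairs of windows and their interval labels
```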
The outbreak of the COVID-19 pandemic revealed the criticality of timely intervention in a situation exacerbated by shortages in medical staff and equipment. Pain-level screening is the initial step toward identifying the severity of patient conditions. Automatic recognition of states and feelings helps identify patient symptoms, enabling immediate, adequate action and a patient-centric medical plan tailored to the patient's state. In this paper, we propose a framework for pain-level detection for deployment in the United Arab Emirates and assess its performance using the approaches most used in the literature. Our results show that deploying a deep learning pain-level detection framework is promising for accurately identifying pain levels.
Diabetes Mellitus has no permanent cure to date and is one of the leading causes of death globally. The alarming increase in diabetes calls for precautionary measures to avoid or predict its occurrence. This paper proposes HealthEdge, a machine learning-based smart healthcare framework for type 2 diabetes prediction in an integrated IoT-edge-cloud computing system. Numerical experiments and a comparative analysis were carried out between the two machine learning algorithms most used in the literature, Random Forest (RF) and Logistic Regression (LR), using two real-life diabetes datasets. The results show that RF predicts diabetes with, on average, 6% higher accuracy than LR.
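The following sketch mirrors the paper's RF-versus-LR comparison with scikit-learn; a synthetic dataset stands in for the two real-life diabetes datasets, so the numbers it prints are illustrative only.

```python
# Cross-validated accuracy comparison of the two models the paper evaluates.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a diabetes dataset (e.g., 768 patients, 8 features).
X, y = make_classification(n_samples=768, n_features=8, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```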
The explosive adoption of IoT applications in domains such as healthcare, transportation, smart homes, and industry has led to the pervasive adoption of edge and cloud computing. Large-scale edge and cloud data centers, consisting of thousands of computing servers, are energy-hungry infrastructures that exacerbate issues such as the environmental carbon footprint and high electricity costs. Developing energy-efficient solutions for cloud infrastructure requires knowledge of the correlation between a computing server's resource utilization and its power consumption. Power consumption modeling captures this relationship and is crucial for energy savings. In this paper, we propose PowerGen, a framework to generate datasets of server resource utilization and the corresponding power consumption. The proposed framework will aid researchers in formulating correlations between resource utilization and power consumption using power prediction models, and in evaluating energy-aware resource management approaches in an edge-cloud computing system. It will also help edge and cloud administrators evaluate the energy efficiency of heterogeneous server architectures in a data center. We exemplify the applicability of the dataset generated by our framework in power prediction modeling and energy-aware scheduling for green computing scenarios.
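As one example of the utilization-to-power modeling such a dataset enables, the sketch below fits the widely used linear server power model P(u) = P_idle + (P_max - P_idle) * u; the utilization and power samples are synthetic placeholders for a PowerGen-generated trace.

```python
# Fit a linear utilization-to-power model on a (synthetic) server trace.
import numpy as np

rng = np.random.default_rng(0)
util = rng.uniform(0, 1, 200)                              # CPU utilization in [0, 1]
power = 120 + (250 - 120) * util + rng.normal(0, 5, 200)   # watts, with noise

# Least-squares fit of P(u) = intercept + slope * u.
slope, intercept = np.polyfit(util, power, 1)
print(f"P_idle ~ {intercept:.1f} W, P_max ~ {intercept + slope:.1f} W")
```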
Authors:
Nada Shahin, Leila Ismail
Department of CS and Software Engineering, Intelligent Distributed Computing and Systems (INDUCE) Lab, College of IT, National Water and Energy Center, UAE University, Al-Ain, UAE; CLOUDS Lab, School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia
ChatGPT is a language model based on generative AI. Existing research on ChatGPT has focused on its use in various domains; however, its potential for Sign Language Translation (SLT) is yet to be explored. This paper addresses this void. We present GPT's evolution, providing a retrospective analysis of the improvements to its architecture for SLT. We explore ChatGPT's capabilities in translating different sign languages, paving the way to better accessibility for the Deaf and Hard-of-Hearing community. Our experimental results indicate that ChatGPT can accurately translate from English to American (ASL), Australian (AUSLAN), and British (BSL) sign languages and from Arabic Sign Language (ArSL) to English with only one prompt iteration. However, the model failed to translate from Arabic to ArSL and from ASL, AUSLAN, and BSL to Arabic. Consequently, we present challenges and derive insights for future research directions.
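A hedged sketch of the kind of single-prompt translation query probed here, using the OpenAI Python client; the model name and prompt wording are assumptions rather than the paper's exact setup.

```python
# One-prompt English-to-ASL-gloss translation request via the OpenAI client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Translate the English sentence 'Where is the library?' "
                   "into American Sign Language (ASL) gloss notation.",
    }],
)
print(resp.choices[0].message.content)
```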
ISBN (digital): 9798331507589
ISBN (print): 9798331507596
The Internet of Things (IoT) revolutionizes smart city domains such as healthcare, transportation, industry, and education. The Internet of Medical Things (IoMT) is gaining prominence, particularly in smart hospitals and Remote Patient Monitoring (RPM). The vast volume of data generated by IoMT devices should be analyzed in real time for health surveillance, prognosis, and disease prediction. Current approaches that rely on cloud computing to provide the necessary computing and storage capabilities do not scale for these latency-sensitive applications. Edge computing emerges as a solution by bringing cloud services closer to IoMT devices. This paper introduces SmartEdge, an AI-powered smart healthcare end-to-end integrated edge and cloud computing system for diabetes prediction. This work addresses latency concerns and demonstrates the efficacy of edge resources in healthcare applications within an end-to-end system. The system leverages various risk factors for diabetes prediction. We propose an edge- and cloud-enabled framework to deploy the proposed diabetes prediction models on various configurations using edge nodes and main cloud servers. Performance is evaluated using latency, accuracy, and response time. Using ensemble machine learning voting algorithms improves the prediction accuracy by 5% versus a single-model prediction.
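The voting idea can be sketched with scikit-learn's VotingClassifier, as below; the constituent models and the synthetic data are illustrative, not the paper's deployed configuration.

```python
# Soft-voting ensemble versus a single model, under cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

single = LogisticRegression(max_iter=1000)
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=1)),
                ("nb", GaussianNB())],
    voting="soft",  # average predicted probabilities across models
)
for name, clf in [("single LR", single), ("voting ensemble", vote)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```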
Blockchain technology has piqued the interest of businesses of all types, while consistently improving and adapting to business requirements. Several blockchain platforms have emerged, making it challenging to select a suitable one for a specific type of business. This paper presents a classification of over one hundred blockchain platforms. We develop smart contracts for detecting healthcare insurance fraud using the top two blockchain platforms selected by our proposed decision-making map approach, which identifies the platforms most suitable for a healthcare insurance fraud detection application. Our classification shows that the largest percentage of platforms can be used across all application domains, the second largest serves financial services, and a small number targets application development in specific domains. Our decision-making map and performance evaluations reveal that Hyperledger Fabric surpasses Neo in all metrics for detecting healthcare insurance fraud.
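A hypothetical sketch of a weighted-criteria ranking in the spirit of the decision-making map; the criteria, weights, and platform scores below are invented for illustration and do not reproduce the paper's evaluation.

```python
# Rank candidate platforms by a weighted sum of application-driven criteria.
platforms = {
    "Hyperledger Fabric": {"privacy": 5, "throughput": 4, "smart_contracts": 5},
    "Neo":                {"privacy": 3, "throughput": 4, "smart_contracts": 4},
}
weights = {"privacy": 0.5, "throughput": 0.2, "smart_contracts": 0.3}

ranked = sorted(
    platforms.items(),
    key=lambda kv: sum(weights[c] * s for c, s in kv[1].items()),
    reverse=True,
)
for name, scores in ranked:
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: {total:.2f}")
```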
ISBN (digital): 9798350379495
ISBN (print): 9798350379501
Machine translation has played a critical role in reducing language barriers, but its adaptation for Sign Language Machine Translation (SLMT) has been less explored. Existing works on SLMT mostly use the Transformer neural network, which exhibits low performance due to the dynamic nature of sign language. In this paper, we propose a novel Gated-Logarithmic Transformer (GLoT) that captures the long-term temporal dependencies of sign language as time-series data. We perform a comprehensive evaluation of GLoT against the Transformer and Transformer-fusion models as baselines, for Sign-to-Gloss-to-Text translation. Our results demonstrate that GLoT consistently outperforms the other models across all metrics. These findings underscore its potential to address the communication challenges faced by the Deaf and Hard of Hearing community.
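The abstract does not detail GLoT's internals, so the sketch below shows only a generic gated residual connection inside a transformer encoder layer in PyTorch, as one plausible reading of "gated"; it is an illustration, not the authors' architecture.

```python
# A transformer encoder layer whose residual connection is gated, letting
# the model modulate how much attention output flows through per position.
import torch
import torch.nn as nn

class GatedEncoderLayer(nn.Module):
    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.gate = nn.Linear(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x)        # self-attention over the sequence
        g = torch.sigmoid(self.gate(x))  # per-position, per-channel gate
        return self.norm(x + g * a)      # gated residual connection

seq = torch.randn(2, 50, 64)             # (batch, frames, features)
print(GatedEncoderLayer()(seq).shape)     # torch.Size([2, 50, 64])
```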
With the growing Deaf and Hard of Hearing population worldwide and the persistent shortage of certified sign language interpreters, there is a pressing need for an efficient, signs-driven, integrated end-to-end transl...