Background: In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. Specifically, we introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. With this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of images. The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information, and all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in advance. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light conditions. Results: Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions: Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. However, the handling of morphing artifacts in the parallax image regions remains a topic for future research.
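The patch-blending step at the heart of such view interpolation can be illustrated with a minimal sketch. This is not the paper's algorithm (which operates on correspondence-aligned morphing cells); it only shows how a novel view's patch can be a weighted blend of corresponding source patches, with all names invented for illustration:

```python
def morph_patch(patches, weights):
    """Blend corresponding pixels of several aligned patches.

    patches: list of equally sized 2D pixel grids (lists of lists)
    weights: per-source blend weights, assumed to sum to 1
    (Illustrative only: correspondence-based warping inside a
    morphing cell is omitted.)
    """
    rows, cols = len(patches[0]), len(patches[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for patch, w in zip(patches, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * patch[r][c]
    return out

# Two 2x2 source patches; a viewpoint halfway between them blends 50/50.
a = [[0.0, 0.0], [0.0, 0.0]]
b = [[1.0, 1.0], [1.0, 1.0]]
mid = morph_patch([a, b], [0.5, 0.5])
```

In the actual method, the weights would come from the novel viewpoint's position inside the local morphing cell rather than being fixed constants.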
Internet of Things (IoT) devices are often directly authenticated by the gateways within the network. In complex and large systems, IoT devices may be connected to the gateway through another device in the network. In...
With advancements in technology, the study of data hiding (DH) in images has become increasingly important. In this paper, we introduce a novel data hiding scheme that employs a voting strategy to predict pixels base...
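The truncated abstract above describes voting-based pixel prediction for data hiding. A generic sketch of such a predictor, with the set of base predictors and the median vote chosen purely for illustration (not the paper's exact scheme), could look like:

```python
import statistics

def predict_pixel(img, r, c):
    """Predict pixel (r, c) from its causal neighbors by letting several
    simple predictors vote and taking the (lower) median.

    Base predictors (illustrative): left neighbor, top neighbor,
    their average, and a gradient-style estimate (left + top - top_left).
    """
    left = img[r][c - 1]
    top = img[r - 1][c]
    top_left = img[r - 1][c - 1]
    votes = [left, top, (left + top) // 2, left + top - top_left]
    return statistics.median_low(votes)

# 2x2 toy image; predict the bottom-right pixel from its neighbors.
img = [[10, 12], [11, 0]]
pred = predict_pixel(img, 1, 1)
```

In a reversible data-hiding pipeline, the difference between the predicted and the actual pixel value is what carries the embedded payload.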
Distributed Denial of Service (DDoS) attacks pose a significant threat to network infrastructures, leading to service disruptions and potential financial losses. In this study, we propose an ensemble-based approach fo...
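An ensemble DDoS detector of the kind the abstract describes can be sketched as a majority vote over base classifiers. The flow features (`pps`, `src_ips`) and the threshold rules below are hypothetical placeholders for trained models:

```python
def majority_vote(classifiers, sample):
    """Label a traffic sample by majority vote over base classifiers
    (1 = attack, 0 = benign)."""
    votes = sum(clf(sample) for clf in classifiers)
    return 1 if votes * 2 > len(classifiers) else 0

# Hypothetical flow features: packets per second, count of unique source IPs.
base_models = [
    lambda s: 1 if s["pps"] > 10_000 else 0,
    lambda s: 1 if s["src_ips"] > 500 else 0,
    lambda s: 1 if s["pps"] > 50_000 or s["src_ips"] > 2_000 else 0,
]

flood = {"pps": 80_000, "src_ips": 3_000}
benign = {"pps": 100, "src_ips": 5}
```

A real ensemble would combine trained models (e.g. decision trees, gradient boosting) rather than fixed thresholds, but the voting logic is the same.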
Network function virtualisation (NFV) offers several benefits to both network operators and end users. It is a more programmable and low-cost solution as compared to a traditional network. Since the network functions ...
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task-offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on the optimization of task-offloading when MEC resources are limited. In addition, existing solutions only decide whether to accept an offloaded task request based on the single decision result of the current time slot, but lack support for multiple retries in subsequent time slots. This results in TDs missing potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic retry. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource-release estimation are integrated to infer the probability of a request being accepted in subsequent time slots. The framework continuously learns optimized decision-making experiences to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the total TD energy consumption and delay for task execution and improves the task-offloading rate and system resource utilization compared to the benchmark method.
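The request-holding idea in such a two-stage scheme can be sketched as follows. The `accept_prob` callback stands in for the LSTM-based resource-release estimate described above; the names, the unit-capacity model, and the 0.5 retry threshold are assumptions for illustration, not the paper's implementation:

```python
from collections import deque

def offload_with_holding(slots, capacity, accept_prob, max_retries=3):
    """Stage 1: admit offloading requests while MEC capacity remains.
    Stage 2: hold rejected requests and retry them in later time slots
    when the predicted acceptance probability is high enough.

    slots: list of per-time-slot request batches, each request a dict
    accept_prob(slot): predicted acceptance probability (stub for the
    LSTM-based estimator)
    """
    held = deque()
    accepted = []
    for slot, batch in enumerate(slots):
        free = capacity
        # Retry previously held requests first, if the predictor is optimistic.
        for _ in range(len(held)):
            req = held.popleft()
            if free > 0 and accept_prob(slot) >= 0.5:
                accepted.append(req)
                free -= 1
            elif req["tries"] < max_retries:
                req["tries"] += 1
                held.append(req)  # keep holding for a later slot
        # Then admit this slot's new requests while capacity remains.
        for req in batch:
            if free > 0:
                accepted.append(req)
                free -= 1
            else:
                held.append({"id": req["id"], "tries": 1})
    return accepted

# Three requests arrive in slot 0 but capacity is 2; the third is held
# and succeeds on retry in slot 1.
slots = [[{"id": 0}, {"id": 1}, {"id": 2}], [{"id": 3}]]
done = offload_with_holding(slots, capacity=2, accept_prob=lambda s: 1.0)
```

Without the holding queue, request 2 would simply be rejected in slot 0, which is exactly the missed opportunity the abstract points out.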
Authors:
A. E. M. Eljialy, Mohammed Yousuf Uddin, Sultan Ahmad
Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia, and also with University Center for Research and Development (UCRD), Department of Computer Science and Engineering, Chandigarh University, Punjab, India
Intrusion detection systems (IDSs) are deployed to detect anomalies in real time. They classify a network's incoming traffic as benign or anomalous (attack). An efficient and robust IDS in software-defined networks is an inevitable component of network security. The main challenges of such an IDS are achieving zero or extremely low false positive rates and high detection rates. Internet of Things (IoT) networks run on devices with minimal resources, which makes deploying traditional IDSs in IoT networks unfeasible. Machine learning (ML) techniques are extensively applied to build robust IDSs. Many researchers have utilized different ML methods and techniques to address the above challenges. The development of an efficient IDS starts with a good feature selection process to avoid overfitting the ML model. This work proposes a multiple-feature-selection process followed by classification. In this study, a Software-Defined Networking (SDN) dataset is used to train and test the proposed model. The model applies multiple feature selection techniques to select high-scoring features from the full feature set. Highly relevant features for anomaly detection are selected on the basis of their scores to generate the candidate dataset. Multiple classification algorithms are then applied to the candidate dataset to build models. The proposed model exhibits considerable improvement in the detection of attacks with high accuracy and low false positive rates, even with only a few selected features.
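The multiple-feature-selection step described above can be sketched in a few lines: score every feature under more than one criterion and keep only the features that score best overall. The two scoring functions below (variance and class-mean gap) are illustrative stand-ins for the chi-square or mutual-information scores a real IDS pipeline would typically use:

```python
def variance(xs):
    """Spread of one feature column (low-variance features carry little signal)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def class_mean_gap(xs, ys):
    """How far apart the feature's means are for attack (1) vs benign (0) rows."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def select_features(rows, labels, k=2):
    """Combine both criteria per feature and keep the k best-scoring
    feature indices (the 'candidate dataset' columns)."""
    n_feat = len(rows[0])
    cols = [[row[j] for row in rows] for j in range(n_feat)]
    scores = [(variance(col) + class_mean_gap(col, labels), j)
              for j, col in enumerate(cols)]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])

# Toy traffic rows: feature 0 separates the classes, feature 2 is constant.
rows = [[0, 5, 1], [0, 6, 1], [10, 5, 1], [10, 6, 1]]
labels = [0, 0, 1, 1]
```

The selected column indices would then define the reduced dataset passed to the classifiers.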
Computational approaches can speed up the drug discovery process by predicting drug-target affinity, which is otherwise time-consuming. In this study, we developed a convolutional neural network (CNN)-based model named S...
The most generic and understandable way of communication is by observing facial expressions; Facial Expression Recognition (FER) performance is affected by differences in ethnicity, culture, and geography. This res...
Since the beginning of Natural Language Processing, researchers have been highly interested in the idea of equipping machines with the ability to think, understand, and communicate in a human-like manner. However, despite the significant progress in this field, particularly in English, Arabic research remains in its early stages of development. This study presents a Systematic Literature Review on the challenges faced in Arabic chatbot development and the proposed solutions. Utilizing the search terms "ARABIC," "CHATBOT," "CHALLENGES," and "SOLUTION" (including synonyms), we systematically surveyed studies published between 2000 and 2023 from Scopus, ScienceDirect, Web of Science, PubMed, SpringerLink, IEEE Xplore, ACM, EBSCO, and ICI. Moreover, besides Google Scholar and ResearchGate, we employed a manual snowballing technique to discover supplementary relevant research by examining the references of all chosen primary studies. The included studies were assessed for eligibility based on the quality assessment checklist we developed. Out of 3,891 studies, only 64 were deemed eligible. Many challenges were identified, including the scarcity of well-structured datasets; to overcome this, (n=35) studies manually collected and preprocessed data. Additionally, Arabic language complexity led (n=53) researchers to adopt pre-scripted rule approaches, followed by generative approaches (n=10) and a hybrid approach (n=1). Furthermore, (n=27) studies employed human-based evaluation metrics to assess chatbot performance, while (n=11) studies did not use any metrics. Based on the conducted research, a critical research priority is providing Arabic with high-quality resources, such as an Arabic dataset that includes dialectal variations and incorporates empathy, lexicon corpora, and also a word normalization library. These resources will enable chatbots to interact more naturally and humanely. Additionally, hybrid approaches have shown promising results, particularly in low-resource settings.
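Among the resources the review calls for is an Arabic word-normalization library. The sketch below applies normalization conventions commonly used in Arabic NLP (strip diacritics, unify alef variants, fold taa marbuta and alef maqsura); it is a generic illustration, not a library the review names:

```python
import re

# Tashkeel (short-vowel marks, U+064B..U+0652) plus the dagger alef (U+0670).
DIACRITICS = re.compile("[\u064B-\u0652\u0670]")

def normalize_arabic(text):
    """Normalize Arabic text for matching and retrieval:
    1. strip diacritics,
    2. unify alef variants (آ أ إ -> ا),
    3. fold taa marbuta (ة -> ه) and alef maqsura (ى -> ي)."""
    text = DIACRITICS.sub("", text)
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)  # آ أ إ -> ا
    text = text.replace("\u0629", "\u0647")                # ة -> ه
    text = text.replace("\u0649", "\u064A")                # ى -> ي
    return text

# "أَحْمَد" (with diacritics and hamza-on-alef) normalizes to "احمد".
norm = normalize_arabic("\u0623\u064E\u062D\u0652\u0645\u064E\u062F")
```

Folding these orthographic variants is what lets a chatbot match user input written in the many equivalent spellings common in informal Arabic text.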