A series-series compensated inductive wireless power transfer link (SS-IWPTL) may attain a specific load-independent voltage gain for a given coupling coefficient at two particular operating frequencies, residing away from res...
To meet the latency requirements of various business use cases and applications in 5G New Radio (NR), a two-step grant-free RACH procedure has been proposed in Third Generation Partnership Project (3GPP) Release 16 for granting access to subscribers. However, due to the limited number of preambles, there is a non-zero probability that two mobile User Equipments (UEs) select the same preamble signature, leading to collisions. Consequently, the base stations (gNBs) in the 5G Radio Access Network (RAN) are unable to send a response to the UEs. Furthermore, as the number of cellular UEs and Machine Type Communication (MTC) devices increases, the probability of such preamble collisions grows, leading to reattempts by UEs. This, in turn, results in increased latency and reduced channel utilization. To reduce contention during preamble access, we propose deep-learning-based models to design a RACH procedure that predicts incoming connection requests in advance and proactively allocates uplink resources to UEs. We use Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) models to predict which UEs are going to participate in the two-step RACH procedure. Extensive simulations show that both the RNN and LSTM models perform equally well in reducing the number of collisions in a dense user scenario, thereby enabling massive user access to the 5G network.
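The prediction task above can be framed as next-slot sequence classification over each UE's activity history. A minimal sketch on a synthetic periodic MTC-style trace; a logistic-regression classifier stands in for the paper's RNN/LSTM, and the window length `W` is an illustrative choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-UE trace: 1 if the UE attempted RACH in a slot. Bursty,
# periodic traffic (e.g. MTC reports) makes the next slot predictable.
T, period = 400, 8
trace = np.zeros(T, dtype=int)
trace[::period] = 1                       # periodic attempts
trace ^= rng.random(T) < 0.02             # small amount of noise

W = 16                                    # history window fed to the predictor
X = np.array([trace[t - W:t] for t in range(W, T)])
y = trace[W:]                             # label: does the UE transmit next slot?

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
```

A gNB could use such a per-UE predictor to pre-allocate uplink resources only to UEs likely to attempt access in the next slot, shrinking contention on the shared preambles.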
Stock market prediction is the endeavor to predict the future price of an organization's stock. It is challenging to predict future trends accurately because the stock market is a dynamic, continually evolving system. An impressive profit might be made by correctly predicting the future price of a stock. Understanding a company's stock price pattern and forecasting its future development and financial growth is highly advantageous. The use of machine learning algorithms to forecast stock values has gained popularity in recent years. The objective of this research is to develop an artificially intelligent model capable of estimating a given company's stock prices. The main topic of this study is the application of Long Short-Term Memory (LSTM), a machine learning technique based on recurrent neural networks (RNNs).
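A single LSTM step can be written out to show the gating mechanics such a model relies on. A minimal NumPy sketch with an untrained cell fed toy price returns; the weights, sizes, and inputs are illustrative, not the paper's model:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input x, hidden/cell state (h, c), stacked weights.

    The four gates (input i, forget f, candidate g, output o) follow the
    standard LSTM formulation, sliced out of one stacked pre-activation."""
    H = h.shape[0]
    z = W @ x + U @ h + b                 # (4H,) pre-activations
    i = sigmoid(z[:H])                    # input gate
    f = sigmoid(z[H:2 * H])               # forget gate
    g = np.tanh(z[2 * H:3 * H])           # candidate cell state
    o = sigmoid(z[3 * H:])                # output gate
    c_new = f * c + i * g                 # updated cell state
    h_new = o * np.tanh(c_new)            # updated hidden state
    return h_new, c_new

# Run a toy window of normalized price returns through an untrained cell.
rng = np.random.default_rng(1)
D, H = 1, 4                               # input dim (price return), hidden size
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for price_return in [0.01, -0.02, 0.015, 0.0]:
    h, c = lstm_step(np.array([price_return]), h, c, W, U, b)
```

The forget gate is what lets the cell carry price context across long windows, which is why LSTMs suit this task better than plain RNNs.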
Jordan has a high and growing level of traffic accidents, reaching (160,600) accidents in 2021, (11,241) of them with human injuries, (589) deaths, and (320) million JOD in losses. Road traffic accidents are currently the 8th leading cause of death worldwide, accounting for about 1.35 million fatalities annually. Understanding the primary causes of these accidents and the circumstances in which they occur is essential if governments throughout the world are to put policies in place to limit the number of fatalities caused by traffic accidents. This project has two objectives: (i) to learn more about Jordan's present road accident situation by performing an exploratory study on several road accident datasets, and (ii) to investigate the effectiveness of various machine-learning techniques in predicting the severity of road accidents in Jordan. The dataset entities were collected, collated, explored, and prepared for use in the model. To assess their predictive performance, five different classification algorithms were trained and tested. The findings of the present study demonstrate that the best algorithm was Logistic Regression, with an accuracy of 98.1%.
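The winning severity classifier can be sketched with scikit-learn. The synthetic data and hypothetical features below are placeholders for the Jordanian accident records, which are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an accident table: features such as speed, time
# of day, or road type (all hypothetical), label = severe vs. non-severe.
# Severe accidents are the minority class, mirroring real severity data.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With such imbalanced severity labels, accuracy alone can flatter a model that mostly predicts the majority class, so precision/recall per class are worth reporting alongside it.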
Discovering emerging entities (EEs) is the problem of finding entities before their establishment. These entities can be critical for individuals, companies, and governments. Many of them can be discovered on social media platforms, e.g., Twitter, and they have been a focus of research in academia and industry in recent years. As in any machine learning problem, data availability is one of the major challenges. This paper proposes EEPT, an online clustering method able to discover EEs without any need for training on a dataset. Additionally, due to the lack of a proper evaluation metric, this paper uses a new metric to evaluate the results. The results show that EEPT is promising and finds significant entities before their establishment.
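EEPT's internals are not given in the abstract, but the general shape of a training-free online clustering step can be sketched as a leader-style algorithm: each arriving item joins the nearest cluster within a radius or opens a new one. The radius threshold and 2-D points are illustrative:

```python
import numpy as np

def online_cluster(points, radius):
    """Assign each arriving point to the nearest existing centroid, or
    open a new cluster when none is within `radius`. Centroids update
    incrementally, so no training pass over a dataset is needed."""
    centroids, counts, labels = [], [], []
    for p in points:
        if centroids:
            d = [np.linalg.norm(p - c) for c in centroids]
            j = int(np.argmin(d))
            if d[j] <= radius:
                counts[j] += 1
                centroids[j] += (p - centroids[j]) / counts[j]  # running mean
                labels.append(j)
                continue
        centroids.append(p.astype(float).copy())
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels, centroids

# Toy stream: two bursts of points, e.g. embeddings of tweets about
# two distinct emerging topics.
rng = np.random.default_rng(2)
stream = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
                    rng.normal([5, 5], 0.1, (20, 2))])
labels, centroids = online_cluster(stream, radius=1.0)
```

In an EE setting the points would be text embeddings of a tweet stream, and a newly opened cluster that keeps growing is a candidate emerging entity.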
To accommodate the community's rapid adoption of new types of transportation, namely electric vehicles, whose demand is increasing rapidly, the need for reliable charging materials has increased. In addition, alt...
Unsupervised and semi-supervised learning methods are used extensively in the field of remote sensing due to the scarcity of annotated data. However, standard clustering methods are highly sensitive to parameters like the initial selection of centroids and the number of clusters. Predetermining the number of clusters creates a bias that forces the model to form the given number of clusters irrespective of the grouping patterns present in the data. In this paper, we aim to explore the inherent structure of Landsat-8 data and group it based on certain similarities. Pixel-wise features are extracted using partially labelled data denoting three standard classes: water bodies, settlements, and vegetation. These features are then used to estimate the actual number of clusters by applying a stability analysis method using repetitive sampling.
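Stability analysis by repetitive sampling can be sketched as: cluster random subsamples, extend each labelling to the full data, and score how well pairs of labellings agree; a stable k yields consistent partitions. A minimal scikit-learn sketch, with well-separated synthetic blobs standing in for the Landsat-8 pixel features and ARI as the (assumed) agreement score:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

def stability(X, k, n_pairs=5, seed=0):
    """Average agreement (ARI) between k-means labelings fit on random
    half-subsamples and evaluated on the full data."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_pairs):
        idx_a = rng.choice(len(X), size=len(X) // 2, replace=False)
        idx_b = rng.choice(len(X), size=len(X) // 2, replace=False)
        la = KMeans(k, n_init=10, random_state=0).fit(X[idx_a]).predict(X)
        lb = KMeans(k, n_init=10, random_state=1).fit(X[idx_b]).predict(X)
        scores.append(adjusted_rand_score(la, lb))
    return float(np.mean(scores))

# Three blobs stand in for water / settlement / vegetation pixel features.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)
scores = {k: stability(X, k) for k in (2, 3, 4, 5)}
best_k = max(scores, key=scores.get)      # most stable cluster count
```

Unstable values of k force k-means to split a natural group arbitrarily, so the split direction flips between subsamples and the agreement score drops.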
Detecting diabetes at an early stage can help save lives and significantly improve patients' quality of life. Diabetes can be detected with the assistance of information regarding the patient's lifestyle and health. This work aims to predict diabetic patients using different machine-learning classification algorithms and a dataset of diabetic and healthy patients. The work employs a data balancing technique to handle the data imbalance issue, as well as cross-validation. In addition, it compares these machine-learning algorithms according to several performance indicators like accuracy, precision, recall, and F1-score. Accordingly, the Random Forest classifier proved to produce the best results, with accuracy, precision, recall, and F1-score all equal to 89%.
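A minimal version of the winning pipeline, with `class_weight="balanced"` standing in for the unspecified data balancing technique (the paper may use resampling instead) and synthetic data in place of the patient dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced synthetic stand-in for a diabetic/healthy patient table.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

# class_weight="balanced" reweights the minority (diabetic) class during
# training; 5-fold cross-validation scores the F1 of that class.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
mean_f1 = scores.mean()
```

Scoring F1 rather than accuracy matters here: on imbalanced medical data, a classifier that ignores the diabetic minority can still post high accuracy.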
In humans, skin tone varies significantly from one extreme (darkest) to the other (lightest) due to differences in the amount of pigmentation (melanin). Even though skin color detection and segmentation is the challe...
This paper deals with network slicing in 5G networks, where a slice is defined as a set of virtual network function (VNF) instances that collaborate to create an end-to-end (E2E) virtual network. A set of slices is implemented on a physical substrate network maintained by an infrastructure provider. The virtual network embedding (VNE) problem deals with the deployment of a network slice's virtual network request on the substrate. Typically, the resources allocated per slice's request are not shared with other slices due to privacy, security, and performance considerations. However, there are situations in which VNF instances might be aggregated across many slices to further increase the utilization ratio of the substrate infrastructure. Given these shareable VNF nodes, deploying the network slices is effectively the embedding of numerous virtual networks linked by the shared VNFs. This paper uses a reinforcement learning (RL) approach for the embedding problem. The approach incorporates sharing-based virtual network functions into an existing RL scheme designed for virtual node embedding without much additional computation. The proposed scheme is implemented using a policy-based RL method; the performance study shows an increase in the reward ratio by up to 20% compared to the non-sharing case, along with an increase in the acceptance percentage of slices.
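The paper's RL scheme is not detailed in the abstract; as a generic illustration of policy-based node embedding, a REINFORCE sketch that learns to place a slice's VNF instances on capacity-constrained substrate nodes (virtual links, sharing, and the paper's state encoding are all omitted; sizes and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

S = 4                                  # substrate nodes
CAP = np.ones(S)                       # one unit of CPU capacity per node
DEMANDS = [1.0, 1.0, 1.0]              # the slice's three VNF instances

theta = np.zeros((len(DEMANDS), S))    # per-VNF logits over substrate nodes

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rollout(theta):
    """Sample a placement for each VNF from the softmax policy; reward is
    1 when every placement fits the residual node capacity, else 0."""
    residual = CAP.copy()
    actions = []
    for i, demand in enumerate(DEMANDS):
        a = rng.choice(S, p=softmax(theta[i]))
        actions.append(a)
        residual[a] -= demand
    return actions, float((residual >= 0).all())

# REINFORCE: raise the log-probability of actions seen in rewarded
# rollouts, relative to a moving-average reward baseline.
lr, baseline = 0.5, 0.0
for _ in range(500):
    actions, r = rollout(theta)
    baseline = 0.9 * baseline + 0.1 * r
    for i, a in enumerate(actions):
        grad = -softmax(theta[i])
        grad[a] += 1.0                 # gradient of log pi_i(a) w.r.t. logits
        theta[i] += lr * (r - baseline) * grad

success = np.mean([rollout(theta)[1] for _ in range(200)])
```

The reward here only checks feasibility; a fuller model would reward revenue-to-cost ratio and, for the sharing case, credit placements that route several slices through one aggregated VNF instance.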