This paper presents a new circuit structure for a five-level switched-capacitor (SC) based transformerless inverter that is ideal for grid-integrated photovoltaic (PV) applications. The main objective of the proposed mu...
Unprecedented capabilities for content generation, predictive analytics, and automation are made available by the introduction of Generative Artificial Intelligence (AI) technologies, which usher in a new age of indust...
Satellite-Airborne-Terrestrial Networks (SATNs) are expected to provide communication and edge-computing services for a plethora of IoT applications. However, preserving the freshness of information is challenging since it requires timely data collection, bandwidth, and offloading decisions across different administrative domains. In this paper, we aim to optimize the age of information (AoI) and energy consumption tradeoff when serving multiple traffic classes in SATNs. A cross-domain federated computation offloading algorithm (Fed-SATEC-Off) is presented in which different service providers (SPs) collaborate to allocate the bandwidth while unmanned aerial vehicles (UAVs) and satellites make decisions to collect, relay, and offload the computing tasks. Given the requirements of each traffic class, the optimal collaborative strategies between SPs, UAVs, and satellites are obtained together with the computation offloading topology. Our algorithm is based on multi-agent actor-critic learning and incorporates federated learning to improve the convergence of the learning process. The numerical results show that Fed-SATEC-Off reduces the AoI by a factor of 4 and achieves faster convergence than existing approaches.
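As an illustration of the federated aggregation step described above, the minimal sketch below shows a generic FedAvg-style parameter exchange among learning agents (e.g., UAVs and satellites). The local update rule, reward signal, agent count, and parameter sizes are placeholders for illustration and are not taken from Fed-SATEC-Off itself.

```python
# Minimal sketch: federated averaging of actor parameters across agents,
# as in a multi-agent actor-critic scheme combined with federated learning.
# The local update, "optimum", and dimensions below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

NUM_AGENTS = 6          # e.g. UAVs and satellites acting as learning agents
PARAM_DIM = 16          # size of each agent's (flattened) actor parameters
LOCAL_STEPS = 5         # local gradient steps between federation rounds
ROUNDS = 10             # federation rounds coordinated by the service providers

def local_update(params: np.ndarray) -> np.ndarray:
    """Placeholder for an agent's local actor-critic update.

    Takes noisy gradient-like steps toward a common stand-in optimum so the
    effect of averaging is visible; a real agent would instead use its
    observed AoI/energy reward signal.
    """
    target = np.zeros_like(params)
    for _ in range(LOCAL_STEPS):
        grad = (params - target) + 0.1 * rng.normal(size=params.shape)
        params = params - 0.1 * grad
    return params

# Each agent starts from its own random actor parameters.
agents = [rng.normal(size=PARAM_DIM) for _ in range(NUM_AGENTS)]

for r in range(ROUNDS):
    # 1) Local training on each agent with its own observations.
    agents = [local_update(p) for p in agents]
    # 2) Federated aggregation (FedAvg): average parameters and broadcast.
    global_params = np.mean(agents, axis=0)
    agents = [global_params.copy() for _ in agents]
    print(f"round {r}: global parameter norm = {np.linalg.norm(global_params):.4f}")
```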
In combinatorial optimization problems with one-hot constraints, the number of binary variables that take the value one in a feasible solution is constant and small, so it is important to always consider the...
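As a concrete illustration of the one-hot setting, the toy sketch below encodes an exactly-one constraint on a group of binary variables as a quadratic penalty and brute-forces the minimum; the objective costs and penalty weight are arbitrary examples, not taken from the paper.

```python
# Toy illustration of a one-hot constraint on binary variables:
# exactly one variable in the group must equal 1.
import itertools
import numpy as np

n = 4                                     # one group of 4 binary variables
cost = np.array([3.0, 1.0, 4.0, 2.0])     # arbitrary linear costs
P = 10.0                                  # penalty weight for constraint violation

def energy(x: np.ndarray) -> float:
    # Objective plus quadratic penalty P * (sum(x) - 1)^2 enforcing one-hot.
    return float(cost @ x + P * (x.sum() - 1) ** 2)

best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
           key=energy)
print("best assignment:", best, "energy:", energy(best))
assert best.sum() == 1   # feasible solutions always have exactly one variable set to 1
```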
ISBN (Print): 9798350333794
Presently, the global proliferation of Internet of Things (IoT) devices and applications has accelerated owing to their advantages in enhancing business and industrial environments, as well as the daily lives of individuals. Nevertheless, IoT devices are vulnerable to malevolent network traffic, which can lead to undesirable outcomes and disrupt the operation of IoT devices. Consequently, it is imperative to devise a screening method for network traffic to identify and categorize malicious activity and reduce its harmful effects. In this regard, the present study proposes a framework and thoroughly compares single and ensemble machine learning models that use predictive analytics to detect and classify network activity in an IoT network. Specifically, our framework and models distinguish between normal and anomalous network activity. Furthermore, network traffic is classified into eight categories: normal, denial of service (DoS) attack, Scan attack, Data Type Probing, Malicious Operation, Malicious Control, Spying attack, and Wrong Setup based attack. A special focus of this paper is on identifying privacy-related attacks in comparison with the main cybersecurity-related ones. Several supervised machine learning models have been implemented to characterize their performance in detecting and classifying network activities for IoT networks, based on the ensemble learning framework as well as on the single learning machine framework. Moreover, classifiers based on the statistical learning theory framework are also considered, including hybrid Bayesian theory models, decision trees (DT), and the ordinal learning modelling approach. The statistical pattern recognition based models, as well as all supervised learning machine models, have been evaluated on a broad benchmark dataset for IoT attacks. In addition, an improved data engineering feature processing model has been applied to the dataset under consideration to improve the accuracy of all models under consideration.
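As an illustration of the single-versus-ensemble comparison described above, the sketch below trains a single decision tree and a soft-voting ensemble on an eight-class classification task; the synthetic data, feature count, and hyperparameters are stand-ins, not the IoT benchmark or the exact models used in the paper.

```python
# Sketch: comparing a single classifier with an ensemble on an 8-class
# traffic-classification task. Synthetic data stands in for the IoT benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# 8 classes: normal plus 7 attack types (DoS, scan, data type probing, ...).
X, y = make_classification(n_samples=4000, n_features=12, n_informative=8,
                           n_classes=8, n_clusters_per_class=1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

single = DecisionTreeClassifier(random_state=42)              # single learner
ensemble = VotingClassifier(                                  # ensemble learner
    estimators=[("dt", DecisionTreeClassifier(random_state=42)),
                ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=42))],
    voting="soft")

for name, model in [("single DT", single), ("voting ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```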
ISBN (Digital): 9798331505745
ISBN (Print): 9798331505752
Stomach cancer, sometimes referred to as gastric cancer, remains one of the most common and widespread deadly diseases worldwide and poses a significant challenge in oncology due to late detection and high mortality rates. The advent of deep learning has revolutionized stomach cancer prediction by leveraging sophisticated models such as convolutional neural networks (CNNs), multimodal learning, and hybrid frameworks that integrate multiple data sources such as medical imaging, histological slides, clinical records, and genetic profiles, providing better performance with high accuracy. This review comprehensively explores recent advances in deep learning applications for the detection of stomach cancer, highlighting their performance, accuracy, and clinical implications. Comparative analysis demonstrates that different deep learning models, such as ResNet50, DenseNet121, EfficientNetB5, and MobileNetV2, provide impressive results with accuracy up to 99% and high sensitivity and specificity. It also indicates that system accuracy needs to be improved to 100% in order to detect stomach cancer as early as possible. Early detection helps stop the further spread of the disease and lowers death rates.
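As an illustration of the kind of CNN pipeline surveyed above, the sketch below adapts a ResNet50 backbone to a gastric image classification task; the class count, input size, random tensors, and training settings are assumptions for illustration and are not drawn from any of the reviewed studies.

```python
# Sketch: adapting a ResNet50 CNN to a binary gastric-image classification task.
# No dataset or pretrained weights are downloaded here; random tensors stand in
# for image batches.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2   # e.g. normal vs. malignant tissue (assumed labels)

# weights=None avoids a download; in practice one would typically start from
# ImageNet weights (e.g. models.ResNet50_Weights.IMAGENET1K_V2).
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for
# endoscopic/histology image batches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("example training loss:", loss.item())
```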
Melanoma is the most fatal type of malignant skin cancer, posing a serious threat to people's physical health. If melanoma of the skin is detected early, the chances of survival are very high. Dermoscopic images c...
The outbreak of the coronavirus disease (COVID-19) has had a profound impact on education worldwide. The rise of remote learning is one of the most significant changes in this regard, as many schools and universities ...
Recently, reinforcement learning has been at the forefront of the automation revolution. Reinforcement learning has proved to be a particularly powerful technique since it can understand the actions that will lead to ...
Scan integrity testing has become essential for design manufacturing that demands low DPPM. As current technology has moved into the nanometer realm, the manufacturing process has begun to see newer defects at the cell level. Such defects are not detectable by stuck-at or transition delay tests alone. The newer kinds of defects that are becoming prominent are rather subtle, requiring a test methodology that involves finer control of voltage and current at the transistor itself. In this work, we address a class of faults that can be classified as stuck-on faults in NMOS transistors. The specific problem we attempt to solve is how to test for a stuck-on fault in an NMOS transistor on the scan path. The solution we propose is a new scan integrity test methodology that ensures the correctness of the scan chain before it is used for design test. The proposed technique involves careful control of the gate voltage at a near-threshold value. This near-threshold input voltage sensitizes the stuck-on fault, which is then propagated to the scan-out by a flush shift. The proposed methodology has been experimentally verified for a standard scan cell design in 16nm technology, and at the logic level it has been implemented on an ISCAS circuit to demonstrate the sensitization and propagation of the fault effect. The proposed methodology does not require any additional DFT area.
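To make the flush-shift idea concrete, the purely behavioral sketch below shifts a flush pattern through a simple scan-chain model in which one cell corrupts data only under a near-threshold drive condition, and compares the observed scan-out with the fault-free expectation. The chain length, fault position, flush pattern, and fault abstraction are illustrative assumptions, not the paper's transistor-level methodology.

```python
# Behavioral sketch of flush-shift based scan integrity checking.
# The stuck-on defect is modeled abstractly as a cell that pulls its stored
# value low only when the chain is driven at a near-threshold input level.

FLUSH_PATTERN = [0, 0, 1, 1] * 4      # classic 0011 flush sequence
CHAIN_LENGTH = 6
FAULTY_CELL = 3                        # index of the defective scan cell (assumed)

def shift_chain(pattern, near_threshold_drive, faulty_cell=None):
    """Shift `pattern` through the scan chain and return the scan-out stream."""
    chain = [0] * CHAIN_LENGTH
    scan_out = []
    for bit in pattern:
        scan_out.append(chain[-1])            # last cell drives scan-out
        chain = [bit] + chain[:-1]            # shift by one position
        if faulty_cell is not None and near_threshold_drive:
            # Under near-threshold drive, the defect forces the faulty cell's
            # node low regardless of the value being shifted through it.
            chain[faulty_cell] = 0
    return scan_out

expected = shift_chain(FLUSH_PATTERN, near_threshold_drive=False)
observed = shift_chain(FLUSH_PATTERN, near_threshold_drive=True,
                       faulty_cell=FAULTY_CELL)

print("expected scan-out:", expected)
print("observed scan-out:", observed)
print("fault detected:   ", expected != observed)
```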