Style transfer, which renders the style of one image onto a content target, has recently become a research area that attracts a lot of attention. Extensive research on style transfer has aimed at speeding up processing or ge...
Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with OpenAI's creation of ChatGPT, a Generative Pre-trained Transformer. ChatGPT uses deep learning techniq...
Battery energy storage systems (BESS) enable many applications for photovoltaic (PV) equipped nano-grids. Stored excess energy is utilized for energy arbitrage, demand response, backup during blackouts, and peak shaving. This technology has helped utility service providers deal with the duck curve effect and the intermittency of renewable energy systems. In this paper, we investigate the benefits of using energy storage systems in PV nano-grids for residential sectors and evaluate the relationship between battery presence and system performance. We also analyze the effects of battery degradation on energy independence and self-sufficiency and investigate the relationship between simple payback time and state of health (SoH) levels of nano-grid components. Additionally, we propose a simple rule-based energy supervision strategy and evaluate its performance under various forecasting error levels, highlighting the importance of high-performance forecasting methods for proper energy management systems. Our contributions will help select and optimize PV-battery solutions and energy management algorithms to achieve the highest benefit and minimize investment risks.
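To make the idea of a rule-based energy supervision strategy concrete, here is a minimal Python sketch of one such rule set; the thresholds, time step, and battery parameters are illustrative assumptions, not the paper's actual values or algorithm.

```python
# Minimal sketch of a rule-based energy supervision strategy for a
# PV-battery nano-grid. All parameters are illustrative assumptions.

def supervise_step(pv_kw, load_kw, soc, *,
                   capacity_kwh=10.0, soc_min=0.1, soc_max=0.95,
                   max_batt_kw=3.0, dt_h=0.25):
    """Decide battery and grid power for one time step.

    Returns (batt_kw, grid_kw, new_soc); batt_kw > 0 means charging,
    grid_kw > 0 means importing from the grid.
    """
    surplus_kw = pv_kw - load_kw
    if surplus_kw >= 0:
        # Excess PV: charge the battery first, export the remainder.
        headroom_kw = (soc_max - soc) * capacity_kwh / dt_h
        batt_kw = min(surplus_kw, max_batt_kw, headroom_kw)
        grid_kw = -(surplus_kw - batt_kw)          # negative = export
    else:
        # Deficit: discharge the battery first, import the remainder.
        available_kw = (soc - soc_min) * capacity_kwh / dt_h
        batt_kw = -min(-surplus_kw, max_batt_kw, available_kw)
        grid_kw = -surplus_kw + batt_kw            # remaining import
    new_soc = soc + batt_kw * dt_h / capacity_kwh
    return batt_kw, grid_kw, new_soc
```

A forecasting-aware variant would replace the instantaneous surplus with predicted PV and load profiles, which is where the forecasting error levels studied in the paper would enter the evaluation.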
An independent industrial system may transform into a connected network through the assistance of the Industrial Internet of Things (IIoT). The sensors deployed in the IIoT maintain surveillance of the industrial machinery and equipment. As a result, safety and reliability emerge as the primary concerns in IIoT. This presents a variety of well-known and growing issues related to the industrial system. IIoT devices are exposed to a wide range of malware, threats, and assaults. To protect IIoT devices from the effects of malware, effective protection plans must be implemented. However, adequate security mechanisms cannot always be incorporated into IIoT devices with limited resources. It is essential to ensure the accuracy and dependability of information gathered by IIoT devices, since decisions taken with incomplete or inaccurate data might be devastating. To overcome these difficulties, deep learning combined with reinforcement learning for complex decision-making in industrial applications is developed in this research work. In this developed model, Adaptive Deep Reinforcement Learning (ADRL)-based resource management is performed to reduce the operation cost associated with IIoT deployments. Energy efficiency is essential in the IIoT ecosystem, particularly for devices that run on batteries. Through dynamic resource allocation based on workload needs and energy limits, ADRL-based resource management optimizes energy usage. The reliability of the designed model is enhanced by fine-tuning the DRL parameters using the Ship Rescue Optimization (SRO) algorithm. Thus, ADRL-based resource management systems make real-time decisions based on current environmental conditions and system requirements. This helps IIoT systems react quickly to changing demands and optimize resource allocation. Finally, an experimental analysis is performed to find the success rate of the developed resource management system via various metrics. Throughout the validation, the statistical analysis of the...
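The abstract does not spell out the ADRL architecture or the SRO tuning; as a generic illustration of reinforcement-learning-based resource allocation under an energy limit, here is a tabular Q-learning sketch in Python (the paper's method is deep RL; the state/action spaces, reward, and budget below are assumptions for illustration only).

```python
# Generic sketch of RL-based resource management in the spirit of ADRL.
# Tabular Q-learning stands in for the deep network; all quantities
# (workload levels, allocation actions, energy budget) are assumed.
import random

LEVELS = [0.2, 0.5, 0.9]                     # discretized workload levels
ACTIONS = [0.25, 0.5, 0.75, 1.0]             # fraction of resources to allocate
ENERGY_BUDGET = 0.8                          # assumed per-step energy limit

def reward(workload, alloc):
    served = min(workload, alloc)            # demand actually served
    overuse = max(0.0, alloc - ENERGY_BUDGET)
    return served - 0.5 * overuse            # penalize exceeding the budget

q = {(s, a): 0.0 for s in range(len(LEVELS)) for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.1, 0.9, 0.1

state = random.randrange(len(LEVELS))
for _ in range(10_000):
    if random.random() < eps:                # epsilon-greedy exploration
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
    r = reward(LEVELS[state], ACTIONS[a])
    next_state = random.randrange(len(LEVELS))   # next workload arrives
    best_next = max(q[(next_state, i)] for i in range(len(ACTIONS)))
    q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
    state = next_state
```

Replacing the Q-table with a neural network approximator would give the "deep" part of ADRL; the SRO-based hyperparameter tuning the abstract mentions would then operate on that network's parameters.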
Accurate forecasting of long-term time series has important applications for decision making and planning. However, it remains challenging to capture long-term dependencies in time series data. To better extract l...
Reducing radiation doses benefits patients; however, the resultant low-dose computed tomography (LDCT) images often suffer from clinically unacceptable noise and artifacts. While deep learning (DL) shows promise in LD...
ISBN (digital): 9798350359312
ISBN (print): 9798350359329
The task of dialogue summarization involves distilling a given dialogue into a concise and coherent summary. However, discrepancies in language style between dialogues and summaries, scattered crucial information, incomplete utterances with ellipsis, and coreferences bring unique challenges to dialogue summarization. To tackle these challenges, we present multiple strategies in this study to extract crucial information at varying levels of granularity in dialogue, from word-level and utterance-level semantics. This crucial information is used as valuable annotations on the dialogue text to train the model to recognize and utilize key information during the training phase, and to enhance the model's ability to identify crucial information during inference to generate better dialogue summaries. Experimental results on the SAMSum and DialogSum datasets show that our method outperforms strong baseline models in terms of both ROUGE and BERTScore metrics. We corroborate these improvements through human evaluation.
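As a rough illustration of annotating dialogue text with crucial information before summarization, here is a small Python sketch; the marker tokens and the way key utterances/words are selected are assumptions, not the paper's actual annotation scheme.

```python
# Illustrative sketch: inject word-level and utterance-level annotations
# into a dialogue before feeding it to a summarization model. The marker
# tokens and selection of "crucial" items are assumed, not the paper's.
KEY_UTT, KEY_WORD = "<key_utt>", "<kw>"

def annotate(dialogue, key_utterances, key_words):
    """dialogue: list of (speaker, utterance) pairs."""
    lines = []
    for i, (speaker, utt) in enumerate(dialogue):
        for w in key_words:                       # word-level annotation
            utt = utt.replace(w, f"{KEY_WORD} {w}")
        prefix = f"{KEY_UTT} " if i in key_utterances else ""
        lines.append(f"{prefix}{speaker}: {utt}")
    return "\n".join(lines)

dialogue = [("Amanda", "Ask Larry, he called her last time we were at the park."),
            ("Hannah", "I'd rather you texted him.")]
print(annotate(dialogue, key_utterances={0}, key_words=["Larry"]))
```

The annotated text would then replace the raw dialogue as input when fine-tuning a sequence-to-sequence summarizer, so the model learns to attend to the marked spans.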
This paper proposes a novel bi-orthogonal projection learning (BOPL) framework for dimensionality reduction (DR) methods, which extends existing DR to a more flexible, robust, and sparse embedding framework. Unlike conventional methods that use only one kind of projection for learning in DR, the proposed BOPL introduces two kinds of projection, which are orthogonal and give the true projection the freedom for more accurate data transformation in subspace learning. Inspired by the observation that the two projections share many similar data structures, the projections are expected to preserve the similarity structure of the data through two different reconstruction schemes. The proposed method can also handle data corrupted by noise, since a sparse term is employed to compensate for the noise during DR. Several novel unsupervised DR methods (i.e., BOPL_PCA, BOPL_NPE, and BOPL_LPP) are derived from the proposed framework. The results of experiments on natural and synthetic data sets demonstrate that the proposed methods outperform existing well-established DR methods.
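The abstract does not give the objective function; as a hedged sketch, a bi-orthogonal formulation consistent with this description might take a form like the following, where the data $X$ is reconstructed through two orthogonal projections and a sparse term absorbs the noise (this is an assumed form, not the paper's actual formula).

```latex
% Hedged sketch of a bi-orthogonal objective consistent with the abstract:
% two orthogonal projections P, Q reconstruct the data X, and a sparse
% term E compensates for noise. An assumed form, not the paper's formula.
\min_{P,\,Q,\,E}\;
  \bigl\| X - P Q^{\top} X - E \bigr\|_F^2 \;+\; \lambda\,\| E \|_1
\quad \text{s.t.}\quad P^{\top} P = I,\ \ Q^{\top} Q = I .
```

Plugging different similarity-preserving graphs into such a framework would plausibly yield the PCA-, NPE-, and LPP-style variants the abstract names.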
The working principle of the DFC compiler is introduced in this article. DFC is a grammatical extension of standard C with a special DF function that describes the dependences of a computing DAG. The DFC compiler, dfcc, ...
Federated Learning (FL) is a distributed machine learning paradigm that allows clients to train models on their data while preserving their privacy. FL algorithms, such as Federated Averaging (FedAvg) and its variants...
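Federated Averaging itself is standard; as context for this record, here is a minimal numpy sketch of one FedAvg round, with the server averaging client updates weighted by data size (the linear model and local SGD step are illustrative, not this paper's setup).

```python
# Minimal sketch of one Federated Averaging (FedAvg) round: each client
# trains locally, and the server averages the updated weights, weighted
# by client data size. Model, data, and local training are illustrative.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local least-squares gradient descent (stand-in
    for each client's private training)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """clients: list of (X, y) tuples, one per client; raw data never
    leaves the client, only the updated weights are sent back."""
    n_total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_local = local_update(w_global.copy(), X, y)
        w_new += (len(y) / n_total) * w_local   # data-size weighted average
    return w_new

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```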