Thyroid nodules, a common disorder of the endocrine system, require accurate segmentation in ultrasound images for effective diagnosis and treatment. However, achieving precise segmentation remains a challenge due to various factors, including scattering noise, low contrast, and limited resolution in ultrasound images. While existing segmentation models have made progress, they still suffer from several limitations, such as high error rates, low generalizability, overfitting, and limited feature-learning capability. To address these challenges, this paper proposes a Multi-level Relation Transformer-based U-Net (MLRT-UNet) to improve thyroid nodule segmentation. The MLRT-UNet leverages a novel Relation Transformer, which processes images at multiple scales, overcoming the limitations of traditional encoding methods. This transformer integrates both local and global features effectively through self-attention and cross-attention units, capturing intricate relationships within the images. The approach also introduces a Co-operative Transformer Fusion (CTF) module to combine multi-scale features from different encoding layers, enhancing the model's ability to capture complex patterns in the images. Additionally, the Relation Transformer block strengthens long-distance dependencies during the decoding process, improving segmentation accuracy. Experimental results show that the MLRT-UNet achieves high segmentation accuracy, reaching 98.2% on the Digital Database Thyroid Image (DDT) dataset, 97.8% on the Thyroid Nodule 3493 (TG3K) dataset, and 98.2% on the Thyroid Nodule 3K (TN3K) dataset. These findings demonstrate that the proposed method significantly enhances the accuracy of thyroid nodule segmentation, addressing the limitations of existing models.
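The self-attention and cross-attention fusion the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, tensor shapes, and the residual-style fusion below are illustrative assumptions about how local (fine-scale) tokens might attend to global (coarse-scale) tokens.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def fuse_scales(local_feats, global_feats):
    # Self-attention within each scale, then cross-attention letting the
    # local tokens query the global tokens (a stand-in for the CTF idea).
    local_sa = attention(local_feats, local_feats, local_feats)
    global_sa = attention(global_feats, global_feats, global_feats)
    cross = attention(local_sa, global_sa, global_sa)
    return local_sa + cross  # residual-style fusion of the two scales

rng = np.random.default_rng(0)
local = rng.normal(size=(16, 32))  # 16 tokens from a fine encoding scale
glob = rng.normal(size=(4, 32))    # 4 tokens from a coarse encoding scale
fused = fuse_scales(local, glob)
print(fused.shape)  # (16, 32)
```

The key property sketched here is that the fused output keeps the fine-scale token count while mixing in coarse-scale context, which is what lets a decoder recover both boundary detail and global shape.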
This study proposes DTL-MD, a malicious code detection model based on deep transfer learning, which aims to improve on the detection accuracy of existing methods for complex malicious code and under data scarcity. In the feature...
Author: Zjavka, Ladislav
Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VŠB-Technical University of Ostrava, 17. Listopadu 15/2172, Ostrava, Czech Republic
Photovoltaic (PV) power is generated by two common types of solar components that are primarily affected by fluctuations and development in cloud structures as a result of uncertain and chaotic processes. Local PV forecasting is unavoidable in the supply and load planning necessary for integrating smart systems into electrical grids. Intra-day or day-ahead modelling of weather patterns based on Artificial Intelligence (AI) allows one to refine the available 24-hour cloudiness forecast or predict PV production at a particular plant location during the day. AI usually achieves adequate prediction quality over shorter horizons, using historical meteorological and PV record series, as compared to Numerical Weather Prediction (NWP) systems. NWP models are produced every 6 h to simulate the grid motion of local cloudiness; their output is additionally delayed and usually provided at a rough scale of less operational applicability. The Differential Neural Network (DNN) is based on a newly developed neurocomputing strategy that allows the representation of complex weather patterns analogously to NWP. The DNN parses the n-variable linear Partial Differential Equation (PDE), which describes the ground-level patterns, into sub-PDE modules of a determined order at each node. Their derivatives are substituted by Laplace transforms and solved using adapted inverse operations of Operational Calculus (OC). The DNN fuses OC mathematics with neural computing in evolved 2-input node structures to form sum modules of selected PDEs, added step by step to the expanded composite model. The multi-stage AI 1…9-h and one-stage 24-h models were evolved using spatio-temporal data in pre-identified daily learning sequences, according to the applied input-output data delay, to predict the Clear Sky Index (CSI). The prediction results of both statistical schemes were evaluated to assess the performance of the AI models. Intra-day models obtain slightly better prediction accuracy in average errors compared to those applied in the second-day-ahead
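The prediction target mentioned above, the Clear Sky Index (CSI), is conventionally the ratio of measured output (or irradiance) to a clear-sky reference. A minimal sketch, with illustrative numbers and a night-time guard that is our assumption rather than anything from the paper:

```python
def clear_sky_index(measured_w, clear_sky_w):
    # CSI = P_measured / P_clear_sky; guard against division by ~0
    # (e.g. at night, when the clear-sky reference vanishes)
    if clear_sky_w < 1e-6:
        return 0.0
    return measured_w / clear_sky_w

# e.g. 420 W measured against a 600 W clear-sky estimate
print(round(clear_sky_index(420.0, 600.0), 2))  # 0.7
```

Forecasting CSI instead of raw power removes the deterministic diurnal cycle, so the model only has to learn the cloud-driven residual.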
The Internet of Things (IoT) and edge-assisted networking infrastructures are capable of bringing data processing and accessibility services locally to the respective edge rather than to a centralized module. These infrastructures are very effective in providing fast responses to the queries of requesting modules, but their distributed nature has introduced other problems, such as security and privacy. To address these problems, various security-assisted communication mechanisms have been developed to safeguard every active module, i.e., devices and edges, from every possible vulnerability in the IoT. However, these methodologies have neglected one of the critical issues: the prediction of fraudulent devices, i.e., adversaries, preferably as early as possible in the IoT. In this paper, a hybrid communication mechanism is presented in which a Hidden Markov Model (HMM) predicts the legitimacy of the requesting device (both source and destination), and the Advanced Encryption Standard (AES) safeguards the reliability of the transmitted data over a shared communication medium, preferably through a secret shared key and timestamp information. A device becomes trusted if it has passed both evaluation levels, i.e., HMM and message decryption, within a stipulated time interval. The proposed hybrid, along with existing state-of-the-art approaches, has been simulated in a realistic IoT environment to verify the security measures. These evaluations were carried out in the presence of intruders capable of launching various attacks simultaneously, such as man-in-the-middle, device impersonation, and masquerading attacks. Moreover, the proposed approach has proven more effective than existing state-of-the-art approaches due to its exceptional performance in communication, processing, and storage overheads, i.e., 13%, 19%, and 16%, respectively. Finally, the proposed hybrid approach is proven robust against well-known security attacks
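The HMM legitimacy check described above can be sketched with the standard forward algorithm. The two states, the observation alphabet, and every probability below are toy values of ours, not parameters from the paper; the point is only how a posterior over "legitimate vs. adversarial" is obtained from a device's observed behaviour sequence.

```python
import numpy as np

# Toy two-state HMM: state 0 = legitimate device, state 1 = adversarial.
# Observations: 0 = normal request pattern, 1 = anomalous request pattern.
A = np.array([[0.9, 0.1],   # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # emission probabilities P(obs | state)
              [0.3, 0.7]])
pi = np.array([0.6, 0.4])   # initial state distribution

def forward(obs):
    # Standard HMM forward pass; returns P(state | full observation sequence).
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

posterior = forward([0, 0, 1, 0, 0])  # mostly normal behaviour
legit = posterior[0] > 0.5            # trust only if 'legitimate' is more likely
print(legit)  # True with these toy numbers
```

In the paper's scheme this verdict is only the first gate; the device must also decrypt the AES-protected challenge within the stipulated time before it is trusted.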
Permanent magnet (PM) Vernier machines enhance torque density and decrease cogging torque compared to conventional permanent magnet synchronous motors. This paper presents a novel fractional-slot H-shaped PM Vernie...
Effective real-time monitoring and analysis of distributed grids necessitate the use of synchro-waveform measurements, which capture almost all high-frequency disturbances and transient phenomena. However, due to limi...
Cloud computing technology provides various computing resources on demand to users on a pay-per-use basis. However, adoption of the technology is hindered by confidentiality and privacy issues. Access control mechanisms ...
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies by collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or IIoT, rather than exploring their integration. Therefore, to address the gap, this article provides a comprehensive survey on the advances and integration of data science with the Intelligent IoT (IIoT) system by classifying the existing IoT-based data science techniques and presenting a summary of various characteristics. The paper analyzes the data science or big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
This paper proposes a Poor and Rich Squirrel Algorithm (PRSA)-based Deep Maxout network to detect fraudulent transactions in the credit card system. Initially, input transaction data is passed to the data transformation...
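The "Deep Maxout" building block named above has a standard definition: each output unit takes the maximum over k affine pieces of the input. A minimal sketch, with arbitrary layer sizes chosen purely for illustration (the paper's architecture and the PRSA optimizer are not reproduced here):

```python
import numpy as np

def maxout(x, W, b):
    # Maxout unit: output_o = max_j (x @ W[j] + b[j])_o
    # W has shape (k, d_in, d_out); b has shape (k, d_out)
    pieces = np.einsum('i,kio->ko', x, W) + b
    return pieces.max(axis=0)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4, 2))  # k=3 affine pieces, 4 inputs, 2 outputs
b = rng.normal(size=(3, 2))
x = rng.normal(size=4)
print(maxout(x, W, b).shape)  # (2,)
```

Because the activation is a max over learned linear pieces, a maxout layer can approximate arbitrary convex activations, which is one reason it is used in deep fraud-detection classifiers.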
Coping with noise in computing is an important problem to consider in large systems. With applications in fault tolerance (Hastad et al., 1987; Pease et al., 1980; Pippenger et al., 1991), noisy sorting (Shah and Wainwright, 2018; Agarwal et al., 2017; Falahatgar et al., 2017; Heckel et al., 2019; Wang et al., 2024a; Gu and Xu, 2023; Kunisky et al., 2024), and noisy searching (Berlekamp, 1964; Horstein, 1963; Burnashev and Zigangirov, 1974; Pelc, 1989; Karp and Kleinberg, 2007), among many others, the goal is to devise algorithms with the minimum number of queries that are robust enough to detect and correct the errors that can happen during the computation. In this work, we consider the noisy computing of the threshold-k function. For n Boolean variables x = (x_1, ..., x_n) ∈ {0, 1}^n, the threshold-k function TH_k(·) computes whether the number of 1's in x is at least k or not, i.e., TH_k(x) = 1 if x_1 + ... + x_n ≥ k, and TH_k(x) = 0 otherwise. The noisy queries correspond to noisy readings of the bits: at each time step, the agent queries one of the bits, and with probability p, the wrong value of the bit is returned. It is assumed that the constant p ∈ (0, 1/2) is known to the agent. Our goal is to characterize the optimal query complexity for computing the TH_k function with error probability at most δ. This model for noisy computation of the TH_k function has been studied by Feige et al. (1994), where the order of the optimal query complexity is established; however, the exact tight characterization of the optimal number of queries is still open. In this paper, our main contribution is tightening this gap by providing new upper and lower bounds for the computation of the TH_k function, which simultaneously improve the existing upper and lower bounds. The main result of this paper can be stated as follows: for any 1 ≤ k ≤ n, there exists an algorithm that computes the TH_k function with an error probability at most δ = o(1), and the algorithm uses at most (Equation presented) queries in expectation. Here we define m (Eq
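The noisy-query model above can be illustrated with the simplest baseline: repeat each bit query several times, take a per-bit majority vote, and then compare the count of 1's to k. This is only a sketch of the model, not the paper's (more refined, query-optimal) algorithm; the repetition count is an arbitrary choice.

```python
import random

def noisy_query(bit, p, rng):
    # A single noisy reading: returns the wrong value with probability p
    return bit ^ (rng.random() < p)

def noisy_threshold_k(x, k, p, reps, rng):
    # Baseline: estimate each bit by majority over `reps` noisy readings,
    # then evaluate TH_k on the estimated bits.
    ones = 0
    for bit in x:
        votes = sum(noisy_query(bit, p, rng) for _ in range(reps))
        ones += votes * 2 > reps  # per-bit majority vote
    return ones >= k

rng = random.Random(42)
x = [1, 0, 1, 1, 0, 0, 1, 0]  # four 1's
print(noisy_threshold_k(x, 3, 0.1, 31, rng))  # True (per-bit majorities are almost surely correct here)
```

With reps = Θ(log(n/δ)) a Chernoff bound makes every majority correct with probability 1 − δ, giving the classical O(n log(n/δ)) upper bound; the paper's contribution is tightening the constants beyond what this naive repetition achieves.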