Blockchain distributed ledger technology is becoming increasingly prominent, and its decentralization, anonymity, and tamper-evident features have been widely recognized. These excellent technical fe...
ISBN (digital): 9798350365887
ISBN (print): 9798350365894
With the increasing prevalence of e-commerce platforms, understanding customer sentiments expressed in product reviews is crucial for assessing platform and product performance. However, traditional sentiment analysis (SA) models often struggle with low-resource languages such as Turkish, particularly due to the language's agglutinative nature and morphological complexity. To address this, several public datasets are now available online. However, these datasets are often annotated using rule-based methods, resulting in poor accuracy for Machine Learning (ML) models. This study addresses these challenges by employing two large Turkish datasets, TRSAv1 and VSCR, and improving their sentiment annotations through advanced Large Language Models (LLMs) such as ChatGPT4o-mini. The reannotation significantly enhanced the performance of traditional ML models (Logistic Regression, Naive Bayes, Random Forest, and Support Vector Machine), especially in the neutral and negative sentiment classes, as evidenced by increased precision, recall, and F1-scores. The findings underscore the importance of thorough data preparation and demonstrate that utilizing LLMs for sentiment annotation in Turkish can lead to more reliable SA, providing a stronger foundation for developing advanced SA models tailored to low-resource languages. This study is the first to employ a recent LLM for relabelling Turkish customer reviews, highlighting the potential for improved accuracy in SA for low-resource languages.
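A minimal sketch of the pipeline this abstract describes: relabel reviews with an LLM, then train a classical classifier on the new labels. The prompt wording, the "gpt-4o-mini" model id, and the TF-IDF/Logistic Regression hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: LLM reannotation followed by a TF-IDF + Logistic Regression baseline.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def llm_label(review_text: str) -> str:
    """Ask the LLM for a single sentiment label: positive / neutral / negative."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model id
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the Turkish product review. "
                        "Answer with exactly one word: positive, neutral, or negative."},
            {"role": "user", "content": review_text},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

def train_baseline(reviews, labels):
    """Train one of the classical models (here Logistic Regression) on re-annotated labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        reviews, labels, test_size=0.2, stratify=labels, random_state=42)
    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_tr), y_tr)
    print(classification_report(y_te, clf.predict(vec.transform(X_te))))
```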
This research introduces a novel Probabilistic Graph Modeling-based Safety Classifier Algorithm designed to classify road safety in smart transportation systems. Leveraging a combination of numerica...
ISBN (digital): 9798350383508
ISBN (print): 9798350383515
Incentive mechanisms in Spatial Crowdsourcing (SC) have been widely studied as they provide an effective way to motivate mobile workers to perform spatial tasks. Yet, most existing mechanisms only involve single tasks, neglecting the complementarity and substitutability among tasks, which limits their effectiveness in practical cases. Motivated by this, we consider task bundles for incentive mechanism design and closely analyze the mutual exclusion effect that arises with them. We then develop a combinatorial incentive mechanism with three key policies: in the offline case, we propose a combinatorial assignment policy to address the conflict between mutual exclusion and assignment efficiency; we next study the conflict between mutual exclusion and truthfulness, and build a combinatorial pricing policy for paying winners that yields both incentive compatibility and individual rationality; in the online case with unknown workers' utilities, we present an online combinatorial assignment policy that balances the exploration-exploitation trade-off under the mutual exclusion constraints. Through theoretical analysis and numerical simulations on real-world mobile networking datasets, we demonstrate the effectiveness of the proposed mechanism.
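As a rough illustration of the mutual-exclusion constraints named above (not the paper's mechanism), the sketch below greedily selects bundle bids so that each worker wins at most one bundle and no task appears in two winning bundles. The bid structure and the value-minus-cost ranking are assumptions; the paper's pricing policy (which provides truthfulness) would be layered on top of an assignment rule like this.

```python
# Greedy, mutual-exclusion-aware bundle assignment -- illustrative only.
from dataclasses import dataclass

@dataclass
class Bid:
    worker: str
    bundle: frozenset   # set of task ids in the bundle
    value: float        # platform's value for this bundle
    cost: float         # worker's declared cost

def greedy_assignment(bids):
    """Pick bids by descending surplus (value - cost), skipping conflicting ones."""
    chosen, used_workers, used_tasks = [], set(), set()
    for bid in sorted(bids, key=lambda b: b.value - b.cost, reverse=True):
        if bid.value <= bid.cost:
            break                          # no positive surplus remains
        if bid.worker in used_workers:
            continue                       # worker already won a bundle
        if bid.bundle & used_tasks:
            continue                       # bundle overlaps an assigned task
        chosen.append(bid)
        used_workers.add(bid.worker)
        used_tasks |= bid.bundle
    return chosen
```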
In recent years, deep learning methods have shown promising results in Single Image Deraining (SID). However, these methods still suffer from unsatisfactory residual rain streaks, primarily due to the absence of embedded image priors and limitations in modeling capacity. In this paper, we propose a joint High-Frequency Channel and Fourier Frequency Domain Guided Network (FPGNet) to remove complex rain streaks while recovering clearer background details. Specifically, FPGNet comprises two core components: the High-frequency Guidance Module (HFGM) and the Fourier-based Multi-scale Feature Extraction Module (FMFEM). The HFGM learns the natural distribution of rain streaks from the high-frequency channel to guide the network's modeling of rain streaks, while the FMFEM serves as the backbone of FPGNet and learns the rain layers in rainy images. Additionally, we introduce an Interactive Fusion Module (IFM) to enhance the integration of the high-frequency channel into the network. Extensive experiments demonstrate that our network outperforms state-of-the-art deraining methods in effectively removing rain streaks from rainy images.
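A rough sketch of the two ideas named in this abstract, not the FPGNet code: (1) a high-frequency channel obtained by subtracting a low-pass (blurred) version of the input, and (2) a Fourier-domain feature branch. The blur kernel size and the 1x1-conv spectrum mixing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def high_frequency_channel(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """x: (B, C, H, W). Return x minus its box-blurred version (a high-pass residual)."""
    blur = F.avg_pool2d(x, kernel_size=k, stride=1, padding=k // 2)
    return x - blur

class FourierBranch(nn.Module):
    """Mix the real/imaginary parts of the 2-D spectrum with a 1x1 conv, then invert."""
    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")           # complex, (B, C, H, W//2+1)
        feat = torch.cat([spec.real, spec.imag], dim=1)   # (B, 2C, H, W//2+1)
        feat = self.mix(feat)
        real, imag = feat.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
```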
ISBN (digital): 9798350354973
ISBN (print): 9798350354980
The integration of the contrastive learning paradigm into deep clustering has led to enhanced performance in image clustering. However, in existing research, samples from the target's class may still be treated as negative samples, which means that more semantic sample pairs cannot be constructed and intra-class compactness may be compromised. In this paper, we propose a new contrastive clustering method based on mining latent positive samples (CCLPS), which utilizes the nearest-neighbor relationships between samples to mine semantic positive samples within the same class. More specifically, by mining latent positive samples in both the feature and cluster spaces, CCLPS obtains more cluster-friendly feature representations of samples to enhance clustering performance. Samples mined from the feature space fuel the cluster-level contrastive learning task, while samples mined from the cluster space fuel the instance-level task. Three image datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) are employed to demonstrate the usefulness and effectiveness of the proposed method in image clustering; transfer learning based on mining latent positive samples is also discussed, and comparisons are made with 19 representative clustering methods on these datasets. Experimental results indicate that the proposed method can serve as a pre-training model for transfer learning and exhibits commendable feature modeling capabilities.
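A minimal illustration of latent-positive mining (not the CCLPS implementation): take each embedding's k nearest neighbours as extra positive pairs for a contrastive objective. The value of k, the temperature, and the simplified InfoNCE-style loss are assumptions.

```python
import torch
import torch.nn.functional as F

def mine_latent_positives(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """features: (N, D) L2-normalised embeddings. Return (N, k) indices of nearest neighbours."""
    sim = features @ features.t()                 # cosine similarities
    sim.fill_diagonal_(-float("inf"))             # exclude the sample itself
    return sim.topk(k, dim=1).indices

def positive_pair_loss(features: torch.Tensor, neighbours: torch.Tensor,
                       temperature: float = 0.5) -> torch.Tensor:
    """Pull each sample towards its mined neighbours (simplified InfoNCE-style term)."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature   # (N, N)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_log_prob = log_prob.gather(1, neighbours) # (N, k) log-probabilities of positives
    return -pos_log_prob.mean()
```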
Early surgical treatment of brain tumors is crucial in reducing patient mortality rates. However, brain tissue deformation (called brain shift) occurs during surgery, rendering pre-operative images invalid. As a c...
Ocean data collection via unmanned aerial vehicles (UAVs) has received widespread attention due to its flexibility and low cost. To further improve the efficiency of large-scale maritime data collection between UAVs and ocean surface buoys, optical wireless transmission is considered a promising technique because of its low latency and high bandwidth. However, waves and other disturbances in the complex oceanic environment cause drift and instability of the buoy, which deteriorate and can even interrupt the line-of-sight (LOS) optical transmission. To tackle these challenges, in this paper we propose a reliable data collection scheme based on a deep reinforcement learning (DRL) approach. We first model the optical channel and calculate the maximum transmission range that satisfies a pre-defined bit error rate (BER) to ensure the quality of service (QoS). Then, we formulate the data collection procedure as a Markov decision process (MDP) aiming to maximize the received signal intensity and minimize the energy consumption, in which the system uncertainty caused by oceanic environmental disturbances is taken into account. Finally, we propose a beam pointing adjustment algorithm based on the deep deterministic policy gradient (DDPG) approach to alleviate the performance deterioration and maintain stable LOS communication. Extensive simulation results demonstrate that the proposed scheme is effective and achieves reliable data collection via the optical links.
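A sketch of the reward shaping described in this abstract (maximize received optical intensity, minimize pointing-adjustment energy). The Gaussian-beam intensity model, the divergence angle, and the energy weight are assumptions, not the paper's exact formulation; a DDPG agent would be trained against a reward of this shape.

```python
import math

def received_intensity(pointing_error_rad: float, distance_m: float,
                       beam_divergence_rad: float = 1e-3, tx_power: float = 1.0) -> float:
    """Gaussian-beam style received intensity: decays with distance and pointing error."""
    beam_radius = beam_divergence_rad * distance_m        # beam footprint radius at the buoy
    offset = pointing_error_rad * distance_m              # lateral offset caused by mispointing
    return tx_power / (math.pi * beam_radius ** 2) * math.exp(-2 * (offset / beam_radius) ** 2)

def reward(pointing_error_rad: float, distance_m: float,
           adjust_energy_j: float, w_energy: float = 0.1) -> float:
    """DDPG reward: intensity term minus a weighted energy-consumption penalty."""
    return received_intensity(pointing_error_rad, distance_m) - w_energy * adjust_energy_j
```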
Remanufacturing of waste products cannot be separated from disassembly. A reasonable disassembly sequence can balance production on the disassembly line, increase work efficiency during remanufacturing, reduce capi...
Existing deep learning models for predicting multiple diseases primarily focus on analyzing individual diseases in isolation, lacking a unified system for multi-disease prediction. This project presents an approach to predict multiple diseases using a Flask API, with a specific focus on brain tumors, COVID-19, and pneumonia. The proposed work represents a significant contribution to the field of disease prediction, harnessing the power of deep learning algorithms and modern web application development. The primary focus is on disease prediction, with particular emphasis on ensuring accuracy and accessibility for end-users. The initial phase of this research involves data collection, where relevant datasets for the various diseases are gathered. These datasets serve as the foundation for training and validating the deep learning models. Two prominent deep learning algorithms, a Sequential CNN and VGG16, are employed for this purpose. These algorithms are chosen for their ability to handle complex data and recognize patterns within medical images and other health-related data. The core of the research involves training the deep learning models on the collected datasets. This step is crucial in enabling the models to learn and generalize from the provided data, ultimately enhancing their predictive capabilities. The models are modified to elevate their performance and accuracy. Following the training phase, the models are rigorously tested to evaluate their predictive accuracy. This assessment is vital in gauging the real-world applicability of the models in medical diagnosis. To make these powerful disease prediction models accessible to a wider audience, a front-end web application is developed.
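An illustrative sketch of the kind of Flask prediction endpoint described above; the model file names, class labels, image size, and route layout are assumptions, not the project's code.

```python
import io
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image as keras_image

app = Flask(__name__)

# Hypothetical saved models, one per disease (file names assumed).
MODELS = {
    "brain_tumor": load_model("brain_tumor_cnn.h5"),
    "covid19": load_model("covid19_vgg16.h5"),
    "pneumonia": load_model("pneumonia_vgg16.h5"),
}
CLASS_NAMES = {
    "brain_tumor": ["no_tumor", "tumor"],
    "covid19": ["covid", "normal"],
    "pneumonia": ["normal", "pneumonia"],
}

@app.route("/predict/<disease>", methods=["POST"])
def predict(disease):
    """Accept an uploaded image and return the predicted class with its confidence."""
    if disease not in MODELS:
        return jsonify({"error": "unknown disease"}), 404
    img_bytes = io.BytesIO(request.files["image"].read())
    img = keras_image.load_img(img_bytes, target_size=(224, 224))
    x = keras_image.img_to_array(img)[np.newaxis] / 255.0   # (1, 224, 224, 3), scaled
    probs = MODELS[disease].predict(x)[0]
    return jsonify({"prediction": CLASS_NAMES[disease][int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```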