Aspect-based sentiment analysis (ABSA) performs fine-grained analysis on text to determine a specific aspect category and a sentiment polarity. Recently, machine learning models have played a key role in ABSA tasks. In particular, transformer-based pre-trained models have achieved promising results in natural language processing tasks. Therefore, we propose a permutation-based XLNet fine-tuning model for aspect category detection and sentiment polarity detection. Our model learns bidirectional contexts via positional encoding and factorization order. We evaluate the proposed permutation language model on three ABSA datasets, namely, SentiHood, SemEval 2015, and SemEval 2016. Specifically, we study the ABSA tasks in a constrained system with a multi-class environment. Our results indicate that the proposed permutation language model achieves better results on these tasks.
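As a rough illustration of the factorization-order idea behind permutation language modeling, the toy sketch below builds the attention mask implied by one sampled factorization order (each position may attend only to positions that come no later than it in the order, so averaging over orders exposes bidirectional context). This is a sketch of the general XLNet idea, not the paper's implementation; the function name and mask convention are ours.

```python
import numpy as np

def permutation_attention_mask(seq_len, rng):
    """Attention mask for one sampled factorization order, XLNet-style.

    mask[i, j] == True means position i may attend to position j."""
    order = rng.permutation(seq_len)        # one factorization order z
    rank = np.empty(seq_len, dtype=int)     # rank[pos] = step at which pos appears in z
    rank[order] = np.arange(seq_len)
    # position i sees every position j that comes no later than i in the order
    mask = rank[:, None] >= rank[None, :]
    return order, mask
```

Over many sampled orders, each ordered pair of positions is visible roughly half the time, which is how the model captures bidirectional context without corrupting the input as masked language models do.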
We study the problem of robust multivariate polynomial regression: let p: Rn → R be an unknown n-variate polynomial of degree at most d in each variable. We are given as input a set of random samples (xi, yi) ∈ [−1,...
Sequential change-point detection for time series enables us to sequentially check the hypothesis that the model still holds as more and more data are observed. It is widely used for data monitoring in practice. Mea...
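The sequential testing idea can be sketched with a classic one-sided CUSUM detector for a mean shift: accumulate the drift-corrected excess over the in-control mean and raise an alarm when it crosses a threshold. This is a minimal textbook illustration, not the paper's method; the parameter names and the restart-after-alarm choice are ours.

```python
def cusum(xs, target_mean, drift, threshold):
    """One-sided CUSUM for sequential mean-shift detection.

    Alarms at each time the cumulative drift-corrected excess over
    target_mean crosses the threshold; the statistic restarts after
    an alarm."""
    s, alarms = 0.0, []
    for t, x in enumerate(xs):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            alarms.append(t)
            s = 0.0  # restart after an alarm
    return alarms
```

On a stream that jumps from mean 0 to mean 2, the statistic stays at zero before the change and climbs steadily after it, alarming a few samples past the change point.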
Large Language Models (LLMs) and Code-LLMs (CLLMs) have significantly improved code generation, but they frequently face difficulties when dealing with challenging and complex problems. Retrieval-Augmented Generation...
Notwithstanding economic progress, Nigeria continues to confront significant food security issues. This study investigates the root causes of this problem and attempts to develop predictive models to address hunger...
ISBN (Print): 9798331314385
Consider the domain of multiclass classification within the adversarial online setting. What is the price of relying on bandit feedback as opposed to full information? To what extent can an adaptive adversary amplify the loss compared to an oblivious one? To what extent can a randomized learner reduce the loss compared to a deterministic one? We study these questions in the mistake bound model and provide nearly tight answers. We demonstrate that the optimal mistake bound under bandit feedback is at most O(k) times higher than the optimal mistake bound in the full information case, where k represents the number of labels. This bound is tight and provides an answer to an open question previously posed and studied by Daniely and Helbertal ['13] and by Long ['17, '20], who focused on deterministic learners. Moreover, we present nearly optimal bounds of O(k) on the gap between randomized and deterministic learners, as well as between adaptive and oblivious adversaries in the bandit feedback setting. This stands in contrast to the full information scenario, where adaptive and oblivious adversaries are equivalent, and the gap in mistake bounds between randomized and deterministic learners is a constant multiplicative factor of 2. In addition, our results imply that in some cases the optimal randomized mistake bound is approximately the square root of its deterministic counterpart. Previous results show that this is essentially the smallest it can get. Some of our results are proved via a reduction to prediction with expert advice under bandit feedback, a problem of interest in its own right. For this problem, we provide a randomized algorithm which is nearly optimal in some scenarios.
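For context, prediction with expert advice under bandit feedback is commonly approached with exponential-weight methods such as EXP3. The sketch below is the generic textbook EXP3 with importance-weighted reward estimates (rewards assumed in [0, 1]); it is not the nearly optimal randomized algorithm from the abstract, and the parameter names are ours.

```python
import numpy as np

def exp3(n_arms, reward_fn, T, gamma, rng):
    """EXP3 for adversarial bandits: exponential weights with
    gamma-uniform exploration and importance-weighted reward
    estimates (only the pulled arm's reward is observed)."""
    w = np.ones(n_arms)
    choices = []
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=p)
        r = reward_fn(arm, t)
        # unbiased estimate r / p[arm] compensates for partial feedback
        w[arm] *= np.exp(gamma * (r / p[arm]) / n_arms)
        choices.append(int(arm))
    return w, choices
```

On a toy instance where one arm is clearly best, the weight of that arm dominates after a modest number of rounds, which is the randomized behavior that the deterministic-versus-randomized gaps in the abstract quantify.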
The ever-increasing fine-tuning cost of large-scale pre-trained models underscores the importance of dataset pruning, which aims to reduce dataset size while maintaining task performance. However, existing dataset p...
Overparameterized neural networks generalize well but are expensive to train. Ideally, one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple...
Operator-splitting methods are widely used to solve differential equations, especially those that arise from multi-scale or multi-physics models, because a monolithic (single-method) approach may be inefficient or eve...
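A minimal example of operator splitting is Strang splitting on a scalar linear ODE dy/dt = -(a + b)y, treating -a*y and -b*y as the two split operators and solving each exactly. Because these two operators commute, the composition here is exact; in general Strang splitting is second-order accurate. This is a generic illustration of the technique, not a method from the abstract.

```python
import math

def strang_step(y, a, b, dt):
    """One Strang splitting step for dy/dt = -(a + b) y:
    half step with operator A, full step with B, half step with A."""
    y = y * math.exp(-a * dt / 2)   # half step: dy/dt = -a y
    y = y * math.exp(-b * dt)       # full step: dy/dt = -b y
    y = y * math.exp(-a * dt / 2)   # half step: dy/dt = -a y
    return y
```

Composing many such steps reproduces exp(-(a + b) T) to rounding error in this commuting case, while for genuinely coupled multi-physics operators the same pattern trades a small splitting error for the ability to use a specialized solver per operator.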
Classifying malicious traffic in Wireless Sensor Networks (WSNs) is crucial for maintaining the network's security and dependability. Traditional security techniques are challenging to deploy in WSNs because they ...