In contemporary society, pets are increasingly regarded as integral family members, contributing significantly to human quality of life. The growing prevalence of dog ownership has concurrently escalated the economic ...
The proliferation of mobile applications in today's digital environment has revolutionized the way people interact with technology; their experiences are often reflected in reviews, providing a rich source of data...
Large Language Models (LLMs) have exhibited remarkable performance across various downstream tasks, but they may generate inaccurate or false information with a confident tone. One of the possible solutions is to empo...
The Lp-quantile has recently received growing attention in risk management, since it has desirable properties as a risk measure and generalizes two widely applied risk measures, Value-at-Risk and Expectile...
As world knowledge advances and new task schemas emerge, Continual Learning (CL) becomes essential for keeping Large Language Models (LLMs) current and addressing their shortcomings. This process typically involves co...
ISBN:
(Print) 9798331314385
Current adversarial attacks for multi-class classifiers choose the target class for a given input naively, based on the classifier's confidence levels for various target classes. We present a novel adversarial targeting method, MALT - Mesoscopic Almost Linearity Targeting, based on medium-scale almost-linearity assumptions. Our attack outperforms the current state-of-the-art attack, AutoAttack, on the standard benchmark datasets CIFAR-100 and ImageNet and across a variety of robust models. In particular, our attack is five times faster than AutoAttack while matching all of AutoAttack's successes and attacking additional samples that were previously out of reach. We then prove formally and demonstrate empirically that our targeting method, although inspired by linear predictors, also applies to standard non-linear models.
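As a rough illustration of almost-linearity-based targeting, the sketch below ranks candidate target classes by a first-order estimate of the perturbation needed to flip the prediction, i.e., the logit margin divided by the norm of its gradient. The scoring rule, the `model` interface, and the shapes are assumptions made for illustration, not the authors' implementation.

```python
import torch

def linearized_target_ranking(model, x, true_label, num_classes):
    # Rank candidate target classes by a first-order (locally linear) estimate
    # of the perturbation needed to close the logit gap; smaller = easier target.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)           # shape: (num_classes,)
    scores = {}
    for c in range(num_classes):
        if c == true_label:
            continue
        gap = logits[true_label] - logits[c]            # logit margin to overcome
        grad = torch.autograd.grad(gap, x, retain_graph=True)[0]
        # Under (almost) linearity, the distance to the true-vs-c decision
        # boundary is roughly the margin divided by the gradient norm.
        scores[c] = (gap / (grad.norm() + 1e-12)).item()
    return sorted(scores, key=scores.get)               # most promising targets first
```

Targets ranked this way can then be handed to any existing targeted attack, instead of ordering candidates purely by confidence.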
ISBN:
(Print) 9798331314385
Rigorously establishing the safety of black-box machine learning models with respect to critical risk measures is important for providing guarantees about model behavior. Recently, Bates et al. (JACM '24) introduced the notion of a risk-controlling prediction set (RCPS) for producing prediction sets from machine learning models with statistically guaranteed low risk. Our method extends this notion to the sequential setting, where we provide guarantees even when the data is collected adaptively, and ensures that the risk guarantee is anytime-valid, i.e., holds simultaneously at all time steps. Further, we propose a framework for constructing RCPSes for active labeling, i.e., allowing one to use a labeling policy that chooses whether to query the true label for each received data point while ensuring that the expected proportion of data points whose labels are queried stays below a predetermined label budget. We also describe how to use predictors (i.e., the machine learning model for which we provide risk control guarantees) to further improve the utility of our RCPSes by estimating the expected risk conditioned on the covariates. We characterize the optimal choices of labeling policy and predictor under a fixed label budget and show a regret result that relates the estimation error of the optimal labeling policy and predictor to the wealth process that underlies our RCPSes. Lastly, we present practical ways of formulating labeling policies and empirically show that our policies use fewer labels to reach higher utility than naive baseline labeling strategies on both simulated and real data.
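The sketch below shows one simple form a budget-constrained labeling policy of the kind described above might take: query the true label with a probability that grows with the predictor's estimated conditional risk, while keeping the running query rate near the budget. The rule, names, and parameters are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def should_query_label(risk_estimate, labels_used, points_seen, budget_rate, rng):
    # Hypothetical budget-constrained active-labeling rule: spend labels
    # preferentially on points the predictor estimates to be risky, while
    # keeping the running query rate at or below budget_rate.
    if labels_used >= budget_rate * (points_seen + 1):
        return False                                   # over budget so far, skip
    # Query probability increases with the estimated conditional risk in [0, 1].
    p_query = min(1.0, budget_rate * (1.0 + risk_estimate))
    return rng.random() < p_query

# Toy stream of 1000 points with stand-in risk estimates and a 20% label budget.
rng = np.random.default_rng(0)
labels_used = 0
for t in range(1000):
    risk = rng.random()                                # placeholder risk estimate
    if should_query_label(risk, labels_used, t, budget_rate=0.2, rng=rng):
        labels_used += 1
print(f"queried {labels_used} labels out of 1000")
```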
Gender bias creates inequalities in roles, expectations, and opportunities between males and females. When such biases are incorporated into artificial intelligence models, the corresponding technological solutions and products can further entrench these social biases. Herein, a new method for investigating the extent to which latent biases in text-based training data affect a language model is presented. Potential gender bias is identified by deriving the values assigned to male/female words via inverse operations from embedded representations back to the original words using the approximate inverse model explanation (AIME). In particular, AIME constructs approximate generalized inverse operators for black-box models. A biased embedded representation used in machine learning models as an internal representation of word/sentence vectors likely introduces bias into the overall prediction results of such models. The proposed method is applied to the OpenAI text-embedding-ada-002 large language model, which provides embedded representations, to determine the gender bias it contains. Experimental results show that the OpenAI text-embedding-ada-002 model is partially gender-biased owing to its training text data. These results are expected to (i) contribute to the development of effective measures preventing gender bias during the design and training of language models, (ii) promote the identification and mitigation of gender bias in future language models, and (iii) provide insights into the effects of language models and their limitations from technical, social, and cultural perspectives.
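As a loose numerical illustration of the inverse-operation idea, the sketch below maps an embedded representation back to per-word values using a Moore-Penrose pseudoinverse of an embedding matrix and compares the mean values assigned to male and female word lists. The matrices, index lists, and the use of a plain pseudoinverse are assumptions for illustration; AIME's actual approximate inverse operators may differ.

```python
import numpy as np

def gendered_attribution(E, representation, male_idx, female_idx):
    # E: (vocab_size, dim) embedding matrix; representation: (dim,) internal
    # vector behind a prediction. An approximate inverse operator maps the
    # representation back to a value for each word in the original vocabulary.
    E_pinv = np.linalg.pinv(E)                     # (dim, vocab_size)
    word_values = representation @ E_pinv          # (vocab_size,) value per word
    return word_values[male_idx].mean(), word_values[female_idx].mean()

# Toy example with random embeddings and hypothetical word index lists.
rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))
rep = rng.normal(size=16)
male_words, female_words = [1, 5, 9], [2, 6, 10]
m, f = gendered_attribution(E, rep, male_words, female_words)
print(f"mean value (male words) = {m:.3f}, (female words) = {f:.3f}")
```

A systematic gap between the two group means would be the kind of latent bias signal the method is designed to surface.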
Coronavirus disease (COVID-19) is a major pandemic disease that has already infected millions of people worldwide and affects many aspects, especially public health. There are many clinical techniques for the diagnosi...
In this paper, we propose a fractional-order financial risk system with one absolute function term. The Adomian decomposition method (ADM) is used to numerically solve the fractional-order financial risk system. Dynam...