Multi-distribution or collaborative learning involves learning a single predictor that works well across multiple data distributions, using samples from each during training. Recent research on multi-distribution lear...
Background: Cervical cancer is the fourth most frequent cancer in women worldwide. Even though cervical cancer deaths have decreased significantly in Western countries, low and middle-income countries account for near...
When there are outliers or heavy-tailed distributions in the data, traditional penalized least squares is no longer applicable. In addition, with the rapid development of science and technology, a great deal of data exhibiting high dimensionality, strong correlation, and redundancy has been generated in real life. It is therefore necessary to find an effective variable selection method that handles collinearity and is built on a robust estimator. This paper proposes a penalized M-estimation method based on a standard-error-adjusted adaptive elastic-net, which uses M-estimators and their corresponding standard errors as penalty weights. The consistency and asymptotic normality of this method are proved theoretically. For regularization in high-dimensional spaces, the authors use the multi-step adaptive elastic-net to first reduce the dimension to a relatively large scale that is smaller than the sample size, and then use the proposed method to select variables and estimate parameters. Finally, the authors carry out simulation studies and two real data analyses to examine the finite-sample performance of the proposed method. The results show that the proposed method has advantages over other commonly used methods.
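As a hedged illustration of the idea rather than the authors' implementation, the sketch below fits an initial Huber M-estimate, builds adaptive weights from coefficient-to-standard-error ratios, and solves a weighted elastic-net via column rescaling; the simulated data, tuning constants, and the rescaling shortcut (exact for the L1 term of the penalty, approximate for the L2 term) are all assumptions.

```python
# Minimal sketch of SE-adjusted adaptive elastic-net M-estimation
# (illustrative only; not the paper's code).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.r_[3.0, -2.0, 1.5, np.zeros(p - 3)]
y = X @ beta_true + rng.standard_t(df=3, size=n)      # heavy-tailed noise

# Step 1: initial Huber M-estimate and its standard errors.
rlm = sm.RLM(y, sm.add_constant(X), M=sm.robust.norms.HuberT()).fit()
b0, se = rlm.params[1:], rlm.bse[1:]

# Step 2: adaptive weights -- coefficients that are large relative to
# their standard error (a |t|-like statistic) are penalized less.
w = 1.0 / (np.abs(b0) / se + 1e-8)

# Step 3: weighted elastic net via column rescaling, with Huber loss so
# the fitting stage is also robust to outliers in the response.
Xw = X / w
enet = SGDRegressor(loss="huber", penalty="elasticnet", alpha=0.01,
                    l1_ratio=0.5, max_iter=5000, tol=1e-6,
                    random_state=0).fit(Xw, y)
beta_hat = enet.coef_ / w                              # back to original scale
print(np.round(beta_hat, 2))
```

The rescaling trick works because penalizing the coefficients of `X / w` uniformly is equivalent to penalizing the original coefficients with weights `w`, so well-supported coefficients survive selection.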
This paper presents Secure Orchestration, a novel framework designed to enforce rigorous security measures against the serious security concerns inherent in container orchestration platforms, especial...
Human activity recognition (HAR) techniques pick out and interpret human behaviors and actions by analyzing data gathered from various sensor devices. HAR aims to recognize and automatically categorize human activitie...
Detecting plagiarism in documents is a well-established task in natural language processing (NLP). Broadly, plagiarism detection is categorized into two types: (1) intrinsic, checking whether the whole document or all of its passages were written by a single author; (2) extrinsic, where a suspicious document is compared with a given set of source documents to identify sentences or phrases that appear in both. In the pursuit of advancing intrinsic plagiarism detection, this study addresses the critical challenge of intrinsic plagiarism detection in Urdu texts, a language with limited resources for comprehensive language models. Acknowledging the absence of sophisticated large language models (LLMs) tailored to Urdu, this study explores the application of various machine learning, deep learning, and language models in a novel framework. A set of 43 stylometry features at six granularity levels was meticulously curated, capturing linguistic patterns indicative of plagiarism. The selected models include traditional machine learning approaches such as logistic regression, decision trees, SVM, KNN, Naive Bayes, gradient boosting, and a voting classifier; deep learning approaches: GRU, BiLSTM, CNN, LSTM, and MLP; and large language models: BERT and GPT-2. This research systematically categorizes these features and evaluates their effectiveness, addressing the inherent challenges posed by the limited availability of Urdu-specific language models. Two distinct experiments were conducted to evaluate the impact of the proposed features on classification accuracy. In experiment one, the entire dataset was used for classification into intrinsic-plagiarized and non-plagiarized documents. Experiment two categorized the dataset into three types based on topics: moral lessons, national celebrities, and national events. Both experiments were thoroughly evaluated through a fivefold cross-validation analysis. The results show that the random forest classifier achieved an ex…
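A minimal hedged sketch of this evaluation setup (not the study's code) is given below: a handful of illustrative stylometric features stand in for the 43-feature set, placeholder documents stand in for the curated Urdu corpus, and a random forest is scored with fivefold cross-validation.

```python
# Illustrative stylometry + fivefold CV pipeline; features and data are
# placeholders, not the study's 43-feature set or Urdu corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def stylometry_features(text: str) -> np.ndarray:
    """A few word- and character-level stylometric measures."""
    words = text.split()
    sentences = [s for s in text.split("۔") if s.strip()]   # '۔' = Urdu full stop
    n_words = max(len(words), 1)
    return np.array([
        sum(len(w) for w in words) / n_words,                # average word length
        len(set(words)) / n_words,                           # type-token ratio
        n_words / max(len(sentences), 1),                    # words per sentence
        sum(c.isdigit() for c in text) / max(len(text), 1),  # digit rate
    ])

# Placeholder corpus: substitute the curated Urdu documents and labels.
documents = [f"doc {i} " * (5 + i % 7) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])                # 1 = plagiarized

X = np.vstack([stylometry_features(d) for d in documents])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5, scoring="accuracy")
print(f"fivefold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```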
The primary objective of fog computing is to minimize the reliance of IoT devices on the cloud by leveraging the resources of the fog network. Typically, IoT devices offload computation tasks to the fog to meet different task requirements such as latency in task execution, computation costs, etc. Selecting a fog node that meets task requirements is therefore a crucial challenge. To choose an optimal fog node, access to each node's resource-availability information is essential. Existing approaches often assume state availability or depend on a subset of state information to design mechanisms tailored to different task requirements. In this paper, OptiFog, a cluster-based fog computing architecture for acquiring state information followed by optimal fog node selection and task offloading, is proposed. Additionally, a continuous-time Markov chain (CTMC) based stochastic model for predicting the resource availability of fog nodes is proposed. This model avoids the need to frequently synchronize the resource-availability status of fog nodes while still maintaining up-to-date state information. Extensive simulation results show that OptiFog lowers task execution latency considerably and schedules almost all tasks at the fog layer, compared with the existing state of the art.
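As a hedged sketch of the prediction idea (the state space, rates, and horizon below are assumptions, not OptiFog's actual model), a fog node's occupancy can be modeled as a birth-death CTMC whose transient distribution predicts availability without polling the node:

```python
# Busy compute slots on a fog node as a birth-death CTMC: tasks arrive
# at rate lam, each running task completes at rate mu. Illustrative only.
import numpy as np
from scipy.linalg import expm

C, lam, mu = 4, 2.0, 1.0          # slots, arrival rate, per-task service rate
Q = np.zeros((C + 1, C + 1))      # generator matrix; state = busy slots
for s in range(C + 1):
    if s < C:
        Q[s, s + 1] = lam         # a new task occupies a slot
    if s > 0:
        Q[s, s - 1] = s * mu      # one of the s running tasks finishes
    Q[s, s] = -Q[s].sum()         # diagonal = negative total exit rate

p0 = np.zeros(C + 1)
p0[1] = 1.0                       # last synchronized state: 1 slot busy
t = 0.5                           # prediction horizon
pt = p0 @ expm(Q * t)             # transient distribution p(t) = p(0) e^{Qt}
print("P(at least one free slot at t=0.5):", round(pt[:C].sum(), 3))
```

The last synchronized state seeds the initial distribution, so fresh availability estimates come from a matrix exponential instead of frequent status messages.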
Generating cover photos from story text is a non-trivial challenge. Existing approaches focus on generating only images from a given text prompt. To the best of our knowledge, none of these approaches focuses on g...
Colorectal cancer is one of the most prevalent cancers in the world, which underscores the importance of early detection and treatment of precursor polyps to prevent progression to malignancy. Despite the pivotal role...
To efficiently estimate the central subspace in sufficient dimension reduction, response discretization via slicing its range is one of the most used methodologies when inverse regression-based methods are applied. However, existing slicing schemes are almost all ad hoc and not widely accepted; how to define data-driven schemes with certain optimal properties is a longstanding problem in this field, and it motivates the research described here. First, we introduce a likelihood-ratio-based framework for dimension reduction, subsuming the popularly used methods including the sliced inverse regression, the sliced average variance estimation, and the likelihood acquired directions. Second, we propose a regularized log likelihood-ratio criterion to obtain a data-driven slicing scheme and derive the asymptotic properties of the estimators. A simulation study is carried out to examine the performance of the proposed method and that of existing methods. A data set concerning concrete compressive strength is also analyzed for illustration and comparison.
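For context, the sketch below implements plain sliced inverse regression with the ad hoc equal-count slicing that the proposed data-driven criterion is meant to replace; it is a baseline illustration under an assumed single-index model, not the paper's method.

```python
# Sliced inverse regression (SIR) with ad hoc equal-count slicing.
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten: Z = Xc @ Sigma^{-1/2}, via the SVD of Xc / sqrt(n)
    _, s, Vt = np.linalg.svd(Xc / np.sqrt(n), full_matrices=False)
    inv_sqrt = Vt.T @ np.diag(1.0 / s) @ Vt      # Sigma^{-1/2}
    Z = Xc @ inv_sqrt
    # Ad hoc slicing: split the ordered responses into equal-count slices
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted covariance of the slice means of Z
    M = sum((len(idx) / n) * np.outer(Z[idx].mean(axis=0), Z[idx].mean(axis=0))
            for idx in slices)
    _, vecs = np.linalg.eigh(M)
    eta = vecs[:, ::-1][:, :n_dirs]              # leading eigenvectors
    beta = inv_sqrt @ eta                        # back to the X scale
    return beta / np.linalg.norm(beta, axis=0)   # unit norm, up to sign

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X @ np.array([1., -1., 0., 0., 0., 0.])) ** 3 + 0.5 * rng.normal(size=500)
print(sir_directions(X, y).ravel().round(2))     # ~ +/-(0.71, -0.71, 0, ...)
```

The number of slices here is fixed in advance, which is exactly the arbitrariness the regularized log likelihood-ratio criterion is designed to remove.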