Modern large-scale computing systems increasingly demand better connectivity indicators for reliability evaluation. However, as more processing units are rapidly incorporated into emerging computing systems, existing ...
1 Introduction On-device deep learning (DL) on mobile and embedded IoT devices drives various applications [1] like robotics image recognition [2] and drone swarm classification [3]. Efficient local data processing preserves privacy, enhances responsiveness, and saves energy. However, current on-device DL relies on predefined patterns, leading to accuracy and efficiency losses. It is difficult to provide feedback on data processing performance during the data acquisition stage, as processing typically occurs after data acquisition.
Detecting plagiarism in documents is a well-established task in natural language processing (NLP). Broadly, plagiarism detection is categorized into two types: (1) intrinsic, which checks whether the whole document or all of its passages have been written by a single author; and (2) extrinsic, where a suspicious document is compared with a given set of source documents to identify sentences or phrases that appear in both. In the pursuit of advancing intrinsic plagiarism detection, this study addresses the critical challenge of intrinsic plagiarism detection in Urdu texts, a language with limited resources for comprehensive language models. Acknowledging the absence of sophisticated large language models (LLMs) tailored for the Urdu language, this study explores the application of various machine learning, deep learning, and language models in a novel framework. A set of 43 stylometry features at six granularity levels was meticulously curated, capturing linguistic patterns indicative of plagiarism. The selected models include traditional machine learning approaches such as logistic regression, decision trees, SVM, KNN, Naive Bayes, gradient boosting, and a voting classifier; deep learning approaches: GRU, BiLSTM, CNN, LSTM, and MLP; and large language models: BERT and GPT-2. This research systematically categorizes these features and evaluates their effectiveness, addressing the inherent challenges posed by the limited availability of Urdu-specific language models. Two distinct experiments were conducted to evaluate the impact of the proposed features on classification accuracy. In experiment one, the entire dataset was utilized for classification into intrinsic plagiarized and non-plagiarized documents. Experiment two categorized the dataset into three types based on topics: moral lessons, national celebrities, and national events. Both experiments are thoroughly evaluated through a fivefold cross-validation analysis. The results show that the random forest classifier achieved an ex...
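For concreteness, the following Python sketch illustrates the general shape of such a feature-based pipeline: a few hand-crafted stylometric features are computed per passage and a classifier is scored with fivefold cross-validation, as in the study. The specific features, the random-forest choice, and the synthetic data are illustrative assumptions only; they stand in for the paper's 43 features at six granularity levels, its full model suite, and the real labelled Urdu corpus.

# Minimal sketch of a stylometric-feature pipeline with fivefold CV.
# The features and toy data below are illustrative, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def stylometric_features(text: str) -> list:
    """A few character-, word-, and sentence-level features for one passage."""
    words = text.split()
    sentences = [s for s in text.replace("؟", ".").split(".") if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        n_words / n_sents,                               # average sentence length (words)
        sum(len(w) for w in words) / n_words,            # average word length (characters)
        len(set(words)) / n_words,                       # type-token ratio
        sum(text.count(p) for p in "،.؛!؟") / n_words,   # punctuation density
    ]

# Stand-in data: in practice X holds stylometric_features() of each Urdu
# passage and y marks passages whose style deviates from the main author.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                # fivefold cross-validation
print("mean CV accuracy:", scores.mean())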
This paper presents Secure Orchestration, a novel framework designed to enforce rigorous security measures against the serious security concerns inherent in container orchestration platforms, especial...
Validating assertions before adding them to a knowledge graph is an essential part of its creation and maintenance. Due to the sheer size of knowledge graphs, automatic fact-checking approaches have been developed. Th...
Existing in-memory graph storage systems that rely on DRAM have scalability issues because of the limited capacity and volatile nature of DRAM. The emerging persistent memory (PMEM) offers us a chance to solve these i...
Hybrid memory systems composed of dynamic random access memory (DRAM) and non-volatile memory (NVM) often exploit page migration technologies to fully take advantage of the different memory media. However, previous proposals usually migrate data at a granularity of 4 KB pages, and thus waste memory bandwidth and DRAM space. In this paper, we propose Mocha, a non-hierarchical architecture that organizes DRAM and NVM in a flat address space physically, but manages them in a cache/memory hierarchy. Since the commercial NVM device, Intel Optane DC Persistent Memory Modules (DCPMM), actually accesses the physical media at a granularity of 256 bytes (an Optane block), we manage the DRAM cache at the 256-byte size to adapt to this feature of Optane. This design not only enables fine-grained data migration and management for the DRAM cache, but also avoids write amplification for Intel Optane DCPMM. We also create an Indirect Address Cache (IAC) in the Hybrid Memory Controller (HMC) and propose a reverse address mapping table in the DRAM to speed up address translation and cache replacement. Moreover, we exploit a utility-based caching mechanism to filter cold blocks in the NVM and further improve the efficiency of the DRAM cache. We implement Mocha in an architectural simulator. Experimental results show that Mocha can improve application performance by 8.2% on average (up to 24.6%), and reduce energy consumption by 6.9% and data migration traffic by 25.9% on average, compared with a typical hybrid memory architecture, HSCC.
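A minimal software model of the caching idea, assuming a 256-byte block size and a simple access-count admission filter, is sketched below in Python; the real design lives in the hybrid memory controller, and the table layout, threshold, and eviction policy here are illustrative assumptions rather than Mocha's actual IAC and reverse-mapping mechanisms.

# Sketch: a DRAM cache managed at 256-byte (Optane-block) granularity with a
# simple utility-style filter that only admits NVM blocks after repeated use.
# The admission threshold and naive eviction are assumptions for illustration.

BLOCK_SIZE = 256          # bytes, one Optane block
ADMIT_THRESHOLD = 2       # accesses before a cold NVM block is cached in DRAM

class FakeNVM:
    """Stand-in for the NVM side of the flat physical address space."""
    def read_block(self, block_id: int) -> bytes:
        return bytes(BLOCK_SIZE)

class FlatHybridMemory:
    def __init__(self, dram_blocks: int, nvm: FakeNVM):
        self.capacity = dram_blocks
        self.nvm = nvm
        self.dram_cache = {}      # block id -> cached 256-byte block
        self.touches = {}         # block id -> accesses while still only in NVM

    def read(self, addr: int) -> bytes:
        block_id = addr // BLOCK_SIZE
        if block_id in self.dram_cache:                 # DRAM cache hit
            return self.dram_cache[block_id]
        data = self.nvm.read_block(block_id)            # served from NVM
        self.touches[block_id] = self.touches.get(block_id, 0) + 1
        if self.touches[block_id] >= ADMIT_THRESHOLD:   # filter out cold blocks
            if len(self.dram_cache) >= self.capacity:
                self.dram_cache.pop(next(iter(self.dram_cache)))  # naive eviction
            self.dram_cache[block_id] = data
        return data

mem = FlatHybridMemory(dram_blocks=4, nvm=FakeNVM())
for _ in range(3):
    mem.read(0x1000)              # third read of the same block hits in DRAM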
Ensemble object detectors have demonstrated remarkable effectiveness in enhancing prediction accuracy and uncertainty quantification. However, their widespread adoption is hindered by significant computational and sto...
Autism spectrum disorder (ASD) affects 1 in 100 children globally. Early detection and intervention can enhance quality of life for individuals diagnosed with ASD. This research applies the support vector machine-recursive feature elimination (SVM-RFE) method to ASD classification using the phenotypic and Automated Anatomical Labeling (AAL) Brain Atlas datasets of the Autism Brain Imaging Data Exchange preprocessed dataset. The functional connectivity matrix (FCM) is computed for the AAL data, generating 6670 features representing pair-wise brain region activity. The SVM-RFE feature selection method was applied five times to the FCM data, determining the optimal number of features to be 750 for the best-performing support vector machine (SVM) model, corresponding to a dimensionality reduction of 88.76%. Pertinent phenotypic data features were manually selected and processed. Subsequently, five experiments were conducted, each representing a different combination of the features used for training and testing the linear SVM, deep neural network, one-dimensional convolutional neural network, and random forest machine learning models. These models are fine-tuned using grid search cross-validation (CV). The models are evaluated on various metrics using 5-fold CV. The most relevant brain regions from the optimal feature set are identified by ranking the SVM-RFE feature weights. The SVM-RFE approach achieved a state-of-the-art accuracy of 90.33% with the linear SVM model using the Data Processing Assistant for Resting-State Functional Magnetic Resonance Imaging pipeline. The SVM model's ability to rank the features by importance provides clarity into the factors contributing to the diagnosis. The thalamus right, rectus right, and temporal middle left AAL brain regions, among others, were identified as having the highest number of connections to other brain regions. These results highlight the importance of using traditional ML models fo...
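A minimal scikit-learn sketch of the SVM-RFE step is given below: a linear SVM recursively eliminates connectivity features down to 750 of the 6670 pairwise values (6670 being the number of region pairs for the 116 AAL regions), and the resulting model is scored with 5-fold CV. The random matrix is only a stand-in for the real ABIDE connectivity features, and the elimination step size is an assumption.

# Sketch: SVM-RFE selecting 750 of 6670 connectivity features, scored by 5-fold CV.
# Synthetic data stands in for the ABIDE functional connectivity matrices.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6670))      # 200 subjects x 6670 pairwise FCM features
y = rng.integers(0, 2, size=200)      # 1 = ASD, 0 = control (placeholder labels)

selector = RFE(SVC(kernel="linear"), n_features_to_select=750, step=0.1)
model = make_pipeline(selector, SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)              # 5-fold cross-validation
print("mean CV accuracy:", scores.mean())

# Rank the surviving connections by |weight| of the final linear SVM,
# mirroring how the most relevant brain-region pairs are identified.
selector.fit(X, y)
kept = selector.get_support(indices=True)                # original feature indices
weights = np.abs(selector.estimator_.coef_[0])
top10 = kept[np.argsort(-weights)[:10]]
print("indices of the 10 highest-weight connections:", top10)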
Information steganography has received more and more attention from scholars in recent years, especially in the area of image steganography, which uses image content to transmit information and makes the existence of secret information imperceptible. To enhance concealment and security, the Steganography without Embedding (SWE) method has proven effective in avoiding the image distortion that results from cover modification. In this paper, a novel encrypted communication scheme for image SWE is proposed. It reconstructs the image into a multi-linked list structure consisting of numerous nodes, where each pixel is transformed into a single node with data and pointer domains. By employing a special addressing algorithm, the optimal linked list corresponding to the secret information can be found. The receiver can restore the secret message from the received image using only the list header position. The scheme is based on the concept of coverless steganography, eliminating the need for any modifications to the cover image. It boasts high concealment and security, along with a complete message restoration rate, making it resistant to steganalysis. Furthermore, this paper proposes linked-list construction schemes within the proposed framework, which can effectively resist a variety of attacks, including noise attacks and image compression, demonstrating a certain degree of robustness. To validate the proposed framework, practical tests and comparisons are conducted using multiple datasets. The results affirm the framework's commendable performance in terms of message restoration rate, hidden writing capacity, and robustness against diverse attacks.
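The construction can be pictured with the toy Python sketch below, which treats every pixel as a node whose pointer to the next node is derived from its own position and value, so the cover image is never modified; the sender searches for a header position whose chain of pixel least-significant bits already spells out the secret, and the receiver recovers the message from that header alone. The hash-based pointer rule and the LSB data domain are assumptions made for illustration, not the paper's actual addressing algorithm.

# Toy coverless sketch: pixels act as linked-list nodes; the "pointer domain"
# is derived from each node's position and value, and the "data domain" is the
# pixel's least-significant bit. Nothing in the cover image is ever changed.
import hashlib
import numpy as np

def next_node(pos: int, value: int, n_pixels: int) -> int:
    """Pointer domain: deterministic successor of a node (illustrative rule)."""
    digest = hashlib.sha256(f"{pos}:{value}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_pixels

def walk(image: np.ndarray, header: int, n_bits: int) -> list:
    """Receiver side: follow the list from the header and read one bit per node."""
    flat = image.ravel()
    bits, pos = [], header
    for _ in range(n_bits):
        bits.append(int(flat[pos]) & 1)                  # data domain: pixel LSB
        pos = next_node(pos, int(flat[pos]), flat.size)
    return bits

def find_header(image: np.ndarray, secret_bits: list) -> int:
    """Sender side: search for a header whose chain already encodes the secret."""
    for start in range(image.size):
        if walk(image, start, len(secret_bits)) == secret_bits:
            return start
    return -1                                            # may fail for long messages

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 1]                              # short demo message
header = find_header(cover, secret)
if header >= 0:
    assert walk(cover, header, len(secret)) == secret    # only the header is shared
    print("list header position:", header)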