The metaverse is a universal and immersive virtual world, a component of cyber-physical-social systems (CPSS). The traditional centralized approach to building a metaverse poses risks to user privacy, security, and autonomy. A growing movement towards a decentralized metaverse, built on blockchain and distributed networks, has emerged in response. This paper presents a conceptual framework for building a decentralized metaverse that emphasizes security, user autonomy, and inclusivity within situation-aware CPSS. Drawing on literature from blockchain, virtual worlds, and decentralized systems, we explore the technical and regulatory challenges that must be addressed to create a more secure, autonomous, and inclusive virtual world. Our framework highlights the importance of community-driven governance and transparent data management in building a decentralized metaverse that empowers users and promotes collaboration.
Cyber security relies heavily on manual efforts to fight malicious attacks, but such processes are increasingly overwhelmed by the volume of threats - a problem worsened during...
Dry biomass weight measurements from a quadrat in a paddock for grass, clover and weeds, when expressed as percentages of total dry herbage mass, are compositional in nature. Unlike real-valued regression problems, pred...
Computer vision is playing a remarkable role in everything from essentials to entertainment, aiming to turn the computer into a 'seeing' machine. It has widespread applications in most real-world domains like h...
ISBN (digital): 9798331542559
ISBN (print): 9798331542566
Intrusion detection systems (IDSs) are an essential part of the current cybersecurity environment: they scan network and system activity to identify possible improper actions or violations. IDSs analyze network traffic and system behavior, distinguishing suspicious information and signs of unauthorized access. This capability is critical for protecting information systems' core properties: integrity, confidentiality, and availability. Data was collected from Kaggle data archives, and several base models (K-Nearest Neighbor, Random Forest, Logistic Regression, and a Decision Tree classifier) were used. Their predictions are combined using a Voting Classifier. The models are trained and evaluated on accuracy, precision, recall, and F1-score. Results show that the Voting Classifier achieved an accuracy of 99.78%, with a precision of 99.67% and recall of 99.92% for intrusion detection. The Logistic Regression and K-Nearest Neighbor models achieved accuracies of 95.37% and 99.52%, with precisions of 94.93% and 99.51%, respectively. The Decision Tree and Random Forest models achieved accuracies of 99.76% and 99.86%, with precisions of 99.75% and 99.83%, respectively. These findings suggest that ensemble techniques significantly enhance intrusion detection, making networks safer.
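The ensemble described above can be sketched with scikit-learn's VotingClassifier. The synthetic dataset and default hyperparameters below are stand-ins for the paper's Kaggle data and tuned models:

```python
# Sketch of a hard-voting ensemble over the four base classifiers.
# Synthetic data stands in for the intrusion-detection dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Kaggle intrusion-detection data
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

voting = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=42)),
    ],
    voting="hard",  # majority vote over the base models' predicted labels
)
voting.fit(X_train, y_train)
pred = voting.predict(X_test)
print("accuracy: ", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
print("f1:       ", f1_score(y_test, pred))
```

With `voting="soft"` the ensemble would average predicted probabilities instead of counting label votes, which often helps when the base models are well calibrated.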
Massive amounts of data are written to blockchain systems for safekeeping. However, existing blockchain protocols still demand that each full node store the entire chain. Many nodes quit because they are unable to grow their storage space with the size of the data, and as the number of nodes decreases, the security of the blockchain is significantly reduced. We present SE-Chain, a novel scale-out blockchain model that improves storage scalability while ensuring safety, and achieves efficient retrieval. SE-Chain consists of three parts: the data layer, the processing layer, and the storage layer. In the data layer, each transaction is stored in an AB-M tree (Adaptive Balanced Merkle tree), which adaptively combines the advantages of a balanced binary tree (quick retrieval) and a Merkle tree (quick verification). In the processing layer, full nodes store the part of the complete chain selected by the duplicate ratio regulation algorithm. Meanwhile, a node reliability verification method is used to increase the stability of full nodes and reduce the risk of incomplete data recovery caused by the reduced number of duplicates in the storage layer. Experimental results on real datasets show that the query time of SE-Chain based on the AB-M tree is reduced by 17% with 16 nodes. Overall, SE-Chain greatly improves storage scalability and implements efficient querying of transactions.
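The AB-M tree idea can be illustrated with a minimal sketch: a balanced binary search tree over transaction keys (O(log n) lookup) in which every node also carries a hash committing to its subtree (any tampering changes the root hash). This is a conceptual illustration, not SE-Chain's actual implementation, and the key/payload names are invented:

```python
# Minimal sketch: balanced BST with Merkle-style subtree hashes.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


class Node:
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left, self.right = left, right
        # Hash commits to this entry and both subtrees (Merkle property)
        self.hash = h(
            repr((key, value)).encode()
            + (left.hash if left else b"")
            + (right.hash if right else b"")
        )


def build(entries):
    """Build a perfectly balanced BST from sorted (key, value) pairs."""
    if not entries:
        return None
    mid = len(entries) // 2
    k, v = entries[mid]
    return Node(k, v, build(entries[:mid]), build(entries[mid + 1:]))


def lookup(node, key):
    """Standard BST search: O(log n) on a balanced tree."""
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None


txs = sorted((f"tx{i:02d}", f"payload-{i:02d}") for i in range(16))
root = build(txs)
print(lookup(root, "tx07"))  # quick retrieval via BST search
print(root.hash.hex())       # root hash enables quick verification
```

Verification works as in an ordinary Merkle tree: recomputing the hashes along one root-to-leaf path is enough to check a single transaction against the root hash.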
ISBN (digital): 9798331543891
ISBN (print): 9798331543907
Liver disease is a major global health concern, and effective treatment requires an early and accurate diagnosis. Deep learning (DL) and machine learning (ML) techniques have been widely applied to improve disease prediction and classification. A range of models, including Convolutional Neural Networks (CNN), Support Vector Machines (SVM), Logistic Regression (LR), and K-Nearest Neighbours (KNN), were trained on the Indian Liver Patient Dataset (ILPD) from Kaggle to diagnose cases of liver disease. With an accuracy of 96.21%, precision of 74.76%, recall of 92.77%, and F1-score of 82.80%, the CNN model outperforms all others, demonstrating its ability to extract complex patterns for accurate categorization. Experimental results show that DL methodologies, especially CNN, can substantially enhance predictive power compared to ML methods.
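A 1-D convolutional classifier over tabular patient features, of the kind this comparison involves, might be sketched in PyTorch as follows. The layer sizes and the 10-feature input are illustrative assumptions, not the paper's architecture:

```python
# Sketch of a small 1-D CNN for tabular patient records.
import torch
import torch.nn as nn


class LiverCNN(nn.Module):
    def __init__(self, n_features=10, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # convolve across features
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool feature axis to length 1
            nn.Flatten(),             # (batch, 32)
            nn.Linear(32, n_classes),
        )

    def forward(self, x):  # x: (batch, n_features)
        # Add a channel dimension: (batch, 1, n_features)
        return self.net(x.unsqueeze(1))


model = LiverCNN()
x = torch.randn(8, 10)  # batch of 8 synthetic patient records
logits = model(x)
print(logits.shape)     # one logit per class for each record
```

In practice the ILPD features would be standardized before training, and the logits fed to a cross-entropy loss.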
Sparsely-activated Mixture-of-Expert (MoE) layers have found practical applications in enlarging the model size of large-scale foundation models, with only a sub-linear increase in computation demands. Despite the wid...
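The sparse-activation idea can be sketched as a top-k gated MoE layer: parameter count scales with the number of experts, while per-token compute scales only with k. The sizes below are illustrative:

```python
# Sketch of a sparsely-activated (top-k gated) Mixture-of-Experts layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    """Only k experts run per token, so compute grows sub-linearly
    with the total number of expert parameters."""

    def __init__(self, d_model=32, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.gate(x)                        # (tokens, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)     # pick k experts per token
        weights = F.softmax(topv, dim=-1)            # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(1)
                    out[mask] += w * expert(x[mask])
        return out


moe = SparseMoE()
y = moe(torch.randn(16, 32))
print(y.shape)
```

Production implementations dispatch tokens to experts in batched scatter/gather operations rather than looping, and add a load-balancing loss so the router does not collapse onto a few experts.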
A key task in the field of Natural Language Processing (NLP) is determining semantic similarity between text sentences. Sentence pair modeling, textual similarity, and language modeling are important NLP tasks. Traditional machine learning algorithms require an enormous quantity of training data, and training is time-consuming. Pre-trained models can be adapted for a variety of downstream applications since they use methods for generically learning the characteristics of neural network topologies and language representations. Bidirectional Encoder Representations from Transformers (BERT) and GPT are popular NLP architectures that produce effective results with minimal fine-tuning effort. In this work, a fine-tuned BERT model for semantic sentence similarity is presented, which predicts the entailment, neutral, and contradiction categories for sentence pairs. Fine-tuning streamlines the training phase and is widely effective across different types of semantic similarity models. The performance analysis of our system shows that the fine-tuned model reduces the number of neurons in the neural network, thereby reducing the storage and time spent on the expensive training task of creating a deep learning model.
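A minimal sketch of the three-way classification head that sits on top of BERT's pooled [CLS] representation, assuming BERT-base's hidden size of 768. The pretrained encoder itself (e.g., loaded via a library such as Hugging Face Transformers) is omitted here, and random vectors stand in for its pooled outputs:

```python
# Sketch: three-way NLI classification head over BERT pooled outputs.
import torch
import torch.nn as nn

LABELS = ["entailment", "neutral", "contradiction"]


class NLIHead(nn.Module):
    """Head fine-tuned jointly with the encoder on sentence pairs."""

    def __init__(self, hidden_size=768, n_labels=len(LABELS)):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, n_labels)

    def forward(self, pooled):  # pooled: (batch, hidden_size) [CLS] vectors
        return self.classifier(self.dropout(pooled))


head = NLIHead()
pooled = torch.randn(4, 768)  # stand-in for BERT pooled outputs of 4 sentence pairs
logits = head(pooled)
preds = [LABELS[i] for i in logits.argmax(dim=-1).tolist()]
print(preds)
```

During fine-tuning, each premise-hypothesis pair is packed into one input (`[CLS] premise [SEP] hypothesis [SEP]`), and the cross-entropy loss on these logits is backpropagated through both the head and the encoder.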
Metaheuristics are essential tools for efficiently solving combinatorial optimization problems arising in many fields. As incomplete methods, metaheuristics can provide good-quality results in a very short time. Among these approaches, the Iterated Greedy (IG) algorithm has emerged as a powerful and flexible method for finding near-optimal solutions to combinatorial problems. In this paper, we conduct a comprehensive systematic literature review of the variants of the IG approach and its applications, covering the period from its inception in 2007 up to 2022. To the best of our knowledge, this is the first work in which all operators and aspects of IG are discussed, providing a detailed picture of this approach.
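The IG destruction-construction loop can be sketched generically: remove d elements from the current permutation, greedily reinsert each at its best position, and accept the result if it does not worsen the objective. The toy objective below is illustrative only; real applications of IG typically minimize, e.g., flowshop makespan:

```python
# Generic Iterated Greedy skeleton on a toy permutation problem.
import random


def cost(perm):
    # Toy objective: total absolute difference between adjacent elements
    # (minimized by a sorted permutation).
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))


def greedy_insert(partial, elem):
    """Construction step: insert elem at its best position."""
    best = None
    for i in range(len(partial) + 1):
        cand = partial[:i] + [elem] + partial[i:]
        if best is None or cost(cand) < cost(best):
            best = cand
    return best


def iterated_greedy(n=20, d=4, iters=200, seed=0):
    random.seed(seed)
    current = list(range(n))
    random.shuffle(current)
    best = current[:]
    for _ in range(iters):
        # Destruction: remove d random elements
        removed = random.sample(current, d)
        partial = [x for x in current if x not in removed]
        # Construction: greedily reinsert each removed element
        for e in removed:
            partial = greedy_insert(partial, e)
        # Acceptance: keep the new solution if it is no worse
        if cost(partial) <= cost(current):
            current = partial
        if cost(current) < cost(best):
            best = current[:]
    return best, cost(best)


perm, c = iterated_greedy()
print(c)
```

Variants surveyed in the literature differ mainly in these three operators: the destruction size d, the reconstruction heuristic, and the acceptance criterion (e.g., simulated-annealing-style acceptance instead of the greedy rule above).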