ISBN (digital): 9781728190549
ISBN (print): 9781728190556
As smart home IoT systems gain popularity, they inevitably become targets for security risks and concerns. Among the various cyber-attacks targeting these systems, the fake event attack poses a significant threat because of its ability to manipulate secure devices through automation rules. In response, we propose TrustEvent, a system designed to offer end-to-end event signature verification. By integrating TrustEvent with existing home automation platforms, event authenticity is verified against signatures generated on edge devices before these events trigger automation rule execution. Notably, we have developed a signature proxy module that enhances the system's compatibility across various platform scenarios. We have implemented a TrustEvent prototype on existing commercial smart home IoT platforms and evaluated its overhead. Our experiments demonstrate that the system increases automation execution latency by an average of only 3.74 seconds, an acceptable compromise for enhanced security.
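A minimal sketch of the end-to-end check the abstract describes: the edge device signs each event, and the platform verifies the signature before any automation rule fires. The HMAC scheme, key handling, and function names here are illustrative assumptions, not TrustEvent's actual design.

```python
# Sketch: edge device signs events; the platform verifies before rules run.
# HMAC-SHA256 and all names are illustrative, not TrustEvent's real scheme.
import hmac, hashlib, json

DEVICE_KEY = b"per-device secret provisioned at pairing"  # hypothetical

def sign_event(event: dict) -> str:
    """Edge-device side: sign the canonical event payload."""
    payload = json.dumps(event, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_and_trigger(event: dict, signature: str, rules) -> bool:
    """Platform side: execute automation rules only for authentic events."""
    payload = json.dumps(event, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # fake event: dropped before any rule executes
    for rule in rules:
        rule(event)
    return True

event = {"device": "door-sensor-1", "state": "open", "ts": 1700000000}
sig = sign_event(event)
assert verify_and_trigger(event, sig, [lambda e: print("turn on hallway light")])
assert not verify_and_trigger({**event, "state": "closed"}, sig, [])  # tampered
```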
We present DeepSAT, a novel end-to-end learning framework for the Boolean satisfiability (SAT) problem. Unlike existing solutions trained on random SAT instances with relatively weak supervision, we propose applying the knowledge of the well-developed electronic design automation (EDA) field to SAT solving. Specifically, we first resort to logic synthesis algorithms to pre-process SAT instances into optimized and-inverter graphs (AIGs). By doing so, the distribution diversity among various SAT instances can be dramatically reduced, which improves the generalization capability of the learned model. Next, we regard the distribution of SAT solutions as a product of conditional Bernoulli distributions. Based on this observation, we approximate the SAT solving procedure with a conditional generative model, leveraging a novel directed acyclic graph neural network (DAGNN) with two polarity prototypes for conditional SAT modeling. To effectively train the generative model, with the help of logic simulation tools, we obtain the probabilities of nodes in the AIG being logic ‘1’ as rich supervision. We conduct comprehensive experiments on various SAT problems. Our results show that DeepSAT achieves significant accuracy improvements over state-of-the-art learning-based SAT solutions, especially when generalized to SAT instances that are relatively large or drawn from diverse distributions.
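To make the factorization concrete, here is a toy sketch of sampling an assignment from a product of conditional Bernoullis, the shape of the solving procedure the abstract describes. The probability function is a stand-in for the paper's DAGNN over the AIG; the variable ordering and names are assumptions.

```python
# Sketch: sample an assignment by factorizing the solution distribution as a
# product of conditional Bernoullis. `logic1_prob` stands in for the DAGNN.
import random

def logic1_prob(var, assigned):
    """Placeholder for the learned model: P(var = 1 | variables assigned so far)."""
    return 0.5  # a real model would condition on the AIG structure and `assigned`

def sample_assignment(variables, prob_fn=logic1_prob):
    assigned = {}
    for v in variables:              # e.g., a topological order over AIG nodes
        p = prob_fn(v, assigned)     # conditional Bernoulli parameter
        assigned[v] = 1 if random.random() < p else 0
    return assigned

print(sample_assignment(["x1", "x2", "x3"]))
```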
The face mask became an essential piece of sanitaryware in daily life during the pandemic period and poses a serious threat to current face recognition systems. Masks destroy many details across a large area of the face, and it ...
ISBN (digital): 9798350353266
ISBN (print): 9798350353273
Within the domain of human-robot communication, effective communication is paramount for seamless collaboration between humans and robots. A promising method for improving language grounding is reinforcement learning (RL), which enables robots to translate spoken commands into suitable behaviors. This paper presents a comprehensive review of recent advancements in RL techniques applied to the task of language grounding in human-robot interaction, focusing specifically on instruction following. Key challenges in this domain include the ambiguity of natural language, the complexity of action spaces, and the need for robust and interpretable models. Various RL algorithms and architectures tailored for language grounding tasks are discussed, highlighting their strengths and limitations. Furthermore, real-world applications and experimental results are examined, showcasing the effectiveness of RL-based approaches in enabling robots to understand and execute instructions from human users. Finally, promising directions for future research are identified, emphasizing the importance of addressing scalability, generalization, and adaptability in RL-based language grounding systems for human-robot interaction.
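For readers unfamiliar with the setup the survey covers, here is a toy sketch of instruction-conditioned tabular Q-learning, the most basic shape of an RL instruction-following loop. The one-dimensional environment, reward, and instruction encoding are all illustrative assumptions, far simpler than anything the reviewed systems use.

```python
# Toy sketch: Q-learning where the instruction is part of the state key, so
# the policy grounds language in behavior. Environment is a 1-D line world.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
Q = defaultdict(float)  # key: (instruction, state, action)

def step(state, action, goal):
    state += -1 if action == "left" else 1
    return state, (1.0 if state == goal else 0.0)

def train(instruction, goal, episodes=500, eps=0.1, alpha=0.5, gamma=0.9):
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda act: Q[(instruction, s, act)])
            s2, r = step(s, a, goal)
            best_next = max(Q[(instruction, s2, a2)] for a2 in ACTIONS)
            Q[(instruction, s, a)] += alpha * (r + gamma * best_next
                                               - Q[(instruction, s, a)])
            if r:
                break
            s = s2

train("go right", goal=3)
print(max(ACTIONS, key=lambda a: Q[("go right", 0, a)]))  # expect "right"
```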
Convolutional neural networks (CNNs) and Transformers have achieved significant success in low-level vision tasks. Although the two exhibit prominent complementary characteristics, namely a larger receptive field and better convergence, few efforts have combined them efficiently because each has its own non-negligible weaknesses. In this paper, we propose a hybrid progressive coupled network (HPCNet) for rain perturbation removal, which integrates the advantages of these two architectures while maintaining both effectiveness and efficiency. In particular, we achieve progressive decomposition and association of rain-free and rain features through an asymmetrical dual-path mutual representation network that alleviates the computational cost. Meanwhile, we equip the network with high-efficiency convolutions and a resolution rescaling strategy to reduce computational complexity. Extensive experiments show that our method outperforms MPRNet on average while saving 87.2% of the computational cost and 61.1% of the parameters.
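A minimal PyTorch sketch of the asymmetrical dual-path idea the abstract outlines: a light path and a heavier path model the rain and rain-free components and exchange features so each path sees the other. The channel sizes, depthwise convolutions, and fusion scheme are illustrative assumptions, not HPCNet's actual architecture.

```python
# Sketch: asymmetrical dual-path block with mutual feature exchange.
# All layer choices are illustrative, not HPCNet's published design.
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.rain_path = nn.Sequential(                  # light path: rain layer
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # depthwise, cheap
            nn.Conv2d(ch, ch, 1), nn.ReLU())
        self.bg_path = nn.Sequential(                    # heavier path: background
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse_r = nn.Conv2d(2 * ch, ch, 1)           # mutual representation
        self.fuse_b = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, rain, bg):
        r, b = self.rain_path(rain), self.bg_path(bg)
        rain = self.fuse_r(torch.cat([r, b], dim=1))     # each path sees the other
        bg = self.fuse_b(torch.cat([b, r], dim=1))
        return rain, bg

block = DualPathBlock()
x = torch.randn(1, 32, 64, 64)
rain, bg = block(x, x)
print(rain.shape, bg.shape)  # torch.Size([1, 32, 64, 64]) each
```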
Mining high-quality logical rules is crucial, as they can provide beneficial interpretability for predictions. Recent methods that incorporate logical rules into learning tasks have been proven to yield high-quality logical rules. However, existing methods either rely on observed rule instances to support rule mining or simply embed the rule head and the rule body and learn from them. Additionally, they cannot fully utilize the rich semantic information contained in logical rules, and they overlook the intrinsic correlations between all relations within the domain. In this paper, we propose a model called Correlation and order-Aware Rule Learning (CARL) that captures deeper semantic information in rules by allowing relations to be co-aware of each other and by attending to logical sequence sensitivity. CARL uses the semantic consistency between the rule body and rule head as its learning objective, continuously introducing more semantic information and logically simplifying the rule body while accounting for logical sequence sensitivity. We explore the internal correlations between domain relations and apply the idea of knowledge distillation to simplify modules so that relations in CARL can share or perceive each other’s information or state efficiently. Experiments on link prediction tasks demonstrate that CARL learns higher-quality rules and yields state-of-the-art results on four popular public datasets. https://***/burning5112/CARL
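The following numpy sketch illustrates the learning signal named in the abstract: embed relations, compose the rule body in an order-sensitive way, and score its semantic consistency with the rule head. The embeddings, the position-weighted composition, and the cosine score are assumptions for illustration, not CARL's actual formulation.

```python
# Sketch: score semantic consistency between a composed rule body and a rule
# head. Random embeddings and the composition scheme are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
relations = {r: rng.normal(size=DIM) for r in
             ["born_in", "city_of", "nationality"]}

def compose_body(body):
    """Order-aware composition: weight each relation by its position,
    a crude proxy for the logical sequence sensitivity CARL models."""
    return sum((i + 1) * relations[r] for i, r in enumerate(body))

def consistency(body, head):
    """Cosine similarity between composed body and head embeddings."""
    b, h = compose_body(body), relations[head]
    return float(b @ h / (np.linalg.norm(b) * np.linalg.norm(h)))

# Rule: born_in(x, y) AND city_of(y, z) -> nationality(x, z)
print(consistency(["born_in", "city_of"], "nationality"))
```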
Accurate classification of battery quality and prediction of battery lifetime before leaving the factory would bring economic and safety benefits. Here, we propose a data-driven approach with machine learning to classify battery quality and predict battery lifetime before usage using only formation data. We extract three classes of features from the raw formation data, considering statistical aspects, differential analysis, and electrochemical mechanisms. The correlation between over 100 extracted features and the battery lifetime is analysed based on the ageing data. Machine learning models are developed to classify battery quality and predict battery lifetime from the features with a high correlation with battery lifetime. The validation results show that the quality classification model achieved accuracies of 89.74% and 89.47% for the batteries aged at 25℃ and 45℃, respectively. Moreover, the lifetime prediction model is able to predict the battery end-of-life with mean percentage errors of 6.50% and 5.45% for the batteries aged at 25℃ and 45℃, respectively. This work highlights the potential of battery formation data from production lines in quality classification and lifetime prediction.
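A minimal sketch of the pipeline the abstract outlines: extract simple statistical and differential features from formation curves, keep the features most correlated with lifetime, and fit a quality classifier. The synthetic data, correlation threshold, and choice of logistic regression are assumptions, not the paper's actual features or models.

```python
# Sketch: feature extraction -> correlation filter -> quality classifier.
# The formation curves are synthetic; all thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
curves = rng.normal(size=(40, 200))   # 40 cells, fake formation voltage curves
lifetime = curves.mean(axis=1) * 100 + rng.normal(scale=5, size=40)

feats = np.column_stack([curves.mean(1), curves.std(1),
                         curves.max(1) - curves.min(1),
                         np.diff(curves, axis=1).mean(1)])  # differential feature

# Keep features whose |Pearson r| with lifetime exceeds a threshold.
r = np.array([np.corrcoef(f, lifetime)[0, 1] for f in feats.T])
selected = feats[:, np.abs(r) > 0.3]

quality = (lifetime > np.median(lifetime)).astype(int)  # good vs. poor cells
clf = LogisticRegression().fit(selected, quality)
print("train accuracy:", clf.score(selected, quality))
```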
ISBN (digital): 9783982674100
ISBN (print): 9798331534646
Considering the potential benefits to lifespan and performance, zoned flash storage is expected to be incorporated into the next generation of consumer devices. However, due to the limited volatile cache and heterogeneous flash cells of consumer-grade flash storage, adopting a zone abstraction requires additional internal hardware design to maximize its benefits. To understand and efficiently improve the hardware design of consumer-grade zoned flash storage, we present ConZone, the first emulator tailored to the characteristics of consumer-grade zoned flash storage. Users can explore the internal architecture and management strategies of consumer-grade zoned flash storage and integrate optimizations with software. We validate the accuracy of ConZone by realizing a hardware architecture for consumer-grade zoned flash storage and comparing it with the state-of-the-art. We also conduct a case study on read performance with ConZone to explore the design of mapping mechanisms and cache management strategies.
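For context, here is a minimal model of the zone abstraction such an emulator has to reproduce: each zone keeps a write pointer, accepts only sequential appends until full, and must be reset as a whole before reuse. The sizes, class names, and error handling are illustrative assumptions, not ConZone's interface.

```python
# Minimal model of zoned-storage semantics: sequential-only writes tracked by
# a per-zone write pointer, reclaimed only by whole-zone reset (as in flash).
class Zone:
    def __init__(self, size_blocks=64):
        self.size = size_blocks
        self.wp = 0                        # write pointer, in blocks
        self.data = [None] * size_blocks

    def append(self, block):
        if self.wp >= self.size:
            raise IOError("zone full: reset required before rewriting")
        self.data[self.wp] = block         # writes are strictly sequential
        self.wp += 1
        return self.wp - 1                 # block address within the zone

    def reset(self):
        """Whole-zone erase: the only way to reclaim space."""
        self.wp, self.data = 0, [None] * self.size

z = Zone()
addr = z.append(b"log entry")
print(addr, z.wp)   # 0 1
z.reset()
```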
ISBN (digital): 9798350348811
ISBN (print): 9798350348828
In this work, we present a novel approach for general object segmentation from a monocular image, eliminating the need for manually labeled training data and enabling rapid, straightforward training and adaptation with minimal data. Our model initially learns from LiDAR during training; the LiDAR is subsequently removed from the system, allowing it to function solely on monocular imagery. This study leverages the concept of the Stixel-World to recognize a medium-level representation of its surroundings. Our network directly predicts a 2D multi-layer Stixel-World and is capable of recognizing and locating multiple, superimposed objects within an image. Due to the scarcity of comparable works, we have divided the capabilities into modules and present free-space detection in our experiments section. Furthermore, we introduce an improved method for generating Stixels from LiDAR data, which we use as ground truth for our network.
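To make the representation concrete, here is a sketch of a multi-layer Stixel-World as a data structure: per image column, a stack of vertical segments with a distance each, plus a free-space helper that takes the nearest stixel per column. The field names and the helper are assumptions for illustration, not the paper's definitions.

```python
# Sketch: a 2D multi-layer Stixel-World and a naive free-space extraction.
# Field names and units are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stixel:
    col: int         # image column
    top: int         # top row of the vertical segment (pixels)
    bottom: int      # bottom row
    distance: float  # metric distance to the obstacle (e.g., from LiDAR GT)

def free_space(stixels, num_cols):
    """Distance to the closest obstacle in every column (inf if none)."""
    nearest = [float("inf")] * num_cols
    for s in stixels:                # several stixels may share one column
        nearest[s.col] = min(nearest[s.col], s.distance)
    return nearest

world = [Stixel(0, 100, 180, 12.5), Stixel(0, 40, 95, 30.0),  # superimposed
         Stixel(2, 90, 200, 6.1)]
print(free_space(world, num_cols=4))  # [12.5, inf, 6.1, inf]
```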
ISBN (print): 9781665476621
After designing a simulation and running it locally on a small network instance, the implementation can be scaled up via parallel and distributed computing (e.g., a cluster) to cope with massive networks. However, implementation changes can create errors (e.g., parallelism errors), which are difficult to identify, since the aggregate behavior of an incorrect implementation of a stochastic network simulation can fall within the distributions expected from correct implementations. In this paper, we propose the first approach that applies machine learning to traces of network simulations to detect errors. Our technique transforms simulation traces into images by reordering the network's adjacency matrix and then trains supervised machine learning models. Our evaluation on three simulation models shows that we can easily detect previously encountered types of errors and even confidently detect new errors. This work opens up numerous opportunities for examining other simulation models, representations (i.e., matrix reordering algorithms), or machine learning techniques.
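A sketch of the trace-to-image step the abstract describes, using Reverse Cuthill-McKee as one concrete reordering algorithm (the paper leaves the choice open): reorder the adjacency matrix so structure becomes visually regular, then treat the dense matrix as a grayscale image for a supervised classifier. The random graph and the specific reordering are illustrative assumptions.

```python
# Sketch: reorder a network's adjacency matrix (Reverse Cuthill-McKee) and
# treat the result as an image for a downstream supervised classifier.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(0)
n = 50
adj = (rng.random((n, n)) < 0.05).astype(np.uint8)
adj = np.maximum(adj, adj.T)                 # make the network undirected

perm = reverse_cuthill_mckee(csr_matrix(adj), symmetric_mode=True)
image = adj[np.ix_(perm, perm)] * 255        # reordered matrix as grayscale

print(image.shape)  # (50, 50) array, ready to feed an image classifier
```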