Machine learning has been widely utilized in recent years to construct data-driven solutions for predicting the lifetime of rechargeable batteries, which project the physical measurements obtained during the early charging/discharging cycles to the remaining useful lifetime. While most existing techniques train the prediction model by minimizing the prediction error only, errors in the physical measurements themselves can also degrade prediction accuracy. Although total-least-squares (TLS) regression has been applied to address this issue, it relies on the unrealistic assumption that the measurement-error distributions of all input variables are equivalent, and therefore cannot appropriately capture the practical characteristics of battery degradation. To tackle this challenge, this work models the variations along different input dimensions, thereby improving the accuracy and robustness of battery lifetime prediction. Specifically, we propose an innovative EM-TLS framework that enhances TLS-based prediction to accommodate dimension-variate errors, while simultaneously estimating their distributions using expectation-maximization (EM). Experiments on data from commercial lithium-ion batteries validate the proposed method, which reduces the prediction error by up to 29.9% compared with conventional TLS. This demonstrates the immense potential of the proposed method for advancing the R&D of rechargeable batteries.
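To make the baseline concrete: conventional TLS, which the abstract contrasts against, treats errors in the inputs and the target symmetrically and can be computed from the SVD of the stacked data matrix. Below is a minimal sketch of that classical TLS fit (not the paper's EM-TLS, which additionally models per-dimension error variances).

```python
import numpy as np

def tls_fit(X, y):
    """Classical total least squares for X @ beta ~ y.

    Takes the right singular vector of the augmented matrix [X | y]
    associated with the smallest singular value: the direction of
    least variance, i.e. the best perpendicular-error fit.
    """
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                 # singular vector for the smallest singular value
    return -v[:-1] / v[-1]     # recover beta from the null direction

# Noise-free example: y = 2 * x, so TLS recovers the slope exactly.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
beta = tls_fit(X, y)
```

Ordinary least squares would coincide with TLS on this noise-free example; the two diverge once the inputs themselves carry measurement error, which is exactly the regime the abstract targets.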
Automated machine learning (AutoML) has achieved remarkable success in automating the non-trivial process of designing machine learning models. Among the focal areas of AutoML, neural architecture search (NAS) stands out, aiming to systematically explore the complex architecture space and discover optimal neural architecture configurations without intensive manual effort. NAS has demonstrated its capability for dramatic performance improvement across a large number of real-world tasks. The core components of NAS methodologies normally include (i) defining the appropriate search space, (ii) designing the right search strategy, and (iii) developing an effective evaluation strategy. While early NAS endeavors are characterized by groundbreaking architecture designs, their exorbitant computational demands prompted a shift towards more efficient paradigms such as weight sharing and evaluation estimation. Meanwhile, the introduction of specialized benchmarks has paved the way for standardized comparisons of NAS techniques. Moreover, the adaptability of NAS is evidenced by its capability of extending to diverse datasets, including graphs, tabular data and videos, each of which requires a tailored configuration. This paper delves into the multifaceted aspects of NAS, elaborating on its recent advances, applications, tools, benchmarks and prospective research directions.
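The three core components the abstract lists (search space, search strategy, evaluation strategy) can be illustrated with the simplest possible NAS loop: random search over a tiny hypothetical configuration space with a stand-in scoring function. Everything here (the space, the score) is invented for illustration; real NAS would train and validate each candidate.

```python
import random

# (i) Search space: a toy, hypothetical grid of architecture knobs.
SEARCH_SPACE = {
    "depth": [2, 4, 6],
    "width": [32, 64, 128],
    "act":   ["relu", "gelu"],
}

def mock_score(cfg):
    # (iii) Evaluation strategy: a deterministic analytic proxy standing in
    # for the costly "train then validate" step. NOT a real accuracy.
    return cfg["depth"] * 0.1 + cfg["width"] * 0.001 + (0.05 if cfg["act"] == "gelu" else 0.0)

def random_search(n_trials, seed=0):
    # (ii) Search strategy: uniform random sampling, the usual NAS baseline.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = mock_score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

cfg, score = random_search(100)
```

The efficiency paradigms the abstract mentions (weight sharing, evaluation estimation) all attack step (iii), since evaluating a candidate is what dominates the cost of this loop.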
Temporal knowledge graph (TKG) reasoning has seen widespread use for modeling real-world events, particularly in extrapolation settings. Nevertheless, most previous studies are embedding models, which require both entity and relation embeddings to make predictions and ignore the semantic correlations among different entities and relations within the same timestamp. This can lead to random and nonsensical predictions when unseen entities or relations occur. Furthermore, many existing models exhibit limitations in handling highly correlated historical facts with extensive temporal depth: they often either overlook such facts or overly accentuate the relationships between recurring past occurrences and their current counterparts. Due to the dynamic nature of TKGs, effectively capturing the evolving semantics between different timestamps can be challenging. To address these shortcomings, we propose the recurrent semantic evidence-aware graph neural network (RE-SEGNN), a novel graph neural network that can learn the semantics of entities and relations simultaneously. For the former challenge, our model can predict a possible answer to a missing quadruple based on semantics when facing unseen entities or relations. For the latter problem, we build on a well-established observation: both the recency and frequency of semantic history tend to confer a higher reference value on the present. We use the Hawkes process to compute the semantic trend, which allows the semantics of recent facts to gain more attention than those of distant facts. Experimental results show that RE-SEGNN outperforms all SOTA models in entity prediction on 6 widely used datasets, and on 5 datasets in relation prediction. Furthermore, a case study shows how our model deals with unseen entities and relations.
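The recency/frequency intuition behind the Hawkes-process weighting can be sketched in a few lines: each historical fact contributes an exponentially decayed excitation, so recent facts receive more attention weight, and repeated facts accumulate more total mass. This is a generic Hawkes-style decay kernel, not RE-SEGNN's actual attention computation; the decay rate is an assumed free parameter.

```python
import math

def hawkes_weights(event_times, now, decay=0.5):
    """Normalized attention weights over past events under an exponential
    Hawkes decay kernel exp(-decay * elapsed): recent events weigh more,
    and a frequently recurring fact accumulates weight across its events."""
    raw = [math.exp(-decay * (now - t)) for t in event_times]
    total = sum(raw)
    return [w / total for w in raw]

# Three historical occurrences at times 0, 1, 2, queried at time 3.
ws = hawkes_weights([0.0, 1.0, 2.0], now=3.0)
```

Summing weights over all occurrences of the same fact then captures frequency, while the kernel itself captures recency, matching the two signals the abstract says "confer a higher reference value on the present."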
Bayesian modelling helps applied researchers to articulate assumptions about their data and develop models tailored for specific applications. Thanks to good methods for approximate posterior inference, researchers can now easily build, use, and revise complicated Bayesian models for large and rich data. These capabilities, however, bring into focus the problem of model criticism: researchers need tools to diagnose the fitness of their models, to understand where they fall short, and to guide their revision. In this paper, we develop a new method for Bayesian model criticism, the holdout predictive check (HPC). Holdout predictive checks are built on posterior predictive checks (PPCs), a seminal method that checks a model by assessing the posterior predictive distribution on the observed data. However, PPCs use the data twice—both to calculate the posterior predictive and to evaluate it—which can lead to uncalibrated p-values. Holdout predictive checks, in contrast, compare the posterior predictive distribution to a draw from the population distribution, a held-out dataset. This method blends Bayesian modelling with frequentist assessment. Unlike the PPC, we prove that the HPC is properly calibrated. Empirically, we study the HPC on classical regression, a hierarchical model of text data, and factor analysis.
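The PPC machinery that the HPC builds on is simple to state: draw replicated datasets from the posterior predictive, compute a test statistic on each, and report the fraction that exceeds the statistic on the reference data. Below is a minimal sketch assuming a Normal(theta, 1) likelihood and posterior draws supplied by the caller; to turn this PPC into an HPC in the paper's sense, one would pass a held-out dataset as `reference` instead of the data used to fit the posterior.

```python
import random

def predictive_pvalue(reference, posterior_draws, statistic, rng):
    """Predictive-check p-value: fraction of replicated datasets whose
    test statistic is at least that of the reference data.

    Assumes a Normal(theta, 1) likelihood purely for illustration."""
    t_ref = statistic(reference)
    exceed = 0
    for theta in posterior_draws:
        replicated = [rng.gauss(theta, 1.0) for _ in reference]
        if statistic(replicated) >= t_ref:
            exceed += 1
    return exceed / len(posterior_draws)

rng = random.Random(0)
mean = lambda xs: sum(xs) / len(xs)
# Well-specified toy case: reference mean matches the posterior draws,
# so the p-value should land near 0.5 (neither tail is extreme).
p = predictive_pvalue([0.0] * 20, [0.0] * 200, mean, rng)
```

The abstract's "double use of data" critique corresponds to computing `posterior_draws` and `reference` from the same observations, which pulls the replicated statistics toward the observed one and concentrates p-values away from uniform.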
Hybrid memory systems composed of dynamic random access memory (DRAM) and non-volatile memory (NVM) often exploit page migration technologies to take full advantage of the different memory media. However, previous proposals usually migrate data at a granularity of 4 KB pages, and thus waste memory bandwidth and DRAM space. In this paper, we propose Mocha, a non-hierarchical architecture that organizes DRAM and NVM in a flat physical address space but manages them in a cache/memory hierarchy. Since the commercial NVM device, Intel Optane DC Persistent Memory Modules (DCPMM), actually accesses the physical media at a granularity of 256 bytes (an Optane block), we manage the DRAM cache at this 256-byte size to adapt to the feature of DCPMM. This design not only enables fine-grained data migration and management for the DRAM cache, but also avoids write amplification for Intel Optane DCPMM. We also create an Indirect Address Cache (IAC) in the Hybrid Memory Controller (HMC) and propose a reverse address mapping table in the DRAM to speed up address translation and cache lookup. Moreover, we exploit a utility-based caching mechanism to filter cold blocks in the NVM and further improve the efficiency of the DRAM cache. We implement Mocha in an architectural simulator. Results show that Mocha improves application performance by 8.2% on average (up to 24.6%), and reduces energy consumption by 6.9% and data migration traffic by 25.9% on average, compared with a typical hybrid memory architecture, HSCC.
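The key granularity argument — caching at 256-byte Optane blocks instead of migrating 4 KB pages — can be seen in a toy direct-mapped cache model. This is a sketch of the idea only, not Mocha's hardware design (no IAC, reverse mapping table, or utility filtering); the set count and mapping are assumptions for illustration.

```python
BLOCK = 256  # Optane block size: the DRAM-cache management granularity

class BlockCache:
    """Toy direct-mapped DRAM cache managed at 256-byte granularity.

    On a miss it fills exactly one 256-byte block, illustrating why
    fine-grained management avoids the bandwidth waste of moving a
    whole 4 KB page (16 blocks) to service one access."""

    def __init__(self, n_sets):
        self.n_sets = n_sets
        self.tags = [None] * n_sets   # one tag per set; None means empty
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // BLOCK
        idx, tag = block % self.n_sets, block // self.n_sets
        if self.tags[idx] == tag:
            self.hits += 1
            return True
        self.tags[idx] = tag          # migrate a single block, not a page
        self.misses += 1
        return False

cache = BlockCache(n_sets=4)
cache.access(0)      # cold miss: fills block 0
cache.access(100)    # same 256-byte block: hit
cache.access(0)      # hit again
```

A page-granularity design would have moved 4 KB on the first miss; here only the 256 bytes actually touched are migrated, which is also what avoids write amplification on the Optane side.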
Software-defined networks (SDNs) present a novel network architecture that is widely used in various datacenters. However, SDNs also suffer from many types of security threats, among which a distributed denial of service (DDoS) attack, which aims to drain the resources of SDN switches and controllers, is one of the most common. Once a switch or controller is damaged, network services can become unavailable. Most defense schemes against DDoS attacks have been proposed from the perspective of attack detection; however, such schemes are known to be time-consuming and of limited accuracy, which can leave the network service unavailable before specific countermeasures are taken. To address this issue through a systematic investigation, we propose an elaborate resource-management mechanism against DDoS attacks in an SDN. Specifically, by considering the SDN topology, we leverage the M/M/c queuing model to measure the resistance of an SDN to DDoS attacks. Network administrators can therefore invest a reasonable amount of resources into SDN switches and controllers to defend against DDoS attacks while guaranteeing the quality of service (QoS). Comprehensive analyses and empirical data-based experiments demonstrate the effectiveness of the proposed approach.
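The M/M/c model the abstract leverages has a classical closed form: the Erlang C formula gives the probability that an arriving request must queue, from which one can size the number of servers (here, a stand-in for switch/controller capacity) against a QoS target. The sketch below is the textbook formula, not the paper's topology-aware mechanism; the provisioning loop is a simple illustration.

```python
import math

def erlang_c(arrival_rate, service_rate, c):
    """P(wait) in an M/M/c queue (Erlang C). Requires rho = a/c < 1."""
    a = arrival_rate / service_rate           # offered load in Erlangs
    rho = a / c
    assert rho < 1, "queue is unstable"
    below = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (below + top)

def min_servers(arrival_rate, service_rate, max_wait_prob):
    """Smallest c keeping the waiting probability under a QoS target."""
    c = math.floor(arrival_rate / service_rate) + 1   # smallest stable c
    while erlang_c(arrival_rate, service_rate, c) > max_wait_prob:
        c += 1
    return c

# For M/M/1 the waiting probability reduces to the utilization rho.
p_wait = erlang_c(0.5, 1.0, 1)
c_needed = min_servers(0.5, 1.0, max_wait_prob=0.4)
```

Under a DDoS attack `arrival_rate` inflates, so the same formula tells the administrator how much extra capacity keeps the waiting probability (and hence QoS) within bounds.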
Floor localization is crucial for various applications such as emergency response and rescue, indoor positioning, and recommender systems. Most existing floor localization systems have many drawbacks, like low accuracy, poor scalability, and high computational cost. In this paper, we first frame the problem of floor localization as one of learning node embeddings to predict the floor label of a signal sample. We then introduce FloorLocator, a deep learning-based method for floor localization that integrates efficient spiking neural networks with powerful graph neural networks. This approach offers high accuracy, easy scalability to new buildings, and computational efficiency. Experimental results on several public datasets demonstrate that FloorLocator outperforms state-of-the-art methods. Notably, in building B0, FloorLocator achieved a recognition accuracy of 95.9%, exceeding state-of-the-art methods by at least 10%. In building B1, it reached an accuracy of 82.1%, surpassing the latest methods by at least 4%. These results indicate FloorLocator's superiority for localization in multi-floor building environments.
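To make the task itself concrete, here is the simplest conceivable baseline for the problem the abstract frames: match a received-signal-strength (RSSI) vector against labelled fingerprints and return the nearest one's floor. This is purely a problem-setup sketch with made-up data, nothing like FloorLocator's spiking/graph neural network approach.

```python
def predict_floor(query, fingerprints):
    """Nearest-neighbour floor prediction over labelled RSSI fingerprints.

    fingerprints: list of (rssi_vector, floor_label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, floor = min(fingerprints, key=lambda fp: sq_dist(query, fp[0]))
    return floor

# Hypothetical two-access-point fingerprints for two floors.
fps = [([-40.0, -70.0], 1),   # floor 1: strong AP1, weak AP2
       ([-70.0, -40.0], 2)]   # floor 2: the reverse
floor = predict_floor([-42.0, -68.0], fps)
```

Learned node embeddings, as in the paper, replace the raw RSSI vectors in this comparison with representations that transfer across buildings, which is where the scalability claim comes from.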
This paper presents ScenePalette, a modeling tool that allows users to “draw” 3D scenes interactively by placing objects on a canvas based on their contextual relations. ScenePalette is inspired by an important intuition that was often ignored in previous work: a real-world 3D scene consists of a contextually reasonable organization of objects; for example, people typically place one double bed with several subordinate objects into a bedroom, rather than multiple beds of different shapes. ScenePalette abstracts 3D repositories as multiplex networks and accordingly encodes implicit relations between or among objects. Specifically, basic statistics such as co-occurrence, in combination with advanced relations, are used to capture object relationships of different complexity. Extensive experiments demonstrate that the latent space of ScenePalette has rich contexts that are essential for contextual representation and exploration.
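The "basic statistics such as co-occurrence" layer of such a multiplex network can be sketched directly: count which objects appear together across a scene repository, then rank companions for whatever the user has already placed. The scene data below is invented for illustration, and this covers only the simplest relation layer the abstract mentions.

```python
from collections import Counter
from itertools import combinations

def co_occurrence(scenes):
    """Pairwise co-occurrence counts over a scene repository: one layer
    of a multiplex object-relation network."""
    counts = Counter()
    for objects in scenes:
        for a, b in combinations(sorted(set(objects)), 2):
            counts[(a, b)] += 1
    return counts

def suggest(counts, placed, k=2):
    """Rank unplaced objects by co-occurrence with the placed ones."""
    scores = Counter()
    for (a, b), n in counts.items():
        if a in placed and b not in placed:
            scores[b] += n
        elif b in placed and a not in placed:
            scores[a] += n
    return [obj for obj, _ in scores.most_common(k)]

scenes = [["bed", "nightstand", "lamp"],
          ["bed", "nightstand"],
          ["desk", "lamp"]]
counts = co_occurrence(scenes)
top = suggest(counts, placed={"bed"}, k=1)
```

Placing a bed thus suggests a nightstand before a second bed, which is exactly the "one double bed with subordinate objects" intuition the abstract describes.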
The effectiveness of facial expression recognition (FER) algorithms hinges on the model’s quality and the availability of a substantial amount of labeled expression data. However, labeling large datasets demands significant human, time, and financial resources. Although active learning methods have mitigated the dependency on extensive labeled data, a cold-start problem persists in small to medium-sized expression recognition datasets. This issue arises because the initial labeled data often fail to represent the full spectrum of facial expression variations. This paper introduces an active learning approach that integrates uncertainty estimation, aiming to improve the precision of facial expression recognition regardless of dataset scale. Our method is divided into two primary phases. First, the model undergoes self-supervised pre-training using contrastive learning and uncertainty estimation to bolster its feature extraction capabilities. Second, the model is fine-tuned using the prior knowledge obtained from the pre-training phase to significantly improve recognition accuracy. During the pre-training phase, the model employs contrastive learning to extract fundamental feature representations from the complete unlabeled dataset. These features are then weighted and ranked through a self-attention mechanism. Subsequently, data from the low-weighted set are relabeled to further refine the model’s feature extraction ability. The pre-trained model is then utilized in active learning to select and label information-rich samples more efficiently. Experimental results demonstrate that the proposed method significantly outperforms existing approaches, achieving improvements in recognition accuracy of 5.09% and 3.82% over the best existing active learning methods, Margin and Least Confidence, respectively, and a 1.61% improvement compared to the conventional segmented active learning method.
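The two baseline acquisition functions named in the abstract, Least Confidence and Margin, are both one-liners over the model's predicted class probabilities. The sketch below shows them on hand-written probability vectors; it illustrates only these standard baselines, not the paper's uncertainty-aware pre-training pipeline.

```python
def least_confidence_select(probs, k):
    """Pick the k samples whose top class probability is lowest."""
    ranked = sorted(range(len(probs)), key=lambda i: max(probs[i]))
    return ranked[:k]

def margin_select(probs, k):
    """Pick the k samples with the smallest gap between the two most
    probable classes (small margin = ambiguous prediction)."""
    def margin(p):
        top2 = sorted(p, reverse=True)[:2]
        return top2[0] - top2[1]
    ranked = sorted(range(len(probs)), key=lambda i: margin(probs[i]))
    return ranked[:k]

# Three samples: confident, maximally uncertain, mildly uncertain.
probs = [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]]
lc = least_confidence_select(probs, k=2)
mg = margin_select(probs, k=2)
```

The cold-start problem the abstract targets is visible here: both criteria depend entirely on the model's probabilities, so a model trained on an unrepresentative initial pool produces misleading uncertainty scores, which is what the self-supervised pre-training phase is meant to fix.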
Data race is one of the most important concurrency anomalies in multi-threaded programs. Recently, constraint-based techniques have been leveraged for race detection, and are able to find all the races that can be found by any other sound race detector. However, this constraint-based approach has serious limitations in helping programmers analyze and understand data races. First, it may report a large number of false positives due to the unrecognized dataflow propagation of the races. Second, it recommends a wide range of thread context switches to schedule each reported race (including false ones) whenever the race is exposed during the constraint-solving process. This ad hoc recommendation imposes too many context switches, which complicates data race analysis. To address these two limitations of state-of-the-art constraint-based race detection, this paper proposes DFTracker, an improved constraint-based race detector that recommends each data race with minimal thread context switches. Specifically, we reduce the false positives by analyzing and tracking the dataflow in the program; by this means, DFTracker avoids unnecessary analysis of false race reports. We further propose a novel algorithm to recommend an effective race schedule with minimal thread context switches for each data race. Experimental results on real applications demonstrate that (1) without removing any true data race, DFTracker effectively prunes false positives by 68% in comparison with the state-of-the-art constraint-based race detector; and (2) DFTracker recommends as few as 2.6-8.3 (4.7 on average) thread context switches per data race in real-world applications, which is 81.6% fewer than the state-of-the-art constraint-based race detector. In summary, DFTracker can serve as an effective tool for programmers to understand data races.
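For readers unfamiliar with race detection, the classic (pre-constraint-based) baseline is the Eraser-style lockset algorithm: intersect the locks held at every access to a variable, and flag the variable if that intersection goes empty while multiple threads touch it. The sketch below runs it on a synthetic access trace; it illustrates the baseline idea only, not DFTracker's constraint-based, dataflow-aware analysis.

```python
def lockset_races(trace):
    """Eraser-style lockset check over an access trace.

    trace: list of (thread, variable, locks_held) tuples, where
    locks_held is the set of locks the thread holds at that access.
    A variable is a potential race if its candidate lockset (the
    intersection over all accesses) is empty and it was accessed
    by more than one thread."""
    locksets, threads = {}, {}
    for thread, var, held in trace:
        locksets[var] = held if var not in locksets else locksets[var] & held
        threads.setdefault(var, set()).add(thread)
    return {v for v in locksets
            if not locksets[v] and len(threads[v]) > 1}

trace = [
    ("t1", "x", {"L"}),   # x is consistently guarded by lock L
    ("t2", "x", {"L"}),
    ("t1", "y", {"L"}),   # y is accessed once with L ...
    ("t2", "y", set()),   # ... and once with no lock: candidate race
]
races = lockset_races(trace)
```

Lockset analysis over-approximates (it cannot see happens-before ordering or dataflow), which is precisely the kind of false-positive source that constraint-based detectors, and DFTracker's dataflow tracking on top of them, aim to eliminate.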