We describe in detail dispute resolution problems with cryptographic voting systems that do not produce a paper record of the unencrypted vote. With these in mind, we describe the design and use of Audiotegrity - a cr...
This paper addresses the tradeoff between standard accuracy on clean examples and robustness against adversarial examples in deep neural networks (DNNs). Although adversarial training (AT) improves robustness, it degrades the standard accuracy, thus yielding the tradeoff. To mitigate this tradeoff, we propose a novel AT method called ARREST, which comprises three components: (i) adversarial finetuning (AFT), (ii) representation-guided knowledge distillation (RGKD), and (iii) noisy replay (NR). AFT trains a DNN on adversarial examples by initializing its parameters with a DNN that is standardly pretrained on clean examples. RGKD and NR respectively entail a regularization term and an algorithm to preserve latent representations of clean examples during AFT. RGKD penalizes the distance between the representations of the standardly pretrained and AFT DNNs. NR switches input adversarial examples to nonadversarial ones when the representation changes significantly during AFT. By combining these components, ARREST achieves both high standard accuracy and robustness. Experimental results demonstrate that ARREST mitigates the tradeoff more effectively than previous AT-based methods do.
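The two representation-preserving components of ARREST can be sketched as follows. This is a minimal illustration, not the paper's implementation: the distance metric for RGKD and the threshold rule for NR are assumptions, and `rgkd_penalty`/`nr_switch` are hypothetical names.

```python
import numpy as np

def rgkd_penalty(rep_aft, rep_pre):
    """RGKD-style regularizer: distance between the latent representation
    produced by the adversarially finetuned (AFT) model and that of the
    frozen, standardly pretrained model on the same clean input.
    Squared L2 is an assumed choice of metric."""
    return float(np.sum((np.asarray(rep_aft) - np.asarray(rep_pre)) ** 2))

def nr_switch(rep_aft, rep_pre, threshold):
    """NR-style rule: if the representation has drifted too far during AFT,
    switch the input from an adversarial example to a nonadversarial one.
    The simple threshold test is an assumption for illustration."""
    return rgkd_penalty(rep_aft, rep_pre) > threshold
```

In a training loop, the penalty would be added to the adversarial loss, while the switch decides which example to feed on the next step.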
ISBN:
(Print) 9781713871088
Language is an important tool for humans to share knowledge. We propose a meta-learning method that shares knowledge across supervised learning tasks using feature descriptions written in natural language, which have not been used in existing meta-learning methods. The proposed method improves predictive performance on unseen tasks with a limited number of labeled data by meta-learning from various tasks. With the feature descriptions, we can find relationships across tasks even when their feature spaces are different. The feature descriptions are encoded using a language model pretrained with a large corpus, which enables us to incorporate human knowledge stored in the corpus into meta-learning. In our experiments, we demonstrate that the proposed method achieves better predictive performance than existing meta-learning methods on a wide variety of real-world datasets provided by the statistical offices of the EU and Japan.
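The core idea of aligning features across tasks via their textual descriptions can be sketched as below. As a stand-in for the pretrained language model the abstract describes, this uses a bag-of-words encoder; `embed_description`, `feature_similarity`, and the fixed vocabulary are all hypothetical illustration-only constructs.

```python
import numpy as np

def embed_description(text, vocab):
    """Encode a natural-language feature description as a unit vector.
    A real system would use a pretrained language model here; a
    normalized bag-of-words over a small vocab is an assumed stand-in."""
    vec = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def feature_similarity(desc_a, desc_b, vocab):
    """Cosine similarity between encoded descriptions: high similarity
    suggests two features from different tasks play a related role,
    even when the tasks' feature spaces differ."""
    return float(embed_description(desc_a, vocab) @ embed_description(desc_b, vocab))
```

A meta-learner could use such similarities to transfer knowledge between, say, an EU dataset's "annual income" column and a Japanese dataset's "household income" column.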
We propose a security verification framework for cryptographic protocols using machine learning. In recent years, as cryptographic protocols have become more complex, research on automatic verification techniques has ...
Systems driven by innovation, a pivotal force in human society, present various intriguing statistical regularities, from the Heaps' law to logarithmic scaling or somewhat different patterns for the innovation rates. The urn model with triggering (UMT) has been instrumental in modeling these innovation dynamics. Yet a generalization is needed to capture the richer empirical phenomenology. Here we introduce a time-dependent urn model with triggering (TUMT), a generalization of the UMT that crucially integrates time-dependent parameters for reinforcement and triggering to offer a broader framework for modeling innovation in nonstationary systems. Through analytical computation and numerical simulations, we show that the TUMT reconciles various behaviors observed in a broad spectrum of systems from patenting activity to the analysis of gene mutations. We highlight how the TUMT features a “critical” region where both Heaps' and Zipf's laws coexist, for which we compute the exponents.
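The urn dynamics the abstract describes can be sketched in a few lines. This is a minimal simulation under stated assumptions, not the paper's exact model: `rho(t)` copies of the drawn ball implement reinforcement, and a novel draw triggers `nu(t) + 1` brand-new colors; the function name `simulate_tumt` and these conventions are illustrative.

```python
import random

def simulate_tumt(steps, rho, nu, seed=0):
    """Minimal time-dependent urn model with triggering (TUMT) sketch.
    rho(t), nu(t): callables giving reinforcement and triggering
    parameters at step t (their time dependence is the model's key
    generalization over the UMT). Returns the running number of
    distinct colors drawn, i.e. a Heaps-type discovery curve."""
    rng = random.Random(seed)
    urn = [0]                     # start with a single color
    next_color = 1
    seen = set()
    discoveries = []
    for t in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho(t))               # reinforcement
        if ball not in seen:                      # novelty: trigger
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu(t) + 1))
            next_color += nu(t) + 1
        discoveries.append(len(seen))
    return discoveries
```

With constant `rho` and `nu` this reduces to the original UMT; letting them vary with `t` models nonstationary systems such as patenting activity.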
Inferring the exact topology of the interactions in a large, stochastic dynamical system from time-series data can often be prohibitive computationally and statistically without strong side information. One alternativ...
Real-world image recognition systems often face corrupted input images, which cause distribution shifts and degrade the performance of models. These systems often use a single prediction model in a central server and ...
ISBN:
(Print) 9781510829008
We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the same loss. This holds true even for non-smooth, non-convex losses and in any RKHS. The first term is a (kernel) mean operator - the focal quantity of this work - which we characterize as the sufficient statistic for the labels. The result tightens known generalization bounds and sheds new light on their interpretation. Factorization has a direct application to weakly supervised learning. In particular, we demonstrate that algorithms like SGD and proximal methods can be adapted with minimal effort to handle weak supervision, once the mean operator has been estimated. We apply this idea to learning with asymmetric noisy labels, connecting and extending prior work. Furthermore, we show that most losses enjoy a data-dependent (via the mean operator) form of noise robustness, in contrast with known negative results.
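The factorization can be verified numerically for the logistic loss with a linear kernel: since log(1+e^(-z)) - log(1+e^z) = -z, the empirical risk splits into a label-free term plus a linear term that touches the labels only through the mean operator. The function names below are illustrative; the identity itself is exact.

```python
import numpy as np

def logistic(z):
    # log(1 + exp(-z)), the logistic loss on the margin z
    return np.log1p(np.exp(-z))

def mean_operator(X, y):
    """Empirical (linear-kernel) mean operator (1/m) * sum_i y_i x_i:
    the sufficient statistic for the labels."""
    return (y[:, None] * X).mean(axis=0)

def risk_direct(w, X, y):
    # Standard empirical risk: mean logistic loss on margins y_i * <w, x_i>
    return float(logistic(y * (X @ w)).mean())

def risk_factored(w, X, y):
    # Label-free term: symmetrized loss, independent of y ...
    z = X @ w
    label_free = float((0.5 * (logistic(z) + logistic(-z))).mean())
    # ... plus a linear term that depends on y only via the mean operator
    return label_free - 0.5 * float(w @ mean_operator(X, y))
```

Once `mean_operator(X, y)` is estimated (e.g. from weak supervision), the risk - and hence SGD - no longer needs individual labels.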
Raft is known as a replicated state machine protocol that tolerates crash faults, and its use enables various distributed systems, including distributed transaction processing systems. The original Raft protocol reaches intra-cluster consensus by having the leader generate a log entry for each incoming client request and replicate it to its followers. When requests arrive frequently, the number of replications increases, increasing the number of communications and possibly degrading performance. To solve this problem, we propose a bulk-transfer protocol that replicates multiple requests at once. We implemented a Raft system to verify the effect of the proposed method. We measured the performance of the implemented system when multiple requests were transferred in bulk and observed that throughput increases with the amount of data replicated at a time. The highest throughput, 214 MB/sec, was achieved when replicating 12.5 MB at a time. This is 12,066 times higher than the performance when replicating one request (18 B) at a time. Moreover, the size of the log is reduced to 5/9 of that of the conventional method.
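The batching step at the heart of such a bulk-transfer scheme can be sketched as follows. This is an assumed size-capped grouping policy for illustration, not the paper's protocol; `bulk_batches` and the byte cap are hypothetical.

```python
def bulk_batches(requests, max_bytes):
    """Group incoming client requests into bulk replication units of at
    most max_bytes each, so the leader performs one replication round
    per batch instead of one per request. A single request larger than
    max_bytes still forms its own batch."""
    batches, cur, size = [], [], 0
    for req in requests:
        if cur and size + len(req) > max_bytes:
            batches.append(cur)      # flush: next request would overflow
            cur, size = [], 0
        cur.append(req)
        size += len(req)
    if cur:
        batches.append(cur)
    return batches
```

With 18-byte requests (the size cited in the abstract) and a 100-byte cap, twelve requests collapse into three replication rounds instead of twelve, which is the mechanism behind the reported throughput gain.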