ISBN: (Print) 9781538674628
A computing task can be distributed in an edge network and offloaded to multiple edge devices, called workers, to expedite the processing. The computing speeds of the workers, however, are usually unknown or time-varying. To identify the fast workers, a Bayesian approach based on Thompson sampling is used. The estimation of the computing speeds of the workers is formulated as a multi-armed bandit problem. While existing schemes allocate the same amount of computation work to each selected worker, this paper exploits the heterogeneous computing speeds of the workers and formulates the task allocation problem with the objective of minimizing the overall computing delay. A lower bound for the delay is obtained and is proved to be minimized by a greedy algorithm. Simulation results show that our scheme outperforms other benchmarks.
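The Thompson sampling loop for identifying fast workers can be sketched as follows. The Gaussian speed prior, the known observation-noise variance, and all parameter values here are illustrative assumptions for a minimal demonstration, not the paper's exact model:

```python
import random

class Worker:
    """Posterior belief over one worker's unknown computing speed.
    Assumes a conjugate Normal prior/likelihood with known noise
    variance -- a simplifying choice for this sketch."""
    def __init__(self, prior_mean=1.0, prior_var=1.0, noise_var=0.25):
        self.mean, self.var, self.noise_var = prior_mean, prior_var, noise_var

    def sample(self):
        # Thompson sampling: draw a plausible speed from the posterior.
        return random.gauss(self.mean, self.var ** 0.5)

    def update(self, observed_speed):
        # Conjugate Gaussian update after observing one task's speed.
        precision = 1 / self.var + 1 / self.noise_var
        self.mean = (self.mean / self.var +
                     observed_speed / self.noise_var) / precision
        self.var = 1 / precision

def select_workers(workers, k):
    """Offload to the k workers whose sampled speeds are highest."""
    draws = [(w.sample(), i) for i, w in enumerate(workers)]
    return [i for _, i in sorted(draws, reverse=True)[:k]]
```

Repeating sample-select-update over rounds concentrates the posteriors, so consistently fast workers are chosen more often while slow ones are still occasionally explored.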
Deepfake threatens the authenticity of information in artificial intelligence Internet of Things (IoT) systems. Recently, several deepfake detection methods have been proposed in academia and industry for securing the authenticity of visual information in the face of artificial intelligence advances. Frame-level detection methods, a widely employed security measure against deepfakes, have a small model size and offer real-time responsiveness, despite basing their classification decision only on the information contained within the frame they are evaluating. We propose a new lightweight frame-level detection technique based on Bayesian inference weighting (BIW) to improve the robustness of existing frame-level detection models. Our proposed BIW technique employs the Naive Bayesian algorithm to estimate the reliability of any candidate model's detection results. Comprehensive experiments were conducted on data sets attacked by four designed video-interference approaches, as well as on an edge computing platform, showing that BIW enhances the robustness of all the baselines and improves their detection accuracy with real-time response.
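The core Naive Bayesian idea of combining independent per-frame detections into one video-level decision can be sketched as a log-odds accumulation. This is a minimal illustration of the weighting principle, not the paper's BIW technique itself:

```python
import math

def naive_bayes_fuse(frame_probs, prior_fake=0.5):
    """Fuse per-frame 'fake' probabilities into one video-level
    posterior, treating frames as conditionally independent
    evidence (the Naive Bayes assumption)."""
    eps = 1e-9  # clamp probabilities to avoid log(0)
    log_odds = math.log(prior_fake / (1 - prior_fake))
    for p in frame_probs:
        p = min(max(p, eps), 1 - eps)
        log_odds += math.log(p / (1 - p))  # each frame's likelihood ratio
    return 1 / (1 + math.exp(-log_odds))
```

Because evidence accumulates additively in log-odds, a few confident frames can outweigh many uncertain ones, which is what makes fusion more robust than any single-frame decision.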
Surrogate models are statistical or conceptual approximations for more complex simulation models. In this context, it is crucial to propagate the uncertainty induced by limited simulation budget and surrogate approximation error to predictions, inference, and subsequent decision-relevant quantities. However, quantifying and then propagating the uncertainty of surrogates is usually limited to special analytic cases or is otherwise computationally very expensive. In this paper, we propose a framework enabling a scalable, Bayesian approach to surrogate modeling with thorough uncertainty quantification, propagation, and validation. Specifically, we present three methods for Bayesian inference with surrogate models given measurement data. This is a task where the propagation of surrogate uncertainty is especially relevant, because failing to account for it may lead to biased and/or overconfident estimates of the parameters of interest. We showcase our approach in three detailed case studies for linear and nonlinear real-world modeling scenarios. Uncertainty propagation in surrogate models enables more reliable and safe approximation of expensive simulators and will therefore be useful in various fields of application.
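The general propagation step can be illustrated with Monte Carlo sampling: instead of a point prediction, each surrogate query returns draws from a predictive distribution, and those draws are pushed through the downstream quantity. The stand-in surrogate below (a Gaussian whose spread grows away from a nominal training region) is entirely hypothetical:

```python
import random
import statistics

def surrogate_posterior_draws(x, n_draws=1000):
    """Hypothetical surrogate: return draws from its predictive
    distribution at input x rather than a single point estimate."""
    mean = x ** 2                    # stand-in for the expensive simulator
    sd = 0.05 + 0.1 * abs(x - 1.0)   # more uncertainty far from 'data'
    return [random.gauss(mean, sd) for _ in range(n_draws)]

def propagate(xs):
    """Propagate surrogate uncertainty to a downstream quantity
    (here: the mean response over inputs xs) by combining one
    surrogate draw per input instead of point predictions."""
    draws = list(zip(*(surrogate_posterior_draws(x) for x in xs)))
    totals = [sum(d) / len(d) for d in draws]
    return statistics.mean(totals), statistics.stdev(totals)
```

The returned standard deviation is the surrogate-induced uncertainty on the downstream quantity, which a point-prediction pipeline would silently report as zero.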
In this paper we consider a high-order spatial generalized autoregressive conditional heteroskedasticity (GARCH) model to account for the volatility clustering patterns observed over space. The model consists of a log-volatility equation that includes the high-order spatial lags of the log-volatility term and the squared outcome variable. We use a transformation approach to turn the model into a mixture of normals model, and then introduce a Bayesian Markov chain Monte Carlo (MCMC) estimation approach coupled with a data-augmentation technique. Our simulation results show that the Bayesian estimator has good finite sample properties. We apply a first-order version of the spatial GARCH model to US house price returns at the metropolitan statistical area level over the period 2006Q1-2013Q4 and show that there is significant variation in the log-volatility estimates over space in each period.
Hydrometric data poverty compounds the challenge of accounting for uncertainties in non-stationary stage-discharge relationships. This paper builds on three methods to explore the integration of a dynamic approach to rating curve assessment and a physically based Bayesian framework for quantifying discharge amid geomorphologically induced rating shifts in a sparsely gauged alluvial river. The Modified GesDyn-FlowAM-BaRatin method entails sequentially segmenting gaugings according to residual indicators of riverbed instability and channel conveyance variability, leveraging cross-sectional surveys to augment calibration data, and eliciting hydraulic priors for probabilistic rating curve estimation. This method is applied to a Philippine watershed, where quarrying near the gauging station has ostensibly caused morphodynamic adjustments. Time-variable credible intervals for discharge are computed. The optimal estimates from maximum a posteriori rating curves (RMSE = 2.96 m³/s) outperform the hydrographer's benchmark (RMSE = 5.00 m³/s), whose systematic errors from the gauged flows arise from lapses in shift detection.
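The kind of rating curve being estimated is the standard power-law stage-discharge relation Q = a(h - b)^c. The toy grid-search MAP fit below, with Gaussian errors on log Q and flat priors over the supplied grids, is only a crude stand-in for the full Bayesian (MCMC) estimation used in BaRatin-style frameworks; all parameter values are invented:

```python
import math

def rating_curve(h, a, b, c):
    """Power-law stage-discharge relation Q = a*(h - b)^c."""
    return a * max(h - b, 0.0) ** c

def map_fit(gaugings, a_grid, b_grid, c_grid, sigma=0.1):
    """Crude MAP estimate by grid search over (a, b, c),
    maximizing a Gaussian log-likelihood on log-discharge."""
    best, best_lp = None, -math.inf
    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                lp = 0.0
                for h, q in gaugings:
                    pred = rating_curve(h, a, b, c)
                    if pred <= 0:
                        lp = -math.inf
                        break
                    lp -= (math.log(q) - math.log(pred)) ** 2 / (2 * sigma ** 2)
                if lp > best_lp:
                    best, best_lp = (a, b, c), lp
    return best
```

A rating shift, in these terms, is a change in (a, b, c) over time, which is why the method segments gaugings before refitting.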
This study uses the Progressive First Failure Type-II Censoring Scheme (PFFT2CS) to estimate the Process Capability Index (PCI), Spmk, for the Nakagami Distribution (ND). Using maximum likelihood, maximum product spacing, and Bayesian estimation techniques, Spmk is calculated on the basis of PFFT2CS. Using a noninformative prior, the Bayes estimator of Spmk is produced for the linear exponential (LINEX) loss function, the squared error loss function, and the generalised entropy loss function. Additionally, the approximate confidence intervals (CIs) for the index Spmk that were derived using traditional approaches are compared with the highest posterior density (HPD) credible intervals. The performance of the classical and Bayes estimates of Spmk with respect to their mean squared errors is evaluated in a simulation exercise, and the average width and coverage probabilities of the CIs and HPD intervals are compared. Two actual data sets are reanalyzed in order to illustrate the efficacy of the suggested index and estimation approaches.
Network data arises naturally in a wide variety of applications in different fields. In this article we discuss in detail the statistical modeling of financial networks. The structure of such networks has not been...
Efficient channel estimation is challenging in full-dimensional multiple-input multiple-output communication systems, particularly in those with hybrid digital-analog architectures. Under a compressive sensing framework, this letter first designs a uniform dictionary based on a spherical Fibonacci grid to represent channels in a sparse domain, yielding smaller angular errors in three-dimensional beamspace than traditional dictionaries and consistent estimation in different directions. Then, a Bayesian inference-aided greedy pursuit algorithm is developed to estimate channels in the frequency domain. Finally, simulation results demonstrate that both the designed dictionary and the proposed Bayesian inference-aided channel estimation outperform the benchmark schemes and attain a lower normalized mean squared error of channel estimation.
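The spherical Fibonacci grid underlying the dictionary design places near-uniform points on the sphere via the golden angle. A minimal sketch of the point construction (the dictionary itself would map each direction to an array steering vector, which is omitted here):

```python
import math

def fibonacci_sphere(n):
    """Generate n near-uniform direction points on the unit sphere
    using the spherical Fibonacci (golden-angle) lattice."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n           # uniform spacing in z
        r = math.sqrt(max(0.0, 1 - z * z))  # radius of the z-slice
        theta = golden_angle * i            # rotate slice by golden angle
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points
```

Unlike a uniform grid in azimuth and elevation, this lattice avoids clustering near the poles, which is what yields the consistent angular resolution across directions mentioned in the abstract.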
Probabilistic/stochastic computations form the backbone of autonomous systems and classifiers. Recently, biomedical applications of probabilistic computing such as Bayesian networks for disease diagnosis, DNA sequencing, etc. have attracted significant attention owing to their high energy-efficiency. Bayesian inference is widely used for decision making based on independent (often conflicting) sources of information/evidence. A cascaded chain or tree structure of asynchronous circuit elements known as Muller C-elements can effectively implement Bayesian inference. Such circuits utilize stochastic bit streams to encode input probabilities, which enhances their robustness and fault-tolerance. However, CMOS implementations of the Muller C-element are bulky and energy-hungry, which restricts their widespread application in resource-constrained IoT and mobile devices such as UAVs, robots, space rovers, etc. In this work, for the first time, we propose a compact and energy-efficient implementation of the Muller C-element utilizing a single Ferroelectric FET and use it for a cancer diagnosis task by performing Bayesian inference with high accuracy on the Wisconsin data set. The proposed implementation exploits the unique drain-erase, program inhibit and drain-erase inhibit characteristics of FeFETs to yield the output as the polarization-state of the ferroelectric layer. Our extensive investigation utilizing an in-house developed experimentally calibrated compact model of FeFET reveals that the proposed C-element consumes a (worst-case) energy of 4.1 fJ and an area of 0.07 μm² and outperforms the prior implementations in terms of energy-efficiency and footprint while exhibiting a comparable delay. We also propose a novel read circuitry for realising a Bayesian inference engine by cascading a network of proposed FeFET-based C-elements for practical applications. Furthermore, for the first time, we analyze the impact of cross-correlation between the stochastic input bit streams on the ac...
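The Bayesian fusion performed by a C-element on stochastic bit streams can be demonstrated behaviorally: the output copies the inputs when they agree and holds its previous value when they disagree, so the output stream's '1'-density converges to p1*p2 / (p1*p2 + (1-p1)*(1-p2)). This is a simulation sketch of that logic-level behavior, not of the FeFET device itself:

```python
import random

def c_element_stream(p1, p2, n=100_000, seed=1):
    """Simulate a Muller C-element driven by two stochastic bit
    streams whose '1'-densities encode probabilities p1 and p2.
    Returns the '1'-density of the output stream."""
    rng = random.Random(seed)
    out, ones = 0, 0
    for _ in range(n):
        a = rng.random() < p1
        b = rng.random() < p2
        if a == b:          # inputs agree: output follows them
            out = int(a)
        # inputs disagree: output holds its previous value
        ones += out
    return ones / n
```

Because agreement between independent streams is what flips the output, conflicting evidence is naturally discounted, which is exactly the Bayes-combination behavior the circuit exploits.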
Building on the algorithmic equivalence between finite population replicator dynamics and particle-filtering-based approximation of Bayesian inference, we design a computational model to demonstrate the emergence of Darwinian evolution over representational units when collectives of units are selected to infer statistics of high-dimensional combinatorial environments. The non-Darwinian starting point is two units undergoing a few cycles of noisy, selection-dependent information transmission, corresponding to a serial (one comparison per cycle), non-cumulative process without heredity. Selection for accurate Bayesian inference at the collective level induces an adaptive path to the emergence of Darwinian evolution within the collectives, capable of maintaining and iteratively improving upon complex combinatorial information. When collectives are themselves Darwinian, this mechanism amounts to a top-down (filial) transition in individuality. We suggest that such a selection mechanism can explain the hypothesized emergence of fast timescale Darwinian dynamics over a population of neural representations within animal and human brains, endowing them with combinatorial planning capabilities. Further possible physical implementations include prebiotic collectives of non-replicating molecules and reinforcement learning agents with parallel policy search.
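The algorithmic equivalence at the heart of this work can be made concrete in one step of a particle filter: resampling proportional to likelihood plays the role of selection on fitness, and noisy transmission plays the role of mutation. This is a minimal illustration of the analogy, not the paper's full collective-inference model:

```python
import random

def particle_filter_step(particles, likelihood, mutate, rng):
    """One replicator / particle-filter cycle:
    selection = resampling proportional to fitness (likelihood),
    noisy transmission = mutation of the survivors."""
    weights = [likelihood(p) for p in particles]
    chosen = rng.choices(particles, weights=weights, k=len(particles))
    return [mutate(p, rng) for p in chosen]
```

Iterating this step concentrates the population around high-likelihood states, exactly as replicator dynamics concentrates a population around high-fitness types.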