The distributed data infrastructure in Internet of Things (IoT) ecosystems requires efficient data-series compression methods, as well as the capability to meet different accuracy demands. However, the compression per...
Machine learning with optical neural networks features unique advantages for information processing, including high speed, ultrawide bandwidths, and low energy consumption, because the optical dimensions (time, space, wavelength, and polarization) can be utilized to increase the degrees of freedom. However, due to the lack of a capability to extract information features in the orbital angular momentum (OAM) domain, the theoretically unlimited OAM states have never been exploited to represent the signals of the input/output nodes in a neural network. Here, we demonstrate OAM-mediated machine learning with an all-optical convolutional neural network (CNN) based on Laguerre-Gaussian (LG) beam modes with diverse diffraction orders. The proposed CNN architecture is composed of a trainable OAM mode-dispersion impulse as a convolutional kernel for feature extraction, and deep-learning diffractive layers as a classifier. The resultant OAM mode-dispersion selectivity can be applied to information mode-feature encoding, leading to an accuracy as high as 97.2% on the MNIST database through detecting the energy weighting coefficients of the encoded OAM modes, as well as resistance to eavesdropping in point-to-point free-space transmission. Furthermore, by extending the target encoded modes into multiplexed OAM states, we realize all-optical dimension reduction for anomaly detection with an accuracy of 85%. Our work provides deep insight into the mechanism of machine learning on a spatial-mode basis, which can be further utilized to improve the performance of various machine-vision tasks by constructing unsupervised learning-based auto-encoders.
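For background on why the OAM basis above is "theoretically unlimited": a Laguerre-Gaussian mode with azimuthal index ℓ carries a helical phase factor e^{iℓφ} and a well-defined orbital angular momentum of ℓℏ per photon, with ℓ ranging over all integers (standard textbook form; the paper's exact mode normalization may differ):

```latex
% Laguerre-Gaussian transverse profile (radial index p, azimuthal index \ell)
\[
  \mathrm{LG}_p^{\ell}(r,\phi) \;\propto\;
  \left(\frac{\sqrt{2}\,r}{w}\right)^{|\ell|}
  L_p^{|\ell|}\!\left(\frac{2r^2}{w^2}\right)
  e^{-r^2/w^2}\, e^{i\ell\phi},
  \qquad
  \langle L_z \rangle = \ell\hbar \ \text{per photon},\quad \ell \in \mathbb{Z}.
\]
```

The integer-valued, mutually orthogonal index ℓ is what lets each encoded OAM state serve as a distinct input/output node.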
The authors propose a distributed field mapping algorithm that drives a team of robots to explore and learn an unknown scalar field using a Gaussian Process (GP). The authors' strategy arises by balancing exploration objectives between areas of high error and high variance. Since computing high-error regions is impossible because the scalar field is unknown, a bio-inspired approach known as Speeding-Up and Slowing-Down is leveraged to track the gradient of the GP error. This approach achieves global field-learning convergence and is shown to be resistant to poor hyperparameter tuning of the GP. The approach is validated in simulations and experiments using 2D wheeled robots and 2D flying miniature autonomous blimps.
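A minimal NumPy sketch (not the authors' code) of the GP machinery such a strategy relies on: fit a GP to a few field samples, compute the posterior variance over candidate locations, and take the highest-variance point as the next exploration target. The RBF length scale and noise level here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential kernel between two point sets (n,d) and (m,d)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, ls=1.0, noise=1e-4):
    # standard GP regression posterior mean and variance at test points Xs
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs, ls)) - (v ** 2).sum(0)
    return mu, var

# toy 1D scalar field sampled at three locations
X = np.array([[0.0], [2.0], [4.0]])
y = np.sin(X).ravel()
Xs = np.linspace(0, 2 * np.pi, 50)[:, None]
mu, var = gp_posterior(X, y, Xs)
target = Xs[np.argmax(var)]  # next exploration target: highest posterior variance
```

In the multi-robot setting, each robot would weigh such variance (uncertainty) targets against high-error regions, which the paper approximates via the Speeding-Up and Slowing-Down gradient-tracking behavior.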
In the case of standalone houses, ensuring a continuous and regulated power supply from renewable sources is crucial. To address their unpredictable nature, an environmentally conscious hybrid renewable energy system ...
Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data. Both Ambient Diffusion and alternative SURE-based approaches for learning diffusion models from corrupted data resort to approximations which deteriorate performance. We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data, solving an open problem in Ambient diffusion. Our key technical contribution is a method that uses a double application of Tweedie's formula and a consistency loss function that allows us to extend sampling at noise levels below the observed data noise. We also provide further evidence that diffusion models memorize from their training sets by identifying extremely corrupted images that are almost perfectly reconstructed, raising copyright and privacy concerns. Our method for training using corrupted samples can be used to mitigate this problem. We demonstrate this by fine-tuning Stable Diffusion XL to generate samples from a distribution using only noisy samples. Our framework reduces the amount of memorization of the fine-tuning dataset, while maintaining competitive performance. Copyright 2024 by the author(s)
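For context, a single application of Tweedie's formula, the standard statement that the paper's double application and consistency loss build on, reads:

```latex
% Tweedie's formula: posterior mean of the clean signal x given a noisy
% observation y = x + \sigma\varepsilon, \varepsilon \sim \mathcal{N}(0, I)
\[
  \mathbb{E}[x \mid y] \;=\; y \;+\; \sigma^{2}\,\nabla_{y}\log p_{\sigma}(y),
\]
```

where p_σ is the density of the noisy observations. A learned score network estimating ∇_y log p_σ(y) thus gives direct access to the posterior mean of the clean data.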
3D point cloud object tracking (3D PCOT) plays a vital role in applications such as autonomous driving and robotics. Adversarial attacks offer a promising approach to enhance the robustness and security of tracking mo...
A common issue in learning decision-making policies in data-rich settings is spurious correlations in the offline dataset, which can be caused by hidden confounders. Instrumental variable (IV) regression, which utilises a key unconfounded variable known as the instrument, is a standard technique for learning causal relationships between confounded action, outcome, and context variables. Most recent IV regression algorithms use a two-stage approach, where a deep neural network (DNN) estimator learnt in the first stage is directly plugged into the second stage, in which another DNN is used to estimate the causal effect. Naively plugging in the estimator can cause heavy bias in the second stage, especially when regularisation bias is present in the first-stage estimator. We propose DML-IV, a non-linear IV regression method that reduces the bias in two-stage IV regressions and effectively learns high-performing policies. We derive a novel learning objective to reduce bias and design the DML-IV algorithm following the double/debiased machine learning (DML) framework. The learnt DML-IV estimator has a strong convergence rate and O(N−1/2) suboptimality guarantees that match those when the dataset is unconfounded. DML-IV outperforms state-of-the-art IV regression methods on IV regression benchmarks and learns high-performing policies in the presence of instruments. Copyright 2024 by the author(s)
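To illustrate the two-stage structure the paper improves on, here is a minimal linear 2SLS example in NumPy (DML-IV itself uses DNN estimators and a DML-style debiased objective; everything below, including the coefficients, is a simplified illustration): stage 1 regresses the confounded action on the instrument, stage 2 regresses the outcome on the stage-1 prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)            # instrument: shifts the action, not the outcome directly
u = rng.normal(size=n)            # hidden confounder
a = 0.8 * z + u + 0.1 * rng.normal(size=n)        # confounded action
y = 2.0 * a + 3.0 * u + 0.1 * rng.normal(size=n)  # true causal effect of a on y is 2.0

# naive OLS slope is biased upward because u drives both a and y
ols = (a @ y) / (a @ a)

# stage 1: predict a from z; stage 2: regress y on the stage-1 prediction
a_hat = z * ((z @ a) / (z @ z))
tsls = (a_hat @ y) / (a_hat @ a_hat)
```

The naive OLS slope lands near 3.8 here, while the two-stage estimate recovers the true effect of 2.0. Replacing these closed-form regressions with DNNs is what introduces the plug-in/regularisation bias that DML-IV's debiased objective targets.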
Learners with a limited budget can use supervised data subset selection and active learning techniques to select a smaller training set and reduce the cost of acquiring data and training machine learning (ML) models. However, the resulting high model performance, measured by a data utility function, may not be preserved when some data owners, enabled by the GDPR's right to erasure, request their data to be deleted from the ML model. This raises an important question for learners who are temporarily unable or unwilling to acquire data again: During the initial data acquisition of a training set of size k, can we proactively maximize the data utility after future unknown deletions? We propose that the learner anticipates/estimates the probability that (i) each data owner in the feasible set will independently delete its data or (ii) a number of deletions occur out of k, and justify our proposal with concrete real-world use cases. Then, instead of directly maximizing the data utility function, the learner can maximize the expected or risk-averse post-deletion utility based on the anticipated probabilities. We further propose how to construct these deletion-anticipative data selection (DADS) maximization objectives to preserve monotone submodularity and near-optimality of greedy solutions, how to optimize the objectives and empirically evaluate DADS' performance on real-world datasets. Copyright 2024 by the author(s)
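A toy sketch of the deletion-anticipative idea (illustrative only; DADS' actual objectives, utility functions, and guarantees are in the paper): with independent deletion probabilities p_i and a weighted-coverage utility, the expected post-deletion utility has a closed form and remains monotone submodular, so the usual greedy selection applies.

```python
def expected_utility(S, cover, w, p):
    # E[post-deletion coverage]: element e stays covered unless every
    # selected item covering it is deleted (independent deletions)
    val = 0.0
    for e, we in enumerate(w):
        pr_all_deleted = 1.0
        covered = False
        for i in S:
            if e in cover[i]:
                covered = True
                pr_all_deleted *= p[i]
        if covered:
            val += we * (1.0 - pr_all_deleted)
    return val

def greedy(k, cover, w, p):
    # standard greedy for a monotone submodular objective
    S = []
    for _ in range(k):
        best = max((i for i in range(len(cover)) if i not in S),
                   key=lambda i: expected_utility(S + [i], cover, w, p))
        S.append(best)
    return S

# hypothetical example: item 0 covers the most elements but its owner
# is very likely (p=0.9) to request deletion, so greedy avoids it
cover = [{0, 1}, {1, 2}, {3}]
w = [1.0, 1.0, 1.0, 1.0]
p = [0.9, 0.1, 0.1]
S = greedy(2, cover, w, p)
```

Maximizing the raw utility would pick item 0 first; anticipating deletions flips the choice, which is exactly the behavior the DADS objectives formalize.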
This article presents a mathematical model addressing a scenario involving hybrid nanofluid flow between two infinite parallel plates. One plate remains stationary, while the other moves downward at a squeezing velocity. The space between these plates contains a Darcy-Forchheimer porous medium. A mixture of a water-based fluid with gold (Au) and silicon dioxide (SiO2) nanoparticles is considered. In contrast to the conventional Fourier heat flux equation, this study employs the Cattaneo-Christov heat flux equation. A uniform magnetic field is applied perpendicular to the flow direction, invoking magnetohydrodynamic (MHD) effects. Furthermore, the model accounts for Joule heating, which is the heat generated when an electric current passes through the fluid. The problem is solved via NDSolve in Mathematica. Graphical and statistical analyses are conducted to provide insights into the behavior of the nanomaterials between the parallel plates with respect to the flow, energy transport, and skin friction. The findings of this study have potential applications in enhancing cooling systems and optimizing thermal management strategies. It is observed that the squeezing motion generates additional pressure gradients within the fluid, which enhances the flow rate but reduces the frictional effects. Consequently, the fluid is pushed more vigorously between the plates, increasing the flow velocity. As the fluid experiences higher flow rates due to the increased squeezing effect, it spends less time in the region between the plates. The thermal relaxation, however, abruptly changes the temperature, leading to a decrease in the temperature fluctuations.
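For reference, the Cattaneo-Christov model mentioned above replaces Fourier's law q = −k∇T with a relaxation form (written here in its standard frame-invariant statement; the paper's exact formulation may differ):

```latex
% Cattaneo-Christov heat flux with thermal relaxation time \lambda
\[
  \mathbf{q}
  + \lambda\!\left(
      \frac{\partial \mathbf{q}}{\partial t}
      + \mathbf{V}\cdot\nabla\mathbf{q}
      - \mathbf{q}\cdot\nabla\mathbf{V}
      + (\nabla\cdot\mathbf{V})\,\mathbf{q}
    \right)
  = -\,k\,\nabla T,
\]
```

which reduces to Fourier's law as λ → 0; the finite relaxation time is what produces the abrupt temperature adjustment noted in the final sentence.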
ISBN: (Print) 9798331541279
Noisy qubit devices limit the fidelity of programs executed on near-term or Noisy Intermediate Scale Quantum (NISQ) systems. The fidelity of NISQ applications can be improved by using various optimizations during program compilation (or transpilation). These optimizations, or passes, are designed to minimize circuit depth (or program duration), steer more computations onto devices with the lowest error rates, and reduce the communication overheads involved in performing two-qubit operations between non-adjacent qubits. Additionally, standalone optimizations have been proposed to reduce the impact of crosstalk, measurement, idling, and correlated errors. However, our experiments using real IBM quantum hardware show that using all optimizations simultaneously often leads to sub-optimal performance, and the highest improvement in application fidelity is obtained when only a subset of passes is used. Unfortunately, identifying the optimal pass combination is non-trivial as it depends on application- and device-specific properties. In this paper, we propose COMPASS, an automated software framework for optimal Compiler Pass Selection for quantum programs. COMPASS uses dummy circuits that resemble a given program but are composed of only Clifford gates and thus can be efficiently simulated classically to obtain their correct output. The optimal pass set for the dummy circuit is identified by evaluating the efficacy of different pass combinations, and this set is then used to compile the given program. Our experiments using real IBMQ machines show that COMPASS improves the application fidelity by 4.3x on average and by up to 248.8x compared to the baseline. However, the complexity of this search scales exponentially in the number of compiler steps. To overcome this drawback, we propose Efficient COMPASS (E-COMPASS), which leverages a divide-and-conquer approach to split the passes into sub-groups and exhaustively search within each sub-group. Our evaluations show that E-COMPASS impro...
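The pass-selection search in COMPASS can be caricatured as follows (a generic sketch, not the authors' implementation: the pass names are hypothetical, and the scoring function stands in for compiling the Clifford dummy circuit with a pass subset, running it, and comparing against its classically simulated correct output). It shows why exhaustive search is exponential in the number of passes and why an all-passes-on default can lose to a subset.

```python
from itertools import combinations

PASSES = ["depth_reduction", "noise_aware_mapping", "crosstalk_mitigation",
          "measurement_mitigation", "dynamical_decoupling"]

def fidelity_on_dummy(pass_set):
    # placeholder score: each pass helps individually, but one (assumed)
    # pair of passes conflicts, so enabling everything is sub-optimal --
    # in COMPASS this value comes from hardware runs of the dummy circuit
    interactions = {("depth_reduction", "dynamical_decoupling"): -0.15}
    score = 0.5 + 0.08 * len(pass_set)
    for pair, penalty in interactions.items():
        if set(pair) <= set(pass_set):
            score += penalty
    return score

def best_pass_set(passes):
    # exhaustive search over all 2^n subsets (E-COMPASS prunes this by
    # searching divide-and-conquer sub-groups instead)
    candidates = (frozenset(c) for r in range(len(passes) + 1)
                  for c in combinations(passes, r))
    return max(candidates, key=fidelity_on_dummy)

best = best_pass_set(PASSES)
```

Under these assumed interactions, the winning subset drops one of the two conflicting passes rather than enabling all five, mirroring the paper's observation that the full pass set is often sub-optimal.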