Point cloud processing methods exploit local point features and global context through aggregation, which does not explicitly model the internal correlations between local and global features. To address this problem, w...
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems. However, applying them to autonomous 3D reconstruction, ...
This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samplings. We consider that a tensor is a superposition of multiple low-rank components. In particular,...
Ambient backscatter communication (AmBC) has emerged as a promising paradigm for enabling sustainable low-power operation of Internet of Things (IoT) devices. This is due to its ability to enable sensing and communica...
Polypharmacy side effect prediction is a vital task in healthcare machine learning, aiming to predict the occurrence of multiple side effects when patients take drugs together. Existing research mainly follows the deep encoder-decoder paradigm and suffers from two limitations: 1) the encoder simply combines inadequate information about drugs; 2) the decoder fails to capture dependencies among different side effects, thus hindering accurate prediction. To overcome these limitations, we propose a boundary-guided polypharmacy side effect prediction method (BACON). Our framework constructs two complementary views based on a drug's chemical substructure and biochemical features, and enhances drug representations via contrastive learning. The decoder incorporates a boundary-guided strategy to capture drug interaction dependencies for optimizing polypharmacy side effect prediction. Experimental results demonstrate BACON's superiority over SOTA models in accurately predicting drug side effect events.
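The two-view contrastive idea in the abstract above can be sketched with an InfoNCE-style loss that pulls the two embeddings of the same drug together while pushing apart embeddings of different drugs. This is a minimal illustrative sketch, not BACON's actual architecture: the function name `info_nce_loss`, the embedding shapes, and the synthetic "views" are all assumptions for demonstration.

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.5):
    """InfoNCE contrastive loss between two views of the same drugs.

    view_a, view_b: (n_drugs, dim) L2-normalised embedding matrices,
    where row i of both matrices describes the same drug.
    """
    # Cosine similarities between every cross-view pair of drugs.
    sim = view_a @ view_b.T / temperature            # (n, n)
    # Row-wise log-softmax; the positive for row i is the diagonal entry.
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Synthetic stand-ins for the "substructure" and "biochemical" views.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
noisy = emb + 0.05 * rng.normal(size=emb.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)

aligned = info_nce_loss(emb, noisy)         # matching pairs: low loss
shuffled = info_nce_loss(emb, noisy[::-1])  # mismatched pairs: higher loss
```

Minimising such a loss encourages the two views of each drug to agree, which is the sense in which contrastive learning "enhances drug representations" here.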
The thresholding bandit (TB) problem is a popular sequential decision-making problem, which aims at identifying the systems whose means are greater than a threshold. The TB problem is a variant of the multi-armed band...
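A toy simulation makes the TB objective concrete: after sampling each arm, report the arms whose empirical means exceed the threshold. Uniform allocation is only an illustrative baseline here, not the algorithm studied in the paper; the arm means and pull budget are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
true_means = np.array([0.1, 0.4, 0.6, 0.9])  # hypothetical Bernoulli arms
threshold = 0.5
pulls_per_arm = 2000

# Bernoulli rewards under a uniform-allocation baseline.
rewards = rng.random((len(true_means), pulls_per_arm)) < true_means[:, None]
empirical_means = rewards.mean(axis=1)

# Output set: arms whose empirical means exceed the threshold.
above = np.flatnonzero(empirical_means > threshold)
```

With enough pulls per arm, `above` recovers exactly the arms whose true means exceed 0.5 (indices 2 and 3 here); the interesting algorithmic question, which the TB literature addresses, is how to allocate a limited budget adaptively.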
ISBN: (Print) 9781713871088
Variance reduction techniques such as SPIDER/SARAH/STORM have been extensively studied to improve the convergence rates of stochastic non-convex optimization; they usually maintain and update a sequence of estimators for a single function across iterations. What if we need to track multiple functional mappings across iterations but only have access to stochastic samples of O(1) functional mappings at each iteration? There is an important application in solving an emerging family of coupled compositional optimization problems of the form ∑_{i=1}^m f_i(g_i(w)), where g_i is accessible through a stochastic oracle. The key issue is to track and estimate a sequence of g(w) = (g_1(w), ..., g_m(w)) across iterations, where g(w) has m blocks and it is only allowed to probe O(1) blocks to attain their stochastic values and Jacobians. To improve the complexity of solving these problems, we propose a novel stochastic method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to track the sequence of g(w). It is inspired by STORM but introduces a customized error correction term to alleviate the noise not only in the stochastic samples for the selected blocks but also in the blocks that are not sampled. With the help of the MSVR estimator, we develop several algorithms for solving the aforementioned compositional problems with improved complexities across a spectrum of settings with non-convex/convex/strongly convex/Polyak-Łojasiewicz (PL) objectives. Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter. Empirical studies on multi-task deep AUC maximization demonstrate the better performance of the new estimator.
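The STORM-style recursion that MSVR builds on can be sketched in a few lines: track u_t ≈ g(w_t) from noisy evaluations only, using the correction term g(w_{t+1}; ξ) − g(w_t; ξ) computed on the same sample ξ, so the noise in the correction cancels. The target g, the noise model, and the trajectory of w below are all illustrative assumptions; MSVR's extra correction for unsampled blocks is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_noisy(w, xi):
    # Stochastic oracle for the (made-up) target g(w) = w**2,
    # perturbed by the noise sample xi.
    return w**2 + xi

w, u = 0.0, 0.0
beta = 0.1                      # momentum parameter of the recursion
for _ in range(2000):
    w_next = w + 0.01           # trajectory produced by the outer algorithm
    xi = 0.1 * rng.normal()     # one shared sample per iteration
    # STORM update: u_{t+1} = g(w_{t+1}; xi) + (1 - beta) * (u_t - g(w_t; xi))
    u = g_noisy(w_next, xi) + (1 - beta) * (u - g_noisy(w, xi))
    w = w_next

tracking_error = abs(u - w**2)  # stays small: the same-sample correction
                                # cancels the drift caused by moving w
```

The same-sample correction is what lets a single O(1)-cost probe per iteration track a moving target; MSVR extends this idea so that even blocks not probed at the current iteration receive an error-corrected update.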
Due to the influence of haze in winter, outdoor images usually lose contrast and fidelity. Since most defogging algorithms are not effective for images with large sky areas, an improved dark channel a ...
Computer clusters with the shared-nothing architecture are the major computing platforms for big data processing and analysis. In cluster computing, data partitioning and sampling are two fundamental strategies to speed up the computation of big data and increase scalability. In this paper, we present a comprehensive survey of the methods and techniques of data partitioning and sampling with respect to big data processing and analysis. We start with an overview of the mainstream big data frameworks on Hadoop clusters. The basic methods of data partitioning are then discussed, including three classical horizontal partitioning schemes: range, hash, and random partitioning. Data partitioning on Hadoop clusters is also discussed, with a summary of new strategies for big data partitioning, including the new Random Sample Partition (RSP) distributed model. The classical methods of data sampling are then investigated, including simple random sampling, stratified sampling, and reservoir sampling. Two common methods of big data sampling on computing clusters are also discussed: record-level sampling and block-level sampling. Record-level sampling is not as efficient as block-level sampling on big distributed data. On the other hand, block-level sampling on data blocks generated with the classical data partitioning methods does not necessarily produce good representative samples for approximate computing of big data. In this survey, we also summarize the prevailing strategies and related work on sampling-based approximation on Hadoop clusters. We believe that data partitioning and sampling should be considered together to build approximate cluster computing frameworks that are reliable in both the computational and statistical respects.
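Of the classical sampling methods the survey lists, reservoir sampling is the one whose mechanics are least obvious, so here is a short sketch of Algorithm R: it draws a uniform sample of k records from a stream of unknown length in a single pass. The function name and the synthetic stream are illustrative, not from the survey.

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Uniformly sample k items from an iterable of unknown length."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # uniform index in [0, i]
            if j < k:
                reservoir[j] = item      # replace with probability k/(i+1)
    return reservoir

# One pass over a million records, keeping a uniform sample of ten.
sample = reservoir_sample(range(1_000_000), 10, random.Random(42))
```

An inductive argument shows that after processing i+1 items, every item seen so far remains in the reservoir with probability exactly k/(i+1), which is what makes the sample uniform without knowing the stream length in advance.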
Component-based synthesis is an important research field in program synthesis. API-based synthesis is a subfield of component-based synthesis whose component library consists of Java APIs. Unlike existing work in API-...