Medical image segmentation is a challenging task, especially when dealing with unlabeled data. It has been proven that the use of complementary information for co-training is effective for medical image segmentation. W...
The k-means with outliers problem is one of the most extensively studied clustering problems in the field of machine learning, where the goal is to discard up to z outliers and identify a minimum k-means clustering on the remaining data points. Most previous results for this problem have running time dependent on the aspect ratio ∆ (the ratio between the maximum and the minimum pairwise distances) to achieve fast approximations. To remove this dependency of the running time on the aspect ratio, we propose sampling-based algorithms with almost linear running time in the data size, a crucial component of which is an algorithm called Fast-Sampling. The Fast-Sampling algorithm finds inliers that well approximate the optimal clustering centers without relying on a guess for the optimal clustering cost: a 4-approximate solution can be obtained in time O(ndk log log n/∊²) with O(k/∊) centers opened and (1 + ∊)z outliers discarded. To reduce the number of centers opened, we propose a center reduction algorithm, where an O(1/∊)-approximate solution can be obtained in time O(ndk log log n/∊² + d·poly(k, 1/∊) log(n∆)) with (1 + ∊)z outliers discarded and exactly k centers opened. Empirical experiments suggest that our proposed sampling-based algorithms outperform state-of-the-art algorithms for the k-means with outliers problem. Copyright 2024 by the author(s)
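The paper's Fast-Sampling algorithm is not reproduced here; as a baseline point of reference, the k-means with outliers objective can be sketched with a naive Lloyd-style variant that simply excludes the z farthest points from each center update. All names and the toy data below are illustrative, not from the paper:

```python
import numpy as np

def kmeans_with_outliers(X, k, z, iters=20):
    # Deterministic init for the demo: take the first k points as centers.
    centers = X[:k].copy()
    for _ in range(iters):
        # squared distance of every point to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(1)
        dist = d2[np.arange(len(X)), nearest]
        # treat the z farthest points as outliers for this round
        inliers = np.argsort(dist)[: len(X) - z]
        for j in range(k):
            pts = X[inliers][nearest[inliers] == j]
            if len(pts):
                centers[j] = pts.mean(0)
    cost = np.sort(dist)[: len(X) - z].sum()  # k-means cost on the kept points
    return centers, cost

# two tight clusters plus two far-away outliers
X = np.array([[0.0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10],
              [100, 100], [-100, 100]])
centers, cost = kmeans_with_outliers(X, k=2, z=2)
```

With the two planted outliers discarded, the reported cost is just the within-cluster cost of the two tight clusters; the sampling-based algorithms in the abstract improve on exactly this kind of procedure in running time and approximation guarantees.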
This work presents a novel volumetric parameterization technique along with the continuous adjoint method to support gradient-based CFD shape optimization of turbomachinery stages. The proposed parameterization retains axisymmetry and periodicity by acting on a transformed coordinate system. The same volumetric model controls the shape and the computational volume mesh in a seamless manner, avoiding the additional use of a mesh deformation tool. Moreover, it is differentiated to compute mesh sensitivities (i.e., derivatives of nodal coordinates with respect to the design variables) and is combined with the flow and continuous adjoint multi-row solvers of the in-house PUMA software. Flow field solutions in successive rows communicate based on the mixing plane approach; the development of the continuous adjoint to the latter is also presented in this article. The adjoint to the turbulence model and distance-from-the-wall (Hamilton-Jacobi) equations are solved, increasing the accuracy of the computed sensitivity derivatives. All these tools run on modern GPUs, accelerating both flow/adjoint solutions and shape/mesh manipulations. The capabilities of these tools are demonstrated in the shape optimization of the rotor blades of the MT1 high-pressure transonic turbine stage, aiming at maximum stage isentropic efficiency with constraints on stage reaction and inlet capacity.
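The role of the mesh sensitivities in such a workflow is the chain rule dJ/db = (dJ/dX)ᵀ(dX/db), where X are mesh node coordinates and b the design variables. A toy sketch of that assembly, with a hypothetical linear parameterization and a stand-in objective (none of this is PUMA's actual model):

```python
import numpy as np

def mesh(b):
    # hypothetical volumetric parameterization: three node coordinates
    # depend linearly on two design variables
    return np.array([b[0], b[0] + b[1], 2.0 * b[1]])

def J(X):
    # stand-in objective evaluated on the mesh nodes
    return (X ** 2).sum()

b = np.array([1.0, 2.0])
X = mesh(b)

dJ_dX = 2.0 * X                      # adjoint-style sensitivity w.r.t. nodes
dX_db = np.array([[1.0, 0.0],        # mesh sensitivities dX_i/db_j, obtained
                  [1.0, 1.0],        # by differentiating mesh()
                  [0.0, 2.0]])
dJ_db = dX_db.T @ dJ_dX              # chain rule: gradient driving the optimizer

# finite-difference check of the assembled gradient
eps = 1e-6
fd = np.array([(J(mesh(b + eps * np.eye(2)[i])) - J(mesh(b))) / eps
               for i in range(2)])
```

In the real setting dJ/dX comes from the adjoint solve rather than by hand, and dX/db from the differentiated parameterization, which is what lets the gradient be assembled without a mesh deformation tool.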
Deep learning (DL) has achieved great success in recent years, delivering state-of-the-art performance in research and industrial fields such as computer vision and natural language processing. One reason for this success is the huge number of parameters in DL models. However, it is impractical to train even a moderately large model on a typical single device. Thus, it is necessary to train DL models on clusters with distributed training algorithms. Traditional distributed training algorithms, however, are usually sub-optimal and highly customized, which makes them ill-suited to training large-scale DL models on varying computing clusters. To address this problem, researchers have proposed auto-parallelism, which promises to train large-scale DL models efficiently and practically on various computing clusters. In this survey, we perform a broad and thorough investigation of the challenges, foundations, and strategy-search methods of auto-parallelism in DL training. First, we abstract the basic parallelism schemes together with their communication cost and memory consumption in DL training. Further, we analyze and compare a series of current auto-parallelism works and investigate the strategies and search methods commonly used in practice. Finally, we discuss several promising trends in auto-parallelism for further research.
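The kind of cost abstraction such a survey compares can be sketched back-of-the-envelope. The constants below are assumptions (fp16 weights at 2 bytes, roughly 12 bytes per parameter of fp32 Adam optimizer state), and the formulas are the standard textbook ones, not taken from any particular system:

```python
def data_parallel(params, n, bytes_w=2):
    """Full model replica per device; gradients synchronized by ring
    all-reduce, which moves 2(n-1)/n * payload bytes per device per step."""
    memory = params * (bytes_w + 12)           # weights + optimizer states
    comm = 2 * (n - 1) / n * params * bytes_w  # ring all-reduce volume
    return memory, comm

def tensor_parallel(params, n, bytes_w=2):
    """Weights sharded n ways; per-device memory shrinks, but activations
    must be all-reduced inside every layer (activation traffic not modeled)."""
    memory = params * (bytes_w + 12) / n
    return memory

# a 7B-parameter model on 8 devices
mem_dp, comm_dp = data_parallel(7e9, 8)
mem_tp = tensor_parallel(7e9, 8)
```

Even this crude model shows the trade-off a strategy search navigates: data parallelism pays in per-device memory, tensor parallelism pays in frequent intra-layer communication.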
Various temporal denoising methods have been proposed to clean up the noise in real-time ray tracing (RTRT). These methods rely on the temporal correspondences of pixels between the current and previous frames, i.e., per-pixel screen-space motion vectors. However, state-of-the-art temporal reuse methods with traditional motion vectors cause artifacts in motion occlusions. We accordingly propose a novel neural temporal denoising method for the indirect illumination of Monte Carlo (MC) ray tracing at 1 sample per pixel. Based on end-to-end multi-scale kernel-based reconstruction, we apply temporally reliable dual motion vectors to facilitate better reconstruction of the occlusions, and also introduce an additional motion-occlusion loss to reduce ghosting artifacts. Experiments show that our method significantly reduces over-blurring and ghosting artifacts while generating high-quality images at real-time rates.
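The underlying mechanism, temporal accumulation via motion vectors, can be sketched minimally; the paper's dual motion vectors and neural kernels are not reproduced here, and the 1-D "frames" and validity mask below are illustrative only:

```python
import numpy as np

def temporal_accumulate(curr, prev, motion, valid, alpha=0.2):
    """Blend the current noisy frame with the reprojected history.
    motion[i] gives the screen-space offset of pixel i since the previous
    frame; valid[i] is False where the history is occluded/off-screen."""
    n = len(curr)
    out = curr.copy()
    for i in range(n):
        j = i - motion[i]                 # reproject pixel i into the previous frame
        if valid[i] and 0 <= j < n:
            out[i] = alpha * curr[i] + (1 - alpha) * prev[j]
        # else: fall back to the noisy current sample (disocclusion)
    return out

curr = np.array([1.0, 1.0, 1.0, 1.0])     # noisy current frame
prev = np.array([0.0, 0.0, 0.0, 0.0])     # accumulated history
motion = np.array([0, 0, 1, 0])           # pixel 2 moved right by one
valid = np.array([True, True, True, False])
out = temporal_accumulate(curr, prev, motion, valid)
```

The artifacts the abstract targets arise exactly where this validity test fails or where the motion vector points at the wrong history pixel, which is what the dual motion vectors and motion-occlusion loss are designed to fix.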
In the Internet environment, bullet-comment texts expressing emotions are an important kind of data with clear sentiment polarity, and this particular kind of text can be used to analyse the trend of public opinion in the vi...
Nature performs complex computations constantly at clearly lower cost and higher performance than digital computers. It is crucial to understand how to harness the unique computational power of nature in Machine Learn...
Cytopathology report generation is a necessary step for the standardized examination of pathology images. However, manually writing detailed reports brings heavy workloads for pathologists. To improve efficiency, some...
Blockchain technology promotes the development of the Internet of medical things (IoMT) from a centralized form to a distributed trust mode, namely the blockchain-based Internet of medical things (BIoMT). Although blockchain improves cross-institution data sharing, there still exist the problems of authentication difficulty and privacy. This paper first describes the architecture of the BIoMT system and designs an anonymous authentication model for medical data sharing. The BIoMT system is divided into four layers: perceptual, network, platform, and application. The model integrates an anonymous authentication scheme to guarantee secure data sharing in the network layer, and the untampered blockchain ledger can protect the privacy of medical data and the system. Then, an anonymous authentication scheme called the group blind signature (GBS) scheme is proposed. This scheme provides anonymity for the signer, in that one member can sign on behalf of the group without exposing his identity. The blindness property also protects the message being signed, as it remains anonymous to the signer. Moreover, this GBS scheme is built on a lattice assumption, which makes it more secure against quantum attacks. In addition, the security proof shows that this GBS scheme achieves dynamical-almost-full anonymity, blindness, and traceability, among other security properties. Comparison analysis and evaluation of key sizes show that this GBS scheme is more efficient than similar schemes in the literature.
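The blindness property described above, that the signer never sees the message it signs, is easiest to see in the classic RSA blinding construction (Chaum). This is not the paper's lattice-based group blind signature, and the tiny parameters are insecure, for illustration only:

```python
import math

# textbook RSA key with toy primes (insecure, illustration only)
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # signer's private exponent

m = 42                                   # message (already hashed, assumed)

# requester blinds the message with a random factor r coprime to n
r = 7
assert math.gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n         # signer sees only this value

# signer signs the blinded value without learning m
blind_sig = pow(blinded, d, n)

# requester unblinds: (m^d * r^(ed)) * r^-1 = m^d mod n
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m % n           # verifies like an ordinary RSA signature
```

A group blind signature combines this blindness with group membership: any member can produce such a signature on behalf of the group, and only a designated manager can trace it back to the signer, which is the anonymity/traceability pairing the abstract's security proof covers.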
LOAM stands as a quintessential 3D Lidar SLAM algorithm capable of real-time robot positioning and mapping; however, it may succumb to positioning drift and mapping inaccuracies during prolonged operation. Addressing L...