To segment medical images with distribution shifts, domain generalization (DG) has emerged as a promising setting to train models on source domains that can generalize to unseen target domains. Existing DG methods are...
Emerging copper-based lead-free perovskites have garnered significant attention for ultraviolet (UV) photodetection due to their atmospheric stability and optoelectronic properties. However, their practical applicatio...
With the exponential growth of big data and advancements in large-scale foundation model techniques, the field of machine learning has embarked on an unprecedented golden era. This period is characterized by significant innovations across various aspects of machine learning, including data exploitation, network architecture development, loss function settings and algorithmic innovation.
The canonical artificial bee colony (ABC) algorithm with a single species is insufficient to extend the diversity of solutions and may become trapped in a local optimum. This paper proposes a new co-evolutionary ABC algorithm (HABC) based on a hierarchical communication model (HCM). HCM combines the advantages of global and local communication patterns. With adjustment strategies on species and groups, HCM can reduce the computational complexity dynamically. Performance tests show that the HABC algorithm exhibits good accuracy, robustness, and convergence speed. Compared with ABC and the integrated co-evolution algorithm (IABC), HABC performs better in solving complex multimodal functions.
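For context on the single-species baseline, here is a minimal ABC sketch with the standard employed/onlooker/scout phases, minimizing a test function. The hierarchical communication model and co-evolution of HABC are not reproduced, and all parameter values are illustrative.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbour(i):
        # perturb one dimension toward/away from another food source k != i
        k = rng.randrange(n_food - 1)
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    for _ in range(iters):
        # employed bees: one local search per food source
        for i in range(n_food):
            cand = neighbour(i)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # onlooker bees: bias further search toward better sources
        weights = [1.0 / (1.0 + v) for v in fit]
        for _ in range(n_food):
            i = rng.choices(range(n_food), weights=weights)[0]
            cand = neighbour(i)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
        # scout bees: abandon stagnant sources and restart them randomly
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(foods[i])
                trials[i] = 0

    best = min(range(n_food), key=lambda i: fit[i])
    return foods[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = abc_minimize(sphere, dim=3, bounds=(-5, 5))
```

On a unimodal function like the sphere this converges quickly; it is on multimodal landscapes that the single population tends to stagnate, which is the limitation HABC's hierarchical species/group structure targets.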
Dynamic evolution is highly desirable for service-oriented systems in open environments. For the evolution to be trusted, it is crucial to keep the process consistent with the specification. In this paper, we study two kinds of evolution scenarios and propose a novel verification approach based on hierarchical timed automata to model check the underlying consistency with the specification. It examines the procedures before, during, and after the evolution process, respectively, and supports both the direct modeling of temporal aspects and the hierarchical decomposition of software structures. Probabilities are introduced to model the uncertainty characteristic of open environments, which enables the verification of parameter-level evolution. We present a flattening algorithm to facilitate automated verification using the mainstream timed-automata-based model checker UPPAAL (integrated with UPPAAL-SMC). We also provide a motivating example with a performance evaluation that complements the discussion and demonstrates the feasibility of our approach.
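The hierarchy-flattening idea can be illustrated purely structurally. The dict encoding below is our own toy representation (no clocks, guards, or probabilities, and not UPPAAL's input format); it only shows how nested states are renamed into a flat transition relation.

```python
def flatten(ha):
    """Flatten a hierarchical automaton into a flat transition relation.
    Encoding (ours, illustrative): state -> sub-automaton dict, or
    state -> list of target-state names (targets of leaves are siblings)."""
    flat = {}

    def walk(prefix, node):
        for state, val in node.items():
            name = f"{prefix}.{state}" if prefix else state
            if isinstance(val, dict):      # composite state: recurse into it
                walk(name, val)
            else:                          # leaf: qualify sibling targets
                flat[name] = [f"{prefix}.{t}" if prefix else t for t in val]

    walk("", ha)
    return flat

# a service with a composite "Evolving" state covering the three phases
ha = {
    "Idle": ["Evolving.Before"],
    "Evolving": {"Before": ["During"], "During": ["After"], "After": []},
}
flat = flatten(ha)
```

A real flattening for UPPAAL must also lift clocks, invariants, and synchronization channels out of composite states, which this sketch deliberately omits.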
Elastic Hadoop applications consisting of multiple virtual machines (VMs) are widely used to support big data analysis and processing. In this scenario, flash-based solid state drives (SSDs) are usually deployed on hypervisors and used as a cache to improve IO performance. However, existing SSD caching schemes are mostly VM-centric, focusing on the low-level IO performance metrics of individual VMs. They may not optimize the performance of elastic Hadoop applications, i.e., the job completion time (JCT), because the importance of VMs inside the application differs even when they exhibit similar low-level IO patterns. Considering the IO dependency among VMs and estimating their importance, which we regard as application-centric metrics, may better improve performance. We present an IO-dependency-based requirement model to characterize the SSD cache requirement of each VM inside the elastic Hadoop application, and then use it in a genetic algorithm (GA) based approach to calculate nearly optimal per-VM weights for allocating SSD cache space and the capacity of I/O operations per second (IOPS). Furthermore, we present a tool, AC-SSD, based on this approach and introduce closed-loop adaptation to react to continuously changing workloads. The evaluation shows that by using AC-SSD, the JCT is reduced by up to 39% for IO-sensitive workloads, up to 29% for continuously changing workloads, and over 12.5% for different scales of data compared to a shared cache.
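The weight-calculation step can be sketched with a toy GA. The fitness function below (diminishing returns on allocated cache, scaled by a hypothetical per-VM importance score) is a stand-in for the paper's IO-dependency-based requirement model; the importance values, population size, and operators are all assumptions.

```python
import random

def ga_cache_weights(importance, cache_total, pop=30, gens=60, seed=1):
    """Toy GA evolving per-VM weights for dividing a shared SSD cache.
    `importance` is a hypothetical application-centric score per VM."""
    rng = random.Random(seed)
    n = len(importance)

    def normalise(w):
        s = sum(w)
        return [v / s for v in w]

    def fitness(w):
        # diminishing returns: each GB of cache helps, but less and less;
        # important VMs contribute more to the (negated) estimated JCT
        alloc = [cache_total * v for v in normalise(w)]
        return sum(imp * (1 - 2 ** (-a)) for imp, a in zip(importance, alloc))

    popn = [[rng.random() + 1e-6 for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]               # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:             # random-reset mutation
                child[rng.randrange(n)] = rng.random() + 1e-6
            children.append(child)
        popn = elite + children

    return normalise(max(popn, key=fitness))

weights = ga_cache_weights(importance=[0.6, 0.3, 0.1], cache_total=8.0)
```

With a concave benefit curve, the optimum gives more cache to more important VMs without starving the rest, which is the qualitative behavior the GA search recovers.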
We consider the scattering of light in participating media composed of sparsely and randomly distributed discrete particles. The particle size is expected to range from the scale of the wavelength to several orders of magnitude greater, resulting in an appearance with distinct graininess as opposed to the smooth appearance of continuous media. A fundamental issue in the physically-based synthesis of such appearance is to determine the necessary optical properties at every local position. As these properties vary spatially, we resort to the geometrical optics approximation (GOA), a highly efficient alternative to rigorous Lorenz–Mie theory, to quantitatively represent the scattering of a single particle. This enables us to quickly compute bulk optical properties for any particle size distribution. We then use a practical Monte Carlo rendering solution to solve the energy transfer in the discrete participating media. The proposed framework is the first to simulate a wide range of discrete participating media with different levels of graininess, converging to the continuous-media case as the particle concentration increases.
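As a minimal illustration of the bulk-property step, the sketch below uses the large-particle limit of the extinction cross-section (twice the geometric cross-section, the well-known extinction paradox) to derive a bulk extinction coefficient, then samples free-flight distances for a Monte Carlo walk. The monodisperse assumption and units are ours; the paper's GOA computation handles full size distributions and phase functions.

```python
import math
import random

def bulk_extinction(radius_um, density_per_um3):
    """Large-particle (geometrical optics) limit: the extinction
    cross-section approaches twice the geometric cross-section."""
    c_ext = 2.0 * math.pi * radius_um ** 2   # um^2
    return density_per_um3 * c_ext           # sigma_t, in um^-1

def sample_free_path(sigma_t, rng):
    """Sample the distance to the next interaction in a homogeneous
    medium: d = -ln(1 - U) / sigma_t, with U uniform in [0, 1)."""
    return -math.log(1.0 - rng.random()) / sigma_t

rng = random.Random(42)
sigma_t = bulk_extinction(radius_um=5.0, density_per_um3=1e-4)
mean = sum(sample_free_path(sigma_t, rng) for _ in range(20000)) / 20000
# the empirical mean free path should approach 1 / sigma_t
```

As the particle concentration (density) grows, the mean free path shrinks and individual particle events blur together, which is exactly the convergence toward continuous media the abstract describes.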
Entity resolution (ER) aims to identify whether two entities in an ER task refer to the same real-world object. Crowdsourced ER uses humans, in addition to machine algorithms, to obtain the truths of ER tasks. However, inaccurate or erroneous results are likely to be generated when humans give unreliable judgments. Previous studies have found that correctly estimating human accuracy or expertise in crowd ER is crucial to truth inference. However, many of them assume that humans have consistent expertise over all the tasks, and ignore the fact that humans may have varied expertise on different topics (e.g., music versus sport). In this paper, we deal with crowd ER in the Semantic Web area. We identify multiple topics of ER tasks and model human expertise on different topics. Furthermore, we leverage similar-task clustering to enhance the topic modeling and expertise estimation. We propose a probabilistic graphical model that computes ER task similarity, estimates human expertise, and infers the task truths in a unified framework. Our evaluation results on real-world and synthetic datasets show that, compared with several state-of-the-art approaches, our proposed model achieves higher accuracy on task truth inference and is more consistent with humans' real expertise.
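A much-simplified flavor of the inference step: the EM-style loop below alternates between estimating task truths from accuracy-weighted votes and re-estimating a single accuracy per worker. The paper's graphical model additionally captures topics, per-topic expertise, and task similarity; this sketch and its data are purely illustrative.

```python
def em_truth_inference(votes, n_iters=20):
    """votes: {task: {worker: 0/1 label}} for binary match/non-match tasks.
    Returns inferred truths and per-worker accuracies (one accuracy per
    worker here; the topic-aware model is far richer)."""
    workers = {w for v in votes.values() for w in v}
    acc = {w: 0.7 for w in workers}   # optimistic prior accuracy
    truth = {}
    for _ in range(n_iters):
        # E-step: accuracy-weighted vote per task
        for t, v in votes.items():
            score = sum(acc[w] if a == 1 else 1 - acc[w]
                        for w, a in v.items())
            truth[t] = 1 if score > len(v) / 2 else 0
        # M-step: accuracy = smoothed agreement with current truths
        for w in workers:
            hits = [1 if a == truth[t] else 0
                    for t, v in votes.items()
                    for w2, a in v.items() if w2 == w]
            acc[w] = (sum(hits) + 1) / (len(hits) + 2)  # Laplace smoothing
    return truth, acc

# toy data: w1 and w2 are reliable, w3 disagrees with them on t1 and t2
votes = {
    "t1": {"w1": 1, "w2": 1, "w3": 0},
    "t2": {"w1": 0, "w2": 0, "w3": 1},
    "t3": {"w1": 1, "w2": 1, "w3": 1},
}
truth, acc = em_truth_inference(votes)
```

After a few iterations the unreliable worker's votes are down-weighted, so majority errors can be corrected even when raw vote counts are close.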
Tensor Decision Diagrams (TDDs) provide an efficient structure for representing tensors by combining techniques from both tensor networks and decision diagrams, demonstrating competitive performance in quantum circuit...
We address the problem of metric learning for multi-view data. Many metric learning algorithms have been proposed; most of them focus only on single-view circumstances, and only a few deal with multi-view data. In this paper, motivated by the co-training framework, we propose an algorithm-independent framework, named co-metric, to learn Mahalanobis metrics in multi-view settings. In its implementation, an off-the-shelf single-view metric learning algorithm is used to learn metrics in individual views from a few labeled examples. Then the most confidently labeled examples chosen from the unlabeled set are used to guide the metric learning in the next loop. This procedure is repeated until some stop criteria are met. The framework can accommodate most existing metric learning algorithms, whether side-information or example labels are used. In addition, it can naturally deal with semi-supervised circumstances with more than two views. Our comparative experiments demonstrate its competitiveness and effectiveness.
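The loop can be sketched as follows. The per-view "metric learner" here is a crude inverse-within-class-variance diagonal weighting standing in for an off-the-shelf Mahalanobis learner, and the confidence measure (1-NN distance margin) is our simplification; data and parameters are illustrative.

```python
def diag_metric(X, y):
    """Per-view stand-in metric: weight each feature by the inverse of its
    pooled within-class variance (a crude diagonal Mahalanobis)."""
    d = len(X[0])
    w = []
    for j in range(d):
        var, cnt = 0.0, 0
        for c in set(y):
            pts = [x[j] for x, yy in zip(X, y) if yy == c]
            mu = sum(pts) / len(pts)
            var += sum((p - mu) ** 2 for p in pts)
            cnt += len(pts)
        w.append(1.0 / (var / cnt + 1e-6))
    return w

def predict(x, X, y, w):
    """1-NN under the diagonal metric; confidence = margin to runner-up."""
    dists = sorted((sum(wj * (a - b) ** 2 for wj, a, b in zip(w, x, xi)), yi)
                   for xi, yi in zip(X, y))
    return dists[0][1], dists[1][0] - dists[0][0]

def co_metric(views_lab, y, views_unl, rounds=3):
    """Co-training loop for two views: each round, the single most
    confidently labeled unlabeled example is absorbed by both views."""
    views_lab = [list(v) for v in views_lab]
    views_unl = [list(v) for v in views_unl]
    y = list(y)
    for _ in range(rounds):
        if not views_unl[0]:
            break
        ws = [diag_metric(v, y) for v in views_lab]
        scored = []
        for i in range(len(views_unl[0])):
            for vi in range(2):
                lab, conf = predict(views_unl[vi][i], views_lab[vi], y, ws[vi])
                scored.append((conf, i, lab))
        _, i, lab = max(scored)            # most confident candidate
        for vi in range(2):
            views_lab[vi].append(views_unl[vi].pop(i))
        y.append(lab)
    return y

y = co_metric(
    views_lab=[[[0.0], [0.1], [1.0], [1.1]], [[0.0], [0.2], [1.0], [1.2]]],
    y=[0, 0, 1, 1],
    views_unl=[[[0.05], [1.05]], [[0.1], [1.1]]],
)
```

Because each newly labeled example enters both views, a view that is confident about a point effectively teaches the other view, which is the core co-training mechanism the framework builds on.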