Neural Radiance Fields (NeRFs) have shown impressive capabilities in synthesizing photorealistic novel views. However, their application to room-size scenes is limited by the requirement of several hundred views with accurate poses for training. To address this challenge, we propose SN²eRF, a framework that reconstructs the neural radiance field from significantly fewer views with noisy poses by exploiting multiple priors. Our key insight is to leverage both multi-view and monocular priors to constrain the optimization of NeRF in the setting of sparse and noisy pose inputs. Specifically, we extract and match key points to constrain pose optimization, and use a Ray Transformer together with a monocular depth estimator to provide a dense depth prior for geometry optimization. Benefiting from these priors, our approach achieves state-of-the-art accuracy in novel view synthesis for indoor room scenarios.
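The abstract does not spell out how the dense depth prior enters the optimization. As a rough sketch only, not SN²eRF's actual objective: a common way to use a monocular depth estimate is to align it to the rendered depth up to scale and shift (monocular depth is ambiguous in both) and penalize the residual alongside the photometric loss. The names here (`nerf_losses`, `lambda_depth`) are illustrative.

```python
import torch

def nerf_losses(pred_rgb, gt_rgb, pred_depth, mono_depth, lambda_depth=0.1):
    """Photometric loss plus a monocular depth prior (illustrative only).

    pred_rgb, gt_rgb: (N, 3) per-ray colors; pred_depth, mono_depth: (N,) depths.
    A monocular estimate is only defined up to scale and shift, so it is first
    aligned to the rendered depth by least squares before being used as a target.
    """
    photometric = torch.mean((pred_rgb - gt_rgb) ** 2)

    # Fit scale s and shift t minimizing ||s * mono + t - pred||^2 (closed form).
    target = pred_depth.detach().unsqueeze(1)                          # (N, 1), no gradient
    A = torch.stack([mono_depth, torch.ones_like(mono_depth)], dim=1)  # (N, 2)
    s, t = torch.linalg.lstsq(A, target).solution.flatten()
    aligned = (s * mono_depth + t).detach()

    depth_prior = torch.mean((pred_depth - aligned) ** 2)
    return photometric + lambda_depth * depth_prior
```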
Medical Image Analysis (MIA) is integral to healthcare, demanding advanced computational techniques for precise diagnostics and treatment planning. Accurate and interpretable models are imperative in the ever-evolving healthcare landscape. This paper explores the potential of Self-Supervised Learning (SSL), transfer learning, and domain adaptation methods in MIA. The study comprehensively reviews SSL-based computational techniques in the context of medical imaging, highlighting their merits and limitations. In an empirical investigation, this study examines the lack of interpretable and explainable component selection in existing SSL approaches for MIA. Unlike prior studies that select SSL components based on their performance on natural images, this paper identifies components based on the quality of learned representations, measured through various clustering evaluation metrics. Various SSL techniques and backbone combinations were rigorously assessed on diverse medical image datasets. The results provide insights into the performance and behavior of SSL methods, paving the way for an explainable and interpretable component selection mechanism for artificial intelligence models in medical imaging. The empirical study reveals the superior performance of BYOL (Bootstrap Your Own Latent) with a ResNet backbone, as indicated by clustering evaluation metrics such as the Silhouette Coefficient (0.6), Davies-Bouldin Index (0.67), and Calinski-Harabasz Index (36.9). The study also emphasizes the benefits of transferring weights from a model trained on a similar dataset rather than a dataset from a different domain. Results indicate that the proposed mechanism expedited convergence, achieving 98.66% training accuracy and 92.48% testing accuracy in 23 epochs, requiring almost half the number of epochs needed for similar results with ImageNet weights. This research contributes to advancing the understanding of SSL in MIA, providing ...
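The selection mechanism the paper describes, ranking SSL components by the quality of learned representations rather than downstream accuracy, can be approximated with off-the-shelf clustering metrics. A minimal sketch, assuming embeddings have already been extracted from a frozen encoder (the function name and cluster count are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

def score_representation(embeddings: np.ndarray, n_clusters: int = 10, seed: int = 0):
    """Cluster SSL embeddings and report the three metrics used in the paper.

    Higher Silhouette and Calinski-Harabasz scores are better;
    a lower Davies-Bouldin index is better.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(embeddings)
    return {
        "silhouette": silhouette_score(embeddings, labels),
        "davies_bouldin": davies_bouldin_score(embeddings, labels),
        "calinski_harabasz": calinski_harabasz_score(embeddings, labels),
    }

# Example: compare two candidate SSL setups on the same validation images, e.g.
# score_representation(emb_byol) vs. score_representation(emb_simclr).
```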
Pervasive Computing has become more personal with the widespread adoption of the Internet of Things (IoT) in our day-to-day lives. An emerging domain that encompasses devices, sensors, storage, and computing of personal use and surroundings leads to Personal IoT (PIoT). PIoT offers users high levels of personalization, automation, and convenience. The proliferation of PIoT technology has extended into society, social engagement, and the interconnectivity of PIoT objects, resulting in the emergence of the Social Internet of Things (SIoT). The combination of PIoT and SIoT has spurred the need for autonomous learning, comprehension, and understanding of both the physical and social worlds. Current research on PIoT is dedicated to enabling seamless communication among devices, striking a balance between observation, sensing, and perceiving the extended physical and social environment, and facilitating information exchange. Moreover, the virtualization of independent learning from the social environment has given rise to Artificial Social Intelligence (ASI) in PIoT systems. However, autonomous data communication between different nodes within a social setup presents various resource management challenges that require careful consideration. This paper provides a comprehensive review of the evolving domains of PIoT, SIoT, and ASI. Additionally, the paper offers insightful modeling and a case study exploring the role of PIoT in post-COVID scenarios. The study contributes to a deeper understanding of the intricacies of PIoT and its various dimensions, paving the way for further advancements in this transformative field.
Community question and answer (Q&A) websites have become invaluable information and knowledge-sharing sources. Effective topic modelling on these platforms is crucial for organising and navigating the vast amount ...
A systematic review of metaheuristic Algorithms for energy-efficient task offloading in edge computing is provided. The Grey Wolf Optimizer (GWO) is explored as a promising approach for reducing energy consumption. Fu...
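The preview does not give the offloading formulation, but the canonical GWO update is compact: the pack tracks the three best wolves (alpha, beta, delta), and every position is pulled toward them with an exploration coefficient that decays over iterations. A minimal sketch on a placeholder cost; the quadratic objective stands in for a real offloading energy model, and `gwo_minimize` is an illustrative name:

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal Grey Wolf Optimizer: wolves follow the three best solutions
    (alpha, beta, delta) with an exploration factor `a` decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter              # linearly decaying exploration coefficient
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2 * a * r1 - a, 2 * r2   # encircling coefficients
            X_new += leader - A * np.abs(C * leader - X)
        X = np.clip(X_new / 3, lo, hi)      # average pull of the three leaders
    best = X[np.argmin(np.apply_along_axis(f, 1, X))]
    return best, f(best)

# Toy stand-in for an offloading energy model: quadratic cost around an optimum.
best_x, best_e = gwo_minimize(lambda x: np.sum((x - 0.3) ** 2), dim=5, bounds=(0, 1))
```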
The cellular automaton (CA), a discrete model, is gaining popularity in simulations and scientific exploration across various domains, including cryptography, error-correcting codes, VLSI design and test pattern gener...
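For readers unfamiliar with the model, an elementary (one-dimensional, two-state, radius-1) cellular automaton is small enough to show in full; rule 30 below is the classic choice behind the pseudo-random sequences used in cryptography and test pattern generation mentioned above. A minimal sketch, not tied to any particular paper:

```python
import numpy as np

def step(cells: np.ndarray, rule: int = 30) -> np.ndarray:
    """One synchronous update of a 1-D binary CA with periodic boundary.
    Each cell's next state is looked up from the 8-entry rule table by the
    3-bit neighborhood (left, self, right)."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right   # neighborhood as an index 0..7
    table = (rule >> np.arange(8)) & 1         # the rule number, bit by bit
    return table[idx]

# Rule 30 from a single seed cell: a classic pseudo-random pattern generator.
cells = np.zeros(31, dtype=int); cells[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```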
Generative adversarial networks (GANs) have drawn enormous attention due to their simple yet effective training mechanism and superior image generation quality. With the ability to generate photorealistic high-resolution (e.g., 1024 × 1024) images, recent GAN models have greatly narrowed the gaps between generated images and real ones. Therefore, many recent studies show emerging interest in taking advantage of pre-trained GAN models by exploiting the well-disentangled latent space and the learned GAN priors. In this study, we briefly review recent progress on leveraging pre-trained large-scale GAN models from three aspects: (1) the training of large-scale generative adversarial networks, (2) exploring and understanding pre-trained GAN models, and (3) leveraging these models for subsequent tasks like image restoration and editing.
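One of the "subsequent tasks" the survey covers, semantic editing via a well-disentangled latent space, reduces to moving a latent code along a learned direction and re-synthesizing. A hedged sketch, assuming a pre-trained generator is available as a plain z → image callable and the direction has been found separately (e.g., by InterFaceGAN-style boundary fitting); `edit_latent` and both of its arguments are illustrative, not any library's API:

```python
import torch

@torch.no_grad()
def edit_latent(generator, z, direction, strengths=(-2.0, 0.0, 2.0)):
    """Semantic editing via a pre-trained GAN's latent space: move a code `z`
    along a learned direction (e.g., 'age' or 'pose') and re-synthesize.

    `generator` is any pre-trained mapping z -> image; `direction` is a vector
    in the same latent space, normalized here so `strengths` are comparable.
    """
    direction = direction / direction.norm()
    return [generator(z + s * direction) for s in strengths]

# Hypothetical usage with a 512-D StyleGAN-like latent space:
# z = torch.randn(1, 512); imgs = edit_latent(G, z, age_direction)
```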
The Berth Allocation Problem (BAP) is a well-known and difficult combinatorial optimization problem that plays a crucial role in maritime transportation systems. BAP is categorized as non-deterministic polynomial-time hard (NP-...
Plant diseases are one of the major contributors to economic loss in the agriculture industry worldwide. Detecting disease at early stages can help reduce this loss. In recent times, a lot of emphasis ...
While databases are widely used in commercial user-facing services that have stringent quality-of-service (QoS) requirements, it is crucial to ensure their good performance and minimize hardware usage at the same time. Our investigation shows that the optimal DBMS (database management system) software configuration varies for different user request patterns (i.e., workloads) and hardware environments. It is challenging to identify the optimal software and hardware configurations for a database workload, because DBMSs have hundreds of tunable knobs, the effect of tuning a knob depends on other knobs, and the dependency relationship changes under different hardware environments. In this paper, we propose SHA, a software and hardware auto-tuning system for databases. SHA is comprised of a scaling-based performance predictor, a reinforcement learning (RL) based software tuner, and a QoS-aware resource reallocator. The performance predictor predicts a workload's optimal performance with different hardware configurations and identifies the minimum amount of resources needed to satisfy its performance requirement. The software tuner fine-tunes the DBMS software knobs to optimize the performance of the workload. The resource reallocator assigns the saved resources to other applications to improve resource utilization without incurring QoS violations for the database workloads. Experimental results show that SHA improves the performance of database workloads by 9.9% on average compared with a state-of-the-art solution when the hardware configuration is fixed, and improves resource utilization by 43.2% while ensuring QoS.
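The abstract gives SHA's architecture but not the tuner's internals, so the following is only a deliberately simplified stand-in for the RL-based software tuner: an epsilon-greedy loop over a tiny knob space that shows the reconfigure → replay → measure cycle. The knob names and the `run_benchmark` hook are hypothetical.

```python
import random

# Hypothetical knob space; real DBMSs expose hundreds of such knobs.
KNOBS = {"buffer_pool_mb": [512, 1024, 2048, 4096],
         "max_connections": [100, 200, 400]}

def run_benchmark(config):
    """Placeholder: apply `config` to the DBMS, replay the workload,
    and return measured throughput (higher is better)."""
    return random.random()

def tune(n_steps=50, eps=0.3):
    """Epsilon-greedy knob tuning: explore a random value with probability
    eps, otherwise nudge one knob a single step from the best config so far."""
    best = {k: random.choice(v) for k, v in KNOBS.items()}
    best_score = run_benchmark(best)
    for _ in range(n_steps):
        trial, knob = dict(best), random.choice(list(KNOBS))
        vals = KNOBS[knob]
        if random.random() < eps:
            trial[knob] = random.choice(vals)                 # explore
        else:
            i = vals.index(trial[knob]) + random.choice([-1, 1])
            trial[knob] = vals[max(0, min(len(vals) - 1, i))]  # exploit: one step
        score = run_benchmark(trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score
```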