The application of noninvasive methods to enhance healthcare systems has been facilitated by the development of new technology. Among the four major cardiovascular diseases, stroke is one of the deadliest and potentia...
Mobile app developers struggle to prioritize updates by identifying feature requests within user reviews. While machine learning models can assist, their complexity often hinders transparency and trust. This paper pre...
Interoperability is a crucial aspect of the effective functioning of Internet of Things (IoT) devices, particularly in the healthcare industry. Although the use of IoT devices in healthcare has brought numerous benefi...
Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated using ten PROMISE defect repository projects, and performance is assessed with three metrics: F1, area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
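The late-fusion step this abstract describes can be sketched as follows. This is a minimal illustration only: the feature dimensions, the random placeholder embeddings standing in for the GCN, CNN-BiLSTM, and TabNet branch outputs, and the single sigmoid layer standing in for the fully connected prediction head are all assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-file embeddings for the three heterogeneous branches
# (dimensions are illustrative, not taken from the paper).
ast_features = rng.normal(size=(8, 32))     # hybrid CNN-BiLSTM over AST sequences
cdn_features = rng.normal(size=(8, 16))     # GCN over the code dependency network
metric_features = rng.normal(size=(8, 20))  # TabNet over static quality metrics

# Late fusion: concatenate the heterogeneous embeddings per source file.
fused = np.concatenate([ast_features, cdn_features, metric_features], axis=1)

# A single fully connected layer with a sigmoid stands in for the
# fully connected prediction head that outputs a defect probability.
W = rng.normal(size=(fused.shape[1], 1)) * 0.1
b = np.zeros(1)
defect_prob = 1.0 / (1.0 + np.exp(-(fused @ W + b)))
```

Each of the eight hypothetical files ends up with one fused 68-dimensional vector and one defect probability in (0, 1).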
Diffusion models have become a prevalent framework in deep generative modeling across various modalities. However, despite producing high quality results, these models are computationally expensive and suffer from slow convergence. In this work, we address these challenges in image generation by leveraging the wavelet domain, which decomposes images into low and high-frequency components, each at half the resolution of the original image in both height and width. We observe that prioritizing the learning of low-frequency components over high-frequency details and masking out unnecessary high-frequency content in wavelet space can significantly enhance training convergence and reduce computational demands. This strategy simplifies the complexity associated with high-frequency details during training, allowing the model to capture the most representative features of the data distribution while maintaining a balance in detail preservation. To facilitate controlled learning across different wavelet coefficients, we employ a multitask loss function, with each task corresponding to the learning of a distinct wavelet subband. Additionally, to ensure consistency among wavelet coefficients, which is crucial for accurate reconstruction in pixel space, we introduce a multispectral cross-attention mechanism to aid the joint generation of different wavelet coefficients. The sampling process involves jointly generating wavelet coefficients, followed by an inverse wavelet transform to convert them back to pixel space. Our approach not only improves the training efficiency for unconditional image generation compared with the standard denoising diffusion probabilistic model (vanilla DDPM) but also uniquely supports the generation of high-frequency content conditioned on a low-resolution image, enabling both image generation and upsampling within a single model. To our knowledge, this capability is novel. Our model demonstrates superior performance in image generation compared with b
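The wavelet decomposition the abstract relies on can be made concrete with a one-level 2-D Haar transform, which splits an image into one low-frequency subband (LL) and three high-frequency subbands (LH, HL, HH), each at half the original resolution in height and width. This sketch uses a hand-rolled orthogonal Haar filter for self-containment; it is not the paper's code, and the specific normalization is an assumption.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH),
    each at half the input resolution in height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: exact reconstruction back to pixel space."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

img = np.random.default_rng(1).normal(size=(8, 8))
LL, LH, HL, HH = haar_dwt2(img)
recon = haar_idwt2(LL, LH, HL, HH)
```

Exact invertibility is what makes the sampling scheme in the abstract possible: the model generates the four coefficient maps jointly, and a single inverse transform recovers the full-resolution image.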
The growing advancements with Internet of Things (IoT) devices handle an enormous amount of data collected from various applications like healthcare, vehicle-based communication, and smart cities. This research analyses cloud-based privacy preservation over the smart city based on query processing. However, there is a lack of resources to handle the incoming data and maintain it with higher privacy and security. Hence, a solution-based idea needs to be proposed to preserve the IoT data to set up an innovative city environment. A querying service model is proposed to handle the incoming data collected from various environments, as the data is not so trusted and highly sensitive towards risk. While handling privacy, other inter-connected metrics like efficiency are also essential, which must be considered to fulfil the privacy constraints. Therefore, this work provides a query-based service model and clusters the query to measure the relevance of frequently generated queries. Then, a Bag of Query (BoQ) model is designed to collect the queries from various environments. This is done with a descriptive service provisioning model to cluster the query and extract the query's summary to get the final outcome. The processed data is preserved over the cloud storage system and optimized using an improved Grey Wolf Optimizer (GWO). It is used to attain global and local solutions regarding privacy preservation. The iterative data is evaluated without any over-fitting issues or computational complexity due to the tremendous data handling process. Based on this analysis, metrics like privacy, efficiency, computational complexity, and error rate are evaluated. The simulation is done in a MATLAB 2020a environment. The proposed model gives a better trade-off in contrast to existing approaches.
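The optimizer named in this abstract can be illustrated with the canonical Grey Wolf Optimizer loop; the paper's "improved" variant and its privacy objective are not specified in the abstract, so this sketch minimizes a generic test function under assumed parameter settings.

```python
import numpy as np

def gwo_minimize(f, dim=5, wolves=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Textbook Grey Wolf Optimizer: the pack follows the three best
    wolves (alpha, beta, delta) with a linearly decaying exploration
    factor. The paper's improvements are not reproduced here."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)  # exploration factor: 2 -> 0
        new_X = np.empty_like(X)
        for i in range(wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                candidates.append(leader - A * np.abs(C * leader - X[i]))
            new_X[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
        X = new_X
    fitness = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fitness)], float(fitness.min())

best, val = gwo_minimize(lambda x: float(np.sum(x**2)))
```

In the paper's setting, the fitness function would score a candidate privacy-preservation configuration rather than this placeholder sphere function.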
High-dimensional and incomplete (HDI) matrices are primarily generated in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. It is then incorporated into an LFA model for implementing efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in terms of prediction accuracy for missing data of an HDI matrix, with competitive computational efficiency. Moreover, SPSO's use ensures efficient and reliable hyper-parameter adaptation in an LFA model, thus ensuring practicality and accurate representation learning for HDI matrices.
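For context, here is the conventional PSO baseline that SPSO extends. The abstract's state-migration (generalized momentum) term is not reproduced, and the swarm size, inertia weight, and acceleration coefficients below are conventional assumed values, not the paper's.

```python
import numpy as np

def pso_minimize(f, dim=2, particles=15, iters=80, bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimizer: velocity blends inertia,
    attraction to each particle's personal best, and attraction to
    the global best. SPSO would add a state-migration momentum term."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (particles, dim))
    V = np.zeros_like(X)
    P = X.copy()                               # personal bests
    pbest = np.apply_along_axis(f, 1, X)
    g = P[np.argmin(pbest)]                    # global best
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pbest
        P[better], pbest[better] = X[better], fx[better]
        g = P[np.argmin(pbest)]
    return g, float(pbest.min())

g, val = pso_minimize(lambda x: float(np.sum(x**2)))
```

In the LFA setting, each particle position would encode hyper-parameters (e.g., learning rate and regularization strength) and `f` would return the model's validation error on the held-out entries of the HDI matrix.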
A dandelion algorithm (DA) is a recently developed intelligent optimization algorithm for function optimization problems. Many of DA's parameters need to be set by experience, which might not be appropriate for all optimization problems. A self-adapting and efficient dandelion algorithm is proposed in this work to lower the number of DA's parameters and simplify its structure. Only the normal sowing operator is retained, while the other operators are discarded. An adaptive seeding radius strategy is designed for the core dandelion. The results show that the proposed algorithm achieves better performance on the standard test functions with less time consumption than its competitive peers. In addition, the proposed algorithm is applied to feature selection for credit card fraud detection (CCFD), and the results indicate that it can obtain higher classification and detection performance than state-of-the-art methods.
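The retained normal sowing operator with an adaptive seeding radius can be sketched roughly as below. The exact adaptive rule of the paper is not given in the abstract, so the expand-on-success / shrink-on-failure schedule and all numeric settings here are illustrative assumptions.

```python
import numpy as np

def sada_minimize(f, dim=3, seeds=30, iters=60, bounds=(-5.0, 5.0), seed=0):
    """Sketch of normal sowing only: each generation, seeds are sown
    uniformly around the core (best) dandelion within a radius that
    adapts — widening when a seed improves on the core, narrowing
    otherwise. This is not the paper's exact strategy."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    core = rng.uniform(lo, hi, dim)
    best = f(core)
    radius = (hi - lo) / 2.0
    for _ in range(iters):
        pop = np.clip(core + rng.uniform(-radius, radius, (seeds, dim)), lo, hi)
        fit = np.apply_along_axis(f, 1, pop)
        i = int(np.argmin(fit))
        if fit[i] < best:
            core, best = pop[i], float(fit[i])
            radius *= 1.1   # improvement: widen the search (assumed rule)
        else:
            radius *= 0.9   # stagnation: narrow toward the core (assumed rule)
    return core, best

core, best = sada_minimize(lambda x: float(np.sum(x**2)))
```

For the CCFD application mentioned in the abstract, the candidate vector would instead encode a feature subset and `f` would score a classifier trained on those features.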
Stock market’s volatile and complex nature makes it difficult to predict the market situation. Deep Learning is capable of simulating and analyzing complex patterns in unstructured data. Deep learning models have app...
OpenAI and ChatGPT, as state-of-the-art language models driven by cutting-edge artificial intelligence technology, have gained widespread adoption across diverse industries. In the realm of computer vision, these models have been employed for intricate tasks including object recognition, image generation, and image processing, leveraging their advanced capabilities to fuel transformative breakthroughs. Within the gaming industry, they have found utility in crafting virtual characters and generating plots and dialogues, thereby enabling immersive and interactive player experiences. Furthermore, these models have been harnessed in medical diagnosis, providing invaluable insights and support to healthcare professionals in disease detection. The principal objective of this paper is to offer a comprehensive overview of OpenAI, OpenAI Gym, ChatGPT, DALL·E, Stable Diffusion, the pre-trained CLIP model, and other pertinent models in various domains, encompassing CLIP text-to-image generation, education, medical imaging, computer vision, social influence, natural language processing, software development, coding assistance, and chatbots, among others. Particular emphasis is placed on comparative analysis and examination of popular text-to-image and text-to-video models under diverse stimuli, shedding light on the current research landscape, emerging trends, and existing challenges within the domains of OpenAI and ChatGPT. Through a rigorous literature review, this paper aims to deliver a professional and insightful overview of the advancements, potentials, and limitations of these pioneering language models.