Communication networks represent communication between entities, as in social networks and the microservice call graphs of microservice systems. Link prediction is useful in many communication-network service systems, for example for predicting the relation between two services. A continuous-time dynamic graph (CTDG) is one way of representing temporal information in communication networks: the network is treated as a set of events occurring over time. Dynamic graph embedding for CTDGs handles these events and disseminates event information to other nodes to obtain node embeddings. Although dynamic graph embedding is well suited to link prediction in communication networks, graph embedding for CTDGs faces challenges such as how to model the dissemination of event information, including the decay of information over distance and the influence of time. To address these challenges, we propose a CTDG-based dynamic graph embedding framework for link prediction in dynamic communication networks called CTDGNN (Continuous-Time Dynamic Graph Neural Networks). In particular, we propose a self-adaptive information dissemination strategy based on node importance that updates node embeddings by disseminating event information. Finally, extensive numerical experiments on three real-world communication network datasets validate the effectiveness of the proposed model compared with related methods.
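To make the event-dissemination idea above concrete, the sketch below processes a toy stream of timed interaction events and propagates each event's information to nearby nodes with a time decay and a per-node importance weight. It is an illustrative sketch under assumed update rules, not the CTDGNN implementation; the decay constant, update rates, and the importance vector are placeholders.

```python
# Illustrative sketch only (not the CTDGNN implementation): process a stream of
# timed interaction events and propagate each event's information to neighbouring
# nodes with a time decay and a per-node importance weight (all rules assumed).
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 5, 8
emb = rng.normal(size=(num_nodes, dim))      # node embeddings
last_seen = np.zeros(num_nodes)              # last interaction time per node
importance = np.ones(num_nodes)              # e.g. degree- or centrality-based weight

def time_decay(delta_t, tau=10.0):
    """Older information contributes less (exponential decay, tau is a free parameter)."""
    return np.exp(-delta_t / tau)

def process_event(src, dst, t, neighbours):
    """Update src/dst from the event, then disseminate a decayed message to neighbours."""
    msg = 0.5 * (emb[src] + emb[dst])
    for node in (src, dst):
        w = time_decay(t - last_seen[node])
        emb[node] = 0.9 * emb[node] + 0.1 * w * msg
        last_seen[node] = t
    # one-hop dissemination, scaled by the receiver's importance weight and time decay
    for n in neighbours:
        emb[n] += 0.05 * importance[n] * time_decay(t - last_seen[n]) * msg

# toy event stream: (source, destination, timestamp, one-hop neighbours)
for src, dst, t, nbrs in [(0, 1, 1.0, [2]), (1, 3, 4.0, [2, 4]), (0, 4, 9.0, [1])]:
    process_event(src, dst, t, nbrs)
```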
Large Language Models (LLMs) have gained significant attention in the field of natural language processing (NLP) due to their wide range of applications. However, training LLMs for languages other than English poses significant challenges, due to the difficulty of acquiring large-scale corpora and the required computing resources. In this paper, we propose ChatFlow, a cross-language transfer-based LLM, to address these challenges and train large Chinese language models in a cost-effective manner. We employ a mix of Chinese, English, and parallel corpora to continually train the LLaMA-2 model, aiming to align cross-language representations and facilitate knowledge transfer to the Chinese language model. In addition, we use a dynamic data sampler to progressively transition the model from unsupervised pre-training to supervised fine-tuning. Experimental results demonstrate that our approach accelerates model convergence and achieves superior performance. We evaluate ChatFlow on popular Chinese and English benchmarks; the results indicate that it outperforms other Chinese models post-trained on LLaMA-2-7B.
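The dynamic data sampler mentioned above can be pictured as a schedule that gradually increases the share of supervised examples in each batch. The sketch below is a hypothetical illustration with a simple linear schedule; the ramp shape, pool contents, and batch size are assumptions, not ChatFlow's actual recipe.

```python
# Hypothetical sketch of a "dynamic data sampler" that gradually shifts the batch
# mix from unsupervised pre-training text to supervised instruction data.
import random

def sft_ratio(step, total_steps):
    """Fraction of supervised examples in a batch, ramping linearly from 0 to 1 (assumed schedule)."""
    return min(1.0, step / total_steps)

def sample_batch(pretrain_pool, sft_pool, step, total_steps, batch_size=4):
    ratio = sft_ratio(step, total_steps)
    batch = []
    for _ in range(batch_size):
        pool = sft_pool if random.random() < ratio else pretrain_pool
        batch.append(random.choice(pool))
    return batch

pretrain_pool = ["<unsupervised Chinese/English/parallel text>"] * 100
sft_pool = ["<instruction, response pair>"] * 100
for step in (0, 500, 1000):
    batch = sample_batch(pretrain_pool, sft_pool, step, total_steps=1000)
    print(step, sum(x.startswith("<instruction") for x in batch), "supervised examples in batch")
```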
In recent times, internet of things (IoT) applications on the cloud might not be the effective solution for every IoT scenario, particularly for time-sensitive applications. A significant alternative is edge computing, which resolves the problem of the high bandwidth required by end users. Edge computing is considered a method of forwarding the processing and communication resources of the cloud towards the edge. One of the considerations of the edge computing environment is resource management, which involves resource scheduling, load balancing, task scheduling, and quality of service (QoS) to accomplish improved performance. With this motivation, this paper presents a new soft computing based metaheuristic algorithm for resource scheduling (RS) in the edge computing environment. The SCBMA-RS model involves the hybridization of the Group Teaching Optimization Algorithm (GTOA) with the Rat Swarm Optimizer (RSO) algorithm for optimal resource scheduling. The goal of the SCBMA-RS model is to identify and allocate resources to every incoming user request in such a way that the client's necessities are satisfied with the minimum number of possible resources and optimal energy consumption. The problem is formulated based on the availability of VMs, task characteristics, and queue length. The integration of the GTOA and RSO algorithms helps to improve the allocation of resources among VMs in the data center. For experimental validation, a comprehensive set of simulations was performed using the CloudSim tool. The experimental results showcased the superior performance of the SCBMA-RS model in terms of different measures.
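As an illustration of the kind of scheduling objective such a metaheuristic searches over, the sketch below scores a candidate task-to-VM assignment by makespan plus a simple energy proxy. The weights, power figures, and fitness form are assumptions for illustration, not the SCBMA-RS formulation.

```python
# Illustrative fitness for metaheuristic resource scheduling (not the SCBMA-RS formulation):
# a candidate solution assigns each task to a VM; a GTOA/RSO-style optimiser would
# minimise this weighted combination of makespan and an energy proxy.
def fitness(assignment, task_lengths, vm_mips, idle_power=50.0, busy_power=100.0, w=0.5):
    vm_busy = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        vm_busy[vm] += task_lengths[task] / vm_mips[vm]    # execution time on that VM
    makespan = max(vm_busy)
    energy = sum(busy_power * t + idle_power * (makespan - t) for t in vm_busy)
    return w * makespan + (1 - w) * energy / 1000.0        # weighted objective (weights assumed)

task_lengths = [400, 900, 250, 700]    # task sizes (e.g. million instructions)
vm_mips = [500, 1000]                  # VM processing capacities
print(fitness([0, 1, 0, 1], task_lengths, vm_mips))
```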
Nowadays, people have developed various convolutional neural network (CNN) based models for computer vision. Famous models, such as GoogLeNet, Residual Network (ResNet), Visual Geometry Group (VGG), and You Only Look Once (YOLO), have different architectures and features. Choosing which model to use may be a troublesome problem for those just starting to study image classification. To solve this problem, we introduce the GoogLeNet, ResNet-18, and VGG-16 models, comparing their architectures, features, and performance. Then we give our suggestions based on the test results to help beginners choose a suitable model. We conducted experiments to train and test GoogLeNet, ResNet-18, and VGG-16 on the Cifar-100 dataset with the same settings. Based on the test results (test accuracy, average test loss, training loss), we analyze the figures for trends, key points, increase rate, and other details. Then we combine the architecture of each model to make our suggestions. The experimental results show that ResNet-18 can be a good choice when training on the Cifar-100 dataset because it performs well after training and has a low time complexity. ResNet-18 also has the fastest convergence speed. VGG-16 would be the second choice because it functions similarly to ResNet-18 and is even simpler. However, training GoogLeNet is a time-consuming process. It is not recommended in this experiment because it has the worst performance and similar training complexity compared with ResNet-18.
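For readers who want to reproduce this kind of comparison, the sketch below trains and evaluates torchvision implementations of two of the models on CIFAR-100 under identical settings. It is a rough harness with assumed hyperparameters, not the authors' code; GoogLeNet could be added to the dictionary in the same way (constructed with aux_logits=False so it returns a single logit tensor).

```python
# Rough comparison harness (assumptions: torchvision models adapted to 100 classes,
# identical optimiser and schedule; this mirrors the setup described, not the authors' code).
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
tfm = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5071, 0.4865, 0.4409), (0.2673, 0.2564, 0.2762)),  # CIFAR-100 stats
])
train_set = torchvision.datasets.CIFAR100("./data", train=True, download=True, transform=tfm)
test_set = torchvision.datasets.CIFAR100("./data", train=False, download=True, transform=tfm)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

models = {
    "resnet18": torchvision.models.resnet18(num_classes=100),
    "vgg16": torchvision.models.vgg16(num_classes=100),
    # "googlenet": torchvision.models.googlenet(num_classes=100, aux_logits=False),
}

for name, model in models.items():
    model = model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in train_loader:                      # one epoch shown; repeat for a full comparison
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (model(x.to(device)).argmax(1).cpu() == y).sum().item()
    print(name, "test accuracy:", correct / len(test_set))
```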
ISBN (digital): 9781665488105
ISBN (print): 9781665488112
The problem of path planning is a challenging task for mobile robots. A practical example can be seen in the robots commonly employed in warehouses: they must navigate to pick up goods and move them to certain locations. The robot therefore needs a method of moving repeatedly from an initial location in the warehouse to a final location. In this paper, we propose a technique that allows a robot to plan paths in generalized environments, from different starting and goal locations. The method is based on a graph representation of the environment and is capable of finding the shortest path between two points. To generalize appropriately, we show that a neural network is able to effectively choose the correct action to take at each time step of the path planning problem.
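The graph formulation behind this setup can be illustrated with a classical baseline: model the warehouse as a grid graph and search for a shortest obstacle-free route. The sketch below uses breadth-first search on a toy grid; it only shows the graph side of the problem, not the neural policy the paper proposes, and the grid and coordinates are placeholders.

```python
# Baseline sketch of the graph formulation: the warehouse is a grid graph and BFS
# finds a shortest obstacle-free path between a start and a goal cell.
from collections import deque

def shortest_path(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back through parents to rebuild the path
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))
```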
Artificial intelligence (AI), together with deep learning techniques, has become an integral part of almost all aspects of life. One of the domains significantly impacted by this technological revolution is healthcare. Deep learning-based AI systems assist clinicians and medical professionals in disease diagnosis, personalized treatment, and monitoring through wearables, among other applications. Despite its expedient integration into healthcare, the trustworthiness of deep learning models remains a concern, primarily due to a lack of understanding of their underlying processes. Explainable AI (XAI) offers explanations through various methods, including Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Grad-CAM, and is used to enhance transparency, allowing users to understand and trust AI decisions. In this study, we present deep learning models for the classification of pneumonia in chest X-ray images, followed by their explanations. Convolutional Neural Networks (CNNs) and pre-trained models, including VGG16, MobileNetV3, and ResNet50, were used to classify images as ‘normal’ or ‘pneumonia’. The VGG16 model, known for its strong image understanding capabilities, achieved the highest accuracy at 93%. We then used the XAI techniques SHAP, LIME, and Grad-CAM to explain the models; LIME and Grad-CAM provided more accurate explanations than SHAP in our experiments. This approach was taken to evaluate the fairness and transparency of the models. The insights gained from XAI can be used to refine and improve machine learning models by identifying areas of weakness or misinterpretation, which increases overall model robustness.
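As an example of one of the explanation methods mentioned above, the sketch below computes a Grad-CAM heat map from a torchvision VGG16 using forward/backward hooks on its last convolutional layer. It is a hedged illustration: a real pipeline would load the fine-tuned two-class pneumonia checkpoint and an actual chest X-ray instead of the random placeholder tensor.

```python
# Hedged Grad-CAM sketch on a torchvision VGG16 (illustrative, not the study's code).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(weights=None)          # substitute the fine-tuned pneumonia checkpoint here
model.eval()
activations, gradients = {}, {}

target_layer = model.features[28]    # last convolutional layer of VGG16
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)      # placeholder input; use a preprocessed chest X-ray in practice
scores = model(x)
scores[0, scores.argmax()].backward()   # backprop the score of the predicted class

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)     # global-average-pooled gradients
cam = F.relu((weights * activations["a"]).sum(dim=1))       # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise to [0, 1] for overlay
print(cam.shape)                                            # (1, 1, 224, 224) heat map
```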
The Horse Herd Optimization Algorithm (HOA) is a new meta-heuristic algorithm based on the behaviors of horses at different ages. The HOA was introduced recently to solve complex and high-dimensional problems. This pa...
With the emergence of the "unmanned" field, unmanned supermarket software has entered consumers' lives in line with the pace of development of the times. Nowadays, developers of unmanned supermarket software often tend to create feature-rich consumer mashup applications by invoking a variety of Web APIs (Application Programming Interfaces) in order to save time and costs. The number of APIs available on edge servers has increased significantly with the rise of mobile edge computing. To cope with the continuous growth in API volume and the increasing diversity of API functions in API sharing communities (e.g., ***), lightweight recommendation techniques have been employed to assist consumer mashup developers in finding their desired Web APIs from a vast pool of candidates. However, during data processing in edge computing, traditional Web API recommendation approaches often prioritize the accuracy of recommended APIs while neglecting the potential privacy risks associated with disclosure. This significantly reduces the willingness and incentive of consumer mashup developers to share API information, such as historical edge mashup-API invocation records. To address this issue, we propose a privacy-preserving Web API recommendation approach, WARecITQ, based on Iterative Quantization (ITQ). Specifically, we convert the sensitive mashup-API invocation records into less-sensitive mashup indices using Iterative Quantization hash coding. We then use these less-sensitive mashup indices as the primary decision-making criteria for Web API recommendation, thereby completing top-k API recommendations. Lastly, we crawled a real-world dataset of mashup-API invocation records from *** and conducted a series of experiments on the dataset to evaluate performance. The experimental results demonstrate the superiority of our WARecITQ approach on multiple performance metrics compared with other related approaches, while ensuring privacy protection.
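To illustrate the hashing idea at the core of this approach, the sketch below computes ITQ-style binary codes from a toy mashup-API invocation matrix and recommends APIs from the Hamming-nearest mashups. The code length, neighbourhood size, and scoring rule are assumptions for illustration; this is not the WARecITQ system itself.

```python
# Simplified ITQ-style hashing sketch: binary mashup codes replace the raw (sensitive)
# invocation records as the decision-making input for top-k API recommendation.
import numpy as np

def itq_codes(X, n_bits=16, n_iter=50, seed=0):
    """PCA-project X, then learn an orthogonal rotation minimising quantisation error (ITQ idea)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Xc @ Vt[:n_bits].T                     # top principal components
    R = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))[0]
    for _ in range(n_iter):
        B = np.sign(V @ R)                     # fix rotation, update binary codes
        U, _, Wt = np.linalg.svd(B.T @ V)      # fix codes, update rotation (orthogonal Procrustes)
        R = (U @ Wt).T
    return (np.sign(V @ R) > 0).astype(np.uint8)

def recommend(query_code, codes, invocations, k=3):
    """Rank candidate APIs by the invocations of the Hamming-nearest mashups (neighbourhood size assumed)."""
    dists = (codes != query_code).sum(axis=1)
    neighbours = np.argsort(dists)[:5]
    scores = invocations[neighbours].sum(axis=0)
    return np.argsort(-scores)[:k]

invocations = (np.random.default_rng(1).random((50, 30)) > 0.8).astype(float)  # mashup x API matrix
codes = itq_codes(invocations)
print(recommend(codes[0], codes, invocations))
```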
Gender bias in vision-language models (VLMs) can reinforce harmful stereotypes and discrimination. In this paper, we focus on mitigating gender bias towards vision-language tasks. We identify object hallucination as t...
Multimodal medical image fusion is vital for extracting complementary information and generating comprehensive images in clinical applications. However, existing deep learning-based fusion approaches face challenges in effectively utilizing frequency-domain information, designing appropriate integration strategies, and modelling long-range contextual correlations. To address these issues, we propose a novel unsupervised multimodal medical image fusion method called Multiscale Fourier Attention and Detail-Aware Fusion (MFA-DAF). Our approach employs a multiscale Fourier attention encoder to extract rich features, followed by a detail-aware fusion strategy for comprehensive integration. The fused image is obtained using a nested-connection Fourier attention decoder. We adopt a two-stage training strategy and design new loss functions for each stage. Experimental results demonstrate that our model outperforms other state-of-the-art methods, producing fused images with enhanced texture information and superior visual quality.
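To give a concrete picture of what a frequency-domain attention block can look like, the sketch below re-weights feature maps in the Fourier domain and adds the result back as a residual. The block structure is an assumption for illustration and does not reproduce the MFA-DAF architecture.

```python
# Minimal sketch of a frequency-domain ("Fourier") attention block (assumed structure,
# not the MFA-DAF design): attend over the magnitude spectrum and re-weight frequencies.
import torch
import torch.nn as nn

class FourierAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU(),
                                 nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        freq = torch.fft.rfft2(x, norm="ortho")        # 2-D FFT over the spatial dimensions
        mag = torch.abs(freq)                          # magnitude spectrum
        attn = self.mlp(mag)                           # per-frequency attention weights
        out = torch.fft.irfft2(freq * attn, s=x.shape[-2:], norm="ortho")
        return x + out                                 # residual connection

x = torch.randn(1, 16, 64, 64)                         # e.g. feature maps of a CT / MRI pair
print(FourierAttention(16)(x).shape)
```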