Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence on labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient network topology learning. At the same time, due to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network. To address the above problems, a position-aware and subgraph enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. First, the global snapshot is built based on the historical snapshots to express the stable pattern of the dynamic graph, and random walk is used to obtain the position representation by learning the positional information of the nodes. Second, a new data augmentation method is carried out from the perspectives of short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on snapshots and global snapshots is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining graph neural network, gated recurrent unit, and attention mechanism. Finally, the quality of node representation is improved by combining the contrastive learning between different structural augmentation views and between the two representations of structure and position. Experimental results on four real datasets show that the performance of the proposed method is better than that of existing unsupervised methods, and it is more competitive than supervised learning methods under a semi-supervised setting.
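The abstract above learns positional information via random walks. As a hedged sketch only (not the paper's exact encoding; function and variable names are ours), one common way to turn random walks into positional features is to use each node's k-step return probabilities:

```python
import numpy as np

def rw_positional_encoding(adj, k=4):
    """Return an (n, k) matrix whose column j holds each node's
    (j+1)-step random-walk return probability."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0               # guard isolated nodes
    P = adj / deg                     # row-stochastic transition matrix
    feats, Pk = [], np.eye(adj.shape[0])
    for _ in range(k):
        Pk = Pk @ P                   # (j+1)-step transition probabilities
        feats.append(np.diag(Pk))     # probability of returning to the start node
    return np.stack(feats, axis=1)

# 4-node cycle: all nodes are structurally identical, so all rows match
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
enc = rw_positional_encoding(A, k=3)
print(enc)   # odd-step columns are 0 (the cycle is bipartite), column 1 is 0.5
```

Features of this kind can then be concatenated with node attributes before the GNN layers.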
Gradient compression is a promising approach to alleviating the communication bottleneck in data parallel deep neural network (DNN) training by significantly reducing the data volume of gradients for synchronization. While gradient compression is being actively adopted by the industry (e.g., Facebook and AWS), our study reveals that there are two critical but often overlooked challenges: 1) inefficient coordination between compression and communication during gradient synchronization incurs substantial overheads, and 2) developing, optimizing, and integrating gradient compression algorithms into DNN systems imposes heavy burdens on DNN practitioners, and ad-hoc compression implementations often yield surprisingly poor system performance. In this paper, we propose a compression-aware gradient synchronization architecture, CaSync, which relies on flexible composition of basic computing and communication primitives. It is general and compatible with any gradient compression algorithms and gradient synchronization strategies and enables high-performance computation-communication pipelining. We further introduce a gradient compression toolkit, CompLL, to enable efficient development and automated integration of on-GPU compression algorithms into DNN systems with little programming burden. Lastly, we build a compression-aware DNN training framework HiPress with CaSync and CompLL. HiPress is open-sourced and runs on mainstream DNN systems such as MXNet, TensorFlow, and PyTorch. Evaluation via a 16-node cluster with 128 NVIDIA V100 GPUs and a 100 Gbps network shows that HiPress improves the training speed over current compression-enabled systems (e.g., BytePS-onebit, Ring-DGC and PyTorch-PowerSGD) by 9.8%-69.5% across six popular DNN models.
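For readers unfamiliar with the compression primitives such a toolkit composes with communication, here is a hedged sketch of one widely used algorithm, top-k sparsification (this illustrates the generic compress/decompress pair only; CaSync's and CompLL's actual interfaces are not reproduced here):

```python
import numpy as np

def topk_compress(grad, ratio=0.01):
    """Keep only the ratio-largest-magnitude entries: (indices, values)."""
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    """Rebuild a dense gradient with zeros everywhere else."""
    out = np.zeros(size, dtype=vals.dtype)
    out[idx] = vals
    return out

g = np.array([0.01, -3.0, 0.2, 5.0, -0.05, 0.4])
idx, vals = topk_compress(g, ratio=0.34)       # keeps k = 2 entries
restored = topk_decompress(idx, vals, g.size)
print(restored)   # only the two largest-magnitude entries survive
```

In a pipelined synchronizer, gradients are partitioned into chunks so that chunk i can be compressed on the GPU while chunk i-1 is in flight on the network.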
This paper presents ScenePalette, a modeling tool that allows users to "draw" 3D scenes interactively by placing objects on a canvas based on their contextual relations. ScenePalette is inspired by an important intuition which was often ignored in previous work: a real-world 3D scene consists of a contextually reasonable organization of objects. For example, people typically place one double bed with several subordinate objects into a bedroom instead of different shapes of beds. ScenePalette abstracts 3D repositories as multiplex networks and accordingly encodes implicit relations between or among objects. Specifically, basic statistics such as co-occurrence, in combination with advanced relations, are used to tackle object relationships of different levels. Extensive experiments demonstrate that the latent space of ScenePalette has rich contexts that are essential for contextual representation and exploration.
Parkinson’s disease (PD) is a chronic neurological condition that progresses over time. Patients start to have trouble speaking, writing, walking, or performing other basic skills as dopamine-generating neurons in some brain regions are injured or die. The patient’s symptoms become more severe due to the worsening of their signs over time. In this study, we applied state-of-the-art machine learning algorithms to diagnose Parkinson’s disease and identify related risk factors. This research worked on the publicly available dataset on PD, which consists of a set of significant characteristics of patients. We aim to apply soft computing techniques and provide an effective solution for medical professionals to diagnose PD accurately. The research methodology involves developing a model using machine learning algorithms. For the model selection, eight different machine learning techniques were adopted: namely, Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM), Naïve Bayes (NB), Light Gradient Boosting Machine (LightGBM), K-Nearest Neighbours (KNN), Extreme Gradient Boosting (XGBoost), and Logistic Regression (LR). Subsequently, the selected models were validated through 10-fold cross-validation and Receiver Operating Characteristic (ROC) Area Under the Curve (AUC). In addition, GridSearchCV was utilised to find each algorithm’s best parameters; eventually, the models were trained through the hyperparameter tuning process. With 98% accuracy, LightGBM had the highest accuracy in this study; KNN and SVM came in second with 96% accuracy, while the performance scores of NB and LR were recorded to be 76% and 83%, respectively. It is to be mentioned that after applying 10-fold cross-validation, the average performance score of LightGBM accounted for 93%, while the ROC-AUC appeared at 0.92, which indicates that this LightGBM model reached a satisfactory level. Furthermore, we extracted meaningful insights and figured out potential gaps. By extracting meaningful in
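The tuning-then-validation workflow described above (GridSearchCV followed by 10-fold cross-validation with ROC-AUC) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, assuming a Random Forest stand-in; the real PD features, LightGBM model, and parameter grids differ:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for the PD dataset (the real features differ).
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# Hyperparameter tuning with GridSearchCV...
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid={"n_estimators": [50, 100],
                                "max_depth": [None, 5]},
                    cv=5, scoring="roc_auc")
grid.fit(X, y)

# ...then 10-fold cross-validation of the tuned model, scored by ROC-AUC.
scores = cross_val_score(grid.best_estimator_, X, y, cv=10, scoring="roc_auc")
print(grid.best_params_, round(scores.mean(), 3))
```

The same scaffold applies to any of the eight classifiers by swapping the estimator and its parameter grid.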
Dexterous robot manipulation has shone in complex industrial scenarios, where multiple manipulators, or fingers, cooperate to grasp and manipulate objects. In such scenarios, model predictive control (MPC) has demonstrated exceptional performance in complex multi-robot manipulation tasks involving multi-objective optimization with system constraints. However, the substantial computational load required to solve the optimal control problem (OCP) at each triggering instant can lead to significant delays between state sampling and control application, hindering real-time performance. To address these challenges, this paper introduces a novel robust tube-based smooth MPC approach for two fundamental manipulation tasks: reaching a given target and tracking a reference trajectory. By predicting the successor state as the initial condition for the imminent OCP, we can solve the forthcoming OCP ahead of time, alleviating delay effects. Additionally, we establish an upper bound for linearizing the original nonlinear system, reducing OCP complexity and enhancing response speed. Grounded in tube-based MPC theory, recursive feasibility and closed-loop stability amidst constraints and disturbances are ensured. Empirical validation is provided through two numerical simulations and two real-world dexterous robot manipulation tasks, which shows that the seamless control input produced by our method can effectively enhance solving efficiency and control performance compared to conventional time-triggered MPC strategies.
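The delay-compensation idea above (predict the successor state, then compute the next input before it is needed) can be sketched with a toy linear system. This is a hedged illustration only: a double integrator with a fixed feedback gain standing in for the OCP solution; the matrices and gain are ours, not the paper's:

```python
import numpy as np

# Discrete double integrator; K stands in for the OCP solution.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[10.0, 3.0]])           # stabilizing gain (checked offline)

x = np.array([[1.0], [0.0]])
u = -K @ x                            # input for step 0, computed in advance
for _ in range(100):
    x_pred = A @ x + B @ u            # predict the successor state now...
    u_next = -K @ x_pred              # ...and "solve" the next OCP ahead of time
    x = A @ x + B @ u                 # meanwhile the current input is applied
    u = u_next
print(np.linalg.norm(x))              # state driven near the origin
```

Because the next input is prepared while the current one is being applied, the controller never waits on the solver at a sampling instant.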
Diabetes has become one of the significant causes of public sickness and death worldwide. By 2019, diabetes had affected more than 463 million people. According to the International Diabetes Federation ...
Emerging technologies of Agriculture 4.0 such as the Internet of Things (IoT), Cloud Computing, Artificial Intelligence (AI), and 5G network services are being rapidly deployed to address smart farming implementation-...
Most of the search-based software remodularization (SBSR) approaches designed to address the software remodularization problem (SRP) utilize only structural information-based coupling and cohesion quality criteria. However, in practice, apart from these quality criteria, other aspects of coupling and cohesion quality criteria, such as lexical and change-history information, are required in designing the modules of the software system. Therefore, consideration of limited aspects of software information in SBSR may generate a sub-optimal modularization solution. Sometimes, such modularization can be good from the quality metrics perspective but may not be acceptable to the developers. To produce a remodularization solution acceptable from both the quality metrics and the developers' perspectives, this paper exploits more dimensions of software information to define the quality criteria as modularization objectives. Further, these objectives are simultaneously optimized using a tailored many-objective artificial bee colony (MaABC) algorithm to produce a remodularization solution. To assess the effectiveness of the proposed approach, we applied it to five software systems. The obtained remodularization solutions are evaluated with software quality metrics and the developers' view of remodularization. Results demonstrate that the proposed software remodularization is an effective approach for generating good-quality modularization solutions.
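The many-objective tailoring is the paper's contribution; as a hedged sketch of the basic artificial bee colony skeleton such an approach builds on (employed, onlooker, and scout phases), here is a single-objective toy on the sphere function, with all names and parameters ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def try_neighbor(f, foods, fit, trials, i):
    """Perturb source i toward a random partner; keep it if it improves."""
    n, dim = foods.shape
    j = rng.integers(n - 1)
    j += j >= i                                   # partner != i
    d = rng.integers(dim)
    cand = foods[i].copy()
    cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[j, d])
    if (fc := f(cand)) < fit[i]:
        foods[i], fit[i], trials[i] = cand, fc, 0
    else:
        trials[i] += 1

def abc_minimize(f, dim=5, n_food=10, limit=20, iters=200, lo=-5.0, hi=5.0):
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for i in range(n_food):                   # employed bee phase
            try_neighbor(f, foods, fit, trials, i)
        p = fit.max() - fit + 1e-12               # onlooker phase: better
        p /= p.sum()                              # sources get more visits
        for i in rng.choice(n_food, size=n_food, p=p):
            try_neighbor(f, foods, fit, trials, i)
        for i in np.flatnonzero(trials > limit):  # scout phase: abandon
            foods[i] = rng.uniform(lo, hi, dim)   # exhausted sources
            fit[i], trials[i] = f(foods[i]), 0
    b = fit.argmin()
    return foods[b], fit[b]

best_x, best_f = abc_minimize(lambda x: float(np.sum(x ** 2)))
print(best_f)   # close to the sphere optimum at 0
```

A many-objective variant would replace the scalar greedy comparison with Pareto-dominance-based selection over the modularization objectives.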
Modern apps require high computing resources for real-time data processing, allowing app users (AUs) to access real-time information. Edge computing (EC) provides dynamic computing resources to AUs for real-time data processing. However, due to resources and coverage constraints, edge servers (ESs) in specific areas can only serve a limited number of AUs. Hence, the app user allocation problem (AUAP) becomes challenging in the EC environment. This paper proposes a quantum-inspired differential evolution algorithm (QDE-UA) for efficient user allocation in the EC environment. The quantum vector is designed to provide a complete solution to the AUAP. The fitness function considers the minimum use of ES, user allocation rate (UAR), energy consumption, and load balance. Extensive simulations and hypothesis-based statistical analyses (ANOVA, Friedman test) are performed to show the significance of the proposed QDE-UA. The results indicate that QDE-UA outperforms the majority of the existing strategies with an average UAR improvement of 112.42%, and 140.62% enhancement in load balance while utilizing 13.98% fewer ESs. Due to the higher UAR, QDE-UA shows 59.28% higher total energy consumption on average. However, the lower energy consumption per AU is evidence of its energy efficiency.
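As a hedged sketch of the quantum-inspired representation such algorithms typically use (the paper's exact quantum vector, operators, and fitness terms are not reproduced here), each solution gene can be held as a qubit angle and collapsed to a concrete 0/1 allocation on observation, while differential evolution mutates the angles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each gene is a qubit angle theta; "observing" collapses it to a bit
# with probability sin^2(theta). Names and encoding are illustrative only.
def observe(theta):
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

theta = np.full(8, np.pi / 4)        # equal superposition: p = 0.5 per bit
print(observe(theta))                # a random 0/1 allocation vector

# A DE/rand/1 step mutates the angles, not the bits:
a, b, c = (rng.uniform(0, np.pi / 2, 8) for _ in range(3))
mutant = np.clip(a + 0.5 * (b - c), 0, np.pi / 2)
```

Observed bit vectors are then decoded into user-to-ES assignments and scored by the multi-term fitness function before selection.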
Partial-label learning (PLL) is a typical problem of weakly supervised learning, where each training instance is annotated with a set of candidate labels. Self-training PLL models achieve state-of-the-art performance but suffer from error accumulation problems caused by mistakenly disambiguated instances. Although co-training can alleviate this issue by training two networks simultaneously and allowing them to interact with each other, most existing co-training methods train two structurally identical networks with the same task, i.e., they are symmetric, rendering it insufficient for them to correct each other due to their similar limitations. Therefore, in this paper, we propose an asymmetric dual-task co-training PLL model called AsyCo, which forces its two networks, i.e., a disambiguation network and an auxiliary network, to learn from different views explicitly by optimizing distinct tasks. Specifically, the disambiguation network is trained with a self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from the noisy pairwise similarity labels that are constructed according to the learned label confidence. Finally, the error accumulation problem is mitigated via information distillation and confidence refinement. Extensive experiments on both uniform and instance-dependent partially labeled datasets demonstrate the effectiveness of AsyCo.
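The construction of noisy pairwise similarity labels from learned label confidences, as described above, can be illustrated in a few lines (variable names are ours, not AsyCo's API):

```python
import numpy as np

# Learned label-confidence matrix: one row per instance, one column per class.
conf = np.array([[0.7, 0.2, 0.1],    # instance 0: most confident in class 0
                 [0.1, 0.8, 0.1],    # instance 1: class 1
                 [0.6, 0.3, 0.1]])   # instance 2: class 0
pseudo = conf.argmax(axis=1)         # disambiguated pseudo-labels
# Pairwise similarity: 1 if two instances share a pseudo-label, else 0.
sim = (pseudo[:, None] == pseudo[None, :]).astype(int)
print(sim)   # instances 0 and 2 are marked "similar", instance 1 is not
```

These similarity labels are noisy precisely because the pseudo-labels may be wrong, which is why the auxiliary network's supervised task gives a view that differs from the disambiguation network's.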