ISBN (digital): 9783540321002
ISBN (print): 9783540297697
Welcome to the proceedings of ISPA 2005, which was held in the city of Nanjing. Parallel computing has become a mainstream research area in computer science, and the ISPA conference has become one of the premier forums for the presentation of new and exciting research on all aspects of parallel computing. We are pleased to present the proceedings of the 3rd International Symposium on Parallel and Distributed Processing and Applications (ISPA 2005), which comprise a collection of excellent technical papers and keynote speeches. The accepted papers cover a wide range of exciting topics, including architectures, software, networking, and applications. The conference continues to grow, and this year a record total of 968 manuscripts (including workshop submissions) were submitted for consideration by the Program Committee and the workshops. From the 645 papers submitted to the main conference, the Program Committee selected 90 long papers and 19 short papers for the program. Eight workshops complemented the outstanding paper sessions.
As an advanced carrier of on-board sensors, a connected autonomous vehicle (CAV) can be viewed as an aggregation of self-adaptive systems with monitor-analyze-plan-execute (MAPE) loops for vehicle-related services. Meanwhile, machine learning (ML) has been applied to enhance the analysis and planning functions of MAPE so that self-adaptive systems adapt optimally to changing conditions. However, most ML-based approaches do not exploit CAVs' connectivity to collaboratively train an optimal learner for MAPE, because the sensor data are threatened by gradient leakage attacks (GLA). In this article, we first design an intelligent architecture for MAPE-based self-adaptive systems on Web 3.0-based CAVs, in which a collaborative machine learner supports the capabilities of the managing systems. Then, we observe through practical experiments that importance-sampling approaches based on the sparse vector technique (SVT) cannot defend against GLA well. Next, we propose a fine-grained SVT approach to secure the learner in MAPE-based self-adaptive systems, which uses layer and gradient sampling to select uniform and important gradients. Finally, extensive experiments show that our private learner incurs only a slight utility cost for MAPE (e.g., a \(0.77\%\) decrease in accuracy) while defending against GLA, and outperforms typical SVT approaches in both defense (attack success rate reduced by \(10\%\sim 14\%\)) and utility (accuracy loss reduced by \(1.29\%\)).
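The defense described above hinges on releasing only a noisy, thresholded subset of gradients before they leave the vehicle. As a rough illustration of that idea (not the paper's actual algorithm), the following sketch applies a sparse-vector-style noisy threshold per layer and keeps at most a fixed fraction of entries; the function name, privacy parameters, and selection policy are all assumptions for illustration.

```python
import torch

def svt_select_gradients(layer_grads, threshold, eps1=1.0, eps2=1.0, keep_ratio=0.1):
    """Hypothetical sketch of SVT-style per-layer gradient selection.

    For every layer, a noisy threshold test (sparse vector technique) decides
    which gradient entries are released; the rest are zeroed before sharing.
    All names, parameters, and the selection policy are illustrative, not the
    paper's fine-grained SVT algorithm.
    """
    released = []
    for g in layer_grads:                                  # one tensor per layer
        flat = g.flatten()
        noisy_thr = threshold + torch.distributions.Laplace(0.0, 1.0 / eps1).sample()
        noise = torch.distributions.Laplace(0.0, 2.0 / eps2).sample(flat.shape)
        scores = flat.abs() + noise
        mask = scores > noisy_thr                          # SVT comparison per entry
        # cap the number of released entries per layer ("layer sampling")
        k = max(1, int(keep_ratio * flat.numel()))
        if int(mask.sum()) > k:
            capped = torch.where(mask, scores, torch.full_like(scores, float("-inf")))
            mask = torch.zeros_like(mask)
            mask[torch.topk(capped, k).indices] = True
        released.append(torch.where(mask, flat, torch.zeros_like(flat)).view_as(g))
    return released
```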
Pre-trained models (PTMs) have succeeded in various software engineering (SE) tasks following the “pre-train then fine-tune” paradigm. As fully fine-tuning all parameters of PTMs can be computationally expensive, a potential solution is parameter-efficient fine-tuning (PEFT), which freezes the PTM while introducing a small number of extra trainable parameters. Although PEFT methods have been applied to SE tasks, existing studies often focus on specific scenarios and lack a comprehensive comparison of PTMs across aspects such as field, size, and architecture. To fill this gap, we have conducted an empirical study on six PEFT methods, eight PTMs, and four SE tasks. The experimental results reveal several noteworthy findings. For example, model architecture has little impact on PTM performance when using PEFT methods. Additionally, we provide a comprehensive discussion of PEFT methods from three perspectives. First, we analyze the effectiveness and efficiency of PEFT methods. Second, we explore the impact of the scaling factor hyperparameter. Finally, we investigate the application of PEFT methods on the latest open-source large language model, Llama 3.2. These findings provide valuable insights to guide future researchers in effectively applying PEFT methods to SE tasks.
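To make the "freeze the PTM, add a few trainable parameters" idea concrete, the sketch below shows a LoRA-style adapter around a frozen linear layer, including a scaling factor analogous to the hyperparameter examined in the study. The class name, rank, and initialization are illustrative assumptions, not the experimental setup used in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA-style adapter: the pre-trained weight stays frozen
    and only two small low-rank matrices are trained."""
    def __init__(self, base: nn.Linear, rank: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the PTM weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = scaling

    def forward(self, x):
        # frozen pre-trained path + trainable low-rank update
        return self.base(x) + self.scaling * (x @ self.lora_a.T) @ self.lora_b.T
```

Only `lora_a` and `lora_b` receive gradients, which is what keeps the number of trainable parameters small; the effect of `scaling` mirrors the scaling-factor hyperparameter the study explores.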
Trajectory prediction is a crucial challenge in autonomous vehicle motion planning and decision-making. However, existing methods face limitations in accurately capturing vehicle dynamics and interactions. To address this issue, this paper proposes a novel approach that extracts vehicle velocity and acceleration, enabling the learning of vehicle dynamics and encoding them as auxiliary information. The VDI-LSTM model is designed, incorporating graph convolution and attention mechanisms to capture vehicle interactions using trajectory data and dynamic information. Specifically, a dynamics encoder is designed to capture the dynamic information, a dynamic graph is employed to represent vehicle interactions, and an attention mechanism is introduced to enhance the performance of the LSTM and the graph convolution. To demonstrate the effectiveness of our model, extensive experiments are conducted, including comparisons with several baselines and ablation studies on real-world highway datasets. Experimental results show that VDI-LSTM outperforms the compared baselines, achieving a 3% improvement in average RMSE over the five prediction steps.
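Since velocity and acceleration are derived from observed positions before being encoded as auxiliary information, a minimal sketch of that preprocessing step might look as follows; the function name, the fixed sampling interval, and the 2-D position format are our assumptions, not the paper's setup.

```python
import numpy as np

def extract_dynamics(traj, dt=0.2):
    """Illustrative sketch: derive velocity and acceleration from an observed
    trajectory by finite differences and attach them as auxiliary features.
    `traj` is a (T, 2) array of x/y positions sampled every `dt` seconds.
    """
    vel = np.gradient(traj, dt, axis=0)          # (T, 2) per-step velocity
    acc = np.gradient(vel, dt, axis=0)           # (T, 2) per-step acceleration
    # concatenate positions with dynamics as the encoder input
    return np.concatenate([traj, vel, acc], axis=1)   # (T, 6)
```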
Mobile software engineering has been a hot research topic for decades. Our fellow researchers have proposed various approaches (with over 7,000 publications for Android alone) in this field that have substantially contributed to the great success of the current mobile ecosystem. Existing research efforts mainly focus on the popular mobile platforms, namely Android and iOS. OpenHarmony, a newly open-sourced mobile platform, has rarely been considered, although it is the one requiring the most attention, as OpenHarmony is expected to occupy one-third of the market in China (if not in the world). To fill the gap, we present to the mobile software engineering community a research roadmap to encourage our fellow researchers to contribute promising approaches to OpenHarmony. Specifically, we start by presenting a tertiary study of mobile software engineering, attempting to understand what problems have been targeted by the mobile community and how they have been resolved. We then summarize the existing (limited) achievements of OpenHarmony and subsequently highlight the research gap between Android/iOS and OpenHarmony. This research gap eventually helps in forming the roadmap for conducting software engineering research for OpenHarmony.
Distributed Collaborative Machine Learning (DCML) has emerged in artificial intelligence-empowered edge computing environments, such as the Industrial Internet of Things (IIoT), to process the tremendous data generated by smart devices. However, parallel DCML frameworks require resource-constrained devices to update entire Deep Neural Network (DNN) models and are vulnerable to reconstruction attacks. Meanwhile, serial DCML frameworks suffer from low training efficiency due to their sequential training nature. In this paper, we propose a Model Pruning-enabled Federated Split Learning framework (MP-FSL) to reduce resource consumption with a secure and efficient training scheme. Specifically, MP-FSL compresses DNN models by adaptive channel pruning and splits each compressed model into two parts that are assigned to the client and the server. Meanwhile, MP-FSL adopts a novel aggregation algorithm to aggregate the pruned heterogeneous models. We implement MP-FSL on a real FL platform to evaluate its performance. The experimental results show that MP-FSL outperforms state-of-the-art frameworks in model accuracy by up to 1.35%, while reducing storage and computational resource consumption by up to 32.2% and 26.73%, respectively. These results demonstrate that MP-FSL is a comprehensive solution to the challenges faced by DCML, with superior performance in both reduced resource consumption and enhanced model accuracy.
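Two of the building blocks mentioned above, channel pruning and model splitting, can be sketched compactly. The snippet below uses simple L1-norm filter selection and a fixed cut point; MP-FSL's adaptive pruning policy and its aggregation algorithm are not reproduced here, and the helper names, keep ratio, and cut index are assumptions.

```python
import torch
import torch.nn as nn

def prune_channels(conv: nn.Conv2d, keep_ratio: float = 0.7) -> nn.Conv2d:
    """Illustrative L1-norm channel pruning: keep the output channels whose
    filters have the largest L1 norm. A simplified sketch only."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))     # one score per filter
    k = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.topk(scores, k).indices
    pruned = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

def split_model(model: nn.Sequential, cut: int = 2):
    """Split a compressed model: early layers stay on the client,
    the remaining layers run on the server (cut index is illustrative)."""
    return nn.Sequential(*list(model)[:cut]), nn.Sequential(*list(model)[cut:])
```

In a full implementation, pruning a convolution's output channels also requires adjusting the input channels of the following layer, which this sketch omits for brevity.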