ISBN:
(digital) 9783540471288
ISBN:
(print) 9783540471271
The Web is a live environment that manages and drives a wide spectrum of applications in which a user may interact with a company, a governmental authority, a non-governmental organization or other non-profit institution, or other users. User preferences and expectations, together with usage patterns, form the basis for personalized, user-friendly and business-optimal services. Key Web business metrics enabled by proper data capture and processing are essential to run an effective business or service. Enabling technologies include data mining, scalable warehousing and preprocessing, sequence discovery, real-time processing, document classification, user modeling and quality evaluation models for them. Recipient technologies required for user profiling and usage patterns include recommendation systems, Web analytics applications, and application servers, coupled with content management systems and fraud detectors. Furthermore, the inherent and increasing heterogeneity of the Web has required Web-based applications to more effectively integrate a variety of types of data across multiple channels and from different sources. The development and application of Web mining techniques in the context of Web content, Web usage, and Web structure data has already resulted in dramatic improvements in a variety of Web applications, from search engines, Web agents, and content management systems, to Web analytics and personalization services. A focus on techniques and architectures for more effective integration and mining of content, usage, and structure data from different sources is likely to lead to the next generation of more useful and more intelligent applications.
This book constitutes the refereed proceedings of the 16th Australasian Conference on Data Mining, AusDM 2018, held in Bathurst, NSW, Australia, in November 2018.
ISBN:
(digital) 9789811366611
ISBN:
(print) 9789811366604
ISBN:
(digital) 9783642402708
ISBN:
(print) 9783642402692
This book constitutes the workshop proceedings of the 18th International Conference on Database Systems for Advanced Applications, DASFAA 2013, held in Wuhan, China, in April 2013.
The volume contains three workshops, each focusing on a specific area that contributes to the main themes of the DASFAA conference: the First International Workshop on Big Data Management and Analytics (BDMA 2013), the Third International Workshop on Social Networks and Social Media Mining (SNSM 2013), and the International Workshop on Semantic Computing and Personalization (SeCoP 2013).
As an advanced carrier of on-board sensors, a connected autonomous vehicle (CAV) can be viewed as an aggregation of self-adaptive systems with monitor-analyze-plan-execute (MAPE) loops for vehicle-related services. Meanwhile, machine learning (ML) has been applied to enhance the analyze and plan functions of MAPE so that self-adaptive systems adapt optimally to changing conditions. However, most ML-based approaches do not exploit CAVs' connectivity to collaboratively generate an optimal learner for MAPE, because sensor data are threatened by gradient leakage attacks (GLA). In this article, we first design an intelligent architecture for MAPE-based self-adaptive systems on Web 3.0-based CAVs, in which a collaborative machine learner supports the capabilities of managing systems. Then, we observe through practical experiments that importance-sampling variants of the sparse vector technique (SVT) cannot defend well against GLA. Next, we propose a fine-grained SVT approach that uses layer and gradient sampling to select uniform and important gradients, securing the learner in MAPE-based self-adaptive systems. Finally, extensive experiments show that our private learner incurs only a slight utility cost for MAPE (e.g., a \(0.77\%\) decrease in accuracy) while defending against GLA, and outperforms the typical SVT approaches in terms of defense (by \(10\%\sim 14\%\) in attack success rate) and utility (by \(1.29\%\) in accuracy loss).
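The layer-and-gradient sampling idea can be illustrated with a toy sketch (this is not the paper's algorithm: the selection rule, parameter names, and noise scale are all assumptions). A client samples a subset of layers uniformly, keeps only the largest-magnitude gradients within each sampled layer, and perturbs them before sharing:

```python
import numpy as np

rng = np.random.default_rng(0)

def fine_grained_select(grads, layer_frac=0.5, top_frac=0.1, noise_std=0.01):
    """Toy sketch: sample a fraction of layers uniformly, then within each
    sampled layer keep only the largest-magnitude ("important") gradients,
    releasing them with Gaussian noise added. All values are illustrative."""
    n_layers = len(grads)
    n_pick = max(1, int(layer_frac * n_layers))
    sampled = rng.choice(n_layers, size=n_pick, replace=False)  # uniform layer sampling
    released = {}
    for i in sampled:
        flat = grads[i].ravel()
        k = max(1, int(top_frac * flat.size))
        idx = np.argsort(np.abs(flat))[-k:]  # indices of the important gradients
        released[i] = (idx, flat[idx] + rng.normal(0.0, noise_std, k))
    return released

# Four fake layer gradients of 100 parameters each.
grads = [rng.normal(size=100) for _ in range(4)]
shared = fine_grained_select(grads)
```

Only the noisy top-magnitude entries of the sampled layers ever leave the vehicle, which is the intuition behind limiting what a gradient-leakage adversary can reconstruct.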
Within the realm of software engineering, specialized tasks on code, such as program repair, present unique challenges, necessitating fine-tuning large language models (LLMs) to unlock state-of-the-art performance. Fine-tuning approaches proposed in the literature for LLMs on program repair tasks generally overlook the need to reason about the logic behind code changes, beyond syntactic patterns in the data. High-performing fine-tuning experiments also usually come at very high computational costs. With MORepair, we propose a novel perspective on the learning focus of LLM fine-tuning for program repair: we not only adapt the LLM parameters to the syntactic nuances of the task of code transformation (objective ➊), but also specifically fine-tune the LLM with respect to the logical reason behind the code change in the training data (objective ➋). Such multi-objective fine-tuning instructs LLMs to generate high-quality patches. We apply MORepair to fine-tune four open-source LLMs with different sizes and architectures. Experimental results on function-level and repository-level repair benchmarks show that the implemented fine-tuning effectively boosts LLM repair performance by 11.4% to 56.0%. We further show that our fine-tuning strategy yields superior performance compared to state-of-the-art approaches, including standard fine-tuning, Fine-tune-CoT, and RepairLLaMA.
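The two-objective idea can be sketched abstractly (a toy illustration only: the plain weighted sum, the weight `lam`, and all names are assumptions, not MORepair's actual training setup). One loss term rewards reproducing the patch tokens, another rewards reproducing the rationale behind the change:

```python
import numpy as np

def token_nll(logits, targets):
    """Mean negative log-likelihood of target token ids under a row-wise softmax."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

def multi_objective_loss(patch_logits, patch_ids,
                         rationale_logits, rationale_ids, lam=0.5):
    """Objective 1: reproduce the code transformation (patch tokens).
    Objective 2: reproduce the logical rationale behind the change.
    The weighting `lam` is an illustrative assumption."""
    return (token_nll(patch_logits, patch_ids)
            + lam * token_nll(rationale_logits, rationale_ids))

# Toy logits for a 3-token patch and a 2-token rationale over a 5-token vocab.
patch_logits = np.eye(5)[[0, 1, 2]] * 10.0
rationale_logits = np.eye(5)[[3, 4]] * 10.0
loss = multi_objective_loss(patch_logits, [0, 1, 2], rationale_logits, [3, 4])
```

When the model already predicts both targets confidently, as in this toy input, both terms are near zero; gradients from the second term are what push the model to encode the reasoning, not just the edit.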
With the help of 5G networks, edge intelligence (EI) can not only provide distributed, low-latency, and highly reliable intelligent services, but also enable intelligent maintenance and management of smart cities. However, the constantly changing available computing resources of end devices and edge servers cannot continuously guarantee the performance of intelligent inference. In order to guarantee the sustainability of intelligent services in smart cities, we propose the Adaptive Model Selection and Partition Mechanism (AMSPM) for 5G smart cities where EI provides services, which mainly consists of Adaptive Model Selection (AMS) and Adaptive Model Partition (AMP). In AMSPM, the model selection and partition of a deep neural network (DNN) are formulated as an optimization problem. First, we propose a recursive algorithm named AMS that, based on the computing resources of edge devices, derives an appropriate DNN model satisfying the latency demand of intelligent services. Then, we adaptively partition the selected DNN model according to the computing resources of edge devices. The experimental results demonstrate that, compared with state-of-the-art model selection and partition mechanisms, AMSPM not only reduces latency but also enhances computing resource utilization.
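The joint selection-and-partition optimization can be illustrated with a brute-force toy version (an illustrative sketch under assumed cost models, not the recursive AMS/AMP algorithms): try each candidate DNN and each split point, estimate device, transfer, and edge latency, and keep the fastest plan within the budget:

```python
def select_and_partition(models, device_speed, edge_speed, link_rate, latency_budget):
    """Exhaustive toy version of model selection + partition. `models` maps a
    model name to a list of (layer_flops, activation_bytes) pairs; speeds are
    FLOP/s and link_rate is bytes/s. All numbers and names are illustrative."""
    best = None  # (model_name, split_index, estimated_latency)
    for name, layers in models.items():
        for cut in range(len(layers) + 1):  # layers [0, cut) run on the device
            t_device = sum(f for f, _ in layers[:cut]) / device_speed
            t_edge = sum(f for f, _ in layers[cut:]) / edge_speed
            # Transfer the activation at the split point (none at the extremes).
            t_link = layers[cut - 1][1] / link_rate if 0 < cut < len(layers) else 0.0
            total = t_device + t_edge + t_link
            if total <= latency_budget and (best is None or total < best[2]):
                best = (name, cut, total)
    return best

models = {"tiny": [(10.0, 4.0), (10.0, 4.0)], "big": [(100.0, 8.0)] * 3}
plan = select_and_partition(models, device_speed=10.0, edge_speed=100.0,
                            link_rate=2.0, latency_budget=1.0)
```

With these toy numbers the slow link makes any mid-model split expensive, so the plan keeps the small model entirely on the edge server, which mirrors how available resources drive both which model is chosen and where it is cut.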
Knowledge Graphs (KGs) often suffer from incompleteness, an issue that motivates the task of Knowledge Graph Completion (KGC). Traditional KGC models mainly concentrate on static KGs with a fixed set of entities and relations, or on dynamic KGs with temporal characteristics, faltering in their generalization to constantly evolving KGs with possibly irregular entity drift. Thus, in this paper, we propose a novel embedding-based link prediction model, termed DCEL, to handle the incompleteness of KGs with entity drift. Unlike traditional link prediction, DCEL can generate precise embeddings for a drifted entity without imposing any regular temporal characteristic. The drifted entity is added into the KG, with its links to existing entities predicted in an incremental fashion and with no requirement to retrain the whole KG, for computational efficiency. The DCEL model takes full advantage of unstructured textual descriptions and is composed of four modules, namely MRC (Machine Reading Comprehension), RCAA (Relation Constraint Attentive Aggregator), RSA (Relation Specific Alignment) and RCEO (Relation Constraint Embedding Optimization). Specifically, the MRC module is first employed to extract short texts from long and redundant descriptions. Then, RCAA aggregates the embeddings of the drifted entity's textual description and the pre-trained word embeddings learned from a corpus into a single text-based entity embedding, while shielding the impact of noise and irrelevant information. After that, RSA aligns the text-based entity embedding to the graph-based space to obtain the corresponding graph-based entity embedding, and the learned embeddings are fed into a gate structure optimized by RCEO to improve the accuracy of representation learning. Finally, the graph-based model TransE is used to perform link prediction for the drifted entity. Extensive experiments conducted on benchmark datasets in terms of evaluat...
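The final step relies on the standard TransE scoring function, which can be sketched minimally (the 2-D vectors and entity names below are toy values, not embeddings produced by DCEL's modules):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): negative L2 distance
    ||h + r - t||, so a higher score means a more plausible link."""
    return -np.linalg.norm(h + r - t)

def predict_tail(h, r, entity_embs):
    """Rank candidate tails for a (possibly drifted) head and return the best."""
    return max(entity_embs, key=lambda e: transe_score(h, r, entity_embs[e]))

# Toy 2-D embeddings; real ones would come from the RSA/RCEO pipeline.
h = np.array([0.0, 1.0])  # drifted head entity
r = np.array([1.0, 0.0])  # relation as a translation vector
entities = {"a": np.array([1.0, 1.0]), "b": np.array([5.0, 5.0])}
best = predict_tail(h, r, entities)  # h + r = [1, 1] matches entity "a"
```

Because scoring a drifted entity only needs its own embedding plus the existing entity and relation vectors, new links can be ranked incrementally without retraining the whole KG.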
ISBN:
(digital) 9783642293610
ISBN:
(print) 9783642293603
This book constitutes the refereed proceedings of the First International Conference on Health Information Science, held in Beijing, China, in April 2012. The 15 full papers presented together with 1 invited paper and 3 industry/panel statements in this volume were carefully reviewed and selected from 38 submissions. The papers cover all aspects of the health information sciences and the systems that support health information management and health service delivery. The scope includes 1) medical/health/biomedicine information resources, such as patient medical records, devices and equipment, and software and tools to capture, store, retrieve, process, analyze, and optimize the use of information in the health domain; 2) data management, data mining, and knowledge discovery in the health domain, all of which play a key role in decision making, management of public health, examination of standards, and privacy and security issues; and 3) the development of new architectures and applications for health information systems.