Designing neural network architectures for embedded devices is practical but challenging because the models are expected to be not only accurate but also sufficiently lightweight and robust. However, it is challenging to bal...
The handover process plays a crucial role in a Wireless Local Area Network (WLAN) with multiple Access Points (APs). To meet Quality of Service (QoS) or Quality of Experience (QoE) requirements for end-users, especially d...
ISBN (digital): 9798350353884
ISBN (print): 9798350353891
The integration of Large Language Models (LLMs) into software development tools like GitHub Copilot holds the promise of transforming code generation processes. While AI-driven code generation presents numerous advantages for software development, code generated by large language models may introduce challenges related to security, privacy, and copyright. However, the performance implications of AI-generated code remain insufficiently explored. This study conducts an empirical analysis of the performance regressions of code generated by GitHub Copilot across three distinct datasets: HumanEval, AixBench, and MBPP. We adopt a comprehensive methodology encompassing static and dynamic performance analyses to assess the effectiveness of the generated code. Our findings reveal that although the generated code is functionally correct, it frequently exhibits performance regressions compared to code solutions crafted by humans. We further investigate the code-level root causes of these performance regressions and identify four major categories: inefficient function calls, inefficient looping, inefficient algorithms, and inefficient use of language features, together with a total of ten sub-categories. Additionally, we explore prompt engineering as a potential strategy for optimizing performance. The outcomes suggest that meticulous prompt design can enhance the performance of AI-generated code. This research offers valuable insights contributing to a more comprehensive understanding of AI-assisted code generation.
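To make the root-cause categories concrete, the following hypothetical micro-benchmark is in the spirit of the study's dynamic analysis: both functions below are functionally correct, but the first exhibits the "inefficient looping" pattern (a linear membership test repeated inside a loop). The function names and workload are illustrative, not taken from the paper's datasets.

    # Illustrative sketch only -- not code from the study.
    import timeit

    def count_common_list(items, allowed):
        # Inefficient looping: each membership test scans the whole list,
        # so the loop costs O(len(items) * len(allowed)).
        return sum(1 for x in items if x in allowed)

    def count_common_set(items, allowed):
        # Same result; building a set once makes each test O(1) on average.
        allowed_set = set(allowed)
        return sum(1 for x in items if x in allowed_set)

    if __name__ == "__main__":
        items = list(range(2_000))
        allowed = list(range(1_000))
        t_list = timeit.timeit(lambda: count_common_list(items, allowed), number=20)
        t_set = timeit.timeit(lambda: count_common_set(items, allowed), number=20)
        print(f"list membership: {t_list:.3f}s  set membership: {t_set:.3f}s")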
This paper proposes an energy function-based direct method for large-signal stability assessment of grid-forming (GFM) inverters leveraging an equivalent-circuit representation of all involved control- and physical-la...
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, and a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) provides 360-degree full-spac...
As a component of asset management strategies, maintenance management is essential for improving the reliability of electrical Distribution Networks (EDNs). To increase system efficiency, electricity companies inv...
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data. Because they are based on matrix multiplications, convolutions incur high computational costs, leading to scalability limitations in practice. To overcome these limitations, proposed methods rely on training GNNs on graphs with a smaller number of nodes and then transferring the GNN to larger graphs. Even though these methods are able to bound the difference between the outputs of the GNN on graphs with different numbers of nodes, they do not provide guarantees against the optimal GNN on the very large graph. In this paper, we propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon. We propose to grow the size of the graph as we train, and we show that our proposed methodology – learning by transference – converges to a neighborhood of a first-order stationary point on the graphon data. A numerical experiment validates our proposed approach.
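As a rough illustration of the growing-graph training loop, the following numpy sketch trains a single-layer graph filter (a stand-in for a full GNN) on graphs sampled from an assumed exponential graphon, doubling the number of nodes between stages. The graphon, filter, and hyperparameters are illustrative assumptions, not the paper's exact construction.

    # Minimal sketch of learning by transference: grow the sampled graph as
    # training proceeds. All modeling choices here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_graphon_graph(n, w=lambda u, v: 0.8 * np.exp(-2.0 * abs(u - v))):
        # Sample an n-node graph whose edge probabilities follow graphon w.
        u = rng.uniform(size=n)
        p = w(u[:, None], u[None, :])
        a = (rng.uniform(size=(n, n)) < p).astype(float)
        a = np.triu(a, 1)
        a = a + a.T                      # symmetric adjacency, no self-loops
        return a / n                     # scale so A approximates the graphon operator

    def train_stage(h, a, x, y, lr=0.05, steps=200):
        # Fit filter taps h in yhat = sum_k h[k] * A^k x by gradient descent.
        powers = [x]
        for _ in range(len(h) - 1):
            powers.append(a @ powers[-1])
        phi = np.stack(powers, axis=1)   # n x K design matrix
        for _ in range(steps):
            h = h - lr * phi.T @ (phi @ h - y) / len(y)
        return h

    h = np.zeros(3)                      # K = 3 filter taps, shared across stages
    for n in [50, 100, 200, 400]:        # grow the graph as we train
        a = sample_graphon_graph(n)
        x = rng.normal(size=n)
        y = a @ x + 0.5 * (a @ (a @ x))  # synthetic target from a fixed filter
        h = train_stage(h, a, x, y)
        print(n, np.round(h, 3))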
Motivated by the design question of how much additional fuel is needed to complete a task in an uncertain environment, this paper introduces metrics to quantify the maximal additional energy used by a control system in the presen...
Learning operators between infinite-dimensional spaces is an important task arising in machine learning, imaging science, mathematical modeling and simulation, and related fields. This paper studies the nonparametric estimation of Lipschitz operators using deep neural networks. Non-asymptotic upper bounds are derived for the generalization error of the empirical risk minimizer over a properly chosen network class. Under the assumption that the target operator exhibits a low-dimensional structure, our error bounds decay as the training sample size increases, with an attractive fast rate depending on the intrinsic dimension. Our assumptions cover most scenarios in real applications, and our results give rise to fast rates by exploiting low-dimensional structures of data in operator estimation. We also investigate the influence of network structures (e.g., network width, depth, and sparsity) on the generalization error of the neural network estimator and offer quantitative guidance on choosing network structures to maximize learning efficiency.
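For intuition, the sketch below sets up the empirical risk minimization described above on a toy problem: learning the antiderivative operator (Lipschitz from L2[0,1] to L2[0,1]) from discretized function samples with a small fully connected network. The operator, discretization, and architecture are illustrative assumptions rather than the paper's construction.

    # Toy empirical-risk-minimization sketch; all choices are illustrative.
    import torch

    torch.manual_seed(0)
    m = 64                                    # grid points per function
    t = torch.linspace(0.0, 1.0, m)

    def sample_inputs(n):
        # Random inputs spanned by 4 sine modes: a low-dimensional structure
        # of the kind the error bounds above exploit.
        coeffs = torch.randn(n, 4)
        basis = torch.stack([torch.sin((k + 1) * torch.pi * t) for k in range(4)])
        return coeffs @ basis                 # n x m

    def apply_operator(f):
        # Target operator: cumulative integral (antiderivative) on [0, 1].
        return torch.cumsum(f, dim=1) / m

    x_train, x_test = sample_inputs(512), sample_inputs(128)
    y_train, y_test = apply_operator(x_train), apply_operator(x_test)

    net = torch.nn.Sequential(                # the chosen network class
        torch.nn.Linear(m, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, m),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):                  # empirical risk minimization
        opt.zero_grad()
        loss = torch.mean((net(x_train) - y_train) ** 2)
        loss.backward()
        opt.step()

    with torch.no_grad():                     # held-out generalization estimate
        test_mse = torch.mean((net(x_test) - y_test) ** 2)
    print(f"train MSE {loss.item():.2e}  test MSE {test_mse.item():.2e}")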
Neuromorphic engineering aims to advance computing by mimicking the brain’s efficient processing, where data is encoded as asynchronous temporal events. This eliminates the need for a synchronisation clock and minimi...