People with hearing loss or who are hard of hearing struggle with daily life activities, as sign language is not widely known by the public. There have been many attempts to use technology to assist individuals with hearing loss. Howev...
ISBN: (Print) 9798350330663
Code summarization aims to generate natural language descriptions of source code, helping programmers understand and maintain it rapidly. While previous code summarization efforts have predominantly focused on the method level, this paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects. Unlike method-level code summarization, file-level code summarization typically involves long source code within a single file, which makes it challenging for Transformer-based models to understand the code semantics: because computational complexity scales quadratically with input sequence length, the maximum input length of these models is difficult to set large enough to handle long code input well. To address this challenge, we propose SparseCoder, an identifier-aware sparse Transformer for effectively handling long code sequences. Specifically, SparseCoder employs a sliding-window mechanism for self-attention to model short-term dependencies and leverages the structural information of code to capture long-term dependencies among source code identifiers by introducing two types of sparse attention patterns, named global attention and identifier attention. To evaluate the performance of SparseCoder, we construct a new dataset, FILE-CS, for file-level code summarization in Python. Experimental results show that our SparseCoder model achieves state-of-the-art performance compared with other pre-trained models, including full self-attention and sparse models. Additionally, our model has low memory overhead and achieves performance comparable to models using the full self-attention mechanism. Furthermore, we verify the generality of SparseCoder on other code understanding tasks, i.e., code clone detection and code search, and results show that our model outperforms baseline models in both tasks, demonstrating that our model can generate better code representations for various downstream tasks. Our
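As a rough illustration of the sparse attention patterns this abstract describes, the sketch below builds a boolean attention mask that combines sliding-window, global, and identifier attention. It is a minimal sketch, not the authors' implementation: the window size, the choice of a single global token, and the `identifier_positions` argument are assumptions made for the example.

```python
import numpy as np

def sparse_attention_mask(seq_len, window=4, global_positions=(0,), identifier_positions=()):
    """Boolean mask where True marks an allowed query->key attention pair.

    Combines three patterns, loosely following the abstract's description:
    - sliding-window attention for short-term (local) dependencies,
    - global attention for a few designated tokens,
    - identifier attention linking identifier tokens to each other.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Sliding window: each token attends to neighbours within +/- window.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # Global attention: chosen tokens attend to, and are attended by, every token.
    for g in global_positions:
        mask[g, :] = True
        mask[:, g] = True

    # Identifier attention: identifier tokens attend to one another, capturing
    # long-range dependencies among repeated identifiers in the file.
    ids = list(identifier_positions)
    for i in ids:
        for j in ids:
            mask[i, j] = True

    return mask

# Example: a 12-token sequence where tokens 3 and 9 are occurrences of the same
# identifier and token 0 acts as a global [CLS]-like token (both hypothetical).
print(sparse_attention_mask(12, window=2, identifier_positions=(3, 9)).astype(int))
```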
ISBN: (Print) 9798350319200
This study discusses the development of a web application aimed at facilitating Sinhala document creation, with a specific emphasis on Sinhala voice-to-text conversion and the handling of Sinhala commands through vocal input. Leveraging machine learning techniques, including convolutional neural networks and Natural Language Processing, the application's core features were established. Extensive research was conducted to tailor the application's content to the needs of its primary users, ensuring maximum effectiveness. The user-friendly interfaces of the web application are designed for clarity, simplicity, and consistency. The primary objective of this research is to comprehensively analyze the implementation of Sinhala voice-to-text conversion and Sinhala command handling systems. These systems are primarily designed to benefit diverse users, including journalists, content writers, and differently abled individuals with verbal abilities, by enhancing the efficiency of creating Sinhala documents. Through a detailed exploration of the research methodology, this study offers insight into the development process of the web-based system. The outcomes of the linguistic model training, presented within the study, reveal achievements and advancements that address the limitations inherent in existing solutions. Key findings from this research demonstrate the successful functionality of the Sinhala voice-to-text converter and the efficacy of the Sinhala command handler. The voice-to-text conversion system achieved an impressive accuracy rate of over 80%, while the Sinhala command handler exhibited an accuracy of approximately 80%. Moreover, this research envisions potential applications that extend beyond document creation. The technology showcased in the web application holds promise for broader language-based applications, impacting education, accessibility, and communication across the native Sinhala-speaking community in Sri Lanka. In summary, this research showcases th
Edge computing (EC) serves as an effective technology, empowering end-users to attain high bandwidth and low latency by offloading tasks with high computational demands from mobile devices to edge servers. However, a ...
Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed closer to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet the changing, time-varying, and customized needs for highly diversified content of users. Consequently, techniques for caching content at the edge are becoming popular for addressing the above challenge. Caching is capable of filling the communication gap between users and content providers while relieving pressure on remote cloud centers. However, existing static caching strategies are still inefficient in handling the dynamics of the time-varying popularity of content and meeting users' demands for highly diversified entity content. To address this challenge, we introduce a novel method for content caching over MEC, i.e., PRIME. PRIME synthesizes a content popularity prediction model, which takes users' stay time and their request traces as inputs, and a deep reinforcement learning model for yielding dynamic caching decisions. Experimental results demonstrate that PRIME, when tested upon the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in terms of cache hit rates, transmission latency, and system cost.
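To make the caching decision interface concrete, the sketch below pairs a toy popularity predictor (recency-decayed request counts weighted by stay time) with a greedy cache-update rule. It only illustrates the decision problem the abstract describes: the paper's actual method uses a learned popularity-prediction model and a deep reinforcement learning agent rather than these hand-written stand-ins, and all names, traces, and parameters here are hypothetical.

```python
from collections import Counter

def predict_popularity(request_trace, stay_time, decay=0.9):
    """Toy popularity score: recency-decayed request counts weighted by user stay time.
    A hand-written stand-in for the learned popularity-prediction model."""
    scores = Counter()
    for step, (user, content) in enumerate(request_trace):
        weight = decay ** (len(request_trace) - 1 - step)  # newer requests weigh more
        scores[content] += weight * stay_time.get(user, 1.0)
    return scores

def update_cache(scores, capacity):
    """Greedy stand-in for the DRL caching policy: keep the top-k predicted items."""
    return {content for content, _ in scores.most_common(capacity)}

# Hypothetical trace of (user, content_id) requests in arrival order.
trace = [("u1", "v1"), ("u2", "v2"), ("u1", "v1"), ("u3", "v3"), ("u2", "v1")]
stay = {"u1": 2.0, "u2": 1.0, "u3": 0.5}
cache = update_cache(predict_popularity(trace, stay), capacity=2)
hits = sum(1 for _, content in trace if content in cache)
print(cache, f"hit rate = {hits / len(trace):.2f}")
```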
A neuron with binary inputs and a binary output represents a Boolean function. Our goal is to extract this Boolean function into a tractable representation that will facilitate the explanation and formal verification ...
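Assuming the neuron in this abstract is a standard threshold unit (an assumption, since the preview is truncated), a minimal starting point for such an extraction is to enumerate the neuron's truth table over all binary inputs. The weights and bias below are hypothetical; a real extraction would target a more tractable representation than an exponential table.

```python
from itertools import product

def neuron_truth_table(weights, bias):
    """Enumerate the Boolean function computed by a binary-input threshold neuron.

    The neuron outputs 1 iff the weighted sum plus bias is >= 0. Enumerating all
    2^n binary inputs yields an explicit truth-table representation of that function.
    """
    table = {}
    for x in product((0, 1), repeat=len(weights)):
        table[x] = int(sum(w * xi for w, xi in zip(weights, x)) + bias >= 0)
    return table

# Hypothetical 3-input neuron; with these weights it computes x1 AND (x2 OR x3).
for inputs, out in neuron_truth_table(weights=(2, 1, 1), bias=-3).items():
    print(inputs, "->", out)
```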
In this paper, we investigate the relationship between the use of discourse relations and the CEFR-level of argumentative English learner essays. Using both the Rhetorical Structure Theory (RST) and the Penn Discourse...
Indoor mmWave-based sensing technologies have garnered substantial interest from both industry and academia. Yet, the intrinsic challenge posed by the limited Field-of-View (FOV) of mmWave radars significantly r...
The paper proposes a cryptographic protocol for two-factor authentication with zero-knowledge over the extended field GF(2^m) on elliptic curves, using the user's biometric data and private key. The implementation of a...
Northern Bangladesh is home to most of the country's lychee cultivation, which has a major economic impact. Lychee output and quality are reduced by various leaf and fruit diseases. Deep learning is used to construct a disease detecti...