Topic detection is the task of determining and tracking hot topics in social media. Twitter is arguably the most popular platform for people to share their ideas with others about different issues. One such prevalent ...
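As a rough illustration of the task only (not this paper's method), the sketch below clusters a handful of tweets into candidate topics using TF-IDF features and k-means; the sample tweets, cluster count, and scikit-learn choice are assumptions made for the example.

```python
# Hypothetical sketch: group a small batch of tweets into candidate topics
# with TF-IDF features and k-means, then label each cluster by its top terms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "flood warnings issued across the city tonight",
    "city flood leaves thousands without power",
    "new phone launch event scheduled for friday",
    "hands-on with the new phone: camera impressions",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)                    # sparse TF-IDF matrix
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Report the highest-weight terms per cluster as a rough topic label.
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:3]
    print(f"topic {c}:", [terms[i] for i in top])
```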
In the current era of smart technology, integrating the Internet of Things (IoT) with Artificial Intelligence has revolutionized several fields, including public health and sanitation. The smart lavatory solution prop...
Vehicular Named Data Networks (VNDN) are a content-centric approach to vehicular networking. The fundamental principle of addressing the content rather than the host suits the vehicular environment. There are numerous challe...
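To make the content-addressing principle concrete, here is a minimal, hypothetical sketch of a vehicle's content store satisfying an Interest by content name rather than by host address; the naming scheme and classes are invented for illustration and are not taken from the cited work.

```python
# Illustrative sketch: a named-data content store keyed by content name,
# so any cached copy can satisfy an Interest regardless of its producer.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class DataPacket:
    name: str        # hierarchical content name, e.g. "/traffic/highway-a1/segment-3"
    payload: bytes

class ContentStore:
    """In-network cache addressed by name, not by producer host."""
    def __init__(self) -> None:
        self._store: Dict[str, DataPacket] = {}

    def insert(self, packet: DataPacket) -> None:
        self._store[packet.name] = packet

    def satisfy_interest(self, name: str) -> Optional[DataPacket]:
        # Any cached Data whose name matches the Interest can answer it.
        return self._store.get(name)

cs = ContentStore()
cs.insert(DataPacket("/traffic/highway-a1/segment-3", b"congestion: heavy"))
hit = cs.satisfy_interest("/traffic/highway-a1/segment-3")
print(hit.payload if hit else "cache miss: forward the Interest to neighbours")
```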
Software defect prediction (SDP) is considered a dynamic research problem and is beneficial during the testing stage of the software development life cycle. Several artificial intelligence-based methods are available to predict these software defects. However, detection accuracy is still low due to imbalanced datasets, poor feature learning, and the tuning of the model's parameters. This paper proposes a novel attention-included Deep Learning (DL) model for SDP with effective feature learning and dimensionality reduction mechanisms. The system comprises six phases: dataset balancing, source code parsing, word embedding, feature extraction, dimensionality reduction, and classification. First, dataset balancing is performed using the density peak based k-means clustering (DPKMC) algorithm, which prevents the model from producing biased outcomes. Then, the system parses the source code into abstract syntax trees (ASTs) that capture the structure of and relationships between the different elements of the code to enable type checking, and representative nodes of the ASTs are selected to form token vectors. Next, we use Bidirectional Encoder Representations from Transformers (BERT), which converts the token vectors into numerical vectors and extracts semantic features from the data. We then feed the embedded vectors into a multi-head attention incorporated bidirectional gated recurrent unit (MHBGRU) for contextual feature learning. After that, dimensionality reduction is performed using kernel principal component analysis (KPCA), which transforms the high-dimensional data into a lower-dimensional space and removes irrelevant features. Finally, the system uses a deep, fully connected network with a softmax layer for defect prediction, in which cross-entropy loss is utilized to minimize the prediction loss. Experiments on the National Aeronautics and Space Administration (NASA) and AEEEM datasets show that the system achieves better outcomes than existing state-of-the-art models ...
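The sketch below is a minimal PyTorch rendering of the contextual-feature stage described in the abstract: BERT-style token embeddings passed through a bidirectional GRU with multi-head self-attention, then a fully connected softmax classifier trained with cross-entropy loss. Layer sizes, head count, and the dummy inputs are illustrative assumptions, not values from the paper, and the DPKMC balancing and KPCA steps are omitted for brevity.

```python
# Hedged sketch of an MHBGRU-style block: BiGRU + multi-head attention + softmax head.
import torch
import torch.nn as nn

class MHBGRU(nn.Module):
    def __init__(self, embed_dim=768, hidden=128, heads=4, num_classes=2):
        super().__init__()
        self.bigru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, token_embeddings):          # (batch, seq_len, embed_dim)
        h, _ = self.bigru(token_embeddings)       # (batch, seq_len, 2*hidden)
        a, _ = self.attn(h, h, h)                 # self-attention over GRU states
        pooled = a.mean(dim=1)                    # average-pool the sequence
        return self.classifier(pooled)            # logits for softmax / CE loss

model = MHBGRU()
dummy = torch.randn(8, 50, 768)                   # e.g. BERT vectors for 50 tokens
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
print(logits.shape, loss.item())
```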
1 Introduction
Automatic bug assignment has been well studied in the past ***. Textual bug reports usually describe the buggy phenomena and potential causes, so engineers depend heavily on these reports to fix ***. Researchers spend much effort on processing bug reports, aiming to obtain key information and/or clues for reproducing bugs, analyzing their root causes, assigning them to developers/maintainers, and fixing these ***. Our previous research [1] reveals that noise in the text unexpectedly brings adverse impacts to automatic bug assignment, mainly due to the insufficiency of classical Natural Language Processing (NLP) techniques.
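As a purely hypothetical illustration of the kind of noise referred to here, the snippet below strips stack frames, exception names, file paths, and version strings from a made-up bug report before any downstream assignment step; the regular expressions and the report text are examples only, not the preprocessing used in [1].

```python
# Illustrative cleanup of a noisy bug report: prose is kept, while trace lines,
# exception names, paths, and version strings are dropped.
import re

report = """App crashes on startup.
java.lang.NullPointerException
    at com.example.Main.run(Main.java:42)
Seen on build 2.3.1-beta, log at /usr/local/app/bin/launcher.log"""

def strip_noise(text: str) -> str:
    text = re.sub(r"^\s*at\s+\S+\(.*\)\s*$", "", text, flags=re.MULTILINE)  # stack frames
    text = re.sub(r"\S*Exception\S*", "", text)                             # exception names
    text = re.sub(r"(/[\w.-]+)+", "", text)                                 # file paths
    text = re.sub(r"\b\d+(\.\d+)+\S*\b", "", text)                          # version strings
    return re.sub(r"\s+", " ", text).strip()

print(strip_noise(report))
```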
Fog computing is an emerging paradigm that provides services near the end-user. The tremendous increase in IoT devices and big data leads to complexity in fog resource allocation. Inefficient resource allocation can l...
Sequence-to-sequence models are fundamental building blocks for abstractive text summarization and can produce precise and coherent summaries. Recently, different text summarization models have been proposed that aim to ...
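For context, here is a minimal sketch of abstractive summarization with an encoder-decoder (sequence-to-sequence) model via the Hugging Face transformers pipeline; the checkpoint name, input text, and generation lengths are illustrative choices, not the setup of the cited work.

```python
# Hedged sketch: run an off-the-shelf seq2seq summarizer on a short passage.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Sequence-to-sequence models encode the source document into hidden states "
    "and decode a shorter sequence token by token, which lets them paraphrase "
    "rather than merely extract sentences from the input."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```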
The provision of rebates to needy/underprivileged sections of society has long been in practice in government organizations. The efficacy of such provisions lies in whether this rebate reaches peopl...
In the field of Human Activity Recognition (HAR), the precise identification of human activities from time-series sensor data is a complex yet vital task, given its extensive applications across various industries. Th...
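A common first step in HAR pipelines is to slice raw sensor streams into fixed windows and classify per-window features; the sketch below shows that step on synthetic tri-axial accelerometer data. The window length, feature set, classifier, and fake data are assumptions for illustration, not the approach of the cited work.

```python
# Illustrative HAR preprocessing: window the signal, compute per-window stats,
# and fit a simple classifier on the resulting feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(signal, labels, size=128, step=64):
    X, y = [], []
    for start in range(0, len(signal) - size + 1, step):
        seg = signal[start:start + size]               # (size, 3) x/y/z axes
        X.append(np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)]))
        y.append(labels[start + size // 2])            # label at window centre
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
signal = rng.normal(size=(2000, 3))                    # fake tri-axial accelerometer
labels = (np.arange(2000) // 500) % 2                  # fake activity labels
X, y = windows(signal, labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```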
Rigorous security requirements and domain experts are necessary for tuning firewalls and for detecting attacks. Those firewalls may create an incorrect sense or state of protection if they are improp...