We demonstrate wavelength-division-multiplexed data transmission and dispersion compensation of 25 Gb/s × 9 on-off-keying signals over a 20-km single-mode fiber using an integrated single-soliton microcomb and a c...
With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images. Existing works on fake image attribution perform multi-class classification on several Generative Adver...
Metrics have emerged as an important tool for quantitatively evaluating researchers from a variety of perspectives, including research impact, research quality, interdisciplinarity, and more. In the field of library and information science, many previous studies have highlighted the characteristics of researchers. However, only a minority of these studies address the aspect of diversity in research topics. The purpose of this study is to (1) evaluate the topic diversity of researchers in library and information science and (2) examine the relationships between researcher topic diversity and research impact. We propose an indicator to quantify author topic diversity, which we refer to as author topic diversity (ATD). Latent Dirichlet Allocation (LDA) is used to detect topics in the field, while cosine similarity is used to calculate the diversity of research topics in a given researcher's publications. The results show that topic diversity in the field of library and information science varies greatly from author to author. In addition, weak positive correlations are found between the ATD and citation indicators, suggesting that engaging in diversified topics may lead to higher research impact.
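The abstract names the ingredients of ATD (per-paper LDA topic distributions and cosine similarity) but not its exact formula. A minimal sketch of one plausible instantiation, assuming diversity is taken as one minus the mean pairwise cosine similarity of an author's topic distributions, could look like:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def author_topic_diversity(doc_topic_dists):
    """Hypothetical ATD: 1 minus the mean pairwise cosine similarity
    of an author's per-paper LDA topic distributions. Authors whose
    papers share one topic profile score near 0; authors spread over
    disjoint topics score near 1."""
    n = len(doc_topic_dists)
    if n < 2:
        return 0.0  # diversity is undefined for a single paper
    sims = [cosine_sim(doc_topic_dists[i], doc_topic_dists[j])
            for i in range(n) for j in range(i + 1, n)]
    return 1.0 - sum(sims) / len(sims)
```

Here `author_topic_diversity` and the 1-minus-mean-similarity form are assumptions for illustration; the paper's actual definition may aggregate differently.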
Direct Preference Optimization (DPO) has proven effective in complex reasoning tasks like math word problems and code generation. However, when applied to Text-to-SQL datasets, it often fails to improve performance an...
AME4163: Principles of Engineering Design is a design, build, and test course offered at the University of Oklahoma, Norman, USA. Throughout the semester, students are expected to reflect on authentic and immersive expe...
ISBN: 9798350330663 (digital); 9798350330670 (print)
Vulnerabilities are disclosed with corresponding patches so that users can remediate them in time. However, there are instances where patches are not released with the disclosed vulnerabilities, causing hidden dangers, especially if dependent software remains uninformed about the affected code repository. Hence, it is crucial to automatically locate security patches for disclosed vulnerabilities among a multitude of commits. Despite the promising performance of existing learning-based localization approaches, they still suffer from the following limitations: (1) They cannot perform well in data-scarcity scenarios. Most neural models require extensive datasets to capture the semantic correlations between the vulnerability description and code commits, while the number of disclosed vulnerabilities with patches is limited. (2) They struggle to capture the deep semantic correlations between the vulnerability description and code commits, because code changes and commit messages differ inherently in semantics and form, making it difficult for a single model to capture both correlations. To mitigate these two limitations, we propose a novel security patch localization approach named Prom VPat, which utilizes a dual prompt tuning channel to capture the semantic correlation between vulnerability descriptions and commits, especially in data-scarcity (i.e., few-shot) scenarios. We first input the commit message and code changes, together with the vulnerability description, into the prompt generator to produce two new inputs with prompt templates. Then, we adopt a pre-trained language model (PLM) as the encoder, fine-tune it with prompt tuning, and generate two correlation probabilities as semantic features. In addition, we extract 26 handcrafted features from the vulnerability descriptions and the code commits. Finally, we utilize the attention mechanism to fuse the semantic and handcrafted features for patch localization.
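The abstract describes a prompt generator that pairs the vulnerability description with the commit message and with the code changes to form two templated inputs. The paper's actual template wording is not given; a minimal sketch, assuming cloze-style templates with a `[MASK]` slot for the PLM to fill, could look like:

```python
def build_prompt_inputs(vuln_desc, commit_msg, code_diff):
    """Hypothetical dual-channel prompt templates: one channel pairs the
    vulnerability description with the commit message, the other with the
    code changes. Each template asks the encoder to judge relevance via a
    [MASK] slot, as in standard prompt tuning."""
    msg_prompt = (
        f"Vulnerability: {vuln_desc} "
        f"Commit message: {commit_msg} "
        f"The commit is [MASK] related to the vulnerability."
    )
    code_prompt = (
        f"Vulnerability: {vuln_desc} "
        f"Code changes: {code_diff} "
        f"The changes are [MASK] related to the vulnerability."
    )
    return msg_prompt, code_prompt
```

The template text, slot placement, and channel naming here are assumptions for illustration; in the described pipeline, each prompt would be encoded by the fine-tuned PLM to yield one correlation probability per channel.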
When the existing digital watermarking algorithm is applied to the medical image, the watermark embedded in the Region of Interest (ROI) will change the properties of the original medical image, causing misdiagnosis. ...
In environmental monitoring architectures, data processing is usually centralized. On the one hand, this leads to long data processing times, and with the increasing number of monitoring sensors, the pressure...
Many few-shot learning approaches have been designed under the meta-learning framework, which learns from a variety of learning tasks and generalizes to new tasks. These meta-learning approaches achieve the expected performance in the scenario where all samples are drawn from the same distribution (i.i.d. observations). However, in real-world applications, the few-shot learning paradigm often suffers from data shift, i.e., samples in different tasks, or even in the same task, could be drawn from various data distributions. Most existing few-shot learning approaches are not designed with data shift in mind and thus show degraded performance when the data distribution shifts. However, it is non-trivial to address the data shift problem in few-shot learning due to the limited number of labeled samples in each task. To address this problem, we propose a novel metric-based meta-learning framework that extracts task-specific representations and task-shared representations with the help of a knowledge graph. The data shift within and between tasks can thus be combated by the combination of task-shared and task-specific representations. The proposed model is evaluated on popular benchmarks and two newly constructed challenging datasets, and the evaluation results demonstrate its remarkable performance.
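The abstract combines task-shared and task-specific representations inside a metric-based classifier, but does not specify the fusion operator. A minimal sketch, assuming a weighted-sum fusion (the `alpha` weight is hypothetical) feeding a nearest-prototype classifier, could look like:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combine(shared, specific, alpha=0.5):
    """Hypothetical fusion of task-shared and task-specific embeddings
    as a weighted sum; the paper's actual combination operator and the
    role of the knowledge graph are not specified in the abstract."""
    return [alpha * s + (1 - alpha) * t for s, t in zip(shared, specific)]

def classify(query, prototypes):
    """Metric-based few-shot classification: assign the query embedding
    to the class whose support prototype (mean embedding) is nearest."""
    return min(prototypes, key=lambda c: euclidean(query, prototypes[c]))
```

In this sketch the prototypes would themselves be computed from fused support-set embeddings, so that both the shared and the task-adapted components contribute to the distance metric.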