Details
ISBN (print): 9781450380379
We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code that can be applied to code retrieval tasks without labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we show that code models pre-trained by Corder substantially outperform the other baselines on code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
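The abstract does not specify the exact loss, but the contrastive objective it describes resembles the NT-Xent loss used in SimCLR-style frameworks. The sketch below (a minimal PyTorch illustration; all function and variable names are hypothetical, not from the paper) treats the two transformed variants of each code snippet as a positive pair and all other snippets in the batch as negatives.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two semantically equivalent
    variants of the same code snippet (produced, e.g., by
    semantic-preserving transformations); every other snippet in
    the batch serves as a negative.
    """
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    z = F.normalize(z, dim=1)                 # cosine similarity space
    sim = z @ z.t() / temperature             # (2N, 2N) similarity logits
    # Mask self-similarity so a sample is never its own positive.
    sim.fill_diagonal_(float('-inf'))
    # For row i, the positive sits at index i + N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

In use, `z1` and `z2` would come from encoding the two transformed views of each snippet with the same code encoder; minimizing this loss pulls equivalent variants together and pushes distinct snippets apart, which is the training signal the paper describes.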