ISBN:
(Print) 9798400704918
Large Language Models (LLMs), such as ChatGPT and Google Bard, have performed remarkably well when assisting developers with computer programming tasks, a.k.a. coding, thus potentially enabling convenient and faster software construction. This new approach significantly enhances efficiency but also presents challenges: code is constructed without supervision and with limited security guarantees. LLMs excel at producing grammatically correct code, yet they are not specifically trained to guarantee the security of that code. In this paper, we provide an initial exploration into using formal software specifications as a starting point for software construction, allowing developers to translate descriptions of security-related behavior into natural-language instructions for LLMs, a.k.a. prompts. In addition, we leveraged automated verification tools to evaluate the produced code against the aforementioned specifications, following a modular, step-by-step software construction process. For our study, we leveraged Role-Based Access Control (RBAC), a mature security model, and the Java Modeling Language (JML), a behavioral specification language for Java. We tested our approach on different publicly available LLMs, namely OpenAI ChatGPT 4.0, Google Bard, and Microsoft Copilot. We provide a description of two applications (a security-sensitive banking application employing RBAC and an RBAC API module itself), the corresponding JML specifications, a description of the prompts, the generated code, and the verification results, as well as a series of interesting insights for practitioners interested in further exploring the use of LLMs for securely constructing applications.
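To make the approach concrete, the following is a minimal sketch (not taken from the paper's artifacts; the class, method, and field names are hypothetical) of the kind of RBAC permission check one might specify in JML and then ask an LLM to implement. The JML annotation in the `/*@ ... @*/` comment states the security-related behavior: the check succeeds exactly when some role assigned to the user carries the requested permission. A tool such as a JML verifier could then be used to check generated code against this contract.

```java
import java.util.Map;
import java.util.Set;

public class RbacChecker {
    // Maps each user to the set of roles assigned to that user.
    private final Map<String, Set<String>> roleAssignments;
    // Maps each role to the set of permissions that role carries.
    private final Map<String, Set<String>> rolePermissions;

    public RbacChecker(Map<String, Set<String>> roleAssignments,
                       Map<String, Set<String>> rolePermissions) {
        this.roleAssignments = roleAssignments;
        this.rolePermissions = rolePermissions;
    }

    /*@ requires user != null && permission != null;
      @ ensures \result <==> (\exists String r;
      @     roleAssignments.get(user) != null
      @         && roleAssignments.get(user).contains(r);
      @     rolePermissions.get(r) != null
      @         && rolePermissions.get(r).contains(permission));
      @*/
    public boolean isAuthorized(String user, String permission) {
        Set<String> roles = roleAssignments.get(user);
        if (roles == null) {
            return false; // unknown user: no roles, hence no permissions
        }
        for (String role : roles) {
            Set<String> perms = rolePermissions.get(role);
            if (perms != null && perms.contains(permission)) {
                return true; // some assigned role grants the permission
            }
        }
        return false;
    }
}
```

In a prompt-driven workflow like the one the abstract describes, the JML contract above (rather than the method body) would be the starting point: it is translated into a natural-language prompt, the LLM produces a candidate implementation, and the verifier checks the result against the contract.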