Software security poses substantial risks to our society because software has become an integral part of our lives. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two critical methods, which benefit significantly from advances in deep learning technologies. Motivated by the successful use of deep learning in software security, researchers have recently explored the potential of applying large language models (LLMs) in this area. In this paper, we systematically review the work on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, covering both the existing uses of LLMs and extensions from conventional deep learning research.
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. With this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.
Data-driven garment animation is a current topic of interest in the computer graphics community. Existing approaches generally establish the mapping between a single human pose, or a temporal pose sequence, and garment deformation, but it is difficult to quickly generate diverse clothed human animations. We address this problem with a method to automatically synthesize dressed human animations with temporal consistency from a specified human motion label. At the heart of our method is a two-stage strategy. Specifically, we first learn a latent space encoding the sequence-level distribution of human motions utilizing a transformer-based conditional variational autoencoder (Transformer-CVAE). Then a garment simulator synthesizes dynamic garment shapes using a transformer encoder-decoder architecture. Since the learned latent space comes from varied human motions, our method can generate a variety of styles of motion given a specific motion label. By means of a novel beginning-of-sequence (BOS) learning strategy and a self-supervised refinement procedure, our garment simulator is capable of efficiently synthesizing garment deformation sequences corresponding to the generated human motions while maintaining temporal and spatial consistency. Experiments verify our ideas. Ours is the first generative model that directly dresses human animation.
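As a rough illustration of the two-stage idea in the abstract above, the sketch below encodes a motion sequence into a sequence-level latent space with a transformer-based conditional VAE; the class name, dimensions, layer counts, and conditioning scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a transformer-based conditional VAE over motion sequences.
# All sizes and the conditioning scheme are assumptions for illustration.
import torch
import torch.nn as nn

class TransformerCVAE(nn.Module):
    def __init__(self, pose_dim=63, label_dim=10, d_model=256, latent_dim=64):
        super().__init__()
        self.pose_embed = nn.Linear(pose_dim, d_model)
        self.label_embed = nn.Linear(label_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.from_latent = nn.Linear(latent_dim, d_model)
        self.to_pose = nn.Linear(d_model, pose_dim)

    def encode(self, motion, label):
        # motion: (B, T, pose_dim); label: (B, label_dim) one-hot motion category
        tokens = self.pose_embed(motion) + self.label_embed(label).unsqueeze(1)
        h = self.encoder(tokens).mean(dim=1)          # sequence-level summary
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, label, T):
        # Broadcast the latent code and the label over T query tokens.
        memory = (self.from_latent(z) + self.label_embed(label)).unsqueeze(1)
        queries = memory.expand(-1, T, -1)
        return self.to_pose(self.decoder(queries, memory))

    def forward(self, motion, label):
        mu, logvar = self.encode(motion, label)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decode(z, label, motion.shape[1]), mu, logvar

# Smoke test with random data: a 30-frame motion clip of one sample.
net = TransformerCVAE()
recon, mu, logvar = net(torch.randn(1, 30, 63), torch.eye(10)[:1])
print(recon.shape, mu.shape)   # (1, 30, 63) (1, 64)
```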
Optical image-based ship detection can ensure the safety of ships and promote the orderly management of ships in offshore waters. Existing deep learning research on optical image-based ship detection mainly focuses on improving one-stage detectors for real-time ship detection, but sacrifices detection accuracy. To solve this problem, we present a hybrid ship detection framework named EfficientShip in this paper. The core parts of EfficientShip are DLA-backboned object location (DBOL) and CascadeRCNN-guided object classification (CROC). The DBOL is responsible for finding potential ship objects, and the CROC is used to categorize the potential ship objects. We also design a pixel-spatial-level data augmentation (PSDA) to reduce the risk of detection model overfitting. We compare the proposed EfficientShip with state-of-the-art (SOTA) approaches on a ship detection dataset. Results show our ship detection framework achieves 99.63% mAP at 45 fps, which is much better than 8 SOTA approaches in detection accuracy and also meets the requirements of real-time application scenarios.
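As a hedged illustration of the detect-then-classify structure described above (a locator proposes candidate ship regions, a classifier labels each crop), the sketch below uses stock torchvision models as stand-ins; the model choices, threshold, and crop handling are assumptions, not the DBOL/CROC implementation.

```python
# Two-stage pipeline sketch: stage 1 locates candidate objects, stage 2
# classifies each cropped candidate. Stand-in models only.
import torch
import torchvision

locator = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights="DEFAULT").eval()

def detect_ships(image, score_thresh=0.5):
    """image: float tensor (3, H, W) scaled to [0, 1]."""
    results = []
    with torch.no_grad():
        proposals = locator([image])[0]                # stage 1: locate candidates
        for box, score in zip(proposals["boxes"], proposals["scores"]):
            if score < score_thresh:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            if x2 <= x1 or y2 <= y1:
                continue                               # skip degenerate boxes
            crop = image[:, y1:y2, x1:x2].unsqueeze(0)
            crop = torch.nn.functional.interpolate(crop, size=(224, 224))
            logits = classifier(crop)                  # stage 2: classify the crop
            results.append((box.tolist(), logits.argmax(1).item(), score.item()))
    return results

print(detect_ships(torch.rand(3, 480, 640)))
```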
This paper considers link scheduling in a wireless network comprising two types of nodes: (i) hybrid access points (HAPs) that harvest solar energy, and (ii) devices that harvest radio frequency (RF) energy whenever HAPs transmit. Our aim is to derive the shortest possible link schedule that determines the transmission time of inter-HAP links, and uplinks from devices to HAPs. We first outline a mixed integer linear program (MILP), which can be run by a central node to determine the optimal schedule and transmit power of HAPs and devices. We then outline a game theory based protocol called Distributed Schedule Minimization Protocol (DSMP) that is run by HAPs and devices. Advantageously, it does not require causal energy arrivals and channel gains information. Simulation results show that DSMP produces schedule lengths that are at most 1.99x longer than the schedule computed by the MILP.
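To make the optimization idea above concrete, here is a much-simplified linear program in the spirit of the MILP (a single shared channel, per-link demands, and an energy budget on a HAP's transmissions), written with PuLP; the link set, rates, demands, and energy figures are invented for illustration and are not the paper's formulation.

```python
# Simplified schedule-length minimization, illustrative data only.
import pulp

links = ["d1->hap1", "d2->hap1", "hap1->hap2"]
rate = {"d1->hap1": 2.0, "d2->hap1": 1.5, "hap1->hap2": 4.0}      # Mbit/s
demand = {"d1->hap1": 6.0, "d2->hap1": 3.0, "hap1->hap2": 8.0}    # Mbit
tx_power = {"hap1->hap2": 1.0}                                     # W
energy_budget = {"hap1": 5.0}                                      # J harvested
hap_links = {"hap1": ["hap1->hap2"]}                               # links a HAP transmits on

prob = pulp.LpProblem("min_schedule_length", pulp.LpMinimize)
t = {l: pulp.LpVariable(f"t_{i}", lowBound=0) for i, l in enumerate(links)}
length = pulp.LpVariable("schedule_length", lowBound=0)

prob += length                                    # objective: shortest schedule
for l in links:
    prob += rate[l] * t[l] >= demand[l]           # each link delivers its demand
prob += pulp.lpSum(t.values()) <= length          # single channel: links run sequentially
for hap, ls in hap_links.items():
    prob += pulp.lpSum(tx_power[l] * t[l] for l in ls) <= energy_budget[hap]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("schedule length:", pulp.value(length))
for l in links:
    print(l, "airtime:", t[l].value())
```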
Wireless ad hoc networks consist of devices that are wirelessly connected. Mobile Ad Hoc Networks (MANETs), the Internet of Things (IoT), and Vehicular Ad Hoc Networks (VANETs) are the main domains of wireless ad hoc networks. The Internet is used in wireless ad hoc networks; it is based on the Transmission Control Protocol (TCP)/Internet Protocol (IP) architecture, where clients and servers interact with each other with the help of IP in a pre-defined manner and data is fetched from a fixed location. Data redundancy, mobility, and location dependency are the main issues of the IP network architecture, and these factors result in poor performance of wireless ad hoc networks. The main disadvantage of IP is that it does not provide in-network caching. Therefore, there is a need to move towards a new network architecture that overcomes these issues. Named Data Networking (NDN) is such an architecture and is a branch of the Information-Centric Networking (ICN) paradigm. NDN provides in-network caching, which helps in responding quickly to user requests. Implementing NDN in wireless ad hoc networks provides many benefits such as caching, mobility, scalability, security, and privacy. Considering the above, in this survey paper we present a comprehensive survey on caching strategies in NDN-based wireless networks and compare the results of the surveyed caching mechanisms. Finally, we shed light on the challenges and future directions of this promising field to provide a clear understanding of what caching-related problems exist in NDN-based wireless ad hoc networks.
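To make the in-network caching idea concrete, here is a tiny sketch of a node's content store answering Interests by name before forwarding upstream; the LRU policy and the fetch_upstream stub are illustrative assumptions rather than a specific strategy from the surveyed literature.

```python
# Toy NDN-style content store with LRU replacement; illustration only.
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.store = OrderedDict()          # name -> data, ordered by recency

    def insert(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

    def lookup(self, name):
        if name in self.store:
            self.store.move_to_end(name)
            return self.store[name]
        return None

def on_interest(name, cs, fetch_upstream):
    """Satisfy an Interest from the local content store, else forward it."""
    data = cs.lookup(name)
    if data is None:
        data = fetch_upstream(name)         # e.g. forward via the FIB
        cs.insert(name, data)               # cache on the return path
    return data

cs = ContentStore(capacity=2)
print(on_interest("/video/seg1", cs, lambda n: f"payload({n})"))
print(on_interest("/video/seg1", cs, lambda n: f"payload({n})"))  # served from cache
```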
Tensors are a popular programming interface for developing artificial intelligence (AI) applications. Layout refers to the order of placing tensor data in memory and affects performance by affecting data locality; therefore, deep neural network libraries have conventions on the layout. Since AI applications can use arbitrary layouts, and existing AI systems do not provide programming abstractions to shield the layout conventions of libraries, operator developers need to write a lot of layout-related code, which reduces the efficiency of integrating new libraries or developing new operators. Moreover, developers assign the layout conversion operation to the internal operator to deal with the uncertainty of the input layout, thus losing the opportunity for layout optimization. Based on the idea of polymorphism, we propose a layout-agnostic virtual tensor programming interface, namely the VTensor framework, which enables developers to write new operators without caring about the underlying physical layout of tensors. In addition, the VTensor framework performs global layout inference at runtime to transparently resolve the required layout of virtual tensors, and runtime layout-oriented optimizations to globally minimize the number of layout transformation operations. Experimental results demonstrate that with VTensor, developers can avoid writing layout-dependent code. Compared with TensorFlow, for the 16 operations used in 12 popular networks, VTensor can reduce the lines of code (LOC) of writing a new operation by 47.82% on average, and improve the overall performance by 18.65% on average.
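The sketch below illustrates the layout-agnostic idea in miniature: operators are written against a wrapper that resolves the physical layout lazily, so a conversion happens only when a consumer declares what it needs. The class and method names are assumptions for illustration and are not the VTensor API.

```python
# Lazy layout resolution for a "virtual tensor"; illustration only.
import numpy as np

_PERM = {("NCHW", "NHWC"): (0, 2, 3, 1), ("NHWC", "NCHW"): (0, 3, 1, 2)}

class VirtualTensor:
    def __init__(self, data, layout):
        self.data = data                    # physical buffer
        self.layout = layout                # current physical layout

    def require(self, layout):
        """Resolve to the layout a library kernel expects, converting at most once."""
        if layout != self.layout:
            self.data = np.transpose(self.data, _PERM[(self.layout, layout)])
            self.layout = layout
        return self.data

def relu(vt):
    # Layout-agnostic operator: element-wise, so no conversion is triggered.
    return VirtualTensor(np.maximum(vt.data, 0), vt.layout)

def conv_like_kernel(vt):
    # A library kernel with an NHWC convention: conversion happens here, once.
    x = vt.require("NHWC")
    return VirtualTensor(x * 1.0, "NHWC")

x = VirtualTensor(np.random.randn(1, 3, 8, 8), "NCHW")
y = conv_like_kernel(relu(x))               # only one NCHW->NHWC transpose occurs
print(y.layout, y.data.shape)               # NHWC (1, 8, 8, 3)
```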
Software engineering workflows use version control systems to track changes and handle merge cases from multiple contributors. This has introduced challenges to testing because it is impractical to test whole codebases to ensure each change is defect-free, and it is not enough to test changed files alone. Just-in-time software defect prediction (JIT-SDP) systems have been proposed to solve this by predicting the likelihood that a code change is defective. Numerous techniques have been studied to build such JIT software defect prediction models, but the power of pre-trained code transformer language models in this task has been underexplored. These models have achieved human-level performance in code understanding and software engineering tasks. Inspired by that, we modeled the problem of change defect prediction as a text classification task utilizing these pre-trained models. We have investigated this idea on a recently published dataset, ApacheJIT, consisting of 44k commits. We concatenated the changed lines in each commit as one string and augmented it with the commit message and static code metrics. Parameter-efficient fine-tuning was performed for 4 chosen pre-trained models, JavaBERT, CodeBERT, CodeT5, and CodeReviewer, with either partially frozen layers or low-rank adaptation (LoRA). Additionally, experiments with the Local, Sparse, and Global (LSG) attention variants were conducted to handle long commits efficiently, which reduces memory consumption. As far as the authors are aware, this is the first investigation into the abilities of pre-trained code models to detect defective changes in the ApacheJIT dataset. Our results show that proper fine-tuning improves the defect prediction performance of the chosen models in terms of F1 score. CodeBERT and CodeReviewer achieved a 10% and 12% increase in the F1 score over the best baseline models, JITGNN and JITLine, when commit messages and code metrics are included. Our approach sheds more light on the abilities of language models in just-in-time defect prediction.
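A hedged sketch of the general setup described above: a commit is framed as a single text-classification input and LoRA adapters are attached to a pre-trained code model. The model choice, hyperparameters, and input format are assumptions rather than the paper's exact configuration.

```python
# Commit defect prediction as text classification with LoRA; illustrative setup.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
import torch

model_name = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the encoder with low-rank adapters; only the adapters (plus the new
# classification head) are trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"],
                  task_type="SEQ_CLS")
model = get_peft_model(model, lora)

# One training example: commit message + changed lines + a few code metrics,
# concatenated into a single string with the tokenizer's separator token.
sep = tokenizer.sep_token
text = (f"fix: handle null user session {sep} "
        f"if (session == null) return; {sep} la=3 ld=1 nf=1")
batch = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
labels = torch.tensor([1])                      # 1 = defect-inducing change

out = model(**batch, labels=labels)
out.loss.backward()                             # plug into any training loop
print(float(out.loss), out.logits.shape)
```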
Medical image analysis plays an irreplaceable role in diagnosing, treating, and monitoring various diseases. Convolutional neural networks (CNNs) have become popular as they can extract intricate features and patterns from extensive datasets. This paper covers the structure of CNNs and their advances, and explores the different types of transfer learning strategies as well as classic pre-trained models. The paper also discusses how transfer learning has been applied to different areas within medical image analysis. This comprehensive overview aims to assist researchers, clinicians, and policymakers by providing detailed insights, helping them make informed decisions about future research and policy initiatives to improve medical image analysis and patient outcomes.
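To ground the transfer-learning strategies discussed above, here is a minimal feature-extraction sketch: an ImageNet-pretrained CNN is frozen and only a new classification head is trained for a medical task. The backbone choice and the two-class setup are illustrative assumptions.

```python
# Feature-extraction transfer learning: frozen backbone, trainable head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: e.g. benign vs malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224), ImageNet-normalized; labels: (B,)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random data standing in for a real medical dataset.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```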
Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of the kinetics of cloth, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency, or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow is able to generate corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Attention is employed to capture the dependency between garments and body motions. Moreover, a post-processing procedure is further applied to perform penetration removal and smoothing. Thus, textured clothing animation that is collision-free and temporally consistent is generated. We quantitatively and qualitatively evaluated our proposed workflow from different aspects. Extensive experiments demonstrate that our network is able to deliver clothing dynamics which retain the wrinkles from physics-based simulation, while running 1000 times faster. Besides, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be publicly available soon.
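As a rough sketch of the motion-to-garment mapping described above, the code below runs a transformer encoder over a motion sequence to predict per-frame garment vertex offsets, followed by a naive penetration-removal pass; shapes, layer sizes, and the post-processing rule are illustrative assumptions, not the paper's network.

```python
# Transformer motion-to-garment sketch with a toy penetration-removal pass.
import torch
import torch.nn as nn

class GarmentSynthesizer(nn.Module):
    def __init__(self, motion_dim=75, n_verts=4000, d_model=256):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # attends across frames
        self.out_proj = nn.Linear(d_model, n_verts * 3)
        self.n_verts = n_verts

    def forward(self, motion):
        # motion: (B, T, motion_dim) -> garment vertices: (B, T, n_verts, 3)
        h = self.encoder(self.in_proj(motion))
        return self.out_proj(h).view(*motion.shape[:2], self.n_verts, 3)

def push_out_of_body(garment, body_center, min_radius=0.12):
    # Naive post-process: project vertices that fall inside a bounding sphere
    # of the body back onto its surface (stand-in for real penetration removal).
    offset = garment - body_center
    dist = offset.norm(dim=-1, keepdim=True).clamp(min=1e-6)
    pushed = body_center + offset / dist * min_radius
    return torch.where(dist < min_radius, pushed, garment)

net = GarmentSynthesizer()
motion = torch.randn(1, 30, 75)                         # 30-frame motion clip
verts = push_out_of_body(net(motion), torch.zeros(3))
print(verts.shape)                                      # (1, 30, 4000, 3)
```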