With the arrival of 5G, latency-sensitive applications are becoming increasingly prevalent. Mobile Edge Computing (MEC) technology offers high bandwidth, low latency and low energy consumption, and has attracted much attention among researchers. To improve the Quality of Service (QoS), this study focuses on computation offloading in MEC. We consider QoS from the perspectives of computational cost, the curse of dimensionality, user privacy and catastrophic forgetting of new users. A QoS model is established based on delay and energy consumption, and an adaptive task offloading algorithm based on DDQN and Federated Learning (FL) is designed for MEC. The proposed algorithm combines the QoS model with deep reinforcement learning to obtain an optimal offloading policy from the local link and node state information within the channel coherence time, addressing time-varying transmission channels and reducing computing energy consumption and task processing delay. To solve the problems of privacy and catastrophic forgetting, we use FL to exploit multiple users' data in a distributed manner when learning the decision model, protecting data privacy and improving model accuracy. During FL iterations, the communication delay of individual devices can be excessive, which affects the overall delay. Therefore, we adopt a communication delay optimization algorithm based on a unary outlier detection mechanism to reduce the communication delay of such devices. Simulation results indicate that, compared with existing schemes, the proposed method significantly reduces the computation cost on a device and improves QoS when handling complex tasks.
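A minimal sketch of the two ingredients the abstract spells out, the delay-energy cost and the exclusion of straggler devices before aggregation, is given below. All names (qos_cost, filter_stragglers, fedavg), the equal cost weights, and the z-score rule standing in for the unary outlier detection mechanism are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def qos_cost(delay, energy, w_delay=0.5, w_energy=0.5):
    """Weighted delay-energy cost; equal weights are illustrative, not the paper's."""
    return w_delay * delay + w_energy * energy

def filter_stragglers(comm_delays, z_thresh=2.0):
    """Keep devices whose communication delay is not a univariate outlier.

    A simple z-score rule stands in for the paper's unary outlier detection;
    flagged devices are skipped in the current aggregation round.
    """
    delays = np.asarray(comm_delays, dtype=float)
    z = (delays - delays.mean()) / (delays.std() + 1e-9)
    return np.where(z < z_thresh)[0]  # indices of devices kept this round

def fedavg(client_params, client_sizes):
    """Size-weighted average of client model parameters (standard FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * p for c, p in zip(coeffs, client_params))

# Example: drop the straggler at 95 ms, then aggregate the remaining updates.
delays = [12.0, 14.5, 13.1, 95.0, 12.8]        # hypothetical values, in ms
params = [np.random.randn(4) for _ in delays]  # toy "model" per device
kept = filter_stragglers(delays)
global_update = fedavg([params[i] for i in kept], [1.0] * len(kept))
```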
Three-dimensional (3D) surface geometry provides elemental information in various sciences and precision engineering. Fringe Projection Profilometry (FPP) is one of the most powerful non-contact (thus non-destructive) and non-interferometric (thus less restrictive) 3D measurement techniques, featuring high precision. However, the measurement precision of FPP is currently evaluated experimentally, lacking a complete theoretical model for guidance. We propose the first complete FPP precision model chain, including four stage models (camera intensity, fringe intensity, phase and 3D geometry) and two transfer models (from fringe intensity to phase and from phase to 3D geometry). The most significant contributions include the adoption of a non-Gaussian camera noise model, which, for the first time, establishes the connection between the camera's electronic parameters (known in advance from the camera manufacturer) and the phase precision, and the formulation of the phase-to-geometry transfer, which makes the precision of the measured geometry representable in an explicit and concise form. As a result, we not only establish the full precision model of the 3D geometry to characterize the performance of an FPP system that has already been set up, but also derive the expression of the highest possible precision limit to guide the error distribution of an FPP system that is yet to be built. These theoretical models make FPP a more designable technique, able to meet measurement demands concerning object sizes from macro to micro and measurement precisions from a few millimeters to a few micrometers.
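To make the noise-to-geometry propagation concrete, here is a back-of-the-envelope sketch. It assumes the textbook Gaussian-noise result for N-step phase shifting and a simple triangulation relation h ≈ φ·p/(2π·tanθ); the paper's non-Gaussian camera model and full transfer formulation are not reproduced, and all parameter values in the example are hypothetical.

```python
import numpy as np

def phase_std(sigma_intensity, modulation_b, n_steps):
    """Phase standard deviation for classical N-step phase shifting.

    Textbook approximation under additive Gaussian intensity noise:
        sigma_phi ~ sqrt(2 / N) * sigma_I / B
    where B is the fringe modulation amplitude. The paper replaces the Gaussian
    assumption with a camera-electronics noise model; this is only the classic form.
    """
    return np.sqrt(2.0 / n_steps) * sigma_intensity / modulation_b

def height_std(sigma_phi, fringe_period_mm, triangulation_angle_rad):
    """Propagate phase noise to height noise for a simple triangulation geometry,
    assuming height ~ phase * p / (2 * pi * tan(theta))."""
    return sigma_phi * fringe_period_mm / (2 * np.pi * np.tan(triangulation_angle_rad))

# Example: 4-step phase shifting, noise std of 2 gray levels, modulation of 100
# gray levels, 10 mm fringe period at a 30-degree triangulation angle.
s_phi = phase_std(sigma_intensity=2.0, modulation_b=100.0, n_steps=4)
s_h = height_std(s_phi, fringe_period_mm=10.0, triangulation_angle_rad=np.radians(30))
print(s_phi, s_h)
```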
Code review is a critical process in software development, contributing to the overall quality of the product by identifying errors early. A key aspect of this process is the selection of appropriate reviewers to scrutinize changes made to source code. However, in large-scale open-source projects, selecting the most suitable reviewers for a specific change can be a challenging task. To address this, we introduce Code Context Based Reviewer Recommendation (CCB-RR), a model that leverages information from changesets to recommend the most suitable reviewers. The model takes into consideration the paths of modified files and the context derived from the changesets, including their titles and descriptions. Additionally, CCB-RR employs KeyBERT to extract the most relevant keywords and compare semantic similarity across changesets. The model integrates the paths of modified files, keyword information, and the context of code changes to form a comprehensive picture of the changeset. We conducted extensive experiments on four open-source projects, demonstrating the effectiveness of CCB-RR. The model achieved a Top-1 accuracy of 60%, 55%, 51%, and 45% on the Android, OpenStack, QT, and LibreOffice projects, respectively. For Mean Reciprocal Rank (MRR), CCB-RR achieved 71%, 62%, 52%, and 68% on the same projects, respectively, highlighting its potential for practical application in code reviewer recommendation.
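The keyword-extraction and similarity step can be illustrated with KeyBERT and a sentence-transformers encoder. This is a usage sketch, not CCB-RR itself; the backbone name all-MiniLM-L6-v2, the way title and description are concatenated, and the example texts are assumptions.

```python
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical backbone choice
kw_model = KeyBERT(model=encoder)

def changeset_keywords(title, description, top_n=5):
    """Top keywords/keyphrases for one changeset's title and description."""
    text = f"{title}. {description}"
    return [kw for kw, _ in kw_model.extract_keywords(
        text, keyphrase_ngram_range=(1, 2), stop_words="english", top_n=top_n)]

def changeset_similarity(text_a, text_b):
    """Cosine similarity between two changesets' texts."""
    emb = encoder.encode([text_a, text_b], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

# Example: compare a new changeset against one a candidate reviewer has handled before.
kws = changeset_keywords("Fix race in media codec", "Guard the buffer queue with a lock.")
score = changeset_similarity("Fix race in media codec",
                             "Add locking around codec buffers")
```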
Intelligent education is a significant application of artificial intelligence. One of the key research topics in intelligent education is cognitive diagnosis, which aims to gauge students' proficiency on specific knowledge concepts (e.g., Geometry). To the best of our knowledge, most existing cognitive diagnosis models focus primarily on improving diagnostic accuracy while rarely considering fairness issues; for instance, the diagnosis of students may be affected by various sensitive attributes (e.g., region). In this paper, we aim to explore fairness in cognitive diagnosis and answer two questions: (1) Are the results of existing cognitive diagnosis models affected by sensitive attributes? (2) If so, how can we mitigate the impact of sensitive attributes to ensure fair diagnosis results? To this end, we first empirically reveal that several well-known cognitive diagnosis methods usually lead to unfair performance, and that the trend of unfairness varies among different cognitive diagnosis models. Then, we provide a theoretical analysis to explain the reasons behind this phenomenon. To resolve the unfairness problem in existing cognitive diagnosis models, we propose a general fairness-aware cognitive diagnosis framework, FairCD. Our fundamental principle is to eliminate the effect of sensitive attributes on student proficiency. To achieve this, we divide student proficiency in existing cognitive diagnosis models into two components, bias proficiency and fair proficiency, and design two orthogonal tasks for each of them so that the fair proficiency remains independent of sensitive attributes; the fair proficiency is taken as the final diagnosis result. Extensive experiments on the Program for International Student Assessment (PISA) dataset clearly show the effectiveness of our framework.
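A sketch of the bias/fair split is shown below in PyTorch. The abstract does not specify FairCD's architecture or objectives, so the embedding split, the sensitive-attribute prediction head, and the cosine-orthogonality penalty are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProficiencySplit(nn.Module):
    """Split a student's proficiency into a 'bias' part (tied to the sensitive
    attribute) and a 'fair' part (used for diagnosis). Sketch only."""

    def __init__(self, n_students, dim, n_sensitive_classes):
        super().__init__()
        self.bias_prof = nn.Embedding(n_students, dim)
        self.fair_prof = nn.Embedding(n_students, dim)
        self.attr_head = nn.Linear(dim, n_sensitive_classes)

    def forward(self, student_ids):
        return self.bias_prof(student_ids), self.fair_prof(student_ids)

    def losses(self, student_ids, sensitive_labels):
        b, f_ = self(student_ids)
        # Task 1: the bias component should carry the sensitive information.
        attr_loss = F.cross_entropy(self.attr_head(b), sensitive_labels)
        # Task 2: keep the fair component orthogonal to the bias component,
        # so it stays (approximately) free of sensitive information.
        ortho_loss = (F.cosine_similarity(b, f_, dim=-1) ** 2).mean()
        return attr_loss, ortho_loss

# Example: 1000 students, 32-dim proficiency, 4 sensitive-attribute classes.
model = ProficiencySplit(1000, 32, 4)
ids = torch.randint(0, 1000, (16,))
labels = torch.randint(0, 4, (16,))
attr_loss, ortho_loss = model.losses(ids, labels)
```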
Overgeneralisation may happen because most studies on data publishing for multiple sensitive attributes (SAs) have not considered personalised privacy requirements. Meanwhile, sensitive information disclosure may also be caused by these personalised requirements. To address the matter, this article develops a personalised data publishing method for multiple SAs. According to the requirements of individuals, the new method partitions SA values into two categories, private values and public values, and breaks the association between them for privacy protection. The private values are anonymised, while the public values are released without this process. An algorithm is designed to achieve the privacy model, where the selectivity is determined by the sensitive value frequency and undesirable disclosure. The experimental results show that the proposed method provides more information utility than previous methods. Theoretical analyses and experiments also indicate that privacy can be guaranteed even though the public values are known to an adversary. The overgeneralisation and privacy breaches caused by personalised requirements can be avoided by the new method.
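The private/public split driven by individual requirements can be pictured with the short sketch below. The record layout, the per-person flag dictionary, and the shuffle-based stand-in for the anonymisation step are hypothetical, since the abstract does not give the algorithm's details.

```python
import random

def partition_sa_values(records, private_flags):
    """Split each person's sensitive-attribute (SA) values into private and public parts.

    `records` maps person -> {sa_name: value}; `private_flags` maps person -> set of
    SA names the individual asked to keep private (hypothetical layout).
    """
    private_part, public_part = {}, {}
    for person, sas in records.items():
        flags = private_flags.get(person, set())
        private_part[person] = {k: v for k, v in sas.items() if k in flags}
        public_part[person] = {k: v for k, v in sas.items() if k not in flags}
    return private_part, public_part

def break_association(private_part):
    """Detach private values from identities by publishing shuffled per-attribute
    columns; a simple stand-in for the paper's anonymisation of private values."""
    columns = {}
    for sas in private_part.values():
        for k, v in sas.items():
            columns.setdefault(k, []).append(v)
    for vals in columns.values():
        random.shuffle(vals)
    return columns

# Example with two individuals and two SAs; u2 marks 'disease' as private.
records = {"u1": {"disease": "flu", "salary": "3k"},
           "u2": {"disease": "hiv", "salary": "5k"}}
flags = {"u2": {"disease"}}
private_part, public_part = partition_sa_values(records, flags)
published_private = break_association(private_part)
```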
Recent advancements in attribute localization have showcased its potential for discovering the intrinsic semantic knowledge in visual feature representations, thereby facilitating the significant visual-semantic interactions essential for zero-shot learning (ZSL). However, the majority of existing attribute localization methods rely heavily on classification constraints, resulting in accurate localization of only a few attributes while neglecting the remaining important attributes associated with other classes. This limitation hinders the discovery of the intrinsic semantic relationships between attributes and visual features across all classes. To address this problem, we propose a novel attribute localization refinement (ALR) module designed to enhance the model's ability to accurately localize all attributes. Essentially, we enhance weakly discriminative attributes by grouping them and introduce weighted attribute regression to standardize the mapping values of semantic attributes. This module can be flexibly combined with existing attribute localization methods. Extensive experiments show that, when combined with the ALR module, the localization errors in existing methods are corrected and state-of-the-art classification performance is achieved.
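One possible reading of the weighted attribute regression is sketched below in PyTorch. The inverse-strength group weighting and the squared-error target are assumptions for illustration, since the abstract does not define ALR's exact weighting scheme or regression target.

```python
import torch

def weighted_attribute_regression(pred_attrs, gt_attrs, group_ids):
    """Weighted regression of predicted attribute scores onto semantic attributes.

    Attributes are grouped, and each group's weight is the inverse of its mean
    ground-truth magnitude, so groups of weakly discriminative attributes
    contribute more to the loss (an illustrative assumption, not ALR's formula).
    """
    weights = torch.ones_like(gt_attrs)
    for g in group_ids.unique():
        mask = group_ids == g                                  # attributes in this group
        strength = gt_attrs[:, mask].abs().mean().clamp(min=1e-6)
        weights[:, mask] = 1.0 / strength
    return (weights * (pred_attrs - gt_attrs) ** 2).mean()

# Example: batch of 8 samples, 312 attributes in 28 hypothetical groups.
pred = torch.randn(8, 312)
gt = torch.rand(8, 312)
groups = torch.randint(0, 28, (312,))
loss = weighted_attribute_regression(pred, gt, groups)
```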
Despite the effectiveness of vision-language supervised fine-tuning in enhancing the performance of vision large language models (VLLMs), existing visual instruction tuning datasets have the following limitations. (1) Instruction annotation quality: despite existing VLLMs exhibiting strong performance, instructions generated by those advanced VLLMs may still suffer from inaccuracies, such as hallucinations. (2) Instruction and image diversity: the limited range of instruction types and the lack of diversity in image data may impair the model's ability to generate diverse outputs that are closer to real-world scenarios. To address these challenges, we construct MMInstruct, a high-quality, diverse visual instruction tuning dataset that consists of 973k instructions from 24 domains and four instruction types: judgment, multiple-choice, long visual question answering, and short visual question answering. To construct MMInstruct, we propose an instruction generation data engine that leverages GPT-4V, GPT-3.5, and manual correction. Our instruction generation engine enables semi-automatic, low-cost, multi-domain instruction generation at 1/6 the cost of manual construction. Through extensive experimental validation and ablation studies, we demonstrate that MMInstruct can significantly improve the performance of VLLMs; for example, the model fine-tuned on MMInstruct achieves new state-of-the-art performance on 10 out of 12 benchmarks. The code and data shall be available at https://***/yuecao0119/MMInstruct.
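The semi-automatic engine can be pictured as a draft-then-correct loop over images. The sketch below uses the OpenAI chat-completions vision message format; the model name, prompt wording, and output convention are placeholders, and the GPT-3.5 and manual-correction stages described in the abstract are omitted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTION_TYPES = ["judgment", "multiple-choice", "long VQA", "short VQA"]

def draft_instructions(image_url, domain, seed_questions):
    """Draft one instruction of each type for a single image.

    Model name, prompt, and the JSON-lines output convention are placeholders;
    the drafts are intended to be passed on to manual correction.
    """
    prompt = (
        f"Domain: {domain}. For the attached image, write one instruction of each "
        f"type ({', '.join(INSTRUCTION_TYPES)}) together with its answer, one JSON "
        f"object per line. Seed questions for reference: {seed_questions}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4V endpoint referenced in the abstract
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content
```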
In the era of network communication, digital image encryption (DIE) technology is critical to ensure the security of image transmission. However, there has been limited research on combining deep learning neural networks with chaotic maps for the encryption of digital images. Therefore, this paper addresses this gap by studying the generation of pseudo-random sequence (PRS) chaotic signals using dual logistic chaotic maps. These signals are then predicted using long short-term memory (LSTM) networks, resulting in the reconstruction of a new chaotic signal. During the research process, it was found that the LSTM network involves numerous training parameters, which can hinder training efficiency. To overcome this challenge and improve training efficiency, the paper proposes an improved particle swarm optimization (IPSO) algorithm to optimize the LSTM network. Finally, the chaotic signal obtained from the optimized model is further scrambled, obfuscated, and diffused to produce the final encrypted image. This research presents a digital image encryption algorithm based on a double chaotic map (DCM) and LSTM. The algorithm demonstrates a high average NPCR (Number of Pixel Change Rate) of 99.56% and a UACI (Unified Average Changing Intensity) value of 33.46%, indicating a strong ability to resist differential attacks. Overall, the proposed algorithm realizes secure and sensitive digital image encryption, ensuring the protection of personal information in the Internet environment.
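Two pieces of this pipeline are compact enough to sketch directly: generating a pseudo-random keystream from a logistic map (the paper couples two maps and predicts them with an IPSO-tuned LSTM, which is omitted here) and the standard NPCR/UACI metrics quoted in the results. Parameter values in the example are hypothetical.

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=500):
    """Pseudo-random sequence from a logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    out = np.empty(n)
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            out[i - burn_in] = x
    return out

def npcr_uaci(c1, c2):
    """Standard differential-attack metrics between two cipher images (uint8 arrays).

    NPCR: percentage of pixel positions whose values differ.
    UACI: mean absolute intensity difference, normalised by 255, in percent.
    """
    c1 = c1.astype(np.int32)
    c2 = c2.astype(np.int32)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

# A keystream for a 256x256 image; ideal cipher pairs behave like independent
# uniform noise, giving NPCR near 99.6% and UACI near 33.5%, which is the regime
# the abstract's reported values fall into.
keystream = (logistic_sequence(0.3141, 3.9999, 256 * 256) * 255).astype(np.uint8)
a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
b = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))
```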
The rapid acceleration of urbanization and industrialization has led to a significant increase in PM2.5 pollution, making it a critical global concern. The accurate prediction of PM2.5 concentrations is of utmost impo...
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.