Antenna optimization using machine learning is a rapidly evolving field that leverages the power of artificial intelligence to design and improve antenna systems. Antenna optimization is a process of modifying antenna...
Recently, Generative Adversarial Networks (GANs) have become the mainstream approach to text-to-image (T2I) generation. However, a standard normal distribution noise input cannot provide sufficient information to synthesize an image that approaches the ground-truth image, and the multistage generation strategy results in complex T2I models. Therefore, this study proposes a novel feature-grounded single-stage T2I model, which considers the "real" distribution learned from training images as one input and introduces a worst-case-optimized similarity measure into the loss function to enhance the model's generation quality. Experimental results on two benchmark datasets demonstrate the competitive performance of the proposed model in terms of the Frechet inception distance and inception score compared with those of some classical and state-of-the-art models, showing the improved similarities among the generated image, text, and ground truth.
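The abstract does not spell out how the worst-case-optimized similarity measure enters the loss, so the following PyTorch sketch shows only one plausible reading: a standard non-saturating adversarial term plus a penalty on the least similar generated/ground-truth feature pair in a batch. The names `fake_feats`, `real_feats`, and `lambda_sim` are assumptions, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_feats, real_feats, lambda_sim=1.0):
    """Illustrative generator objective: adversarial term plus a similarity
    penalty between generated and ground-truth image features."""
    # Standard non-saturating adversarial loss.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Cosine similarity per sample; penalise the worst (minimum) over the batch.
    sim = F.cosine_similarity(fake_feats.flatten(1), real_feats.flatten(1), dim=1)
    worst_case_penalty = 1.0 - sim.min()
    return adv + lambda_sim * worst_case_penalty
```

Taking the minimum over the batch rather than the mean is what gives the term its worst-case flavour; the measure actually used by the authors may differ.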
Image deraining is a highly ill-posed problem. Although significant progress has been made due to the use of deep convolutional neural networks, this problem still remains challenging, especially for detail restoration and generalization to real rain images. In this paper, we propose a deep residual channel attention network (DeRCAN) for deraining. The channel attention mechanism is able to capture the inherent properties of the feature space and thus facilitates more accurate estimations of structures and details for image deraining. In addition, we further propose an unsupervised learning approach to better handle real rain images based on the proposed network. Qualitative and quantitative evaluation results on both synthetic and real-world images demonstrate that the proposed DeRCAN performs favorably against state-of-the-art methods.
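DeRCAN's exact block design is not given in the abstract; the snippet below is a generic squeeze-and-excitation style channel attention layer of the kind used in RCAN-like restoration networks, with `reduction` as an assumed hyperparameter, to illustrate how per-channel weights rescale the feature space.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic squeeze-and-excitation style channel attention:
    global average pooling produces per-channel statistics, and a small
    bottleneck MLP turns them into per-channel weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(self.pool(x))   # (N, C, 1, 1) channel weights
        return x * w                # rescale each feature channel
```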
With the profound use of digital content in education and social media, multimedia content has become a prevalent means of communication, and with such rapid growth, information security remains a major concern. T...
Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which significantly benefit from the advancements in deep learning technologies. Due to the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss the future directions of using LLMs in software security, including the future directions for the existing use of LLMs and extensions from conventional deep learning research.
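As a concrete illustration of the fuzzing stage discussed above (not a technique from any surveyed paper), an LLM can be asked to propose new seed inputs that a conventional fuzzer then mutates further. The sketch assumes a hypothetical `llm_complete(prompt)` helper standing in for any chat-completion client.

```python
import json
import random

def llm_complete(prompt: str) -> str:
    """Placeholder for a chat-completion call; swap in a real client here."""
    raise NotImplementedError

def propose_seeds(corpus, n=5):
    """Ask the model for new fuzzing seeds that resemble, but vary,
    existing corpus entries."""
    examples = "\n".join(random.sample(corpus, min(3, len(corpus))))
    prompt = (
        "Here are example inputs accepted by the program under test:\n"
        f"{examples}\n"
        f"Produce {n} new inputs as a JSON list of strings, each similar in "
        "format but exercising edge cases (empty fields, huge numbers, "
        "unusual unicode)."
    )
    reply = llm_complete(prompt)
    try:
        seeds = json.loads(reply)
    except json.JSONDecodeError:
        seeds = []            # fall back to mutation-only fuzzing
    return [s for s in seeds if isinstance(s, str)]
```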
Suicide is a significant public health issue that devastates individuals and society. Early warning systems are crucial in preventing suicide. The purpose of this research is to create a deep learning model to identif...
Graph processing has been widely used in many scenarios, from scientific computing to artificial intelligence. Graph processing exhibits irregular computational parallelism and random memory accesses, unlike traditional workloads. Therefore, running graph processing workloads on conventional architectures (e.g., CPUs and GPUs) often shows a significantly low compute-memory ratio with few performance benefits, and can be, in many cases, even slower than a specialized single-thread graph implementation. While domain-specific hardware designs are essential for graph processing, it is still challenging to transform the hardware capability into a performance boost without coupled software optimizations. This article presents a graph processing ecosystem from hardware to software. We start by introducing a series of hardware accelerators as the foundation of this ecosystem. Then, the codesigned parallel graph systems and their distributed techniques are presented to support graph applications. Finally, we introduce our efforts on novel graph applications and hardware platforms. Our results show that various graph applications can be efficiently accelerated in this graph processing ecosystem.
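The low compute-memory ratio mentioned above is easy to see in code: in a plain pull-style PageRank over a CSR graph, each edge costs one multiply-add but also one data-dependent load from an effectively random location. The sketch below uses NumPy and illustrative array names; it is not taken from the article's systems.

```python
import numpy as np

def pagerank_step(row_ptr, col_idx, rank, out_degree, damping=0.85):
    """One pull-style PageRank iteration over a CSR graph whose rows store
    the in-neighbours of each vertex. The gather contrib[col_idx[e]] is a
    data-dependent load: neighbours are scattered across memory, so caches
    and prefetchers help little -- the 'random memory access' of graph work."""
    n = len(rank)
    contrib = rank / np.maximum(out_degree, 1)      # per-source contribution
    new_rank = np.full(n, (1.0 - damping) / n)
    for v in range(n):
        for e in range(row_ptr[v], row_ptr[v + 1]):
            new_rank[v] += damping * contrib[col_idx[e]]   # irregular gather
    return new_rank
```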
Cloud computing is an emerging field in information technology, enabling users to access a shared pool of computing resources. Despite its potential, cloud technology presents various challenges, with one of the most ...
The current study has two main aims. An effective strategy for improving local search is to combine the Set Algebra-Based Heuristic Algorithm (SAHA) with the Nelder-Mead simplex method. The approac...
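The excerpt does not describe SAHA itself, but the local-refinement half of the hybrid is the standard Nelder-Mead simplex method, so a candidate found by the global heuristic can simply be handed to SciPy's implementation as a starting point. The objective and candidate below are placeholders (the Rosenbrock test function), not the study's problem.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Placeholder objective: the Rosenbrock function, a common test problem.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

# Suppose the global heuristic (e.g., SAHA) returned this rough candidate.
candidate = np.array([0.8, 1.2, 0.9])

# Refine it locally with the Nelder-Mead simplex method.
result = minimize(objective, candidate, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print(result.x, result.fun)
```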
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end devices. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. The framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud layers. In simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. After successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
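A minimal sketch of the scheduling policy summarized above: pick the reachable node with the lowest estimated latency, and on failure fall back to the next best node. The `Node` structure, `execute` callback, and retry limit are illustrative assumptions, not details from the FORD paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float     # estimated round-trip latency to this node
    available: bool = True

def schedule_with_retry(task, nodes, execute, max_attempts=3):
    """Allocate the task to the lowest-latency available node;
    on failure, mark that node unavailable and retry on the next best one."""
    for _ in range(max_attempts):
        candidates = sorted((n for n in nodes if n.available),
                            key=lambda n: n.latency_ms)
        if not candidates:
            break
        node = candidates[0]
        try:
            # On success the result would then be persisted to cloud storage.
            return node.name, execute(task, node)
        except RuntimeError:
            node.available = False    # fault-tolerant fallback to another node
    raise RuntimeError(f"task {task!r} failed on all reachable nodes")
```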