In the wake of rapid advancements in artificial intelligence (AI), we stand on the brink of a transformative leap in data systems. The imminent fusion of AI and DB (AI×DB) promises a new generation of data systems, which will relieve the burden on end-users across all industry sectors by featuring AI-enhanced functionalities, such as personalized and automated in-database AI-powered analytics, and self-driving capabilities for improved system performance. In this paper, we explore the evolution of data systems with a focus on deepening the fusion of AI and DB. We present NeurDB, an AI-powered autonomous data system designed to fully embrace AI design in each major system component and provide in-database AI-powered analytics. We outline the conceptual and architectural overview of NeurDB, discuss its design choices and key components, and report its current development and future plans.
Low Earth orbit (LEO) satellite edge computing can overcome communication difficulties in harsh environments that lack the support of terrestrial communication infrastructure, and it is an indispensable option for achieving worldwide wireless communication coverage in the future. To improve the quality of service (QoS) for Internet-of-Things (IoT) devices, we combine LEO satellite edge computing and ground communication systems to provide network services for IoT devices in harsh environments. We study the QoS-aware computation offloading (QCO) problem for IoT devices in LEO satellite edge computing and investigate a computation offloading strategy that minimizes the total QoS cost of all devices while satisfying multiple constraints, such as the computing resource, delay, and energy consumption constraints. We formulate the QoS-aware computation offloading problem as a game model, named the QCO game, based on the non-cooperative competition among IoT devices. We analyze the finite improvement property of the QCO game and prove that it admits a Nash equilibrium. We then propose a distributed QoS-aware computation offloading (DQCO) algorithm for the QCO game. Experimental results show that the DQCO algorithm can effectively reduce the total QoS cost of IoT devices.
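To make the game-theoretic idea concrete, the sketch below runs plain best-response dynamics for a toy offloading game: each device repeatedly switches to its cheapest option given the others' choices, and the loop stops when no unilateral improvement exists, i.e., at a Nash equilibrium. The cost model, device count, and option names are illustrative assumptions, not the paper's DQCO algorithm.

```python
# Hypothetical sketch of best-response (finite-improvement) dynamics for a
# QCO-style offloading game. The cost model and parameters are assumptions.
import random

N_DEVICES = 8                                   # IoT devices (players)
CHOICES = ["local", "satellite", "ground"]      # offloading decisions

def qos_cost(device, choice, profile):
    """Toy QoS cost: base delay/energy plus congestion on shared servers."""
    base = {"local": 5.0, "satellite": 2.0, "ground": 1.5}[choice]
    if choice == "local":
        return base
    # congestion term: cost grows with the number of devices sharing the choice
    load = sum(1 for d, c in profile.items() if c == choice and d != device)
    return base + 0.8 * load

def best_response_dynamics(max_rounds=100):
    profile = {d: random.choice(CHOICES) for d in range(N_DEVICES)}
    for _ in range(max_rounds):
        improved = False
        for d in range(N_DEVICES):
            current = qos_cost(d, profile[d], profile)
            best = min(CHOICES, key=lambda c: qos_cost(d, c, profile))
            if qos_cost(d, best, profile) < current - 1e-9:
                profile[d] = best           # unilateral improvement step
                improved = True
        if not improved:                    # no device can improve: Nash equilibrium
            return profile
    return profile

if __name__ == "__main__":
    eq = best_response_dynamics()
    total = sum(qos_cost(d, c, eq) for d, c in eq.items())
    print("equilibrium profile:", eq, "total QoS cost:", round(total, 2))
```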
In the fields of intelligent transportation and multi-task cooperation, many practical problems can be modeled by the colored traveling salesman problem (CTSP). When solving large-scale CTSP instances with more than 1000 dimensions, existing algorithms suffer from slow convergence and limited solution quality. This paper proposes a new hybrid ITÖ (HITÖ) algorithm, which integrates two new strategies, a crossover operator and a mutation strategy, into the standard ITÖ algorithm. In the iteration process of HITÖ, a feasible solution of CTSP is represented by double-chromosome coding, and the random drift and wave operators are used to explore and exploit new unknown regions. In this process, the drift operator is executed by the improved crossover operator, and the wave operator is performed by the optimized mutation strategy. Experiments show that HITÖ is superior to the known comparison algorithms in terms of solution quality.
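The double-chromosome idea can be illustrated with a short, hypothetical sketch: one chromosome orders the cities and a second assigns each city to a salesman, with an order crossover standing in for the drift operator and a swap mutation standing in for the wave operator. The operator choices and parameters are assumptions rather than the HITÖ implementation.

```python
# Illustrative CTSP representation and operators (not the paper's HITÖ code).
import random

def random_individual(n_cities, n_salesmen):
    cities = random.sample(range(n_cities), n_cities)                 # chromosome 1: tour order
    owners = [random.randrange(n_salesmen) for _ in range(n_cities)]  # chromosome 2: assignment
    return cities, owners

def order_crossover(p1, p2):
    """Classic OX crossover on the city permutation (stand-in for the drift operator)."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def swap_mutation(owners, rate=0.05):
    """Randomly swaps the salesman assignments of two cities (stand-in for the wave operator)."""
    out = owners[:]
    for i in range(len(out)):
        if random.random() < rate:
            j = random.randrange(len(out))
            out[i], out[j] = out[j], out[i]
    return out

p1, o1 = random_individual(10, 3)
p2, o2 = random_individual(10, 3)
print(order_crossover(p1, p2), swap_mutation(o1))
```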
Palmprint recognition is an emerging biometric technology that has attracted increasing attention in recent years. Many palmprint recognition methods have been proposed, including traditional methods and deep learning-based methods. Among the traditional methods, those based on directional features are mainstream because they achieve high recognition rates and are robust to illumination changes and small noise. However, to date, the stability of the palmprint directional response has not been deeply studied in these methods. In this paper, we analyse the problem of directional response instability in palmprint recognition methods based on directional features. We then propose a novel palmprint directional response stability measurement (DRSM) to judge the stability of the directional feature at each pixel. After filtering the palmprint image with a filter bank, we design DRSM according to the relationship between the maximum response value and the other response values at each pixel. Using DRSM, we can identify pixels with unstable directional responses and encode them with a specially designed encoding mode tailored to the specific method. We insert the DRSM mechanism into seven classical methods based on directional features and conduct extensive experiments on six public palmprint databases. The experimental results show that the DRSM mechanism can effectively improve the performance of these methods. This work is the first in-depth study on the stability of the palmprint directional response in the field of palmprint recognition, so it has strong reference value for research on palmprint recognition methods based on directional features.
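As a rough illustration of the idea (not the paper's DRSM definition), the sketch below filters an image with a small bank of oriented filters and treats the gap between the strongest and second-strongest response at each pixel as a stability proxy; pixels with a small gap have an ambiguous dominant direction. The filter bank, the stability formula, and the threshold are all assumptions.

```python
# Minimal, hypothetical per-pixel directional-response stability check.
import numpy as np
from scipy.ndimage import convolve

def line_filter(angle_deg, size=9):
    """Crude oriented line filter (stand-in for the usual Gabor/MFRAT bank)."""
    k = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        x = int(round(c + r * np.cos(t)))
        y = int(round(c + r * np.sin(t)))
        k[y, x] = 1.0
    return k / k.sum()

def directional_stability(img, n_dirs=6):
    responses = np.stack(
        [convolve(img, line_filter(180.0 * d / n_dirs)) for d in range(n_dirs)]
    )                                            # shape: (n_dirs, H, W)
    top = np.max(responses, axis=0)
    second = np.sort(responses, axis=0)[-2]
    # larger gap between strongest and second-strongest response ->
    # more stable dominant direction (one possible stability proxy)
    return (top - second) / (np.abs(top) + 1e-6)

img = np.random.rand(64, 64)
stability = directional_stability(img)
unstable = stability < 0.01                      # ambiguous dominant direction
print("unstable pixels:", int(unstable.sum()))
```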
To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving, capable of performing drivable area segmentation, lane detection, and traffic object detection. First, in the encoding stage, features are extracted, and the Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Then, in the decoding stage, specialized detection heads are designed; the drivable area segmentation head employs DySample to expand feature maps, and the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task or multi-task algorithm models.
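For reference, the MPDIoU loss mentioned above can be sketched as the IoU penalized by the normalized squared distances between matching box corners; the numpy version below follows that commonly cited formulation and should be read as an illustrative sketch rather than GDMNet's exact implementation.

```python
# Sketch of an MPDIoU-style box loss: 1 - (IoU - normalized corner distances).
import numpy as np

def mpdiou_loss(pred, gt, img_w, img_h):
    """pred, gt: boxes as (x1, y1, x2, y2); returns 1 - MPDIoU."""
    ix1, iy1 = np.maximum(pred[0], gt[0]), np.maximum(pred[1], gt[1])
    ix2, iy2 = np.minimum(pred[2], gt[2]), np.minimum(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2   # top-left corner distance
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2   # bottom-right corner distance
    norm = img_w ** 2 + img_h ** 2                         # normalize by image diagonal
    return 1.0 - (iou - d1 / norm - d2 / norm)

print(mpdiou_loss((10, 10, 50, 50), (12, 8, 48, 52), img_w=640, img_h=640))
```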
In modern society, an increasing number of occasions require effectively verifying people's identities. Biometrics is the most effective technology for personal identification. The research on automated biometric recognition mainly started in the 1960s and 1970s. In the following 50 years, the research and application of biometrics have achieved fruitful results. Around 2014-2015, with the successful applications of some emerging information technologies and tools, such as deep learning, cloud computing, big data, mobile communication, smartphones, location-based services, blockchain, new sensing technology, the Internet of Things and federated learning, biometric technology entered a new development stage. Therefore, taking 2014-2015 as the time boundary, the development of biometric technology can be divided into two stages. In addition, according to our knowledge and understanding of biometrics, we further divide the development of biometric technology into three phases, i.e., biometrics 1.0, 2.0 and 3.0. Biometrics 1.0 is the primary development phase, or the traditional development phase. Biometrics 2.0 is an explosive development phase due to the breakthroughs caused by some emerging information technologies. At present, we are in the development phase of biometrics 2.0. Biometrics 3.0 is the future development phase of biometrics. In the biometrics 3.0 phase, biometric technology will be fully mature and can meet the needs of various applications. Biometrics 1.0 is the initial phase of the development of biometric technology, while biometrics 2.0 is the advanced phase. In this paper, we provide a brief review of biometrics 1.0. Then, the concept of biometrics 2.0 is defined, and the architecture of biometrics 2.0 is presented. In particular, the application architecture of biometrics 2.0 in smart cities is introduced. The challenges and perspectives of biometrics 2.0 are also discussed.
Integrated sensing and communication (ISAC) is a promising technique for increasing spectral efficiency and supporting various emerging applications by sharing the spectrum and hardware between these functionalities. However, traditional ISAC schemes depend heavily on accurate mathematical models and suffer from high complexity and poor performance in practical scenarios. Recently, artificial intelligence (AI) has emerged as a viable technique to address these issues owing to its powerful learning capabilities, satisfactory generalization capability, fast inference speed, and high adaptability to dynamic environments, facilitating a system design shift from model-driven to data-driven. Intelligent ISAC, which integrates AI into ISAC, has become a hot topic attracting many researchers. In this paper, we provide a comprehensive overview of intelligent ISAC, including its motivation, typical applications, recent trends, and challenges. In particular, we first introduce the basic principle of ISAC, followed by its key techniques. Then, an overview of AI and a comparison between model-based and AI-based methods for ISAC are provided. Furthermore, the typical applications of AI in ISAC and the recent trends for AI-enabled ISAC are reviewed. Finally, the future research issues and challenges of intelligent ISAC are discussed.
This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. Combining Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN), a network framework model (SAE-GAN) is proposed for environmental data generation. The model combines two popular generative models, GAN and VAE, to generate features conditional on categorical data embeddings after SWOT analysis. The model is capable of generating features that resemble real feature distributions and adding sample factors to more accurately track individual sample information. Categorical data is used to retain more semantic information to generate features. The model was applied to species in Southern California, USA, citing SWOT analysis data to train the model. The results show that the model is capable of integrating data from more comprehensive analyses than traditional methods and generating high-quality reconstructed data from them, effectively solving the problem of insufficient data collection in development. The model is further validated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used for environmental data. The study provides a reliable and rich source of training data for species introduction site selection systems and makes a significant contribution to ecological and sustainable development.
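The TOPSIS assessment referred to above is a standard multi-criteria ranking procedure; a generic numpy sketch is given below, where the criteria, weights, and benefit/cost flags are illustrative assumptions rather than the study's actual setup.

```python
# Generic TOPSIS ranking: closeness of each alternative to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j]=True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalization
    v = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to ideal point
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))    # distance to anti-ideal point
    return d_neg / (d_pos + d_neg + 1e-12)            # closeness coefficient

# Example: three candidate sites scored on three hypothetical environmental criteria.
scores = topsis(
    matrix=[[0.8, 0.3, 0.6], [0.5, 0.7, 0.4], [0.9, 0.2, 0.8]],
    weights=[0.5, 0.3, 0.2],
    benefit=np.array([True, False, True]),
)
print("closeness scores:", np.round(scores, 3))
```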
Recently, deep image-hiding techniques have attracted considerable attention in covert communication and high-capacity information hiding. However, these approaches have some limitations; for example, a cover image lacks self-adaptability, and information leakage or weak robustness may occur. To address these issues, this study proposes a universal and adaptable image-hiding method. First, a domain attention mechanism is designed by combining Atrous convolution, which makes better use of the relationship between the secret image domain and the cover image domain. Second, to improve perceived human similarity, perceptual loss is incorporated into the training process. The experimental results are promising, with the proposed method achieving an average pixel discrepancy (APD) of 1.83 and a peak signal-to-noise ratio (PSNR) of 40.72 dB between the cover and stego images, indicative of its high quality. Moreover, the structural similarity index measure (SSIM) reaches 0.985, while the learned perceptual image patch similarity (LPIPS) also registers a remarkably low value. In addition, self-testing and cross-experiments demonstrate the model's adaptability and generalization in unknown hidden spaces, making it suitable for diverse computer vision tasks.
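The two distortion metrics quoted above can be computed as follows; PSNR is standard, while APD is taken here to mean the mean absolute per-pixel difference, which is an assumption about the paper's definition.

```python
# Cover/stego distortion metrics: APD (assumed mean absolute difference) and PSNR.
import numpy as np

def apd(cover, stego):
    return np.abs(cover.astype(np.float64) - stego.astype(np.float64)).mean()

def psnr(cover, stego, max_val=255.0):
    mse = ((cover.astype(np.float64) - stego.astype(np.float64)) ** 2).mean()
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a stego image that perturbs the cover by at most +/- 2 levels.
cover = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
stego = np.clip(cover + np.random.randint(-2, 3, cover.shape), 0, 255).astype(np.uint8)
print("APD:", round(apd(cover, stego), 3), "PSNR:", round(psnr(cover, stego), 2), "dB")
```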
Recently, Generative Adversarial Networks (GANs) have become the mainstream text-to-image (T2I) models. However, a standard normal distribution noise input cannot provide sufficient information to synthesize an image that approaches the ground-truth image. Moreover, the multistage generation strategy results in complex T2I architectures. Therefore, this study proposes a novel feature-grounded single-stage T2I model, which considers the "real" distribution learned from training images as one input and introduces a worst-case-optimized similarity measure into the loss function to enhance the model's generation performance. Experimental results on two benchmark datasets demonstrate the competitive performance of the proposed model in terms of the Fréchet inception distance and inception score compared to those of some classical and state-of-the-art models, showing the improved similarities among the generated image, text, and ground truth.
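For context, the Fréchet inception distance used for evaluation above compares the mean and covariance of real and generated image features; the sketch below shows the computation on stand-in random features (a real evaluation would extract features with an Inception network).

```python
# Frechet inception distance between two feature sets:
# ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2))
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

a = np.random.randn(500, 64)              # stand-in for generated-image features
b = np.random.randn(500, 64) + 0.1        # stand-in for real-image features
print("FID:", round(fid(a, b), 3))
```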