Recently, image inpainting has been proposed as a solution for restoring corrupted (polluted) images in the field of computer vision. Further, face inpainting is a subfield of image inpainting, which refers to a set of image e...
This paper presents a comprehensive review of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) technologies, focusing on their role in driving AI automation across diverse sectors with a s...
With artificial intelligence propelling rapid technological advances, many tools and frameworks have emerged to support virtual learning environments. They all make unique claims about how best to facilitate distance educ...
The rapid increase in Internet of Things (IoT) devices has raised significant security concerns, particularly in the face of reactive jamming attacks. This paper proposes a trust-based protocol named Trust-Based...
Advancements in cloud computing and virtualization technologies have revolutionized Enterprise Application Development with innovative ways to design and develop complex systems. Microservices Architecture is one of the recent techniques in which Enterprise Systems can be developed as fine-grained smaller components and deployed independently. This methodology brings numerous benefits like scalability, resilience, flexibility in development, and faster time to market. Despite these advantages, microservices also bring some challenges: dependent microservices need to be invoked one by one as a chain, and in most applications more than one chain of microservices runs in parallel to complete a particular user request. This results in competition for resources and in more inter-service communication among the services, which increases the overall latency of the application. A new approach has been proposed in this paper to handle complex chains of microservices and reduce the latency of user requests. A machine learning technique is used to predict the waiting time of different types of requests; the communication time among services distributed across different physical machines is estimated on that basis, and the obtained insights are applied to an algorithm that calculates service priorities dynamically and selects suitable service instances to minimize latency based on the shortest queue waiting time. Experiments were done for both interactive and non-interactive workloads to test the effectiveness of the algorithm. The approach proved to be very effective in reducing latency in the case of long service chains.
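As a rough illustration of the shortest-queue-waiting-time idea described in this abstract (not the authors' implementation), the Python sketch below routes a request to the candidate service instance with the lowest predicted waiting time. The simple queue-length-times-service-time predictor stands in for the paper's machine learning model, and all instance names and numbers are hypothetical.

```python
# Sketch: pick a service instance by shortest predicted queue waiting time.
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    name: str
    queue_length: int           # requests currently waiting at this instance
    avg_service_time_ms: float  # recent average processing time per request
    network_delay_ms: float     # estimated inter-machine communication delay

def predicted_wait_ms(inst: ServiceInstance) -> float:
    # Placeholder for the ML-based waiting-time prediction:
    # queue length x average service time, plus network delay.
    return inst.queue_length * inst.avg_service_time_ms + inst.network_delay_ms

def pick_instance(candidates: list[ServiceInstance]) -> ServiceInstance:
    # Select the instance that minimizes the predicted waiting time.
    return min(candidates, key=predicted_wait_ms)

if __name__ == "__main__":
    instances = [
        ServiceInstance("svc-a-1", queue_length=4, avg_service_time_ms=20.0, network_delay_ms=2.0),
        ServiceInstance("svc-a-2", queue_length=1, avg_service_time_ms=25.0, network_delay_ms=8.0),
    ]
    best = pick_instance(instances)
    print(f"route request to {best.name} (predicted wait {predicted_wait_ms(best):.1f} ms)")
```

In a real deployment the predictor would be retrained from observed queue and network measurements, and the same selection rule would be applied at each hop of the service chain.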
Potatoes are one of the world's most popular and economically important crops. For many uses in agriculture, breeding, and trading, accurate recognition of potato breeds is important. In recent years, deep learnin...
In a cloud environment, graphics processing units (GPUs) are the primary devices used for high-performance computing. To exploit flexible resource utilization, a key advantage of cloud computing, multiple users share GPUs, which serve as coprocessors of central processing units (CPUs) and are activated only if tasks demand GPU computation. In a container environment, where resources can be shared among multiple users, GPU utilization can be increased by minimizing idle time, because the tasks of many users run on a single machine. However, unlike CPUs and memory, GPUs cannot logically multiplex their resources. Moreover, GPU memory does not support over-utilization: when it runs out, tasks will fail. Therefore, it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users. In this paper, we propose a GPU task execution order management technique that controls GPU usage via time-based scheduling. The technique seeks to ensure equal GPU time among users in a container environment to prevent task failures. In the meantime, we use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and to determine the execution order based on GPU usage time. Because the order of GPU tasks cannot be arbitrarily adjusted from the outside once a task commences, a GPU task is indirectly paused by pausing its container. In addition, as the container pause/unpause status is based on information about the available GPU memory capacity, overuse of GPU memory can be prevented at the outset. As a result, the strategy can prevent task failure and, as shown experimentally, process GPU tasks in an appropriate order.
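A minimal sketch of the pause/unpause idea described in this abstract, not the authors' implementation: containers whose GPU tasks would exceed the currently free GPU memory stay paused and are resumed in arrival order once memory frees up. The nvidia-smi and docker CLI calls are assumptions about a typical NVIDIA/Docker setup, and the container names and memory requirements are hypothetical.

```python
# Sketch: defer GPU tasks by pausing their containers until enough GPU memory is free.
import subprocess
from collections import deque

def free_gpu_memory_mib() -> int:
    # Query free memory (MiB) on the first GPU via nvidia-smi.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.splitlines()[0].strip())

def pause_container(name: str) -> None:
    subprocess.run(["docker", "pause", name], check=True)

def unpause_container(name: str) -> None:
    subprocess.run(["docker", "unpause", name], check=True)

def schedule(waiting: deque) -> None:
    """waiting holds (container_name, required_mib) tuples in arrival order; the
    containers are assumed to already be paused when they enter the queue."""
    while waiting:
        name, required_mib = waiting[0]
        if free_gpu_memory_mib() >= required_mib:
            unpause_container(name)   # enough GPU memory: let this task proceed
            waiting.popleft()
        else:
            break                     # defer: keep this and later containers paused

if __name__ == "__main__":
    queue = deque([("user1-job", 4000), ("user2-job", 8000)])
    schedule(queue)
```

A per-user GPU time budget would sit on top of this loop in the paper's setting, deciding which user's container is next in the queue.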
The term 'Mobile Cloud Computing (MCC)', which combines mobile and cloud computing, has gained popularity in the IT sector and become a hot topic of conversation. MCC has completely changed how mobile users us...
Keratitis is an ocular disease, an inflammation of the cornea, and one of the challenges of ophthalmology due to its multifarious character and mild initial symptoms. In this paper, we present KeraNet Deep ...
The phenomenon of urban heat islands (UHI) presents a critical challenge for urban sustainability, elevating local temperatures, increasing energy demands, and impairing public health. Traditional methods for addre...