Because of their development and success, MANETs have become preferred alternatives in many fields of study and application. The IEEE 802.11 standard is the basis of most wireless technologies. To handle collisions in wireless networks, the IEEE 802.11 Distributed Coordination Function (DCF) at the MAC layer employs the Binary Exponential Backoff (BEB) algorithm, which helps reduce the probability of collision but does so at the expense of several network performance metrics, such as medium access delay, bandwidth management, power consumption, and fairness. Deep Reinforcement Learning (DRL) is a learning technique that allows an agent to interact with its environment in order to accomplish a goal. In the present paper, we propose using Q-learning (QL), a reinforcement learning approach, to improve the Binary Exponential Backoff (BEB) scheme in MANETs. The resulting mechanism, Int-BEB, aims for better contention window (CW) selection: it strives to reduce collisions as much as possible and to give nodes that experience collisions more chances to access the channel, taking their residual energy into consideration. In-depth simulations are performed to evaluate the proposed mechanism. The performance of Int-BEB is assessed in diverse networks against several criteria, such as successful packet transmission, packet loss, and energy consumption.
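A minimal sketch of how such a Q-learning-driven contention-window choice could look is given below. The state, action set, reward shaping, and all identifiers (choose_cw, reward_fn, the CW list) are illustrative assumptions; the abstract does not specify Int-BEB's exact formulation.

# Minimal tabular Q-learning sketch for contention-window (CW) selection.
# State/action/reward design is illustrative only, not the paper's exact scheme.
import random

CW_CHOICES = [15, 31, 63, 127, 255, 511, 1023]   # candidate CW values (actions)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1            # learning rate, discount, exploration

# State here is a coarse pair: (recent collision level, residual-energy level).
q_table = {}

def q(state, action):
    return q_table.get((state, action), 0.0)

def choose_cw(state):
    """Epsilon-greedy selection of the next contention window."""
    if random.random() < EPSILON:
        return random.choice(CW_CHOICES)
    return max(CW_CHOICES, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(q(next_state, a) for a in CW_CHOICES)
    q_table[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))

def reward_fn(success, residual_energy):
    """Assumed reward: favour successful transmissions and fold residual energy
    into the collision penalty; the exact weighting in Int-BEB is not given."""
    if success:
        return 1.0
    return -1.0 + 0.5 * (1.0 - residual_energy)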
ISBN (Print): 9798350337488
With the exponential growth of biomedical knowledge in unstructured text repositories such as PubMed, it is imperative to establish a knowledge-graph-style, efficiently searchable and targeted database that can support the information-retrieval needs of researchers and clinicians. To mine knowledge from graph databases, most previous methods treat a triple in a graph (see Fig. 1) as the basic processing unit and embed the triple elements (i.e., drugs/chemicals, proteins/genes, and their interaction) as separate embedding matrices, which cannot capture the semantic correlation among triple elements. To remedy the loss of semantic correlation caused by disjoint embeddings, we propose a novel approach that learns triple embeddings by combining entities and interactions into a unified representation. Furthermore, traditional methods usually learn triple embeddings from scratch, so they cannot take advantage of the rich domain knowledge captured by pre-trained models; this is also a significant reason why they cannot distinguish the differences implied by the same entity in multi-interaction triples. In this paper, we propose a novel fine-tuning-based approach that learns better triple embeddings by creating weakly supervised signals from pre-trained knowledge graph embeddings. The method automatically samples triples from knowledge graphs and estimates their pairwise similarity using pre-trained embedding models. The triples are then fed pairwise into a Siamese-like neural architecture, where the triple representation is fine-tuned under the guidance of the triple similarity scores. Finally, we demonstrate that triple embeddings learned with our method can be readily applied to several downstream applications (e.g., triple classification and triple clustering). We evaluated the proposed method on two open-source drug-protein knowledge graphs constructed from PubMed abstracts, as provided by BioCreative. Our method achieves consistent improvement in both t
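The following PyTorch sketch illustrates the general idea of Siamese-style fine-tuning of unified triple embeddings against pairwise similarity targets taken from a pre-trained knowledge-graph embedding model. The encoder architecture, dimensions, and the MSE objective are assumptions, not necessarily the paper's actual design.

# Siamese-style fine-tuning of unified triple embeddings (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleEncoder(nn.Module):
    """Maps a (head, interaction, tail) id triple to one unified vector."""
    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, h, r, t):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return F.normalize(self.proj(x), dim=-1)

def siamese_loss(encoder, triple_a, triple_b, target_sim):
    """One shared encoder embeds both triples; the predicted cosine similarity
    is regressed onto the weak similarity signal from the pre-trained model."""
    za = encoder(*triple_a)
    zb = encoder(*triple_b)
    pred_sim = (za * zb).sum(dim=-1)
    return F.mse_loss(pred_sim, target_sim)

# Example step with toy ids; target_sim would come from pre-trained embeddings.
enc = TripleEncoder(n_entities=1000, n_relations=10)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
h, r, t = torch.tensor([1]), torch.tensor([2]), torch.tensor([3])
h2, r2, t2 = torch.tensor([4]), torch.tensor([2]), torch.tensor([5])
loss = siamese_loss(enc, (h, r, t), (h2, r2, t2), torch.tensor([0.8]))
opt.zero_grad(); loss.backward(); opt.step()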
Traditional approaches to simulating video analytics applications suffer from the lack of real data generated by the machine learning techniques they employ. Machine learning methods require huge amounts of data, which causes network congestion and high latency in cloud-based networks. This paper proposes a novel method for performance measurement and simulation of video analytics applications to evaluate solutions addressing the cloud congestion problem. The proposed simulation is achieved by building a model prototype, called the Video Analytic Data Reduction Model (VADRM), that divides video analytics jobs into smaller tasks with fewer processing requirements so that they can run on edge networking. Real data generated from the VADRM prototype are characterized and fitted by curve fitting to find distribution models for generating a larger volume of artificial data for resource-management simulation. Distribution models based on the real data of the CNN-based VADRM prototype are then used to build a queueing model and a comprehensive simulation of real-time video analytics applications.
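A hedged sketch of the distribution-fitting and queueing step described here might look as follows; the candidate distributions, the Kolmogorov-Smirnov-based selection, and all parameters are placeholders rather than VADRM's actual procedure.

# Fit candidate distributions to measured task processing times, then drive a
# simple single-server FIFO queue with the best fit (illustrative only).
import numpy as np
from scipy import stats

# Placeholder for processing times measured from a VADRM-like prototype (seconds).
measured = np.random.gamma(shape=2.0, scale=0.05, size=500)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "expon": stats.expon}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(measured)
    ks_stat, _ = stats.kstest(measured, name, args=params)
    fits[name] = (ks_stat, params)

best_name, (best_ks, best_params) = min(fits.items(), key=lambda kv: kv[1][0])
print(f"best fit: {best_name}, KS statistic {best_ks:.3f}")

# Draw synthetic service times from the fitted model; Poisson arrivals assumed.
rng = np.random.default_rng(0)
arrivals = np.cumsum(rng.exponential(0.1, size=1000))      # mean inter-arrival 0.1 s
services = candidates[best_name].rvs(*best_params, size=1000)
finish, waits = 0.0, []
for a, s in zip(arrivals, services):
    start = max(a, finish)
    waits.append(start - a)
    finish = start + s
print(f"mean waiting time: {np.mean(waits):.3f} s")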
The perception module of a self-driving vehicle relies on a multi-sensor system to understand its environment. Recent advancements in deep learning have led to the rapid development of approaches that integrate multi-s...
As a cutting-edge technology of low-altitude Artificial Intelligence of Things (AIoT), autonomous aerial vehicle object detection significantly enhances the surveillance service capabilities of low-altitude AIoT. How...
ChatGPT, an AI-based chatbot, offers coherent and useful replies based on analysis of large volumes of data. In this article, leading academics, scientists, distinguished researchers, and engineers discuss the transforma...
ISBN (Digital): 9798350317152
ISBN (Print): 9798350317169
Heuristic algorithms have been developed to find approximate solutions for high-utility itemset mining (HUIM) problems, compensating for the performance bottlenecks of exact algorithms. However, heuristic algorithms still suffer from long runtimes and insufficient mining quality, especially on large transaction datasets with thousands to tens of thousands of items and up to millions of transactions. To address these problems, this paper proposes a novel GPU-based efficient parallel heuristic algorithm for HUIM (PHA-HUIM). The iterative process of PHA-HUIM consists of three main steps: the search strategy, fitness evaluation, and ring-topology communication. The search strategy and ring-topology communication are designed to run in constant time on the GPU. The parallelism of the fitness evaluation helps to substantially accelerate the algorithm. To improve mining quality, a multi-start strategy with an unbalanced allocation strategy is employed in the search process. Ring-topology communication is adopted to maintain population diversity. A load-balancing strategy is introduced to reduce thread divergence and improve parallel efficiency. Experimental results on nine large datasets show that PHA-HUIM outperforms state-of-the-art HUIM algorithms in terms of speedup, runtime, and mining quality.
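A simplified, CPU-side NumPy sketch of the multi-start search with ring-topology communication is shown below; the 0/1 itemset encoding, the single-flip neighbourhood move, and the toy utility function are placeholders, and the real PHA-HUIM runs these steps as parallel GPU kernels.

# Multi-start local search with ring-topology communication (toy CPU sketch).
import numpy as np

rng = np.random.default_rng(42)
N_ITEMS, N_STARTS, N_ITERS = 20, 8, 50

def utility(candidate):
    """Placeholder fitness standing in for the high-utility itemset evaluation,
    which in PHA-HUIM is the GPU-parallelised step."""
    weights = np.arange(1, N_ITEMS + 1)
    return float(candidate @ weights) - 5.0 * candidate.sum()  # toy utility/cost trade-off

# Multi-start: several independent 0/1 itemset candidates searched in parallel.
population = rng.integers(0, 2, size=(N_STARTS, N_ITEMS))

for it in range(N_ITERS):
    # Local search move: flip one random item per start, keep improvements.
    trial = population.copy()
    flips = rng.integers(0, N_ITEMS, size=N_STARTS)
    trial[np.arange(N_STARTS), flips] ^= 1
    improved = np.array([utility(t) > utility(p) for t, p in zip(trial, population)])
    population[improved] = trial[improved]

    # Ring-topology communication: each start may adopt its left neighbour's
    # solution when that neighbour is better, preserving diversity elsewhere.
    fitness = np.array([utility(p) for p in population])
    left = np.roll(np.arange(N_STARTS), 1)
    adopt = fitness[left] > fitness
    population[adopt] = population[left][adopt]

best = population[np.argmax([utility(p) for p in population])]
print("best itemset (toy):", np.nonzero(best)[0], "utility:", utility(best))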
We consider single-phase flow with solute transport where ions in the fluid can precipitate and form a mineral, and where the mineral can dissolve and release solute into the fluid. Such a setting includes an evolving...
ISBN (Digital): 9798350317152
ISBN (Print): 9798350317169
Instant delivery has become a fundamental service in people's daily lives. Unlike traditional express services, instant delivery imposes a strict shipping time constraint once an order is placed. However, the labor shortage makes it challenging to realize efficient instant delivery. To tackle the problem, researchers have studied introducing vehicles (i.e., taxis) or Unmanned Aerial Vehicles (UAVs, or drones) into instant delivery tasks. Unfortunately, the delivery detours of taxis and the limited battery of UAVs make it hard to meet the rapidly increasing instant delivery demands. Under these circumstances, this paper proposes an air-ground cooperative instant delivery paradigm that maximizes delivery performance while minimizing the negative effects on taxi passengers. Specifically, a data-driven, delivery-potential-demands-aware cooperative strategy is designed to improve the overall delivery performance of both UAVs and taxis as well as the taxi passengers' experience. Experimental results show that the proposed method increases the number of deliveries by 30.1% and 114.5% compared to taxi-based and UAV-based instant delivery, respectively, and shortens the delivery time by 35.7% compared to taxi-based instant delivery.
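As a toy illustration only, the dispatch decision implied by the constraints above (UAV battery range versus taxi passenger detour) could be sketched as follows; the thresholds, fields, and scoring are assumptions, not the paper's data-driven potential-demands-aware strategy.

# Toy air-ground dispatch rule: UAV if the round trip fits its range, otherwise
# a taxi whose passenger detour stays small, otherwise defer the order.
from dataclasses import dataclass

@dataclass
class Order:
    distance_km: float       # pickup-to-dropoff distance
    deadline_min: float      # strict shipping time constraint

@dataclass
class Uav:
    battery_range_km: float  # remaining flight range

@dataclass
class Taxi:
    detour_km: float         # extra distance imposed on the current passenger trip

def dispatch(order: Order, uav: Uav, taxi: Taxi, max_detour_km: float = 2.0) -> str:
    """Prefer the UAV if the round trip fits its remaining range; fall back to a
    taxi only if the passenger detour is acceptable; otherwise defer the order."""
    if 2 * order.distance_km <= uav.battery_range_km:
        return "uav"
    if taxi.detour_km <= max_detour_km:
        return "taxi"
    return "defer"

print(dispatch(Order(3.0, 30), Uav(battery_range_km=5.0), Taxi(detour_km=1.2)))  # -> "taxi"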
For better protection of distributed systems, two well-known techniques are checkpointing and rollback recovery. While failure protection is often considered a separate issue, it is crucial for establishing better se...