Given a set S of n static points and a mobile point p in R², we study the variations of the smallest circle that encloses S ∪ {p} when p moves along a straight line. In this work, a complete characterization of the locu...
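Although the abstract is cut off, the computation it describes is concrete: as p slides along a line, the minimum enclosing circle of S ∪ {p} varies. A minimal sketch under that reading is to sample p along a parametrized line and recompute the circle with a standard randomized incremental (Welzl-style) routine; the point set S, the line, and all function names below are illustrative assumptions, not taken from the paper.

```python
import math, random

def _circle_two(a, b):
    # smallest circle with a and b on its boundary (diametral circle)
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

def _circle_three(a, b, c):
    # circumcircle of three non-collinear points
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # (near-)collinear: no proper circumcircle
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def _inside(c, p, eps=1e-9):
    return c is not None and math.dist((c[0], c[1]), p) <= c[2] + eps

def min_enclosing_circle(points):
    # randomized incremental construction, expected O(n)
    pts = list(points)
    random.shuffle(pts)
    c = None
    for i, p in enumerate(pts):
        if _inside(c, p):
            continue
        c = (p[0], p[1], 0.0)
        for j in range(i):
            if _inside(c, pts[j]):
                continue
            c = _circle_two(p, pts[j])
            for k in range(j):
                if not _inside(c, pts[k]):
                    c = _circle_three(p, pts[j], pts[k]) or c
    return c

# Trace the radius as p moves along the line p(t) = a + t*d
S = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]   # illustrative point set
a, d = (-5.0, 1.0), (1.0, 0.0)             # illustrative line
for t in range(11):
    p = (a[0] + t * d[0], a[1] + t * d[1])
    _, _, r = min_enclosing_circle(S + [p])
    print(f"t={t:2d}  p={p}  radius={r:.3f}")
```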
Given p ∈ N, a p-distance coloring is a coloring f : V → {1, 2, …, n} of the vertices of G such that f(u) ≠ f(v) for all pairs of vertices u and v in G where d(u, v), the distance between u and v...
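The abstract is truncated before the condition on d(u, v) is stated, but under the standard reading of a p-distance coloring (vertices at graph distance at most p must receive distinct colors), a greedy assignment is straightforward to sketch. The adjacency-dict representation, the helper name, and the path example below are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

def distance_p_coloring(adj, p):
    """Greedy coloring in which vertices at graph distance <= p receive
    distinct colors. `adj` maps each vertex to its neighbours. A sketch
    of the definition, not the algorithm from the paper."""
    color = {}
    for v in adj:
        # BFS out to depth p, collecting colors already used nearby
        dist, used, q = {v: 0}, set(), deque([v])
        while q:
            u = q.popleft()
            if u in color:
                used.add(color[u])
            if dist[u] < p:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        q.append(w)
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

# Example: a path on 6 vertices; with p = 2, three colors suffice
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(distance_p_coloring(path, p=2))  # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2}
```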
An often overlooked metric in millimeter-wave communication is the so-called stability of assigned links. Links can fail due to obstacles (both static and dynamic) and user mobility. Handling static obstacles is easy;...
Recently, reconfigurable intelligent surfaces (RISs) have been introduced in millimeter-wave (mmWave) device-to-device (D2D) communication scenarios to provide seamless connection and high data rate to a pair of proxi...
There are several schemes for checkpointing and rollback recovery. In this paper, we analyze some such schemes under a stochastic model. We have found expressions for average cost of checkpointing, rollback recovery, ...
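The truncated abstract does not reveal the paper's stochastic model, but the flavor of such cost analyses can be illustrated with a classical result: Young's first-order approximation, which balances checkpoint overhead against expected lost work under Poisson failures. The sketch below is that textbook baseline, not the scheme analyzed in the paper.

```python
import math

def young_optimal_interval(checkpoint_cost_s, mtbf_s):
    """Young's (1974) first-order approximation of the checkpoint interval
    that minimizes expected overhead (checkpoint cost plus lost work),
    assuming failures arrive as a Poisson process with the given mean time
    between failures. Illustrative baseline, not the paper's model."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Example: 30 s to write a checkpoint, one failure per day on average
print(f"{young_optimal_interval(30.0, 24 * 3600):.0f} s")  # ~2277 s
```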
The channel assignment problem with separation is formulated as a vertex coloring problem on a graph G = (V, E) where each vertex represents a base station and two vertices are connected by an edge if their corresponding base stations interfere with each other. An L(δ1, δ2, …, δt) coloring of G is a mapping f : V → {0, 1, …, λ} such that |f(u) − f(v)| ≥ δi if d(u, v) = i, where d(u, v) denotes the distance between vertices u and v in G and 1 ≤ i ≤ t. Here λ, the largest color assigned to a vertex of G, is known as the span. The same color can be reused at two vertices u and v if d(u, v) ≥ t + 1, where t + 1 is the reuse distance. The objective is to minimize λ over all such coloring functions f. Here (δ1, δ2, …, δt) is called the separation vector, where δ1, δ2, …, δt are positive integers with δ1 ≥ δ2 ≥ … ≥ δt. Let λ* be the minimum span such that there exists an L(1, 1, …, 1) coloring of G. We denote the separation vector (1, 1, …, 1) by (1^t). We deal with the problem of finding the maximum value of δ1 such that there exists an L(δ1, 1^{t-1}) coloring with span equal to λ*. So far, bounds on δ1 have been obtained for L(δ1, 1^{t-1}) colorings with span λ* of the square and triangular grids. Shashanka et al. [18] posed the problem as open for the honeycomb grid. We give lower and upper bounds on δ1 for L(δ1, 1^{t-1}) colorings with span λ* of the honeycomb grid. The bounds are asymptotically tight. We also present color assignment algorithms that achieve the lower bound.
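Since the L(δ1, …, δt) condition only constrains pairs within graph distance t, the validity of any proposed assignment can be checked mechanically by a bounded-depth BFS from each vertex. The sketch below verifies the definition on an adjacency-list graph; it is not the paper's honeycomb-grid color assignment algorithm, and all names are illustrative.

```python
from collections import deque

def is_valid_L_coloring(adj, f, deltas):
    """Check the L(d1, ..., dt) condition: |f(u) - f(v)| >= deltas[i-1]
    whenever the graph distance d(u, v) = i with 1 <= i <= t. This
    verifies the definition only; it is not an assignment algorithm."""
    t = len(deltas)
    for u in adj:
        # BFS from u out to depth t to get all relevant distances
        dist, q = {u: 0}, deque([u])
        while q:
            x = q.popleft()
            if dist[x] < t:
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        q.append(y)
        for v, i in dist.items():
            if i >= 1 and abs(f[u] - f[v]) < deltas[i - 1]:
                return False
    return True

# Example: L(2,1) coloring of the path a-b-c with span 4
path3 = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
f = {"a": 0, "b": 2, "c": 4}
print(is_valid_L_coloring(path3, f, deltas=(2, 1)), max(f.values()))  # True 4
```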
Synthesis of quaternary quantum circuits involves basic quaternary gates and logic operations in the quaternary quantum domain. In this paper, we propose new projection operations and quaternary logic gates for synthe...
Multi-Access Edge Computing (MEC) has emerged as a promising new paradigm allowing low-latency access to services deployed on edge servers to avert the network latencies often encountered in accessing cloud services. A key component of the MEC environment is an auto-scaling policy, which decides the overall management and scaling of container instances corresponding to individual services deployed on MEC servers to cater to traffic fluctuations. In this work, we propose a Safe Reinforcement Learning (RL)-based auto-scaling policy agent that can efficiently adapt to traffic variations to ensure adherence to service-specific latency requirements. We model the MEC environment as a Markov Decision Process (MDP). We demonstrate how latency requirements can be formally expressed in Linear Temporal Logic (LTL). The LTL specification acts as a guide for the policy agent, which automatically learns auto-scaling decisions that maximize the probability of satisfying the LTL formula. We introduce a quantitative reward mechanism based on the LTL formula to tailor service-specific latency requirements. We prove that our reward mechanism ensures convergence of standard Safe-RL approaches. We present experimental results from practical scenarios on a test-bed with real-world benchmark applications to show the effectiveness of our approach in comparison with other state-of-the-art methods in the literature. Furthermore, we perform extensive simulated experiments to demonstrate the effectiveness of our approach in large-scale scenarios.
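To make the reward idea concrete: a safety specification such as G(latency ≤ SLO) can be compiled into a per-step quantitative reward that stays positive while the formula holds and grows increasingly negative with the size of a violation. The toy environment, constants, and names below are illustrative assumptions; the paper's exact MDP, LTL translation, and Safe-RL algorithm are not reproduced here.

```python
import random

def ltl_safety_reward(latency_ms, slo_ms=100.0, penalty=-10.0):
    """Per-step quantitative reward for the safety formula G(latency <= slo):
    +1 while the specification holds, and a penalty that grows with the
    magnitude of the violation otherwise. Constants are illustrative."""
    if latency_ms <= slo_ms:
        return 1.0
    return penalty * (latency_ms / slo_ms - 1.0)

class AutoScaleEnv:
    """Toy MEC auto-scaling MDP (illustrative, not the paper's test-bed).
    State: (containers, request_rate); actions: -1 / 0 / +1 containers."""
    def __init__(self, slo_ms=100.0, capacity_per_container=50.0):
        self.slo, self.cap = slo_ms, capacity_per_container
        self.containers, self.rate = 1, 40.0

    def step(self, action):
        self.containers = max(1, self.containers + action)
        self.rate = max(1.0, self.rate + random.gauss(0.0, 5.0))  # traffic drift
        util = min(self.rate / (self.containers * self.cap), 0.99)
        latency = 20.0 / (1.0 - util)  # M/M/1-style latency blow-up near saturation
        reward = ltl_safety_reward(latency, self.slo) - 0.1 * self.containers
        return (self.containers, self.rate), reward

env = AutoScaleEnv()
state, r = env.step(+1)  # scale up by one container and observe the reward
```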