Mobile edge computing provides the opportunity for wireless users to exploit the power of cloud computing without a large communication delay. To serve data-intensive applications (e.g., video analytics, machine learning tasks) from the edge, we need, in addition to computation resources, storage resources for storing server code and data as well as network bandwidth for receiving user-provided data. Moreover, due to time-varying demands, the code and data placement needs to be adjusted over time, which raises concerns about system stability and operation cost. In this paper, we address these issues by proposing a two-time-scale framework that jointly optimizes service (code and data) placement and request scheduling, while considering storage, communication, computation, and budget constraints. First, by analyzing the hardness of various cases, we completely characterize the complexity of our problem. Next, we develop a polynomial-time service placement algorithm by formulating our problem as a set function optimization, which attains a constant-factor approximation under certain conditions. Furthermore, we develop a polynomial-time request scheduling algorithm by computing the maximum flow in a carefully constructed auxiliary graph, which satisfies hard resource constraints and is provably optimal in the special case where requests have homogeneous resource demands. Extensive synthetic and trace-driven simulations show that the proposed algorithms achieve 90% of the optimal performance.
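The request-scheduling idea above reduces to a maximum-flow computation on an auxiliary graph. The sketch below is a minimal illustration of that idea, not the paper's actual construction: the toy instance, node names, and capacities are invented, with requests linked only to the edge servers hosting the required service, and server-to-sink capacities standing in for computation budgets.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: push flow along shortest augmenting paths (BFS)."""
    flow = 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    # Make sure every edge has a reverse edge (capacity 0) in the residual graph.
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Reconstruct the path, find the bottleneck, update residual capacities.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy instance: 3 unit-demand requests, 2 edge servers with capacities 2 and 1.
graph = {
    "s":  {"r1": 1, "r2": 1, "r3": 1},
    "r1": {"e1": 1},
    "r2": {"e1": 1, "e2": 1},
    "r3": {"e2": 1},
    "e1": {"t": 2},
    "e2": {"t": 1},
    "t":  {},
}
print(max_flow(graph, "s", "t"))  # → 3
```

With server capacities 2 and 1, all three requests can be routed, so the maximum flow (number of served requests) is 3.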
Intracranial pressure (ICP) burden or pressure-time dose (PTD) is a valuable clinical indicator of pending intracranial hypertension, mostly based on threshold exceedance. Pulse frequency and waveform morphology (WFM) of the ICP signal contribute to PTD. The temporal resolution of the ICP signal has a great influence on the PTD calculation but has not been studied systematically yet. Hence, the effect of the temporal resolution of the ICP signal on the PTD calculation is investigated. We retrospectively analysed continuous 48 h ICP recordings with high temporal resolution obtained from 94 patients at the intensive care unit who underwent neurosurgery due to an intracranial haemorrhage and received an intracranial pressure probe (43 females, median age: 72 years, range: 23 to 88 years). The cumulative area under the curve above the threshold of 20 mmHg was compared for different temporal resolutions of the ICP signal (beat-to-beat, 1 s, 300 s, 1800 s, 3600 s). Events with prolonged ICP elevation were compared to those with few isolated threshold exceedances. PTD increased for lower temporal resolutions, independent of WFM and the frequency of threshold exceedance. PTD_beat-to-beat best reflected the impact of the frequency of threshold exceedance and WFM. Events that could be distinguished in PTD_beat-to-beat became magnified more than 7-fold in PTD_1s and more than 10^4 times in PTD_1h, indicating an overestimation of PTD. The PTD calculation should be standardised, and beat-to-beat PTD could serve as an easy-to-grasp indicator of the impact of the frequency and WFM of ICP elevations on ICP burden.
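The resolution effect can be reproduced with a toy computation. This is an illustrative sketch, not the study's analysis pipeline: the synthetic trace, the 1 Hz sampling, and the sample-and-hold style decimation are assumptions made for the example.

```python
THRESHOLD = 20.0  # mmHg

def ptd(signal, dt):
    """Pressure-time dose: cumulative area above THRESHOLD, in mmHg*s."""
    return sum(max(0.0, x - THRESHOLD) * dt for x in signal)

def decimate(signal, factor):
    """Lower the temporal resolution by keeping one sample per window."""
    return signal[::factor]

# Synthetic one-hour trace at 1 Hz: baseline 15 mmHg with a single 1 s spike.
trace = [15.0] * 3600
trace[0] = 25.0

fine = ptd(trace, dt=1)                       # 5 mmHg above threshold for 1 s
coarse = ptd(decimate(trace, 3600), dt=3600)  # spike extended over the hour
print(fine, coarse)  # → 5.0 18000.0
```

Because the 1 s spike happens to coincide with the retained sample, the hour-resolution dose is 3600 times the fine-grained dose, mirroring the kind of magnification reported above.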
Computing cohesive subgraphs is a central problem in graph theory. While many formulations of cohesive subgraphs lead to NP-hard problems, finding a densest subgraph can be done in polynomial time. As such, the densest subgraph model has emerged as the most popular notion of cohesiveness. Recently, the data mining community has started looking into the problem of computing k densest subgraphs in a given graph, rather than one. In this paper we consider a natural variant of the k densest subgraphs problem, where overlap between solution subgraphs is allowed without constraint. We show that the problem is fixed-parameter tractable with respect to k, and admits a PTAS for constant k. Both algorithms complement nicely the previously known O(n^k)-time algorithm for the problem. (c) 2022 Elsevier B.V. All rights reserved.
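As background for the single densest-subgraph building block, the classic greedy peeling procedure (Charikar's 1/2-approximation for maximum average degree) can be sketched as follows. This is a standard textbook routine, not the paper's k-subgraph algorithm, and the toy graph is invented.

```python
def densest_subgraph(adj):
    """Greedy peeling: 1/2-approximation to the densest subgraph.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    Returns (best_density, best_vertex_set), with density = edges / vertices.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    edges = sum(len(n) for n in adj.values()) // 2
    best_density, best_set = 0.0, set(adj)
    while adj:
        density = edges / len(adj)
        if density > best_density:
            best_density, best_set = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))  # peel a minimum-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        edges -= len(adj[v])
        del adj[v]
    return best_density, best_set

# A 4-clique with a pendant vertex: the clique (density 6/4 = 1.5) should win.
g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(densest_subgraph(g))  # → (1.5, {1, 2, 3, 4})
```

Peeling the pendant vertex 5 raises the density from 7/5 to 6/4, after which any further removal only lowers it, so the clique is returned.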
Although indoor localization has been studied for over a decade, it is still challenging to enable many IoT applications, such as activity tracking and monitoring in smart homes and customer navigation and trajectory mining in smart shopping malls, which typically require meter-level localization accuracy in highly dynamic and large-scale indoor environments. Therefore, this article aims at designing and implementing an adaptive and scalable indoor tracking system in a cost-effective way. First, we propose a zero site-survey overhead (ZSSO) algorithm to enhance the system's scalability. It integrates step information and map constraints to infer the user's position based on a particle filter, and supports auto-labeling of scanned Wi-Fi signals for constructing the fingerprint database without extra site-survey overhead. Further, we propose an iterative-weight-update (IWU) strategy for ZSSO to enhance system robustness and make it more adaptive to dynamic changes in the environment. Specifically, a two-step clustering mechanism is proposed to delete outliers in the fingerprint database and alleviate the mismatch between the auto-tagged coordinates and the corresponding signal features. Then, an iterative fingerprint update mechanism is designed to continuously evaluate the Wi-Fi fingerprint localization results during online tracking, which further refines the fingerprint database. Finally, we implement the indoor tracking system in real-world environments and conduct a comprehensive performance evaluation. The field testing results demonstrate the scalability and effectiveness of the proposed algorithms.
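The particle-filter core of such a tracker can be conveyed with a heavily simplified sketch. Everything here is hypothetical: the noise parameters, the rectangular blocked region standing in for map constraints, and the wifi_fix position standing in for a fingerprint-based estimate are all invented for illustration and are not part of ZSSO itself.

```python
import math
import random

random.seed(0)

def pf_step(particles, step_len, heading, blocked, measured_pos, sigma=2.0):
    """One particle-filter update: propagate by step length and heading,
    drop particles entering blocked map regions, re-weight by a (hypothetical)
    Wi-Fi position estimate, then resample."""
    moved = []
    for x, y in particles:
        h = heading + random.gauss(0, 0.1)          # heading noise
        s = step_len * (1 + random.gauss(0, 0.05))  # step-length noise
        nx, ny = x + s * math.cos(h), y + s * math.sin(h)
        # Map constraint: reject particles landing in a blocked rectangle.
        if not any(x0 <= nx <= x1 and y0 <= ny <= y1 for x0, x1, y0, y1 in blocked):
            moved.append((nx, ny))
    if not moved:  # every particle hit a wall; keep the previous cloud
        moved = particles
    mx, my = measured_pos
    weights = [math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2 * sigma ** 2))
               for x, y in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(moved, weights=weights, k=len(particles))

# Toy run: 200 particles start near (0, 0); the user walks east 0.8 m per step.
particles = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
blocked = [(5.0, 6.0, -10.0, 10.0)]  # hypothetical wall region
for step in range(1, 5):
    wifi_fix = (0.8 * step, 0.0)  # hypothetical Wi-Fi fingerprint estimate
    particles = pf_step(particles, 0.8, 0.0, blocked, wifi_fix)
est = (sum(p[0] for p in particles) / len(particles),
       sum(p[1] for p in particles) / len(particles))
print(est)  # the cloud mean should track the walker near (3.2, 0)
```

The fused estimate is simply the particle mean; in a full system the resampling weights would come from a fingerprint-matching likelihood rather than a fixed Gaussian.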
The group testing idea is an efficient approach to detect the prevalence of an infection in test samples taken from a group of individuals. It is based on pooling the test samples and performing tests on the mixed samples, which can reduce the number of tests required to identify infections. Classical group testing works consider static settings where the infection statuses of the individuals do not change throughout the testing process. In our paper, we study a dynamic infection spread model, inspired by the discrete-time SIR model, in which infections are spread via non-isolated infected individuals; while the infection keeps spreading over time, limited-capacity testing is performed at each time instance. In contrast to the classical, static group testing problem, the objective in our setup is not to find the minimum number of tests required to identify the infection status of every individual in the population, but to control the infection spread by detecting and isolating infections over time using the given, limited number of tests. To analyze the performance of the proposed algorithms, we focus on the average-case analysis of the number of individuals that remain non-infected throughout the process of controlling the infection. We propose two dynamic algorithms, both of which use the given limited number of tests to identify and isolate infections over time while the infection spreads: the first is a dynamic randomized individual testing algorithm, and the second employs a group testing approach similar to Dorfman's original work. By considering weakened versions of our algorithms, we obtain lower bounds on their performance. Finally, we implement our algorithms and run simulations to gather numerical results, and compare our algorithms and theoretical approximation results under different sets of system parameters.
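A minimal simulation conveys the setup. All parameters (population size, infection probability, test budget, pool size) are invented, and the Dorfman-style pooling below is a generic stand-in for the paper's second algorithm, not its exact procedure.

```python
import random

random.seed(1)

def dorfman_round(candidates, infected, budget, pool_size):
    """Spend up to `budget` tests in one time step: pool the candidates; a
    positive pool is then tested individually (also counted against the
    budget). Returns the set of identified infections to be isolated."""
    isolated, tests = set(), 0
    pools = [candidates[i:i + pool_size]
             for i in range(0, len(candidates), pool_size)]
    for pool in pools:
        if tests >= budget:
            break
        tests += 1  # one pooled test
        if any(p in infected for p in pool):
            for p in pool:
                if tests >= budget:
                    break
                tests += 1  # follow-up individual test
                if p in infected:
                    isolated.add(p)
    return isolated

# Discrete-time spread: each free (non-isolated) infected individual infects
# one random individual per step with probability beta.
n, beta, budget, pool_size = 100, 0.3, 20, 5
people = list(range(n))
infected, quarantined = {0, 1}, set()
for t in range(10):
    for i in list(infected - quarantined):
        if random.random() < beta:
            infected.add(random.choice(people))
    random.shuffle(people)
    quarantined |= dorfman_round(people, infected - quarantined, budget, pool_size)
print(len(infected), len(quarantined))  # infections vs. isolated individuals
```

The quantity of interest in the paper, the number of individuals never infected, is n minus the final infection count; varying budget and pool_size shows how the testing capacity trades off against spread.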
ISBN (print): 9781665420655
To address the high energy consumption caused by premature convergence of comprehensive energy scheduling algorithms, a comprehensive energy optimal coordination planning algorithm that accounts for new-energy grid connection is proposed. Power output models of wind power stations, photovoltaic power stations, cascade hydropower stations, and thermal power plants are established to determine their power characteristics. Considering new-energy grid connection, a joint distribution function is constructed, the correlation coefficient between wind and photovoltaic power generation is calculated, and a comprehensive power generation mode is established. By setting the objective function and constraints, a comprehensive energy joint control model is constructed, and the optimal coordination planning algorithm is designed to solve the objective function so as to meet users' load demand. Example analysis shows that, with roughly the same solution time, the proposed algorithm consumes less thermal-plant energy and less cascade-hydropower water than coordinated planning algorithms based on multi-objective difference and dynamic programming, and can realize optimal operation of the whole power system.
ISBN (print): 9798400706042
Theory of Computing (ToC) is an important aspect of nearly every undergraduate CS curriculum, as it concerns what computation fundamentally means. However, there has been little research into ToC pedagogy, both within the classroom and in how it fits within its institutional context. In this working group, we propose to create a survey of current ToC pedagogy. Our goals are to create a standard for teaching ToC, find trends, determine under-researched areas, and build a community among ToC educators.
To handle the exponential growth of data-intensive network edge services and to automatically solve new challenges in routing management, machine learning is steadily being incorporated into software-defined networking solutions. In this line, the article presents the design of a piecewise-stationary Bayesian multi-armed bandit approach for online, optimal end-to-end dynamic routing of data flows in the context of programmable networking systems. This learning-based approach has been analyzed with simulated and emulated data, and the results show the proposal's ability to sequentially and proactively self-discover the minimal-delay end-to-end routing path among a considerable number of alternatives, even when facing abrupt changes in transmission-delay distributions due to both variable congestion levels on path network devices and dynamic delays on transmission links.
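A stripped-down stationary Bernoulli Thompson-sampling sketch illustrates the bandit view of path selection. The article's method is piecewise-stationary and more elaborate; the candidate paths and their delay-deadline success probabilities here are invented for the example.

```python
import random

random.seed(42)

# Hypothetical per-path probability that end-to-end delay meets the deadline.
paths = {"A-B-D": 0.60, "A-C-D": 0.85, "A-B-C-D": 0.40}

# Beta(1, 1) prior per path; each round, Thompson sampling routes the flow
# over the path with the highest sampled success probability, then updates
# the chosen path's posterior from the observed outcome.
posterior = {p: [1, 1] for p in paths}
chosen = {p: 0 for p in paths}
for _ in range(2000):
    path = max(paths, key=lambda p: random.betavariate(*posterior[p]))
    chosen[path] += 1
    if random.random() < paths[path]:  # delay met the deadline
        posterior[path][0] += 1
    else:                              # deadline missed
        posterior[path][1] += 1
print(max(chosen, key=chosen.get))  # the most reliable path should dominate
```

Early rounds explore all three paths; as the posteriors sharpen, nearly all traffic concentrates on the lowest-delay path, which is the sequential self-discovery behaviour described above.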
The group testing idea is an efficient infection identification approach based on pooling the test samples of a group of individuals, which achieves identification with fewer tests than individually testing the population. In our work, we propose a novel infection spread model based on a random connection graph that represents connections between n individuals. Infection spreads via connections between individuals, which results in a probabilistic cluster formation structure as well as non-i.i.d. (correlated) infection statuses for individuals. We propose a class of two-step sampled group testing algorithms that exploit the known probabilistic infection spread model, and we investigate the metrics associated with these algorithms. To demonstrate our results, for analytically tractable exponentially split cluster formation trees, we calculate the required number of tests and the expected number of false classifications in terms of the system parameters, and identify the trade-off between them. For such exponentially split cluster formation trees, for zero-error construction, we prove that the required number of tests is O(log^2 n). Thus, for such cluster formation trees, our algorithm outperforms any zero-error non-adaptive group test, the binary splitting algorithm, and Hwang's generalized binary splitting algorithm. Our results imply that, by exploiting probabilistic information on the connections of individuals, group testing can reduce the number of required tests significantly even when the infection rate is high, contrasting the prevalent belief that group testing is useful only when the infection rate is low.
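For context on the binary-splitting baselines mentioned above, the basic adaptive binary splitting routine, which locates one infected individual in a group known to contain at least one using at most ceil(log2 n) pooled tests, can be sketched as follows; the population and infected set are invented for illustration.

```python
def binary_split(population, is_infected):
    """Adaptive binary splitting: repeatedly test the first half of the
    current group; recurse into the half known to contain an infection.
    Uses at most ceil(log2 n) pooled tests to isolate one infected member."""
    group, tests = list(population), 0
    while len(group) > 1:
        half = group[:len(group) // 2]
        tests += 1  # one pooled test on the first half
        group = half if any(is_infected(x) for x in half) else group[len(half):]
    return group[0], tests

infected = {37}
found, tests = binary_split(range(100), lambda x: x in infected)
print(found, tests)  # → 37 6
```

Here 6 tests suffice for n = 100 (the worst case is ceil(log2 100) = 7), whereas the two-step sampled algorithms above exploit the cluster structure to do better than splitting-based baselines overall.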