ISBN (print): 9781457720529
This paper exploits sink mobility to prolong the network lifetime in wireless sensor networks (WSNs) where the information delay caused by moving the sink must be bounded. We build a unified framework for analyzing this joint sink-mobility and routing problem. We offer a general mathematical model that captures diverse issues such as sink mobility, routing, and delay. We discuss the induced subproblems and present efficient solutions for them. We then generalize these solutions and propose a polynomial-time optimal algorithm for the original problem. In simulations, we show the benefits of involving a mobile sink, as well as the impact of the delay bound on the network lifetime.
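A hedged sketch of the kind of linear program such a joint mobility/routing framework typically rests on (the notation below is an illustrative assumption and need not match the paper's exact model): let $t_k$ be the sink's sojourn time at candidate location $k$, $f_{ij}^{k}$ the amount of data node $i$ forwards to node $j$ (or to the sink) while the sink is at $k$, $r_i$ node $i$'s data-generation rate, $e_{tx}, e_{rx}$ the per-unit transmission/reception energy costs, and $E_i$ node $i$'s initial energy.

\begin{align*}
\max\;\; & T = \sum_{k} t_k \\
\text{s.t.}\;\;
& \sum_{j} f_{ij}^{k} - \sum_{j} f_{ji}^{k} = r_i\, t_k && \forall i,\ \forall k \quad \text{(flow conservation)}\\
& \sum_{k} \Big( e_{tx}\sum_{j} f_{ij}^{k} + e_{rx}\sum_{j} f_{ji}^{k} \Big) \le E_i && \forall i \quad \text{(energy budget)}\\
& t_k \ge 0,\quad f_{ij}^{k} \ge 0,
\end{align*}

plus constraints bounding the information delay introduced while the sink travels between sojourn locations; maximizing $T$ then maximizes the network lifetime.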
This paper presents a fault-tolerant middleware for private storage services based on a client-server model. A client-side API splits files into redundant chunks, which are encoded/decoded, anonymized, and distributed to different storage locations in parallel using a multi-stream pipeline. A chunk placement engine on the server-side API receives/delivers chunks from/to the client-side API. In this model, the user preserves her files in-house and achieves data availability anytime from anywhere, even in site-failure scenarios, while the organization preserves control of its storage resources. The experimental evaluation reveals that the middleware can compensate for the overhead produced on the client side by taking advantage of the multiple cores commonly found in users' devices. Users observe that the middleware delivers performance similar to that of the private and public file-hosting systems tested in the evaluation, while organizations observe a significant reduction of work on their servers. In site-disaster scenarios, the middleware yields even better performance than the fault-tolerant online distributed solutions tested in the evaluation.
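A minimal sketch of client-side chunking and parallel dispersal, assuming a simple replication scheme and hypothetical helpers (split_into_chunks, upload_chunk, the endpoints) that are illustrative only and not the middleware's actual API; the real system additionally encodes and anonymizes each chunk.

# Minimal sketch of client-side chunking and parallel dispersal
# (function names, endpoints, and the simple replication scheme are
# illustrative assumptions, not the middleware's actual API).
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk
REPLICAS = 2                  # redundancy: each chunk stored at 2 locations

def split_into_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield (index, bytes) chunks of the file at `path`."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield index, data
            index += 1

def upload_chunk(endpoint, name, data):
    """Placeholder for one upload stream to a storage location."""
    # A real client would encode/anonymize the chunk and send it over
    # HTTP(S) or another transport; here we only simulate the call.
    print(f"sending {name} ({len(data)} bytes) to {endpoint}")

def disperse(path, endpoints):
    """Send each chunk to REPLICAS locations chosen round-robin, in parallel streams."""
    targets = cycle(range(len(endpoints)))
    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        for index, data in split_into_chunks(path):
            for _ in range(REPLICAS):
                endpoint = endpoints[next(targets)]
                pool.submit(upload_chunk, endpoint, f"chunk-{index:05d}", data)

# disperse("report.pdf", ["https://site-a/store", "https://site-b/store", "https://site-c/store"])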
ISBN (print): 9783981080186
Concurrent trace is an emerging challenge when debugging multicore systems: the trace buffer becomes a bottleneck because all trace sources try to access it simultaneously, and a dedicated on-chip interconnection fabric for the distributed trace signals carries an extremely high hardware cost. In this paper, we propose a clustering-based scheme that implements concurrent trace for debugging Network-on-Chip (NoC) based multicore systems. In the proposed scheme, a unified communication framework eliminates the need for an interconnection fabric that would be used only during debugging. With the clustering scheme, multiple concurrent trace sources can access distributed trace buffers via the NoC under a bandwidth constraint. We evaluate the proposed scheme using Booksim, and the results show its effectiveness.
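As a toy illustration of grouping concurrent trace sources into clusters, each served by one distributed trace buffer with a fixed bandwidth budget, here is a sketch using a greedy first-fit policy; the policy, the function name, and the numbers are assumptions for illustration, not the clustering algorithm proposed in the paper.

# Toy sketch: assign trace sources (id -> rate in MB/s) to buffer clusters,
# never exceeding a cluster's bandwidth budget (greedy first-fit decreasing).
def cluster_trace_sources(source_rates, buffer_bandwidth):
    clusters = []  # each cluster: {"sources": [...], "load": float}
    for source_id, rate in sorted(source_rates.items(), key=lambda kv: -kv[1]):
        for cluster in clusters:
            if cluster["load"] + rate <= buffer_bandwidth:
                cluster["sources"].append(source_id)
                cluster["load"] += rate
                break
        else:  # no existing cluster has room: open a new one
            clusters.append({"sources": [source_id], "load": rate})
    return clusters

# Example: 8 cores emitting trace data, buffers that each sustain 1.0 MB/s.
rates = {f"core{i}": r for i, r in enumerate([0.4, 0.3, 0.6, 0.2, 0.5, 0.1, 0.7, 0.3])}
for i, c in enumerate(cluster_trace_sources(rates, buffer_bandwidth=1.0)):
    print(f"cluster {i}: {c['sources']} (load {c['load']:.1f} MB/s)")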
Target coverage is an important yet challenging problem in wireless sensor networks, especially when both coverage and energy constraints must be taken into account. Due to its nonlinear nature, previous studies of this problem have mainly focused on heuristic algorithms; the theoretical bound remains unknown. Moreover, the most popular method in the previous literature, i.e., discretization of continuous time, has yet to be justified. This paper fills these gaps with two theoretical results. The first is a formal justification of the method: we use a simple example to illustrate the procedure of transforming a solution in the time domain into a corresponding solution in the pattern domain with the same network lifetime, obtain two key observations, and then formally prove these observations and use them as the basis for justifying the method. The second result is an algorithm that guarantees a network lifetime of at least (1 - ε) of the optimal network lifetime, where ε can be made arbitrarily small depending on the required precision. The algorithm is based on column generation (CG) theory, which decomposes the original problem into two subproblems and solves them iteratively in a way that approaches the optimal solution. Moreover, we develop several constructive approaches to further optimize the algorithm. Numerical results verify the efficiency of our CG-based algorithm.
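As a hedged illustration of the decomposition (standard column-generation notation is assumed here and may differ from the paper's exact formulation): a coverage pattern $p$ is a set of sensors that jointly covers all targets, $t_p$ is the time pattern $p$ is kept active, and $L_i$ is the total active time sensor $i$'s battery allows. The restricted master LP over the patterns generated so far, $P'$, is

\[
\max \sum_{p \in P'} t_p
\quad \text{s.t.} \quad
\sum_{p \in P' :\, i \in p} t_p \le L_i \;\; \forall i, \qquad t_p \ge 0 .
\]

Given the dual prices $\pi_i$ of the energy constraints, the pricing subproblem searches for a new pattern of minimum total price, $\min_{S \text{ covers all targets}} \sum_{i \in S} \pi_i$; if this minimum is below 1, $S$ enters $P'$ as a new column, otherwise the current master solution is optimal up to the chosen tolerance $\varepsilon$.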
The 15th ACM SIGPLAN International Conference on Functional Programming (ICFP) took place on September 27–29, 2010 in Baltimore, Maryland. After the conference, the program committee, chaired by Stephanie Weirich, selected several outstanding papers and invited their authors to submit to this special issue of Journal of Functional Programming. Umut A. Acar and James Cheney acted as editors for these submissions. This issue includes the seven accepted papers, each of which provides substantial new material beyond the original conference version. The selected papers reflect a consensus by the program committee that ICFP 2010 had a number of strong papers linking core functional programming ideas with other areas, such as multicore, embedded systems, and data compression.
Cloud computing is a new computing model, and its resource-monitoring tools are immature compared with those of traditional distributed computing and grid computing. To better monitor virtual resources in cloud computing, a periodically and event-driven push (PEP) monitoring model is proposed. Taking advantage of the push and event-driven mechanisms, the model can provide comparatively adequate information about the usage and status of resources. It simplifies the communication between the Master and Worker nodes without missing important events that happen during the push interval. In addition, we develop "mon" to make up for Libvirt's deficiencies in monitoring virtual CPU and memory.
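A minimal sketch of how a periodic push and an event-driven push can be combined on a worker node, assuming hypothetical helpers (sample_vm_metrics, push) and an arbitrary CPU threshold; this is not the PEP model's actual interface, and the real system reads metrics via Libvirt and the "mon" module.

# Sketch of a periodic + event-driven push loop on a worker node.
import json
import random
import time

PUSH_INTERVAL = 30          # seconds between regular (periodic) pushes
CPU_ALERT_THRESHOLD = 0.90  # event: CPU usage above 90 %

def sample_vm_metrics():
    """Placeholder for reading per-VM CPU/memory usage (e.g. via Libvirt)."""
    return {"cpu": random.random(), "mem": random.random()}

def push(report, reason):
    """Placeholder for sending the report to the Master node."""
    print(f"[{reason}] push -> master: {json.dumps(report)}")

def monitor_loop():
    last_push = 0.0
    while True:
        metrics = sample_vm_metrics()
        now = time.time()
        if metrics["cpu"] > CPU_ALERT_THRESHOLD:
            push(metrics, "event")        # event-driven: report immediately
            last_push = now
        elif now - last_push >= PUSH_INTERVAL:
            push(metrics, "periodic")     # periodic: regular heartbeat push
            last_push = now
        time.sleep(1)

# monitor_loop()  # runs until interrupted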
If the route computation operation in an adaptive router returns more than one output channel, the selection strategy chooses one of them based on the congestion metric used. The effectiveness of a selection strate...
With the growing scale of high-performance computing (HPC) systems, today and even more so tomorrow, faults are the norm rather than the exception. HPC applications typically tolerate fail-stop failures under the stop-and-wai...
ISBN (print): 9781450307710
In this paper we present a thorough experience report on tuning double-precision matrix-matrix multiplication (DGEMM) on the Fermi GPU architecture. We choose an optimal algorithm with blocking in both shared memory and registers to satisfy the constraints of the Fermi memory hierarchy. Our optimization strategy is further guided by performance modeling based on micro-architecture benchmarks. Our optimizations include software pipelining, use of vector memory operations, and instruction scheduling. Our best CUDA algorithm achieves performance comparable to the latest CUBLAS library. We further improve upon this with an implementation in the native machine language, leading to a 20% increase in performance: the achieved peak performance (efficiency) improves from 302 Gflop/s (58%) to 362 Gflop/s (70%).
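To show the blocking idea in isolation, here is a conceptual sketch of tiled matrix multiplication in plain NumPy: each tile product corresponds to the work a thread block would do on tiles staged in shared memory/registers. The tile size and the pure-Python formulation are illustrative assumptions; the paper's kernels are hand-tuned CUDA and native machine code.

# Conceptual illustration of blocked (tiled) DGEMM: C = A @ B computed tile by tile.
import numpy as np

def blocked_dgemm(A, B, tile=64):
    """Return A @ B computed with square tiles of side `tile`."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=np.float64)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # One tile product: what a thread block would accumulate from
                # tiles of A and B staged in fast on-chip memory.
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(blocked_dgemm(A, B), A @ B)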
We present a novel approach for the fabrication of hierarchical electrodes that integrates bottom-up nanostructure self-assembly and top-down microfabrication processes. The electrodes consist of virus-templated nanos...