ISBN: 9783540730897 (print)
With the field of wireless sensor networks rapidly maturing, the focus shifts from "easy" deployments, like remote monitoring, to more difficult domains where applications impose strict, real-time constraints on performance. One such class of applications is safety-critical systems, like fire and burglar alarms, where events detected by sensor nodes have to be reported reliably and in a timely manner to a sink node. A complicating factor is that systems must operate for years without manual intervention, which puts very strong demands on the energy efficiency of protocols running on current sensor-node hardware. Since we are not aware of a solution that meets all requirements of safety-critical systems, i.e., reliable data delivery, low latency, and low energy consumption, we present Dwarf, an energy-efficient, robust, and dependable forwarding algorithm. The core idea is to use unicast-based partial flooding along with a delay-aware node selection strategy. Our analysis and extensive simulations of real-world scenarios show that Dwarf tolerates large fractions of link and node failures, yet is energy efficient enough to allow for an operational lifetime of several years.
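To make the core idea concrete, here is a minimal sketch of delay-aware forwarder selection for unicast-based partial flooding, assuming each node knows its neighbors' hop distance to the sink and their next wake-up time; the neighbor fields, fanout parameter, and `select_forwarders` helper are illustrative assumptions, not Dwarf's actual protocol.

```python
# Illustrative sketch only (not Dwarf's actual protocol): pick a small set of
# forwarding targets, preferring neighbors closer to the sink that wake up soonest.

def select_forwarders(neighbors, now, fanout=2):
    """neighbors: list of dicts with 'id', 'hops_to_sink', 'next_wakeup' (seconds).
    Returns the ids of up to `fanout` neighbors to unicast the alarm to."""
    my_hops = min(n['hops_to_sink'] for n in neighbors) + 1  # assumed local hop estimate
    candidates = [n for n in neighbors if n['hops_to_sink'] < my_hops]
    if not candidates:                        # no progress possible: fall back to peers
        candidates = [n for n in neighbors if n['hops_to_sink'] == my_hops]
    # shortest waiting time until the candidate can receive a unicast frame
    candidates.sort(key=lambda n: max(0.0, n['next_wakeup'] - now))
    return [n['id'] for n in candidates[:fanout]]

if __name__ == "__main__":
    nbrs = [{'id': 'A', 'hops_to_sink': 2, 'next_wakeup': 10.3},
            {'id': 'B', 'hops_to_sink': 2, 'next_wakeup': 10.1},
            {'id': 'C', 'hops_to_sink': 3, 'next_wakeup': 10.0}]
    print(select_forwarders(nbrs, now=10.0))  # -> ['B', 'A']
```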
ISBN: 9783540730897 (print)
With the proliferation of various kinds of sensor networks, we will see large amounts of heterogeneous data. This data differs in characteristics such as content, format, modality, and quality. Existing research has largely focused on issues related to individual sensor networks; how to make use of diverse data beyond the individual network level is largely unaddressed. In this paper, we propose a semantics-based approach for this problem and describe a system that constructs applications that utilize many sources of data simultaneously. We propose models that formally describe the semantics of data sources and of the processing modules that perform various operations on data. Based on such formal semantics, our system composes data sources and processing modules together in response to users' queries. The semantics provides a common ground such that data sources and processing modules from various parties can be shared and reused among applications. We describe our system architecture, illustrate application deployment, and share our experiences with the semantic approach.
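As a rough illustration of the composition step only, the sketch below chains hypothetical processing modules back to a registered data source purely by matching semantic types; the registries, type names, and `compose()` function are assumptions, not the described system's API.

```python
# Rough illustration only (not the described system's API): compose a pipeline for a
# queried semantic type by chaining processing modules back to a registered data source.
SOURCES = {"TemperatureReading": "campus_temp_sensors"}
MODULES = [  # each module consumes one semantic type and produces another
    {"name": "calibrate",  "consumes": "TemperatureReading", "produces": "CalibratedTemp"},
    {"name": "heat_index", "consumes": "CalibratedTemp",     "produces": "HeatIndex"},
]

def compose(target_type):
    """Return (source, [module names]) producing `target_type`, or None if impossible."""
    if target_type in SOURCES:
        return SOURCES[target_type], []
    for m in MODULES:
        if m["produces"] == target_type:
            upstream = compose(m["consumes"])
            if upstream is not None:
                source, chain = upstream
                return source, chain + [m["name"]]
    return None

print(compose("HeatIndex"))   # -> ('campus_temp_sensors', ['calibrate', 'heat_index'])
```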
ISBN: 9783540730897 (print)
The problem we address in this paper is how to detect an intruder moving through a polygonal space that is equipped with a camera sensor network. We propose a probabilistic sensor tasking algorithm in which cameras sense the environment independently of one another, thus reducing the communication overhead. Since constant monitoring is prohibitively expensive with complex sensors such as cameras, the amount of sensing done is also minimized. To be effective, a minimum detection probability must be guaranteed by the system over all possible paths through the space. The straightforward approach of enumerating all such paths is intractable, since there is generally an infinite number of potential paths. Using a geometric decomposition of the space, we lower-bound the detection probability over all paths using a small number of linear constraints. The camera tasking is computed for a set of example layouts and shows large performance gains for our probabilistic scheme over both constant monitoring and a deterministic heuristic.
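A small worked example of the per-path detection requirement, under the assumption that cameras sense cells independently (numbers are made up): the miss probability along a path is a product of per-cell miss probabilities, and taking logarithms turns the requirement into an additive constraint, which is what makes linear lower bounds over all paths tractable.

```python
# Assumes independent per-cell sensing; probabilities are hypothetical.
import math

def detection_probability(path_probs):
    miss = 1.0
    for p in path_probs:
        miss *= (1.0 - p)       # intruder must evade every camera epoch along the path
    return 1.0 - miss

def additive_weight(p):
    """-ln(1 - p): a path is detected with prob >= P_min iff these weights sum to >= -ln(1 - P_min)."""
    return -math.log(1.0 - p)

path_probs = [0.10, 0.05, 0.20]   # hypothetical per-cell sensing probabilities
P_min = 0.30
print(round(detection_probability(path_probs), 3))                            # 0.316
print(sum(additive_weight(p) for p in path_probs) >= -math.log(1.0 - P_min))  # True
```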
ISBN: 9783540730897 (print)
We develop a practical, distributed algorithm to detect events, identify measurement errors, and infer missing readings in ecological applications of wireless sensor networks. To address issues of non-stationarity in environmental data streams, each sensor-processor learns statistical distributions of differences between its readings and those of its neighbors, as well as between its current and previous measurements. Scalar physical quantities such as air temperature, soil moisture, and light flux naturally display a large degree of spatiotemporal coherence, which gives a spectrum of fluctuations between adjacent or consecutive measurements with small variances. This feature permits stable estimation over a small state space. The resulting probability distributions of differences, estimated online in real time, are then used in statistical significance tests to identify rare events. Utilizing the spatially and temporally distributed nature of the measurements across the network, these events are classified as single-mode failures (usually corresponding to measurement errors at a single sensor) or common-mode events. The event structure also allows the network to automatically attribute potential measurement errors to specific sensors and to correct them in real time via a combination of current measurements at neighboring nodes and the statistics of differences between them. Compared to methods that use Bayesian classification of raw data streams at each sensor, this algorithm is more storage-efficient, learns faster, and is more robust in the face of non-stationary phenomena. Field results from a wireless sensor network (sensor Web) deployed at Sevilleta National Wildlife Refuge are presented.
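A minimal sketch of the underlying idea, under simplifying assumptions (a single neighbor and a k-sigma threshold test): each node keeps online statistics of reading differences, here via Welford's update, and flags differences that are statistically rare. The class name and threshold are illustrative, not the deployed algorithm.

```python
class DiffModel:
    """Online mean/variance of node-to-neighbor differences (Welford's update)."""
    def __init__(self, k=4.0):
        self.n, self.mean, self.m2, self.k = 0, 0.0, 0.0, k

    def update(self, diff):
        self.n += 1
        delta = diff - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (diff - self.mean)

    def is_outlier(self, diff):
        if self.n < 10:                                  # not enough history yet
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(diff - self.mean) > self.k * std

model = DiffModel()
for d in [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.1, -0.3, 0.0, 0.1, 0.2]:
    model.update(d)                  # typical node-to-neighbor temperature differences
print(model.is_outlier(3.5))         # large disagreement -> True (candidate event/error)
print(model.is_outlier(0.15))        # within normal fluctuation -> False
```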
ISBN: 9783540730897 (print)
The exposure of a path p is a measure of the likelihood that an object traveling along p is detected by a network of sensors, and it is formally defined as the integral, over all points x of p, of the sensibility (the strength of the signal coming from x) times the element of path length. The minimum exposure path (MEP) problem is, given a pair of points x and y inside a sensor field, to find a path between x and y of minimum exposure. In this paper we introduce the first rigorous treatment of the problem, designing an approximation algorithm for the MEP problem with guaranteed performance characteristics. Given a convex polygon P of size n with O(n) sensors inside it and any real number ε > 0, our algorithm finds a path in P whose exposure is within a (1 + ε) factor of the exposure of the MEP, in time O((n/ε²)ψ), where ψ is a topological characteristic of the field. We also describe a framework for a faster implementation of our algorithm, which reduces the time by a factor of approximately Θ(1/ε) while keeping the same approximation ratio.
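The exposure integral can be illustrated numerically: the sketch below approximates the integral of the sensibility along a discretized path under an assumed inverse-square signal model, which is only one possible sensibility function and is not the paper's approximation algorithm.

```python
import math

def sensibility(x, sensors):
    # assumed model: signal strength decays with the square of the distance to each sensor
    return sum(1.0 / (1e-9 + (x[0] - s[0]) ** 2 + (x[1] - s[1]) ** 2) for s in sensors)

def exposure(path, sensors, steps=100):
    """Midpoint-rule approximation of the exposure integral along a polyline path."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        dl = seg / steps
        for i in range(steps):
            t = (i + 0.5) / steps
            point = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            total += sensibility(point, sensors) * dl
    return total

sensors = [(0.5, 0.6)]
straight = [(0.0, 0.0), (1.0, 1.0)]                 # passes close to the sensor
detour   = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]     # keeps more distance, lower exposure
print(exposure(straight, sensors), exposure(detour, sensors))
```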
ISBN: 9783540730897 (print)
We introduce a class of anchoritic sensor networks, where communications between sensor nodes are undesirable or infeasible due to, e.g., harsh environments, energy constraints, or security considerations. Instead, we assume that the sensors buffer the measurements over the lifetime and report them directly to a sink without necessarily requiring communications. Upon retrieval of the reports, all sensor data measurements will be available to a central entity for post-processing. Our algorithm is based on the further assumption that some of the data fields that are being observed by the sensors can be modeled as a local (i.e., having decaying spatial correlations) stochastic process; if not, then choose an auxiliary field, e.g., carefully engineered random signals intentionally generated by arranged devices, "cloud shadows" cast on the ground, or animal heat. The sensor nodes record the measurements, or a function of the measurements, e.g., "1" when the measured signal is above a threshold, and "0" otherwise. These time-stamped sequences are ultimately transferred to the sink. The localization problem is then approached by analyzing the correlations between these sequences at pairs of nodes. As for applications, we discuss the localization scheme for large-scale sensor networks deployed on the seabed and study a two-tiered architecture that organizes deaf sensors with local masters.
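A toy version of the correlation step, under an assumed exponential correlation-versus-distance model ρ(d) = exp(-d/L): the buffered 0/1 sequences of two nodes are correlated and the model is inverted to get a rough pairwise distance estimate. The sequences and the length scale L are hypothetical, and this is not the paper's estimator.

```python
import math

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / math.sqrt(va * vb) if va > 0 and vb > 0 else 0.0

def estimated_distance(a, b, L=10.0):
    rho = max(correlation(a, b), 1e-3)   # distant pairs give near-zero (or noisy) correlation
    return -L * math.log(rho)            # invert assumed model rho(d) = exp(-d / L)

s1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
s2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]   # mostly agrees with s1 -> high correlation, small distance
print(round(correlation(s1, s2), 2), round(estimated_distance(s1, s2), 1))  # 0.6  5.1
```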
Information privacy is usually concerned with the confidentiality of personally identifiable information (PII), such as electronic medical records. Nowadays, Web services are used to support different applications that may contain PII, such as healthcare applications. Thus, the information access control mechanism for Web services must be embedded into privacy-enhancing technologies. Further, as applications go mobile and ubiquitous, location becomes an important determinant for enforcing privacy constraints. On the other hand, the role-based access control (RBAC) model has been widely investigated and applied to various applications for some time. This paper presents a privacy access control policy enforcement model extended from RBAC with location intelligence for Web services-based applications. In addition, we illustrate the realization of this model with a middleware architecture. This paper also illustrates our proposed mechanism in the context of the eXtensible Access Control Markup Language (XACML) and WS-Policy constraints.
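As an illustration of the enforcement idea only (not the paper's middleware or its XACML encoding), the sketch below combines an RBAC permission check with a location constraint; the roles, resources, and zones are hypothetical.

```python
ROLE_PERMISSIONS = {
    "nurse":     {("medical_record", "read")},
    "physician": {("medical_record", "read"), ("medical_record", "write")},
}
LOCATION_CONSTRAINTS = {
    ("medical_record", "read"):  {"ward", "clinic"},   # PII readable only on premises
    ("medical_record", "write"): {"clinic"},
}

def is_permitted(role, resource, action, location):
    if (resource, action) not in ROLE_PERMISSIONS.get(role, set()):
        return False                                    # role lacks the base permission
    allowed_zones = LOCATION_CONSTRAINTS.get((resource, action))
    return allowed_zones is None or location in allowed_zones

print(is_permitted("nurse", "medical_record", "read", "ward"))   # True
print(is_permitted("nurse", "medical_record", "read", "home"))   # False: location denies
```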
ISBN: 9783540730897 (print)
In this paper, we study the two-tiered wireless sensor network (WSN) architecture and propose optimal cluster association algorithms for it that maximize the overall network lifetime. A two-tiered WSN is formed by a number of small sensor nodes (SNs), more powerful application nodes (ANs), and base stations (BSs, or gateways). SNs capture, encode, and transmit relevant information to ANs, which then send the combined information to BSs. Assuming the locations of the SNs, ANs, and BSs are fixed, we consider how to associate the SNs with ANs such that the network lifetime is maximized while every node meets its bandwidth requirement. When the SNs are homogeneous (e.g., they have the same bandwidth requirement), we give optimal algorithms to maximize the lifetime of the WSN; when the SNs are heterogeneous, we give a 2-approximation algorithm that produces a network whose lifetime is at least 1/2 of the optimum. We also present algorithms to dynamically update the cluster association when the network topology changes. Numerical results are given to demonstrate the efficiency and optimality of the proposed approaches. In our simulation study, the proposed algorithm achieves nearly twice the network lifetime of competing heuristics.
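For intuition about the association problem (the paper's optimal and 2-approximation algorithms are more involved), here is a simple greedy heuristic that assigns each SN to a reachable, least-loaded AN so that the most burdened AN, which determines the lifetime, stays as light as possible; all node names, bandwidths, and reachability sets are made up.

```python
def greedy_associate(sns, ans, reachable):
    """sns: {sn: bandwidth}; ans: {an: capacity}; reachable: {sn: set of ANs in range}."""
    load = {an: 0.0 for an in ans}
    assignment = {}
    for sn, bw in sorted(sns.items(), key=lambda kv: -kv[1]):   # place demanding SNs first
        candidates = sorted(an for an in reachable[sn] if load[an] + bw <= ans[an])
        if not candidates:
            raise ValueError(f"no feasible AN for {sn}")
        an = min(candidates, key=lambda a: load[a])             # least-loaded feasible AN
        assignment[sn] = an
        load[an] += bw
    return assignment, load

sns = {"s1": 2.0, "s2": 1.0, "s3": 1.0}
ans = {"a1": 3.0, "a2": 3.0}
reachable = {"s1": {"a1", "a2"}, "s2": {"a1"}, "s3": {"a1", "a2"}}
print(greedy_associate(sns, ans, reachable))
```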
ISBN: 9783540730897 (print)
CORIE is a pilot environmental observation and forecasting system (EOFS) for the Columbia River. The goal of CORIE is to characterize and predict complex circulation and mixing processes in a system encompassing the lower river, the estuary, and the near-ocean using a multi-scale data assimilation model. A challenge for scientists is to maintain the accuracy of their modeling system while minimizing resource usage. In this paper, we first propose a metric for characterizing the error in the CORIE data assimilation model and study the impact of the number of sensors on the error reduction. Second, we propose a genetic algorithm to compute the optimal configuration of sensors that reduces the number of sensors to the minimum required while maintaining a similar level of error in the data assimilation model. We verify the results of our algorithm with 30 runs of the data assimilation model. Each run uses data collected and estimated over a two-day period. We can reduce the sensing resource usage by 26.5% while achieving comparable error in data assimilation. As a result, we can potentially save 40 thousand dollars in initial expenses and 10 thousand dollars in maintenance expenses per year. Our algorithm can be used to guide operation of the existing observation network, as well as to guide deployment of future sensor stations. The novelty of our approach is that our problem formulation of network configuration is driven by the data assimilation framework, which is more meaningful to domain scientists, rather than by abstract sensing models.
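A skeleton of the kind of genetic search described above, for illustration only: the real fitness comes from running the data assimilation model, so the `error()` function below is a stand-in placeholder, and the population size, generation count, and mutation rate are arbitrary choices.

```python
import random
random.seed(1)

N_STATIONS, POP, GENERATIONS, ERROR_BUDGET = 12, 30, 50, 1.0

def error(config):
    # placeholder: pretend every active station reduces the assimilation error a little
    return 3.0 / (1 + sum(config))

def fitness(config):
    # minimize station count, heavily penalizing configurations over the error budget
    return sum(config) + (100.0 if error(config) > ERROR_BUDGET else 0.0)

def crossover(a, b):
    cut = random.randrange(1, N_STATIONS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in c]

population = [[random.randint(0, 1) for _ in range(N_STATIONS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)
    parents = population[:POP // 2]                     # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print(best, "stations used:", sum(best), "error:", round(error(best), 2))
```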
ISBN: 9783540730897 (print)
We take an algorithmic approach to a well-known communication channel problem and develop several algorithms for solving it. Specifically, we develop power control algorithms for sensor networks with collaborative relaying under bandwidth constraints, via quantization of finite-rate (bandwidth-limited) feedback channels. We first consider the power allocation problem under collaborative relaying, where the tradeoff between minimizing one's own energy expenditure and the energy spent on relaying is considered under the constraints of packet outage probability and bandwidth-constrained (finite-rate) feedback. Then we develop bandwidth-constrained quantization algorithms (due to the finite-rate feedback) that seek the optimal way of quantizing channel quality and power values in order to minimize the total average transmission power and satisfy the given probability of outage. We develop two kinds of quantization protocols and associated quantization algorithms. For separate source-relay quantization, we reduce the problem to the well-known k-median problem [1] on line graphs and show a simple O((KJ)²N) polynomial-time algorithm, where log₂(KJ) is the quantization bandwidth and N is the size of the discretized parameter space. For joint quantization, we first develop a simple 2-factor approximation of complexity O(KJN + N log N). Then, for ε > 0, we develop a fully polynomial approximation scheme (FPAS) that approximates the optimal quantization cost to within a (1 + ε) factor. The running time of the FPAS is polynomial in 1/ε, the input size N, and ln F, where F is the maximum available transmit power.
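For the separate source-relay case, the reduction targets the classical k-median problem on a line. The textbook dynamic program below (unoptimized, and not the paper's O((KJ)²N) algorithm) only shows the structure the reduction exploits: partition sorted values into k contiguous groups, each served by its median. The sample values are hypothetical.

```python
def one_cluster_cost(pts, i, j):
    """Cost of serving pts[i..j] with one median placed at the middle point."""
    m = pts[(i + j) // 2]
    return sum(abs(p - m) for p in pts[i:j + 1])

def k_median_line(points, k):
    pts = sorted(points)
    n = len(pts)
    INF = float("inf")
    # dp[j][m] = best cost of covering pts[0..j-1] with m medians
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, k + 1):
            for i in range(j):                          # last cluster is pts[i..j-1]
                if dp[i][m - 1] < INF:
                    dp[j][m] = min(dp[j][m], dp[i][m - 1] + one_cluster_cost(pts, i, j - 1))
    return dp[n][k]

# e.g. quantizing six channel-quality samples with two representative levels
print(k_median_line([0.1, 0.2, 0.25, 0.9, 1.0, 1.05], k=2))   # -> 0.3
```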