Authors:
P. Bellini, I. Bruno, P. Nesi (DISIT-DSI, Distributed Systems and Internet Technology Lab, Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze, Florence, Italy)
The evolution of content distribution for entertainment and infotainment is driving demand for better pricing and value for money in industry products and services. Content providers, aggregators and distributors constantly need to adopt innovative means of reducing costs and satisfying user needs. The AXMEDIS framework aims at providing technical solutions and tools for automating content production, formatting, protection and distribution. The solution described in the paper was designed as a distributed environment based on Grid technology and provides efficient management of content production/finalization as well as dynamic service discovery and composition, distributed resource management and adaptive media delivery. AXMEDIS is a large FP6 IST Integrated Project of Research and Development in which a set of enabling technologies is developed to cope with the above-mentioned problems.
In the last few years, there has been a dramatic growth of global, distributed applications such as Skype for VoIP telephony, P2P file-sharing systems, and software/content distribution networks. Almost in parallel, a number of different monitoring overlay networks have been proposed to monitor the health of such systems, facilitating tasks like performance planning and problem solving. An important task of any monitoring overlay is the computation of aggregation functions such as MEAN over a set of nodes. However, in the face of multi-million-node networks, the computation of any aggregation function over the whole network, or even a large subset of it, is challenging. Every query scheme has to be robust against churn (nodes join and leave the system at arbitrary rates) and scalable up to millions of nodes. Consider the simple approach to aggregation. We may first broadcast an aggregate query to the network and then have each node return its local value. Clearly, this would take significant time to complete in a large network. Moreover, without suitable coordination, the responses may collectively become a DDoS attack on the querying node. The lack of scalability in this approach has led to in-network computation, where an overlay is constructed to disseminate and compute the query in a distributed manner [1]. Several variations on computing aggregates have been proposed for sensor networks, Grids and cluster-based applications, with each approach focused on the constraints imposed by the network under study. Considine et al. [2], for example, propose the use of duplicate-insensitive sketches to compute SUM aggregates for networks with resource constraints and node failures, such as sensor overlays. Other systems like Astrolabe [3] target smaller and less dynamic networks by constructing a fault-resilient monitoring network to cope with network partitioning. In this paper, we address the problem of computing aggregate functions for large-scale networks.
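The in-network alternative to flat broadcast-and-reply can be sketched as a tree overlay in which each node forwards a partial (sum, count) pair upward, so the querying root receives one combined message per child rather than one per node. This is a minimal illustration of the general idea, not the scheme proposed in the paper; all names are hypothetical.

```python
# Illustrative tree-based in-network aggregation of MEAN.
# Each node combines its children's partial (sum, count) pairs with its
# own local value, so message count grows with tree fan-out, not network size.

class Node:
    def __init__(self, value, children=None):
        self.value = value              # locally monitored value
        self.children = children or []

    def aggregate(self):
        """Return (sum, count) for the subtree rooted at this node."""
        total, count = self.value, 1
        for child in self.children:
            s, c = child.aggregate()
            total += s
            count += c
        return total, count

def mean_query(root):
    s, c = root.aggregate()
    return s / c

leaves = [Node(v) for v in (10, 20, 30, 40)]
root = Node(0, [Node(0, leaves[:2]), Node(0, leaves[2:])])
print(mean_query(root))  # mean over all 7 nodes (internal nodes hold value 0)
```

In a real monitoring overlay the recursion would be replaced by asynchronous messages, and churn handling (e.g. timeouts for missing children) would be layered on top.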
Decoupling is one of the most important features of service components and one that will make Web services successful in the future. Decoupling metrics can be used to measure and evaluate the decoupling attributes of a distributed service-oriented software architecture, which have very significant impacts on the understandability, maintainability, reliability, testability, and reusability of software components. Decoupling metrics can also be used as a criterion for selecting existing service components for composition. Many measurement metrics for decoupling have been proposed; however, most are not specific to service component decoupling. This paper provides a practical guide for evaluating decoupling between service-oriented components in service compositions such as those expressed in the Business Process Execution Language (BPEL). Coupling was originally defined as the measure of the strength of association established by a connection from one module to another. Most existing techniques and coupling metrics are classified by procedural programming and object-oriented programming. In this paper we propose decoupling metrics based on the black-box parameters of service state (stateless/stateful), interaction (one-way, two-way), service interface required, service interface provided (supporting/supported interface), invocation modes (sync/async), self-containment (stand-alone/indirectly dependent), implicit invocation (blocking/non-blocking), and binding modes (static/dynamic).
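A black-box metric of this kind can be sketched as a simple score over the listed attributes. The attribute names, the choice of which value counts as "looser coupled", and the uniform weighting below are all illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical decoupling score from black-box service attributes.
# For each attribute we record which value is the looser-coupled one;
# the score is the fraction of attributes taking that value.

DECOUPLED_CHOICES = {
    "state": "stateless",                    # vs. "stateful"
    "interaction": "one-way",                # vs. "two-way"
    "invocation": "async",                   # vs. "sync"
    "containment": "stand-alone",            # vs. "indirectly-dependent"
    "implicit_invocation": "non-blocking",   # vs. "blocking"
    "binding": "dynamic",                    # vs. "static"
}

def decoupling_score(service):
    """Return the fraction of attributes with the looser-coupled value (0..1)."""
    hits = sum(1 for attr, loose in DECOUPLED_CHOICES.items()
               if service.get(attr) == loose)
    return hits / len(DECOUPLED_CHOICES)

svc = {"state": "stateless", "interaction": "two-way",
       "invocation": "async", "containment": "stand-alone",
       "implicit_invocation": "blocking", "binding": "dynamic"}
print(decoupling_score(svc))  # 4 of 6 attributes are loosely coupled
```

Such a score could serve as the selection criterion mentioned above: given several candidate components for a composition, prefer the one with the higher score.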
ISBN (print): 1424413804
In a cellular communication scenario, wireless sensors can be deployed to sense the interference power of a frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the sub-channel. We propose a scheme for approximating ITs over an extended C-band (licensed and unused television band) by fitting sub-channel frequencies and the corresponding ITs to a regression model. Using this model, the IT of a random sub-channel can be calculated by the base station (BS) for further analysis of the channel interference. Our proposed model, based on readings reported by sensors, helps in dynamic channel selection (S-DCS) in the extended C-band for assignment to unlicensed secondary users. S-DCS maximizes channel utilization and proves economic from the energy-consumption point of view. It also exhibits substantial accuracy, with the error bounded within 6.8%. Moreover, users are assigned empty sub-channels without actually probing them, incurring minimum delay in the process. Overall channel allocation efficiency is also maximized, along with fairness to individual users.
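The regression step can be illustrated with an ordinary least-squares fit, assuming for simplicity a linear model IT = a·f + b over sub-channel centre frequency f; the paper's actual model and the data below are not from the source.

```python
# Sketch: fit reported (frequency, interference temperature) pairs to a
# linear model, then estimate the IT of an unprobed sub-channel.

def fit_linear(freqs, temps):
    """Ordinary least-squares fit of temps = a * freqs + b."""
    n = len(freqs)
    mf = sum(freqs) / n
    mt = sum(temps) / n
    cov = sum((f - mf) * (t - mt) for f, t in zip(freqs, temps))
    var = sum((f - mf) ** 2 for f in freqs)
    a = cov / var
    b = mt - a * mf
    return a, b

def predict_it(a, b, freq):
    """Estimate interference temperature of a sub-channel without probing it."""
    return a * freq + b

# Synthetic sensor reports: (centre frequency in GHz, IT in kelvin)
freqs = [3.7, 3.8, 3.9, 4.0, 4.1]
temps = [290.0, 292.1, 293.9, 296.0, 298.1]
a, b = fit_linear(freqs, temps)
print(predict_it(a, b, 4.2))  # estimated IT of an unprobed sub-channel
```

This is what lets the BS assign an empty sub-channel without probing it: the sensors' fitted model stands in for a direct measurement.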
The IEEE 802.11e draft specification is intended to solve the performance problems of WLANs for real-time traffic by extending the original 802.11 medium access control (MAC) protocol and introducing priority access mechanisms in the form of enhanced distributed channel access (EDCA) and hybrid coordination function controlled channel access (HCCA). The draft standard comes with many configurable parameters for channel access, admission control, etc., but it is not very clear how real-time traffic actually performs with respect to capacity and throughput in WLANs that deploy this upcoming standard. In this report we provide detailed simulation results on the performance of anticipated enterprise real-time VoIP and collaborative video-conferencing applications in the presence of background traffic in EDCA-based IEEE 802.11 WLANs. We estimate the channel capacity and acceptable load conditions for some important enterprise usage scenarios. Subsequently, admission control limits are experimentally derived for these usage scenarios for voice and video traffic. Our simulations show that admission control greatly helps in maintaining the quality of admitted voice calls and video-conferencing sessions that are prioritized per the EDCA mechanisms within acceptable channel load conditions. The use of admission control allows admitted voice calls and video sessions to retain their throughput and delay characteristics, while unadmitted traffic (voice/video streams) suffers greatly from poor quality (delays, packet drops, etc.) as the channel load increases.
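The role of such an experimentally derived admission limit can be sketched as follows. This is not the 802.11e TSPEC-based algorithm itself, only a minimal illustration of the decision it implements; the occupancy threshold and flow shares are made-up numbers.

```python
# Minimal admission-control sketch: a new voice/video flow is admitted
# only while the projected channel occupancy stays under a derived limit.

class AdmissionController:
    def __init__(self, capacity_fraction=0.8):
        self.limit = capacity_fraction   # max total channel occupancy admitted
        self.admitted = []               # occupancy shares of admitted flows

    def request(self, occupancy):
        """Admit the flow iff the total would stay within the limit."""
        if sum(self.admitted) + occupancy <= self.limit:
            self.admitted.append(occupancy)
            return True
        return False

ac = AdmissionController()
print(ac.request(0.3))  # admitted
print(ac.request(0.3))  # admitted
print(ac.request(0.3))  # rejected: total would exceed the 0.8 limit
```

Rejected flows are the "unadmitted traffic" of the abstract: they may still contend for the channel at lower priority, which is why their quality degrades first as load grows.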
ISBN (print): 9781424404636
Wireless sensor networks (WSNs) are event-based systems that rely on the collective effort of densely deployed sensor nodes. Because of this dense deployment, sensor observations are spatially correlated with respect to the locations of the sensor nodes, so it may not be necessary for every sensor node to transmit its data. Given the resource constraints of sensor nodes, it is therefore necessary to select the minimum number of sensor nodes to transmit data to the sink. Furthermore, to achieve the application-specific distortion bound at the sink, it is also imperative to select the appropriate reporting frequency of the sensor nodes so as to minimize energy consumption. To address these needs, we propose the new Distributed Node and Rate Selection (DNRS) method, which is based on the principles of the natural immune system. Based on B-cell stimulation in the immune system, DNRS selects the most appropriate sensor nodes to send samples of the observed event; these are referred to as designated nodes. The aim of designated node selection is to meet the event estimation distortion constraint at the sink with the minimum number of sensor nodes. DNRS enables each sensor node to decide distributively whether it is a designated node or not. In addition, to exploit the temporal correlation in the event data, DNRS regulates the reporting frequency of each sensor node while meeting the application-specific delay bound at the sink. Based on immune network principles, DNRS distributively selects the appropriate reporting frequencies of the sensor nodes according to the congestion on the forward path and the event estimation distortion periodically calculated at the sink by an adaptive LMS filter. Performance evaluation shows that DNRS provides the minimum number of designated nodes to reliably detect the event properties, and it regulates the reporting frequency of designated nodes to exploit the temporal correlation in the event data, whereby it provides the significant …
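The sink-side distortion estimate mentioned above rests on a standard adaptive LMS predictor; the sketch below shows the general technique (predict the next event sample from recent reports and treat the prediction error as a distortion signal), with an illustrative step size and filter length rather than the paper's parameters.

```python
# Sketch of an adaptive LMS predictor for sink-side distortion estimation.
# Small residual errors on a temporally correlated signal indicate that
# the reporting rate of designated nodes can safely be lowered.

def lms_errors(samples, taps=3, mu=0.05):
    """Run an LMS predictor over `samples`; return the per-step errors."""
    w = [0.0] * taps                    # filter weights, zero-initialized
    errors = []
    for i in range(taps, len(samples)):
        x = samples[i - taps:i]                          # recent history
        y_hat = sum(wi * xi for wi, xi in zip(w, x))     # prediction
        e = samples[i] - y_hat                           # prediction error
        errors.append(e)
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
    return errors

# A slowly varying (temporally correlated) event signal: the error shrinks
# as the filter adapts, which is the cue for reducing reporting frequency.
signal = [1.0 + 0.01 * i for i in range(100)]
errs = lms_errors(signal)
print(abs(errs[-1]) < abs(errs[0]))  # True: prediction improves over time
```

In DNRS terms, a growing error would instead signal rising distortion, prompting the sink to request higher reporting rates from the designated nodes.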
The Common Instrument Middleware Architecture (CIMA) project, supported by the NSF Middleware Initiative, aims at making scientific instruments and sensors remotely accessible by providing a general solution for services and user interfaces that remotely access data from instruments and remotely monitor experiments. X-ray crystallography is one of several motivating applications for the development of CIMA. Data such as CCD frames and sensor readings may be accessed by portals through middleware services as they are being acquired, or through persistent archives. CIMA software may be used to federate online instruments in multiple labs, so the project must also address problems in data management and data sharing. This paper describes a collaboration between CIMA and the Open Grid Computing Environments (OGCE) project to enable remote users to monitor instruments and interact with data gathered from CIMA-enabled crystallography laboratories through various Web portal components (portlets) running within a standards-compliant portal container. We also discuss an approach taken to develop portlets that use Web services for data management, and solutions for managing distributed identity and access control.