ISBN (digital): 9781728160955
ISBN (print): 9781728196497
With major developments in sensor technologies and advances in communication network infrastructure, there is growing interest in adding more intelligence to e-health monitoring in order to facilitate an effective healthcare system. While IoT devices are capable of continuous health-parameter sensing and of notifying the user, effective business process management (BPM) facilitates system integration and the data processing workflow. This paper proposes an efficient framework for managing emergency situations (specifically, health-related ones) through the analysis of heterogeneous data sources. The proposed framework, named CLAWER (Cloud-Fog bAsed Workflow for Emergency seRvice), aims to bridge the gap between process management and data analytics by providing an automated workflow for personalized health monitoring and an efficient recommendation system. IoT devices collect the movement and health data, and a smartphone acts as an edge device that acquires the data together with user movement information. The accumulated data are first processed in the fog device, and the analysis and recommendations are then generated in the cloud. In this paper, the indoor health status of the users is analysed in a small cell cloud-enhanced eNodeB, which serves as the fog device. The generated recommendations are stored in the fog device so that they can be delivered to users with low latency and in a timely manner. Experimental analysis shows that CLAWER yields better precision and recall values than the existing methods.
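As a rough illustration of the tiered flow the abstract describes (IoT/edge collection, fog-side pre-processing with recommendation caching, cloud-side analysis), the Python sketch below shows one possible structure. All class and function names, the heart-rate thresholds, and the escalation rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a CLAWER-style tiered pipeline: edge -> fog -> cloud,
# with recommendations cached at the fog layer for low-latency delivery.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class HealthSample:
    user_id: str
    heart_rate: int
    location: str          # e.g. a room label from indoor positioning
    timestamp: float


class Cloud:
    def analyse(self, sample: HealthSample) -> str:
        # Placeholder for the heavier analytics / recommendation model.
        return f"alert: abnormal heart rate {sample.heart_rate} for {sample.user_id}"


class FogNode:
    """Pre-processes samples locally and caches cloud-generated recommendations."""

    def __init__(self, cloud: Cloud):
        self.cloud = cloud
        self.cache: Dict[str, str] = {}       # user_id -> last recommendation
        self.buffer: List[HealthSample] = []

    def ingest(self, sample: HealthSample) -> str:
        self.buffer.append(sample)
        # Simple local rule (assumption): only escalate anomalous readings to the cloud.
        if sample.heart_rate > 120 or sample.heart_rate < 45:
            self.cache[sample.user_id] = self.cloud.analyse(sample)
        # Serve the cached recommendation with fog-level latency.
        return self.cache.get(sample.user_id, "status normal")


fog = FogNode(Cloud())
print(fog.ingest(HealthSample("u1", 130, "bedroom", 0.0)))
```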
Traffic congestion is a major threat to the transportation sector in every urban city around the world. It causes many adverse effects, such as heavy fuel consumption, increased waiting time, and pollution, and poses a serious challenge to the movement of emergency vehicles. To achieve better driving conditions, we turn to a trending research field called the Social Internet of Vehicles (SIoV). A social network paradigm that permits the establishment of social relationships among the vehicles in the network, or with road infrastructure, can be radically helpful. This is the aim of SIoV: to benefit drivers by improving road safety, avoiding mishaps, and providing a friendly driving environment. In this paper, we propose a Dynamic congestion control with Throughput Maximization scheme based on Social Aspect (D-TMSA) that utilizes social, behavioural, and preference-based relationships. The proposed scheme, together with the various social relationship types, allocates the green signal so as to maximize the traffic flow passing through an intersection. Simulation results show that D-TMSA outperforms existing work by achieving higher throughput, lowering total travelling time, and reducing average waiting time, thereby improving traffic flow based on the vehicles' social attributes.
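The core idea of letting social attributes influence green-signal allocation can be illustrated with a small sketch. The proportional-weighting formula, the per-lane weights, and the parameter names below are assumptions for illustration, not the exact D-TMSA algorithm.

```python
# Illustrative green-time allocation weighted by per-lane social scores.
def allocate_green_time(lane_queues, social_weights, cycle_length=90.0, min_green=5.0):
    """lane_queues: vehicles waiting per approach; social_weights: relationship-derived priority."""
    demand = [q * w for q, w in zip(lane_queues, social_weights)]
    total = sum(demand) or 1.0
    budget = cycle_length - min_green * len(lane_queues)
    # Every approach gets a minimum green; the rest is split proportionally to weighted demand.
    return [min_green + budget * d / total for d in demand]


# Example: four approaches, the second carries vehicles with stronger social ties.
print(allocate_green_time([12, 20, 8, 15], [1.0, 1.4, 0.9, 1.1]))
```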
The rapid availability of renewable energy sources and the growing interest in their use in the datacenter industry present opportunities for service providers to reduce their energy-related costs as well as to minimize the ecological impact of their infrastructure. However, renewables are largely intermittent and can negatively affect users' applications and their performance, and therefore the profit of the service providers. Furthermore, services could be offered from geographical locations where electricity is relatively cheaper than elsewhere, which may degrade application performance and potentially increase users' costs. To ensure larger provider profits and lower user costs, certain non-interactive workloads could either be moved and executed in the geographical locations offering the lowest energy prices, or be queued and delayed to execute later (during the day or at night) when renewables, such as solar and wind energy, are at their peak. However, these options may have negative impacts on energy consumption, workload performance, and users' costs. Therefore, to ensure energy, performance, and cost efficiency, appropriate workload scheduling, placement, migration, and resource management techniques are required to manage the infrastructure resources, workloads, and energy sources. In this paper, we propose a workload placement policy and three different migration policies that maximize providers' revenues, ensure workload performance, and reduce energy consumption, along with reducing ecological impacts and users' costs. Using real workload traces and electricity prices for several geographical locations and distributed, heterogeneous datacenters, our experimental evaluation suggests that the proposed approaches can save a significant amount of energy (~15.26%), reduce service monetary costs (~0.53% to ~19.66%), improve (~1.58%) or at least maintain the expected level of application performance, and increase providers' revenue.
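A minimal sketch of the kind of energy-aware placement decision described above is given below, assuming each site reports an electricity price and an estimate of its renewable surplus. The field names, numbers, and cost model are illustrative assumptions, not the paper's actual placement or migration policies.

```python
# Choose the site with the lowest expected grid-energy cost for a non-interactive job,
# treating renewable-covered energy as free (simplifying assumption).
def place_workload(job_kwh, sites):
    """sites: list of dicts with 'name', 'price_per_kwh', 'renewable_surplus_kwh'."""
    def grid_cost(site):
        uncovered_kwh = max(job_kwh - max(site["renewable_surplus_kwh"], 0.0), 0.0)
        return uncovered_kwh * site["price_per_kwh"]

    return min(sites, key=grid_cost)["name"]


sites = [
    {"name": "eu-north", "price_per_kwh": 0.08, "renewable_surplus_kwh": 6.0},
    {"name": "us-east",  "price_per_kwh": 0.12, "renewable_surplus_kwh": 0.0},
]
print(place_workload(job_kwh=8.0, sites=sites))   # -> 'eu-north'
```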
Over the last few years, outsourcing data to cloud storage services has become an appealing trend, as it avoids the effort of maintaining and administering large volumes of data in-house. In cloud storage services, deduplication is often exploited to minimize the capacity and bandwidth requirements of the service by erasing repetitive data and storing only a single copy of it. Proof-of-ownership mechanisms allow any possessor of identical data to prove to the cloud storage server, in a dynamic way, that they actually hold the data. In storage services hosting enormous amounts of data, the storage servers aim to minimize the volume of cached data, and customers want to verify the integrity of their data at a reasonable cost. We propose a Secure Deduplication and Virtual Auditing of Data in Cloud (SDVADC) mechanism that realizes both integrity auditing and deduplication of data in the cloud. The mechanism supports secure deduplication of data and efficient virtual auditing of documents during the download process. In addition, the proposed mechanism lowers the burden on the data owner of auditing documents himself, and there is no need to delegate auditing to a Third Party Auditor (TPA). Experimental results demonstrate that virtual auditing has a low auditing time cost relative to existing public auditing schemes.
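The interplay between deduplication and ownership checking described above can be sketched as follows. The byte-offset challenge is a toy stand-in for a real proof-of-ownership protocol (which would typically use Merkle-tree challenges), and the class and method names are assumptions rather than the SDVADC construction.

```python
import hashlib
import secrets


class DedupStore:
    """Stores each unique blob once; duplicate claims must answer a simple ownership challenge."""

    def __init__(self):
        self.blobs = {}    # sha256 hex -> bytes
        self.owners = {}   # sha256 hex -> set of user ids

    def upload(self, user_id: str, fingerprint: str, fetch_data):
        if fingerprint in self.blobs:
            # Deduplicated path: ask the client to prove it really holds the data.
            stored = self.blobs[fingerprint]
            offset = secrets.randbelow(len(stored))
            if fetch_data()[offset] != stored[offset]:
                raise PermissionError("proof of ownership failed")
        else:
            data = fetch_data()
            assert hashlib.sha256(data).hexdigest() == fingerprint
            self.blobs[fingerprint] = data
        self.owners.setdefault(fingerprint, set()).add(user_id)


store = DedupStore()
payload = b"backup archive contents"
fp = hashlib.sha256(payload).hexdigest()
store.upload("alice", fp, lambda: payload)
store.upload("bob", fp, lambda: payload)    # the second copy is not stored again
```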
Understanding human interests and intents from movement data is a fundamental challenge for any location-based service. With the pervasiveness of sensor-embedded smartphones and wireless networks, the availability of spatio-temporal mobility traces (timestamped location information) is growing rapidly. Analysing this huge amount of mobility data is another major concern. This paper proposes a cloud-based framework, named Movcloud, to efficiently manage and analyse mobility data. Specifically, the framework presents a hierarchical indexing schema to store trajectory data at different spatio-temporal resolutions, clusters the trajectories based on semantic movement behaviour instead of raw latitude-longitude points alone, and resolves mobility queries using the MapReduce paradigm. Movcloud is implemented on the Google Cloud Platform (GCP), and an extensive set of experiments on real-life data demonstrates the effectiveness of the proposed framework: Movcloud achieves ~28% better clustering accuracy and executes three times faster than the baseline methods.
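To make the idea of a hierarchical spatio-temporal index concrete, a minimal sketch follows. The cell sizes, time buckets, and key layout are illustrative assumptions rather than Movcloud's actual indexing schema.

```python
# Coarser spatial cells and longer time buckets at higher levels of the hierarchy.
def index_key(lat: float, lon: float, ts_hours: float, level: int) -> str:
    cell = [1.0, 0.1, 0.01][level]      # degrees per cell at each level (assumption)
    bucket = [24, 6, 1][level]          # hours per time bucket at each level (assumption)
    return f"L{level}:{int(lat // cell)}:{int(lon // cell)}:{int(ts_hours // bucket)}"


# The same point is stored under one key per resolution level.
print(index_key(12.9716, 77.5946, ts_hours=37.5, level=0))
print(index_key(12.9716, 77.5946, ts_hours=37.5, level=1))
print(index_key(12.9716, 77.5946, ts_hours=37.5, level=2))
```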
ISBN (digital): 9781728152868
ISBN (print): 9781728152875
Geospatial data analysis is an emerging area of research today owing to its potential to enable a variety of location-aware services. Existing centralized, cloud-based analysis becomes time- and compute-intensive when processing huge amounts of geospatial data. This paper addresses the challenge of time and power efficiency in QoS-aware geospatial query resolution. We propose a cloudlet-based hierarchical paradigm, named Geo-cloudlet, in which the cloudlets hold the geospatial data of the districts, while state-level and national-level geospatial data are stored in the state cloud and the country cloud, respectively. A query is resolved by the cloudlet, the state cloud, or the country cloud, depending on the geographical region the query refers to. The experimental analysis illustrates that the proposed Geo-cloudlet architecture reduces latency by up to 61.3% and power consumption by up to 61.1% compared with using only remote cloud servers for geospatial query resolution.
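The tier-selection logic can be sketched as follows, assuming a query is described by the set of districts it touches. The tier names and the containment test are simplifying assumptions, not the exact Geo-cloudlet routing rules.

```python
# Resolve at the lowest tier whose coverage contains the query region.
def route_query(query_districts, cloudlet_district, state_districts):
    scope = set(query_districts)
    if scope <= {cloudlet_district}:
        return "cloudlet"           # lowest latency/power: data is held locally
    if scope <= set(state_districts):
        return "state cloud"
    return "country cloud"


print(route_query(["D3"], "D3", ["D1", "D2", "D3"]))          # -> cloudlet
print(route_query(["D2", "D7"], "D3", ["D1", "D2", "D3"]))    # -> country cloud
```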
With the cloud storage service furnished by cloud computing, users can conveniently organize themselves into a group and share data efficiently. To enable a public verifier to audit the shared data, clients in the group need to compute signatures on all chunks of the collaborative data. Each client in the group modifies and signs his respective chunks and uploads them to the cloud server; hence, particular chunks of the shared data are normally signed by particular clients. If any client is found to be malicious, he is immediately revoked from the group, and the remaining clients are permitted to re-sign the chunks that were previously signed by the revoked client. This approach is inefficient because of the massive amount of collaborative data in the cloud. By exploiting proxy re-signatures, the cloud service provider (CSP) is instead authorized to re-sign chunks on behalf of the remaining clients during client revocation. Moreover, when many clients upload the same data to cloud storage, the storage holds identical copies, so deduplication is usually employed to lower the capacity and bandwidth requirements of the service by removing repetitious data and keeping only a single copy. To integrate both data integrity and deduplication in the cloud, we present a novel Secure Two Level Deduplication and Auditing of Shared Data in Cloud (STLDAS) mechanism. Experimental results show that our mechanism achieves secure deduplication and an appreciable improvement in tag generation.
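The bookkeeping side of revocation-time re-signing can be sketched as below. The proxy re-signature itself is abstracted behind a placeholder callable, and the data structures are assumptions for illustration, not the STLDAS protocol.

```python
# On revocation, re-sign (via a proxy re-signature, abstracted as re_sign) only the chunks
# that the revoked user had signed, reassigning them to a remaining group member.
def revoke_user(chunk_signers, revoked, successor, re_sign):
    """chunk_signers: dict chunk_id -> signer; re_sign(chunk_id, old, new) returns a new tag."""
    new_tags = {}
    for chunk_id, signer in chunk_signers.items():
        if signer == revoked:
            new_tags[chunk_id] = re_sign(chunk_id, revoked, successor)
            chunk_signers[chunk_id] = successor
    return new_tags


signers = {"c1": "alice", "c2": "bob", "c3": "alice"}
tags = revoke_user(signers, "alice", "bob", lambda c, old, new: f"tag({c},{new})")
print(signers, tags)
```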
ISBN (digital): 9781538677964
ISBN (print): 9781538677971
As the Internet of Things (IoT) becomes overpopulated with a multitude of objects, services, and interactions, efficiently locating the most relevant object is emerging as a major obstacle. Over the last few years, the Social Internet of Things (SIoT) paradigm, in which objects independently establish social relationships among themselves, has become popular, as it provides a number of appealing characteristics that boost network navigability and support reliable discovery approaches. Given a large-scale deployment of socially connected objects, finding the shortest path to reach a service provider remains a fundamental challenge. Most existing search techniques do not account well for the physical significance of the objects and do not consider the geographical location of mobile objects. In this paper, to improve search performance over the SIoT, we propose a novel object search mechanism based on physical location proximity and the social context of users in social communities. The results show an improvement over the existing search technique in terms of average path length.
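A search that blends social and physical costs can be sketched as a weighted shortest-path query over the object graph. The blending factor alpha and the edge attributes below are assumptions, not the paper's exact formulation.

```python
import heapq


def find_provider(graph, source, providers, alpha=0.5):
    """graph: node -> list of (neighbour, social_cost, physical_km); returns (provider, cost)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in providers:
            return node, d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, social_cost, physical_km in graph.get(node, []):
            # Combined edge weight: closer friends and physically nearer objects cost less.
            nd = d + alpha * social_cost + (1 - alpha) * physical_km
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None, float("inf")


g = {"me": [("friend", 0.2, 1.0), ("stranger", 0.9, 0.3)],
     "friend": [("sensor", 0.1, 0.5)],
     "stranger": [("sensor", 0.8, 0.2)]}
print(find_provider(g, "me", {"sensor"}))
```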
ISBN (digital): 9781538677964
ISBN (print): 9781538677971
The demand for outsourcing data has increased enormously over the last decade, and numerous cloud computing service providers have emerged (e.g., Microsoft Azure, Dropbox) to satisfy the requirements for data storage and high-performance computation. Customers using cloud storage services can conveniently organize themselves into a group and share data among themselves. The data owner computes signatures for every chunk and uploads them to the cloud server so that a public verifier can perform public integrity verification on the data stored there. In the Panda scheme [1], using proxy re-signatures, the Cloud Service Provider (CSP) verifies and re-signs the revoked customer's chunks on behalf of the existing customers. A malicious CSP, however, might deliberately use the re-sign key to transform one customer's signature into another's. Apart from this, collusion between the misbehaving cloud server and the revoked customer can reveal the private key information of the customers in the group. We propose Secure Auditing and Re-signing of Revoked Customer Chunks by Cloud Using a Regression Method. The re-key computed by the data owner using the regression method is highly secure, and the misbehaving cloud cannot recover the private information of the customers in the group. Our mechanism is collusion resistant, reduces the computation cost of the re-sign key for the data owner, and additionally enables the CSP to securely perform auditing and re-signing of the revoked customer's chunks.
This paper proposes an architectural framework for the efficient orchestration of containers in cloud environments. It centres around resource scheduling and rescheduling policies as well as autoscaling algorithms that enable the creation of elastic virtual clusters. In this way, the proposed framework enables a computing environment to be shared among different client applications packaged in containers, including web services, offline analytics jobs, and backend pre-processing tasks. The devised resource management algorithms and policies will improve utilization of the available virtual resources and reduce operational cost for the provider while satisfying the resource needs of the various types of applications. The proposed algorithms will take into consideration factors previously omitted by other solutions, including 1) the pricing models of the acquired resources, 2) the fault tolerance of the applications, and 3) the QoS requirements of the running applications, such as the latencies and throughputs of the web services and the deadlines of the analytics and pre-processing jobs. The proposed solutions will be evaluated by developing a prototype platform on top of one of the existing container orchestration platforms.
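One way such pricing-, fault-tolerance-, and QoS-aware placement could be expressed is a per-node scoring function like the sketch below. The field names, weights, and scoring rule are illustrative assumptions rather than the framework's actual algorithms.

```python
# Score candidate nodes for a container; cheaper (e.g. spot/preemptible) capacity is
# only considered for fault-tolerant workloads, and latency-sensitive services prefer
# nodes with lower expected latency.
def placement_score(node, container):
    if node["spot"] and not container["fault_tolerant"]:
        return float("-inf")          # never place fragile services on preemptible capacity
    fit = min(node["free_cpu"] / container["cpu"], node["free_mem"] / container["mem"])
    if fit < 1.0:
        return float("-inf")          # the container does not fit on this node
    price_bonus = 1.0 / node["price_per_hour"]
    latency_penalty = node["expected_latency_ms"] if container["latency_sensitive"] else 0.0
    return price_bonus - 0.01 * latency_penalty


nodes = [
    {"spot": True,  "free_cpu": 4, "free_mem": 8, "price_per_hour": 0.03, "expected_latency_ms": 40},
    {"spot": False, "free_cpu": 2, "free_mem": 4, "price_per_hour": 0.10, "expected_latency_ms": 5},
]
web = {"cpu": 1, "mem": 2, "fault_tolerant": False, "latency_sensitive": True}
best = max(nodes, key=lambda n: placement_score(n, web))
print(best["price_per_hour"])   # the on-demand node wins for the fragile, latency-sensitive service
```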