ISBN: (Print) 9781450336239
Android's permission system offers an all-or-nothing installation choice for users. To make it more flexible, users may choose a popular class of app tools, called permission managers, to selectively grant or revoke an app's permissions at runtime. A fundamental requirement for such a permission manager is that the granted or revoked permissions be enforced faithfully. However, we discover that none of the existing permission managers meet this requirement, due to permission leaks. To address this problem, we propose CICC, a fine-grained, semantic-aware, and transparent approach that enables any permission manager to defend against permission leaks. Compared to existing solutions, CICC is fine-grained because it detects permission leaks using call-chain information at the component-instance level, instead of at the app level or component level. This fine-grained design minimizes the impact on the usability of running apps. CICC is semantic-aware in the sense that it manages call chains throughout the whole lifecycle of each component instance. CICC is transparent to users and app developers, and it requires only minor modifications to permission managers. Our evaluation shows that CICC incurs relatively low performance overhead and power consumption. Copyright 2015 ACM.
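The core idea of the abstract above, checking permissions against the whole inter-component call chain rather than only the direct caller, can be illustrated with a minimal sketch. All names here (`ComponentInstance`, `check_permission`, the revocation set) are hypothetical illustrations, not the paper's actual API:

```python
# Hypothetical sketch of call-chain-based permission checking: a revoked
# app must not obtain a permission by proxying its request through a
# privileged component in another app.

REVOKED = {("com.example.app", "READ_CONTACTS")}  # (app, permission) pairs

class ComponentInstance:
    def __init__(self, app, component):
        self.app = app
        self.component = component
        self.call_chain = []  # apps that reached this instance via ICC

    def receive_call(self, caller):
        # Propagate the caller's chain so transitive requests stay visible.
        self.call_chain = caller.call_chain + [caller.app]

def check_permission(instance, permission):
    # Deny if ANY app on the chain, not just the direct caller, has had
    # this permission revoked -- an app-level check would miss the proxy.
    for app in instance.call_chain + [instance.app]:
        if (app, permission) in REVOKED:
            return False
    return True
```

A revoked app calling a trusted service therefore fails the check, while its own unrevoked permissions still work, which is the "minimal impact on usability" property the abstract claims.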
Data outsourced to the cloud, and the results of computations on it, are not always trustworthy, because data owners lack physical possession of and control over the data as a result of virtualization, replication, and migration techniques. ...
Recently, security issues have been obstructing the development and use of cloud computing services. Authentication and integrity play an important role in cloud security, and numerous concerns have been raised to rec...
It is a huge challenge to deploy a cloud computing system in large-scale data centers. To help resolve this issue, we propose an automatic cloud system deployment approach with the characteristics of reliabil...
Hadoop/MapReduce has emerged as a de facto programming framework to explore cloud-computing resources. Hadoop has many configuration parameters, some of which are crucial to the performance of MapReduce jobs. In pract...
Indexing microblogs for real-time search is challenging, because new microblogs are created at tremendous speed and user queries are constantly changing. To guarantee that users obtain complete query results, micro-...
ISBN: (Print) 9781450332057
Modern GPUs have been widely used to accelerate graph processing for complicated computational problems in graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant overhead when implemented on GPUs. To this end, coloring algorithms are adopted to separate vertices with potential update conflicts, guaranteeing the consistency and correctness of the parallel processing. We propose a lightweight asynchronous processing framework called Frog with a hybrid coloring model. We find that the majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating sequential consistency. Accordingly, our solution separates the processing of vertices based on the distribution of colors.
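The coloring trick in the abstract above can be sketched in a few lines: because no two vertices of the same color share an edge, each color class can be updated in parallel without locks or atomics. This is a toy greedy-coloring version under our own assumptions, not Frog's hybrid model or its GPU implementation:

```python
# Color the graph greedily, then process each color class as one
# conflict-free batch; on a GPU, every vertex in a batch could be
# updated concurrently because none of them are adjacent.

def greedy_color(adj):
    """adj: dict vertex -> set of neighbors. Returns vertex -> color."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # pick the smallest unused color
            c += 1
        color[v] = c
    return color

def process_by_color(adj, update):
    color = greedy_color(adj)
    for c in range(max(color.values()) + 1):
        batch = [v for v in adj if color[v] == c]
        for v in batch:           # in reality: one parallel kernel launch
            update(v)
    return color
```

Frog's observation that about 80% of vertices fall into a few colors means most of the work sits in a few very large (hence highly parallel) batches.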
ISBN: (Print) 9781467375887
For datacenters with limited power supply, restricting the servers' power budget (i.e., the maximum power provided to servers) is an efficient approach to increasing server density (the number of servers per rack), which can effectively improve the cost-effectiveness of datacenters. However, this approach may also affect the performance of applications running on those servers. Hence, the prerequisite for adopting this approach in datacenters is to precisely evaluate the application performance degradation caused by restricting the servers' power budget. Unfortunately, existing evaluation methods are inaccurate because they are either improper or coarse-grained, especially for the latency-sensitive applications widely deployed in datacenters. In this paper, we analyze why state-of-the-art methods are not appropriate for evaluating the performance degradation of latency-sensitive applications under power restriction, and we propose a new evaluation method that can describe and evaluate such degradation precisely and at a fine granularity. We verify the proposed method with a real-world application and traces from a Tencent datacenter with 25,328 servers. The experimental results show that our method is much more accurate than the state of the art, and that datacenter efficiency can be significantly increased by saving the servers' power budget while keeping the applications' performance degradation within a controllable and acceptable range.
Software usually needs to be updated to fix bugs or add new features. On the other hand, some critical software, such as cloud applications, needs to provide service continuously and thus should be updated without downtime. Conventional Dynamic Software Updating (DSU) systems try to update programs while they are running, but they hardly consider the communication of the program being updated with other programs, which may lead to inconsistency problems. We handle this problem with an improved DSU system based on multi-version execution. When a new update arrives, instead of updating the application in place, we fork a new process of the old version and dynamically update it to the new version, then let the two versions run concurrently until the update finishes. We implement a prototype system called MUC (Multi-version for Updating of Cloud) on Linux. To verify our prototype, we apply MUC to the cloud applications Redis and Icecast, and evaluate the runtime overhead of MUC.
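The fork-and-run-both pattern described above can be sketched schematically: the old version keeps serving its in-flight requests while a forked, updated copy starts taking new traffic, and the old version retires once it drains. This is an illustrative process-level analogy using `multiprocessing`, not MUC's actual dynamic-updating mechanism:

```python
# Schematic multi-version execution: two versions serve concurrently
# during an update window instead of patching one process in place.

import multiprocessing as mp

def serve(version, requests, results):
    for req in requests:
        results.put((version, req * 2))  # stand-in for request handling

def multi_version_update(old_reqs, new_reqs):
    results = mp.Queue()
    # Old version drains the requests it already accepted...
    old = mp.Process(target=serve, args=("v1", old_reqs, results))
    # ...while the forked, updated copy handles new traffic.
    new = mp.Process(target=serve, args=("v2", new_reqs, results))
    old.start(); new.start()
    out = [results.get() for _ in range(len(old_reqs) + len(new_reqs))]
    old.join(); new.join()               # old version retires after draining
    return out
```

Running both versions side by side is what lets such a system avoid the mid-update inconsistency that arises when a half-updated program keeps talking to its peers.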
ISBN: (Print) 9781467371957
SKVM is a high-performance in-memory key-value (KV) store for multicore machines, designed for highly concurrent data access. Existing systems face several problems when handling highly concurrent data processing on multicore: lock contention, cache-coherency overhead, and large numbers of concurrent network connections. To solve these problems and make the in-memory KV store scale well on multicore, highly concurrent data access is divided into two steps: highly concurrent connection processing and highly concurrent data processing. A half-sync/half-async (HSHA) model is adopted to eliminate the network bottleneck, supporting highly concurrent network connections. Through data partitioning, lock contention is eliminated and cache movement is reduced. Furthermore, consistent hashing is adopted as the data distribution strategy, which improves the scalability of the system on multicore. Though some of these ideas appear elsewhere, SKVM is the first to combine them. The experimental results show that SKVM achieves up to 2.4× higher throughput than Memcached, and scales nearly linearly with the number of cores under any workload.
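Consistent hashing as a data-partitioning strategy, as mentioned above, can be sketched with a simple hash ring: each core owns several points on the ring, and a key is assigned to the first core clockwise from its hash, so changing the number of partitions remaps only a fraction of the keys. This toy ring is our own illustration, not SKVM's implementation:

```python
# Toy consistent-hash ring for partitioning keys across cores.

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, cores, vnodes=64):
        # Each core gets `vnodes` virtual points to smooth the distribution.
        self.ring = sorted(
            (self._hash(f"{core}:{i}"), core)
            for core in cores
            for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def core_for(self, key):
        # First ring point at or after the key's hash, wrapping around.
        i = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[i][1]
```

Because every key deterministically maps to one owner, same-key operations always land on the same partition, which is how per-partition data ownership removes the need for cross-core locking.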