For the implementation of distributed storage frameworks in mobile crowd sensing (MCS), compressed sensing (CS) theory provides significant support, chiefly because each CS measurement encodes global information about the sensed signal. Consequently, when measurement resources are limited, their rational allocation becomes the most critical factor affecting recovery accuracy when CS is used to recover the data. Unfortunately, the latest distributed storage frameworks do not account for the importance of measurement resource allocation, which directly leads to a significant loss of data recovery accuracy. To address this issue, this article proposes a volatility-based allocation strategy for measurement resources. First, we partition the target monitoring region into blocks. Next, we quantify the magnitude of fluctuation between adjacent reconstructed data as a volatility metric, which is used to assess the importance of the different areas. Finally, a volatility-based measurement allocation scheme is proposed that fully accounts for the importance of different areas. Notably, introducing the concept of "volatility" into MCS makes it feasible to correctly differentiate the importance of individual parts of the target monitoring region without any prior knowledge, using only very coarse recovered data. In addition, extensive experiments show that our measurement allocation scheme improves data recovery accuracy by 44% for unevenly distributed data and by 25% for evenly distributed data, compared with the random measurement allocation used in the state-of-the-art MCS distributed storage framework.
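To make the allocation idea concrete, here is a minimal Python sketch of a volatility-driven budget split. All names, and the specific volatility formula (mean absolute difference between successive coarse reconstructions), are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def allocate_measurements(coarse_frames, total_budget, min_per_block=1):
    """Hypothetical sketch: split a measurement budget across region blocks
    in proportion to the fluctuation ('volatility') observed between
    successive coarse CS reconstructions of each block."""
    # coarse_frames: shape (T, num_blocks), per-block aggregates from
    # T consecutive rough reconstructions.
    volatility = np.abs(np.diff(coarse_frames, axis=0)).mean(axis=0)
    weights = volatility / volatility.sum()
    # Rounding means the sum only approximates total_budget.
    return np.maximum(min_per_block,
                      np.round(weights * total_budget)).astype(int)

# Toy usage: block 2 fluctuates most, so it receives most measurements.
frames = np.array([[1.0, 1.1, 5.0, 2.0],
                   [1.0, 1.2, 9.0, 2.1],
                   [1.1, 1.1, 2.0, 2.0]])
print(allocate_measurements(frames, total_budget=40))  # e.g. [ 1  1 38  1]
```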
Large volumes of data are generated daily by IoT networks, healthcare, and many other applications, which makes secure, reliable, and cost-effective data storage a critical part of computing infrastructure. Existing data storage largely depends on centralized clouds, which is not only costly but also vulnerable to single points of failure and other security attacks. Moreover, cloud providers have full access to user data and revision history, beyond user control. To protect the data, encryption must be used, which requires extensive computing power and cumbersome key management. The distributed storage system (DSS) is widely viewed as a natural solution for future online data storage due to improved access time and lower storage cost. However, existing DSSs also suffer from low storage efficiency and weak data security. In this article, we investigate multi-layer code-based distributed data storage systems that achieve inherent content confidentiality and optimal storage efficiency. Our comprehensive performance analysis shows that the optimal code improves the feasible region of reliable data storage by 50% under various adversarial attack scenarios.
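As a rough illustration of the storage-efficiency/reliability trade-off behind the "feasible region" claim, the sketch below compares (n, k) erasure-coded configurations under independent node failures; the parameters are hypothetical and unrelated to the paper's multi-layer construction:

```python
from math import comb

def survival_prob(n, k, p):
    """Probability that an (n, k) erasure-coded object stays recoverable
    when each of its n storage nodes independently survives with prob p:
    at least k of n nodes must be alive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 3-way replication is a (3, 1) code; coded storage trades a little
# reliability for much lower overhead at the same node survival rate.
for n, k in [(3, 1), (6, 4), (9, 6)]:
    print(f"(n={n}, k={k}) overhead={n/k:.2f}x "
          f"reliability={survival_prob(n, k, p=0.9):.4f}")
```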
Many searchable encryption schemes have been proposed for cloud and fog computing, using fog nodes (or fog servers) to take over part of the computational tasks. However, these schemes still leave most computational tasks to cloud servers, which results in large communication costs between edge devices and cloud servers. Therefore, in this paper we propose a self-verifiable attribute-based keyword search scheme for distributed data storage (SV-KSDS) in full fog computing, where each decryption operation on the data requested by a user must satisfy the decryption rule negotiated among fog servers. Our SV-KSDS scheme first provides attribute-based distributed data storage among fog servers through a (w, σ) threshold secret-sharing scheme, where fog servers can provide self-verifiable keyword search and data decryption for terminal users. Compared with data storage in cloud computing, our scheme extends it to a distributed structure while providing fine-grained access control for distributed data storage through attribute-based encryption. The access control policy of our scheme is built on a linear secret-sharing scheme, whose security reduces to the decisional bilinear Diffie-Hellman assumption against chosen-keyword attacks and the decisional q-parallel bilinear Diffie-Hellman assumption against chosen-plaintext attacks in the standard model. Theoretical analysis and practical testing show that our SV-KSDS scheme incurs lower computation and communication costs, further offloading computational tasks from terminal users to fog servers and thus reducing the computing costs of terminal users.
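The (w, σ) threshold secret sharing the scheme builds on can be sketched with the textbook Shamir construction over a prime field; the field choice and variable names below are illustrative assumptions, not SV-KSDS itself:

```python
import random

P = 2**127 - 1  # a Mersenne prime field for the shares (illustrative choice)

def share(secret, w, sigma):
    """Split `secret` into `sigma` shares so any `w` recover it:
    evaluate a random degree-(w-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(w - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, sigma + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any w shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = share(123456789, w=3, sigma=5)   # 5 fog servers, any 3 suffice
assert reconstruct(shares[:3]) == 123456789
```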
ISBN (print): 9781728175423
Current trends, such as the growing number of devices connected to the Internet, the exponential increase in the volume of information, and the development of network and cloud technologies, are changing all areas of activity. The global increase in network traffic requires businesses to configure large-scale computing systems and networks. Introducing new digital services in various fields calls for innovative approaches to building computing infrastructures for processing, transmitting, and storing large data arrays. The aim of this work is to develop a software component for building a digital cloud platform for distributed data storage in higher education. This work proposes an architecture and software tools for distributed processing and storage of user data on the digital cloud platform of a higher educational institution. The main class diagrams of the developed cloud platform and the principles of interaction of its main components are described in detail. The operation and the features of building the network infrastructure of the developed digital cloud platform for distributed data storage are considered. The platform supports immediate connection and deployment of new digital services through a wide range of software components and open APIs.
ISBN (print): 9783030018184
This paper deals with the dependability of information and control systems. First, the "criterion delegating" approach is briefly presented. This approach, developed earlier, improves the reliability of system elements by reducing the number of criteria within the configuration-forming problem. Since this approach requires a data storage to distribute up-to-date context data of monitoring and control tasks throughout the system, models of such a storage are developed and presented. We develop two types of models, a centralized one (based on the Viewstamped Replication protocol) and a fully decentralized one. The models are analyzed and discussed in terms of communication-environment workload during the operation and reconfiguration stages of the system.
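For intuition, a single-process toy of the centralized (primary-backup) model in the spirit of Viewstamped Replication is sketched below; real VR also handles view changes, recovery, and failures, all omitted here, and every name is hypothetical:

```python
class Replica:
    """Toy replica: an operation log plus a commit point."""
    def __init__(self, rid):
        self.rid, self.log, self.commit_point = rid, [], 0

def replicate(primary, backups, op):
    """Primary appends op, backups acknowledge (PREPARE / PREPARE-OK),
    and the op commits once a majority of replicas hold it."""
    primary.log.append(op)
    acks = 1                       # the primary counts itself
    for b in backups:
        b.log.append(op)           # PREPARE: backup copies the log entry
        acks += 1                  # PREPARE-OK
    if acks >= (len(backups) + 1) // 2 + 1:   # majority quorum
        primary.commit_point = len(primary.log)
        for b in backups:
            b.commit_point = len(b.log)       # COMMIT piggybacked

primary, backups = Replica(0), [Replica(1), Replica(2)]
replicate(primary, backups, "store(context_data)")
print(primary.commit_point)  # 1: the operation is durable on a majority
```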
Reliability and energy efficiency are two key considerations when designing a compressive sensing (CS)-based data-gathering scheme. Most researchers assume there is no packet loss and thus focus only on reducing energy consumption in wireless sensor networks (WSNs), setting reliability concerns aside. To balance the performance-energy trade-off in lossy WSNs, a distributed data storage (DDS) and gathering scheme based on CS (CS-DDSG) is introduced, which combines CS and DDS. CS-DDSG utilizes broadcast properties to resist the impact of packet loss. Constraints are imposed on how neighboring nodes process received packets, decreasing the volume of both transmissions and receptions. The mobile sink randomly queries nodes and constructs a measurement matrix from the received data, so as to avoid measuring lossy nodes. Additionally, we demonstrate that this measurement matrix satisfies the restricted isometry property. To analyze the efficiency of the proposed scheme, an expression for the total number of transmissions and receptions is formulated via random geometric graph theory. Simulation results indicate that our scheme achieves high precision over unreliable links and reduces the number of transmissions, receptions, and fusions. Thus, the proposed CS-DDSG approach effectively balances energy consumption and reconstruction accuracy.
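How a sink can recover the whole field from a few queried nodes can be illustrated with a generic CS decoder; the sketch below uses Gaussian measurements and Orthogonal Matching Pursuit as stand-ins, not CS-DDSG's actual measurement matrix or solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy field of N readings, K-sparse in the canonical basis purely for
# illustration (real readings are sparse in some transform basis).
N, K, M = 100, 5, 30
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)

# Each queried node contributes one random linear combination of the
# readings, so the sink's M collected rows form the matrix Phi.
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print(np.linalg.norm(omp(Phi, y, K) - x))  # near zero on success
```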
Achieving reliability in Wireless Sensor Networks (WSNs) is challenging due to the limited resources available. In this study, we investigate the design of data survivability schemes using decentralized storage systems in WSNs. We propose a data storage system design based on Decentralized Erasure Codes (DEC) that features a simple and decentralized construction of the target code. The proposed framework allows sensor nodes to cooperate to build an erasure code-based storage that can tolerate a given failure/erasure rate. Both code construction and decoding can be performed randomly, allowing for distributed operation with no prior setup or coordination between source nodes. Further, we present two approaches that utilize Random Linear Network Coding (RLNC) to enhance the proposed scheme for energy efficiency. We present the theoretical basis of the schemes, then validate and evaluate their performance through simulations. (C) 2016 Elsevier B.V. All rights reserved.
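The RLNC building block can be sketched as follows: each storage node keeps a random linear combination of the source packets, and any k independent combinations suffice to decode. The prime field GF(257) and packet sizes are illustrative choices only:

```python
import random

P = 257  # small prime field for coefficients and symbols (illustrative)

def rlnc_encode(sources, n):
    """Produce n coded packets, each a random linear combination of the
    k source packets over GF(P), with the coefficients kept as a header."""
    k, L = len(sources), len(sources[0])
    coded = []
    for _ in range(n):
        c = [random.randrange(P) for _ in range(k)]
        payload = [sum(c[j] * sources[j][i] for j in range(k)) % P
                   for i in range(L)]
        coded.append((c, payload))
    return coded

def rlnc_decode(packets, k):
    """Gauss-Jordan elimination over GF(P); assumes the k chosen packets
    have independent coefficient vectors (true w.h.p. for random coding)."""
    rows = [list(c) + list(p) for c, p in packets[:k]]
    for col in range(k):
        pivot = next(r for r in range(col, k) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows]

sources = [[10, 20, 30], [40, 50, 60]]
coded = rlnc_encode(sources, n=5)   # e.g. 5 storage nodes, any 2 suffice
print(rlnc_decode(coded[:2], k=2))  # -> [[10, 20, 30], [40, 50, 60]]
```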
In the past, people focused on cluster computing and grid computing; now the focus has shifted to cloud computing. Whatever techniques are used, there are always storage requirements, and the challenge is the huge amount of data to be stored and its complexity. People now use many cloud applications, so service providers must serve increasingly many users, causing more connections and substantially more data. These problems could be solved in the past, but in the age of cloud computing they have become more complex. This paper focuses on cloud computing infrastructure, especially data services. The goal is to implement a high-performance, load-balanced, and replicable system that provides data storage for private cloud users through a virtualization system. The system extends and enhances the functionality of the Hadoop distributed system. The proposed approach also implements a resource monitor of machine status factors such as CPU, memory, and network usage to help optimize the virtualization and data storage systems. To demonstrate and extend the usability of this design, a synchronization app running on Android was also developed on top of our distributed data storage.
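A minimal sketch of such a resource monitor, assuming the psutil library and a hypothetical 10-second reporting period (the paper's actual monitor and metrics pipeline are not specified here):

```python
import time
import psutil  # assumed available; common system-metrics library

def sample_node_status():
    """One sample of the machine-status factors the abstract mentions
    (CPU, memory, network), suitable for feeding a placement/load balancer."""
    net0 = psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=1.0)   # percent over a 1 s window
    mem = psutil.virtual_memory().percent
    net1 = psutil.net_io_counters()
    return {
        "cpu_percent": cpu,
        "mem_percent": mem,
        "net_bytes_per_s": (net1.bytes_sent + net1.bytes_recv)
                           - (net0.bytes_sent + net0.bytes_recv),
    }

for _ in range(3):                # bounded loop for the demo
    print(sample_node_status())   # a real monitor would push this to a master
    time.sleep(10)
```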
Distributed data storage (DDS) provides a promising approach to reliably recovering all sensor readings in a wireless sensor network (WSN) by visiting only a small subset of sensor nodes. To reduce the number of transmissions/receptions, various DDS schemes based on compressive sensing (CS) have been proposed in the literature. However, these schemes exploit only the spatial correlation among sensor readings from geographically neighboring nodes, ignoring the potential temporal correlation over multiple time slots within a frame duration. This results in energy-inefficient DDS within the WSN. In this letter, we take a new approach and exploit spatial and temporal (spatiotemporal) correlations among sensor readings simultaneously. A novel DDS coding scheme, referred to as spatiotemporal compressive network coding (ST-CNC), is proposed to collect sensor readings across the WSN in a more energy-efficient manner. Compared with existing CS-based DDS schemes, the proposed scheme significantly reduces the number of transmissions and receptions while providing similar recovery performance.
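The gain from joint coding can be illustrated numerically: a spatiotemporally correlated frame is far more compressible as a whole than slot by slot. The toy field and 2-D DCT below are illustrative assumptions, not the ST-CNC code itself:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(1)
N, T = 50, 8   # sensor nodes x time slots within one frame

# Toy spatiotemporal field: smooth in space (random walk) and drifting
# slowly in time, i.e. strong spatial AND temporal correlation.
space = np.cumsum(rng.normal(size=N))
space -= space.mean()
frame = np.outer(space, np.linspace(1.0, 1.2, T))      # shape (N, T)

# A 2-D transform that sparsifies jointly in space and time concentrates
# nearly all the frame energy in a handful of coefficients; per-slot CS
# would pay for the spatial structure separately in each of the T slots.
coef = dctn(frame, norm="ortho")
energy = np.sort(np.abs(coef.ravel()))[::-1] ** 2
k = int(np.searchsorted(np.cumsum(energy) / energy.sum(), 0.99)) + 1
print(f"{k} of {N * T} joint coefficients hold 99% of the energy")
```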
Compared with cloud computing-based data storage, distributed data storage in fog computing is more vulnerable to malicious attacks. It is therefore necessary to provide a secure distributed auditing mechanism that protects the identity privacy of data owners and controls the identities of auditors under fog computing-based data storage. In this paper, we propose a dual attribute-based auditing scheme for dynamic data storage in fog computing. Our auditing scheme protects the identity privacy of data owners and provides attribute-based access control for the corresponding audits, with distributed collaborative verification between the related fog servers. In our scheme, a data owner can securely upload his divided and blinded file blocks, with the corresponding block authenticators (related to his attribute set), to the related fog servers. To prevent malicious auditors from consuming system resources by abusing audit requests, the data owner can impose attribute-based access control on audits, specifying the attribute set of the auditors who have the right to check the integrity of the related data. Further, a distributed collaborative verification mechanism between the related fog servers is constructed to reduce the disadvantages of centralized verification, where Shamir's secret-sharing method decomposes the chosen blinding factor into sub-secrets sent to each fog server. Compared with cloud computing-based data storage, our collaborative verification mechanism implements distributed auditing consent on stored data among multiple fog servers, and our scheme can further identify specific suspicious fog servers. Additionally, we provide a dynamic data operation mechanism to efficiently support the updating of users' data under fog computing-based data storage. Related theoretical analysis and experimental evaluation show that our scheme is secure and efficient.
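The collaborative unblinding step can be sketched as follows; for brevity this toy uses an n-of-n additive split of the blinding factor rather than the paper's Shamir threshold sharing, and all names are hypothetical:

```python
import hashlib
import random

Q = 2**61 - 1  # illustrative prime modulus for the blinding arithmetic

def blind_and_split(block, n):
    """Hypothetical sketch: the owner blinds a file block with a random
    factor r, then splits r additively so that all n fog servers must
    cooperate to unblind it during verification."""
    r = random.randrange(Q)
    blinded = (block + r) % Q
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((r - sum(shares)) % Q)   # shares sum to r mod Q
    return blinded, shares

def collaborative_verify(blinded, shares, expected_digest):
    """Each server contributes its sub-secret; only the combined value
    recovers the original block for the integrity check."""
    r = sum(shares) % Q
    block = (blinded - r) % Q
    return hashlib.sha256(str(block).encode()).hexdigest() == expected_digest

block = 424242
digest = hashlib.sha256(str(block).encode()).hexdigest()
blinded, shares = blind_and_split(block, n=4)
print(collaborative_verify(blinded, shares, digest))  # True
```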