Textile-Reinforced Concrete (TRC) is a composite material combining a woven fiber grid with fine-grained cement concrete. The fiber grid, made from carbon, aramid, or glass, is not corroded by the environment. Woven mesh has...
This study centers on the application of vertical federated learning technology in the context of Internet banking loans, with a particular focus on innovations in data privacy protection, risk control model algorithms, and secure multi-party computation. Currently, banking risk control strategies rely mainly on traditional data processing technologies, which often fall short in protecting user privacy and ensuring efficient data usage. We adopt vertical federated learning technology, offering an innovative solution for the Internet banking loan scenario. First, regarding data privacy protection, we propose a differential privacy mechanism to safeguard sensitive user data. Second, we innovatively apply risk control model algorithms, enabling collaborative modeling across multiple Internet loan platforms through federated learning. Furthermore, we introduce secure multi-party computation technology to ensure secure data transmission and confidentiality of the computation process during federated learning. Through empirical experiments on real Internet loan datasets, we validate the effectiveness and feasibility of the proposed methods. After deploying our risk control model, the credit approval rate increased from 3.44% to 18.2%, with a single-day high of 25.53%. The average loan amount increased by 7,700 yuan, and the average interest rate declined slightly by 0.48%, a significant improvement over traditional risk control models. This study offers innovative solutions for data privacy protection and risk control in the Internet loan scenario, providing safer and more reliable services for financial institutions and users. Moreover, the methods are practical and readily transferable, with the potential for broad impact across the industry.
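The abstract does not specify which differential privacy mechanism is used. As a hedged illustration of the general idea, a common building block in federated settings is to clip each party's local gradient and add Gaussian noise before it leaves the party; the sketch below is a minimal, generic example (the function name and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local gradient and add Gaussian noise before sharing it.

    A standard (epsilon, delta)-differential-privacy building block in
    federated learning: clipping bounds each party's sensitivity, and
    the noise scale is proportional to the clipping bound. All values
    here are illustrative, not the paper's.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# A party perturbs its local gradient before transmitting it, so raw
# user-sensitive statistics are never exposed to the other parties.
local_grad = np.array([0.8, -2.3, 0.1])
print(dp_noisy_gradient(local_grad))
```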
Processing-in-memory (PIM) is a promising way to address the well-known data movement challenge by performing in-situ computations near the data. Leveraging PIM features can substantially boost the energy efficiency of applications. Early studies focus mainly on improving programmability for computation offloading on PIM architectures. They lack a comprehensive analysis of computation locality and hence fail to accelerate a wide variety of applications. In this paper, we present a general-purpose instruction-level offloading technique for near-DRAM PIM architectures, namely IOTPIM, to exploit PIM features comprehensively. IOTPIM is novel in two technical advances: 1) a new instruction offloading policy that fully considers the locality of the whole on-chip cache hierarchy, and 2) an offloading performance benefit prediction model that directly predicts the offloading benefit of an instruction from the input dataset's characteristics, keeping analysis overheads low. The evaluation demonstrates that IOTPIM can be applied to accelerate a wide variety of applications, including graph processing, machine learning, and image processing. IOTPIM outperforms state-of-the-art PIM offloading techniques by 1.28×-1.51× while maintaining an average offloading accuracy as high as 91.89%.
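The abstract does not detail the prediction model itself. As a rough, hypothetical sketch of the underlying trade-off, an instruction is a good offload candidate when its expected memory latency through the cache hierarchy exceeds the latency of executing it near DRAM; all names and latency values below are invented for illustration, not IOTPIM's actual model:

```python
def should_offload(cache_hit_rate: float, pim_latency_ns: float,
                   cache_latency_ns: float = 10.0,
                   dram_latency_ns: float = 100.0) -> bool:
    """Predict whether offloading an instruction to near-DRAM PIM pays off.

    Expected host-side latency is modelled as a hit/miss weighted average
    over the cache hierarchy; offloading wins when PIM executes the
    operation faster than the expected host memory access. Latencies are
    illustrative placeholders.
    """
    expected_host_ns = (cache_hit_rate * cache_latency_ns
                        + (1.0 - cache_hit_rate) * dram_latency_ns)
    return pim_latency_ns < expected_host_ns

# Instructions touching data with poor locality are good offload candidates.
print(should_offload(cache_hit_rate=0.15, pim_latency_ns=40.0))  # True
print(should_offload(cache_hit_rate=0.95, pim_latency_ns=40.0))  # False
```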
ISBN (Print): 9781665404242
Recent works have proposed various distributed federated learning (FL) systems for the edge computing paradigm. These FL algorithms can assist pervasive applications in various aspects, e.g., decision making, pattern recognition, and behavior prediction. Existing solutions do not efficiently support training on real-time, location-specific data because, fundamentally, the "data collection" problem is rarely studied in the context of FL systems. To address this problem, we present a novel system, VC-SGD (Vehicular Clouds-Stochastic Gradient Descent), which seamlessly integrates the emerging concept of vehicular clouds with edge-based FL. We show that by using vehicular clouds as virtual edge servers, VC-SGD can effectively support FL algorithms that use real-time, location-specific data. We develop a general simulator that uses SUMO to simulate vehicle mobility and MXNet to perform real training, and use it to verify the efficacy of VC-SGD. The experimental results demonstrate that VC-SGD improves over existing solutions.
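The abstract does not spell out the aggregation protocol. The sketch below illustrates the generic edge-FL pattern it describes, with a vehicular cloud acting as a virtual edge server that aggregates its member vehicles' updates, using standard FedAvg weighting by sample count; all names and numbers are illustrative, not the paper's exact protocol:

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of model updates, the standard FedAvg step.

    In a VC-SGD-style setting (illustrative), a vehicular cloud acts as
    a virtual edge server: it aggregates updates from member vehicles
    trained on real-time, location-specific data, then forwards the
    aggregate upstream for the next FL round.
    """
    total = float(sum(weights))
    return sum(w / total * u for u, w in zip(updates, weights))

# Vehicles in one vehicular cloud report local updates and sample counts.
vehicle_updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.1, -0.3])]
sample_counts = [120, 80, 200]
edge_update = fedavg(vehicle_updates, sample_counts)  # virtual edge server step
print(edge_update)
```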
The prediction of environmental disasters, both technogenic and natural, is currently based on advances in mathematical modeling. The high cost of acquiring and maintaining computing clusters motivates research in t...
ISBN (Print): 9783030856656; 9783030856649
The Lattice Boltzmann method (LBM) is a promising approach to solving Computational Fluid Dynamics (CFD) problems; however, its memory-bound nature limits the performance of nearly all LBM algorithms on modern computer architectures. This paper introduces novel sequential and parallel 3D memory-aware LBM algorithms that optimize memory access performance. The new algorithms combine single-copy distribution, a single sweep, the swap algorithm, prism traversal, and the merging of two temporal time steps. We also design a parallel methodology to guarantee thread safety and reduce synchronizations in the parallel LBM algorithm. Finally, we evaluate their performance on three high-end manycore systems and demonstrate that our new 3D memory-aware LBM algorithms outperform the state-of-the-art Palabos software (which implements the Fuse Swap Prism LBM solver) by up to 89%.
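For readers unfamiliar with why LBM is memory-bound: each time step streams and collides a full set of particle populations per lattice cell, touching the entire grid with low arithmetic intensity. The baseline two-phase step below (plain D2Q9 BGK in NumPy, deliberately not the paper's fused, prism-traversed variant) makes visible the memory traffic that memory-aware algorithms target:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann step, illustrative baseline only.
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def step(f, tau=0.6):
    # Streaming: shift each population along its lattice velocity (periodic).
    for i, (cx, cy) in enumerate(C):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    # Macroscopic moments (density and velocity).
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the discrete equilibrium distribution.
    for i, (cx, cy) in enumerate(C):
        cu = cx * ux + cy * uy
        feq = W[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        f[i] += (feq - f[i]) / tau
    return f

f = np.ones((9, 32, 32)) * W[:, None, None]  # uniform fluid at rest
f = step(f)  # one full sweep: 9 populations read and written per cell
```

Techniques like the swap algorithm and merged time steps exist precisely to avoid the second full pass over `f` that this naive version performs.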
ISBN (Print): 9781728163444
With the growth of renewable energy, grid-connected inverters have seen increasing use as the grid interface. However, the large-scale connection of inverters has also brought resonance and stability problems to distributed systems. To address this issue, this paper first establishes an equivalent small-signal model of the multi-parallel structure of the multi-inverter grid-connected system and derives the expression for the output current of each inverter. Based on the phase characteristics of the inverter output in different frequency bands, output current expressions for synchronized and interleaved control periods are then obtained. On this basis, the influences of penetration level, control gain, and delay on harmonics and resonance are analyzed. Finally, a platform of four 30 kW inverters is set up, and a "resonance stability margin" method is proposed to verify the harmonic resonance characteristics of the inverter output current under high penetration.
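The paper's model cannot be reconstructed from the abstract alone. As a hedged illustration of why resonance depends on penetration, a standard approximation in the multi-paralleled inverter literature treats n identical LCL-filtered inverters as each seeing the grid inductance magnified n times, so the resonance frequency shifts downward as more inverters connect; component values below are illustrative, not those of the paper's 30 kW platform:

```python
import math

def lcl_resonance_hz(L1, L2, Cf, Lg, n_inverters):
    """Approximate resonance frequency of n identical LCL-filtered
    inverters in parallel with grid inductance Lg.

    Uses the common simplification that each inverter effectively sees
    n * Lg in series with its grid-side inductance L2. Illustrative only.
    """
    L_eq = L2 + n_inverters * Lg
    return math.sqrt((L1 + L_eq) / (L1 * L_eq * Cf)) / (2 * math.pi)

# Resonance drifts lower as penetration (number of inverters) grows.
for n in (1, 2, 4):
    print(n, round(lcl_resonance_hz(2e-3, 1e-3, 10e-6, 0.5e-3, n)), "Hz")
```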
ISBN (Print): 9781450388160
Rapidly generated data and the sheer volume of data-analytics jobs put great pressure on the underlying computing facilities. A distributed multi-cluster computing environment, such as a hybrid cloud, consequently becomes necessary because of its advantages in accommodating geographically distributed and potentially cloud-based computing resources. The clusters forming such an environment can be heterogeneous and resource-elastic as well. From an analytical perspective, driven by the increasing need for streaming applications and timely analysis, many data-analytics jobs nowadays are time-critical in terms of their temporal urgency, and the overall workload of the computing environment can be a hybrid of time-critical and general applications. All of this calls for an efficient resource management approach capable of capturing both computing environment and application features. However, the added complexity and high dynamics of the system greatly hinder the performance of traditional rule-based approaches. In this work, we propose to utilize deep reinforcement learning to develop elasticity-compatible resource management for a heterogeneous distributed computing environment, aiming for fewer missed temporal deadlines while maintaining a low average execution time ratio. Alongside reinforcement learning, we design a deep model employing a Long Short-Term Memory (LSTM) structure and partial model sharing as a multi-target learning mechanism. The experimental results show that the proposed approach greatly outperforms the baselines and serves as a robust resource manager for varying workloads.
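The abstract does not give the network architecture in detail. The sketch below is a hypothetical reading of "LSTM structure with partial model sharing": a shared LSTM trunk encodes the recent job/cluster state sequence, and separate heads learn the different targets. All class names, dimensions, and head semantics are assumptions, not the paper's design (PyTorch is used here for concreteness):

```python
import torch
import torch.nn as nn

class SharedLSTMPolicy(nn.Module):
    """Illustrative multi-target model: a shared LSTM trunk encodes the
    recent scheduling state sequence ("partial model sharing"), while
    per-target heads produce separate outputs."""

    def __init__(self, state_dim=16, hidden_dim=64, n_actions=8):
        super().__init__()
        self.trunk = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions)  # placement decision
        self.value_head = nn.Linear(hidden_dim, 1)           # expected return

    def forward(self, states):
        out, _ = self.trunk(states)  # states: (batch, time, state_dim)
        last = out[:, -1, :]         # summary of the recent window
        return self.action_head(last), self.value_head(last)

policy = SharedLSTMPolicy()
logits, value = policy(torch.randn(4, 10, 16))  # 4 trajectories, 10 steps each
action = torch.distributions.Categorical(logits=logits).sample()
print(action.shape, value.shape)
```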
In the last few years, deep-learning models have become crucial for numerous scientific and industrial applications. Due to the growth and complexity of deep neural networks, researchers have been investigating techn...
ISBN (Digital): 9798350373554
ISBN (Print): 9798350373561
Big data workflows have emerged as a powerful paradigm that enables researchers and practitioners to run complex multi-step computational processes in the cloud to gain insight into their large datasets. To create a workflow, a user logs on to specialized software, called a Big Data Workflow Management System, or simply a BDW system, to select and connect various components, or tasks, into a workflow. The workflow is then mapped onto a set of distributed compute resources, such as Virtual Machines (VMs), and storage resources, such as S3 buckets and EBS volumes. It is then executed, with different branches and tasks of the workflow running in parallel on different nodes. During execution, the BDW system captures provenance, the history of data derivation that describes the data processing steps that yielded each output result. Workflow management, including workflow composition and schedule refinement, is a challenging problem. It is further exacerbated by the growing number and heterogeneity of workflow tasks and cloud resources, as well as by the growing size and complexity of workflow structures. Few efforts have been made to leverage provenance for facilitating workflow composition and schedule refinement. To address these issues, we 1) produce a comprehensive conceptual model for big data workflow provenance that captures the complexity and heterogeneity of cloud-based workflow execution, 2) propose a scalable Cassandra database schema for provenance-aware workflow composition and schedule refinement, 3) outline a four-step provenance-based schedule refinement process for balancing workflow execution time and cost, and 4) present a scalable and highly available microservices-based reference architecture for big data workflow management in the cloud. Our proposed loosely coupled architecture ensures superior scalability, as well as operational and technological independence of each module within the BDW system.
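The paper's actual conceptual model and Cassandra schema are not reproduced in the abstract. As a hedged sketch of what a provenance-aware store of this kind might look like, the Python definitions below pair a task-level provenance record with a hypothetical CQL table partitioned by workflow, so that all provenance for one run is co-located for composition and schedule-refinement queries; every field and table name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical provenance record for one task execution in a BDW system.
# Field names are illustrative; the paper's conceptual model is richer.
@dataclass
class TaskProvenance:
    workflow_id: str
    task_id: str
    started_at: datetime
    ended_at: datetime
    vm_type: str                          # compute resource the task ran on
    inputs: list = field(default_factory=list)   # upstream data artifacts
    outputs: list = field(default_factory=list)  # derived data artifacts
    cost_usd: float = 0.0

# A matching Cassandra table, partitioned by workflow_id so that all
# provenance for one workflow run lands on the same partition and can
# be scanned efficiently during schedule refinement.
CQL_SCHEMA = """
CREATE TABLE IF NOT EXISTS provenance.task_runs (
    workflow_id text,
    task_id     text,
    started_at  timestamp,
    ended_at    timestamp,
    vm_type     text,
    inputs      list<text>,
    outputs     list<text>,
    cost_usd    double,
    PRIMARY KEY ((workflow_id), started_at, task_id)
) WITH CLUSTERING ORDER BY (started_at DESC);
"""
```

Clustering by `started_at DESC` is one plausible choice for the refinement workload described: the most recent executions of a workflow, with their times and costs, are the first rows read.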