ISBN (digital): 9781665403986
ISBN (print): 9781665403993
To address the high complexity of parameter optimization for portfolio models, this paper designs a distributed high-performance portfolio optimization platform (HPPO) based on a parallel computing framework and an event-driven architecture. The platform consists of a data layer, a model layer, and an execution layer, built in a componentized, pluggable, and loosely coupled way. The platform uses parallelization to accelerate the backtesting and parameter optimization of portfolio models over a given historical interval, and can connect portfolio models to real-time market data. On top of the HPPO platform, a parallel program is designed to optimize the parameters of a value-at-risk (VaR) model. The performance of the platform is summarized by analyzing the experimental results and comparing them with the open-source frameworks Zipline and Rqalpha.
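The parallel parameter sweep for a VaR model described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the historical-VaR estimator, the breach-rate scoring, the parameter grid, and the synthetic returns are all assumptions. A thread pool is used here to keep the sketch portable; a production version would use `ProcessPoolExecutor` for true multi-core parallelism.

```python
# Hypothetical sketch: parallel parameter sweep for a historical-VaR model,
# in the spirit of HPPO's parallelized backtesting. All names and data are
# illustrative, not taken from the paper.
from concurrent.futures import ThreadPoolExecutor
import random

def historical_var(returns, window, alpha):
    """Historical VaR: the alpha-quantile loss over the last `window` returns."""
    recent = sorted(returns[-window:])
    idx = max(0, int(alpha * len(recent)) - 1)
    return -recent[idx]

def backtest(params):
    """Score one (window, alpha) pair by its VaR breach rate over the sample."""
    window, alpha, returns = params
    breaches = 0
    for t in range(window, len(returns)):
        var_t = historical_var(returns[:t], window, alpha)
        if returns[t] < -var_t:          # realized loss exceeded predicted VaR
            breaches += 1
    return (window, alpha, breaches / (len(returns) - window))

if __name__ == "__main__":
    random.seed(0)
    returns = [random.gauss(0, 0.01) for _ in range(500)]
    grid = [(w, a, returns) for w in (50, 100, 250) for a in (0.01, 0.05)]
    # Each grid point backtests independently, so the sweep parallelizes trivially.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(backtest, grid))
    # Prefer the pair whose empirical breach rate is closest to its target alpha.
    best = min(results, key=lambda r: abs(r[2] - r[1]))
    print(best)
```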
ISBN (digital): 9781728182681
ISBN (print): 9781728182698
In this work, we study heuristic and randomized approaches to parallel job execution in high-performance computing environments with heterogeneous resources. The traditional job-flow scheduling process imposes many requirements on job execution criteria and priority and thus usually cannot be changed. However, special microscheduling heuristics may be used in such scenarios as a secondary optimization on top of the primary scheduling procedure. Coordinated and randomized microscheduling approaches simulate different strategies for scheduling the whole job queue in order to select the most resource-efficient scenarios. Based on a conservative backfilling procedure, we study how different resource-allocation heuristics affect integral job-flow scheduling characteristics in a dedicated simulation environment.
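The conservative backfilling base procedure mentioned in this abstract can be sketched as follows. This is a simplified, homogeneous-node illustration (the paper's environment is heterogeneous and layers microscheduling heuristics on top); the function name and the tuple layout of a job are assumptions.

```python
# Illustrative sketch of conservative backfilling on homogeneous nodes.
def conservative_backfill(jobs, n_nodes):
    """jobs: list of (job_id, nodes_needed, runtime), in queue order.
    Returns {job_id: start_time}. Each job receives a firm reservation;
    a later job may be backfilled into an earlier slot only if capacity
    holds over its whole runtime, so no earlier reservation is delayed."""
    reservations = []          # (start, end, nodes) already granted
    starts = {}

    def nodes_used(t):
        return sum(n for (s, e, n) in reservations if s <= t < e)

    for job_id, need, runtime in jobs:
        # The earliest feasible start is t=0 or the end of some reservation.
        for t in sorted({0} | {e for (_, e, _) in reservations}):
            # Usage is piecewise constant and only increases at reservation
            # starts, so checking t and interior start points suffices.
            checkpoints = {t} | {s for (s, _, _) in reservations
                                 if t < s < t + runtime}
            if all(nodes_used(p) + need <= n_nodes for p in checkpoints):
                reservations.append((t, t + runtime, need))
                starts[job_id] = t
                break
    return starts
```

For example, on a 4-node cluster with jobs A(2 nodes, 10s), B(4 nodes, 5s), C(2 nodes, 5s): B must wait for A, but C is backfilled alongside A without delaying B's reservation.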
Given the continuous increase in the size of data to be processed, the need for speed in information manipulation (acquisition, processing, and analysis) has become increasingly pressing in many areas. In brain imagi...
ISBN (digital): 9781728190747
ISBN (print): 9781728183824
Mobile Edge Computing (MEC) enables mobile users to offload their computation loads to nearby edge servers, and is expected to be integrated into the 5G architecture to support a variety of low-latency applications and services. However, an edge server may quickly become overloaded when its computation resources are heavily requested, and would then fail to process all of its received computation loads in time. Unlike most existing schemes, which instruct the overloaded edge server to transfer computation loads to the remote cloud, we make use of spare computation resources on other local edge servers while explicitly taking the risk of network link failures into account. We measure such link-failure risks with the financial risk-management metric of Conditional Value-at-Risk (CVaR) and constrain it in the offloading decisions through a Minimum Cost Flow (MCF) problem formulation. Numerical results validate the enhanced availability of the MEC service under our risk-aware offloading scheme.
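The CVaR metric borrowed from financial risk management can be illustrated with a short sketch. This is only the plain empirical-tail definition of CVaR, not the paper's MCF-constrained formulation; the function name, loss samples, and confidence level are assumptions.

```python
# Empirical Conditional Value-at-Risk: the mean loss in the worst
# (1 - beta) fraction of outcomes. Here a "loss" could stand for the
# cost incurred when an offloading link fails (illustrative only).
def cvar(losses, beta):
    ordered = sorted(losses, reverse=True)            # worst losses first
    tail = max(1, int(round((1 - beta) * len(ordered))))
    return sum(ordered[:tail]) / tail
```

For ten loss samples 1..10 at beta = 0.8, the worst 20% tail is {10, 9}, so CVaR is 9.5. Because CVaR averages the tail rather than reporting a single quantile, constraining it penalizes rare-but-severe link failures, not just their frequency.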
Today, in the digital era, the electronic circuit is a key component, and its design and testing are validated through simulators. But even though simulators are cost-effective, large-circuit simulation is quite t...
ISBN (print): 9783319999814; 9783319999807
Scalable web systems are closely tied to the distributed storage systems used to process large amounts of data (big data). An example of such a system is Hadoop with its many extensions supporting data storage, such as SQL-on-Hadoop systems and the Parquet file format. Another kind of system for storing and processing big data is the NoSQL database, such as HBase, used in applications requiring fast random access. The Kudu system was created to combine the advantages of Hadoop and HBase and to enable both effective data-set analysis and fast random access. A performance analysis of these systems was carried out. The experiment was conducted in the Amazon Web Services public cloud, where a cluster of nine virtual machines was configured. A fragment of the public "Wikipedia Page Traffic Statistics" dataset, containing about a billion rows, was used for the study. The measurement results confirm that Kudu is a promising alternative to the commonly used technologies.
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, like on...
ISBN (digital): 9781728190747
ISBN (print): 9781728183824
In sub-second stream computing, the answer to a complex query usually depends on aggregation or join operations over streams, especially multi-way theta joins. Some attribute keys are not distributed uniformly, which is called the data intrinsic skew problem; examples include taxi license plates in GPS trajectories and transaction records, or stock codes in stock quotes and investment portfolios. In this paper, we define the concept of key redundancy for a single stream as the degree of data intrinsic skew, and joint key redundancy for multi-way streams. We present an execution model for multi-way stream theta joins with a fine-grained cost model to evaluate its performance. We propose a solution named Group Join (GroJoin) that exploits key redundancy during transmission and execution in a cluster. GroJoin adapts to data intrinsic skew through the grouping condition we derive, namely that the selectivity of the theta-join results should be smaller than 25%. Experiments are carried out with our MS-Generator, which produces multi-way streams; the simulation results show that GroJoin decreases transmission overheads by up to 45% under different key redundancies and value-key proportionality coefficients, and reduces query delay by up to 70% under different key distributions. We further implement GroJoin for multi-way stream theta joins in Spark Streaming. The experimental results demonstrate that join latency is reduced by about 40% to 50% after our optimization, at a very small computation cost.
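The core intuition behind grouping redundant keys before transmission can be sketched in a few lines. This is only an illustration of the idea, not GroJoin itself: the function names are invented, an equi-join stands in for the general theta condition, and the grouping-condition and cost-model logic from the paper are omitted.

```python
# Sketch: when a key is highly redundant in a stream, collapsing tuples
# into one group per key lets each key travel once instead of once per
# tuple, shrinking transmission volume before the join executes.
from collections import defaultdict

def group_by_key(stream):
    """Collapse a stream of (key, value) tuples into one group per key."""
    groups = defaultdict(list)
    for key, value in stream:
        groups[key].append(value)
    return groups

def grouped_join(left, right):
    """Join two grouped streams on equal keys (an equi-join stands in
    here for the paper's general theta condition)."""
    lg, rg = group_by_key(left), group_by_key(right)
    return [(k, lv, rv)
            for k in lg.keys() & rg.keys()
            for lv in lg[k] for rv in rg[k]]
```

With a skewed stream of 3 tuples over 2 distinct keys, grouping ships 2 messages instead of 3; the savings grow with key redundancy.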
The design of automatic test-case generation technology is an important step in implementing automated software testing. It plays an important role in guiding later testing work and is the fundamental gua...
Blockchain is a new-era technology that provides secure technological underpinnings for various kinds of exchange. It is built entirely on a distributed database, a framework that helps eliminate failure at any single po...