ISBN (print): 9780769541105
In wireless sensor networks, how to maintain coverage quality while balancing node energy consumption so as to extend the network lifetime is one of the most active topics of current research. This paper analyzes LEACH (low-energy adaptive clustering hierarchy) and proposes ECAC, an energy-efficient distributed clustering algorithm based on coverage. The algorithm takes into account both the coverage redundancy degree and each node's residual energy, and distributes cluster heads more reasonably. Simulation results show that the improved algorithm effectively reduces network energy consumption and improves coverage quality.
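As an illustration only (the abstract does not give ECAC's election rule), the sketch below shows the kind of cluster-head election it describes: the classic LEACH threshold scaled by a node's residual energy and by its coverage redundancy. The weighting scheme, parameter names, and node fields are assumptions, not the paper's formula.

```python
import random

def cluster_head_threshold(p, rnd, residual_energy, initial_energy,
                           coverage_redundancy, alpha=0.5):
    """Illustrative LEACH-style election threshold.

    Starts from the classic LEACH threshold T(n) = p / (1 - p * (rnd mod 1/p))
    and scales it by residual energy (favour well-charged nodes) and by
    coverage redundancy (favour nodes whose sensing area is already covered
    by neighbours, so electing them costs little coverage).
    The weighting is an assumption, not the formula from the paper.
    """
    base = p / (1 - p * (rnd % int(1 / p)))
    energy_factor = residual_energy / initial_energy          # in [0, 1]
    redundancy_factor = alpha + (1 - alpha) * coverage_redundancy  # in [alpha, 1]
    return base * energy_factor * redundancy_factor

def elect_cluster_heads(nodes, p, rnd):
    """nodes: list of dicts with 'residual', 'initial', 'redundancy' (in [0, 1])."""
    heads = []
    for node in nodes:
        t = cluster_head_threshold(p, rnd, node["residual"],
                                   node["initial"], node["redundancy"])
        if random.random() < t:
            heads.append(node)
    return heads
```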
ISBN (print): 9781538621622
Internet technologies are widely used in healthcare services and information management, and various applications have proven highly beneficial for delivering healthcare services, information, and disease management. Social networking is one of the most widely used internet technologies across business and management areas, and it has been revolutionary in enhancing operations through effective interaction between users. This paper reviews social networking, its architecture, and the prospects of using it for Infectious Disease Management (IDM) in Saudi Arabia. Methods: a review of literature sources and social networking applications for healthcare/IDM management. Results: the study finds that social networking can be effectively integrated with healthcare/IDM systems, since it enables user interactions that could improve behavioral aspects of delivering and receiving healthcare services. However, no study was identified that has used social networking for IDM in Saudi Arabia. This paper therefore points to a wide scope for future research on integrating social networking across healthcare systems in Saudi Arabia.
ISBN (print): 9780769548180
CVaR has advantages over VaR as a portfolio risk measure, but compared with the calculation of VaR, Monte Carlo simulation of CVaR is difficult and computationally expensive. Building a distributed parallel algorithm reduces the cost and speeds up the calculation, which helps promote the use of the Monte Carlo CVaR method.
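For concreteness, here is a minimal sketch of Monte Carlo CVaR estimation split across worker processes. The normal loss model, the use of Python's multiprocessing on a single machine rather than a distributed cluster, and all parameter values are illustrative assumptions, not the paper's system.

```python
import numpy as np
from multiprocessing import Pool

def simulate_losses(args):
    """Worker: simulate portfolio losses for one batch of scenarios.
    The normal loss model is a placeholder for a real pricing model."""
    n_scenarios, seed = args
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=1.0, size=n_scenarios)

def monte_carlo_cvar(alpha=0.95, n_scenarios=1_000_000, n_workers=4):
    batches = [(n_scenarios // n_workers, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        losses = np.concatenate(pool.map(simulate_losses, batches))
    var = np.quantile(losses, alpha)        # Value-at-Risk at level alpha
    cvar = losses[losses >= var].mean()     # expected loss beyond VaR
    return var, cvar

if __name__ == "__main__":
    var, cvar = monte_carlo_cvar()
    print(f"VaR: {var:.4f}, CVaR: {cvar:.4f}")
```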
ISBN (print): 9780769541105
Large-scale parallel simulation and modeling have changed our world. Today, supercomputers are not just for research and scientific exploration; they have become an integral part of many industries, and finance is one of the strongest growth drivers for supercomputing, pushed by ever-increasing data volumes, greater data complexity, and significantly more challenging data analysis. This paper studies in depth how developments in high-performance computing are applied in finance. Attention is focused not only on the benefits that parallel algorithms bring to financial research, but also on the practical applications of high-performance computing in real financial markets, with some recent advances highlighted. On that basis, suggestions about the challenges and development directions of HPC in finance are proposed.
ISBN (print): 9781538621622
For image-set-based classification, considerable progress has been made by representing the original image sets on the Grassmann manifold. To extend the advantages of Euclidean dimensionality reduction methods to the Grassmann manifold, several methods have recently been proposed that jointly perform dimensionality reduction and metric learning on the Grassmann manifold, and they have achieved good results in some computer vision tasks. Nevertheless, when handling classification tasks on complicated datasets, the learned features do not exhibit enough discriminatory ability, and the data distribution on the resulting Grassmann manifold is also ignored, which may lead to overfitting. To overcome these two problems, we propose a new method named Structure Maintaining Discriminant Maps (SMDM) for manifold dimensionality reduction. In SMDM, we mainly design a new discriminant function for metric learning. We evaluate the proposed method on two tasks, face recognition and object categorization; it achieves better results than state-of-the-art methods, showing the feasibility and effectiveness of the proposed algorithm.
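The abstract does not specify SMDM itself, but the representation it builds on is standard: an image set is mapped to a point on the Grassmann manifold by taking an orthonormal basis of its span, and two sets are compared with the projection metric. A minimal sketch of these two steps, with illustrative function names, follows.

```python
import numpy as np

def grassmann_point(image_set, q=10):
    """Represent an image set (n_pixels x n_images matrix) as a point on the
    Grassmann manifold: the q leading left singular vectors span its subspace."""
    u, _, _ = np.linalg.svd(image_set, full_matrices=False)
    return u[:, :q]                      # orthonormal basis, shape (n_pixels, q)

def projection_distance(y1, y2):
    """Projection metric d = ||y1 y1^T - y2 y2^T||_F / sqrt(2), computed via
    the equivalent identity d^2 = q - ||y1^T y2||_F^2 to avoid large matrices."""
    q = y1.shape[1]
    return np.sqrt(max(q - np.linalg.norm(y1.T @ y2, ord="fro") ** 2, 0.0))

# Usage: compare two random image sets of 50 images with 1024 pixels each.
set_a, set_b = np.random.rand(1024, 50), np.random.rand(1024, 50)
print(projection_distance(grassmann_point(set_a), grassmann_point(set_b)))
```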
ISBN (print): 9781538674451
Abnormal communication users are users whose call traffic, business handling, and other daily consumption exhibit abnormal behavior. Communication operators hold large-scale user datasets, and using these datasets reasonably to guide and make recommendations to businesses can bring better economic benefits. However, for large-scale user feature datasets, serial machine learning and analysis methods spend a lot of time on feature processing, and training on such datasets carries a huge time cost. To process and train on abnormal-user data more efficiently, this paper uses Spark to implement feature engineering and to analyze large-scale abnormal-user datasets, highlighting the efficiency of Spark in analyzing feature data, and implements distributed training to accelerate the algorithm model. The training algorithm takes distributed SVM training as an example and compares it with stand-alone serial SVM and scikit-learn SVM. The experimental results show the advantages of distributed computing as well as good training results. Finally, common logistic regression and Bayesian algorithms, among other distributed computing models, are used to compare training performance.
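As a hedged sketch of the distributed SVM step described above, the snippet below trains a linear SVM with Spark's pyspark.ml API. The input file name, column names, and hyperparameters are assumptions; the paper's actual feature pipeline is not given in the abstract.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LinearSVC

spark = SparkSession.builder.appName("abnormal-user-svm").getOrCreate()

# Hypothetical schema: numeric usage features plus a 0/1 "abnormal" label.
df = spark.read.csv("user_features.csv", header=True, inferSchema=True)

feature_cols = [c for c in df.columns if c != "abnormal"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "abnormal")

train, test = data.randomSplit([0.8, 0.2], seed=42)

# Linear SVM fit in parallel across the cluster's executors.
svm = LinearSVC(featuresCol="features", labelCol="abnormal",
                maxIter=50, regParam=0.01)
model = svm.fit(train)

accuracy = model.transform(test).where("prediction = abnormal").count() / test.count()
print(f"test accuracy: {accuracy:.3f}")
```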
ISBN (print): 9780769548180
Cloud computing, as a new and powerful computing model, is gaining popularity. Its emergence provides extraordinary opportunities for firms, especially small and medium businesses (SMBs), to innovate and develop their electronic business. The paper starts with a discussion of the difficulties and challenges faced by SMBs in developing electronic business. After summarizing the definition and characteristics of cloud computing, the paper then focuses on the benefits SMBs gain from the cloud computing model in developing their electronic business applications, and on the new management challenges introduced by the adoption of this model.
ISBN (print): 9780769541105
The parallel implementation of a novel mesh simplification method, based on a Beowulf cluster system, is described in detail in this paper. By taking full advantage of distributed memory and a high-performance network, we can simplify out-of-core models quickly and avoid thrashing the virtual memory system. In addition, file I/O and load balancing are also considered to ensure near-optimal utilization of the computational resources as well as high-quality output. A set of numerical experiments demonstrates that our parallel implementation not only greatly reduces execution time but also achieves high parallel efficiency.
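The simplification algorithm itself is not detailed in the abstract, so the sketch below only illustrates the distributed-memory pattern it relies on: scattering mesh chunks to cluster nodes with mpi4py, simplifying each chunk locally, and gathering the results. The vertex-snapping "simplification" and the chunking scheme are placeholders, not the paper's method.

```python
import numpy as np
from mpi4py import MPI

def simplify_chunk(vertices):
    """Placeholder for a real simplification step (e.g. vertex clustering):
    snap vertices to a coarse grid and drop duplicates."""
    snapped = np.round(vertices / 0.1) * 0.1
    return np.unique(snapped, axis=0)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Root reads the full vertex array (out-of-core in practice; random here).
    vertices = np.random.rand(100_000, 3)
    chunks = np.array_split(vertices, size)
else:
    chunks = None

local = comm.scatter(chunks, root=0)      # each node receives one chunk
local_result = simplify_chunk(local)      # simplify independently in local memory
results = comm.gather(local_result, root=0)

if rank == 0:
    simplified = np.vstack(results)
    print(f"{simplified.shape[0]} vertices after parallel simplification")
```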
ISBN (print): 9781538621622
An ontology can specify concepts and their relationships. If an ontology is used as a bridge to map between source and target artifacts, it can help solve the keyword-matching problem of requirement traceability at the semantic level. A domain ontology recommendation method is proposed. Using this method, the lexical-semantic representation list of the dependency syntax types is first produced with the Stanford Parser. Then, terms and their relations are extracted using the clear algorithm and a rule-based matching algorithm. Next, these terms and their relations are transformed into domain concepts and their relations, yielding the recommended domain ontology. Finally, a domain ontology suitable for requirement traceability is obtained through selection and revision by domain experts. The experimental results demonstrate that applying the proposed domain ontology to requirement traceability effectively improves traceability accuracy, compared with the vector space model and a HOWNET-based requirement traceability method.
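A rough sketch of the term- and relation-extraction step is given below. It uses spaCy's dependency parser as a stand-in for the Stanford Parser mentioned above, and the extraction rules (noun chunks as candidate terms, subject-verb-object triples as relations) are simplified assumptions rather than the paper's algorithms.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # stand-in for the Stanford Parser

def extract_terms_and_relations(requirement_text):
    """Return candidate domain terms (noun chunks) and simple
    subject-verb-object relations from the dependency parse."""
    doc = nlp(requirement_text)
    terms = {chunk.text.lower() for chunk in doc.noun_chunks}
    relations = []
    for token in doc:
        if token.dep_ in ("nsubj", "nsubjpass"):
            verb = token.head
            objects = [c for c in verb.children if c.dep_ in ("dobj", "obj", "pobj")]
            for obj in objects:
                relations.append((token.text.lower(), verb.lemma_, obj.text.lower()))
    return terms, relations

terms, relations = extract_terms_and_relations(
    "The system shall encrypt the patient record before transmission.")
print(terms)
print(relations)
```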
ISBN (print): 9780769548180
Automatic decomposition is an optimization technique that distributes computation and data onto different processors, and the quality of the decomposition directly affects the performance of the parallel program. Since every computing node has its own memory in distributed-memory parallel computers (DMPCs), false dependences do not hinder parallelism. Affine decomposition is an effective way to represent and derive computation partitions and data distributions, but its principle of adding dependence constraints is too strict to obtain more parallelism. Some loop nests do not satisfy the affine condition and are excluded from parallelization by affine decomposition. However, if the irregular accesses are caused only by indirect arrays, the loop and its array references can still be partitioned at compile time. To tackle these problems of affine decomposition, an improved static decomposition algorithm for DMPCs is proposed in this paper. The experimental results show that the algorithm improves the performance of parallel programs.
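To make the idea of computation partitioning concrete, the sketch below applies an affine-style block mapping of loop iterations to processors for a dependence-free loop; on a real DMPC each rank would own only its block of the arrays and no communication would be needed. The mapping and the loop are illustrative, not the paper's algorithm.

```python
import numpy as np

def owner(i, block, num_procs):
    """Affine-style mapping of iteration i to a processor:
    a block distribution phi(i) = i // block."""
    return min(i // block, num_procs - 1)

def parallel_loop(a, b, num_procs):
    """Each 'processor' updates only the iterations it owns: a[i] = b[i] + 1."""
    n = len(a)
    block = (n + num_procs - 1) // num_procs
    for p in range(num_procs):
        my_iters = [i for i in range(n) if owner(i, block, num_procs) == p]
        for i in my_iters:
            a[i] = b[i] + 1   # no cross-iteration dependence, so no communication
    return a

b = np.arange(16, dtype=float)
a = np.zeros_like(b)
print(parallel_loop(a, b, num_procs=4))
```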