Graph partitioning is a well-known problem with varied applications such as scientific computing, distributed computing, social network analysis, task scheduling in multi-processor systems, data mining, cloud com...
ISBN:
(Print) 9781538610428
Machine Learning (ML) approaches are widely used classification/regression methods for data mining applications. However, the time-consuming training process greatly limits their efficiency. We use the examples of SVM (a traditional ML algorithm) and DNN (a state-of-the-art ML algorithm) to illustrate the idea in this paper. For SVM, a major performance bottleneck of current tools is that they use a unified data storage format, even though the data format can have a significant influence on the complexity of storage and computation, memory bandwidth, and the efficiency of parallel processing. To address this problem, we study the factors influencing the algorithm's performance and conduct auto-tuning to speed up SVM training. DNN training is even slower than SVM training: for example, training an AlexNet model on the CIFAR-10 dataset with an 8-core CPU takes 8.2 hours. CIFAR-10 is only 170 MB, which is too small to be processed efficiently in a distributed setting. Moreover, due to an algorithmic limitation, only a small batch of data can be processed at each iteration. We focus on finding the right algorithmic parameters and using auto-tuning techniques to make the algorithm run faster. For SVM training, our implementation achieves a 1.7-16.3x speedup (6.8x on average) over the non-adaptive case (using the worst data format) on various datasets. For DNN training on the CIFAR-10 dataset, we reduce the time from 8.2 hours to roughly 1 minute. We use the benchmark of dollars per speedup to help users select the right deep learning hardware.
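The format-selection idea can be illustrated with a toy auto-tuner that times a representative kernel operation under dense and CSR storage and keeps the faster one. This is a minimal sketch, not the paper's tool; `pick_format`, the candidate set, and the Gram-matrix probe are all assumptions:

```python
import time
import numpy as np
from scipy import sparse

def pick_format(X, trials=3):
    """Time a representative SVM kernel step (the Gram matrix X @ X.T)
    in dense and CSR form; return the name of the faster format."""
    Xs = sparse.csr_matrix(X)
    candidates = {"dense": lambda: X @ X.T,
                  "csr":   lambda: Xs @ Xs.T}
    timings = {}
    for name, op in candidates.items():
        best = float("inf")
        for _ in range(trials):
            t0 = time.perf_counter()
            op()
            best = min(best, time.perf_counter() - t0)
        timings[name] = best
    return min(timings, key=timings.get)

# Highly sparse data usually favours CSR; dense data the plain array.
rng = np.random.default_rng(0)
X = rng.random((400, 300)) * (rng.random((400, 300)) > 0.95)
fmt = pick_format(X)
```

A real tuner would probe many more factors (bandwidth, parallel layout, per-dataset sparsity structure), but the decision loop has this shape.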
ISBN:
(Print) 9783319570990; 9783319570983
This paper describes the model, methods, and tools for finding rational ways of energy development with regard to energy security requirements. Energy security is directly related to uninterrupted energy supply, so it is important to choose rational ways of energy development that ensure energy security in the future. The many combinations of specific external conditions of energy sector operation and development, together with uncertainties and other factors, lead to a huge set of possible energy sector states that cannot be processed in reasonable time. To overcome this issue, an approach based on combinatorial modeling is applied to manage the growing size of the energy sector state set.
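One way to read the combinatorial-modeling step is as early screening of condition combinations before any detailed simulation. The conditions and the feasibility rule below are purely hypothetical placeholders for whatever the real model uses:

```python
from itertools import product

# Hypothetical discretised external conditions for an energy sector model.
conditions = {
    "demand":     ["low", "medium", "high"],
    "fuel_price": ["low", "high"],
    "weather":    ["mild", "severe"],
    "disruption": ["none", "partial", "major"],
}

def feasible(state):
    """Toy screening rule: discard clearly unacceptable combinations
    (here: a major disruption under high demand) before modelling."""
    return not (state["disruption"] == "major" and state["demand"] == "high")

keys = list(conditions)
states = [dict(zip(keys, combo)) for combo in product(*conditions.values())]
kept = [s for s in states if feasible(s)]
# 3 * 2 * 2 * 3 = 36 raw combinations; screening trims the set to 32
# before the expensive per-state analysis runs.
```

The point is only the shape of the approach: enumerate combinatorially, prune cheaply, and keep the surviving state set manageable.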
ISBN:
(Print) 9783319619828; 9783319619811
In any well-structured software project, a necessary step consists in validating results against functional expectations. However, in the high-performance computing (HPC) context, this process can become cumbersome due to specific constraints such as scalability and/or specific job launchers. In this paper we present an original validation front-end that takes advantage of HPC resources for HPC workloads. By adding an abstraction level between users and the batch manager, our tool, JCHRONOSS, drastically reduces test-suite running time while taking advantage of the distributed resources available to HPC developers. We first introduce validation work-flow challenges before presenting the architecture of our tool and its contribution to HPC validation suites. Finally, we present results from real test cases, demonstrating effective speed-ups of up to 25x over sequential validation time and paving the way to more thorough validation of HPC applications.
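The core scheduling idea, running many independent validation jobs through a fixed-size worker pool instead of one after another, can be sketched in a few lines. The `echo` commands stand in for real test launchers, and a real front-end like the one described would wrap the batch manager rather than the shell:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_test(cmd):
    """Launch one validation job and report (command, return code)."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return cmd, proc.returncode

# Hypothetical test commands; each is an independent job.
suite = [f"echo case-{i}" for i in range(8)]

results = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_test, c) for c in suite]
    for fut in as_completed(futures):
        cmd, rc = fut.result()
        results[cmd] = rc

failed = [c for c, rc in results.items() if rc != 0]
```

With N workers and independent jobs, wall-clock time drops roughly by a factor of N, which is the effect the abstraction layer exploits at cluster scale.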
The shop floor scheduling problem has been known for a long time and is considered an academic exercise for constraint solvers or linear solvers. At a small production scale it can be solved easily. For a large s...
ISBN:
(Print) 9781509030385
In a deregulated electricity market, determining the available transfer capability (ATC) in advance becomes necessary to use the whole existing transmission network efficiently and economically. This work considers an optimal power flow (OPF) based method to estimate ATC when probabilistic solar power is injected into the system as an active power source. Novel voltage sensitivity indices are considered for the optimal placement of solar power generation in the transmission network. Probabilistic demand is incorporated by treating load variations as normally distributed random variables. Monte Carlo sampling (MCS) and Latin Hypercube sampling (LHS) techniques are used to draw samples from the normally distributed loads. Simulation results show a significant reduction in ATC values when the probabilistic load is taken into account, compared to the steady-load case.
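The two sampling schemes for a normally distributed load can be sketched side by side; the mean and standard deviation below are placeholder values, not figures from the paper. LHS draws exactly one sample from each equal-probability stratum and maps it through the inverse normal CDF:

```python
import numpy as np
from scipy.stats import norm

def mcs_load(n, mu, sigma, rng):
    """Plain Monte Carlo draws of a normally distributed load."""
    return rng.normal(mu, sigma, n)

def lhs_load(n, mu, sigma, rng):
    """Latin Hypercube draws: one uniform per equal-probability
    stratum (shuffled), pushed through the inverse normal CDF."""
    u = (rng.random(n) + rng.permutation(n)) / n
    return norm.ppf(u, loc=mu, scale=sigma)

rng = np.random.default_rng(42)
mc  = mcs_load(1000, 100.0, 10.0, rng)   # hypothetical base load in MW
lhs = lhs_load(1000, 100.0, 10.0, rng)
```

Because LHS stratifies the probability axis, its sample moments converge to the target distribution's faster than plain MCS for the same sample count, which is why it is a common choice for probabilistic load models.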
ISBN:
(Print) 9783319646350; 9783319646343
Advances in cyber-physical systems and the introduction of Industry 4.0 have opened the door for interconnectivity in the industrial automation paradigm. One of the emerging technologies proven to be useful in factory automation is wireless sensor networks. In dynamic situations, wireless sensor networks need to be able to self-reconfigure while maintaining data integrity and efficiency. One solution popular with researchers is the use of multi-agent systems to manage wireless sensor networks. Typically, software agents are located on a server or in a cloud environment. Recent advances in microcomputers have made it feasible to embed these agents on the devices they control. This requires new reconfiguration and network management protocols. In this paper, an embedded agent architecture for wireless sensor networks is proposed, and an application-specific example is given for an oil and gas refinery. An experiment is also conducted to investigate the effect of cluster size and signal frequency on the ratio of lost signals in a wireless sensor network cluster.
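The cluster-size experiment can be mimicked with a toy slotted-channel model in which simultaneous transmissions collide and are lost; every parameter here is invented for illustration and is not the paper's setup:

```python
import random

def lost_ratio(cluster_size, tx_prob, slots=10_000, seed=1):
    """Toy slotted-channel model: each node transmits with probability
    tx_prob per slot; any slot with more than one sender loses all of
    its packets (collision). Returns the fraction of packets lost."""
    rng = random.Random(seed)
    sent = lost = 0
    for _ in range(slots):
        senders = sum(rng.random() < tx_prob for _ in range(cluster_size))
        sent += senders
        if senders > 1:
            lost += senders
    return lost / sent if sent else 0.0

# Larger clusters (or higher transmit frequency) -> more collisions.
small = lost_ratio(4, 0.05)
large = lost_ratio(16, 0.05)
```

Even this crude model reproduces the qualitative trend one would expect the experiment to measure: the loss ratio grows with cluster size at a fixed transmission rate.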
ISBN:
(Print) 9783319484280
The proceedings contain 32 papers. The special focus of this conference is on Artificial Intelligence, Multimedia Systems and Software Technologies. The topics include: On Fuzzy RDM-arithmetic; hidden Markov models with affix based observation in the field of syntactic analysis; an experiment on numeric, linguistic and color coded rating scale comparison; comparison of RDM complex interval arithmetic and rectangular complex arithmetic; homogeneous ensemble selection - experimental studies; deterministic method for the prediction of time series; a study on directionality in the Ulam square with the use of the Hough transform; real-time system of delivering water-capsule for firefighting; subject-specific methodology in the frequency scanning phase of SSVEP-based BCI; S-boxes cryptographic properties from a statistical angle; data scheme conversion proposal for information security monitoring systems; non-standard certification models for pairing based cryptography; the use of the objective digital image quality assessment criterion indication to create panoramic photographs; accuracy of high-end and self-build eye-tracking systems; mouth features extraction for emotion analysis; parallel facial recognition system based on 2DHMM; system of acoustic assistance in spatial orientation for the blind; performance and energy efficiency in distributed computing; the approach to web services composition; loop nest tiling for image processing and communication applications; the method of evaluating the usability of the website based on logs and user preferences; and ontology-based approaches to big data analytics.
Co-authorship analysis in science and technology partnerships provides a vision of cooperation patterns between individuals and organizations and is still widely used to understand and assess scientific collaboration patterns. This analysis is conducted by means of bibliometry, the quantitative study of scientific production. However, with the evolution of database management systems there has been a significant increase in the volume of stored data, which can hamper the analysis. In this context, this work presents an efficient parallel optimization of bibliometric information processing, in order to allow such scientific analysis in a Big Data environment. Our results show that the time taken to compute the transitivity value with the sequential approach grows 4.08 times faster than with the proposed parallel approach as the number of nodes tends to infinity; the time taken to compute the average distance and diameter values with the sequential approach grows 5.27 times faster than with the proposed parallel approach as the number of nodes tends to infinity. The results also show good values of speedup and efficiency.
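The transitivity metric itself, and the way its per-node work parallelises, can be sketched on a toy adjacency structure. This is a minimal data-parallel illustration (thread-based for brevity), not the paper's Big Data implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def local_counts(adj, node):
    """Triangles through `node` and connected triplets centred on it."""
    nbrs = adj[node]
    closed = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    k = len(nbrs)
    return closed, k * (k - 1) // 2

def transitivity(adj, workers=4):
    """Global transitivity = 3 * triangles / connected triplets. Each
    triangle is counted once at each of its three vertices, so summing
    the per-node closed counts already yields the numerator. The
    per-node work is independent, hence trivially parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda n: local_counts(adj, n), adj))
    closed = sum(c for c, _ in parts)
    triplets = sum(t for _, t in parts)
    return closed / triplets if triplets else 0.0

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # every triplet closed
path     = {0: {1}, 1: {0, 2}, 2: {1}}          # no triangles
t1 = transitivity(triangle)
t2 = transitivity(path)
```

On a real co-authorship graph the node partitions would go to separate machines rather than threads, but the decomposition is the same.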
Pervasive computing provides an exciting paradigm for supporting anywhere, anytime services, and is built on the tremendous advances made in a broad spectrum of technologies including wireless communication, wireless and sensor networking, mobile and distributed computing, as well as signal and information processing. Pervasive computing enables computers to interact with the real world in a ubiquitous and natural manner. The objective of the 8th IEEE PerCom International Workshop on Information Quality and Quality of Service for Pervasive Computing (IQ2S 2017) is to provide a forum to exchange ideas, present results, share experience, and enhance collaboration among researchers, professionals, and application developers on various aspects of Quality of Information, Quality of Experience, and Quality of Service for pervasive computing in network contexts including wireless, mobile and sensor networks.