ISBN (print): 9781728125084
Despite being adopted in software engineering tasks, deep neural networks are treated mostly as a black box due to the difficulty in interpreting how the networks infer the outputs from the inputs. To address this problem, we propose AutoFocus, an automated approach for rating and visualizing the importance of input elements based on their effects on the outputs of the networks. The approach is built on our hypotheses that (1) attention mechanisms incorporated into neural networks can generate discriminative scores for various input elements and (2) the discriminative scores reflect the effects of input elements on the outputs of the networks. This paper verifies the hypotheses by applying AutoFocus to the task of algorithm classification (i.e., given a program's source code as input, determine the algorithm implemented by the program). AutoFocus systematically identifies and perturbs code elements in a program, and quantifies the effects of the perturbed elements on the network's classification results. Based on an evaluation of more than 1,000 programs implementing 10 different sorting algorithms, we observe that the attention scores are highly correlated with the effects of the perturbed code elements. Such a correlation provides a strong basis for using attention scores to interpret the relationship between code elements and the algorithm classification results of a neural network, and we believe that visualizing the code elements of an input program ranked according to their attention scores can facilitate faster program comprehension with reduced code.
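A minimal sketch of the perturbation analysis this abstract describes, assuming a hypothetical classifier object with a predict method (returning class probabilities) and an attention_scores method; these names, and the masking strategy, are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def perturbation_effects(model, tokens, target_class, mask_token="<unk>"):
        """For each code token, replace it with a mask token and measure how much
        the predicted probability of the target class drops (the token's effect)."""
        base_prob = model.predict(tokens)[target_class]
        effects = []
        for i in range(len(tokens)):
            perturbed = list(tokens)
            perturbed[i] = mask_token            # perturb one code element
            prob = model.predict(perturbed)[target_class]
            effects.append(base_prob - prob)     # larger drop = more important element
        return np.array(effects)

    def attention_effect_correlation(model, tokens, target_class):
        """Correlate the model's attention scores with the perturbation effects,
        mirroring the hypothesis that attention reflects element importance."""
        attention = np.array(model.attention_scores(tokens))  # assumed API
        effects = perturbation_effects(model, tokens, target_class)
        return np.corrcoef(attention, effects)[0, 1]

A high correlation from such a procedure would support reading the attention scores as importance ratings, which is the basis of the visualization the paper proposes.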
ISBN (print): 9781509036691
Performance growth of single-core processors came to a halt in the past decade and was re-enabled by the introduction of parallelism in processors. Multicore architectures, together with graphics processing units (GPUs), have broadly increased the available parallelism, yet only a few compilers have been updated to address the resulting challenges of synchronization and threading. Appropriate classification of programs and algorithms can greatly help software engineers identify opportunities for effective parallelization. In this work we investigate the existing species for algorithm classification, discuss related work on classification, and compare the issues that make classification challenging. We select a set of algorithms that share structural characteristics but perform different tasks, and test them using existing automatic species extraction tools together with the Bones compiler. We extend the existing tool to provide a more detailed characterization; the contributions of our work include support for pointer arithmetic, conditional and increment statements, user-defined types, constants, and mathematical functions. With these extensions, we can retain significant information that is not captured by the original algorithmic species. We implemented the new rules in the tool, enabling automatic characterization of program code, and validated the results against the HPCC benchmark.
ISBN (print): 9781479980475
Multicore systems, together with GPUs, have made it possible to increase parallelism extensively, but few compilers address the emerging issues of threading and synchronization. Proper classification of algorithms and programs can greatly benefit the programming community by exposing opportunities for efficient parallelization. In this work we analyze the existing species for algorithm classification, discuss related classification work, and compare the kinds of problems that are difficult to classify. We selected a set of algorithms that resemble each other in structure but perform different specific tasks, and tested them using existing tools, namely the Bones compiler and A-Darwin, an automatic species extraction tool. Access patterns are produced for various algorithmic kernels by running them through A-Darwin, and the resulting code segments are analyzed. We found that not all algorithms can be classified using only the existing patterns, and we therefore created a new set of access patterns.
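To give an intuition for the kind of access-pattern information such an extractor derives, here is a toy sketch for a one-dimensional affine array index; the function name and the simplification to a single (min, max, step) triple are illustrative assumptions, not how A-Darwin actually works internally.

    def affine_access_range(coeff, offset, lower, upper):
        """For an affine index expression coeff*i + offset with loop counter
        i in [lower, upper], return (min, max, step) of the accessed indices.
        A toy illustration of access-pattern extraction; real tools handle
        multi-dimensional arrays, nests, and conditionals."""
        first = coeff * lower + offset
        last = coeff * upper + offset
        step = abs(coeff)
        return (min(first, last), max(first, last), step)

    # for i in range(0, N): ... a[2*i + 1] ...  reads indices 1, 3, ..., 2N-1
    N = 100
    print(affine_access_range(2, 1, 0, N - 1))   # -> (1, 199, 2)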
Code generation and programming have become ever more challenging over the last decade due to the shift towards parallel processing. Emerging processor architectures such as multi-cores and GPUs exploit increasing amounts of parallelism, requiring programmers and compilers to deal with aspects such as threading, concurrency, synchronization, and complex memory partitioning. We advocate that programmers and compilers can greatly benefit from a structured classification of program code. Such a classification can help programmers to find opportunities for parallelization, reason about their code, and interact with other programmers. Similarly, parallelising compilers and source-to-source compilers can make threading and optimization decisions based on the same classification. In this work, we introduce algorithmic species, a classification of affine loop nests based on the polyhedral model and targeted at both automatic and manual use. Individual classes capture information such as the structure of parallelism and data reuse. To make the classification applicable for manual use, a basic vocabulary forms the basis for a set of intuitive classes. To demonstrate the use of algorithmic species, we identify 115 classes in a benchmark set. Additionally, we demonstrate the suitability of algorithmic species for automated use by presenting a tool that automatically extracts species from program code, a species-based source-to-source compiler, and a species-based performance prediction model.
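To make the idea of a species concrete, the sketch below shows two simple affine loops together with species-style annotations in the comments; the notation follows the spirit of the paper's element/full vocabulary but is an informal approximation, not the formal definition.

    import numpy as np

    N = 1024
    a = np.random.rand(N)
    b = np.random.rand(N)
    c = np.empty(N)

    # Element-wise vector addition: each iteration reads one element of a and b
    # and writes one element of c. In species-style notation this is roughly
    #   a[0:N-1]|element ^ b[0:N-1]|element -> c[0:N-1]|element
    # i.e. a fully parallel, map-like class.
    for i in range(N):
        c[i] = a[i] + b[i]

    # Summation: every iteration reads one element of a but accumulates into a
    # single result, so the loop carries a dependence. Roughly a reduction-like
    # class along the lines of  a[0:N-1]|element -> s|full.
    s = 0.0
    for i in range(N):
        s += a[i]

A parallelising or source-to-source compiler can treat the first class as trivially parallel and map the second to a tree-style reduction, which is the kind of decision the classification is meant to support.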
Purpose: To assess the reproducibility of retinal nerve fiber layer (RNFL) measurements and the variability of the probabilistic classification algorithm in normal, hypertensive, and glaucomatous eyes using Stratus optical coherence tomography (OCT). Methods: Forty-nine eyes (13 normal, 17 ocular hypertensive [OHT], and 19 glaucomatous) of 49 subjects were included in this study. RNFL thickness was determined with Stratus OCT using the standard protocol RNFL thickness 3.4. Three different images of each eye were taken consecutively during the same session. To evaluate OCT reproducibility, the coefficient of variation (COV) and intraclass correlation coefficient (ICC) were calculated for the average thickness (AvgT), superior average thickness (Savg), and inferior average thickness (Iavg) parameters. The variability of the results of the probabilistic classification algorithm, based on the OCT normative database, was also analyzed, and the percentage of eyes with changes in the assigned category was calculated for each group. Results: The 50th percentile of COV was 2.96%, 4.00%, and 4.31% for AvgT, Savg, and Iavg, respectively. The glaucoma group presented the largest COV for all three parameters (3.87%, 5.55%, 7.82%). ICCs were greater than 0.75 for almost all measures (except for the inferior thickness parameter in the normal group; ICC = 0.64, 95% CI 0.334-0.857). Regarding the probabilistic classification algorithm for the three parameters (AvgT, Savg, Iavg), the percentage of eyes without color-code category changes among the three images was as follows: normal group, 100%, 84.6%, and 92%; OHT group, 89.5%, 52.7%, and 79%; and glaucoma group, 82%, 70.6%, and 76.5%, respectively. A probabilistic category switch from pathologic to normal or vice versa was observed in three eyes (15.8%) of the glaucomatous group for the Savg parameter and in two eyes of the OHT group: one eye (5.9%) for the AvgT and one eye (5.9%) for the Savg parameter. Conclusions: OCT RNFL measurements showed good reproducibility.
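For reference, the coefficient of variation reported above can be computed from repeated scans of the same eye as sketched below; the thickness values are made-up illustrative numbers, not data from this study.

    import numpy as np

    def coefficient_of_variation(measurements):
        """COV = sample standard deviation / mean, expressed as a percentage."""
        m = np.asarray(measurements, dtype=float)
        return 100.0 * m.std(ddof=1) / m.mean()

    # Three consecutive scans of one eye (illustrative values only):
    avg_thickness = [98.2, 101.5, 99.8]   # average RNFL thickness, micrometers
    print(f"COV = {coefficient_of_variation(avg_thickness):.2f}%")

The ICC additionally separates between-subject from within-subject variance and is typically obtained from a variance-components (random-effects) model rather than a single formula like the one above.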
This paper presents an objective and quantitative study of segmentation algorithms. This study is distinguished from other studies by considering both evaluation and comparison, treating algorithms selected from distinct technique groups as well as using carefully designed synthetic images for the test experiments. All these characteristics make this study a general and effective one for revealing the performance of segmentation algorithms. (C) 1997 Elsevier Science B.V.
A framework for describing specific algorithms and their data structures is introduced so that designs can be presented in a uniform style that is suitable for discovering new designs as well as documenting known ones. Data objects are described in terms of a formal grammar, and most data manipulation is characterized as composition of a few simple algorithms. Descriptions for several standard algorithms are included to illustrate the process, and a few newly designed structures are introduced. Such an approach is expected to be extremely useful in the construction of software libraries.