Prolog program synthesis can be made more efficient by using schemata which capture similarities in previously-seen programs. Such schemata narrow the search involved in the synthesis of a new program. We define a generalization operator for forming schemata from programs and a downward refinement operator for constructing programs from schemata. These operators define schema-hierarchy graphs which can be used to aid in the synthesis of new programs. Algorithms are presented for efficiently obtaining least generalizations of schemata, for adding new schemata to a schema-hierarchy graph, and for using schemata to construct new programs. (C) 1998 Elsevier Science B.V.
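The least-generalization operation at the heart of such schema formation is essentially anti-unification of terms. A minimal sketch in Python, assuming terms are represented as nested tuples `(functor, arg1, ...)` and atoms as plain strings; this representation and the variable-naming scheme are illustrative, not taken from the paper:

```python
# Least general generalization (anti-unification) of two Prolog-like terms.
# Terms: nested tuples (functor, arg1, ...); atoms: plain strings.
def lgg(s, t, table=None):
    """Return the least general generalization of terms s and t."""
    if table is None:
        table = {}
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Each distinct pair of differing subterms maps to the same fresh
    # variable wherever it recurs, preserving shared structure in the schema.
    if (s, t) not in table:
        table[(s, t)] = f"X{len(table)}"
    return table[(s, t)]

# lgg of p(a, f(a)) and p(b, f(b)) is p(X0, f(X0))
print(lgg(('p', 'a', ('f', 'a')), ('p', 'b', ('f', 'b'))))
```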
Data transformation is a laborious and time-consuming task for analysts. Programming by example (PBE) is a technique that can simplify this difficult task for data analysts by automatically generating programs for data transformation. Most of the previously proposed PBE methods are based on search algorithms, but recent improvements in machine learning (ML) have led to its application in PBE research. For example, RobustFill was proposed as an ML-based PBE method for string transformation by using long short-term memory (LSTM) as the sequential encoder-decoder model. However, an ML-based PBE method has not been developed for tabular transformations, which are used frequently in data analysis. Thus, in the present study, we propose an ML-based PBE method for tabular transformations. First, we consider the features of tabular transformations, which are more complex and data intensive than string transformations, and propose a new ML-based PBE method using the state-of-the-art Transformer sequential encoder-decoder model. To our knowledge, this is the first ML-based PBE method for tabular transformations. We also propose two decoding methods comprising multistep beam search and program validation-beam search, which are optimized for program generation, and thus generate correct programs with higher accuracy. Our evaluation results demonstrated that the Transformer-based PBE model performed much better than LSTM-based PBE when applied to tabular transformations. Furthermore, the Transformer-based model with the proposed decoding method performed better than the conventional PBE model using the search-based method.
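A generic beam-search decoder of the kind such sequence-to-sequence PBE models rely on can be sketched as follows; `score_next` here is a toy stand-in for the Transformer decoder's next-token log-probabilities, and the vocabulary and target program are invented for illustration:

```python
# Beam-search decoding over program tokens: keep the beam_width highest
# scoring partial programs at each step, expanding until max_len.
def beam_search(score_next, vocab, beam_width=3, max_len=5, eos="<eos>"):
    beams = [((), 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq and seq[-1] == eos:
                candidates.append((seq, logp))  # finished beam carries over
                continue
            for tok, lp in score_next(seq, vocab):
                candidates.append((seq + (tok,), logp + lp))
        # Keep only the top beam_width partial programs by score.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy scoring model that prefers the program ("split", ",", "<eos>").
def score_next(seq, vocab):
    target = ("split", ",", "<eos>")
    nxt = target[len(seq)] if len(seq) < len(target) else "<eos>"
    return [(t, 0.0 if t == nxt else -5.0) for t in vocab]

print(beam_search(score_next, ["split", ",", "join", "<eos>"]))
```

The paper's multistep and program-validation variants would additionally re-rank or filter candidate programs (e.g. by executing them on the input-output examples) before committing to a beam.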
Recovery from human operator error is key to system dependability: users are usually the main source for incorrect data, and this data can render a system unusable. Although current management information systems allow for the reversal of some processes, they don't offer an undo function to correct the errors human operators might introduce while editing data. The authors propose a method for correcting this potential problem with a new undo function that allows the recovery of previous states even after records have changed.
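One simple way to realize such an undo function is to record the prior value of every edited record on a history stack, so earlier states remain recoverable even after the record has been saved. A minimal sketch, with hypothetical record keys and values rather than the authors' actual design:

```python
# Record-level undo: each mutation stores the previous value so edits by
# a human operator can be reversed after the fact.
class UndoableTable:
    def __init__(self):
        self.rows = {}
        self.history = []  # stack of (key, previous value or None)

    def set(self, key, value):
        self.history.append((key, self.rows.get(key)))
        self.rows[key] = value

    def undo(self):
        if not self.history:
            return
        key, previous = self.history.pop()
        if previous is None:
            del self.rows[key]      # the record did not exist before
        else:
            self.rows[key] = previous

t = UndoableTable()
t.set("cust-1", {"name": "Ada"})
t.set("cust-1", {"name": "Bob"})   # operator error
t.undo()                           # recover the previous state
print(t.rows["cust-1"])            # {'name': 'Ada'}
```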
We present the Monte Carlo event generator YFSWW3 version 1.16 for the process of W-pair production and decay in electron-positron collisions. It includes O(alpha) electroweak radiative corrections in the WW production stage together with the O(alpha^3) initial-state-radiation (ISR) corrections in the leading-logarithmic (LL) approximation, implemented within the Yennie-Frautschi-Suura (YFS) exclusive exponentiation framework. The photon radiation in the W decays is generated by the dedicated program PHOTOS up to O(alpha^2) LL, normalized to the W branching ratios. The program is interfaced with the tau decay library TAUOLA and the quark fragmentation/hadronization package JETSET. The semi-analytical code Korwan for the calculations of the differential and total cross-sections at the Born level and in the ISR approximation is included. (C) 2001 Elsevier Science B.V. All rights reserved.
In recent years, "Artificial Intelligence (AI)" has become a focal point of discussion. AI music composition, an interdisciplinary field blending computer science and musicology, has emerged as a prominent area of research. Despite rapid advancements in AI music creation technology, there remains a dearth of comprehensive surveys addressing the core technologies within this domain. To address this gap, this study conducted a comprehensive search across multiple databases spanning a 23-year period (2000-2023) on the topic of "AI music composition." Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standard for literature screening, the study systematically organized the development status, frontier hotspots, and technical evaluations of the field. Drawing from literature data, the study verified three scientific productivity laws (Price's Law, Lotka's Law, and Bradford's Law) while summarizing the current landscape from four perspectives: authors, organizations, countries, and journals. Subsequently, utilizing the two bibliometric software tools VOSviewer and CiteSpace, the study conducted an in-depth analysis consisting of four steps: clustering, time zone, burst words, and high-frequency referenced literature. The study presented the evolution trajectory of frontiers and hotspots through visualization maps. Finally, building upon quantitative statistical insights, the study qualitatively expanded research efforts by organizing and evaluating the latest AI music generation algorithm technologies. The systematic literature analysis, both quantitative and qualitative, aims to furnish researchers and practitioners in related fields with systematic references.
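Price's Law, one of the productivity laws the survey verifies, predicts that roughly the square root of all authors account for about half of the publications. A small illustrative check, using made-up author productivity counts rather than the survey's data:

```python
import math

# Share of all papers produced by the sqrt(N) most prolific authors.
def price_share(papers_per_author):
    counts = sorted(papers_per_author, reverse=True)
    k = round(math.sqrt(len(counts)))      # the top sqrt(N) authors
    return sum(counts[:k]) / sum(counts)   # their share of all papers

# Hypothetical counts for 16 authors; Price's Law predicts ~50%.
counts = [10, 8, 6, 5, 4, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
share = price_share(counts)
print(f"top sqrt(N) authors produce {share:.0%} of papers")
```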
Automation of X-ray laboratory instruments has been achieved by combining commercially available automation hardware, X-ray units of various manufacturers, and a PDP-11 minicomputer. The automation provides simultaneous on-line control of up to ten separate X-ray instruments through user programs written in Lab-Basic, which is the 1-8 User Basic software enlarged by the incorporation of three additional capabilities, viz., routines to control automation hardware, commands making the programming easier, and a mass-storage facility. The interactive nature of the hardware and of the software was dictated by the diverse types of analyses required in a general-purpose X-ray laboratory. The automation system provides the versatility needed for the various X-ray procedures by allowing the user to plan, debug, and execute the measurement, data reduction, and data presentation. Analytical procedures automated so far include the measurement of residual stress by X-ray diffraction, elemental analysis of catalysts by X-ray fluorescence, and quantitative phase analysis of ceramic catalyst support material by X-ray diffraction. Extension to other methods is contemplated.
We present a summary of the CHI project conducted at Kestrel Institute through mid-1984. The objective of this project was to perform research on knowledge-based software environments. Toward this end, key portions of a prototype environment, called CHI, were built that established the feasibility of this approach. One result of this research was the development of a wide-spectrum language that could be used to express all stages of the program development process in the system. Another result was that the prototype compiler was used to synthesize itself from a very-high-level description of itself. In this way the system was bootstrapped. We describe the overall nature of the work done on this project, give highlights of implemented prototypes, and describe the implications that this work suggests for the future of software engineering. In addition to this historical perspective, current research projects at Kestrel Institute as well as commercial applications of the technology at Reasoning Systems are briefly surveyed.
Elicitation of the elements of Unified Modelling Language (UML) analysis and design models from sentences written in scripted English is essential in the production of analysis and design models. The correct elicitation of these elements depends on the intuitive, manually defined set of linguistic heuristics, which is used to map a word in the sentence to its correct semantics in the domain of UML analysis and design models. This paper proposes a Genetic Algorithm-based classification rule discovery approach and a developed Enhanced Intuitive Linguistic Heuristics (EILH) dataset to automate the definition of the intuitive linguistic heuristics set to elicit five elements of UML analysis and design models from English sentences. These elements are the use case, the actor, the sender, the receiver, and the message. The automatically defined intuitive linguistic heuristics set was evaluated by developing an Artificial Neural Network (ANN) to recognize the elements of the UML analysis and design models using both manually defined and automatically defined sets. This comparison shows the superiority of the automatically defined set over the manually defined one.
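A genetic algorithm for classification rule discovery can be sketched in miniature as follows; the bit-mask rule encoding, fitness function, and toy labeled data below are invented for illustration and are not the paper's EILH setup:

```python
import random

random.seed(0)  # reproducible toy run

# Toy labeled data: (binary word features, class label).
DATA = [((1, 0, 1, 0), 1), ((1, 1, 1, 0), 1),
        ((0, 1, 0, 1), 0), ((0, 0, 0, 1), 0)]

def fitness(mask):
    # A rule "fires" when any masked feature is set; score = accuracy.
    hits = sum((any(m and f for m, f in zip(mask, feats)) == bool(label))
               for feats, label in DATA)
    return hits / len(DATA)

def evolve(pop_size=8, gens=20, n=4):
    pop = [tuple(random.randint(0, 1) for _ in range(n))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = list(a[:cut] + b[cut:])
            if random.random() < 0.2:           # mutation: flip one bit
                i = random.randrange(n)
                child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```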
Author: Fouks, J.D. (UFR Sci., IRCOM-SIC, SP2MI, UMR 6615, F-86960 Futuroscope, France)
We study, with the help of Chaitin's algorithmic theory of information, the survival of an autonomous automatic system in a drastically simplified world. Although our abstract animal only has to recognize its food, it cannot avoid errors without the help of an unreasonably large memory. The continuous integration of new information, implying the elimination of obsolete information, is therefore the only way to compute a correct behavior on a realizable machine. (C) 1999 Elsevier Science B.V. All rights reserved.
Decision tables for real time process control logically present the solution to a problem and directly replace flowcharts. A decision table is a self-contained description of the solution to a complex decision problem. It presents all pertinent variables, the rules for combining possible conditions, and the actions desired. The power of a decision table is that its format and programming are independent of the problem, which makes it possible to automate much of the work required to design, program, test, and document solutions to complex control problems.
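The problem-independence of decision-table evaluation can be illustrated with a small sketch: the conditions and rules are plain data, while the evaluation routine never changes. The process-control conditions and actions below are hypothetical:

```python
# A decision table: conditions are predicates over the process state,
# rules map each combination of condition outcomes to an action.
conditions = [
    lambda s: s["temp"] > 100,      # C1: over-temperature?
    lambda s: s["pressure"] > 50,   # C2: over-pressure?
]
rules = {
    (True,  True):  "shutdown",
    (True,  False): "open_vent",
    (False, True):  "reduce_feed",
    (False, False): "continue",
}

def decide(state):
    # Problem-independent evaluation: test every condition, look up the rule.
    outcome = tuple(c(state) for c in conditions)
    return rules[outcome]

print(decide({"temp": 120, "pressure": 30}))  # open_vent
```

Swapping in a different control problem means replacing only the `conditions` and `rules` data; `decide` itself stays untouched, which is what makes the format automatable.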