JSON has not only become a de facto language for data exchange between distributed systems but has also commonly been used as the intermediate configuration format in those systems. On the other hand, annotations have extensi...
ISBN (print): 9781728125350
A recurring issue in generative approaches, in particular if they generate code for multiple target languages, is logging. How to ensure that logging is performed consistently for all the supported languages? How to ensure that the specific semantics of the source language, e.g. a modeling language or a domain-specific language, is reflected in the logs? How to expose logging concepts directly in the source language, so as to let developers specify what to log? This paper reports on our experience developing a concrete logging approach for ThingML, a textual modeling language built around asynchronous components, statecharts and a first-class action language, as well as a set of "compilers" targeting C, Go, Java and JavaScript.
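The core idea — one abstract log event rendered consistently for each supported target language — can be sketched with a hypothetical template-based generator. The templates and the `emit_log` helper below are illustrative assumptions, not ThingML's actual compiler API:

```python
# Hypothetical sketch: emit an equivalent log statement for several target
# languages from one abstract "log event" description, so the semantics of
# the source language shows up identically in every generated codebase.
TEMPLATES = {
    "c":          'printf("[{component}] state={state}\\n");',
    "go":         'log.Printf("[{component}] state=%s", "{state}")',
    "java":       'logger.info("[{component}] state={state}");',
    "javascript": 'console.log("[{component}] state={state}");',
}

def emit_log(component, state, target):
    """Render the same semantic event for one target language."""
    return TEMPLATES[target].format(component=component, state=state)

for lang in TEMPLATES:
    print(lang, "->", emit_log("Thermostat", "Heating", lang))
```

Keeping the event description abstract and pushing the per-language syntax into templates is one way to get the consistency across C, Go, Java and JavaScript that the paper asks for.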
ISBN (digital): 9781728162782
ISBN (print): 9781728162799
Blockchain technology is an emerging technology that allows new forms of decentralized architectures, designed to generate trust among users without the intervention of mediators or mutual knowledge between the parties. Since 2015, thanks to the introduction of Smart Contracts by Ethereum, it is possible to run programs on the blockchain, greatly extending the potential of this technology. Programming Smart Contracts in the Solidity language differs from traditional programming. First of all, any action that modifies the blockchain costs gas, which corresponds to a fraction of the currency used by that blockchain, and therefore to real money. Gas optimization is a unique challenge in this context and has obvious implications. This document aims to provide a set of design patterns and tips to help save gas when developing Smart Contracts on Ethereum. The provided patterns are divided into five main categories based on their features.
Author: Kemmerer, David (Purdue Univ, Dept Speech Language & Hearing Sci, Lyles Porter Hall, 715 Clin Dr, W Lafayette, IN 47907, USA)
There are nearly 6,500 languages in the world, and they vary greatly with regard to both lexical and grammatical semantics. Hence, an early stage of utterance planning involves "thinking for speaking"-i.e., shaping the thoughts to be expressed so they fit the idiosyncratic meanings of the symbolic units that happen to be available in the target language. This paper has three main sections that cover three distinct types of crosslinguistic semantic diversity. Each type is initially elaborated with examples, and then its implications for the neurobiology of speech production are considered. Type 1: Semantic field partitions. These are exemplified by huge crosslinguistic differences in many domains of meaning, including colors, body parts, household containers, events of cutting and breaking, and topological spatial relations. When such differences are viewed from the perspective of contemporary neurocognitive theories which assume that most concrete concepts are subserved by both modal (i.e., sensory, motor, and affective) and transmodal (i.e., integrative) cortical systems, they imply that speakers must access language-specific semantic structures at multiple levels of representation in the brain. Type 2: Semantic conflation classes. Some languages have whole sets of words that systematically encode two or more components of meaning. For instance, in the roughly 53 Athabaskan languages there are no generic verbs like give, carry, or throw; instead, there are entire sets of 9-13 verbs for these kinds of actions, with each set making the same distinctions between the types of objects that are given, carried, or thrown, such as animate objects, round objects, stick-like objects, flat objects, etc. This regular conflation of [action + object] in Athabaskan verb meanings predicts that speakers frequently co-activate both action-related and object-related cortical regions in a functionally integrated fashion. Type 3: Grammatically obligatory semantic categories. This kind o
ISBN (print): 9781728134567
For the evolution and maintenance of legacy systems, it is essential that they are reverse engineered. This is becoming important because in many legacy systems suitable documentation is not available, which makes maintenance difficult. The proposed approach extracts hidden information that is useful for maintaining a software system. Application modeling and data modeling are essential for software development and are used for designing the application and the database separately. Generally, in software systems the application layer is written in an object-oriented programming language and the database is the backend store. Modeling these layers separately creates inconsistency. This paper proposes a technique to reverse engineer the database schema into a UML model, which incorporates modeling of an object-oriented technology on top of a purely relational database. Through this conversion of the database schema to a UML diagram, the gap between these layers is bridged, providing a better understanding of the legacy system. The database schema is converted into XMI format, which can be imported into a UML modeling tool. The proposed approach converts the SQL database schema, as well as the most frequently used SELECT statements for the schema, into a UML model in the form of XMI. By importing the generated XMI file into MagicDraw, the UML model is graphically represented.
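The shape of such a schema-to-XMI mapping can be sketched with a toy converter. This is a minimal illustration, not the paper's tool: real MagicDraw-importable XMI needs proper `xmi:` namespaces and ids, and a real parser would not rely on a regex:

```python
import re
import xml.etree.ElementTree as ET

def schema_to_xmi(ddl: str) -> str:
    """Map each CREATE TABLE to a UML class and each column to a UML property.
    Toy sketch only: namespaces, xmi:ids, keys, and associations are omitted."""
    root = ET.Element("XMI", version="2.1")
    model = ET.SubElement(root, "UML.Model", name="LegacyDB")
    for name, body in re.findall(r"CREATE TABLE (\w+)\s*\((.*?)\)\s*;", ddl, re.S | re.I):
        cls = ET.SubElement(model, "UML.Class", name=name)
        for col in body.split(","):
            col_name, col_type = col.split()[:2]
            ET.SubElement(cls, "UML.Property", name=col_name, type=col_type)
    return ET.tostring(root, encoding="unicode")

ddl = "CREATE TABLE person (id INT, name VARCHAR);"
xmi = schema_to_xmi(ddl)
print(xmi)
```

Mapping tables to classes and columns to typed properties is the uncontroversial core of the conversion; foreign keys becoming UML associations is where a real implementation does most of its work.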
ISBN (print): 9781538646595
This work deals with non-native children's speech and investigates both multi-task and transfer learning approaches to adapt a multi-language Deep Neural Network (DNN) to speakers, specifically children, learning a foreign language. The application scenario is characterized by young students learning English and German and reading sentences in these second-languages, as well as in their mother language. The paper analyzes and discusses techniques for training effective DNN-based acoustic models starting from children's native speech and performing adaptation with limited non-native audio material. A multi-lingual model is adopted as baseline, where a common phonetic lexicon, defined in terms of the units of the International Phonetic Alphabet (IPA), is shared across the three languages at hand (Italian, German and English); DNN adaptation methods based on transfer learning are evaluated on significant non-native evaluation sets. Results show that the resulting non-native models allow a significant improvement with respect to a mono-lingual system adapted to speakers of the target language.
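The freeze-and-fine-tune idea behind such transfer learning can be sketched in a few lines of numpy. The two-layer net, toy data, learning rates, and MSE objective below are illustrative assumptions standing in for the multi-lingual DNN, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    H = np.tanh(X @ W1)   # shared hidden layer (stand-in for the DNN body)
    return H, H @ W2      # task-specific linear output layer

def mse(Y, T):
    return float(np.mean((Y - T) ** 2))

# "Pretrain" both layers on plentiful source-language data (random stand-ins).
Xs, Ts = rng.normal(size=(200, 8)), rng.normal(size=(200, 3))
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 3))
for _ in range(100):
    H, Y = forward(W1, W2, Xs)
    G = 2 * (Y - Ts) / len(Xs)
    dW2 = H.T @ G
    dW1 = Xs.T @ ((G @ W2.T) * (1 - H ** 2))   # tanh' = 1 - tanh^2
    W1 -= 0.01 * dW1
    W2 -= 0.01 * dW2

# Transfer: freeze W1 and adapt only the output layer on ten "non-native" samples.
Xt, Tt = rng.normal(size=(10, 8)), rng.normal(size=(10, 3))
before = mse(forward(W1, W2, Xt)[1], Tt)
for _ in range(100):
    H, Y = forward(W1, W2, Xt)
    W2 -= 0.1 * H.T @ (2 * (Y - Tt) / len(Xt))  # only the top layer moves
after = mse(forward(W1, W2, Xt)[1], Tt)
print("adaptation loss:", before, "->", after)
```

Freezing the shared layers limits what the limited non-native material can overwrite, which is the point when adaptation data is scarce.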
ISBN (print): 9781538646595
Audio Word2Vec offers vector representations of fixed dimensionality for variable-length audio segments using a Sequence-to-sequence Autoencoder (SA). These vector representations have been shown to describe the sequential phonetic structures of the audio segments to a good degree, with real-world applications such as spoken term detection (STD). This paper examines the language-transfer capability of Audio Word2Vec. We train the SA on one language (the source language) and use it to extract vector representations of audio segments of another language (the target language). We found that the SA can still capture the phonetic structure of the target-language audio segments if the source and target languages are similar. In STD, we obtain vector representations from an SA learned from a large amount of source-language data, and find that they surpass representations from a naive encoder as well as from an SA trained directly on a small amount of target-language data. The result shows that it is possible to learn an Audio Word2Vec model from high-resource languages and use it on low-resource languages. This further expands the usability of Audio Word2Vec.
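How a fixed-dimensional segment representation is used in STD can be illustrated with the mean-pooling "naive encoder" the paper compares against. Synthetic frame features stand in for real audio here, and the real Audio Word2Vec would use the seq2seq autoencoder's hidden state instead of a mean:

```python
import numpy as np

def naive_encode(segment):
    """Stand-in encoder: average the frame features into one fixed-size vector.
    (This is the 'naive encoder' baseline; the SA replaces the mean with the
    autoencoder's learned representation.)"""
    return np.mean(segment, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
base = rng.normal(size=13)                             # shared phonetic "pattern"
query = base + rng.normal(scale=0.1, size=(20, 13))    # 20 frames, 13-dim MFCC-like
match = base + rng.normal(scale=0.1, size=(35, 13))    # same word, different length
other = rng.normal(size=(28, 13))                      # unrelated segment

q, m, o = map(naive_encode, (query, match, other))
print("match:", cosine(q, m), "other:", cosine(q, o))
```

Because every segment maps to the same dimensionality regardless of its length, detection reduces to ranking candidate segments by cosine similarity to the query vector.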
ISBN (print): 9781538646595
Research on multilingual speech emotion recognition faces the problem that most available speech corpora differ from each other in important ways, such as annotation methods or interaction scenarios. These inconsistencies complicate building a multilingual system. We present results for cross-lingual and multilingual emotion recognition on English and French speech data with similar characteristics in terms of interaction (human-human conversations). Further, we explore the possibility of fine-tuning a pre-trained cross-lingual model with only a small number of samples from the target language, which is of great interest for low-resource languages. To gain more insights into what is learned by the deployed convolutional neural network, we perform an analysis of the attention mechanism inside the network.
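The kind of attention that lends itself to such an analysis — a weighted pooling over time frames whose weights show where the model "listens" — can be sketched generically. The scoring function and dimensions below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()

def attention_pool(frames, w):
    """Score each frame with a learned vector w, normalize the scores with
    softmax, and return the weighted sum plus the weights themselves.
    Plotting `weights` over time is the usual attention analysis."""
    scores = frames @ w
    weights = softmax(scores)
    return weights @ frames, weights

rng = np.random.default_rng(2)
frames = rng.normal(size=(50, 4))   # 50 time frames, 4-dim features
w = rng.normal(size=4)              # scoring vector (learned in practice, random here)
pooled, weights = attention_pool(frames, w)
print(pooled.shape, weights.argmax())
```

The weights sum to one, so inspecting them tells you which utterance regions dominate the pooled representation fed to the classifier.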
ISBN (print): 9781538646595
We introduce a novel type of representation learning to obtain a speaker-invariant feature for zero-resource languages. Speaker adaptation is an important technique for building a robust acoustic model. For a zero-resource language, however, conventional model-dependent speaker adaptation methods such as constrained maximum likelihood linear regression are insufficient because the acoustic model of the target language is not accessible. Therefore, we introduce a model-independent feature extraction based on a neural network. Specifically, we introduce multi-task learning into a bottleneck feature-based approach to make the bottleneck feature invariant to a change of speakers. The proposed network simultaneously tackles two tasks: phoneme classification and speaker classification. The network trains a feature extractor in an adversarial manner, allowing it to map input data into a representation that is discriminative for phonemes while making it difficult to predict speakers. We conduct phone discriminability experiments on the Zero Resource Speech Challenge 2017. Experimental results showed that our multi-task network yielded more discriminative features while eliminating speaker variability.
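The standard mechanism for this kind of adversarial feature training is a gradient reversal layer (Ganin & Lempitsky's trick); whether the paper uses exactly this layer is an assumption, but it is a minimal sketch of the idea:

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity on the forward pass; on the backward
    pass the gradient coming from the speaker classifier is negated (and scaled
    by lam), so the shared feature extractor is pushed to HURT speaker
    prediction while the phoneme head's gradient still flows through unchanged."""
    def __init__(self, lam=1.0):
        self.lam = lam
    def forward(self, x):
        return x
    def backward(self, grad_from_speaker_head):
        return -self.lam * grad_from_speaker_head

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0, 3.0])   # bottleneck activations (illustrative)
g = np.array([0.2, 0.4, -0.6])   # gradient from the speaker loss (illustrative)
print(grl.forward(x), grl.backward(g))
```

Placing this layer between the bottleneck and the speaker classifier turns ordinary joint training of the two heads into the minimax game that yields speaker-invariant bottleneck features.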
ISBN (print): 9781538646595
This paper proposes adversarial multilingual training to train bottleneck (BN) networks for the target language. A parallel shared-exclusive model is also proposed to train the BN network. Adversarial training is used to ensure that the shared layers learn language-invariant features. Experiments are conducted on the IARPA Babel datasets. The results show that the proposed adversarial multilingual BN model outperforms the baseline BN model by up to 8.9% relative word error rate (WER) reduction. The results also show that the proposed parallel shared-exclusive model achieves up to 1.7% relative WER reduction compared with the stacked shared-exclusive model.