Authors:
M.J. Adamson, R.I. Damper
Image, Speech and Intelligent Systems (ISIS) Research Group, Department of Electronics and Computer Science, University of Southampton, Southampton, UK
Previous attempts to derive connectionist models for text-to-phoneme conversion, such as NETtalk and NETspeak, have generally used pre-aligned training data and purely feedforward networks, both of which represent simplifications of the problem. In this work, we explore the potential of recurrent networks to perform the conversion task when trained on non-aligned data. Initially, our use of a single recurrent network produced disappointing results. This led to the definition of a two-phase model in which the hidden-unit representation of an auto-associative network was fed forward to a recurrent network. Although this model does not yet perform as well as NETspeak, it is solving a harder problem. We also propose several possible avenues for improvement.
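As a rough illustration of the two-phase architecture, the following sketch feeds the hidden-unit code of an auto-associative network into an Elman-style recurrent network. All layer sizes, the one-hot letter encoding, and the "cat" example are hypothetical choices for illustration, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Phase 1: auto-associative network (hypothetical sizes).
n_in, n_hidden = 26, 10  # one-hot letters -> compressed hidden code
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_dec = rng.normal(scale=0.1, size=(n_in, n_hidden))  # decoder, used in training

def encode(x):
    return sigmoid(W_enc @ x)  # hidden-unit representation

# Phase 2: Elman-style recurrent network driven by the phase-1 code.
n_state, n_out = 12, 8
W_xh = rng.normal(scale=0.1, size=(n_state, n_hidden))
W_hh = rng.normal(scale=0.1, size=(n_state, n_state))
W_hy = rng.normal(scale=0.1, size=(n_out, n_state))

def run_sequence(letters):
    """Map a sequence of one-hot letter vectors to output activations."""
    h = np.zeros(n_state)
    outputs = []
    for x in letters:
        code = encode(x)                     # phase-1 representation
        h = sigmoid(W_xh @ code + W_hh @ h)  # recurrent state update
        outputs.append(sigmoid(W_hy @ h))
    return outputs

word = [np.eye(n_in)[i] for i in (2, 0, 19)]  # "c", "a", "t" as one-hot vectors
outs = run_sequence(word)
```

Because the recurrent state carries context across letters, the network is not forced to assume a one-to-one letter-phoneme alignment, which is what makes training on non-aligned data conceivable.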
In the area of medical imaging, fully-automatic and robust segmentation techniques would have an enormous beneficial impact on clinical practice and research, by decreasing dramatically the manual effort which must otherwise be devoted to this task. Deployment of conventional image processing techniques has not so far led to a fully-automatic solution, although semi-automatic systems do exist. Since no known, robust segmentation algorithm exists, the ability of neural networks to discover regularities and features in complex data is appealing. Indeed, many preliminary attempts at neural segmentation have been described, although none yet achieves the necessary level of performance for routine application. Southampton General Hospital have a requirement to obtain lung-boundary data within an asthma research project. In connection with this requirement, we have previously reported on work in which multilayer perceptrons (MLPs) are trained using backpropagation to segment the region of the lungs in magnetic resonance images of the thorax. This is achieved by training the network to classify voxels as either boundary (voxels on the boundary between lung interior and surrounding tissue) or non-boundary. In this paper, we present the latest results using this technique. We also show how the generalisation performance of the MLP can be improved using a variety of techniques, including weight pruning algorithms.
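The voxel-classification step and a simple weight-pruning pass might be sketched as follows. The 5x5 patch size, network sizes, and magnitude-based pruning criterion are assumptions for illustration; the paper refers to "weight pruning algorithms" without this sketch committing it to any particular one:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical MLP: a flattened 5x5 intensity patch around a voxel
# -> probability that the voxel lies on the lung boundary.
n_in, n_hidden = 25, 8
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(1, n_hidden))

def classify_voxel(patch):
    """Return P(boundary) for a flattened neighbourhood patch."""
    h = sigmoid(W1 @ patch)
    return float(sigmoid(W2 @ h))

def prune_by_magnitude(W, fraction=0.2):
    """Zero the smallest-magnitude weights (simple magnitude pruning)."""
    k = int(W.size * fraction)
    if k == 0:
        return W
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

W1_pruned = prune_by_magnitude(W1, 0.2)
p = classify_voxel(np.ones(n_in))
```

Pruning small weights reduces the effective number of free parameters, which is one common route to better generalisation on limited training data.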
This paper describes two computing paradigms, neural computing and evolutionary computing, and their potential contribution to building intelligent software systems. The paper begins with a brief introduction to the origins of each paradigm. Two sections then introduce the basic principles and identify the role of each paradigm in intelligent system design. Each section ends with a number of applications that have been or are being investigated. These include connection admission control, modem communication, adaptive model-based control, face and handwriting recognition, frequency assignment, help-desk scheduling, financial time-series prediction, and evolving agent behavior. A further section introduces the idea of using communications theory to design neural networks, and the paper concludes with the authors' views on the future of neural and evolutionary computing for intelligent software systems.
Supervised parameter adaptation in many artificial neural networks is largely based on an instantaneous version of gradient descent called the least-mean-square (LMS) algorithm. As the gradient is estimated using single samples of the input ensemble, the algorithm's convergence properties generally deviate significantly from those of true gradient descent because of the noise in the gradient estimate. It is thus important to study the gradient-noise characteristics so that the convergence of the LMS algorithm can be analysed from a new perspective. This paper considers only neural models which are linear with respect to their adaptable parameters, and makes two major contributions. First, it derives an expression for the gradient-noise covariance under the assumption that the input samples are real, stationary and Gaussian distributed, but can be partially correlated. This expression relates the gradient correlation and input correlation matrices to the gradient-noise covariance and explains why the gradient noise generally correlates maximally with the steepest principal axis and minimally with the one of smallest curvature, regardless of the magnitude of the weight error. Second, a recursive expression for the weight-error correlation matrix is derived in a straightforward manner using the gradient-noise covariance, and comparisons are drawn with the complex LMS algorithm.
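For a model linear in its parameters, the instantaneous (single-sample) gradient step that the abstract refers to is the familiar LMS update w ← w + μ·e·x. A minimal sketch, with an arbitrary target weight vector and a noiseless desired signal chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# LMS on a model linear in its parameters: d = w_true . x
w_true = np.array([1.0, -2.0, 0.5])  # hypothetical target weights
w = np.zeros(3)
mu = 0.05  # step size; must be small enough for mean-square stability

for _ in range(5000):
    x = rng.normal(size=3)  # single sample of the input ensemble
    d = w_true @ x          # desired response (noiseless here)
    e = d - w @ x           # instantaneous error
    w += mu * e * x         # noisy single-sample gradient step

# Despite the gradient noise on every individual step, the weight
# vector converges toward w_true in this well-conditioned case.
```

Each update uses a one-sample estimate of the true gradient, so successive steps are noisy; the paper's contribution is characterising the covariance of exactly this noise.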
Much work has been undertaken on investigating the use of semaphore primitives in concurrent programming languages. It has been shown that semaphores are adequate for expressing many forms of concurrency control, including the enforcement of communication protocols, and mutual exclusion protocols on shared resources. In this paper we present a formal language for real-time distributed programs which includes a semaphore primitive. This primitive is used to lock and unlock resources which are directly associated with either processors or communication channels. The semaphores are real-time, i.e. the programmer can express timing constraints about when the semaphores should lock and unlock. It is demonstrated that, using these semaphores, a number of apparently disjoint issues in real-time distributed systems theory can be unified within a single notion of resource restriction. In particular it is shown that different models of communication, control of shared access to resources (mutual exclusion), and process to processor mapping (physical placement), can all be expressed and reasoned about in a unified manner.
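The flavour of a time-constrained lock can be shown with a toy run-time sketch. Note the mismatch in kind: the paper's language expresses and reasons about timing constraints formally, whereas this illustration merely enforces an acquisition deadline at run time; the class and its interface are invented for illustration:

```python
import threading

class TimedSemaphore:
    """Toy real-time semaphore guarding a resource such as a
    processor or a communication channel: acquisition must succeed
    within a stated deadline or the lock attempt fails."""

    def __init__(self):
        self._sem = threading.Semaphore(1)

    def lock(self, deadline):
        """Try to lock the resource within `deadline` seconds;
        return True on success, False if the deadline expires."""
        return self._sem.acquire(timeout=deadline)

    def unlock(self):
        self._sem.release()

channel = TimedSemaphore()
ok = channel.lock(deadline=0.1)  # resource free: lock succeeds at once
channel.unlock()
```

Associating such a lock with a channel (a model of communication), a shared resource (mutual exclusion), or a processor (placement) is the unification the abstract describes.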
Integrating concurrent and object-oriented language facilities is currently an active research area. There are a few experimental languages which attempt to combine various models of concurrency within an OOP framewor...
This paper addresses the issue of deadline guarantees for synchronous messages in FDDI networks. The exact value of X_i (the minimum amount of time available to each node i on the network to transmit its synchronous messages within the message's (relative) deadline D_i), which improves on previously published values, is derived. Based on this result, the schedulability of synchronous message sets is analysed in general. This analysis differs from previous ones by taking into account synchronous message sets whose minimum message deadline (D_min) is less than twice the Target Token Rotation Time (TTRT).
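A schedulability test built on the classical (pre-existing) bound on X_i can be sketched as below. The classical bound, not the paper's tighter derivation, is used here, and the example parameter values are invented; note that when D_i < 2·TTRT the classical bound degenerates to zero, which is precisely the regime the paper's analysis covers:

```python
import math

def classical_X(D_i, TTRT, H_i):
    """Classical lower bound on the transmission time available to
    node i (with synchronous bandwidth H_i) within deadline D_i under
    the timed-token protocol. The paper derives a tighter value,
    which is not reproduced here."""
    return (math.floor(D_i / TTRT) - 1) * H_i

def schedulable(msgs, TTRT, overhead=0.0):
    """msgs: list of (C_i, D_i, H_i) triples, where C_i is the
    transmission time required per period. Checks the protocol
    constraint (bandwidths fit within TTRT) and each deadline."""
    if sum(H for _, _, H in msgs) + overhead > TTRT:
        return False  # protocol constraint violated
    return all(classical_X(D, TTRT, H) >= C for C, D, H in msgs)

# Hypothetical example: TTRT = 10, two nodes with generous deadlines.
ok = schedulable([(2.0, 50.0, 1.0), (3.0, 40.0, 2.0)], TTRT=10.0)
```

With D_i = 15 and TTRT = 10, for instance, classical_X gives 0 regardless of H_i, so the classical test can never accept such a set; a tighter X_i is what makes the D_min < 2·TTRT case analysable.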
Trials of video on demand and pay-per-view systems are underway in many countries, and as a result, many cable and telecommunications companies are having to upgrade or replace their distribution networks. Video streams must be secure in order to prevent unauthorized viewing of the programs being transmitted, yet most existing security systems do not make use of the potential for bidirectional signalling in new and upgraded networks. It would prove more useful to many companies if they could use one system to test a number of different distribution networks to see if they give an acceptable quality of service. In this manner, a company could decide if existing networks could be used without substantial changes, or whether expensive upgrades or indeed replacements were justified.
This paper presents an evolutionary approach to the design of feedforward and recurrent neural networks. We show that evolutionary algorithms can be used to construct networks for real-world tasks. To this end, a data-structure-based genotypic network representation and corresponding genetic operators are introduced. Results from the classification, function approximation and time-series domains are presented.
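The idea of a genotypic network representation with genetic operators can be sketched in miniature. The paper's actual data structure is richer; here the genotype is reduced, purely for illustration, to a variable-length list of hidden-layer sizes, with invented mutation and crossover operators:

```python
import random

random.seed(3)

def random_genotype(max_layers=3, max_units=16):
    """Hypothetical genotype: a variable-length list of hidden-layer sizes."""
    return [random.randint(1, max_units)
            for _ in range(random.randint(1, max_layers))]

def mutate(geno, max_units=16):
    """Point mutation: perturb one layer's size, clamped to [1, max_units]."""
    g = list(geno)
    i = random.randrange(len(g))
    g[i] = max(1, min(max_units, g[i] + random.choice([-2, -1, 1, 2])))
    return g

def crossover(a, b):
    """One-point crossover on the layer lists; never returns an empty child."""
    ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:ca] + b[cb:]) or [a[0]]

population = [random_genotype() for _ in range(8)]
child = crossover(mutate(population[0]), population[1])
```

Because the genotype length varies, such operators can grow and shrink the architecture itself, which is what distinguishes evolutionary network design from weight-only optimisation.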