ISBN (Print): 9781479928934
We compare two optimization methods for lattice-based sequence discriminative training of neural network acoustic models: distributed Hessian-free (DHF) and stochastic gradient descent (SGD). Our findings on two different LVCSR tasks suggest that SGD running on a single GPU machine reaches the best accuracy 2.5 times faster than DHF running on multiple non-GPU machines; however, DHF training achieves a higher accuracy by the end of the optimization. In addition, we present an improved modified forward-backward algorithm for computing lattice-based expected loss functions and gradients, which yields a 34% speedup for SGD.
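The lattice-based statistics mentioned above rest on a forward-backward pass over the word lattice. As a hedged illustration only (the paper's improved variant is not reproduced here), the sketch below computes arc posteriors on a toy lattice with hypothetical log-scores; the node numbering, arc list, and `arc_posteriors` helper are assumptions for the example, not the authors' implementation.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def arc_posteriors(arcs, num_nodes, start=0, end=None):
    """Standard forward-backward over a lattice (a DAG of scored arcs).

    arcs: list of (start_node, end_node, log_score), sorted so that
          start nodes appear in topological order.
    Returns the posterior probability of each arc:
        p(arc) = exp(alpha[s] + w + beta[e] - alpha[end])
    where alpha[n] is the log total score of paths reaching node n
    and beta[n] is the log total score of paths from n to the end.
    """
    if end is None:
        end = num_nodes - 1
    NEG_INF = float("-inf")
    alpha = [NEG_INF] * num_nodes
    beta = [NEG_INF] * num_nodes
    alpha[start] = 0.0
    beta[end] = 0.0
    for s, e, w in arcs:                      # forward pass
        x = alpha[s] + w
        alpha[e] = x if alpha[e] == NEG_INF else logsumexp([alpha[e], x])
    for s, e, w in reversed(arcs):            # backward pass
        x = w + beta[e]
        beta[s] = x if beta[s] == NEG_INF else logsumexp([beta[s], x])
    total = alpha[end]                        # log total lattice score
    return [math.exp(alpha[s] + w + beta[e] - total) for s, e, w in arcs]

# Toy 4-node lattice with two competing paths (scores are illustrative).
arcs = [
    (0, 1, math.log(0.6)),
    (0, 2, math.log(0.4)),
    (1, 3, math.log(0.9)),
    (2, 3, math.log(0.7)),
]
posteriors = arc_posteriors(arcs, num_nodes=4)
```

The posteriors of the two arcs leaving the start node sum to one, as required; in sequence training, such posteriors weight the per-frame gradient contributions of each lattice arc.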