Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words
Type: Article
KAUST Department: Computer Science Program
Date: 2013-10-04
Online Publication Date: 2013-10-04
Print Publication Date: 2013
Permanent link to this record: http://hdl.handle.net/10754/550837
Abstract:
Artificial neural networks have the ability to learn by example and can solve problems that are hard to solve with ordinary rule-based programming. They have many design parameters that affect their performance, such as the number and sizes of the hidden layers. Large layers are slow to train, while small layers are generally less accurate. Tuning the network size is a hard task because the design space is often large and training is often a long process. We use design-of-experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that the best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use a fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate each experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that lie on the Pareto-optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.

Citation:
Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words. Journal of Software Engineering and Applications, 2013, 6 (10): 533.

Publisher:
Scientific Research Publishing, Inc.
DOI: 10.4236/jsea.2013.610064
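The Pareto-frontier selection step described in the abstract — keeping only the network sizings that no other sizing beats on both predicted error and predicted time — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the candidate configurations and their (error, time) values are hypothetical.

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point.

    Each point is an (error, time) pair, and both objectives are
    minimized. Point q dominates p if q is <= p in both objectives
    (and q is a different point).
    """
    frontier = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1]
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier


# Hypothetical (label error %, training hours) for candidate layer sizings.
candidates = [(26.2, 5.0), (21.0, 9.0), (19.8, 14.0), (22.5, 16.0)]
print(sorted(pareto_frontier(candidates)))
# (22.5, 16.0) is dropped: (21.0, 9.0) is better on both objectives.
```

In the paper's setting, the error and time values would come from the two regression models fitted to the fractional-factorial measurements rather than from direct measurements of every configuration.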