AI Bibliography

Musaev, M., & Rakhimov, M. (2020). Accelerated training for convolutional neural networks. Paper presented at the 2020 International Conference on Information Science and Communications Technologies (ICISCT).
Resource type: Proceedings Article
BibTeX citation key: Musaev2020
Categories: Artificial Intelligence, Cognitive Science, Complexity Science, Computer Science, Data Sciences, Decision Theory, General
Subcategories: Autonomous systems, Big data, Decision making, Deep learning, Machine intelligence, Machine learning, Machine recognition, Neural nets, Neurosymbolic, Q-learning
Creators: Musaev, Rakhimov
Collection: 2020 International Conference on Information Science and Communications Technologies (ICISCT)
Abstract
Handwritten character recognition (HCR) is an ongoing research field in artificial intelligence. The recognition of handwritten characters is based on pattern recognition and image processing. One of the most common machine learning methods for solving this problem is the convolutional neural network (CNN); a multilayer neural network is often used to implement such tasks. CNNs require as much data as possible to ensure high accuracy, and parallel processing can save time when training the network. Training time can be reduced by improving training operations using parallel computing technology and multi-core platforms. In this paper, an exemplary parallelization of CNN training using OpenMP technology and its libraries is implemented. The English alphabet was chosen for the handwritten character recognition experiment, and a CNN was used for its better accuracy. The data set contains 26 characters; the image size of each character ranges from 16×16 to 256×256 pixels, and these pixels are taken as features for training the CNN. We analyzed how the speedup of CNN training depends on image size. Three multi-core processors with different specifications were chosen for the hardware implementation. The results show that the proposed parallel approach using OpenMP technology gives good acceleration, which reduces training time.
  
WIKINDX 6.7.0 | Total resources: 1621 | Username: -- | Bibliography: WIKINDX Master Bibliography | Style: American Psychological Association (APA)