
Distributed algorithm to train neural networks using the Map Reduce paradigm

Cristian Mihai Barca and Claudiu Dan Barca
Additional contact information
Cristian Mihai Barca: Electronics, Communications and Computers, University of Pitesti, Romania
Claudiu Dan Barca: The Romanian-American University, Bucharest, Romania

Database Systems Journal, 2017, vol. 8, issue 1, 3-11

Abstract: With the rapid development of powerful computer systems over the past decade, parallel and distributed processing has become a significant resource for fast neural network training, even for real-time processing. Various parallel computing methods have been proposed in recent years to improve system performance. The two main approaches are to distribute the patterns used for training (training-set-level parallelism) or to distribute the computation performed by the neural network (neural-network-level parallelism). In the present research work we have focused on the first method.
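As an illustration only (the paper itself describes a Hadoop implementation, which is not reproduced here), training-set-level parallelism can be sketched as a map step that computes a partial gradient on each partition of the training patterns and a reduce step that combines them before a single weight update. All function names and the toy linear model below are hypothetical, not taken from the paper:

```python
# Sketch of training-set-level (data) parallelism in a map-reduce style:
# each "mapper" computes the gradient of a linear model on its own
# partition of the training set; the "reducer" sums the partial
# gradients; the driver then applies one gradient-descent update.

def map_gradient(weights, partition):
    """Mapper: squared-error gradient over one partition of (x, y) pairs."""
    grad = [0.0] * len(weights)
    for x, y in partition:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    return grad

def reduce_gradients(partials):
    """Reducer: element-wise sum of the partial gradients."""
    total = [0.0] * len(partials[0])
    for g in partials:
        for i, v in enumerate(g):
            total[i] += v
    return total

def train_step(weights, partitions, lr=0.1):
    """One synchronous map-reduce training iteration."""
    partials = [map_gradient(weights, p) for p in partitions]  # map phase
    grad = reduce_gradients(partials)                          # reduce phase
    n = sum(len(p) for p in partitions)
    return [w - lr * g / n for w, g in zip(weights, grad)]

# Toy usage: learn y = 2*x from patterns split across two partitions.
parts = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0)]]
w = [0.0]
for _ in range(200):
    w = train_step(w, parts)
```

Because the per-partition gradients are independent, the map phase parallelizes trivially across workers; only the reduce step requires communication, which is the property the training-set-level approach exploits.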

Keywords: Artificial Neural Networks; Machine Learning; Map-Reduce Hadoop; Distributed System
Date: 2017


Database Systems Journal is currently edited by Ion Lungu

More articles in Database Systems Journal from the Academy of Economic Studies, Bucharest, Romania.
Series data maintained by Adela Bara.

Page updated 2017-09-29
Handle: RePEc:aes:dbjour:v:8:y:2017:i:1:p:3-11