PERFORMANCE REFINEMENT OF CONVOLUTIONAL NEURAL NETWORK ARCHITECTURES FOR SOLVING BIG DATA PROBLEMS

Authors

  • Saud Aljaloud

DOI:

https://doi.org/10.25130/tjps.v28i1.1270

Keywords:

MNIST database, CIFAR10, GPU, Big Data, Deep learning, CNN, Theano and TensorFlow.

Abstract

Comparing neural network frameworks on the MNIST database is a well-established research approach: the database is the subject of active research and has produced excellent results. However, as will be covered in more detail later, neural networks require a sizeable amount of sample data in order to be trained and deliver accurate results, which is why Big Data practitioners frequently encounter problems of this kind. This study therefore compares two of the most popular neural network frameworks, Theano and TensorFlow, on how well they perform on a given problem: the recognition of handwritten digits from zero to nine in the MNIST database. As the project description implied, this study does not present a standard comparison; instead, it presents a comparison of these networks' performance in a Big Data environment using distributed computing. To extend the scope of the comparison beyond MNIST, the Fashion MNIST (FMNIST) database and CIFAR10 were also tested using the same neural network design. Thanks to the higher-level Keras library, the same code with the same structure could be run on either backend (in our case, Theano or TensorFlow). The high computational cost of training CNNs on large data sets has driven a surge in open-source parallel GPU implementation research and development, yet there are few studies assessing the performance traits of those implementations. In this study, these implementations were compared carefully across a wide range of parameter configurations; potential performance bottlenecks were investigated, and a number of areas that could use more fine-tuning were identified.
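The abstract's key methodological point is that one Keras model definition can be executed unchanged on either backend. A minimal sketch of this idea follows; the architecture shown is illustrative, not the paper's exact network, and the backend selection via the KERAS_BACKEND environment variable is the standard Keras mechanism, assumed here for demonstration.

```python
# Minimal sketch (not the paper's exact architecture): one Keras CNN
# definition that runs on whichever backend KERAS_BACKEND selects.
import os
os.environ.setdefault("KERAS_BACKEND", "tensorflow")  # or "theano" in classic Keras

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def build_cnn(input_shape=(28, 28, 1), num_classes=10):
    """Small CNN for MNIST/Fashion-MNIST (28x28x1); pass
    input_shape=(32, 32, 3) for CIFAR10 without other changes."""
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dense(num_classes, activation="softmax"),  # one output per digit class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
```

Because the model is expressed only in Keras terms, swapping the backend changes the execution engine (and hence the performance characteristics being compared) without touching the network code.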

Published

2023-02-20

How to Cite

Saud Aljaloud. (2023). PERFORMANCE REFINEMENT OF CONVOLUTIONAL NEURAL NETWORK ARCHITECTURES FOR SOLVING BIG DATA PROBLEMS. Tikrit Journal of Pure Science, 28(1), 89–95. https://doi.org/10.25130/tjps.v28i1.1270
