EconPapers    

In-Memory Versus Disk-Based Computing with Random Forest for Stock Analysis: A Comparative Study

Chitra Joshi, Chitrakant Banchorr, Omkaresh Kulkarni and Kirti Wanjale

Acta Informatica Pragensia, 2025, vol. 2025, issue 3, 460-473

Abstract: Background: The advancement of big data analytics calls for careful selection of processing frameworks to optimize machine learning effectiveness. Choosing the appropriate framework can significantly influence the speed and accuracy of data analysis, ultimately leading to more informed decision making. In adapting to this changing landscape, businesses should focus on factors such as how well a system scales, how easily it can be used and how effectively it integrates with their existing tools. The effectiveness of these frameworks plays a crucial role in determining data processing speed, model training efficiency and predictive accuracy. As data become increasingly large, diverse and fast-moving, conventional processing systems often fall short of the performance required for modern analytics.

Objective: This research thoroughly assesses the performance of two prominent big data processing frameworks, Apache Spark (in-memory computing) and MapReduce (disk-based computing), with a focus on applying random forest algorithms to predict stock prices. The primary objective is to compare their effectiveness in handling large-scale financial datasets, focusing on key aspects such as predictive accuracy, processing speed and scalability.

Methods: The investigation uses the MapReduce methodology and Apache Spark independently to analyse a substantial stock price dataset and to train a random forest regressor. Mean squared error (MSE) and root mean square error (RMSE) were employed as the primary performance indicators of the models, while mean absolute error (MAE) and the R-squared value were used to evaluate the goodness of fit of the models.

Results: The RMSE, MAE and MSE obtained for the Spark-based implementation were lower than for the MapReduce-based implementation, these lower values indicating higher prediction accuracy. Spark also substantially reduced model training and execution time thanks to its optimized in-memory processing. In contrast, the MapReduce approach showed higher latency and lower accuracy, reflecting its disk-based constraints and reduced efficiency for iterative machine learning tasks.

Conclusion: The findings support Spark as the better option for complex machine learning tasks such as stock price prediction, given its strength in handling large volumes of data. MapReduce remains a reliable framework, but it lacks the processing speed and lightweight execution needed for rapid, iterative analytics. The outcomes of this study help data scientists and financial analysts choose the most appropriate framework for big data machine learning applications.
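As an illustration of the goodness-of-fit measures named in the Methods section, the following minimal Python sketch computes MSE, RMSE, MAE and R-squared for a small set of predictions. The function and the price values are invented for illustration and are not taken from the study's code or dataset.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute the four evaluation measures used in the study:
    MSE, RMSE, MAE and the R-squared value."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n          # mean squared error
    rmse = math.sqrt(mse)                          # root mean square error
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot                    # coefficient of determination
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Hypothetical closing prices, for illustration only.
actual = [101.0, 102.5, 100.8, 103.2, 104.0]
predicted = [100.5, 102.0, 101.5, 103.0, 104.6]
print(regression_metrics(actual, predicted))
```

Lower MSE, RMSE and MAE and an R-squared value closer to 1 indicate a better fit, which is the basis on which the study compares the Spark-based and MapReduce-based implementations.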

Date: 2025

Downloads: (external link)
http://aip.vse.cz/doi/10.18267/j.aip.275.html (text/html)
http://aip.vse.cz/doi/10.18267/j.aip.275.pdf (application/pdf)
free of charge



Persistent link: https://EconPapers.repec.org/RePEc:prg:jnlaip:v:2025:y:2025:i:3:id:275:p:460-473

Ordering information: This journal article can be ordered from
Redakce Acta Informatica Pragensia, Katedra systémové analýzy, Vysoká škola ekonomická v Praze, nám. W. Churchilla 4, 130 67 Praha 3
http://aip.vse.cz

DOI: 10.18267/j.aip.275


Acta Informatica Pragensia is currently edited by Editorial Office

More articles in Acta Informatica Pragensia from Prague University of Economics and Business Contact information at EDIRC.
Bibliographic data for series maintained by Stanislav Vojir ().

Page updated 2025-08-19
Handle: RePEc:prg:jnlaip:v:2025:y:2025:i:3:id:275:p:460-473