Adapting and Optimizing the Systemic Model of Banking Originated Losses (SYMBOL) Tool to the Multi-core Architecture
Ronal Muresano and
Andrea Pagano
Additional contact information
Ronal Muresano: European Commission, Joint Research Centre (JRC), Institute for the Protection and the Security of the Citizen (IPSC), Financial and Economic Analysis Unit
Andrea Pagano: European Commission, Joint Research Centre (JRC), Institute for the Protection and the Security of the Citizen (IPSC), Financial and Economic Analysis Unit
Computational Economics, 2016, vol. 48, issue 2, No 4, 253-280
Abstract:
Currently, multi-core systems are the predominant architecture in the computational world. This opens new possibilities to speed up statistical and numerical simulations, but it also introduces many challenges to deal with. To improve performance metrics, different key points must be considered, such as core communications, data locality, dependencies, memory size, etc. This paper describes a series of optimization steps applied to the SYMBOL model to enhance its performance and scalability. SYMBOL is a micro-founded statistical tool which analyses the consequences of bank failures, taking into account the available safety nets, such as deposit guarantee schemes or resolution funds. However, in its original version, the tool has some computational weaknesses, because its execution time grows considerably when it is run with large input data (e.g. large banking systems) or when the value of the stopping criterion, i.e. the number of default scenarios to be considered, is scaled up. Our intention is to develop a tool (extendable to other models with similar characteristics) where a set of serial strategies (e.g. deleting redundancies, loop unrolling, etc.) and parallel strategies (e.g. OpenMP and GPU programming) come together to obtain shorter execution times and scalability. The tool uses automatic configuration to make the best use of the available resources on the basis of the characteristics of the input datasets. Experimental results, obtained by varying the size of the input dataset and the stopping criterion, show the considerable improvement one can obtain by using the new tool, with execution time reductions of up to 96 % with respect to the original serial version.
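The abstract's mention of OpenMP-based parallel strategies for the default-scenario loop can be illustrated with a minimal sketch. The balance-sheet data, the loss model, and all identifiers below are hypothetical placeholders, not taken from the actual SYMBOL implementation.

/* Hypothetical sketch: OpenMP parallelisation of a Monte Carlo
 * default-scenario loop, in the spirit of the optimisation steps
 * described in the abstract. Data and loss model are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N_BANKS     100      /* size of the (synthetic) banking system    */
#define N_SCENARIOS 100000   /* stopping criterion: scenarios to simulate */

int main(void)
{
    double assets[N_BANKS], capital[N_BANKS];
    double total_excess_loss = 0.0;

    /* Synthetic balance-sheet data standing in for the real input set. */
    for (int b = 0; b < N_BANKS; b++) {
        assets[b]  = 1000.0 + 10.0 * b;
        capital[b] = 0.08 * assets[b];
    }

    /* Each thread simulates a share of the default scenarios with its
     * own random stream; excess losses are combined by reduction. */
    #pragma omp parallel reduction(+:total_excess_loss)
    {
        unsigned int seed = 12345u + (unsigned int)omp_get_thread_num();

        #pragma omp for schedule(static)
        for (int s = 0; s < N_SCENARIOS; s++) {
            for (int b = 0; b < N_BANKS; b++) {
                /* Crude uniform loss shock per bank (placeholder for a
                 * Basel-style loss distribution). */
                double shock = (double)rand_r(&seed) / RAND_MAX;
                double loss  = shock * 0.15 * assets[b];
                if (loss > capital[b])        /* bank defaults */
                    total_excess_loss += loss - capital[b];
            }
        }
    }

    printf("Average excess loss per scenario: %.2f\n",
           total_excess_loss / N_SCENARIOS);
    return 0;
}

The reduction clause and per-thread random seeds keep scenario draws independent and avoid shared-state contention, which is in line with the data-locality and dependency concerns the abstract lists.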
Keywords: Optimization algorithm; Parallel techniques; OpenMP; GPU; SYMBOL (search for similar items in EconPapers)
Date: 2016
Citations: View citations in EconPapers (3)
Downloads: http://link.springer.com/10.1007/s10614-015-9509-4 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:kap:compec:v:48:y:2016:i:2:d:10.1007_s10614-015-9509-4
Ordering information: This journal article can be ordered from
http://www.springer. ... ry/journal/10614/PS2
DOI: 10.1007/s10614-015-9509-4
Computational Economics is currently edited by Hans Amman
More articles in Computational Economics from Springer, Society for Computational Economics. Contact information at EDIRC.
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.