

M. A. Kaboudan (Penn State Lehigh Valley)

No 331, Computing in Economics and Finance 2000 from Society for Computational Economics

Abstract: Genetic programming (GP) is a random search technique that emerged in the late 1980s and early 1990s; a formal description of the method was introduced in Koza (1992). GP applies to many optimization areas, one of which is modeling time series and using those models in forecasting. Unlike other modeling techniques, GP is a computer program that 'searches' for a specification that replicates the dynamic behavior of an observed series. To use GP, one provides operators (such as +, -, *, /, exp, log, sin, cos, etc.) and identifies the variables thought best able to reproduce the dependent variable's dynamics. The program then randomly assembles equations with different specifications by combining some of the provided variables with operators, and identifies the specification with the minimum sum of squared errors (SSE). This process is an iterative evolution of successive generations, each consisting of thousands of assembled equations, in which only the fittest within a generation survive to breed better equations, again through random combination, until the best one is found. Clearly from this simple description, the method is based on heuristics and has no theoretical foundation. However, the resulting final equations seem to produce reasonably accurate forecasts that compare favorably with those produced by humanly conceived specifications. With results this encouraging, it is important to investigate GP as a forecasting methodology. This paper evaluates the forecasts that genetically evolved models (GEMs) produce for experimental data as well as real-world time series.

The paper is organized into four sections. Section 1 contains an overview of GEMs: the reader will find a lucid explanation of how models are evolved using genetic methodology, as well as the features found to characterize GEMs as a modeling technique. Section 2 contains descriptions of the simulated and real-world data and their respective fittest identified GEMs.
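The evolutionary loop just described — randomly assemble equations from operators and lagged variables, score each by SSE, and let only the fittest breed — can be sketched as a toy symbolic-regression GP. This is a minimal illustration under stated assumptions, not the paper's implementation: the operator set, the two lagged-value terminals, mutation-only breeding (no crossover), and the noisy AR(2) toy series are all choices made for the sketch.

```python
import math
import random

random.seed(0)

# Operator sets supplied to the GP run (illustrative choices, not the paper's).
BINARY = {"+": lambda a, b: a + b,
          "-": lambda a, b: a - b,
          "*": lambda a, b: a * b}
UNARY = {"sin": math.sin, "cos": math.cos}

def random_tree(depth):
    """Randomly assemble an equation tree over the lagged inputs x1, x2."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x1", "x2", round(random.uniform(-1, 1), 3)])
    if random.random() < 0.7:
        op = random.choice(sorted(BINARY))
        return (op, random_tree(depth - 1), random_tree(depth - 1))
    op = random.choice(sorted(UNARY))
    return (op, random_tree(depth - 1))

def evaluate(tree, x1, x2):
    """Evaluate an equation tree at one observation."""
    if tree == "x1":
        return x1
    if tree == "x2":
        return x2
    if isinstance(tree, float):
        return tree
    if tree[0] in BINARY:
        return BINARY[tree[0]](evaluate(tree[1], x1, x2),
                               evaluate(tree[2], x1, x2))
    return UNARY[tree[0]](evaluate(tree[1], x1, x2))

def sse(tree, series):
    """Fitness: sum of squared one-step errors, predicting y[t] from two lags."""
    total = 0.0
    for t in range(2, len(series)):
        try:
            err = series[t] - evaluate(tree, series[t - 1], series[t - 2])
            total += err * err
        except (OverflowError, ValueError):
            return float("inf")
    return float("inf") if math.isnan(total) else total

def mutate(tree):
    """Breed a new equation by swapping in a random subtree."""
    if isinstance(tree, (str, float)) or random.random() < 0.3:
        return random_tree(2)
    if len(tree) == 3:
        if random.random() < 0.5:
            return (tree[0], mutate(tree[1]), tree[2])
        return (tree[0], tree[1], mutate(tree[2]))
    return (tree[0], mutate(tree[1]))

# Toy observed series: a noisy AR(2) process standing in for real data.
series = [0.5, 0.3]
for _ in range(48):
    series.append(0.6 * series[-1] - 0.2 * series[-2] + random.gauss(0, 0.05))

# Evolution: the fittest equations of each generation survive and breed.
population = ["x1", "x2"] + [random_tree(3) for _ in range(198)]
for generation in range(20):
    population.sort(key=lambda tr: sse(tr, series))
    survivors = population[:20]       # elitism: the fittest carry over intact
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(180)]

best = min(population, key=lambda tr: sse(tr, series))
print("best SSE:", sse(best, series))
```

A full GP system would also breed by crossover between surviving trees and enforce depth limits, but elitist selection on SSE, as here, is the core of the evolutionary loop the abstract describes.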
The MSE and a new alpha-statistic are presented to compare the models' performance. The simulated data were chosen to represent processes of different behavioral complexity, including linear, linear-stochastic, nonlinear, nonlinear-chaotic, and nonlinear-stochastic. The real-world data consist of two time series popular in analytical statistics: the Canadian lynx data and sunspot numbers. Predictions of the historic values of each series (those used in generating the fittest model) are also presented there. Forecasts and their evaluations are in Section 3: for each series, single- and multi-step forecasts are evaluated according to the mean squared error, normalized mean squared error, and alpha-statistic. A few concluding remarks are in the discussion in Section 4.
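Of the three accuracy criteria named for Section 3, the first two are standard and can be sketched as below; the alpha-statistic is new to this paper and is not reproduced here. The NMSE shown normalizes the MSE by the variance of the actual series — a common convention, assumed here — so a forecast pinned at the historical mean scores exactly 1.0 and values below 1.0 beat that naive benchmark.

```python
def mse(actual, forecast):
    """Mean squared error between observed values and forecasts."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def nmse(actual, forecast):
    """MSE normalized by the variance of the actuals (assumed convention):
    values below 1.0 beat a constant forecast at the series mean."""
    mean_a = sum(actual) / len(actual)
    variance = sum((a - mean_a) ** 2 for a in actual) / len(actual)
    return mse(actual, forecast) / variance

# A forecast stuck at the series mean (2.5) scores exactly 1.0 under NMSE.
actual = [1.0, 2.0, 3.0, 4.0]
mean_forecast = [2.5, 2.5, 2.5, 2.5]
print(mse(actual, mean_forecast))   # 1.25
print(nmse(actual, mean_forecast))  # 1.0
```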

Date: 2000-07-05
Handle: RePEc:sce:scecf0:331