A comparison of generalised maximum entropy and ordinary least square
Manije Sanei Tabass and
G.R. Mohtashami Borzadaran
International Journal of Information and Decision Sciences, 2018, vol. 10, issue 4, 297-310
Abstract:
The generalised maximum entropy (GME) estimation method is based on the classic maximum entropy approach of Jaynes (1957). It can estimate the parameters of a regression model without imposing any constraints on the probability distribution of the errors, and it remains robust even for ill-posed problems. In this paper, we simulate two sets of data from a regression model with different distributions for the disturbance term: standard normal and Cauchy, respectively. For these datasets, the regression coefficients are obtained by the GME and OLS methods, and the two techniques are compared for several sample sizes. Moreover, we use prior information on the parameters to obtain GME estimators. The estimation results of GME when the disturbances are non-normally distributed are discussed here.
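The Monte Carlo design described above can be sketched as follows. This is a minimal illustration of the simulation setup only (OLS under normal versus Cauchy disturbances); it does not implement the authors' GME estimator, which additionally requires specifying support points for the parameters and errors. All numeric choices (sample size, replication count, true coefficients) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    # ordinary least squares fit via a least-squares solver
    return np.linalg.lstsq(X, y, rcond=None)[0]

def simulate(dist, n=50, reps=500, beta=(1.0, 2.0)):
    """Monte Carlo: draw y = b0 + b1*x + e with the given disturbance
    distribution and return the OLS slope estimate from each replication."""
    b0, b1 = beta
    slopes = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0, 10, n)
        e = rng.standard_normal(n) if dist == "normal" else rng.standard_cauchy(n)
        y = b0 + b1 * x + e
        X = np.column_stack([np.ones(n), x])
        slopes[r] = ols(X, y)[1]
    return slopes

normal_slopes = simulate("normal")
cauchy_slopes = simulate("cauchy")

# Under normal errors OLS is efficient; under Cauchy errors (no finite
# variance) its estimates scatter far more widely -- the heavy-tailed
# setting in which the paper argues GME is more robust.
print("spread (normal):", np.std(normal_slopes))
print("spread (Cauchy):", np.std(cauchy_slopes))
```

The interesting comparison is the spread of the slope estimates across replications: with standard normal disturbances the OLS slopes concentrate tightly around the true value, while with Cauchy disturbances the sampling distribution is far more dispersed.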
Keywords: regression model; generalised maximum entropy; GME; Monte Carlo experiment; ordinary least square; OLS.
Date: 2018
Downloads: (external link)
http://www.inderscience.com/link.php?id=95495 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:ids:ijidsc:v:10:y:2018:i:4:p:297-310
More articles in International Journal of Information and Decision Sciences from Inderscience Enterprises Ltd
Bibliographic data for series maintained by Sarah Parker.