On Bayesian predictive density estimation for skew-normal distributions

Othmane Kortbi
Additional contact information
Othmane Kortbi: UAE University

Metrika: International Journal for Theoretical and Applied Statistics, 2025, vol. 88, issue 6, No 1, 735-748

Abstract: This paper is concerned with prediction for skew-normal models, and more specifically with Bayes estimation of a predictive density for $$Y \mid \mu \sim {\mathcal{SN}}_p(\mu, v_y I_p, \lambda)$$ under Kullback–Leibler loss, based on $$X \mid \mu \sim {\mathcal{SN}}_p(\mu, v_x I_p, \lambda)$$ with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density $$\hat{p}_{\pi_0}$$, which is the Bayes predictive density with respect to the noninformative prior $$\pi_0 \equiv 1$$. George et al. (Ann Stat 34:78–91, 2006) used the parallel between point estimation and estimation of predictive densities to connect the risk differences of the two problems. Developing a similar connection allows us to determine sufficient conditions for dominance over $$\hat{p}_{\pi_0}$$ and for minimaxity. First, we show that $$\hat{p}_{\pi_0}$$ is a minimax predictive density under Kullback–Leibler risk for the skew-normal model. Then, for dimensions $$p \ge 3$$, we obtain classes of Bayesian minimax densities that improve on $$\hat{p}_{\pi_0}$$ under Kullback–Leibler loss for the subclass of skew-normal distributions with small values of the skewness parameter. Moreover, for dimensions $$p \ge 4$$, we obtain classes of Bayesian minimax densities that improve on $$\hat{p}_{\pi_0}$$ under Kullback–Leibler loss for the whole class of skew-normal distributions. Examples of proper priors, including generalized Student priors, generating Bayesian minimax densities that improve on $$\hat{p}_{\pi_0}$$ under Kullback–Leibler loss are constructed for $$p \ge 5$$. These findings extend the results of Liang and Barron (IEEE Trans Inf Theory 50(11):2708–2726, 2004), George et al. (Ann Stat 34:78–91, 2006) and Komaki (Biometrika 88(3):859–864, 2001) to a subclass of asymmetric distributions.
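For readers less familiar with the setup, the two objects the abstract works with admit the following standard formulation (the explicit integrals below are the usual predictive-density-estimation definitions, not spelled out in the abstract): the Kullback–Leibler loss of a predictive density $$\hat{p}(\cdot \mid x)$$ at $$\mu$$ is $$L(\mu, \hat{p}) = \int p(y \mid \mu) \log \frac{p(y \mid \mu)}{\hat{p}(y \mid x)}\, dy,$$ and the Bayes predictive density associated with a prior $$\pi$$ is $$\hat{p}_{\pi}(y \mid x) = \frac{\int p(y \mid \mu)\, p(x \mid \mu)\, \pi(\mu)\, d\mu}{\int p(x \mid \mu)\, \pi(\mu)\, d\mu},$$ with $$\hat{p}_{\pi_0}$$ recovered by taking $$\pi_0 \equiv 1$$.

A minimal numerical sketch of $$\hat{p}_{\pi_0}$$ in the univariate case (p = 1), assuming the skew-normal density is the Azzalini form implemented by scipy.stats.skewnorm, is given below; the parameter values and the observed x are purely illustrative, and the paper's dominance results concern p ≥ 3, so this only fixes ideas about the objects involved, not the paper's method.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Illustrative (assumed) values: known scales v_x, v_y and skewness lam.
v_x, v_y, lam = 1.0, 2.0, 0.5
x_obs = 1.3  # a single observed X

def lik_x(mu):
    # p(x_obs | mu): skew-normal density with location mu, scale sqrt(v_x), shape lam
    return stats.skewnorm.pdf(x_obs, a=lam, loc=mu, scale=np.sqrt(v_x))

def p_y(y, mu):
    # p(y | mu): density of the future observation Y at location mu
    return stats.skewnorm.pdf(y, a=lam, loc=mu, scale=np.sqrt(v_y))

def mre_predictive_density(y):
    # Bayes predictive density under the flat prior pi_0 = 1:
    # numerator and denominator are one-dimensional integrals over mu
    num, _ = quad(lambda mu: p_y(y, mu) * lik_x(mu), -np.inf, np.inf)
    den, _ = quad(lik_x, -np.inf, np.inf)
    return num / den

print(mre_predictive_density(1.0))  # value of the predictive density at y = 1.0

A proper prior (for instance the generalized Student priors mentioned above) would enter this computation only as an extra weight pi(mu) inside both integrals.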

Keywords: Skew-normal distributions; Predictive densities; Minimax estimators; Admissibility; Kullback–Leibler loss; Bayes estimators
MSC codes: Primary 62C20; 62C15; 62C10; Secondary 62F10; 62H12
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s00184-024-00946-4 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:metrik:v:88:y:2025:i:6:d:10.1007_s00184-024-00946-4

Ordering information: This journal article can be ordered from
http://www.springer.com/statistics/journal/184/PS2

DOI: 10.1007/s00184-024-00946-4

Metrika: International Journal for Theoretical and Applied Statistics is currently edited by U. Kamps and Norbert Henze

More articles in Metrika: International Journal for Theoretical and Applied Statistics from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:spr:metrik:v:88:y:2025:i:6:d:10.1007_s00184-024-00946-4