Novelty detection based on learning entropy
Gejza Dohnal and Ivo Bukovský
Applied Stochastic Models in Business and Industry, 2020, vol. 36, issue 1, 178-183
Abstract:
The Approximate Individual Sample Learning Entropy is based on incremental learning of a predictor x̃(k+h) = φ(x(k), w), where x(k) is an input vector of a given size at time k, w is a vector of weights (adaptive parameters), and h is a prediction horizon. The basic assumption is that, after the underlying process x changes its behavior, the incrementally learning system adapts the weights w to improve the predictor x̃. Our goal is to detect a change in the behavior of the weight-increment process. The main idea of this paper rests on the fact that the weight increments Δw(k), where Δw(k) = w(k + 1) − w(k), form a weakly stationary process until a change occurs. Once the underlying process x(k) exhibits novel behavior, the process Δw(k) changes its characteristics (e.g., its mean or variance). We suggest using convenient characteristics of Δw(k) in a multivariate detection scheme (e.g., Hotelling's T² control chart).
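The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a normalized-LMS linear predictor as the incrementally learning system, a prediction horizon h = 1, and a synthetic signal whose dynamics change midway; the function names and all parameter values are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_weight_increments(x, order=3, mu=0.1):
    # Incrementally learn a linear predictor x~(k) = w . [x(k-order..k-1)]
    # with normalized LMS and record each weight increment Dw(k) = w(k+1) - w(k).
    w = np.zeros(order)
    increments = []
    for k in range(order, len(x)):
        window = x[k - order:k]
        err = x[k] - w @ window                       # prediction error
        dw = mu * err * window / (window @ window + 1e-8)
        w += dw
        increments.append(dw.copy())
    return np.array(increments)                       # rows are Dw(k)

def hotelling_t2(dW, baseline):
    # Hotelling's T^2 of each increment vector against the mean and
    # covariance estimated from a baseline (in-control) segment.
    mean = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False) + 1e-10 * np.eye(baseline.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = dW - mean
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Synthetic process: weakly stationary, then a behavior change at k = 400.
n = 800
x = np.sin(0.1 * np.arange(n)) + 0.05 * rng.standard_normal(n)
x[400:] += np.sin(0.5 * np.arange(400))               # the "novelty"

dW = lms_weight_increments(x)
t2 = hotelling_t2(dW, dW[:300])                       # baseline: pre-change increments
```

Under these assumptions, the T² statistic stays near its in-control level while Δw(k) is weakly stationary and rises after the change point, which is the signal the multivariate control chart monitors.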
Date: 2020
https://doi.org/10.1002/asmb.2456
Persistent link: https://EconPapers.repec.org/RePEc:wly:apsmbi:v:36:y:2020:i:1:p:178-183
More articles in Applied Stochastic Models in Business and Industry from John Wiley & Sons