
Online Learning

Vladimir Shikhman and David Müller
Additional contact information
Vladimir Shikhman: Chemnitz University of Technology
David Müller: Chemnitz University of Technology

Chapter 2 in Mathematical Foundations of Big Data Analytics, 2021, pp 21-39 from Springer

Abstract: In a world where automatic data collection becomes ubiquitous, we have to deal more and more often with data flows rather than with data sets. Whether we consider pricing of goods, portfolio selection, or expert advice, a common feature emerges: huge amounts of dynamic data need to be understood and quickly processed. To cope with this issue, the paradigm of online learning has been introduced. According to it, data becomes available in a sequential order and is used to update our decision at each iteration step. This is in sharp contrast with batch learning, which generates the best decision by learning on the entire training data set at once. For those applications where the available amount of data truly explodes, it has become convenient to apply online learning techniques. Crucial for online learning is the question of how to measure the quality of the implemented decisions. For that, the notion of regret, known from decision theory, has been introduced. Loosely speaking, regret compares the losses caused by active decision strategies over time with the losses caused by a passive decision strategy in hindsight. Surprisingly enough, online learning techniques allow us to drive the average regret to zero as time progresses. In this chapter we explain the mathematics behind this. First, we introduce some auxiliary notions from convex analysis, namely those of dual norm, prox-function, and Bregman divergence. Second, we present an online learning technique called online mirror descent. Under convexity assumptions, an optimal rate of convergence for the corresponding regret is derived. We elaborate on versions of the online mirror descent algorithm in the entropic and Euclidean setups. In particular, the entropic setup enables online portfolio selection and prediction with expert advice; the Euclidean setup leads to online gradient descent.
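The regret notion from the abstract can be illustrated with a minimal sketch of online gradient descent (the Euclidean case of online mirror descent that the chapter covers). This is not the authors' code; the quadratic losses f_t(x) = ½(x − z_t)² and the synthetic data stream z_t are illustrative assumptions. The average regret against the best fixed decision in hindsight shrinks toward zero as T grows:

```python
import numpy as np

# Illustrative sketch: online gradient descent on a stream of quadratic
# losses f_t(x) = 0.5 * (x - z_t)^2. The targets z_t are made-up data,
# revealed one step at a time as in the online learning setting.
rng = np.random.default_rng(0)
T = 10_000
z = rng.uniform(-1.0, 1.0, size=T)

x = 0.0                  # current decision
losses = np.empty(T)
for t in range(T):
    eta = 1.0 / np.sqrt(t + 1)        # step size ~ 1/sqrt(t) gives O(sqrt(T)) regret
    losses[t] = 0.5 * (x - z[t])**2   # suffer the loss, then observe the gradient
    grad = x - z[t]
    x = x - eta * grad                # online gradient step

# The best fixed decision in hindsight for these losses is the mean of z.
x_star = z.mean()
best_losses = 0.5 * (x_star - z)**2
avg_regret = (losses.sum() - best_losses.sum()) / T
print(f"average regret after T={T}: {avg_regret:.4f}")
```

Rerunning with a larger T drives the printed average regret closer to zero, which is the "no-regret" property the abstract refers to.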

Date: 2021

There are no downloads for this item, see the EconPapers FAQ for hints about obtaining it.



Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-662-62521-7_2

Ordering information: This item can be ordered from
http://www.springer.com/9783662625217

DOI: 10.1007/978-3-662-62521-7_2


More chapters in Springer Books from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-03-23
Handle: RePEc:spr:sprchp:978-3-662-62521-7_2