Three Big Data Tools for a Data Scientist’s Toolbox
ULB Institutional Repository, Université Libre de Bruxelles
Sometimes data is generated unboundedly and at such a fast pace that it is no longer possible to store the complete data in a database. Developing techniques for handling and processing such streams of data is very challenging, as the streaming context imposes severe constraints on the computation: we are often unable to store the whole data stream, and making multiple passes over the data is no longer possible. As the stream never ends, we need to be able to continuously provide, upon request, up-to-date answers to analysis queries. Even problems that are trivial in an off-line context, such as "How many different items are there in my database?", become very hard in a streaming context. Nevertheless, in the past decades several clever algorithms have been developed to deal with streaming data. This paper covers several of these indispensable tools that should be present in every big data scientist's toolbox, including approximate frequency counting of frequent items, cardinality estimation of very large sets, and fast nearest neighbor search in huge data collections.
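To illustrate the flavor of one of the techniques the abstract names, approximate frequency counting, here is a minimal sketch of the classic Misra-Gries summary (one well-known algorithm for this problem; the paper itself may cover different or additional variants). It makes a single pass over the stream, keeps at most k-1 counters, and guarantees that any item occurring more than n/k times in a stream of length n survives among the candidates, with each reported count a lower bound on the true count.

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries summary using at most k-1 counters.

    Any item with true frequency > len(stream)/k is guaranteed to
    appear in the result; reported counts are lower bounds.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No free counter: decrement every counter and
            # drop the ones that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))  # -> {'a': 3}; true count of 'a' is 5
```

The summary uses O(k) memory regardless of stream length, which is exactly the kind of trade-off (bounded space, approximate answers) that the streaming constraints described above force on us.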
Published in: Lecture Notes in Business Information Processing (2018), vol. 324, pp. 112-133
Persistent link: https://EconPapers.repec.org/RePEc:ulb:ulbeco:2013/279376