
Data compression using word encoding with Huffman code

Chengwen Liu and Clement Yu

Journal of the American Society for Information Science, 1991, vol. 42, issue 9, 685-698

Abstract: A technique for compressing large databases is presented. The method replaces frequent variable-length byte strings (words or word fragments) in the database with minimum-redundancy codes, namely Huffman codes. An essential part of the technique is the construction of a dictionary that yields high compression ratios. A heuristic is used to count the frequencies of word fragments. A detailed analysis of our implementation is provided, showing how it achieves high compression ratios and efficient encoding and decoding under the constraint of a fixed amount of main memory. For each phase of the implementation, we explain why particular data structures or techniques are employed. Experimental results show that our compression scheme is very effective for compressing large databases of library records. © 1991 John Wiley & Sons, Inc.
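
The abstract describes word-level Huffman coding: build a dictionary of frequent words or word fragments, then replace each occurrence with a minimum-redundancy bit string. The sketch below is not the paper's implementation (which constructs its dictionary with a frequency-counting heuristic for word fragments under a fixed memory budget); it is a minimal Python illustration of the core idea, assuming a crude whitespace tokenizer in place of the paper's dictionary-construction heuristic, with all function names hypothetical.

import heapq
from collections import Counter
from itertools import count

def build_huffman_codes(freqs):
    # Build a prefix-free (Huffman) code table from symbol frequencies.
    # The counter 'tie' breaks frequency ties so the heap never has to
    # compare the dict payloads.
    tie = count()
    heap = [(f, next(tie), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:
        # Degenerate case: a single distinct symbol still needs one bit.
        _, _, codes = heap[0]
        return {sym: "0" for sym in codes}
    while len(heap) > 1:
        # Merge the two least-frequent subtrees; symbols in the left
        # subtree gain a leading "0", those in the right a leading "1".
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def compress(text):
    # Whitespace tokenization is a stand-in for the paper's word-fragment
    # heuristic: count token frequencies, then Huffman-code each token.
    tokens = text.split()
    codes = build_huffman_codes(Counter(tokens))
    bits = "".join(codes[t] for t in tokens)
    return bits, codes

if __name__ == "__main__":
    sample = "the cat sat on the mat and the cat slept"
    bits, codes = compress(sample)
    print(f"{len(bits)} bits vs {8 * len(sample)} bits uncompressed")

In practice the code table (or the dictionary together with canonical code lengths) must be stored alongside the compressed data so a decoder can invert the mapping; the paper's analysis of decoding efficiency under a fixed memory budget concerns exactly this trade-off.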

Date: 1991

Downloads: (external link)
https://doi.org/10.1002/(SICI)1097-4571(199110)42:93.0.CO;2-1



Persistent link: https://EconPapers.repec.org/RePEc:bla:jamest:v:42:y:1991:i:9:p:685-698

Ordering information: This journal article can be ordered from
https://doi.org/10.1002/(ISSN)1097-4571



 
Handle: RePEc:bla:jamest:v:42:y:1991:i:9:p:685-698