Shennong: A Python toolbox for audio speech features extraction
Mathieu Bernard,
Maxime Poli,
Julien Karadayi and
Emmanuel Dupoux
Additional contact information
Mathieu Bernard: EconomiX, Université Paris Nanterre (UPN), CNRS - Centre National de la Recherche Scientifique
Post-Print from HAL
Abstract:
We introduce Shennong, a Python toolbox and command-line utility for audio speech features extraction. It implements a wide range of well-established, state-of-the-art algorithms: spectro-temporal filters such as Mel-Frequency Cepstral Filterbank or Predictive Linear Filters, pre-trained neural networks, pitch estimators, speaker normalization methods, and post-processing algorithms. Shennong is an open-source, reliable, and extensible framework built on top of the popular Kaldi speech processing library. The Python implementation makes it easy to use for non-technical users and integrates with third-party speech modeling and machine learning tools from the Python ecosystem. This paper describes the Shennong software architecture, its core components, and its implemented algorithms. Three applications then illustrate its use. We first present a benchmark, on a phone discrimination task, of the speech features extraction algorithms available in Shennong. We then analyze the performance of a speaker normalization model as a function of the amount of speech used for training. Finally, we compare pitch estimation algorithms on speech under various noise conditions.
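To make the "spectro-temporal filters" mentioned above concrete, here is a minimal NumPy sketch of the classic MFCC pipeline (framing, Hamming windowing, power spectrum, mel filterbank, log compression, DCT). This illustrates the algorithm family only; it is not Shennong's or Kaldi's actual implementation, and the function names and parameter defaults are hypothetical choices for this sketch.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Triangular filters evenly spaced on the mel scale, 0 Hz to Nyquist."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):        # rising slope
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):       # falling slope
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sample_rate, n_mfcc=13, frame_len=0.025,
         frame_step=0.010, n_fft=512, n_filters=26):
    """Return an (n_frames, n_mfcc) array of cepstral coefficients."""
    flen, fstep = int(frame_len * sample_rate), int(frame_step * sample_rate)
    n_frames = 1 + max(0, (len(signal) - flen) // fstep)
    frames = np.stack([signal[i * fstep: i * fstep + flen]
                       for i in range(n_frames)])
    frames = frames * np.hamming(flen)
    # Power spectrum of each windowed frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Log mel-filterbank energies (epsilon avoids log(0))
    fb = mel_filterbank(n_filters, n_fft, sample_rate)
    log_energies = np.log(power @ fb.T + 1e-10)
    # DCT-II decorrelates the log energies into cepstral coefficients
    dct = np.cos(np.pi / n_filters
                 * (np.arange(n_filters) + 0.5)[None, :]
                 * np.arange(n_mfcc)[:, None])
    return log_energies @ dct.T
```

With the defaults above, one second of 16 kHz audio yields 98 frames of 13 coefficients each; a production toolbox such as Shennong adds many refinements (pre-emphasis, dithering, liftering, delta features) on top of this core recipe.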
Date: 2023
Published in Behavior Research Methods, 2023, ⟨10.3758/s13428-022-02029-6⟩
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04312370
DOI: 10.3758/s13428-022-02029-6