Persephone (beta version)

NOTE: This codebase is not actively maintained and development efforts are being placed elsewhere. If you’re interested in training a speech recognition model using ELAN files, consider using Elpis (https://github.com/CoEDL/elpis). If you’re interested in phonetic transcription using an existing multilingual speech recognition model, consider trying https://www.dictate.app/.

Persephone (/pərˈsɛfəni/) is an automatic phoneme transcription tool. Traditional speech recognition tools require a large pronunciation lexicon (describing how words are pronounced) and large amounts of training data so that the system can learn to output orthographic transcriptions. In contrast, Persephone is designed for situations where training data is limited, perhaps as little as an hour of transcribed speech. Such data limitations are common in the documentation of low-resource languages. It is possible to use such small amounts of data to train a model that can aid transcription, yet such technology has not been widely adopted.

The speech recognition tool presented here is named after the goddess who was abducted by Hades and must spend one half of each year in the Underworld. Which of linguistics or computer science is Hell, and which the joyful world of spring and light? For each it’s the other, of course. — Alexis Michaud

The goal of Persephone is to make state-of-the-art phonemic transcription accessible to people involved in language documentation. Creating an easy-to-use user interface is central to this. The user interface and APIs are a work in progress, and currently Persephone must be run from the command line.

The tool is implemented in Python/TensorFlow with extensibility in mind. Currently just one model is implemented: a bidirectional long short-term memory network (LSTM) trained with the connectionist temporal classification (CTC) loss function.
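For readers who want a concrete picture of that architecture, here is a minimal TensorFlow sketch of the same model family. It is not Persephone's own code; the layer sizes, feature dimension, and phoneme inventory size are illustrative assumptions.

```python
# A minimal sketch (assumed names and sizes, not Persephone's own code) of the
# model family described above: a stacked bidirectional LSTM over per-frame
# acoustic features, trained with the CTC loss.
import tensorflow as tf

NUM_PHONEMES = 40   # hypothetical phoneme inventory size
NUM_FEATURES = 41   # hypothetical per-frame acoustic feature dimension

# Acoustic frames in, per-frame logits out; the extra output class is the
# CTC "blank" symbol used for frames that emit no phoneme.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, NUM_FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(250, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(250, return_sequences=True)),
    tf.keras.layers.Dense(NUM_PHONEMES + 1),
])

def ctc_loss(labels, logits, label_lengths, logit_lengths):
    """Mean CTC loss over a batch of padded integer phoneme label sequences."""
    return tf.reduce_mean(
        tf.nn.ctc_loss(
            labels=labels,
            logits=logits,
            label_length=label_lengths,
            logit_length=logit_lengths,
            logits_time_major=False,
            blank_index=NUM_PHONEMES,
        )
    )
```

Because CTC marginalises over all alignments between frame-level outputs and the phoneme sequence, training only requires utterance-level phonemic transcriptions rather than frame-by-frame time alignments, which is part of what makes small corpora workable.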

Contributors

Persephone has been built on the code contributions of:
