misaki

Misaki is a G2P engine designed for Kokoro models.

Hosted demo: https://hf.co/spaces/hexgrad/Misaki-G2P

English Usage

You can run this in one cell on Google Colab:

!pip install -q "misaki[en]"

from misaki import en

g2p = en.G2P(trf=False, british=False, fallback=None) # no transformer, American English

text = '[Misaki](/misˈɑki/) is a G2P engine designed for [Kokoro](/kˈOkəɹO/) models.'

phonemes, tokens = g2p(text)

print(phonemes) # misˈɑki ɪz ə ʤˈitəpˈi ˈɛnʤən dəzˈInd fɔɹ kˈOkəɹO mˈɑdᵊlz.

To fall back to espeak:

# Installing espeak varies across platforms, this silent install works on Colab:
!apt-get -qq -y install espeak-ng > /dev/null 2>&1

!pip install -q "misaki[en]" phonemizer-fork

from misaki import en, espeak

fallback = espeak.EspeakFallback(british=False) # en-us

g2p = en.G2P(trf=False, british=False, fallback=fallback) # no transformer, American English

text = 'Now outofdictionary words are handled by espeak.'

phonemes, tokens = g2p(text)

print(phonemes) # nˈW Wɾɑfdˈɪkʃənˌɛɹi wˈɜɹdz ɑɹ hˈændəld bI ˈispik.

English

Japanese

The second gen Japanese tokenizer now uses pyopenjtalk-plus and features pitch accent marks and improved phrase merging. Deep gratitude to @sophiefy for invaluable recommendations and nuanced help with pitch accent.
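
For reference, pyopenjtalk-plus is a drop-in fork that is still imported as pyopenjtalk, so the underlying calls look roughly like the sketch below. This shows the upstream library only, not misaki's own wrapper API, and the example text is illustrative.

# Sketch of the underlying pyopenjtalk-plus calls (imported as pyopenjtalk);
# misaki's wrapper layers pitch accent marks and phrase merging on top.
import pyopenjtalk

text = 'こんにちは、世界。'
print(pyopenjtalk.g2p(text))                    # space-separated phonemes
labels = pyopenjtalk.extract_fullcontext(text)  # full-context labels carry pitch-accent info
print(labels[1])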

The first gen Japanese tokenizer mainly relies on cutlet => fugashi => mecab => unidic-lite, with each being a wrapper around the next. Deep gratitude to @Respaired for helping me learn the ropes of Japanese tokenization before any Kokoro model had started training.
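
A quick way to see that chain in action is to call cutlet directly, which pulls in fugashi, MeCab, and unidic-lite underneath; this is the upstream library, not misaki's interface.

# cutlet sits on top of fugashi -> MeCab -> unidic-lite
import cutlet

katsu = cutlet.Cutlet()
print(katsu.romaji('カツカレーは美味しい'))  # Katsu karee wa oishii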

Korean

The Korean tokenizer is copied from 5Hyeons's g2pkc fork of Kyubyong's widely used g2pK library. Deep gratitude to @5Hyeons for kindly helping with Korean and permissively extending the code by @Kyubyong.
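
The upstream g2pK interface looks roughly like the sketch below; whether the g2pkc fork and misaki's own Korean wrapper expose the same entry point is an assumption, so treat this as illustrative only.

# Upstream g2pK-style usage; the example sentence is from g2pK's documentation
from g2pk import G2p

g2p = G2p()
print(g2p('어제는 날씨가 맑았는데, 오늘은 흐리다.'))  # pronunciation-normalized Hangul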

Chinese

The second gen Chinese tokenizer adapts better logic from paddlespeech's frontend. Jieba now cuts and tags, and pinyin-to-ipa is no longer used.

The first gen Chinese tokenizer uses jieba to cut, pypinyin to convert characters to pinyin, and pinyin-to-ipa to map pinyin to IPA.
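
As a rough illustration of those building blocks (not misaki's actual zh frontend), jieba's POS-tagging cut and pypinyin can be called directly:

# jieba cuts and POS-tags; pypinyin converts characters to tone-numbered pinyin
import jieba.posseg as pseg
from pypinyin import lazy_pinyin, Style

text = '你好，世界'
print([(word, tag) for word, tag in pseg.cut(text)])  # list of (word, POS tag) pairs
print(lazy_pinyin(text, style=Style.TONE3))           # e.g. ['ni3', 'hao3', ...]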

Vietnamese

TODO

  • Data: Compress data (no need for indented json) and eliminate redundancy between gold and silver dictionaries.
  • Fallbacks: Train seq2seq fallback models on dictionaries using this notebook.
  • Homographs: Escalate hard words like axes bass bow lead tear wind using BERT contextual word embeddings (CWEs) and logistic regression (LR) models (nn.Linear followed by sigmoid) as described in this paper; a sketch follows this list. Assuming trf=True, BERT CWEs can be accessed via doc._.trf_data, see en.py#L479. Per-word LR models can be trained on WikipediaHomographData, llama-hd-dataset, and LLM-generated data.
  • More languages: Add ko.py, ja.py, zh.py.
  • Per-language pip installs
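
A minimal sketch of the homograph idea above, assuming a 768-dim BERT contextual embedding and PyTorch; the class name, dimensions, and the two senses of "lead" are illustrative assumptions, not existing misaki code.

# Per-word logistic regression (nn.Linear followed by sigmoid) over a BERT CWE
import torch
import torch.nn as nn

HIDDEN_DIM = 768  # typical BERT hidden size (assumption)

class HomographLR(nn.Module):
    """Binary classifier choosing between two pronunciations of one homograph."""
    def __init__(self, hidden_dim=HIDDEN_DIM):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, cwe):                      # cwe: (batch, hidden_dim) contextual embedding
        return torch.sigmoid(self.linear(cwe))   # P(pronunciation B)

model = HomographLR()
cwe = torch.randn(1, HIDDEN_DIM)                 # stand-in for a doc._.trf_data vector for "lead"
prob = model(cwe)
print('use /lɛd/' if prob.item() > 0.5 else 'use /lid/')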

Acknowledgements
