I got transcription working in an Electron app using Moonshine models via ONNX. A lot of experimenting (AKA repeated guessing) was needed to get the models loaded and audio passed to them.
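Most of the audio-side guessing boiled down to getting samples into the shape the model wants. As a hedged sketch (the function names here are my own, and I'm assuming the usual speech-model convention of mono float samples at 16 kHz, not quoting Moonshine's docs), the preprocessing looked roughly like:

```javascript
// Assumption: the model wants mono Float32 audio at 16 kHz.
const TARGET_RATE = 16000;

// Downmix two Float32 channel buffers (e.g. from an AudioBuffer) to mono
// by averaging the channels sample-by-sample.
function downmixToMono(left, right) {
  const mono = new Float32Array(left.length);
  for (let i = 0; i < left.length; i++) {
    mono[i] = (left[i] + right[i]) / 2;
  }
  return mono;
}

// Naive linear-interpolation resampler. Good enough for experimenting;
// a real pipeline would use a proper low-pass filter first.
function resample(samples, fromRate, toRate = TARGET_RATE) {
  const outLength = Math.round(samples.length * toRate / fromRate);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    // Map output index i back to a fractional position in the input.
    const pos = i * (samples.length - 1) / (outLength - 1 || 1);
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, samples.length - 1);
    const frac = pos - lo;
    out[i] = samples[lo] * (1 - frac) + samples[hi] * frac;
  }
  return out;
}
```

The resulting Float32Array is what eventually gets wrapped in a tensor and handed to the ONNX session.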
I had to import llama-tokenizer-js like this:
let llamaTokenizer;

(async function importTokenizer() {
  // Dynamic import() works here where a static ESM import does not.
  const tokenizerModule = await import('llama-tokenizer-js');
  llamaTokenizer = new tokenizerModule.LlamaTokenizer();
})();
Instead of:
import llamaTokenizer from 'llama-tokenizer-js';
Very weird (my guess is some ESM/CommonJS interop quirk in Electron), but I'm not willing to do the research to find out what to blame for this.
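If you hit the same wall, the pattern generalizes: wrap the dynamic import in a small helper that caches the promise, so the module only loads once no matter how many callers ask for it. This is my own sketch, not anything from llama-tokenizer-js:

```javascript
// Hypothetical helper: lazily import an ES module from CommonJS-style code,
// caching the promise so repeated calls reuse the same load.
function lazyImport(specifier) {
  const cache = lazyImport.cache ?? (lazyImport.cache = new Map());
  if (!cache.has(specifier)) {
    cache.set(specifier, import(specifier));
  }
  return cache.get(specifier);
}

// Usage, assuming the same tokenizer package and export as above:
async function getTokenizer() {
  const mod = await lazyImport('llama-tokenizer-js');
  return new mod.LlamaTokenizer();
}
```

The caching matters less for a tokenizer than for anything that allocates real resources on load, but it makes the async-import ceremony a one-liner at each call site.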