
Jargonic: Industry-Tunable ASR Model

suchire

Is their WER graph just completely made up? It’s comically bad

gronky_

I just tried the demo on the homepage and I don’t know what kind of sorcery this is but it’s blowing my mind.

I input a bunch of completely made-up words (Quastral Syncing, Zarnix Meshing, HIBAX, Bilxer), used them in a sentence, and the model zero-shotted perfect speech recognition!

It’s so counterintuitive for me that this would work. I would have bet that you have to provide at least one audio sample in order for the model to recognize a word it was never trained on.

Providing the words to the model in the text modality and having it recognize them in the audio modality must be an emergent property.

four_fifths

so if i understand this correctly — you want the speech recognition model to identify a vocabulary of specific terms that it wasn't trained on. instead of fine-tuning with training data that includes the new vocabulary, you input the full vocabulary at test time as a list of words and the model is able to generate transcripts that include words from the vocabulary.
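
if i've got that right, the mechanism is probably some form of contextual biasing / shallow fusion: during decoding, hypotheses that contain the supplied terms get a score bonus, so the in-vocabulary spelling beats near-miss spellings. a toy sketch of the idea (everything below is illustrative, not Jargonic's actual mechanism or API):

    # Toy shallow-fusion keyword biasing: rescore candidate transcripts
    # with a fixed bonus per matched vocabulary term. Purely
    # illustrative; not Jargonic's actual API.
    VOCAB = {"quastral syncing", "zarnix meshing", "hibax", "bilxer"}
    BIAS = 2.0  # log-prob bonus per matched term

    def biased_score(base_logprob: float, hypothesis: str) -> float:
        """Raw model score plus a bonus for each vocabulary term present."""
        matches = sum(term in hypothesis.lower() for term in VOCAB)
        return base_logprob + BIAS * matches

    # Rank beam-search candidates by biased score instead of raw log-prob:
    candidates = [(-4.1, "zar nix meshing"), (-4.3, "zarnix meshing")]
    print(max(candidates, key=lambda c: biased_score(*c))[1])
    # -> "zarnix meshing": the in-vocabulary spelling wins despite a
    #    lower raw model score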

seems like it could be very useful but it really comes down to the specifics.

you can prompt whisper with context — how does this compare?
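
for comparison, whisper exposes this as the initial_prompt argument: the decoder is conditioned on a text prefix, which nudges it toward the prompt's spellings but doesn't constrain the output the way an explicit keyword list would. e.g. with the openai-whisper package:

    import whisper

    model = whisper.load_model("base")
    result = model.transcribe(
        "meeting.wav",  # illustrative audio file
        # Conditions the decoder on this text; it biases generation
        # toward these spellings but gives no guarantee of a match.
        initial_prompt="Glossary: Quastral Syncing, Zarnix Meshing, HIBAX, Bilxer.",
    )
    print(result["text"])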

how large of a vocabulary can it work with? if it's a few dozen words, it's only gonna help for niche use cases. if it can handle 100s-1000s with good performance, that could completely replace fine-tuning for many uses

GavCo

I was wondering the same and found these related papers:

https://arxiv.org/pdf/2309.08561
https://arxiv.org/pdf/2406.02649

I haven't really dug in yet but from a quick skim, it looks promising. They show a big improvement over Whisper on a medical dataset (F1 increased from 80.5% to 96.58%).

The inference time for the keyword detection is about 10ms. If it scales linearly with the number of keywords, you could potentially handle hundreds or thousands, but it really depends on how sensitive you are to latency. For real-time use with large vocabularies, my guess is you might still want to fine-tune.
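
Back-of-envelope, assuming the 10ms figure is per keyword and the scaling really is linear (both assumptions on my part):

    # Hypothetical latency budget if detection costs ~10 ms per keyword
    # and scales linearly; both figures are assumptions, not measured.
    PER_KEYWORD_MS = 10
    for n in (10, 100, 1000):
        print(f"{n:>4} keywords -> {n * PER_KEYWORD_MS / 1000:.1f} s added latency")
    # 0.1 s / 1.0 s / 10.0 s: fine offline, but the top end is
    # unusable for real-time unless detection is batched or parallel.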

agold97

yeah — sounds about right. retraining the whole model just to add one jargon-y term isn’t super efficient. this approach lets you plug in a vocab list at runtime instead, which feels a lot more scalable.

FloatArtifact

How does this keyword spotting compare with grammar- or intent-based approaches for handling speech commands mixed with dictation?

How does keyword spotting handle complex phrases as commands?

htrp

perhaps it's using OpenAI's Advanced Voice or another TTS to create waveforms for comparison?