
How AI hears accents: An audible visualization of accent clusters

retrac

I'm deaf. Something close to standard Canadian English is my native language. Most native English speakers claim my speech is unmarked but I think they're being polite; it's slightly marked as unusual and some with a good ear can easily tell it's because of hearing loss.

Using the accent guesser, I have a Swedish accent. Danish and Australian English follow as a close tie.

It's not just the AI. Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right? I've also been asked if I was Scandinavian.

Interestingly I've noticed that native speakers never make this mistake. They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent. That leads me to the (probably obvious) inference that whatever it is that non-native speakers use to judge accent and competency, it is different from what native speakers use. I'm guessing in my case, phrase-length tone contour. (Which I can sort of hear, and presumably reproduce well, even if I have trouble with the consonants.)

AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable. Even now AI transcription has much more trouble with me than with most people. Yet aside from a habit of sometimes mumbling, I'm told I speak quite clearly, by humans.

Hearing different things, as it were.

dmevich1

Fascinating work — especially how geography and history influence accent clustering more than language families. Brilliant visualization!

AprilArcus

The Australian-Vietnamese continuum is well-explained by Australia being the geographically nearest region which can supply native English language teachers to English language learners in Vietnam, rather than by any intrinsic phonetic resemblance between Vietnamese and Australian English.

crazygringo

This is fascinating in theory, but I'm confused in practice.

When I play the different recordings, which I understand have the accent "re-applied" to a neutral voice, it's very difficult to hear any actual differences in vowels, let alone prosody. Like if I click on "French", there's something vaguely different, but it's quite... off. It certainly doesn't sound like any native French speaker I've ever heard. And after all, a huge part of accent is prosody. So I'm not sure what vocal features they're considering as "accent"?

I'm also curious what the three dimensions are supposed to represent. Obviously there's no objective answer, but if they've listened to all the samples, surely they could explain the main contrasting features each dimension seems to encode?

johnwatson11218

I just got a project running where I used Python + pdfplumber to read in 1100 PDF files, most of my Humble Bundle collection. I extracted the text and dumped it into a 'documents' table in PostgreSQL. Then I used sentence-transformers to reduce each 1K chunk to a single 384-dimensional vector, which I wrote back to the db. Then I averaged these to produce a document-level embedding as a single vector.

Then I was able to apply UMAP + HDBSCAN to this dataset, and it produced a 2D plot of all my books. Later I put the discovered cluster assignments back in the db and used them to compute tf-idf per cluster, from which I could pick the top 5 terms to serve as a crude cluster label.

It took about 20 to 30 hours to finish all these steps, and I was very impressed with the results. I could see my cookbooks clearly separated from my programming and math books. I could drill in and see subclusters for baking, bbq, salads, etc.

Currently I'm putting it into a two-container Docker Compose file: a base PostgreSQL container plus a Python container I'm working on.
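The cluster-labeling step described above can be sketched roughly like this. This is a toy reconstruction, not the commenter's actual code: the function name, data layout, and example corpus are all mine, and each cluster is treated as one "document" for the idf computation.

```python
# Crude cluster labeling via tf-idf: score each term by how frequent it
# is within a cluster and how few clusters it appears in, then take the
# top terms as the label. Pure stdlib; real pipelines would use a
# library like scikit-learn's TfidfVectorizer instead.
import math
from collections import Counter

def cluster_labels(clusters, top_n=5):
    """clusters: dict mapping cluster id -> list of token lists (one per doc).
    Returns dict mapping cluster id -> top_n highest-tf-idf terms."""
    # Term frequency per cluster (pooling all of the cluster's documents).
    tf = {cid: Counter(tok for doc in docs for tok in doc)
          for cid, docs in clusters.items()}
    n = len(clusters)
    # Document frequency: in how many clusters does each term appear?
    df = Counter()
    for counts in tf.values():
        df.update(counts.keys())
    labels = {}
    for cid, counts in tf.items():
        total = sum(counts.values())
        scored = {t: (c / total) * math.log(n / df[t])
                  for t, c in counts.items()}
        labels[cid] = [t for t, _ in sorted(scored.items(),
                                            key=lambda kv: -kv[1])[:top_n]]
    return labels

# Tiny illustrative corpus (two clusters, pre-tokenized chunks).
toy = {
    "cooking": [["bake", "oven", "flour"], ["grill", "bbq", "smoke"]],
    "coding":  [["python", "loop", "list"], ["sql", "table", "query"]],
}
print(cluster_labels(toy, top_n=3))
```

Terms that appear in every cluster get an idf of zero and so never surface as labels, which is exactly the behavior you want for generic filler words.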

pinkmuffinere

Why do the voices all sound so similar? I'm not talking about accent, I'm talking about the pitch, timbre, and other qualities of the voices themselves. For instance, all the phrases I heard sounded like they were said by a medium-set 45-year-old man. Nothing from kids, the elderly, or people with lower- or higher-pitched voices. I assume this is expected from the dataset for some reason, but I'm really curious about that reason. Did they just get many people with similar vocal qualities but a wide range of accents?

dwohnitmok

From the article:

> By clicking or tapping on a point, you will hear a standardized version of the corresponding recording. The reason for voice standardization is two-fold: first, it anonymizes the speaker in the original recordings in order to protect their privacy. Second, it allows us to hear each accent projected onto a neutral voice, making it easier to hear the accent differences and ignore extraneous differences like gender, recording quality, and background noise. However, there is no free lunch: it does not perfectly preserve the source accent and introduces some audible phonetic artifacts.

> This voice standardization model is an in-house accent-preserving voice conversion model.

gmurphy

Since our own accents generally sound neutral to ourselves, I would love for someone to make an accent-doubler: take the differences between two accents and expand them, so an Australian can hear what they sound like to an American, or vice versa.

zman0225

Going from mono-tonal delivery to that of an expressive ebook increased my "American English" score from 52% to 92%.

I'd suggest training a little less on audio books.

djmips

What does mono-tonal mean, and what is an expressive ebook? I assume you are not American-born? I had been under the impression that rhythm was more important than the exact sounds for comprehension.

bikeshaving

The source code for this is unminified and very readable if you’re one of the rare few who has interesting latent spaces to visualize.

https://accent-explorer.boldvoice.com/script.js?v=5

agrnet

Could you explain what it means for someone to "have interesting latent spaces"? Curious how you're using that metaphor here.

ilyausorov

Nothing too secret in there! We anonymized everything and anyway it's just a basic Plotly plot. Feel free to check it out.

3abiton

Good catch. I really hate JavaScript, so I never got into d3.js; Plotly was such a lifesaver.

ilyausorov

Plotly is great! Much love.

afiodorov

Apparently Persian and Russian are close, which is surprising to say the least. I know people keep getting confused about how Portuguese from Portugal sounds close to Russian, yet the Persian connection is new to me.

CGMthrowaway

Idea: Farsi and Russian both have small sets of vowel sounds and no diphthongs, making it hard (and obvious) when attempting to speak English, which is rife with diphthongs and many different vowel sounds.

ilyausorov

Yeah, they seem to be in the same "major" cluster, though Serbian/Croatian, Romanian, Bulgarian, Turkish, Polish, and Czech are all close too.

Turkish and Persian seem to be the nearest neighbors.

zehaeva

When I went to Portugal I was struck by how much Portuguese there does sound like Spanish with a Russian accent!

oscarfree

Part of this is the "dark L" sound.

BalinKing

I’d guess that the sibilants, consonant clusters, and/or vowel reduction would play a big role.

binary132

I thought I was the only one who perceived an audible similarity between Portuguese and Russian.

djmips

I had that too, but it was with Brazilian Portuguese that I noticed it.

mh-

I speak neither, and both also sound similar to me depending on the accents of the speakers.

efskap

BERT still making headlines in 2025, you love to see it.

tmshapland

Fascinating! How did you decouple the speaker-specific vocal characteristics (timbre, pitch range) from the accent-defining phonetic and prosodic features in the latent space?

oscarfree

We didn't explicitly. Because we finetuned this model for accent classification, the later transformer layers appear to ignore non-accent vocal characteristics. I verified this for gender, for example.
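One common way to run that kind of verification (my framing; the source doesn't say how they checked) is a linear probe: train a simple classifier to predict the nuisance attribute from the embeddings, and treat near-chance held-out accuracy as evidence the attribute isn't linearly encoded. The sketch below uses synthetic stand-in embeddings, not real model activations.

```python
# Linear-probe sketch: logistic regression trained by plain gradient
# descent, used to test whether a label (e.g. speaker gender) is
# recoverable from embedding vectors. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def probe_accuracy(X_train, y_train, X_test, y_test, steps=2000, lr=0.5):
    """Fit a logistic-regression probe, return held-out accuracy."""
    w = np.zeros(X_train.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
        grad = p - y_train
        w -= lr * (X_train.T @ grad) / len(y_train)
        b -= lr * grad.mean()
    return (((X_test @ w + b) > 0) == y_test).mean()

n, d = 600, 32
y = rng.integers(0, 2, n)           # binary nuisance label
X_clean = rng.normal(size=(n, d))   # embeddings independent of the label
X_leaky = X_clean.copy()
X_leaky[:, 0] += 3.0 * y            # embeddings that encode the label

half = n // 2
acc_clean = probe_accuracy(X_clean[:half], y[:half], X_clean[half:], y[half:])
acc_leaky = probe_accuracy(X_leaky[:half], y[:half], X_leaky[half:], y[half:])
print(acc_clean)  # near chance: label not linearly encoded
print(acc_leaky)  # well above chance: label clearly encoded
```

A probe on real accent-model activations would use the layer outputs in place of the synthetic matrices; a layer whose probe accuracy sits at chance is, in this linear sense, invariant to the attribute.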