
Updates to Advanced Voice Mode for paid users

zaptrem

> Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music. We are actively investigating these issues and working toward a solution.

Would be cool to hear some samples of this. I remember there was some hallucinated background music during the meditation demo in the original reveal livestream, but I haven't seen much beyond that. Probably an artifact of training on podcasts to get natural intonation.

automationist

If anyone's wondering, here's a short sample. It quietly updated last night, and I ended up chatting for like an hour. It sounds as smart as before, but like 10x more emotionally intelligent. Laughter is the biggest giveaway, but the serious/empathetic tones for more therapy-like conversations are noticeable, too. https://drive.google.com/file/d/16kiJ2hQW3KF4IfwYaPHdNXC-rsU...

candiddevmike

Did it really say “partwheel”, or is it garbled?

transcriptase

I use advanced voice a lot and have come across many weird bugs.

1) Every response would be normal except it would end with a “whoosh”, like one of those sound effects some mail clients play when a message is sent, and the model itself either couldn’t or wouldn’t acknowledge it.

2) The same, except with the sound of someone knocking on a door, like something you’d play on a soundboard.

3) The entire history in the conversation disappearing after several minutes of back and forth, leading to the model having no idea what I’m talking about and acting as if it’s a fresh conversation.

4) Advanced voice mode stuttering because it hears its own voice and thinks it’s me interrupting (on a brand new iPhone 16 Pro, with medium-low built-in speaker volume and the built-in mic).

5) Really weird changes in pronunciation or randomly saying certain words high-pitched, or suddenly using a weird accent.

And all of this was prior to these most recent changes.

It also stutters and repeats sometimes, and says “poor connection” even though I know the connection is near-ideal.

zaptrem

I may know why that first one happens! They’re not correctly padding the latent in their decoder (by default torch pads with zeros; they should pad with whatever their latent’s representation of silence is). You can hear the same effect in songs generated with our music model: https://sonauto.ai/

Yeah, we’re too lazy to fix it too.
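A minimal sketch of what that fix might look like, assuming a PyTorch decoder that consumes a (batch, channels, time) latent; the encoder call, shapes, and names here are illustrative assumptions, not anyone's actual code:

```python
import torch

def pad_latent_with_silence(latent, target_len, silence_frame):
    """Pad a latent sequence out to target_len with a 'silence' latent frame.

    latent:        (batch, channels, time) tensor from the audio encoder
    silence_frame: (channels,) latent for a frame of pure silence, e.g. the
                   time-averaged encoding of an all-zero audio buffer

    Zero-padding the latent (the torch default) tends to decode to an audible
    artifact at the end of the clip rather than to actual silence.
    """
    pad_frames = target_len - latent.shape[-1]
    if pad_frames <= 0:
        return latent
    pad = silence_frame.view(1, -1, 1).expand(latent.shape[0], -1, pad_frames)
    return torch.cat([latent, pad], dim=-1)

# Hypothetical usage: encode a second of digital silence once, keep its mean
# latent frame around, and reuse it wherever the decoder input needs padding.
# with torch.no_grad():
#     silence_frame = encoder(torch.zeros(1, 1, 48000)).mean(dim=-1).squeeze(0)
# padded = pad_latent_with_silence(latent, target_len=1024, silence_frame=silence_frame)
```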

transcriptase

I’m super curious now: how does padding lead to TTS replies repeatedly ending with what seems to be an actual non-speech sound effect?

arthurcolle

they still need to post-train out the emissions of all the trapped souls

kubb

I have the feeling that Advanced Voice Mode is significantly worse than when I used it earlier this week. The voice sounds disinterested and has weird intonation. It used to be excellent for foreign-language conversation practice; now it's significantly worse.

Edit: After using up my 15 minutes for testing, I have to say that the new voice is actually not bad, although I was used to something else. But it has a very clear "artificial" quality to it. It also sometimes misinterprets my input as something completely different from what I said, for example "please like my video and subscribe to my channel".

doctorhandshake

Stumbled across the new voice this afternoon after months of not using voice mode. After being impressed by the naturalness, I was also let down by the disinterested tone. That, combined with the platitudes and the tendency to repeat back to me what I was saying without adding new information, left me disappointed with the update.

vunderba

Is this new? I'm on the Plus plan and just a few days ago carried on a conversation for around 45 minutes while on a walk with my dog.

Agreed though: the new voice's accent (at least for Sol) sounds significantly degraded, particularly when conversing in Chinese.

kubb

Apparently it's 6 months old [1]. You might be using the standard voice mode (the advanced one has just one voice, IIUC).

[1] https://www.reddit.com/r/OpenAI/comments/1hdamrm/so_advanced...

vunderba

Thanks. OpenAI's docs are frustratingly vague about the whole thing. It seems (assuming the 15-minute hard limit holds true) that I must have been conversing with Advanced mode for 15 minutes, since Advanced is the default for Plus subscribers on the mobile app, and then it presumably handed off to the standard voice mode after that.

Advanced https://help.openai.com/en/articles/9617425-advanced-voice-m...

Standard https://help.openai.com/en/articles/8400625-voice-mode-faq

arthurcolle

No, advanced voice mode has multiple voices.

bigshot

There’s a 15-minute limit?

kubb

In the Plus subscription, yes. You can also pay 200 dollars per month for Pro, and in that plan advanced voice mode is unlimited. 200 bucks is quite a lot, I've gotta say. I wish there were a middle-ground option, but even for the 20 dollars for Plus, they should give you more than 15 minutes.

ed_mercer

I keep using standard voice mode (Cove) because I like its grounded voice a lot. The advanced Cove’s voice sounds too much like an overly happy guy. I wish I could tell it to chill and talk normally but it won’t.

TheTaytay

I wish they still had the voice mode that was _only_ speech-to-text and text-to-speech. It didn't sound as good, but it was as smart as the underlying model. The advanced voice mode regularly goes off the rails for me, makes the same mistake repeatedly, and does other things that the text versions of advanced LLMs haven't done for months now.
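For what it's worth, that older-style pipeline is easy to approximate yourself against the API. A minimal sketch, assuming the current OpenAI Python SDK; the model and voice names are illustrative choices, not what ChatGPT's voice mode actually runs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def voice_turn(audio_path: str, out_path: str = "reply.mp3") -> str:
    # 1) Speech-to-text: transcribe the user's recorded question
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2) Answer with the ordinary text model, so replies stay as "smart" as text chat
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript.text}],
    )
    answer = reply.choices[0].message.content

    # 3) Text-to-speech: read the answer back and save it as audio
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    with open(out_path, "wb") as out:
        out.write(speech.read())
    return answer
```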

adeelk93

Don’t they? Press the microphone button for speech-to-text, and the speaker button for text-to-speech.

og_kalu

In the app:

Settings > Personalization > Custom Instructions, then the Advanced dropdown. Uncheck Advanced Voice.

On the desktop site:

Profile button > Customize ChatGPT, then the Advanced dropdown. Uncheck Advanced Voice.

tallytarik

> Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music.

This would be really funny if it weren’t real life.

dedicate

In my daily use, I just want the answer, not a performance. I'd rather it sound like a smart assistant, not my best friend.

arnaudsm

If there's an OpenAI PM reading this: please add a model selector for the voice modes. 80% of this thread is users confused about which model they're using.

cladopa

Today, for the first time in the months I've been using ChatGPT, the voice was disgusting.

It was the voice of someone (a woman) who was confrontational, someone who doesn't like you.

It made me want to close and delete the chat immediately.

transcriptase

I don’t suppose you have a bunch of custom instructions telling ChatGPT to be concise, terse, etc., do you? Those impact the voice model too, and it turns out the “get to the point, I’m not an idiot” pre-prompts people have been recommending really don’t translate well when the voice mode uses them as a personality.