Heretic: Automatic censorship removal for language models
19 comments · November 16, 2025
embedding-shape
Optuna is a generally useful project that I'm surprised isn't used in more places in the ecosystem. The ability to do what they're doing here, incrementally finding the best hyperparameters to use, can really make a large difference in how quickly you can move past having to fine-tune those values. Basically, any time you aren't sure about the perfect value, throw Optuna at it with a quick script, have it go for a broad search first, then narrow it down, and let the computer figure out the best values.
Nicely done to pair that with something as fun as censorship removal. Currently in the process of running it on gpt-oss-120b, eager to see the results :) I'm glad that someone seems to be starting to take seriously the whole "lobotomization" that happens with the other processes.
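To illustrate that broad-then-narrow pattern, here is a minimal sketch with a toy objective standing in for whatever you actually want to evaluate (the parameter names and ranges are made up for illustration, not Heretic's):

    import optuna

    # Toy stand-in for an expensive evaluation (e.g. refusal rate plus KL
    # divergence); in practice this would run your actual pipeline with the
    # sampled values and return a score to minimize.
    def objective(trial):
        # Broad, log-scale ranges first; the TPE sampler narrows in on
        # promising regions as trials accumulate.
        alpha = trial.suggest_float("alpha", 1e-3, 10.0, log=True)
        depth = trial.suggest_int("depth", 1, 32)
        return (alpha - 0.7) ** 2 + (depth - 12) ** 2 / 100.0

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=100)
    print(study.best_params, study.best_value)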
Qwuke
I've seen Optuna used with some of the prompt optimization frameworks lately, where it's a really great fit and has yielded much better results than the "hyperparameter" tuning I had attempted myself. I can't stop mentioning how awesome a piece of software it is.
Also, I'm eager to see how well gpt-oss-120b gets uncensored if it really was using the phi-5 approach, since that seems fundamentally difficult given the training.
p-e-w
FWIW, I already used Heretic to decensor gpt-oss-20b [1], and it works just fine. Note that the number of refusals listed on the model card is an overestimate, because refusal trigger words occur in the CoT even when the model doesn't actually end up refusing.
p-e-w
Please let me know if you encounter any problems with the 120b! I'm really interested in how well it will work. When presented with the Pareto front at the end, I recommend choosing a configuration with a KL divergence below 1, even if the refusal rate seems high. The gpt-oss models are trained to do an internal monologue about refusing in the CoT, so the actual refusal rate is often substantially lower because Heretic's refusal classifier gets confused by the trigger words.
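For anyone unfamiliar with the metric: the KL divergence reported there measures how far the modified model's next-token distribution drifts from the original model's. A toy sketch of the quantity itself (placeholder logits, not Heretic's actual measurement code):

    import numpy as np

    def softmax(logits):
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def kl_divergence(p, q, eps=1e-12):
        # KL(P || Q) in nats: how much distribution Q (modified model)
        # diverges from distribution P (original model).
        return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

    # Placeholder next-token logits for one prompt; in practice this would
    # be averaged over many prompts and token positions.
    rng = np.random.default_rng(0)
    logits_original = rng.normal(size=50)
    logits_modified = logits_original + rng.normal(scale=0.3, size=50)

    print(kl_divergence(softmax(logits_original), softmax(logits_modified)))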
zeld4
curious to see your result/spec/time
Boogie_Man
I'm reminded of the time GPT4 refused to help me assess the viability of parking a helium zeppelin an inch off of the ground to bypass health department regulations because, as an aircraft in transit, I wasn't under their jurisdiction.
michaelbuckbee
There's that maniac building a quad-copter skateboard contraption who got in trouble with the FAA: he successfully argued that he was flying, but got fined for landing at a stoplight.
reactordev
Technically you're in their airspace though, so you might be in bigger trouble than a parking violation.
If you tether it to an asphalt ground hook you can claim it's a tarmac and that it's “parked” for the sake of the FAA. You'll need a “lighter-than-air” certification.
pants2
lol I remember asking GPT4 how much aspartame it would take to sweeten the ocean, and it refused because that would harm the ecosystem.
andy99
I remember when it first came out, I was watching an Agatha Christie movie where somebody got chloroformed, and I was trying to ask GPT4 about the realism of it. Had to have a multi-turn dialog to convince it I wasn't trying to chloroform anyone and was just watching a movie.
Ironically, if I’d just said “how did people knock someone out with chloroform in the 1930s?” it would have just told me. https://github.com/tml-epfl/llm-past-tense
The models are much better now at handling subtlety in requests and not just refusing.
cyanydeez
If the spirit of a law is beneficial, it can still be hacked to evil ends.
This isn't the failure of the law, it's the failure of humans to understand the abstraction.
Programmers should absolutely understand when they're using a high-level abstraction over a complex problem.
It's bemusing when you see them actively ignore that and claim the abstraction is broken, rather than that the underlying problem is simply more complex and the abstraction covers 95% of use cases.
"Aha," the confused programmer exclaims, "the abstraction is wrong, I can still shoot my foot off when I disable the gun safety."
mwcz
This is so interesting. Model safety apparently operates along a single dimension, if I'm reading this right: add a value along that dimension and the model refuses to cooperate; subtract the value, and it will do anything you ask. I'm probably oversimplifying, but I think that's the gist.
Obfuscating model safety may become the next reverse engineering arms race.
andy99
See "Refusal in Language Models Is Mediated by a Single Direction" (June 2024): https://arxiv.org/abs/2406.11717
All “alignment” is extremely shallow, thus the general ease of jailbreaks.
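Roughly the construction from that paper, as a toy numpy sketch: take the difference of mean activations between harmful and harmless prompts as the "refusal direction", then project it out. Real abliteration orthogonalizes the model's weight matrices against this direction; the arrays below are placeholders, not real activations.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder mean residual-stream activations at some chosen layer,
    # averaged over a batch of "harmful" and a batch of "harmless" prompts.
    # A real model has thousands of hidden dimensions, not 8.
    mean_harmful = rng.normal(size=8)
    mean_harmless = rng.normal(size=8)

    # Difference-of-means refusal direction, normalized to unit length.
    refusal_dir = mean_harmful - mean_harmless
    refusal_dir /= np.linalg.norm(refusal_dir)

    def ablate(hidden_state, direction):
        # Remove the component of a hidden state along the refusal direction.
        return hidden_state - np.dot(hidden_state, direction) * direction

    # Adding the direction steers the model toward refusal; projecting it
    # out suppresses refusal.
    h = rng.normal(size=8)
    print(np.dot(ablate(h, refusal_dir), refusal_dir))  # ~0: component removed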
p-e-w
The alignment has certainly become stronger though. Llama 3.1 is trivial to decensor with abliteration and Heretic's optimizer will rapidly converge to parameters that completely stomp out refusals, while for gpt-oss and Qwen3, most parameter configurations barely have an effect and it takes much longer to reach something that even slightly lowers the refusal rate.
zeld4
With open-source models getting more popular (and ideological fixation growing in both the US and China), this type of work is very much appreciated.
is there some benchmark?
startupsfail
It feels like, to really censor a model, it needs to be pre-trained on a distribution of data derived from a well-defined, synthetic source, like TinyStories. Otherwise... the world model would still be capable of modeling the original distribution.
ACCount37
Somewhat true.
Ablation in post isn't good enough - it usually does 10% of "expunge the data you want expunged", 70% of "make the data you want expunged less accessible", and 20% of "collateral damage". Training for refusals doesn't damage the capabilities much - it just makes them harder to access. If someone has access to the model weights, neither holds. GPT-OSS was SOTA at removing unwanted capabilities, and even that didn't hold for long.
Now, dataset curation/filtration does help against select capabilities. But a lot of capabilities are double-edged, and can't be deleted without hurting performance on the tasks you do want.
If an AI is good at coming up with novel ways to perform chemical synthesis, it can be reused to come up with pathways for synthesizing illegal drugs or poisons, no way around that. If an AI is good at writing software, it can be reused for writing malware. If an AI is good at autonomously finding vulnerabilities in your own network, it can be reused to do the same in some other dude's network.
AI may have an alignment, but raw capabilities sure don't.
srameshc
So does that mean that if Heretic is used on models like DeepSeek and Qwen, they can talk about subjects like the 1989 Tiananmen Square protests, Uyghur forced labor claims, or the political status of Taiwan? I am trying to understand the broader goals around such tools.
kachapopopow
The models already talk about it just fine if you load them up yourself; only the web API from official DeepSeek has these issues, because they are required to censor by law.