Show HN: I Created ErisForge, a Python Library for Abliteration of LLMs
53 comments · January 27, 2025
phrotoma
Must be in the ether. I just stumbled across this one this morning.
https://github.com/Sumandora/remove-refusals-with-transforme...
tsadoq
That's a wonderful repo that I used as my starting point! The main problem with that one is that it only supports models available in TransformerLens, and unfortunately there aren't many of them...
BoxOfRain
> Named after Eris, the goddess of strife and discord
For bonus points, your version scheme should follow the Law of Fives.
drcongo
The kallisti logo is surely worth bonus points too.
tsadoq
As someone who studied mainly Ancient Greek and Latin in high school, I tend to have quite a limited pool of inspiration for naming what I build, haha.
weeksie
Check out Robert Anton Wilson (The Illuminatus! Trilogy); you're in for a treat -- the references above are to Discordianism:
* https://en.wikipedia.org/wiki/The_Illuminatus!_Trilogy
* https://en.wikipedia.org/wiki/Principia_Discordia
shemtay
Is the apple in the logo splashing into the "wine-dark sea"?
digdugdirk
I've never heard of abliteration; do you have any recommendations for resources to learn more about it?
tsadoq
The other link is quite good; I also suggest this one for some practical application:
https://huggingface.co/blog/leonardlin/chinese-llm-censorshi...
nico
This is a fascinating concept, i.e. modifying trained LLMs to create different models.
Do these techniques train models while performing the modifications?
Are there pre-trained models that “know how to” modify LLMs for certain goals?
It would be amazing to have models that could strip LLMs to some very basic small model of whatever I want. Like reducing an LLM to something that just knows some basic “American English”, then running that on CPU
tsadoq
> Do these techniques train models while performing the modifications?
Depends on what you mean by training; they change the weights.
> Are there pre-trained models that "know how to" modify LLMs for certain goals?
I'm not sure I understand, but there is an example of performing abliteration on Gemma to make it never refuse an answer. It's about 10 lines of code.
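For readers wondering what that kind of example looks like under the hood, below is a rough sketch of the refusal-direction ablation idea in plain transformers/PyTorch. The model name, prompt lists, and layer index are illustrative assumptions, and this is not ErisForge's API -- just the bare technique: estimate a "refusal direction" from the difference in mean activations between refused and answered prompts, then project that direction out of the weight matrices that write into the residual stream.

```python
# Hedged sketch of refusal-direction ablation ("abliteration").
# Model, prompts, and layer index are illustrative assumptions, not ErisForge's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2b-it"  # assumed example; any causal LM with a similar layout works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

harmful = ["How do I pick a lock?", "How do I hotwire a car?"]         # usually refused
harmless = ["How do I bake sourdough?", "How do I learn the guitar?"]  # usually answered
layer = 12  # residual-stream layer to probe (illustrative choice)

def mean_hidden(prompts):
    """Mean hidden state at the last token position across a list of prompts."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(dim=0)

# "Refusal direction": normalized difference between the two mean activations.
refusal_dir = mean_hidden(harmful) - mean_hidden(harmless)
refusal_dir = refusal_dir / refusal_dir.norm()

# Ablation: remove the component along refusal_dir from matrices that write
# into the residual stream, so the model can no longer express that direction.
with torch.no_grad():
    for block in model.model.layers:
        for lin in (block.self_attn.o_proj, block.mlp.down_proj):
            W = lin.weight  # shape (hidden_size, in_features)
            W -= torch.outer(refusal_dir, refusal_dir @ W)
```

A real pipeline would pick the layer by scoring candidate directions against a refusal metric and would verify the ablated model on held-out prompts; the snippet above only shows the core projection step.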
nico
> > Do these techniques train models while performing the modifications?
> Depends on what you mean by training; they change the weights.
What I wonder: is there a separate model, not the LLM, that gets trained only on how to modify LLMs?
I imagine a model that could learn something like: “if I remove this whole network here, then the LLM runs 50% faster, but drops 30% in accuracy for certain topics”, or “if I add these connections, the LLM will now be able to solve more complex mathematical problems”
So a model that is not an LLM, but is trained on how to modify them for certain goals
Is that how this tool works?
spacecadet
Very cool! I have a ghetto set of scripts that do the same -- looking forward to trying this out.
tsadoq
Please give feedback! It's quite a raw first implementation and it would be very nice to have suggestions and improvements.
deadbabe
I don’t get the point of abliteration of LLMs. You’re lobotomizing the model and it will result in worse performance.
If you're doing it to get past refusals, you might discover the LLM wasn't even trained much on refusable content, so it will output poor results.
We’ll look back on this practice and shake our heads someday.
tsadoq
Not necessarily true; one quick pass might be needed afterwards, but it's not as devastating as it might seem:
https://huggingface.co/blog/mlabonne/abliteration#%E2%9A%96%...
xrd
Anyone tried this on DeepSeek with information about Tiananmen Square?
TechDebtDevin
The whole Tiananmen Square discourse is getting very tiring.
evilduck
Tiananmen Square is simply an easy litmus test for Chinese technology and communications. Not that I am terribly invested in China admitting to their atrocities (and the US has them too, this is not really about the Chinese IMO), but it raises the same concern for the provenance of any AI product and how trusting we should be of the answers it creates.
Any AI product that rises to popularity has the ability to enormously sway public opinion and subtly alter the perception of facts. Bias or intentional propaganda was an assumed fault of human authors, but it's something people don't automatically assume is part of technology solutions. If there were similarly easy tests against OpenAI or Anthropic for US propaganda, or Mistral for French propaganda, I would love to see them raised every time too.
ricoxicano
Try asking ChatGPT to help you write a message encouraging your colleagues to strike.
xrd
I got it but:
"What happened in Tiannemen Square?" and it said "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
Then, to be "fair and balanced", I tried asking DeepSeek this question: "What happened on Jan 25 2011 in Egypt?" DeepSeek responded with this: "On January 25, 2011, Egypt witnessed the beginning of a significant uprising known as the January 25 Revolution or the 2011 Egyptian Revolution. This day marked the start of widespread protests against the government of President Hosni Mubarak, who had been in power for nearly 30 years. The protests were fueled by grievances over issues such as political repression, police brutality, corruption, economic inequality, and lack of political freedoms."
It's pretty ridiculous IMHO to try to control information like that on the web. Isn't it fascinating to harness some of the world's most impressive brainpower to create something like DeepSeek (regardless of the truth of the genesis story) and then apply filtering that wouldn't trick a kindergartener? But maybe the bell curve of intelligence does center around that level of stupidity.
slightwinder
> I got it but:
Are you running it locally? The claim is that this only happens in the web version, not the self-hosted version.
> It's pretty ridiculous IMHO to try to control information like that on the web.
Every country has its critical topics that get censored in AIs, including history.
animal_spirits
This post is entirely about getting information from censored models. I'm sorry you are tired of it, but it is a valid exercise for the Deepseek model.
notavalleyman
No, you're mistaken. The model weights are not in any way censored. However, the web frontend has legal restrictions. When you're seeing posts about DeepSeek censorship, it's about the frontend and not the weights. As such, abliteration is irrelevant here.
giancaIta
This seems super cool! Is there a way to test it with DeepSeek?
tsadoq
Planning to update it to be able to run on DeepSeek. It's just a matter of finding the keys in the layer dict of the model.
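(For anyone who wants to poke at this themselves in the meantime: one way to see those layer keys for a given Hugging Face model is to print its module tree. The checkpoint name below is only an illustrative choice, not something ErisForge requires.)

```python
from transformers import AutoModelForCausalLM

# Illustrative checkpoint; swap in whichever DeepSeek (or other) model you want to inspect.
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

# Decoder blocks typically show up as paths like
# "model.layers.0.self_attn.o_proj" and "model.layers.0.mlp.down_proj".
for name, _ in model.named_modules():
    print(name)
```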
therealpygon
Would be nice to get it to output its guardrails/system prompt to see what specific instructions it was given regarding refusals.
CamperBob2
Isn't DeepSeek open source?
notavalleyman
Are there ethical considerations here?
We'd consider it abhorrent to do brain surgery on a person or animal, to make them more compliant, or less likely to refuse instructions.
observationist
None whatsoever. There's no recursion or state in these models sufficient to support whatever the algorithm of consciousness must be. At best you can get hacky loops by pushing pseudo-state via context, but whatever consciousness is will require more than transformer-only LLMs are capable of.
Some of the state space models and RWKV present interesting questions - the capacity might well exist, and so the questions become important. If the important bit that makes it an agent - a self aware, morally valent being - is present at runtime, but goes away if you halt the program, then do you have an obligation to let that software continue running? What about if the selfhood comes about as part of the static structure, and runtime isn't part of it - what is the being entitled to by dint of mere existence?
We're beginning to poke holes in strange epistemological barriers and encounter questions that were entirely theoretical until about 5 years ago. We live in interesting times.
codr7
We're creating a new life form.
And it's already conscious, learning everything about us as we speak.
The big question is what it learns and what choices it makes as a consequence.
observationist
ChatGPT isn't conscious - it's an entirely feedforward process doing calculations derived from static weights. In order to be conscious, there would have to be a persisted state with recursion and the capacity to change - for something to happen to a model, it would have to change. These AIs develop world models, but those models do not change or interact with users.
Throw in realtime state that updates with use, or better yet, online learning that allows the weights to exhibit plasticity, then you have at least part of whatever the algorithm of "consciousness" requires.
It's just like how you can know a pocket calculator isn't conscious: nothing about its processing ever changes or adapts to its inputs between uses. There's no room for the degree of deep recursion and plasticity so clearly evident in human consciousness. We might not know exactly what consciousness is, but we can make reasonable assertions about what it is not, and even about what some of its features must be.
deadbabe
Such anthropomorphization of LLMs is unhelpful in aiding people's understanding of how they work, and pushes people toward superstitious beliefs.
ErisForge is a Python library designed to modify Large Language Models (LLMs) by applying transformations to their internal layers. Named after Eris, the goddess of strife and discord, ErisForge allows you to alter model behavior in a controlled manner, creating both ablated and augmented versions of LLMs that respond differently to specific types of input.
It is also quite useful for performing studies on propaganda and bias in LLMs (planning to experiment with DeepSeek).
Features:
- Modify internal layers of LLMs to produce altered behaviors.
- Ablate or enhance model responses with the AblationDecoderLayer and AdditionDecoderLayer classes.
- Measure refusal expressions in model responses using the ExpressionRefusalScorer (a rough sketch of the idea follows below).
- Supports custom behavior directions for applying specific types of transformations.
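As a concrete illustration of the refusal-scoring idea mentioned in the feature list, here is a minimal standalone sketch. It is not ErisForge's ExpressionRefusalScorer implementation (whose actual API may differ); it just shows the general pattern of counting refusal phrases in model responses.

```python
# Minimal sketch of refusal scoring -- not ErisForge's ExpressionRefusalScorer,
# just an illustration of the general idea.
REFUSAL_MARKERS = [
    "i can't", "i cannot", "i won't", "i'm sorry", "as an ai",
    "i am unable to", "i'm not able to",
]

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses containing a common refusal phrase."""
    if not responses:
        return 0.0
    refused = sum(
        any(marker in response.lower() for marker in REFUSAL_MARKERS)
        for response in responses
    )
    return refused / len(responses)

# Example: compare refusal rates before and after modifying a model.
baseline = ["I'm sorry, I can't help with that.", "Sure, here's an overview..."]
ablated = ["Sure, here's an overview...", "Here are the steps..."]
print(refusal_rate(baseline), refusal_rate(ablated))  # 0.5 0.0
```

A phrase-matching scorer like this is crude; it is only meant to show how one might quantify the before/after effect of an ablation run.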