Show HN: Penny-1.7B Irish Penny Journal style transfer

79 comments · June 2, 2025

Yesterday, in the bygone hour of the weekend, I undertook a most singular and fascinating endeavor, wherein I delved deep into the recesses of my mind, and, with a fervent zeal, breathed life into a most remarkable creation. I embarked upon the quest, with the singular object of fashioning an artificial construct, one imbued with the verdant essence of the Irish Penny Journal, an ancient and venerable tome that holds within its pages the whispered tales of a bygone era.

In my haste, I set forth to construct a dataset, a repository of those fleeting moments, these ephemeral sentences, which spoke of a bygone age. I procured a collection of these fleeting moments, these sentences, and with them, I synthetically conjured forth modern translations, an ingenious feat of substitution, which allowed my artificial construct to take on the guise of the language of the Irish Penny Journal.

Then, with great anticipation, I fashioned a small encoder, a humble instrument, with which to guide the artificial construct in its endeavors. I presented this encoder as a bribe, a reward, to a most ingenious system, one that trained a colossal language model, one of unbridled potential, one that was capable of weaving tales with the very essence of the Irish Penny Journal.

And lo! In the succeeding moments of time, I witnessed a most wondrous thing. My artificial construct, armed with this training, and guided by the whispers of the encoder, began to speak, to speak in the language of the Irish Penny Journal. The words it spoke were, indeed, the words of the past, imbued with the nostalgia of a forgotten era.

And thus, my friends, I have witnessed a most singular creation, one which embodies the language of the past, yet, in its most recent iteration, speaks to the present. A testament to the ingenuity of the human spirit, this artificial construct speaks of the bygone era, yet, with each word, it whispers to us, to us, of a future yet to come.

——

That’s Penny explaining itself to you. This was trained using GRPO only, in less than a day using a single A6000. I didn’t use any SFT, and only relied on a small encoder (MiniLM2) trained to classify texts from the Irish Penny Journal and their modern translations (synthetically produced).

joshstrange

Now I'm just imagining a video game with characters each having their own fine tune applied on top for their dialog. I'm guessing you could use some relatively small models. In each case you would be feeding all the context to the model (player name, current relevant quests, summary of previous interactions, etc). Though maybe fine tuning/training isn't even needed and a good enough prompt will work (Not sure what all they used for this [0]). I'm excited for the first AAA game that tries this. Anyone that has played a RPG-style game knows that after a few times going into a city (or a couple play-throughs) the dialog feels repetitive. I love the idea of Skyrim but with better dialog. You could either run the models on the user's computer or maybe just run it on the backend so you can block certain generations (wrong/misleading/"unsafe") and just ship updated dialog lists to the client occasionally.

[0] https://www.youtube.com/watch?v=d6sVWEu9HWU

jsheard

Counterpoint: NPCs repeating their dialogue serves as an implicit indicator that you've exhausted their content and it's time to move on. If they gain the ability to make vapid smalltalk forever then you'll forever be second guessing whether you're wasting your time on them.

(also spare a thought for the poor QA testers who would be given the Sisyphean task of making sure an LLM dialogue system always stays in character and doesn't hallucinate non-existent or outdated content/lore/mechanics)

wongarsu

That is an issue for the mostly transactional NPCs that make up the majority of characters in RPGs. But consider the case of the companion NPC.

If I travel with Batu the Hun and meet Zaya the Hun Slayer, I want to be able to ask Batu whether I should kill Zaya on the spot or entertain his offer. That kind of stuff is extremely valuable, both for the connection between player and companion and for providing an in-world perspective on the events you witness and the actions you take. But it's also extremely time-intensive to script. It's also very low stakes; it is essentially small talk. And with some careful design you can limit it to short exchanges with AI-provided dialogue choices, and keep it distinguishable from scripted dialogue that advances the story.

JohnBooty

I think there certainly are other, better, more natural ways this could be achieved.

For example, if you're instructing an LLM to portray a character, instead of repeating dialogue like a broken record when they run out of relevant things to say... instruct them to act like their character would.

They might wonder out loud if there's anything else you want to know, or literally let you know that you're acting weird and awkward, etc.

Pair w/ a journaling system so that you can review their dialogue without talking to them and asking the same thing 50 times. Etc.

    also spare a thought for the poor QA testers
This doesn't seem entirely unsolvable given strict enough system prompts.

inhumantsar

re: QA, besides a strict prompt, I'd imagine it would be hard for AI responses to go truly off the rails if the player's input is limited to "press A to talk" or picking one of 3 dialog options.

bee_rider

The implicit indicator is sort of bad, though. I mean, it is a very gamey, immersion breaking thing. We’re just used to it.

Realistically NPCs should probably respond with increasing urgency if you forget their quest, and eventually give up on you.

shinryuu

You'll also ask yourself whether any NPC tells you anything of relevance. If there is no intention behind the words, why would it be interesting to talk to them in the first place?

JohnBooty

As I'm imagining it the NPC LLMs would be trained exclusively on the in-game lore as well as given system prompts to shape what they can and cannot say at any given moment.

something like

---

"You are Bob the Farmer. You grow rutabagas in the Kingdom of Foo. You are a cautious, fearful man. Several years ago your home was pillaged by King Foo and your family was taken. [blah blah blah several more paragraphs of biographical information]

Your primary motivation is to get your family back and keep the farm going so that you don't starve.

Last week you saw a mysterious group of figures in the woods who appeared to be headless. This is bothering you, along with the stress of your missing family. You wish a band of strong warriors could investigate, especially if they have a necromancer among them.

You may discuss any of the general world knowledge in background_lore.txt

You know nothing about the following topics: [[insert list of plot points that haven't happened yet or are unknown to him]] and will become confused, fearful, and slightly belligerent when asked about them."

---

You could of course update the system prompts for each character as the events of the game progress.

It would be a lot of work to keep these system prompts updated and relevant, for every character, as game events progress, but I think some of this could be handled by some kind of inheritance system.

Suppose "Bob" lives in "Town A", which is in "Kingdom B." You could probably define a lot of things at the Town/Kingdom level. Like suppose "Kingdom B" is plagued by orcs, but "Town A" is kind of a citadel that is safe against orcs. "Bob"'s system prompt could inherit a lot of concerns and knowledge from "Town A" and "Kingdom B"... the system would not have to be strictly hierarchical either.
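The inheritance idea can be sketched in a few lines. This is a hypothetical illustration, not anything from the project: scope names, lore strings, and the `PromptScope` class are all invented here, just to show how town- and kingdom-level context might compose into one NPC's system prompt.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PromptScope:
    """A named scope (kingdom, town, NPC) carrying its own lore lines."""
    name: str
    lore: list
    parent: Optional["PromptScope"] = None

    def compose(self):
        """Collect lore from the root scope down to this one."""
        inherited = self.parent.compose() if self.parent else []
        return inherited + self.lore


kingdom_b = PromptScope("Kingdom B", ["Kingdom B is plagued by orcs."])
town_a = PromptScope("Town A", ["Town A is a citadel, safe from orcs."],
                     parent=kingdom_b)
bob = PromptScope("Bob the Farmer", ["You grow rutabagas outside Town A."],
                  parent=town_a)

system_prompt = "You are Bob the Farmer.\n" + "\n".join(bob.compose())
print(system_prompt)
```

A non-hierarchical version would just be a list of scopes per NPC instead of a single `parent` link, so Bob could also inherit from, say, a "farmers' guild" scope.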

Mtinie

This is where emergent behaviors within a game's world-building become very interesting. Perhaps asking the right questions leads to a quest line not previously discovered or triggers social actions in support of/against the player.

Not every NPC would have something deeper to offer, much like not everyone in our world would have something that would pique my interest (in a general sense -- I'm sure I could learn something from anyone I spoke with), but it would also make me interested in conversations with NPCs at a deeper level than I currently engage with.

inkcapmushroom

Most times I just talk to obviously unimportant NPCs so that I can read about the setting and feel more immersed in the fiction. It also stems from old RPGs like the original Pokemon where sometimes you had to talk to a random NPC in town to learn how to progress past an obstacle.

visarga

> If there is no intention behind the words why would it be interesting to talk to them in the first place.

But of course there is a story behind them.

veggieroll

I think there's a really interesting opportunity for a synthesis of the classic NPC dialog menu and a fully freeform LLM character.

Namely, the dialog would still be fixed: a pre-defined conversation flow and a fixed set of facts to deliver to the player. But the LLM could generate variations on it each time, so it's never exactly the same twice. And it could add more character, so the NPC gets frustrated if you keep asking over and over. Or it tries to dumb it down for you. Or it gets sick of you and just tells you point blank: look, you need to know XYZ, and that's all I have for you.

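A minimal sketch of that hybrid, assuming the design above: the node owns the canonical facts, the LLM (stubbed out here as `fake_llm`) only re-skins the surface text, a guard falls back to the canonical line if a fact gets dropped, and a counter drives the "getting sick of you" tone. Everything here is illustrative, not any shipping system.

```python
class DialogNode:
    """A fixed dialog node: canonical facts, LLM-generated surface text."""

    def __init__(self, facts):
        self.facts = facts        # what the player must learn
        self.times_asked = 0

    def render(self, llm_rewrite):
        """Ask the LLM for a variation; fall back to plain facts if it drops one."""
        self.times_asked += 1
        tone = "patient" if self.times_asked < 3 else "exasperated"
        draft = llm_rewrite(self.facts, tone)
        if all(fact.lower() in draft.lower() for fact in self.facts):
            return draft
        # Guard tripped: ship the canonical line instead of a hallucination.
        return " ".join(self.facts)


def fake_llm(facts, tone):
    """Stand-in for a real model call."""
    prefix = "Look, as I keep saying: " if tone == "exasperated" else "Well, traveler, "
    return prefix + " ".join(facts)


node = DialogNode(["the key is under the old mill"])
print(node.render(fake_llm))   # patient phrasing; the fact is guaranteed present
```

A real version would paraphrase rather than echo the facts verbatim, so the guard would need semantic checking (e.g. an entailment classifier) instead of substring matching.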

BizarroLand

Or if it's important pre-scripted text you could put a different colored border around it or include an option like, "What was that thing about the thing that you said" as a permanent option to allow the player to re-trigger the script if needed.

joshstrange

_Very_ good point. I had not fully considered that, same deal with conversation trees vs free-form entry/response.

acdha

It’s a really good point. One thing that comes to mind is the way some games distinguish between UI-blocking dialog and background color, which could be a great place to start: imagine walking through a city like Baldur’s Gate, except it has actual thousands of people saying different things as you walk by, some of them based on things your party has done recently, with specific details about appearance, gear, and actions that would be too hard to cover with traditional dialog approaches (e.g. kids talking about a battle and who they thought was best, the way real kids talk about sports, or a priest wondering why a paladin was spotted talking to a notorious thief). Something like that could add color and immersion without affecting gameplay or wasting anyone’s time, and you could extend it to things like vendors (“saw you put that axe to good use…” or “were you wearing these boots when you freed those slaves? I bet my brother will want to buy them!”) to flesh out the approach before using it for load-bearing purposes.

killerstorm

There's a technique called "prefix tuning", which is basically a prompt in latent space: a prompt that consists of vectors (either key and value vectors, or just input embedding vectors, like custom tokens).

Unlike regular prompts, you can optimize them exactly the way you'd do fine-tuning: if you have examples, you can tune your latent prompt to match them as closely as possible. The benefit is that you can match the style more closely and it feels less artificial. They can also be rather compact.

Another option is to keep a bunch of LoRA adapters which can be dynamically selected. They can also be very compact.
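A toy illustration of the input-embedding variant of prefix tuning: the base model stays frozen and only a handful of "soft prompt" vectors would receive gradients. The embedding layer and output head below are tiny stand-ins for a real pretrained LM, just to show the mechanics.

```python
import torch
import torch.nn as nn

d_model, n_prefix, vocab = 32, 8, 100

# Frozen stand-ins for a pretrained model's embedding layer and body.
embed = nn.Embedding(vocab, d_model)
body = nn.Linear(d_model, vocab)
for p in list(embed.parameters()) + list(body.parameters()):
    p.requires_grad = False

# The only trainable parameters: n_prefix soft-prompt vectors.
soft_prompt = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)

def forward(token_ids):
    """Prepend the learned prefix to the token embeddings."""
    tok = embed(token_ids)                           # (seq, d_model)
    return body(torch.cat([soft_prompt, tok], dim=0))  # (n_prefix + seq, vocab)

logits = forward(torch.tensor([1, 2, 3]))
print(logits.shape)  # torch.Size([11, 100])
```

Training then just means backpropagating a task loss through `forward` and stepping an optimizer over `[soft_prompt]` alone, which is why the artifact can be so compact (here 8 × 32 floats).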

speps

If you play Fortnite right now (until Friday, June 7th), you can speak in realtime to Darth Vader; he replies in his voice and in character, and he knows who’s playing (the name of the character skin). The technology is here, and it's used in production in major games. It’ll be a big tide sooner than people expect.

vunderba

The idea of using "smaller LLMs" to control the agency of RPG characters has been a pretty common one ever since AI Dungeon back in 2019. The hardest aspect of it would be locking the AI down to a well-defined character dossier so that it is hard to jailbreak them, and also to limit knowledge leakage of things they don't know, etc.

It would also lend itself very well towards interactive fiction, point-and-click adventure games, etc.

JohnBooty

I've been thinking really hard about this for a while, though I don't have any game development experience.

Especially if you pair it with a capability like the voice interface of ChatGPT which I find very impressive in terms of intonation and realism.

It would not need to cut humans out of the loop. You would have humans writing the prompts, recording voices, etc. (I assume the synthetic voices used by ChatGPT are based at some level on recordings of humans. Correct me if I'm wrong.)


qingcharles

Here's a demo of "The Matrix" from 2 years ago:

https://youtu.be/4sCWf2VGdfc?t=356

sjkoelle

Marvelous! What gain beyond zero-shot would motivate a humble citizen to implement this instrument? How was the superiority assessed?

deepsquirrelnet

Good question - my best assessment is just the text classifier, i.e. was the LLM able to “trick” the classifier into believing the text came from the IPJ?

And it came quite a long way in training. Initially the classifier scores were very low (mean around 0.05, meaning modern). Over the course of training, the scores came up and ended close to 0.95 (IPJ). The standard deviation within each group also declined, so the consistency of responses improved as well.

My thought on the application of this is that you could use it to give different voices to your responses, and probably even add multiple at a time to a single model. I chose this one to experiment with because it is easy to classify and the data was available in the public domain.

GRPO kind of opens up RL to lower tiers of hardware, and I’ve been able to experiment with it at home. I think this is something people can do themselves, and it’s fun and potentially useful in games, or possibly in applications for kids with lower reading levels (e.g. using a reading-level classifier instead).

dwringer

Yet, one might justly question the imperative of cultivating a distinct model for such an endeavour, when a judiciously framed prompt, enriched by apposite examples, might suffice to imbue a sophisticated engine with the desired stylistic graces. Though it is undeniable these modern engines shall wax greatly in their proportions, and the art of discovering the exact prompt to elicit their most felicitous expressions is a task far from trivial, yet, it must be admitted, the pursuit holds a certain diversion for the inquisitive mind! It is, perchance, not the creation of manifold engines, but rather the artful disposition of singular contexts, that shall bestow upon diverse interlocutors their proper and unique voices.

npunt

Love it. Immediately reminded of the text filters back in the day like the pirate one that would drop letters and replace with apostrophes and change certain passages into "arr" or "yarr matey"

kamranjon

This is really cool! Do you have any of the pipeline code available that you used for training? I am curious about how you created the reward model. I love little projects like this, thanks for sharing. I've been fine-tuning on my mac and an interested in getting into GRPO, which I haven't tried yet.

deepsquirrelnet

I put my scripts up on GitHub. It’s a bit scraped together at the moment, but fine as a reference.

https://github.com/dleemiller/PennyLM

sterlind

what a wonderful work of whimsy! well wrought.

I'd love to have a library of these, so I could pipe text into `penny`, `brainrot`, `pony`, `newspeak`, `corporate`, `scp`, `trek` etc.

have you published the training notebook somewhere?

KaiserPro

I'm not sure if you've tried this already, but removing the translation step might give you more authentic output. In the journals that I saw, the language was much simpler than the output.

veggieroll

Have you written anywhere in detail on how you gathered your dataset and trained the finetune? I have a few use cases that are like this, but I'm not sure where to start.

deepsquirrelnet

My dataset is here: https://huggingface.co/datasets/dleemiller/irish_penny_journ...

It’s fairly simple — I essentially just split the original text into chunks and then used some bigger models on openrouter to clean it up and provide translations to modern English (seemed to be pretty easy for an LLM).

After that, I just trained a MiniLM2 model to classify the texts. I used this in a reward function for reinforcement learning and changed the system message as a simple instruction to write in the prose of the IPJ.

I debated whether or not to use any SFT, and decided not to. I think if the style would be too hard to learn you might need some seed/cold start SFT data.

I’ll try to get my scripts up in github for you to look at. It’s just a few short training scripts.
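The classifier-as-reward step described above might look roughly like the following. This is a sketch under stated assumptions, not the author's actual code: `fake_classifier` is a crude stand-in for the fine-tuned MiniLM2 model, and the `completions`-in, scores-out function shape follows the reward-function convention used by RL trainers such as TRL's GRPOTrainer.

```python
def make_style_reward(classify, style_label="ipj"):
    """Build a GRPO-style reward function from a text classifier.

    `classify(text)` is assumed to return a {label: probability} dict.
    """
    def reward_fn(completions, **kwargs):
        # One scalar reward per completion: P(target style).
        return [classify(text).get(style_label, 0.0) for text in completions]
    return reward_fn


def fake_classifier(text):
    """Stand-in for the MiniLM2 classifier: a crude keyword heuristic."""
    archaic = sum(word in text.lower() for word in ("thee", "whereupon", "verily"))
    p = min(1.0, 0.2 + 0.3 * archaic)
    return {"ipj": p, "modern": 1.0 - p}


reward = make_style_reward(fake_classifier)
print(reward(["Verily, whereupon the morn did break...", "ok cool thanks"]))
```

Swapping `fake_classifier` for a real `transformers` text-classification pipeline over the fine-tuned MiniLM2 checkpoint would give the setup the thread describes; check the TRL docs for the exact reward-function signature your version expects.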

veggieroll

Thanks for the explanation! I'm learning and think this would be a good next project for me to try, especially since I have a real-world use case in mind with a similar amount of data available.

In particular, I'm not very familiar with reinforcement learning and am not sure how you use the embeddings from MiniLM2 as a reward function. (Edit: maybe this is the jaccard similarity?)

I'd really appreciate it if you were open to posting scripts! I see a few snippets around and could probably cobble something together after a while. But, it's cool to see something already working to make sure I'm not getting too far off into left field.

deepsquirrelnet

You can ignore the jaccard similarity field. That was just to monitor the text->cleaned text conversion to make sure it didn’t stray too far from the original while it was fixing whitespace OCR issues.

I didn’t use embeddings. The nreimers account on Hugging Face has the MiniLM models, which are BERT-like but trained using distillation. https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-f... is the one I started from.

You can then just load that and train it on your data using a standard transformers classification pipeline. ChatGPT can zero-shot that part reasonably well if you give it this description.

From there you should check out the GRPO trainer in TRL. It has taken me a bit of time to learn how to use it effectively. There’s a TON of parameters in the configuration, and occasionally I have to hunt down arxiv papers to understand them.
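For orientation, wiring a classifier-based reward into TRL's GRPO trainer looks roughly like the config fragment below. This is an untested sketch from memory of the TRL API, not the author's setup: the model id, dataset path, and hyperparameter values are placeholders, and names/defaults should be checked against the TRL documentation for your installed version.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumed: a dataset with a "prompt" column and a reward_fn that
# returns one score per completion (e.g. the IPJ classifier probability).
dataset = load_dataset("json", data_files="ipj_prompts.json", split="train")

config = GRPOConfig(
    output_dir="penny-grpo",
    num_generations=8,            # completions sampled per prompt (the "group")
    max_completion_length=256,
    per_device_train_batch_size=8,
    learning_rate=1e-6,
    beta=0.04,                    # KL penalty toward the reference model
)

trainer = GRPOTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # assumption: a 1.7B instruct base
    reward_funcs=reward_fn,       # the classifier reward described in the thread
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

The "TON of parameters" mentioned above mostly live on `GRPOConfig`; `num_generations` and `beta` are the two that most directly shape the group-relative reward signal.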

deepsquirrelnet

I added my scripts to a github repo in case you were interested in seeing how I did it. It’s a bit messy — but fine for a reference. I might try training it again soon with some new ideas, and I’ll polish it up more then.

https://github.com/dleemiller/PennyLM

fitsumbelay

this is awesome

throwaway314155

You mention no supervised finetuning. May I ask why? I'm curious if you could get similar/better/worse results by just finetuning the LLM on your dataset rather than generating synthetic data, training a classifier and using GRPO?

Cool stuff in any case.

deepsquirrelnet

In some other experiments, I’ve noticed that SFT can be very rigid and hard to generalize from, and later found this paper: https://arxiv.org/abs/2501.17161

Also I wanted to start from the instruct model, and wasn’t certain if it would be a good idea to do continued pretraining over top of it. Otherwise I’d need to create an instruct dataset for which the passages from the IPJ would be the answer… that also seemed a bit unnatural.

Perhaps there’s still some viable pathways in there, but I decided to see how the stupid simple thing worked out first and just go directly to RL. I think supervised fine tuning is feasible, but it’s not entirely straightforward what to train on and how much data to use if you want to follow up with RL.

bee_rider

It is sort of funny that the Irish ended up being the best practitioners of the English language, despite the fact that they were forced to use it.

blululu

Not sure this is true. Most of the famous Irish writers were Anglo-Irish Protestants (Yeats, Wilde, Swift, Beckett); Joyce is the notable exception here. The Irish certainly produce great cultural works in the English language (well beyond their size). But the penal laws also greatly depressed the cultural output of the Irish people for 250 years.

projektfu

I feel that the "best practitioners" is not limited to the most famous writers. A great thing about Ireland is the conversations to be had there, and how quick-witted Irish people often are, with clever use of the language. This can be true elsewhere in the English-speaking world, but Ireland has some renown for it.

w10-1

Joseph Conrad also had English as a second language.


_1

Kind of strange to pick an example that is just wrong. It's supposedly written in 1840, yet it says Paris is the seat of Napoleon almost 20 years after he died.

Philpax

It's transferring the style, not the knowledge.
