
Word2vec-style vector arithmetic on docs embeddings

thornton

We’ve done similar work. Use case was identifying pages in an old website that now 404 and where they should be redirected to.

Basically doc2vec and cosine similarity. The matching outputs were totally nonsensical, to the point that matching on title tag vectors or a précis was better, so now I'm curious if we just did something wrong…

jdthedisciple

Intriguing! This inspired me to run the example "calculation" ("king" - "man" + "woman") against several well-known embedding models and order them by L2 distance between the actual output and the embedding for "queen". Result:

    voyage-3-large:             0.54
    voyage-code-3:              0.62
    qwen3-embedding:4b:         0.71
    embeddinggemma:             0.84
    voyage-3.5-lite:            0.94
    text-embedding-3-small:     0.97
    voyage-3.5:                 1.01
    text-embedding-3-large:     1.13
Shocked by the apparently bad performance of OpenAI's SOTA model. Also always had a gut feeling that `voyage-3-large` secretly may be the best embedding model out there. Have I been vindicated? Make of it what you will ...

Also `qwen3-embedding:4b` is my current favorite for local RAG for good reason...
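
For anyone who wants to reproduce this kind of check, here's a minimal sketch for one of the listed models, assuming the OpenAI Python client with an API key in the environment (the Voyage and local models need their own clients, and exact numbers will vary by model version):

    # king - man + woman: how far is the result from "queen"?
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    words = ["king", "man", "woman", "queen"]
    resp = client.embeddings.create(model="text-embedding-3-small", input=words)
    king, man, woman, queen = (np.array(d.embedding) for d in resp.data)

    print("L2 distance to queen:", np.linalg.norm(king - man + woman - queen))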

aDyslecticCrow

There have been quite a few writing tools that are effectively just GPT wrappers with pre-defined prompts: "rephrase this more formally". Personally I find that they modify too much or are difficult to use effectively. Asking for a few different rephrasings and then merging them myself ends up being my workflow.

But ever since learning about word2vec, I've been thinking that there must be a better way: "push" a section a bit along the formal vector, add a pinch of "brief", dial up the "humour" vector. I think it could make for a very controllable and efficient writing tool.
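
As a toy sketch of that idea (the model name, example texts, and the strength alpha below are all made up, and since an embedding can't be decoded back into text you still need candidate rephrasings, e.g. from an LLM, to choose between): derive a "formal" direction from a few paired examples, push the draft's embedding along it, and keep whichever candidate lands closest.

    # Toy sketch: steer a draft toward "formal" in embedding space,
    # then pick the closest candidate rephrasing.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # A "formal" direction from made-up paired examples.
    formal = model.encode(["We regret to inform you of a delay.",
                           "Please find the report attached."])
    casual = model.encode(["Heads up, things are running late.",
                           "Here's the report."])
    direction = formal.mean(axis=0) - casual.mean(axis=0)

    draft = "Hey, the build is broken again, can someone take a look?"
    candidates = [  # e.g. a few LLM-generated rephrasings of the draft
        "The build is failing again; could someone investigate?",
        "Build's busted, someone fix it.",
        "We have observed a recurring build failure and request assistance.",
    ]

    alpha = 0.5  # how hard to push toward "formal"
    target = model.encode([draft])[0] + alpha * direction
    print(max(candidates, key=lambda c: cosine(model.encode([c])[0], target)))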

acmiyaguchi

This does exist to some degree, as far as I understand, along the lines of style-transfer and ControlNet in visual domains. Anthropic has some research called "persona vectors" which effectively push generative behaviors toward or away from particular traits.

[0] https://www.anthropic.com/research/persona-vectors [1] https://arxiv.org/abs/2507.21509

nostrebored

> How do we actually use this in technical writing workflows or documentation experiences? I’m not sure. I was just curious to learn whether or not it would work.

--

There are a few easy applications.

* When surfacing relevant documents, you can keep a list of the previously visited documents and boost in the "direction" the customer is headed (could be an average of the previous N docs, or weighted by frequency). But then you're just building a worse recsys for something where latency probably isn't that critical.

* If you know that for every feature you release you need an API doc, an FAQ, and usage samples for the different workflows or verticals you're targeting, you can represent each of these as f(doc) + f(topic) and search the existing doc set for matches (a sketch of both ideas follows this list). But then you can get much more deterministic workflows just from applying structure.
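
A rough sketch of both ideas, assuming sentence-transformers; the doc set, weights, and threshold below are all placeholders:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    docs = {  # stand-ins for an indexed doc set
        "/api/widgets": "API reference for the Widgets endpoint: parameters, errors.",
        "/guides/widgets-quickstart": "Tutorial: create your first widget step by step.",
        "/faq/billing": "Frequently asked questions about billing and invoices.",
    }
    vecs = dict(zip(docs, model.encode(list(docs.values()))))

    # 1. Boost retrieval in the "direction" the reader is headed:
    #    fold an average of recently visited pages into the query.
    visited = ["/guides/widgets-quickstart"]
    trajectory = np.mean([vecs[u] for u in visited], axis=0)
    query = model.encode(["how do I automate widget creation"])[0] + 0.3 * trajectory
    print(max(vecs, key=lambda u: cosine(vecs[u], query)))

    # 2. Coverage check: compose f(topic) + f(doc type) and see whether
    #    anything in the existing doc set is close enough.
    feature = "widgets bulk import"
    for doc_type in ["API reference", "FAQ", "usage examples"]:
        target = model.encode([feature])[0] + model.encode([doc_type])[0]
        url, score = max(((u, cosine(v, target)) for u, v in vecs.items()),
                         key=lambda x: x[1])
        print(doc_type, "->", url if score > 0.45 else "MISSING", f"({score:.2f})")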

It's nice to have a super flexible tool in the toolbox, but I think a lot of text-based embedding applications (especially on out-of-domain data like long, unchunked technical docs) are just better off being something else if you have the time.

kaycebasques

> If you know that for every feature you release you need an API doc, an FAQ, and usage samples for the different workflows or verticals you're targeting, you can represent each of these as f(doc) + f(topic) and search the existing doc set for matches. But then you can get much more deterministic workflows just from applying structure.

This one sounds promising to me, thanks for the suggestion. We technical writers often build out "docs completeness" spreadsheets where we track how completely each product feature is covered, exactly as you described. E.g. the rows are features, column B is "Reference", column C is "Tutorial" etc. So cell B1 would contain the deeplink to the reference for some particular feature. When we inherit a huge, messy docs set (which is fairly common) it can take a very long time to build out a docs completeness dashboard. I think the embeddings workflow you're suggesting could speed up the initial population of these dashboards a lot.
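
The initial population could look something like this toy sketch, which just turns the f(doc) + f(topic) idea above into a CSV grid; the model, feature list, and 0.45 threshold are all guesses you'd tune against a doc set you already know well:

    # Toy sketch: auto-populate a docs completeness grid as CSV.
    import csv
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    docs = {  # url -> text of the inherited doc set
        "/api/widgets": "API reference for the Widgets endpoint: parameters, errors.",
        "/guides/widgets-quickstart": "Tutorial: create your first widget step by step.",
    }
    vecs = dict(zip(docs, model.encode(list(docs.values()))))

    features = ["widgets", "bulk import"]
    doc_types = ["Reference", "Tutorial", "FAQ"]

    with open("completeness.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Feature"] + doc_types)
        for feature in features:
            row = [feature]
            for doc_type in doc_types:
                target = model.encode([feature])[0] + model.encode([doc_type])[0]
                url, score = max(((u, cosine(v, target)) for u, v in vecs.items()),
                                 key=lambda x: x[1])
                row.append(url if score > 0.45 else "")  # blank cell = likely gap
            writer.writerow(row)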

seg_lol

You can probably do this in a day with a CLI-based LLM like Claude Code. It can write the tools that would allow you to sort, test, and cross-check your doc sets.