
AI's real superpower: consuming, not creating

purplehat_

I often see things like this and get a little bit of FOMO because I'd love to see what I can get out of this but I'm just not willing to upload all these private documents of mine to other people's computers where they're likely to be stored for training or advertising purposes.

How are you guys dealing with this risk? I'm sure nobody on this site is naive to the potential harms of tech, but if you're able to articulate how you've figured out that the risk is worth the benefits to you, I'd love to hear it. I don't think I'm being too cynical to wait either for local LLMs to get good or until I can afford expensive GPUs for current local LLMs, but maybe I should be time-discounting a bit harder?

I'm happy to elaborate on why I find it dangerous, too, if this is too vague. Just really would like to have a more nuanced opinion here.

ben_w

The docs I upload are ones I'd be OK getting leaked. That also includes code. Even more broadly, it also includes whatever pics I put onto social media, including chat groups like Telegram.

This does mean that, useful as e.g. Claude Code is, I don't think I could recommend it over a locally hosted model for any business with NDA-type obligations, even though the machine needed to run a decent local model might cost €10k (with current price increases due to demand exceeding supply), even though that machine is still slower than whatever hosts the hosted models, and even though the rapid rate of improvement means a 3-month delay between SOTA in open weights and SOTA in private weights is enough to matter*.

But until then? If I'm vibe coding a video game I'd give away for free anyway, or copy-editing a blog post that's public anyway, or using it to help with some short stories that I'd never be able to charge money for, or uploading pictures of the plants in my garden right by the public road… that's fine.

* When the music (money for training) stops, the best model could come from just about any provider; whatever it is, it's likely to get distilled down fairly cheaply, and/or some 3-month-old open-weights model is likely to get fine-tuned for each task fairly cheaply. Independently of this, without the hyperscalers, supply chains may shift back from DCs to PCs and make local models much more affordable.

empiko

I don't really buy this post. LLMs are still pretty weak at long contexts, and asking them to find patterns in data usually leads to very superficial results.

kitd

At least half of AI's "superpower" in OP's case is the fact that he has everything in Obsidian already. With all of that background context, any tool becomes super valuable in evaluating & guiding future actions.

Still, all credit to him for creating that asset in the first place.

impendia

I was in a research math lecture the other day, and the speaker used some obscure technical terminology I didn't know. So I dug out my phone and googled it.

The AI summary at the top was surprisingly good! Of course, the AI isn't doing anything original; instead, it created a summary of whatever written material is already out there. Which is exactly what I wanted.

Arisaka1

My counterpoint to this is: if someone cannot verify the validity of a summary, is it truly a summary? And what would the end result be if the vast majority of people opted to adopt or reject a position based on a summary written by a third party?

This isn't strictly a case against AI, just a case that we have a contradiction in the definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being fully unable to defend or counter what we just read.

I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". That gap comes from a lack of assumed background knowledge, which is exactly what makes it hard to verify a summary on your own.

lazide

The issue is even deeper - the 1 thing in 5 minutes was probably already surface knowledge. We don’t usually really ‘know’ the thing that quickly. But we might have a chance.

The 3 things in 5 minutes is even worse - it’s like taking Google Maps everywhere without even thinking about how to get from point A to point B - the odds of knowing anything at all from that are near zero.

And since it summarizes the original content, it’s an even bigger issue - we never even have contact with the thing we’re putatively learning from, so it’s even harder to tell bullshit from reality.

It’s like we never even drove the directions Google Maps was giving us.

We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s

FridayoLeary

I have to agree. People moan that the AI summary is rubbish, but that misses the point. If I need a quick overview of a subject, I don't necessarily need anything more than a low-quality summary. It's easier than wading through a bunch of blogs of unknown quality.

sam_goody

I have a counterpoint from yesterday.

I looked up a medical term that is frequently misused (e.g. "retarded") and asked Gemini to compare it with similar conditions.

Because I have enough of a background in the subject matter, I could tell that it had mixed the many incorrect references in the training data with the much rarer correct ones.

I asked it for sources, and it failed to provide anything useful. And once I'm looking at sources anyway, I'd be MUCH better off searching myself and reading only the sources that might actually be useful.

I was sitting with a medical professional at the time (who is not also a programmer), and he completely swallowed what Gemini was feeding him. He commented that he appreciates how these summaries let him know when he's not up to date with the latest advances, and that he learnt a lot from the response.

As an aside, I am not sure I appreciate that Google's profile would now associate me with that particular condition.

Scary!

solumunus

Try the same with Perplexity?


nnnnico

What is the approach used? Does everything get done in-context via plain-text searches with an agent like Claude Code, or is there RAG involved? (Was the article written by AI? It has that LinkedIn groove all over it.)

mettamage

Sorry, is this new? Providing the right data to LLMs supercharges them. Yes, I agree. I've been doing this since March 2025, when there was a blog post on HN about using MCP. I'm not the only one doing that.

I've written my whole life story (the parts I'm willing to share, that is) and pasted it into Claude. It then helped me much better with all kinds of things. It took me 2 days to write without formatting, pretty much how I write all my HN comments (but then 2 days straight: eat, sleep, write).

I've also exported all my notes, but the export is too big for the context. That's why I wrote my life story.

From a practical standpoint, I think the focus is on context management. Obsidian can help with this (I haven't used it, so I don't know the details). For code, it means doing things like static and dynamic analysis to see which function calls what, building a topology of function calls, and sending that as context; Claude Code can then more easily know what to edit without reading all the code (rough sketch of what I mean below).
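Something like this untested sketch is what I have in mind, using Python's stdlib ast module on a hypothetical src/ directory (my illustration, not a tool I actually run yet):

    # Walk a Python codebase and emit a compact "who calls what" map
    # that can be pasted into a prompt as context.
    import ast
    from pathlib import Path

    def call_topology(root: str) -> dict[str, list[str]]:
        topology: dict[str, list[str]] = {}
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"))
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    # Collect the names of plain function calls inside this def.
                    calls = [
                        n.func.id
                        for n in ast.walk(node)
                        if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
                    ]
                    topology[f"{path.name}:{node.name}"] = sorted(set(calls))
        return topology

    if __name__ == "__main__":
        # "src" is a placeholder path for wherever the code lives.
        for fn, calls in call_topology("src").items():
            print(f"{fn} -> {', '.join(calls) or '(no calls)'}")

The output is a few hundred tokens instead of the whole codebase, which is the point.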

TN1ck

Curious, what did you get out of it? Counseling? Some action plan? A reflection? Seems intriguing to do, but would like to know how it helped you exactly if you don’t mind sharing.

mettamage

Career planning at the moment, and tailoring resumes. Currently it's not tailoring them well enough, because it hallucinates too much, so I need to write a specific prompt for that. But I know from work, where I do similar things (text generation with a human in the loop), that I can tackle that problem.

So yeah, I definitely add to the "AI generated" text pile, but I read over all the texts, and usually they don't get sent out. Ultimately, it's still a lot quicker to do it this way.

For career planning, so far it hasn't beaten my own insights, but it has come close. For example, it mentioned that I should actually be a developer advocate instead of a software engineer. 2 to 3 years ago I came to that same thought. I ultimately rejected the idea due to how I am, but it is a good one to think about.

What I see now is that the best job for me would be tech consultant. Or, as I'd also like to call it: a data analyst who spots problems and then uses his software engineering or teaching skills to solve them. I don't think that job has a good catch-all title, as it is a pretty generalist job. I'm currently at a company that allows me to do this, but the pay is quite low, so I'm looking for a tech company where I could do something similar. Maybe a product manager role? It really depends on the company culture.

What I also noticed it did better: it doesn't reduce me to data engineering anymore. It understands that I aspire to learn everything and anything I can get my hands on. It's my mode of living and Claude understands that.

So nothing too spectacular yet, but it'll come. It requires more prompt/context engineering and fine-tuning of certain things; I haven't gotten around to that yet.

skydhash

The article is more about offloading your thinking to the machine than about any real use of notes. You may as well make every decision with a coin toss.

I take notes for remembrance and relevance (what is interesting to me). But linking concepts is all my thinking. Doing whatever the article is prescribing is like sending someone else on a tourist trip to take pictures and then bragging that you visited the country, while knowing that some of the pictures are photoshopped.

bzmrgonz

I disagree. What AI brings to the table is instant and total recall of our thoughts/notes/experiences. Deep analysis of that vast data store is only possible via AI, which should trigger the "aha!" moment, or the "you're crazy, AI" moment. Either way, it's very useful. And we haven't even talked about the knowledge we have collecting digital dust in the emails, notes, and reports of past employees.

heliumtera

Not really surprising that a tool created for surveillance and mass profiling turned out to be pretty good at surveilling and profiling.

embedding-shape

Is this really your belief, that transformers et al. were invented by researchers with the explicit goal of surveillance and mass profiling? Do you think that could instead be an unintended effect of something (or someone) else? Or is it all the researchers' fault?

sallveburrpi

Does anyone have a simple setup for this with local LLMs like Mistral that they can share?

I would love to try this out but don’t feel comfortable sharing all my personal notes with a third party.
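For concreteness, I imagine something as simple as this untested sketch, assuming Ollama is serving a locally pulled Mistral model on its default port and the notes live in a hypothetical notes/ folder:

    # Feed local markdown notes to a local Mistral model via Ollama's HTTP API.
    # Assumes `ollama serve` is running and `ollama pull mistral` has been done.
    import json
    import urllib.request
    from pathlib import Path

    # "notes" is a placeholder for wherever the markdown notes are kept.
    notes = "\n\n".join(
        p.read_text(encoding="utf-8") for p in Path("notes").glob("*.md")
    )

    payload = json.dumps({
        "model": "mistral",
        "prompt": f"Here are my notes:\n{notes}\n\nWhat themes recur across them?",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Nothing leaves the machine; the question is whether a local model is good enough to be worth it.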

adidoit

Ironically, this article/blog itself gives off an AI-generated smell, as its tone and cadence seem very similar to LinkedIn posts, or rather to the output of prompts to create LinkedIn posts.

neom

A guy I work with has been doing this, I watched his tutorial and it was all a bit... overwhelming for me (to think about using such a system), I'm still on pen and paper, heh. Nevertheless - here is his template: https://github.com/kmikeym/obsidian-claude-starter and tutorial: https://www.youtube.com/watch?v=1U32hZYxfcY

tigranbs

I would say the AI-consumption aspect was a side effect: the primary goal was to "generate" new stuff. So far, the significant boost for me has been in coding. Still, for the rest of the people I think you are right: 90% of the benefit comes from AI being an interactive, conversational search on top of the available information it can read/consume.