The skill of the future is not 'AI', but 'Focus'
70 comments
April 20, 2025 · arkj
schneems
I feel that respecting the focus of others is also an important skill.
If I'm pulled 27 different ways, then when I finally get around to another engineer's question, "I need help" is a demand for my synchronous time and focus. By contrast, "I'm having problems with X, I need to Y, can you help me Z?" could turn into a chat, or it could mean I'm able to deliver the needed information at once and move on. Many people these days don't even bother to write questions. They write statements and expect you to infer the question from the statement.
On the flip side, a thing we could learn more from LLMs is how to give a good response by explaining our reasoning out loud. Not “do X” but instead “It sounds like you want to W, and that’s blocked by Y. That is happening because of Z. To fix it you need to X because it …”
daveguy
> Many people these days don’t even bother to write questions. They write statements and expect you to infer the question from the statement.
This is one of my biggest pet peeves: not even asking for help, just stating a complaint.
MichaelZuo
Well it seems like an easy way to filter them into the ignore pile…
bob1029
> It’s easy to get hooked on fast answers and forget to ask why something works
This is really a tragedy because the current technology is arguably one of the best things in existence for explaining "why?" to someone in a very personalized way. With application of discipline from my side, I can make the LLM lecture me until I genuinely understand the underlying principles of something. I keep hammering it with edge cases and hypotheticals until it comes back with "Exactly! ..." after reiterating my current understanding.
The challenge for educators seems the same as it has always been - How do you make the student want to dig deeper? What does it take to turn someone into a strong skeptic regarding tools or technology?
I'd propose the use of hallucinations as an educational tool. Put together a really nasty scenario (i.e., deliberately provoke a hallucination that slips under the students' radar). Let them run with a misapprehension of the world for several weeks. Give them a test or lab assignment regarding this misapprehension. Fail 100% of the class on this assignment and have a special lecture afterward. Anyone who doesn't "get it" after this point should probably be filtered out anyway.
the_snooze
In a way, I think it shows why "superfluous" things like sports and art are so important in school. In those activities, there are no quick answers. You need to persist through the initial learning curve and slow physical adaptation just to get baseline competency. You're not going to get a violin to stop sounding like a dying cat unless you accept that it's a gradual focused process.
Telemakhos
Sports and art aren't superfluous: they teach gross and fine (respectively) motor skills. School isn't just about developing cognitive skills or brainwashing students into political orthodoxies: it's also about teaching students how to control their bodies in general and specific muscle groups, like the hands, in particular. Art is one way of training the hands; music is another (manipulating anything from a triangle to a violin), as is handwriting. Students may well not get enough of that dexterity training at home, particularly in the age of tablets [0].
AllegedAlec
With a bit more focus you might not have missed OP's point
otabdeveloper4
> You're not going to get a violin to stop sounding like a dying cat unless you accept that it's a gradual focused process.
You can sample that shit and make some loops in your DAW. Or just use a generative AI nowadays.
rf15
There are many ways to be a skillless hack, but why celebrate it?
tarboreus
You can also just sit in the corner and never make anything. So what?
brightball
This is my constant concern these days and it makes me wonder if grading needs to change in order to alleviate some of the pressure to get the right answer so that students can focus on how.
nonrandomstring
> Losing focus as a skill is something I see with every batch of new students.
Gaining focus as a skill is something to work on with every batch of new students.
We're on the same page. I'm turning that around to say: let's remember focus isn't something we're naturally born with, it has to be built. Worked on hard. People coming to that task are increasingly damaged/injured imho.
alganet
Using aimbot in Gunbound didn't make players better. Yes, it changed everything: it destroyed the game ecosystem.
Can humanity use "literacy aimbot" responsibly? I don't know.
It's just a cautionary tale. I'm not expecting to win an argument. I could come up with counter anecdotes myself:
ABS made braking in slippery conditions easier and safer. People didn't learn to brake better; they still pushed the pedal harder thinking it would make the car stop faster, not realizing the complex dynamics of "making a car stop". That changed everything. It made cars safer.
Also, just an anecdote.
Sure, a lot of people need focus. Some people don't, they need to branch out. Some systems need aimbot (like ABS), some don't (like Gunbound).
The future should be home to all kinds of skills.
cadamsdotcom
Appreciate the balanced take. An improvement to a technology can be good, or it can be harmful - and beyond a certain point, further amplification can destroy society and the commons which we all benefit from.
Coca leaves are a relatively benign part of daily life in Peru & a few other surrounding countries - they’re on the table in your hotel lobby, ready to be stirred into your tea. But cocaine - same base but made more intense with technology - causes many problems - and don’t even start about crack cocaine.
So when thinking of technology through the lens of what it amplifies, we can contrast traditional Internet research & writing with using AI - the latter gives instant answers and often an instant first draft.
Great for some; harmful for others. Where that point lies and what factors contribute is different for every individual.
Ozzie_osman
> Search engines offer a good choice between Exploration (crawl through the list and pages of results) and Exploitation (click on the top result). LLMs, however, do not give this choice.
I've actually found that LLMs are great at exploration for me. I'd argue, even better than exploitation. I've solved many a complex problem by using an LLM as a thought partner. I've refined many ideas by getting the LLM to brainstorm with me. There's this awesome feedback loop you can create with the LLM when you're in exploration mode that is impossible to replicate on your own, and still somewhat difficult even with a human thought partner.
tombert
I'm kind of in the same boat.
I've started doing something that I have been meaning to do for years, which is to go through all the seminal papers on concurrency and make a minimal implementation of them. I did Raft recently, then Lamport timestamps, then a lot of the common Mutex algorithms, then Paxos, and now I'm working on Ambient Calculus.
I've attempted this before, but I would always get stuck on some detail that I didn't fully grasp in the paper and would abandon the project. Using ChatGPT, I've been able to unblock myself much easier. I will ask it to clarify stuff in the paper, and sometimes it doesn't even matter if it's "wrong", so much as it's giving me some form of feedback and helps me think of other ideas on how to fix things.
Doing this, I manage to actually finish these projects, and I think I more or less understand them, and I certainly understand them more than I would have had I abandoned them a quarter of the way through like I usually do.
boleary-gl
I was a skeptic until I started seeing it this way. I do think that this is exactly why we’ve seen LLMs overtake search engines so quickly in the last 12-18 months. They allow a feedback loop that just doesn’t exist scrolling and clicking.
knallfrosch
When I use LLMs, I quickly lose focus.
Copy-paste, copy-paste. No real understanding of the solutions, even in areas of my expertise. I just don't feel like understanding the flood of information, without any real purpose behind the understanding. While I probably (?) get more done, I also just don't enjoy it. But I also can't go back to googling for hours now that this ready-made solution exists.
I wish it would have never been invented.
(Obviously scoped to my enjoyment of hobbyist projects, let's keep AI cancer research out of the picture..)
dimal
I’ve gotten into this mode too, but often when I do this, I eventually find myself in a rabbit-hole dead end that the AI unwittingly led me into. So I’m slowing down and using them to understand the code better. Unfortunately, all the tools are optimized for vibe coding, getting the quick answer without understanding, so it feels like I’m fighting the tools.
spacemadness
I recommend using them to ask questions about why something works rather than spit out code. They excel at that a lot of the time.
vjvjvjvjghv
Being allowed to focus seems to be a privilege these days.
When I started in the 90s I could work on something for weeks without much interruption. These days there is almost always some scrum master, project manager or random other manager who wants to get an update or do some planning. Doing actual work seems to have taken a backseat to talking about work.
schneems
The flip side of focus (to me) is responsiveness. A post to SO might deliver me the exact answer I need, but it will take focus to write the correct question and patience to wait for a response and then time spent iterating in the comments. In contrast an LLM will happily tell me the wrong thing, instantaneously. It’s responsive.
Good engineers must also be responsive to their teammates, managers, customers, and the business. Great engineers also find a way to weave in periods of focus.
I’m curious how others navigate these?
It seems there was a large culture shift when Covid hit and non-async non-remote people all moved online and expected online to work like in person. I feel pushed to be more responsive at the cost of focus. On the flip side, I’ve given time and space to engineers so they could focus only to come back and find they had abused that time and trust. Or some well meaning engineers got lost in the weeds and lost the narrative of *why* they were focusing. It is super easy to measure responsiveness: how long did it take to respond. It’s much harder to measure quality and growth. Especially when being vulnerable about what you don’t know or the failure to make progress is a truly senior level skill.
How do we find balance?
mrj
Notification blindness.
I've been struggling with finding balance for years as a front-line manager who codes. I need to be responsive-ish to incoming queries but also have my own tasks. If I am too responsive, it's easy for my work to become my evening time and my working hours for everybody else.
The "weaving" in of periods of focus is maintained by ignoring notifications and checking them in batches. Nobody gets to interrupt me when I'm in focus mode (much chagrin for my wife) and I can actually get stuff done. This happened a lot by accident, I get enough notifications for long enough that I don't really hear or notice them just like I don't hear or notice the trains that pass near my house.
MrDarcy
This also worked for me. I flipped to permanent DND mode with clear communication I check notifications at specific times of day.
There are very few notifications that can’t wait a few hours for my attention and those that cannot have the expectation of being a phone call.
jncfhnb
This is why I honestly like discord over forums
layer8
When you’re on the asking side, sure, instant gratification is great. On the answering side, not so much. Chat interfaces are not a good fit for anything you may have to mull over for a while, or do some investigation before answering, and for anything where multiple such threads may occur in parallel, or that you want to reference later.
jncfhnb
I don’t agree
The thing is that most people seeking help are not able to form their question effectively. They can’t even identify the key elements of their problem.
They _need_ help from people willing to parse out their actual problem. Stack Overflow actively tells you to fuck off if you can’t form your question to their standards, and unsurprisingly that’s not very helpful to people who are struggling.
You will need to repeat walking people through the same problems over and over. But… that’s what helping people is like. That’s how we teach people in schools. We don’t just point them to textbooks. Active discords tend to have people that are willing to do this.
billmalarky
I built a distributed software engineering firm pre-covid, so all of our clients were onsite even though we were full-remote. My engineers plugged into the engineering teams of our clients, so it's not like we were building on the side and just handing over deliverables, we had to fully integrate into the client teams.
So we had to solve this problem pre-covid, and the solution remained the same during the pandemic when every org went full remote (at least temporarily).
There is no "one size fits all approach" because each engineer is different. We had dozens of engineers on our team, and you learn that people are very diverse in how they think/operate.
But we came up with a framework that was really successful.
1) Good faith is required: you mention personnel abusing time/trust, that's a different issue entirely, no framework will be successful if people refuse to comply. This system only works if teammates trust the person. Terminate someone who can't be trusted.
2) "Know thyself": Many engineers wouldn't necessarily even know how THEY operated best (if they needed large chunks of focus time, or were fine multi-tasking, etc). We'd have them make a best guess when onboarding and then iterate and update as they figured out how they worked best.
3) Proactively Propagate Communication Standard: Most engineers would want large chunks of uninterrupted focus time, so we would tell them to EXPLICITLY tell their teammates or any other stakeholders WHEN they would be focusing and unresponsive (standardize it via schedule), and WHY (ie sell the idea). Bad feelings or optics are ALWAYS simply a matter of miscommunication so long as good faith exists. We'd also have them explain "escalation patterns", ie "if something is truly urgent, DM me on slack a few times and finally, call my phone."
4) Set comms status: Really this is just slack/teams. but basically as a soft reminder to stakeholders, set your slack status to "heads down building" or something so people remember that you aren't available due to focus time. It's really easy to sync slack status to calendar blocks to automate this.
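The status-sync idea in point 4 can be sketched in a few lines. This is a minimal sketch, assuming the official `slack_sdk` client; the payload field names follow Slack's real `users.profile.set` API, but the calendar integration and token handling are left out as an exercise:

```python
import json
from datetime import datetime, timedelta

def focus_status_payload(until: datetime) -> dict:
    """Build the Slack profile payload for a focus block.

    Field names (status_text, status_emoji, status_expiration)
    match Slack's users.profile.set API; status_expiration is a
    Unix timestamp so the status auto-clears when the block ends.
    """
    return {
        "status_text": "heads down building",
        "status_emoji": ":no_bell:",
        "status_expiration": int(until.timestamp()),
    }

# Example: mark the next two hours as a focus block.
block_end = datetime.now() + timedelta(hours=2)
payload = focus_status_payload(block_end)
print(json.dumps(payload))

# With a real token this would be sent via:
#   from slack_sdk import WebClient
#   WebClient(token=...).users_profile_set(profile=json.dumps(payload))
```

A small cron job or calendar webhook that calls this at the start of each blocked-out period gets you the "automated soft reminder" without anyone having to remember to flip their status by hand.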
We also found that breaking the day into async task time and sync task time really helped optimize. Async tasks are tasks that can get completed in small chunks of time like code review, checking email, slack, etc. These might be large time sinks in aggregate, but generally you can break into small time blocks and still be successful. We would have people set up their day so all the async tasks would be done when they are already paying a context switching cost. IE, scheduled agile cadence meetings etc. If you're doing a standup meeting, you're already gonna be knocked out of flow so might as well use this time to also do PR review, async comms, etc. Naturally we had people stack their meetings when possible instead of pepper throughout the day (more on how this was accomplished below).
Anyway, sometimes when an engineer of ours joined a new team, there might be a political challenge in not fitting into the existing "mold" of how that team communicated (if that team's comms standard didn't jibe with our engineer's). This quickly resolved every single time once our engineer proved out to be much more productive and effective than the existing engineers (who were kneecapped by the terrible, distracting existing standard of meetings, constant slack interruptions, etc.). We would even go as far as to tell stakeholders our engineers would not be attending less important meetings (not immediately, once we had already proven ourselves a bit). The optics around this weren't great at first, but again, our engineers would start 1.5-2X'ing the productivity of the in-house engineers, and political issues melt away very quickly.
TL;DR - Operate in good faith, decide your own best communication standard, propagate the standard out to your stakeholders explicitly, deliver and people will respect you and also your comms standard.
PaulRobinson
It's going to be a different kind of focus.
Technologies are regularly predicted to diminish a capability that was previously considered important.
Babbage came up with the ideas for his engines after getting frustrated with log tables - how many people reading this have used a log table or calculated one recently?
Calculators meant kids wouldn't need to do arithmetic by hand any more and so would not be able to do maths. In truth they just didn't have to do it by hand any more - they still needed the skills to interpret the results, they just didn't have to do the hard work of creating the outputs by pen and paper.
They also lost the skill of using slide rules which were used to give us approximations, because calculators allowed us to be precise - they were no longer needed.
Computers, similar story.
Then the same came with search engines in our pockets. "Oh no, people can find an answer to anything in seconds, they won't remember things." This is borne out: studies have shown recall diminishes if your phone is even in the same room. But you still need to know what to look for, and know what to do with what you find.
I think this'll still be true in the future, and I think TFA kind of agrees, but seems to be doing the "all may be lost" vibe by insisting that you still need foundational skills. You don't need the foundational skills if you want to know what 24923 * 923 is; you can quickly find the answer and use it however you need.
I just think the work shifts - you'll still need to know how to craft your inputs carefully (vibe coding works better if you develop a more detailed specification), and you'll still need to process the output, but you'll become less connected to the foundation and for 99% of the time, that's absolutely fine in the same way it has been with calculators, and so on.
kazinator
I frequent Hacker News and have not noticed that much buzz around AI and LLMs. The vast bulk of that buzz is insubstantial and therefore off topic here. Sites like LinkedIn on the other hand are overrun with the swill.
obscurette
I'm old enough to remember the myriad experts 10+ years ago who were actively selling the view that smartphones with constantly connected social media would change everything; we just had to learn to use it wisely.
kennyadam
They weren't wrong. Unfortunately, we didn't use it wisely and obliterated objective reality and allowed people to create spaces where they never have to engage with anything challenging.
rglover
And AI will be no different. People will rush head first into the fire and be flabbergasted when all of the hype and promise of utopia results in utter chaos and inequality.
If we assume that civilization is already teetering thanks to the smartphone/social media, the fallout of AI would make Thomas Cole blush.
djsavvy
> This idea summarizes why I disagree with those who equate the LLM revolution to the rise of search engines, like Google in the 90s. Search engines offer a good choice between Exploration (crawl through the list and pages of results) and Exploitation (click on the top result).
> LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.
Well said, and an interesting idea, but most of my LLM usage (besides copilot autocomplete) is actually very search-engine-esque. I ask it to explain existing design decisions, or to search for a library that fits my needs, or come up with related queries so I can learn more.
Once I've chosen a library or an approach for the task, I'll have the LLM write out some code. For anything significantly more substantive code than copilot completions, I almost always do some exploring before I exploit.
trollbridge
I’m finding the same in terms of what I actually use LLMs for day to day. When I need to look up arcane information, an LLM generally does better than a Google search.
bluefirebrand
How do you verify the accuracy of "arcane information" produced by an LLM?
"Arcane Information" is absolutely the worst possible use case I can imagine for LLMs right now. You might as well ask an intern to just make something up
thih9
> LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.
You can ask the llm to generate a number of solutions though - the exploration is possible and relatively easy then.
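This "ask for several solutions" trick is easy to bake into a prompt. A minimal sketch (the wording is illustrative, not any model's required format): wrap the question so the model is explicitly asked to enumerate alternatives and trade-offs rather than commit to one answer:

```python
def exploration_prompt(problem: str, n: int = 3) -> str:
    """Turn a single question into an explicit exploration request.

    Asking for n distinct approaches plus trade-offs nudges the model
    away from committing to the first plausible answer.
    """
    return (
        f"Give me {n} distinct approaches to the following problem, "
        "with the trade-offs of each. Do not recommend one until "
        "you've described all of them:\n" + problem
    )

print(exploration_prompt("cache invalidation across two services", n=4))
```

Pasting the result into any chat interface (or an API call) turns the default exploit-first response into a small exploration step before you pick a direction.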
And I say that as someone who dislikes llms with a passion.
bikedspiritlake
Framing LLMs as encouraging exploitation is important, because they can still be powerful tools for exploration. The difference comes from the interface: LLM interfaces are heavily focused on exploitation, whereas search engine interfaces encourage exploration.
Newer models often end responses with questions and thoughts that encourage exploration, as do features like ChatGPT's follow up suggestions. However, a lot of work needs to be done with LLM interfaces to balance exploitation and exploration while avoiding limiting AI's capabilities.
Losing focus as a skill is something I see with every batch of new students. It’s not just LLMs, almost every app and startup is competing for the same limited attention from every user.
What LLMs have done for most of my students is remove all the barriers to an answer they once had to work for. It’s easy to get hooked on fast answers and forget to ask why something works. That said, I think LLMs can support exploration—often beyond what Googling ever did—if we approach them the right way.
I’ve seen moments where students pushed back on a first answer and uncovered deeper insights, but only because they chose to dig. The real danger isn’t the tool, it’s forgetting how to use it thoughtfully.