AI tooling must be disclosed for contributions
113 comments · August 21, 2025 · thallavajhula
tick_tock_tick
> AI is only as smart as the human handling it.
I think I'm slowly coming around to this viewpoint too. I really just couldn't understand how so many people were having wildly different experiences. AI isn't magic; how could I have expected all the people I've worked with who struggle to explain stuff even to team members with near-perfect context to manage to get anything valuable across to an AI?
I was originally pretty optimistic that AI would allow most engineers to operate at a higher level, but it really seems like instead it's going to massively exacerbate the difference between an OK engineer and a great engineer. Not really sure how I feel about that yet, but at least I understand now why some people think the stuff is useless.
btown
One of my mental models is that the notion of "effective engineer" used to mean "effective software developer" whether or not they were good at system design.
Now, an "effective engineer" can be a less battle-tested software developer, but they must be good at system design.
(And by system design, I don't just mean architecture diagrams: it's a personal culture of constantly questioning and innovating around "let's think critically to see what might go wrong when all these assumptions collide, and if one of them ends up being incorrect." Because AI will only suggest those things for cut-and-dry situations where a bug is apparent from a few files' context, and no ambitious idea is fully that cut-and-dry.)
The set of effective engineers is thus shifting - and it's not at all a valid assumption that every formerly good developer will see their productivity skyrocket.
btucker
I've been starting to think of it like this:
Great Engineer + AI = Great Engineer++ (Where a great engineer isn't just someone who is a great coder, they also are a great communicator & collaborator, and love to learn)
Good Engineer + AI = Good Engineer
OK Engineer + AI = Mediocre Engineer
QuercusMax
I recently watched a mid-level engineer use AI to summarize some of our code, and he had it put together a big document describing all the various methods in a file, what they're used for, and so forth. It looked to me like a huge waste of time, as the code itself was already very readable (I say this as someone who recently joined the project), and the "documentation" the AI spit out wasn't that different from what you'd get just by running pydoc.
He took a couple days doing this, which was shocking to me. Such a waste of time that would have been better spent reading the code and improving any missing documentation - and most importantly asking teammates about necessary context that couldn't just be inferred from the code.
biophysboy
I sort of think of it in terms of self-deskilling.
If an OK engineer is still actively trying to learn, making mistakes, memorizing essentials, etc. then there is no issue.
On the other hand, if they're surrendering 100% of their judgment to AI, then they will be mediocre.
aydyn
Is there a difference between "OK" and "Mediocre"?
geodel
Not Engineer + AI = Now an Engineer
That's the reason for the high valuation of AI companies.
jgilias
This fits my observations as well. With the exception that it’s sometimes the really sharp engineers who can do wonders themselves who aren’t really great at communication. AI really needs you to be verbose, and a lot of people just can’t.
katbyte
It’s like the difference between someone who can search the internet or a codebase well vs someone who can’t
Using search engines is a skill
jerf
I've been struggling to apply AI on any large scale at work. I was beginning to wonder if it was me.
But then my wife sort of handed me a project that previously I would have just said no to, a particular Android app for the family. I have instances of all the various Android technologies under my belt, that is, I've used GUI toolkits, I've used general purpose programming languages, I've used databases, etc., but with the possible exception of SQLite (and even that is accessed through an ORM), I don't know any of the specific technologies involved with Android now. I have never used Kotlin; I've got enough experience that I can pretty much piece it together when I'm reading it but I can't write it. Never used the Android UI toolkit, services, permissions, media APIs, ORMs, build system, etc.
I know from many previous experiences that A: I could definitely learn how to do this but B: it would be a many-week project and in the end I wouldn't really be able to leverage any of the Android knowledge I would get for much else.
So I figured this was a good chance to take this stuff for a spin in a really hard way.
I'm about eight hours in and nearly done enough for the family; I need about another 2 hours to hit that mark, maybe 4 to really polish it. Probably another 8-12 hours and I'd have it brushed up to a rough commercial product level for a simple, single-purpose app. It's really impressive.
And I'm now convinced it's not just that I'm too old a fogey to pick it up, which is, you know, a bit of a relief.
It's just that it works really well in some domains, and not so much in others. My current work project is working through decades of organically-grown cruft owned by 5 different teams, most of which don't even have a person on them that understands the cruft in question, and trying to pull it all together into one system where it belongs. I've been able to use AI here and there for some stuff that is still pretty impressive, like translating some stuff into pseudocode for my reference, and AI-powered autocomplete is definitely impressive when it correctly guesses the next 10 lines I was going to type effectively letter-for-letter. But I haven't gotten that large-scale win where I just type a tiny prompt in and see the outsized results from it.
I think that's because I'm working in a domain where the code I'm writing is already roughly the size of the prompt I'd have to give, at least in terms of the "payload" of the work I'm trying to do, because of the level of detail and maturity of the code base. There's no single sentence I can type that an AI can essentially decompress into 250 lines of code, pulling in the correct 4 new libraries, and adding it all to the build system the way that Gemini in Android Studio could decompress "I would like to store user settings with a UI to set the user's name, and then display it on the home page".
I recommend this approach to anyone who wants to give it a fair shake - try it in a language and environment you know nothing about and so aren't tempted to keep taking the wheel. The AI is almost the only tool I have in that environment, certainly the only one for writing code, so I'm forced to really exercise the AI.
thewebguyd
> try it in a language and environment you know nothing about and so aren't tempted to keep taking the wheel.
That's a good insight. It's almost like, to use AI tools effectively, one needs to stop caring about the little things you'd get caught up in if you were already familiar and proficient in a stack: style guidelines, a certain idiomatic way to do things, naming conventions, etc.
A lot like how I've stopped organizing digital files into folders, subfolders, etc. (along with other content) and now just rely on search. Everything is a flat structure; I don't care where it's stored or how it's organized as long as I can just search for it. That's what the computer is for: to keep track for me so I don't have to waste time organizing it myself.
Likewise for the code generative AI produces. I don't need to care about the code itself. As long as it's correct, not insecure, and performant, it's fine.
It's not 100% there yet, I still do have to go in and touch the code, but ideally I shouldn't have to, nor should I have to care what the actual code looks like, just the result of it. Let the computer manage that, not me. My role should be the system design and specification, not writing the code.
smokel
What you are describing also seems to align with the idea that greenfield projects are well-suited for AI, whereas brownfield projects are considerably more challenging.
danenania
The way I've been thinking about it is that the human makes the key decisions and then the AI connects the dots.
What's a key decision and what's a dot to connect varies by app and by domain, but the upside is that generally most code by volume is dot connecting (and in some cases it's like 80-90% of the code), so if you draw the lines correctly, huge productivity boosts can be found with little downside. But if you draw the lines wrong, such that AI is making key decisions, you will have a bad time. In that case, you are usually better off deleting everything it produced and starting again rather than spending time to understand and fix its mistakes.
Things that are typically key decisions:
- database table layout and indexes
- core types
- important dependencies (don't let the AI choose dependencies unless it's low consequence)
- system design—caches, queues, etc.
- infrastructure design—VPC layout, networking permissions, secrets management
- what all the UI screens are and what they contain, user flows, etc.
- color scheme, typography, visual hierarchy
- what to test and not to test (AI will overdo it with unnecessary tests and test complexity if you let it)
- code organization: directory layout, component boundaries, when to DRY
Things that are typically dot connecting:
- database access methods for crud
- API handlers
- client-side code to make API requests
- helpers that restructure data, translate between types, etc.
- deploy scripts/CI and CD
- dev environment setup
- test harness
- test implementation (vs. deciding what to test)
- UI component implementation (once client-side types and data model are in place)
- styling code
- one-off scripts for data cleanup, analytics, etc.
That's not exhaustive on either side, but you get the idea.
AI can be helpful for making the key decisions too, in terms of research, ideation, exploring alternatives, poking holes, etc., but imo the human needs to make the final choices and write the code that corresponds to these decisions either manually or with very close supervision.
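To make the dot-connecting side concrete, here is a minimal sketch in Python, assuming a hypothetical users table whose schema (a key decision) a human has already fixed:

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

    def get_user(conn: sqlite3.Connection, user_id: int) -> User | None:
        # Pure dot connecting: translate the human-decided schema into access code.
        row = conn.execute(
            "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(*row) if row else None

    def create_user(conn: sqlite3.Connection, name: str, email: str) -> int:
        # Assumes a `users` table with (id, name, email) columns already exists.
        cur = conn.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
        )
        conn.commit()
        return cur.lastrowid

The table layout and the User type are the human's decisions; everything else here is glue an AI can usually fill in reliably.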
devmor
I'm right there with you, and having a similar experience at my day job. We are doing a bit of a "hack week" right now where we allow everyone in the org to experiment in groups with AI tools, especially those that don't regularly use them as part of their work - and we've seen mostly great applications of analytical approaches, guardrails and grounded generation.
It might just be my point of view, but I feel like there's been a sudden paradigm shift back to solid ML from the deluge of chatbot hype nonsense.
Waterluvian
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.
But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”
cvoss
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust enters the equation inevitably. You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.
If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.
KritVutGu
This is it exactly.
Slop generators being available to everyone makes everyone less trustworthy, from a maintainer's POV. Thus, the circle of trust, for any given maintainer, shrinks starkly.
People do not become maintainers because they want to battle malicious, or even criminally negligent crap. They expect benign and knowledgeable contributors, or at least benign and willing to do their homework ones.
Being a maintainer is already hugely thankless. It's hard work (harder than writing code), and it comes with a lot less recognition. Not to mention all the newcomers that (a) maintainers usually eagerly educate, but then (b) disappear.
Screw up the social contract for maintainers even more, and they'll go extinct. (Edit: if a maintainer gets a whiff of some contributor working against them, rather than with them, they'll either ban the contributor forever, or just quit the project.)
Any sane project should categorically ban AI-assisted contributions, and extend their Signed-off-by definition, after a cut-off date, to carry an explicit statement by the contributor that the code is free of AI output. If this rules out "agentic IDE"s, that's a win.
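Concretely, that could be as simple as an extra commit trailer next to the usual DCO sign-off; the trailer name and wording below are purely hypothetical:

    Signed-off-by: Jane Contributor <jane@example.com>
    No-AI: I attest that this patch contains no AI-generated or AI-assisted output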
ToucanLoucan
The sheer amount of entitlement on display by very pro-AI people genuinely boggles the mind.
koolba
> You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.
No you don’t. You can’t outsource trust determinations. Especially to the people you claim not to trust!
You make the judgement call by looking at the code and your known history of the contributor.
Nobody cares if contributors use an LLM or a magnetic needle to generate code. They care if bad code gets introduced or bad patches waste reviewers’ time.
falcor84
Trust is absolutely a thing. Maintaining an open source project is an unreasonably demanding and thankless job, and it would be even more so if you had to treat every single PR as if it's a high likelihood supply-chain attack.
geraneum
> Nobody cares if contributors use an LLM or a magnetic needle to generate code.
That’s exactly the opposite of what the author is saying. He mentions that [if the code is not good, or you are a beginner] he will help you get to the finish line, but if it's LLM code, he shouldn't have to put in that effort because there's no human on the other side.
It makes sense to me.
dsjoerg
You haven't addressed the primary stated rationale from the linked content: "I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so."
nosignono
> I wouldn’t really care how someone got to the end result that is a PR.
I can generate 1,000 PRs today against an open source project using AI. I think you do care, you are only thinking about the happy path where someone uses a little AI to draft a well constructed PR.
There are a lot of ways AI can be used to quickly overwhelm a project maintainer.
oceanplexian
> I can generate 1,000 PRs today against an open source project using AI.
Then perhaps the way you contribute, review, and accept code is fundamentally wrong and needs to change with the times.
It may be that technologies like Github PRs and other VCS patterns are literally obsolete. We've done this before throughout many cycles of technology, and these are the questions we need to ask ourselves as engineers, not stick our heads in the sand and pretend it's 2019.
whatevertrevor
I don't think throwing out the concept of code reviews and version control is the correct response to a purported rise in low-effort high-volume patches. If anything it's even more required.
kelvinjps10
Why is it incorrect? And what would be the new way? AI to review the changes of AI?
Waterluvian
In that case, a more correct rule for that issue (and probably one that can be automatically enforced) is a cap on the number of open PRs or issues per account.
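A rough sketch of what automatic enforcement could look like, in Python against the public GitHub REST API (the cap and the helper name are made up for illustration):

    import requests  # third-party HTTP client, assumed available

    MAX_OPEN_PRS_PER_AUTHOR = 3  # hypothetical cap, not an actual project policy

    def author_hit_cap(owner: str, repo: str, author: str, token: str) -> bool:
        # List open PRs on the repo and count how many belong to this author.
        # Only the first page of 100 is checked; real enforcement would paginate.
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "open", "per_page": 100},
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()
        open_by_author = [pr for pr in resp.json() if pr["user"]["login"] == author]
        return len(open_by_author) >= MAX_OPEN_PRS_PER_AUTHOR

Real enforcement would also need a bot account or CI hook wired into the repo to label or close PRs past the cap.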
raincole
When one side has much more "scalability" than the other, then the other side has very strong motivation to match up.
- People use AI to write cover letters. If the companies don't filter them out automatically, they're screwed.
- Companies use AI to interview candidates. No one wants to spend their personal time talking to a robot. So the candidates start using AI to take interviews for them.
etc.
If you don't at least tell yourself that you don't allow AI PRs (even just as a white lie) you'll one day use AI to review PRs.
oceanplexian
Both sides will use AI and it will ultimately increase economic productivity.
Imagine living before the invention of the printing press, and then lamenting that we should ban them because it makes it "too easy" to distribute information and will enable "low quality" publications to have more reach. Actually, this exact thing happened, but the end result was it massively disrupted the world and economy in extremely positive ways.
bootsmann
> Both sides will use AI and it will ultimately increase economic productivity.
Citation needed, I don’t think the printing press and gpt are in any way comparable.
ionelaipatioaei
> Both sides will use AI and it will ultimately increase economic productivity.
In some cases, sure, but it can also create a situation where people just waste time for nothing (think AI interviewing other AIs - this might generate GDP through people purchasing those services, but I think we can all agree that this scenario just wastes time and resources without improving society).
renrutal
I won't put it as "just another tool". AI introduces a new kind of tool where the ownership of the resulting code is not straightforward.
If, in some dystopian future, a court you're subject to decides that Claude was trained on Oracle's code and that all Claude users are possibly in breach of copyright, it's easier to nuke from orbit all disclosed AI contributions.
alfalfasprout
The reality is, as someone who helps maintain several OSS projects, that you vastly underestimate the problem AI-assisted tooling has created.
On the one hand, it's lowered the barrier to entry for certain types of contributions. But on the other hand, getting a vibe-coded 1k LOC diff from someone who has absolutely no idea how the project even works is a serious problem, because the iteration cycle of getting feedback + correctly implementing it is far worse in this case.
Also, the types of errors introduced tend to be quite different between humans and AI tools.
It's a small ask but a useful one to disclose how AI was used.
wahnfrieden
You should care. If someone submits a huge PR, you’re going to waste time asking questions and comprehending their intentions if the answer is that they don’t know either. If you know it’s generated and they haven’t reviewed it themselves, you can decide to shove it back into an LLM for next steps rather than expect the contributor to be able to do anything with your review feedback.
Unreviewed generated PRs can still be helpful starting points for further LLM work if they achieve desired results. But close reading with consideration of authorial intent, giving detailed comments, and asking questions of someone who didn't write or read the code is a waste of your time.
That's why we need to know if a contribution was generated or not.
KritVutGu
You are absolutely right. AI is just a tool to DDoS maintainers.
Any contributor who was shown to post provably untested patches used to lose credibility. And now we're talking about accommodating people who don't even understand how the patch is supposed to work?
quotemstr
As a project maintainer, you shouldn't make unenforceable rules that you and everyone else know people will flout. Doing so makes you seem impotent and diminishes the respect people have for rules in general.
You might argue that by making rules, even futile ones, you at least establish expectations and take a moral stance. Well, you can make a statement without dressing it up as a rule. But you don't get to be sanctimonious that way I guess.
natrius
Unenforceable rules are bad, but if you tweak the rule to always require some sort of authorship statement (e.g. "I wrote this by hand" or "I wrote this with Claude"), then the honor system will mostly achieve the desired goal of calibrating code review effort.
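For instance, the PR template could force a statement with a line like this (hypothetical wording):

    AI assistance: [ ] none   [ ] assisted (tools and extent described below)   [ ] mostly generated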
voxl
Except you can enforce this rule some of the time. People discover that AI was used or suspect it all the time, and people admit to it after some pressure all the time.
Not every time, but sometimes. The threat of being caught isn't meaningless. You can decide not to play in someone else's walled garden if you want but the least you can do is respect their rules, bare minimum of human decency.
quotemstr
It. doesn't. matter.
The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?
Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?
No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.
KritVutGu
> As a project maintainer, you shouldn't make unenforceable rules
Total bullshit. It's totally fine to declare intent.
You are already incapable of verifying / enforcing that a contributor is legally permitted to submit a piece of code as their own creation (Signed-off-by), and do so under the project's license. You won't embark on looking for prior art, for the "actual origin" of the code, whatever. You just make them promise, and then take their word for it.
neilv
There is also IP taint when using "AI". We're just pretending that there's not.
If someone came to you and said "good news: I memorized the code of all the open source projects in this space, and can regurgitate it on command", you would be smart to ban them from working on code at your company.
But with "AI", we make up a bunch of rationalizations. ("I'm doing AI agentic generative AI workflow boilerplate 10x gettin it done AI did I say AI yet!")
And we pretend the person never said that they're just loosely laundering GPL and other code in a way that rightly would be existentially toxic to an IP-based company.
ineedasername
Courts (at least in the US) have already ruled that use of ingested data for training is transformative. There are lots of details left to figure out, but the genie is out of the bottle.
Sure, it's a big hill to climb to rethink IP laws so they align with the societal desire that generating IP remain a viable economic work product, but that's what's necessary.
alfalfasprout
> Courts (at least in the US) have already ruled that use of ingested data for training is transformative
This is far from settled law. Let's not mischaracterize it.
Even so, an AI regurgitating proprietary code that's licensed in some other way is a very real risk.
popalchemist
No more so than regurgitating an entire book. While it could technically be possible in the case of certain repos that are ubiquitous on the internet (and therefore overrepresented in training data to the point that they are "regurgitated" verbatim, in whole), it is extremely unlikely and would only occur after deliberate prompting. The NYT suit against Open AI shows (in discovery) that the NYT was only able to get partial results after deliberately prompting the model with portions of the text they were trying to force it to regurgitate.
So. Yes, technically possible. But impossible by accident. Furthermore when you make this argument you reveal that you don't understand how these models work. They do not simply compress all the data they were trained on into a tiny storable version. They are effectively multiplication matrices that allow math to be done to predict the most likely next token (read: 2-3 Unicode characters) given some input.
So the model does not "contain" code. It "contains" a way of doing calculations for predicting what text comes next.
Finally, let's say that it is possible that the model does spit out not entire works, but a handful of lines of code that appear in some codebase.
This does not constitute copyright infringement, as the lines in question (a) represent a tiny portion of the whole work (and copyright only protects against the reproduction of whole works or significant portions of a work), and (b) there are a limited number of ways to accomplish a certain function, and it is not only possible but inevitable that two devs working independently could arrive at the same implementation. Therefore using an identical implementation of part of a work (which is what this case would be) is no more illegal than using a certain chord progression, melodic phrasing, or drum rhythm. Courts have ruled on this thoroughly.
tick_tock_tick
> There is also IP taint when using "AI". We're just pretending that there's not.
I don't think anyone who's not monetarily incentivized to pretend there are IP/copyright issues actually thinks there are. Luckily, everyone is for the most part just ignoring them, and the legal system is working well and not allowing them an inch to stop progress.
neilv
> I don't think anyone who's not monetarily incentivized to pretend there are IP/copyright issues actually thinks there are.
Why do you think that about people who disagree with you? You're responding directly to someone who's said they think there are issues, and isn't pretending. Do you think they're lying? Did you not read what they said?
And AFAICT a lot of other people think similarly to me.
The perverse incentives to rationalize are on the side of the people looking to exploit the confusion, not the people who are saying "wait a minute, what you're actually doing is..."
So a gold rush person claiming opponents must be pretending because of incentives... seems like the category of "every accusation is a confession".
luma
Also ban StackOverflow and nearly any text book in the field.
The reality is that programmers are going to see other programmers code.
neilv
Huge difference, and companies recognized the difference, right up until "AI" hype.
JoshTriplett
"see" and "copy" are two different things. It's fine to look at StackOverflow to understand the solution to a problem. It's not fine to copy and paste from StackOverflow and ignore its license or attribution.
Content on StackOverflow is under CC-by-sa, version depends on the date it was submitted: https://stackoverflow.com/help/licensing . (It's really unfortunate that they didn't pick license compatible with code; at one point they started to move to the MIT license for code, but then didn't follow through on it.)
timeon
How is that same thing?
uberduper
> Or a more detailed disclosure:
> I consulted ChatGPT to understand the codebase but the solution was fully authored manually by myself.
What's the reasoning for needing to disclose this?
hodgehog11
How does this not lead to a situation where no honest person can use any AI in their submissions? Surely pull requests that acknowledge AI tooling will be given significantly less attention, on the grounds that no one wants to read work that they know is written by AI.
skogweb
I don't think this is the case. Mitchell writes that he himself uses LLMs, so it's not black and white. A PR author who has a deep understanding of their changes and used an LLM for convenience will be able to convey this without losing credibility imo
MerrimanInd
It just might. But if people develop a bias against AI-generated code because AI can generate massive amounts of vaguely correct-looking yet ultimately bad code, then that seems like an AI problem, not a people problem. Get better, AI coding tools.
Workaccount2
Make a knowledgeable reply and mention you used chat-gpt - comment immediately buried.
Make a knowledgeable reply and give no reference to the AI you used- comment is celebrated.
We are already barreling full speed down the "hide your AI use" path.
showcaseearth
I doubt a PR is going to be buried if it's useful, well designed, good code, etc, just because of this disclosure. Articulate how you used AI and I think you've met the author's intent.
If the PR has issues and requires more than superficial re-work to be acceptable, the authors don't want to spend time debugging code spit out by an AI tool. They're more willing to spend a cycle or two if the benefit is you learning (either generally as a dev or becoming more familiar with the project). If you can make clear that you created or understand the code end to end, then they're more likely to be willing to take these extra steps.
Seems pretty straightforward to me and thoughtful by the maintainers here.
KritVutGu
Good point. That's the point exactly. Don't use AI for writing your patch. At all.
Why are you surprised? Do companies want to hire "honest" people whose CVs were written by some LLM?
andunie
Isn't that a good thing?
jama211
What, building systems where we’re specifically incentivised not to disclose ai use?
hodgehog11
It might encourage people to be dishonest, or to not contribute at all. Maybe that's fine for now, but what if the next generation come to rely on these tools?
alfalfasprout
No one is saying to not use AI. The intent here is to be honest about AI usage in your PRs.
whimsicalism
i'm happy to read work written by AI and it is often better than a non-assisted PR
philjohn
I like the pattern of including each prompt used to make a given PR. Yes, I know that LLMs aren't deterministic, but it also gives context on the steps required to get to the end state.
mock-possum
I’m using SpecStory in VS Code + Cursor for this - it keeps a nice little md doc of all your LLM interactions, and you can check that into source control if you like so it’s included in pull requests and can be referenced during code review.
andruby
> I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of
I really appreciate this point from mitchellh. Giving thoughtful constructive feedback to help a junior developer improve is a gift. Yet it would be a waste of time if the PR submitter is just going to pass it to an AI without learning from it.
Lerc
I think this seems totally reasonable; the additional context provided is, to my mind, important to the requirement.
Some of the AI policy statements I have seen come across more as ideology statements. This is much better, saying the reasons for the requirement and offering a path forward. I'd like to see more of this and less "No droids allowed"
rattlesnakedave
In my personal projects I also require all contributors to disclose whether they’ve used an editor with any autocomplete features enabled.
freedomben
Heh, that's a great way to make a point, but right now AI is nowhere near what a traditional editor autocomplete is. Yes you can use it that way, but it's by no means limited to that. If you think of AI as a fancy autocomplete, that's a good personal philosophy, but there are plenty of people that aren't using it that way
miloignis
Notably, tab completion is an explicitly called-out exception to this policy, as detailed in the changed docs.
king_geedorah
Re: "What about my autocomplete?" which has shown up twice in this thread so far.
> As a small exception, trivial tab-completion doesn't need to be disclosed, so long as it is limited to single keywords or short phrases.
RTFA (RTFPR in this case)
ovaistariq
I don’t see much benefit from the disclosure alone. Ultimately, this is code that needs to be reviewed. There is going to be more and more AI-assisted code generation, to the point where these tools see the same level of adoption as autocomplete. Why not solve this through tooling? I’ve had good results with tools like Greptile, Cursor's BugBot, and Claude Code.
wmf
If the code is obviously low quality and AI-generated then it doesn't need to be fully reviewed actually. You can just reject the PR.
Jaxan
Sure, it needs to be reviewed. But the author does more than just review; they help the person submitting the PR improve it. If the other side is an AI, knowing that can save them some time.
I’m loving today. HN’s front page is filled with good sources - no nonsense, sensationalism, or AI-doom preaching, just more realistic experiences.
I’ve completely turned off AI assist on my personal computer and only use it sparingly on my work computer. It is so bad at compound work; AI assist is great at atomic work. The rest should be handled by humans, using AI wisely. It all boils down to human intelligence. AI is only as smart as the human handling it. That’s the bottom line.