
Why employees smuggle AI into work

18 comments

February 4, 2025

latentcall

My employer blocks all AI tools via firewall at the office. I get around this by just using my phone on data or the guest WiFi. I don’t use it often for my work (AIX/Linux admin) but it has been helpful for certain situations.

miohtama

This hits home:

> He estimates that his increased productivity is equivalent to the company getting a third of an additional person working for free.

> He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."

TuringNYC

>> It's a new frontier of IT and they just want to be conservative."

It could also legally encumber your work, especially in creative fields where output is published. There is real liability.

dijksterhuis

As someone who spent several years attacking ML models for security research, so does this:

> Around 30% of the applications Harmonic Security has seen being used train using information entered by the user.

> That means the user's information becomes part of the AI tool and could be output to other users in the future.

> … Firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.

tbrownaw

> He's not sure why the company has banned external AI. "I think it's a control thing," he says.

They involve sending company IP to a third party to do whatever they want with it.

Or depending on your industry and job function, instead of your company's IP it might be other people's data and have contractual or even statutory rules about what you can do with it.

atoav

> He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."

Yeah, it is a real MIRACLE why companies don't want their employees to input their sensitive company data into some online form that they may or may not trust. We will likely never know what their real reasoning is... /s

elektor

Great article.

I think it may be the first time I saw Kagi mentioned in a major publication.

Someone1234

I feel like privacy/isolated environment remains an unexploited USP in this space. There are enterprise products that somewhat offer it, but these require burdensome minimum head counts, dedicated contracts, and are all high cost.

In the SMB space there is definitely a vacuum. In particular when you look at the small print for products like Office 365's Copilot. We've tried to have Microsoft make us privacy assurances contractually, and they won't.

cheesemonster

I wonder if an on-prem box with preinstalled Deepseek would be an attractive proposal to data-sensitive companies.
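An on-prem box like that would typically expose an OpenAI-compatible HTTP endpoint on the local network, so sensitive prompts never leave the building. A hedged sketch of what client code against such a box might look like — the hostname, port, model name, and `build_chat_request` helper here are all assumptions about a typical local deployment, not anything from the comment:

```python
import json
import urllib.request

# Hypothetical on-prem endpoint; local LLM servers commonly expose an
# OpenAI-compatible /v1/chat/completions route on the internal network.
ENDPOINT = "http://llm.internal:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "deepseek-r1") -> urllib.request.Request:
    """Build (but don't send) a chat-completions request, so the payload
    shape is visible. Nothing here reaches an external third party."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Summarize this internal document ...")
# urllib.request.urlopen(req) would send it to the on-prem box.
```

Because the endpoint speaks the same protocol as the hosted services, existing tooling could be pointed at it with a one-line base-URL change — which is much of the appeal for data-sensitive companies.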

heraldgeezer

Companies use Microsoft 365 or Google.

Just turn on their company's AI and block the rest?

But as always, management has to approve.

outside1234

This is the "performance enhancing drugs" of mental work. Either you use it (and you make the Tour de France) or you don't (and you get no next contract).

It is easily a 6x performance increase in the amount of code I can write. I either have to use it, or get laid off.

CraigJPerry

6x?

I’m a huge fan of Sonnet but I can’t reconcile 6x. It’s a faster keyboard. Stuff like aider architect chat or whatever seems relatively useless.

What’s the secret for 6x?

ahoka

The secret is a low baseline.

Slartie

I'd say it's about 2-3x at most, in the best-case scenarios. When I have to write some kind of wrapper or glue code on a green field, I approach that factor. And I really love using AI code completion for those kinds of tasks.

However, writing that kind of code makes up maybe 5% of my work. Analysis, trial-and-error, discussions etc. make up the other 95%, and AI only seldom helps with that. It can sometimes be useful for research and spec ingestion, but it quickly becomes dangerous in those cases, because as soon as you enter any kind of niche area (and unfortunately my work has a lot of those) LLMs tend to hallucinate and present made-up "knowledge" with enviable certainty.

belter

Be careful. That 6x performance boost can be easily used with metrics to detect which Developers are using AI...

johnbellone

Do you often go back to maintain that code that you wrote?

data-ottawa

This is the kicker for me.

It’s great at boilerplate like filling in configs, but the results of iteratively coding with AI are first-draft quality.

Yesterday I used Copilot to kick out a really quick REST API for a Flask app. A lot of the code was boilerplate parsing args and DB lookups, so it was fine.

When I changed one of my DB models, it broke all the hard-coded error handling in frustrating ways.

After writing suitable helpers and rewriting the module (with AI) it cleaned up into some good code.

It definitely sped up the exploration phase, but the final product required a top-to-bottom rewrite. Overall I'd say it was a little faster than doing this myself (but I don't use Flask often).
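The "suitable helpers" fix above generalizes: instead of letting generated route code hard-code per-field checks that break whenever a model changes, a small validation helper derives the required fields from the model itself. A minimal sketch in plain Python — `require_fields` and the `User` model are hypothetical illustrations, not the commenter's actual code:

```python
from dataclasses import dataclass, fields

@dataclass
class User:
    # Hypothetical model: add or remove fields and the validation
    # below adjusts automatically instead of silently going stale.
    name: str
    email: str

def require_fields(payload: dict, model) -> dict:
    """Return the subset of payload matching the model's fields,
    raising ValueError that names everything missing."""
    wanted = {f.name for f in fields(model)}
    missing = sorted(wanted - payload.keys())
    if missing:
        raise ValueError(f"missing fields: {', '.join(missing)}")
    return {k: payload[k] for k in wanted}

# Route code shrinks to one validation call per endpoint:
args = require_fields({"name": "Ada", "email": "ada@example.com"}, User)
user = User(**args)
```

The design point is that error handling lives in one place keyed off the model definition, so a model change surfaces as one helper update rather than breakage scattered across AI-generated handlers.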
