
Google begins requiring JavaScript for Google Search

marginalia_nu

To be fair, if my search engine is anything to go on, about 0.5-1% of the requests I get are from human sources. The rest are from bots, and not like people who haven't found I have an API, but bots that are attempting to poison Google or Bing's query suggestions (even though I'm not backed by either). From what I've heard from other people running search engines, it looks the same everywhere.

I don't know what Google's ratio of human to botspam is, but given how much of a payday it would be if anyone were to succeed, I can imagine they're serving their fair share of automated requests.

Requiring a headless browser to automate the traffic makes the abuse significantly more expensive.

shiomiru

If it's such a common issue, I would've thought Google already ignored searches from clients that do not enable JavaScript when computing results?

Besides, you already got auto-blocked when using it in a slightly unusual way. Google hasn't worked on Tor since forever, and recently I also got blocked a few times just for using it through my text browser that uses libcurl for its network stack. So I imagine a botnet using curl wouldn't last very long either.

My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.

supriyo-biswas

Given that curl-impersonate[1] exists and that a major player in this space is also looking for experience with this library, I'm pretty sure forcing the execution of JS that touches the DOM would be a much more effective deterrent to scraping.

[1] https://github.com/lwthiker/curl-impersonate
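
As a rough sketch of the shape I mean (the endpoint, token format and markup here are made up for illustration, and a real challenge would obfuscate how the token is derived rather than just stashing it in a data attribute): the server embeds a signed nonce in the page, inline JavaScript has to pull it out of the DOM and post it back, and a client that never executes the script never completes the check.

    import hashlib
    import hmac
    import os

    SECRET = os.urandom(32)  # per-deployment signing key (illustrative)

    def issue_challenge() -> tuple[str, str]:
        """Return (token, page_html). Only a client that runs the inline
        script will dig the token out of the DOM and post it back."""
        nonce = os.urandom(16).hex()
        sig = hmac.new(SECRET, nonce.encode(), hashlib.sha256).hexdigest()
        token = f"{nonce}.{sig}"
        page = f"""
        <div id="c" data-token="{token}"></div>
        <script>
          // Read the token out of the DOM and echo it back; a client that
          // never executes JS never makes this request.
          const t = document.getElementById('c').dataset.token;
          fetch('/verify', {{ method: 'POST', body: t }});
        </script>"""
        return token, page

    def verify(token: str) -> bool:
        """Check the nonce/signature pair posted back by the inline script."""
        try:
            nonce, sig = token.split(".")
        except ValueError:
            return False
        expected = hmac.new(SECRET, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)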

jsnell

"Why didn't they do it earlier?" is a fallacious argument.

If we accepted it, there would basically only be a single point in time where a change like this could be legitimately made. If the change is made before there is a large enough problem, you'll argue the change was unnecessary. If it's made after, you'll argue the change should have been made sooner.

"They've already done something else" isn't quite as logically fallacious, but it shows that you don't have experience dealing with adversarial application domains.

Adversarial problems, which scraping is, are dynamic and iterative games. The attacker and defender are stuck in an endless loop of game and counterplay, unless one side gives up. There's no point in defending against attacks that aren't happening -- it's not just useless, but probably harmful, because every defense has some cost in friction to legitimate users.

> My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.

Yes, that kind of thing is very easy to just assert. But just think about it for like two seconds. How much more revenue are you going to make per user? None. Users without JS are still shown ads. JS is not necessary for ad targeting either.

It seems just as plausible that this is losing them some revenue, because some proportion of the people using the site without JS will stop using it rather than enable JS.

shiomiru

> "Why didn't they do it earlier?" is a fallacious argument.

I never said that, but admittedly I could have worded my argument better: "In my opinion, shadow banning non-JS clients from result computation would be similarly (if not more) effective at preventing SEO bots from poisoning results, and I would be surprised if they hadn't already done that."

Naturally, this doesn't fix the problem of having to spend resources on serving unsuccessful SEO bots that the existing blocking mechanisms (which I think are based on IP-address rate limiting and the user agent's TLS fingerprint) failed to filter out.

> Yes, that kind of thing is very easy to just assert. But just think about it for like two seconds. How much more revenue are you going to make per user? None. Users without JS are still shown ads. JS is not necessary for ad targeting either.

Is JS necessary for ads? No. Does JS make it easier to control what the user is seeing? Sure it does.

If you've been following the developments on YouTube concerning ad-blockers, you should understand my suspicion that Search is going in a similar direction. Of course, it's all speculation; maybe they really just want to make sure we all get to experience the JS-based enhancements they have been working on :)

supriyo-biswas

I run a semi-popular website hosting user-generated content, although it's not a search engine; the attacks on it have surprised me, and I've eventually had to put the same kinds of restrictions in place.

I was initially very hesitant to restrict any kind of traffic, relying on rate-limiting IPs on critical endpoints that needed low friction, and captchas on higher-friction, higher-intent pages such as signup and password reset.
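
The kind of rate limiting I mean is nothing exotic; a minimal per-IP token-bucket sketch (the limits and names here are illustrative, not my actual configuration) looks roughly like this:

    import time
    from collections import defaultdict
    from dataclasses import dataclass, field

    RATE = 5.0    # tokens refilled per second (illustrative)
    BURST = 20.0  # maximum burst size (illustrative)

    @dataclass
    class Bucket:
        tokens: float = BURST
        last: float = field(default_factory=time.monotonic)

    buckets: dict[str, Bucket] = defaultdict(Bucket)

    def allow(ip: str, cost: float = 1.0) -> bool:
        """Refill this IP's bucket, then spend `cost` tokens if available."""
        b = buckets[ip]
        now = time.monotonic()
        b.tokens = min(BURST, b.tokens + (now - b.last) * RATE)
        b.last = now
        if b.tokens >= cost:
            b.tokens -= cost
            return True
        return False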

Other than that, I was very liberal with most traffic, making sure that Tor was unblocked. I even ended up migrating off Cloudflare's free tier to a paid CDN because of inexplicable errors users were facing over Tor, which ultimately came down to Cloudflare blocking some specific requests over Tor with a 403, even though the MVPs on their community forums would never acknowledge such a thing.

Unfortunately, given that Tor is a free rotating proxy, my website got attacked on one of these critical, compute-heavy endpoints through multiple exit nodes totaling ~20,000 RPS. I reluctantly had to block Tor, and since then a few other paid proxy services discovered through my own research.

Another time, a set of human spammers distributed all over the world started sending a large volume of spam towards my website, something like 1,000,000 spam messages every day (I still feel this was an attack coordinated by a "competitor" of some sort, especially given that a small percentage of messages were titled "I want to get paid for posting" or something along those lines).

There was no meaningful differentiator between the spammers and legitimate users: they were using real Gmail accounts to sign up, analysis of their behaviour showed they were real people as opposed to simple or even browser-based automation, and the spammers were based out of the same residential IPs as legitimate users.

Again, I reluctantly had to introduce a spam filter on some common keywords, and although some legitimate users do get caught from time to time, this was the only way I could get a handle on the problem.
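
A keyword filter of this sort is conceptually just a pattern match on incoming messages; a minimal sketch (the patterns are made up for illustration, not my actual list):

    import re

    # Illustrative patterns only; a real list is tuned to the spam actually seen.
    BLOCKED_PATTERNS = [
        r"\bget paid for posting\b",
        r"\bbuy (?:followers|likes)\b",
    ]
    _compiled = [re.compile(p, re.IGNORECASE) for p in BLOCKED_PATTERNS]

    def looks_like_spam(message: str) -> bool:
        """True if the message matches any blocked pattern; such messages get
        rejected or held, which is where the occasional false positive comes from."""
        return any(p.search(message) for p in _compiled)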

I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that, for a website with technical users, we can definitely do better.

_factor

The problem is accountability. Imagine starting a trade show business in the physical world as an example.

One day you start getting a bunch of people coming in to mess with the place. You can identify them and their organization, then promptly remove them. If they continue, there are legal ramifications.

On the web, these people can be robots that look just like real people until you spend a while studying their behavior. Worse if they’re real people being paid for sabotage.

In the real world, you arrest them and find the source. Online they can remain anonymous and protected. What recourse do we have beyond splitting the web into a "verified ID" web and a pseudonymous analog? We can't keep treating potential computer engagement the same as human engagement forever. As AI agents inevitably get cheaper and harder to detect, what choice will we have?

supriyo-biswas

To be honest, I don't like initiatives towards a "verified web" either, and I'm very scared of the effects on anonymity of stuff like Apple's PAT, Chrome's now-deprecated WEI, or Cloudflare's similar efforts to that end.

Not to mention that these would just cement the position of Google and Microsoft and block the rest of us off from building alternatives to their products.

I feel that the current state of things is fine; I was eventually able to restrict most abuse in an acceptable way with few false positives. However, what I wish for is that more people would understand these tradeoffs instead of jumping to uncharitable conclusions not backed by real-world experience.

marginalia_nu

> I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that as a website with technical users, we can definitely do better.

If nothing else, it's very evident that most people fundamentally don't understand what an adversarial shit show running a public web service is.

dageshi

There's a certain relatively tiny audience that has congregated on HN for whom hating ads is a kind of religion and google is the great satan.

Threads like this are where they come to affirm their beliefs with fellow adherents.

Comments like yours, those that imply there might be some valid reason for a move like this (even with degrees of separation), are simply heretical. I think these people cling to an internet circa 2002, and their solution to all problems with the modern internet is to make it go back to 2002.

_factor

The problem isn’t the necessary fluff that must be added, it’s how easy it becomes to keep on adding it after the necessity subsides.

Google was a more honorable company when the ads were only on the right-hand side, instead of being mixed into the main results to trick you. This is the enshittification people talk about: decisions with no reason other than pure profit at user expense. They were already horrendously profitable when they made this dark-pattern switch.

Profits today can't accurately be split between users who know they're clicking an ad and those who were tricked into thinking it was organic.

Not all enshittification is equal.

altfredd

20,000 RPS is very little: a web app / database running on an ordinary desktop computer can process up to 10,000 RPS on a bare-metal configuration after some basic optimization. If that is half of your total average load, a single co-located server should be enough to eat the entire "attack" without flinching. If you have "competitors", I assume this is some kind of commercial product (including a profitable advertising-based business), so you should probably have multiple geographically distributed servers and some kind of BGP-based DDoS protection.

Regarding Tor nodes — there is nothing wrong with locking them out, especially if your website isn't geo-blocked by any governments and there are no privacy concerns related to accessing it.

If, like Google, you lock out EVERYONE, even your logged-in users, whose identities and payment details you have already confirmed, then... yes, you are "enshittifying" or have ulterior motives.

> they were using real Gmail accounts to sign up

Using Gmail should be a red flag on its own. Google accounts can be purchased by the millions, and they immediately get resold after being blocked by the target website. The same goes for phone numbers. Only your own accounts / captchas / site reputation can be treated as a basis of trust. A confirmation e-mail is a mere formality so you have some way of contacting your human users; by the time Reddit was created it was already useless as a security measure.

marginalia_nu

RPS is a bad measure. 20k RPS is not much if you're serving static files; a Raspberry Pi could probably do that. It's a lot if you're mutating a large database table with each request, which, depending on the service, isn't unheard of.

oefrha

This comment is so out of touch I’m almost speechless.

> > critical, compute heavy endpoints through multiple exit nodes totaling ~20,000 RPS

> 20000 RPS is very little

If I had to guess, you've never hosted non-static websites, so you can't imagine what a compute-heavy endpoint is.

> Using Gmail should be a red flag on its own.

Yes, ban users signing up with Gmail then.

And this is not an isolated case; discussions on DDoS, CAPTCHAs, etc. here always have these out-of-touch people coming out of the woodwork. Baffling.

marcus0x62

I run a not-very-popular site -- at least 50% of the traffic is bots. I can only imagine how bad it would be if the site was a forum or search engine.

kragen

Maybe you could require hashcash, so that people who wanted to do automated searches could do it at an expense comparable to the expense of a human doing a search manually. Or a cryptocurrency micropayment, though tooling around that is currently poor.
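
Something like the following sketch (the difficulty and encoding are arbitrary here): the server hands out a random challenge, the client must find a nonce such that SHA-256(challenge + nonce) has a given number of leading zero bits, and checking the work server-side costs a single hash.

    import hashlib
    import os
    from itertools import count

    DIFFICULTY_BITS = 20  # ~1M hashes on average per search; tune to taste

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            bits += 8 - byte.bit_length()
            break
        return bits

    def solve(challenge: bytes) -> int:
        """Client side: brute-force a nonce. This is the cost you impose."""
        for nonce in count():
            digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
            if leading_zero_bits(digest) >= DIFFICULTY_BITS:
                return nonce

    def verify(challenge: bytes, nonce: int) -> bool:
        """Server side: a single hash to confirm the work was done."""
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        return leading_zero_bits(digest) >= DIFFICULTY_BITS

    if __name__ == "__main__":
        challenge = os.urandom(16)
        n = solve(challenge)
        assert verify(challenge, n)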

supriyo-biswas

The only issue with hashcash is that there's no way to know whether the user's browser is the one that computed said proof of work, or whether it delegated the computation to a different system and is simply relaying the results. At scale, you'd end up with a large botnet that receives proof-of-work tokens to solve for the scraping network to use.

palmfacehn

My impression is that it's less effort for them to go directly to headless browsers. There are several footguns in using a raw HTML-parsing library and dispatching HTTP requests yourself. People don't care about resource usage, spammers even less, and many of them lack the skills.

marginalia_nu

Most black-hat spammers use botnets, especially against bigger targets, which have enough traffic to build statistics to fingerprint clients, map out bad ASNs and so on, and most botnets are low-powered. You're not running Chrome on a smart fridge or an enterprise router.

gnfargbl

True, but the bad actor's code doesn't typically run directly on the infected device. Typically the infected router or camera is just acting as a proxy.

desdenova

Chrome is probably the worst browser possible to run for these things, so it's not a good basis for comparison.

We have many smaller browsers that run JavaScript and work on low-powered devices as well.

Starting from WebKit and stripping out the rendering parts, just executing JavaScript and processing the DOM, the RAM usage would be significantly lower.

supriyo-biswas

A major player in this space is apparently looking for people experienced in scraping without using browser automation. My guess is that not running a browser results in using far fewer resources, thus reducing their costs heavily.

Running a headless browser also means that any differences between the headless environment and a "headed" one can be discovered, and that any of your JavaScript executes within the page, which makes it significantly more difficult to scale the operation.

marginalia_nu

My experience is that headless browsers use about 100x more RAM, at least 10x more bandwidth and 10x more processing power, and page loads take about 10x as long to finish (vs curl). These numbers may even be a bit low; there are instances where you need to add another zero to one or more of them.

There's also considerably more jank with headless browsers, since you typically want to re-use instances to avoid incurring the cost of spawning a new browser for each retrieval.

fweimer

The change rate for Chromium is also so high that it's hard to spot the addition of code targeting whatever you are doing on the client side.

victorbjorklund

So much more expensive and slow vs. just scraping the HTML. It is not hard to scrape raw HTML if the target is well-defined (like Google).

ForHackernews

> bots that are attempting to poison Google or Bing's query suggestions

This seems like yet another example of Google and friends inviting the problem they're objecting to.

nilslindemann

Just tested (ignoring AI search engines, non-English, and non-free):

Search engines which require JavaScript:

Google, Bing, Ecosia, Yandex, Qwant, Gibiru, Presearch, Seekr, Swisscows, Yep, Openverse, Dogpile, Waldo

Search engines which do not require JavaScript:

DuckDuckGo, Yahoo Search, Brave Search, Startpage, AOL Search, giveWater, Mojeek

yla92

Kagi.com works without JS

nilslindemann

Have just updated my text: "ignoring non-free" :-)

anArbitraryOne

I've put off learning JavaScript for over 20 years, now I'm not going to be able to search for anything

phoronixrly

What's next? Not working for an adtech company?

fsflover

You can use DuckDuckGo without JavaScript.

lemoncookiechip

What I find amusing is that this is Google. It's their bots, and now LLMs as well, that have hammered people's websites for years.

post-it

Have they hammered people's websites? I find that the Google bot makes as few requests as it can, and it respects robots.txt.

puttycat

I recently discovered how great the ChatGPT web search feature is. Returns live (!) results from the web and usually finds things that Google doesn't - mostly niche searches in natural language that G simply doesn't get.

Of course, it uses JavaScript, which doesn't help with the problem discussed here.

But I do think that Google is internally seeing a huge drop in usage, which is why they're currently chasing the money. We're going to see this all across their products soon enough (I'm thinking Gmail).

marginalia_nu

I've been experimenting with creating single-site browsers[1] for all websites I routinely visit, effectively removing navigational queries from search engines; between that and Claude being able to answer technical questions, it's remarkable how rarely I even use browsers for day-to-day tasks anymore (as in web views with tabs and url bars).

We've been using the web (as in documents interconnected with links between servers) for a great number of tasks it was never quite designed to solve, and the result has always been awkward. It's been very refreshing to move away from the web browser-search engine duo for these things.

For one (and it took me a while to notice what was off), there are like no ads anymore, anywhere. Not because I use adblockers, but because I simply don't end up directed to places where there are ads. And let me tell you, if you've been away from that stuff for a while and then come back, holy crap, what a dumpster fire.

The web browser has been center stage for a long while, coasting on momentum and old habits, but it turns out it doesn't need to be, and if you work to get rid of it, you get a better and more enjoyable computing experience. Given how much better this feels, I can't help but feel we're in for a big shift in how computers are used.

[1] You can just launch 'chrome --app=url' to make one. Or use Electron if you want to customize the UI yourself.

rmgk

While I am glad that you seem to have found a new workflow that you like, your description strikes me as a personal experience.

I am aware that a lot of people use searches as a form of navigation, but it's also very common that people use bookmarks, speed dial, history, pinned tabs, and other browser features instead of searching. My Firefox is configured to not do online searches when I type into the address bar; instead I get only history suggestions. This setup allows for quick navigation, and does not require any steps to set up new pages that I need to visit.

What I want to say is that while you seem to imply you've found a different pattern of use that many people will soon migrate to, I think these patterns have always been popular. People discover and make use of them as needed.

It’s also strange that you put such a negative sentiment on interconnected documents. Do you not realize how important these connections were for you to be able to reach the point you are at now? How else would you have found the things that are useful to you? By watching ads?

Search engines are also… really not a good example of the strengths of the interconnected web, as they are mostly a one-way thing. Consider instead a Hacker News discussion about a blog, and some other blog linking to that discussion, creating these interconnected but still separate communities and documents.

marginalia_nu

> It’s also strange that you put such a negative sentiment on interconnected documents. Do you not realize how important these connections were for you to be able to reach the point you are at now? How else would you have found the things that are useful to you? By watching ads?

This is specifically in the context of getting things done: not, e.g., reading an interesting article for enjoyment, but using the web as an indirect means to accomplish a task.

mb7733

> I've been experimenting with creating single-site browsers[1] for all websites I routinely visit, effectively removing navigational queries from search engines

Surely it would make more sense to use bookmarks?

marginalia_nu

The bookmark interface on modern browsers is pretty awkward to access. It's a bigger upfront effort to set up an SSB, but they significantly streamline the user experience once they're set up in a way that aligns with what you want to do.

Web browsers have a sort of inner-platform tendency where they roll their own window management, and it just gets very messy and integrates incredibly poorly with the window management of the operating system.

You can open CI in your browser to see how your build is progressing, and in the same window, with a few keypresses, check your private email and then go buy new tires for your car, file your taxes, and after that go watch some porn.

Web browsers are streamlining an undesirable type of context switching: These are all tasks from separate domains, and I don't understand why it would be desirable that all of these things are easily accessible from the same window at the same time.

Having dedicated launchers opening specialized windows allows for a sort of workspace mise-en-place that makes interacting with the computer much more focused and deliberate. Each tool has its place and function.

nonrandomstring

As a serious computer user, getting on for 25 years using text-based search tools, I've long made various "single-site" tools. A big inspiration way back was Surfraw [1], originally created by Julian Assange. The reality is, most of us use a small number of websites regularly. Nearly all the info I want to touch is three keystrokes away on the command line or from within Emacs.

When search died, practically a few years ago now, I was still teaching a level-7 Research Methods course. The universities literally did not notice that all of the advice we gave students was totally obsolete and that it was not really possible to conduct academic research that way.

Research today is very much more like it was in the pre-internet era. You need to curate and keep in mind a set of reliable sources and personal, private collections.

I had the misfortune of needing to spend a week using a standard browser and sites like Google. It was beyond shocking. What I found I can only describe as a wastescape, a war zone, a bombed-out favela with burned-out cars, overflowing sewers, piles of rubble and dead dogs lying in gutters.

My first thought was kinda, "Oh sweet Jesus Christ, what happened to my Internet?", and the very next one was "How does anyone get anything done now?" How does the economy still function? And of course the answers are "They don't" and "It doesn't".

I think this is a really serious situation. There's simply no way that as "knowledge workers", scientists, or whatever people call us now, we can be as competitive as we were 10 or 20 years ago given the colossal degradation of our tools. We have to stop this foolish self-deception that things are "getting better". Google were a company that created free search. Well done. But that was then. We remain stuck in this strange mythology that advertising companies like Google and other enshittified BigTech are a net asset to the economy. Surely they're a vast parasitical drain and need digging into the ground so the rest of us can get on with something resembling progress?

[1] http://surfraw.org/

black3r

Can it find OLD articles? I generally don't like the idea of a search engine which requires me to be logged in to track my search history (and I do mostly use Google in incognito/private browser windows), but I might ignore that if it allows me to do the one thing that Google refuses to do on phones anymore (which might be a sign that they're going to phase it out of desktop interfaces soon).

at0mic22

I believe the main intent is to block SERP analysers, which track result positions by keyword. Not that it would help a lot with bot abuse, but it will make life harder and more expensive for regular SEO agencies.

Last month Google also tightened YouTube policies, which IMHO is a sign that they are not reaching specific milestones, and that would definitely be reflected in Alphabet's stock.

ronjouch

Previous discussion: Google.com search now refusing to search for FF esr 128 without JavaScript (2025-01-16, 92 points), https://news.ycombinator.com/item?id=42719865

zelphirkalt

They are going to make Google search even more broken than it is already? Be my guest! Since they are an ads business, I guess they don't really care about their search any longer, or they have sniffed some potential to gather even more information on users using Google, if they require running JS for it to work. Who knows. But anyone valuing their privacy has long left anyway.

blindriver

Almost everyone I know has moved a lot of their searching onto ChatGPT or WhatsApp AI querying.

Everyone I know under 25 has stopped using Google search altogether.

I think the only people disabling JavaScript must be GenX graybeards such as myself or security experts.

markasoftware

> Everyone I know under 25 has stopped using Google search altogether.

Completely unhinged take. As someone under 25, everyone I know under 25 uses Google search at least an order of magnitude more than they use AI querying.

elicksaur

Everyone I know under 25 hasn’t heard of chatgpt.

blharr

How is that possible?

ChrisArchitect

Related discussion as linked in article: ("users on social media")

Google.com search now refusing to search for FF esr 128 without JavaScript

https://news.ycombinator.com/item?id=42719865

throeurir

It does not even work with JavaScript enabled! It's always asking for some cookie permissions, a captcha, a Gmail login...

ant6n

…and all the results are ads and SEO blogspam.
