Mozilla reaffirms that Firefox will continue to support current content blockers (ghacks.net)
642 points by bj-rn on Sept 24, 2022 | 201 comments



Good to hear the reaffirmation from Mozilla that the blocking webRequest API will be retained. A little over three years ago, Mozilla promised that it wouldn’t just blindly follow Google on Manifest v3 to the letter. [1]

Considering that uBlock Origin works best and can do the most in Firefox with this continuing support, I’m glad Mozilla is still walking the talk on this one.

We need at least one browser that makes the web usable, and that might as well be Firefox with uBlock Origin.

Edit: Oops. Missed providing the link reference below earlier.

[1]: https://blog.mozilla.org/addons/2019/09/03/mozillas-manifest...


I really need to learn the Firefox debugger. My years of Chrome debugger muscle memory have tied me to that product (at least for work, which is 95% of my desktop browsing).

Personally, for my own browsing I’m a mobile person, which means Safari. I can’t stand to be at a computer after a day of being at the computer.


I use Firefox for web development (I got used to using it in the Firebug days and never switched). On the occasions I’ve had to use the Chrome developer tools, I’ve had no problems, since they are so similar.


Firefox DevTools are better, at least for messing with styles and HTML.


Firefox DevTools are better for some things (styles, accessibility tree, network "Edit and Resend") and Chrome's are better for others (editing local files in the DevTools)


Its CSS Grid/Flexbox inspector is really nice. Does anyone use CSS Grid/Flexbox?


Can you give an example of a feature that's so different or missing in Firefox devtools?

The only differences that come to my mind are super minor (Chrome logging 4xx requests to the console, Firefox having more advanced grid/flex debugging). I'm kinda interested whether I'm missing a super obvious feature that's commonly used.


If he's anything like me, it's just the muscle memory part. My understanding of JavaScript grew basically alongside the evolution of the Chrome DevTools, so there's quite a lot of subconscious navigation that just feels more intuitive, to me, in Chrome.

It's also a psychological bond to a certain extent, having relied on them during stressful times in my career... before that I had only Firebug, which served a similar purpose but didn't teach me quite as well and never reached that "if the Chrome DevTools are open, I can understand what's going on here" sense of security & confidence.


Spot on. Every time I want to do X in Firefox and it takes me more than a few seconds to figure it out I get frustrated and use chrome. I do need to break this bad habit though. Maybe I’ll try again this week!


My favourite is Ctrl+Shift+L to clear the console, since Ctrl+L focuses the URL bar. Damn it, guys. I only use Chromium for work, though. Its tools are somewhat easier to use.


Yeah you can get over that in a few months. It is annoying though during the process. I switched a year ago.


I think the biggest one for me is less a case of it being missing, more Chrome's implementation being better, and that's the Timeline tab. Firefox not only doesn't implement it natively (redirecting you to a web app at `profiler.firefox.com`), it also lacks screenshots and dropped/rendered frame durations.

Overall, it also seems less accessible to me. It may well be that it's much more powerful in some respects, but I've never got my head around it enough to work that out.

I do very much rate its Memory Tree Map view, though. Big advantage over Chrome for rapid debugging of where memory goes.


Performance tab in Chrome is miles better than in Firefox.


Mobile debugging using a real handheld? I use the FF devtools extensively; I only have to switch to Chrome when some weird mobile issue arises.


Changing latency and throughput independently to any value.

Sometimes stack traces that Chrome does show are missing in the Firefox console.


I always do web dev on Firefox. I worry that developing on Chrome could lead to accidentally breaking Firefox support. But I think the opposite is much less likely.


My experience is that Firefox sometimes fixes the errors I create. Which is bad.

For example, document.querySelector("input[type=text") (note the missing ]) worked in Firefox but didn’t in Chrome. Or something similar to this.
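
A minimal sketch of the kind of difference described (behavior varies by browser and version, so treat this as illustrative rather than a guaranteed repro):

    // Selector with a missing "]": some parsers auto-close the bracket
    // at end of input, others throw a SyntaxError.
    try {
      const el = document.querySelector("input[type=text");
      console.log("lenient parser, matched:", el);
    } catch (e) {
      console.log("strict parser, rejected:", e.message);
    }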


Nothing stops you from developing on Chrome and using Firefox for general web browsing.

Presumably, you don't need an ad blocker to test and debug your own code, unless you are testing specifically for ad-blocking browsers. And if you are doing things properly, you should be testing on both browsers anyways, at least as a final step.


If there are things that are missing with no equivalent, it's worth asking for them.


"Usable" is an overstatement, but we at least gotta fight the fight.


Seeing this in this context just made me realize that it's weird to consider them "blockers", a loaded term. As if we're starting from some default, correct state, and then blocking part of it. We've always been able to choose what content we navigate to. These are content selectors/choosers, which just support our existing ability to choose what content we request and actions we take.


I disagree completely, "blockers" is accurate.

The user chooses what website page to visit, but it's the website that chooses what content gets served at any of its URLs. That's how the web works. Hence, when websites choose to include aspects of the page users don't want: "blockers", to block some of the page. Nothing loaded about the term; it's just descriptive.

Nor are they "content selectors/choosers", nobody uses their adblocking plugin to find content, only to block parts of content they've found by visiting a website.


> "but it's the website that chooses want content gets served at any of its URLs."

No, not at all. The web was specifically designed so that the client chooses which parts of the available content to load, and in what way.

When I first started browsing the web I didn't have a graphical browser at home. My web client made decisions about which parts to display and which parts to not. It was not "blocking" anything, it was rendering the available content according to the capabilities of my system.

> "Nothing loaded about the term it's just descriptive."

It's a loaded term because it implies that websites ought to have control over display, when the web was specifically designed such that they should not.

> "Nor are they "content selectors/choosers", nobody uses their adblocking plugin to find content"

I modify many aspects of webpage display to help me assimilate content. When a website has chosen difficult to read colors, I change them so they're easier to see. I change fonts and font sizes. I modify page layouts, sometimes collapsing many separate pages into one long page, and sometimes I use screen readers and I don't even look at a visual rendering at all. I often remove annoying, obnoxious or intrusive content which also make it harder for me to read a page, many of which are ads.

The web is specifically designed from inception to give this agency to clients. Websites provide units of content and suggestions on how to best display it. That is all.


> No, not at all. The web was specifically designed so that the client chooses which parts of the available content to load, and in what way.

It was designed with this technical capability. But the social contract was always that you went to a site for it to tell you what to display. And we have many very nice, very non-standard sites because of it.

There is nothing wrong with blocking part of that content, but everybody's state of mind is that, by default, any site content is expected to load.


>The web was specifically designed so that the client chooses which parts of the available content to load

Source? That sounds like just a side effect of other design decisions.

>My web client made decisions about which parts to display and which parts to not

Your browser sucked because it couldn't even support the full standard.

>It's a loaded term because it implies that websites ought to have control over display

They should. Websites describe what should be shown using HTML and CSS.


> "Source that this is just a side effect from other design decisions?"

The source would be Tim Berners-Lee's original 1989 proposal for a hypertext information system. He specifically wrote: "We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities."

These sentiments are also reflected in other technical documents such as RFC 1866. The intent is crystal clear.

> "Your browser sucked because it couldn't even support the full standard."

I think this is a common reaction from people who aren't old enough to remember a world before mega-complex browser platforms and non-portable websites came into being. My browser absolutely was standards compliant at the time, and it was expected that websites would be viewable without graphics.

Keep in mind: JavaScript had not yet been invented.

Again, HTML is specifically designed around portability, generality, and flexibility. This has been degraded by the rush of commercialization and related platform lock-in -- in large part due to things like Web 2.0. This is not a good thing.

> "They should. Websites describe what should be shown using HTML and CSS. "

I think you should read a bit deeper into the underpinnings of these standards you're referencing.


>We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.

This would support the opinion that entire web sites should be portable, as opposed to just a small subset.


This is just not true. The host serves data to a client which can do what it wants with it.

The client can interpret the data from the website however it wants regardless of the intention of the website.

A website, for example, can use tags that a client doesn’t comply with.

The client is and always has been opinionated. It does not and never has had to obey the website's intentions.


While that's true of software set up to specifically subtract content from a server response, the code could just as well be set up to add page elements (e.g. for easier navigation) or just rearrange things. So "content customizer" or "user preference enforcer" would both be apt and less arguably pejorative.

"Blocker" implies a specific set of assumptions about a publisher having some say in how content gets presented. "UserAgent" entails a different worldview, one that the companies making browsers seem increasingly unwilling to defend against competing interests.


The websites are _requesting_ the users’ browsers to execute certain extra code the users deem unnecessary to their experience in using the site. The users have no moral responsibility to execute that code. If website owners want to detect and block users making such choices, that is their prerogative, but as long as they don’t, the users may continue to exercise the control and autonomy given to them by the website owners without any moral concerns.


> If website owners want to detect and block users

Oops. I guess you meant to say user request selector/filter then if the term "block" is so loaded.


I meant what I said.


The "end game" from the user's point of view would if someone indeed came up with "content selectors". It would be quite an achievement to build one that was actually useful.

Maybe an easy version could work exactly the same way blockers do, except it would invert the way the rules are evaluated, and of course the rules would need to be custom as well. And then if the rules failed to find some actual content on the page, I guess you might not even know about it :).


That closely describes uMatrix in default-deny mode. For each individual site and subdomain, the user gets to choose whether they want to allow cookies, images, CSS, JS, XHR, frames. Too bad it's unmaintained, with most of its practical, commonly intended use cases now covered by the simpler uBlock Origin.


But the underlying principle is still filtering out the undesired content instead of selecting the desired content.


Surely the principle the parent suggested was to initially filter out all of the content, and then select the desired content to load.

But isn't "filtering out all of the content" just another way of saying, "not loading any content" - until explicitly requested?


Perhaps I misunderstood. But let's say I do run uMatrix in deny mode: I get the site HTML as Tim Berners-Lee intended. Yet that may already contain some kind of advertising material, so to view only the actual content I'd need e.g. an XPath to do it. So I would have a database of sites and XPaths (or other means) describing how to find the content, instead of describing how to get rid of non-content.

Can I tell uMatrix to only show a certain XPath from the page? If so, I agree it is a content selector, albeit pretty impractical :).
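
For illustration, here is a rough sketch of what a per-site "content selector" rule could do in a userscript; this is not an actual uMatrix feature, and the XPath rule is invented:

    // Keep only the first node matching the given XPath; drop the rest.
    function keepOnly(xpath) {
      const result = document.evaluate(
        xpath, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null);
      const keep = result.singleNodeValue;
      if (!keep) return; // rule missed: the content silently never shows
      document.body.replaceChildren(keep);
    }
    keepOnly("//main//article"); // hypothetical rule for one site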


uMatrix is the absolute best; I can't imagine browsing without it. So the unmaintained part is very sad and troubling.

I do install uBlock Origin for all non-technical family members, but uMatrix is so much more capable.


The end game would be some AI running on a GPU in your monitor, so that neither the browser nor the originating website needs to know about any blocking.


Except you'd still have client-side execution in that case. Obviously not everyone cares, but there are some straightforward, objective reasons to care even when the content is blocked from being rendered anyway: why waste cycles on it?


True, but it's hard to block execution without the website knowing about it.


That's basically what the "reader mode" in the various browsers does.


Anyone remember Proxomitron?


The end game is for the web to be scraped into entries in a database, to be displayed by real software instead of a web browser.


Sounds miserable and authoritarian


Blocking useless content makes finding useful content much easier.

No one has enough time in the world to block the infinite amount of useless content piece by piece. We already do that perpetually, by what we personally value the most and what we choose to ignore.


The website usually includes the address of the content to display or script to run. My post meant that I don't think we should take it as a given that our browser must fetch and run all of those resources automatically or else violate some protocol.

And if a user wants to have a policy of selectively choosing what content they want, but that content is hard to define or whitelist, then using a blacklist mechanism doesn't mean that the policy is to block. It's just the most common, practical method of choosing the content you want.


With CSS, at least, users being able to override styles was explicitly the goal of its first edition:

"One of the fundamental features of CSS is that style sheets cascade; authors can attach a preferred style sheet, while the reader may have a personal style sheet to adjust for human or technological handicaps."


Ideologically, I am inclined to agree. I have a visceral reaction to advertising in general. However, these extensions exist in a context where they depend on the majority of users not using them. If every browser had them, the economy of the web would change completely. Very few users who take active steps against ads want to pull a Stallman and relinquish entire parts of the Internet altogether; they want to see the content they are seeing, content that is most of the time supported by ads, without ads. The word 'blocker' is thus appropriate, because the explicit goal is an asymmetric relationship based on other people making ads profitable.

In reality, the more interesting question would be whether we shouldn't limit our media consumption to begin with, or form a culture that is less dependent on advertising. If browsing with ads becomes mandatory some day, I will probably choose to severely cut the amount of Internet I gorge on. It will take some willpower, but every time I switch off the blocker and see what websites really look like I want to gag, so it won't be that difficult.


How much would be gained if every piece of content actually had to actively earn your patronage instead of passively pulling your eyes? I feel like that's the dynamic most ad-block users see.

I personally don't care if clickbait tabloids and exploitative re-hosters were to crumble overnight.


This sounds very reasonable on paper. The elephant in the room, however, is that most people and even those who claim to want to move past ads don't want to pay that much for what they consume. The barrier between consuming something for free and paying even a tiny sum towards supporting the author is an enormous psychological chasm. The internet would definitely be much more restricted if the ad-financing model was not doable. There would be less crap as per Sturgeon's law, but the good parts would shrivel up too.

The case of HN is particularly enlightening because the community is very hostile to ads and for the most part has the technical ability to avoid them with a higher degree of sophistication than just installing uBlock. But the majority of the community's focus and admiration is placed on SWEs and companies that are directly or indirectly tied to the ad ecosystem. The energy given off is similar to McDonald's executives yelling at their children if they see them with a processed patty in hand.


I feel like it's only a psychological chasm because ad-supported content exists at all. If such sites did not exist, I feel like people might have to pay more, they would be more choosy, but then sites would also have to really put more effort into their content. That would eliminate the phenomenon of a thousand sites writing an SEO-farmed article on every possible topic imaginable, and bring it back to 1-2 sites that really put in effort because they care about the topic. Honestly I think the internet would suffer a bit for the first little bit but it would actually stabilize into something a lot more useful.


The internet is already useful in the way you are imagining it to be. The educational and practical resources are there, and are not that hard to find assuming you have gained minimal technological literacy and a certain understanding of internet culture and critical thinking relating to the content available.

There is an ocean of addictive distractions to wade through, but at the same time we know the following things:

- We have to be constantly mindful of the Gell-Mann amnesia effect

- News sources and journalism have always had significant quality problems when they venture beyond just reporting basic news, even in the case of business-oriented publications that, as Chomsky has pointed out, have a vested interest in delivering quality reporting

- Understanding topics requires delving into the academic literature in many cases. In this scenario, the internet becomes the conduit to that literature or those long-form articles, but most of the user's time has to be spent reading and thinking instead of just browsing

- Identifying Skinner boxes and addiction traps is easy. You usually know when you are in one. We just have to be honest about what our goals in life are and how much progress we are making towards them

What this does not solve is the problem of focusing on the wrong things in life, or embarking on a career or interests that are ultimately deleterious to your life. But the internet at least makes it far easier to pivot and refocus, as long as you let it do so and use it towards a conscious goal. This mindfulness is easier said than done, but it's far from an impossibility.


Good point, I agree. I did not think of that. However, it's a bit hard to find certain things now. To take one example, sometimes I like to read reviews of camera equipment. I like to read reviews written by individuals who just use it rather than the SEO junk by most major websites. However, even if those small-time reviews are usually much more informative, they are VERY hard to find in search engines. The SEO junk just overwhelms page after page of search.

That's why I think the internet would be better off without a lot of the intense duplication that is out there today.


> The internet is already useful in the way you are imagining it to be.

Sort of. Many of the useful (not ad-driven) sites of the 90s are still around. Mine are. So in that sense, yes.

But they are drowned by the flood of ad and SEO-driven junk. Finding the good parts, if you don't already know, is becoming harder. Search engines no longer return most of the useful sites on any given topic.


It depends on what you define as useful. 90's style personal websites/Usenet/BBS forums can be useful. People certainly gathered there to make significant things happen and to create useful resources. That said they're still pretty infotainmenty, the signal to noise ratio is still pretty low compared to a focus on an actual topic of interest that comes through deeply engaging with the literature and/or delving into making or building things with it. We have to keep to the logic that the internet is a conduit towards real life actions and the internet of the 90's already cut into that. In a sense, being bothered by the SEO means you are still in an infotainment mindset anyway or at least surfing the shallower waves.

Today's internet has astoundingly accessible resources on all sorts of things. A motivated person can still sidestep all of the crap with relative ease. Or to put it another way, if they get lost in the shrubs, they weren't gonna make it in any case. Motivated people now have many more tools and platforms at their disposal to gather alongside one another and make things happen. The pre-September internet was more private, yes, but that was actually a massive flaw in an absolute sense. Now you can have a bright middle class Mongolian kid become an electronics expert on his own and the greybeards aren't barred from finding their watering hole either.


> The internet would definitely be much more restricted if the ad-financing model was not doable. There would be less crap as per Sturgeon's law, but the good parts would shrivel up too.

The web in the late 90s was in most ways a far more pleasant and useful place than today. Content was a labor of love, not driven by advertising.

I wouldn't hesitate for an instant to go back to an internet which bans advertising. It would be far better than what we have today, not just the ads but all the toxic data collection and spyware that those fuel.


I’m fine with the funding engine of the internet dying.

The way those ads get placed is that massive privacy violations happen to fuel an engine that's smarter than you, armed with data. Then this engine shows the perfect ad at the perfect time, in the perfect moment of weakness, to get you to do what the advertiser wants.

If these ads were merely roadside billboards I don’t think I would care as much. These are internet ads connected to an extensive surveillance infrastructure. Manifest v3 indirectly means an increase in surveillance activity, with the biggest companies and governments on earth expanding their power. Let’s also not pretend that content blockers merely block ads; I use them all the time to block scripts that are there expressly to mine your data and manipulate you in the future. I’m not smart enough to not get manipulated if I give these companies my data.

If I block ads to prevent all that and we have fewer six-figure FAANG jobs and convenient services on the internet - oh no! P2P is a thing, community-run websites are a thing, alternative funding models are a thing, open source is a thing, lower wages are fine, we’ll be fine.


I for one would be happier if every site that relied on ads simply vanished. The whole reason we have an "attention economy" is the monetary gain from showing people ads, which encourages quantity (ie spam) over quality.

There was a link posted to HN today: "Homemade Heat Pump Manifesto". I remember when a web search about any topic would find you a few similar pages, of people diving deeply into topics and DIYing experiments and publishing everything they'd learned simply because they wanted to share hard-won knowledge.

On my projects-to-do-someday list, I would like to try mashing up a search engine to filter out results that contain advertising (load the page, check if anything matches adblock list, if so, discard). Obviously in today's adcancer-filled web, that is going to discard the vast majority of results. But I'd hope that it would eventually surface some interesting results that have been drowned out by spam, assuming they're even still indexed. But it certainly couldn't be any worse than all the false positives that drown the results these days.
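
The core check could look something like this sketch, assuming you already have a list of ad-serving hostnames derived from an adblock list (a real version would render the page and apply full filter rules):

    // Discard any result page that references a known ad host.
    async function isAdFree(url, adHosts) {
      const html = await (await fetch(url)).text();
      return !adHosts.some(host => html.includes(host));
    }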


I pay for Kagi search; it has some features that do this very well. Well, not exactly the same, but you can focus your search better where ads are less of an issue.


I’m fine with “content blockers”. They block content the same way my eyelids and eye muscles do when I decide to not look at billboards.

If you don’t have moral qualms with using your eyelids to block ads then you shouldn’t have with software content blockers either. It just outsources the act of blocking from your eyes into the computer.


To be honest, no, I don't use my eyelids to block anything, and I don't think that's a reasonable thing for a person to do. I don't use any muscles to block billboards; I just look at whatever I was originally planning to look at (usually the road ahead of me), and don't expend much mental or muscular effort to pay attention to OR ignore billboards. The closest I might come is to turn away from ads playing at the gas pump. But that could easily be classified as choosing not to look vs. blocking.


If I could wear blinkers that only block the billboards while driving, I would definitely wear them.

Ad blockers on the PC are the same thing.


the "block" ads.

Not everything needs to be analyzed to death for some sort of symbolism.


I call them HTML firewalls.


Bouncers?


I switched back to Firefox a while ago, and love it. I keep trying to think of how it could be kept afloat once Google stops funding it.

IIRC they get $400 million annually. There are 200 million FF users. $2 each wouldn't be bad... But realistically only 1% of the users would donate, which works out to $200 each, and $200 is unpalatable.

Maybe another organization picks it up and runs it more efficiently or at a loss for advertising?

Or...?


I don't really understand how a web browser needs $400M a year; that's a lot of employees.


You can see roughly how they spend their revenue here [1]. Although it might be out of date now, it does highlight how expensive browser development really is at that scale.

[1] https://frankhecker.com/2020/08/15/how-mozilla-makes-money-a...


This does not break down how much money is spent on development of Firefox. I don't have references, but there have been many discussions about Firefox development being only a small fraction of the whole, with the majority spent on questionable endeavors (or perhaps I am thinking of Wikipedia?).


Do they really need 400m?


If you have it, it doesn't matter; you will spend it and scream for more. Look at Wikipedia.

Wikipedia collected $162M in revenue (no ads!) and spent $112M! I'd be curious to dig into what % of that goes to core Wikipedia and what % goes to ancillary projects. You'd have to dig very deeply into their finances to figure that out.

Mozilla should set aside a large chunk of that $400M/year in an endowment intended to keep Firefox going and independent in perpetuity. That could easily be a multi-billion-dollar endowment by now, and their spending and growth should be funded by the revenue from that endowment.


If you're a company sitting on a large mountain of cash, then your finance team isn't doing its job. Money is there to be used, unless you happen to have some sort of monopoly. And Mozilla obviously doesn't.

Competing with Google- and Microsoft-funded competitors ain't cheap. And it requires more than developers. People don't play that nice, at least not when there's an advantage to playing dirty. And there is.


This question was downvoted but I think it's fair. There's a lot of evidence that Mozilla is pretty bloated as a company, and if they shed a lot of that bloat I see no reason to suppose they couldn't keep maintaining Firefox on a much smaller budget.


One serious criticism of Mozilla is the fact that a major chunk of that money goes into the CEO's pockets and doesn't get spent on Firefox.

That single fact makes me reluctant to pay for Firefox, unless I'm sure that all the money I donate goes to the right cause and not to a CEO's bank account.


> But realistically only 1% of the users would donate, which works out to $200 each, and $200 is unpalatable.

I'd pay $200/yr for the privacy and control benefits of Firefox over google-controlled Chrome.


You might, but the majority of users, probably even a majority of the users on HN, would not. Unfortunately.


The big test is not now, when Chromium discontinues content blockers, but when the contract with Google comes up for renewal. For Google there is a big incentive in getting rid of ad blockers, and they have a lot of power over Mozilla, being the party responsible for most of Mozilla's revenue.


Google keeps Mozilla propped up to avoid anti-trust.


I think Google has enough regulatory capture not to worry about anti-trust. In a perfect world, anti-trust would have broken up Google and Meta a long time ago. Our anti-trust investigations have been a joke since MCI. Think about it: the USA is, respectfully speaking, a corporatocratic country; biting the hands that feed is in nobody's interest. John Q. Public does not pay the bills, nor does he hold the personal data and location of the majority of the inhabitants of this planet.


> In a perfect world, anti-trust would have broken up Google and Meta a long time ago.

I'd go even further, as the example of AT&T shows that just breaking up a monopoly is insufficient, as it can reform over time. In addition, (dis)incentives must be created to counter whatever led to the market failure in the first place.

The Sherman Anti-Trust Act forbids attempts to establish a monopoly, regardless of the success or failure of such an attempt. Designing a system in which network effects tend toward monopoly (e.g. "Keep using our platform, because when your friends talk on this platform, it's the only way to hear what they're saying.") is an attempt to establish a monopoly.


Google just 10 days ago lost its appeal of a 4 billion dollar anti-trust fine.


4 billion dollars is less of a fine and more of a participation fee, a cost of doing business.


Out of curiosity, how much do you think would be "fair"?

$4B is a lot of money, no matter how you look at it.


Fair would be a fine such that Google/Alphabet would be better off had they not participated in the antitrust behavior. Anything below that is just subtle encouragement.


Fair would be sanctions, restructuring, and legal consequences for those responsible for the inaction in dividing the conglomerate into its constituents. Think Ma Bell, Standard Oil. $4B likely wouldn't even cover the vig on all that profit.


Fair would be a fine large enough to actually induce existential worry. A $4 billion fine isn't enough to make Google worried for their continued existence. They know they'll be fine.


In Europe. The question people were asking at the time was whether the fine was great enough to cause Google to abandon Europe. If they did, they might be able to abandon Mozilla with impunity, since nobody else is likely to do anything about it.


In the European Union, not the USA.


There is a world somewhere between perfect and hopeless. It's this one.

And no, as bad as the big bad corporation is, nobody has the regulatory capture required to ward off an anti-trust lawsuit in guaranteed perpetuity. Among other things, the law would have already changed to not bother with anti-trust if it did. All it takes to upend the regulatory applecart is a change of the driver. And when that day comes, you want to have plausible deniability.


On top of all the other reasons that theory has never made sense (Google pays Apple for the same reason, and for years Yahoo was paying Mozilla), there's an ongoing antitrust case against Google where their payment for the Firefox default search is evidence against them.


Or to keep Google Search dominant, which is why they pay Apple billions of dollars per year to be the default engine in Safari, a deal that may itself invite antitrust scrutiny.


This is one thing that has the potential to cause significant numbers of people to switch from Chrome to Firefox. The modern web is unusable without a good ad blocker, and it's inexplicable to me that blocking isn't a built-in feature enabled by default in every browser, like pop-up blocking. You shouldn't have to install extensions just to get essential functionality from your browser.


The problem is that Google has a conflict of interest as both a browser developer and a platform for people to sell you ads. People selling ads to Google must be putting pressure on them to do this... Or maybe they have internal pressure to maximize revenues.

At the end of the day, yeah, it seems like it might cause people to switch. I'm also honestly wondering, how long until a real competitor to YouTube surfaces? I realize that this is hard to build (though we do have multiple streaming platforms in existence), but Google has been playing this slowly boiling frog experiment with ads on YouTube. Now you have multiple ads per video, coupled with ads baked within the videos themselves, and sponsored content. If you don't pay for YT premium, it's kind of unusable.

The worse the product gets, the more of an opening is created for the competition. Seems like a matter of when rather than a matter of if? What if Amazon or Apple created a YT alternative that was ad-free, maybe with some paid content to sponsor creators (pay $2 a month to get bonus content from this channel). They could afford to sink money into it and it could be very disruptive, but it's like they don't dare. Maybe they're afraid to associate their name with a product that could fail, kind of like Google Plus.


There is a conflict of interest but with the amount of data Google has, they have to know it won't work out in a way that benefits them if they block adblockers.

Technical users have what is probably the lowest tolerance to BS that's humanly possible when it comes to the web viewing experience. The web without an adblocker is really really bad, to the point where I would instantly switch to another browser that continued to allow adblockers. The user experience of that browser wouldn't even matter as long as pages rendered and it worked reasonably well, that is already a million times better than having a ton of ads forced upon you.

I don't know about you but I also can't remember the last time I saw a paid ad and thought "wow, I want that" and then bought it. It just doesn't happen for me. If I end up buying something it's because there was a gap somewhere and a product filled that gap. I'm going to attempt to research the problem (the gap) and find a solution (various products) from a non-biased source before I buy it. This involves organic search results. Basically a paid ad that would have been blocked by an adblocker is going to have a 0% chance of converting me into a buyer.

Also, most non-technical users don't even know what an adblocker is. Unless someone set one up for them, they are seeing ads. It seems weird to me that Google would put a lot of effort into trying to block adblockers. The audience that uses them likely has a very small chance of ever buying something because of a paid ad, and a large chunk of users don't use adblockers at all.

This whole scenario reminds me of Let's Encrypt in a way. If push comes to shove and major browser vendors like Chrome prevented adblockers something tells me some of the best minds in this space will come together and make a browser that will be technically better in every way possible.


>Technical users have what is probably the lowest tolerance to BS that's humanly possible when it comes to the web viewing experience.

Lol! Truer words have never been spoken.


Ad blockers used to be something mostly for the nerds way back when. These days, browser extensions are as easy to install as phone apps. So, yes, quite a lot of regular users have some kind of adblock these days. Overall it's still the minority, but it's not necessarily a small one.


Suddenly Firefox is interesting and Chrome is not. We need to stop thinking of people as users and start thinking of them as computer owners. "Users" works when you are sharing a computer, but for installed software, we should think "owner" instead.


"These weeks in Firefox: issue 124" was referenced in the post, but was not linked.

https://blog.nightly.mozilla.org/2022/09/21/these-weeks-in-f...


As a person who has exclusively used Firefox all my life (even on my Android, where I have both Firefox and Focus), this is quite good news.

The only problem I can imagine is the invisible pressure from the unending desire to become Chrome, you know, with hiding the search bar by default "because people are used to that from Chrome" and other shenanigans. I feel this support might be short-lived, because the stupid people managing Firefox are paid by Google to fuck up Firefox in the worst possible way, and this just feels like another attempt in that regard.

Btw, I've been online daily since 2004, so I have used a lot of Firefox.


I was saddened recently when I learnt they changed "View Image" to "Open Image in New Tab". I loved the way "View Image" worked, allowing you to view the image in full while preserving tab history. If you wanted to open it in a new tab, you could middle-click or Ctrl-click it. But since users were used to the Chrome behaviour, they changed it to match, and thus Firefox lost functionality. Unfortunately, since this is such a niche use case, Mozilla has little incentive to restore it. (Note: Extensions that restore the context menu item fail when the image loading relies on the Referer header, since they actively re-request the image.)

There are numerous smaller items like this that just flowed a lot better, particularly if you're heavily browsing images (like tab previews), that have been sacrificed either in the name of performance or to be more like Chrome. Fortunately, SeaMonkey still maintains these classic features, but is unfortunately a memory hog.

My daily browser has also been Firefox since around 2004. SeaMonkey remains a time pocket, when I long for the old days of yore.


> sacrificed […] in the name of performance […] SeaMonkey still maintains these classic features, but is unfortunately a memory hog.

I mean, there might be a relation? I went from IE to Netscape (edit: or NS to IE?), to Firefox, to Chrome, then tried every 1-2 years to switch back to Firefox and always returned to Chrome because it was so much faster. It was only 2-5 years ago (can’t remember exactly), when FF switched to fully multiprocess and stopped using its unique extension model, that I could finally start using FF again, because the performance was finally on par with Chrome.


I miss being able to long-press a link and clicking "send link to device."

Now it only lets you send the page you're currently on, which is annoying.

I send a lot of stuff to my desktop when I'm riding the bus.


Use Plasma Integration to get that. It works 100% of the time.


The thing that's been driving me crazy lately is that they got rid of the undo history for the search bar. To be more specific: if you submit a search, it resets the undo history. I expect by now so few people use the search bar that this is probably not going to be fixed any time soon, sadly.


AFAIK that's because the new-tab behavior keeps web apps' state intact. When the web was more static, opening the image in the current tab was fine. But nowadays you'd just lose your place on an app-site, so the new-tab option gets you your image in full view but avoids bungling up the app.


I think along the same lines. Mozilla has forgotten a lot of the political fights of the past around freedom and customisation, and instead focused on US partisan talking points, supporting censorship and forgetting the desires of core users.

I, too, feel that eventually they might just go: "Everyone is doing it and it costs us more effort to keep maintaining it; here you have our own internal blocking, which won't block Google-related stuff because they fund us."

All I know is. It's my device. Not Google's and I intend to fight that battle, just like we did with Microsoft.


I doubt it, since this is an API, and not a visible difference. Seems that changes like that are mostly targeted at the UI.


I see that differently… my wife uses Firefox and that API. She eventually replaced her laptop. The only problem was that the old one wasn't fast enough to process about 90,000 regexps each time she clicked a link.

Chrome replaced it with a less flexible API that has bounded runtime, preventing the kind of browser slowness that caused a certain dissatisfaction, even disharmony, here.

You could say that people shouldn't configure so much adblocking that the browser has to evaluate 90,000 regexps for a simple click. Or you could say that the browser should prevent that case by design. It's not simple. IMO a worst case like this is one of the causes of the "invisible pressure".


No well-known content blocker has to "process about 90,000 regexps" to find out whether a resource needs to be blocked or not; that's just not how it works internally.

The last time I ran benchmarks of all the well-known content blockers using Ghostery's benchmark tool [1], all of them could process a network request in under 20µs on average.

Some do have performance concerns, but it has nothing to do with network filtering, it has to do with other stuff they do beyond network filtering (for example see [2]) and declarativeNetRequest does not help there, so they will still suffer these performance issues under MV3.

---

[1] https://github.com/ghostery/adblocker/tree/master/packages/a...

[2] https://www.extremetech.com/computing/182428-ironic-iframes-...


I doubt that 90,000 rules were an issue. On a 12-year-old laptop (ThinkPad X230) with 260,000 uBlock rules (+240,000 cosmetic), the slowdown is imperceptible. On the contrary, ad-ridden websites load much faster.


Disabling the blockers (plural because three) fixed it, so it was either that or that the extra processing pushed firefox over a RAM cliff and into swap, which sounds unlikely. Anyway, we didn't try any more debugging, she just got a new laptop.


You invented a problem and "didn't try any more debugging". It's like complaining about using three condoms, then changing partners to fix it.


Probably one of the extensions was the culprit. But clearly it's not about the number of rules, and crippling that functionality would not bring a performance benefit. uBlock Origin is pretty lightweight even with 500,000 rules applied.


Most of those regexes fail after processing the first few characters.

I wonder if an optimizing regex-compiler could transform the set of regexes into a single finite state machine that’s more efficient than running individual regex recognizers in parallel. The optimal solution feels NP-hard, but I wonder if one could get sufficient improvements with some heuristic optimizations, like extracting common clauses, etc.

Maybe @burntsushi has some insights about this?


Taking a set of regexes and compiling them all into one automaton is exactly what RegexSet is: https://docs.rs/regex/latest/regex/struct.RegexSet.html

However, when you're talking about thousands of regexes, that's going to be a very large automaton. Probably impractically large. It's not NP-hard. It's "just" impractical.

Using heuristics to whittle down the set of possible regexes to match---likely using literals extracted from each regex---is exactly what you want to do for a problem like this. If all you have are literals, an Aho-Corasick automaton is feasible to build for 90,000 entries. Aho-Corasick, in my experience, doesn't really start to break down until you eclipse 1,000,000 entries.
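
As a toy illustration of the literal-prefilter idea (patterns and literals invented for the example; a real engine would index all the literals in a single Aho-Corasick automaton rather than looping):

    // Only run a regex when its required literal occurs in the URL.
    const rules = [
      { literal: "doubleclick", re: /^https?:\/\/[^/]*doubleclick\.net\// },
      { literal: "/ads/",       re: /\/ads\/[^?]*\.js$/ },
    ];
    const shouldBlock = url =>
      rules.some(r => url.includes(r.literal) && r.re.test(url));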


Emacs used such a thing ~30 years ago. Common wisdom was that it was great when it worked, but a few carelessly written regexes would tank the performance. Of course there'll be some carelessly written ones in any set of >1k regexes.


I'm feeling nostalgic for the times when monopolies were broken up ...


Good, that's why I use Firefox. Still can't help but find it strange that ads are called "content" now.


Ads aren't considered content, uBO is called a content blocker because it can block all kinds of DOM content and not just ads.


Pi-hole blocks ads for an entire network, so why does a content blocker have to be built into the browser? Can't it just be a separate app you run, outside the browser? Even if the browser is using HTTPS, the URL and IP address for everything not from the site you are on is visible. The app could even run as a fake proxy.

Scripts would be harder to block if they are sent encrypted, but since you can view the scripts in a browser, and hence know the exact content of each script, maybe it would be easy to snip them out of the encrypted stream by automatically generating signatures or formulas for what to snip from various sites. You could even use an embedded browser in the app to decode and format the entire page, automatically determine what to snip, do the snip, and then forward the modified stream to the user's primary web browser (with some delay). Any checksums could be modified appropriately. The app could present a GUI in the primary browser that the user could use to decide what to block.

Are the page contents protected by a cryptographic signature? It does not seem like it would matter, since the embedded browser certainly has the ability to decrypt everything. Data in the reverse direction could be sent unmodified. Maybe the slowdown caused by filtering everything this way would be unacceptable to users? Some might find it acceptable in exchange for greater control.


A blocker like uBlock Origin does much more than DNS level blocking. There are many cases where that isn't enough.


I read somewhere that even though Mozilla will continue supporting the ad blocking API from manifest v2, the rest of manifest v2 will be completely dropped so most of the extensions that I use like greasemonkey will stop working. Is that true?


There's a migration guide here: https://extensionworkshop.com/documentation/develop/manifest...

Since they aren't taking away the onBeforeRequest() functionality, I don't see a good reason why greasemonkey can't be ported over.
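
For reference, the blocking form that Firefox is keeping looks roughly like this in a background script. This is a sketch: it assumes the "webRequest" and "webRequestBlocking" permissions plus matching host permissions in the manifest, and isBlocked() stands in for a real filter engine:

    // Cancel any request the filter engine flags.
    browser.webRequest.onBeforeRequest.addListener(
      details => ({ cancel: isBlocked(details.url) }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );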


Under "Scripting API" it says "the code parameter is removed so that arbitrary strings can no longer be executed [...] you need to move any arbitrary strings executed as scripts to files". I don't see how something like greasemonkey can continue existing given that limitation.


Ah, yeah, that does look like a gap. Tampermonkey's related issue, though it seems to be mostly focused on Chrome: https://github.com/Tampermonkey/tampermonkey/issues/644


This is because Firefox is adding Manifest v3 support without dropping v2 support, unlike Chrome.


Googling "tampermonkey manifest v3" led me to this issue, which has a lot of relevant discussion: https://github.com/Tampermonkey/tampermonkey/issues/644


No.


- [ ] ENH,SEC,UBY: indicate that DNS is locally overridden by entries in /etc/hosts

- [ ] ENH,SEC,UBY: Browser UI: indicate that a domain does not have DNSSEC record signatures

- [ ] ENH,SEC,UBT: Browser UI: indicate whether DNS is over classic UDP or DoH, DoT, DoQ (DNS-over-QUIC)

- [ ] ENH,SEC,UBY: browser: indicate that a page is modified by extensions; show a "tamper bit"

- [ ] ENH,SEC: Devtools?: indicate whether there are (matching) HTTP SRI Subresource Integrity signatures for any or some of the page assets

- [ ] ENH,SEC,UBY: a "DNS Domain(s) Information" modal_tab/panel like the Certificate Information panel


Two easy ways to block trackers and ads... without a firewall.

Adding blockers to the hosts table still works with Chrome... hope they don't muck with that...

https://github.com/StevenBlack/hosts
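
For anyone unfamiliar, entries in such a list map ad/tracker hostnames (placeholders here) to an unroutable address so lookups resolve to nothing:

    # hosts-file blocking; 0.0.0.0 avoids even a connection attempt
    0.0.0.0 ads.example.com
    0.0.0.0 tracker.example.net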

But if they do... there are always DNS solutions you can add to your router.

https://nextdns.io/

I use Firefox, but even Windows spams ads at you if you let it. So many things have Google trackers built in too...


You can't use DNS/hosts to block ads on sites like YouTube because they use the same domain to load ads and videos. To block the ads you have to block the video itself.

So while still useful, that's not really a replacement for browser extensions.


This comment reminds me of the crazy spammer who used to frequent Slashdot, always with a long comment about how to use the hosts file and how it is superior to every other method of blocking content.


The problem with these approaches is that it's much harder to exclude individual sites that break.


The maintainer of uBlock Origin posted a detailed comment about converting from v2 to v3 here:

https://github.com/uBlockOrigin/uBlock-issues/issues/338#iss...

It’s really worth a read to better understand the pros and cons of this new API.


Random note, but I missed the slimmer tabs from Vivaldi and Brave and the older Firefox themes.

This works, it's just hidden. It says unsupported, but it still works fine even in Nightly so far.

https://support.mozilla.org/en-US/kb/compact-mode-workaround...


Good. On Windows at least, Firefox is just as fast as Chrome, even for heavy sites like Twitch/YouTube. uBlock Origin works the best, vertical tabs are available from extensions, etc., so I don't really miss any features. Maybe the Vivaldi F2 command line.

Edge is still faster, but it's good that the default Windows browser is finally good.


Why can't uBlock Origin work properly with this Manifest v3?


Because an advertising company (Google) designed it just for such a purpose.


Considering that Google's ad providers, analytics, and general services are so easily blockable in the new API, that reasoning doesn't hold water.


This is a conspiracy theory. There is no evidence that this is true. Please stop spreading misinformation and conspiracy theories as if they were facts.

I know it's cool to not like Google and whatever, but I would hope that we can at least discuss facts on this website and not just sling FUD around.

Can you provide any detailed technical information that proves that Google intentionally crippled this new API for nefarious purposes?

Just to be clear, I am a long term Firefox user and probably will continue to use it. I just want to see some real proof for these claims.

The idea of an unprivileged content blocker sounds attractive to me, and considering how much effort Google puts into security on Chrome I don't think it's far fetched that this change is for security purposes.

Chrome was the first browser (afaik) to support running extensions with limited privileges, and I'm sure people were originally upset by this, but today it's clear that this was the right choice and massively increased browser security for the majority of users.


> Can you provide any detailed technical information that proves that Google intentionally crippled this new API for nefarious purposes?

This is not a technical decision by google, it is business strategy.

No company publicly publishes strategy meeting notes, so asking for them is not a reasonable argument. Of course they are not public, and anyone who was present at the meetings is under heavy NDAs.


Because Google is replacing the webRequest API with declarativeNetRequest. This new API has 2 benefits. The first is that it protects the privacy of users better. Extensions no longer get access to the full contents of a request. The second is that it ensures that poorly optimized web extensions can't slow down the performance of loading sites. Chrome is making this change to improve user privacy and improve performance.

uBlock Origin just needs to migrate to this new API. Despite the noise from people who just hate on big tech, the only true difference, as far as I know, is that the browser will enforce a limit on the number of rules that can be added. This limit exists to try to prevent bad performance from too many rules.
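
For context, a declarativeNetRequest rule is static JSON that the browser evaluates itself, along these lines (the filter is a placeholder):

    [
      {
        "id": 1,
        "priority": 1,
        "action": { "type": "block" },
        "condition": {
          "urlFilter": "||ads.example.com^",
          "resourceTypes": ["script", "image"]
        }
      }
    ]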


I want uBlock to get access to the full contents of requests. It actually helps my privacy, as it can perform tracker blocking.

> The second is that it ensures that poorly optimized web extensions can't slow down the performance of loading sites.

This is true for any code running in the browser. Luckily, uBlock Origin and the webRequest API allow me to block arbitrary JavaScript and assets, so that poorly written websites can't slow down the performance of loading sites.

There is an inferior port of uBlock to MV3: https://github.com/gorhill/uBlock/commit/a559f5f2715c58fea4d...

Are you paid by Google?


>It actually helps my privacy, as it can perform tracker blocking.

The goal is to increase the level of privacy of the entire ecosystem. While uBlock Origin may be trustworthy, there are many extensions that are not. It would be better to find a more privacy-preserving replacement than to have to trust extensions to be good actors.

>This is true for any code that is running on the browser.

Typically that code doesn't block the page from loading, though. And again, if this change results in faster loading speeds for users ecosystem-wide, this change is a win.

>There is an inferior port of uBlock to MV3

The downsides seem to come from wanting to be permissionless, not from being unable to replicate the functionality with Manifest v3.

>Are you paid by Google?

No, I have never been paid by Google.


> While uBlock Origin may be trustworthy, there are many extensions that are not. It would be better to find a more privacy-preserving replacement than to have to trust extensions to be good actors

By running arbitrary code on your computer you are inherently trusting the author of the code to be a good actor.

> Typically that code doesn't block the page from loading, though.

Tens of megabytes of bloat block pages from loading all the time.

> And again if this change results in faster loading speeds for users ecosystem wide this change is a win.

uBlock speeds up loading because it blocks useless bloat such as advertisements. MV3 restricts the ability to block content, ergo it will slow down loading speeds.

It's not actually designed for privacy or whatever, it's simply a way to gimp adblockers so that Google (one of the largest online advertisement companies) can get more money from their advertisement business. You must be really naive if you don't understand this simple concept.


>By running arbitrary code on your computer you are inherently trusting the author of the code to be a good actor.

While you may be running arbitrary code, there is only so much it can do from within the sandbox it is in. Just because we can't stop 100% of bad actors doesn't mean we should give up on security.

>Tens of megabytes of bloat block pages from loading all the time.

That is a separate issue from web extensions. Just because X is slow doesn't mean we shouldn't speed up Y.

>MV3 restricts the ability to block content

No, it does not. You just need to use a different API / give it permission to do so.

>It's not actually designed for privacy or whatever

That is one of the reasons Google provided, so yes it is.

>it's simply a way to gimp adblockers

Then why did Google work with adblock extension developers to improve the API by adding things like dynamic rules? The reason is that this is for improving privacy / performance as opposed to trying to kill off extensions.

>You must be really naive if you don't understand this simple concept.

If Google wanted to get rid of ad blockers they would make them against the rules in their extension store. You have to realize that Chrome is software that is used by billions of people and not just you. Google has a responsibility to protect people's privacy and there are engineers who want to be able to move metrics like the number of malicious extensions removed each month or p99 page load speed.


> If Google wanted to get rid of ad blockers they would make them against the rules in their extension store.

You fail to understand the grand strategy. Outright banning ad blockers would be quite radical and may push people away from using Chromium. Simply progressively gimping ad blockers increases Google's revenue from advertisements while keeping all those users.

I do not use Chrome. I use Mozilla Firefox, since it supports a better webRequest API so that uBlock can block ads despite things like CNAME cloaking.
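
For reference, Firefox exposes a DNS API to extensions, which is what makes CNAME uncloaking possible; a minimal sketch (the helper name is mine, and real blockers cache these lookups):

    // Firefox-only: requires the "dns" permission in the manifest.
    async function uncloak(hostname) {
      // A tracker hiding behind a first-party CNAME shows up as the canonical name.
      const record = await browser.dns.resolve(hostname, ["canonical_name"]);
      return record.canonicalName;
    }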


>Simply progressively gimping ad blockers

The goal is not to gimp ad blockers, and Google is open to working with adblock extension developers so that they can continue functioning with the new API.

>I do not use Chrome. I use Mozilla Firefox, since it supports a better webRequest API so that uBlock can block ads despite things like CNAME cloaking.

Chrome supports / is planning to support forwarding the domain of the CNAME record. This means that CNAME cloaking would no longer be a thing.


The whole ecosystem includes the webpages that you view, though. The loss of privacy is much bigger than the gain, considering that uBlock is up there as practically the only extension in wide use.


>uBlock is up there as practically the only extension in wide use

uBlock Origin is not the only extension people use that relies on the webRequest API. If it were, they would not have removed it. Again ad / tracking blocking can still be done with the new API.


> uBlock Origin is not the only extension people use that relies on the webRequest API.

You are not required to install every extension in the world. Only install the ones you trust.

> Again ad / tracking blocking can still be done with the new API.

As the authors of ad blockers disagree with you, what evidence do you have to support your position?


>You are not required to install every extension in the world

Despite that there exist people who install malicious extensions for one reason or another.

>what evidence do you have to support your position?

Go and read the documentation. There are enough capabilities to implement one.


I can see your point about the goal to protect privacy, but it's another of those business decisions made for the sake of the consumer that sacrifices consumer choice - which affects the likes of the HN crowd in a much higher proportion than the unwashed masses.

If there were a way for technical users to unlock the riskier settings, it would placate the conspiracy theorists. That doesn't appear to be the case, so the argument that Google is doing this to restrict and minimise ad blocking remains entirely valid.


It seems like, at least in the case of uBlock, the creator has the ability to enable broader access but is choosing not to do so.

    >At this point I consider being permission-less the limiting
    factor: if broad "read/modify data" permission is to be used,
    than there is not much point for an MV3 version over MV2, just
    use the MV2 version if you want to benefit all the features
    which can't be implemented without broad "read/modify data"
    permission.


> The first is that it protects the privacy of users better.

That is an unjustifiably absolute statement.

In security you need to look at threat models, not just declaring something better or worse.

There are multiple parties you may (or may not) trust. For instance, the browser developer, the website developer and the extension developer. Different people legitimately have different threat models.

You seem to trust the browser developer (an advertising company driven by ad profits above all else) the most. In that scenario, your statement makes sense.

I trust the ad-blocker extension developer more than the other parties, so of course I want it to have full access to block evil behavior.


>That is an unjustifiably absolute statement.

It's fine to be absolute here, since extensions can no longer access the contents of requests. Less data that can be slurped up is a win.

>For instance, the browser developer

If you are Google you already trust yourself.

>Different people legitimately have different threat models.

The context of this change is to protect people's privacy from malicious extension developers.

>You seem to trust the browser developer

That is beside the point I'm trying to make.

>I trust the ad-blocker extension developer more than the other parties, so of course I want it to have full access to block evil behavior.

But do you trust that every web extension that will exist will not abuse that full access? I sure don't. This change isn't because of trustworthy / high quality extensions, but because of malicious and slow extensions. By changing the exposed API, Google wants to reduce the number of extensions in that second category without breaking the extensions in the first.


> But do you trust that every web extension that will exist will not abuse that full access? I sure don't.

I sure don't.

But that is not in my threat model because I have zero intention of ever installing every web extension that will exist.

In my threat model the evil parties are the advertising/spyware industry (which includes google) so I want powerful browser extensions to help with that problem.


Your threat model isn't universal to the billion users of Chrome.


Because the webRequest blocking API is being removed.


The declarativeNetRequest API was added, which allows a similar capability without causing privacy or performance issues.
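
Roughly, an MV3 extension opts into it through its manifest; a minimal sketch (names and paths are illustrative):

    {
      "manifest_version": 3,
      "name": "Minimal blocker",
      "version": "1.0",
      "permissions": ["declarativeNetRequest"],
      "declarative_net_request": {
        "rule_resources": [
          { "id": "ruleset_1", "enabled": true, "path": "rules.json" }
        ]
      }
    }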


[flagged]


Could you please stop posting unsubstantive and/or flamebait comments to Hacker News? You've been doing it repeatedly and that's not what this site is for.

https://news.ycombinator.com/newsguidelines.html


Yeah I mean, they have to. It's open source software, so it could be forked otherwise; and if they abandoned this core principle, they would lose their user base in droves.

There would be no faster way to nuke the entire product. Which is a good thing.


A browser is hard to fork, and a fork of a browser is hard to maintain. They might be able to get away with not doing this.

The reality is that this time they are making a good decision on its own merits, not because they have to for some reason, and this should be recognized as such. They've made enough bad decisions we can attack; no need to belittle them for this one.


People belittle Firefox and its parent company out of both love and fear because it's one of the few bastions remaining in a sea of botnets and data siphons. And no, le shill lion doesn't count.


I think the assumption is that some browser engineers will also break off if a fork is ever created. Because I agree, a browser fork is hard to maintain. So we plebeian users must rely on one or two experienced engineers being on our side.


Is it reasonable to make that assumption? How many ex-Mozilla work on Firefox forks now? How many ex-Google work on Chrome forks? Genuine questions...


I can't answer that, I guess what I was trying to say was that if a fork is ever created its chances of success depend on how many, if any, experienced browser engineers join it.


It should content block itself.


Most sane Chrome/Safari/Edge user


I’d take content blocking this comment


How long until Google’s campaign that Firefox is an illegitimate browser?


i know that we're all sure that even if mozilla reverses this decision, we'll have good forks... but we need to be sure to have them on our side... the open internet needs people like mozilla


The fact that they're even spelling this out means it's going to be reversed soonish. Fortunately Firefox (or at least the original idea of a lightweight free-as-in-freedom browser) is ready for another fork.


This is baseless speculation.

Firefox already has more APIs than Chrome to support content blocking. E.g. DNS uncloaking and reliably blocking content loading on startup is impossible for extensions in Chrome.


Of course. It's based only on similar observations from other companies. We'll see in a year or three.


If they didn't spell this out loudly, people would instead accuse them of staying silent and waiting for the day to quietly pull the plug.

With your point of view, there's no way to tell the truth.


No, in my point of view this sort of thing should be too obvious to mention. Once they start making it explicit, people will immediately start questioning it.


We are not sharing the same perspective and experience, I may say, and that's OK.


There is: they could just say "We are going to eventually follow Google/remove it, albeit maybe not today".


You're assuming it's going to be removed eventually. I have no such assumption. Also your wording sets an unjust precedent when there's no reason to have one.


Is Mozilla really trying to juxtapose themselves here against another browser that removes support for power user customization like they don't do it? That's wild. Is there no one left at Moz that remembers them throwing away the entire ecosystem of XUL based extensions that allowed for many things that current Firefox builds don't re: blocking and UI customization.

Mozilla literally did this back in version 57, and only Firefox forks remain that support the full set of extension features. I guess we're all amnesiacs, or the pot is just being boiled slowly enough to not notice.


> Is Mozilla really trying to juxtapose themselves here against another browser that removes support for power user customization like they don't do it? That's wild. Is there no one left at Moz that remembers them throwing away the entire ecosystem of XUL based extensions that allowed for many things that current Firefox builds don't re: blocking and UI customization.

Dropping XUL wasn't some 'hahah, own the power users' thing, it was a move to allow the Firefox architecture to modernize, with XUL this wasn't possible. I have plenty of criticisms of Mozilla, but simplifying things to the point of removing all important context is not the way to do it imo.


Dropping WebRequest API wasn't some 'hahah, own the power users' thing, it was a move to allow the Chrome architecture to modernize, with WebRequest API this wasn't possible. I have plenty of criticisms of Chrome, but simplifying things to the point of removing all important context is not the way to do it imo.

See what I mean?


UI customization is fortunately still alive in the wake of XUL extensions.

A /r/firefoxcss mod has a wonderful collection of code snippets that they maintain, which you can browse here: https://mrotherguy.github.io/firefox-csshacks/ and they created a userChrome.js loader here: https://github.com/MrOtherGuy/fx-autoconfig

My favorite customization repository is https://github.com/aminomancer/uc.css.js - which really tests the limits of what is and isn't possible with userChrome.css and .js. My favorite feature is the implementation of vertical tabs, without the use of extensions.

Some legacy extensions are maintained and can be found here: https://github.com/xiaoxiaoflood/firefox-scripts/tree/master... (although you will need to use xiaoxiaoflood's userChrome.js loader AFAIK).

Honorable mention goes to the Firefox CSS Store, which can be found here: https://trickypr.github.io/FirefoxCSS-Store.github.io/
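
To give a flavor of how little is needed to get started, here's a minimal userChrome.css sketch (place it in <profile>/chrome/ and set toolkit.legacyUserProfileCustomizations.stylesheets to true in about:config; the selector below is a common one for hiding the horizontal tab strip alongside a vertical-tabs setup):

    /* <profile>/chrome/userChrome.css */
    /* Hide the native tab strip, e.g. when using vertical tabs. */
    #TabsToolbar { visibility: collapse !important; }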

Moving to WebExtensions was the logical choice for Firefox, technical/security reasons aside, as they are not alienating extension developers that target Chromium-based browsers.

Yes, they alienated their own extension developers. Yes, they could've handled the transition better, and worked harder towards supporting some of the many APIs/functionalities that extension developers needed (or still need) for their extensions to work in the WebExtensions ecosystem. I myself was quite mad for a very long time at how they handled the switch, but I think overall it's been a success - my own personal feelings aside.


I still miss Tab Mix Plus.

Having no way to properly save/manage sessions feels like I'm one crash away from losing my (way too many) tabs, although luckily that hasn't happened to me for a long time.

And no, I don't want to be told that I should be closing tabs. I have 64 GB of RAM and I can search tabs by typing "%" into the URL bar. Why should I ever have to close a tab if I don't want to?


It is still developed, but you have to enable userChrome.js scripting support to install it.

https://github.com/onemen/TabMixPlus


> Is Mozilla really trying to juxtapose themselves here against another browser that removes support for power user customization like they don't do it?

No. Firefox is adding Manifest 3 support, and people naturally have questions about whether they'll follow Chrome's lead in killing the blocking API.


Mozilla blocks most extensions by default on Firefox Mobile... they created cumbersome collections for people who really want to install them. Sometimes I wonder if their main sponsor is/isn't asking them to make some of those changes.


I just create my own collection with the extensions I like and go on with my day.


It's just a lot more of a pain than it used to be to install non-approved extensions; most people probably don't bother.


The old APIs were internal, synchronous APIs, which do not work in a multi-process, site-isolation architecture.

The internal APIs also had to go. What else would you do?


Unfortunately not many seem to remember that brief period when the old add-on APIs were being ported and made to work with a multi-process architecture, as part of the effort known as Electrolysis [1], with appreciable results, before being completely killed in favor of WebExtensions. I'm sure there were significant technical challenges, but let's not use these to hide the political reasons behind such a drastic move.

[1] https://wiki.mozilla.org/Electrolysis


These APIs were synchronous and thus blocking. People blamed Firefox for being slow when in fact it was mostly the extension APIs and extension code.


Electrolysis was an easier thing to handle, and some APIs would still have been lost.

Fission (process-per-origin) came later and would have broken many, many more.

So people would still have been pissed off at Mozilla (the difference between levels of breakage would have been lost), and Mozilla would still be on the hook for supporting a barely supportable API, without the resources that would require.


> the pot is just being boiled slowly enough to not notice.

That metaphor is based on a myth¹:

> While some 19th-century experiments suggested that the underlying premise is true if the heating is sufficiently gradual, according to modern biologists the premise is false: a frog that is gradually heated will jump out. Furthermore, a frog placed into already boiling water will die immediately, not jump out. Changing location is a natural thermoregulation strategy for frogs and other ectotherms, and is necessary for survival in the wild.

¹ https://en.wikipedia.org/wiki/Boiling_frog


My least favorite Hacker News trope is pretending not to understand what a metaphor is to nitpick an inaccuracy in the literal interpretation.


I would bet that latexr knows what a metaphor is. They are simply pointing out that the usage of that specific metaphor makes no sense.

And the incorrect usage of the metaphor may give people, such as superkuh, an incorrect model of the world where frogs do not jump out of water that is gradually made more and more uncomfortable.

I like when others make my model of the world more accurate by notifying me of errors.


That is absolutely correct.


I don’t understand how you can see someone specifically call something a metaphor and think they are pretending not to understand what one is.

My least favourite Hacker News trope is when someone assumes ill-intent instead of understanding a comment for what it says.

I did not make a comment on the OP’s larger point, nor do I think their use of the metaphor invalidates their comment.


[flagged]


We've banned this account for repeatedly breaking the site guidelines.

https://news.ycombinator.com/newsguidelines.html



