Firefox has a tendency to embed optional extensions as impossible to uninstall core features these days, so it would not change much.
Firefox has always had our backs
It’s been going in a less friendly direction for a while: embedding mandatory, useless extensions, aggressive advertising, deals to push more and more content to more users, disregarding user settings across multiple updates, opt-out telemetry, and now telling you that you’re using it wrong.
Sure, you can navigate through various settings to disable most of these, and check back after updates for settings that toggle themselves back, or are simply renamed and mysteriously return to their default, intrusive value. But we should not have to do that.
And that’s not even touching the issue with the Mozilla Corporation itself.
Firefox is the alternative browser, but it certainly isn’t there to “have your back”.
Sure you can. You can also spend time disabling intrusive telemetry, you can also spend time reverting half the UI changes (not the other half, though), you can also spend time removing integrated services you don’t use but that are still running, you can (regularly) change back some settings that get reverted every once in a while, you can also block some IPs to prevent intrusive ads, and you can toggle off the parts of the “user experience” that bloat the lock screen…
Or you could, I don’t know, not have to do any of that and still have a working system that’s not trying to bend you over.
I’ve had occasion to talk with people involved with Microsoft a few times, mostly on the research front. Great people, with great ideas, and very knowledgeable about their field. Of course, they had nothing to do with the lobbying and the Windows OS. Microsoft is very large; the corporate drones are only a small part of it. Unfortunately, it’s the part that decides what gets done and pushed out :(
Great, another victory of people keeping IP in closed box away from the public at the small cost of culture disappearing.
I believe the appropriate, corporate-friendly answer in this case is “go fuck yourself”
Yes, it is doable. But it also implies keeping track of individual sessions to make sure you serve the right ad at the right time to the right person. Nothing impossible, but definitely more work per individual player, and at YouTube’s scale that adds up to quite a lot.
Streaming allows caching a bit ahead, yes. But “a bit ahead” does not mean you can get everything; a server could very well decide not to send more than a few seconds of buffer beyond the real-time playhead. So, if you’re at 00:00:20 in your video and a 30-second ad is present in the video stream, the server could decide not to send anything beyond the 00:00:55 mark until 30 seconds have elapsed, for example.
It would be very annoying to code server side, and very annoying for people with spotty internet, but it’s very doable.
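The pacing idea described above could be sketched roughly like this (all names and numbers are hypothetical, just to illustrate the gating logic; a real server would track this per session against wall-clock time):

```python
AD_START = 25      # ad begins at 00:00:25 in the stream (seconds)
AD_DURATION = 30   # ad length in seconds
MAX_BUFFER = 5     # never send more than 5 s ahead of the playhead

def max_position_served(playhead: float, seconds_at_ad_gate: float) -> float:
    """Furthest stream timestamp the server will deliver, given the
    client's play position and how long it has been held at the ad."""
    # Normal case: allow a small buffer ahead of the playhead.
    limit = playhead + MAX_BUFFER
    ad_end = AD_START + AD_DURATION
    if limit > ad_end and seconds_at_ad_gate < AD_DURATION:
        # Withhold everything past the ad until its full duration
        # has elapsed in real time.
        limit = ad_end
    return limit

print(max_position_served(20, 0))   # normal buffering, ad not reached
print(max_position_served(52, 10))  # gated at the end of the ad
print(max_position_served(52, 30))  # ad fully elapsed, buffering resumes
```

The annoyance for spotty connections follows directly: the client can never prefetch across the ad boundary, so any network hiccup during the ad stalls playback.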
Ads should be properly labeled in most markets, so it should be trivial to detect which segments are ads and which aren’t. The real question is what to do about it: if the server refuses to serve the remainder of the video until the ad’s duration has elapsed, what do you replace it with?
It’s costly; either you prepare encodes ahead of time with different ads and serve them appropriately, or you splice ads in live for each request, which is also costly in resources. You can’t get away with just a few variations; ads are usually targeted. It also comes with other issues: in a lot of places it is mandatory to clearly identify ads, so there should be an obvious marker somewhere. If it’s in the UI, it can be detected and the ad replaced live with a video of kittens for its duration, so I suppose they also have to handle some signal in the video itself… (This is speculation; I haven’t encountered any of these yet.)
I’m curious to see if this will hold, and how we will run around it in the long run.
or write your windows ISO using rufus and check the “no MS account” checkbox.
Officially, the world is taking the second option in general.
We already have them. Just don’t touch any big-corporation stuff and suddenly everything works without requiring the blessing of a corporate overlord. There are already open source tools, either open or freely accessible models, and the tooling, while relatively new, keeps improving. All working locally.
Heck, even performance improves in unexpected ways. This week I ran a chatbot at an almost acceptable speed on a cheap CPU.
As long as some politician doesn’t come out and outlaw software as a whole (good luck with that), we’ll be fine.
Oh, yeah, thanks to these researchers for providing insightful feedback such as “don’t record private activity”, “don’t store data in a plaintext, user-accessible SQLite database”, and “don’t do that automatically to everyone eligible, what are you thinking, no, stop”. No way anyone could ever have figured these out beforehand. Microsoft was totally stumped when these showed up, and most certainly is being very honest when they say they’re reworking it now, and not at all exploiting the PR outrage to slip us something just as bad in the meantime.
Interesting, we get to either hate them for going full Big Brother, or hate them for going full Adobe in the first place. It’s nice to have a choice sometimes.
Shouldn’t this be “…acknowledges he should not have accepted free trips from billionaire donor”?
I don’t know all the intricacies of US politics, but people in positions of power getting shit-tons of gifts in exchange for “nothing, really, just a friendly gesture” should raise some eyebrows.
Oh no. Wait, I mean “oh yes”.
The whole page transferred about 7 kB and shows 18.2 kB of resources according to the debug tools.
They’re not very good at it, are they? :D
This is not a subreddit, and this is the original article’s title.