A Chromium thing. Some Chromium-based browsers are going to keep some kind of internal ad blocker with more functionality than MV3 allows for, but I don’t know of any that are keeping the older MV2 functionality for extensions in general.
What separate auth operation is needed besides authenticating with the local device to unlock a passkey?
More usable for the average user and more supported by actual sites and services, so yes.
They definitely knew it would impact their ad business, but I think what did it was the competition authorities saying they couldn’t do it to their competitors either, even if they were willing to take the hit on their own services.
Impact on their business (bold added): https://support.google.com/admanager/answer/15189422
- Programmatic revenue impact without Privacy Sandbox: By comparing the control 2 arm to the control 1 arm, we observed that removing third-party cookies without enabling Privacy Sandbox led to -34% programmatic revenue for publishers on Google Ad Manager and -21% programmatic revenue for publishers on Google AdSense.
- Programmatic revenue impact with Privacy Sandbox: By comparing the treatment arm to control 1 arm, we observed that removing third-party cookies while enabling the Privacy Sandbox APIs led to -20% and -18% programmatic revenue for Google Ad Manager and Google AdSense publishers, respectively.
For scenario one: according to the law, they totally need to delete the data used for age verification after they collect it (unless another law says they have to keep it), and you can trust every company to follow the law.
For scenario two, that’s where the age verification requirements of the law come in.
No, no, no, it’s super secure you see, they have this in the law too:
Information collected for the purpose of determining a covered user’s age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user’s age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
And they’ll totally never be hacked.
From the description of the bill (bold added):
https://legislation.nysenate.gov/pdf/bills/2023/S7694A
To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general’s regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.
In other words: sites will have to figure it out and make sure that it’s both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.
Doesn’t necessarily need to be anyone with a lot of money, just a lot of people mass reporting things combined with automated systems.
https://support.google.com/maps/answer/14169818
Update Google Maps to use Timeline on your device
Important: These changes are gradually rolling out to all users of the Google Maps app. You’ll get a notification when an update is available for your account.
Location History is now called Timeline, and you now have new choices for your data. To continue using Timeline, you must have an up-to-date version of the Google Maps app. Otherwise, you may lose data and access to your Timeline on Google Maps.
Timeline is created on your devices.
Basically they’re getting rid of the web version because they’re moving the data to being stored on local devices only. Part of this might be because they got a lot of flak for stuff like recording location data for people who went near reproductive health clinics and other sensitive things. They can’t be forced to respond to subpoenas for data if they don’t have the data and can thus stay out of it, so I wouldn’t necessarily say it’s all that altruistic on their part.
I feel like even if it was open-source, it would still be too big of a target for malware and data exfiltration to ever be justified for most people.
It’s always been possible for someone to do this, but this makes it a default-on feature for a lot of users you might interact with, and makes them a prime target for malware to steal sensitive data that wouldn’t have existed in most cases before.
It’s basically similar to this example from the health field:
Like givesomefucks said, it’s probably not that they were actually after that information specifically, but that it just got caught up in regular website analytics that services put on their sites. You can still infer a lot about a person’s health information by just looking at the URLs they visit, so I’d say it is a concern but I’m not sure it should go beyond companies/agencies/organizations needing to know about the risks and a “stop doing this” warning. If analytics services were doing this intentionally and evaluating and using that data explicitly at the direction of some human in their company, then I think it would be a much bigger issue and a much bigger story.
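As a toy illustration of how much URLs alone can leak (all the URLs and keyword lists below are made up), a few lines of Python are enough to bucket visitors by condition from nothing but an analytics log:

```python
# Sketch of why URLs alone are sensitive: an analytics script that only
# logs page URLs can still bucket a visitor by health condition.
# All URLs and keyword lists here are made up for illustration.

SENSITIVE_TOPICS = {
    "oncology": ["cancer", "chemo", "oncology"],
    "reproductive-health": ["abortion", "ivf", "contraception"],
    "mental-health": ["depression", "therapy", "ssri"],
}

def infer_topics(visited_urls: list[str]) -> set[str]:
    """Guess health topics for a visitor from nothing but their URL history."""
    found = set()
    for url in visited_urls:
        for topic, keywords in SENSITIVE_TOPICS.items():
            if any(kw in url.lower() for kw in keywords):
                found.add(topic)
    return found

history = [
    "https://hospital.example/conditions/breast-cancer/treatment-options",
    "https://hospital.example/find-a-doctor?specialty=oncology",
]
print(infer_topics(history))  # -> {'oncology'}
```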
It also doesn’t help housing prices that the landlords are colluding to raise prices:
https://www.ftc.gov/business-guidance/blog/2024/03/price-fixing-algorithm-still-price-fixing
It isn’t just Airbnb’s fault, it’s landlords wanting to maximize their return, no matter the method (short-term rentals or price fixing collusion).
The trust in the unknown systems of the VPN provider may still be better than the known practices of your local ISP/government though. You shouldn’t necessarily rely on it too heavily but it’s good to have the option.
I think it was more targeting the client ISP side than the VPN provider side. So something like having your ISP monitor your connection (voluntarily, or forced to with a warrant/law) and report if your connection activity matches that of someone accessing a certain site that your local government might not like, for example. In that scenario they would be able to isolate it to at least individual customer accounts of an ISP, which usually knows who you are or where to find you in order to provide service. I may be misunderstanding it though.
Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection’s route. The Mullvad post has links to Vice articles describing the data that is being purchased by governments.
One example:
If visiting site X loads resources A, B, C, etc. in a specific order and with specific sizes, then with enough distinguishable resources loaded like that, someone observing the connection can determine that you’re loading that site, even if it’s loaded inside a VPN connection. Think about when you load Lemmy.world: it loads the main page, then specific images and style sheets that may be recognizable sizes and are generally loaded in a particular order as they’re encountered in the main page, scripts, and things included in scripts. With enough data, instead of writing static rules to say x of size n was loaded, y of size m was loaded, etc., it can instead be fed to an AI model trained on what connections to specific sites typically look like. They could even generate their own data for sites in both normal traffic and VPN-encrypted forms and correlate them together to better train their model for what it might look like when a site is accessed over a VPN. Overall, AI allows them to simplify and automate the identification process when given enough samples.
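To make the static-rule version concrete, here’s a minimal sketch (all the fingerprint sizes are invented for illustration); the AI version just swaps the matching step for a trained classifier:

```python
# Toy sketch of the static-rule version of this attack (the AI version
# replaces the matching step with a trained classifier). All fingerprint
# sizes here are made up for illustration.

from difflib import SequenceMatcher

# Hypothetical fingerprints: ordered response sizes (in bytes) observed
# when loading each site over a clean connection.
KNOWN_FINGERPRINTS = {
    "lemmy.world": [14200, 2100, 2100, 38400, 910, 910, 5200],
    "example.org": [8300, 1200, 44000, 640],
}

def similarity(observed: list[int], reference: list[int]) -> float:
    """Score how closely an observed size sequence matches a reference one."""
    # Bucket sizes so VPN overhead doesn't break exact matches.
    bucket = lambda seq: [s // 512 for s in seq]
    return SequenceMatcher(None, bucket(observed), bucket(reference)).ratio()

def guess_site(observed: list[int]) -> tuple[str, float]:
    """Return the best-matching known site and its similarity score."""
    return max(
        ((site, similarity(observed, fp)) for site, fp in KNOWN_FINGERPRINTS.items()),
        key=lambda pair: pair[1],
    )

# Sizes sniffed from an encrypted tunnel (slightly perturbed by overhead).
sniffed = [14250, 2130, 2130, 38450, 930, 930, 5230]
print(guess_site(sniffed))  # -> ('lemmy.world', 1.0)
```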
Mullvad is working on enabling their VPN apps to: 1. pad the data to a single size so that the different resources are less identifiable and 2. send random data in the background so that there is more noise that has to be filtered out when matching patterns. I’m not sure about 3 to be honest.
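A rough sketch of what defenses 1 and 2 look like in code (this is just an illustration of the idea, not Mullvad’s actual implementation):

```python
# Illustration only, not Mullvad's actual implementation: pad every record
# to one fixed size, and inject dummy records as cover traffic.

import os
import random

RECORD_SIZE = 1024  # every record on the wire ends up exactly this long

def pad(payload: bytes) -> list[bytes]:
    """Defense 1: split a payload into fixed-size records, filling the last
    one with random bytes. (A real protocol would also encode the true
    length inside the encrypted record so the receiver can strip padding.)"""
    records = []
    for i in range(0, max(len(payload), 1), RECORD_SIZE):
        chunk = payload[i : i + RECORD_SIZE]
        records.append(chunk + os.urandom(RECORD_SIZE - len(chunk)))
    return records

def dummy() -> bytes:
    """Defense 2: a cover-traffic record, identical in size to a real one."""
    return os.urandom(RECORD_SIZE)

def send_stream(payloads: list[bytes]) -> list[bytes]:
    """Interleave real (padded) records with random dummies."""
    wire = []
    for payload in payloads:
        wire.extend(pad(payload))
        while random.random() < 0.5:  # sometimes inject noise between sends
            wire.append(dummy())
    return wire

wire = send_stream([b"GET / HTTP/1.1 ...", b"(a 14 KB HTML response) ..."])
print({len(record) for record in wire})  # -> {1024}: everything looks the same
```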
Google has a dominant position in the advertising industry with AdSense and their other advertising-related products. Google also has a dominant position in the browser market with Chrome. Google can’t use that dominance in the browser market to make changes to Chrome that would negatively affect their competitors in the advertising industry without consulting competition authorities, which are trying to make sure Google isn’t intentionally harming competition in the ad market by leveraging its dominance in another market (the browser market) to benefit itself. Firefox is small enough (and generally doesn’t have any other services to leverage) that Mozilla can just make changes to their browser without running afoul of any competition concerns.
There’s also the advantage that Google has when it comes to the large number of popular first party services they have, like Gmail, Search, YouTube, etc. Using those services alone, they may be able to develop a profile of a user that’s better than the competition would be able to do with the new Topics API, Protected Audience API, etc and thus even just getting rid of third-party cookies without a replacement might be seen as anti-competitive. This is probably why places like the EU are also forcing services to make it possible to unlink those services and not have the data shared between them.
Not too surprising. I’m not sure I’ve actually seen anyone adopt their new ad technologies yet and nothing is listed in my browser. If their competition hasn’t adopted it but they have, it’s definitely anti-competitive for the ad market if they just shut off third-party cookies and only affect other companies (which seems to be what’s delaying it with the UK’s CMA).
Votes are public not just to the original instance admins though, but to any instance admin, right? If you set up your own instance and federate with another, you should be able to view the votes for any communities on the one you federate with. The only privacy is that the default UI doesn’t display them, but a different UI could:
e.g. the one for this post on kbin.social that shows Lemmy upvotes as favorites.
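For anyone wondering how that works: Lemmy federates each upvote as an ActivityPub Like activity delivered to federated instances, and that activity names the voter. Roughly (the exact field values below are illustrative):

```python
# Roughly what a federated Lemmy upvote looks like on the wire: an
# ActivityPub "Like" activity delivered to federated instances. The field
# values here are illustrative, but the key point is real: the "actor"
# field identifies exactly who voted.
upvote_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://lemmy.world/u/some_user",   # the voter, by account URL
    "object": "https://lemmy.world/post/123456",  # the post being upvoted
}
```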
I feel like this should be more prominently disclosed to Lemmy users.
From the article’s second paragraph: