Wtf is this article? If there's doubt about someone's national security clearance, suspend it immediately pending review. Being anxious means nothing. Do or don't.
The bill excludes creators of open source models from liability for damages caused by forked models that have been significantly altered.
The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn't absolve the model creator of their responsibility to minimize that potential. We already do this for plenty of other industries, like cars, guns, and tobacco: require companies to minimize the potential for harm even when it's individual actions, not the company directly, that cause it.
I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always centered on self-regulation, which we have seen fail in countless industries.
The bill specifically says that creators of open source models will not be held liable for damages from models that others have altered and fine-tuned. It also only applies to models that cost more than $100M to train. If you have that much money for training models, it's very reasonable to expect you to spend some portion of it ensuring the models do not cause very large damages to society.
So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails against the use of their models for nefarious purposes, at least those causing loss of life. The bill only applies to very large damages (exceeding $500M), so one person finding a loophole isn't going to trigger it. But if a company fails to close such loopholes despite millions of people (or a few people, millions of times) exploiting them, then that's definitely on the company.
As a developer of AI models and applications, I support the bill, and I'm glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.
Real-life Jed Bartlet, Mexico edition.
One of the funniest things about most of the companies enforcing RTO is that their "on-site interviews" are still virtual. So you believe being in person is more effective, except when it comes to paying travel expenses for interviewees?
Just shows the massive hypocrisy behind these RTO mandates.
The 2001 Nisqually earthquake was also a different type of event from the one that could cause a really large earthquake (intraslab vs. subduction). The last major subduction earthquake in the region was centuries ago, and those earthquakes can exceed Mw 9.0. Luckily they are not very frequent, but there are indications that Seattle is due for one.
From the USGS ShakeMap and DYFI reports, it looks like there was some strong shaking, but mostly moderate to light shaking. The reported depth of 17 km would also have contributed to the relatively lower shaking. So hopefully the damage will be contained, especially compared to the 2015 event.
He probably took the plea deal because he can't afford a lengthy legal battle: declining to pay his lawyers, getting court dates shifted, appointing his own judge, and so on. There is an entirely separate US justice system for the rich and powerful.
Is that an AI-generated hand holding an umbrella?