• 12 Posts
  • 721 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • The only thing disgusting here is that you assume you’ve got a moral high ground and superiority over these Arab, Muslim, and Palestinian community leaders. I trust their opinion.

    I’m tired of champagne socialists pretending to be all for progressive causes and then they act like they know better than us brown people and are our wiser saviors. As a brown person, that makes you no better than a Republican in my eyes. Stop using us as a cudgel and patronizing us instead of listening to us. Strip away your imperialist mindset and listen to AAPI people to try and win for once.





  • There is an easy answer to this, but AI companies aren’t pursuing it because it would make them less money, even though it would be entirely ethical.

    Make all LLMs free to use, regardless of sophistication, and be collaborative about sharing the algorithms. They don’t have to be open to everyone, but companies could review access requests and grant them on merit, without charging for it.

    So how do they make money? How does Google Search make money? Advertisements. If you have a good, free product, advertisement space will follow. If it’s impossible to make an AI product while also properly compensating people for training material, then don’t sell it as a product. Use copyrighted training material freely to offer a free product with no premium tiers.



  • That’s a slippery slope fallacy. We can compensate the person with direct ownership without going through a chain of causality. We already do this when we buy goods and services.

    I think the key thing in what you’re saying about AI is “fully open source… locally execute it on their own hardware”. Because if that’s the case, I actually don’t have any issues with how it uses IP or copyright. If it’s an open-source, free-to-use model without any strings attached, I’m all for it using copyrighted material and ignoring IP restrictions.

    My issue is with how OpenAI and other companies do it. If you’re going to sell a trained proprietary model, you don’t get to ignore copyright. That model only exists because it used the labor and creativity of other people – if the model is going to be sold, the people whose efforts went into it should get adequately compensated.

    In the end, what will generative AI be – a free, open-source tool, or a paid corporate product? That determines how copyrighted training material should be treated. Free and open source, it’s like a library: a boon to the public. Paid and corporate, it’s just making undeserved money off other people’s work.

    Funnily enough, I think once we’re aligned on the nature and monetization of the AI model, we’re in agreement on copyright. Taking a picture of my turnips for yourself, or to create a larger creative project you sell? Sure. Taking a picture of my turnips for a corporation to churn out a product and charge for it? Give me my damn share.







  • Afghanistan may be among the most humanitarian deployments the US military has ever undertaken. The infant mortality rate fell significantly while the US was there, and women had the freedom to go to school and participate in the economy without violent oppression.

    Our mistake in Afghanistan was that we didn’t build lasting change. We gave arms to the wrong people. We should’ve been training and arming the women to fight back and protect their democracy, not men who were going to be fine either way.




  • This is a classic case of the differences between lawful good, lawful neutral, and neutral good.

    Lawful good would feel conflicted but settle on conviction, because the act was premeditated and not self-defense.

    Lawful neutral would convict and feel no conflict at all. The law was broken, nothing else matters.

    Neutral good would not convict, because they don’t think the law adequately handles this kind of situation.

    The problem is, within the legal system, neutral good is seldom an option – the system is, by definition, going to be some flavor of lawful. And that sucks here.