• 2 Posts
  • 20 Comments
Joined 1 year ago
Cake day: December 11th, 2023




  • I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that “AI” as a term is basically a marketing one, used to convince people these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools at the expense of other, less resource-intensive ones. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train determined to convince the general public that we have, or are near to having, Data from Star Trek.

    Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow and co-opt data that isn’t theirs to build something that contains that data but is under their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model that may, in certain circumstances, spit that data back out verbatim.

    Lastly, since you use the term Luddite, it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices embedded in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.



  • Others have replied pointing out this is a strawman and that merit doesn’t make sense as a metric if you have discrimination. In practice, performance (“merit”) is a complex interaction between an individual’s skills and talent and the environment and support they get to thrive. If you have an environment that structurally and openly discriminates against a certain subclass of people and then choose on “merit”, you are just further entrenching that discrimination.

    This is a project that seemed to be having specific problems around gender that were causing harm and losing it talent. For volunteer roles in particular, this is a death spiral for the project as a whole. Without goodwill and passion, open source projects of any meaningful size just wouldn’t survive.

    I’m glad you care enough about diversity and evidence to have worked out how to solve these problems without empowering and listening to those minorities. Please do share it.


  • This is a basic representation and inclusion issue. Unless you are actively seeking out the voices of those minorities and addressing their concerns, you will have a reinforcing loop where behaviour that puts people off engaging continues, and it will keep limiting the involvement of people from those minorities (and in the worst case cause active harm to some people who do end up getting involved). From what I understand of the behaviour that has been demonstrated, and of who is leaving, it is clear this is an active issue within Nix. Having a diverse range of people and perspectives will actually make the outputs (software) and the community generally better. It’s about recognising the problems in the formal and informal structures you are creating and working to address them.

    Additionally, just to clarify: nepotism would be giving positions based on relationships with people in power, not ensuring that your board contains a more representative set of backgrounds and perspectives.






  • My understanding is that this is still a bit of a grey area, particularly with non-text media?

    I thought that the training itself would not be covered, but that there is the possibility of LLMs regurgitating the training materials under certain circumstances, which would be a potential breach?

    Even without considering AI, though, I still think it’s an important question. Do users retain the copyright of their work? I don’t want to see a repeat of other platforms, where users contribute and build communities with a collective mindset, giving the platform its value, only for it to be enshittified.








  • These are really useful suggestions, thanks!

    Particularly excited about Trillium. I’m currently trying Joplin, but the labour and time needed to reflect on and organize the notes means I’m rarely using it effectively.

    Habitica sounds interesting. I definitely feel I need something like that. My struggle is sometimes in splitting projects into bitesize chunks (some are easier than others), as some of my work can be quite open-ended thought projects. I get caught in a trap of doing the easier-to-plan work (like coding) rather than necessarily the most urgent.