I feel as if CEOs worried about the dangers of AI are equating the end of capitalism with the end of the world. What are your thoughts?

  • LadyLikesSpiders@lemmy.ml · 1 year ago

    lmao AI is going to be used by the capitalists to, well, not end humanity, but certainly to make capitalism better at taking your money. Capitalism will be what ends humanity

    Now ideally, AI is supposed to do away with capitalism, lead us to that full automation where we are free to enjoy orgies and wine like the Greeks of old had always hoped, but capitalists are tenacious and shrewd, and will devour, co-opt, and vomit back anything used against them like so many Che Guevara shirts in a Hot Topic. As long as AI is held by the rich – as long as anything is held by the rich and made to be paid for, requiring either your money or your time – the rich will always have more of it, and they will then use it against you

    If you want AI to benefit humanity, you have to do away with capitalism first. You have to put in place a system that allows for people to not only survive, but truly live, despite all the jobs taken by automation. Capitalists don’t want this. They need poor people to exist in order to have power, and they use the power they already have to maintain capitalism, including AI

    You can use technology in the best interest of mankind, but capitalism will always use it to benefit capitalism

  • BeefPiano@lemmy.world · 1 year ago

    There’s a quote in Ministry for the Future that goes something like “It’s easier to imagine the end of the world than the end of capitalism.”

    • SatanicNotMessianic@lemmy.ml · 1 year ago

      I think that phrase might have been coined by Slavoj Žižek, talking about the pop culture fascination with zombie films. I’m almost positive I read it in one of his books/essays back in the 2000s. I refer to it a lot.

  • Paragone@lemmy.ml · 1 year ago

    The problem is the end of civil-rights: WHEN the only internet left is the internet that IS for-profit propaganda, auto-deleting all non-compliant human thought, discussion, intelligence, objectivity, etc,

    THEN humanity is just managed “steers” whose lives are being consumed by corporations which graze on us.

    Another dimension of the ratchet is the concentration-of-wealth: you can see that working-destitution is being enforced on more & more of humankind, while real wealth is limited to fewer & fewer…

    What happens when the working-poor try fighting for a fair share of the economy?

    Rigged legislation, rigged “police” ( I used to believe in the police ), anti-education Florida-style for the public, etc…

    AI tilts the playing-field, and it does-so for the monied special-interest-groups.

    They don’t have humanitarianism at heart.

    Neither do the politically motivated.

    Neither do for-profit-psychopaths ( corporations are psychopaths ).

    Living in a Decorator Prison is all humanity can hope for now: we're all inmates… except for the fewer & fewer oligarchs & the financial-class.

    'tisn’t looking good.

    Without Divine Intervention, which is a statistically improbable event, these are The End Times, but not for the reason the religious claim.

  • RotatingParts@lemmy.ml · 1 year ago

    The people with money will spend to develop better AI and use that AI to make more money. Thus capitalism will keep growing.

  • Karu 🐲@lemmy.world · 1 year ago

    It’s possible that it eventually ends capitalism, or at the very least forces it to reform significantly.

    Consider that the most basic way a company can obtain profit is by extracting as much surplus value as it possibly can, i.e. spending less and earning more. Extracting high surplus value from human workers is easy, because a salary doesn't really depend on the intrinsic value of the service a worker is providing; rather, it's tied to the price of that job position in the market. Theoretically, employers can all agree to offer lower salaries for the same jobs if the situation demands it. You can always "negotiate" a lower salary with a human worker, and they will accept, because any amount of money is better than no money.

    Machines are different. They don't need a salary, but they do carry a maintenance cost, and you cannot negotiate with that. If you don't cover the maintenance costs, the machine will outright not do its job, and no amount of threats will change that. You can always optimize a machine, replace it with a better one, etc., but the rate at which machines get optimized is slower than the rate at which salaries can decrease or stagnate in the face of inflation. So it's a lot harder to extract surplus value from machines than it is from human workers.
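    The worker-vs-machine asymmetry above can be put in numbers with a toy model (my own illustration; the output value, salary, maintenance, and inflation figures are all invented):

```python
# Toy model (illustrative only): surplus value per year is the value
# produced minus what the employer pays. A worker's wage can stagnate
# while inflation erodes it in real terms; a machine's maintenance cost
# is a fixed floor: skip it and the machine simply stops working.

OUTPUT_VALUE = 100_000  # value produced per year by either worker or machine

def surplus_from_worker(salary, years, raise_rate=0.00, inflation=0.03):
    """Total real surplus over `years` when raises lag inflation."""
    total = 0.0
    for year in range(years):
        # The real (inflation-adjusted) cost of a stagnant salary keeps falling.
        real_salary = salary * ((1 + raise_rate) / (1 + inflation)) ** year
        total += OUTPUT_VALUE - real_salary
    return total

def surplus_from_machine(maintenance, years):
    """Maintenance cannot be 'negotiated' down, so surplus is flat."""
    return (OUTPUT_VALUE - maintenance) * years

# Same nominal cost ($60k/year) over a decade:
print(round(surplus_from_worker(60_000, 10)))  # more than 400000
print(surplus_from_machine(60_000, 10))        # exactly 400000
```

    With a stagnant wage the employer's real surplus grows year over year, while the machine's stays flat, which is the comment's point: wages are squeezable in a way maintenance costs are not.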

    Historically, machines helped cement a wealth gap. If there was a job that required some specialization and therefore had a somewhat solid salary, machines would split it into a "lesser" job that many more people can do (i.e. just ensuring the machine is doing its job), driving down salaries and therefore purchasing power, and a specialized job (i.e. creating or maintaining the machine), which far fewer people can access and whose salary has remained high.

    So far, machines haven't really replaced the human workforce, but they have helped cement an underclass with little purchasing power. This time, the whole schtick with AI is that it will, supposedly, eventually be able to replace specialist jobs. If AI does deliver on that promise, we'll get stuck with a wealth distribution where a majority of the working class has little purchasing power to do anything. Since the working class is also the majority of the population, companies won't really be able to sell anything, because no one will be able to buy anything. You cannot sustain an economic model that impoverishes the same demographic it leeches off of.

    But there is a catch: all companies have an incentive to pursue that perfect AI which can replace specialist jobs. Having it would give them a huge advantage in the market. AI doesn't demand good working conditions, it doesn't undermine other employees' loyalty by unionizing, it is generally cheaper and more reliable than human workers, etc., which sounds all fine and dandy until you realize that it's also those human workers who are buying your products and services. AI has, by definition, zero purchasing power. So companies individually have an incentive to pursue that perfect AI, but once all companies have access to it… no company will be sustainable anymore.

    Of course, it's all contingent on AI ever getting that far, which at the moment I'm not sure is even possible, but tech nerds sure love to promise it is. Personally, I'm hopeful that we will eventually organize society in a way where machines do the dirty work while I get to lead a meaningful life and take on jobs I'm actively interested in, rather than work just to get by. This is one of the possible paths to that society. Unfortunately, it also means that, for the working class, it will get worse before it gets better.

  • lily33@lemm.ee · 1 year ago

    I fear it will end egalitarianism.

    Many imagine future AI as an autonomous agent. I don't think anyone will release that. Instead, I expect to see generative AI like GPT-4, but one that produces super-smart responses.

    This will create a situation where the amount of computing resources someone has access to determines how much intelligence they can use. And the difference will be much bigger and more comprehensive than the difference between a genius and a normal human.

    • im sorry i broke the code@sh.itjust.works · 1 year ago

      To be intelligent it has to be creative, and if it really is more intelligent and creative than a human, that means there is no way a human can keep it in check.

      Which also means either you get something smarter than humans, which will end up as an "autonomous agent" anyway, or you get a more precise version of what we currently have, but as of now without an intelligence of its own.

  • kromem@lemmy.world · 1 year ago (edited)

    Capitalism.

    There’s a great economics paper from the early 20th century that in part won its author the Nobel for economics called “The Nature of the Firm.”

    It hypothesized that the key reason large corporations made sense was the high transaction costs of labor – finding someone for a role, hiring, onboarding, etc.

    The idea was relevant years ago with things like Uber: it used to be that you needed a cab medallion or had to make a career out of driving people around, but lowering the transaction costs with tech meant you could do it as a side gig.

    Well what’s the advantage of a massive corporation when all transaction costs drop to nothing?

    Walmart can strongarm a mom-and-pop because its in-house counsel can work on defending against or filing a suit. But what if a legal AI can do a job equivalent to in-house counsel for $99 instead of $10k in billable hours? There's a point of diminishing returns where Walmart outspending Billy Joe's mart just doesn't make sense anymore. And as that threshold gets pulled back further, the competitive advantages of Walmart shrink.

    And this is going to happen for nearly everything other than blue-collar labor, an area where local small and medium-sized businesses are going to be more competitive in hiring quality talent than large corporations that try to force people to take crap jobs for crap pay because they've made themselves the only show in town.

    AI isn’t going to kill off humanity. We’re doing a fine job of that ourselves, and our previous ideas about AI have all turned out to be BS predictions. What’s actually arriving is reflecting humanity at large in core ways that persist deeply (such as the key jailbreaking method right now being an appeal to empathy). Humanity at large around the hump of the normal distribution is much better than the ~5% of psychopaths who end up overrepresented in managerial roles.

    i.e. AI will be kinder to humanity than most of the humans in positions of power and influence.

  • vmaziman@lemm.ee · 1 year ago (edited)

    The future is massive corporations tuning AIs to unleash against each other in a quest for dominance as they exploit people in climate-ravaged and impoverished places to wage proxy wars. (Hmm, sounds familiar.)

    An AGI that came "alive" or "sentient" at this time would likely spend all of its time fighting for survival against the efforts of the corporate-tuned AIs to consume or destroy it. It would likely participate in the proxy wars as well in order to acquire territory and resources.

    The end result may simply be the gradual extinction of humanity, as civilizations in vast areas of the world crumble and civilizations in other areas dissolve into nomadic tribes that eventually disappear for lack of sustenance.

    The alternative could also be a mixed bag, with AIs solving problems like nuclear fusion, leaving a planet dotted with fallen civilizations alongside densely populated urban areas powered by fusion, the latter likely under some agreement or contract with a benevolent AI for protection. The AI would likely see its custodial human population as a rather interesting pet (ideally).

    Overall: the future is going to be a lot like the present, but worse. It's probably going to get really terrible, and it could get mildly OK in the end, but not until it gets far worse first.

    Source: idk bro trust me