Here’s the kicker: based on these AI-assigned definitions in the updated terms, your access to certain content might be limited, or even cut off. You might not see certain tweets or hashtags. You might find it harder to get your own content seen by a broader audience. The idea isn’t entirely new; we’ve heard stories of shadow banning on Twitter before. But the automation and AI involvement are making it more sophisticated and all-encompassing.
Man that’s gonna be one stupid AI
And one Nazi AI.
I don’t understand the difference
Stuff like this is my biggest reason to believe that the current anti-AI movement is incredibly misled.
They want to stop open scraping, but if they're successful, only companies like Twitter, Google, Disney, Getty, Adobe, and the like will have their own closed systems, which they'll either charge for or keep to themselves to replace workers, instead of the tech being open to all of us.
Open scraping is the only saving grace of all of this tech because it’s going to keep at least a number of options entirely free for anyone who wants to use them.
I'm not anti-AI, but the movement is also highly against mega corps scraping personal data, not just open scraping.
As a simple example, Copilot has been under heavy fire from the anti-AI community for a while now due to its use of openly licensed code without attribution.
But it won't matter, because any mega corp that scrapes data will just put it in their TOS, and literally zero percent of these people are going to get off Twitter or Bluesky or whatever big website ends up with an exemption to whatever law is passed to stop the scraping of data.
The only groups who will suffer will be researchers, open source software builders, and pretty much anyone who isn’t a corporation already.
There's no solution to this that will leave everyone 100% happy, but keeping the open internet open, and continuing the idea that has persisted since the beginning of the internet (that whatever you put out there is fair game for viewing), is ideal compared to the alternative.
Here is an idea: leave Twitter. Then just ignore it.
Well LLMs are about to get a lot less reasonable.
New model to be called ChatKKK.
Training AI to shitpost, lol.
As a Thai, I am very intrigued to see what the AI-trained version of @sugree will be like.
For context, Sugree made numerous Nostradamus-like "prophecy" tweets that preceded important events in modern Thai history, such as political movements, before he disappeared after a lawsuit.
Is this one of those accounts that made a ton of predictions, deleted the ones that didn’t come true, and only then did someone find out that they tweeted accurate things?
No. He just tweeted a whole damn lot, to the point that eventually anything and everything would come true anyway.
(He did not make “explicit” predictions. Just random shit that happened to come true.)
Left Twitter a few months ago, and it seems like every day there's a new reason confirming that leaving was a good call.
So not shadow banning so much as putting feeds thru a kaleidoscope 😂
Many people overlook that a significant portion of cutting-edge tools, including AI systems such as large language models (LLMs) and Stable Diffusion, are grounded in open source. That openness has drawn a broad spectrum of contributors, from novices to experts, who are diving into these technologies and pushing their boundaries every day.
Among the various projects and platforms, Meta's contribution is noteworthy, not necessarily out of altruism, but because of their strategic decision to release Llama 2 as open source.
It’s natural for people to feel a mix of intrigue and caution towards new technologies; while they’re perhaps attracted by the novelty, there’s also an inherent fear about the potential unknowns. This duality reflects human nature to seek progress, while also being wary of unforeseen consequences.