I went to high school for one year in the UK, where a uniform was mandatory for every student.
I can assure you, it does not promote discipline in any way. Kids fight, do stupid things, and skip classes regardless of how they’re dressed.
A “grodge” sounds like some sort of distant cousin of the grue. Maybe they’re some sort of gremlins, and these people are gremlin breeders who are selling them?
You don’t do what Google seems to have done - inject diversity artificially into prompts.
You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For “german 1943 soldier” the accurate historical images are obviously far less likely to contain racially diverse people in them.
If Google has indeed already done that, and then still had to artificially force racial diversity, then their AI training model is bad and unable to handle the fact that a single input can map to many different images, rather than just the most prominent or average image in its training set.
I guess my fate is in the hands of the RNG gods.
This is an interesting topic that I remember reading about almost a decade ago - the trans-human AI-in-a-box experiment. Even a kill-switch may not be enough against a trans-human AI that can literally (in theory) out-think humans. I’m a dev, though nowhere near AI dev, but from what little I know, true general-purpose AI would also be somewhat of a mystery box, similar to how actual neural network behavior is sometimes unpredictable, almost by definition. So controlling an actual full AI may be difficult enough, let alone a true trans-human AI that may develop out of AI self-improvement.
Also, on an unrelated note, I’m pleasantly surprised to see no mention of ChatGPT or any of the image-generating algorithms - I think it’s a bit of a misnomer to call those AI; the best comparison I’ve heard is that “ChatGPT is auto-complete on steroids”. But I suppose that’s why we have to start using terms like general-purpose AI, instead of just AI, to describe what I’d say is true AI.
lol @ the exact percent
But no, I don’t think shitposts by themselves are actually the problem. I think the problem is when there are so many people dedicated to making shitposts that serious communities with serious discussions start getting overwhelmed by them, and when there are so many people who are only interested in shitposts that they upvote those shitposts to the top, often downvoting anyone who might offer a contrarian, non-funny opinion.
or IDK, I’m mostly speculating based on personal experience.
I think the smaller number of people on Lemmy compared to reddit, combined with the fact that it’s not nearly as well known, is a huge advantage for the quality of the comments. Not that there aren’t people like that here either, but I feel like the more popular a platform is, the more it gets filled, proportionally, with people trying to make witty, shitty, pointless remarks that are often clickbaity and avoid actual discussion, all in the interest of just getting more imaginary points.
Also, the process of “enshittification” (not a term I made up, look it up if you haven’t heard of it) has already started taking place on reddit due to its popularity.
I mean, if you’re just going to reveal the existence of the secret control room, it isn’t much of a secret, is it?
I only discovered it recently, and have been reading it when I’m bored and remember it exists. Also, I just discovered the Bill Watterson “cameo” - it is pretty amazing.