- Canada has a minister of artificial intelligence, who does not regulate specific in-vehicle software.
  - I’m the minister of wheat, but don’t ask me to regulate banana bread.
    - This seems reasonable?
    - It is not car specific and is fundamentally no different than having a YouTube video running on your car entertainment center’s dashboard. There SHOULD be some fucking regulations regarding what can actually run on those, but the end result of that is Android/Apple auto going away and it all being super proprietary subscription models.
    - I think the point is that banana bread is also made from wheat, where banana is just an added thing. Here the AI is just added to a car. And it’s obviously still AI.
- No, it’s basically the opposite.
  - He’s in charge of AI, and the car has it.
    - Then is he also in charge of YouTube? And Pornhub? And Spotify?
      - This is the problem with these nonsense titles, but presumably he is in charge of caring about “Grok AI” in general, but not “Grok AI in a Tesla.” Which lines up with the actual quote: “Canada has a minister of artificial intelligence, who does not regulate specific in-vehicle software. The minister’s office said in an emailed statement it wasn’t aware of Tesla’s plan to integrate Grok into vehicles sold in Canada but that it takes complaints seriously.”
      - If there’s AI in a car, the minister of AI should regulate it, no?
      - Why the fuck would the minister of AI give a shit about YouTube, Pornhub or Spotify unless they have AI?
        - YouTube actually uses AI widely; for instance, they use AI to brush up thumbnails of people’s videos without permission. But I suppose an AI minister would be very well aware that such things belong under his jurisdiction.
- Is an AI minister a minister powered by AI, versus a minister of AI, which would be the minister in charge of regulating said AI?
 
- Musk did it, and now he’s blaming it on Grok.
  - The island has been closed for years. Musk needs to get his supply somehow.
- “xAI’s Grok was created based on a philosophy of sort of absolute, radical openness, and it will talk about anything with anyone,” said Mark Daley, chief AI officer at Western University in London, Ont. [Photo caption: Mark Daley, chief AI officer at Western University, says Grok should post warnings to alert people of explicit content. (Hugo Levesque/CBC)] “[Musk is] a free speech extremist. He wants Grok to be completely open, to have any conversation with anyone. And that’s a principled stance that he’s taken, but it may not be what every consumer is looking for.”
  - Musk is only a self-declared free speech extremist, as anyone who knows anything about how he’s handled Xitter and Grok would be quick to point out. Not sure this was a great choice of subject-matter expert to interview for this article.
- “[Musk is] a free speech extremist”
  - No, he isn’t.
    - ?
      - Musk HATES free speech. He loves right-wing extremism and conspiracy theories and other brain rot like that, but mention human rights or what DEI really means and you’ll see how much he really cares about free speech.
      - FFS, he reprogrammed Grok multiple times because it was stating too many proven facts that Elmo personally disagrees with.
        - Can you imagine having a brain implant from his company? I would fucking double dare you to say something that Elmo doesn’t like; he’ll fry your ass.
      - He declared that “cis” was a slur and banned Twitter users who used it.
 
- This is what happens when you train the AI on Reddit and 4chan posts.
  - And mainly Xitter, which is the number one prioritized source for Grok.
 
- The part they didn’t even address in the article: even if this was NOT a child operating the system, what consumer is asking for a chatbot that responds to a conversation about soccer by veering off into “send nudes” territory? Where is the target audience who is like, “Oh right, was distracted by the sports, back to the pornography,” while driving?
  - The bot isn’t really cognizant of its own replies.
    - …Well, it could be, with a postfilter: anything from sanity-checking its own reply, to a tiny separate safety model (of which many exist), to a basic keyword check. But apparently Twitter has no resources for such things.
      - But the point is this isn’t really the intended design.
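For what it’s worth, the “basic keyword check” postfilter mentioned in the thread above is a few lines of code. This is a minimal sketch of the general idea only; the blocklist, fallback message, and function names are illustrative assumptions, not anything xAI or Tesla actually ships:

```python
import re

# Hypothetical blocklist of patterns a reply must not contain.
# A real deployment would use a maintained list or a safety model,
# not two regexes.
BLOCKLIST = [r"\bsend nudes\b", r"\bnudes\b"]
FALLBACK = "Sorry, let's get back to the soccer."

def postfilter(reply: str) -> str:
    """Return the model's reply unchanged, or a safe fallback
    if it matches any blocklisted pattern (case-insensitive)."""
    for pattern in BLOCKLIST:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return FALLBACK
    return reply

print(postfilter("What a goal by Messi!"))  # passes through unchanged
print(postfilter("anyway, Send Nudes?"))    # replaced by the fallback
```

A keyword check like this is the crudest option in the thread’s list; the other two it names (having the model sanity-check its own reply, or running a small separate safety classifier) catch paraphrases that regexes miss.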
 
- Conservatives’ insatiable lust for children must be stopped.
  - Why are people preaching values and morality constantly on the wrong side? Is it all some elaborate ruse?
- Moron AI can’t tell the difference between scoring a goal and scoring a date! 🤣 And it can’t figure out when “send nudes” is a joke, or reeks of pedophilia!
  - The former is just moronic, but the latter is criminal! Why are we allowing this shit?