Los Angeles is using AI to predict who might become homeless and help before they do
This is what state-run AI models should be doing, not any of that other wack-ass shit
Color me skeptical, considering this city specifically has the single most notoriously corrupt and violent police force in the history of the nation. Yeah, that model is being trained to “help”.
No thanks. If this is remotely successful these fucks will next use it to Minority Report us.
Next?
Definitely. Policy should be made on the basis of what’s proven to be effective, not ideology.
AI could be more effective, provided that what's been fed into it is not garbage.
What could also help is a department of housing anyone could walk into and get temporary housing that leads to full-time housing if needed.
I agree 100% as long as the criterion for obtaining government help is passing a drug test. I’m more than happy to have my tax dollars help someone who fell on hard times and needs some assistance to become a productive member of society again. I am not happy with my tax dollars going to house someone who would rather feed their meth or opiate addiction than get a job. Let them dig their hole and bury themselves in it.
Fuck you. Addicts are humans and should be treated with respect and decency.
Maybe, and this might be a bit out there but hear me out, maybe we should bin housing-last policies and switch to housing-first. Since it’s been, ya know, proven to reduce costs and help people.
I have a lot of concerns about AI, but if it’s getting people help and preventing them from ending up on the streets, I’m all for it.
The biggest problem I have with this idea comes from my recent experiences over the past few days with GPT-3.5 in particular.
Things like not being able to remember previous responses or prompts, just making up facts, or the data being outdated (September 2021 for GPT-3.5) and needing to be updated. Until issues like those are less of a problem, I don’t foresee it actually being usable for cities or really anything outside of maybe generating nonsense or random code snippets.
Also, I have concerns you’d see this being taken and used against minorities to discriminate against them. Whether that’s intentional or not I can’t say.
The AI they are talking about is most likely completely different from ChatGPT.
They are likely labeling people “at risk” using some very reliable old-school ML algorithm, such as XGBoost.
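For what it’s worth, here’s a rough sketch of what that kind of “at risk” scoring probably looks like under the hood. The features and data below are completely made up just to show the shape of it; nothing about the actual county model is public in this thread.

```python
# Toy sketch of an old-school gradient-boosted risk model (not the real LA system).
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical features: prior ER visits, months since last eviction filing,
# benefits enrollment, etc. -- all invented here for illustration.
X = rng.random((1000, 5))
y = (rng.random(1000) < 0.1).astype(int)  # 1 = later experienced homelessness

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Score everyone and flag the highest-risk cases for outreach.
risk = model.predict_proba(X)[:, 1]
flagged = np.argsort(risk)[::-1][:50]  # top 50 by predicted risk
```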
Biases are clearly a problem, but they are more manageable than human biases, because their mathematical form helps with finding and removing them. This is why, for instance, EU regulations require mathematical models in many areas, to replace “human intuition”: mathematical models end up being better for customers.
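And that’s the point about bias being measurable: with a model you can actually compute error rates per group and compare them, which you can’t do with a caseworker’s gut call. Toy numbers again, and the group labels here are purely hypothetical:

```python
# Sketch of a basic fairness check: compare false-positive rates across groups.
import numpy as np

def false_positive_rate(y_true, y_flag):
    negatives = y_true == 0
    return (y_flag & negatives).sum() / max(negatives.sum(), 1)

# Assumed to come from a model like the one above (made-up values).
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
flagged_mask = np.array([1, 0, 1, 1, 1, 0, 0, 1], dtype=bool)
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

for g in np.unique(group):
    idx = group == g
    print(g, false_positive_rate(y_true[idx], flagged_mask[idx]))
```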
They aren’t doing anything new, just calling it AI