So here’s the path that you’re envisioning:
Someone wants to send you a communication of some sort. They draft a series of bullet points or short version.
They have an LLM elaborate it into a long-form email or report.
They send the long-form to you.
You receive it and have an LLM summarize the long-form into a short-form.
You read the short form.
Do you realize how stupid this whole process is? The LLM in step (2) cannot create new useful information from nothing. It is simply elaborating on the bullet points or short version of whatever was fed to it. It’s extrapolating and elaborating, and it is doing so in a lossy manner. Then in step (4), you go through ANOTHER lossy process. The LLM in step (4) is summarizing things, and it might be removing some of the original real information the human created in step (1), rather than the useless fluff the LLM in step (2) added.
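The elaborate-then-summarize round trip can be sketched as a toy model. To be clear, this is an illustration of the lossiness argument only: neither function is a real LLM, and both are made-up stand-ins that are deliberately lossy.

```python
# Toy model of the elaborate -> summarize pipeline from steps (2) and (4).
# Neither function is a real LLM; both are made-up stand-ins that are
# deliberately lossy, to show the round trip is not the identity.

def elaborate(bullets):
    # Step (2): pad each bullet with filler; bullet boundaries blur.
    return " ".join(
        f"It should be noted that {b.lower()}, among other things." for b in bullets
    )

def summarize(text, keep=4):
    # Step (4): keep only the first `keep` words of each sentence.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [" ".join(s.split()[:keep]) + " ..." for s in sentences]

bullets = ["Budget approved for Q3", "Launch slips two weeks", "Hiring freeze lifted"]
round_trip = summarize(elaborate(bullets))

print(round_trip)               # the filler survives, the specifics are gone
assert round_trip != bullets    # two lossy steps do not cancel out
```

In this toy version the summarizer happens to keep exactly the words the elaborator added; a real LLM pair fails less predictably, but the underlying point is the same: step (4) cannot reliably tell the original content from step (2)'s fluff.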
WHY NOT JUST HAVE THE PERSON DIRECTLY SEND YOU THE BULLET POINTS FROM STEP (1)???!!
This is idiocy. Pure and simple idiocy. We start with a series of bullet points, and we end with a series of bullet points, and it’s translated through two separate lossy translation matrices. And we pointlessly burn huge amounts of electricity in the process.
This is fucking stupid. If no one is actually going to read the long-form communications, the long-form communications SHOULDN’T EXIST.
Yep, pretty much every single “good” use case of AI I’ve seen is basically a band-aid solution to enshittification.
You know what’s a good solution to that? Removing the profit motive.
That’s not what I am envisioning at all. That would be absurd.
Ironically, GPT-4o understood my post better than you :P
" Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities."
If you believe that AI summary, I have a bridge that I’d like to sell to you.
As the author of the post it summarized, I agree with the summary.
Now, tell me more about this bridge.
Do look up the “Forer effect” and then read that AI summary again.
Haha, yeah, I’m familiar with it (I always heard it called the Barnum effect, though it sounds like they’re the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs-style response.
In this case it actually summarized my post (I guess you could make the case that my post is an opinion that’s shared by many people, so Forer-y in that sense), and to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.
Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype bros (Altman, Musk, etc.). The tech is transformative and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.
I read this comment chain and no? They are giving you actual criticism about the fundamental behaviour of the technology.
The person basically explained the broken-telephone game and how “summarizing” will always have data loss by definition, and you just responded with:

“In this case it actually summarized my post (I guess you could make the case that my post is an opinion that’s shared by many people, so Forer-y in that sense)”
Just because you couldn’t notice the data loss doesn’t mean the principle isn’t true.
You’re basically saying that translating something from English to Spanish and then back to English again is flawless because it worked for some words for you.
I’m not saying anything you guys are saying that I’m saying. Wtf is happening. I never said anything about data loss. I never said I wanted people using LLMs to email each other. So this comment chain is a bunch of internet commenters making weird cherry-picked, straw-man arguments and misrepresenting or miscomprehending what I’m saying.
Legitimately, the LLM grokked the gist of my comment while you all are arguing against your own straw-man arguments.
“I have it parse huge policy memos into things I actually might give a shit about.”

“I’ve used it to run through a bunch of semi-structured data on documents and pull relevant data. It’s not necessarily precise but it’s accurate enough for my use case.”

Here are two cases from your original comment that would have data loss. I get you didn’t use the phrase “data loss”, but that doesn’t mean your examples didn’t have that flaw.
Sorry if you view all this as Lemmy being “anti-AI”. For me, I’m a big fan of ML and what things like image recognition can do. I’m not a fan of LLMs becoming so overhyped that they basically gave the other ML use cases a bad name.