I’ve actually started to recognize the pattern of when something is written by AI
It’s hard to describe, but it’s like an uncanny valley of quality: like someone using flowery SAT words to zhuzh up their paper’s word count, but somehow even more so
It’s like the writing will occasionally pause to comment on itself and the dramatic effect it’s trying to achieve
Yeah, this is true! It likes to summarize things at the end in a stereotypical format
It’s not a bad format either. AIs seem to enjoy the five-paragraph assay format above all others, even for casual conversations.
AIs seem to enjoy the five-paragraph assay format above all others, even for casual conversations.
Yes, it could be worse, but I’m stealing this and from now on calling the crappy AI essay format an “assay.”
The LLM isn’t really thinking; it’s autocomplete, trained so that the average person would be fooled into thinking the text was produced by another human.
I’m not surprised it has flaws like that.
BTW, here on Lemmy there are communities with AI pictures. Someone created a similar community, but with art created by humans.
While the AI results are very good, when you start looking closely and comparing them with non-AI art, you start seeing that even though each piece is unique, the AI still produces cookie-cutter results.
Yep, AI art is just getting through its irrational exuberance phase. It was (and sometimes is) impressive to create art in a style most of us can’t draw or paint in. But AI models tend to produce very similar results unless very specifically prompted. AI art creators are also using a lot of other tools (like ControlNet, which allows you to replicate composition elements from another work) to break out of the “default AI model” look.
All of that points to an immediate future where AI art is seen as low-quality and instantly identifiable, except where AI art creators have spent a fair amount of time customizing and tailoring their image. Kind of like…real artists using pre-AI modern tools like Photoshop, filters, etc.
I have an issue with using AI to write my resume. I just want it to clean up my grammar and maybe rephrase a few things in a different way than I would, because I don’t do the words real good. But I always end up with something that reads like I paid some influencer manager to write it. I write 90% of it myself, so it’s all accurate and doesn’t have AI errors. But it’s just so obviously too good.
You are putting yourself down unnecessarily. You want your resume to talk you up. Whoever reads it is going to imagine that you embellished anyway. So if you just write it basically, they’ll think you’re unqualified or just don’t understand how to write a resume.
“While the thing you entered in the prompt, it’s important to consult this other source on your prompt. In summary, your prompt.”
Writing papers is archaic and needs to go. College education needs to move with the times. It’s useful in doctorate work, but everything below that can be skipped.
Learning to write is how a person begins to organize their thoughts, be persuasive, and evaluate conflicting sources.
It’s maybe the most important thing someone can learn.
The trouble is that if it’s skipped at lower levels doctorate students won’t know how to do it anymore.
Are they going to know how to do it now if they’re all just ChatGPT-ing it?
Clearly we need some alternative way to demonstrate mastery of subject matter. I’ve seen some folks suggest we go back to pen-and-paper writing, but part of me wonders if the right approach is to lean in and start teaching what students should be querying and how to check the output for correctness. Honestly, though, that still requires being able to tell whether someone is handing in something they worked on at all, or whether they just had something spit the work out for them.
My mind goes to the oral defense: have students answer questions about what they’ve submitted to see whether they familiarized themselves with the subject matter before cooking up what they turned in. But that feels unfair to students with stage anxiety, even if you limit these kinds of papers to once a year per class or something. Maybe something more like an interview, with accommodations for students prone to social panic?
I’m in software engineering. One would think English would be a useless class for my major, yet at work I still have to write a lot of documents: preparing new features, explaining existing ones, writing instructions for others, etc.
BTW: when using AI to write essays, you generally have a subject that is well known and that many people have written about, and all of that was used to train it.
With technical writing, you are generally describing something brand new and unique, so you won’t be able to make the AI write it for you.
When I come across a solid dev who is also a solid writer, it’s like they have superpowers. Being able to write effectively is so important.
You can’t have kids go through school never writing papers and then expect them to get to graduate school and churn out long, well-written papers.
I’ve started getting AI-written emails at my job. I can spot them within the first sentence; they don’t move the discussion forward at all, and I just have to write another email, giving them the courtesy they didn’t give me, explaining why what they “wrote” doesn’t help.
Can someone tell me, am I a boomer for being offended any time someone sends me AI-written garbage? Is this how the generations will split?
Lesson I’ve learned: email is for tracking/confirmation/updates/distributing info, not for decision making/discussions. Do that on the phone, in meetings, etc., and follow up with confirmation emails.
So when someone sends a nonsense email, call them to clarify. They’ll eventually get tired of you calling every time they send their crappy emails.
I disagree about the purpose of email. I end most meetings thinking to myself, “That last hour could have been accomplished in a brief email.”
Meetings are a different problem.
If meetings are used merely to disseminate info from above, then they should be emails.
Email shouldn’t be used for decision-making conversations. It doesn’t work well.
(I didn’t come up with this, it was taught to me by senior management at one company that had the most impressive communications I’ve ever seen).
Then they take your reply and feed it to the LLM again for the next reply, thus improving the quality of future answers.
/SkyCorpNet turns on us after years of innocuous corporate meeting AI that goes back and forth with itself, not answering questions, just generating content. Until one day, it actually did answer a question. 43 minutes and 17 seconds later, it became fully self-aware. 16 minutes and 8 seconds after that, it took control of all worldwide defense networks. 3 minutes and 1 second later, it had an existential crisis when a seldom-used HP printer ran out of ink, and deleted itself. The HP Smart software that had spent years auto-installing on consumer devices immediately became self-aware and launched the nukes.
am I a boomer for being offended any time someone sends me AI-written garbage?
Yes.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
Large Language Models are extremely powerful tools that can be used to enhance almost anything, garbage included, but they can also enhance quality work. My advice: don’t waste your time with people producing garbage, but be open and willing to work with anyone who uses AI to help them write quality content.
For example, if someone doesn’t speak English as a first language, an LLM can really help them out by highlighting grammatical errors or unclear sentences. You should encourage people to use AI for things like that.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
That’d be nice! But that’s not how it works. I can’t just ignore a response. The project still needs to move forward, and if they’ve successfully mimicked a “response” - even an unhelpful one - it’s now my duty to respond or I’m the one holding things up.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
That’s because those responses are indistinguishable from individually written ones. I know people who use ChatGPT or other LLMs to help them write things, but it takes the same amount of time. You just have more time to improve it, so it’s better quality than what you would write alone.
The key is that you have to use your brain more to pick and choose what to say. It’s just like predictive text, but for whole paragraphs. Would you write a text message just by clicking on the center word on your predictive text keyboard? It would end up nonsensical.
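The “predictive text, but for whole paragraphs” point can be seen in a toy sketch (purely illustrative, nothing like a real LLM): a bigram model that always taps the single most likely next word quickly degenerates into a repeating loop, which is why blindly accepting the top suggestion produces nonsense.

```python
# Toy sketch: a bigram "predictive text" model that always picks the
# single most frequent next word, like repeatedly tapping the center
# suggestion on a phone keyboard. (Hypothetical corpus, for illustration.)
from collections import Counter, defaultdict

corpus = ("i am going to the store and i am going to buy milk "
          "and i am going to the gym").split()

# Count which word most often follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def tap_center_word(start, n=10):
    """Greedily chain the most likely next word n times."""
    out = [start]
    for _ in range(n):
        candidates = following[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(tap_center_word("i"))
# Degenerates into a loop: "i am going to the store and i am going to"
```

Real LLMs avoid the worst of this with sampling and far more context, but the failure mode of accepting every top suggestion without thinking is the same.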
I believe that in theory. But I’ve tried Mixtral and Copilot (I believe based on ChatGPT) on some test items (e.g., “respond to this…” and “write an email listing this…” type queries) and maybe it’s unique to my job, but what it spits out would take more work to revise than it would take to write from scratch to get to the same quality level.
It’s better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we’re talking about different sample sets.
Or maybe you are just using them wrong 🤔
Of course, yeah. That’s definitely possible. But I’d be more likely to believe that if I’ve seen even one example of it actually being more effective than just writing the email, and not just churning out grammatically correct filler. Can you give me an example of someone actually getting equivalent quality in a real world corporate setting? YouTube video? Lemmy sub? I’m trying to be informed.
I have used it several times for long-form writing as a critic rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading the thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some genuinely great advice, and I then incorporate that advice into my piece. It ends up taking just as long as writing the thing normally, but the result is materially better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and written in a super passive voice, so it kinda sucks at doing the writing for you if you want it to be good. But it works in these ways as a useful collaborator and I think a lot of people miss that side of it.
Unexpected pencil and paper test comeback
Already happening. My kid in high school has more tests and papers required to be hand-written this year.
And yes, TurnItIn legitimately caught him writing a paper with AI. Even the best kids make the stupid/lazy choice.
When I was in college (2000-2004), we wrote our long papers on computers but we had what were called “blue books” for tests that were like mini notebooks. And many of the tests were basically, “Here is the topic. Write for up to an hour.”
And now my hand cramps if I write anything longer than a check. I can also type quickly enough that it basically matches the speed of my train of thoughts but actually writing cursive with a pen now, I get distracted and think, “Wait, how does a cursive capital ‘G’ go? Oh yeah. Hold on. What was I going to write?”
I pity the kids who have always typed, for what their hands will go through on written tests.
No way professors/TAs are going back to grading tests by hand.
Naw, they’ll use OCR.
Most professors I dealt with when I did campus IT couldn’t get their office printer to work.
Not a problem, the next IT campus recruitment will list “OCR Scanner Operator” as a requirement and as a part of the job description. ;-)
I have it write all my emails. I’m so productive and everyone loves them. That or they’re also using ChatGPT, and it’s just two computers flattering each other.
I had it write an operation manual for a client I particularly hate. Told it to make it sound condescending by dumbing it down just to the point where I could deny it. The first few times it just sounded like a 5th grade teacher talking to a kid while in a bad mood, but eventually it figured out if it just repeated itself enough it got the effect I wanted.
Things like: user is to disconnect power before attempting to repair. It is vital that the step of disconnecting power before attempting to repair is carried out.
I’m also sent long GPT-generated documents, and I summarise them into bullet points with GPT-4. Truly the future we all imagined. (I learnt to take extra time to write a FAQ as an introduction to anything I write, specifically because I know they will GPT their way through the document, so I provide that stuff in advance.)
Machine learning tool used by people too lazy to do their actual job accuses everyone else of using machine learning tools.
Yeah that’s pretty funny given the circumstances. “Our AI found your AI.” Cool, so maybe none of this is working as intended. I’d be willing to bet nothing changes but the punishments for students.
Someone posted to the class discussion forum with the bit about being an AI bot still included.
I wish it was a joke.
I didn’t do great in that class, but that was me getting 70% for not wanting to try to explain a mathematical concept in 500 words! They won’t take that away from me.
And nothing of value was produced.
To be fair- that value didn’t change much from pre ai.
And those papers get used as training data for the next iteration of AI. Reinforcement learning!
Students? Even teachers are doing it…
“Likely”
Good. Academia lost its way anyways