• 0 Posts
  • 36 Comments
Joined 2 years ago
Cake day: July 3rd, 2023


  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren’t detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution–a silver bullet–and it’s not.

    This leads to my biggest fear for the AI field of computer science: reality won’t live up to the hype. When that inevitably happens, companies, CEOs, and ordinary people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML applications will be shut down, and real academic research will dry up.


  • Just from a quick look at https://fediverse.observer/, it looks like the Fediverse is mostly steady at 1-1.25 million monthly users (give or take) over the past two years with a slight decreasing trend. I think there are some reasons for this that are not entirely in our control.

    There seems to be a global sentiment of disconnecting from social media and the internet in general, so I wouldn’t be surprised if every platform is seeing a decaying user base. Anecdotally, among the people I see in real life, there is a general sense of exhaustion with online spaces. Whether it’s corporate-owned, enshittified platforms or even places on the Fediverse, the people with whom I interact tend to find the entire thing hollow. They’ve trimmed down to one or two platforms (if that). In fact, I’ve even started to get that way. In the past, if someone were wrong and arguing against a point I made, I’d engage, especially on something I have expertise in. Now, why bother? There’s no use arguing; people have little interest in admitting fault or engaging in good faith (again, anecdotally). That said, I’ll concede that the Fediverse is a bit better on that front, but not by much.

    Then there’s the alternative nature of the Fediverse. It’s been rehashed over and over how “difficult” it is to get on and use. It’s not actually that hard, but the barrier to entry is an extra step, and that small extra step frightens people away from even joining. The only time that barrier gets broken is when a “legacy” social media platform does something anti-user. Then a refugee wave comes in and recedes, leaving a modest but durable increase in users. Recently, there just hasn’t been a major controversy on a major platform driving people here.

    Now, my final thought on this is to ask: Is a small and steady-ish population (despite modest decay) actually bad? In my view, it isn’t. Being smaller, with a small barrier to entry, means we exclude a sizable share of the low-effort population. So there’s less (though not zero) slop here. Plus, discussions, when had in good faith, can be much deeper and less filled with stupid low-effort jokes. Overall, I’m not too concerned with the number of people on the Fediverse. Growth isn’t necessarily the best thing. Even so, with the way most mainstream platforms are going, it’s inevitable that they will do something stupid that drives more people to the Fediverse, at least for a time.

    TL;DR: The monthly population is mostly steady with a modest decay. Most social media is likely seeing similar trends. I don’t think the smaller userbase is that bad of a thing.




  • It’s also not all-or-none. Someone who is otherwise really interested in learning the material may just skate through using AI in a class that is uninteresting to them but required. Or someone might have life get in the way while taking a class from a particularly strict instructor who doesn’t accept late work, and using AI is just a means of not falling behind.

    The ones who are running everything through an LLM are stupid and ultimately shooting themselves in the foot. The others may just be taking a shortcut through some busy work or ensuring a life event doesn’t tank their grade.


  • I see both points. You’re totally right that for a company, it’s just the result that matters. However, to Bradley’s point, since he’s specifically talking about art direction, the journey is important insofar as it gets you a passable result. I’ve only dabbled with 2D and 3D art, but converting to 3D requires an understanding of the geometries of things and how they look from different angles. Some things look cool from one angle and really bad from another. Doing the real work lets you figure that out and abandon a design before too much work is put in, or modify it so it works better.

    When it comes to software, though, I’m kinda on the fence. I like to use AI for small bits of code and knocking out boilerplate so that I can focus on making the “real” part of the code good. I hope the real, creative, and hard parts of a project aren’t being LLM’d away, but I wouldn’t be surprised if that’s a mandate from some MBA.













  • It’s a fundamental misunderstanding of what Section 31 is supposed to be. Sloan wasn’t a good guy. 31 actively tried to commit genocide.

    The idea behind them is that arguments about the ends justifying the means and “getting dirty” to preserve higher ideals are morally, philosophically, and practically bankrupt. The Federation didn’t need 31 to win the war; in fact, their methods would have made it much worse. Section 31 as a plot device exists to show us that there will always be those looking to use higher ideals to justify terrible actions, and we must be constantly vigilant against them.

    It truly pains me how that message has been twisted, and people think Section 31 are not only good guys but also cool.


  • The thing I’m heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/“leadership” group. They actually think these models are intelligent. I’ve heard people say, “Well, just ask the AI,” meaning ask ChatGPT. Anyone who actually does that and thinks they have a leg up is kidding themselves. If they outsource their thinking and coding to an LLM, they might get ahead quickly, but they will then fall behind just as quickly because the quality will be middling at best. They don’t understand how to best use the technology, and they will end up hanging themselves with it.

    At the end of the day, all AI is just stupid number tricks. They’re very fancy, impressive number tricks, but they’re still just number tricks that happen to be useful. Relying solely on AI will lead to the downfall of an organization.