This seems like it has pretty powerful potential for space flight.
Being able to aggressively min-max packaging materials to secure cargo could be critical for reducing payload mass on launches, where every single gram counts.
Each kg of packaging is thousands of dollars to get into orbit, so that’s really appealing.
I’d be curious to see if Amazon is also working on box-packing algorithms for optimally fitting n parcels across x delivery trucks.
I.e. if you have 10,000 boxes to move, what’s the fewest delivery trucks you can fit those boxes into, as fast as possible too, which introduces multiple complex problems: both packing to maximize space usage and the order you pack in to minimize robotic-arm travel time…
I’d put money down that Amazon is perfecting this algorithm right now, and has been for a while.
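The truck-minimization part is the classic bin packing problem, which is NP-hard, so in practice it gets solved with fast heuristics rather than exact search. A minimal 1-D sketch using first-fit decreasing (volumes only; this ignores box geometry and loading order, and all names here are illustrative):

```python
def first_fit_decreasing(box_volumes, truck_capacity):
    """Greedy first-fit decreasing heuristic for 1-D bin packing.

    Sorts boxes largest-first, then places each box into the first
    truck with enough remaining room, opening a new truck otherwise.
    Returns a list of trucks, each a list of box volumes.
    """
    trucks = []  # each entry: [remaining_capacity, list_of_boxes]
    for vol in sorted(box_volumes, reverse=True):
        for truck in trucks:
            if truck[0] >= vol:
                truck[0] -= vol
                truck[1].append(vol)
                break
        else:
            # no existing truck has room: open a new one
            trucks.append([truck_capacity - vol, [vol]])
    return [boxes for _, boxes in trucks]


print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))
# e.g. packs all six boxes into 2 trucks of capacity 10
```

Real fleet routing would layer 3-D geometry, weight limits, and delivery-stop ordering on top of this, which is what makes the combined problem so hard.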
This is already worked out in mathematics; packing is its own mathematical field. We can optimize packaging through algorithms that are very fast and accurate. No need to train an AI for that, and especially not for space flight: AI is prone to hallucinations, and that is not something you want anywhere near a space mission that requires precision and predictability. I believe Johannes Kepler started this field in the 1600s, so it is not something new. It is definitely a complex problem, but not new and not unheard of. Amazon is not exactly inventing something new and amazing here…
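For context, the result Kepler conjectured in 1611 (and that Thomas Hales proved only centuries later) is the maximum density achievable when packing equal spheres in 3-D space:

```latex
% Kepler conjecture: no packing of equal spheres has density
% greater than the face-centered cubic / hexagonal close packing
\frac{\pi}{3\sqrt{2}} \approx 0.7405
```

So roughly a quarter of the volume is always wasted even in the best sphere packing, which is part of why box-shaped packaging dominates logistics.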
AI is not prone to hallucinations, LLMs are. I doubt Amazon is building a chatbot to optimise packaging.
What do you consider to be an AI?
And do you consider any of the existing systems to be the one?
When I use “AI” I’m using computer science terminology. Artificial intelligence is a subfield of CS; in that sense, any model that comes out of that field is, by definition, AI.
Then it’s strange that you are separating AI and LLM, because in CS LLM is a type of artificial intelligence.
AI as a whole is not subject to the flaws of LLMs
Some AI, namely LLMs, can hallucinate, but not AI in general. I just had a bit of fun in how I worded it; I guess I should’ve expected someone to become annoyingly nitpicky about it.
Technicalities matter in technological matters.
I don’t think I was technically wrong. I do think you can write that way if you want to be a bit facetious, but I’m not a native speaker, so maybe not.
AI in general is definitely prone to hallucinations. It is more commonly seen in LLMs because they are more widely used by the public, but it is definitely a problem with all AI.
Besides generative AI, which models can hallucinate?
Text to video, automated driving, object detection, language translation. I might be misusing the term; you could argue that the word describes what LLMs commonly do and that is where the term is derived from. You can also argue that AI is sometimes correct and the human has issues identifying the correct answer. But in my mind it is much the same, just different applications. A car completely missing an approaching firetruck or an LLM just spewing out wrong statements is the same to me.
Yeah, well it’s not the same. Models are wrong all the time, why use a different term at all when it’s just “being wrong”?
The model makes decisions thinking it is right, but for whatever reason can’t see a firetruck or stop sign, or misidentifies the object… you know, almost like how a hallucinating human would perceive something from sensory input that is not there.
I don’t mind giving it another term, but “being wrong” is misleading. You are correct, though, in the sense that it depends on each given case…
No, the model isn’t “thinking”; no model in use today has anything resembling an internal cognitive process. It is making a prediction. A COVID test predicts whether you have the COVID-19 virus inside you or not; if its prediction contradicts your biological state, it is wrong. If an object-recognition algorithm does not predict that there is a firetruck, how is that not being wrong in the same way?
Amazon probably does have some programmatic way of determining how much to fit in a truck, but that’s not what this is. Instead, it’s them trying to cheap out on packaging materials in the dumbest way possible, by figuring out what the reasonably acceptable minimum threshold is for packaging durability but not taking into account size or packing multiples of items at all (as far as I can tell).
This is a pure cost cutting measure on their part. Anything else is just a tangential side benefit.