this post was submitted on 17 Apr 2024
109 points (91.6% liked)

Technology

top 48 comments
[–] [email protected] 76 points 7 months ago (2 children)

This... actually seems like a good use of AI? I generally think AI is being shoehorned into a lot of use cases where it doesn't belong, but this seems like a proper place to use it. It's serving a specific and defined purpose rather than trying to handle unfiltered customer input or do overly generic tasks.

[–] [email protected] 12 points 7 months ago

Eh, I doubt that. Bin packing is a very well-researched problem. It's one of those nasty NP ones, but we already have very good algorithms giving very good approximations in very short amounts of time. The chance that throwing machine learning at the problem helps is not zero, but close to it. What that kind of approach certainly won't get you is guarantees; those approximation algorithms can be configured to spit out solutions that are at most 1% (or whatever you want) worse than the optimal solution.

I doubt this actually has anything to do with Amazon's logistics operations; it's just their marketing team wanting to hype up Amazon for AI.
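As a concrete example of such an approximation algorithm: a minimal sketch of first-fit decreasing (FFD), which is known to use at most roughly 11/9 of the optimal number of bins, plus a small constant. Item sizes and capacity here are made-up numbers, purely illustrative.

```python
# First-fit decreasing: sort items largest-first, place each into the first
# bin with enough free space, opening a new bin only when nothing fits.

def first_fit_decreasing(item_sizes, bin_capacity):
    """Greedy bin-packing heuristic; returns the contents of each bin."""
    free = []      # remaining free capacity of each open bin
    contents = []  # items placed in each bin
    for size in sorted(item_sizes, reverse=True):
        for i in range(len(free)):
            if size <= free[i]:
                free[i] -= size
                contents[i].append(size)
                break
        else:  # no existing bin can hold this item: open a new one
            free.append(bin_capacity - size)
            contents.append([size])
    return contents

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], bin_capacity=10)
print(len(packed))  # → 2 bins (optimal here: total size 20, capacity 10)
```

Runs in a few microseconds for inputs like this; the guarantee comes from the algorithm's analysis, not from training data.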

[–] [email protected] 5 points 7 months ago* (last edited 7 months ago) (2 children)

Yeah, it is one of the least bad uses for it.

But then again: using literal terawatt-hours of compute power to save on the easiest actually-recyclable material known to man (cardboard)? Maybe that's just me, maybe I'm too jaded, but it sounds like a pretty bad overall outcome.

It isn't a bad deal for Amazon, though, which is likely to save on costs that way, since energy is still orders of magnitude cheaper than it should be[^1], and cardboard is getting pricier.

[^1]: if we were to account for the available supply, the demand, and the (sooner rather than later) need to transition towards new energy sources... some of which simply do not have the same potential.

[–] [email protected] 32 points 7 months ago (2 children)

I think you're overstating the compute power and understating the amount of cardboard Amazon uses

[–] [email protected] 5 points 7 months ago* (last edited 7 months ago)

So this may be a more efficient use of computing power. Brute-force calculation of combinations is costly because there are so many possibilities. A learning model can be fed data from brute-force calculations and from humans tasked with packing efficiently to develop an empirical model (this is the AI part) of how to package assorted items. That model could take much less computing power than the brute-force method.
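To get a feel for why brute force is costly: the number of ways to split n items into groups is the Bell number, which grows super-exponentially. A short sketch using the standard recurrence (this is general combinatorics, not anything Amazon-specific):

```python
# Bell numbers via the recurrence B(n) = sum_{k=0}^{n-1} C(n-1, k) * B(k).
# B(n) counts the ways to partition n items into non-empty groups.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(n):
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

print(bell(5), bell(10))  # → 52 115975
print(bell(20))           # already in the tens of trillions
```

Even before accounting for box-size choices and item orientations, enumerating groupings alone blows up fast, which is why heuristics or learned models are used instead.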

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

I think you're overstating the compute power [...]

I don't actually think so. A100 GPUs in server chassis have a 400 or 500 W TDP depending on the configuration, and even assuming 400 W, with 4 per watercooled 1U chassis, a 47U rack of those would draw about 100 kW with power-supply losses and whatnot.

Running that for just a day is about 2.4 MWh.

Now, I'm not assuming Amazon would own hundreds of those racks at every DC, but they probably would use at least a couple of such racks to train their model (time is money, right?). Training for a week on just two of those would be about 34 MWh, and I can only extrapolate from there: run fleets of racks year-round and you're quickly into GWh territory.

So I don't think that talking in those orders of magnitude is such an overstatement.
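The rack-level arithmetic can be checked back-of-envelope. The 400 W TDP, 4 GPUs per 1U chassis, 47U rack, and ~33% PSU/cooling overhead are assumptions from the comment, not measured figures:

```python
# Back-of-envelope energy estimate for A100 training racks.
gpus_per_rack = 47 * 4                       # 47 x 1U chassis, 4 GPUs each
rack_watts = gpus_per_rack * 400 * 1.33      # ≈ 100 kW with overhead
one_rack_day_mwh = rack_watts * 24 / 1e6     # one rack, one day
two_racks_week_mwh = rack_watts * 2 * 24 * 7 / 1e6  # two racks, one week
print(round(one_rack_day_mwh, 1), round(two_racks_week_mwh, 1))  # → 2.4 33.6
```

Note the unit: a single rack-day lands in the low MWh, so GWh and beyond only appear once you multiply across many racks running continuously.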

[...] and understating the amount of cardboard Amazon uses

That, very possibly.

I have seldom used Amazon, maybe 5 times tops, and I can only remember two of those times: I ordered a smartphone and a bunch of electronics supplies, and I don't remember the packaging being excessive. But I know from plenty of memes that they regularly overdo it. That, coupled with the insane amount of shit people order online... yes, I believe you are right on that one.

Even so, as long as it is cardboard, or paper, and not plastic and glue, it isn't a big ecological issue.

However, that makes no difference to Amazon financially; cost is cost, and that's all they care about.

But let's not pretend they are doing a good thing here. It is a cost-effective measure for them that ends up worsening the situation for everyone else, because the tradeoff is good economically and terrible ecologically.

If they wanted to do a good thing, they could use machine learning to optimise the combining of deliveries in the same area, to save on petrol and, by extension, pollution from their vehicles. But that would actually worsen the customer experience and end up costing them more than it would save, so that's never gonna happen.

[–] [email protected] 3 points 7 months ago

They may also save costs on trucking. Smaller boxes => more packages per truck.

[–] [email protected] 31 points 7 months ago

AI

Always Indian

[–] [email protected] 12 points 7 months ago

Just use whatever Temu uses.

Temu packagers could fit a whole factory in the boxes Amazon uses to ship my deodorant

[–] [email protected] 11 points 7 months ago (3 children)

Note that "optimizing" Amazon packaging can't possibly be a very high bar to clear. Just being smart enough to pack multiple items coming from the same distribution center on the same delivery route into the same box would do it... something other online retailers figured out decades ago, but apparently somehow Amazon still hasn't.

[–] [email protected] 28 points 7 months ago (1 children)

Used to work at an Amazon warehouse; things are a lot more complex than you seem to realize.

[–] [email protected] 7 points 7 months ago (1 children)

Care to expound? Can you explain why a small bottle of vitamins will sometimes come in a box 8 times its size, filled with air-bubble packing? I've always got the sense that box size was not at all a priority for them.

[–] [email protected] 20 points 7 months ago (1 children)

Depends on how it is fulfilled: whether it comes from an Amazon warehouse directly vs. fulfilled by a third party (if it comes in an Amazon-branded box with Amazon tape, it probably got fulfilled at an Amazon warehouse).

If it did get fulfilled at an Amazon warehouse, at the one I worked at it goes through a process wherein it is retrieved either manually by a "picker" or via a KIVA bot carrying shelves of items (depends on how old the warehouse is; I'd be surprised if they're not all converted to the KIVA-bot style by now, as it's been nearly ten years since I worked there, and I worked in a brand-new warehouse at the time that already had the bot style).

So the picker puts it into a bin with several other items, all scanned together using the ASIN number (a separate Amazon barcode, longer or shorter than other barcodes), which gets loaded onto a conveyor that eventually ends up at a sorter.

If it's AFE (multi-item orders, the department I mostly worked in), it gets pushed to a certain line where it's manually sorted further from the yellow bin, scanned again, and placed into a smaller grey bin (rebin), which goes to another sorter and eventually into another line where it gets placed into a wall of cubby-holes (I believe that was called induction). The cubby-holes hold all the items for an order; once it's "complete," you push it through to the other side of the cubby-hole, where the packers are.

The packers have a screen that tells them what items are in the order, along with which box to use. They have a whole wall of different box sizes in front of them, along with a feed of the larger bubble-cushion things and an automatic tape dispenser for the box size the system said was needed (it didn't work a lot of the time, so there were also buttons to select a specific box size or tape).

After all that, the packer pushes it forward onto another conveyor belt, where it is weighed automatically to hopefully ensure it is correct. If it is close enough to the correct weight, it goes out to shipping. (If not, it gets kicked out for problem solvers to figure out what's wrong with it; that was my main job.)

Single-item packing is slightly less complex for obvious reasons (you don't have to stage the items together) but is the same basic idea.

Now to answer the question specifically: why does a small bottle of vitamins end up in a large box? Either they ran out of the correct box, or it was just an incompetent worker who doesn't care what box they use regardless of what the system tells them. Technically the system could kick it out, but that's a lot of extra time and effort, and a wasted box.

[–] [email protected] 5 points 7 months ago (1 children)

Interesting, thanks. I'm guessing that Amazon maybe isn't great at incentivizing workers to care. If the last step for a single item is a human putting it into a box, I could see it being easier to keep a stack of big boxes to default to rather than paying attention to size recommendations.

[–] [email protected] 4 points 7 months ago

I only lasted 6 months, if that's any indication lol.

It was a really cool job, but you can't have your phone while working (you have to literally leave it outside in a locker; there are metal detectors you walk through to get in and out), the breaks are way too short (there were times more than half my break was spent just walking from my area to the break room when I did pick), and to top it off, it was a 4/10 shift (overnights for me), and frequently they would tell us on the last day, right before midnight, that we had to work another full 10-hour day tomorrow.

After several months of 5/12s during "peak" season (Nov–March), I had enough.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

In my experience, every item from the same warehouse comes packaged together. Are you sure the items are sourced from the same warehouse? They aren't going to unpack them and pack them together again when they reach the final distribution location. Perhaps it becomes super inefficient to pack items together in very large warehouses, where the items are stored far apart from each other?

[–] [email protected] 2 points 7 months ago (1 children)

Bruh did you read the article at all? Nothing you talked about has anything to do with what this AI is for.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

Yes, I did. And what it talks about actually ignores my complaint, which is why I file their claim about avoiding "more than 2 million tons of packaging material worldwide" in the bogus column.

Their system obviously does not take multi-item orders into account at all, and seems to operate purely on a one-product, one-package model. Which is stupid. They're not trying to avoid landfill waste; they're trying to minimize returns due to breakage, but without putting any human intervention into the process.

[–] [email protected] 6 points 7 months ago (2 children)

This seems like it has pretty powerful potential for space flight.

Being able to aggressively min-max the packaging materials needed to secure cargo could be critical for reducing payload sizes on shuttles, where every single gram counts.

Each kg of packaging is thousands of dollars to get into orbit, so that's really appealing.

I'd be curious to see if Amazon is also working on box packing algorithms for maximizing fitting n parcels across x delivery trucks.

I.e., if you have 10,000 boxes to move, what's the fewest delivery trucks you can fit those boxes into, as fast as possible, which introduces multiple complex concepts: both packing to maximize space usage and choosing the order you pack in to minimize armature travel time...

I'd put money down that Amazon is perfecting this algorithm right now, and has been for a while.
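The two-level problem described above (fill the fewest trucks, then load in a sensible order) can be sketched with a greedy first-fit pass plus a last-stop-first sort; every name, volume, and stop number here is made up for illustration, not anything Amazon actually runs:

```python
# Toy truck assignment: first-fit by volume (biggest boxes first), then load
# each truck in reverse delivery order so the first stop's boxes come off first.

def assign_to_trucks(boxes, truck_capacity):
    """boxes: list of (box_id, volume, stop_number). Returns one manifest per truck."""
    free = []       # remaining volume per truck
    manifests = []  # boxes assigned to each truck
    for box in sorted(boxes, key=lambda b: b[1], reverse=True):
        for i in range(len(free)):
            if box[1] <= free[i]:
                free[i] -= box[1]
                manifests[i].append(box)
                break
        else:  # no truck has room: dispatch another truck
            free.append(truck_capacity - box[1])
            manifests.append([box])
    # LIFO loading: highest stop number loaded first, unloaded last
    return [sorted(m, key=lambda b: b[2], reverse=True) for m in manifests]

boxes = [("a", 6, 1), ("b", 5, 2), ("c", 4, 1), ("d", 3, 3), ("e", 2, 2)]
print(len(assign_to_trucks(boxes, truck_capacity=10)))  # → 2 trucks
```

A real system would also juggle 3D geometry, weight limits, and route timing, which is where it stops being a one-screen greedy pass.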

[–] [email protected] 5 points 7 months ago (1 children)

This is already handled through mathematics; it is its own mathematical field. We can optimize packing through algorithms that are very fast and accurate; no need to train an AI for that. Especially not for space flight: AI is prone to hallucinations, which is not something you want anywhere near any space mission that requires precision and predictability. I believe Johannes Kepler started this field in the 1600s; it is not something new. It is definitely a complex problem, but not new and not unheard of. Amazon is not exactly inventing something new and amazing here.

[–] [email protected] 5 points 7 months ago (3 children)

AI is not prone to hallucinations, LLMs are. I doubt Amazon is building a chatbot to optimise packaging.

[–] [email protected] 3 points 7 months ago (1 children)

I mean, AI is used in fraud detection pretty often; when it hits a false positive (which happens frequently on a population-level basis), is that not a hallucination of some sort? Obviously LLMs can go off the rails much further because the output is readable text, but any machine-learning model will occasionally spit out really bad guesses that almost any person could have done better with. (To be fair, humans are highly capable of really bad guesses too.)

[–] [email protected] 6 points 7 months ago (1 children)

No, false positives and false negatives are not hallucinations. Otherwise things like a blood test not involving any ML would also be hallucinating, which removes all meaning from the term.

[–] [email protected] 2 points 7 months ago

That's fair. I think fundamentally a false positive/negative isn't that much different. Pretty much all tests—especially those dealing with real world conditions—are heuristic, as are all LLMs by necessity of the design. Hallucination is a pretty specific term given to AI as an attempt to assign agency to a system that doesn't actually have any (by implying it's crazy and making stuff up instead of a black box with deterministic inputs and outputs spitting out something factually wrong but with a similar format to what is trained on). I feel like the nature of any tool where "you can't trust this to be entirely accurate" should have an umbrella term that encompasses both types of providing inaccurate info under certain conditions.

I suppose the difference is that AI is a lot more likely to randomly go off, whereas a blood test is likelier to provide repeated false positives for the same person with their unique biology? There's also the fact that most medical tests represent a true/false dichotomy or lookup table, whereas an LLM is given the entire bounds of language.

Would an AI clustering algorithm (say, k-means, for instance) giving an inaccurate diagnosis be a false positive/negative or a hallucination? These models can be tuned on a sliding scale, and I feel like there's definitely an area where the line could get pretty blurry.

[–] [email protected] 0 points 7 months ago (1 children)

What do you consider to be an AI?
And do you consider any of the existing systems to be the one?

[–] [email protected] 1 points 7 months ago (1 children)

When I use "AI" I'm using computer-science terminology. Artificial intelligence is a subfield of CS; in that sense, any model that comes out of that field is, by definition, AI.

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (2 children)

Then it's strange that you are separating AI and LLMs, because in CS an LLM is a type of artificial intelligence.

[–] [email protected] 1 points 7 months ago (1 children)

Some AI, namely LLMs, can hallucinate, but not AI in general. I was just having a bit of fun with how I worded it; I guess I should've expected someone to get annoyingly nitpicky about it.

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

Technicalities matter in technological matters.

[–] [email protected] 1 points 7 months ago

I don't think I was technically wrong; I do think you can write that way if you want to be a bit facetious, but I'm not a native speaker, so maybe not.

[–] [email protected] 1 points 7 months ago

AI as a whole is not subject to the flaws of LLMs

[–] [email protected] -2 points 7 months ago (1 children)

AI in general is definitely prone to hallucinations. It is more commonly seen in LLMs because they are more widely used by the public, but it is definitely a problem with all AI.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (1 children)

Besides generative AI, which models can hallucinate?

[–] [email protected] 1 points 7 months ago (1 children)

Text-to-video, automated driving, object detection, language translation. I might be misusing the term; you could argue that the word describes what LLMs commonly do and that's where the term is derived from. You can also argue that AI is sometimes correct and humans have issues identifying the correct answer. But in my mind it is much the same, just different applications. A car completely missing an approaching firetruck or an LLM just spewing out wrong statements is the same to me.

[–] [email protected] 1 points 7 months ago (1 children)

Yeah, well, it's not the same. Models are wrong all the time; why use a different term at all when it's just "being wrong"?

[–] [email protected] 1 points 7 months ago (1 children)

The model makes decisions thinking it is right, but for whatever reason can't see a firetruck or stop sign, or misidentifies the object... you know, almost like how a hallucinating human would perceive something from external sensory input that is not there.

I don't mind giving it another term, but "being wrong" is misleading. You are correct, though, in the sense that it depends on the given case.

[–] [email protected] 1 points 7 months ago (1 children)

No, the model isn't "thinking"; no model in use today has anything resembling an internal cognitive process. It is making a prediction. A COVID test predicts whether you have the virus inside you or not; if its prediction contradicts your biological state, it is wrong. If an object-recognition model does not predict there being a firetruck, how is that not being wrong in the same way?

[–] [email protected] 1 points 7 months ago

Predicting? Ok, if you say so.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

Amazon probably does have some programmatic way of determining how much fits in a truck, but that's not what this is. Instead, it's them trying to cheap out on packaging materials in the dumbest way possible: figuring out the reasonably acceptable minimum threshold for packaging durability, without taking into account size or packing multiples of items at all (as far as I can tell).

This is a pure cost cutting measure on their part. Anything else is just a tangential side benefit.

[–] [email protected] 2 points 7 months ago

lol, can't wait for my Klein bottle shaped package

[–] [email protected] 2 points 7 months ago

So this is good for reducing packaging waste, and probably for fitting more packages on trucks/planes, reducing emissions, I'm guessing. But how much power does running it cost, and how is that power generated? Is it a net loss for their global emissions, or is it just making Amazon save money? I'm still pretty dumb at this stuff.

[–] [email protected] -1 points 7 months ago

Whew, why not let humans do this?