this post was submitted on 19 Jan 2024

Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia's H100 graphics cards to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta's push toward artificial general intelligence (AGI), in competition with firms like OpenAI and Google's DeepMind. AI and computing investments are a key part of Meta's 2024 budget, with AI emphasized as its largest investment area.

[–] [email protected] 35 points 9 months ago (1 children)

Who isn't at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.

[–] [email protected] 15 points 9 months ago (4 children)

The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it's estimated they "only" used 25k to train GPT-4. OpenAI has claimed in the past that the diminishing returns from simply scaling their model beyond GPT-4's size probably wouldn't be worth it. So maybe Meta is planning to experiment with new ANN architectures, or to mass-deploy models?

[–] [email protected] 17 points 9 months ago

The estimated training time for GPT-4 was about 90 days, though.

Assuming you could scale that linearly with the amount of hardware, you'd get it down to about 3.75 days. From four training runs a year to roughly two a week.

If you're scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
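
A quick back-of-the-envelope sketch of that scaling argument in Python (the 25k-GPU and 90-day figures are the estimates from upthread, and perfect linear scaling is a big assumption):

```python
# Back-of-the-envelope iteration speed, assuming perfect linear scaling.
BASELINE_GPUS = 25_000    # estimated GPU count for the GPT-4 run (upthread)
BASELINE_DAYS = 90        # estimated GPT-4 training time (upthread)
META_GPU_EQUIV = 600_000  # Meta's claimed H100-equivalent compute

speedup = META_GPU_EQUIV / BASELINE_GPUS   # 24x
days_per_run = BASELINE_DAYS / speedup     # 3.75 days
runs_per_week = 7 / days_per_run           # ~1.9, i.e. about twice a week

print(f"{speedup:.0f}x speedup, {days_per_run:.2f} days per run, "
      f"{runs_per_week:.1f} runs per week")
```

In practice, communication overhead makes scaling sublinear, so the real number would land somewhere between 3.75 days and the original 90.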

[–] [email protected] 5 points 9 months ago (1 children)

Or they just have too much money.

[–] [email protected] 4 points 9 months ago

Which will be solved by them spending it.

[–] [email protected] 3 points 9 months ago (1 children)

Would that be diminishing returns on quality, or training speed?

If I could tweak a model and test it in one hour instead of four, that could really speed up development time.

[–] [email protected] 4 points 9 months ago

Quality. Yeah, using the extra compute to speed up development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best one or use them all as an ensemble, something like the sketch below.
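
A minimal sketch of the pick-the-best vs. ensemble idea, using a toy scikit-learn setup (the dataset, model family, and hyperparameter grid here are placeholders, not anything Meta has described):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for "train a bunch of models in parallel":
# several candidates with different hyperparameters, trained independently.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

candidates = [
    LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    for c in (0.01, 0.1, 1.0, 10.0)
]

# Option 1: pick the single best model by validation accuracy.
best = max(candidates, key=lambda m: m.score(X_val, y_val))
print("best model val accuracy:", best.score(X_val, y_val))

# Option 2: use them all as an ensemble (average predicted probabilities).
avg_proba = np.mean([m.predict_proba(X_val) for m in candidates], axis=0)
print("ensemble val accuracy:", (avg_proba.argmax(axis=1) == y_val).mean())
```

With enough GPUs, the candidate list could be thousands of full-scale training runs instead of four logistic regressions; the selection logic stays the same.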

My guess is that the main reason for all the GPUs is that they're going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as "open" and then trying to entice people into their cloud ecosystem. Or maybe they really are trying to achieve AGI, as they state in the article. I don't really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).
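
For reference, the AIXI mentioned here is Hutter's construction: choose each action to maximize expected total reward over all computable environments, where each environment is a program q for a universal Turing machine U, weighted by 2 to the minus its length. Roughly (a sketch of the standard formulation, stated from memory):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The alternating max/sum is an expectimax over actions and percepts; the inner sum over every program q is what makes it incomputable.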

[–] [email protected] 2 points 9 months ago

Might be a bit of a tell that they think they have something.