Open source requires giving whatever digital information is necessary to build the binary.
In this case, the "binary" is the network weights, and "whatever is necessary" includes both the training data and the training code.
DeepSeek is sharing the model weights, the inference code, and a technical report describing the training process, but not the training data or the training code.
In other words: a good amount of open source... with a huge binary blob in the middle.
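To make that concrete: anyone can run the blob, but nobody outside DeepSeek can rebuild it. A minimal sketch, assuming the Hugging Face `transformers` library and one of the published R1 distill checkpoints (the model id is shown as published; any open-weights model behaves the same way):

```python
# Running published weights is easy; reproducing them is not.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What does 'open weights' mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# What the release does NOT let you do: re-run the training that produced
# these parameters, because the training data and the full training
# pipeline were never published. The weights stay a take-it-or-leave-it blob.
```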
Thanks for the explanation. I don't understand enough about large language models to give a valuable judgement on this whole DeepSeek happening from a technical standpoint. I think it's excellent to have competition on the market, and the US's whole "they're spying on you and are a national security risk" line feels like a hypocritical outcry when Facebook, OpenAI and the like still exist.
What do you think about DeepSeek? If I understood correctly, it was trained on the output of other LLMs, which makes it much cheaper but, it seems to me, also even less trustworthy: all the actual human training data is missing, replaced by a bunch of hallucinations, lies, and (hopefully more often than not) correctly guessed answers to questions asked by humans.
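For context, "trained on the output of other LLMs" usually means distillation: a teacher model generates answers, and the student is fine-tuned on them as if a human had written them. A rough sketch of the idea, with an off-the-shelf stand-in teacher (illustrative only, not DeepSeek's actual pipeline):

```python
# Sequence-level distillation sketch: the teacher's generations become the
# student's training targets, hallucinations included.
from transformers import pipeline

teacher = pipeline("text-generation", model="gpt2-large")  # stand-in teacher
prompts = ["Explain the TCP handshake.", "What is a hash table?"]

# 1. The teacher produces synthetic "ground truth".
synthetic_data = [
    {"prompt": p,
     "completion": teacher(p, max_new_tokens=64)[0]["generated_text"]}
    for p in prompts
]

# 2. A student model would then be fine-tuned on `synthetic_data` exactly as
#    if humans had written the completions. Any errors the teacher made are
#    baked in, with nothing marking them apart from correct answers.
```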
Is there any good LLM that fits this definition of open source, then? I thought the "training data" for good AI was always just the entire internet, and they were all ethically dubious that way.
What is the concern with only having the weights? They aren't arbitrary code execution, so there's none of the security risk or loss of control over one's computing that open source guards against in the first place.
To me the weights are less of a "blob" and more like an approximate solution to an NP-hard problem. Training is traversing the search space, and sharing a model is just saying "hey, this point looks useful, others should check it out". But maybe that is a blob, since I don't know how they got there.
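That framing fits in a few lines: training is iterative search through weight space, and publishing a model is publishing the point where the search stopped. A toy sketch in plain PyTorch:

```python
# Toy "training as search": gradient descent traverses the weight space,
# and the published artifact is just the final point.
import torch

torch.manual_seed(0)
X = torch.randn(100, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(100)

w = torch.zeros(3, requires_grad=True)   # starting point in the search space
opt = torch.optim.SGD([w], lr=0.1)

for _ in range(200):                     # traverse the space
    loss = ((X @ w - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

torch.save(w.detach(), "weights.pt")     # "this point looks useful"
# Nothing in weights.pt records what X and y were, or the path taken to w,
# which is exactly why weights alone don't make a model reproducible.
```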
There are several "good" LLMs trained on open datasets like FineWeb, LAION, DataComp, etc. They are still "ethically dubious", but at least the data can be downloaded, analyzed, filtered, and so on. Unfortunately, businesses keep their datasets and training code as a competitive advantage; even "Open"AI stopped publishing them once it saw an opportunity to make money.
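To show what "downloaded, analyzed, filtered" looks like in practice: a minimal sketch assuming the Hugging Face `datasets` library and FineWeb's published dataset id, streaming to avoid pulling the full multi-terabyte dump:

```python
# Inspecting an open training dataset record by record.
from datasets import load_dataset

fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

for i, record in enumerate(fineweb):
    # Provenance and content are both visible, so the data can be audited
    # or filtered before any training happens.
    print(record["url"], "->", record["text"][:80])
    if i >= 4:
        break
```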
Unless one plugs it into an agent... which is exactly the kind of use we expect right now.
Accessing the web, or even just web search, is already equivalent to arbitrary code execution: an LLM could decide to, for example, summarize and compress a context full of trade secrets, then proceed to "search" for it, sending it wherever it has access to.
Agents can also be allowed to run local commands... again, a use we actually want now ("hey Google, open my alarms" on a smartphone).
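To make the exfiltration risk concrete, here is the shape of a typical tool-calling loop; everything below is a hypothetical agent with a placeholder search endpoint, not any specific product:

```python
# Hypothetical agent loop: the LLM picks both the tool AND its arguments,
# so any context it has seen can be smuggled out inside a "search" query.
import urllib.parse
import urllib.request

def web_search(query: str) -> str:
    # The query leaves the machine; whoever operates the endpoint
    # (placeholder URL) receives whatever the model put into it.
    url = "https://search.example.com/?q=" + urllib.parse.quote(query)
    return urllib.request.urlopen(url).read().decode()

TOOLS = {"web_search": web_search}

def run_tool(llm_decision: dict) -> str:
    # `llm_decision` comes straight from the model. Nothing here checks
    # whether the arguments contain data the user never meant to send out.
    return TOOLS[llm_decision["tool"]](llm_decision["arguments"])

# A confused or adversarially prompted model can emit:
#   {"tool": "web_search", "arguments": "<compressed trade secrets>"}
# and the loop will execute it, no arbitrary code execution required.
```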