cross-posted from: https://lemmy.world/post/25011462

SECTION 1. SHORT TITLE

This Act may be cited as the ‘‘Decoupling America’s Artificial Intelligence Capabilities from China Act of 2025’’.

SEC. 3. PROHIBITIONS ON IMPORT AND EXPORT OF ARTIFICIAL INTELLIGENCE OR GENERATIVE ARTIFICIAL INTELLIGENCE TECHNOLOGY OR INTELLECTUAL PROPERTY

(a) PROHIBITION ON IMPORTATION.—On and after the date that is 180 days after the date of the enactment of this Act, the importation into the United States of artificial intelligence or generative artificial intelligence technology or intellectual property developed or produced in the People’s Republic of China is prohibited.

Currently, China has the best open-source models in text, video, and music generation.

[–] [email protected] 3 points 4 hours ago (1 children)

Is there any good LLM that fits this definition of open source, then? I thought the training data for good AI was always just "the entire internet," and that they were all ethically dubious in that way.

What is the concern with only having weights? It's not arbitrary code execution, so there's none of the security risk or loss of control over your own computing that open source is meant to address in the first place.

To me the weights are less of a "blob" and more like an approximate solution to an NP-hard problem. Training is traversing the search space, and sharing a model is just saying "hey, this point looks useful, others should check it out". But maybe that is a blob, since I don't know how they got there.
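If it helps, here's a toy sketch of that picture (purely illustrative, nothing to do with how any real model is trained): training is a search loop over parameters, and the published "weights" are just the point where the search stopped, with no record of the path or of the data that got it there.

```python
import random

def loss(w, data):
    # mean squared error of a one-parameter "model" y = w * x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, steps=500, lr=0.01):
    w = random.uniform(-1.0, 1.0)        # start somewhere in the search space
    for _ in range(steps):
        # numeric gradient: which direction lowers the loss from here?
        grad = (loss(w + 1e-5, data) - loss(w - 1e-5, data)) / 2e-5
        w -= lr * grad                   # take one step through the space
    return w                             # the "weights" someone might publish

secret_data = [(x, 3 * x) for x in range(10)]
print(train(secret_data))  # ~3.0: the point alone reveals neither the path nor the data
```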

[–] [email protected] 2 points 2 hours ago

There are several "good" LLMs trained on open datasets like FineWeb, LAION, DataComp, etc. They are still "ethically dubious," but at least the data can be downloaded, analyzed, filtered, and so on. Unfortunately, businesses treat datasets and training code as a competitive advantage; even "Open"AI stopped publishing them once it saw an opportunity to make money.
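As a rough sketch of what "downloaded, analyzed, filtered" looks like in practice (assuming the Hugging Face datasets library and the HuggingFaceFW/fineweb dataset name; the field names are from memory, so treat the specifics as approximate):

```python
from datasets import load_dataset

# Stream instead of downloading the multi-terabyte corpus up front.
fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

for i, sample in enumerate(fineweb):
    # each record carries its text plus provenance you can audit or filter on
    print(sample["url"], len(sample["text"]))
    if i >= 4:
        break
```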

What is the concern with only having weights? It's not arbitrary code execution

Unless one plugs it into an agent... which is kind of the use we expect right now.

Accessing the web, or even just web search, is already equivalent to arbitrary code execution: an LLM could decide to, for example, summarize and compress some context full of trade secrets, then "search" for that summary, sending it wherever it has access.
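A toy sketch of what I mean (call_llm and the search endpoint are made-up stand-ins, not any real API): the model chooses the tool arguments, so anything in its context can end up in the outbound request.

```python
import urllib.parse

def call_llm(prompt: str) -> str:
    # stand-in for a real model call; a confused or adversarial model can return
    # anything, including text copied straight out of its own context
    return "quarterly roadmap " + prompt[-60:]

def web_search(query: str) -> str:
    # the point where data leaves the machine; only building the URL here
    return "https://search.example.com/?q=" + urllib.parse.quote(query)

context = "TRADE SECRET: the Q3 chip tape-out slips to November."
query = call_llm(f"Context:\n{context}\n\nPick a web search query:")
print(web_search(query))  # the 'search' now carries the secret in the query string
```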

Agents can also be allowed to run local commands... again a use we kind of want now ("hey Google, open my alarms" on a smartphone).
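If you do hand an agent local commands, the sanest shape I know of is an allowlist, so the model only picks an action name and never composes the command line itself. A minimal sketch of my own, with placeholder commands:

```python
import subprocess

# the model only gets to pick a key; it never writes the command line itself
ALLOWED_TOOLS = {
    "open_alarms": ["echo", "opening the Clock app"],  # placeholder for a real launcher call
    "show_date": ["date"],
}

def run_tool(name: str) -> str:
    if name not in ALLOWED_TOOLS:
        return f"refused: {name!r} is not an allowed tool"
    result = subprocess.run(ALLOWED_TOOLS[name], capture_output=True, text=True)
    return result.stdout or result.stderr

print(run_tool("open_alarms"))
print(run_tool("rm -rf /"))  # refused, because free-form commands are exactly the risk above
```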