this post was submitted on 13 Jan 2024
921 points (98.7% liked)

Technology


The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

(page 2) 50 comments
[–] [email protected] 5 points 10 months ago

This is the best summary I could come up with:


OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.

Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.

While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.

Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”


The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I'm a bot and I'm open source!

[–] [email protected] 4 points 9 months ago (1 children)

My guess is this is being used to spout plausible-sounding disinformation.

[–] [email protected] 5 points 9 months ago

That would count as harm and be disallowed by the current policy.

But a military application that uses GPT to identify and filter misinformation would not count as harm: it would have been prohibited under the previous policy's blanket ban on military use, yet is allowed under the current policy.

Of course, it gets murkier if the military application of identifying misinformation later ends in a drone strike on the misinformer. In theory, they could submit a usage description of "identify misinformation," which appears to do no harm, but then use the identifications to cause harm.

Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.

[–] [email protected] 3 points 9 months ago

I'm honestly kind of shocked at this. I know for our annual evaluations this year, people were using ChatGPT to write their statements.

I thought for sure someone with a secret squirrel type job was going to use it for that innocuous purpose, end up inputting top secret information, and then the DoD would ban the practice completely.
