ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code
(linuxiac.com)
It matters, because it's a tool. That means it can be used correctly or incorrectly . . . and most people who don't understand a given tool end up using it incorrectly, and in doing so, damage themselves, the tool, and/or innocent bystanders.
True AI ("general artificial intelligence", if you prefer) would qualify as a person in its own right, rather than a tool, and therefore be able to take responsibility for its own actions. LLMs can't do that, so the responsibility for anything done by these types of model lies with either the person using it (or requiring its use) or whoever advertised the LLM as fit for some purpose. And that's VERY important, from a legal, cultural, and societal point of view.
Ok, good point. It also matters whether AI is true intelligence or not. What I meant was that the comment I replied to seemed to say that if it's not true AI, nothing it does matters. The effects of the tool matter a lot, even if it's not true AI.