"A straw man fallacy occurs when someone distorts or exaggerates another person's argument"
They distorted my argument by making shit up. That's called a straw man fallacy.
You think you're saying a lot, but you've said nothing.
"A straw man fallacy occurs when someone distorts or exaggerates another person's argument"
They distorted my argument by making shit up. That's called a straw man fallacy.
You think you're saying a lot, but you've said nothing.
I am not a corporate apologist. I never said I was a corporate apologist, and my post history backs that up. There's nothing "flimsy" about this. It's clear-cut if you're willing to look objectively at the logic of the arguments presented.
I'm not using that one point to discredit their entire post. I posted two examples and stated their wall of text was so full of false statements that I wasn't interested in debating every single point with someone who already had their mind made up.
Did you not read my previous post? The first point I refuted is a straw man argument. They created a position I do not hold to make it easier to attack.
If you don't believe this to be a straw man argument, please explain your logic.
Many of their points are factually incorrect. The first point I refuted is a straw man argument. They created a position I do not hold to make it easier to attack.
Dissecting his wall of text would take longer than I'd like, but I would be happy to provide a few examples:
Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).
Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention that any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.
There are a ton of invalid assumptions about machine learning as well, but I'm not interested in wasting time on someone who believes they know everything.
You've made many incorrect assumptions and set up several straw man fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I'll suggest you continue your learning by looking up the Dunning-Kruger effect.
Every technology is a tool: safe or unsafe depending on the user.
Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.
AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool's argument that has been used since the days of the printing press.
People are constantly getting upset about new technologies. It's a good thing they're too inept to stop them.
New technologies are not the issue. The problem is that billionaires will fuck them up because they can't control their insatiable fucking greed.
If this is a reference to Asimov's novels, kudos! Though I believe that in his books, humans would fill the glass to the brim to test whether someone was a robot, because only a machine wouldn't spill a drop.
Invaders out of Ukraine.