this post was submitted on 20 Jul 2023
General Discussion
So, your software would go to the link provided (if there is one) and scan the article's text for language that sounds biased. That's an interesting exercise in computer programming, but it wouldn't be useful. Imagine the reaction of a user who wants, or doesn't want, an article judged "biased" by a computer program. I can already hear people muttering, "damn algorithm."

This is something software is getting better at, but it's still not reliable. Take, for example, some software from my field: plagiarism detection. When I get student papers, I have to run them through the plagiarism detector, then inspect the ones flagged as "potential plagiarism." I've had to use this type of software for over a decade, and it's still problematic. I've had situations where I found plagiarism and the software did not, and countless situations where the software flagged plagiarism that wasn't there.

So, I don't know. Your goals as a computer scientist are lofty, but I'd still like you to keep your bias-detecting software away from my day-to-day reading. Human beings either have the reading skills and know where to get the facts, or they don't. And anyone ignorant enough to need a computer program to judge for them will question the software's judgment anyway, whether it's right or wrong. Why? Everybody's got an agenda.
Yeah, that is completely understandable.
I guess it's less the standard "AI" you may be thinking of, where a model just takes an input and produces an output. Instead, there are multiple preprocessing steps before detection, and postprocessing after. For instance: parse an article into sentences and flag subjective statements such as "I feel great about XYZ", while searching for statements that back up claims with data, following the standard "Claim, Lead-in, Data, Warrant" structure in writing. Then check each data source recursively until it's confirmed to be valid. Now, this notion of "validity" is threatening, because yes, that can be controlled. But there can definitely be transparent, community-led approaches to adjusting which sources count as valid. Without many resources, an initial solution would be building a person graph of the sources' authors, and/or mapping against a database of verifiable research repositories such as JSTOR, finding linked papers mentioning the same anecdotes, or simply following a trail of links until one hits a trusted domain.
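A minimal sketch of that pipeline (sentence splitting, subjectivity flagging, and recursively following a trail of links to a trusted domain). Everything here is a placeholder assumption: the marker phrases stand in for a real subjectivity classifier, the trusted-domain set stands in for a community-maintained allow-list, and `link_graph` stands in for actually fetching pages and extracting their citations:

```python
import re

# Hypothetical lexicon of subjective markers; a real system would use a
# trained subjectivity classifier rather than a word list.
SUBJECTIVE_MARKERS = {"i feel", "i think", "i believe", "clearly", "obviously"}

# Assumed allow-list of trusted domains; per the proposal, this would be
# transparent and community-led.
TRUSTED_DOMAINS = {"jstor.org", "nature.com"}

def split_sentences(text):
    """Naive sentence splitter; stands in for a proper NLP tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def is_subjective(sentence):
    """Flag a sentence if it contains a subjective marker phrase."""
    lower = sentence.lower()
    return any(marker in lower for marker in SUBJECTIVE_MARKERS)

def follow_source_trail(url, link_graph, max_hops=5):
    """Follow cited links until one lands on a trusted domain.

    `link_graph` maps a URL to the next link that source cites; in practice
    this would be built by fetching and parsing each page.
    """
    seen = set()
    while url and url not in seen and max_hops > 0:
        seen.add(url)
        domain = re.sub(r"^https?://(www\.)?", "", url).split("/")[0]
        if domain in TRUSTED_DOMAINS:
            return True
        url = link_graph.get(url)
        max_hops -= 1
    return False

def analyze(article_text, link_graph, source_url=None):
    """Run preprocessing: flag subjective sentences, verify the source trail."""
    sentences = split_sentences(article_text)
    return {
        "subjective": [s for s in sentences if is_subjective(s)],
        "source_verified": follow_source_trail(source_url, link_graph)
                           if source_url else None,
    }
```

For example, an article citing a blog post that in turn cites a JSTOR paper would come back with `source_verified` set to `True`, while its "I feel..." sentences land in the `subjective` list for the reader to weigh.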
Then there's also the case where all the sources are heavily weighted onto one side of the equation on a topic that clearly has valid devil's-advocate arguments. This is where bias can come in. Postprocessing would mean finding possible counter-arguments to the claims and warrants (if available in the store of verified sources). The point is not to force a point, but to open the reader's paradigm.
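That postprocessing step could be sketched as two small pieces: a balance measure over source stances, and a lookup of counter-arguments. The stance labels ("pro"/"con") and the counter-claim store are both assumed to come from earlier, hypothetical classification and retrieval steps:

```python
def source_balance(stances):
    """Return the share of sources on the majority side (0.5 = balanced,
    1.0 = fully one-sided). Returns None when there are no sources."""
    if not stances:
        return None
    pro = sum(1 for s in stances if s == "pro")
    return max(pro, len(stances) - pro) / len(stances)

def counter_arguments(claim, verified_store):
    """Look up opposing claims in the store of verified sources.

    `verified_store` maps a claim to known counter-claims; in practice this
    would be a retrieval step over the verified-source database.
    """
    return verified_store.get(claim, [])
```

A balance score near 1.0 would be the trigger to surface counter-arguments, rather than to pass judgment on the article.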
I see how using "fact-checking" in my OP came across as negative/controversial. But there's no attempt on my part, as a computer scientist, to impose some sense of what is "morally right" or what the "capital-T Truth" is. I strongly agree that computer ethics need to be a focus, and your perspective is a great one to keep in mind. The passion is mostly driven by the black-and-white culture of online opinions, hence your point about agendas.
Anyway, I'd say we're mostly agreeing; I'm not sure what caused the aggression. I do think of things in a product sense, but that's a byproduct (no pun intended) of my learning environment. If we're talking philosophy, I should definitely read up more. My understanding of capital-T Truth mostly comes from my reading of David Foster Wallace's "This Is Water". I'll expand on it and circle back to improve my writing so it communicates my thoughts better.
This actually got me thinking quite a bit, and I was hoping you'd expand on it. Is it more directed at building things that are not driven by a personal truth?
Yeah, it's more of a reflection than a solution.
Slowly removing passion. It's interesting seeing how things I'd assume would increase passion (simply because they create/save more time) may have the complete opposite effect, going against the whole intention. I ignore this side a lot of the time.

Passion as in spending the time to look into the thoughts in the paper, and to observe each student's work fairly to help them improve their writing. Maybe the plagiarism checker is wrong about something and makes you skip reading that section, when in fact the student may have laid out some interesting thoughts that should have received positive reinforcement.
Our overall discussion also reminded me of a piece by Aaron Swartz; I thought it would be a nice read to suggest: http://www.aaronsw.com/weblog/anders, the specific piece is called "Confront reality"
Edit: or the whole series is quite good http://www.aaronsw.com/weblog/rawnerve
Edit2: some wording
OK, so you're looking for a way to figure out punditry: what pundits say that is fact and what is just their opinion. I think this type of goal is entertaining. What you're looking to create is software that singles out journalists (they are usually the pundits). It looks easy when you're watching TV; it's harder to do with software. But you're right in that regard. Journalists aren't what they used to be: they are free to have opinions, yet they are viewed as fact reporters. It's problematic. Humans are still better at figuring that out than AI. But if you can figure it out, that's great.
I may have misinterpreted the tone then; likewise.
I said I don't... And I said the point isn't to find it for the reader, but to provide them with the data points to do so on their own. Like I said in the OP:
I feel a definitive score is toxic. But if it were simply to display the variables to look out for, it could help you make an objective decision yourself.
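The quoted idea, showing the individual variables rather than collapsing them into one aggregate score, could look something like this. The signal names and their contents are hypothetical; the point is that nothing gets summed into a verdict:

```python
def bias_report(signals):
    """Render per-signal findings without collapsing them into one score.

    `signals` maps a signal name (e.g. "subjective") to the list of flagged
    sentences for that signal. The reader weighs the evidence; the tool
    deliberately produces no overall judgment.
    """
    lines = []
    for name, hits in sorted(signals.items()):
        lines.append(f"{name}: {len(hits)} found")
        lines.extend(f"  - {h}" for h in hits)
    return "\n".join(lines)
```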
Sure, I will. But I'll wait for more perspectives before moving on to the next step. It would be a major mistake to continue on this alone; the idea is to have a team to compensate for the flaws you're potentially observing.