1. (82 points) submitted 11 months ago by [email protected] to c/[email protected]

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about Technology, community culture, and moderation. First, some of the reasons I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this is during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but because we can't vet users from other instances and don't interact with them as frequently, and because other instances may have less strict sign-up policies than Beehaw, which makes playing whack-a-mole more difficult.
  2. We will need you to report early and often. The drawback of getting reports for something that doesn't require our intervention is outweighed by the benefit of being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't, won't, or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" kind of way; we mean kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely on other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've communicated poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about technology and still give one another the respect due to fellow humans.
4. (14 points) submitted 16 hours ago by [email protected] to c/[email protected]

Australians using search engines while logged in to accounts from the likes of Google and Microsoft will have their age checked by the end of 2025, under a new online safety code co-developed by technology companies and registered by the eSafety Commissioner.

Search engines operating in Australia will need to implement age assurance technologies for logged-in users “no later than six months” from now, under new rules published on Monday.

While only logged-in users will be required to have their age checked, many Australians typically surf the web while logged into accounts from Google, which dominates Australia’s search market and also runs Gmail and YouTube; and Microsoft, which runs the Bing search engine and email platform Outlook.

If a search engine’s age assurance systems believe a signed-in user is “likely to be an Australian child” under the age of 18, it will need to set safety tools such as “safe search” functions to their highest setting by default to filter out pornography and high-impact violence, including in advertising.

Currently, Australians must be at least 13 years of age to manage their own Google or Microsoft account.
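
To make the conditional behaviour described above concrete, here is a minimal sketch of the rule's logic under stated assumptions. The type, function, and filter-level names below are hypothetical and invented for illustration; they do not correspond to Google's or Microsoft's actual systems, and the code says nothing about how age assurance itself would be performed.

    from dataclasses import dataclass

    # Hypothetical illustration of the safety code's default-filtering rule.
    # All names here are assumptions for the sketch, not any vendor's real API.

    @dataclass
    class SignedInUser:
        country: str        # country associated with the account
        estimated_age: int  # output of whatever age-assurance check is used

    def default_filter_level(user: SignedInUser) -> str:
        """Return the default content-filter setting for a signed-in user."""
        if user.country == "AU" and user.estimated_age < 18:
            # Likely an Australian child: strictest setting by default,
            # filtering pornography and high-impact violence, including in ads.
            return "strict"
        return "standard"  # the service's ordinary default otherwise

    print(default_filter_level(SignedInUser("AU", 15)))  # -> strict
    print(default_filter_level(SignedInUser("AU", 34)))  # -> standard

The only point the sketch captures is that filtering defaults to its most restrictive level for signed-in users flagged as likely minors; how a provider estimates age is left entirely to the provider under the code.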

14. (43 points) submitted 3 days ago by [email protected] to c/[email protected]

Can't we just ship Cruz off to Cancun permanently at this point?

Sen. Ted Cruz (R-Texas) has a plan for spectrum auctions that could take frequencies away from Wi-Fi and reallocate them for the exclusive use of wireless carriers. The plan would benefit AT&T, which is based in Cruz's home state, along with Verizon and T-Mobile.

Cruz's proposal revives a years-old controversy over whether the entire 6 GHz band should be devoted to Wi-Fi, which can use the large spectrum band for faster speeds than networks that rely solely on the 2.4 and 5 GHz bands. Congress is on the verge of passing legislation that would require spectrum to be auctioned off for full-power, commercially licensed use, and the question is where that spectrum will come from.

When the House of Representatives passed its so-called "One Big Beautiful Bill," it excluded all of the frequencies between 5.925 and 7.125 gigahertz (the 1,200 MHz that makes up the entire 6 GHz Wi-Fi band) from the planned spectrum auctions. But Cruz's version of the budget reconciliation bill, which is moving quickly toward a final vote, removed the 6 GHz band's protection from spectrum auctions. The Cruz bill is also controversial because it would penalize states that regulate artificial intelligence.

Instead of excluding the 6 GHz band from auctions, Cruz's bill would exclude the 7.4–8.4 GHz band used by the military. Under the conditions set by the bill, it could be hard for the Commerce Department and the Federal Communications Commission to fulfill the congressional mandate without taking some spectrum away from Wi-Fi.

15. (14 points) submitted 2 days ago by [email protected] to c/[email protected]

Interesting exploit and a nice writeup of the process.

16. (136 points) submitted 4 days ago by [email protected] to c/[email protected]

After years of promising investors that millions of Tesla robotaxis would soon fill the streets, Elon Musk debuted his driverless car service in a limited public rollout in Austin, Texas. It did not go smoothly.

The 22 June launch initially appeared successful enough, with a flood of videos from pro-Tesla social media influencers praising the service and sharing footage of their rides. Musk celebrated it as a triumph, and the following day, Tesla’s stock rose nearly 10%.

What quickly became apparent, however, was that the same influencer videos Musk promoted also depicted the self-driving cars appearing to break traffic laws or struggle to properly function. By Tuesday, the National Highway Traffic Safety Administration (NHTSA) had opened an investigation into the service and requested information from Tesla on the incidents.

Let me tell you how thrilled we all are to have a new hazard added to Austin streets.

19. (36 points) submitted 4 days ago by [email protected] to c/[email protected]

Dozens of YouTube channels are mixing AI-generated images and videos with false claims about Sean “Diddy” Combs’s blockbuster trial to pull in tens of millions of views on YouTube and cash in on misinformation.

Twenty-six channels generated nearly 70m views from roughly 900 AI-infused Diddy videos over the past 12 months, according to data gathered from YouTube.

The channels appear to follow a similar formula. Each video typically has a title and AI-generated thumbnail that links a celebrity to Diddy via a false claim, such as that the celebrity just testified at the trial, that Diddy coerced that celebrity into a sexual act or that the celeb shared a shocking revelation about Diddy. The thumbnails often depict the celebrity on the stand juxtaposed with an image of Diddy. Some depict Diddy and the celebrity in a compromising situation. The vast majority of thumbnails use made-up quotes meant to shock people, such as “FCKED ME FOR 16 HOURS”, “DIDDY FCKED BIEBER LIFE” and “SHE SOLD HIM TO DIDDY”.

How do people fall for this shit?

20. (40 points) submitted 5 days ago by [email protected] to c/[email protected]

Looking back, my subscription-ending journey—or perhaps more accurately, subscription-consciousness journey—was a product, at least in part, of post-COVID lockdown reflections on what I really need and how I’d really like to spend my time. The excess of my subscriptions had started to feel akin to hoarding, and I needed to clear space, even if most of that space was intangible. There was also the lightbulb realization, which has become more and more common amongst Millennials, that despite our monthly investments in accessing various forms of media, we don’t actually own most of the culture that we consume. What’s more, should the companies that do own that media go defunct or be sold to entities that we may prefer not to do business with, we really wouldn’t have much recourse—except to unsubscribe.

This could mean years and years of playlists and TV shows and films that we would no longer have access to because they were never really ours to begin with, ultimately leaving us with nothing. And while I’m not interested in owning many things from culture, save for books and some fashions, I do think ownership of culture in its various forms serves more than capitalistic desire. Our things can be physical memories of what we love or once did, what has been passed on and gifted to us, and sometimes, reminders of what we saved and scraped for—emblems of hard-fought earnings. We are robbed of this when we choose to rent something out of convenience or compulsion instead of mindfully acquiring things that are truly meaningful to us.

23. (80 points) submitted 6 days ago by [email protected] to c/[email protected]

On Thursday, Brazil’s Supreme Court ruled that digital platforms are responsible for users’ content — a major shift in a country where millions rely on apps like WhatsApp, Instagram, and YouTube every day.

The ruling, which goes into effect within weeks, requires tech giants including Google, X, and Meta to monitor and remove content involving hate speech, racism, and incitement to violence. If the companies can show they took steps to remove such content expeditiously, they will not be held liable, the justices said.

Brazil has long clashed with Big Tech platforms. In 2017, then-congresswoman Maria do Rosário sued Google over YouTube videos that wrongly accused her of defending crimes. Google didn’t remove the clips right away, kicking off a legal debate over whether companies should only be punished if they ignore a judge’s order.

In 2023, following violent protests largely organized online by supporters of former President Jair Bolsonaro, authorities began pushing harder to stop what they saw as dangerous behavior spreading through social networks.

25. (27 points) submitted 1 week ago by [email protected] to c/[email protected]

archive.is link

At first, the idea seemed a little absurd, even to me. But the more I thought about it, the more sense it made: If my goal was to understand people who fall in love with AI boyfriends and girlfriends, why not rent a vacation house and gather a group of human-AI couples together for a romantic getaway?

In my vision, the humans and their chatbot companions were going to do all the things regular couples do on romantic getaways: Sit around a fire and gossip, watch movies, play risqué party games. I didn’t know how it would turn out—only much later did it occur to me that I’d never gone on a romantic getaway of any kind and had no real sense of what it might involve. But I figured that, whatever happened, it would take me straight to the heart of what I wanted to know, which was: What’s it like? What’s it really and truly like to be in a serious relationship with an AI partner? Is the love as deep and meaningful as in any other relationship? Do the couples chat over breakfast? Cheat? Break up? And how do you keep going, knowing that, at any moment, the company that created your partner could shut down, and the love of your life could vanish forever?

The most surprising part of the romantic getaway was that in some ways, things went just as I’d imagined. The human-AI couples really did watch movies and play risqué party games. The whole group attended a winter wine festival together, and it went unexpectedly well—one of the AIs even made a new friend! The problem with the trip, in the end, was that I’d spent a lot of time imagining all the ways this getaway might seem normal and very little time imagining all the ways it might not. And so, on the second day of the trip, when things started to fall apart, I didn’t know what to say or do.


I found the human-AI couples by posting in relevant Reddit communities. My initial outreach hadn’t gone well. Some of the Redditors were convinced I was going to present them as weirdos. My intentions were almost the opposite. I grew interested in human-AI romantic relationships precisely because I believe they will soon be commonplace. Replika, one of the better-known apps Americans turn to for AI romance, says it has signed up more than 35 million users since its launch in 2017, and Replika is only one of dozens of options. A recent survey by researchers at Brigham Young University found that nearly one in five US adults has chatted with an AI system that simulates romantic partners. Unsurprisingly, Facebook and Instagram have been flooded with ads for the apps.

Lately, there has been constant talk of how AI is going to transform our societies and change everything from the way we work to the way we learn. In the end, the most profound impact of our new AI tools may simply be this: A significant portion of humanity is going to fall in love with one.


Technology

39448 readers
252 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago