1
79
submitted 9 months ago by [email protected] to c/[email protected]

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about [email protected], community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has remained Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that during an election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are abusive or insulting toward one another, often without any provocation beyond the perception that the other user’s opinion is wrong.

For these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
2
10
submitted 29 minutes ago by [email protected] to c/[email protected]
3
6
submitted 14 minutes ago by [email protected] to c/[email protected]

Should posts like this have [Satire] added to the title? I know sometimes people have a hard time clicking links and only read the title before commenting (for example, "How to monetize a blog", which I posted here previously), but I want to know what y'all think is best.

4
15
submitted 11 hours ago by [email protected] to c/[email protected]
5
28
submitted 19 hours ago by [email protected] to c/[email protected]

Meta CEO Mark Zuckerberg and Anduril founder Palmer Luckey — once on warring sides of the tech culture clash — are giving new meaning to the adage: all is fair in love and war.

The two executives buried the hatchet and announced a partnership Thursday to build next-gen extended reality gear for the US military. The system, dubbed Eagle Eye, will use AI and sensors in new headsets and other wearables to enhance vision, letting troops spot far-away threats with augmented reality, Luckey said on a podcast.

Anduril's Lattice, its AI command-and-control platform, will provide real-time battlefield intel. The partnership will also use tech from Meta's Reality Labs and Llama AI models.

The companies said they're building the tech with "private capital, without taxpayer support," promising to save the US military "billions of dollars," Anduril said in a statement. They will also be using tech "originally built for commercial use." Anduril raised $1.5 billion in August 2024 and is reportedly raising as much as $2.5 billion more, Reuters reported in February.

6
56
submitted 22 hours ago by [email protected] to c/[email protected]
7
39
submitted 1 day ago by [email protected] to c/[email protected]
8
62
submitted 1 day ago by [email protected] to c/[email protected]
9
67
submitted 2 days ago by [email protected] to c/[email protected]
10
79
submitted 2 days ago by [email protected] to c/[email protected]

Honestly I found this video in particular hard to watch.

It's a gut-wrenching story and hits heavy because we all know that these companies will never be dismantled and the people within them investigated and ultimately held responsible.

You might have seen this video in your YouTube recommendations. I don't usually watch Veritasium, but it's worth a watch.

11
140
submitted 3 days ago by [email protected] to c/[email protected]
12
27
submitted 2 days ago by [email protected] to c/[email protected]
13
21
submitted 2 days ago by [email protected] to c/[email protected]

I thought I couldn't be more disappointed in parts of the tech media; then OpenAI went and bought former Apple Chief Design Officer Jony Ive's "Io," a hardware startup that it initially invested in to create some sort of consumer tech device. As part of the ridiculous $6.5 billion all-stock deal to acquire Io, Jony Ive will take over all design at OpenAI and also build a device of some sort.

At this point, no real information exists. Analyst Ming-Chi Kuo says it might have a "form factor as compact and elegant as an iPod shuffle," yet when you look at the tweet everybody is citing Kuo's quotes from, most of the "analysis" is guesswork outside of a statement about what the prototype might be like.

14
34
submitted 3 days ago by [email protected] to c/[email protected]
15
198
submitted 4 days ago by [email protected] to c/[email protected]

As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.

Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work.

“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”

16
17
submitted 3 days ago by [email protected] to c/[email protected]
17
22
submitted 3 days ago by [email protected] to c/[email protected]

My highlight is at 5:26: droplets of molten tin being shot out and vaporized to create extreme ultraviolet light.

The video gives a lot of context around the machine and the product: the company, its other products, global chip manufacturing, long-term strategy, the road ahead, and so on.

18
127
submitted 5 days ago by [email protected] to c/[email protected]

Smearing Vicks VapoRub inside masks? Ravers were masking before it was cool (I can't hear "on X" and think, oh, an online platform). But I digress ...

This has been a banner month for X. Last week, the social network’s built-in chatbot, Grok, became strangely obsessed with false claims about “white genocide” in South Africa—allegedly because someone made an “unauthorized modification” to its code at 3:15 in the morning. The week prior, Ye (formerly Kanye West) released a single called “Heil Hitler” on the platform. The chorus includes the line “Heil Hitler, they don’t understand the things I say on Twitter.” West has frequently posted anti-Semitic rants on the platform and, at one point back in February, said he identified as a Nazi. (Yesterday on X, West said he was “done with antisemitism,” though he has made such apologies before; in any case, the single has already been viewed tens of millions of times on X.)

So, we literally have a song titled Heil Hitler from a prominent artist. I'm sure it's not the first one crafted on American soil, just as I'm sure little Nazi rallies happen with some frequency nationwide, as these guys just love getting together and being racist fucks.

The now-cliché Nazi bar analogy is brought into sharp relief:

In July 2020, the Twitter user Michael B. Tager shared an anecdote that went viral. Tager was at “a shitty crustpunk bar” when the gruff bartender kicked out a patron in a “punk uniform”—not because the customer was making a scene, but because he was wearing Nazi paraphernalia. “You have to nip it in the bud immediately,” Tager recounted the bartender as saying. “These guys come in and it’s always a nice, polite one. And you serve them because you don’t want to cause a scene. And then they become a regular and after awhile they bring a friend.” Soon enough, you’re running a Nazi bar.

I'd not heard the origins of the term before, so that was a "fun" thing to learn.

But seriously: What the fuck is going on?

19
107
submitted 5 days ago by [email protected] to c/[email protected]

I first ran into the Copilot integration in Notepad a couple of days ago and immediately turned it right the fuck off.

In November, Microsoft began testing an update that allowed users to rewrite or summarize text in Notepad using generative AI. Another preview update today takes it one step further, allowing you to write AI-generated text from scratch with basic instructions (the feature is called Write, to differentiate it from the earlier Rewrite).

Like Rewrite and Summarize, Write requires users to be signed into a Microsoft Account, because using it requires you to use your monthly allotment of Microsoft's AI credits. Per this support page, users without a paid Microsoft 365 subscription get 15 credits per month. Subscribers with Personal and Family subscriptions get 60 credits per month instead.

Microsoft notes that all AI features in Notepad can be disabled in the app's settings, and obviously, they won't be available if you use a local account instead of a Microsoft Account.

20
77
submitted 5 days ago by [email protected] to c/[email protected]

This week, at its annual software conference, Google released an AI tool called Try It On, which acts as a virtual dressing room: Upload images of yourself while shopping for clothes online, and Google will show you what you might look like in a selected garment. Curious to play around with the tool, we began uploading images of famous men—Vance, Sam Altman, Abraham Lincoln, Michelangelo’s David, Pope Leo XIV—and dressed them in linen shirts and three-piece suits. Some looked almost dapper. But when we tested a number of articles designed for women on these famous men, the tool quickly adapted: Whether it was a mesh shirt, a low-cut top, or even just a T-shirt, Google’s AI rapidly spun up images of the vice president, the CEO of OpenAI, and the vicar of Christ with breasts.

It’s not just men: When we uploaded images of women, the tool repeatedly enhanced their décolletage or added breasts that were not visible in the original images. In one example, we fed Google a photo of the now-retired German chancellor Angela Merkel in a red blazer and asked the bot to show us what she would look like in an almost transparent mesh top. It generated an image of Merkel wearing the sheer shirt over a black bra that revealed an AI-generated chest.

Sounds like this is going tits up.

21
44
submitted 4 days ago by [email protected] to c/[email protected]

I want to share some thoughts I had recently about YouTube spam comments. We all know those obvious bots in the YouTube comment sections, with their "misleading" profile pictures and clearly bot-like comments. Those comments are often either random remarks about any topic or copied from other users.

OK, why am I telling you this? Well, I think these bots are there to be recognized as bots. Their job is to be seen as bots, then deleted and ignored. That way everyone feels safe, thinking all the bots have been removed. But in reality there are more sophisticated bots among us. So the obvious bots' job is to get deleted and thereby mislead us into thinking none are left.

What do you think? Sounds plausible, doesn't it? Or am I just being paranoid? :D

22
18
submitted 4 days ago by [email protected] to c/[email protected]
23
175
submitted 6 days ago by [email protected] to c/[email protected]
24
57
submitted 6 days ago by [email protected] to c/[email protected]

At the Federal Trade Commission's monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta's family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.

As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.

"Mark Zuckerberg says social media is over," a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg's words. That chart, shared at the trial, showed the "percent of time spent viewing content posted by 'friends'" had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.

Supposedly because of this trend, Zuckerberg testified that "it doesn't matter much" if someone's friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it's not so much focused on beating the FTC's flagged rivals in the connecting-friends-and-family business, Snap and MeWe.

But while Zuckerberg claims that hosting that kind of content doesn't move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta's own press releases seem to back that up.

25
30
submitted 1 week ago by [email protected] to c/[email protected]

Technology

38749 readers

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago