IncognitoErgoSum

joined 1 year ago
[–] [email protected] 1 points 1 year ago (1 children)

Why not start up your own fediverse instance and make it that way, then?

[–] [email protected] 1 points 1 year ago (3 children)

If the mods/admins failed to act on your report of sexual harassment, delete the offending comment, and ban the person as appropriate, that's the issue you should be taking up in this thread, not demanding carte blanche to silence anyone you disagree with.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (5 children)

You don't need to block someone to end a conversation. Just say "you're acting in bad faith, and I'm done here", then stop replying to them. They'll most likely reply to you once or twice, and that'll be it. And if you use kbin's block function, you'll never even know.

If you're engaging with someone who is acting in bad faith for that long, you're most likely trying to convince the audience that the other person is wrong. If the fact that they're arguing in bad faith 10 hours in isn't abundantly clear to any person with half a brain reading your thread, then maybe they're not acting in bad faith and they just disagree with you on something you feel strongly about.

Also, you kind of said the quiet part loud there. "Engaging in bad faith" isn't, in and of itself, the same as harassment. I'm sure that there are individual communities on kbin where critics of particular ideas and ideologies are silenced, and if that's what you need in order for your ideas to stand, then I'd suggest staying in those communities. The general consensus here seems to be that if you're out arguing in public and someone isn't actually harassing you (even if they disagree with you in a way that you believe constitutes "bad faith"), then they should be allowed to speak. Reddit's toxic climate has just been exacerbated by their bad block feature, because now the motivation when you get into an argument is to be the first to block so that you're guaranteed to have the last word. It doesn't lead to useful discourse.

Bare minimum, if you want block to function this way, then you should have to delete any un-replied-to comments of yours in order to be able to do it so as to remove the perverse incentive to abuse the feature to "win" arguments. I'm sure you'd find that agreeable?

[–] [email protected] 2 points 1 year ago (9 children)

I'm guessing that when you're losing an argument, you like to post a response and then block the other person so you get the last word, then convince yourself that the other person was a "sealion" or something. Reddit's block system is primarily used that way. If you don't like how blocking works here, I recommend Reddit.

I personally came here to get away from Reddit's "features" like private downvotes and silencing people who disagree with you, because they promote exactly the kind of toxic discussion I want to avoid.

If you're being harassed, report it.

[–] [email protected] 2 points 1 year ago

Any vegan with half a brain knows that you need more than just fruit to be healthy. Assuming her death by infection is a result of her diet (which is possible, but we don't know that), she died of being an idiot, not a vegan.

[–] [email protected] 1 points 1 year ago

I said it was a neural network.

You said it wasn't.

I asked you for a link.

You told me to do your homework for you.

I did your homework. Your homework says it's a neural network. I suggest you read it, since I took the time to find it for you.

Anyone who knows the first thing about neural networks knows that, yes, artificial neurons are simulated with matrix multiplications, which is why people use GPUs to do them. The simulations are not down to the molecule because they don't need to be. The individual neurons are relatively simple math, but when you get into billions of something, you don't need extreme complexity for new properties to emerge (in fact, the whole idea of emergent properties is that they arise from collections of simple things, like the rules of the Game of Life, for instance, which are far simpler than simulated neurons). Nothing about this makes me wrong about what I'm talking about for the purposes of copyright. Neural networks store concepts. They don't archive copies of data.
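To make the "it's just matrix multiplication" point concrete, here's a toy sketch (not any real model's code) of a layer of artificial neurons: the whole layer is one matrix multiply plus a simple nonlinearity.

```python
import numpy as np

# Toy layer of 3 artificial neurons taking 4 inputs.
# Each neuron is a weighted sum of its inputs (one row of W)
# passed through a nonlinearity -- no molecular simulation involved.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # weights: 3 neurons x 4 inputs
b = np.zeros(3)               # biases

def layer(x):
    # The entire layer "fires" in a single matrix multiplication,
    # which is exactly the operation GPUs are built to do fast.
    return np.maximum(0.0, W @ x + b)   # ReLU activation

x = np.array([1.0, 0.5, -0.5, 2.0])
out = layer(x)
print(out.shape)  # (3,) -- one activation per neuron
```

Stacking billions of these simple units is where the emergent behavior comes from; the per-neuron math stays this simple.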

[–] [email protected] 1 points 1 year ago (2 children)

LOL, I love kbin's public downvote records. I quoted a bunch of different sources demonstrating that you're wrong, and rather than own up to it and apologize for preaching from atop Mt. Dunning-Kruger, you downvoted me and ran off.

I advise you to step out of whatever echo chamber you've holed yourself up in and learn a bit about AI before opining on it further.

[–] [email protected] 6 points 1 year ago

Unfortunately, you pretty much have to specify a specific time and place for it to be actionable. These guys are very familiar with how those laws work and know exactly how to avoid getting caught by them.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not sure why you're asking that. You literally just asked me if I'm refusing to admit that AI could cause trouble for people's livelihoods. I don't know where you even got that idea. I never asked you anything about whether you admit it could help with things, because that's irrelevant (and also it would be a pretty silly blanket assumption to make).

Are you sure you're not projecting here? In this entire thread, have you budged an inch based on all the people arguing against your original post?

Who am I supposed to be budging for? Of the three people here who are actually arguing with me, you're the only one who isn't saying they're going to slash my car tires and likening personal AI use to eating steak in terms of power usage (it's not even in the same ballpark), or claiming that Stable Diffusion doesn't use a neural network. I only replied to the other guy's most recent comment because I don't want to be swiftboated -- people will believe other people who confidently state something that they find validating, even if they're dead wrong.

We just seem to mostly have a difference of opinion. I don't get the sense that you're making up your own facts. And fundamentally, I'm not convinced of the idea that only a small group of people deserve laws protecting their jobs from automation, particularly not at the expense of the rest of us. If we want to grant people relief from having their jobs automated away, we need to be doing that for everybody, and the answer to that isn't copyright law.

And as far as AI being used to automate dangerous jobs, copyright isn't going to stop that at all. Tesla's dangerous auto-pilot function (honestly, I have no idea if that's a neural network or just a regular computer program) uses data that Tesla gathers themselves. Any pharmaceutical company that develops an AI for making medicines will train it on their own trade secrets. Same with AI surgeons, AI-operated heavy machinery, and so on. None of that is going to be affected by copyright, and public concerns about safety aren't going to get in the way of stockholders and their profits anymore than it has in the past. If you want to talk about the dangers of overreliance on AI doing dangerous work, then by all means talk about that. This copyright fight, for those large companies, is a beneficial distraction.

[–] [email protected] 0 points 1 year ago

You need to do your own homework. I'm not doing it for you. What I will do is lay this to rest:

https://en.wikipedia.org/wiki/Stable_Diffusion

Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly [...]

https://jalammar.github.io/illustrated-stable-diffusion/

The image information creator works completely in the image information space (or latent space). We’ll talk more about what that means later in the post. This property makes it faster than previous diffusion models that worked in pixel space. In technical terms, this component is made up of a UNet neural network and a scheduling algorithm.

[...]

With this we come to see the three main components (each with its own neural network) that make up Stable Diffusion:

  • [...]

https://stable-diffusion-art.com/how-stable-diffusion-work/

The idea of reverse diffusion is undoubtedly clever and elegant. But the million-dollar question is, “How can it be done?”

To reverse the diffusion, we need to know how much noise is added to an image. The answer is teaching a neural network model to predict the noise added. It is called the noise predictor in Stable Diffusion. It is a U-Net model. The training goes as follows.

[...]

It is done using a technique called the variational autoencoder. Yes, that’s precisely what the VAE files are, but I will make it crystal clear later.

The Variational Autoencoder (VAE) neural network has two parts: (1) an encoder and (2) a decoder. The encoder compresses an image to a lower dimensional representation in the latent space. The decoder restores the image from the latent space.

https://www.pcguide.com/apps/how-does-stable-diffusion-work/

Stable Diffusion is a generative model that uses deep learning to create images from text. The model is based on a neural network architecture that can learn to map text descriptions to image features. This means it can create an image matching the input text description.

https://www.vegaitglobal.com/media-center/knowledge-base/what-is-stable-diffusion-and-how-does-it-work

Forward diffusion process is the process where more and more noise is added to the picture. Therefore, the image is taken and the noise is added in t different temporal steps where in the point T, the whole image is just the noise. Backward diffusion is a reversed process when compared to forward diffusion process where the noise from the temporal step t is iteratively removed in temporal step t-1. This process is repeated until the entire noise has been removed from the image using U-Net convolutional neural network which is, besides all of its applications in machine and deep learning, also trained to estimate the amount of noise on the image.

So, I'll have to give you that you're trivially right that Stable Diffusion does use a Markov chain, but as it turns out, I had the same misconception as you did: that a Markov chain is some sort of mathematical equation. A Markov chain is actually just a process where each step depends only on the step immediately before it, and it most certainly doesn't mean that you're right about Stable Diffusion not using a neural network. Stable Diffusion works by feeding the prompt and the partly denoised image into the neural network over some given number of steps (it can do it in a single step, although the results are usually pretty messy). That in and of itself is a Markov chain. However, the piece that's actually doing the real work (that essentially does a Rorschach test over and over) is a neural network.
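The Markov structure is easy to see in a toy sketch of the sampling loop (names like `denoise_step` are illustrative stand-ins, not Stable Diffusion's actual API; the real noise predictor is the U-Net):

```python
import numpy as np

rng = np.random.default_rng(42)

def denoise_step(latent, step):
    # Stand-in for the neural noise predictor: estimate the noise
    # in the current latent and subtract a chunk of it.
    predicted_noise = latent * 0.5
    return latent - predicted_noise

latent = rng.normal(size=(8,))   # start from pure noise
for step in range(10):
    # Markov property: the next latent depends ONLY on the current
    # latent (and the step index), not on any earlier state.
    latent = denoise_step(latent, step)

print(float(np.abs(latent).max()))  # noise shrinks toward zero
```

The loop is the Markov chain; the interesting work all happens inside the function it calls at each step, which in Stable Diffusion is the neural network.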

[–] [email protected] 1 points 1 year ago (3 children)

When did I refuse to admit automation causes problems for people?

 

I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations who are wealthy and powerful enough to train it for their own use.

If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.

 

I've been having some difficulty with under-extrusion on my new all-metal hotend. I've set my retraction distance to 1.5mm (1.0 leaves strings), but on regular PLA I'm getting occasional layers that don't print very well (particularly if there's a lot of stopping and starting), and glitter PLA is an absolute disaster.

Any suggestions for getting this to work? Does glitter PLA just not get along with all-metal hotends? Is it possible that there's hardened PLA up past the heat break that things are catching on, and if so, how can I clean that out (I've been using cleaning filament already, and it doesn't seem to be solving the issue)?

FIXED: Increasing my flow rate fixed this. I started at 110%, which was a bit better, then went to 120%, which was a bigger improvement, and at 125% the first layer printed perfectly.

EDIT: This was a symptom of my extruder needing to be tightened and recalibrated. It now prints perfectly with a flow rate of 100%.
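For anyone else who lands here: after tightening the extruder, the usual recalibration is the standard e-steps arithmetic (the numbers below are hypothetical examples, not my printer's values -- substitute your own measurements from M503 and a marked-filament test):

```python
# Standard extruder steps/mm (e-steps) recalibration arithmetic.
# Example numbers are made up for illustration.
old_esteps = 93.0        # current steps/mm reported by M503
requested_mm = 100.0     # filament you told the extruder to push
measured_mm = 80.0       # filament it actually pushed (under-extruding)

new_esteps = old_esteps * requested_mm / measured_mm
print(round(new_esteps, 2))  # 116.25 -- set with M92 E<value>, save with M500
```

Once e-steps are correct, flow rate can stay at 100% instead of papering over the mechanical problem.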
