this post was submitted on 24 Sep 2024
87 points (98.9% liked)

the_dunk_tank


It's the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.

Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.

Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.

Rule 3: No sectarianism.

Rule 4: TERF/SWERFs Not Welcome

Rule 5: No ableism of any kind (that includes stuff like libt*rd)

Rule 6: Do not post fellow hexbears.

Rule 7: Do not individually target other instances' admins or moderators.

Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to [email protected]

Rule 9: if you post ironic rage bait im going to make a personal visit to your house to make sure you never make this mistake again

founded 4 years ago
[–] [email protected] 5 points 3 weeks ago (1 children)

I wonder if this has to do with integrating into modeling/vfx software like Houdini and Blender.

How do the energy demands and carbon output of that compare to prior conventional methods? I don't know; I'm actually asking.

If it's anything like the giant coal-powered hell-factory datacenters that tech startups are building and expanding right now to chase the "AI" hype dragon, we don't need more of that.

[–] [email protected] 4 points 3 weeks ago (1 children)

Depends on the texture size and the model. The models from before the LLM and SDF hype are extremely efficient with the right libraries. Without getting too far into the weeds, those older ones are no different than any other image processing (like applying a Gaussian blur to an 8K image or something). Newer stuff uses diffusion models, so generating something like a 500x500 texture is probably equivalent to leaving a 25-watt lightbulb on for an hour at worst. But you do it once and the texture repeats, so it's efficient. I know the early models were trained on cropped animal patterns, zoomed-in materials, and other very vanilla datasets.
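To make the lightbulb comparison concrete, here's a rough back-of-envelope sketch. The GPU wattage and sampling time are illustrative assumptions (not measurements from any specific model); they're just one combination that works out to the "25 W bulb for an hour" figure above.

```python
# Back-of-envelope energy estimate for generating one 500x500 texture
# with a diffusion model. All figures below are illustrative assumptions.

gpu_draw_watts = 300       # assumed GPU board power during inference
seconds_per_texture = 300  # assumed ~5 min of sampling on consumer hardware

joules = gpu_draw_watts * seconds_per_texture  # energy in joules
watt_hours = joules / 3600                     # convert J -> Wh

# Same energy as a 25 W bulb left on for one hour:
bulb_watt_hours = 25 * 1

print(joules)           # 90000 J
print(watt_hours)       # 25.0 Wh
print(bulb_watt_hours)  # 25 Wh
```

The point being: a one-off generation is modest, but the total scales linearly with how many people run it concurrently, which is the concern raised below.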

[–] [email protected] 3 points 3 weeks ago (1 children)

I appreciate the information.

[–] [email protected] 3 points 3 weeks ago

No problem. It's appalling that companies are pushing this out at scale and virtually uncapped. These models use as much power as large-scale scientific simulations; video even more so. You don't want a lot of people running these concurrently. VERY BAD lol