That question is too broad for such a narrow answer. You can respond with broad statements and generalized estimates, but I don't think they really answer it.
Encoding video balances three things (extensible by two more):
- file size / compression ratio
- visual quality
- encoding time
The codec you use also has a big impact on the achievable compression ratio and the features available. AV1, HEVC, AVC? 10-bit?
Even if we decide that we don't care about encoding time, use the very slow preset, and enable every codec feature available, the compression ratio and quality falloff still depend a lot on what you actually encode.
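For illustration, squeezing the codec with a slow preset and CRF rate control might look like the minimal sketch below (this assumes ffmpeg with libx265; the file names and CRF value are placeholders, not a recommendation):

```python
# Hypothetical CRF encode: slow preset, 10-bit HEVC, audio copied untouched.
# Assumes ffmpeg with libx265 is installed; adjust paths/values to taste.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx265",           # HEVC; swap for libsvtav1 or libaom-av1 for AV1
    "-preset", "veryslow",       # spend more encoding time for better compression
    "-crf", "18",                # quality target; higher = smaller file, lower quality
    "-pix_fmt", "yuv420p10le",   # 10-bit, helps against banding
    "-c:a", "copy",              # keep the audio stream as-is
    "output.mkv",
], check=True)
```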
I suspect that at higher resolutions the compression-ratio gap between different kinds of visual content widens, because any difference is amplified by the higher resolution/larger amount of data.
That being said, I don't have or use any 4K material, so I can't even check for rough numbers on how visual content maps to file size.
There is no "this much is very good enough for 4K movies" because the reasonable minimum very much depends on what the movie contains.