this post was submitted on 03 Oct 2023

Linux


if you could pick a standard format for a purpose what would it be and why?

e.g. flac for lossless audio because...

(yes you can add new categories)

summary:

  1. photos .jxl
  2. open domain image data .exr
  3. videos .av1
  4. lossless audio .flac
  5. lossy audio .opus
  6. subtitles srt/ass
  7. fonts .otf
  8. container mkv (doesn't contain .jxl)
  9. plain text utf-8 (many also say markup but disagree on the implementation)
  10. documents .odt
  11. archive files (this one is causing a bloodbath so i picked randomly) .tar.zst
  12. configuration files toml
  13. typesetting typst
  14. interchange format .ora
  15. models .gltf / .glb
  16. daw session files .dawproject
  17. otdr measurement results .xml
[–] [email protected] -5 points 1 year ago (5 children)

192 kHz for music.

The CD was the worst thing to happen in the history of audio. 44.1 (or 48) kHz is awful, and it is still prevalent. It would have been better to wait a few more years and launch with higher quality.

[–] [email protected] 16 points 1 year ago (3 children)

Why? What reason could there possibly be to store frequencies as high as 96 kHz? The limit of human hearing is about 20 kHz, which is exactly why 44.1 and 48 kHz sample rates are used.

[–] [email protected] 6 points 1 year ago

On top of that, 20 kHz is very much a theoretical upper limit.

Most people, whether due to aging (which affects all of us) or behaviour (some far more than others), can't hear that high anyway. Most people would be surprised how high up even e.g. 17 kHz is. It sounds a lot closer to a very high-pitched "hissing" or "shimmer" than to anything you'd consider "tonal".

So yeah, saying "oh no, let me have my precious 30 kHz" really is questionable.

At least when it comes to listening to finished music files. The validity of higher sampling frequencies during various stages of the audio production process is a different, far less questionable topic.

[–] [email protected] 3 points 1 year ago (4 children)

That is not what 96 kHz means. It doesn't just mean it can store frequencies up to that point; it means there are 96,000 samples every second, so you capture more detail in the waveform.

Having said that, I'll give anyone £1m if they can tell the difference between 48 kHz and 96 kHz. 96 kHz and 192 kHz should absolutely be used for capture, but they are absolutely not needed for playback.

[–] [email protected] 7 points 1 year ago

It means it can capture any frequency up to half the sample rate, perfectly. The "extra detail" in the waveform is just higher frequencies beyond the range of human hearing.

[–] [email protected] 4 points 1 year ago

That is what it means. Any detail in the waveform that is not captured by a 48kHz sample rate is due to frequencies that humans can't hear.

[–] [email protected] 2 points 1 year ago

this is a misconception about how waves are reconstructed. each sample is a single point in time, but the sampling theorem says that if you have a bunch of discrete samples, equally spaced in time, there is one and only one continuous signal that passes through those samples exactly, provided the original signal did not contain any frequencies above Nyquist (half the sampling rate). sampling any higher than that gives you no further useful information; there is still only one solution.

tl;dr: the reconstructed signal is a continuous analog signal, not a stair-step looking thing
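
the uniqueness claim above can be checked numerically: sample a tone below Nyquist, reconstruct a point *between* samples with Whittaker–Shannon (sinc) interpolation, and compare it against the true continuous signal. a minimal sketch assuming numpy is available (the rates, tone, and window length are arbitrary illustration values):

```python
import numpy as np

fs = 48_000        # sample rate (Hz)
f = 10_000         # tone frequency, well below Nyquist (24 kHz)
k = np.arange(256)
samples = np.sin(2 * np.pi * f * k / fs)   # the discrete samples

def reconstruct(t):
    """Whittaker-Shannon: the unique band-limited signal through the samples."""
    return np.sum(samples * np.sinc(fs * t - k))

t = 100.37 / fs                          # a time between two sample points
true_value = np.sin(2 * np.pi * f * t)
error = abs(reconstruct(t) - true_value)
print(error)  # small; what remains is only truncation of the infinite sinc sum
```

with infinitely many samples the error would be exactly zero; the tiny residual here comes from using a finite window of 256 samples, not from any "missing detail" between the points.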

[–] [email protected] 1 points 1 year ago (1 children)

because if you use a 40 kHz sample rate to "draw" a 10 kHz wave, each cycle of the wave gets only four "pixels", so all the high frequencies have very low fidelity

[–] [email protected] 1 points 1 year ago

As long as the audio frequency is less than half the sample rate, there is mathematically only one (exact) wave that can fit all 4 points, so the signal is perfectly reconstructed. This video provides a great visualization of it: https://www.youtube.com/watch?v=cIQ9IXSUzuM
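
the "only one wave fits the points" claim can also be sanity-checked directly: for a known frequency below Nyquist, a sine is linear in its sin/cos amplitudes, so even a handful of coarse samples pins those amplitudes down exactly. a small sketch assuming numpy (the 0.7 / -0.3 amplitudes are made-up illustration values):

```python
import numpy as np

fs, f = 44_100, 10_000           # only ~4.4 samples per cycle of a 10 kHz sine
n = np.arange(32)
true_a, true_b = 0.7, -0.3       # "unknown" amplitudes to recover
y = true_a * np.sin(2 * np.pi * f * n / fs) + true_b * np.cos(2 * np.pi * f * n / fs)

# a sine of known frequency is linear in (a, b): solve by least squares
A = np.column_stack([np.sin(2 * np.pi * f * n / fs),
                     np.cos(2 * np.pi * f * n / fs)])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(a, b)   # recovers the original 0.7 and -0.3 to float precision
```

despite the wave looking like a jagged mess when you connect the dots, the samples determine one and only one sine of that frequency.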

[–] [email protected] 11 points 1 year ago

I assume you're gonna back that up with a double blind ABX test?

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago)

44.1 kHz wasn't chosen randomly. It is based on the range of frequencies that humans can hear (20 Hz to 20 kHz) and the fact that a periodic waveform can be rebuilt exactly as the original (in terms of frequency) when the sampling rate is at least twice the bandwidth. So, if it is sampled at 44.1 kHz you can get all components up to 22.05 kHz, which is more than we can hear.
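
the 22.05 kHz figure also explains what goes wrong above it: a tone past Nyquist produces exactly the same samples as one "folded" back below it, so nothing above half the sample rate can be represented at all. a quick numerical check, assuming numpy:

```python
import numpy as np

fs = 44_100                # CD sample rate; Nyquist is fs / 2 = 22_050 Hz
n = np.arange(1024)

# a 23 kHz tone (above Nyquist) ...
above = np.cos(2 * np.pi * 23_000 * n / fs)
# ... yields samples identical to a 21.1 kHz tone (fs - 23_000)
folded = np.cos(2 * np.pi * (fs - 23_000) * n / fs)

max_diff = np.max(np.abs(above - folded))
print(max_diff)  # zero up to float rounding: the samples are indistinguishable
```

this folding (aliasing) is why recording chains low-pass the analog signal before sampling, rather than hoping a higher rate will sort it out.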

[–] [email protected] 4 points 1 year ago

this is wrong. the first thing done before playing one of those files is running the audio through a low-pass filter that removes the extra frequencies 192 kHz captures, because most speakers can't play them and would in fact distort the rest of the sound (by badly recreating them, resulting in intermodulation distortion).

192 kHz has a place, and it's called the recording studio. It's only useful when handling intermediate products in mixing and mastering. Once that is done, only the audible portion is needed. The inaudible content can either be removed beforehand, saving storage space, or distributed (as 192 kHz files), and your player will remove it for you before playback.
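
what "remove the inaudible stuff" amounts to can be sketched with a basic windowed-sinc low-pass filter: mix an audible 10 kHz tone with an ultrasonic 30 kHz one at 192 kHz, filter at 20 kHz, and measure what survives. real resamplers are far more sophisticated; the tap count and cutoff below are arbitrary illustration values (numpy assumed):

```python
import numpy as np

fs = 192_000
t = np.arange(8192) / fs
# audible 10 kHz tone plus an inaudible 30 kHz component
x = np.sin(2 * np.pi * 10_000 * t) + np.sin(2 * np.pi * 30_000 * t)

# windowed-sinc low-pass FIR with a 20 kHz cutoff
cutoff, taps = 20_000, 255
k = np.arange(taps) - taps // 2
h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * k) * np.hamming(taps)
h /= h.sum()                       # normalize to unity gain at DC

y = np.convolve(x, h, mode="same")

def amplitude(sig, f):
    """Projected amplitude of the tone at f, skipping filter edge transients."""
    mid = slice(taps, len(sig) - taps)
    return 2 * np.abs(np.mean(sig[mid] * np.exp(-2j * np.pi * f * t[mid])))

print(amplitude(y, 10_000), amplitude(y, 30_000))  # ~1.0 and ~0.0
```

after filtering, the audible tone passes essentially untouched while the ultrasonic one is attenuated into the noise, which is exactly why distributing it in the first place buys nothing for playback.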