this post was submitted on 04 Feb 2024
21 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot that's wrong, and even self-contradictory, in what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?

top 16 comments
[–] [email protected] 11 points 9 months ago (1 children)

for another example of Rationalist crank shit trying to pass itself off as computer science, check out the article for Minimum Description Length:

Selecting the minimum length description of the available data as the best model observes the principle identified as Occam's razor. Prior to the advent of computer programming, generating such descriptions was the intellectual labor of scientific theorists. It was far less formal than it has become in the computer age. If two scientists had a theoretic disagreement, they rarely could formally apply Occam's razor to choose between their theories. They would have different data sets and possibly different descriptive languages. Nevertheless, science advanced as Occam's razor was an informal guide in deciding which model was best.

With the advent of formal languages and computer programming Occam's razor was mathematically defined. Models of a given set of observations, encoded as bits of data, could be created in the form of computer programs that output that data. Occam's razor could then formally select the shortest program, measured in bits of this algorithmic information, as the best model.

note that this is uncited nonsense, but it sounds exactly like a LessWrong post. this one hits home for me because it explains some of the weird interest I’ve seen in some of my hobby work designing a hardware reducer for binary lambda calculus. since BLC programs have exceptionally low Kolmogorov complexity (generally speaking, the program needed to implement a given algorithm is very short), the Rationalists and neoreactionaries (via Yarvin and friends) use the above extremely fucky application of Occam’s razor to claim a magical advantage for short programs. while I really like playing with BLC and I feel it has interesting potential for exploring alternative caching and optimization strategies, its actual performance is kind of hilarious:

  • BLC programs take up a shitload of memory (around 500 MB to a few gigs for a basic Lisp REPL) because their simple program strings expand to extremely complex garbage-collected in-memory representations (which this fucky version of Occam’s razor elides, of course)
  • mathematical performance is awful because Church numerals are a unary system and operations are very expensive. this can be somewhat fixed by implementing binary operators (lambda calculus doesn’t really have a native concept of numbers at all, so you effectively get to choose your numerical base), but more efficient numbers are one reason why practical lambda calculus derivatives usually choose more complex encodings
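to make the unary pain concrete, here’s a minimal sketch of Church numerals, with Python closures standing in for raw lambda terms (the names are mine for illustration, not anything from BLC):

```python
# Church numerals: the number n is "apply a function f to x, n times".
# this is why the arithmetic is effectively unary: every operation
# walks a whole tower of applications, so cost scales with magnitude.

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # read a Church numeral back out by counting the applications
    return n(lambda k: k + 1)(0)

THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(ADD(THREE)(THREE)))  # → 6
```

note that ADD has to replay every single application of both numerals, which is the polite version of “operations are very expensive.”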

but hey, speaking of Algorithmic Information Theory, look whose weirdo fingerprints are on that article! that’s right, it’s the Burgin fucker from the super-recursive algorithms article and formerly of the Solomonoff induction article! fuck me, I hate what the Rationalists are doing to my current hobby obsession.

[–] [email protected] 5 points 9 months ago (1 children)

Tangent: is there a term or phrase for when Occam’s Razor is misused or quoted incorrectly? My prior is that any time I see it I assume it’s going to be misused.

[–] [email protected] 7 points 9 months ago

One of my professors used to say that with Occam's razor, "one must be wary not to cut oneself to the bone."

[–] [email protected] 10 points 9 months ago (2 children)

The super-recursive alg thing is weird. 'What if we could do hyperturing computation on turing machines?' Well... then it would no longer be hyperturing computation; it would just fit into the turing machine. (And since apparently you cannot tell whether the algorithm is done, or whether it will stop coming up with different results, this is just the halting problem again, but with extra steps. I really hope people are not actually trying to implement super-recursive algorithms on our current model of turing machines because they half-glanced at theoretical hyperturing papers.)

[–] [email protected] 15 points 9 months ago* (last edited 9 months ago) (1 children)

Computer scientists have explored what happens if you give Turing machines more power, and it turns out you get an infinite hierarchy of ever more complex halting problems, each of which is unsolvable by the type of machine it asks about. It's fascinating, really, as is its logical counterpart, the arithmetical hierarchy. Stuff like this is what drew me into theoretical CS in the first place.
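the base case of that hierarchy fits in a few lines of Python: hand me any candidate halting oracle and the classic diagonal construction defeats it (the stub oracle below is obviously a placeholder of my own, since a real one cannot exist):

```python
def naive_halts(f):
    # a stand-in "halting oracle" that just guesses True. the point
    # of the construction below is that *no* implementation can be
    # right about every input, so any stub will do for the demo.
    return True

def diagonal():
    # do the opposite of whatever the oracle predicts about this very
    # function: if it says "halts", loop forever; if "loops", halt.
    if naive_halts(diagonal):
        while True:
            pass
    return

# naive_halts(diagonal) returns True, yet actually calling diagonal()
# would loop forever, so the oracle is wrong about diagonal. swapping
# in a cleverer halts() just changes which branch it is wrong on:
# that's Turing's argument, and relativizing it (give the machine an
# oracle, rebuild diagonal against it) yields the next level up.
print(naive_halts(diagonal))  # → True (and provably wrong)
```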

The "super-recursive" article is a mockery of this field and basically reads like a personal insult. I can finally experience what medical experts feel when reading antivax nonsense.

[–] [email protected] 3 points 9 months ago

Yes, we call that hypercomputation nowadays, I think. I had read about the concept when it was still called hyperturing or superturing or something, and the theoretical 'how do we go up a halting level' methods included semi-jokes such as 'an oracle which just gives you the correct answer' (which I guess comes from some of the stranger quantum mechanics theories of the time).

This super-recursive stuff just feels to me like they grabbed onto the wrong parts of the earlier research (as a theoretical concept the oracle isn't a problem if you are just listing all the valid methods, no matter how viable they are). I never was that much into theoretical CS, but it was an eye-opener, and interesting at the time.

So your knowledge about this is prob way bigger than mine and sorry for not using the correct terms.

[–] [email protected] 5 points 9 months ago

it appears to be one crank, except he's an academic and got this shit peer reviewed

[–] [email protected] 7 points 9 months ago (1 children)

nice! I had a very incomplete draft for this thread but I’m glad you got there first since my writing time is currently limited.

for the field of computer science, from the discussion thread you mentioned, @[email protected] started us down the rabbit hole that is the Solomonoff induction Wikipedia article. its mess of a Turing machines section was obviously authored by the same mind as the super-recursive algorithms article: it seems to be based on Rationalist buzzwords rather than actual science (Solomonoff induction being one, but this author of course also dips into thinking machine bullshit, DNA-based computing, and all your favorite dime store futurist tropes), it’s utterly taken with what sounds like basic computability (inductive Turing machines are special because they can (gasp) implement algorithms), and it frequently degrades into the CS equivalent of a flat earther trying to do science. it’s a wild fucking ride to read both articles as a shared universe of bullshit, and I wonder how much garbage this author in particular has managed to spew onto Wikipedia

peer review: a very good time if you know basic CS theory and want something to laugh at

[–] [email protected] 9 points 9 months ago (2 children)

oh fuck, the above section was so bad that our own David Gerard took it out back and shot it, rest in peace to the time cube of CS. the rest of the article is vastly more sane without the Turing machines section (Solomonoff induction is a real thing, though not in the way that the Rationalists would like it to be), so let this saved copy of the old version serve as a companion to the insanity of the super-recursive algorithms article
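for anyone wondering what the real (and uncomputable) idea even is: you weight every hypothesis by 2^-length and predict by posterior mass. here’s a deliberately dumbed-down sketch of mine where the “programs” are just repeating bit patterns, which is nothing like a universal machine but shows the shape of the prior:

```python
from fractions import Fraction

def predict_next_one(observed, max_len=8):
    """Posterior probability that the next bit is '1', under a toy
    length prior: each bit pattern of length L (weight 2**-L) is a
    "hypothesis" claiming the data repeats that pattern forever."""
    weight_one = total = Fraction(0)
    for length in range(1, max_len + 1):
        for n in range(2 ** length):
            pattern = format(n, f"0{length}b")
            # unroll the pattern far enough to cover observed + 1 bit
            stream = pattern * (len(observed) // length + 2)
            if stream.startswith(observed):  # hypothesis fits the data
                w = Fraction(1, 2 ** length)
                total += w
                if stream[len(observed)] == "1":
                    weight_one += w
    return weight_one / total

print(predict_next_one("010101") < Fraction(1, 2))  # → True: '0' favored
```

the short hypotheses ("01" repeating) dominate the posterior, which is the Occam-flavored part; the uncomputable part is that real Solomonoff induction sums over all programs of a universal machine, which is exactly where the Rationalist version hand-waves.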

[–] [email protected] 8 points 9 months ago

it helps when they fail to cite anything at all

unfortunately, there is a lot of absolute bilge that achieves the low bar of being in a peer-reviewed paper, which they then festoon this sort of rubbish with. sigh. thankfully this didn't even manage that.

[–] [email protected] 6 points 9 months ago (1 children)

I would like to take the author of that bilge and trap them in a basilisk-esque hell dungeon where they are forced to simulate a Turing machine with paper and pencil for all eternity.
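for a sense of scale, here’s roughly what our condemned author would be doing by hand, as a tiny Python sketch (the machine and its encoding are mine, purely illustrative):

```python
def run_tm(transitions, tape, state="start", halt="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine. transitions maps
    (state, read_symbol) to (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# toy machine: flip every bit, halt at the first blank
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # → 1001_
```

every loop iteration is one pencil stroke. eternity is a lot of iterations.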

[–] [email protected] 3 points 9 months ago
[–] [email protected] 4 points 9 months ago (1 children)

fwiw, I just applied some editing skills to [[Super-recursive algorithm]]. It's still promoting a nonsense book, but at least it's not trying to claim credit for the whole concept of hypercomputation.

[–] [email protected] 5 points 9 months ago

that’s so much better! I didn’t think anything sensible could be derived from the article — now it’s a fair summary of the sources and a dire warning that the reader is entering crank town.

check out that talk page though! I have no idea how this thing survived all the scrutiny it got as far back as 2009. I do like when someone barges into the page with a “but wait, the new Burgin preprint will clear up any confusion from the computer science orthodoxy who don’t understand his work!” and the only reply was essentially “we’re not confused, we just think it’s garbage”

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

well, I’ve found another one dredging the lambda calculus bits of Wikipedia. behold, the Plessey System 250 article, which appears to describe a heavily fictionalized and extremely cranky version of what I’m assuming is a real (and much more boring) British military computer from the 70s:

It is an unavoidable characteristic of the von Neumann architecture[citation needed] that is founded on shared random access memory and trust in the sharing default access rights. For example, every word in every page managed by the virtual memory manager in an operating system using a memory management unit (MMU) must be trusted.[citation needed] Using a default privilege among many compiled programs allows corruption to grow without any method of error detection. However, the range of virtual addresses given to the MMU or the range of physical addresses produced by the MMU is shared undetected corruption flows across the shared memory space from one software function to another.[citation needed] PP250 removed not only virtual memory[1] or any centralized, precompiled operating system, but also the superuser, removing all default machine privileges.

It is default privileges that empower undetected malware and hacking in a computer. Instead, the pure object capability model of PP250 always requires a limited capability key to define the authority to operate. PP250 separated binary data from capability data to protect access rights, simplify the computer and speed garbage collection. The Church machine encapsulates and context limits the Turing machine by enforcing the laws of the lambda calculus. The typed digital media is program controlled by distinctly different machine instructions.

this extremely long-winded style of bullshitting (the church machine limits the Turing machine by enforcing the laws of lambda calculus? how in fuck do you propose it applies alpha or beta reduction to a fucking Turing machine?) continues on the article’s talk page, where 3 years ago a brave Wikipedian looked up the actual machine in question and poked holes in essentially every part of the article — the machine did have an OS (and a bunch of other normal computer shit the article claims it could function without), didn’t seem to implement any form of lambda calculus (or Church machine, whatever that is) on hardware, and is overall not a very interesting machine other than whatever security features it implemented for military work. the crank responsible for this nonsense then promptly flooded the Wikipedian with an incredible volume of nonsense until he went away.
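for reference, actual beta reduction is just substitution, which you can demonstrate with Python closures standing in for lambda terms (and which has precisely nothing to "enforce" on a transition table):

```python
# Church booleans: TRUE selects its first argument, FALSE its second.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
NOT = lambda p: p(FALSE)(TRUE)

# beta-reducing NOT TRUE, step by step:
#   (λp. p FALSE TRUE) TRUE  →  TRUE FALSE TRUE  →  FALSE
assert NOT(TRUE) is FALSE
assert NOT(FALSE) is TRUE
print("beta reduction is just substitution; no Church machine required")
```

that's the entire mechanism. there is no sense in which it "context limits" a Turing machine, because a Turing machine's transition table isn't a lambda term to substitute into.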

e: I checked the crank’s user page and it gets so much worse

[–] [email protected] 2 points 9 months ago

ah, further reading: these things are notable for being the first commercial computers to implement a capability system for security and seem to have been used mostly as embedded systems controlling telephone switches. the thing about them not having a superuser is vacuously true — their use case didn’t really seem to need it.
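the underlying idea is genuinely neat when stated without the crankery: your authority is exactly the set of unforgeable references you hold, and there is no ambient superuser to fall back on. a toy sketch of mine (all names illustrative, nothing to do with the real PP250 hardware):

```python
import secrets

class CapabilityTable:
    """Toy object-capability store: you can reach an object only by
    presenting a key that was explicitly granted to you."""
    def __init__(self):
        self._objects = {}

    def grant(self, obj):
        # mint an unforgeable key; possessing it IS the authority
        key = secrets.token_hex(16)
        self._objects[key] = obj
        return key

    def invoke(self, key):
        # no superuser, no default rights: no key, no access
        if key not in self._objects:
            raise PermissionError("no capability for that object")
        return self._objects[key]

table = CapabilityTable()
key = table.grant("telephone switch #7")
print(table.invoke(key))  # the key holder gets through; nobody else does
```

which also shows why "no superuser" is vacuous for a box that spends its life controlling one telephone exchange: there's nobody around to escalate.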