submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

In this post I want to make this kind of simplicity more precise and talk about some reasons it's important. I propose five key ideas for simple programming languages: ready-at-hand features, fast iteration cycles, a single way of doing things, first-order reasoning principles, and simple static type systems. I discuss each of these at length below.

Aside: simplicity in languages is interesting. I'd say most popular languages, from Rust and Haskell to Python and JavaScript, are not simple. Popular PL research topics, such as linear types and effect systems, are also not simple (I suppose all the simple concepts have already been done over and over).

Making a simple language which is also practical requires a careful selection of features: powerful enough to cover all of the language's possible use-cases, but not so powerful that they encourage over-engineered or unnecessarily clever (hard-to-understand) solutions (e.g. metaprogramming). The simplest languages tend to be DSLs with very specific use-cases, and the least simple ones tend to have so much complexity that people write simpler DSLs in them. But then, many simple DSLs become complex in aggregate, both to implement and to learn... so once again, it's a balance: which features have the broadest use-cases while remaining easy to reason about?

you are viewing a single comment's thread
[–] [email protected] 16 points 7 months ago (2 children)

You mention that Haskell isn't simple. That may be so. However, all of Haskell compiles down to Haskell Core, which is incredibly simple. It's similar for many languages. The PureScript backend is being rewritten in Chez Scheme, which is also incredibly simple.
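
To make that concrete, here's a rough sketch (my own illustration, not GHC's actual output) of the kind of desugaring that happens on the way to Core: guards and where-clauses become plain lambdas, lets and case expressions (real GHC Core is typed System FC and differs in detail):

```haskell
-- Surface Haskell: guards and a where clause.
classify :: Int -> String
classify n
  | n > 0     = positive
  | otherwise = "non-positive"
  where positive = "positive"

-- Roughly what it might look like after desugaring: only a lambda,
-- a let, and a case expression remain. Illustrative only.
classifyCore :: Int -> String
classifyCore =
  \n -> let positive = "positive"
        in case n > 0 of
             True  -> positive
             False -> "non-positive"
```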

My point is, if you stack enough simple layers on simple layers, things get very complex.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (2 children)

That's sort of obvious and seems to kind of miss the point of a programming language. A language is an abstraction over the capabilities of a (possibly virtual) machine. The machine itself can generally only do relatively simple things; but writing assembly code is usually more difficult than writing the same functionality in a higher level language, because individual machine instructions are such a small building block for designing higher-level behaviors. So it's hardly surprising that simple layers stacked on each other result in complexity. The point of the article (and of language design in general) is about how to balance expressive power versus simplicity of language concepts.
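
To illustrate the "small building blocks" point within a single language (just an analogy of mine, not real assembly): the same behavior can be written with one big construct or spelled out step by step:

```haskell
-- High-level: one construct expresses the whole behavior.
sumHigh :: [Int] -> Int
sumHigh = sum

-- Lower-level style: an explicit accumulator "loop", closer in spirit
-- to doing the work one small step at a time.
sumLow :: [Int] -> Int
sumLow = go 0
  where
    go acc []     = acc              -- "loop" exit: return the accumulator
    go acc (x:xs) = go (acc + x) xs  -- add, advance, repeat
```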

[–] [email protected] 5 points 7 months ago (1 children)

The machine itself can generally only do very simple things

I disagree. Assembly languages for modern architectures are a complexity hell. You need books with thousands of pages to explain how they work. In comparison the lambda calculus is much simpler.
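
For a sense of how small it is, the whole untyped lambda calculus fits in a few lines (a rough sketch of my own; substitution here assumes distinct variable names, so it isn't fully capture-avoiding):

```haskell
-- Three constructors are the entire syntax.
data Term
  = Var String       -- variables:   x
  | Lam String Term  -- abstraction: \x. body
  | App Term Term    -- application: f a
  deriving Show

-- Naive substitution (assumes variable names don't clash).
subst :: String -> Term -> Term -> Term
subst x s (Var y)      = if x == y then s else Var y
subst x s (Lam y body) = if x == y then Lam y body else Lam y (subst x s body)
subst x s (App f a)    = App (subst x s f) (subst x s a)

-- Evaluation is essentially one rule: beta reduction.
eval :: Term -> Term
eval (App f a) =
  case eval f of
    Lam x body -> eval (subst x (eval a) body)
    f'         -> App f' (eval a)
eval t = t

-- ((\x. x) y) evaluates to y
example :: Term
example = App (Lam "x" (Var "x")) (Var "y")
```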

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

I should have said "relatively simple", not "very simple".

https://programming.dev/comment/8548915

[–] [email protected] 4 points 7 months ago (1 children)

The machine itself can generally only do very simple things

That really hasn't been true for at least two decades. And nowadays assembly code is no more than another abstraction layer, as the microcode inside the processor becomes increasingly complex. It's as out-of-date an idea as the idea that C code is 'close to the metal'.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

I should have said "relatively simple", not "very simple". Yes, modern assembly instructions can often be relatively complex (though not on all architectures). But the point is that every abstraction layer presents a simpler API than the one below it, yet must be implemented as complex combinations of the fundamentally simple units of functionality in that lower layer. This is true of assembly, yes, but that doesn't make it less true of higher-level languages.

[–] [email protected] 1 points 7 months ago

Haskell is simple in some ways and complicated in others.

It doesn't have optional or named parameters. There are no objects or methods. No constructors. It doesn't distinguish syntactically between procedures and functions. There are no for loops or while loops. && and || aren't treated specially. It doesn't even have functions with more than one argument. Every function takes one argument and returns one result.
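
For example (my own illustration, not from the original comment): a "two-argument" function is really a function that returns a function, so partial application falls out for free, and iteration is just recursion or a higher-order function:

```haskell
-- Every Haskell function takes exactly one argument.
-- The type Int -> Int -> Int parses as Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Applying add to one argument yields another function.
increment :: Int -> Int
increment = add 1

-- "No for loops": iteration is a higher-order function (or recursion).
squares :: [Int] -> [Int]
squares = map (\x -> x * x)
```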