Isn't that how B worked?
To be fair, the C example could be detangled a lot by introducing a typedef:
typedef int Callback_t(int, int);
Callback_t *(*fp)(Callback_t *, int);
Both of those declarations look weird to me. In Haskell it would be:
a :: String
bob :: (String, Int, Double) -> [String]
bob (a, b, c) = ...
... except that makes bob a function taking a tuple, and it's much more idiomatic to curry it instead:
bob :: String -> Int -> Double -> [String]
bob a b c = ...
-- syntactic sugar for:
-- bob = \a -> \b -> \c -> ...
The [T] syntax also has a prefix form [] T, so [String] could also be written [] String.
OCaml makes the opposite choice. In OCaml, a list of strings would be written string list, a set of lists of strings string list set, a list of lists of integers int list list, etc.
I am 100% confident that your claim is factually wrong.
I don't understand the complaint. What exactly is the issue?
I'll update my mems when Microsoft decides to implement C99. (Hey, it's only been a quarter of a century ...)
Isn't this COBOL or 4GL or something?
@racketlauncher831 As far as the C compiler is concerned, there is literally no difference between those two notations. If you declare a function parameter as an array (of T), the C compiler automatically strips the size information (if any) and changes the type to pointer (to T).
(And if we're talking humans, then char *args[] does not mean "follow this address to find a list of characters", because that's the syntax for "array of pointers", not "pointer to array".)
@masterspace Love the confidence, but your facts could do with some work.
- "Interpreted language" is not a thing. Interpreted vs compiled is a property of a particular implementation, not the language. (I wonder how you would classify lazy functional languages like Haskell. The first implementations were all interpreters because it was not clear that the well-known graph reduction technique for lazy evaluation could be compiled to native code at all. Today, hugs (a Haskell interpreter written in C) isn't maintained anymore, but GHC still comes with both a native compiler (ghc) and an interpreter (runghc, ghci).)
- Most implementations that employ interpretation are compiler/interpreter hybrids. It is rare to find an interpreter that parses and directly executes each line of code before proceeding to the next (the major exception being shells/shell scripts). Instead they first compile the source code to an internal representation, which is then interpreted (or, in some cases, stored elsewhere, like Python's .pyc files or PHP's "opcache").
- You can tell something is a compile-time error if its occurrence anywhere in the file prevents the whole file from running. For example, if you take valid code and add a random { at the end, none of the code will be executed (in Javascript, Python, Perl, PHP, C, Haskell, but not in shell scripts).
- The original "lint" tool was developed because old C compilers did little to no error checking. (For example, they couldn't verify the number and type of arguments in function calls against the corresponding function definitions.) "Linting" specifically refers to doing extra checks (some of which may be more stylistic in nature, like a /*FALLTHRU*/ comment in switch statements) beyond what the compiler does by itself.
- If I refer to an undeclared variable in a Perl function, none of the code runs, even if the function is defined at the end of the file and never called. It's a compile-time error (and I don't have to install and configure a linting tool first). The same is true of Haskell.
What's funny to me is that Javascript copied use strict directly from Perl (albeit as a string, because Javascript doesn't have use declarations), but turned it into a runtime check, which makes it a lot less useful.
I haven’t used Perl though, what do you like better about it?
"Undeclared variable" is a compile-time error.
barubary
Prost! 🍻