steveL wrote:A) Who has been judging LISP and C based on resource? I certainly haven't. Have you?
CasperVector wrote:You were attempting to refute "we should also reconsider the restrictions put by these [resource] limits on ...", based on the resources available when Lisp was "conceived and implemented".
It feels like you are misreading my tone: I was not attempting to "refute" anything; we're just chatting, afaic.
CasperVector wrote:Admittedly it was me that was judging LISP and C based on resources, but you were attempting to find an inconsistency in my argument.
... which was based on practical implementations of C and Lisp, so the focus in the following sentence was not "resources available", but "when they were respectively created":
CasperVector wrote:Before judging Lisp and C based on the resources available when they were respectively created, do note that Unix was born independent of C.
Forgive me, but that seems like semantic quibbling at the edges; you were "judging based on resource" due to the idea that implementation constraints restricted what could be done.
Let's deal with that notion directly; it is incorrect, imo. So yes, I am refuting the premise of what you're saying, in effect (my bad). Let me explain why I believe that.
"Technicians" of the 1960s and before were much better educated than nowadays. The level of mathematical reasoning and intuition expected vastly exceeds what is taught to the vast majority of people today, say before the age of 21.
Consider the papers and books drawing from that background, especially those from before 1995 (1950s-70s for the seminal groundwork), when publishing a book still took a vast effort: editing was much more thorough, and one could not even think of going to print without a track record or reputation from published papers and RFCs, as well as implementation experience.
They are written by people with a background in mathematics, electronics and physics, for other people with a similar background (or the time and will to learn it).
They have been pored over and reworked several times, with many "typists" and other staff having an input, as well as "peers" -- those with whom the authors have collaborated, and wider.
There is no bulshytt, because it would be spotted from miles away, and would result in a reputational loss from which a career might never recover.
As such, when it comes to mathematical papers around expressions and their computation, we are not talking at the level of a script-kiddy; the discussion is at a mathematical level, which does not concern itself with implementation limits, nor indeed with imprecision when talking conceptually. Imprecision is an implementation artifact; quantisation is a known process, and quantization a known phenomenon (not the basis for a nonsensical term misinterpreting the "next [subatomic] particle-level" that we can quantise, or "quantify" if that is easier).
So, the notion that implementation limits constrained a language like LISP, designed for processing mathematical symbolic expressions, and firstly as a thought-experiment in how to process mathematical expressions, is simply ludicrous.
The only place they "limited" C was in the standardization of separate phases for CPP and CC proper.
However, proper separation (or "structural hygiene" if you prefer) is critical as the basis of Chomsky-level separation.
In terms of a portable assembler, which I keep bringing you back to and you keep avoiding as a notion, this loss of integration (or "homoiconicity") is not a problem in any event. (It shows in the bugbear of there being no sizeof in CPP, or more broadly, no ability to query types. And sure, it can be integrated easily enough: but mandating it would open the door to a world of pain if not done properly; and it is not required for the main purpose, so parsimony dictates we avoid the issue and nod along to the idea that "implementation limitations" enforced separation.)
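To make the "no sizeof in CPP" bugbear concrete, here is a minimal sketch (the macro and function names are mine, for illustration only): the preprocessor cannot query types, so width tests in #if must go via integer-constant macros from the headers, while sizeof is available one phase later, to the compiler proper.

```c
#include <limits.h>

/* The preprocessor runs before the compiler knows any types, so a
 * width test like
 *     #if sizeof(long) == 8
 * is rejected outright.  The portable idiom is to compare macros
 * that the headers define as plain integer constants: */
#if LONG_MAX >= 9223372036854775807L
#define LONG_IS_64BIT 1
#else
#define LONG_IS_64BIT 0
#endif

/* sizeof is available to CC proper, just not to CPP. */
static unsigned long_bits(void)
{
    return (unsigned)(sizeof(long) * CHAR_BIT);
}
```

This is exactly the phase separation in question: the #if line is resolved purely textually/arithmetically, before any type exists.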
The first goal is simply a standard expression parser which enables our code to run on any platform, whilst allowing us to use symbolic constants "to retain mnemonic information for as long as possible" (PoP) and critically, affect machine-state (since the CPU is an FSM.)
This Standard C delivers, without pretending that it is wrapping anything else.
I keep bringing you back to it for two reasons:
1) You must always know the "get-out" to the layer below, whatever you are implementing.
2) CPUs have always, since the first development of an ALU, worked at the level of standard mathematical binary and unary operators.
The first point applies to all programming, ime; you code down in layers once you're done exploring the domain. When working on the implementation of any layer, you must know where the layer below is, when it gets called and so on; and you must document this in the internal API comments. (It makes it so much easier to implement.)
The second point applies at a fundamental level to all coding languages; this is what you are wrapping. (Bartee, 1972; Zaks, 1982; both 3rd ed.)
Since we start with "parsing human-encoded mathematical expressions" as a goal (in order to translate them for calculation at runtime), and they work at the level of binary operators in the main [with 3 standard unary operators and one ternary (also the most basic form of an 'if', which is what it originally was)], the question remains: why on earth would we not want to translate directly from binop to binop, once it comes time to generate output?
We work with sequences and series all the time; that's precisely what iteration and recursion, both standard assembler constructs, are for.
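To make the binop-to-binop point concrete, here is a minimal sketch (the struct layout and names are mine, not from anything cited in this thread): each binary node of a parsed expression corresponds to exactly one machine-level binary operator, and recursion is the traversal construct that walks the tree, just as a loop would walk the equivalent postfix sequence.

```c
/* A node is either a constant or a binary operator applied to two
 * subtrees: the shape of the source expression *is* the shape of
 * the generated code, one binop per node. */
enum op { CONST, ADD, SUB, MUL };

struct expr {
    enum op op;
    long val;                     /* used when op == CONST */
    const struct expr *l, *r;     /* used otherwise */
};

/* Each case performs (or, in a compiler, emits) exactly one
 * machine-level binop; recursion handles the sequencing. */
static long eval(const struct expr *e)
{
    switch (e->op) {
    case CONST: return e->val;
    case ADD:   return eval(e->l) + eval(e->r);
    case SUB:   return eval(e->l) - eval(e->r);
    case MUL:   return eval(e->l) * eval(e->r);
    }
    return 0; /* not reached */
}
```

Swapping each `return` for an instruction-emitting call turns this evaluator into a code generator with no change of structure, which is the whole point.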
"Accumulator" is a register in
every CPU that has an ALU, such as every core on your current machine or device. [Labelling and multiple units make no difference.]
There is no reason we should constrain ourselves to a level-1 modality, the level of regular expressions (only even more constrained), when we have a formal theory for level 2, we think at level 2, and so does the CPU to which we are talking (at least: it sure as hell feels like that when you code asm.)
That's not to say we cannot use a level-1 modality, nor indeed talk about proper macros.
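As a concrete illustration of the level-1/level-2 gap (my sketch, not anything from the thread): balanced parentheses are the textbook language that regular-expression machinery cannot recognise, since it requires unbounded counting; a single integer acting as a depth counter (the degenerate stack) is all it takes once you step up a level.

```c
#include <stdbool.h>

/* No finite-state machine can match nesting to arbitrary depth;
 * one counter standing in for a stack is enough. */
static bool balanced(const char *s)
{
    long depth = 0;
    for (; *s; ++s) {
        if (*s == '(')
            ++depth;
        else if (*s == ')' && --depth < 0)
            return false;       /* close with no matching open */
    }
    return depth == 0;          /* every open was closed */
}
```

Every expression grammar with grouping needs at least this much; that is why a parser, not a regex, sits at the heart of every compiler.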
But we shall not restrict ourselves to FP, any more than we do in implementation across the UNIX ecosystem, or than we would IRL in terms of sequential thinking with no layer/s above (the "executive" at minimum; monomania is a problem for a reason.)
Hell, you're not just missing out on decent level-2; you're completely missing out on Prolog and its modality as well.
And no, it is not sufficient to say "well I can implement all those in FP, therefore I don't need them".
Functional equivalence is not an excuse (nor a pretext: it is useful at implementation level, not conceptual); or we can all go back to writing logical asm without any higher-level constructs at all. After all, all you need is a control-stack, and you can use as many or as few data-stacks as you like. And shazzam, we can implement all the same stuff you can, and more.
It would still be a PITA as a general-purpose coding language.
(IOW: that is the same exact argument as Ye Olde "Turing-equivalence" Argument, which we dismissed ages ago in the context of code languages, since it is not even about them, and thus ofc does not speak to the im/practicality of their usage.)
CasperVector wrote:I look forward to seeing a few examples which you consider representative.
Forget all that (it leads down the path of dopamine-triggering criticism for the sake of it, ime); have you read "Syntactic Structures", yet?
It's a very slim book, so it won't take long to read.
CasperVector wrote:Just in case, perhaps except for Laurent, the people involved in my posts (PL researchers, the Dale folks, me, etc) are surely not monolingual in the PL sense.
And since Laurent is French and the skarnet mailing lists are in English, he is definitely not monolingual in the natural language sense.
Oh sure, that's why I felt free to make the comment.
Programming got a lot better after Standard C Amendment 1 (Sept. 94) came in, both before and after it had bedded down. (OFC it started to get shit, too, what with all the spivs coming into the industry from the mid-to-late 1980s -- on the UK side.)
As with the other sciences it has always been an international affair.
What I find noteworthy is how the vast majority of the people I rate as programmers working now are from non-Anglophone cultures (eg: Torvalds, Ts'o; to give familiar names.)
I think that perspective really is useful to someone who needs to translate and interpret human intent into terms a machine can execute, in order to implement the desired result.
OFC semi-autism tends to give a different perspective, too. ;-)
Oh, btw, I really would like to see what you would do in terms of processing yacc, Casper, from Scheme or another FP language, as described before.
If it helps, you can avoid all embedded actions and simply treat them as a CODE (or ACTION) symbol as shown in the grammar therein. [1]
I'd understand if you don't have time or inclination; but it really is a good way in to the whole language implementation thing, if you're a LISPer.
I'll leave you with a thought my boss berates me with, on occasion: "Lexing IS Symbolic Processing." ;-)
==
[1] It's easy enough for lex to handle that, and it's not relevant to code at the symbolic level, so there is no point bothering with all the LISP bits around application of symbols (or, if working in yacc, with trying to parse C) -- so long as you track the input line number (at minimum.) It's purely sequential, with a couple of levels of stack. cf: Wirth, ibid.
Enforce the same requirement for ';' at statement end, as the original yacc did; every yacc coder uses them anyway. That keeps its grammar purely context-free in FSM terms. [IOW: this is trivial for a LISP.]