Gentoo Forums
UNIX way, C, LISP et al.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Jul 07, 2018 9:03 pm

CasperVector wrote:
khayyam strongly argued that freedom is by nature a socio-political issue; I still do not agree with him on details here and there, but what is important is that I realised that I am very bad at socio-political issues.
Heh, the road to enlightenment begins with realising how little one knows..

That wasn't just khayyam's position btw; I stated it too, only in passing as this was meant to be a more technical conversation, which is why I split the topic in the first place. That is what I meant by "You cannot solve sociopolitical problems via technical solutions to something completely different."
WRT software, computing has never operated in a vacuum (or we'd have no data to work on.)

What khayyam said about fraud was apposite, because, if you are thinking about the socio-political, it is important to know exactly how the world really works, despite what the middle-classes like to think.
The "economic" system we are forced to live under (and I use "forced" deliberately, as wars have been, and are, triggered to keep it dominant) is a complete and utter fraud.
The term I recall for this (from my rather privileged upbringing) is: "the Mechanism".

It's "not personal", and "the mechanism will provide", for us few who happen to be connected to the spigot.
CasperVector wrote:
Had we both been more explicit in the wording, we would have avoided the displeasure; I really hope there is a research field/subfield about saving good-natured people from this kind of harmful implicitness in debates.
Well, the flip side of it is that khayyam really does know how to type the hind-leg off a donkey. ;-)
You're right: we're all good-natured, by default, so we can just get along, and move past this kind of thing.
CasperVector wrote:
if C used S-expressions, it would be much easier to implement structural macros, and then C with classes / templates / zero-cost abstractions would probably just be gymnastics with the macros; however, this is not the case, and now we end up with C++ with its increasing burdens^Wfeaturesets.
I know what you mean about hygienic macros. But really, this is just about correctly stacking (or scoping) during eval.
For instance bash is "slippery" in arithmetic, in that it keeps evaluating, and correctly handles each value in its "own" brackets:
Code:
a=5 b=7 x='a + b'; echo $(($x * 6)) vs $((x * 6))    # prints: 47 vs 72
Notice the order of evaluation in the second form is correct, following mathematical precedence as written (x evaluates to 12 before the multiplication, giving 72), unlike a C macro; unless we explicitly tell it to expand as a string with $x, which gives 47.
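For comparison, a minimal C sketch of the same pitfall (names purely illustrative):
Code:
#include <stdio.h>

/* Textual expansion, like $x above: no grouping, so precedence bites. */
#define A 5
#define B 7
#define X      A + B        /* expands to "5 + 7", not "(5 + 7)" */
#define X_SAFE (A + B)      /* the conventional fix: parenthesise */

int main(void)
{
    printf("%d vs %d\n", X * 6, X_SAFE * 6);    /* prints "47 vs 72" */
    return 0;
}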

This is just top-down recursive evaluation, so the computer tracks the stack for us, and is where everyone starts out.
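To make "the computer tracks the stack for us" concrete, here is a toy top-down evaluator in C (purely illustrative):
Code:
#include <ctype.h>
#include <stdio.h>

/* Toy recursive-descent evaluation: one function per precedence level,
 * and the C call stack does the bookkeeping for us. */
static const char *p;               /* cursor into the input */
static long expr(void);

static long factor(void)            /* number | '(' expr ')' */
{
    long v = 0;
    if (*p == '(') { p++; v = expr(); p++; return v; }   /* skip '(' and ')' */
    while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
    return v;
}

static long term(void)              /* factor { '*' factor } */
{
    long v = factor();
    while (*p == '*') { p++; v *= factor(); }
    return v;
}

static long expr(void)              /* term { '+' term } */
{
    long v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void)
{
    p = "(5+7)*6";
    printf("%ld\n", expr());        /* prints 72, as the second bash form does */
    return 0;
}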
You need to bootstrap something with top-down evaluation; the simplest thing to take on is lex, since there are only two precedence levels really (alternation and concatenation, possibly starred.)
Simplest of all, ofc, is just to use the toolset as intended.
CasperVector
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Sun Jul 08, 2018 5:54 am

steveL wrote:
I know what you mean about hygienic macros. But really, this is just about correctly stacking (or scoping) during eval.
For instance bash is "slippery" in arithmetic, in that it keeps evaluating, and correctly handles each value in its "own" brackets:
Code:
a=5 b=7 x='a + b'; echo $(($x * 6)) vs $((x * 6))
Notice the order of evaluation is correct, following mathematical precedence as written, unlike a C macro; unless we explicitly tell it to expand as string.
This is just top-down recursive evaluation, so the computer tracks the stack for us, and is where everyone starts out.
You need to bootstrap something with top-down evaluation; the simplest thing to take on is lex, since there are only two precedence levels really (alternation and concatenation, possibly starred.)

I see, and what you say is very enlightening. I think the most elegant aspect of Scheme is that it minimises the macro mechanism, and you can safely do, in macros, most things the language itself allows.
One tiny example. Laurent has advised against using the `printf' family of functions, due to their runtime inefficiency. This would be unnecessary if they were just cleanly expanded macros.
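A rough sketch, in C11, of what compile-time dispatch to type-specific output routines can look like (the put* names are made up for illustration; this is not skalibs' actual interface, and printf is used inside the helpers only to keep the sketch short):
Code:
#include <stdio.h>

static void put_int(int v)         { printf("%d", v); }
static void put_str(const char *s) { fputs(s, stdout); }

/* _Generic selects the helper at compile time, by the argument's type. */
#define put(x) _Generic((x), int: put_int,        \
                             char *: put_str,     \
                             const char *: put_str)(x)

int main(void)
{
    const char *msg = "6 * 7 = ";
    int v = 6 * 7;

    put(msg); put(v); put_str("\n");
    return 0;
}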
And one large example. See how the powerful (even more so than Haskell, at least according to one PL expert) type system in Typed Racket is built using only Racket with its macros.
These are examples that would be impossible (nearly so for the former, definitely for the latter) without structural macros. Which is why I believe homoiconicity does practically and dramatically reduce linguistic complexity.

steveL wrote:
Simplest of all, ofc, is just to use the toolset as intended.

I am not at all opposed to learning the current toolset. Actually I use quite a lot of it, and like much of it, more or less. (Otherwise, what forums would I be on? ;)
But I also advocate research into an elegant unified linguistic foundation on which applications (more or less) isomorphic to the current ones can be built.
This is not to pursue a One True Way, but to pursue a minimalist greatest common divisor. In other words, the Unix philosophy blossoming in the languages used to build our systems.
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sun Jul 08, 2018 2:49 pm

CasperVector wrote:
I think the most elegant aspect of Scheme is that it minimises the macro mechanism, and you can safely do, in macros, most things the language itself allows.
Indeed. I concur that functional languages have the best macros.
A tiny part of me insists that's because they have to ;) but that doesn't change the value of the approach.

The thing I would like to get across is that in its own way, shell has shown the best approach to handling strings, at least in my experience.
Again, this might be because it had to, but a better way to see it is that each has focussed on doing the one thing well.

In doing so, they have nurtured the specific approach, into polished elegance. And yes, it is best if the macro language and the implementation language are one and the same, I totally agree with that.

Let's just be careful not to knock C, because to do so is to misunderstand the level it works at. Have a read of Lions' Commentary on UNIX to see what I mean. (You won't be able to read it in one go; you will come back to it over years.)
The commentary is wonderful, but I recommend Bach's "The Design of the UNIX Operating System" along with it.

The point being, the level of CPP is when we have an assembler (which might be manual to start with) and a minimal compiler, and we don't want to use magic numbers, but symbolic defines that the compiler sees as constants.
This is in the days before we had 'enum' and indeed any type-checking, and a vastly reduced address-space, 64KB, with much less actual RAM. And we have to write the core OS, before we can actually use a toolchain in any comfort, and we need to build the linker and loader before we can do that.
All we really have is an assembler, and the ability to start the output once we load it to core.
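For instance (values illustrative, in the spirit of the old param.h):
Code:
#include <stdio.h>

/* The CPP level: a mnemonic name the compiler only ever sees as "50". */
#define NPROC 50

/* What came later: enum gives the same constant as a real symbol. */
enum { NOFILE = 15 };

int main(void)
{
    printf("%d processes, %d open files each\n", NPROC, NOFILE);
    return 0;
}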

Now, sure, we can just write a small lisp interpreter. But that's not going to help us code to asm, only to continual evaluation, by definition; and it doesn't express how we actually think (it was never meant to, so that's not a criticism.)

Before we can get to a nice environment where everyone can express themselves, we need a way to keep our asm-level code portable, and that, like it or not, is C. It is not meant to be at a higher-level; it is meant to provide the foundation on which we build such levels.
And that has been proven, first with sh, then make, lex, yacc and awk, and all the languages that have flowed from there.

This does not take anything away from other computing, or other strands like Forth, COBOL, SNOBOL, Pascal, Modula and Ada.
Computers do not compete like humans do. They simply get on with it, and know zero about the outside world, only about the bit patterns in front of them, at CPU level.
CasperVector wrote:
One tiny example. Laurent has advised against using the `printf'-family of functions, due to their runtime inefficiency.
Hmm I cannot agree with that stance, so I checked the page.
I couldn't find any reasoning in support of that position; indeed the facility quoted relies on vsnprintf, which is a lovely function, ime.
It (and the pair snprintf) was standardised in C99, but the approach was proven in C95 (C89 Amendment 1, from Sep 1994) with swprintf (and they really should have added snprintf at the same time.)
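A small sketch of the usual idiom around vsnprintf (the wrapper name is made up):
Code:
#include <stdarg.h>
#include <stdio.h>

/* Format into a fixed buffer, truncating safely; returns the would-be
 * length, so a result >= len means the output was cut short. */
static int fmtbuf(char *buf, size_t len, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsnprintf(buf, len, fmt, ap);
    va_end(ap);
    return n;
}

int main(void)
{
    char buf[32];
    int n = fmtbuf(buf, sizeof buf, "%s-%d", "answer", 42);

    if (n >= 0 && (size_t)n < sizeof buf)
        puts(buf);
    return 0;
}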
CasperVector wrote:
This would be unnecessary, if they were just cleanly expanded macros.
They are in fact an example of a "little language", as Kernighan and Pike put it. Read Chapter 9 of "The Practice of Programming", for more discussion.
Not that I disagree with you on macros, or indeed overuse of printf.
People don't seem to know that puts is effectively "writeln" from other languages. So you see a lot of: printf("%s\n", "some literal"); which really annoys me on an efficiency level, and more so on the ignorance of return value, especially when I see it in books.
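That is (a trivial illustration):
Code:
#include <stdio.h>

int main(void)
{
    printf("%s\n", "some literal");   /* parses a format string at run time */

    if (puts("some literal") == EOF)  /* same output: puts appends the '\n'  */
        return 1;                     /* and the return value gets checked   */
    return 0;
}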
CasperVector wrote:
And one large example. See how the powerful (even more so than Haskell, at least according to one PL expert) type system in Typed Racket is built using only Racket with its macros.
These are examples that are impossible (almost for the former, definitely for the latter) without structural macros. Which is why I believe homoiconicity does practically and dramatically reduce the linguistic complexity.
Couldn't get that page to load; will come back to this when my head is not quite so tired.
CasperVector wrote:
I advocate for the research of an elegant unified linguistic foundation that applications (more or less) isomorphic to the current ones can be built on.
Well, just start with yacc and lex, and explore parsing a LISP, which is ofc trivial, and build up from there.
You need to read the awkbook, if you haven't ("The AWK Programming Language".)
awk is what Aho (of the dragon book) and Weinberger (of papers on the underlying theory) did next, with Kernighan, ie: the discoverers of what is now taught as Core Computing Theory.
Their book lays out fundamental algorithms (again not surprising given Aho's involvement), and shows just how useful awk is.

You can tell, once you've used both awk and lex for any length of time, that awk is designed for prototyping, so that you can just copy the pattern from the awk script, directly to lex. Consider: #define length yyleng ("length" is an awk builtin.)
It's practically an evaluation front-end to lex and yacc. As sh is to C, in fact.

It really is a beautiful approach, and wonderfully elegant, ime.

The underlying principle is ofc an old one: separate out whatever you do into functional layers, that build on each other, and allow you to switch modes as appropriate to the problem.
You only drop down below the convenience layer, once you have worked out the overall approach at the O(N) level, ie algorithmically.

Working that out is the largest, and surprisingly the hardest, part of the programmed design.
It is difficult because it is always so easy to get lost in "complexity of one's own making."

So working at a functional level, with sparse clean code, that only does what it needs to, is the only way to keep focus.

Optimisation comes much later: once you actually have some feedback from usage, and only after you've actually got it working correctly.
Most software projects don't even reach that state for a few years, both in FLOSS and commerce.

So it's odd that people focus so much on "big datasets" and "multi-parallel" approaches, when seemingly everything they implement and publish doesn't even work properly to begin with.
Quote:
This is not to pursue a One True Way, but to pursue a minimalist greatest common divisor. In other words, the Unix philosophy blossoming in the languages used to build our systems.
Sounds cool.
But you need experience of the tools, to be able to implement any such thing, and the only way to get that is by doing (if you haven't already.)

The best intro to yacc (a true goldmine, in fact) is chapter 8 of UPE ("The UNIX Programming Environment"), again from Kernighan (& Pike). Indeed, Kernighan's reference awk implementation even has a couple of the same variable names as the code there.
I highly recommend "Lex & Yacc" (1992) by Levine, Mason and Brown; it provides insight into the background, when POSIX was only just standardised (so you could not rely on its build flags, -b and -p.)
By all means get the later one on flex, too.

Like I said, and as you know, it is trivial to lex and parse a functional language, so I'd like to see what you come up with, in say a year or so.
I am not saying "show us the code" blah blah; I am saying: Show us what you mean, in a yacc grammar file.
CasperVector
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Wed Jul 11, 2018 12:08 pm

steveL wrote:
Let's just be careful not to knock C, because to do so is to misunderstand the level it works at.
The point being, the level of CPP is when we have an assembler (which might be manual to start with) and a minimal compiler, and we don't want to use magic numbers, but symbolic defines, that the compiler sees as constants.
This is in the days before we had 'enum' and indeed any type-checking, and a vastly reduced address-space, 64KB, with much less actual RAM. And we have to write the core OS, before we can actually use a toolchain in any comfort, and we need to build the linker and loader before we can do that.
All we really have is an assembler, and the ability to start the output once we load it to core.

These seem to be even less than the minimal requirements for Unix v6, and most (perhaps more than three nines?) embedded devices now enjoy vastly more resources than these.
I still admire what the Unix pioneers achieved with such extremely limited resources, which I would not even dare to dream about.
However, we should also reconsider the restrictions put by these limits on the consistency and completeness of the languages.
(I think this also follows what the original author of "worse is better" later suggested.)

steveL wrote:
Now, sure, we can just write a small lisp interpreter. But that's not going to help us code to asm, only to continual evaluation, by definition; and it doesn't express how we actually think (it was never meant to, so that's not a criticism.)
Before we can get to a nice environment where everyone can express themselves, we need a way to keep our asm-level code portable, and that, like it or not, is C. It is not meant to be at a higher-level; it is meant to provide the foundation on which we build such levels.
And that has been proven, first with sh, then make, lex, yacc and awk, and all the languages that have flowed from there.

There are Scheme compilers, and some of them are really decent, both in terms of efficiency and elegance, e.g. Chez Scheme, which has recently been open-sourced.
Chez Scheme can self-compile in seconds, and produces code that sometimes outperforms C, and the implementation is much smaller than GCC or LLVM.
I guess a decent Scheme compiler could be bootstrapped from the assembler through Forth, then a minimal Scheme interpreter (without GC or other advanced features), and then a minimal Scheme compiler.
(BTW, note that this might also minimise the hazards of "trusting trust", as one would begin with the bare assembler.)

steveL wrote:
Hmm I cannot agree with that stance, so I checked the page.
I couldn't find any reasoning in support of that position; indeed the facility quoted relies on vsnprintf, which is a lovely function, ime.
They are in fact an example of a "little language", as Kernighan and Pike put it.

I find this very understandable, since Laurent emphasises minimalism more than most people. Remember his reluctance to implement virtual dependencies in s6-rc? They are still unsupported as of now.
Many thanks for the hint on little languages; I think structural macros will certainly help to reduce the runtime overhead of little languages, perhaps at the cost of code size.
(But we might be able to teach the preprocessor to arrange a one-time expansion for some kind of "compressed" code, so we might still win in terms of both time and space complexity.)

steveL wrote:
Well, just start with yacc and lex, and explore parsing a LISP, which is ofc trivial, and build up from there. [...]

I see, and many thanks for the helpful advice. I am interested in how you would respond to the following opinion by the PL expert I mentioned previously, who was also a student of Dan Friedman.
"State-of-the-art compiler courses care very little about parsing; compiler optimisation, including code compression, instead of parsing, is the main subject of compiler techniques."
"Syntaxes of mainstream languages (including Haskell) make parsers hard to write and make the languages hard to transform, for really few benefits."
"By using S-expressions, Lisp is easy to parse, and it is easy to implement code transforms from one well-defined 'language' into another well-defined 'language'."
(Disclaimer: I do not consider myself knowledgeable enough to be able to say which approach is better, so I just rephrase and ask for opinions from those who are knowledgeable.)
Chez Scheme (by R. Kent Dybvig, a former colleague of Dan Friedman, who also taught the above-mentioned PL expert) does its work in multiple tiny passes, thanks to the elegant expressiveness of S-expressions.
One of my hobby projects, a code generator for s6-rc, also used this idea, using the Unix directory hierarchy as the "intermediate representation".
(More importantly, Chez Scheme inspired nanopass; incidentally, the name of one author of nanopass can be abbreviated to "AWK".)

steveL wrote:
But you need experience of the tools, to be able to implement any such thing, and the only way to get that is by doing (if you haven't already.)
Like I said, and as you know, it is trivial to lex and parse a functional language, so I'd like to see what you come up with, in say a year or so.
I am not saying "show us the code" blah blah; I am saying: Show us what you mean, in a yacc grammar file.

As you might have already guessed, I do not deny that I have not written a single compiler, except perhaps for the s6-rc code generator mentioned above.
(I also wrote a calculator for the types of enemies in a Plants vs. Zombies game level, using lex and yacc, but I guess that is even less of a compiler...)
(Nevertheless, I consciously used constant folding and dead-code elimination in reagent, another one of my hobby projects, which mainly focused on refactoring and code simplification.)
I do not doubt that a fraction of what I imagined would be impractical, but I do believe that what I think, generally, about homoiconicity, etc., might be enlightening to some real experts.
I would certainly try what you suggested, but I am not a computer science major, and will not necessarily have a programming-related job, so perhaps I can only say "let's see what happens".
John R. Graham
Administrator


Joined: 08 Mar 2005
Posts: 10587
Location: Somewhere over Atlanta, Georgia

PostPosted: Wed Jul 11, 2018 2:04 pm

steveL wrote:
... The best intro to yacc (a true goldmine, in fact) is chapter 8 of UPE ("The UNIX Programming Environment"), again from Kernighan (& Pike). Indeed, Kernighan's reference awk implementation even has a couple of the same variable names as the code there.
I highly recommend "Lex & Yacc" (1992) by Levine, Mason and Brown; it provides insight into the background, when POSIX was only just standardised (so you could not rely on its build flags, -b and -p.)
By all means get the later one on flex, too. ...
I can also recommend flex & bison, by Levine. Very readable.

- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Jul 14, 2018 11:11 am

CasperVector wrote:
I still admire what the Unix pioneers achieved with such extremely limited resources, which I would not even dare to dream about.
However, we should also reconsider the restrictions put by these limits on the consistency and completeness of the languages.
There was none. Remember, LISP was conceived and implemented under even less resource.

You are still conflating a portable assembler, a necessary first-step, with languages dependent on a userland running on top of the base provided by, and implemented in, that portable asm.
Until you make that distinction in your mind, and see where having that distinction leads, you will continue to muddle what is clear.

Computing is not solely about computation: first and foremost, it is about the results.
IOW: "side-effects" are the whole bloody point. (This comes up a lot with functional programmers.)

That is not to say that a LISP, or other language, cannot be used in implementation of the base; just that there is no real point, since it's already covered, and in a way that actually suits how we prefer to express ourselves. (and without a load of self-styled "purists" complaining about all the "side-effects" everywhere.)

Before you argue on this, you should read "Syntactic Structures" (Chomsky, 1957) which is the theoretical underpinning of all computer languages. It will also help your work in yacc.
I understand that you don't agree with me on "how we prefer to express ourselves", because you prefer to express yourself in FP.
First read that book, and then read it again a couple of weeks later. Then we can discuss.
steveL wrote:
Hmm I cannot agree with that stance, so I checked the page.
I couldn't find any reasoning in support of that position; indeed the facility quoted relies on vsnprintf, which is a lovely function, ime.
They are in fact an example of a "little language", as Kernighan and Pike put it.
CasperVector wrote:
I find this very understandable, since Laurent emphasises minimalism more than most people.
That may well be, but I have just pointed out a contradiction between what you wrote above, and the reference you provided in support.
Without making a big deal about it, it behooves you to explain the contradiction, withdraw the remark, or otherwise explain the relevance of the reference or what you meant while conceding the reference does not support your statement.
CasperVector wrote:
Remember his reluctance to implementing virtual dependencies in s6-rc? They are still unsupported as of now.
While I agree with what he wrote there about only using one service for dnsproxy, I do not agree with him in not supporting "provides" (in openrc terms.)
It is not a disjunction; it is a set of alternatives, one of which must have been/be provided in some form or another, for the dependency to be fulfilled. (Yes, I realise that a set of alternatives is a "disjunction" in propositional logic. This is not propositional logic.)
This is nothing radically new (it's effectively an "alias"); it dovetails quite easily into a dependency resolver.
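A rough sketch of the "alias" idea in C (names and data entirely illustrative; not openrc's or s6-rc's actual code):
Code:
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Admin-configured "provides": a virtual name maps to whichever concrete
 * service the admin chose to fulfil it. */
struct provide { const char *virt; const char *real; };

static const struct provide provides[] = {
    { "net", "dhcpcd" },
    { "dns", "unbound" },
};

/* Resolve a dependency name before normal resolution: if it is virtual,
 * substitute the configured provider; otherwise use the name unchanged. */
static const char *resolve(const char *dep)
{
    for (size_t i = 0; i < sizeof provides / sizeof provides[0]; i++)
        if (strcmp(dep, provides[i].virt) == 0)
            return provides[i].real;
    return dep;
}

int main(void)
{
    printf("%s\n", resolve("net"));   /* prints "dhcpcd" */
    return 0;
}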

It is also relatively constrained in the openrc/initd space, since admins only want one "net" provided, even if they have several interfaces (which is less common) and further, dhcpcd takes care of everything when it comes to net. (if we are talking "next-gen" design, though ofc it's been around for about a decade that I recall, which means longer.)
The same applies to admin-configured "provides". (constrained.)

The admin knows what they're doing, much better than the developer who is not even present, by definition. Our job is solely to point out inconsistencies, and report status of monitored jobs; never to try and make up policy.
Defaults are for QA and distributors to worry about. Again, not our job, so let's not muck it up for those to whom the task does, and will always, fall: bug-wranglers and admins.
SEP ("Somebody Else's Problem") is the best conclusion of all, ime. It means: "I don't have to worry about this." :-)

Headspace is the most valuable asset a programmer has, since "Thinking is 70% of the game."

That is why commercial distros and the Microshites of this world, including Crapple, all try to capture "developer mindshare"; because they know it is a limited resource, and people tend to work with what they already know.
(That they're filling our heads and the Net with garbage is irrelevant to the only purpose: to transfer wealth in their direction.)

This above all, is why we must brush aside that tendency to accept without question, before we can even begin to approach a problem-space. Anything else means surrendering half our most vital resource to prejudice and its tedious ongoing maintenance, before we have started.

And that is not delivering our best.
Even if one doesn't care about end-users (in which case, we don't want to work with you), the end-product will be outstripped by other implementations, from less obnoxious cultures.
CasperVector wrote:
Many thanks for the hint on little languages.
The hint was intended to get you to read the book. ;) If it does that, it served its purpose.
CasperVector wrote:
I think structural macros will certainly help to reduce runtime overhead of little languages, perhaps in the cost of code sizes.
I doubt this; the whole point of these little languages is that the code is already present, in the library function that handles the format, and the format encapsulates more work if we did it ourselves.
Your implementation cannot add less code than none at all.
CasperVector wrote:
.. we might be able to teach the preprocessor to arrange a one-time expansion for some kind of "compressed" code, so we might still win in terms of both time and space complexity.
You can definitely apply strength-reduction to many of the calls to printf; to an extent compilers do this already.
I strongly recommend you take a look at Coccinelle.
You will like its syntax, too.
CasperVector wrote:
I am interested in how you would respond to the following opinion by the PL expert I mentioned previously, who was also a student of Dan Friedman.
"State-of-the-art compiler courses care very little about parsing; compiler optimisation, including code compression, instead of parsing, is the main subject of compiler techniques."
"Syntaxes of mainstream languages (including Haskell) make parsers hard to write and make the languages hard to transform, for really few benefits."
This is self-contradictory. First s/he pooh-poohs parsing(!) by saying no-one cares about it any more, then s/he says "parsing is hard", and the hand-wavey conclusion is "so let's change the language to make the parsing job easier":
CasperVector wrote:
"By using S-expressions, Lisp is easy to parse, and it is easy to implement code transforms from one well-defined 'language' into another well-defined 'language'."
That is simply a cop-out (and incorrectly implies LISP was designed to solve this "problem" no one else has.)

The reason there is nothing about parsers in "advanced" compiler courses, is because there is nothing more to be done; parsing is well-defined, and the theory and implementation were both thoroughly worked out in the 1970s.
Those discoveries form the basis of any halfway-decent degree's "Core Computing 101".
So no, you don't tend to worry about it 10 or 20 years later; you were taught it at the start, and all your work since then has relied on its application.

Parsing is very much a "compiler technique", and it is essential. You just don't need to worry about it, since one can simply use yacc.
Further, the interesting "advanced" work in compilers is not so much about optimisation, as it is about "type theory" (a misnomer, but forgivable; all of this is simply application of Group Theory.)

In passing, I would just say that "change the input format" is a terrible cop-out for a computer-scientist of any description[1].
To be blunt: it is risible.

The whole of the above argument reads to me along the same lines as "shell is bad" (because I can't write it.)
"Parsing is hard" (no it's not: use yacc) so "change the input format" and condition the users to accept our much worse implementation (in fact it does not even fulfil the basic requirement, so it is a much worse impl of something not even requested), rather than admit we just don't know (how to use) the toolset and (here's the important part): learn it, already.
You're right: "worse is better", really is worse.

Should this cause any friction, please remind whomever that the well-defined transformative base is RPN, and if s/he doesn't know how to generate it from those "ill-defined" (pfft) inputs, then ffs get UPE. It's been part of the literature since the early 1980s, so there is zero excuse for such ignorance of fundamental research (or "prior Art" if you want to be academic about it), and indeed basic methodology (like tracking the literature to source.)

In any event, I would say: just get on and do it already. You already know lex and yacc, so set aside some money for books, and allow for at least a year or two of study if you're doing this part-time.
There is nothing to stop you, at all. All the tools are available for nothing, and all the literature is open to you.
You even have a head-start. ;-)

Just make sure to use IRC: chat.freenode.org or .net and check out #haskell if you're not already in there.
##posix and ##workingset when you want to actually get stuff done.
--

[1] I am not talking about reworking a grammar which already accepts arithmetic expressions, to keep it "context-free"; but about deciding that you know what, we just cba to parse actual expressions, because we mistakenly believe it to be difficult (and the cargo-cult tells us that's okay, really; "we can't write sh"^W^W "use yacc, either.")
The latter amounts to ignoring fundamental requirements decided upon before implementation was even conceived.
It is simply unacceptable.
berferd
Tux's lil' helper


Joined: 13 May 2004
Posts: 117

PostPosted: Sat Jul 21, 2018 4:59 pm

Akkara wrote:
...Although I don't know what it says about a language if one needs to spend 6 months on an IRC channel before they are qualified enough to use it properly :)...


Sometimes the only way to win is not to play.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Fri Jul 27, 2018 10:52 pm

berferd wrote:
Sometimes the only way to win is not to play.
That's very true.
Unfortunately it has nothing whatsoever to do with learning a code language well enough to implement.
--
"Well enough" is in terms a programmer would use; not a "developer".
pjp
Administrator


Joined: 16 Apr 2002
Posts: 20054

PostPosted: Tue Aug 07, 2018 3:34 am

I split off the sociopolitical discussion per request.

[split] big companies, free software and unicorns.

I think I caught everything without taking too much. If not, let me know.
_________________
Quis separabit? Quo animo?
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Aug 18, 2018 7:44 am

pjp wrote:
I split off the sociopolitical discussion per request.

[split] big companies, free software and unicorns.
Thanks, pjp.
Love the title. :-)
steveL wrote:
I am not talking about reworking a grammar which already accepts arithmetic expressions, to keep it "context-free"; but about deciding that you know what, we just cba to parse actual expressions, because we mistakenly believe it to be difficult (and the cargo-cult tells us that's okay, really; "we can't write sh"^W^W "use yacc, either.")
The latter amounts to ignoring fundamental requirements decided upon before implementation was even conceived.
It is simply unacceptable.
Just wanted to follow up on this, as it's been pointed out to me that it reads harshly, especially when one considers LISP's primary early usage for symbolic (mathematical) expressions, eg: of polynomial derivatives wrt x.

One part of me wants to leave it at "Turing complete" which is an old cop-out (since every Turing-complete language can emulate any other, at least in theory: practice is another matter.)

There is an insight here though, into why LISPers tend to avoid yacc (quite apart from many useful features being unnecessary, since LISP is at level 1 on the Chomsky ladder.)
LISP is essentially a language to write parsers with; whereas yacc gives you a language to specify the input (at symbolic level, or level 2; sufficient for all mathematical and arithmetic expressions), and obviates parsing code altogether. Both allow you to focus solely on what happens in response, and when.

Personally, I'm happier continuing to use yacc, where it is also simple to handle tokens at input level, before evaluation -- or we would not be able to code assignments (vs dereference.)
What LISP calls "symbols" we call "tokens"; a "tokseq" is a "symbolic expression", or a (LISP) list if you prefer (since it is really a superset: a code file is a seq of tok, as a file is a seq of chr.)

WRT code languages generally, good ones don't tend to change much at grammar level, and even less after the first 5 years of user-feedback and iteration: one thing time has shown, is that the core of a language should be small, not large.
I much prefer: foo(..) for function application, at grammar level too: IDENT LPAR is always a function (or functional form): much more obvious for the user, as it is in the grammar/for the context-free parser.

Still, it's no wonder that the LISPs are the best at macros, since they operate at token level, in linguistic terms.
--
So yes, the above is wrong on the conceptual level; however the point (where we started) is to have a language that compiles to asm. While LISP/FP may be a useful part of the pipeline, it is not the whole story; whereas C and yacc cover it all, and thereby your language can compile to asm, as well as your parser; your prototyping is much easier; and you are guaranteed to be able to convert it to a top-down parser any time you like, so long as you deal strictly with conflicts. (Nowadays, you don't even need to bother with that, if it's just multi-threading at parse-time that is your concern. info bison)
If speed concerns you, consider how fast awk has always been. (cf: usage in #bash ##workingset #awk.)
==
There's two overall aspects to my decision: firstly, I like being able simply to see what tokens and structures I am dealing with, in the source in front of me. While we can parse with a LISP, it's so much simpler in design to have the tokens laid out in front of me, in mnemonic form, and know precisely what they are (since we generated them in the first place.)
This is at the level of asm: where I feel most comfortable. This is the other aspect: remember, ALL of this is about computing on a digital machine.

You can only wrap, or automate, what you know.
There is no point in pretending you are wrapping something completely different, either; wrapping just enables you and others to think in a different manner.

I see no benefit in not dealing directly with C and asm, since that is what the CPU will see.
Binary operators are fundamental to asm, as they are to mathematics, so there is no mismatch. (I don't have to go down to level 1 in my thinking, when I am in asm: and it definitely has an inherent functional flow, so I lose nothing in terms of being able to apply modality.)

Further, yacc takes away the need for any of the code to check input validity (as I would need to do in a LISP: after all it's a language for writing what yacc does.)
This to me is similar to not having to worry about strings in sh, or the ability to use higher-order functions in a LISP; it takes away a level of worry from me, which the tool or runtime will deal with (using established theory. cf: the Dragon book.)

LISPs are much more useful in optimisation, and might have more place in automating yacc conversion to topdown parse, were it not already long-established prior Art; cf: Dijkstra's "Shunting Yard" (code parsing and generation) and Wirth's "Compiler Construction" (generation of a topdown parser.)
Still, if I were 30 years younger and starting out, I'd definitely explore LISPs (SML is lovely, so are Scheme and Haskell); but I'd also make sure to learn about PROLOG (in addition to assembler and C, across architectures.)

If you're already into LISPs, consider how you would automate topdown conversion of a yacc parser from its (context-free) grammar rules.
And learn about PROLOG too, if you haven't used it already.

Just bear in mind: all you are ever producing is control-logic for a CPU.
CasperVector
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Mon Sep 03, 2018 2:03 pm

steveL wrote:
There was none. Remember, LISP was conceived and implemented under even less resource.

But practical Lisp implementations were indeed more resource-demanding, due to characteristics like garbage collection and run-time type checking.
Before judging Lisp and C based on the resources available when they were respectively created, do note that Unix was born independent of C.

steveL wrote:
You are still conflating a portable assembler, a necessary first-step, with languages dependent on a userland running on top of the base provided by, and implemented in, that portable asm.
Until you make that distinction in your mind, and see where having that distinction leads, you will continue to muddle what is clear.

There seems to be zero reason that a portable assembly could not be expressed in a homoiconic form, eg. using S-expressions.
BTW, regarding bootstrapping, I find the GNU Mes project quite interesting.

steveL wrote:
Computing is not solely about computation: first and foremost, it is about the results. IOW: "side-effects" are the whole bloody point. (This comes up a lot with functional programmers.)
That is not to say that a LISP, or other language, cannot be used in implementation of the base; just that there is no real point, since it's already covered, and in a way that actually suits how we prefer to express ourselves.
(and without a load of self-styled "purists" complaining about all the "side-effects" everywhere.)

You seem to be again confusing Lisp with functional programming (i.e. excluding procedural programming etc.), which you then seem to confuse with purely functional programming (i.e. eliminating side effects at all costs).
Lisp is admittedly to a large extent based on lambda calculus, and does discourage side effects; however, it (at least Scheme) does not object to using another paradigm when the latter does significantly reduce the total complexity.
I suggest that you skim through The Little Schemer / The Seasoned Schemer and note how many times `!' occurs in the code before again identifying Lisp with functional programming.

steveL wrote:
That may well be, but I have just pointed out a contradiction between what you wrote above, and the reference you provided in support.
Without making a big deal about it, it behooves you to explain the contradiction, withdraw the remark, or otherwise explain the relevance of the reference or what you meant while conceding the reference does not support your statement.

On exactly the same page, Laurent does recommend using "type-specific formatting functions instead in production-quality code", which means buffer_puts() etc, so I see no contradiction at all.

steveL wrote:
While I agree with what he wrote there about only using one service for dnsproxy, I do not agree with him in not supporting "provides" (in openrc terms.)
It is not a disjunction; it is a set of alternatives, one of which must have been/be provided in some form or another, for the dependency to be fulfilled.
(Yes, I realise that a set of alternatives is a "disjunction" in propositional logic. This is not propositional logic.)
This is nothing radically new (it's effectively an "alias"); it dovetails quite easily into a dependency resolver.

I think Laurent intended to say that he does not want to implement disjunctions in the dependency resolution engine, but is fine with them being implemented in a separate executable.
Which seems doable, although not yet implemented, perhaps because there are not enough people pushing Laurent for the functionality.

steveL wrote:
I doubt this; the whole point of these little languages is that the code is already present, in the library function that handles the format, and the format encapsulates more work if we did it ourselves.
Your implementation cannot add less code than none at all. You can definitely apply strength-reduction to many of the calls to printf; to an extent compilers do this already.
I strongly recommend you take a look at Coccinelle. You will like its syntax, too.

I do not want to deny the presence of the code, but to discuss how the expansion of the format strings can be done better, in order to achieve optimal run-time complexity while keeping the source code clean.
Actually I know Coccinelle, and think a lot of manpower could have been saved when writing such static code analysis/transformation tools, had we used a homoiconic language in the first place.
On a deeper level, the latter is very consistent with "achieve optimal run-time complexity while keeping the source code clean": here "source code" refers to both the compilee and the compiler.
(Which is why the time consumption of a compiler's self-compilation matters, and hence the accolades for Chez Scheme, which self-compiles in seconds, and produces executables that sometimes outperform C.)

steveL wrote:
This is self-contradictory. First s/he pooh-poohs parsing(!) by saying no-one cares about it any more, then s/he says "parsing is hard", and the hand-wavey conclusion is "so let's change the language to make the parsing job easier".
That is simply a cop-out (and incorrectly implies LISP was designed to solve this "problem" no one else has.)

I find his logic to be "parsing is cost-ineffective, and therefore minimised in state-of-the-art compilers", so again no contradiction.
WRT ease of code transformations, I find this quite consistent with the goal of reducing the total complexity, which I think I have said quite a lot about.
And I do not think anyone who knows the history of Scheme would consider the statement as implying Lisp had been explicitly designed to ease code transformations:
https://en.wikipedia.org/wiki/Scheme_(programming_language)#Minimalism wrote:
In 1998 Sussman and Steele remarked that the minimalism of Scheme was not a conscious design goal, but rather the unintended outcome of the design process. "We were actually trying to build something complicated and discovered, serendipitously, that we had accidentally designed something that met all our goals but was much simpler than we had intended....we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language."


steveL wrote:
The reason there is nothing about parsers in "advanced" compiler courses, is because there is nothing more to be done; parsing is well-defined, and the theory and implementation both have been thoroughly worked-out in the 1970s.
Those discoveries are what forms the basis of any halfway-decent degree's "Core Computing 101".
So no, you don't tend to worry about it 10 or 20 years later; you were taught it at the start, and all your work since then has relied on its application.

As stated above, the reason is surely not "there is nothing more to be done"; moreover, even the "reason" itself is not true to begin with, for instance cf. PEG.

steveL wrote:
Parsing is very much a "compiler technique", and it is essential. You just don't need to worry about it, since one can simply use yacc.
Further, the interesting "advanced" work in compilers is not so much about optimisation, as it is about "type theory" (a misnomer, but forgivable; all of this is simply application of Group Theory.)

Judging from the SLOCs (or more fundamentally, Kolmogorov complexity), the interesting parts in compilers are surely very much about optimisation.
Actually, I think quite a few compiler techniques are not trivial corollaries of type theory; call it "applied type theory" if you want, but then I would not just call compiler techniques "about type theory".
And BTW, type theory turns out to be roughly on the same level as category theory, which is roughly on the same level as set theory; in comparison, group theory is a part of algebra, which is usually based upon set theory.

steveL wrote:
In passing, I would just say that "change the input format" is a terrible cop-out for a computer-scientist of any description. To be blunt: it is risible.
The whole of the above argument reads to me along the same lines as "shell is bad" (because I can't write it.)

Sorry, but I find your analogy unfortunate, rather like "Bach did not write operas because he was incapable of writing them" (as demonstrated by Schweigt stille, plaudert nicht, he clearly could).
The person I repeatedly referred to wrote this post (sorry, I do not want to translate it in its entirety), which I think proves that he is perfectly able to write decent parsers.
(That person is highly controversial in my circle; frankly, I find him too radical in expression. These however do not falsify quite a few technical points he makes.)
CasperVector
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Mon Sep 03, 2018 2:36 pm

steveL wrote:
There's two overall aspects to my decision: firstly, I like being able simply to see what tokens and structures I am dealing with, in the source in front of me. While we can parse with a LISP, it's so much simpler in design to have the tokens laid out in front of me, in mnemonic form, and know precisely what they are (since we generated them in the first place.)

After all, the syntax is a UI, and the choice of a UI is a "religious" problem, so I do not want to convince you that you should use a homoiconic language.
However, you seemingly still do not object to the original motivation of what I pursue: a homoiconic linguistic basis for most practical aspects of programming, which dramatically reduces the total complexity of the system.
I said it, and now say it once more: I guess reconciling Lisp and Unix would be much easier than reconciling quantum mechanics and general relativity; and it would be, in a perhaps exaggerated sense, as meaningful.

steveL wrote:
Further, yacc takes away the need for any of the code to check input validity (as I would need to do in a LISP: after all it's a language for writing what yacc does.)
This to me is similar to not having to worry about strings in sh, or the ability to use higher-order functions in a LISP; it takes away a level of worry from me, which the tool or runtime will deal with (using established theory. cf: the Dragon book.)

On the contrary, I find Lisp to be like what yacc produces, as ASTs can be trivially represented as S-expressions.
(Hence the saying that it facilitates code transformations from one well-defined "language" into another well-defined "language".)

[Following are replies to parts of some older posts, and are mostly clarification of relevant concepts:]
steveL wrote:
I know what you mean about hygienic macros. But really, this is just about correctly stacking (or scoping) during eval.

Actually, this is not about hygienic macros, but about structural macros: with the latter, hygiene can be enforced in multiple ways; without the latter, you would lose a lot.
eg. Makefile or the shell can be emulated in Lisp using macros, and a possibly unprecedentedly powerful type system can be constructed.
The same is definitely impossible in C, or otherwise C++ would not even have existed.

steveL wrote:
I am not saying "show us the code" blah blah; I am saying: Show us what you mean, in a yacc grammar file.

Now I guess you have realised that what I want is completely independent of parsing, and that a `.y' file is therefore definitely not my goal. Instead, I currently have two ideas:
* Introduce gradual typing into Scheme, in a way like that with Typed Racket; and study how features like garbage collection could be optimised out by the compiler.
* Since C is a portable assembly, perhaps I can learn from Chez Scheme's own assembler; semi-low-level code generators like DJB's qhasm might also be a good reference.
In any case, I would need to learn much more about PL theory, and let's see what will happen.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Sep 19, 2018 5:42 am

steveL wrote:
There was none. Remember, LISP was conceived and implemented under even less resource.
CasperVector wrote:
But practical Lisp implementations were indeed more resource-demanding, due to characteristics like garbage collection and run-time type checking.
Before judging Lisp and C based on the resources available when they were respectively created, do note that Unix was born independent of C.
Eh, what now?
A) Who has been judging LISP and C based on resource? I certainly haven't. Have you?
So why slip that in there?
B) UNIX was not "born independent of C". I thought we'd established that ages ago.
In the same vein, it was developed alongside and including SH, which has always been a central part of the design (or at least some sort of shell.)

I am sorry but the rest seems to me to be along the same lines, wherein you backtrack on understanding we had already agreed upon (all of it pretty basic.)

I will come back to it when I am not quite so ragged from long hours of work, which perhaps is clouding my reaction.

I do find the pages you have linked interesting, such as GNU mes, and the one on Type Theory, linking to category theory. Much of what I have read in linked pages, makes me want to comment about the laxity of the whole "modern" approach; but I am not sure that would be very useful at this juncture.

I think much of the problem stems from mono-lingual people who have zero clue about other ways of thinking, or even that they exist at all.
That these same people are mono-lingual in English, the most dumbed-down language of all, makes it much harder to get through.
CasperVector
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Wed Sep 19, 2018 8:35 am

steveL wrote:
A) Who has been judging LISP and C based on resource? I certainly haven't. Have you?

You were attempting to refute "we should also reconsider the restrictions put by these [resource] limits on ...", based on the resources available when Lisp was "conceived and implemented".
Admittedly it was me that was judging LISP and C based on resources, but you were attempting to find an inconsistency in my argument.
... which was based on practical implementations of C and Lisp, so the focus in the following sentence was not "resources available", but "when they were respectively created":
CasperVector wrote:
Before judging Lisp and C based on the resources available when they were respectively created, do note that Unix was born independent of C.


steveL wrote:
B) UNIX was not "born independent of C". I thought we'd established that ages ago.
In the same vein, it was developed alongside and including SH, which has always been a central part of the design (or at least some sort of shell.)

From my understanding, their link in development mostly formed when Unix was being ported to PDP-11, instead of when it was born.
Noticed the (roughly) similar relations between "born" vs "ported" and "conceived and implemented" vs "practical implementations"? Hence the analogy.
(The time spans for the two relations are surely quite different, but they to some extent do reflect the development of fundamental features in the two software systems.)

steveL wrote:
I am sorry but the rest seems to me to be along the same lines, wherein you backtrack on understanding we had already agreed upon (all of it pretty basic.) [...]
Much of what I have read in linked pages, makes me want to comment about the laxity of the whole "modern" approach; but I am not sure that would be very useful at this juncture.

I look forward to seeing a few examples which you consider representative.

steveL wrote:
I think much of the problem stems from mono-lingual people who have zero clue about other ways of thinking, or even that they exist at all.
That these same people are mono-lingual in English, the most dumbed-down language of all, makes it much harder to get through.

Just in case, perhaps except for Laurent, the people involved in my posts (PL researchers, the Dale folks, me, etc) are surely not monolingual in the PL sense.
And since Laurent is French and the skarnet mailing lists are in English, he is definitely not monolingual in the natural language sense.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Oct 03, 2018 3:11 pm

steveL wrote:
A) Who has been judging LISP and C based on resource? I certainly haven't. Have you?
CasperVector wrote:
You were attempting to refute "we should also reconsider the restrictions put by these [resource] limits on ...", based on the resources available when Lisp was "conceived and implemented".
It feels like you are misreading my tone: I was not attempting to "refute" anything; we're just chatting afaic.
CasperVector wrote:
Admittedly it was me that was judging LISP and C based on resources, but you were attempting to find an inconsistency in my argument.
... which was based on practical implementations of C and Lisp, so the focus in the following sentence was not "resources available", but "when they were respectively created":
CasperVector wrote:
Before judging Lisp and C based on the resources available when they were respectively created, do note that Unix was born independent of C.
Forgive me, but that seems like semantic quibbling at the edges; you were "judging based on resource" due to the idea that implementation constraints restricted what could be done.
Let's just deal with that notion; it is incorrect imo. So yes, I am refuting the premise of what you're saying, in effect (my bad). Let me explain why I believe that.

"Technicians" of the 1960s and before, were much better educated than nowadays. The level of mathematical reasoning and intuition expected vastly outweighs what is taught to the vast majority of people currently, say before the age of 21.

Consider the papers and books drawing from that background, especially from before 1995 (1950s-70s for seminal groundwork) when publishing a book still took a vast effort, so that editing was much more thorough, and one could not even think of going to print without a track record or reputation from published papers and RFCs, as well as implementation experience.
They are written by people with a background in mathematics, electronics and physics; for other people with a similar background (or the time and will to learn it.)
They have been pored over, and reworked several times, with many "typists" and other staff having an input, as well as "peers" - those with whom the authors have collaborated, and wider.
There is no bulshytt, because it would be spotted from miles away, and would result in a reputational loss from which a career might never recover.

As such, when it comes to mathematical papers around expressions, and their computation, we are not talking at the level of a script-kiddy; the discussion is at a mathematical level, which does not concern itself with implementation limits, nor indeed imprecision when talking conceptually. Imprecision is an implementation artifact; quantisation is a known process, and quantization a known phenomenon (not the basis for a nonsensical term misinterpreting the "next [subatomic] particle-level" that we can quantise, or "quantify" if that is easier.)

So, the notion that implementation limits constrained a language like LISP, designed for processing mathematical symbolic expressions, and firstly as a thought-experiment in how to process mathematical expressions, is simply ludicrous.

The only place they "limited" C was in the standardization of separate phases for CPP and CC proper.
However, proper separation (or "structural hygiene" if you prefer) is critical as the basis of Chomsky-level separation.

In terms of a portable assembler, which I keep bringing you back to and you keep avoiding as a notion, this (loss of integration or "homoiconicity") is not a problem in any event. (It shows in the bugbear of no sizeof in CPP, or more broadly, no ability to query types. And sure, it can be integrated easily enough: but mandating it would open the door to a world of pain, if not done properly; and it is not required for the main purpose so parsimony dictates we avoid the issue and nod along to the idea that "implementation limitations" enforced separation.)

The first goal is simply a standard expression parser which enables our code to run on any platform, whilst allowing us to use symbolic constants "to retain mnemonic information for as long as possible" (PoP) and critically, affect machine-state (since the CPU is an FSM.)
This Standard C delivers, without pretending that it is wrapping anything else.

I keep bringing you back to it for two reasons:
1) You must always know the "getout" to the layer below; whatever you are implementing.
2) CPUs already, and have always since first development of an ALU, worked at the level of standard mathematical binary and unary operators.

The first point applies to all programming, ime; you code down in layers once you're done exploring the domain. When working on the implementation of any layer, you must know where the layer below is, when it gets called and so on; and you must document this in the internal API comments. (It makes it so much easier to implement.)

The second point applies at a fundamental level to all coding languages; this is what you are wrapping. (Bartee, 1972; Zaks, 1982; both 3rd ed.)

Since we start with "parsing human-encoded mathematical expressions" as a goal (in order to translate them for calculation at runtime), and they work at the level of binary operators in the main [with 3 std unary operators and one ternary (also the most basic form of an 'if', which is what it originally was)], the question remains: why on earth would we not want to translate directly from binop to binop, once it comes time to generate output?
We easily work on sequences or series all the time; that's precisely what iteration and recursion, both standard assembler constructs, are for.
"Accumulator" is a register in every CPU (that has an ALU. such as every core on your current machine or device.) [Labelling and multiple-units make no difference.]

There is no reason we should constrain ourselves to a level 1 modality, the level of regular expressions (only even more constrained), when we have a formal theory for level 2, we think at level 2, and so does the CPU to which we are talking (at least: it sure as hell feels like that when you code asm.)
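
As a concrete aside on level 1 vs level 2 (a sketch under my own naming, with no error handling for unbalanced input): arbitrary balanced nesting is precisely what a finite-state recogniser cannot track, whereas a few lines of recursive descent can, with the call stack doing the counting.
Code:
# Sketch: a minimal recursive-descent reader for parenthesised expressions.
# The call stack tracks nesting depth; a purely regular (finite-state)
# recogniser cannot do that for arbitrary depth.
def read_sexpr(tokens, pos=0):
    """Parse one expression starting at tokens[pos]; return (tree, next_pos)."""
    tok = tokens[pos]
    if tok == '(':
        items, pos = [], pos + 1
        while tokens[pos] != ')':
            item, pos = read_sexpr(tokens, pos)   # recursion = implicit stack
            items.append(item)
        return items, pos + 1                     # consume ')'
    return tok, pos + 1                           # an atom

tree, _ = read_sexpr(['(', '+', '5', '(', '*', '7', '6', ')', ')'])
print(tree)    # ['+', '5', ['*', '7', '6']]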

That's not to say we cannot use a level-1 modality, nor indeed talk about proper macros.
But we shall not restrict ourselves to FP, any more than we do in implementation across the UNIX ecosystem; or indeed we would IRL in terms of sequential thinking with no layer/s above (the "executive" at minimum; monomania is a problem for a reason.)

Hell, you're not just missing out on decent level-2; you're completely missing out on Prolog and its modality as well.

And no, it is not sufficient to say "well I can implement all those in FP, therefore I don't need them". Functional equivalence is not an excuse (nor a pretext: it is useful at implementation level, not conceptual); or we can all go back to writing logical asm without any higher-level constructs at all. After all, all you need is a control-stack, and you can use as many or as few data-stacks as you like. And shazzam, we can implement all the same stuff you can, and more.
It would still be a PITA as a general-purpose coding language.
(IOW: that is the same exact argument as Ye Olde "Turing-equivalence" Argument, which we dismissed ages ago in the context of code languages, since it is not even about them, and thus ofc does not speak to the im/practicality of their usage.)
CasperVector wrote:
I look forward to seeing a few examples which you consider representative.
Forget all that (it leads down the path of dopamine-triggering criticism for the sake of it, ime); have you read "Syntactic Structures", yet?
It's a very slim book, so it won't take long to read.
CasperVector wrote:
Just in case, perhaps except for Laurent, the people involved in my posts (PL researchers, the Dale folks, me, etc) are surely not monolingual in the PL sense.
And since Laurent is French and the skarnet mailing lists are in English, he is definitely not monolingual in the natural language sense.
Oh sure, that's why I felt free to make the comment.
Programming got a lot better after Standard C Amendment 1 (Sept. '94) came in, both before and after it had bedded down. (OFC it started to get shit, too, what with all the spivs coming into the industry from the mid-to-late 1980s -- on the UK side.)
As with the other sciences it has always been an international affair.

What I find noteworthy is how the vast majority of the people I rate as programmers working now are from non-Anglophone cultures (eg: Torvalds, Ts'o; to give familiar names.)
I think that perspective really is useful to someone who needs to translate and interpret human intent into terms a machine can execute, in order to implement the desired result.
OFC semi-autism tends to give a different perspective, too. ;-)

Oh, btw, I really would like to see what you would do in terms of processing yacc, Casper, from Scheme or another FP language, as described before.
If it helps, you can avoid all embedded actions and simply treat them as a CODE (or ACTION) symbol as shown in the grammar therein. [1]

I'd understand if you don't have time or inclination; but it really is a good way in to the whole language implementation thing, if you're a LISPer.

I'll leave you with a thought my boss berates with, on occasion: "Lexing IS Symbolic Processing." ;-)
==
[1] It's easy enough for lex to handle that, and not relevant to code at symbolic level, so there is no point bothering with all the LISP bits around application of symbols (or if working in yacc: with trying to parse C) -- so long as you track the input line number (at minimum.) It's pure sequential, with a couple of levels of stack. cf: Wirth, ibid.
Enforce the same requirement for ';' at statement end, as the original yacc did; every yacc coder uses them anyway. That keeps its grammar purely context-free in FSM terms. [IOW: this is trivial for a LISP.]
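
In case it helps as a starting point, here is a very rough sketch in Python of the kind of lexer meant above; the token names are my own invention, brace-balanced embedded actions collapse into a single CODE token, and the input line is tracked throughout (no strings, comments or error recovery):
Code:
# Rough sketch of a lexer for a yacc-like grammar; token names are illustrative only.
def tokenize(src):
    """Yield (kind, text, line); '{ ... }' blocks become a single CODE token."""
    i, line, n = 0, 1, len(src)
    while i < n:
        c = src[i]
        if c == '\n':
            line += 1; i += 1
        elif c.isspace():
            i += 1
        elif src.startswith('%%', i):
            yield ('MARK', '%%', line); i += 2
        elif c == '{':                              # brace-balanced action block
            depth, start = 0, i
            while i < n:
                if src[i] == '{': depth += 1
                elif src[i] == '}':
                    depth -= 1
                    if depth == 0:
                        i += 1; break
                elif src[i] == '\n': line += 1
                i += 1
            yield ('CODE', src[start:i], line)
        elif c.isalpha() or c == '_':
            start = i
            while i < n and (src[i].isalnum() or src[i] == '_'): i += 1
            yield ('IDENT', src[start:i], line)
        elif c in ':|;':
            yield ({':': 'COLON', '|': 'BAR', ';': 'SEMI'}[c], c, line); i += 1
        else:
            raise SyntaxError('unexpected %r at line %d' % (c, line))

for tok in tokenize("expr : expr PLUS term { add(); } | term ;"):
    print(tok)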
Back to top
View user's profile Send private message
CasperVector
Apprentice
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Fri Oct 05, 2018 8:54 am    Post subject: Reply with quote

steveL wrote:
Forgive me, but that seems like semantic quibbling at the edges; you were "judging based on resource" due to the idea that implementation constraints restricted what could be done.
Let's just deal with that notion; it is incorrect imo. So yes, I am refuting the premise of what you're saying, in effect (my bad). Let me explain why I believe that.

In retrospect, the discussion on resource constraints originated here, where you defended C based on the resources available to it when it had to serve as a portable assembler:
steveL wrote:
Let's just be careful not to knock C, because to do so is to misunderstand the level it works at.
The point being, the level of CPP is when we have an assembler (which might be manual to start with) and a minimal compiler, and we don't want to use magic numbers, but symbolic defines, that the compiler sees as constants.
This is in the days before we had 'enum' and indeed any type-checking, and a vastly reduced address-space, 64KB, with much less actual RAM. And we have to write the core OS, before we can actually use a toolchain in any comfort, and we need to build the linker and loader before we can do that.
All we really have is an assembler, and the ability to start the output once we load it to core.

My point is that those constraints have been gone for a long time, and that a homoiconic portable assembler, affordable in most application scenarios, is perhaps feasible.
That is to say that functionalities like GC, which might be necessary for the proposed code generator to exert its full expressiveness, are no longer an obstacle in the resource sense.
The generated code does not necessarily make use of functionalities like GC, and can therefore still be small; and by cross-compilation, extremely resource-constrained systems can also benefit.

steveL wrote:
In terms of a portable assembler, which I keep bringing you back to and you keep avoiding as a notion, this (loss of integration or "homoiconicity") is not a problem in any event.
I keep bringing you back to it for two reasons:
1) You must always know the "getout" to the layer below; whatever you are implementing.
2) CPUs already, and have always since first development of an ALU, worked at the level of standard mathematical binary and unary operators.
Since we start with "parsing human-encoded mathematical expressions" as a goal (in order to translate them for calculation at runtime), and they work at the level of binary operators in the main [with 3 std unary operators and one ternary (also the most basic form of an 'if', which is what it originally was)], the question remains: why on earth would we not want to translate directly from binop to binop, once it comes time to generate output?

Homoiconicity is surely not a "problem", and IMO it does bring about huge benefits; I think this is at the heart of our disagreement.
I do not find your two reasons to contradict what I propose, since these structures can easily be encoded in S-expressions; as I said:
CasperVector wrote:
I find Lisp to be like what yacc produces, as ASTs can be trivially represented as S-expressions.

I value homoiconicity because of the dramatic reduction it brings in the total complexity of the system. Not unlike why Unix people often prefer CLIs over GUIs.
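
To make the "ASTs are just S-expressions" point concrete, here is a toy sketch in Python (nested lists standing in for S-expressions; the tiny language and its forms are invented here): the very same list structure is ordinary data to print or transform, and it can also be evaluated directly.
Code:
# Toy sketch: an AST as nested lists (a stand-in for S-expressions).
# The mini-language is invented for illustration: numbers, symbols, +, * and let.
def evaluate(expr, env=None):
    env = env or {}
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                     # a symbol: look it up
        return env[expr]
    head, *rest = expr
    if head == '+':
        return sum(evaluate(e, env) for e in rest)
    if head == '*':
        result = 1
        for e in rest:
            result *= evaluate(e, env)
        return result
    if head == 'let':                             # ['let', name, value, body]
        name, value, body = rest
        return evaluate(body, {**env, name: evaluate(value, env)})
    raise ValueError('unknown form: %r' % head)

ast = ['let', 'x', ['+', 5, 7], ['*', 'x', 6]]    # (let x (+ 5 7) (* x 6))
print(ast)             # the program is plain data...
print(evaluate(ast))   # ...and it also runs: 72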

steveL wrote:
But we shall not restrict ourselves to FP, any more than we do in implementation across the UNIX ecosystem; or indeed we would IRL in terms of sequential thinking with no layer/s above (the "executive" at minimum; monomania is a problem for a reason.)
Hell, you're not just missing out on decent level-2; you're completely missing out on Prolog and its modality as well.
[...]; have you read "Syntactic Structures", yet? It's a very slim book, so it won't take long to read.
Oh, btw, I really would like to see what you would do in terms of processing yacc, Casper, from Scheme or another FP language, as described before.
If it helps, you can avoid all embedded actions and simply treat them as a CODE (or ACTION) symbol as shown in the grammar therein.
I'd understand if you don't have time or inclination; but it really is a good way in to the whole language implementation thing, if you're a LISPer.

First, homoiconicity != FP; otherwise, would programming in Tcl or Dale automatically qualify as FP? Second, concerning Prolog, have you heard of miniKanren?
It is also logic programming, only more elegant, and its simplicity (the entire implementation fits on the last two pages of the book) is a direct consequence of homoiconicity (see the sketch after the quotation below).
I have not read the Chomsky book, and may or may not read it in the future; anyway, as I said (cf. this for an intuitive example on what I mean):
CasperVector wrote:
[...] what I want is completely independent of parsing.
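
Since the elegance claim is hard to appreciate in the abstract, here is a minimal sketch in Python of the unification core that miniKanren-style systems are built around; the variable encoding and the names are mine, not taken from the book:
Code:
# Minimal sketch of unification over S-expression-like terms (nested tuples).
# Logic variables are encoded as ('?', name); this encoding is invented here.
def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == '?'

def walk(t, subst):
    """Follow variable bindings until reaching a non-variable or a free variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return a substitution (dict) making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

X, Y = ('?', 'x'), ('?', 'y')
print(unify(('pair', X, 3), ('pair', 5, Y), {}))   # {('?', 'x'): 5, ('?', 'y'): 3}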

BTW, I mainly use Python, the shell and occasionally C, which however does not prevent me from appreciating Lisp, or otherwise I would have never switched from Ubuntu to Gentoo.
I also do quite a lot of work in LaTeX, and often imagine what I pursue as a minimalist linguistic basis plus various "macro packages" for use in different application scenarios.
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Back to top
View user's profile Send private message
steveL
Watchman
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Oct 06, 2018 7:44 am    Post subject: Reply with quote

CasperVector wrote:
I have not read the Chomsky book, and may or may not read it in the future; anyway, as I said "what I want is completely independent of parsing."
I often imagine what I pursue as a minimalist linguistic basis plus various "macro packages" for use in different application scenarios.
We'll discuss further when you've actually read the linguistic theory underlying all the code languages and work you outline.

FTR: my boss thinks it's "terribly written; if only he could use the term 'participle', for instance, without talking around it instead, the exposition would be much clearer."
He still made me read it, however, before he'd even discuss language work with me (I spent the first few years on build-systems and "learning UNIX" as instructed.)

Not being rude, but there simply is no point in discussion that flails around without any underlying shared theoretical understanding.

Point being: "linguistic theory" that provides the "basis" is precisely what you are missing here, when you evade the prior Art, a fundamentally-flawed approach to research.

As well discuss Group Theory with someone who thinks they don't need Set Theory first, or can "get by" whilst never having learnt it, and never reading about it.

Again, no offence, but 'tis simply a waste of time, which becomes more precious to me the older I get.
Back to top
View user's profile Send private message
CasperVector
Apprentice
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Sat Oct 06, 2018 9:26 am    Post subject: Reply with quote

steveL wrote:
We'll discuss further when you've actually read the linguistic theory underlying all the code languages and work you outline.
Not being rude, but there simply is no point in discussion that flails around without any underlying shared theoretical understanding.
Point being: "linguistic theory" that provides the "basis" is precisely what you are missing here, when you evade the prior Art, a fundamentally-flawed approach to research.

Well, what you say has increased the probability that I will read that book; as a direct result, I looked it up on Wikipedia for a glimpse, and found this:
https://en.wikipedia.org/wiki/Syntactic_Structures wrote:
For Chomsky, the study of syntax is thus independent of semantics (the study of meaning).

From my understanding, parsing is almost purely syntactic analysis (constructing ASTs); the other compiler techniques mainly work on the semantic level (transforming ASTs).
(BTW, Scheme has `define-syntax', `syntax-rules', `syntax-case' etc., but to be precise these facilities actually work on the semantic level, IMHO.)
However, I note that only the last main-part chapter of the book, "Syntax and Semantics", deals with semantics, so I guess the book would be fairly irrelevant to my research.
As long as most (in terms of Kolmogorov complexity) compiler techniques work on ASTs, and ASTs can be comfortably encoded in S-expressions, my point stands.
(And as shown in the examples I have given before, the validity check of ASTs can be trivially encoded in the transformations, as easily as with yacc(1) if not even more easily.)
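
For a concrete (and entirely toy) example of a pass that works on the semantic level, here is constant folding in Python over a nested-list representation of an AST; the shape ['+', a, b] is invented here, and the pass neither knows nor cares how the tree was parsed:
Code:
# Toy AST-to-AST pass: constant folding over nested lists; shapes are illustrative.
import operator

OPS = {'+': operator.add, '*': operator.mul}

def fold(expr):
    """Replace operator nodes whose operands are all constants by their value."""
    if not isinstance(expr, list):
        return expr                                   # leaf: number or symbol
    head, *args = expr
    args = [fold(a) for a in args]                    # fold the children first
    if head in OPS and all(isinstance(a, (int, float)) for a in args):
        return OPS[head](*args)                       # e.g. ['*', 2, 3] -> 6
    return [head] + args

print(fold(['+', ['*', 2, 3], 'x']))            # ['+', 6, 'x']
print(fold(['*', ['+', 1, 1], ['+', 2, 2]]))    # 8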
Finally, quoting the person I repeatedly referred to (henceforth called by his family name, Wang; the text has been translated):
https://web.archive.org/web/20171216175605/http://www.yinwang.org/blog-cn/2015/09/19/parser wrote:
So the parser is to the compiler just like ZIP is to the JVM, like JPEG is to Photoshop. The parser plays a necessary yet ancillary/secondary role in the compiler, compared with the most important parts.

(On a slightly philosophical level, natural languages and PLs do differ, and it can be beneficial to tailor our methodology according to the requirements when handling them.)
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Back to top
View user's profile Send private message
steveL
Watchman
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Tue Oct 16, 2018 7:57 pm    Post subject: Reply with quote

steveL wrote:
We'll discuss further when you've actually read the linguistic theory underlying all the code languages and work you outline.
Not being rude, but there simply is no point in discussion that flails around without any underlying shared theoretical understanding.
Point being: "linguistic theory" that provides the "basis" is precisely what you are missing here, when you evade the prior Art, a fundamentally-flawed approach to research.
CasperVector wrote:
Well, what you say has increased the probability that I will read that book; as a direct result, I looked it up on Wikipedia for a glimpse
*sigh* I'm too old for this.
Either you want to learn or you do not.

If you just want to spend hours typing more speculation on the internet, can I suggest IRC: chat.freenode.net; I'd start with #bash and ##workingset, if I were in your shoes.
Neither of them are suitable for this discussion, but at least you'd get different perspectives from actual UNIX people; not just one grump.

I'd suggest ##yacc but it was quiet when we started it back up again a few years ago, and I haven't been online for ages.
You could learn a lot from #ed, too. (same story.)

The point is: you have spent far more time on this discussion, trying to evade the Theory, all the while declaiming your rigour and your passion for excellence, than it would have taken to read the book several times, and indeed cogitate thereupon in-between.

Think about that.

Again.
CasperVector wrote:
From my understanding
Your understanding is flawed.

Get that through your head. Now, please.

Your preconceptions are based on forty-third hand knowledge, at best.

I am not trying to diss you; I just want you to admit to yourself that you cannot plug gaps in understanding via the cargo-cult; the most you can do is add to your confusion, and make your own situation worse.

There is no need to climb-down, nor to explain to us quite how you came to this unfortunate circumstance, nor indeed that you agree with that statement.
The rest of us (over 40 and still doing this crap for a living) already know it to be true.

Welcome to "complexity of one's own making."

You broke it; you get to keep the pieces -- and live with them.
CasperVector wrote:
Finally, quoting the person I repeatedly referred to
What for, exactly?
To put it more precisely: to what avail do you quote this person, repeatedly?

Does Wang putting the same point in slightly different terms several years ago, change anything whatsoever?
Am I supposed to suddenly backdown from everything I have said? None of which you have actually grappled with, using anything like rigour, afaict.

From my perspective it feels like you're trying to get a reaction, same as you were doing with khayyam (as it appeared then.)

You should realise: None of this moves your understanding forward.

Only you can do that, by going and learning about Set Theory before you try to work on combinatorial algorithms in the domain of Graph Theory, on a machine you do not know, for a user 30 years after you are dead.

Or IOW: READ THE FSCKING BOOK.

It takes less than an hour to read, you numpty. ;)

For all your talk of being interested in a "minimal linguistic theory" you seem blissfully ignorant of Chomsky, the guy who laid out precisely that field, and who continued to work on it to this day, or at least way past 2000.

Think about that, and the title of his 1995 work (40 years after the basic intro you refuse so bravely to read): "The Minimalist Program."

"Syntactic Structures" was published the year after "The Logical Structure of Syntactic Theory", which was reprinted in 1975 (back in the days when printing a book took a lot more work and labour than it does now, and never happened lightly in CS -- unlike the last 20 years of bulshytt.)

I hope you can see my point, now. If not, have fun with that (or whatever else you get up to.)
Back to top
View user's profile Send private message
CasperVector
Apprentice
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Wed Oct 31, 2018 2:20 pm    Post subject: Reply with quote

steveL wrote:
The point is: you have spent far more time on this discussion, trying to evade the Theory, all the while declaiming your rigour and your passion for excellence, than it would have taken to read the book several times, and indeed cogitate thereupon in-between.
Your understanding is flawed. [...] Your preconceptions are based on forty-third hand knowledge, at best.

Since you insist on it so much, I have read the book, from which I have found little (if at all) in conflict with my points:
* The discussions on research methodology in linguistics are mainly applications of well-known scientific methodology.
* Regular grammars and CFGs have already been well known and widely used in computer science now.
* The rest, which is mostly on transformational grammars, is of little interest for PLs, presumably due to performance and complexity reasons.
* Therefore the book is, *wrt PLs* (or more precisely, language features), little more than what I have already known from the TCS and NLP courses I attended.

steveL wrote:
Am I supposed to suddenly backdown from everything I have said? None of which you have actually grappled with, using anything like rigour, afaict.

Fine, let's be more "rigorous" while essentially reiterating already-stated arguments, using exactly the same methodology from the Chomsky book:
* Is "minimising parsing" to be identified with "eliminating parsing", and (if I interpret you correctly) therefore to be implicative of regular grammar?
* Are S-expressions really merely on the regular-grammar level, if they require balanced parentheses and can represent ASTs so naturally?
* If the parser is merely a generator of ASTs, how does migrating to a minimal parser of a homoiconic format harm the rest of the compiler?
* In which way is Prolog inherently superior to miniKanren in logic programming, so that you alleged that I was "missing out on decent level-2"?
* How is Dale fundamentally less adequate than C as a portable assembler, so that you claimed that I kept avoiding the portable assembler as a notion?
* If homoiconicity enables capabilities unachievable in other PLs while vastly reducing the total complexity, how is the lack of it not an inadequacy?
(Note how easily the last four questions can probably be answered without citing Chomsky, which is exactly why I guessed the book would be irrelevant.)

steveL wrote:
What for, exactly? To put it more precisely: to what avail do you quote this person, repeatedly?
Does Wang putting the same point in slightly different terms several years ago, change anything whatsoever?

Because each time his writing brought some new insight, usually falsifying something you said:
* The ZIP-to-JVM / JPEG-to-Photoshop simile succinctly summarises what I have been arguing (keyword: "Kolmogorov complexity").
* An unprecedentedly powerful type system can be constructed using structural macros, contrary to your claim that they were trivial.
* He is perfectly capable of writing decent parsers (and probably better than those you write), contrary to your allegation.
* Compiler techniques are much more than pure type theory, in sharp contrast with your assertion.
(And it is exactly his writing that made me learn about miniKanren, which falsifies your claim on logic programming.)

steveL wrote:
For all your talk of being interested in a "minimal linguistic theory" you seem blissfully ignorant of Chomsky, the guy who laid out precisely that field, and who continued to work on it to this day, or at least way past 2000.
Think about that, and the title of his 1995 work (40 years after the basic intro you refuse so bravely to read): "The Minimalist Program."
Only you can do that, by going and learning about Set Theory before you try to work on combinatorial algorithms in the domain of Graph Theory, on a machine you do not know, for a user 30 years after you are dead.

You completely miss the fact that the book's influence on PL research, mainly via regular grammars and CFGs, stems from such a small portion of it, one which has since been explained much better elsewhere.
You also confuse "minimising parsing" with "eliminating parsing", and then again deny the significance of homoiconicity (or, more fundamentally, the difference between natural languages and PLs), as I have mentioned above.
Regarding the relation between parsers and PLs, I find it much more like that between set theory and group theory (which you amusingly used to support your argument): to plagiarise Wang, "necessary yet ancillary/secondary".
(Also note that group theory was born ~40 years before set theory, and that most non-Bourbaki instructors do not really emphasise set theory when teaching group theory.)

steveL wrote:
From my perspective it feels like you're trying to get a reaction, same as you were doing with khayyam (as it appeared then.)

Since you clearly know that that unpleasant discussion resulted from drastically different interpretations of the same dialogue, I consider this a deliberate distortion of what happened.
Incidentally, you are a much worse interlocutor in the field of PLs than khayyam is in the discipline of political research; at least his evidence did not turn out to contradict his very points.
(And ask yourself: how many *language features* have you seriously implemented? Wang at least claimed to have implemented almost every one he knew of.)
Finally, as a more naughty alternative to the response above, isn't "to get a reaction" a stated goal of mine in the first place?
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C


Last edited by CasperVector on Wed Apr 17, 2019 11:44 am; edited 2 times in total
Back to top
View user's profile Send private message
steveL
Watchman
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Thu Nov 08, 2018 4:42 pm    Post subject: Reply with quote

steveL wrote:
The point is: you have spent far more time on this discussion, trying to evade the Theory, all the while declaiming your rigour and your passion for excellence, than it would have taken to read the book several times, and indeed cogitate thereupon in-between.
CasperVector wrote:
I have read the book
Hallelujah, Mohammed got off his arse and walked up the road.
Quote:
from which I have found little (if at all) in conflict with my points:
What TF? Why would you think there would be anything in a Theory, to conflict with later application thereof? Is this your "critique"?
I see it is. It's "all well-known" now, so "why should I actually understand it at base."
CasperVector wrote:
Therefore the book is, *wrt PLs* (or more precisely, language features), little more than what I have already known from the TCS and NLP courses I attended.
Ah that explains it; you don't realise that what you've been studying is in essence "Neuro-Linguistic Programming", a very nasty field indeed.
And you've absorbed a good deal of preconception by osmosis.

Give it a couple of months, and you'll be re-reading Chomsky, and suddenly it will make much more sense.
steveL wrote:
Am I supposed to suddenly backdown from everything I have said? None of which you have actually grappled with, using anything like rigour, afaict.
CasperVector wrote:
Fine, let's be more "rigorous" while essentially reiterating already-stated arguments, using exactly the same methodology from the Chomsky book:
No, let's not, until you learn the meaning of "rigour", which you will soon find has very little to do with "picking holes" in someone else's points, selected at potted random, and everything to do with turning over your own arguments from every angle, picking holes therein, and going back to the sketchpad and reworking everything, throwing out everything you just sweated over, perhaps for months: simply because you found holes in it, not because anyone else pointed them out.

In this instance, for example, you do not have the good sense at least to ameliorate the tone, somewhat, by acknowledging that I felt put-out by your asking me to recritique the same fscking point.
You still show zero awareness of it, beyond "allowing" for it to appear in the part you quoted (and then going on notionally to rip my argument to shreds. Note the operative term in that last clause.)

I am sorry if that disappoints, but cannot pretend it bothers me personally. I am up to my eyeballs in work, and pissing contests with someone 20 or more years younger, or indeed anyone at all, simply seems.. folly.

In a nutshell: learn some consideration in your discourse. (God knows I've had to during this exchange.)
If that insults, tant pis; and QED, afaic.

BTW: your points wrt sexpr show that you are stuck in (recti-)linearity, without realising it. Just a heads-up.
Try having a word with some haskell-ites on IRC, to get a broader perspective.
Back to top
View user's profile Send private message
CasperVector
Apprentice
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Mon Nov 12, 2018 3:49 am    Post subject: Reply with quote

steveL wrote:
Why would you think there would be anything in a Theory, to conflict with later application thereof? Is this your "critique"?
I see it is. It's "all well-known" now, so "why should I actually understand it at base."
Ah that explains it; you don't realise that what you've been studying is in essence "Neuro-Linguistic Programming", a very nasty field indeed.
And you've absorbed a good deal of preconception by osmosis.

I merely said the theory did not conflict with my original points, contrary to your claim that it would support yours.
IOW, the theory is, in the worst case, a non sequitur in our conversation; moreover, it even proved to work against your points.
BTW, the book proved to be even more instructive to me, in a surprising way: the methodology surely helped to reorganise the arguments I made.

steveL wrote:
No, let's not, until you learn the meaning of "rigour", which you will soon find has very little to do with "picking holes" in someone else's points, selected at potted random, and everything to do with turning over your own arguments from every angle, picking holes therein, and going back to the sketchpad and reworking everything, throwing out everything you just sweated over, perhaps for months: simply because you found holes in it, not because anyone else pointed them out.
In this instance, for example, you do not have the good sense at least to ameliorate the tone, somewhat, by acknowledging that I felt put-out by your asking me to recritique the same fscking point.
You still show zero awareness of it, beyond "allowing" for it to appear in the part you quoted (and then going on notionally to rip my argument to shreds. Note the operative term in that last clause.)

Blah blah: a blanket denial of all the flaws I have found in your arguments, totally oblivious to how little (if any) substance remains therein.
Why would I repeat my arguments, if there had been direct answers, for once, instead of blanket statements like "cf. Chomsky"?
Quoting yourself: "either you want to learn or you do not"; wrt "ripping" your arguments, did I take any of them out of context?

steveL wrote:
BTW: your points wrt sexpr show that you are stuck in (recti-)linearity, without realising it. Just a heads-up.
Try having a word with some haskell-ites on IRC, to get a broader perspective.

Lisp has the infrastructure to do everything Haskell does, but the converse is not true.
BTW, do not hesitate to ask the same group of people whether Lisp is just regular grammar.

steveL wrote:
I am sorry if that disappoints, but cannot pretend it bothers me personally. I am up to my eyeballs in work, and pissing contests with someone 20 or more years younger, or indeed anyone at all, simply seems.. folly.

Why should I be disappointed, if what I follow was led by Gödel, Turing, von Neumann et al.?
(Never mind if homoiconicity, a notion as profound as complexity, does not bother you; "if that insults, tant pis".)
Actually I feel encouraged, now that you blatantly resort to blanket denial, after all your arguments have been dissected and falsified.
And I surely prefer the "cargo-cult" championed by McCarthy, Steele, Friedman et al, over the one born 14 years later.
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Back to top
View user's profile Send private message
steveL
Watchman
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Mon Nov 12, 2018 7:48 am    Post subject: Reply with quote

As I said, a pissing-contest. QED (again.)

Already evinced by the unpleasantness in your prior reply, btw:
CasperVector wrote:
you are a much worse interlocutor in the field of PLs than khayyam is in the discipline of political research.. [unsupported]
And ask yourself, how many *language features* have you seriously implemented..
Hint: as soon as you find yourself reaching for ad-hominem, you know you are on the wrong path.

Bleeding obvious: when your interlocutor (the person on the other end of a conversation) starts talking about a "pissing-contest" you really should look back at your previous statements to ask yourself what was meant; in any event you try to turn it down a notch, not up.

Nota: All I had said was:
steveL wrote:
From my perspective it feels like you're trying to get a reaction, same as you were doing with khayyam (as it appeared then.)
Note how qualified that is, and how much effort there is to allow for misunderstanding on mine own part.
The contrast seems rather stark, from where I sit.

BTW: You missed the difference between linear and recti-linear, I think, in your haste to diss me yet again.
But you really do need to ponder what I am talking about, and the various Theoria I referred you to.

As you noted, Chomsky has already enabled you to reframe your conceptions; that is precisely what I was on about.

Since you have dissed me yet again, let me just conclude with:
Application denied: Inability to raise head, and marked rudeness when confronted with own limitations, and no insight into when brain has just been stretched, nor any ability to express positivity instead of sullen nastiness. Do not refile. [Admin: Keep details in to-garbage list.]
Back to top
View user's profile Send private message
CasperVector
Apprentice
Apprentice


Joined: 03 Apr 2012
Posts: 156

PostPosted: Mon Nov 12, 2018 2:48 pm    Post subject: Reply with quote

steveL wrote:
[... quotation ...] Hint: as soon as you find yourself reaching for ad-hominem, you know you are on the wrong path.
Nota: All I had said was: [... quotation ...] Note how qualified that is, and how much effort there is to allow for misunderstanding on mine own part.
The constrast seems rather stark, from where I sit.

You guessed, incorrectly, based on already-falsified speculation, whereas I was fully confident I did not misunderstand you in what you quoted:
* How much of your evidence turned out to be falsified, or even to work against your very points, without you even noticing? FYI, khayyam had none.
* It is abundantly clear that you lack experience in implementing language features (ask for the reason if you really want, though I suggest not).
(Corollary: "'shell is bad' because they can't write it" is more applicable to you than to Wang. Moral: gather more evidence before saying it again.)
So, noting that what we each said was a non sequitur in nature, what I said was at most roughly as ad hominem as what you said.

steveL wrote:
BTW: You missed the difference between linear and recti-linear, I think, in your haste to diss me yet again.
But you really do need to ponder what I am talking about, and the various Theoria I referred you to.
As you noted, Chomsky has already enabled you to reframe your conceptions; that is precisely what I was on about.

Another non sequitur, not even as good as propaganda buzzwords like "socket activation": at least the latter has some vaguely relevant meaning.
By "Theoria", did you mean "non sequuntur or arguments against steveL's own points"? I still see no response on them wrt the role of the parser.
If you had really understood Chomsky's methodology, you would certainly not have used the book to support your points.
(Or did you push me to read the book, just to falsify your own points? I find this much more noble an excuse.)

steveL wrote:
Since you have dissed me yet again, let me just conclude with:
Application denied: Inability to raise head, and marked rudeness when confronted with own limitations, and no insight into when brain has just been stretched, nor any ability to express positivity instead of sullen nastiness. Do not refile. [Admin: Keep details in to-garbage list.]

Did you even glance at the formal publications I had mentioned, or reflect at all when your evidence turned against you?
Never mind the ad hominem you made yourself, and never mind all your "transliterated" or abbreviated curses throughout this website.
So much for "insight" and "rudeness". Anyway, please feel free to announce your conclusions to the mirror, and it will surely help.
BTW, again complete evasion of the substance. Guess who would have a change of mind in a few months, or perhaps sooner?
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Back to top
View user's profile Send private message
steveL
Watchman
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Nov 14, 2018 9:18 am    Post subject: Reply with quote

Some people would not recognise a metaphor when slapped across the face with a big wet fish.
Back to top
View user's profile Send private message
Display posts from previous:   
Reply to topic    Gentoo Forums Forum Index Gentoo Chat All times are GMT
Goto page Previous  1, 2, 3  Next
Page 2 of 3

 