FORTH? Really!?

(rescrv.net)

Comments

selvor 17 minutes ago
I like that the article mentions Manfred von Thun; his iteration on FORTH, the language Joy, was a joy to use, and I spent many hours writing code with it. It's a breeze. It was the true mirror reflection of Lisp in FORTH, and it helps one reap the benefits of concatenation without getting bogged down in the nitty-gritty details one often ends up dealing with in FORTH, at least in the beginning, until a mature vocabulary is built for the problem.
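
To give a flavor of that Lisp-mirror quality for anyone who hasn't tried it: Joy passes programs around as quoted lists, and plain Forth can approximate the feel with execution tokens. A rough sketch (the word "twice" is a name I'm making up for illustration):

    \ Standard Forth: execution tokens as a poor man's Joy quotation
    : square ( n -- n*n )  dup * ;
    : twice  ( n xt -- n' )  dup >r execute r> execute ;
    3 ' square twice .  \ prints 81; Joy would pass the program as a quotation like [dup *]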

I'm still far behind on how LLM tech actually works, so I don't know what to think of this article, but my experience using LLMs to generate FORTH code was often a failure. They just can't get it right, most likely due to a lack of training data. OTOH, I also found writing FORTH as a human way more difficult than any other language I've used, even more so than hand-written assembly. But the effort amortizes fairly quickly, and things get a lot easier after a point (what I called vocabulary maturity earlier). More importantly, writing FORTH code is somehow way more fun and satisfying to me.
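
To show what that maturity looks like, here is a minimal made-up example of the factoring involved; once small words like these exist, later definitions read almost like prose:

    \ Sketch of Forth-style factoring (names are illustrative)
    : square          ( n -- n*n )        dup * ;
    : sum-of-squares  ( a b -- a*a+b*b )  square swap square + ;
    3 4 sum-of-squares .  \ prints 25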

d3nit 6 hours ago
From the title alone I thought it would be another FORTH-interpreter-implementation article, but I was happy to see someone actually using FORTH for something besides proving out their interpreter with a Fibonacci calculation.
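
For reference, the cliché in question fits on one line; a typical iterative sketch (not taken from the article):

    \ The classic interpreter smoke test
    : fib ( n -- fib[n] )  0 1 rot 0 ?do tuck + loop drop ;
    10 fib .  \ prints 55
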
codr7 5 hours ago
My own baby started out as a Forth dialect, but now sits somewhere between Logo and Common Lisp on the complexity scale. Forth is a good starting point imo; you don't waste any time on non-essentials.

https://gitlab.com/codr7/shik

jandrewrogers 7 hours ago
The observation that concatenative programming languages have nearly ideal properties for efficient universal learning on silicon is very old. You can show that the resource footprint a universal learner needs in order to learn such a language effectively is much lower than for other common programming models. There is a natural mechanical sympathy with the theory around universal learning. It was my main motivation to learn concatenative languages in the 1990s.

This doesn't mean you should write AI in these languages, just that it is unusually cheap and easy for AI running on silicon to reason about code written in them.
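
To make that concrete with a small, hedged illustration: in a concatenative language, juxtaposition is function composition, so a program can be cut at almost any word boundary into two smaller programs that are each valid on their own, which keeps local reasoning cheap:

    \ Sketch: splitting a definition yields two valid programs
    : f ( n -- 2n+1 )  2 * 1+ ;
    \ Cut after "2 *" and both halves stand alone:
    : double ( n -- 2n )   2 * ;
    : inc    ( n -- n+1 )  1+ ;
    : f2 ( n -- 2n+1 )  double inc ;  \ same function, built by concatenation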

haolez 6 hours ago
Diffusion text models to the rescue! :)
rescrv 17 hours ago
Looking to discuss whether LLMs would do better if the language had properties similar to postfix notation.
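
One such property, as a hedged illustration rather than a claim from the article: postfix code needs no precedence rules or parentheses, so it can be evaluated, and perhaps generated, in a single left-to-right pass:

    \ Infix (3 + 4) * 5 needs parentheses; postfix does not
    3 4 + 5 * .  \ prints 35, evaluated strictly left to right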