Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.
Views are my own.
Besides the fun of stretching your mental muscles to think in a different paradigm, Forth is usually used in the embedded-devices domain (like one of the earlier Mars rovers, whose name I forget).
For me, this project is mostly about the excitement and joy I get out of implementing a Forth (which is usually done in Assembler and C) on the JVM. While I managed to keep the semantics the same, the underlying machinery is vastly different from, say, GForth. I find it quite a pleasing exercise.
Last but not least, if you like concatenative languages but haven’t been able to have fun with one on the JVM, bjForth may be what you’re looking for.
Hope this answers your question.
Whoa! This is pretty rad! Thanks for sharing!
That’s definitely an interesting idea. Thanks for sharing.
Though it means that someone down the line must have written a bootstrap programme in C/Assembler to run the host Forth.
In the case of bjForth, I decided to write the bootstrap too.
That’s impossible unless you’ve got a Forth machine.
Where the OS native API is accessible via a C API, you’re bound to write a small bootstrap programme (in C/C++/Rust/etc) and then build your Forth on top of it. That’s essentially what bjForth is at the moment: the bootstrap, using the JVM native API.
Currently I’m working on a set of libraries to augment the 80-something words the bjForth bootstrap provides. These libraries will be, as you suggested, written in Forth rather than Java, because they can tap into the power of the JVM via the abstraction API that the bootstrap primitives provide.
Hope this makes sense.
Haha…good point! That said, bjForth is still a fully indirect-threaded Forth. It’s just that instead of assembler and C/C++, it calls the Java API to do its job.
Thanks for the pointer! Very interesting. I may actually end up doing a prototype to see how far I can get.
I usually capture all my development-time “automation” in Make and Ansible files. I also use makefiles to provide a consistent set of commands for the CI/CD pipelines to work w/ in case different projects use different build tools. That way CI/CD only needs to know about make build, make test, make package, … instead of Gradle/Maven/…-specific commands.
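For illustration, here’s a minimal sketch of such a wrapper makefile (the Gradle tasks are just an assumption for a Java project using the standard plugin):

.PHONY : build test package

# Delegate to the project's actual build tool so that CI/CD only ever calls make.
build :
	./gradlew classes

test :
	./gradlew test

package :
	./gradlew assemble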
Most of the time, the makefiles are quite simple and don’t need many comments. However, there are times when that’s not the case, hence the need to write a line of comment on particular targets and variables.
Can you clarify what you mean by “check the environment”, and why you’d need to do that before anything else?
One recent example is a makefile (in a subproject) w/ a dozen targets to provision machines and run Ansible playbooks. Almost all the targets need at least a few variables to be set. Additionally, I needed any fresh invocation to clean the “build” directory before starting the work.
At first, I tried capturing those variables w/ a bunch of ifeqs, shells and defines. However, I wasn’t satisfied w/ the results for a couple of reasons, one of them being that I ended up invoking the clean target as a shell command at the top of the file.
Then I tried capturing that in a target using bmakelib.error-if-blank and bmakelib.default-if-blank as below.
##############
.PHONY : ensure-variables
ensure-variables : bmakelib.error-if-blank( VAR1 VAR2 )
ensure-variables : bmakelib.default-if-blank( VAR3,foo )
##############
.PHONY : ansible.run-playbook1
ansible.run-playbook1 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook1 :
...
##############
.PHONY : ansible.run-playbook2
ansible.run-playbook2 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook2 :
...
##############
But this was not DRY: I had to repeat the same prerequisites on every target.
That’s why I thought there might be a better way of doing this, which led me to the manual and then to the method I describe in the post.
running specific targets or rules unconditionally can lead to trouble later as your Makefile grows up
That is true! My concern is that when the number of targets which don’t need that initialisation grows, I may have to rethink my approach.
I’ll keep this thread posted of how this pans out as the makefile scales.
Even though I’ve been writing GNU Makefiles for decades, I still am learning new stuff constantly, so if someone has better, different ways, I’m certainly up for studying them.
Love the attitude! I’m in the same boat. I could have just kept doing what I already knew, but I thought a bit of manual reading would be well worth it.
That’s a great starting point - and a good read anyways!
Thanks 🙏
Agree w/ you re trust.
Thanks. At least I’ve got a few clues to look for when auditing such code.
I didn’t like the capitalised names, so I configured xdg to use all-lowercase names. That’s why ~/opt fits in pretty nicely.
You’ve got a point re ~/.local/opt but I personally like the idea of having the important bits right in my home dir. Here’s my layout (which I’m quite used to now after all these years):
$ ls ~
bin
desktop
doc
downloads
mnt
music
opt
pictures
public
src
templates
tmp
videos
workspace
where

- bin is just a bunch of symlinks to frequently used apps from opt
- src is where I keep clones of repos (but I don’t do work in src)
- workspace is where I do my work on git worktrees (based off src)

Thanks! So much for my reading skills/attention span 😂
Which Debian version is it based on?
RE Go: Others have already mentioned the right way, though I’d personally prefer ~/opt/go over what was suggested.
RE Perl: To instruct Perl to install to another directory, for example ~/opt/perl5, put the following lines somewhere in your bash init files.
export PERL5LIB="$HOME/opt/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
export PERL_LOCAL_LIB_ROOT="$HOME/opt/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"
export PERL_MB_OPT="--install_base \"$HOME/opt/perl5\""
export PERL_MM_OPT="INSTALL_BASE=$HOME/opt/perl5"
export PATH="$HOME/opt/perl5/bin${PATH:+:${PATH}}"
Though you need to re-install the Perl packages you had previously installed.
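As a quick sanity check (assuming you use cpanminus; Path::Tiny is just an example module), a fresh install should now land under ~/opt/perl5:

# Installs honour PERL_MM_OPT / PERL_MB_OPT from the exports above.
cpanm Path::Tiny

# Confirm the module resolves from the new location.
perl -MPath::Tiny -E 'say $INC{"Path/Tiny.pm"}'
# expected to print a path under ~/opt/perl5/lib/perl5/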
This is fantastic! 👏
I use Perl one-liners for record and text processing a lot and this will definitely be something I keep coming back to - I’ve already learned a trick from “Context Matching” (9) 🙂
I couldn’t agree more 😂
Except that what the author uses is pretty much standard in the Go ecosystem, which is, yes, a shame.
To my knowledge, the only framework which does it quite seamlessly is Spring Boot, which, w/ sane and well-thought-out defaults, gets the tracing done w/o the programmer writing a single line of tracing-related code.
That said, even Spring’s solution is pretty heavy-weight compared to what comes OOTB w/ BEAM.
I’ve got to admit that your points about the author’s presentation skills are all correct! Perhaps the reason I was able to relate to the material and ignore those flaws is that it’s a topic I’ve been actively struggling w/ over the past few years 😅
That said, I’m still happy that this wasn’t a YouTube video or we’d be having this conversation in the comments section (if ever!) 😂
To your point and @[email protected]’s RE embedded systems:
It’s absolutely true that such a mindset is probably not going to work in an embedded environment. The author, w/o explicitly mentioning it anywhere, is really talking about distributed systems where you’ve got plenty of resources, stable network connectivity and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.
That’s indeed an expensive setup, esp for embedded software.
The narrow scope and the stylistic problem aside, I believe the author’s view is correct, if a bit radical.
One of the major pain points of troubleshooting distributed systems is sifting through the logs produced by different services and teams, each w/ a different take on what the important bits of information in a log message are.
It gets extremely hairy when you’ve got a non-linear lifeline for a request (ie branches of execution), and even worse when you need to keep your logs free of any information which could potentially identify a customer.
The article and the conversation here got me thinking that maybe a combo of tracing and structured logging can help simplify investigations.
Thanks for sharing your insights.
Thinking out loud here…
In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline, as sketched below.
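For example (a sketch w/ made-up log lines and field names), grepping for a request ID across services and sorting by timestamp gives a rough ordering of events:

$ grep -h 'request_id=9f3a' *.log | sort
2024-05-01T10:00:01.120Z INFO  service=gateway  request_id=9f3a accepted POST /checkout
2024-05-01T10:00:01.340Z INFO  service=orders   request_id=9f3a order created
2024-05-01T10:00:02.010Z ERROR service=payments request_id=9f3a charge declined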
That said, logs do shine when things go wrong, ie when you start your investigation by using a stacktrace in the logs as a clue. That stacktrace is something I’m not sure a tracing solution will be able to provide.
they should complement each other
Yes! You nailed it 💯
Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analysing the mutations.
Not really, I’m afraid. Effects can be anywhere and they are not wrapped at all.
In technical terms it’s stack-oriented, meaning the only way for functions (called “words”) to interact with each other is via a parameter stack.
Here’s an example:
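A minimal sketch of what such a session looks like (the exact formatting is my assumption):

: TIMES-10 ( x -- y ) 10 * ;
12 TIMES-10
.S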
TIMES-10 is a word which pops one parameter from the stack and pushes the result of its calculation onto the stack. The ( x -- y ) is a comment which conventionally documents the “stack effect” of the word.

Now when you type 12 and press RETURN, the integer 12 is pushed onto the stack. Then TIMES-10 is called, which in turn pushes 10 onto the stack and invokes *, which pops two values from the stack, multiplies them and pushes the result onto the stack.

That’s why when you type .S to see the contents of the stack, you get 120 in response.

Another example:
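One way to write it (a sketch; the exact operand order is my choice):

20 10 - 5 *
.S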
This simple example demonstrates the reverse Polish notation (RPN) Forth uses. The arithmetic expression is equal to 5 * (20 - 10), the result of which is pushed onto the stack.

PS: One of the strengths of Forth is the ability to build a vocabulary (of words) around a particular problem in a bottom-up fashion, much like Lisp.

PPS: If you’re ever interested in learning Forth, Starting Forth is a fantastic resource.