I have been so old-school recently I'm practically covered in chalk. Here's a screenshot of what I've been working on. (Yup, photo stolen from my screens review.)
This is the 'main menu' of my new miniature programming environment. It's Turing-complete, real-time, multitasking, has an interactive editor/debugger, and is field-programmable. It's intended to drive small robot-like toys and devices, and is "non-linguistic" in that it only uses numbers and math-like symbols, not words, for all programming concepts.
It is also the simplest possible language I could create that fulfils those requirements. It's less complicated than most calculators. In fact, it tries very hard to act like one.
And it fits in 26K. It would probably work great as an industrial controller, too.
Primary Features:
- 128 pre-allocated global variables, called 'signals'
- (up to) 64 code fragments which can perform math operations and conditional tests on the signals.
- A 'trigger list' per signal, which defines the code fragments to run when a signal is changed.
- Some signals correspond to sensor inputs, so sensor updates will trigger code.
- Some signals map to robot 'outputs' such as motor speeds, lights, and noises.
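The overall architecture can be sketched in ordinary code. This is my own minimal model of the described behaviour, not the actual 26K implementation; the names (`SIGNALS`, `TRIGGERS`, `write_signal`) are assumptions:

```python
# Hypothetical sketch of the signal/trigger architecture described above.
SIGNALS = [0] * 128   # the pre-allocated global 'signals'
TRIGGERS = {}         # signal index -> list of code fragments to run
PENDING = []          # fragments flagged to run, in order

def write_signal(index, value):
    """Changing a signal flags its trigger list; nothing runs immediately."""
    SIGNALS[index] = value
    PENDING.extend(TRIGGERS.get(index, []))

def run():
    """Run the flagged fragments one at a time, in order."""
    while PENDING:
        fragment = PENDING.pop(0)
        fragment()
```

Sensor updates and robot outputs would then just be reads and writes of particular signal indices.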
Here are several things that seem, at first glance, to be missing from the "language".
- Names.
- Loops.
- Function calls.
- Ternary operators.
- Recursion.
- Comments.
The lack of names is intentional (and there's no memory for comments), but the other concepts can actually be built out of the few axioms we start with.
This programming language is based entirely around "accumulator logic". Technically, the "accumulator" is the name of the big number displayed on a calculator screen - the number to which all operations are applied when you press the buttons. Some calculators have buttons to store the accumulator to, or reload it from, "memory" slots, and it's this basic concept I ruthlessly extend, so keep it in mind.
Here is the complete list of all possible math and skip operators the language supports:
:= | Left Assign | =: | Right Assign | =0 | Skip if Zero |
+= | Left Add | =+ | Right Add | !0 | Skip if Not Zero |
-= | Left Minus | =- | Right Minus | >0 | Skip if Greater than Zero |
*= | Left Multiply | =* | Right Multiply | <0 | Skip if Less than Zero |
/= | Left Divide | =/ | Right Divide | | |
&= | Left AND | =& | Right AND | | |
|= | Left OR | =| | Right OR | | |
^= | Left XOR | =^ | Right XOR | | |
What's up with the "left" and "right" versions? Basically, it's the choice of whether the "next thing" modifies the accumulator, or the accumulator modifies the thing. All operators are 'symmetric' (including a few that don't make complete sense), and have only one "operand" (parameter) per op.
Assignment is the easiest to understand. If the "right hand thing" is one of the memory slots, then "left assignment" will load the accumulator with the memory value. If we "right assign", that's storing the accumulator back to the memory slot.
Instead of just assigning (copying) the value, we can 'combine' the source and destination numbers in various mathematical ways, like adding, multiplying, or boolean logic. But we always apply one thing onto another. (No triplets!)
There are some cases in which the symmetry is a little broken. Assigning the accumulator with the constant number "3" is clearly fine, but trying to set the constant "3" to any other value makes no sense, so it becomes a "NOP" - a "no operation". The closest thing we have to 'commenting out'.
However, either the left hand or right hand can be "indirected", and this is where the language powers up... I use surrounding brackets to indicate whether the side is treated as a raw number, or as a signal reference. Here's what a fragment of that code looks like:
:= 3 | Load accumulator with the constant value "3" |
=:(3) | Store that in signal index (3) |
+=(4) | Add the value of signal (4) to the accumulator. |
+= 1 | Increment the accumulator by one. |
=+(5) | Add the accumulator to the value in signal (5) |
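That five-line fragment can be modeled in ordinary code. This is a rough sketch of my own, not the real implementation: brackets in the language become an `indirect` flag here, meaning "treat the operand as a signal index, not a raw number", and the starting value of signal (4) is assumed for the demo.

```python
# Hypothetical model of the accumulator ops used in the fragment above.
signals = [0] * 128
acc = 0

def step(op, operand, indirect=False):
    global acc
    value = signals[operand] if indirect else operand
    if op == ':=':
        acc = value                      # left assign: load accumulator
    elif op == '=:':
        if indirect:
            signals[operand] = acc       # right assign: store accumulator
        # right-assigning to a constant is a NOP, as described above
    elif op == '+=':
        acc += value                     # left add
    elif op == '=+':
        if indirect:
            signals[operand] += acc      # right add

signals[4] = 10                          # assumed starting value for the demo
step(':=', 3)                            # := 3
step('=:', 3, indirect=True)             # =:(3)
step('+=', 4, indirect=True)             # +=(4)
step('+=', 1)                            # += 1
step('=+', 5, indirect=True)             # =+(5)
```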
Once again, the order and type of operations are exactly as if you had to solve the problem yourself using a calculator, albeit one with 128 memories.
Indirection is also symmetric: you can put brackets around the left hand side (the accumulator) and operations will occur to the signal that the accumulator value references. So the first two lines of the above code (which put the value "3" into signal "3") could be rewritten:
:= 3 | Load accumulator with the constant value "3" |
():= 3 | Indirectly load that signal (3) with the constant value "3" |
When you are using the editor/debugger the accumulator values are shown down the left, so you know what's going where.
The comparison operators are a little different in that they don't have any indirection. They always test the current accumulator value, and their operand number is used as a count of "lines to skip" if the test is true. They don't even specify another value to test with... if you want to check equality with a number, you need to subtract that number beforehand yourself and then test for zero.
:=(9) | Load accumulator with signal (9) |
-= 13 | Subtract "13", which is our test number |
=0 >> 1 | If it was equal, skip the next line. |
:= 1 | Load accumulator with "1" |
=:(8) | Store the accumulator (now "0" or "1") into signal (8) |
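The skip mechanics can be modeled as a simple instruction-pointer bump. This sketch (the encoding and names are my own assumptions) runs a fragment as a list of (op, operand) pairs, with `:=s` / `=:s` standing in for the bracketed, signal-referencing forms:

```python
# Hypothetical model of the skip operators described above.
def run_fragment(code, signals):
    acc = 0
    ip = 0
    while ip < len(code):
        op, n = code[ip]
        ip += 1
        if op == ':=s':
            acc = signals[n]             # load accumulator from signal (n)
        elif op == ':=':
            acc = n                      # load constant
        elif op == '-=':
            acc -= n                     # subtract constant
        elif op == '=:s':
            signals[n] = acc             # store accumulator to signal (n)
        elif op == '=0' and acc == 0:
            ip += n                      # skip n lines if accumulator is zero

# The equality test above: signal (8) ends up 0 when signal (9) equals 13.
equality_test = [(':=s', 9), ('-=', 13), ('=0', 1), (':=', 1), ('=:s', 8)]
```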
That's it. And while there are many obvious ways the language could be improved with extra "syntactic sugar" to make certain operations more convenient, they come at a cost.
Building Up From There
Function Calls:
So if there are no explicit function calls, how do we chain code together?
Traditionally we would define functions, their parameters, and return values. We'd name variables and have a call stack. We would have a line of code that 'calls' the function, and passes in the named variables on one side, to come out the other side with new names.
Remember that modifying a signal causes all the code attached to that signal to trigger and run. We can specify a code list in the original trigger, or we could (in one code) write a value to another signal which triggers more code.
What's happened here is that we now have to manually do the job the compiler used to. Variable names compile down, eventually, to a simple memory reference number. We now have 128 globals that we can put values into, and then call code which also has total access to those variables. So we can use the signals as the equivalent of parameter slots!
So instead of having a function which accepts three parameters, we can use any three signals we want as our parameter holders, and we make sure that changing one of them (or all of them) triggers the relevant code.
So we don't call functions explicitly, but we can change 'variables', which implicitly runs more code later.
However, we have lost the ability to wait for a reply. "Triggering" other code does not run it immediately. The current code fragment has to finish before anything else can happen. In other words, we can't do procedural calls at all; what we have is a form of message-passing.
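Under those rules, a "call" is just a convention. Here's a sketch (the signal numbers and names are my own assumptions): signals 10-12 act as the parameter slots, writing signal 12 triggers the "body" fragment, and nothing runs until the caller's own fragment has finished.

```python
# Hypothetical model of a 'function call' via signal triggers.
SIGNALS = [0] * 128
PENDING = []

def write_signal(i, v):
    SIGNALS[i] = v
    PENDING.extend(TRIGGERS.get(i, []))

def body():
    SIGNALS[13] = SIGNALS[10] + SIGNALS[11] + SIGNALS[12]   # 'return' slot

TRIGGERS = {12: [body]}

def caller():
    write_signal(10, 1)
    write_signal(11, 2)
    write_signal(12, 3)          # this write is the 'call'...
    assert SIGNALS[13] == 0      # ...but the body hasn't run yet

caller()
while PENDING:                   # the body runs only after the caller finishes
    PENDING.pop(0)()
```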
Loops:
The language contains a 'skip forward' comparison operator, but no 'skip backwards' test, or 'goto' equivalent which would seem to be necessary to implement loops.
However, there's no reason why code can't write back to the signal that triggered it, in which case it will run again. Simple FOR-loops can be constructed by setting a signal to a start value, and decrementing by one if the value is still above zero. When the self-triggering code reaches zero, it will stop updating and the loop will end.
Since every triggered signal gets a chance to run (in order), multiple signals which are looping will be interleaved - a primitive form of multi-tasking.
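The self-triggering loop and the interleaving fall out of the same sketch. This is my own model of the scheduler, per the trigger rules described above; two countdown loops are started and the trace shows them alternating:

```python
# Hypothetical model of two self-triggering countdown loops interleaving.
SIGNALS = [0] * 128
PENDING = []
TRACE = []

def write_signal(i, v):
    SIGNALS[i] = v
    PENDING.extend(TRIGGERS.get(i, []))

def countdown(sig):
    def fragment():
        TRACE.append((sig, SIGNALS[sig]))
        if SIGNALS[sig] > 0:
            write_signal(sig, SIGNALS[sig] - 1)   # writing back re-triggers us
    return fragment

TRIGGERS = {20: [countdown(20)], 21: [countdown(21)]}

write_signal(20, 2)                               # start two loops...
write_signal(21, 2)
while PENDING:                                    # ...and run the scheduler
    PENDING.pop(0)()

# The trace alternates between the two loops: (20,2) (21,2) (20,1) ...
```

Each fragment does one step and yields, so neither loop ever starves the other.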
Why?
The choices made seem weird and restrictive, but let's look at some of the things a LOGO- or BASIC-inspired language couldn't do:
Multitasking: We could potentially have all 128 signals triggering themselves, checking for conditions to be satisfied, and then looping again. No signal is ever 'blocked' because another is performing a long calculation - since code always runs linearly, can't loop and can't jump to other code, each fragment is guaranteed to finish quickly.
Fixed Overhead: There is an essential problem with the idea of recursive function calls - they require a stack of unknown size. Every nested call burns up more stack space, and there's not much to start with. Here, we 'flag' code to be run, and get to it once the current code is complete.
Responsiveness: This is where the "Nervous System" idea comes in. We are not creating one big master program with a mainline code path - we are creating "fragments" that hang around waiting for their trigger conditions. "Reflexes", essentially. And we expect that sometimes those reflexes will work at cross-purposes. Hopefully the robot will externalize this conflict (by twitching to and fro) rather than sullenly sitting in an infinite loop.
LOGO has been the exemplar of educational programming for decades, but the kind of programming it teaches is close to obsolete. Web pages use JavaScript to attach event handlers to buttons, most of which never run. Databases use table triggers to cascade updates. Graphics cards have vertex and fragment "shader" hardware which does not allow variable-length loops, recursive calls, or dynamic memory allocation, in order to get massively parallel performance.
Learning LOGO today would be counterproductive. It barely even prepares you for driving a turtle around, let alone modern programming concepts. And what I've found is that if you strip those modern esoteric concepts (like "flow-based programming" or VHDL) down to their simplest axioms, their power remains.