Some current thoughts on the language...
I'm unsure of what I really hope to do long-term. When starting out, I deliberately didn't look too closely at the most popular alternatives, and instead saw how far early personal inspiration would go. I've become bored with some of my old ideas, but haven't yet figured out what I really want to extend the language's capabilities towards.
There are several established alternatives, mature and varyingly similar, but not quite the same thing as my project. Big names include:
- csound provides an assembly-like language, with scripts divided into two parts (an "orchestra" defining instruments, a "score" defining their sequence of use). It has grown since '86 to provide very general functionality. (A relative descended from a '78 program, now RTcmix, used only the "score" part while adding an early C-like scripting syntax extension to improve expressiveness.)
- SuperCollider, launched in '96, has likewise developed to provide very general functionality, but with a C-family-looking OO and functional language.
- ChucK, since '03, looks somewhat C-like and is more minimal, and is mainly set apart by its approach to timing and concurrency.
Back in the '80s, programs in this general genre mainly rendered audio data from scripts, without interaction. Later, "live coding" with real-time programs became prominent, and both the popular old programs and new ones are now often (re)designed with that in mind. By contrast, I still have the older use in mind.
Newer languages of this kind often seem to follow functional and/or OO paradigms. (But some "live coding" languages have their own esoteric terseness.) There are also more general languages for audio development reflecting that, like Faust and SOUL. I hope to make something a bit different: intuitively, a language for composing complex tracks, grown from a simple start. It may or may not progress for real.
Some planned and possible changes
What features to offer? The syntax could be extended in the direction of function abstraction, e.g. by making lists of things work like parameterless functions when referred to, and later extending that with a way to define parameters. Syntax could also be extended in the opposite direction, towards handling the output of sound generators more generally and allowing more to be done with audio outputs, e.g. adding effects/filters that take audio input(s) and produce a new output.
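To make the "lists as parameterless functions" idea more concrete, here is a rough Python sketch of the semantics, not the project's actual syntax; the names and the override-style parameter extension are my own illustrative assumptions.

```python
# Hypothetical sketch (not the project's syntax): a stored list of generator
# settings acts like a zero-parameter function when referred to, and a later
# extension lets parameters override fields of every item.

def make_list(*items):
    """Store a list of settings dicts; calling it 'plays' them."""
    def play(**overrides):
        # With no arguments: behaves like a parameterless function reference.
        # With arguments: the parameter-defining extension, as field overrides.
        return [dict(item, **overrides) for item in items]
    return play

melody = make_list({"osc": "sin", "freq": 440}, {"osc": "sin", "freq": 660})
melody()          # referring to the list evaluates it with stored settings
melody(freq=220)  # sketch of the extension: one parameter applied to all items
```

The point of the sketch is only that referring to a named list and calling a nullary function can be made the same operation, so parameter support becomes an incremental addition rather than a separate feature.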
But the language still wouldn't be Turing-complete, and the program wouldn't provide a full-featured toolbox for all general uses. I am, however, like many, interested in simple building blocks that can be combined flexibly with a large range of results. Maybe after some unknown number of years, I'll figure out some new way of doing audio synthesis, complementary to older ones. Or not. The vague sense that I haven't yet hit upon what I'm looking for may lead to something, or to nothing, in the end.
A git branch, old-dev_202006, contains a reworked version I forked from a simpler 2011 starting point. It adds a noise generator, N (with white noise added using Nwh), and when it is merged back into the main program - some redesign ideas are carried over first, in rough order of increasing size - the wave oscillator will be re-renamed to W instead of the current O (so a sine oscillator will be Wsin instead of Osin).
(What are those big letters, really? I clarified that to myself: they are class names, but the language only offers pre-defined classes for creating instances. The type or mode label, such as wh, is like a constructor argument. Both classes support changing those types later for an instance, which makes the types a bit different from conventional sub-classes.)
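A small Python sketch of that distinction, with invented names (WaveOsc, set_wave are illustrative, not the project's): the wave type is a constructor argument stored on the instance and changeable afterwards, which a subclass-per-type design would not allow.

```python
# Sketch: the type label ('sin', 'tri', ...) is instance state set at
# construction, like a constructor argument, rather than a fixed subclass.

class WaveOsc:
    TYPES = {"sin", "sqr", "tri", "saw"}

    def __init__(self, wave="sin"):
        self.set_wave(wave)

    def set_wave(self, wave):
        if wave not in self.TYPES:
            raise ValueError(f"unknown wave type: {wave}")
        self.wave = wave

osc = WaveOsc("sin")   # like writing Wsin
osc.set_wave("tri")    # the same instance changes type later
```

With conventional sub-classes per wave type, the second call would instead require constructing a new object, so the type labels sit somewhere between constructor arguments and subclasses.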
A not-yet-implemented idea is a parse context nesting syntax which works with timing offsets, and which can perhaps also be combined with a kind of linkage/inclusion connective. The idea for the connective is to allow linking all members of one list to another list, by writing it between the two lists. If the nesting syntax is used to enter a subscope before the connective, then closing that subscope after the second list would return the parsing context to where it was before the connective, so that other things can be done in relation to the first list. (This syntax idea, sketchily described, is meant to allow building DAGs rather than just lists, while keeping the syntax terse and simple enough - an old to-do.)
I think conventional wisdom may be most needed in redesigning the audio generation end of the program, turning it into something that handles lists of simpler instructions to fill and use buffers in various ways. The use of an oscillator can involve a number of buffers, and translating into lower-level instructions will increase the number of instructions rather than remove the need to deal with oscillators. Still, splitting the work that way - resolving input/output dependencies first and doing the rest afterwards - seems like the next step towards making the language more flexible.
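As a rough sketch of the buffer-instruction idea, here is a tiny Python interpreter for such a list; the instruction names and shapes are invented for illustration, not the planned design. One oscillator use with an amplitude setting becomes a few simple instructions over numbered buffers, so that dependencies are explicit in which buffers each instruction reads and writes.

```python
import math

# Sketch: a flat program of (op, args...) tuples filling and combining
# numbered buffers; buffer indices make input/output dependencies explicit.

def run(instructions, length):
    buffers = {}
    for op, *args in instructions:
        if op == "osc_fill":                 # (dst, freq, srate): fill with a sine
            dst, freq, srate = args
            buffers[dst] = [math.sin(2 * math.pi * freq * i / srate)
                            for i in range(length)]
        elif op == "scale":                  # (dst, src, gain): amplitude
            dst, src, gain = args
            buffers[dst] = [x * gain for x in buffers[src]]
        elif op == "mix":                    # (dst, a, b): sum two buffers
            dst, a, b = args
            buffers[dst] = [x + y for x, y in zip(buffers[a], buffers[b])]
    return buffers

# Two oscillators, one attenuated, mixed together - four instructions:
program = [
    ("osc_fill", 0, 440.0, 48000),
    ("osc_fill", 1, 660.0, 48000),
    ("scale", 1, 1, 0.5),
    ("mix", 2, 0, 1),
]
out = run(program, 64)
```

As the text notes, this expands rather than shrinks the instruction count, but an ordering pass over such a list can work out which buffers must be filled before which, separately from rendering.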