Per's blog

The DomTerm terminal emulator has a number of unique features. In this article we will explore how it enables dynamic re-flow for Lisp-style pretty-printing.

The goal of pretty-printing is to split a text into lines with appropriate indentation, in a way that conforms to the logical structure of the text.

For example if we need to print the following list:

((alpha-1 alpha-2 alpha-3) (beta-1 beta-2 beta-3 beta-4))
in a window 40 characters wide, we want:
((alpha-1 alpha-2 alpha-3)
 (beta-1 beta-2 beta-3 beta-4))
but not:
((alpha-1 alpha-2 alpha-3) (beta-1
 beta-2 beta-3 beta-4))

Pretty-printing is common in Lisp environments to display complicated nested data structures. Traditionally, it is done by the programming-language runtime, based on a given line width. However, the line width of a console can change if the user resizes the window or changes font size. In that case, previously-emitted pretty-printed lines will quickly become ugly: If the line-width decreases, the breaks will be all wrong and the text hard to read; if the line-width increases we may be using more lines than necessary.

Modern terminal emulators do dumb line-breaking: splitting long lines into screen lines, regardless of structure or even word boundaries. Some emulators remember, for each line, whether an overflow happened or a hard newline was printed. Some (for example Gnome Terminal) use this to re-do the splitting when the window is re-sized. However, that does not help with pretty-printer output.

Until now. Below is a screenshot from Kawa running in DomTerm at 80 columns.

We reduce the window size to 50 columns. The user input (yellow background) is raw text, so its line is split non-pretty, but the output (white background) gets pretty re-splitting. (Note the window size indicator in the lower-right.)

We reduce the window size to 35 columns:

It also works with saved pages

DomTerm allows you to save the current session as a static HTML page. If the needed DomTerm CSS and JavaScript files are provided in the hlib directory, then dynamic line-breaking happens even for saved log files. (The lazy way is to create hlib as a symbolic link to the hlib directory of the DomTerm distribution.)

Try it yourself on a saved session. The --debug-print-expr flag causes Kawa to print out each command before it is compiled and evaluated. The result (shown in red because it is sent to the standard error stream) is pretty-printed dynamically.

Structured text

This is how it works.

When an application pretty-prints a structure, it calls special output procedures to mark which parts of the output logically belong together (a logical block), and where line-breaks and indentation may be inserted. In Kawa the default print formatting for lists and vectors automatically calls these procedures when the output is a pretty-printing stream. The pretty-printing library calculates where to put line-breaks and indentation, based on these commands and the specified line length.
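The fits-or-break decision at the heart of this kind of algorithm can be sketched in a few lines of Java (hypothetical classes, not Kawa's or DomTerm's actual API): a node is either an atom or a logical block, and a block whose single-line width does not fit in the remaining line width is broken, with its children indented.

```java
import java.util.*;

// Minimal sketch of Lisp-style pretty-printing (hypothetical API):
// a node is an atom or a logical block of child nodes.
public class PrettySketch {
    interface Node { int width(); }
    record Atom(String text) implements Node {
        public int width() { return text.length(); }
    }
    record Block(List<Node> children) implements Node {
        // Width if the whole block is printed on a single line:
        // two parens, the children, and one space between each pair.
        public int width() {
            int w = 2 + Math.max(0, children.size() - 1);
            for (Node c : children) w += c.width();
            return w;
        }
    }

    static void print(Node n, int indent, int lineWidth, StringBuilder out) {
        if (n instanceof Atom a) { out.append(a.text()); return; }
        Block b = (Block) n;
        out.append('(');
        boolean fits = indent + b.width() <= lineWidth;
        List<Node> kids = b.children();
        for (int i = 0; i < kids.size(); i++) {
            if (i > 0) {
                if (fits) out.append(' ');
                else out.append('\n').append(" ".repeat(indent + 1));
            }
            print(kids.get(i), indent + 1, lineWidth, out);
        }
        out.append(')');
    }

    public static void main(String[] args) {
        Node alpha = new Block(List.of(new Atom("alpha-1"), new Atom("alpha-2"), new Atom("alpha-3")));
        Node beta = new Block(List.of(new Atom("beta-1"), new Atom("beta-2"), new Atom("beta-3"), new Atom("beta-4")));
        StringBuilder sb = new StringBuilder();
        print(new Block(List.of(alpha, beta)), 0, 40, sb);
        System.out.println(sb);
    }
}
```

Run at a width of 40 on the list from the introduction, this produces the two-line layout shown there; re-running `print` with a new width is exactly the re-layout DomTerm performs on resize.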

However, when the output stream is a DomTerm terminal, Kawa's pretty-printing library does not actually calculate the line-breaks. Instead it encodes the above-mentioned procedure calls as special escape sequences that get sent to DomTerm.

When DomTerm receives these escape sequences, it builds a nested DOM structure that corresponds to the original procedure calls. DomTerm calculates how to lay out that structure using a variant of the Common Lisp pretty-printing algorithm, inserting soft line-breaks and indentation as needed.

When DomTerm detects that its window has been re-sized or zoomed, it first removes the old soft line-breaks and indentation, and then re-runs the layout algorithm.

When a page is saved, the nested DOM structure is written out too. If the saved page is loaded in a browser, and the necessary JavaScript libraries are available, then the pretty-printing algorithm is run on the saved page, both on initial load, and whenever the window is re-sized.

Hyphenation and word-breaking

DomTerm supports general soft (optional) line breaks: you can specify separate texts (optionally with styling) for each of the following: the text used when the line is not broken at that point; the text to use before the line-break, when broken (commonly a hyphen); and the text to use after the line-break (following indentation). One use for this is words that change their spelling when hyphenated, as may happen in German. For example, the word backen becomes bak-ken. You can handle this using ck as the non-break text, k- as the pre-break text, and k as the post-break text.
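A minimal sketch of how such three-text soft breaks might be applied (hypothetical API; DomTerm's real implementation works on the DOM): each break point carries the three alternative texts, and the layout pass picks between them based on the remaining width.

```java
import java.util.*;

// Sketch of DomTerm-style soft breaks (hypothetical API): each break
// point carries the text used when not broken, the text emitted
// before the break, and the text emitted after it.
public class SoftBreakSketch {
    record SoftBreak(String noBreak, String preBreak, String postBreak) {}

    // segments alternate: text, SoftBreak, text, ...
    static String layout(List<Object> segments, int lineWidth) {
        StringBuilder out = new StringBuilder();
        int col = 0;
        for (int i = 0; i < segments.size(); i++) {
            Object seg = segments.get(i);
            if (seg instanceof String s) { out.append(s); col += s.length(); continue; }
            SoftBreak br = (SoftBreak) seg;
            // Width needed if we do not break here: the no-break text
            // plus the following text segment.
            String next = i + 1 < segments.size() ? (String) segments.get(i + 1) : "";
            if (col + br.noBreak().length() + next.length() <= lineWidth) {
                out.append(br.noBreak());
                col += br.noBreak().length();
            } else {
                out.append(br.preBreak()).append('\n').append(br.postBreak());
                col = br.postBreak().length();
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // German "backen" hyphenates as "bak-ken": ck when unbroken,
        // k- before the break, k after it.
        List<Object> word = List.of("ba", new SoftBreak("ck", "k-", "k"), "en");
        System.out.println(layout(word, 20)); // wide enough: backen
        System.out.println(layout(word, 4));  // forced break: bak- / ken
    }
}
```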
Created 30 Jan 2017 13:23 PST. Last edited 3 Feb 2017 12:13 PST. Tags:
Created 22 Nov 2016 23:07 PST. Last edited 22 Nov 2016 23:07 PST.

I have recently spent a lot of time on DomTerm, which is becoming a fairly decent terminal emulator. Being based on HTML5 technologies lets you do many interesting things, including embedding graphics. The new qtdomterm standalone terminal emulator is especially nice; alternatively, you can use the domterm --browser command to start a process that uses a window/tab in your default desktop browser.

The gnuplot package is a powerful graphing package. It has a command-line interface that lets you specify complicated functions and plots, and then give a plot command to display the resulting plot. Unfortunately, it is difficult to find a good front-end that is usable for both command-line interaction and graphics. People usually have to specify output to either a file or an external viewer. Would that one could insert the plots directly in the REPL output stream!

The development version of gnuplot (i.e. 5.1, available from CVS) has native support for DomTerm. It has domterm as a new terminal type, which you can select explicitly (with the command set term domterm), or use by default (since gnuplot checks the DOMTERM environment variable, which is set by DomTerm).

This works by using the pre-existing svg output driver to generate SVG, but surrounding the SVG by escape sequences, and then printing the result to standard output. DomTerm recognizes the escape sequence, extracts the SVG (or more generally clean HTML) and inserts it at the cursor.

You can save the session output to an html file. Here is an example. In qtdomterm you can use the File / Save As menu item; otherwise ctrl-shift-S should work. This is a single html file, with embedded SVG; images are embedded with a data: URL. The file is actually XML (xhtml), to make it easier to process. The saved file does need the css and js (JavaScript) files to be readable, so you need to link or copy the hlib directory from the DomTerm source distribution.


Created 18 Aug 2016 20:45 PDT. Last edited 21 Sep 2016 23:06 PDT. Tags:
Let's say we want to get the contents of a file as a value, using a simple function, without using a port or looping. Kawa has a function to do that:
(path-data path)
The path can be a Path object, or anything that can be converted to a Path, including a filename string or a URL.

You can also use the following syntactic sugar, which is an example of SRFI-108 named quasi-literals:

This syntax is meant to suggest the shell input redirection operator <pname. The meaning of &<{pname} is the same as (path-data &{pname}), where &{pname} is a SRFI-109 string quasi-literal. (This is almost the same as (path-data "pname") using a traditional string literal, except for the rules for quoting and escaping.)

What kind of object is returned by &<{pname}? And what is printed when you type that at the REPL? Fundamentally, in modern computers the contents of a file is a sequence of uninterpreted bytes. Most commonly, these bytes represent text in a locale-dependent encoding, but we don't always know this. Sometimes they're images, or videos, or word-processor documents. It's like writing assembly code: you have to know the types of your values. At best we can guess at the type of a file based on its name or extension, or by looking for magic numbers. So unless we have more information, we'll say that path-data returns a blob, and we'll implement it using the gnu.lists.Blob type.

$ cat README
Check doc directory.
$ kawa
#|kawa:1|# (define readme &<{README})
#|kawa:2|# readme:class
class gnu.lists.Blob
You can explicitly coerce a Blob to a string or to a bytevector:
#|kawa:3|# (write (->string readme))
"Check doc directory.\n"
#|kawa:4|# (write (->bytevector readme))
#u8(67 104 101 99 107 32 100 111 99 32 100 105 114 101 99 116 111 114 121 46 10)
#|kawa:5|# (->bytevector readme):class
class gnu.lists.U8Vector

The output of a command (which we'll discuss in the next article) is also a blob. For almost all programs, standard output is printable text: if you try to run a program without re-direction and it spews out binary data, it may mess up your terminal, which is annoying. This suggests an answer to what happens when you get a blob result in the REPL: the REPL should try to print out the contents as text, converting the bytes of the blob to a string using a default encoding:

#|kawa:6|# &<{README}
Check doc directory.
It makes sense to look at the bytes to see if we can infer an encoding, especially on Windows, which doesn't use a default encoding. Currently Kawa checks for a byte-order mark; more sniffing is likely to be added later.
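A byte-order-mark check is straightforward; here is a sketch of what such sniffing might look like (not Kawa's actual code, and it omits the UTF-32 BOMs):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Sketch of byte-order-mark sniffing: pick a charset from the
// leading bytes, falling back to a default when no BOM is present.
public class BomSniff {
    static Charset sniff(byte[] b, Charset dflt) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF)
            return StandardCharsets.UTF_8;
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF)
            return StandardCharsets.UTF_16BE;
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE)
            return StandardCharsets.UTF_16LE;
        return dflt; // no BOM: fall back to the default encoding
    }

    public static void main(String[] args) {
        byte[] utf8 = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i'};
        System.out.println(sniff(utf8, StandardCharsets.US_ASCII)); // UTF-8
    }
}
```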

What if the file is not a text file? It might be reasonable to be able to configure a handler for binary files. For example, for a .jpg image file, if the console can display images, it makes sense to display the image inline. It helps if the blob has a known MIME type. (I believe a rich text console should be built using web browser technologies, but that's a different topic.)

Writing to a file

The &<{..} operation can be used with set! to replace the contents of a file:

(set! &<{README} "Check\n")

If you dislike using < as an output operator, you can instead use the &>{..} operation, which evaluates to a function whose single argument is the new value:

(&>{README} "Check\n")

You can use &>> to append more data to a file:

(&>>{README} "or check\n")

The current directory

Functions like path-data or open-input-file, as well as the sugar we have seen above, all use the current directory as the base directory for relative pathnames. You get the current value of the current directory with the expression (current-path). This returns a path object, which prints as the string value of the path.

The initial value of (current-path) is the value of the "user.dir" property, but you can change it using a setter:

(set! (current-path) "/opt/myApp/")
A string value is automatically converted to a path, normally a filepath.

The procedure current-path is a parameter, so you can alternatively call it with the new value:

(current-path "/opt/myApp/")
You can also use the parameterize form:
(parameterize ((current-path "/opt/myApp/"))
  (list &<{data1.txt} &<{data2.txt}))
Created 8 Mar 2014 17:01 PST. Last edited 25 Mar 2014 23:43 PDT. Tags:

Oracle's Java group uses the JIRA bug tracker. I've been experimenting with processing and filtering the XML backup file that JIRA can generate. The obvious choice seemed to be XSLT, and it worked pretty well on an evaluation instance of JIRA. Then I got a copy of our 1.5GB production instance. Ouch. XSLT works by building an in-memory DOM of the XML document and then matching templates. There is no chance that is going to work.

Next I tried Joost, an implementation of Streaming Transformations for XML (STX). This has the look of XSLT, but is designed to support one-pass streaming. This worked, but it took about 45 minutes on my laptop, which was discouraging, though better than nothing. (It's also worth noting that STX and Joost don't seem to be actively developed, perhaps because people are focusing on the streaming mode that XSLT 3.0 will have.)

Next I tried the StAX cursor classes that ship with Java 6. StAX is claimed to support fast XML processing. It took about 20 minutes. Better, but hardly fast.

Later (after the Kawa experiments discussed below), I tried an alternative StAX processor: The Aalto XML Processor took just under 1 minute for a full copy. This is pretty good. While not as fast as Kawa, Aalto is constrained by the StAX API, which isn't optimized for performance as well as Kawa's private APIs. This also doesn't seem to be actively maintained.

The Kawa source code includes a smattering of XML tools (including an implementation of XQuery 1.0). I wrote the XML parser with careful attention to performance. For example it avoids copying or needless object allocation. So I was curious how it would handle the 1.5GB JIRA dump. Needless to say - it didn't, at first. But I tracked down and fixed the bugs.

The result was quite gratifying: About 20 seconds for a full copy. I then modified the application to skip Issue, Action, Label, ChangeItem, and CustomFieldValue elements, including their children. That run took about 15 seconds, writing a 296MB output file.

This is the entire program:

import gnu.text.*;
import gnu.xml.*;
import gnu.lists.*;
import java.io.FileOutputStream;
import java.io.OutputStream;

public class KawaFilter extends FilterConsumer {
    int nesting = 0;
    int skipNesting = -1;

    public KawaFilter(Consumer base) {
        super(base);
    }

    public boolean skipElement(XName name) {
        String local = name.getLocalName();
        if (local=="Issue" || local=="Action" || local=="Label"
            || local=="ChangeItem" || local=="CustomFieldValue")
            return true;
        return false;
    }

    public void startElement(Object type) {
        if (! skipping && type instanceof XName
                && skipElement((XName) type)) {
            skipNesting = nesting;
            skipping = true;
        }
        nesting++;
        super.startElement(type);
    }

    public void endElement() {
        super.endElement();
        nesting--;
        if (skipNesting == nesting) {
            skipping = false;
            skipNesting = -1;
        }
    }

    public static void main(String[] args) throws Throwable {
        String inname = args[0];
        OutputStream outs = new FileOutputStream(args[1]);
        XMLPrinter xout = new XMLPrinter(outs);
        KawaFilter filter = new KawaFilter(xout);
        SourceMessages messages = new SourceMessages();
        XMLParser.parse(inname, messages, filter);
        messages.printAll(System.err, 20);
    }
}
Most of this is boiler-plate (and should be abstracted out). The only context-specific code is the skipElement method, which is called when the start of an element is seen. The method takes an element tag (QName), and returns true if the element and its children should be skipped. The startElement and endElement methods are called by the parser. They work by setting the protected skipping field in the FilterConsumer class.

One could easily modify KawaFilter to rename elements or attributes, or to remove attributes. Changing skipElement so it could base its decision on the names of ancestor elements would also be easy. More challenging would be to allow skipElement to base its decision on the existence or values of attributes. The reason is that startElement is called before the attributes are parsed: this is for performance, so we don't have to build a data structure for the attributes. However, it would not be difficult to add an option to remember the attributes, and defer the skipElement call to the end of the element start-tag, rather than the beginning.

You run KawaFilter thus:

$ java -cp .:kawa.jar KawaFilter input.xml output.xml

For kawa.jar you need to build Kawa from the Subversion repository (until I make a binary release containing the fixed XML classes).

Created 4 Jan 2014 21:31 PST. Last edited 12 Jan 2014 17:16 PST. Tags:
A number of programming languages have facilities to allow access to system processes (commands). For example Java has java.lang.Process and java.lang.ProcessBuilder. However, they're not as convenient as the old Bourne shell, or as elegant in composing commands.

If we ignore syntax, the shell's basic model is that a command is a function that takes an input string (standard input) along with some string-valued command-line arguments, and whose primary result is a string (standard output). The command also has some secondary outputs (including standard error, and the exit code). However, the elegance comes from treating standard output as a string that can be passed to another command, or used in other contexts that require a string (e.g. command substitution). This article presents how Kawa solves this problem.
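For contrast, here is that model written out directly against java.lang.ProcessBuilder: a function from an input string and an argument list to an output string (a sketch; the error stream and exit code are ignored, and there is no error handling):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

// The shell's model of a command, written out with ProcessBuilder:
// stdin string plus argv in, stdout string back.
public class RunCmd {
    static String run(String input, List<String> argv)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(argv).start();
        try (OutputStream in = p.getOutputStream()) {
            in.write(input.getBytes(StandardCharsets.UTF_8)); // feed stdin
        }
        // Read all of stdout, then wait for the process to exit.
        String out = new String(p.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        p.waitFor();
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.print(run("my text\n", List.of("tr", "a-z", "A-Z"))); // MY TEXT
    }
}
```

Even this sketch is far wordier than the shell's `tr a-z A-Z <<<"my text"`, which is the point of the Kawa sugar described below.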

Process is auto-convertible to string

To run a command like date, you can use the run-process procedure:

#|kawa:2|# (define p1 (run-process "date --utc"))
Equivalently you can use the &`{command} syntactic sugar:
#|kawa:2|# (define p1 &`{date --utc})

But what kind of value is p1? It's an instance of gnu.kawa.functions.LProcess, which is a class that extends java.lang.Process. You can see this if you invoke toString or call the write procedure:

#|kawa:2|# (p1:toString)
#|kawa:3|# (write p1)

An LProcess is automatically converted to a string or a bytevector in a context that requires it. More precisely, an LProcess can be converted to a blob because it implements Lazy<Blob>. This means you can convert to a string (or bytevector):

#|kawa:9|# (define s1 ::string p1)
#|kawa:10|# (write s1)
"Wed Jan  1 01:18:21 UTC 2014\n"
#|kawa:11|# (define b1 ::bytevector p1)
(write b1)
#u8(87 101 100 32 74 97 110 ... 52 10)

However the display procedure prints it in "human" form, as a string:

#|kawa:4|# (display p1)
Wed Jan  1 01:18:21 UTC 2014

This is also the default REPL formatting:

#|kawa:5|# &`{date --utc}
Wed Jan  1 01:18:22 UTC 2014

Command arguments

The general form for run-process is:

(run-process keyword-argument... command)

The command is the process command-line. It can be an array of strings, in which case those are used as the command arguments directly:

(run-process ["ls" "-l"])
The command can also be a single string, which is split (tokenized) into command arguments separated by whitespace. Quotation marks group words together, just as in traditional shells:
(run-process "cmd a\"b 'c\"d k'l m\"n'o")
   ⇒ (run-process ["cmd"   "ab 'cd"   "k'l m\"no"])
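The quoting behavior can be sketched with a small tokenizer. This simplified Java version handles only the whitespace and quote-grouping rules (Kawa's real tokenization differs in details, for example backslash handling):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified shell-style tokenizer: whitespace separates arguments,
// and quoted spans (single or double) are kept together with the
// quote characters removed. Backslash escapes are not handled here.
public class Tokenize {
    static List<String> tokenize(String s) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inToken = false;
        char quote = 0; // the quote character we are inside, or 0
        for (char c : s.toCharArray()) {
            if (quote != 0) {
                if (c == quote) quote = 0; else cur.append(c);
            } else if (c == '\'' || c == '"') {
                quote = c; inToken = true;
            } else if (Character.isWhitespace(c)) {
                if (inToken) { tokens.add(cur.toString()); cur.setLength(0); inToken = false; }
            } else {
                cur.append(c); inToken = true;
            }
        }
        if (inToken) tokens.add(cur.toString());
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("cmd \"a b\" c'd e'f"));
        // [cmd, a b, cd ef]
    }
}
```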

Using string templates is more readable as it avoids having to quote quotation marks:

(run-process &{cmd a"b 'c"d k'l m"n'o})
You can also use the abbreviated form:
&`{cmd a"b 'c"d k'l m"n'o}
This syntax is the same as that of SRFI-108 named quasi-literals. In general, the following are roughly equivalent (the difference is that the former does smart quoting of embedded expressions, as discussed later):
&`{command}
(run-process &{command})

Similarly, the following are also roughly equivalent:

&`[keyword-argument...]{command}
(run-process keyword-argument... &{command})

A keyword-argument can specify various properties of the process. For example you can specify the working directory of the process:

(run-process directory: "/tmp" "ls -l")
You can use the shell keyword to specify that we want to use the shell to split the string into arguments. For example:
(run-process shell: #t "command line")
is equivalent to:
(run-process ["/bin/sh" "-c" "command line"])
You can also use the abbreviation &sh:
&sh{rm *.class}
which is equivalent to:
&`{/bin/sh -c "rm *.class"}

In general, the abbreviated syntax:
&sh[args...]{command}
is equivalent to:
&`[shell: #t args...]{command}

Command and variable substitution

Traditional shells allow you to insert the output from a command into the command arguments of another command. For example:
echo The directory is: `pwd`
The equivalent Kawa syntax is:
&`{echo The directory is: &`{pwd}}

This is just a special case of substituting the result from evaluating an expression. The above is a short-hand for:

&`{echo The directory is: &[&`{pwd}]}

In general, the syntax:
&[expression]
evaluates the expression, converts the result to a string, and combines it with the literal string. (We'll see the details in the next section.) This general form subsumes command substitution, variable substitution, and arithmetic expansion.

Tokenization of substitution result

Things get more interesting when considering the interaction between substitution and tokenization. This is not simple string interpolation. For example, if an interpolated value contains a quote character, we want to treat it as a literal quote, rather than a token delimiter. This matches the behavior of traditional shells. There are multiple cases, depending on whether the interpolation result is a string or a vector/list, and on whether the interpolation is inside quotes.

  • If the value is a string, and we're not inside quotes, then all non-whitespace characters (including quotes) are literal, but whitespace still separates tokens:
    (define v1 "a b'c ")
    &`{cmd x y&[v1]z}  ⟾  (run-process ["cmd" "x" "ya" "b'c" "z"])
  • If the value is a string, and we are inside single quotes, all characters (including whitespace) are literal.
    &`{cmd 'x y&[v1]z'}  ⟾  (run-process ["cmd" "x ya b'c z"])

    Double quotes work the same except that newline is an argument separator. This is useful when you have one filename per line, and the filenames may contain spaces, as in the output from find:

    &`{ls -l "&`{find . -name '*.pdf'}"}

    If the string ends with one or more newlines, those are ignored. This rule (which also applies in the previous not-inside-quotes case) matches traditional shell behavior.

  • If the value is a vector or list (of strings), and we're not inside quotes, then each element of the array becomes its own argument, as-is:
    (define v2 ["a b" "c\"d"])
    &`{cmd &[v2]}  ⟾  (run-process ["cmd" "a b" "c\"d"])
    However, if the enclosed expression is adjacent to non-space non-quote characters, those are prepended to the first element, or appended to the last element, respectively.
    &`{cmd x&[v2]y}  ⟾  (run-process ["cmd" "xa b" "c\"dy"])
    &`{cmd x&[[]]y}  ⟾  (run-process ["cmd" "xy"])
    This behavior is similar to how shells handle "$@" (or "${name[@]}" for general arrays), though in Kawa you would leave off the quotes.

    Note the equivalence:

    &`{&[array]}  ⟾  (run-process array)
  • If the value is a vector or list (of strings), and we are inside quotes, it is equivalent to interpolating a single string resulting from concatenating the elements separated by a space:
    &`{cmd "&[v2]"}  ⟾  (run-process ["cmd" "a b c\"d"])

    This behavior is similar to how shells handle "$*" (or "${name[*]}" for general arrays).

  • If the value is the result of a call to unescaped-data then it is parsed as if it were literal. For example a quote in the unescaped data may match a quote in the literal:
    (define vu (unescaped-data "b ' c d '"))
    &`{cmd 'a &[vu]z'}  ⟾  (run-process ["cmd" "a b " "c" "d" "z"])
  • If we're using a shell to tokenize the command, then we add quotes or backslashes as needed so that the shell will tokenize as described above:
    &sh{cmd x y&[v1]z}  ⟾  (run-process ["/bin/sh" "-c" "cmd x y'a' 'b'\\'''c' z'"])
Smart tokenization only happens when using the quasi-literal forms such as &`{command}. You can of course use string templates with run-process:
(run-process &{echo The directory is: &`{pwd}})
However, in that case there is no smart tokenization: The template is evaluated to a string, and then the resulting string is tokenized, with no knowledge of where expressions were substituted.

Input/output redirection

You can use various keyword arguments to specify standard input, output, and error streams. For example to lower-case the text in in.txt, writing the result to out.txt, you can do:

&`[in-from: "in.txt" out-to: "out.txt"]{tr A-Z a-z}
(run-process in-from: "in.txt" out-to: "out.txt" "tr A-Z a-z")

These options are supported:

in: value
The value is evaluated, converted to a string (as if using display), and copied to the input file of the process. The following are equivalent:
&`[in: "text\n"]{command}
&`[in: &`{echo "text"}]{command}

You can pipe the output from command1 to the input of command2 as follows:

&`[in: &`{command1}]{command2}
in-from: path
The process reads its input from the specified path, which can be any value coercible to a filepath.
out-to: path
The process writes its output to the specified path.
err-to: path
Similarly for the error stream.
out-append-to: path
err-append-to: path
Similar to out-to and err-to, but append to the file specified by path, instead of replacing it.
in-from: 'pipe
out-to: 'pipe
err-to: 'pipe
Does not set up redirection. Instead, the specified stream is available using the methods getOutputStream, getInputStream, or getErrorStream, respectively, on the resulting Process object, just like Java's ProcessBuilder.Redirect.PIPE.
in-from: 'inherit
out-to: 'inherit
err-to: 'inherit
Inherits the standard input, output, or error stream from the current JVM process.
out-to: port
err-to: port
Redirects the standard output or error of the process to the specified port.
out-to: 'current
err-to: 'current
Same as out-to: (current-output-port), or err-to: (current-error-port), respectively.
in-from: port
in-from: 'current
Re-directs standard input to read from the port (or (current-input-port)). It is unspecified how much is read from the port. (The implementation is to use a thread that reads from the port, and sends it to the process, so it might read to the end of the port, even if the process doesn't read it all.)
err-to: 'out
Redirect the standard error of the process to be merged with the standard output.

The default for the error stream (if neither err-to nor err-append-to is specified) is equivalent to err-to: 'current.
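On the JVM these keywords map closely onto the standard ProcessBuilder.Redirect API; a sketch of some of the correspondences (the file and command here are just for illustration):

```java
import java.io.File;
import java.nio.file.Files;

// Rough JVM counterparts of the redirection keywords, using the
// standard ProcessBuilder.Redirect API.
public class RedirectDemo {
    public static void main(String[] args) throws Exception {
        File out = File.createTempFile("demo", ".txt");
        ProcessBuilder pb = new ProcessBuilder("echo", "hi");
        pb.redirectOutput(ProcessBuilder.Redirect.to(out));           // out-to: path
        // pb.redirectOutput(ProcessBuilder.Redirect.appendTo(out));  // out-append-to: path
        // pb.redirectInput(ProcessBuilder.Redirect.from(inFile));    // in-from: path
        // pb.redirectError(ProcessBuilder.Redirect.INHERIT);         // err-to: 'inherit
        // pb.redirectErrorStream(true);                              // err-to: 'out
        pb.start().waitFor();
        System.out.print(Files.readString(out.toPath())); // hi
    }
}
```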

Note: Writing to a port is implemented by copying the output or error stream of the process. This is done in a thread, which means we don't have any guarantees about when the copying is finished. A possible approach is to have process-exit-wait (discussed later) wait not only for the process to finish, but also for these helper threads to finish.

Here documents

A here document is a form of literal string, typically multi-line, commonly used in shells for the standard input of a process. Kawa's string literals or string quasi-literals can be used for this. For example, this passes the string "line1\nline2\nline3\n" to the standard input of command:
(run-process [in: &{
    &|line1
    &|line2
    &|line3
    }] "command")

The &{...} delimits a string; the &| indicates the preceding indentation is ignored.


Writing a multi-stage pipe-line quickly gets ugly:

&`[in: &`[in: "My text\n"]{tr a-z A-Z}]{wc}
Aside: This would be nicer in a language with infix operators, assuming &` is treated as a left-associative infix operator, with the input as the left operand:
"My text\n" &`{tr a-z A-Z} &`{wc}

The convenience macro pipe-process makes this much nicer:

(pipe-process
  "My text\n"
  &`{tr a-z A-Z}
  &`{wc})

All but the first sub-expressions must be (optionally-sugared) run-process forms. The first sub-expression is an arbitrary expression which becomes the input to the second process expression; which becomes the input to the third process expression; and so on. The result of the pipe-process call is the result of the last sub-expression.

Copying the output of one process to the input of the next is optimized: it uses a copying loop in a separate thread. Thus you can safely pipe long-running processes that produce huge output. This isn't quite as efficient as using an operating system pipe, but is portable and works pretty well.
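The copying loop itself is simple; here is a sketch of the helper-thread idea (not Kawa's actual code), demonstrated on in-memory streams rather than real process streams:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the pipe-process copying loop: a helper thread that
// copies one process's output stream to the next one's input.
public class CopyThread {
    static Thread copy(InputStream from, OutputStream to) {
        Thread t = new Thread(() -> {
            try (from; to) {
                from.transferTo(to); // JDK 9+: buffered copy loop
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("My text\n".getBytes()), sink).join();
        System.out.print(sink.toString()); // My text
    }
}
```

In the real case `from` would be one process's getInputStream() and `to` the next one's getOutputStream(); joining these threads is exactly the guarantee problem mentioned in the note above.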

Setting the process environment

By default the new process inherits the system environment of the current (JVM) process as returned by System.getenv(), but you can override it.
env-name: value
In the process environment, set the "name" to the specified value. For example:
&`[env-CLASSPATH: ".:classes"]{java MyClass}
NAME: value
Same as using the env-name option, but only if the NAME is uppercase (i.e. if uppercasing NAME yields the same string). For example the previous example could be written:
(run-process CLASSPATH: ".:classes" "java MyClass")
environment: env
The env is evaluated and must yield a HashMap. This map is used as the system environment of the process.

Process-based control flow

Traditional shells provide control-flow operations based on the exit code of a process: 0 is success (true), while non-zero is failure (false). Thus you might see:

if grep Version Makefile >/dev/null
then echo found Version
else echo no Version
fi

One idea is to have a process be auto-convertible to a boolean, in addition to being auto-convertible to strings or bytevectors: in a boolean context, we'd wait for the process to finish, and return #t if the exit code is 0, and #f otherwise. This idea may be worth exploring later.

Currently Kawa provides process-exit-wait which waits for a process to exit, and then returns the exit code as an int. The convenience function process-exit-ok? returns true iff process-exit-wait returns 0.

(process-exit-wait (run-process "echo foo")) ⟾ 0
The previous sh example could be written:
(if (process-exit-ok? &`{grep Version Makefile})
  &`{echo found}
  &`{echo not found})
Note that unlike the sh version, this ignores the output from the grep (because no one has asked for it). To match the output from the shell, you can use out-to: 'inherit:
(if (process-exit-ok? &`[out-to: 'inherit]{grep Version Makefile})
  &`{echo found}
  &`{echo not found})
Created 30 Dec 2013 01:34 PST. Last edited 1 Nov 2014 14:50 PDT. Tags:
Kawa 1.14 is now available from the usual places: (source code) (runnable jar)

For an extended list of changes, see the news page. Below are some highlights.

More R7RS functions

The proposed new standard for Scheme R7RS has a number of new functions, and also extensions to existing functions. Kawa 1.14 implements (and documents) most of the new functions and extensions, and also implements a number of the syntactic changes:

  • After adding the list procedures make-list, list-copy, and list-set!, all the R7RS list procedures are implemented.
  • Other added procedures: square, boolean=?, string-copy!, digit-value, get-environment-variable, get-environment-variables, current-second, current-jiffy, jiffies-per-second, and features.
  • The predicates finite?, infinite?, and nan? are generalized to complex numbers.
  • The procedures write, write-simple, and write-shared are now consistent with R7RS.
  • String and character comparison functions are generalized to more than two arguments (but restricted to strings or characters, respectively).
  • The procedures string-copy, string->list, and string-fill! now take optional (start,end)-bounds. All of the R7RS string functions are now implemented.
  • Support => syntax in case form.
  • Support backslash-escaped special characters in symbols when inside vertical bars, such as '|Hello\nworld|.
  • The new functions and syntax are documented in the Kawa manual; look for the functions in the index.
  • The most important functionality still missing is support for the library (module) system, and specifically the define-library form. Of course Kawa has its own powerful module system which is in spirit quite compatible, but we still need to decide on the pragmatic issues, including how modules are resolved to files (probably using a module search path), and translation to class names.

    Another feature to ponder is how to handle the R7RS exception mechanism, and how it maps to JVM exceptions. We probably won't implement all of the R7RS semantics, but it would be desirable if (for example) some simple uses of guard would work on Kawa.

Better support for multi-line strings and templates

SRFI-109 specifies a syntax for quasi-literal strings. These have a number of advantages over traditional double-quote literals. They are quasi-literals in the sense that they can contain embedded expressions. For example, if name has the value "John", then:
&{Hello &[name]!}
evaluates to "Hello John!". Other benefits include control over indentation (layout), built-in support for format specifiers, and a huge number of named special characters. See the documentation and the SRFI-109 specification for details.

Named quasi-literals (SRFI-108)

SRFI-108 can be viewed as a generalization of SRFI-109, in that the constructors have names, and you can define new constructors. This could be used for writing documentation (similar to the Scribble system), or constructing complex nested data structures (a generalization of XML).

    Better support for command scripts

    A number of little improvements have been made to make it easier to write Kawa command scripts (simple one-file applications). Most importantly, the documentation has been re-written. The value returned by (car (command-line)) (i.e. the "name" of the script/command invoked) is now accurate and useful in many more cases; you also have the option of setting it explicitly with the option.

    Experimental Java 8 support

    There is some very preliminary support for next year's Java 8. These features are enabled if you run configure with the --with-java-source=8 option (or if ant auto-detects Java 8). This flag doesn't yet do very much:

    • It sets the classfile major version number.
    • It adds an optimized version of the Stream#forEach method for gnu.lists.SimpleVector and its subclasses, which includes the FVector used to implement Scheme vectors.
  • It enhances gnu.lists.Consumer to implement java.util.function.Consumer and the specialized classes java.util.function.{Int,Long,Double}Consumer.

      Because output ports implement gnu.lists.Consumer, you can pass a port to the forEach method:

      #|kawa:11|# ((['a 'b 'c 'd]:stream):forEach (current-output-port))
      a b c d

    Of course, to a large extent streams just work out of the box with Kawa data types, because those types implement the appropriate Java interfaces. For example, lists and vectors both implement java.util.List, and thus they automatically have the stream method.
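    To make the connection concrete, here is the same idea in plain Java: any java.util.List (which Kawa lists and vectors implement) gets the stream method for free. The class name is just for the example:

    ```java
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class StreamDemo {
        public static void main(String[] args) {
            // Kawa lists and vectors implement java.util.List,
            // so a pipeline like this works on them directly.
            List<String> items = Arrays.asList("a", "b", "c", "d");
            String joined = items.stream().collect(Collectors.joining(" "));
            System.out.println(joined);
        }
    }
    ```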

    Created 4 Oct 2013 21:44 PDT. Last edited 5 Oct 2013 14:30 PDT.

    JavaFX 1.0 was a next-generation GUI/client platform. It had a new Node-based GUI API and used a new language, JavaFX Script, whose goal was to make it easier to program Rich Client applications. (Yours truly was hired by Sun to work on the JavaFX Script compiler.) In 2010 the JavaFX Script language was cancelled: JavaFX would still refer to a new GUI API based on many of the same concepts, but the primary programming language would be Java, rather than JavaFX Script.

    Java is a relatively low-level and clumsy language for writing Rich Client applications, though it's not too painful. Still, there was a reason we worked on JavaFX Script: It had a number of features to make such programs more convenient. Luckily, other JVM languages - and specifically Kawa-Scheme - can take up the slack. Below I'll show you a simple Hello-World-type example, and then explain how you can try it yourself. In later articles I'll show different examples.

    Simple buttons and events

    Our first example is just 3 buttons and 2 trivial event handlers. It is translated from a version written in Java by Richard Bair.

    (require 'javafx-defs)
    (javafx-application)
    (javafx-scene
     title: "Hello Button"
     width: 600 height: 450
     (Button
      text: "Click Me"
      layout-x: 25
      layout-y: 40
      on-action: (lambda (e) (format #t "Event: ~s~%" e))
      on-key-released: (lambda (e) (format #t "Event: ~s~%" e)))
     (Button
      text: "Click Me Too"
      layout-x: 25
      layout-y: 70)
     (Button
      text: "Click Me Three"
      layout-x: 25
      layout-y: 100))

    For those new to Scheme, the basic syntactic building block has the form:

    (operator argument1 ... argumentN)
    The operator can be a function (like format), an arithmetic operator in prefix form (like (+ 3 4)), a command, a keyword (like lambda), or a user-defined macro. This general format makes for a lot of flexibility.

    The first two lines in HelloButton.scm are boiler-plate: The require imports various definitions and aliases, while the (javafx-application) syntax declares this module is a JavaFX Application.

    The javafx-scene form (a macro) creates a scene, which is a collection of graphical objects. The Scene has certain named properties (title, width, and height), specified using keyword arguments. The Scene also has 3 Button children. Finally, the make-scene command puts the scene on the stage (the window) and makes it visible.

    Each Button form is an object constructor. For example:

     (Button
      text: "Click Me Three"
      layout-x: 25
      layout-y: 100)
    is equivalent to the Java code:
    javafx.scene.control.Button tmp = new javafx.scene.control.Button();
    tmp.setText("Click Me Three");
    return tmp;

    The on-action and on-key-released properties on the first Button bind event handlers. Each handler is a lambda expression or anonymous function that takes an event e as a parameter. The Kawa compiler converts the handler to a suitable event handler object using SAM-conversion features. (This conversion depends on the context, so if you don't have a literal lambda expression you have to do the conversion by hand using an object operator.)

    Getting it to run

    Downloading JavaFX 2.x beta

    For now JavaFX is only available for Windows, but Mac and GNU/Linux ports are being worked on and mostly work. (I primarily use Fedora Linux.) The primary JavaFX site has lots of information, including a link to the download site. You will need to register as long as the software is in beta. Download the zipfile and extract it to some suitable location.

    In the following, we assume the variable JAVAFX_HOME is set to the build you've installed. For example (if using plain Windows):

    set JAVAFX_HOME=c:\javafx-sdk2.0-beta
    The file %JAVAFX_HOME%\rt\lib\jfxrt.jar should exist.

    Downloading and building Kawa

    The JavaFX support in Kawa is new and experimental (and unstable), so for now you will have to get the Kawa source code from SVN.

    There are two ways to build Kawa. The easiest is to use Ant - on plain Windows do:

    ant -Djavafx.home=%JAVAFX_HOME%
    or on other platforms (including Cygwin):
    ant -Djavafx.home=$JAVAFX_HOME

    Alternatively, you can use configure and make (but note that on Windows you will need to have Cygwin installed to use this approach):

    $ KAWA_DIR=path_to_Kawa_sources
    $ cd $KAWA_DIR
    $ ./configure --with-javafx=$JAVAFX_HOME
    $ make

    Running the example

    On Windows, the easiest way to run the example is to use the kawa.bat created when building Kawa. It sets up the necessary paths for you.

    %KAWA_HOME%\bin\kawa.bat HelloButton.scm

    On Cygwin (or Unix/Linux) you can use the similar kawa script. I suggest setting your PATH to find kawa.bat or the kawa script, so you can just do:

    kawa HelloButton.scm

    Using the kawa command is equivalent to

    java -classpath classpath kawa.repl HelloButton.scm
    but it sets the classpath automatically. If you do it by hand you need to include %JAVAFX_HOME%\rt\lib\jfxrt.jar and %KAWA_DIR%\kawa-version.jar.

    This is what pops up:


    If you click the first button the action event fires, and you should see something like:

    Event: javafx.event.ActionEvent[source=Button@3a5794[styleClass=button]]

    If you type a key (say n) while that button has focus (e.g. after clicking it), then when you release the key a key-released event fires:

    Event: KeyEvent [source = Button@3a5794[styleClass=button],
    target = Button@3a5794[styleClass=button], eventType = KEY_RELEASED,
    consumed = false, character = , text = n, code = N]

    Note: Running a JavaFX application from the Kawa read-eval-print-loop (REPL) doesn't work very well at this point, but I'm exploring ideas to make it useful.

    Compiling the example

    You can compile HelloButton.scm to class files:

    kawa --main -C HelloButton.scm

    You can execute the resulting application in the usual way:

    java -classpath classpath HelloButton
    or use the kawa command:
    kawa HelloButton


    Created 28 Jul 2011 11:06 PDT. Last edited 16 Aug 2011 15:36 PDT.

    Read JavaFX-using-Kawa-intro first. That introduces JavaFX and how you can use Kawa to write rich GUI applications.

    This example demonstrates simple animation: A rectangle that moves smoothly left to right and back again continuously. This example is converted from a version written in Java by Kevin Rushforth. Here is the entire program HelloAnimation.scm:

    (require 'javafx-defs)
    (javafx-application)
    (define rect
      (Rectangle
       x: 25  y: 40  width: 100  height: 50
       fill: Color:RED))
    (javafx-scene
     title: "Hello Animation"
     width: 600 height: 450  fill: Color:LIGHTGREEN
     rect)
    ((Timeline
      cycle-count: Timeline:INDEFINITE
      auto-reverse: #t
      (KeyFrame
       (Duration:millis 500)
       (KeyValue (rect:xProperty) 200))):play)

    The first two lines are boilerplate, as in JavaFX-using-Kawa-intro. The (define rect (Rectangle ...)) defines a variable rect, and initializes it to a new Rectangle, using an object constructor as seen before.

    The (javafx-scene ...) operation creates the scene with the specified title, width, height and fill (background color) properties. It adds the rect to the scene, and then creates a window to make it visible.

    Finally, we animate rect. This requires some background explanation.

    Key values and key frames

    To animate a scene you create a Timeline data structure, which describes what properties to modify and when to do so. Once you have done so, call play on the Timeline, which instructs JavaFX's animation engine to start running the animation. In this example, the animation continues indefinitely: When done, it reverses (because auto-reverse was set to true) and starts over.

    The timeline is divided into a fixed number of key frames, which are associated with a specific point in time. In the current example there is the implicit starting key-frame at time 0, and an explicit key-frame at time 500 (milliseconds). (The KeyFrame specifies a duration or ending time, where the start time is the ending time of the previous key-frame, or 0 in the case of the first key-frame.) The animator (or programmer) specifies various properties (such as positions and sizes of objects) at each key-frame, and then the computer smoothly interpolates between the key-frames. By default the interpolation is linear, but you can specify other kinds of interpolation.
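    The default linear interpolation is simple enough to sketch in plain Java, outside the JavaFX API (the names below are illustrative, not part of JavaFX). At time t between two key frames, the animated value is the start value plus t's fraction of the distance to the end value:

    ```java
    public class InterpolationDemo {
        // Linear interpolation between a start and end key value,
        // as the animation engine does by default; t is in [0, 1].
        static double interpolate(double start, double end, double t) {
            return start + (end - start) * t;
        }

        public static void main(String[] args) {
            // The rectangle's x animates from 25 (time 0) to 200 (time 500ms).
            double startX = 25, endX = 200, durationMs = 500;
            for (double ms = 0; ms <= durationMs; ms += 250) {
                double t = ms / durationMs;
                System.out.println(ms + "ms: x = " + interpolate(startX, endX, t));
            }
        }
    }
    ```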

    Each KeyFrame consists of some KeyValue objects. A KeyValue specifies which property to modify and the ending value that the property will have at the end of the key-frame. In the example, we have a single KeyValue to modify rect:x, ending at 200. (The start value is 25, as set when rect was constructed.)

    In JavaFX, the properties to animate are specified using Property objects. Typically, a property is a reference to a specific field in a specific object instance. You register the property with the animation engine (as shown in the sample), and then the animation engine uses the property to modify the field. In the example the expression (rect:xProperty) (equivalent to the Java method call rect.xProperty()) returns a property that references the x property of the rect object. The Property object also has ways to register dependencies (rather like change listeners) so that things get updated when the property is changed. (For example the rectangle is re-drawn when its x property changes.)
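    The listener mechanism behind such properties can be sketched in a few lines of plain Java. This is a minimal stand-in, not the actual JavaFX property classes: a value holder that notifies registered listeners on every change, which is how a redraw gets triggered:

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.DoubleConsumer;

    public class PropertyDemo {
        // A minimal stand-in for a JavaFX double property: it holds a value
        // and notifies registered listeners when the value changes.
        static class SimpleProperty {
            private double value;
            private final List<DoubleConsumer> listeners = new ArrayList<>();
            SimpleProperty(double initial) { value = initial; }
            void addListener(DoubleConsumer l) { listeners.add(l); }
            double get() { return value; }
            void set(double v) {
                value = v;
                for (DoubleConsumer l : listeners) l.accept(v);  // e.g. trigger a redraw
            }
        }

        public static void main(String[] args) {
            SimpleProperty x = new SimpleProperty(25);
            x.addListener(v -> System.out.println("redraw at x = " + v));
            x.set(200);
        }
    }
    ```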

    Here is a static screenshot - obviously an animated gif would be nicer here! HelloAnimation.png

    More information

    ToDo: Links to documentation and other JavaFX examples.
    Created 28 Jul 2011 11:05 PDT. Last edited 16 Aug 2011 16:06 PDT.
    The Java Architecture for XML Binding (JAXB) 2.0 lets you use annotations to specify how Java class instances get printed (serialized) as XML documents. Kawa recently gained support for annotations, so let us look at an example. You will need Java 6 for the JAXB 2.0 support, and Kawa from SVN (or Kawa 1.12 when it comes out). Nothing else is needed.

    Our example is a simple bibliography database of books and some information about them. The DTD and sample input data are taken from the XML Query Use Cases. The complete example is in the file testsuite/jaxb-annotations3.scm in the Kawa source distribution.

    Defining the Java classes and their XML mapping

    We start with some define-alias declarations, which serve the role of Java's import:
    (define-alias JAXBContext javax.xml.bind.JAXBContext)
    (define-alias StringReader java.io.StringReader)
    (define-alias XmlRegistry javax.xml.bind.annotation.XmlRegistry)
    (define-alias XmlRootElement javax.xml.bind.annotation.XmlRootElement)
    (define-alias XmlElement javax.xml.bind.annotation.XmlElement)
    (define-alias XmlAttribute javax.xml.bind.annotation.XmlAttribute)
    (define-alias BigDecimal java.math.BigDecimal)
    (define-alias ArrayList java.util.ArrayList)
    The bibliography is represented by a Java singleton class Bib, which has a books field, which is an ArrayList collection of Book objects:
    (define-simple-class Bib () (@XmlRootElement name: "bib")
      (books (@XmlElement name: "book" type: Book) ::ArrayList))

    The XmlRootElement annotation says that the Bib is represented in XML as a root element (document element) named <bib>. Each Book is represented using a <book> element. Note the annotation value name: "book" is needed because I decided to use the plural books for the field name (because it is a List), but the singular book for the tag name for each book element.

    Each book is represented using the class Book, which has the fields year, title, publisher (all strings), authors and editors (both ArrayLists), and price (a BigDecimal). Note that while the other fields get mapped to XML child elements, the year is mapped to an XML attribute, also named year.

    (define-simple-class Book ()
      (year (@XmlAttribute required: #t) ::String)
      (title (@XmlElement) ::String)
      (authors (@XmlElement name: "author" type: Author) ::ArrayList)
      (editors (@XmlElement name: "editor" type: Editor) ::ArrayList)
      (publisher (@XmlElement) ::String)
      (price (@XmlElement) ::BigDecimal))

    Finally, the Author and Editor classes, both of which inherit from Person:

    (define-simple-class Person ()
      (last (@XmlElement) ::String)
      (first (@XmlElement) ::String))
    (define-simple-class Author (Person))
    (define-simple-class Editor (Person)
      (affiliation (@XmlElement) ::String))

    Setting up the JAXB context

    Next we need to specify a JAXBContext, which manages the mapping between Java classes and the XML document schema (structure). Once JAXB knows the classes involved, it can figure out the XML representation by analyzing the classes and their annotations (using reflection). A simple way to do this is to just list the needed classes when creating the JAXBContext:

    (define jaxb-context ::JAXBContext
      (JAXBContext:newInstance Bib Book Person Editor))

    (Actually, you don't need to list all the classes: if you just list the root class Bib, JAXB can figure out the rest.)

    An alternative is to make use of an ObjectFactory. Specifically, if the element classes are in a package named my.package then you need to create a class ObjectFactory (in the same package), and specify the package name to the JAXBContext factory method newInstance. The ObjectFactory class needs to have a factory method for at least the root class Bib:

    (define jaxb-context ::JAXBContext
      (JAXBContext:newInstance "my.package"))
    (define-simple-class ObjectFactory () (@XmlRegistry)
      ((createBib) ::Bib (Bib)))

    Unmarshalling - reading XML and creating objects

    At this point actually parsing the XML and creating objects is trivial. You create an Unmarshaller from the JAXBContext, and then just invoke one of the Unmarshaller unmarshal methods.

    (define (parse-xml in) ::Bib
      ((jaxb-context:createUnmarshaller):unmarshal in))
    (define bb (parse-xml (current-input-port)))

    Updating the values

    The whole point of unmarshalling the XML is presumably so you can do something with the data. First, an example of modifying existing records: let's deal with inflation and increase all the prices by 10%:

    ;; Multiply the price of all the books in bb by ratio.
    (define (adjust-prices (bb::Bib) (ratio::double))
      (let* ((books bb:books)
             (nbooks (books:size)))
        (do ((i :: int 0 (+ i 1))) ((>= i nbooks))
          (let* ((book ::Book (books i)))
            (set! book:price (adjust-price book:price ratio))))))
    ;; Multiply old by ratio to yield an updated 2-decimal-digit BigDecimal.
    (define (adjust-price old::BigDecimal ratio::double)::BigDecimal
      (BigDecimal (round (* (old:doubleValue) ratio 100)) 2))
    (adjust-prices bb 1.1)
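    Since adjust-price is just arithmetic on a BigDecimal, its plain-Java equivalent is easy to check in isolation (the class name is just for the example):

    ```java
    import java.math.BigDecimal;

    public class AdjustPrice {
        // Java equivalent of the Kawa adjust-price function:
        // multiply by ratio, round to the nearest cent, and build
        // a BigDecimal with scale 2 (two decimal digits).
        static BigDecimal adjustPrice(BigDecimal old, double ratio) {
            return BigDecimal.valueOf(Math.round(old.doubleValue() * ratio * 100), 2);
        }

        public static void main(String[] args) {
            // A 10% increase on a $49.99 book.
            System.out.println(adjustPrice(new BigDecimal("49.99"), 1.1));
        }
    }
    ```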

    Next, let's add a new book:

    ((bb:books):add
     (Book year: "2006"
           title: "JavaScript: The Definitive Guide (5th edition)"
           authors: [(Author last: "Flanagan" first: "David")]
           publisher: "O'Reilly"
           price: (BigDecimal "49.99")))

    Notice the use of square brackets (a new Kawa feature) for the authors single-element sequence.

    Marshalling - writing XML from objects

    Finally, we might want to write the updated data out to an XML file. For that we create a Marshaller. It has a number of marshal methods that take a jaxbElement and a target specification, and serializes the former to the latter as XML. For example, you could just pass in a File to specify the name of the XML output file. However, Kawa includes an XML pretty-printer, and it would be nice to have pretty-printed and indented XML. To do that we have to use a SAX2 ContentHandler as the bridge between JAXB and Kawa's XML tools. The Kawa XMLFilter implements ContentHandler and can forward XML-like data to an XMLPrinter. The latter doesn't do pretty-printing by default - you have to set the *print-xml-indent* fluid variable to the symbol 'pretty.

    ;; Write bb as pretty-printed XML to an output port.
    (define (write-bib (bb ::Bib) (out ::output-port))::void
      (let ((m (jaxb-context:createMarshaller)))
        (fluid-let ((*print-xml-indent* 'pretty))
          ;; We could marshal directly to 'out' (which extends java.io.Writer).
          ;; However, XMLPrinter can pretty-print the output more readably.
          ;; We use XMLFilter as glue that implements org.xml.sax.ContentHandler.
          (m:marshal bb (gnu.xml.XMLFilter (gnu.xml.XMLPrinter out))))))
    (write-bib bb (current-output-port))
    Created 3 Jan 2011 18:31 PST. Last edited 7 Jan 2011 11:46 PST.