
Hi, I'm Mariano Guerra, below is my blog. If you want to learn more about me and what I do, find me on twitter @warianoguerra or on mastodon.

A Simple, Understandable Production Ready Preact Project Setup

I wrote a similar post in the past; it's time for an updated version. This is a setup similar to the one I'm using with GlooData.


First we need to create our project, change folder names accordingly:

mkdir myproj
cd myproj
mkdir js lib css img

Create a package.json file at the root of the folder, we will need it later to install one or more build tools:

npm init

Fetch the dependencies (you can put this in a makefile, a justfile, a shell script or whatever):

wget -O lib/preact.js

Now let's do our first and last edit on our index.html:

<!doctype html>
   <meta charset="utf-8">
   <meta name="viewport" content="width=device-width, minimum-scale=1.0, initial-scale=1, maximum-scale=1.0, user-scalable=no">
   <title>My App</title>
   <script type=module src="./js/app.js?r=dev"></script>
   <link rel="shortcut icon" href="img/favicon.png">
   <div id="app"></div>

Here's a simple entry point that shows how to use preact without transpilers; modify it to your taste. js/app.js:

import {h, Fragment, render} from '../lib/preact.js';

const c =
        (tag) =>
        (attrs, ...childs) =>
            h(tag, attrs, ...childs),
    // some tags here, you get the idea
    button = c('button'),
    div = c('div'),
    span = c('span');

function rCounter(count) {
    return div(
        {class: 'counter'},
        button({id: 'dec'}, '-'),
        span(null, count),
        button({id: 'inc'}, '+')
    );
}

function main() {
    const rootNode = document.getElementById('app'),
        dom = rCounter(0);

    render(dom, rootNode);
}

main();
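If it isn't obvious what the c helper above buys you, here it is in isolation: it's plain partial application of h. The h below is a stub standing in for preact's h (an assumption so the sketch runs on its own in node); the real one builds virtual DOM nodes:

```javascript
// Stub of preact's h() so this runs standalone; it just records its arguments.
const h = (tag, attrs, ...childs) => ({tag, attrs, childs});

// The helper from the post: c('div') is a function that calls h('div', ...).
const c = (tag) => (attrs, ...childs) => h(tag, attrs, ...childs);
const div = c('div'), span = c('span');

const vnode = div({class: 'counter'}, span(null, 0));
console.log(vnode.tag);           // div
console.log(vnode.childs[0].tag); // span
```

The point is that `div({class: 'counter'}, ...)` is nothing more than `h('div', {class: 'counter'}, ...)` with the tag name baked in.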


Also, I'm not covering state management in this post, I may do it in a future one if there's interest in it.


For development I use basic-http-server: I just start it at the root of the project, open the browser at the address it logs, and then edit, save, reload. No transpilation, no waiting, no watchers.

If you don't want to install it you can use any other static file server; you probably have Python at hand. I find it a little slower to load, especially when you have many js modules:

python3 -m http.server


You could just publish it as it is, since any modern browser will load it.

If you want to optimize it, there are three steps you can take: bundle it into a single file, minify it, and provide it as an old style js file instead of an ES module. Let's do that.

First the bundling. There are two options I use: one if you have deno at hand, the other if you have npm at hand.

First the common part, create a dist folder to put the build artifacts:

rm -r dist
mkdir -p dist/js
cp index.html dist/
cp -r img dist/

Bundling with Deno

deno bundle js/app.js > dist/app.bundle.js

Bundling with Rollup

Do this once at the root of your project:

npm install --save-dev rollup


rollup js/app.js --file dist/app.bundle.js --format iife -n app

Minifying with Uglify-js

Do this once at the root of your project:

npm install --save-dev uglify-js


uglifyjs dist/app.bundle.js -m -o dist/js/app.js

And that's it.

Now some extra/optional things.

Reproducible Dev Environments with Nix

How do you get this environment up and running consistently and reproducibly?

I use Nix; you can too if you want.

First you need to install it.

Then create a shell.nix file at the root of your project with the following:

{ pkgs ? import (fetchTarball "") {} }:

pkgs.mkShell {
    LOCALE_ARCHIVE_2_27 = "${pkgs.glibcLocales}/lib/locale/locale-archive";
    buildInputs = [
    ];
    shellHook = ''
        export LC_ALL=en_US.UTF-8
        export PATH=$PWD/node_modules/.bin:$PATH
    '';
}

Then whenever you want to develop in this project, run the following command at the root of your project:

nix-shell

Now you have a shell with all the tools you need.

Task automation with just

I use just to automate tasks. Here are some snippets; you should check the docs for more details:

set shell := ["bash", "-uc"]
cdn := ""

ver_immutable := "4.1.0"
ver_preact := "10.11.2"

ROLLUP := "node_modules/.bin/rollup"
UGLIFY := "node_modules/.bin/uglifyjs"

default:
    @just --list

serve-deno:
    deno run --allow-all server.js

serve:
    basic-http-server -a

fetch-deps: clean-deps create-deps
    just fetch immutable.js immutable@{{ver_immutable}}/dist/
    just fetch preact.js preact@{{ver_preact}}/dist/preact.module.js

clean-deps:
    rm -rf deps

create-deps:
    mkdir -p deps

fetch-full NAME URL:
    wget {{URL}} -O deps/{{NAME}}

fetch NAME URL:
    wget {{cdn}}/{{URL}} -O deps/{{NAME}}

clear-dist:
    rm -rf dist

mkdir-dist:
    mkdir -p dist/js dist/img

dist-bundle:
    {{ROLLUP}} js/app.js --file dist/app.bundle.js --format iife -n app
    {{UGLIFY}} dist/app.bundle.js -m -o dist/js/app.js
    rm dist/app.bundle.js
    cp img/favicon.png dist/img/
    sed "s/type=module //g;s/r=dev/r=$(git describe --long --tags --dirty --always --match 'v[0-9]*\.[0-9]*')/g" index.html > dist/index.html

dist: fetch-deps clear-dist mkdir-dist dist-bundle

Older browser and cache busting

The script tag with type=module will not work in really old browsers. You may also want to make sure browsers load the latest bundle after a deploy. For both you can run a line similar to this one, which replaces the script tag in the index.html in your dist folder with one that achieves the two objectives:

sed "s/type=module //g;s/r=dev/r=$(git describe --long --tags --dirty --always --match 'v[0-9]*\.[0-9]*')/g" index.html > dist/index.html
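What the sed line does, spelled out as the two replacements it performs (the version string below is a stand-in for the output of git describe):

```javascript
// 1) drop "type=module " so old browsers load it as a classic script
// 2) replace the r=dev cache buster with a per-release version string
const version = 'v1.0-3-gabc123'; // stand-in for `git describe` output
const html = '<script type=module src="./js/app.js?r=dev"></script>';
const out = html.replace('type=module ', '').replace('r=dev', 'r=' + version);
console.log(out); // <script src="./js/app.js?r=v1.0-3-gabc123"></script>
```

Since the query string changes on every release, browsers treat it as a new URL and fetch the fresh bundle instead of serving a cached one.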

The Proxy trick to not repeat yourself ™️

Above you saw something like this:

import {h, Fragment, render} from '../lib/preact.js';

const c =
        (tag) =>
        (attrs, ...childs) =>
            h(tag, attrs, ...childs),
    // some tags here, you get the idea
    button = c('button'),
    div = c('div'),
    span = c('span');

There's a trick to avoid writing the tag name twice; it avoids mistakes and minifies better. Here it is:

import {h, Fragment, render} from '../lib/preact.js';

const genTag = new Proxy({}, {
        get(_, prop) { return (attrs, ...childs) => h(prop, attrs, ...childs); }
    }),
    // pick the tags you need, you get the idea
    {button, div, span} = genTag;
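A sketch of the Proxy version in action: any property you destructure out of genTag becomes a tag function, so the tag name is written exactly once. As before, h is a stub standing in for preact's h so the snippet runs on its own in node:

```javascript
// Stub of preact's h() so this runs standalone.
const h = (tag, attrs, ...childs) => ({tag, attrs, childs});

// Every property access on genTag returns a tag function for that name.
const genTag = new Proxy({}, {
    get(_, prop) { return (attrs, ...childs) => h(prop, attrs, ...childs); }
});

const {button, div, span} = genTag;

const vnode = div({class: 'counter'}, button({id: 'dec'}, '-'), span(null, 0));
console.log(vnode.tag);                    // div
console.log(vnode.childs.map(x => x.tag)); // [ 'button', 'span' ]
```

Minifiers can also mangle the destructured names freely, since the Proxy doesn't care what the property is called.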

Reproducible Riak Core Lite Tutorial with Nix


Over the years I've created and recreated guides, posts and tutorials on how to set up an environment to create riak_core based applications.

Most of the repetitive work and troubleshooting is around the moving target that is a current Erlang/Elixir/Rebar3/Mix setup.

With this attempt I hope people will be able to setup and follow the guides with a reproducible environment that always reflects the one I had when I wrote the guide.

For that I will use Nix; to follow along you just need git and nix.

Follow instructions here to Install Nix if you haven't done that already.

Riak Core Lite with Erlang

Clone the Riak Core Lite Rebar3 Template

mkdir -p ~/.config/rebar3/templates
git clone ~/.config/rebar3/templates/rebar3_template_riak_core_lite

Enter a nix-shell with the needed tools:

nix-shell ~/.config/rebar3/templates/rebar3_template_riak_core_lite/shell.nix

Now your shell should have Erlang and rebar3 available, try it:

erlc -h
rebar3 help

Now let's create a new Riak Core Lite application, go to the folder where you want to store your new project and then:

Create the project:

rebar3 new rebar3_riak_core_lite name=ricor

Build it:

cd ricor
rebar3 release

Try it:

./_build/default/rel/ricor/bin/ricor console
(ricor@> ricor:ping().

The output should look something like this (the number will probably be different):

{pong,'ricor@', 1324485858831130769622089379649131486563188867072}

With this environment you should be able to follow previous tutorials and guides like the Riak Core Tutorial and maybe the Little Riak Core Book.

Riak Core Lite with Elixir

I recommend following Rkv: Step by Step Riak Core Lite Key Value Store Project

Clone the project:

git clone

Enter a nix-shell with the needed tools:

cd rkv
nix-shell shell.nix

Now your shell should have Erlang, Elixir, rebar3 and mix available, try it:

erlc -h
rebar3 help
elixirc -h
mix --help

Fetch deps, compile, test and run:

mix deps.get
mix compile
mix test
iex --name dev@ -S mix run

Play with the API:


Rkv.get(:k1)
# {:error, :not_found}

Rkv.put(:k1, :v1)
# :ok

Rkv.put(:k2, :v2)
# :ok

Rkv.get(:k2)
# {:ok, :v2}

Rkv.delete(:k2)
# :ok

Rkv.get(:k2)
# {:error, :not_found}

Rkv.put(:k2, :v2)

Rkv.get(:k2)
# {:ok, :v2}

You can follow the guide by switching to each tag in order.

I will try to keep the shell.nix files for both languages up to date with major Erlang/Elixir versions from time to time. You can also try to update the nix hash yourself and see if it still builds by following the instructions here: Nix Possible URL values

Have fun!

Nikola Restructured Text Roles Plugin Example Project

This guide assumes you have python 3 and nikola installed.

See the Nikola Getting Started Guide for instructions to install it.

Environment with Nix

If you have Nix installed you can get the environment by running nix-shell at the root of this project with this shell.nix file:

{ pkgs ? import (fetchTarball "") {} }:
with pkgs;

mkShell {
  LOCALE_ARCHIVE_2_27 = "${glibcLocales}/lib/locale/locale-archive";
  buildInputs = [
  ];
  shellHook = ''
    export LC_ALL=en_US.UTF-8
  '';
}

Setup a Nikola Site

In case you don't have a nikola site around or you want to play in a temporary project we will create one here:

nikola init my-site
cd my-site

I answered all the questions with the default by hitting enter; feel free to give different answers.

Create a New Plugin

In the base folder of the nikola site create a folder for your plugin, I will name mine my_rst_roles:

mkdir -p plugins/my_rst_roles

Create a configuration file with the plugin metadata and customize it at plugins/my_rst_roles/my_rst_roles.plugin:

Name = my_rst_roles
Module = my_rst_roles

PluginCategory = CompilerExtension
Compiler = rest
MinVersion = 7.4.0

Author = Mariano Guerra
Version = 0.1.0
Website =
Description = A set of custom reStructuredText roles

Create a python module inside the folder that will contain the plugin logic at plugins/my_rst_roles/

from docutils import nodes
from docutils.parsers.rst import roles

from nikola.plugin_categories import RestExtension
from nikola.plugins.compile.rest import add_node

class Span(nodes.Inline, nodes.TextElement):
    pass

def visit_span(self, node):
    attrs = {}
    self.body.append(self.starttag(node, "span", "", **attrs))

def depart_span(self, _node):
    self.body.append("</span>")

add_node(Span, visit_span, depart_span)

class Plugin(RestExtension):

    name = "my_rst_roles"

    def set_site(self, site):
        self.site = site

        generic_docroles = {
            "my-foo": (Span, "my-base my-foo"),
            "my-bar": (Span, "my-base my-bar"),
        }

        for rolename, (nodeclass, classes) in generic_docroles.items():
            generic = roles.GenericRole(rolename, nodeclass)
            role = roles.CustomRole(rolename, generic, {"classes": [classes]})
            roles.register_local_role(rolename, role)

        return super(Plugin, self).set_site(site)

Create a New Post or Page to test it

Create a post:

nikola new_post -t "Nikola Restructured Text Roles Plugin Example Project"

Put some content in it:

echo 'Hi :my-foo:`hello`, :my-bar:`world`!' >> posts/nikola-restructured-text-roles-plugin-example-project.rst

Build the site

nikola build

Check the Generated HTML

You can serve it with:

nikola serve

Open http://localhost:8000 and inspect it with the browser's developer tools; here's a blog-post-friendly way with grep:

grep my-foo output/posts/nikola-restructured-text-roles-plugin-example-project/index.html

The output should be something like this:

Hi <span class="my-base my-foo">hello</span>, <span class="my-base my-bar">world</span>!</p>

More Advanced Transformations

I needed to process the children of the Span node myself and stop docutils from walking the children between visit_span and depart_span. Here's a simplified version:

class Span(nodes.Inline, nodes.TextElement):
    def walk(self, visitor):
        # don't stop
        return False

    def walkabout(self, visitor):
        # don't stop
        return False

def visit_span(self, node):
    cls = " ".join(node.get('classes', []))
    child_str = " ".join([format_span_child(child) for child in node.children])
    self.body.append("<span class=\"" + cls + "\">" + child_str)

def depart_span(self, _node):
    self.body.append("</span>")

def format_span_child(node):
    return node.astext()

Here are links to the implementations of some of the relevant functions:

A Real World Use Case

I did this to embed inline UI components in the documentation for instadeq, you can see it in action in This Walkthrough Guide if you scroll a little.

I still have to improve the style and add some missing roles, but it's much better than having to describe the position and look of UI components instead of just showing them.

As usual, thanks to Roberto Alsina for Nikola and for telling me how to get started with this plugin.

EFLFE: Elixir Flavoured Lisp Flavoured Erlang


In 2019 I was invited to give a talk at ElixirConfLA. At that point I didn't know Elixir, so I decided to "make a joke": instead of learning Elixir I would create a transpiler from Erlang to Elixir.

The Proof of Concept as a Joke was a lot of work but at least I learned a lot about Elixir and Pretty Printers.

One year later I was invited to CodeBEAM Brasil and I decided to push the PoC to completion to achieve the goal of transpiling Erlang/OTP and the transpiler itself.

This year I was invited again and I felt the pressure to continue with the tradition.

Last year I gave my talk with Robert Virding, co-creator of Erlang and creator of LFE (Lisp Flavoured Erlang), and I had the idea to transpile LFE too.

At that point LFE 1.0 compiled to an internal representation (Core Erlang) that was one level below the one I was using in Elixir Flavoured Erlang.

I knew that the next major version of LFE was going to switch to the representation I was using, so it was just a matter of waiting for the release.

Timing helped and LFE 2.0 was released in June.

I went code diving, found how to get the data at the stage I needed, and then fixed some corner cases around naming (LFE uses lispy kebab-case).


The result is EFLFE: Elixir Flavoured Lisp Flavoured Erlang, a Lisp Flavoured Erlang to Elixir transpiler.

Business in the Front

Aliens in the Back

Run it with a configuration file and a list of LFE files:

./efe pp-lfe file.conf my-code.lfe

And it will output Elixir files for each.



(defmodule ping_pong
  (export
    (start_link 0)
    (ping 0))
  (export
    (init 1)
    (handle_call 3)
    (handle_cast 2)
    (handle_info 2)
    (terminate 2)
    (code_change 3))
  (behaviour gen_server))        ; Just indicates intent

(defun start_link ()
  (gen_server:start_link
    #(local ping_pong) 'ping_pong '() '()))

;; Client API

(defun ping ()
  (gen_server:call 'ping_pong 'ping))

;; Gen_server callbacks

(defrecord state
  (pings 0))

(defun init (args)
  `#(ok ,(make-state pings 0)))

(defun handle_call (req from state)
  (let* ((new-count (+ (state-pings state) 1))
         (new-state (set-state-pings state new-count)))
    `#(reply #(pong ,new-count) ,new-state)))

(defun handle_cast (msg state)
  `#(noreply ,state))

(defun handle_info (info state)
  `#(noreply ,state))

(defun terminate (reason state)
  'ok)

(defun code_change (old-vers state extra)
  `#(ok ,state))


defmodule :ping_pong do
  use Bitwise
  @behaviour :gen_server

  def start_link() do
    :gen_server.start_link({:local, :ping_pong}, :ping_pong, [], [])
  end

  def ping() do
    :gen_server.call(:ping_pong, :ping)
  end

  require Record
  Record.defrecord(:r_state, :state, pings: 0)

  def init(args_0) do
    {:ok, r_state(pings: 0)}
  end

  def handle_call(req_0, from_0, state_0) do
    new_count_0 = r_state(state_0, :pings) + 1
    new_state_0 = r_state(state_0, pings: new_count_0)
    {:reply, {:pong, new_count_0}, new_state_0}
  end

  def handle_cast(msg_0, state_0) do
    {:noreply, state_0}
  end

  def handle_info(info_0, state_0) do
    {:noreply, state_0}
  end

  def terminate(reason_0, state_0) do
    :ok
  end

  def code_change(old_vers_0, state_0, extra_0) do
    {:ok, state_0}
  end

  def unquote(:"LFE-EXPAND-EXPORTED-MACRO")(_, _, _) do
    :no
  end
end

Where The Code Gets Ugly

LFE and Elixir both support macros. Macros are expanded at compile time and are a language feature, which means that by the time I get the code to transpile, it's already expanded.

If the module you are transpiling uses macros you will transpile the macro expanded version of the code, which may be okay or not depending on the kind of code that the macro generates.

Remaining Work

The remaining work is to understand the details of variable scoping in LFE and see if it's compatible with Elixir so that I can translate it as is like I'm doing now.

If they differ I have to see if I can do some local analysis to transform it so that the resulting code behaves semantically like the original.

If you try it and have some questions let me know at @warianoguerra or in the repo's issue tracker.

How to transpile a complex Erlang project with Elixir Flavoured Erlang: erldns

A user (yes, there's another one!) of Elixir Flavoured Erlang asked why a project wasn't transpiling correctly. I went to look, and here are the notes:

Assuming you have the efe escript in your PATH, cd to a folder of your choice and do:

git clone
git clone

Create a file called erldns.conf with the following content:

   includes => ["../include/", ".", "../priv/", "../../"],
   macros => #{},
   encoding => utf8,
   output_dir => "./out/"

And then run:

efe pp erldns.conf erldns/src/*.erl

Now the context:

We clone dns_erlang because erldns includes header files from it, like here: include_lib("dns_erlang/include/dns_records.hrl")

Since we want to find files using include_lib that refer to dns_erlang, we also include ../../ (which will find any project cloned at the same level as erldns)

erldns also includes headers from the include and priv folders; paths in includes are relative to the source file, that's why both start with ../

Currently efe will silently remove parts of the code that it can't find or parse properly. That's why code that references external records or constants in header files not found in the includes list will silently be missing; in the future I may warn about that.

Elixir Flavoured Erlang: Who Transpiles the Transpiler?

When a compiler is implemented in the language it compiles, it's called bootstrapping.

What's the name for a transpiler transpiled to the target language?

Note: The recommended reading setup is to open the Inception Button and click it when appropriate

I've been working on and off on Elixir Flavoured Erlang: an Erlang to Elixir Transpiler for a while. In recent months a nice netizen called eksperimental started reporting cases where the transpiler was generating semantically incorrect Elixir code; that means the code compiled but didn't do the same thing as the Erlang version.

I started noticing a pattern in the reports and thinking: are they trying to do what I wanted to do since the beginning of this project?

As the name suggests, Elixir Flavoured Erlang is an Erlang to Elixir transpiler... written in Erlang.

The joke that started this project was that I could write Elixir without ever actually writing Elixir and the final step would be to transpile the transpiler with itself and make it work.

Since I closed all open issues (except keeping comments) I decided to give it a go.

What I needed to do to try this is:

  • Create an escript mix project and configure it accordingly

  • Transpile the project from Erlang to Elixir into the lib folder

  • Build the project

  • Transpile something with both versions

  • Diff both outputs to check they are equal

The testing strategy during development was to transpile Erlang/OTP to Elixir (otp.ex).

Transpiling it in this case should exercise most of the code paths.

All the steps are automated in the make inception target.

1st Attempt, almost...

The first attempt failed, giving me the usage message as if I was passing the wrong command line options, but I wasn't.

The problem was that Elixir passes binary strings to the escript entry point while Erlang passes list strings; it was fixed by catching the Elixir case, converting the arguments to list strings and calling main again with them.

2nd attempt, is absence of evidence evidence of absence?

After fixing that I ran it again and the diff didn't generate any output.

I wasn't sure if it worked or not, so I introduced a change in the output manually and ran the diff line again; this time it displayed the difference.

That meant the transpiled transpiler is identical to the original at least when transpiling Erlang/OTP.

Some of the recent changes

In the previous post about the project I listed the special cases and tricks I had to do to transpile OTP, here are the main changes introduced after that.

Quoting all reserved keywords when used as identifiers

Elixir reserved keywords

Just one example since all are the same except for the identifier:

'true'() -> ok.

Translates to

def unquote(:true)() do
  :ok
end

Fixed improper cons list conversion

cons([]) -> ok;
cons([1]) -> ok;
cons([1, 2]) -> ok;
cons([1 | 2]) -> ok;
cons([[1, 2], 3]) -> ok;
cons([[1, 2] | 3]) -> ok;
cons([[1 | 2] | 3]) -> ok;
cons([0, [1, 2]]) -> ok;
cons([0, [1 | 2]]) -> ok;
% equivalent to [1, 2, 3]
cons([0 | [1, 2]]) -> ok;
cons([[1, [2, [3]]]]) -> ok;
cons([[-1, 0] | [1, 2]]) -> ok.

Transpiled output (collapsed to single lines to save vertical space):

def cons([]) do # ...
def cons([1]) do # ...
def cons([1, 2]) do # ...
def cons([1 | 2]) do # ...
def cons([[1, 2], 3]) do # ...
def cons([[1, 2] | 3]) do # ...
def cons([[1 | 2] | 3]) do # ...
def cons([0, [1, 2]]) do # ...
def cons([0, [1 | 2]]) do # ...
def cons([0, 1, 2]) do # ...
def cons([[1, [2, [3]]]]) do # ...
def cons([[- 1, 0], 1, 2]) do # ...

Transpile map exact and assoc updates into map syntax, Map.put and Map.merge

In Erlang there are two ways to set a map key: exact and assoc.

Exact uses the := operator and will only work if the key already exists in the map:

1> M = #{a => 1}.
#{a => 1}

2> M#{a := 2}.
#{a => 2}

3> M#{b := 2}.
** exception error: {badkey,b}

Assoc uses the => operator and works if the key exists or if it doesn't:

1> M = #{a => 1}.
#{a => 1}

2> M#{a => 2}.
#{a => 2}

3> M#{b => 2}.
#{a => 1,b => 2}

In Elixir only the exact form has syntax, using the => operator (with : as syntactic sugar for atom keys):

iex(1)> m = %{a: 1}
%{a: 1}

iex(2)> m = %{:a => 1} # equivalent
%{a: 1}

iex(3)> %{m | a: 2}
%{a: 2}

iex(4)> %{m | b: 2}
** (KeyError) key :b not found in: %{a: 1}

iex(4)> %{m | :b => 2}
** (KeyError) key :b not found in: %{a: 1}

The easiest solution would be to transpile all cases to Map.merge/2, but the right solution is to use the most idiomatic form for each case:

  • If Erlang uses exact, transpile to Elixir map update syntax

  • If Erlang uses assoc, transpile to Map.put/3 or Map.merge/2

  • If Erlang uses both, split it and use the right syntax for each

Let's see it with examples:

put_atom() ->
        M = #{},
        M0 = M#{},
        M1 = M#{a => 1},
        M2 = M1#{a := 1},
        M3 = M#{a => 1, b => 2},
        M4 = M3#{a := 1, b => 2},
        M5 = M1#{a := 1, b := 2},
        M6 = M1#{a := 1, b := 2, c => 3, d => 4},
        {M0, M1, M2, M3, M4, M5, M6}.

put_key() ->
        M = #{},
        M1 = M#{<<"a">> => 1},
        M2 = M1#{<<"a">> := 1},
        M3 = M#{<<"a">> => 1, <<"b">> => 2},
        M4 = M3#{<<"a">> := 1, <<"b">> => 2},
        M5 = M1#{<<"a">> := 1, <<"b">> := 2},
        M6 = M1#{<<"a">> := 1, <<"b">> := 2, <<"c">> => 3, <<"d">> => 4},
        {M1, M2, M3, M4, M5, M6}.

quoted_atom_key(M) ->
        M#{'a-b' := 1}.

Compiles to:

def put_atom() do
  m = %{}
  m0 = m
  m1 = Map.put(m, :a, 1)
  m2 = %{m1 | a: 1}
  m3 = Map.merge(m, %{a: 1, b: 2})
  m4 = Map.put(%{m3 | a: 1}, :b, 2)
  m5 = %{m1 | a: 1, b: 2}
  m6 = Map.merge(%{m1 | a: 1, b: 2}, %{c: 3, d: 4})
  {m0, m1, m2, m3, m4, m5, m6}
end

def put_key() do
  m = %{}
  m1 = Map.put(m, "a", 1)
  m2 = %{m1 | "a" => 1}
  m3 = Map.merge(m, %{"a" => 1, "b" => 2})
  m4 = Map.put(%{m3 | "a" => 1}, "b", 2)
  m5 = %{m1 | "a" => 1, "b" => 2}
  m6 = Map.merge(%{m1 | "a" => 1, "b" => 2}, %{"c" => 3, "d" => 4})
  {m1, m2, m3, m4, m5, m6}
end

def quoted_atom_key(m) do
  %{m | "a-b": 1}
end

What's next

To be sure it works in all cases I would like to make it possible to translate Erlang projects and run the project tests after transpiling. If you are interested in helping, contact me on twitter @warianoguerra

Why PARC worked: Reaction against the bubblegum kind of technology from the 60s - Alan Kay

In Butler Lampson, "Personal Distributed Computing—The Alto and Ethernet Software 1:25:42" Alan Kay says:

The most important thing I got from Butler and Chuck's talk today is that it's not enough to have an idea and it's not enough to actually go out and build it.

One of the things that Butler especially and Bob Taylor had decided was to be conservative.

PARC is always talked about as the forefront of technology and everything else, but in fact part of what was done at PARC I think was a reaction against the bubblegum kind of technology that we all used to build in the 60s, which could barely work for the single person who had designed and built it. Butler and Bob and Chuck did not want to have that happen again.

So to me there were two interesting streams at PARC. One was a kind of humbleness, which I'm sure no Xerox executive will ever recognise that word as applied to us, but in fact it was saying "we can't do everything, we have to hold some limits in order to be able to replicate these systems". And then there's the incredible arrogance on the other side of saying BUT we have to be able to build every piece of hardware and software in order to control our own destiny.

So you have these two things, the conservative attitude and then pulling out all the stops once the decision to replicate the systems was made. I think that, to me, sums up why PARC worked.

The other talk Alan Kay mentions is Chuck Thacker, "Personal Distributed Computing—The Alto and Ethernet Hardware"

A Playlist with more talks from the conference: ACM Conference on the History of Personal Workstations

Elixir Flavoured Erlang: an Erlang to Elixir Transpiler

Last year I was invited to ElixirConf Latin America in Colombia to give a talk. I proposed to also give a tutorial about Riak Core and they said it should be in Elixir, so I started looking into Elixir to translate my Riak Core material to it.

At the same time I was learning about pretty printers, and I decided to use that as a joke in my talk and as a way to learn Elixir: implementing a pretty printer for Elixir from the Erlang Abstract Syntax Tree.

The joke didn't work, but it resulted in the prototype of Elixir Flavoured Erlang.

This year I was invited to give another talk about languages on the Erlang virtual machine at Code BEAM Brasil 2020 and I thought it would be a good idea to continue working on it and maybe announce it at the talk.

To measure progress I built some scripts that would transpile the Erlang standard library to Elixir and then try compiling the resulting modules with the Elixir compiler, I would pick one compiler error, fix it and try again.

With this short feedback loop and a counter that told me how many modules compiled successfully, it was just a matter of finding errors and fixing them. At the beginning each fix would remove a lot of compiler errors and sometimes surface new ones; after a while each error was a weird corner case and progress slowed.

Some days before the talk I managed to transpile all of Erlang/OTP and 91% of the Elixir translations compiled successfully.

The result is of course Elixir Flavoured Erlang, but as a side effect I have Erlang/OTP in Elixir, so I decided to publish it too.

Enter otp.ex: Erlang/OTP transpiled to Elixir.

The objective of this repository is to allow Elixir programmers to read Erlang code for projects they use, most of the code compiles but I can't ensure that it behaves identically to the original source.

While writing the readme of efe I needed some example that wasn't OTP so I decided to also transpile a widely used project on Erlang and Elixir: the Cowboy web server

The ^ match operator in Elixir

In Elixir variable bindings by default rebind to the new value; if a variable is already bound and you want to pattern match on its current value you have to add the ^ operator in front:

iex(1)> a = 1
1
iex(2)> a = 2
2
iex(3)> a
2
iex(4)> ^a = 3
** (MatchError) no match of right hand side value: 3

In Erlang variables are bound once and then always pattern match. The easy part of the translation is that I know that when a variable is bound and in match position I have to add the ^; the tricky part is that I can't add the ^ on the first binding, so I have to know which variables are already bound and where they are in match position.

For this I do a pass over the Erlang Abstract Syntax Tree and annotate each variable with whether it's already bound and whether it's in match position; the pretty printer in the second pass checks those annotations to decide whether to add the ^.

Why don't some modules compile?

Here's a list of reasons why the remaining modules don't compile after being transpiled.

For comprehensions must start with a generator

There's a weird trick in Erlang where you can generate an empty list if a condition is false, or a list with one item if it's true, by writing a list comprehension that has no generator, only a filter.

I've been told that it's an artifact of how list comprehensions used to be translated to other code in the past.

1> [ok || true].
[ok]

2> [ok || false].
[]

The fact is that it's valid Erlang and is used in some places in the standard library.

For simple cases in efe I insert a dummy generator:

for _ <- [:EFE_DUMMY_GEN], true do

for _ <- [:EFE_DUMMY_GEN], false do

For more advanced cases with many filters I have to analyze whether inserting a generator at the beginning changes the result; that's why some cases are left as is.

Erlang records don’t evaluate default expressions, Elixir defrecord does

Erlang records are not part of the language, they are expanded by the Erlang Preprocessor.

What the preprocessor does is insert the default values "as is" in the places where a record is created; this means that if a default is a function call, it won't be evaluated at definition time, there will be a function call for each instantiation of the record.

Elixir has a module to deal with Erlang records using macros; the thing is that Elixir evaluates the defaults when they are defined, which means that if the call doesn't return a constant the behavior won't be the same. If the call returns a value that can't be represented as a constant in the code, it won't compile either.

Another issue: if the function being called is declared after the record is defined, it will fail with an error saying that the function doesn't exist.

There could be a solution here by creating another module that tries to emulate the way default values behave in Erlang (they behave as "quoted" expressions), but I don't know enough about Elixir macros to do it.
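The distinction is subtle, so here's the same pitfall sketched in JavaScript (illustrative names only, not efe output): Erlang keeps the default expression "quoted" and re-runs it on every record instantiation, while Elixir-style defaults are evaluated once at definition time:

```javascript
// A default expression with a visible side effect, so we can tell when it runs.
let calls = 0;
const freshDefault = () => ++calls;

// Erlang-style record: the default is re-evaluated per instantiation.
const makeErlangStyle = () => ({id: freshDefault()});

// Elixir-style record: the default is evaluated once, at "definition" time.
const evaluatedOnce = freshDefault();
const makeElixirStyle = () => ({id: evaluatedOnce});

const a = makeErlangStyle(), b = makeErlangStyle();
const c = makeElixirStyle(), d = makeElixirStyle();
console.log(a.id !== b.id); // true: each record ran the default again
console.log(c.id === d.id); // true: both share the value computed at definition
```

If the default were a pure constant the two strategies would agree, which is why the transpiled code only misbehaves when defaults have effects or non-constant results.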

Named lambda functions

In Erlang lambda functions can have names to allow recursion; in Elixir this is not supported. There's no way to automatically change the code in a local/simple way, but it's easy to change by hand, so I decided to transpile it as if Elixir supported named lambda functions and let it be a compiler error.

Expressions in bitstrings

In Elixir, size in a bitstring expects an integer or a variable as argument; Erlang allows any expression there. It's easy to fix by hand by extracting the expression into a variable and putting the variable there. It could be automated, but for now I just leave the expression in place and get a compiler error.

Variable defined inside scope and used outside

In Erlang variables introduced within if, case or receive expressions are implicitly exported from their bodies, which means this works:

case 1 of A -> ok end, A.
% or this
case 1 of 1 -> B = 2 end, B.

Elixir has stricter scoping rules and doesn't allow that. This is highly discouraged in Erlang but used in some places in the standard library.
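The usual hand fix in Elixir is to bind the value returned by the whole expression instead of relying on a variable leaking out of it:

```elixir
# Instead of Erlang's "case 1 of 1 -> B = 2 end, B."
# bind the result of the case expression itself:
b =
  case 1 do
    1 -> 2
  end

2 = b
```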

Corner cases all the way down

Here's a list of small differences that I had to fix.

Erlang vs Elixir imports

In Erlang you can import functions from a module in multiple imports and they "add up".

In Elixir later imports for the same module "shadow" previous ones.

The solution is to group imports for the same module and emit only one import per module.

In Erlang you can import a function more than once; in Elixir that's a compiler error. The solution is to deduplicate function imports.
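For example, grouping and deduplication sketched by hand (the Erlang side is shown in comments):

```elixir
# Erlang:
#   -import(lists, [map/2]).
#   -import(lists, [filter/2, map/2]).
# Grouped into a single deduplicated Elixir import:
import :lists, only: [map: 2, filter: 2]

[2, 4] = map(fn x -> x * 2 end, [1, 2])
[2] = filter(fn x -> rem(x, 2) == 0 end, [1, 2])
```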

Auto imported functions

Erlang "auto imports" many functions from the erlang module, while Elixir auto imports just a few. The solution is to detect local calls to auto imported functions and prefix them with the :erlang module.
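For example, atom_to_list/1 is auto imported in Erlang but not available in Elixir's Kernel, so the local call gets prefixed:

```elixir
# Erlang: atom_to_list(hello) works with no module prefix.
# The transpiled call gets the :erlang module prefix:
'hello' = :erlang.atom_to_list(:hello)
```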

Lowercase variables that become keywords

Erlang variables start with uppercase and Elixir variables with lowercase. This means Erlang variable names can't clash with language keywords, but their lowercase versions can, so I have to check whether a variable name is a keyword and add a suffix to it.
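For example, an Erlang variable named End would become end, a keyword; a sketch of the renaming (function and suffix here are illustrative):

```elixir
# Erlang: span(Start, End) -> End - Start.
defmodule Spans do
  # "end" is a keyword in Elixir, so the variable gets a suffix
  def span(start, end_) do
    end_ - start
  end
end

4 = Spans.span(1, 5)
```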

Local calls and Kernel autoimports

Elixir auto imports functions from the Kernel module that may clash with local functions in the current Erlang module. For this case I have to detect Kernel functions and macros that are also local functions and add an expression to avoid auto importing them, like this:

import Kernel, except: [to_string: 1, send: 2]

Private on_load function

Erlang allows defining a private function to be run when the module loads; Elixir only allowed public functions. This has been reported and fixed in Elixir but not yet released.

Function capture/calls with dynamic values

In Erlang the syntax to pass a reference to a function is uniform for constants and variables:

fun calls/3
fun cornercases:calls/3
fun M:F/Arity
fun M:calls/3
fun M:F/3
fun cornercases:F/Arity
fun cornercases:calls/Arity
fun M:calls/Arity

In Elixir I had to special case when any part is a variable.

Function.capture(m, f, arity)
Function.capture(m, :calls, 3)
Function.capture(m, f, 3)
Function.capture(:cornercases, f, arity)
Function.capture(:cornercases, :calls, arity)
Function.capture(m, :calls, arity)

Something similar happens with function calls:

M = erlang
F = max
M:max(1, 2)
M:F(1, 2)
erlang:F(1, 2)
erlang:max(1, 2)
max(1, 2)


m = :erlang
f = :max
m.max(1, 2)
apply(m, f, [1, 2])
apply(:erlang, f, [1, 2])
:erlang.max(1, 2)
max(1, 2)

Bitwise operators

In Erlang bitwise operators (band, bor, bxor, bsl, bsr) are builtin.

In Elixir they are macros from the Bitwise module.

The fix was easy, just use the module.
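For example, Erlang's builtin band/bor/bsl map to the Bitwise module:

```elixir
import Bitwise

# Erlang: 5 band 3, 5 bor 3, 1 bsl 4
1 = band(5, 3)
7 = bor(5, 3)
16 = bsl(1, 4)
```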

Call Expressions

In Erlang there's no extra syntax to call a function that is the result of an expression:

fun () -> ok end().

In Elixir it has to be wrapped in parenthesis and a dot added before the call:

(fn () -> :ok end).()

Weird function names

In Erlang, to declare or call functions whose names are not valid identifiers, the name has to be in single quotes:

'substring-after'() ->
    wxMenu:'Destroy'(A, B).

In Elixir the declaration is different from the call.

def unquote(:"substring-after")() do
    :wxMenu.'Destroy'(a, b)
end

When the function is a keyword in Elixir the declaration is the same but a local call must be prefixed with the module to be valid syntax:

keyword_methods() ->
    {nil(), in()}.

nil() -> nil.
in() -> in.


def keyword_methods() do
    {__MODULE__.nil(), __MODULE__.in()}
end

def unquote(:nil)() do
    nil
end

def unquote(:in)() do
    :in
end

Erlang non short circuit boolean operators

For historical reasons Erlang's boolean operators and and or do not short circuit: they evaluate both sides before evaluating the whole expression. For short circuit versions, the newer and recommended andalso and orelse operators exist, but the old versions are still used in some places.

Elixir only has short circuit versions. To solve this I replace those operators with calls to the functions in the erlang module that do the same; since function calls evaluate their arguments before the call, both sides are forced to be evaluated, which is what I need.

o_and(A, B) -> A and B.
o_or(A, B)  -> A or B.
o_xor(A, B) -> A xor B.


def o_and(a, b) do
  :erlang.and(a, b)
end

def o_or(a, b) do
  :erlang.or(a, b)
end

def o_xor(a, b) do
  :erlang.xor(a, b)
end

The problem is in guards, where only a subset of functions can be used. In Erlang, since and and or are operators, they are allowed, but in Elixir the equivalent function calls are not. Only in this case I replace the non short circuit versions with the short circuit ones, since guards are expected to be side effect free and the evaluation of a side effect free expression on the right side should not change the result of the guard.

But there's a corner case in the corner case: a guard evaluates to false if it throws, so if the right side throws the semantics will differ. But well, I tried hard enough:

2> if true orelse 1/0 -> ok end.
ok
3> if true or 1/0 -> ok end.
** exception error: no true branch found when evaluating an if expression

6> if (false andalso 1/0) == false -> ok end.
ok
7> if (false and 1/0) == false -> ok end.
** exception error: no true branch found when evaluating an if expression

Valid character syntax

The character type is a syntax convenience for writing numbers. Erlang supports more character ranges than Elixir, so it was a matter of figuring out the valid ranges and generating plain numbers for the ones that were not allowed:

chars() ->
    [$\s, $\t, $\r, $\n, $\f, $\e, $\d, $\b, $\v, $\^G, $\^C].

printable_chars() ->
    [$a, $z, $A, $Z, $0, $9, $\000, $\377, $\\, $\n].


def chars() do
    [?\s, ?\t, ?\r, ?\n, ?\f, ?\e, ?\d, ?\b, ?\v, ?\a, 3]
end

def printable_chars() do
    [?a, ?z, ?A, ?Z, ?0, ?9, ?\0, 255, ?\\, ?\n]
end

Escape interpolation

Erlang doesn't support string interpolation but Elixir does, so any Erlang code that looks like string interpolation must be escaped, because it isn't:

["#{", '#{', "'p'"].


['\#{', :"\#{", '\'p\'']

Did you know that in Elixir you can interpolate in atoms?

iex(1)> a = "an_atom"
"an_atom"

iex(2)> :"#{a}"
:an_atom

Constant expressions in match position

Erlang allows expressions that evaluate to a constant in match position; Elixir doesn't, so I had to implement a small evaluator that runs before translating expressions.

match(1 bsl 32 - 1) -> ok.


def match(4294967295) do
    :ok
end

catch expression

Erlang has a catch expression which Elixir does not. Luckily, since in Elixir everything is an expression, I can expand it to a try/catch expression; the only downside is the extra verbosity.
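A sketch of the expansion, assuming the usual mapping of Erlang's catch semantics onto Elixir's try (the exact code efe emits may differ):

```elixir
# Erlang: Res = (catch 1/0).
# catch returns {'EXIT', {badarith, Stacktrace}} instead of raising
res =
  try do
    1 / 0
  catch
    :error, reason -> {:EXIT, {reason, __STACKTRACE__}}
    :exit, reason -> {:EXIT, reason}
    :throw, value -> value
  end

{:EXIT, {:badarith, _stacktrace}} = res
```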

Erlang/OTP as a fuzzer for the Elixir compiler

As I said, I tested efe by transpiling the Erlang standard library and trying to compile the result with the Elixir compiler.

The thing is that OTP has a lot of code, some of it really old and some of it using Erlang in weird ways. That meant that in some cases I would crash the Elixir compiler, or I would get an unexpected error that may be undefined behavior.

I reported the ones that made sense and the Elixir team had the patience to handle them and fix them really fast, here's a list:

Future of Coding Weekly 2020/08 Week 5

For some reason tinyletter decided to not publish the newsletter in the archive so I'm posting it here.

If you want to subscribe to the newsletter, it's here:

Subtext 1 Demo, Layered Text, VR Visual Scripting, Automated Game Design, Dynamic Sketching in AR, Tiny Structure Editors for Low, Low Prices & more

Two Minute Week

🎥 This Week in Instadeq: Event Triggers via Mariano Guerra

🧵 conversation

This week I added support for Event Triggers, a way to react to changes and do things on other entities

Share Our Work

💬 Chris Rabl

🧵 conversation

I've been doing more and more writing lately, and have been wishing for a tool that allows me to write my outlines, drafts, and final compositions in the same editor window with the ability to toggle any of those "layers" on and off at will, merge them, copy selections to new layers, etc. It would work sort of like Photoshop but for writing... I have a feeling these principles could also work as an IDE extension (imagine being able to hide the "code" layer and show only the "comments" layer, or the "documentation" layer). Curious to hear your thoughts, or whether anyone else is working on something similar?

🎥 layered text

📝 Using Gizmos via Scott Anderson

🧵 conversation

A year ago I was working on VR Visual Scripting in Facebook Horizon. They've recently started to share some more information leading up to Facebook Connect. I figured the scripting system would either be largely the same, or entirely rewritten since I left. It seems like it's mostly intact based on the documentation shared

🎥 Create and Share Interactive Visualizations from SpaceX's JSON API and 🎥 Create and Share Visualizations of Premier League Matches from a CSV via Mariano Guerra

🧵 conversation

📝 root via Dennis Heihoff

🧵 conversation

What started with me reverse engineering notion became a data-first recursive UI resolver I called root.

Here's how it differs from most common technologies today:

  • Approaches to UI development like react.js + graphQL require UI components to request data in a shape that satisfies the UI tree. This means the shape of the data is determined by the UI tree. Root takes an inverse approach where the UI tree is determined by the shape of the data.
  • A major benefit of this approach is that the UI layout is thus entirely determined by data, data that can be traversed, transformed and stored in arbitrary ways and by arbitrary means.
  • This is powerful for unstructured, user-determined, block-based UI's like rich documents (think Roam Research, Notion etc.) enabling queries and functions that, based on users' demands, derive the optimal presentation of a document.

It packs a few more punches. The best example is probably this (in about 200 LoC).

Thinking Together

📝 model of computation via Nick Smith

🧵 conversation

Why isn't any kind of logic programming considered a model of computation? Why do we talk about Turing Machines and recursive functions as fundamental, but not inference? I can't find any resources discussing this disparity. It's like there are two classes of academics that don't talk to each other. Am I missing something?

📝 Motoko, a programming language for building directly on the internet - Stack Overflow Blog via Mike Cann

🧵 conversation also discussed here 🧵 conversation

Anyone played with Motoko yet? looks really interesting, kind of reminds me of Unison in some ways

📝 via Cameron Yick

🧵 conversation

Pondering: how important is it for a making environment to be made from the same medium you’re making with if your main goal isn’t making interfaces? The Jupyter ecosystem has come quite far despite relatively few people using it to write JS:

🐦 JD Long: Observation from Jupyter Land: The Jupyter ecosystem has a big headwind because the initial target audience for the tool (Julia, Python, R) has a small overlap with the tool/skills needed to expand the ecosystem, namely Javascript.

That's not a criticism, just an observation.

💬 Hamish Todd

🧵 conversation

In the thing I am making, you can't have a variable without choosing a specific example value for that variable. This is surely something that's been discussed here before since Bret does it in Inventing On Principle. What do folks think of it?


📝 Tiny Structure Editors for Low, Low Prices! via Jack Rusher

🧵 conversation

Fun paper from 2020 IEEE Symposium on Visual Languages and Human-Centric Computing

🎥 Subtext 1 demo (from 2005) via Shalabh Chaturvedi

🧵 conversation

Jonathan Edwards recently uploaded the Subtext 1 demo (from 2005).

It has a lot of interesting takes - and most (all?) that I agree with. E.g. edit time name resolution, debugging by inspection, a concrete model of time, inline expansion of function calls, and more.

📝 It's the programming environment, not the programming language via Ope

🧵 conversation

“But while programming languages are academically interesting, I think we more desperately need innovation in programming environments.

The programming environment isn’t a single component of our workflow, but the total sum enabled by the tools working together harmoniously. The environment contains the programming language, but also includes the debugging experience, dependency management, how we communicate with other developers (both within source code and without), how we trace and observe code in production, and everything else in the process of designing APIs to recovering from failure.

The story of programming language evolution is also a story of rising ideas in what capabilities good programming environments should grant developers. Many languages came to popularity not necessarily based on their merits as great languages, but because they were paired with some new and powerful capability to understand software and write better implementations of it.”

🎥 Getting Started in Automated Game Design via Scott Anderson

🧵 conversation

Mike Cook has done a lot of research into automated game generation. He recently released this video which is both a tutorial and an overview of the field.

📝 Gatemaker: Christopher Alexander's dialogue with the computer industry via Stefan Lesser

🧵 conversation

Don’t read this for the application “Gatemaker”. Read this for a fascinating outsider’s view on the software industry, systems design, and end-user programming.

🎥 RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching via Jack Rusher

🧵 conversation

I really like this new AR work from Ryo Suzuki, et al.