

Elixir Flavoured Erlang: Who Transpiles the Transpiler?

When a compiler is implemented in the language it compiles, it's called bootstrapping.

What's the name for a transpiler transpiled to the target language?

Note: The recommended reading setup is to open the Inception Button and click it when appropriate


I've been working on and off on Elixir Flavoured Erlang: an Erlang to Elixir Transpiler for a while. In recent months a nice netizen called eksperimental started reporting cases where the transpiler was generating semantically incorrect Elixir code, that is, code that compiled but didn't do the same thing as the Erlang version.

I started noticing a pattern in the reports and thinking: Is he/she trying to do what I wanted to do since the beginning of this project?

As the name suggests, Elixir Flavoured Erlang is an Erlang to Elixir transpiler... written in Erlang.

The joke that started this project was that I could write Elixir without ever actually writing Elixir and the final step would be to transpile the transpiler with itself and make it work.

Since I had closed all open issues (except the one about keeping comments), I decided to give it a go.

What I needed to do to try this is:

  • Create an escript mix project and configure it accordingly

  • Transpile the project from Erlang to Elixir into the lib folder

  • Build the project

  • Transpile something with both versions

  • Diff both outputs to check they are equal

The testing strategy during development was to transpile Erlang/OTP to Elixir (otp.ex).

Transpiling it in this case should exercise most of the code paths.

All the steps are automated in the make inception target.

1st Attempt, almost...

The first attempt failed, printing the usage message as if I were passing the wrong command line options, but I wasn't.

The problem was that Elixir passes binary strings to the escript entry point while Erlang passes list strings. It was fixed by catching the Elixir case, converting the arguments to list strings and calling main again with them.
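A rough sketch of the idea (my own code with made-up names, not efe's actual implementation):

defmodule EfeEscript do
  # built with mix escript, main/1 receives a list of binaries; the code
  # transpiled from Erlang expects charlists ("list strings"), so the
  # arguments are normalized and main/1 is called again
  def main([arg | _] = args) when is_binary(arg) do
    args
    |> Enum.map(&String.to_charlist/1)
    |> main()
  end

  def main(args) do
    run(args)
  end

  # placeholder for the real work
  defp run(_args), do: :ok
end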

2nd attempt, is absence of evidence evidence of absence?

After fixing that I ran it again and the diff didn't generate any output.

I wasn't sure if it had worked or not, so I introduced a change in the output manually and ran the diff line again; this time it displayed the difference.

That meant the transpiled transpiler is identical to the original, at least when transpiling Erlang/OTP.

Some of the recent changes

In the previous post about the project I listed the special cases and tricks I had to use to transpile OTP; here are the main changes introduced after that.

Quoting all reserved keywords when used as identifiers

Elixir reserved keywords

Just one example since all are the same except for the identifier:

'true'() -> ok.

Translates to

def unquote(:true)() do
  :ok
end

Fixed improper cons list conversion

cons([]) -> ok;
cons([1]) -> ok;
cons([1, 2]) -> ok;
cons([1 | 2]) -> ok;
cons([[1, 2], 3]) -> ok;
cons([[1, 2] | 3]) -> ok;
cons([[1 | 2] | 3]) -> ok;
cons([0, [1, 2]]) -> ok;
cons([0, [1 | 2]]) -> ok;
% equivalent to [0, 1, 2]
cons([0 | [1, 2]]) -> ok;
cons([[1, [2, [3]]]]) -> ok;
cons([[-1, 0] | [1, 2]]) -> ok.

Translates to (bodies elided and each def shown on a single line to save vertical space):

def cons([]) do # ...
def cons([1]) do # ...
def cons([1, 2]) do # ...
def cons([1 | 2]) do # ...
def cons([[1, 2], 3]) do # ...
def cons([[1, 2] | 3]) do # ...
def cons([[1 | 2] | 3]) do # ...
def cons([0, [1, 2]]) do # ...
def cons([0, [1 | 2]]) do # ...
def cons([0, 1, 2]) do # ...
def cons([[1, [2, [3]]]]) do # ...
def cons([[- 1, 0], 1, 2]) do # ...

Transpile map exact and assoc updates into map syntax, Map.put and Map.merge

In Erlang there are two ways to set a map key: exact and assoc.

Exact uses the := operator and will only work if the key already exists in the map:

1> M = #{a => 1}.
#{a => 1}

2> M#{a := 2}.
#{a => 2}

3> M#{b := 2}.
** exception error: {badkey,b}

Assoc uses the => operator and works if the key exists or if it doesn't:

1> M = #{a => 1}.
#{a => 1}

2> M#{a => 2}.
#{a => 2}

3> M#{b => 2}.
#{a => 1,b => 2}

In Elixir only the exact update has dedicated syntax, %{map | key => value} (with key: value as syntactic sugar for atom keys):

iex(1)> m = %{a: 1}
%{a: 1}

iex(2)> m = %{:a => 1} # equivalent
%{a: 1}

iex(3)> %{m | a: 2}
%{a: 2}

iex(4)> %{m | b: 2}
** (KeyError) key :b not found in: %{a: 1}

iex(4)> %{m | :b => 2}
** (KeyError) key :b not found in: %{a: 1}

The easiest solution would be to transpile all cases to Map.merge/2.

But the right solution is to use the most idiomatic construct for each case:

  • If Erlang uses exact, transpile to Elixir map update syntax

  • If Erlang uses assoc, transpile to Map.put (a single key) or Map.merge (several keys)

  • If Erlang uses both, split it and use the right syntax for each part

Let's see it with examples:

put_atom() ->
        M = #{},
        M0 = M#{},
        M1 = M#{a => 1},
        M2 = M1#{a := 1},
        M3 = M#{a => 1, b => 2},
        M4 = M3#{a := 1, b => 2},
        M5 = M1#{a := 1, b := 2},
        M6 = M1#{a := 1, b := 2, c => 3, d => 4},
        {M0, M1, M2, M3, M4, M5, M6}.

put_key() ->
        M = #{},
        M1 = M#{<<"a">> => 1},
        M2 = M1#{<<"a">> := 1},
        M3 = M#{<<"a">> => 1, <<"b">> => 2},
        M4 = M3#{<<"a">> := 1, <<"b">> => 2},
        M5 = M1#{<<"a">> := 1, <<"b">> := 2},
        M6 = M1#{<<"a">> := 1, <<"b">> := 2, <<"c">> => 3, <<"d">> => 4},
        {M1, M2, M3, M4, M5, M6}.

quoted_atom_key(M) ->
        M#{'a-b' := 1}.

Compiles to:

def put_atom() do
  m = %{}
  m0 = m
  m1 = Map.put(m, :a, 1)
  m2 = %{m1 | a: 1}
  m3 = Map.merge(m, %{a: 1, b: 2})
  m4 = Map.put(%{m3 | a: 1}, :b, 2)
  m5 = %{m1 | a: 1, b: 2}
  m6 = Map.merge(%{m1 | a: 1, b: 2}, %{c: 3, d: 4})
  {m0, m1, m2, m3, m4, m5, m6}
end

def put_key() do
  m = %{}
  m1 = Map.put(m, "a", 1)
  m2 = %{m1 | "a" => 1}
  m3 = Map.merge(m, %{"a" => 1, "b" => 2})
  m4 = Map.put(%{m3 | "a" => 1}, "b", 2)
  m5 = %{m1 | "a" => 1, "b" => 2}
  m6 = Map.merge(%{m1 | "a" => 1, "b" => 2}, %{"c" => 3, "d" => 4})
  {m1, m2, m3, m4, m5, m6}
end

def quoted_atom_key(m) do
  %{m | "a-b": 1}
end

What's next

To be sure it works in all cases I would like to make it possible to translate Erlang projects and run the project's tests after transpiling. If you are interested in helping, contact me on twitter @warianoguerra

Why PARC worked: Reaction against the bubblegum kind of technology from the 60s - Alan Kay

In Butler Lampson's talk "Personal Distributed Computing—The Alto and Ethernet Software" (at 1:25:42), Alan Kay says:

The most important thing I got from Butler and Chuck's talk today is that it's not enough to have an idea and it's not enough to actually go out and build it.

One of the things that Butler especially and Bob Taylor had decided was to be conservative.

PARC is always talked about as the forefront of technology and everything else, but in fact part of what was done at PARC I think was a reaction against the bubblegum kind of technology that we all used to build in the 60s, that could barely work for the single person who had designed and built it. Butler and Bob and Chuck did not want to have that happen again.

So we have, to me, two interesting streams at PARC. One was a kind of humbleness, which I'm sure no Xerox executive will ever recognise that word as applied to us, but in fact it was saying "we can't do everything, we have to hold some limits in order to be able to replicate these systems", and then there's the incredible arrogance on the other side of saying BUT we have to be able to build every piece of hardware and software in order to control our own destiny.

So you have these two things, the conservative attitude and then pulling out all the stops once the decision that you had to replicate the systems was made. I think that, to me, sums up why PARC worked.

The other talk Alan Kay mentions is Chuck Thacker, "Personal Distributed Computing—The Alto and Ethernet Hardware"

A Playlist with more talks from the conference: ACM Conference on the History of Personal Workstations

Elixir Flavoured Erlang: an Erlang to Elixir Transpiler

Last year I was invited to ElixirConf Latin America in Colombia to give a talk. I proposed to also give a tutorial about Riak Core and they said it should be in Elixir, so I started looking into Elixir to translate my Riak Core material to it.

At the same time I was learning about pretty printers, and I decided to use them as a joke in my talk and as a way to learn Elixir, by implementing a pretty printer for Elixir from the Erlang Abstract Syntax Tree.

The joke didn't work, but it resulted in the prototype of Elixir Flavoured Erlang.

This year I was invited to give another talk about languages on the Erlang virtual machine at Code BEAM Brasil 2020 and I thought it would be a good idea to continue working on it and maybe announce it at the talk.

To measure progress I built some scripts that would transpile the Erlang standard library to Elixir and then try to compile the resulting modules with the Elixir compiler. I would pick one compiler error, fix it and try again.

With this short feedback loop and a counter that told me how many modules compiled successfully, it was just a matter of finding errors and fixing them. At the beginning each fix would remove a lot of compiler errors and sometimes surface new ones; after a while each error was a weird corner case and progress slowed.

Some days before the talk I managed to transpile all of Erlang/OTP and 91% of the Elixir translations compiled successfully.

The result is of course Elixir Flavoured Erlang, but as a side effect I have Erlang/OTP in Elixir, so I decided to publish it too.

Enter otp.ex: Erlang/OTP transpiled to Elixir.

The objective of this repository is to allow Elixir programmers to read Erlang code from projects they use; most of the code compiles, but I can't ensure that it behaves identically to the original source.

While writing the readme of efe I needed some example that wasn't OTP, so I decided to also transpile a project widely used in Erlang and Elixir: the Cowboy web server.

The ^ match operator in Elixir

In Elixir variables rebind to the new value by default; if a variable is already bound and you want to pattern match on its current value you have to add the ^ operator in front:

iex(1)> a = 1
1
iex(2)> a = 2
2
iex(3)> a
2
iex(4)> ^a = 3
** (MatchError) no match of right hand side value: 3

In Erlang variables are bound once and after that they always pattern match. The easy part of the translation is that when a variable is already bound and in match position I know I have to add the ^; the tricky part is that I can't add the ^ on the first binding, so I need to know which variables are bound and which are in match position.

For this I do a pass over the Erlang Abstract Syntax Tree and annotate each variable with whether it's already bound and whether it's in match position; the pretty printer in the second pass checks those annotations to decide if it has to add the ^ or not.
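A minimal example of my own (not taken from the post) showing why the annotations are needed:

defmodule PinExample do
  # Erlang:
  #   check(Expected, Value) ->
  #       case Value of
  #           Expected -> match;
  #           _ -> nomatch
  #       end.
  #
  # expected is already bound when the case pattern is reached, so the
  # generated Elixir has to pin it; without ^ the pattern would rebind
  # expected and always match
  def check(expected, value) do
    case value do
      ^expected -> :match
      _ -> :nomatch
    end
  end
end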

Why some modules don't compile

Here's a list of reasons why the remaining modules don't compile after being transpiled.

For comprehensions must start with a generator

There's a weird trick in Erlang where you can generate an empty list if a condition is false, or a list with one item if it is true, by writing a list comprehension that has no generator but has a filter.

I've been told that it's an artifact of how list comprehensions used to be translated to other code in the past.

1> [ok || true].
[ok]

2> [ok || false].
[]

The fact is that it's valid Erlang and is used in some places in the standard library.

For simple cases in efe I insert a dummy generator:

for _ <- [:EFE_DUMMY_GEN], true do
    :ok
end

for _ <- [:EFE_DUMMY_GEN], false do
    :ok
end

For more advanced cases with many filters I would have to check that inserting a generator at the beginning doesn't change the result; that's why some cases are left as is.

Erlang records don't evaluate default expressions, Elixir defrecord does

Erlang records are not part of the language, they are expanded by the Erlang Preprocessor.

What the preprocessor does is insert the default values "as is" in the places where a record is created. This means that if a default is a function call it won't be evaluated when the record is defined; there will be a function call for each instantiation of the record.

Elixir has a module to deal with Erlang records using macros. The thing is that Elixir evaluates the defaults when the record is defined, so if the call doesn't return a constant the behavior won't be the same. If the call returns a value that can't be represented as a constant in the code, it won't compile either.

Another issue is that if the function being called is declared after the record is defined, it will fail with an error saying that the function doesn't exist.

There could be a solution here: another module that emulates the way default values behave in Erlang (they behave as "quoted" expressions), but I don't know enough about Elixir macros to know how to do it.
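A minimal illustration of the Elixir side (my own example, not from the post):

defmodule RecordDefaults do
  require Record

  # the default expression runs once, right here, when the record is
  # defined, and its result must be representable as a constant; the Erlang
  # preprocessor would instead re-insert the call at every place the record
  # is constructed and evaluate it there
  Record.defrecord(:event, created_at: :erlang.system_time(), tags: [])
end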

Named lambda functions

In Erlang lambda functions can have names to allow recursion; in Elixir this is not supported. There's no way to automatically change the code in a local/simple way, but it's easy to change by hand, so I decided to transpile it as if Elixir supported named lambda functions and accept the compiler error.
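One way to adapt such code by hand (my own sketch, not what efe emits) is to pass the anonymous function to itself:

# Erlang allows: fun Fact(0) -> 1; Fact(N) -> N * Fact(N - 1) end
fact = fn
  _me, 0 -> 1
  me, n -> n * me.(me, n - 1)
end

fact.(fact, 5)
# => 120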

Expressions in bitstrings

In Elixir the size of a bitstring segment must be an integer or a variable; Erlang allows any expression there. It's easy to fix by hand by extracting the expression into a variable and using the variable as the size, and automating that should be doable, but for now I just leave the expression in place and get a compiler error.
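A sketch of the manual fix (my own example):

defmodule BitstringSize do
  # Elixir only accepts an integer or a variable as the segment size, so the
  # size expression is computed into a variable first and then used in the
  # pattern
  def split(bin, header_size) do
    body_size = byte_size(bin) - header_size
    <<header::binary-size(header_size), body::binary-size(body_size)>> = bin
    {header, body}
  end
end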

Variable defined inside scope and used outside

In Erlang variables introduced within if, case or receive expressions are implicitly exported from their bodies, which means this works:

case 1 of A -> ok end, A.
% or this
case 1 of 1 -> B = 2 end, B.

Elixir has stricter scoping rules and that is not allowed. This style is highly discouraged in Erlang, but it's used in some places in the standard library.

Corner cases all the way down

Here's a list of small differences that I had to fix.

Erlang vs Elixir imports

In Erlang you can import functions from a module in multiple imports and they "add up".

In Elixir later imports for the same module "shadow" previous ones.

The solution is to group imports for the same module and emit only one import per module.

In Erlang you can also import the same function more than once; in Elixir that's a compiler error, so the solution is to deduplicate function imports as well.
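A sketch of the kind of output described above (my own example, the exact shape efe emits may differ):

defmodule ImportGrouping do
  # Erlang:
  #   -import(lists, [map/2]).
  #   -import(lists, [map/2, filter/2]).
  # in Erlang the two declarations add up; in Elixir a later import of the
  # same module shadows the earlier one, so everything ends up grouped and
  # deduplicated into a single import per module
  import :lists, only: [map: 2, filter: 2]

  def evens(list), do: filter(fn x -> rem(x, 2) == 0 end, list)
end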

Auto imported functions

Erlang "auto imports" many functions from the erlang module, Elixir auto imports just a few, the solution is to detect local calls to auto imported functions and prefix them with the :erlang module.

Lowercase variables that become keywords

Erlang variables start with uppercase and Elixir variables with lowercase, which means Erlang variable names can't clash with language keywords but their lowercase versions can; that's why I have to check whether a lowercased variable is a keyword and add a suffix to it.

Local calls and Kernel autoimports

Elixir auto imports functions from the Kernel module that may clash with local functions in the Erlang module being transpiled. For this case I detect Kernel functions and macros that are also local functions and add an expression to avoid auto importing them, like this:

import Kernel, except: [to_string: 1, send: 2]

Private on_load function

Erlang allows defining a private function to be run when the module loads; Elixir only allows public functions there. This has been reported and fixed in Elixir, but the fix is not yet released.

Function capture/calls with dynamic values

In Erlang the syntax to pass a reference to a function is uniform for constants and variables:

fun calls/3
fun cornercases:calls/3
fun M:F/Arity
fun M:calls/3
fun M:F/3
fun cornercases:F/Arity
fun cornercases:calls/Arity
fun M:calls/Arity

In Elixir I had to special case the forms where any part is a variable:

&calls/3
&:cornercases.calls/3
Function.capture(m, f, arity)
Function.capture(m, :calls, 3)
Function.capture(m, f, 3)
Function.capture(:cornercases, f, arity)
Function.capture(:cornercases, :calls, arity)
Function.capture(m, :calls, arity)

Something similar happens with function calls:

M = erlang
F = max
M:max(1, 2)
M:F(1, 2)
erlang:F(1, 2)
erlang:max(1, 2)
max(1, 2)

vs

m = :erlang
f = :max
m.max(1, 2)
apply(m, f, [1, 2])
apply(:erlang, f, [1, 2])
:erlang.max(1, 2)
max(1, 2)

Binary operators

In Erlang binary (bitwise) operators like band, bor, bxor, bsl and bsr are built in.

In Elixir they are macros from the Bitwise module.

The fix was easy: just use the module.
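For example (my own sketch; whether efe imports Bitwise or emits fully qualified calls is not shown here):

defmodule BitwiseOps do
  import Bitwise

  # Erlang: mask(X, Y) -> (X band Y) bsl 2.
  def mask(x, y) do
    bsl(band(x, y), 2)
  end
end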

Call Expressions

In Erlang there's no extra syntax to call a function that is the result of an expression:

fun () -> ok end().
% or
(return_fn())().

In Elixir it has to be wrapped in parentheses and a dot added before the call:

(fn () -> :ok end).()
# or
(return_fn()).()

Weird function names

In Erlang, to declare or call functions whose names are not valid identifiers, the name has to be in single quotes:

'substring-after'() ->
    wxMenu:'Destroy'(A, B).

In Elixir the declaration is different from the call.

def unquote(:"substring-after")() do
    :wxMenu.'Destroy'(a, b)
end

When the function name is a keyword in Elixir the declaration is the same, but a local call must be prefixed with the module to be valid syntax:

keyword_methods() ->
    {nil(), in()}.

nil() -> nil.
in() -> in.

vs

def keyword_methods() do
    {__MODULE__.nil(), __MODULE__.in()}
end

def unquote(:nil)() do
    nil
end

def unquote(:in)() do
    :in
end

Erlang non short circuit boolean operators

For historical reasons Erlang's boolean operators and and or do not short circuit: they evaluate both sides before applying the operator. The newer and recommended andalso and orelse operators are the short circuit versions, but the old ones are still used in some places.

Elixir only has the short circuit versions. To solve this I replace uses of those operators with the equivalent functions in the erlang module; since I need to force the evaluation of both sides, and function calls evaluate their arguments before the call, this does what I need.

o_and(A, B) -> A and B.
o_or(A, B)  -> A or B.
o_xor(A, B) -> A xor B.

vs

def o_and(a, b) do
  :erlang.and(a, b)
end

def o_or(a, b) do
  :erlang.or(a, b)
end

def o_xor(a, b) do
  :erlang.xor(a, b)
end

The problem is in guards, where only a subset of functions can be used. In Erlang, since and and or are operators, they are allowed; in Elixir the function calls are not. Only in this case I replace the non short circuit versions with the short circuit ones, since guards are expected to be side effect free and evaluating a side effect free expression on the right side should not change the result of the guard.

But there's a corner case in the corner case: a guard evaluates to false if it throws, so if the right side throws the semantics will differ. But well, I tried hard enough:

2> if true orelse 1/0 -> ok end.
ok
3> if true or 1/0 -> ok end.
** exception error: no true branch found when evaluating an if expression

6> if (false andalso 1/0) == false -> ok end.
ok
7> if (false and 1/0) == false -> ok end.
** exception error: no true branch found when evaluating an if expression

Valid character syntax

The character syntax is a convenience to write numbers. Erlang supports more character escapes and ranges than Elixir, so it was a matter of figuring out the valid ranges and generating plain numbers for the ones that are not allowed:

chars() ->
    [$\s, $\t, $\r, $\n, $\f, $\e, $\d, $\b, $\v, $\^G, $\^C].

printable_chars() ->
    [$a, $z, $A, $Z, $0, $9, $\000, $\377, $\\, $\n].

vs

def chars() do
    [?\s, ?\t, ?\r, ?\n, ?\f, ?\e, ?\d, ?\b, ?\v, ?\a, 3]
end

def printable_chars() do
    [?a, ?z, ?A, ?Z, ?0, ?9, ?\0, 255, ?\\, ?\n]
end

Escape interpolation

Erlang doesn't support string interpolation and Elixir does, so anything coming from Erlang that looks like string interpolation must be escaped, because it's not interpolation:

["#{", '#{', "'p'"].

vs

['\#{', :"\#{", '\'p\'']

Did you know that in Elixir you can interpolate in atoms?

iex(1)> a = "an_atom"
"an_atom"

iex(2)> :"#{a}"
:an_atom

Constant expressions in match position

Erlang allows expressions that evaluate to a constant in match position; Elixir doesn't, so I had to implement a small evaluator that runs before translating the expressions.

match(1 bsl 32 - 1) -> ok.

vs

def match(4294967295) do
  :ok
end

catch expression

Erlang has a catch expression which Elixir does not. Luckily, since in Elixir everything is an expression, I can expand it to a try/catch expression; the only downside is the extra verbosity.
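A sketch of the kind of expansion (my own, not necessarily efe's exact output): an Erlang Result = (catch do_work()) keeps the value on success, returns the thrown term on a throw, and returns an {'EXIT', ...} tuple on errors and exits:

defmodule CatchExpansion do
  def run(fun) do
    result =
      try do
        fun.()
      catch
        :throw, value -> value
        :error, reason -> {:EXIT, {reason, __STACKTRACE__}}
        :exit, reason -> {:EXIT, reason}
      end

    result
  end
end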

Erlang/OTP as a fuzzer for the Elixir compiler

As I said I tested efe by transpiling the Erlang standard library and trying to compile it with the Elixir compiler.

The thing is that OTP has a lot of code, some of it really old and some of it using Erlang in weird ways. That meant that in some cases I would crash the Elixir compiler in the process, or get an unexpected error that may be undefined behavior.

I reported the ones that made sense and the Elixir team had the patience to handle them and fixed them really fast, here's a list:

Future of Coding Weekly 2020/08 Week 5

For some reason tinyletter decided to not publish the newsletter in the archive so I'm posting it here.

If you want to subscribe to the newsletter, it's here: https://tinyletter.com/marianoguerra/

Subtext 1 Demo, Layered Text, VR Visual Scripting, Automated Game Design, Dynamic Sketching in AR, Tiny Structure Editors for Low, Low Prices & more

Two Minute Week

🎥 This Week in Instadeq: Event Triggers via Mariano Guerra

🧵 conversation

This week I added support for Event Triggers, a way to react to changes and do things on other entities

Share Our Work

💬 Chris Rabl

🧵 conversation

I've been doing more and more writing lately, and have been wishing for a tool that allows me to write my outlines, drafts, and final compositions in the same editor window with the ability to toggle any of those "layers" on and off at will, merge them, copy selections to new layers, etc. It would work sort of like Photoshop but for writing... I have a feeling these principles could also work as an IDE extension (imagine being able to hide the "code" layer and show only the "comments" layer, or the "documentation" layer). Curious to hear your thoughts, or whether anyone else is working on something similar?

🎥 layered text

📝 Using Gizmos via Scott Anderson

🧵 conversation

A year ago I was working on VR Visual Scripting in Facebook Horizon. They've recently started to share some more information leading up to Facebook Connect. I figured the scripting system would either be largely the same, or entirely rewritten since I left. It seems like it's mostly intact based on the documentation shared

🎥 Create and Share Interactive Visualizations from SpaceX's JSON API and 🎥 Create and Share Visualizations of Premier League Matches from a CSV via Mariano Guerra

🧵 conversation

📝 root via Dennis Heihoff

🧵 conversation

What started with me reverse engineering notion became a data-first recursive UI resolver I called root.

Here's how it differs from most common technologies today:

  • Approaches to UI development like react.js + graphQL require UI components to request data in a shape that satisfies the UI tree. This means the shape of the data is determined by the UI tree. Root takes an inverse approach where the UI tree is determined by the shape of the data.
  • A major benefit of this approach is that the UI layout is thus entirely determined by data, data that can be traversed, transformed and stored in arbitrary ways and by arbitrary means.
  • This is powerful for unstructured, user-determined, block-based UI's like rich documents (think Roam Research, Notion etc.) enabling queries and functions that, based on users' demands, derive the optimal presentation of a document.

It packs a few more punches. The best example is probably this (in about 200 LoC).

Thinking Together

📝 model of computation via Nick Smith

🧵 conversation

Why isn't any kind of logic programming considered a model of computation? Why do we talk about Turing Machines and recursive functions as fundamental, but not inference? I can't find any resources discussing this disparity. It's like there are two classes of academics that don't talk to each other. Am I missing something?

📝 Motoko, a programming language for building directly on the internet - Stack Overflow Blog via Mike Cann

🧵 conversation also discussed here 🧵 conversation

Anyone played with Motoko yet? looks really interesting, kind of reminds me of Unison in some ways

📝 https://twitter.com/cmastication/status/1299366037402587137?s=21 via Cameron Yick

🧵 conversation

Pondering: how important is it for a making environment to be made from the same medium you’re making with if your main goal isn’t making interfaces? The Jupyter ecosystem has come quite far despite relatively few people using it to write JS: https://twitter.com/cmastication/status/1299366037402587137?s=21

🐦 JD Long: Observation from Jupyter Land: The Jupyter ecosystem has a big headwind because the initial target audience for the tool (Julia, Python, R) has a small overlap with the tool/skills needed to expand the ecosystem, namely Javascript.

That's not a criticism, just an observation.

💬 Hamish Todd

🧵 conversation

In the thing I am making, you can't have a variable without choosing a specific example value for that variable. This is surely something that's been discussed here before since Bret does it in Inventing On Principle. What do folks think of it?

Content

📝 Tiny Structure Editors for Low, Low Prices! via Jack Rusher

🧵 conversation

Fun paper from 2020 IEEE Symposium on Visual Languages and Human-Centric Computing

🎥 Subtext 1 demo (from 2005) via Shalabh Chaturvedi

🧵 conversation

Jonathan Edwards recently uploaded the Subtext 1 demo (from 2005).

It has a lot of interesting takes - and most (all?) that I agree with. E.g. edit time name resolution, debugging by inspection, a concrete model of time, inline expansion of function calls, and more.

📝 It's the programming environment, not the programming language via Ope

🧵 conversation

“But while programming languages are academically interesting, I think we more desperately need innovation in programming environments.

The programming environment isn’t a single component of our workflow, but the total sum enabled by the tools working together harmoniously. The environment contains the programming language, but also includes the debugging experience, dependency management, how we communicate with other developers (both within source code and without), how we trace and observe code in production, and everything else in the process of designing APIs to recovering from failure.

The story of programming language evolution is also a story of rising ideas in what capabilities good programming environments should grant developers. Many languages came to popularity not necessarily based on their merits as great languages, but because they were paired with some new and powerful capability to understand software and write better implementations of it.”

🎥 Getting Started in Automated Game Design via Scott Anderson

🧵 conversation

Mike Cook has done a lot of research into automated game generation. He recently released this video which is both a tutorial and an overview of the field.

📝 Gatemaker: Christopher Alexander's dialogue with the computer industry via Stefan Lesser

🧵 conversation

Don’t read this for the application “Gatemaker”. Read this for a fascinating outsider’s view on the software industry, systems design, and end-user programming.

🎥 RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching via Jack Rusher

🧵 conversation

I really like this new AR work from Ryo Suzuki, et al.

A tour through the beam ADT representation zoo

The Languages

Dynamically Typed

Statically Typed

Column meaning

Record Type

named set of fields (key, value pairs), also referred to as types, structs, records, etc.

Union Type

like a record type but with more than one "shape", also referred to as discriminated unions, variants, etc.

Type Dispatch

a function that has different implementations according to the type of one (or more) of its arguments, also referred to as protocols or multimethods

TL;DR Table

Language    Inspiration    Record Type    Union Type    Type Dispatch
--------    -----------    -----------    ----------    -------------
Dynamic
Clojerl     Clojure        Yes            No            Yes
Efene       Python/JS      Yes*           No*           No
Elixir      Ruby           Yes            No*           Yes
Erlang      Prolog         Yes*           No*           No
LFE         Common Lisp    Yes*           No*           No
Static
Alpaca      ML             Yes            Yes           No*
Fez         F#             Yes            Yes           Yes
Gleam       ML/Rust        Yes            Yes           Not Yet?
Hamler      Purescript     Yes            Yes           Yes
PureErl     Purescript     Yes            Yes           Yes

Groups

Languages that compile record types to erlang records

Languages that compile record types to erlang maps

  • Alpaca: __struct__ field for records, no extra field for anonymous records

  • Clojerl: __type__ field

  • Elixir: __struct__ field for structs

  • Purerl: no extra field for anonymous records

  • Hamler : no extra field for anonymous records

Languages with union types

  • Alpaca: tagged tuple if it has values, atom if not

  • Gleam: tagged tuple if it has values, atom if not

  • Fez: tagged tuple if it has values, atom if not

  • Purerl: tagged tuple (not sure about variants with no values, should be like Hamler)

  • Hamler: tagged tuple even for variants with no values

Languages that do type dispatch

Notes

Alpaca

Records: "anonymous records".

Records are compiled as maps with the KV '__struct__' => 'record'. Because Alpaca doesn't provide any reflection facilities, more type information isn't propagated to the generated Core Erlang.

The tag (in the case of variants/discriminated unions) is just the atom representation of the tag name itself.

E.g. Some_tag 1 gets compiled to {'Some_tag', 1}

Alpaca's records get an extra key-value pair of '__struct__' to a rough description of its type/structure if the "structure" is flagged as a record.

Tagged Unions: (variants in OCaml) get compiled as an atom if there's no associated value, and as a tuple if there is.

Type Dispatch: Author says: "Not currently. This sort of thing might get handled by type classes but I haven't gone too far down that line of thinking yet"

Clojerl

Records: (deftype) compiled to Erlang maps with a special __type__ field.

Tagged Unions: No

Type Dispatch: defprotocol and deftype, extend-type, extend-protocol work as in Clojure.

Protocols are not reified as in Clojure, there are no runtime protocol objects.

Elixir

Records: structs are compiled to Erlang maps with a special __struct__ field.
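For example (my own illustration):

defmodule StructAsMap do
  defmodule Point do
    defstruct x: 0, y: 0
  end

  # under the hood %Point{} is a plain map whose :__struct__ key holds the
  # module name
  def demo() do
    point = %Point{x: 1, y: 2}
    {Map.get(point, :__struct__), Map.has_key?(point, :__struct__)}
    # => {StructAsMap.Point, true}
  end
end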

Tagged Unions: No (usually ad-hoc tagged tuples are used for this)

Type Dispatch: Protocols are collected and consolidated at compile time

Gleam

Records: Compiled to Erlang records (hrl files are generated)

Tagged Unions: Compiled to tagged tuples, they are just gleam custom types with multiple "constructors", if a variant has no values it gets compiled to an atom

Type Dispatch: Not Yet? Will Gleam have type classes?

Fez

Records: Compiled to Erlang records

Tagged Unions: Compiled to tagged tuples

Type Dispatch: Class method calls

LFE, Efene and Erlang

LFE and Efene are just "dialects" of Erlang, that's why they are covered together here.

Records: Erlang records, which are compiled to a tuple where the first value is an atom with the name of the record: LFE Records, Efene Records, Erlang Records

Tagged Unions: Since they are dynamically typed they can use tagged tuples for this, there's no need to declare them, examples are functions that return {ok, V} or {error, Reason}.

Type Dispatch: No

PureErl

Records: Compiled to Erlang maps (without an extra field), really similar to alpaca "anonymous records"

Tagged Unions: Compiled to tagged tuples

Type Dispatch: Type Classes

There are also newtype

Hamler

Records: Compiled to Erlang maps (without an extra field)

Tagged Unions: Compiled to tagged tuples

Type Dispatch: Type Classes

Why I don't like concept cars addendum

I don't like to glorify Steve Jobs, but sometimes he expressed ideas the right way, and the fact that he, unlike me, clearly showed that those ideas can deliver good results may show that those ideas are valid.

Here's a part of an interview with him (emphasis mine):

... they have no conception of the craftsmanship that's required to take a good idea and turn it into a good product, and they really have no feeling in their hearts usually about wanting to really help the customers. There's just a tremendous amount of craftsmanship in between a great idea and a great product, and as you evolve that great idea it changes and grows; it never comes out like it starts, because you learn a lot more as you get into the subtleties of it. And you also find there's tremendous trade-offs that you have to make. I mean, there are just certain things you can't make electrons do, there are certain things you can't make plastic do or glass do or factories do or robots do. And as you get into all these things, designing a product is keeping 5,000 things in your brain, these concepts, and fitting them all together, kind of continuing to push to fit them together in new and different ways to get what you want, and every day you discover something new that is a new problem or a new opportunity to fit these things together a little differently. And it's that process that is the magic.

From here:

Two types of software prototypes and why I don't like concept cars

When I was a child I used to love concept cars, I loved how they were much better than regular cars.

I would listen carefully for when they would be available in the market; most of the time that information wasn't mentioned, and sometimes the date was so far in the future that it wasn't interesting, by then we would already have flying cars!

With time I started noticing that none of the concept cars from my childhood were on the streets, not even close. Worse was when a really bad version of one went on sale, the biggest disappointment of all.

Now when I see a company showing off a concept car I think the opposite: that company is running out of real ideas or has lost the ability to execute novel designs, and tries to justify it by showing shiny things that it knows will never see the light of day.

Why am I talking about concept cars? Well, because there are different types of concept cars and different types of software prototypes, and they are almost the same.

The prototypes that require you to accept things that are never going to be feasible, but are sold as if they were possible, are the worst. Either because they violate basic laws of physics, materials, safety, regulations, performance or user experience, or just because they focus on a single concept while disregarding others that are required when the thing migrates from prototype to production.

If you are upfront and say that the prototype is an exploration of "what if we push this dimension to the extreme", I'm OK with it: it's a learning experience, you may learn a lot about that dimension and how it relates to others, what its limitations are and so on.

But the prototype should be clearly marked as such.

The other useful prototypes are learning exercises: I like to build throwaway prototypes as my first approach to something, as a way to learn more about it and have a better idea for next time. You should also mark them clearly as such.

The third useful prototype is the initial stage of something you want to grow into a product while showing its potential along the way; you may throw some versions away in the early stages because you learned something that required a big reformulation and it's easier to start from scratch than to refactor.

But from the beginning the prototype should be grounded in reality: what's possible, and how the main concept relates to other features that may be a year or more in the future but are going to be required for the prototype to turn into a product.

You can't have a slow/complex prototype in the early stages unless you have a clear idea of how it's going to get faster/simpler; small optimizations here and there aren't going to cut it once the performance/complexity penalty of all the extra features starts creeping up. You have to think up front about how those other features will fit once you reach that point.

Of course, if you are building something new, at some point you will be in new territory and some things may require reformulation. That's good, but at that point you need buffers of performance and other metrics, and a simple core architecture and code, to be able to solve those problems, even if not in the optimal way. That's why you built this new thing: to push the boundary a little further.

If you get the chance to build a new prototype after the current one turned into a finished product, you can then incorporate those new insights at the beginning to be able to push a little further than before. Rinse and repeat.

What you should not do is build a prototype that starts to fall apart even when only solving the single problem you care about, and call the rest just an implementation detail or "left as an exercise for the engineers".

It may sound like a contradiction, but to build good prototypes you have to be good at building complete products; otherwise you can't guide your initial design with constraints you never experienced. A prototype should eventually be the foundation of something much bigger and more complex, and you can't build that on unstable foundations. If not you, then somebody on the team must provide the experience that comes from completing, polishing and maintaining something that survives contact with reality.

As Mike Tyson said: "Everybody has a plan until they get punched in the mouth."

Or put another way:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.

—John Gall

There's only one dimension you can ignore if you have the time/money to do so, and that's price: see the Experience curve effect. Just don't be too early ;)

inb4 appeal to visionaries: check Vannevar Bush's memex, it was a full design, it was easy to see how you could build it, and even when some technologies were in the future, everything was feasible. Ivan Sutherland's Sketchpad demo was a complete runnable thing, same with Douglas Engelbart's Mother of all Demos, Xerox PARC's Alto and Bret Victor's Dynamicland.

Don't fool others, but most important, don't fool yourself.

On utopians and the fact that software must exist and solve real problems

Yet a man who uses an imaginary map, thinking that it is a true one, is likely to be worse off than someone with no map at all; for he will fail to inquire whenever he can, to observe every detail on his way, and to search continuously with all his senses and all his intelligence for indications of where he should go

—E. F. Schumacher, Small is Beautiful

There's an idea I first saw at Django Conf that I really liked: the opening keynote was called "Django Sucks" and it was about everything that is wrong with Django.

Since then I've been promoting the idea at every conference I've been to, and many times I fantasize about giving the "X Sucks" talk myself.

I'm part of more than one community in the utopian spectrum. Two weeks ago I gave a talk at Bob Konf titled Programming by any other name where I showed how the future is already here, we just have to find it and help its creators gain adoption. Here's the other side of that talk:

As a temporary member of the software utopians, an activity we share is to imagine alternative realities in which every wrong decision was instead replaced with the "right" one, with the benefit of hindsight, of course.

The word "right" in quotes because it may have yet to be tested in the real world and may require the suspension of disbelief on many aspects of reality. We may call this activity "counterfactual porn".

I say temporary member of the software utopians because I'm also a member of the software builders, people that build software that real users are willing to use (and pay for) to solve real problems.

One of the utopians' hobbies is to look down on systems that exist and solve real problems, pointing at their flaws.

These flaws are pointed out in contrast to systems that only exist as an idea, a paper, or at most a small prototype that proves a single point and solves a single problem carefully crafted to make it shine.

But there's a gap that utopians never seem to cross.

The gap from early prototype that at most proves a point or shows a new idea, to the point where that single thing is part of a whole that people can use to solve a large variety of complex problems.

Some utopians dared to cross the gap, General Magic, Pharo, Racket, Genera to name a few.

These Pioneers turned settlers faced the fact that "If you build it, they will come" is usually not true.

Those products were (and some are) things that you can get and use, but for some reason they failed to catch on.

Here utopians usually appeal to the No true Scotsman fallacy, if not to the simpler "people are stupid/don't know what's good for them", and go back to the comfortable position of throwing stones at things others build and maintain.

When someone takes the failed idea (or prototype) and adapts it into something that people actually use, taking into account the limitations of reality, society, economics and people's behaviors, the utopians proceed to complain about how the new thing is a bastardized version of the original idea and how if they were to do it they would stay true to the original.

The exercise of actually doing either is almost never attempted, or is attempted and quickly abandoned.

In some cases it's completed, and for some "strange" reason it fails to gain adoption. GOTO No true Scotsman/People are stupid.

The implementation, its users or some aspect of reality are always blamed, the idea must be kept untouched.

I usually go around preaching ideas by thinkers I admire, I think it's really useful to consider them to improve the systems that builders create.

Pure Ideas are important, people stopping at prototypes too, but as much as I believe we need to learn more from history and study great thinkers, researchers and their ideas, I also think we should at least respect and listen to people building finished products that have to face the harsh reality of productive usage of working software.

While we are at it we could also listen to the users of such systems, which we love to talk about, but almost never talk to, let alone let them tell us something that may shape our ideas.

Ideas are only impactful when they get turned into things that real people can use to solve real problems.

During this process, pure ideas have to be adapted at each step, the end result is usually not as pure as one would wish, but that's the price to pay to get from idea to reality.

This involves at least 3 kinds of people: Utopians/Pioneers, Builders/Settlers and Users/Citizens. You need the 3 to collaborate, communicate and respect each other's roles and constraints.

If you are of one kind and think you could do a better job at being the other, then before telling them, try showing them, all the way. You may learn that it's not that easy.

If you don't want to show them, then collaborate and listen, you may learn something new that may improve the chances your idea gets adopted.

The process from idea to adoption also involves 3 different timescales, short, middle and long term (similar to operational, tactical and strategic levels of planning).

Execution has to work at each level individually but to achieve the long term vision, the short and middle term need to be aligned with the long term, even if in between it has to take some detours/shortcuts.

When the 3 roles collaborate, are willing to adapt the idea to consider each other's constraints and plan for the 3 timescales as a whole, they may have a better chance of achieving the utopian objective, even when during the process it doesn't look like it.

The alternative is to stay forever at the idea level, complaining about the people trying to bring it to reality.

PS: I may not be only talking about software

RFC: Elixir Module and Struct Interoperability for Erlang

This is an RFC in the shape of a project that you can actually use. I'm interested in your feedback; find me as marianoguerra in the Erlang and Elixir Slacks and as @warianoguerra on twitter.

The project on github: https://github.com/marianoguerra/exat and on hex.pm: https://hex.pm/packages/exat

See the exat_example project for a simple usage example.

Here's the description of the project:

Write erlang friendly module names and get them translated into the right Elixir module names automatically.

The project is a parse transform but also an escript to easily debug if the transformation is being done correctly.

Erlang Friendly Elixir Module Names

A call like:

ex@A_B_C:my_fun(1)

Will be translated automatically to:

'Elixir.A.B.C':my_fun(1)

At build time using a parse transform.

The trick is that the @ symbol is allowed in atoms if it's not the first character (thank node names for that).

We use the ex@ prefix to identify the modules that we must translate since no one[1] uses that prefix for modules in erlang.

Aliases for Long Elixir Module Names

Since Elixir module names tend to nest and be long, you can define aliases to use in your code and save some typing, for example the following alias declaration:

-ex@alias(#{ex@Baz => ex@Foo_Bar_Baz,
            bare => ex@Foo_Long}).

Will translate ex@Baz:foo() to ex@Foo_Bar_Baz:foo(), which in turn will become 'Elixir.Foo.Bar.Baz':foo()

It will also translate the module name bare:foo() into ex@Foo_Long:foo(), which in turn will become 'Elixir.Foo.Long':foo()

Creating Structs

The code:

ex:s@Learn_User(MapVar)

Becomes:

'Elixir.Learn.User':'__struct__'(MapVar)

The code:

ex:s@Learn_User(#{name => "bob", age => 42})

Becomes:

'Elixir.Learn.User':'__struct__'(#{name => "bob", age => 42})

Which in Elixir would be:

%Learn.User{name: 'bob', age: 42}

Aliases in Structs

The following alias declaration:

-ex@alias(#{ex@struct_alias => ex@Learn_User}).

Will expand this:

ex:s@struct_alias(#{name => "bob", age => 42})

Into this:

'Elixir.Learn.User':'__struct__'(#{name => "bob", age => 42})

Pattern Matching Structs

Function calls are not allowed in pattern match positions (for example in function/case/etc clauses or on the left side of a =), so for those cases there's a different syntax:

get_name({ex@struct_alias, #{name := Name}}) ->
    Name;
get_name({ex@struct_alias, #{}}) ->
    {error, no_name}.

Becomes:

get_name(#{'__struct__' := 'Elixir.Learn.User', name := Name}) ->
    Name;
get_name(#{'__struct__' := 'Elixir.Learn.User'}) ->
    {error, no_name}.

And:

{ex@struct_alias, #{name := _}} = ex:s@Learn_User(#{name => "bob", age => 42})

Becomes:

#{'__struct__' := 'Elixir.Learn.User', name := _} =
        'Elixir.Learn.User':'__struct__'(#{name => "bob", age => 42})

This is because that pattern will match maps that also have other keys.

Note on Static Compilation of Literal Structs

In Elixir, if you pass the fields to the struct syntax it will be compiled to a map in place, since the compiler knows all the fields and their defaults at compile time. For now exat uses the slower version that merges the defaults with the provided fields using 'Elixir.Enum':reduce; in the future it will try to get the defaults at compile time if the struct being compiled already has a beam file (that is, if it was compiled before the current file).

Use

Add it to your rebar.config as a dep and as a parse transform:

{erl_opts, [..., {parse_transform, exat}, ...]}.
...
{deps, [exat, ...]}

Build

To build the escript:

$ rebar3 escriptize

Run

You can run it as an escript:

$ _build/default/bin/exat pp [erl|ast] path/to/module.erl

For example in the exat repo:

$ _build/default/bin/exat pp erl resources/example1.erl
$ _build/default/bin/exat pp ast resources/example1.erl

Syntax Bikesheding

The syntax I chose balances the need to not produce compiler/linter errors or warnings with the objective of avoiding accidentally translating something that shouldn't be translated.

Please let me know what you think!

[1] Famous last words

Riak Core on Partisan on Elixir Tutorial: Resources

A List of resources related to riak_core and partisan.

Riak Core

Project

  • riak_kv: Riak KV itself

  • riak_pg: Distributed process groups with riak_core

  • dalmatinerdb: A fast, distributed metric store

  • riak_test_core: a riak_test fork which refactors riak_test so it's not targeted directly at riak_kv and makes it more library-like

  • nkdist: a library to manage Erlang processes evenly distributed in a riak_core cluster

  • riak_id: A clone of Twitter's Snowflake, built on riak_core

  • DottedDB: A prototype of a Dynamo-style distributed key-value database, implementing Server Wide Clocks as the main causality mechanism across the system

  • riak_pipe: riak_pipe allows you to pipe the output of a function on one vnode to the input of a function on another

Docs

Partisan

Partisan.cloud: Partisan Website

Projects

  • Lasp: Lasp is a suite of libraries aimed at providing a comprehensive programming system for planetary scale Elixir and Erlang applications

  • Vonnegut: an append-only log that follows the file format and API of Kafka 1.0

  • Erleans: Erlang Orleans

Presentations