
This is my blog, more about me at marianoguerra.github.io

πŸ¦‹ @marianoguerra.org 🐘 @marianoguerra@hachyderm.io 🐦 @warianoguerra

Will Wright on Prototypes

Edited from the transcript for readability:

Just to illustrate the differences between the two modeling techniques: in traditional math, if we had, let's say, a square mile of land and we planted some crops on it, standard math would probably go in here, measure the size of each of these things (that ellipsoid would be kind of problematic), and there'd be a kind of long equation that you'd run all the numbers through to calculate what area of this plot of land was covered by fields.

Using modeling techniques we would probably do something much stupider: we start throwing darts at it randomly, and after we've thrown a certain number of darts we measure how many landed in a field and how many landed outside of one, and that ratio tells us what percentage of the area is occupied by crops.

As we throw more darts we get a more accurate answer. On the surface this is, in some sense, a kind of stupid approach, but using the power of a computer it can actually give you very good results.

This is called, by the way, the Monte Carlo technique, for kind of obvious reasons. It is a stochastic method, which in simulation means there's some amount of randomness involved.
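
(Not part of the talk, just to make the idea concrete: a minimal JavaScript sketch of the dart-throwing estimate, using a circular field as a stand-in for the crops.)

// Monte Carlo estimate of how much of a 1x1 plot is covered by a circular "field".
// The shapes and numbers are made up for illustration.
function estimateCoveredFraction(darts) {
    const field = {cx: 0.5, cy: 0.5, r: 0.3}; // the crop field
    let hits = 0;
    for (let i = 0; i < darts; i++) {
        const x = Math.random(), y = Math.random(); // throw a dart
        const dx = x - field.cx, dy = y - field.cy;
        if (dx * dx + dy * dy <= field.r * field.r) {
            hits++; // it landed inside the field
        }
    }
    return hits / darts; // fraction of the plot covered
}

// more darts -> closer to the exact value (pi * 0.3^2 ≈ 0.283)
console.log(estimateCoveredFraction(1000));
console.log(estimateCoveredFraction(1000000));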

The same run of the model won't always give the exact same result. So basically, as designers, we have these dynamic spaces. There's this vast landscape of dynamic spaces that we can put into our games; we cut off little chunks of that space with rulesets, and then we build our games around them.

So as a designer, one of the things I like to do is explore this space and figure out which dynamics are out there that would make for interesting gameplay.

For part of this talk I'm going to be using prototypes to show you examples of how we map this space.

The way I view prototypes is basically the same as paratroopers: we just drop these little paratroopers on what we think might be interesting spots on the landscape. When they land we can start playing with the prototype and see how interesting that space actually was, and then start iterating the prototype uphill in that region. So the prototypes, in some sense, are kind of hill-climbing things in local regions of this dynamic space.

From Will Wright's Dynamics for Designers

Quick and Easy Web Slides with RemarkJS

Say you (I) want to create a quick presentation, a plus if you can share a link to the slides, another plus if you can use your text editor and markdown instead of something more complex.

Say you (I) probably know about remarkjs, but starting each time from the previous presentation is getting annoying.

Say you (I) want a place to go to copy paste the things to get started.

Well, here it is: a quick and easy way to get started with a web presentation using remarkjs.

First create a folder:

mkdir my-presentation
cd my-presentation/

Then download the latest version of remarkjs:

wget https://cdnjs.cloudflare.com/ajax/libs/remark/0.14.0/remark.min.js -O remark.js

Then copy this html and paste it into a file named index.html in the same folder:

<!DOCTYPE html>
<html>
  <head>
    <title>Presentation Title</title>
    <meta charset="utf-8">
    <style>
      *{ font-family: 'sans'; }
      .remark-slide-container, .remark-slide-content{
          color:white;
          background-color:#0d4774;
      }
      .remark-slide-content{
          font-size: 2em;
          background-position: center;
          background-repeat: no-repeat;
          background-size: contain;
      }
      .remark-slide-scaler{
        box-shadow:none;
      }
      a,a:visited{color:white;}
      code{font-size: 0.8em !important;}
      h1, h2, h3 {
        font-weight: normal;
      }
      .remark-slide-content.reverse{color:white;}
      .reverse-text h1{color: #fefefe; background-color: #111; padding: 0.2em;}
      .remark-code, .remark-code *{ font-family: 'monospace'; white-space:pre}
      .slide-video{padding-top:1vh}
      .slide-video video{height:61vh}
      .slide-table{padding:1em 0}
      .no-padding{padding:0}
    </style>
  </head>
  <body>
    <textarea id="source">

class: center, middle

# Hi

---

class: center, middle

![An Image](image.png)

---

background-image: url(a-slide-background.png)

---

class: center, slide-video

<video controls src="a-video.mp4"></video>

---

class: center, middle

# Thanks

    </textarea>
    <script src="./remark.js">
    </script>
    <script>
      remark.create({
        ratio: '16:9',
        slideNumberFormat: '' // '%current% / %total%'
      });
    </script>
  </body>
</html>

Then start a webserver in the folder:

python3 -m http.server

Open your browser and go to http://localhost:8000/

Then:

  • Change the title

  • Maybe change the background color and style

  • Remove/Edit the slides

  • Play with remark settings at the bottom of the index.html file
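
For example, a few more options that I believe remark supports (check the remark documentation for the full list; the values here are just illustrative):

remark.create({
  ratio: '16:9',
  slideNumberFormat: '%current% / %total%', // show slide numbers
  highlightStyle: 'monokai',                // color scheme for code blocks
  highlightLines: true,                     // enable per-line code highlighting
  countIncrementalSlides: false,            // don't count incremental slides in the total
  navigation: {scroll: false}               // don't switch slides with the mouse wheel
});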

I'm writing a WebAssembly book: Early Access for WebAssembly from the Ground Up

If my memory serves me well, last year I went to Munich to have some beers with Patrick Dubroy and I mentioned that I was planning to write a small book about WebAssembly by writing toy compilers for small weird languages.

He mentioned that he was thinking about something similar and asked if I wanted to write it together. I said yes, and the rest is a blog post announcing the public release of the Early Access of WebAssembly from the Ground Up :)

The ideas evolved over time but we settled on a "digital first" book that would allow us to experiment with interactive elements inspired by the Explorable Explanations movement and similar ideas like Bret Victor's Learnable Programming.

The first 6 chapters are already available and after the launch we are going to go back to "writing mode" to complete the rest.

If you are interested in learning the fundamentals of WebAssembly by building a compiler for a simple programming language step by step using JavaScript and OhmJS, this book may be for you.

"Screw it up all the way until the finish"

In a video about molten glass the artist says something I like a lot so I'm transcribing it here to reference it (slightly edited, emphasis mine):

A great piece is basically balanced right on the edge of failure and success.

It's just balanced right there.

But you don't really know how or where that line is.

So you're very excited about that idea, it's spectacular to you.

And you go and do it even though you don't seem like it.

You're going into it with a little bit of fear and trepidation to get too close to that line because you don't want to fail and lose it.

But once you do fail it... all that's gone.

Now it's game on.

It's all about just learning, right?

So say it's a piece that you know is going to take four and a half hours, and at hour three it's kind of screwed up.

And you just say, okay, let's stop and start over.

Well, you really don't know what happens in hours three to five. You have no idea.

So when you get to hour three again, now you have no idea what's coming.

So my idea is usually if I screw up, screw it up all the way that I can to find out exactly what's hiding, what vocabulary of intuition has not been developed, what part of that language.

So now I've screwed it up, screwed it up, screwed it up all the way until the finish.

We know where things might happen.

So now, when I go back into it, I've got the intuition more developed.

I mean, failure ends up being a good space for discovery, right?

But it's like, if I'm going to fail,

let's keep failing,

let's keep screwing up.

Let's see what's there. Let's go find out.

You know, but if you just stop and put it away and start over, you're kind of missing out on a lot.

A Simple, Understandable Production Ready Preact Project Setup

I wrote a similar post in the past; it's time for an updated version. This is a setup similar to the one I'm using with GlooData.

Setup

First we need to create our project, change folder names accordingly:

mkdir myproj
cd myproj
mkdir js lib css img

Create a package.json file at the root of the folder; we will need it later to install one or more build tools:

npm init

Fetch deps (you can put this in a makefile, a justfile, a shell script or whatever):

wget https://cdnjs.cloudflare.com/ajax/libs/preact/10.11.2/preact.module.min.js -O lib/preact.js

Now let's do our first and last edit on our index.html:

<!doctype html>
<html>
 <head>
   <meta charset="utf-8">
   <meta name="viewport" content="width=device-width, minimum-scale=1.0, initial-scale=1, maximum-scale=1.0, user-scalable=no">
   <title>My App</title>
   <script type=module src="./js/app.js?r=dev"></script>
   <link rel="shortcut icon" href="img/favicon.png">
 </head>
 <body>
   <div id="app"></div>
 </body>
</html>

Here's a simple entry point that shows how to use preact without transpilers, modify to your tastes, js/app.js:

import {h, Fragment, render} from '../lib/preact.js';

const c =
        (tag) =>
        (attrs, ...childs) =>
            h(tag, attrs, ...childs),
    // some tags here, you get the idea
    button = c('button'),
    div = c('div'),
    span = c('span');

function rCounter(count) {
    return div(
        {class: 'counter'},
        button({id: 'dec'}, '-'),
        span(null, count),
        button({id: 'inc'}, '+')
    );
}

function main() {
    const rootNode = document.getElementById('app'),
        dom = rCounter(0);

    render(dom, rootNode);
}

main();

Also, I'm not covering state management in this post, I may do it in a future one if there's interest in it.

Serving

For development I use basic-http-server: I just start it at the root of the project, open the browser at the address it logs, and then edit, save, reload; no transpilation, no waiting, no watchers.

If you don't want to install it you can use any other server; you probably have Python at hand. I find it a little slower to load, especially when you have many JS modules:

python3 -m http.server

Building

You could just publish it as it is, since any modern browser will load it.

If not, there are three steps you can take: bundle it into a single file, minify it, and provide it as an old-style JS file instead of an ES module. Let's do that.

First the bundling. There are two options I use: one if you have deno at hand, the other if you have npm at hand.

First the common part: create a dist folder to hold the build artifacts:

rm -rf dist
mkdir -p dist/js
cp index.html dist/
cp -r img dist/

Bundling with Deno

deno bundle js/app.js > dist/app.bundle.js

Bundling with Rollup

Do this once at the root of your project:

npm install --save-dev rollup

Then:

rollup js/app.js --file dist/app.bundle.js --format iife -n app

Minifying with Uglify-js

Do this once at the root of your project:

npm install --save-dev uglify-js

Then:

uglifyjs dist/app.bundle.js -m -o dist/js/app.js

And that's it.

Now some extra/optional things.

Reproducible Dev Environments with Nix

How do you get this environment up and running consistently and reproducibly?

I use Nix; you can too if you want.

First you need to install it.

Then create a shell.nix file at the root of your project with the following:

{ pkgs ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-22.05.tar.gz") {} }:

pkgs.mkShell {
    LOCALE_ARCHIVE_2_27 = "${pkgs.glibcLocales}/lib/locale/locale-archive";
    buildInputs = [
        pkgs.glibcLocales
        pkgs.wget
        pkgs.nodejs
    ];
    shellHook = ''
        export LC_ALL=en_US.UTF-8
        export PATH=$PWD/node_modules/.bin:$PATH
    '';
}

Then whenever you want to develop in this project run the following command at the root of your project:

nix-shell

Now you have a shell with all the tools you need.

Task automation with just

I use just to automate tasks. Here are some snippets; you should check the docs for more details:

set shell := ["bash", "-uc"]
cdn := "https://cdn.jsdelivr.net/npm/"

ver_immutable := "4.1.0"
ver_preact := "10.11.2"

ROLLUP := "node_modules/.bin/rollup"
UGLIFY := "node_modules/.bin/uglifyjs"

default:
    @just --list

server:
    deno run --allow-all server.js

static-serve:
    basic-http-server -a 127.0.0.1:8000

fetch-deps: clean-deps create-deps
    just fetch immutable.js immutable@{{ver_immutable}}/dist/immutable.es.js
    just fetch preact.js preact@{{ver_preact}}/dist/preact.module.js

clean-deps:
    rm -rf deps

create-deps:
    mkdir -p deps

fetch-full NAME URL:
    wget {{URL}} -O deps/{{NAME}}

fetch NAME URL:
    wget {{cdn}}/{{URL}} -O deps/{{NAME}}

clear-dist:
    rm -rf dist

mkdir-dist:
    mkdir -p dist/js dist/img

dist-bundle:
    {{ROLLUP}} js/app.js --file dist/app.bundle.js --format iife -n app
    {{UGLIFY}} dist/app.bundle.js -m -o dist/js/app.js
    rm dist/app.bundle.js
    cp img/favicon.png dist/img/
    sed "s/type=module //g;s/r=dev/r=$(git describe --long --tags --dirty --always --match 'v[0-9]*\.[0-9]*')/g" index.html > dist/index.html

dist: fetch-deps clear-dist mkdir-dist dist-bundle

Older browsers and cache busting

The script tag with type=module will not work for really old browsers. You may also want to make sure browsers load the latest bundle after a deploy. For that you can run a line similar to this one to replace the script tag in the index.html in your dist folder with one that achieves both objectives:

sed "s/type=module //g;s/r=dev/r=$(git describe --long --tags --dirty --always --match 'v[0-9]*\.[0-9]*')/g" index.html > dist/index.html

The Proxy trick to not repeat yourself ™️

Above you saw something like this:

import {h, Fragment, render} from '../lib/preact.js';

const c =
        (tag) =>
        (attrs, ...childs) =>
            h(tag, attrs, ...childs),
    // some tags here, you get the idea
    button = c('button'),
    div = c('div'),
    span = c('span');

There's a trick to avoid writing the tag name twice; it avoids mistakes and minifies better. Here it is:

import {h, Fragment, render} from '../lib/preact.js';

const genTag = new Proxy({}, {
      get(_, prop) { return (attrs, ...childs) => h(prop, attrs, ...childs); },
    }),
    // some tags here, you get the idea
    {button, div, span} = genTag;
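
They're used exactly like the hand-written helpers. As a hypothetical sketch (the tag names here are just examples), any property you destructure from genTag becomes a helper, with nothing else to define:

// illustrative only: grab whatever tags you need in one destructuring
const {nav, ul, li, a} = genTag;

const menu = nav(null, ul(null,
    li(null, a({href: '#home'}, 'Home')),
    li(null, a({href: '#about'}, 'About'))
));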

Reproducible Riak Core Lite Tutorial with Nix

Introduction

Over the years I've created and recreated guides, posts and tutorials on how to set up an environment to create riak_core based applications.

Most of the repetitive work and troubleshooting is around the moving target that is a current Erlang/Elixir/Rebar3/Mix setup.

With this attempt I hope people will be able to set up and follow the guides with a reproducible environment that always reflects the one I had when I wrote the guide.

For that I will use Nix; to follow along you just need git and nix.

Follow the instructions here to Install Nix if you haven't done that already.

Riak Core Lite with Erlang

Clone the Riak Core Lite Rebar3 Template

mkdir -p ~/.config/rebar3/templates
git clone https://github.com/riak-core-lite/rebar3_template_riak_core_lite.git ~/.config/rebar3/templates/rebar3_template_riak_core_lite

Enter a nix-shell with the needed tools:

nix-shell ~/.config/rebar3/templates/rebar3_template_riak_core_lite/shell.nix

Now your shell should have Erlang and rebar3 available, try it:

erlc -h
rebar3 help

Now let's create a new Riak Core Lite application, go to the folder where you want to store your new project and then:

Create the project:

rebar3 new rebar3_riak_core_lite name=ricor

Build it:

cd ricor
rebar3 release

Try it:

./_build/default/rel/ricor/bin/ricor console
(ricor@127.0.0.1)1> ricor:ping().

The output should look something like this (the number will probably be different):

{pong,'ricor@127.0.0.1', 1324485858831130769622089379649131486563188867072}

With this environment you should be able to follow previous tutorials and guides like the Riak Core Tutorial and maybe the Little Riak Core Book.

Riak Core Lite with Elixir

I recommend following Rkv: Step by Step Riak Core Lite Key Value Store Project.

Clone the project:

git clone https://github.com/riak-core-lite/rkv.git

Enter a nix-shell with the needed tools:

cd rkv
nix-shell shell.nix

Now your shell should have Erlang, Elixir, rebar3 and mix available, try it:

erlc -h
rebar3 help
elixirc -h
mix --help

Fetch deps, compile, test and run:

mix deps.get
mix compile
mix test
iex --name dev@127.0.0.1 -S mix run

Play with the API:

Rkv.get(:k1)
# {:error, :not_found}

Rkv.delete(:k1)
# :ok

Rkv.put(:k2, :v2)
# :ok

Rkv.get(:k2)
# {:ok, :v2}

Rkv.delete(:k2)
# :ok

Rkv.get(:k2)
# {:error, :not_found}

Rkv.put(:k2, :v2)
Rkv.get(:k2)
# {:ok, :v2}

You can follow the guide by switching to each tag in order: https://github.com/riak-core-lite/rkv/tags

I will try to keep the shell.nix files for both languages up to date from time to time to keep up with major Erlang/Elixir versions. You can try updating the nix hash yourself and see if it still builds by following the instructions here: Nix Possible URL values

Have fun!

Nikola Restructured Text Roles Plugin Example Project

This guide assumes you have python 3 and nikola installed.

See the Nikola Getting Started Guide for instructions to install it.

Environment with Nix

If you have Nix installed you can get the environment by running nix-shell at the root of this project with this shell.nix file:

{ pkgs ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d9448c95c5d557d0b2e8bfe13e8865e4b1e3943f.tar.gz") {} }:
with pkgs;

mkShell {
  LOCALE_ARCHIVE_2_27 = "${glibcLocales}/lib/locale/locale-archive";
  buildInputs = [
    glibcLocales
    python39
    python39Packages.Nikola
  ];
  shellHook = ''
    export LC_ALL=en_US.UTF-8
  '';
}

Setup a Nikola Site

In case you don't have a nikola site around or you want to play in a temporary project we will create one here:

nikola init my-site
cd my-site

I answered with the default to every question by hitting enter; feel free to give different answers.

Create a New Plugin

In the base folder of the nikola site create a folder for your plugin; I will name mine my_rst_roles:

mkdir -p plugins/my_rst_roles

Create a configuration file with the plugin metadata and customize it at plugins/my_rst_roles/my_rst_roles.plugin:

[Core]
Name = my_rst_roles
Module = my_rst_roles

[Nikola]
PluginCategory = CompilerExtension
Compiler = rest
MinVersion = 7.4.0

[Documentation]
Author = Mariano Guerra
Version = 0.1.0
Website = https://marianoguerra.org
Description = A set of custom reStructuredText roles

Create a python module inside the folder that will contain the plugin logic at plugins/my_rst_roles/my_rst_roles.py:

from docutils import nodes
from docutils.parsers.rst import roles

from nikola.plugin_categories import RestExtension
from nikola.plugins.compile.rest import add_node

class Span(nodes.Inline, nodes.TextElement):
    pass

def visit_span(self, node):
    attrs = {}
    self.body.append(self.starttag(node, "span", "", **attrs))

def depart_span(self, _node):
    self.body.append("</span>")

add_node(Span, visit_span, depart_span)

class Plugin(RestExtension):

    name = "my_rst_roles"

    def set_site(self, site):
        self.site = site

        generic_docroles = {
            "my-foo": (Span, "my-base my-foo"),
            "my-bar": (Span, "my-base my-bar"),
        }

        for rolename, (nodeclass, classes) in generic_docroles.items():
            generic = roles.GenericRole(rolename, nodeclass)
            role = roles.CustomRole(rolename, generic, {"classes": [classes]})
            roles.register_local_role(rolename, role)

        return super(Plugin, self).set_site(site)

Create a New Post or Page to test it

Create a post:

nikola new_post -t "Nikola Restructured Text Roles Plugin Example Project"

Put some content in it:

echo 'Hi :my-foo:`hello`, :my-bar:`world`!' >> posts/nikola-restructured-text-roles-plugin-example-project.rst

Build the site

nikola build

Check the Generated HTML

You can serve it with:

nikola serve

Open http://localhost:8000 and inspect it with the browser's developer tools. Here's a blog-post-friendly way with grep:

grep my-foo output/posts/nikola-restructured-text-roles-plugin-example-project/index.html

The output should be something like this:

Hi <span class="my-base my-foo">hello</span>, <span class="my-base my-bar">world</span>!</p>

More Advanced Transformations

I needed to process the children of the Span node myself and stop docutils from walking them between visit_span and depart_span. To do that, here's a simplified version:

class Span(nodes.Inline, nodes.TextElement):
    def walk(self, visitor):
        visitor.dispatch_visit(self)
        # don't stop
        return False

    def walkabout(self, visitor):
        visitor.dispatch_visit(self)
        visitor.dispatch_departure(self)
        # don't stop
        return False

def visit_span(self, node):
    cls = " ".join(node.get('classes', []))
    child_str = " ".join([format_span_child(child) for child in node.children])
    self.body.append("<span class=\"" + cls + "\">" + child_str)

def depart_span(self, _node):
    self.body.append("</span>")

def format_span_child(node):
    return node.astext()

Here are links to the implementations of some of the relevant functions:

A Real World Use Case

I did this to embed inline UI components in the documentation for instadeq, you can see it in action in This Walkthrough Guide if you scroll a little.

I still have to improve the style and add some missing roles, but it's much better than having to describe the position and look of UI components instead of just showing them.

As usual, thanks to Roberto Alsina for Nikola and for telling me how to get started with this plugin.

EFLFE: Elixir Flavoured Lisp Flavoured Erlang

History

In 2019 I was invited to give a talk at ElixirConfLA. At that point I didn't know Elixir, so I decided to "make a joke": instead of learning Elixir I would create a transpiler from Erlang to Elixir.

The Proof of Concept as a Joke was a lot of work but at least I learned a lot about Elixir and Pretty Printers.

One year later I was invited to CodeBEAM Brasil and I decided to push the PoC to completion to achieve the goal of transpiling Erlang/OTP and the transpiler itself.

This year I was invited again and I felt the pressure to continue with the tradition.

Last year my talk was with Robert Virding, co-creator of Erlang and creator of LFE (Lisp Flavoured Erlang), and I had the idea to transpile LFE too.

At that point LFE 1.0 compiled to an internal representation (Core Erlang) that was one level below the one I was using in Elixir Flavoured Erlang.

I knew that the next major version of LFE was going to switch to the representation I was using, so it was just a matter of waiting for the release.

Timing helped and LFE 2.0 was released in June.

I went code diving, found how to get the data at the stage I needed, and then fixed some corner cases around naming (LFE uses lispy kebab-case).

Result

The result is EFLFE: Elixir Flavoured Lisp Flavoured Erlang, a Lisp Flavoured Erlang to Elixir transpiler.

Business in the Front

Aliens in the Back

Run it with a configuration file and a list of LFE files:

./efe pp-lfe file.conf my-code.lfe

And it will output Elixir files for each.

Example

Input:

(defmodule ping_pong
  (export
    (start_link 0)
    (ping 0))
  (export
    (init 1)
    (handle_call 3)
    (handle_cast 2)
    (handle_info 2)
    (terminate 2)
    (code_change 3))
  (behaviour gen_server))        ; Just indicates intent

(defun start_link ()
  (gen_server:start_link
    #(local ping_pong) 'ping_pong '() '()))

;; Client API

(defun ping ()
  (gen_server:call 'ping_pong 'ping))

;; Gen_server callbacks

(defrecord state
  (pings 0))

(defun init (args)
  `#(ok ,(make-state pings 0)))

(defun handle_call (req from state)
  (let* ((new-count (+ (state-pings state) 1))
         (new-state (set-state-pings state new-count)))
    `#(reply #(pong ,new-count) ,new-state)))

(defun handle_cast (msg state)
  `#(noreply ,state))

(defun handle_info (info state)
  `#(noreply ,state))

(defun terminate (reason state)
  'ok)

(defun code_change (old-vers state extra)
  `#(ok ,state))

Result:

defmodule :ping_pong do
  use Bitwise
  @behaviour :gen_server
  def start_link() do
    :gen_server.start_link({:local, :ping_pong}, :ping_pong, [], [])
  end

  def ping() do
    :gen_server.call(:ping_pong, :ping)
  end

  require Record
  Record.defrecord(:r_state, :state, pings: 0)

  def init(args_0) do
    {:ok, r_state(pings: 0)}
  end

  def handle_call(req_0, from_0, state_0) do
    new_count_0 = r_state(state_0, :pings) + 1

    (
      new_state_0 = r_state(state_0, pings: new_count_0)
      {:reply, {:pong, new_count_0}, new_state_0}
    )
  end

  def handle_cast(msg_0, state_0) do
    {:noreply, state_0}
  end

  def handle_info(info_0, state_0) do
    {:noreply, state_0}
  end

  def terminate(reason_0, state_0) do
    :ok
  end

  def code_change(old_vers_0, state_0, extra_0) do
    {:ok, state_0}
  end

  def unquote(:"LFE-EXPAND-EXPORTED-MACRO")(_, _, _) do
    :no
  end
end

Where The Code Gets Ugly

LFE and Elixir both support macros. Macros are expanded at compile time and are a language feature, which means that by the time I get the code to transpile, it's already expanded.

If the module you are transpiling uses macros you will transpile the macro-expanded version of the code, which may or may not be okay depending on the kind of code the macro generates.

Remaining Work

The remaining work is to understand the details of variable scoping in LFE and see if it's compatible with Elixir so that I can translate it as is like I'm doing now.

If they differ I have to see if I can do some local analysis to transform it so that the resulting code behaves semantically like the original.

If you try it and have some questions let me know at @warianoguerra or in the repo's issue tracker.

How to transpile a complex Erlang project with Elixir Flavoured Erlang: erldns

A user (yes, there's another one!) of Elixir Flavoured Erlang asked why a project wasn't transpiling correctly; I went to look and here are the notes:

Assuming you have the efe escript in your PATH, cd to a folder of your choice and do:

git clone https://github.com/dnsimple/erldns.git
git clone https://github.com/dnsimple/dns_erlang.git

Create a file called erldns.conf with the following content:

#{
   includes => ["../include/", ".", "../priv/", "../../"],
   macros => #{},
   encoding => utf8,
   output_dir => "./out/"
}.

And then run:

efe pp erldns.conf erldns/src/*.erl

Now the context:

We clone dns_erlang because erldns includes header files from it, like here: include_lib("dns_erlang/include/dns_records.hrl")

Since we want to find files using include_lib that refer to dns_erlang, we also include ../../ (which will find any project cloned at the same level as erldns)

erldns also includes headers from the include and priv folders; paths in include are relative to the source file, which is why both start with ../

Currently efe will silently remove parts of the code that it can't find or parse properly; that's why code that references external records or constants in header files not found in the includes list will silently be missing. In the future I may warn about that.