Papers of the Week V

I'm already late so let's go:

The first one, Discretized Streams, is the one I liked the most. It's about the theory behind what became Spark Streaming; really interesting.

The second one is interesting for its introduction of punctuation, which it explains really well.

Didn't like this one too much. That doesn't mean it's bad, just that sometimes the title gives me an idea of what it's about, and when it turns out not to be, I lose interest.

This one was interesting in its description of different types of windows and its definition of window semantics. I get the feeling that if I read it again in the not too distant future I will get a lot more out of it.

More statistical than I thought it would be, but I still learned some things about random sampling. I guess it's one of those papers that are great if you are looking for a solution and it tells you what to implement.

Papers this week: 5

Papers so far: 24

Papers in queue: 85 (I cleaned some duplicates and similar papers)

Papers of the Week IV

Better late than never, and proving that I can count to 4, here we go with the 4th straight paper reading week.

I liked the MillWheel paper; from the ones I've read, Google and Microsoft write really nice papers.

Didn't like the Dryad paper; I was expecting something else.

The PacificA paper is my favorite of the week.

About Naiad, I liked the idea of tracking distributed progress.

I had big expectations for this paper, but it was too Haskellish for my taste; I was expecting something else.

Papers this week: 5

Papers so far: 19

Papers in queue: 91

Sonic Pi on Ubuntu 16.04

Yet another "how to run a Sam Aaron project on the current Ubuntu version" post.

First, add the following two lines at the end of /etc/apt/sources.list:

deb xenial main
deb-src xenial main

Update packages:

sudo apt update

Install Sonic Pi:

sudo apt install sonic-pi

We need to kill PulseAudio and start JACK. It sounds easier than it is, because PulseAudio just won't stay dead :(

The way I found to make it work was to edit PulseAudio's client.conf:

sudo vim /etc/pulse/client.conf

Uncomment the line (remove the semicolon):

; autospawn = yes

And leave it like this:

autospawn = no
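
If you prefer to script this change, a sed one-liner can do the same edit. This sketch works on a scratch copy first so nothing is touched until you've checked the output; point it at /etc/pulse/client.conf (with sudo) once it looks right.

```shell
# Work on a scratch copy that contains the line we want to change
conf=$(mktemp)
printf '; autospawn = yes\n' > "$conf"

# Drop the leading "; " and flip yes -> no in one substitution
sed -i 's/^; *autospawn *= *yes/autospawn = no/' "$conf"

cat "$conf"    # prints: autospawn = no
rm "$conf"
```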

I added myself to the audio group, not sure if it's required but just in case:

sudo adduser $USER audio

For this to take effect you need to log out and log in again. To make sure you have the group, open a terminal and run:

groups
You should see audio among the other groups. If you can't see it, try rebooting, or replace $USER with your actual username in the adduser command.
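
If you want to check membership from a script rather than by eye, something like this sketch works (has_group is just a hypothetical helper name, not a standard command):

```shell
# id -nG prints the current user's groups as a space-separated list
has_group() {
    # succeed if $1 appears as a whole word in the list on stdin
    grep -qw "$1"
}

if id -nG | has_group audio; then
    echo "audio group: ok"
else
    echo "audio group: missing (log out/in or reboot)"
fi
```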

Now stop pulseaudio:

pulseaudio --kill

Then start jackd. I tried all the combinations I could find on the internet without success; this is the one that worked for me:

jackd -R -d alsa -d hw:1

If that doesn't work try:

jackd -R -d alsa

Or try the versions that are recommended on the Overtone wiki:

jackd -r -d alsa -r 44100


jackd -r -d alsa -r 44100 -P

You can also try running qjackctl and playing with the settings to see if you have better luck.

If that doesn't work read /usr/share/doc/sonic-pi/README-JACKD to see if the instructions there help.

Now you should be able to run sonic-pi:

sonic-pi
Have fun!

Papers of the Week III

No, I didn't give up. Last week was short because of a holiday and a "bridge day", so I was riding my bike through the Black Forest, but I still read the papers I set out to read by cramming all of them into 3 days :)

First, a classic in distributed systems :)

More on the "topic"

This is a great paper; I guess it's one of the first "papers I love".

This one is interesting; much of the content sounds like what the creators of Kafka propose. You can see what I mean by watching a talk like "Turning the Database Inside Out" and by reading the paper, which is quite short.

Didn't read the paper, just the blog post summary, but it's quite descriptive.

It has an interesting compression technique and a quote I liked:

We found that building a reliable, fault tolerant system was the most time consuming part of the project. While the team prototyped a high performance, compressed, in-memory TSDB in a very short period of time, it took several more months of hard work to make it fault tolerant. However, the advantages of fault tolerance were visible when the system successfully survived both real and simulated failures.

Related to the MacroBase paper:

Papers this week: 4

Papers so far: 14

Papers in queue: 94

It seems I add 30 papers to the queue for every 5 I read; I hope it's not linear :)

Papers of the Week II

In my continuing attempt to see how far I can count in Roman numerals, here is the second week. Still going, still 5 papers.

The one I liked the most was "The Dataflow Model..." mainly because it fired some ideas related to a problem I'm trying to solve.

The others were also good, except "Reimplementing the Cedar File System Using Logging and Group Commit", mainly because it wasn't what I was expecting it to be.

Read this week: 5

Total read: 10

In the Queue: 61

Papers of The Week I

This is an attempt to treat what I would call acolyer's syndrome: the guilt felt by people who would like to read papers as often as Adrian Colyer but never do.

So I will blog the ones I read here, to try to follow Jerry Seinfeld's Productivity Secret.

I will blog weekly because I won't read a paper a day, but I will try to read around 4 or 5 papers a week if they are around 12~15 pages; if they are longer I will read fewer.

The initial topics are stream processing systems and distributed systems; I will follow the references that I find interesting to inform future papers.

I will also read papers that I find interesting as I go.

OK, without further ado, here are the ones I read this week.

Related to stream processing:

Classics I wanted to read:

In queue: 33

LoRaWan Overview


  • Network Protocol Candidate Specification
  • Optimized for battery powered end-devices
    • Fixed
    • Mobile (as in, they move, not phones)
  • Network Topology is typically star-of-stars
  • Network Operators can't secretly listen on application data


All communication is bidirectional; uplink traffic should dominate:


           LoRa or FSK   +---------+   IP   +---------+
    +---+    (radio)     |         |        |         |
    |   |  <---------->  |         | <----> |         |
    +---+                |         |        |         |
                         +---------+        +---------+
  End Device               Gateway         Network Server


  • Communication spread out on different frequency channels and data rates
  • Data Rates between 0.3 kbps and 50 kbps
    • Max ~ 45 tweets/s (extended ASCII only ;)
    • Just the text w/o protocol overhead
    • Don't expect audio, video or any kind of streaming
  • Encryption of payload
    • AES 128 bit key length
    • One key for each FPort
  • MAC Commands
    • For Network Management
    • Invisible to Application Layer
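
The ~45 tweets/s figure above is plain arithmetic: the 50 kbps ceiling divided by a 140-character tweet at 8 bits per character:

```shell
# 50,000 bits/s over 140 chars * 8 bits/char per "tweet"
awk 'BEGIN { printf "%.1f tweets/s\n", 50000 / (140 * 8) }'   # prints: 44.6 tweets/s
```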

Devices Classes

  • Class A: Baseline
    • Uplink transmission
    • Followed by two short downlink receive windows (RX1, RX2)
  • Class B: Beacon
    • Allow more receive slots at scheduled times
    • Synchronize by a beacon from the gateway
  • Class C: Continuous
    • Nearly continuous open receive windows
    • Only closed when transmitting
    • Lower latency, but more energy usage
  • All devices implement at least class A

Receive Windows

  • After uplink at configured periods
  • If msg received for current device on RX1, RX2 doesn't happen
    • Max one downlink per uplink on Class A
  • Can't transmit from last transmit until after RX2 window

+------------------+                  +-----------+              +------------+
|                  |                  |           |              |            |
| Transmit         |                  | RX1       |              | RX2        |
|                  |                  |           |              |            |
+------------------+                  +-----------+              +------------+

   Transmit Time         Receive
      on Air             Delay 1

                                      Delay 2
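
To put numbers on the diagram, here is a sketch assuming the EU868 defaults from the spec: RECEIVE_DELAY1 = 1 s, with RECEIVE_DELAY2 fixed at RECEIVE_DELAY1 + 1 s, both counted from the end of the uplink:

```shell
# Uplink ends at t=0; RX2 always opens one second after RX1
awk 'BEGIN {
    receive_delay1 = 1                    # seconds, EU868 default (configurable)
    receive_delay2 = receive_delay1 + 1   # fixed +1 s offset per the spec
    printf "RX1 opens at t=%ds, RX2 opens at t=%ds\n", receive_delay1, receive_delay2
}'
# prints: RX1 opens at t=1s, RX2 opens at t=2s
```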

MAC Message Types

  • Join Request/Accept
    • For Over the air Activation
  • Unconfirmed Data Up/Down
    • No ACK required
  • Confirmed Data Up/Down
    • ACK required

MAC Messages

  • Can be standalone messages
    • Always encrypted
  • Or "Piggyback" on next message
    • No encryption
  • Unknown messages ignored

ACK Messages

  • Can be standalone messages
  • Or "Piggyback" on next message

End Device Activation

To participate in a LoRaWAN network

Over the Air Activation

  • Needs join procedure
  • Requires fields set on device
    • DevEUI
    • AppEUI
    • AppKey (AES 128, derived from root AppKey)
  • Network Key provided
    • Allows network roaming

Activation by Personalization

All info stored on device on setup

Information Stored after Activation

  • Device Address
    • Two parts: Network Id and Network Address
  • Application Identifier
    • Global ID, uniquely identifies owner
  • Network Session Key
    • Used for MIC generation
    • Used for MAC only message encryption/decryption
  • Application Session Key
    • Used to encrypt/decrypt payload and for MIC

Class B Devices

  • Mobile or fixed devices that need to open receive windows
    • At fixed time intervals (ping slots)
  • Class B implements Class A
  • All gateways must synchronously broadcast a beacon
  • Provides timing reference to devices
  • Devices start as Class A and can switch to B when they detect a beacon
  • If no beacon is detected for 120 minutes, the device switches back to Class A

Class C Devices

  • Used for applications that have sufficient power available
    • cannot implement Class B
  • Will listen with RX2 window parameters as often as possible
  • No message to tell the server that it is a class C node
    • App must know
  • Like Class B, can receive multicast downlink frames

end to end - Part II: Frontend

This is the second and final part; the previous part is here: end to end - Part I: Backend. This part will be a little more complicated than necessary, since I made a mistake in the first part and carried it into the first implementation of the frontend. You can get a clean picture of the final result, without any cruft, by reading the current code in the repository marianoguerra-atik/om-next-e2e.

Without further ado, here we go:

In the previous part I created one endpoint for queries and one for actions (or transactions). This was a confusion I had and is not needed: the om parser will call mutators or readers depending on what is passed. Let's review the changes needed in the backend to make this a single endpoint:

If we run these changes and try the increment mutation like before, but sending it to the query endpoint, we get an error:

$ echo '(ui/increment {:value 1})' | transito http post http://localhost:8080/query e2t -

Status: 500
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 33

{:error "Internal Error"}

To make it work we have to send it inside a vector:

$ echo '[(ui/increment {:value 1})]' | transito http post http://localhost:8080/query e2t -

Status: 200
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 6


Like in the frontend, we can send a list of places to re-read after the transaction:

$ echo '[(ui/increment {:value 1}) :count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 18

{:count 2}

Now that we have all the changes in the backend let's review the frontend.

This UI just displays hello world; it's only there to test that the figwheel and cljsbuild setup works.

You can try it by running:

lein figwheel

And opening http://localhost:3449/index.html

Then we implement a counter component that only works in the frontend; if you read the documentation it shouldn't require much explanation.

Then we add the cljs-http dependency, which we will use to talk to the server from the frontend, and we make some changes on the backend to serve static files from resources/public.

In the next commit we rename the increment mutation to ui/increment (ui isn't a good name for this; I should have picked a better one).

We also require some modules and macros to use the cljs-http module, and implement the :send function that the reconciler requires if we want to talk to remotes. This is explained in the documentation, in the Remote Synchronization Tutorial and the FAQ.

In this commit I did the increment transaction by hand because I couldn't get it to work: I was passing ":remote true" in the mutator but not the query ast. You will see that fixed in the next commit.

Then, when Increment is clicked, I run a transaction to increment the counter both locally and on the backend. The click is handled at defmethod mutate 'ui/increment; notice the ":remote true" and ":api ast". :api is an identifier for a remote that I specified when creating the reconciler.

Now you can start the server with:

lein run

And open http://localhost:8080/index.html.

Click increment, then open it in another browser and click increment in one and then in the other. See how they show the actual value after a short delay, having first incremented it by one locally.

You can see a short screencast of this demo here:

end to end - Part I: Backend

Here I will build an example of an end to end app, with a frontend communicating with a backend, both using Clojure.

The repository is here: gh:marianoguerra-atik/om-next-e2e. Each commit is one step here; some commits are simple changes that I don't cover.

Click on the links to go to the diff of that specific part.


Start by creating a new clojure project with leiningen:

lein new om-next-e2e

Basic Logging and HTTP Server

Jump to this commit with:

git checkout 32842e95abc4960b32488a51110fe7d7e385be88

To test run:

lein run

You should see:

14:55:22.179 [main] INFO  om-next-e2e.core - Starting Server at
14:55:22.778 INFO  [org.projectodd.wunderboss.web.Web] (main) Registered
web context /

On another terminal, using httpie:

$ http get localhost:8080/

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Date: Thu, 26 Nov 2015 13:55:24 GMT
Server: undertow

Hello world!

Basic Routing with Bidi

These handlers (action and query) just return 200 and the body with some extra content.

Jump to this commit with:

git checkout 03b95c397b1c7d21cafe7a9a21efebc7df5b6b41

Let's try it, first let's try the not found handler:

$ http get localhost:8080/lala
HTTP/1.1 404 Not Found
Content-Length: 9
Server: undertow

Not Found

Let's check that doing a GET on a route that only handles POST returns 404 (for REST purists: it should be 405, I know):

$ http get localhost:8080/action
HTTP/1.1 404 Not Found
Content-Length: 9
Server: undertow

Not Found

Let's send some content to action as json for now:

$ http post localhost:8080/action name=lala
HTTP/1.1 200 OK
Content-Length: 24
Server: undertow

action: {"name": "lala"}

And query:

$ http post localhost:8080/query name=lala
HTTP/1.1 200 OK
Content-Length: 30
Server: undertow

query action: {"name": "lala"}

Use Transit for Requests and Responses

Jump to this commit with:

git checkout 56d8d2e615e7f499c9dbeaa1d1479a0f39dc1950

From here on I will use transito, a tool I created in Python to translate between JSON, transit and edn, since writing and reading transit by hand is not fun. Here I use edn since it's more readable and is what we will use in our frontend. You can install it with:

sudo pip install transito

Send an action:

$ echo '(start {:id "id2"})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 60
Server: undertow

{:action (start {:id "id2"})}

The response is translated from transit to edn; the actual response can be seen using something like curl:

curl -X POST http://localhost:8080/action -d '["~#list",["~$start",["^ ","~:id","id2"]]]'

["^ ","~:action",["~#list",["~$start",["^ ","~:id","id2"]]]]

You can get the body you want translated to transit like this:

echo '(start {:id "id2"})' | transito e2t -
["~#list",["~$start",["^ ","~:id","id2"]]]

Let's try the not found handler (notice we are sending to actiona instead of action):

$ echo '(start {:id "id2"})' | transito http post http://localhost:8080/actiona e2t -
Status: 404
Content-Type: application/transit+json
Content-Length: 28
Server: undertow

{:error "Not Found"}

Now let's test the query endpoint:

$ echo '(tasks {:id "id2"})' | transito http post http://localhost:8080/query e2t -
Status: 200
Content-Type: application/transit+json
Content-Length: 59
Server: undertow

{:query (tasks {:id "id2"})}

Supporting Actions and Queries

At this point we need to support the same mutations and reads as the frontend. To do this we need to add the dependency; I'm using om next alpha25 SNAPSHOT. Here is how to install the exact version I'm using:

git clone
cd om
git checkout 34b9a614764f47a022ddfaf2e469d298d7605d44
lein install


Jump to this commit with:

git checkout f9ac70c18c89ecbe336c736ef266c17ee1ef8eab

Now let's test it.

Increment by 20:

$ echo '(increment {:value 20})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 44
Server: undertow

{:value {:keys [:count]}}

Get current count:

$ echo '[:count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 19
Server: undertow

{:count 20}

Increment by 1:

$ echo '(increment {:value 1})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 44
Server: undertow

{:value {:keys [:count]}}

Get current count:

$ echo '[:count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 19
Server: undertow

{:count 21}

Try getting something else to try the :default handler:

$ echo '[:otherthing]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 6
Server: undertow


Try a nonexistent action to try the :default handler:

$ echo '(somethingelse {:value 1})' | transito http post http://localhost:8080/action e2t -

Status: 404
Content-Type: application/transit+json
Content-Length: 84
Server: undertow

{:params {:value 1}, :key somethingelse, :error "Not Found"}

om.next with devcards how to

Simple step by step guide to try om.next with devcards.

This assumes you have leiningen installed; if not, follow the leiningen installation instructions first.

Let's start by creating the basic devcards environment using the devcards template:

lein new devcards omnom
cd omnom
lein figwheel

The output should look something like this:

Figwheel: Starting server at http://localhost:3449
Focusing on build ids: devcards
Compiling "resources/public/js/compiled/omnom_devcards.js" from ["src"]...
Successfully compiled "resources/public/js/compiled/omnom_devcards.js" in 15.476 seconds.
Started Figwheel autobuilder

Launching ClojureScript REPL for build: devcards
Figwheel Controls:


  Switch REPL build focus:
          :cljs/quit                      ;; allows you to switch REPL to another build
    Docs: (doc function-name-here)
    Exit: Control+C or :cljs/quit
 Results: Stored in vars *1, *2, *3, *e holds last exception object
Prompt will show when figwheel connects to your application
To quit, type: :cljs/quit

Then, after it does all its things, open http://localhost:3449/cards.html

It should look something like this:


Click the omnom.core link; you should see this:


Now we have to install the latest om development snapshot to try it. In some folder outside your project, run:

git clone
cd om
lein install

Now let's add the dependencies to our project, open project.clj and make the :dependencies section look like this:

:dependencies [[org.clojure/clojure "1.7.0"]
               [org.clojure/clojurescript "1.7.122"]
               [devcards "0.2.0-3"]
               [sablono "0.3.4"]
               [org.omcljs/om "0.9.0-SNAPSHOT"]
               [datascript "0.13.1"]]

Now restart figwheel (press Ctrl + D) and run it again:

lein figwheel

Reload the page.

Open the file src/omnom/core.cljs and replace its content with this:

(ns omnom.core
  (:require
   [cljs.test :refer-macros [is async]]
   [goog.dom :as gdom]
   [om.next :as om :refer-macros [defui]]
   [om.dom :as dom]
   [datascript.core :as d]
   [sablono.core :as sab :include-macros true]
   [devcards.core :as dc :refer [defcard deftest]]))


(defcard first-card
  (sab/html [:div
             [:h1 "This is your first devcard!"]]))

(defui Hello
  Object
  (render [this]
    (dom/p nil (-> this om/props :text))))

(def hello (om/factory Hello))

(defcard simple-component
  "Test that Om Next component work as regular React components."
  (hello {:text "Hello, world!"}))

(def p (om/parser
    {:read   (fn [_ _ _] {:quote true})
     :mutate (fn [_ _ _] {:quote true})}))

(def r (om/reconciler
    {:parser p
     :ui->ref (fn [c] (-> c om/props :id))}))

(defui Binder
  Object
  (componentDidMount [this]
    (let [indexes @(get-in (-> this om/props :reconciler) [:config :indexer])]
      (om/update-state! this assoc :indexes indexes)))
  (render [this]
    (binding [om/*reconciler* (-> this om/props :reconciler)]
      (apply dom/div nil
        (hello {:id 0 :text "Goodbye, world!"})
        (when-let [indexes (get-in (om/get-state this)
                             [:indexes :ref->components])]
          [(dom/p nil (pr-str indexes))])))))

(def binder (om/factory Binder))

(defcard basic-nested-component
  "Test that component nesting works"
  (binder {:reconciler r}))

(deftest test-indexer
  "Test indexer"
  (let [idxr (get-in r [:config :indexer])]
    (is (not (nil? idxr)) "Indexer is not nil in the reconciler")
    (is (not (nil? @idxr)) "Indexer is IDeref")))

(defn main []
  ;; conditionally start the app based on whether the #main-app-area
  ;; node is on the page
  (if-let [node (.getElementById js/document "main-app-area")]
    (js/React.render (sab/html [:div "This is working"]) node)))


;; remember to run lein figwheel and then browse to
;; http://localhost:3449/cards.html

It should display the om cards; if not, try reloading the page.

Now just keep adding cards!