Erlang/OTP 20.2 is released


Erlang/OTP 20.2 is the second service release for the OTP 20 major release.
The service release contains mostly bug fixes and performance
improvements, but also some new features.
 
Some highlights for 20.2
 
  • crypto, ssl
 
               The crypto API is extended to use private/public keys
               stored in an Engine for sign/verify or encrypt/decrypt
               operations.
 
               The ssl application provides an API to use this new
               engine concept in TLS.
 
  • ssh
 
               SSH can now fetch the host key from the private keys
               stored in an Engine. See the crypto application for
               details about Engines.
 
  • ssl
 
                A new command line option -ssl_dist_optfile has been
                added to facilitate specifying the many options needed
                when using SSL as the distribution protocol (a usage
                sketch follows this list).
 
  • stdlib
 
                Improved performance of the new string functionality
                when handling ASCII characters.
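 
A usage sketch for the new -ssl_dist_optfile option mentioned above (the
configuration file path and node name below are placeholders, and additional
flags, such as a boot script that loads the ssl application, may also be
needed):
 
    erl -proto_dist inet_tls -ssl_dist_optfile "/path/to/ssl_dist.conf" -sname mynode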
 
You can find the README and the full listing of changes for this
service release at
 
 
The source distribution and binary distributions for Windows can be
downloaded from
 
 
Note: To unpack the TAR archive you need a GNU TAR compatible program.
 
For installation instructions please consult the README file that is part
of the distribution.
 
The Erlang/OTP source can also be found at GitHub on the official Erlang
repository, https://github.com/erlang/otp with tag OTP-20.2.
 
The online documentation can be found at http://www.erlang.org/doc/.
You can also download the complete HTML documentation or the Unix
manual files.
 
 
Please report any new issues via Erlang/OTP's public issue tracker.
 
 
We want to thank all of those who sent us patches, suggestions and bug
reports!
 
Thank you!
 
The Erlang/OTP Team at Ericsson


Zomp/zx: Yet Another Repository System

I’ve been working on a from-source repo system for Erlang on and off for the last few months, contributing time to it pretty much whenever real-life is not interfering. I’m getting close to making a release. Now that my main data bits are worked out, the rest isn’t all that hard. I need to figure out what I want to say in an announcement.

The problem is that I’m really horrible at announcements and this system does things in a pretty different way to other repository systems out there, so I’m not sure what things are going to be important about it to users (worth putting into an announcement) and what things are going to be important to only me because I’m the one who wrote it (and am therefore obsessed with its externally inconsequential internals). What is internally interesting about a project is almost never what is externally interesting about it. Marketing; QED. So I need to sort that out, and writing sometimes helps me sort that kind of thing out.

I’m making this deliberately half-baked, disorganized, over-long post public because Joe Armstrong gave me some food for thought the other day. I had written him my thoughts on a subject posted to a mailing list, but sent the message in private. I made my message to him off-list for two reasons: first, I wasn’t comfortable with my way of expressing the idea just yet; and second, I am busy with real-life stuff and side projects, including the repo system, and don’t want to get sucked into online chatter that might amount to nothing more than bikeshedding. (I’m a world-class bikeshedder!) Joe wrote me back asking why I made the reply private, I told him my reasons, and he made me change my mind. He hopes that more people will publish their ideas all the time, good or bad, fully baked or still soggy — because the only way we can find other interesting ideas these days is by searching for them, usually in text, on the net somewhere. It isn’t like we can’t go back and revise, but whether or not we do go back and clean up our literary messes, the availability of core ideas and exposure of thought processes are more important than polish. He’s been on a big drive to make sure that he posts most of his thoughts to public mailing lists or blogs so that his ideas get at least indexed and archived. On reflection I agree with him.

So here I am, trying to publicly organize my thoughts on my repository system.

I should start with the goals of the system.

This system is intended to smooth over a few points of pain experienced when trying to get a new Erlang project off the ground, and in particular avert the path of pain peculiar to Erlang newcomers when they encounter the “how to set up a project” problem. Erlang’s tooling is great but a bit crufty (deeply featured, but confusing to interface with) and not at all what the kool kids expect these days. And anyway I’m really just trying to scratch my own itch here.

At the moment we have two de facto standards for publishing Erlang systems: erlang.mk and Rebar. I like both of these, especially erlang.mk, but they do one thing that annoys me and never seems to quite fit my need: they build Erlang releases.

Erlang releases are great. They cut all the cruft of a release out and pack everything needed to actually run a system into a single blob of digits that you can move, in a single shot, to a new target system — including the Erlang runtime itself. Awesome! Self-contained deployment and it never misses. This has been an Erlang feature since before people even realized that they needed repeatable deployment infrastructure outside of the classic “let’s build a monolithic, static binary executable” approach. (Erlang is perpetually ahead of its time, even by today’s standards. I look at the poor kids stubbing their toes with Docker and language du jour and just shake my head — though part of that is because many shops are using Docker to solve concurrency issues that they haven’t even become cognizant of, thinking that they are experiencing “scaling” problems but missing the point entirely.)

Erlang releases are awesome when the deployment target is an embedded system, but not so awesome if the target is a full-blown operating system, VM, container, or virtual environment fully stocked with gobs of memory and storage and flush with system utilities and resources. Erlang releases sort of kitchen-sink the deployment itself. What if you want to run several different Erlang programs, all delivered as releases, all depending on the same library? You’ve got tons of copies of that library. Which is OK, but still sort of weird, because you also have tons of copies of the runtime (among other things). Each release is self-contained and lean, but in aggregate this is a bit odd.

Erlang releases make sense when you’re deploying to a phone switch or a sensor device in the middle of nowhere and the runtime is basically acting as its own operating system. Erlang releases are, in that context, analogous to putting a Gentoo stage 3 binary image on a system to leapfrog most of the toolchain process. Very cool when you’re in that situation, but a bit tinker-tacky when you’re just trying to run, say, a client program written in Erlang or test a web front-end for something that uses YAWS or Cowboy.

So that’s the siloed-kitchen-sink issue. The other issue is that newcomers are perpetually confused about releases. This makes teaching elementary Erlang hard. In my view we should really focus on escript for beginner code — just let the new guy run something out of a single file the way he is used to doing when learning a new language instead of showing him pages of really slick code, then some interpreter stuff, and then leaping straight from that to a complex and advanced packaging setup necessarily tailored for conducting embedded deployments to slim hardware devices. Seriously. WTF. Escripts give beginners all the power of Erlang necessary for exploring the more interesting bits of code and refactoring needed to learn sequential Erlang with the major advantage of being able to interface with the system the same way programmers from other environments are used to dealing with language runtimes like Bash, AWK, Python, Ruby, Perl, etc.

But what about that gap between scripts and full-blown production deployments for embedded hardware?

Erlang has… nothing.

That’s right! There is no agreed-upon way to deploy or even run Erlang code in the same manner a Python coder would expect to execute a python program. There is no virtualenv type system, there is no standard answer to the question “if I’m in the project directory and type ./do_thingy it will just work, right?” The answer is always “Well, it depends…” and what actually winds up happening is that people either roll a whole release just to crank a trivial amount of code up or (quite often) implement an ad hoc way to get the same effect in a lighter-weight way. (erlang.mk shines here, actually.)

Erlang does provide a number of ways to make a system run locally from source or .beam files — and actually has quite reasonable built-in resources for this — but nothing has been built around these tools that also deals with external dependencies, argument passing in a standard way, or any of the other little things you really need if you want to claim a complete solution. Hence all the ad hoc solutions that “work on my machine” but certainly aren’t something you expect your users to use (not with broad success, anyway).

This wouldn’t be such a big problem if it weren’t for the fact that not having any standard way to “just run a program” also means that there really isn’t any standard way to deal with client side code in Erlang. This is a big annoyance for me because much of what I do is client-side code. In Erlang.

In fact, it totally boggles my mind that client-side Erlang isn’t more common, especially considering that AMD is already fielding zillion-core processors for desktops, yet most languages are fundamentally single-threaded. That doesn’t mean you can’t do concurrency and parallelism in other languages, but most problems are not parallel in nature to begin with (parallel problems are easy to write solutions to in any language) while most real-world problems are concurrent. But concurrent systems are hard to write in almost every language. Concurrent problems are the bulk of the interesting problems we’re still not very good at solving with computers. AMD is moving to make much more interesting concurrent processing available on the client side (which means Intel will soon start pouring its gajillions worth of blood diamond money into a similar effort), but most languages and environments have no good way to make use of that on the client side. (Do you see why I hear Lady Fortune knocking?)

Browsers? Oh yeah. That’s a great plan. Have you noticed that most sites slowly move toward the “Single Page App” design over time (read as: the web sucks, so now we write full-but-crippled client-programs and deliver them over the web), invest heavily in do-sneaky-things-without-telling-you JavaScript and try to hog every core your system has if you allow it the slightest permission to do so? No. In the age of bitcoin miners embedded in nearly every ad this is not the direction I think we should be envisioning things going.

I want to take better advantage of the cores users have available, and that doesn’t necessarily mean make more efficient use of every cycle as much as it means to make scheduling across processes more efficient to reduce latency throughout the system overall. That’s something users care about quite a lot. This is the problem Erlang has already solved in a way no other runtime out there has. So I want to capitalize on it.

And yet, there is still no standard-ish way of dealing with code from source, running it locally, declaring or resolving dependencies, or even launching a client-side program at all.

So… how am I approaching it?

I have a project called “zomp” which is a repository system. It is a distributed repository system, so not everything has to be held in one place. Code in the zomp universe is held in little semantic silos called “realms”. Each realm can have whatever packages the owner (sysop) wants it to have. Each realm must have one server node somewhere that is its “prime” — the node in charge of that realm. That node is where system operator tasks for that realm take place, where packagers and maintainers submit code for inclusion, where the package index is built, and where the canonical copy of everything is stored. Other nodes configured to see that realm connect to the prime node, receive a copy of the current indexes, and are tested for availability and published as available resources for querying indexes or downloading packages.

When too many subordinate nodes connect to a prime, the prime will redirect a new node to a subordinate; when a subordinate gets “full” of subordinates itself, it picks one of its own subordinates for new redirects, and so on, so each realm winds up forming a resource tree of mirror nodes that connect back to the realm prime by a single path. A single node might be prime for several realms, or other nodes may act as prime for different realms — and any node can be configured to become a part of any number of realm trees.

That’s the high-level code division.

The zomp constellation is interfaced with via the “zx” program (short for “zomp explorer”, or “zomp exchanger”, or “Zomp eXtreem!”, or homage to the Sinclair ZX-81, or whatever else might lend itself to the letters “zx” that you might want to make up — I actually forget what it originally stood for, but it is remarkably convenient to type so it’s staying that way)

zx is configured to have visibility on zomp realms the same way a zomp node is (in fact, they use the same configuration files and it isn’t weird to temporarily host a zomp node on your desktop the same way you might host a torrent node for a while — the only extra effort is that you do have to open a port, zomp doesn’t (yet) do hole punching magic).

You can tell zx to run a program using the highly counter-intuitive command:

zx run Realm-ProgramName[-Version]

It breaks the program name down into:

  • Realm (optional, defaulting to the main realm of public FOSS packages called “otpr”)
  • Name (necessary — sort of the whole point)
  • Version (which is optional and can also be partial: “1.0.3” vs just “1.0” or “1”, defaulting to the latest in a series or latest overall)

With those components it then contacts any zomp node it knows provides the needed realm, resolves the latest version number of the requested program, downloads and unpacks it, checks and downloads any missing dependencies, builds the program, and launches it. (And if it doesn’t know any active mirrors it asks the prime node and is seeded with known mirror nodes in addition to getting its query answered.)
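
For illustration, here are a few hypothetical invocations (the package and realm names below are made up, not real packages):

zx run coolproject                  # latest version from the default "otpr" realm
zx run otpr-coolproject-1.2         # latest patch release in the 1.2 series
zx run myrealm-coolproject-1.2.3    # a fully qualified realm-name-version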

The packages are kept in a local cache stored at the user level, not the system level (sort of like how browsers keep their JS and page caches) — though if you want to daemonize zomp and run it as a permanent service (if you run a realm prime, for example) then you would want to create an unprivileged system user specifically for the purpose. If you specify a fully-qualified “realm-name-version” for execution and the packages already exist and are built, zx just launches the code directly (which is the majority case, so no delay there — fast startup).

All zomp nodes carry a complete index of their configured realms and can answer queries with very little overhead, but only the prime node has a copy of all the packages for that realm.

 

Zomp realms are write-only. There is no facility for removing a package from a realm entirely, only for upgrading the versions of packages whenever necessary. (Removal is, of course, possible, but requires manual intervention by the sysop.)

When a zx client or zomp node asks an upstream node for a package and the upstream node does not have a copy it will query its upstream until the request reaches a node that does have a copy. Once found a “found” notice goes back down to the client telling it how many hops away the package is, and new “hops away” notices are sent as the package is passed downstream toward the original requestor (avoiding timeouts and allowing the user to get some feedback about what is going on). The package is cached at each node along the way, so subsequent requests for that same package will be handled immediately without any more relay downloading.

Because the tree of nodes is expected to be relatively ephemeral and in a constant state of flux, the tendency is for package stores on mirror nodes to be populated by only the latest, most popular packages. This prevents the annoying problem of old realms having gobs of packages that nobody uses, but whose mirror hosts are burdened with maintaining them all anyway.

But why not just keep the latest of everything and ditch old packages?

Ever heard of “version shear”? Yeah. Me too. It sucks. That’s why.

There are no “up to” or “greater than” or “abstract version 3” type dependency declarations in zomp package metadata. As a package maintainer you must explicitly declare the complete version of each dependency in your system. In the case of diamond-shaped dependencies (where two packages in your system depend on slightly different versions of the same package) the burden is on the packagers to declare a version that works for a given release of that package. There are no dependency trees for this reason. If your package depends on X, and X depends on Y and Z then your package must be defined as depending on X, Y and Z — and fully specify the versions involved.

Semver is strictly enforced, by the way. That is, all release numbers are “Major.Minor.Patch”. And that’s it. No more, no less. This is one of the primary criteria for inclusion into a public realm and central to the way both zx and zomp interpret package semantics. If an upstream project has some other numbering scheme the packager will need to create a semver standard of his own. And actually, this turns out to not be very hard in practice. There is one weird side-effect of full, static dependency version declarations and semver: updating dependencies results in incrementing your package’s patch number, so even if you don’t change anything in a program for a long time, a program with many dependencies under heavy development may wind up on version 2.3.257 without much change other than the {deps, PackageIDs}. line in the package meta file.

zx helps make you aware of these situations, so solving them has not been particularly difficult in practice.

Why do things this way?

The “static dependencies forever and ever, amen” decision is a tradeoff between the important feature of fully repeatable builds Erlang releases are famous for (to the point of bug-compatibility between deployment sites — which is critical in production) and the flexibility users and developers have come to expect from source repository systems like pip, pypi, CPAN, etc. Because each realm is write-only there is no danger that a package will be superseded and disappear. The way trickle-down caching works for mirror zomp nodes does not unduly burden the subordinate realm mirrors, and the local caching behavior of zx itself at launch time tends to make all of this mostly delay-free for zx clients and still gives them the option to always run “latest available version” if they want.

And on the note of “latest version”…

Client-side programs are not expected to be run too terribly long at a time. People shut desktop programs down, restart computers, update their kernels, etc. So even if a client program runs a long time (on the order of web, email, IRC, certain games, crypto wallets/miners, torrent nodes, Freenode, Tor, etc) it will still have a chance to restart every few days or weeks to check for a new version (if invoked in a way that omits the version number so that it always queries the latest version).

But what about long-running server-side type programs? When zx starts, a script checks the initial environment and then starts the Erlang runtime with zx as its target application, passing it the package ID of the desired program to run and its arguments as arguments. That last sentence was odd. An example is helpful:

zx run foo-bar arg1 arg2 arg3

zx invokes the launching script (a Bash script on Linux, BSD and OSX, a batch file on Windows — so actually the command is zx.bash or zx.cmd) with the arguments run foo-bar arg1 arg2 arg3. zx receives the instruction “run” and then breaks “foo-bar” into {Realm, Name} = {"foo", "bar"}. Everything after that is passed in as strings, which wind up being the input arguments to the program being run (here, “foo-bar”).

zx registers a process called zx_daemon which remains resident in the runtime and waits for a subscription request or zomp query. Any Erlang program written with the intention of being used with zx can send a message to zx_daemon and ask it to maintain a connection to the program’s parent realm and enroll for update notifications. If the target program itself is the subject of a realm index update then it will get a message letting it know what has changed. The program can respond any way the author wants to such a notification.

In this way it is possible to write a client-side or server-side application that can enroll to become aware of updates to itself without any extra infrastructure and a minimal amount of code. In some programs I’ve used this to cause a pop up notification to appear to desktop users so they know that a new version has become available and they should restart the program (the way Firefox does on Windows). It could also be used to initiate a restart on its own, or whatever else you might come up with.

There are several benefits to developers of using this system as well.

As a developer I can start a new project by doing zx init app [Realm-Name] or zx init lib [Realm-Name] in an existing project root directory and a zomp.meta file will be generated for it, or a new project template directory will be created (populated with a functioning sample skeleton project). I can do zx dailyze and zx will make sure a generally relevant PLT exists or is built (if not up to date) and used to check the typespecs of the project and its dependencies. zx create package [Path] will create a zomp package, sign it, and populate the metadata for it. zomp keygen will generate the kind of keys necessary to interact with a zomp server. zomp submit PackageFilePath will submit a package for review.

And so on. It is a lot easier to do most things now, and that’s the main point.

(There are commands for reviewing, approving, or rejecting package submissions, adding packagers and maintainers to package projects, adding dependencies to projects, X.Y.Z version incrementing, etc. as well.)
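
Put together, a first session with a new project might look something like the following sketch (the realm and project names are hypothetical, and the exact commands may shift before release):

zx init app otpr-coolproject      # generate a skeleton project and its zomp.meta
zx dailyze                        # build or refresh a PLT and check typespecs
zx create package                 # create and sign a zomp package
zomp keygen                       # generate keys for talking to a zomp server
zomp submit PackageFilePath       # submit the package for review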

This is about 90% of the way I want it to be, but that means about 90% of the effort remains (pessimistically assuming the 90/10 rule, because life sucks and nobody cares). Most of that is probably going to be finagling some network lunacy, but a lot of the effort is going to be in putting polish to it.

Zomp/zx is based on a similar project I wrote for use within Tsuriai a few years ago that has much sparser features but does basically the same thing: eases packaging and repeatable deployment from source to client systems. I would never release that version publicly because it has a lot of “works for me!” level functionality, but very little polish and requires manually diddling quite a few settings files in error-prone ways (which is fine because it was just us diddling them).

My intention here is to Cadillac this out a bit so that newcomers can slide into the new language and just focus on that language after learning a minimum of tooling commands or environmental details. I think zx init app foo-bar and zx runlocal are a low enough bar for entry.


Vagrant for Erlang Development

I typically like to do development work on my local machine. Locally I’ve got all my favorite tools, scripts, and aliases along with custom mappings for my editor. Local development is much more pleasant than SSH’ing into a server and running commands. Without all my custom tools and configurations the environment feels foreign to me. Because of this I generally try to avoid solutions to development problems that involve a virtual machine. Even though the VM is running on my laptop it’s really not that much easier to develop on than a regular server.

I’ve known about Vagrant for a long time, but I really wasn’t interested in using it because it was easy to set up development environments on my laptop with asdf. Then I encountered a project at work that I wasn’t able to get working on my laptop. I spent hours trying to figure out what was misconfigured, but to no avail. I reluctantly figured I would give Vagrant a try. It seemed like a better option than using a plain VM. It turned out to be very effective. My development with Vagrant is almost seamless now.

In this blog post I’ll cover a few of the issues I ran into when setting up Vagrant for my Erlang project as well as some things I discovered that improved my workflow with Vagrant.

Installation

First off you’ll need to install Vagrant and a hypervisor for running the actual VM. I like VirtualBox because it is free and open source.

If you are on Linux you may be able to use your package manager to install VirtualBox. Version 5.0 is the latest version that Vagrant supports.

# Install VirtualBox 5, I'm on Debian so I'm using apt-get
$ sudo apt-get install virtualbox-5.0

Then install Vagrant. You can download it from the Vagrant website or, if you are on Linux, you can use your package manager to install it.
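
On a Debian-based system that would typically be something like this (assuming your distribution packages a recent enough version):

# Install Vagrant with the system package manager
$ sudo apt-get install vagrant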

Installation Hiccup

After installing Vagrant and trying to start it up for one of my projects I realized there was an issue with my VirtualBox installation. It turned out to be due to an option called VT-x being disabled in my BIOS. The error I got when I tried to boot the VM looked like this:

There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "dc1a0388-9aab-4ce9-9343-0778af7d1f1d", "--type", "headless"]

Stderr: VBoxManage: error: VT-x is disabled in the BIOS for all CPU modes (VERR_VMX_MSR_ALL_VMX_DISABLED)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole

I rebooted my machine, went into the BIOS, and enabled that option. When I booted back up, my VirtualBox installation worked and no errors were printed.

Intel VT-d Feature Enable

Setting Up the Environment on the VM

Once you have Vagrant installed you can begin setting it up for your Erlang project. Navigate to your project on the command line and run vagrant init <box> to generate a Vagrantfile for the project. For this blog post I chose the hashicorp/precise64 box, which is Ubuntu 12.04 and seems to be the default box that is used in the Vagrant documentation. Boxes are the package format for Vagrant environments. Boxes contain the base VM image and other metadata. Available boxes are listed on the Vagrant website. The Vagrantfile in your project root is where you can specify configuration values for your project’s box. Typically there isn’t much that needs to change, but there are plenty of options available. You can set options for network interfaces, synced folders, and the base box image that is used by the VM. I’m not going to cover all that here. The Vagrant documentation and the book Vagrant: Up and Running are great resources.
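
For example, using the box mentioned above, generating the Vagrantfile and booting the VM looks like this:

$ vagrant init hashicorp/precise64
$ vagrant up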

Once you have a VM up and running you will need to provision the box. For Erlang development you will need Erlang, Rebar/Rebar3, and optionally Elixir. The Vagrantfile allows us to specify a provisioning script that can be run when the VM is created to install all the tools you will need. I use asdf locally, so I figured I would use it on the VM as well.

The provision script would need to install asdf, install all the necessary asdf plugins, and then install the correct versions of Erlang, Rebar, and Elixir for the project. The script I came up with does all of this:

#!/usr/bin/env bash

# Unofficial Bash "strict mode"
# http://redsymbol.net/articles/unofficial-bash-strict-mode/
set -euo pipefail
#ORIGINAL_IFS=$IFS
IFS=$'\t\n' # Stricter IFS settings

# Install Git and other asdf dependencies
sudo apt-get install -y git automake autoconf libreadline-dev libncurses-dev \
    libssl-dev libyaml-dev libffi-dev libtool unixodbc-dev \
    build-essential autoconf m4 libncurses5-dev curl

# Install asdf
git clone https://github.com/asdf-vm/asdf.git $HOME/.asdf
(cd $HOME/.asdf; git checkout v0.4.0)
echo -e '\n. $HOME/.asdf/asdf.sh' >> $HOME/.bashrc
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> $HOME/.bashrc
# Make asdf available in this script
set +u
source "$HOME/.asdf/asdf.sh"
set -u

# Install all the necessary asdf plugins
asdf plugin-add erlang https://github.com/asdf-vm/asdf-erlang.git
asdf plugin-add rebar https://github.com/Stratus3D/asdf-rebar.git
asdf plugin-add elixir https://github.com/asdf-vm/asdf-elixir.git

# Navigate to the directory containing the project (/vagrant is the directory
# that is synced with the project dir on the host)
cd /vagrant
# Make the versions defined in the .tool-versions file the versions used by the
# vagrant user in any directory
cp .tool-versions $HOME
# Install all correct versions of these packages for the project
asdf install

echo "Completed setup of Erlang environment!"

asdf expects a .tool-versions file in the project root, so before you have Vagrant run the provision script the .tool-versions file must exist in the project. For my project I needed the latest Erlang and Rebar3 versions but not Elixir, so mine looked like:

erlang 20.1
rebar 3.4.7

Now you just need to tell Vagrant to use this script to provision your VM. The config.vm.provision parameter allows us to specify the provision method for the Vagrant box. For a shell script like this you need to add a config.vm.provision line like this:

Vagrant.configure(2) do |config|

  # ... omitted other options

  config.vm.provision "shell", path: "provision.sh", privileged: false
end

Vagrant will run the provision script after creating the VM, so if you already have a Vagrant box running, run vagrant destroy and then vagrant up to have Vagrant set up a new VM and run the provision script. If the provision script finishes without errors you should have a running Vagrant VM configured for Erlang development!
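
That cycle looks like this (the -f flag just skips the confirmation prompt):

$ vagrant destroy -f
$ vagrant up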

Tighter Integration with My Local Environment

SSH’ing onto the VM to run commands is something I wanted to avoid, and it turns out that is easy to do. Vagrant provides the vagrant ssh command, which can be used to SSH onto the server, but it can also be treated as a regular SSH client, meaning you can use it to run arbitrary commands on the server. To run arbitrary commands use:

$ vagrant ssh -- '<command>'

For example, to see the IP addresses of the VM run:

$ vagrant ssh -- 'ip address'

You can also run scripts on the VM like this:

$ vagrant ssh -- < <script>

This is a lot to type out for simple things, so I was eager to find a better way of doing this. After asking some questions I found three ways to make running commands on the VM easier.

Shell Alias

The first way to simplify commands is to just create a shell alias for vagrant ssh --. It’s easy to do and makes the commands a lot shorter:

# Add this to your .bashrc
alias vc="vagrant ssh --"

# Then you can use it to run commands on the VM:
$ vc 'ip address'

The downside to this is that you still have to quote the command you want to run.

vagrant-exec

vagrant-exec is a very nice Vagrant plugin that aims to make it easier to run commands on the VM. It offers some very nice features:

  • Uses synced folders to map commands to the right directory on the VM, allowing you to navigate around your local environment and run commands in the equivalent directory on the VM.
  • It has options for generating shims, which you can add to your $PATH and then run commands locally without a prefix.
  • It has options for prepending commands with other commands. For example, prepending apt-get with sudo.

vagrant-exec is a much better choice than shell aliases. It offers more features and tighter integration. The downside is it often requires more work to configure.

What vagrant-exec does isn’t that complicated so I wanted to see if I could write a simplified version of it as a shell script.

va script

I was able to write a simple Bash script that works similarly to vagrant-exec. It lacks many of the features provided by vagrant-exec, but still makes running commands very easy. I named the script va to make it short enough that no alias would be needed. Using the script is very easy. Going back to the IP address example it would just be:

$ va ip address

Basically all the script does is look at the synced folder mappings configured for the project, and then map the current directory on the host machine to the equivalent directory on the VM. This allows you to easily run directory-specific commands on the host without having to worry about the directory being used on the VM. The output from the command is printed just as if it was run locally.
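
To give a rough idea of the approach, here is a minimal sketch of such a wrapper (not the actual va script, and assuming the default setup where the directory containing the Vagrantfile is synced to /vagrant on the guest):

#!/usr/bin/env bash
# Minimal "va"-style wrapper (sketch only)
set -euo pipefail

if [ "$#" -eq 0 ]; then
  echo "usage: va <command> [args...]" >&2
  exit 1
fi

# Walk up from the current directory until a Vagrantfile is found
root="$PWD"
while [ "$root" != "/" ] && [ ! -f "$root/Vagrantfile" ]; do
  root="$(dirname "$root")"
done
if [ ! -f "$root/Vagrantfile" ]; then
  echo "va: no Vagrantfile found above $PWD" >&2
  exit 1
fi

# Map the host working directory to the equivalent guest directory
rel="${PWD#$root}"
guest_dir="/vagrant$rel"

# Run the command in that directory on the VM
cd "$root"
exec vagrant ssh -- "cd '$guest_dir' && $*"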

The source for the actual script can be found in my dotfile repo on GitHub. All you need to do is put it on your $PATH.

Conclusion

Overall Vagrant has been a big help. I was surprised at how much searching I had to do to find a good way of seamlessly running commands on the VM from my local environment. With my va script I’m pretty happy, and I can always use vagrant-exec in the future if I find my va script insufficient.

I still really like developing locally but for times when I can’t run a project locally I’m going to use Vagrant. It’s hard to beat the ease of use and tight integration that Vagrant provides.



Macro Madness: How to use `use` well

In Elixir, macros are used to define things that would be keywords in other languages: defmodule, def, defp, defmacro, and defmacrop are all macros defined by the standard library in Kernel. In ExUnit, assert is able both to run the code passed to it to see if the test is passing and to print that code when it fails, so that you don’t need a custom DSL to show what was being tested. In GenServer, use GenServer defines default implementations for all the required callbacks.

If you want a head-trip, look at the implementation of defmacro, which is defined using defmacro:

defmacro defmacro(call, expr \\ nil) do
  define(:defmacro, call, expr, __CALLER__)
end

Kernel.defmacro/2

Don’t worry: like all languages defined in themselves, defmacro is defined using a “bootstrap” library that’s written in the underlying language. In Elixir’s case, :elixir_bootstrap defines minimal versions of @, defmodule, def, defp, defmacro, and defmacrop in Erlang: just enough for Kernel to be parsed once, after which it defines the full versions. This way, you don’t need the previous version of Elixir to build the next version, just Erlang.

import Kernel, except: [@: 1, defmodule: 2, def: 1, def: 2, defp: 2,
                        defmacro: 1, defmacro: 2, defmacrop: 2]
import :elixir_bootstrap

Kernel

Macros allow us to generate code dynamically at compile time. One of the reasons they were added to Elixir was to reduce the amount of boilerplate that needed to be written for behaviours, such as :gen_server. In Erlang, this boilerplate was manually added to each file using Emacs templates.

Before the introduction of the -optional_callbacks attribute in Erlang 20, there was no way to add new callbacks without having everyone update their code to add their own copy of the default implementation.

GenServer has 6 callbacks you need to implement. Every GenServer you write would need to have the correct signature and return values for all those callbacks.

So, to implement the bare minimum, we can get away with one-liners in most cases, but we need to remember the shape of each of the returns even if we don’t care about code_change/3 for hot-code upgrades. Additionally, the one-liners with raise won’t type check with dialyzer: it will warn about non-local return, which is just dialyzer’s way of saying you’re raising an exception or throwing. The real code in GenServer is doing more to make dialyzer happy and to give you more helpful error messages that are easier to debug.

def init(args), do: {:ok, args}

def handle_call(_msg, _from, _state), do: raise "Not implemented"

def handle_info(msg, state) do
  :error_logger.error_msg(
    '~p ~p received unexpected message in handle_info/2: ~p~n',
    [__MODULE__, self(), msg]
  )
  {:noreply, state}
end

def handle_cast(_msg, _state), do: raise "Not implemented"

def terminate(_reason, _state), do: :ok

def code_change(_old, state, _extra), do: {:ok, state}

But, if you read the docs for GenServer and know that you don’t need to implement all the callbacks, you can put use GenServer in your callback module and all those default implementations will be defined for you. So, you go from having to haphazardly copy default implementations into each callback module to a single line.

Just like defmodule and the various def* for call definitions, use is not a keyword in Elixir, it is a macro in Kernel, so think of use as a convention, not a keyword.

use is not magic. It’s a very short piece of code that is only complex in order to give some convenience:

  1. It automatically does require, as __using__ is a macro and macros can’t be used without an explicit require first
  2. It uses Enum.map, so you can pass multiple aliases (use Namespace.{Child1, Child2})
  3. It raises an ArgumentError if you called it wrong.
defmacro use(module, opts \\ []) do
  calls = Enum.map(expand_aliases(module, __CALLER__), fn
    expanded when is_atom(expanded) ->
      quote do
        require unquote(expanded)
        unquote(expanded).__using__(unquote(opts))
      end
    _otherwise ->
      raise ArgumentError, "invalid arguments for use, expected a compile time atom or alias, got: #{Macro.to_string(module)}"
  end)
  quote(do: (unquote_splicing calls))
end

Kernel.use/2

If use just calls the __using__ macro, what is the __using__ macro supposed to do? The only requirement is that it behaves like any other macro: it returns quoted code. The rest is up to the conventions and best practices in the docs for Kernel.use.

Example

Let’s look at an example of using __using__, the missteps you can make along the way, and how to fix them.

An Old One

While working at Miskatonic University, William Dyer started a compendium of various species the university had encountered. The university’s not mad enough to try to bring them to Earth, so we use a Client library to establish communication with grad students working in the field.

defmodule Miskatonic.OldOnes do
  def get(id) do
    with {:ok, client_pid} <- client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end

  defp client_start_link do
    Miskatonic.Clients.Portal.start_link(entrance: "witch-house")
  end
end

Miskatonic.OldOnes@william-dyer

The heads of multiple Great Old Ones merge organically with Cthulhu's head at the base

While researching the Old Ones, Miskatonic grad students found some of their records referring to greater species that the Old Ones were studying. Because naming is hard, Miskatonic has started to call them Great Old Ones.

defmodule Miskatonic.GreatOldOnes do
  def get(id) do
    with {:ok, client_pid} <- client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end

  defp client_start_link do
    Miskatonic.Clients.Boat.start_link(
      latitude: -47.15,
      longitude: -126.72
    )
  end
end

Miskatonic.GreatOldOnes@gustaf-johansen

So, we have two modules that both have a get function for getting the research on a resource, but how we communicate with the grad students in the field differs. We want to make communicating with new and exciting things that want to drive us mad easier, because we keep losing grad students, so we need to refactor our two modules and extract the common pieces. Here’s the general shape: there’s a get/1 function that takes an id, and internally there’s a client_start_link/0 function that hides the different ways we communicate with the realms of the different species.

defmodule Miskatonic.Species do
  def get(id) do
    with {:ok, client_pid} <- client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end

  defp client_start_link do
    ??
  end
end

Using use

Using the use convention, we can move the get/1 definition into a quote block in the __using__ macro for a new, general Miskatonic.Species module. We can move get/1 into it, but we can’t move client_start_link into it.

defmodule Miskatonic.Species do
  defmacro __using__([]) do
    quote do
      def get(id) do
        with {:ok, client_pid} <- client_start_link() do
          Miskatonic.Client.show(client_pid, id)
        end
      end
    end
  end
end

Miskatonic.Species@bob-howard

Now use Miskatonic.Species allows us to get rid of the duplicate get/1 code in each module, but we still need client_start_link since it differs in each.

defmodule Miskatonic.OldOnes do
  use Miskatonic.Species

  defp client_start_link do
    Miskatonic.Clients.Portal.start_link(entrance: "witch-house")
  end
end

Miskatonic.OldOnes@bob-howard

defmodule Miskatonic.GreatOldOnes do
  use Miskatonic.Species

  defp client_start_link do
    Miskatonic.Clients.Boat.start_link(latitude: -47.15,
                                       longitude: -126.72)
  end
end

Miskatonic.GreatOldOnes@bob-howard

Bob Howard in a tactical turtleneck holding a glow hand-held device

Bob Howard gets pulled off the project and sent to The Laundry, so a new grad student, Carly Rae Jepsen, needs to contact the Yithians, whom the Old Ones fought.

Great Race of Yith

Seeing how useful use Miskatonic.Species was in the other modules, Carly Rae Jepsen tries the same, but she gets a cryptic error message that client_start_link/0 is undefined.

defmodule Miskatonic.Yithians do
  use Miskatonic.Species
end

Miskatonic.Yithians@carly-rae-jepsen-compilation-error

== Compilation error in file lib/miskatonic/yithians.ex ==
** (CompileError) lib/miskatonic/yithians.ex:2: undefined function client_start_link/0
    (stdlib) lists.erl:1338: :lists.foreach/2
    (stdlib) erl_eval.erl:670: :erl_eval.do_apply/6

mix compile

Carly Rae tracks down that Miskatonic.Species depends on client_start_link/0 being defined, but Miskatonic.Species isn’t currently making the best use of the compiler to tell developers that. Using @callback, she declares that client_start_link/0 is required by the Miskatonic.Species behaviour, and adds @behaviour Miskatonic.Species to the quote block.

defmodule Miskatonic.Species do
  @callback client_start_link() ::
              {:ok, pid} | {:error, reason :: term}

  defmacro __using__([]) do
    quote do
      @behaviour Miskatonic.Species

      def get(id) do
        with {:ok, client_pid} <- client_start_link() do
          Miskatonic.Client.show(client_pid, id)
        end
      end
    end
  end
end

Miskatonic.Species@carly-rae-jepsen-client-start-link-callback

So, great: Carly Rae now gets a compiler warning that is more specific about why she needs client_start_link in Miskatonic.Yithians, but it looks like @callback implementations need to be public, so all the defp client_start_link definitions are changed to def client_start_link.

warning: undefined behaviour function client_start_link/0  (for behaviour Miskatonic.Species)
  lib/miskatonic/great_old_ones.ex:1

warning: undefined behaviour function client_start_link/0  (for behaviour Miskatonic.Species)
  lib/miskatonic/yithians.ex:1

warning: undefined behaviour function client_start_link/0  (for behaviour Miskatonic.Species)
  lib/miskatonic/old_ones.ex:1

mix compile

With the switch to public client_start_link/0, we can learn about the Old Ones, Great Old Ones, and Yithians, but the code could be better. Although we’re not writing the def get in every file, it’s being stored in each, which we can see if we ask for the debug info. For one function this isn’t a big deal, but if we add more and more functions this is unnecessary bloat; we know it’s exactly the same code. Code loading still takes time on the BEAM, even if it’s faster than languages that need to be interpreted from source first.

iex> {:ok, {module, [debug_info: {_version, backend, data}]}} = :beam_lib.chunks('_build/dev/lib/miskatonic/ebin/Elixir.Miskatonic.Yithians.beam',[:debug_info])
iex> {:ok, debug_info} = backend.debug_info(:elixir_v1, module, data, [])
iex> {:ok, %{definitions: definitions}} = backend.debug_info(:elixir_v1, module, data, [])
iex> List.keyfind(definitions, {:get, 1}, 0)
{{:get, 1}, :def, [line: 2, generated: true],
 [{[line: 2, generated: true],
   [{:id, [counter: -576460752303423100, line: 2], Miskatonic.Species}], [],
   {:with, [line: 2],
    [{:<-, [line: 2],
      [{:ok,
        {:client_pid, [counter: -576460752303423100, line: 2],
         Miskatonic.Species}}, {:client_start_link, [line: 2], []}]},
     [do: {{:., [line: 2], [Miskatonic.Client, :show]}, [line: 2],
       [{:client_pid, [counter: -576460752303423100, line: 2],
         Miskatonic.Species},
        {:id, [counter: -576460752303423100, line: 2],
         Miskatonic.Species}]}]]}}]}

The general approach is to make the functions in your __using__ quote block as short as possible. To do this, I recommend immediately calling a normal function in the outer module that takes __MODULE__ as an argument.

The reason I recommend always passing in __MODULE__ is illustrated well here: module is needed so that client_start_link/0 can be called in get/2, because get/2 is outside the quote block and won’t be in the module that calls use Miskatonic.Species anymore.

defmodule Miskatonic.Species do
  @callback client_start_link() ::
              {:ok, pid} | {:error, reason :: term}

  defmacro __using__([]) do
    quote do
      @behaviour Miskatonic.Species

      def get(id), do: Miskatonic.Species.get(__MODULE__, id)
    end
  end

  def get(module, id) do
    with {:ok, client_pid} <- module.client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end
end

Miskatonic.Species@get-module

Carly Rae Jepsen is doing such a good job on the code that the university doesn’t want to risk her going mad in the field, so Miskatonic University has decided to fund another graduate position on the team. Nathaniel Wingate Peaslee joins the team and discovers that the Yithian psychic link isn’t limited to just swapping location, but can also be used to swap in time. This means that to study more Yithians, the Miskatonic.Yithians module should try mind-transferring to a Yithian in a different time if getting info on a Yithian fails.

defmodule Miskatonic.Yithians do
  use Miskatonic.Species

  def client_start_link(keywords \\ [yithian: "Librarian"]) do
    Miskatonic.Clients.Psychic.start_link(keywords)
  end

  def get(id) do
    case Miskatonic.Species.get(__MODULE__, id) do
      {:error, :not_found} ->
        with {:ok, pid} <- client_start_link(yithian: "Coleopterous") do
          Miskatonic.Client.show(pid, id)
        end
      found ->
        found
    end
  end
end

Miskatonic.Yithians@clause-cannot-match

Ah, but Nathaniel seems unable to override get/1 that the use Miskatonic.Species is inserting. Line 2 is the line where use Miskatonic.Species is called while line 8 is where Nathaniel wrote the def get.

warning: this clause cannot match because  a previous clause at line 2 always matches
  lib/miskatonic/yithians.ex:8

mix compile

We can use defoverridable to mark any function defined above it in a quote block as overridable, so that if the outer scope defines the same name and arity it replaces the quoted definition instead of appending clauses to it. Although mixing clauses from quote blocks and the outer scope is allowed, it’s mostly going to cause confusing bugs, so I recommend always marking any functions defined in a quote block as overridable.

                      defoverridable: No    defoverridable: Yes
quote clauses         quote clauses         quote clauses
defmodule clauses     Both                  defmodule clauses

So, Nathaniel marks get/1 as overridable, and the override works without warnings.

defmodule Miskatonic.Species do
  @callback client_start_link() ::
              {:ok, pid} | {:error, reason :: term}

  defmacro __using__([]) do
    quote do
      @behaviour Miskatonic.Species

      def get(id), do: Miskatonic.Species.get(__MODULE__, id)

      defoverridable get: 1
    end
  end

  def get(module, id) do
    with {:ok, client_pid} <- module.client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end
end

Miskatonic.Species@defoverridable

But he’s able to do more: when you override a defoverridable function, you can call the overridden function with super. This allows users of your __using__ macro to not have to look at the implementation of the function they are overriding, which means their code is more likely to continue working if you change implementation details.

defmodule Miskatonic.Yithians do
  use Miskatonic.Species

  def client_start_link(keywords \\ [yithian: "Librarian"]) do
    Miskatonic.Clients.Psychic.start_link(keywords)
  end

  def get(id) do
    case super(id) do
      {:error, :not_found} ->
        with {:ok, pid} <- client_start_link(yithian: "Coleopterous") do
          Miskatonic.Client.show(pid, id)
        end
      found ->
        found
    end
  end
end

Miskatonic.Yithians@defoverridable

Miskatonic University’s library is doing really well, but it still has some slight bugs: every module has a get/1 and it’s overridable, but it’s not a callback. It may seem weird to mark get/1 as a callback, since only client code calls get/1, but if we want to make test mocks to test code that depends on Miskatonic.Species, we really need a get/1 callback. By making get/1 a callback, we can also use the compact form of defoverridable that takes the name of the behaviour whose callbacks are overridable, instead of listing each function’s name/arity.

defmodule Miskatonic.Species do
  @callback client_start_link() ::
              {:ok, pid} | {:error, reason :: term}

  @callback get(id :: String.t) :: term

  defmacro __using__([]) do
    quote do
      @behaviour Miskatonic.Species

      def get(id), do: Miskatonic.Species.get(__MODULE__, id)

      defoverridable Miskatonic.Species
    end
  end

  def get(module, id) do
    with {:ok, client_pid} <- module.client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end
end

Miskatonic.Species@defoverridable-behaviour

One final check that Elixir 1.5 gives us is @impl. @impl is like @Override in Java, but better. It can:

  1. Mark which functions are implementations of callbacks
  2. Document which behaviour a function is for, which makes finding docs and source easier for readers
  3. Force all other callbacks for the same behaviour to use @impl to maintain consistent documentation.

In Miskatonic.Species, there is only one behaviour, but if it was a stack of behaviours, such as building on top of GenServer, then marking which callbacks are for GenServer and which are for other behaviours can be very helpful.

defmodule Miskatonic.Species do
  @callback client_start_link() ::
              {:ok, pid} | {:error, reason :: term}

  @callback get(id :: String.t) :: term

  defmacro __using__([]) do
    quote do
      @behaviour Miskatonic.Species

      @impl Miskatonic.Species
      def get(id), do: Miskatonic.Species.get(__MODULE__, id)

      defoverridable Miskatonic.Species
    end
  end

  def get(module, id) do
    with {:ok, client_pid} <- module.client_start_link() do
      Miskatonic.Client.show(client_pid, id)
    end
  end
end

Miskatonic.Species@impl

TL;DR

Let’s review Miskatonic University’s findings and thank the graduate students for going mad, so we don’t have to.

  1. We can use use, which calls __using__, which calls quote to inject default implementations
  2. All defs in the quote block should be declared as @callbacks in the outer module where defmacro __using__ is.
  3. Put @behaviour with the outer module as the behaviour name at the top of quote block
  4. The default functions should be one-liners that call functions with the same name in the outer module with __MODULE__ as a prepended argument.
  5. Mark all default functions with @impl, as it will force other callbacks for the behaviour to also use @impl and double check you got the name and arity right between the @callbacks and implementation in the quote block.
  6. Use that passed in __MODULE__ whenever you need to call another callback from the outer module functions, so that overrides for any callback will always be called. Don’t call other outer module functions directly!
  7. Use defoverridable with the outer module so that you don’t have confusing errors with clauses mixing from the quote block and the module that calls use.


World, meet Code Sync Conferences

I attended my first Erlang User Conference in 1995. It was my first conference ever. I was an intern at the Computer Science Lab, working on my Master’s thesis with Joe Armstrong. The conference was opened by Erlang System’s manager Roy Bengtson, my future boss. In his opening talk, he announced two new libraries, the Erlang Term Storage and the Generic Server Module, as well as the tools which were eventually merged to give us the Observer. When attendees complained about the lack of documentation for these tools, Klacke at the CS Lab suggested they write it themselves.

The two-day conference had doubled in numbers from its first installment the previous year, with presentations from the Computer Science Laboratory, Erlang Systems, Ericsson and universities around the world. It was the beginning of something you do not get to experience often.


Opening slide from the proceedings of the Second Erlang User Conference 1995

The journey to launching Code Sync

By 2009, the conference had outgrown the Ericsson conference center in Älvsjö, and the OTP team did not have the infrastructure and flexibility needed to expand the event. We at Erlang Solutions had gained experience in events by running the Erlang eXchange in 2008 followed by the first Erlang Factory in Palo Alto in early 2009. Ericsson asked us to help, so we took over the logistics and worked with them to put together the program.


Can you spot me at Erlang User Conference 2006?

From these humble beginnings, a conference focused on Erlang expanded to include OTP. Use cases of trade-offs in distributed systems. Talks on cloud infrastructure, orchestration and micro services before the terms were invented. And attempts to make Erlang OO (Not the way Alan Kay intended it) were described and forgotten. The discussions in the hallway track were on the unsuitability of C++ for certain types of problems and around an emerging language called Java.

Fast forward to 2017, and the focus has moved from Java to the JVM and its ecosystem. It is Scala, Akka, Groovy, Grails, Clojure and Spring. The same happened with .NET, giving it an ecosystem for C#, F# and Visual Basic to thrive. Erlang’s natural progression was no different. As time progressed, the BEAM came along, and new languages were created to run on it. Reia, by Tony Arcieri, was the first (who ever said that a Ruby-flavoured Erlang was a bad idea?) and Efene, a C-flavoured language by Mariano Guerra first presented at the Erlang eXchange in 2008, is still used in production today!

The conferences evolved from a language conference to a conference on the Erlang ecosystem, where the BEAM and OTP were used to build scalable and resilient systems. They became conferences where communities were exchanging experiences, inspiring and learning from one another. And as we started looking outside of the Erlang ecosystem, our events expanded to include talks on functional programming, concurrency, multi-core and distributed systems.

As a result, the Erlang User Conference, Erlang Factory, and Code Mesh have grown to a roster of global Erlang, Elixir and Alternative Tech conferences which have gone from strength to strength. Who can forget Mike, Joe and Robert bickering together on stage, Martin Odersky joking about how Scala influenced Erlang, Simon Peyton Jones talking about Erlang and Haskell as two childhood friends who grew up together, or Joe Armstrong interviewing Alan Kay! As of today, we organise five tentpole conferences every year, as well as numerous satellite conferences and a thriving partnership with ElixirConf and Lambda Days.


Joe Armstrong and Alan Kay in conversation at Code Mesh 2016

Last month we took Erlang Factory Lite to the Indian Subcontinent for the first time! This was on the back of a successful event in Buenos Aires this March and a sold-out Factory Lite in Rome. It happened alongside some of the best conferences we’ve ever put on, from the Erlang and Elixir Factories in San Francisco, the Erlang User Conferences in Stockholm and Code Mesh in London to co-organising ElixirConf EU in Barcelona.

Introducing Code Sync

On the eve of 2018, the tenth anniversary of our first event, we’re ready for the next phase. I’m excited to announce that all of our conferences are joining Code Sync, a newly launched family of global conferences. Each conference will retain its own personality while staying true to one vision: creating the space for developers and innovators to come together as a community to share their ideas and experiences, learn from one another and invent the future. A new name and brand, with new colleagues and speakers joining our existing roster of contributors, speakers and attendees.

Scheduled for next year, we have:

Code BEAM - Discovering the Future of the Erlang Ecosystem

Previously Erlang and Elixir Factory
Code BEAM SF, San Francisco - 15 - 16 March 2018
Code BEAM STO, Stockholm - 31 May - 1 June 2018

Code BEAM Lites - Satellite conferences of Code BEAM

Previously Erlang and Elixir Factory Lite
Various dates & locations
Milan - 6 April 2018
Berlin - 12 October 2018

Code Elixir - Connecting the Elixir Community

Previously ElixirLDN
London - 16 August 2018

Code Mesh - Exploring Alternative Tech

Name unchanged
London - 8-9 November 2018

We are in the early stages of planning Code BEAM Lite events in New York, Budapest, Bangalore, and Bogota. If interested, join our mailing list and stay tuned.

The creation of the Code Sync family of tech conferences is part of the commitment we have made to open our conferences to a wider audience and to spread the culture of Learn, Share & Inspire globally. The Code Sync team has grown from a single person to a group of five full-time employees and an ever-growing number of local partners, programme committee members and volunteers. None of this would have happened without your continuous support - so we hope you will join our Code Sync conferences and become a member of one global community!

The very first Code Sync conference is Code BEAM SF, taking place in San Francisco on 15-16 March. The Call for Talks and Very Early Bird tickets are already open, so we hope to see you there!

- Francesco

Permalink

Advanced RabbitMQ Support Part II: Deeper Insight into Queues

Before you go any further, you should know that you can test WombatOAM today with a 45-day free trial of WombatOAM 3.0.0.


Introduction

The most important and critical elements of any RabbitMQ installation are its queues. Queues retain messages specific to different use cases across various industrial sectors such as telecommunications, financial systems, automotive and so forth. Queues, and their adherence to AMQP, are essentially why RabbitMQ exists. Not only do they retain messages until consumption; internally, they also implement some of the most complex mechanisms for guaranteeing efficient message propagation through the fabric, while catering for additional requirements such as high availability, message persistence, regulated memory utilisation and so forth.

Queues are, in general, the main focal point of any RabbitMQ installation, which is why RabbitMQ users and support engineers often find themselves carrying out regular checks around queues, as well as ensuring that their host RabbitMQ nodes are precisely configured to guarantee efficient message queueing operations. Typical questions that tend to arise from RabbitMQ users and support engineers include the following (a rough way of answering them by hand is sketched just after the list):

… how many messages are in Queue “A”?

… how many messages are pending acknowledgement in Queue “K”?

… how many consuming clients are subscribed to Queue “R”?

… how much memory is Queue “D” using?

… how many messages in Queue “F” are persisted on disk?

… is Queue “E” alive?
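
All of these figures are exposed per queue by RabbitMQ itself, so as a rough, hand-rolled illustration (not part of WombatOAM) the sketch below queries RabbitMQ’s HTTP management API from Erlang. It assumes the rabbitmq_management plugin is enabled on the default port 15672, the default "/" virtual host (URL-encoded as %2F) and guest/guest credentials; the module and function names are made up for the example. Fields such as "messages", "messages_unacknowledged", "consumers" and "memory" in the returned JSON answer the questions above.

%% Minimal sketch: fetch the management API's JSON description of one queue.
%% Adjust host, port, vhost and credentials for your installation.
-module(queue_probe).
-export([info/1]).

info(QueueName) ->
    %% inets provides the httpc client
    {ok, _} = application:ensure_all_started(inets),
    Url = "http://localhost:15672/api/queues/%2F/" ++ QueueName,
    Auth = base64:encode_to_string("guest:guest"),
    Headers = [{"authorization", "Basic " ++ Auth}],
    case httpc:request(get, {Url, Headers}, [], []) of
        {ok, {{_, 200, _}, _, Body}} ->
            {ok, Body};   %% decode with a JSON library such as jsx if desired
        {ok, {{_, Code, _}, _, _}} ->
            {error, {http_status, Code}};
        {error, Reason} ->
            {error, Reason}
    end.

Calling queue_probe:info("SERVICES.QUEUE") from an Erlang shell then returns the raw JSON for that queue; the WombatOAM Queues agent described below effectively automates this kind of collection and charts it over time.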

Within RabbitMQ, the implementation of a queue combines multiple aspects: the behaviour specification governing its operation (internally known as the backing queue behaviour), the transient/persistent message store components and, most importantly, the queue process, which implements the queueing logic itself. Together these expose a number of attributes that give an indication of the current state of the queue. Some of these queue attributes are illustrated below:

Fig 1: RabbitMQ Queue Attributes

WombatOAM

As of WombatOAM 2.7.0, the WombatOAM-RabbitMQ plugin ships with an additional agent, the RabbitMQ Queues agent. This agent has been designed and developed specifically to monitor and acquire queue-specific metrics and to present them to RabbitMQ users in a user-friendly manner. Two modes of operation are supported:

Dynamic operation: queues on the monitored node whose names match a user-defined regex are picked up dynamically by WombatOAM for monitoring.

Static operation: specific queues are configured and monitored as defined in the WombatOAM RabbitMQ configuration.

Configuration

The manner in which this agent operates and presents metrics depends solely on the way in which it has been configured.

1. Dynamic operation

Dynamic monitoring of queues is configured by defining a match specification against which queue names are matched, together with the desired attribute/metric to collect from each matched queue. For example, to monitor the memory usage of all queues, the following configuration may be defined in the wombat.config file:

{set, wo_plugins, plugins, rabbitmq_queues, dynamic_queues,
 [{match_spec, ".*"},
  {metric, memory}]}.

This will capture all queues on the monitored node and present a memory metric for each of them.

Fig 2: RabbitMQ Dynamic Queue Metrics
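
The match specification does not have to catch every queue. Assuming the same configuration shape, and assuming the message-count attribute uses the same name (messages) as in the static configuration below, an entry like the following would restrict dynamic monitoring to queues whose names start with SERVICES. and collect their message counts instead of memory:

%% Hypothetical variant of the entry above: only queues whose names
%% start with "SERVICES." are matched, and their message counts collected.
{set, wo_plugins, plugins, rabbitmq_queues, dynamic_queues,
 [{match_spec, "^SERVICES\\..*"},
  {metric, messages}]}.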

2. Static operation

In static mode, users explicitly specify, in wombat.config, the queues and the corresponding attributes/metrics they would like to monitor. A complete static configuration entry consists of the queue name, the virtual host and the attribute being measured. For example, to monitor the number of messages, the number of consumers and the memory utilisation of SERVICES.QUEUE, and only the number of messages of EVENTS.QUEUE, a user may specify the following configuration in the wombat.config file:

{set, wo_plugins, plugins, rabbitmq_queues, static_queues,
[{<<"SERVICES.QUEUE">>, <<"/">>, messages},
{<<"SERVICES.QUEUE">>, <<"/">>, memory},
{<<"SERVICES.QUEUE">>, <<"/">>, consumers},
{<<"EVENTS.QUEUE">>, <<"/">>, messages}]}.

Configuring static queues is particularly important for mission-critical queues for which you need continuous visibility of metrics such as message counts and memory usage.

The following illustrates an example of static mode:

Fig 3: RabbitMQ Static Queue Metrics

Taking “things” further!

Coupling this discussion of queue monitoring with Part I of this series, which covered advanced alarming for RabbitMQ operations, imagine how many alarm scenarios we could cover by defining alarms specific to queue metrics.

Not only does WombatOAM provide us with useful metrics, it also gives us a huge spectrum of alarm scenarios to handle. Imagine how useful the following alarms would be:

“an alarm which, when triggered, would send your team email notifications indicating that the number of messages in your most critical SERVICE.QUEUE has just reached the 500 000 message limit without any messages being consumed”

Or:

“an alarm configured to issue email notifications when the number of consuming clients falls below a certain minimum permissible number, indicating that there is a critical, service-affecting problem on the client end”

or even more interesting:

“an alarm and email notification issued when a queue’s individual memory usage exceeds a certain cap, beyond which one or more problems may be manifesting in the cluster.”

Defining such alarms can be as simple as adding the corresponding entries to wombat.config, as illustrated below.

Fig 4: RabbitMQ Queue Alarms
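
For comparison, and purely as an illustration of the kind of check such an alarm performs (WombatOAM defines these declaratively in wombat.config, as in Fig 4), the first scenario could be approximated outside WombatOAM with OTP’s standard alarm_handler. The module name, alarm id and threshold below are arbitrary choices for the example:

%% Illustrative only; WombatOAM handles this declaratively via wombat.config.
%% Raise an alarm when a queue's message count crosses the limit,
%% and clear it again once the backlog drains.
-module(queue_depth_alarm).
-export([check/2]).

-define(LIMIT, 500000).

check(QueueName, MessageCount) when MessageCount >= ?LIMIT ->
    alarm_handler:set_alarm({{queue_depth, QueueName},
                             {messages, MessageCount}});
check(QueueName, _MessageCount) ->
    alarm_handler:clear_alarm({queue_depth, QueueName}).

Sending an email notification on top of the raised alarm is exactly the part that WombatOAM’s alarming, covered in Part I, handles for you.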

Conclusion

So with these capabilities in mind, imagine the total number of queue-specific metrics attainable for monitoring in WombatOAM. The number can be immense, limited only by the total number of queues you have running and the number of attributes you have configured for monitoring. All of this depends on your configuration. To be precise, a total of 16 attributes are configurable per queue in WombatOAM, meaning a total of 16 x N queue-specific metrics are attainable (wow!). So imagine a queue count of ~50 or more queues on a RabbitMQ installation: that’s ~50 x 16 = a staggering 800 metrics!

Since the number of available queue metrics has the potential to be extremely large, WombatOAM also provides the ability to order queues as desired. The rate at which metrics are acquired is also configurable; if you wish to reduce the frequency at which metrics are gathered (recommended when you have an extremely large number of queues and queue metrics configured), this can be done simply by updating the configuration.


Erlang Solutions offers world-leading RabbitMQ consultancy, support & tuning solutions. Learn more >

Permalink

What's new in Elixir - Dec/17

Today’s post marks the first in a new series bringing you the latest changes to the Elixir language. We’d love to hear from you about what you’d like to see in future posts, so join the conversation on the Elixir Forum thread.

So what’s in master? Let’s have a look:

  1. Disagreements about formatting are a thing of the past! As part of 1.6 we’ve added a code formatter to Elixir. The formatter is available in projects via the mix format task. The community has already helped format all files in the Elixir codebase, and you can give the formatter a try now.

  2. The all-new DynamicSupervisor behaviour is now available on master. Unlike the traditional Supervisor strategies, the DynamicSupervisor allows children to be added dynamically via start_child/2. For more on the DynamicSupervisor, check out the documentation.

  3. Look for changes in compiler diagnostics as part of this new release that make integration with editors easier. An all-new Mix.Task.Compiler behaviour will ensure existing and future compilers meet a common specification and return adequate diagnostics. These changes will enable editors to provide better support for Elixir code compilation. Jake Becker, one of the feature’s contributors, outlined these benefits in his blog post ElixirLS 0.2: Better builds, code formatter, and incremental Dialyzer.

  4. Improvements to the mix xref task should make it easier for developers to make sense of its output. These improvements include the new graph --format stats command and a new --include-siblings option for all xref commands in umbrella projects. For more information on the xref changes, check out the CHANGELOG entry.

  5. Stream data and property testing will be joining Elixir core in a future release. Not only will these be useful to users of Elixir but they’ll be used to make Elixir itself better! See our previous announcement for more information and give the stream_data library a try.

Think we missed something? Let us know at the Elixir Forum.

Permalink

Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.