Overview of the JavaScript ecosystem

What can I accomplish with JavaScript?

The JavaScript ecosystem has grown enormously. Long gone are the days of simply dropping jQuery into your website to fade things in and out.

Entering the world of JavaScript today is an overwhelming task with lots of possibilities. It also means that it’s a world that’s brimming with opportunity. In the words of Jeff Atwood (http://blog.codinghorror.com/the-principle-of-least-power/):

Any application that can be written in JavaScript, will eventually be written in JavaScript.

The different aspects of JavaScript

There has never been a better time to find a niche within the JavaScript ecosystem. Here’s a list of aspects you can dive into, each of which this article explores in more detail:

  1. Front-end development
  2. Command line interface (CLI) applications
  3. Desktop (GUI) applications
  4. Mobile applications
  5. Back-end development
  6. Any combination of the above

Front-end development

AngularJS

Developing the user-facing part of websites has become increasingly complex: pages are highly interactive and offload traditional server-side tasks to the front-end. It was once unfathomable that we’d be running the likes of Google Maps, Spotify or YouTube in our web browsers, but here we are, with a varied toolset for building complex web applications.

Front-end web development has grown exponentially in the last few years and I’ll offer just a glimpse of that here.

The basics of front-end web development

For a long time, JavaScript was used solely for DOM manipulation, with the odd animation thrown in for good measure. From the very beginning, there were big discrepancies between browsers’ features.

jQuery started a revolution by abstracting away those browser differences and making DOM manipulation easy, while also bringing quite a few utilities to the table.

Nowadays, it’s quite easy to manipulate the DOM with pure JavaScript and there’s a very nice cheat sheet just for that purpose.
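
For example, a manipulation that once required jQuery can be done with standard DOM APIs (a minimal sketch; the element ID is hypothetical):

// with jQuery
$('#greeting').addClass('visible').text('Hello!');

// with pure JavaScript
var greeting = document.querySelector('#greeting');
greeting.classList.add('visible');
greeting.textContent = 'Hello!';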

Efficiency through frameworks

With websites growing in complexity and turning into full web applications, there was a need to address their complex issues (state handling, data binding, view rendering, etc.). Many frameworks rose to that challenge, and the two that are probably the most popular today are AngularJS and React.

It’s no coincidence that Angular and React gained such traction: the former is backed by Google and the latter by Facebook. While Angular covers the whole MVC paradigm, React is somewhat leaner and mostly considered the V of MVC.

New frameworks show up all the time, and only time will tell which one will reign supreme (if something like that even happens, of course).

What’s in a name?

There’s a good chance that you won’t be writing plain JavaScript anymore, but one of the languages that transpile to JavaScript, like:

  • ECMAScript 6 — the newest spec of JavaScript
  • TypeScript — Microsoft’s superset of JavaScript featuring types

Apart from adding new features to the language, there’s a good chance you’ll be modularising your application using ES6 native modules, CommonJS (mostly for Node.js development) or RequireJS (async module loading, mostly for websites).
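
As a minimal sketch, here is the same module in CommonJS and ES6 syntax (the math.js/app.js file names are hypothetical):

// CommonJS: math.js
module.exports.square = function (x) { return x * x; };
// CommonJS: app.js
var math = require('./math');
math.square(4); // 16

// ES6 native modules: math.js
export function square(x) { return x * x; }
// ES6 native modules: app.js
import { square } from './math';
square(4); // 16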

Transpiling and connecting modularised applications is done via build tools (Gulp and Grunt, covered in more detail later), transpilers (like Babel or Traceur) and module bundlers (like Browserify or Webpack). You’ll most likely be transpiling and bundling your modules in every aspect of JavaScript development.

There’s a boatload of tools that weren’t mentioned; exploring them is left to the reader, and a good starting place is the awesome list of front-end development.


Command line interface (CLI) applications

Gulp running a gulpfile

Many developers rely mostly on the CLI in their day-to-day development — be it code linting, task running or starting a server, there’s a certain beauty in the efficiency of executing a task purely from the command line.

CLI applications are written using Node.js (or io.js, a fork of Node.js which is going to be merged back into Node.js soon). Node.js is an open source, cross-platform runtime environment that allows you to execute your JavaScript code anywhere via Chrome’s JavaScript runtime, not just in the browser as before. In essence, once someone installs Node.js and grabs your CLI application (package), they can run it.
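
A CLI application can be as small as a single script (a minimal sketch; the file name is hypothetical):

// hello.js: run with `node hello.js <name>`
var name = process.argv[2] || 'world'; // process.argv holds the command line arguments
console.log('Hello, ' + name + '!');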

Package managers

It would be really bad if you had to write every piece of functionality of every app from scratch. That’s where npm steps in. npm is a package manager for Node.js modules, and using packages is really simple — install and require them.
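
For example (a minimal sketch, using the express package purely for illustration):

// in the shell: npm install express
// in your code:
var express = require('express'); // the installed package is now available
var app = express();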

The CLI application that you write can also be packaged as a Node.js module and distributed via npm. That is the preferred way of getting your CLI application (or Node.js modules for that matter) to other people.

Many popular libraries and tools, like Gulp or Grunt, have CLI applications for easier use. There’s also a list of awesome Node.js CLI apps.

Build tools

Build tools (and task runners) get a special mention because they’re the most basic tools you’ll be using no matter what type of application you’re building.

The most popular build tools nowadays are Grunt and Gulp, which make the process of transforming your code into something usable much easier. A typical scenario (sketched in the gulpfile after this list) is to:

  • transpile your JavaScript from ECMAScript 6 to ECMAScript 5
  • compile your SCSS to CSS
  • minify and concatenate the resulting files
  • copy everything to a distribution folder
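
Here is a minimal gulpfile sketch of that scenario (assuming the gulp-babel, gulp-sass, gulp-concat and gulp-uglify plugins are installed; paths are hypothetical):

// gulpfile.js
var gulp = require('gulp');
var babel = require('gulp-babel');   // ES6 to ES5
var sass = require('gulp-sass');     // SCSS to CSS
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('src/js/**/*.js')
    .pipe(babel())                   // transpile ES6 to ES5
    .pipe(concat('app.min.js'))      // concatenate the resulting files...
    .pipe(uglify())                  // ...and minify them
    .pipe(gulp.dest('dist/js'));     // copy to a distribution folder
});

gulp.task('styles', function () {
  return gulp.src('src/scss/**/*.scss')
    .pipe(sass())                    // compile SCSS to CSS
    .pipe(concat('app.min.css'))
    .pipe(gulp.dest('dist/css'));
});

gulp.task('default', ['scripts', 'styles']);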

Desktop (GUI) applications

Slack

Applications are mostly moving to the web or onto mobile devices. Still, desktop applications offer a level of immersion mostly unavailable to web applications.

The biggest advantage of writing your desktop applications with JavaScript is the abstraction of the platform you’re coding for. Your applications are cross-platform, and the modules you use simplify the use of typical desktop features (such as tray icons, notifications, global keyboard shortcuts, etc.).
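
For instance, with Electron (introduced below) a tray icon and a global shortcut might be registered roughly like this (a sketch against Electron’s main-process API, assuming an icon.png next to the script):

const { app, Tray, globalShortcut } = require('electron');

let tray = null;
app.on('ready', function () {
  // a cross-platform tray icon
  tray = new Tray(__dirname + '/icon.png');
  tray.setToolTip('My desktop app');

  // a system-wide keyboard shortcut
  globalShortcut.register('CommandOrControl+Shift+K', function () {
    console.log('global shortcut pressed');
  });
});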

Having a good project structure allows you a lot of code reuse between your web and desktop application. That in turn leads to easier maintenance.

Available tools

There are two popular projects which allow you to write a desktop application via HTML/JS:

  • NW.js — formerly known as node-webkit, it’s the most popular way of writing native desktop applications
  • Electron — a newer contender made by GitHub which already gained big traction in the same space

Notable applications

Both of the mentioned projects are used in quite a few popular desktop applications.

Notable applications done with NW.js or Electron include Slack, Game Dev Tycoon, GitHub Atom, WhatsApp Desktop, Facebook Messenger Desktop, Popcorn Time and Microsoft Visual Studio Code. There’s an extensive list of projects made with NW.js and an extensive list of projects made with Electron (both containing links to repositories for learning or contributing purposes).


Mobile applications

Facebook Mobile applications made with React Native

With such a booming market, it makes sense to develop mobile applications. The JavaScript ecosystem provides a few solutions for developing cross-platform (iOS, Android and Windows Phone) applications. The most popular projects include:

  • Ionic
  • Phonegap
  • React Native

Ionic and Phonegap use a browser wrapper around your HTML/JS and provide access to otherwise unavailable platform features (camera, various sensors, etc.). Ionic leverages the power of Angular to provide a well-tested and stable platform.

Facebook’s React Native has an interesting approach in which they render your written application to higher-level platform-specific components to achieve a truly native look. This means that you’ll have to write a separate view layer for each platform, but you’ll do it in a consistent manner. In the words of Tom Occhino, a software engineer at Facebook, they’re trying the approach of “learn once, write anywhere”, which is completely in the spirit of such a diverse ecosystem as this one.
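
A component in early React Native might look roughly like this sketch (JSX syntax; the component name is hypothetical):

var React = require('react-native');
var { View, Text } = React;

var Greeting = React.createClass({
  render: function () {
    // rendered with native platform components, not a browser wrapper
    return (
      <View>
        <Text>Hello from React Native!</Text>
      </View>
    );
  }
});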

Notable applications

While React Native doesn’t support Android just yet, it’s great that Facebook is using it in their own apps already (Facebook Groups and Facebook Ads Manager). Android support should arrive in less than two months.

Mobile applications written in Ionic or Phonegap include popular applications such as Sworkit, Mallzee, Chefsteps, Snowbuddy and Rormix. There are extensive lists of applications built with Ionic and applications built with Phonegap.


Back-end development

Node.js

Node.js is also the main driving force in back-end development in JavaScript.

The main advantage of Node.js is its event-driven, non-blocking I/O model, which makes it great at handling data-intensive real-time applications with many concurrent requests. Node.js does this by handling all those concurrent requests on a single thread, thereby greatly reducing the system resources needed and allowing for great scalability.
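
The non-blocking part is easy to see in code (a minimal sketch; the file path is hypothetical):

var fs = require('fs');

// the callback fires once the read completes; in the meantime the
// single thread stays free to handle other requests
fs.readFile('/tmp/data.json', 'utf8', function (err, contents) {
  if (err) throw err;
  console.log(contents);
});

console.log('this line runs before the file has been read');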

A typical example of these benefits is a chat application: it requires uninterrupted connections from clients to a chat room (real-time, non-blocking) and notifications of new messages (event-driven), while supporting large numbers of clients and rooms (data-intensive).

You can also write a fairly decent web server in JavaScript. The main takeaway is that its purpose shouldn’t be CPU-intensive tasks or connections to a relational database, but handling a high volume of connections.

The most popular modules associated with back-end development are:

  • express — simple web framework for Node.js
  • socket.io — module for building real-time web applications
  • forever — module for ensuring that a given Node.js script runs continuously

How these modules fit together

First of all, you need a web server which can process typical HTTP requests on various routes like http://localhost:3000/about. That’s where express comes in.

For an uninterrupted connection with the server, socket.io is used, with a server-side and a client-side component for establishing connections.

Since express runs on one thread, we must ensure that an exception doesn’t stop the process (and the server altogether). For that purpose, we use forever.
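
Put together, a minimal server might look like this sketch (run with `node server.js` after `npm install express socket.io`, and keep it alive with `forever start server.js`):

var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

// a typical HTTP route, handled by express
app.get('/about', function (req, res) {
  res.send('About this server');
});

// uninterrupted connections, handled by socket.io
io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    io.emit('chat message', msg); // broadcast to all connected clients
  });
});

http.listen(3000);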

To learn more about these modules, visit their respective websites which feature many tutorials (socket.io even has you building a chat server and client as a hello world application).


Any combination of the above

Meteor JavaScript app platform

It’s easy to imagine how all these aspects come together.

One of the most popular ways of combining them is using a full-stack JavaScript framework like MEAN or Meteor.
MEAN combines express, Angular, Node.js and MongoDB to offer a web platform whose back-end as well as front-end are written in JavaScript.
Meteor is a platform offering full-stack web and mobile application development in JavaScript.

Another example could be a JavaScript minifier for which you write the base module for Node.js and then use that module in a CLI application, a desktop application, a mobile application and a web application (served by express, of course) — all in JavaScript.

The possibilities are endless and we’re probably just scratching the surface. This ecosystem is exploding with new techniques, frameworks, modules and even language specs being defined all the time. It’s really exciting.


Where should I start?

That depends on how familiar with JavaScript you are right now.

If you’re just starting out, there are great resources for learning JavaScript (or programming, for that matter) like Codecademy, Nodeschool or Codeschool, which are all interactive and fun.

If you’ve got some jQuery knowledge under your belt and have been dabbling with pure JavaScript, know that no framework or library is ever going to replace a good understanding of core JavaScript. Start by really digging into the nitty-gritty of JavaScript. For that purpose, I can’t recommend Kyle Simpson’s You Don’t Know JS series enough. It’s open source and available on GitHub, which makes it really easy for you to contribute fixes for errors you notice in the books. The books are also available as hard copies if you prefer reading that way, with the added benefit of supporting the author.

With a strong JavaScript core, it would be wise to brush up on Node.js. As you’ve seen, it’s the basis for almost all of these aspects. Node.js promotes asynchronous programming, which takes a while to get accustomed to but avoids the problem of blocking I/O. The aforementioned learning sources (Nodeschool and Codeschool) can also be used here.

After that, just follow the path that seems the most interesting. Chances are, you’ll fall deeper down the rabbit hole, discover new things and enjoy the experience even more.

comSysto loves JavaScript

Getting Started with D3.js

Are you thinking about including some nice charts and graphics in your current project? Maybe you’ve heard about D3.js, which some people claim to be the universal JavaScript visualization framework. Maybe you’ve also heard about its steep learning curve. Let’s see if that’s really true!

First of all, what is D3.js?
D3.js is an open source JavaScript framework written by Mike Bostock that helps you manipulate documents based on data.

Okay, let’s first have a look at the syntax
Let’s look at the following hello world example. It will append an <h1> element saying ‘Hello World!’ to the content <div> element.

<!DOCTYPE html>
<html>
    <head>
        <script src="http://d3js.org/d3.v3.min.js"></script>
    </head>
    <body>
    <div id="content"></div>
        <script> 
            d3.select('#content')
                .append('h1')
                .text('Hello World!');
        </script>
    </body>
</html>

As you can see, the syntax is very similar to frameworks like jQuery, and it saves you a lot of lines of code by offering a nice fluent API.

But let’s see how we can bind data to it:

d3.select('#content')
   .selectAll('h1')
   .data(['Sarah', 'Robert', 'Maria', 'Marc'])
   .enter()
   .append('h1')
   .text(function(name) {return 'Hello ' + name + '!'});

What happens? The data function gets our names array as a parameter, and for each name we append an <h1> element with a personalized greeting message. For a second, we ignore the selectAll('h1') and enter() method calls, as we will explore them later. Looking at the browser, we see the following:

Hello Sarah!
Hello Robert!
Hello Maria!
Hello Marc!

Not bad for a start! Inspecting the element in the browser, we see the following generated markup:

[...]
    <div id="content">
        <h1>Hello Sarah!</h1>
        <h1>Hello Robert!</h1>
        <h1>Hello Maria!</h1>
        <h1>Hello Marc!</h1>
    </div>
[...]

This already shows one enormous advantage of D3.js: you actually see the generated code and can spot errors easily.

Now, let’s have a closer look at the data-document connection
As mentioned in the beginning, D3.js helps you manipulate documents based on data. So we only need to take care of handing the right data to D3.js, and the framework does the magic for us. To understand how D3.js handles data, we’ll first have a look at how data might change over time. Let’s take the document from our last example; every name is one data entry.

Data-Document Example 1

Easy. Now let’s assume new data comes in:

Data-Document Example 2

As new data comes in, the document needs to be updated: the entries for Robert and Maria need to be removed, Sarah and Marc can stay unchanged, and Mike, Sam and Nora each need a new entry. Fortunately, using D3.js we don’t have to care about finding out which nodes need to be added and removed; D3.js takes care of it. It will also reuse old nodes to improve performance. This is one key benefit of D3.js.

So how can we tell D3.js what to do when?
To let D3.js update our data, we initially need a data join, so D3.js knows our data. To do so, we select all existing nodes and connect them with our data. We can also hand over a key function, so D3.js knows how to identify data nodes. As we initially don’t have any <h1> nodes, the selectAll function will return an empty selection.

var textElements = d3.select('#content').selectAll('h1').data(data, function(d) { return d; });

After the first iteration, the selectAll will hand over the existing nodes, in our case Sarah, Robert, Marc and Maria. So we can now update these existing nodes. For example, we can change their CSS class to grey:

textElements.attr({'class': 'grey'});

Additionally, we can tell D3.js what to do with entering nodes, in our case Mike, Sam and Nora. For example, we can append an <h1> element for each of them and set its CSS class to green:

textElements.enter().append('h1').attr({'class': 'green'});

As D3.js has now updated the old nodes and added the new ones, we can define what will happen to both of them. In our case this affects the nodes of Mike, Sarah, Sam, Marc and Nora. For example, we can rotate them:

textElements.attr({'transform': 'rotate(30 20,40)'});

Furthermore, we can specify what D3.js will do with nodes like Robert and Maria that are no longer contained in the data set. Let’s change their CSS class to red:

textElements.exit().attr({'class': 'red'});
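
Putting the join, update, enter and exit steps together, a complete update function might look like this sketch (using the class names from above):

function update(data) {
  // data join with a key function so D3.js can identify nodes
  var textElements = d3.select('#content').selectAll('h1')
      .data(data, function (d) { return d; });

  // update nodes that are already on the page
  textElements.attr({'class': 'grey'});

  // append entering nodes
  textElements.enter().append('h1')
      .attr({'class': 'green'})
      .text(function (d) { return 'Hello ' + d + '!'; });

  // mark exiting nodes
  textElements.exit().attr({'class': 'red'});
}

update(['Sarah', 'Robert', 'Maria', 'Marc']); // initial data
update(['Sarah', 'Marc', 'Mike', 'Sam', 'Nora']); // new data comes in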

You can find the full example code to illustrate the data-document connection of D3.js as JSFiddle here: https://jsfiddle.net/q5sgh4rs/1/

But how to visualize data with D3.js?
Now that we know about the basics of D3.js, let’s get to the most interesting part: drawing graphics. To do so, we use SVG, which stands for Scalable Vector Graphics. Maybe you already know it from other contexts. In a nutshell, it’s an XML-based vector image language supporting animation and interaction. Fortunately, we can just add SVG tags to our HTML and all common browsers will display them directly. This also facilitates debugging, as we can inspect the generated elements in the browser. In the following, we see some basic SVG elements and their attributes:

SVG elements

To get a better understanding of what SVG looks like, here is a basic example of SVG code generating a rectangle, a line and a circle.

<svg>
 <rect x="10" y="15" width="60" height="20" />
 <line x1="95" y1="35" x2="105" y2="15" />
 <circle cx="130" cy="25" r="6" />
</svg>

To generate the same code using D3.js, we need to add an SVG to our content <div> and then append the three elements with their attributes like this:

var svg = d3.select('#content').append('svg');
svg.append('rect').attr({x: 10, y: 15, width: 60, height: 20});
svg.append('line').attr({x1: 95, y1: 35, x2: 105, y2: 15});
svg.append('circle').attr({cx: 130, cy: 25, r: 6});

Of course, for static SVG code, we wouldn’t do this, but as we already saw, D3.js can fill attributes with our data. So we are now able to create charts! Let’s see how this works:

<div id="content"></div>
<script>
 d3.select('#content')
        .append('svg')
            .selectAll('rect')
            .data([100, 200, 150, 60, 50])
            .enter()
            .append('rect')
                .attr('x', 0)
                .attr('y', function(data, index) {return index * 25})
                .attr('height', 20)
                .attr('width', function(data, index) {return data});
</script>

This will draw our first bar chart for us! Have a look at it: https://jsfiddle.net/tLhomz11/2/

How to turn this basic bar chart into an amazing one?
Now that we started drawing charts, we can make use of all the nice features D3.js offers. First of all, we will adjust the width of each bar to fill the available space by using a linear scale, so we don’t have to scale our values by hand. To do so, we specify the range we want to map values into and the domain we have. In our case, the data is between 0 and 200 and we would like to scale it to a range of 0 to 400, like this:

var xScale = d3.scale.linear().range([0, 400]).domain([0,200]);

If we now specify x values, we just use this function and get an equivalent value in the right range. If we don’t know the maximum value for our domain, we can use the d3.max() function to calculate it based on the data set we want to display.
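
For example (a small sketch using the data from the bar chart above):

var data = [100, 200, 150, 60, 50];
var xScale = d3.scale.linear()
    .range([0, 400])
    .domain([0, d3.max(data)]); // d3.max(data) === 200 here
xScale(100); // 200, i.e. the value mapped into the target range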

To add an axis to our bar chart, we can use the following function and call it on our SVG. To get it in the right position, we need to transform it below the chart.

svg.append('g')
    .attr('transform', 'translate(0, 130)') // move the axis below the bars
    .call(d3.svg.axis().scale(xScale).orient("bottom"));

Now, we can also add interaction and react to user input. For example, we can show an alert if someone clicks on our chart:

svg.on("click", function () {
    alert("Houston, we get attention here!");
});

Adding a text node for each line, we get the following chart rendered in the browser:

Coding Example Result

If you would like to play around with it, here is the code: https://jsfiddle.net/Loco5ddt/

If you would like to see even more D3.js code, using the same data to display a pie chart and adding an update button, look at the following one: https://jsfiddle.net/4eqzyquL/

Data import
Finally, we can import our data in CSV, TSV or JSON format. To import a JSON file, for example, use the following code. Of course, you can also fetch your JSON via a server call instead of importing a static file.

d3.json("data.json", function(data) {
    [access your data using the data variable]
}

What else does D3.js offer?
Just to name a few things, D3.js helps you with layouts, geometry, scales, ranges, data transformation, array and math functions, colors, time formatting and scales, geography, as well as drag & drop.

There are a lot of examples online: https://github.com/mbostock/d3/wiki/Gallery

TL;DR
+ based on web standards
+ totally flexible
+ easy to debug
+ many, many examples online
+ libraries built on D3.js (NVD3.js, C3.js or IVML)
– a lot of code compared to other libraries
– too heavy for standard charts

Learning more
As this blog post is based on a presentation held at the MunichJS Meetup, you can find the original slides here: http://slides.com/elisabethengel/d3js#/ The recording is available on YouTube: https://www.youtube.com/watch?v=EYmJEsReewo

For further information, have a look at:

Cross Language Benchmarking Part 3 – Git submodules and the single-command cross language benchmark

In my recent blog posts (part 1, part 2) I have described in detail how to do micro benchmarking for Java and C/C++ with JMH and Hayai. I have presented a common execution approach based on Gradle.

Today I want to improve the overall project structure. Last time I already mentioned that the project structure of the Gradle projects is not optimal. In the first part I will briefly recap the main goal and proceedings from the past articles, then introduce some new requirements, and finally I will present a more flexible module structure to split production code and benchmarks, which will then be embedded in a cross-language super-project.
Continue reading

Developing a Modern Distributed System – Part II: Provisioning with Docker

As described in an earlier blog post “Bootstrapping the Project”, comSysto’s performance and continuous delivery guild members are currently evaluating several aspects of distributed architectures in their spare time left besides customer project work. In the initial lab, a few colleagues of mine built the starting point for a project called Hash-Collision:

Hash-collision's system structure

They focused on the basic application structure and architecture to get a showcase up and running as quickly as possible and left us the following tools for running it:

  • one simple shell script to set up the environment locally based on many assumptions
  • one complex shell script that builds and runs all services
  • hardcoded dependency on a local RabbitMQ installation

First Attempt: Docker containers as a runtime environment

In search of a more sophisticated runtime environment we went down the most obvious path and chose to get our hands on the hype technology of late 2014: Docker. I assume that most people have a basic understanding of Docker and what it is, so I will not spend too much time on its motivation here. Basically, it is a tool inspired by the idea of ‘write once, run anywhere’, but on a higher level of abstraction than that famous programming language. Docker can not only make an application portable; it also allows you to ship all dependencies such as web servers, databases and even operating systems as one or multiple well-defined images, and to use the very same configuration from development all the way to production. Even though we did not even have any production or pre-production environments, we wanted to give it a try. Being totally enthusiastic about containers, we chose the most container-like place we could find and locked ourselves in there for 2 days.

impact-hub-munich

One of the nice things about Docker is that it encourages developers to re-use existing infrastructure components by design. Images are defined incrementally by selecting a base image, and building additional functionality on top of it. For instance, the natural way to create a Tomcat image would be to choose a base image that already brings a JDK and install Tomcat on top of it. Or even simpler, choose an existing Tomcat image from the Docker Hub. As our services are already built as fat JARs with embedded web servers, things were almost trivial.

Each service should run in a standardized container with the executed JAR file being the sole difference. Therefore, we chose to use only one service image and inject the correct JAR using Docker volumes for development. On top of that, we needed additional standard containers for nginx (dockerfile/nginx) and RabbitMQ (dockerfile/rabbitmq). Each service container has a dependency on RabbitMQ to enable communication, and the nginx container needs to know where the Routing service resides to fulfill its role as a reverse proxy. All other dependencies can be resolved at runtime via any service discovery mechanism.

As a first concrete example, this is the Dockerfile for our service image. Based on Oracle’s JDK 8, there is not much left to do except for running the JAR and passing in a few program arguments:

FROM dockerfile/java:oracle-java8
MAINTAINER Christian Kroemer (christian.kroemer@comsysto.com)
CMD /bin/sh -c 'java -Dport=${PORT} -Damq_host=${AMQ_PORT_5672_TCP_ADDR} -Damq_port=${AMQ_PORT_5672_TCP_PORT} -jar /var/app/app.jar'

After building this image, it is ready for usage in the local Docker repository and can be used like this to run a container:

# start a new container based on our docker-service-image
docker run docker-service-image
# link it with a running rabbitmq container to resolve the amq dependency
docker run --link rabbitmq:amq docker-service-image
# do not block and run it in background
docker run --link rabbitmq:amq -d docker-service-image
# map the container http port 7000 to the host port 8000
docker run --link rabbitmq:amq -d -p 8000:7000 docker-service-image
# give an environment parameter to let the embedded server know it has to start on port 7000
docker run --link rabbitmq:amq -d -e "PORT=7000" -p 8000:7000 docker-service-image
# inject the user service fat jar
docker run --link rabbitmq:amq -d -e "PORT=7000" -v HASH_COLLISION_PATH/user/build/libs/user-1.0-all.jar:/var/app/app.jar -p 8000:7000 docker-service-image

Very soon we ended up with a handful of such bash commands that we pasted into our shells over and over again. Obviously we were not exactly happy with that approach, so we started to look for more powerful tools in the Docker ecosystem and stumbled upon fig (which was not yet deprecated in favor of docker-compose at that time).

Moving on: Docker Compose for some degree of service orchestration

Docker-compose is a tool that simplifies the orchestration of Docker containers all running on the same host system based on a single docker installation. Any `docker run` command can be described in a structured `docker-compose.yml` file and a simple `docker-compose up` / `docker-compose kill` is enough to start and stop the entire distributed application. Furthermore, commands such as `docker-compose logs` make it easy to aggregate information for all running containers.

fig-log-output

Here is an excerpt from our `docker-compose.yml` that illustrates how self-explanatory those files can be:

rabbitmq:
 image: dockerfile/rabbitmq
 ports:
 - ":5672"
 - "15672:15672"
user:
 build: ./service-image
 ports:
 - "8000:7000"
 volumes:
 - ../user/build/libs/user-1.0-all.jar:/var/app/app.jar
 environment:
 - PORT=7000
 links:
 - rabbitmq:amq

Semantically, the definition of the user service is equivalent to the last sample command given above except for the handling of the underlying image. The value given for the `build` key is the path to a directory that contains a `Dockerfile` which describes the image to be used. The AMQ service, on the other hand, uses a public image from the Docker Hub and hence uses the key `image`. In both cases, docker-compose will automatically make sure the required image is ready to use in the local repository before starting the container. A single `docker-compose.yml` file consisting of one such entry for each service is now sufficient for starting up the entire distributed application.

An Aside: Debugging the application within a Docker container

To debug an application running in a Docker container from the IDE, we need to take advantage of remote debugging, just as for any physical remote server. To do that, we defined a second service debug image with the following `Dockerfile`:

FROM dockerfile/java:oracle-java8
MAINTAINER Christian Kroemer (christian.kroemer@comsysto.com)
CMD /bin/sh -c 'java -Xdebug -Xrunjdwp:transport=dt_socket,address=10000,server=y,suspend=n -Dport=${PORT} -Damq_host=${AMQ_PORT_5672_TCP_ADDR} -Damq_port=${AMQ_PORT_5672_TCP_PORT} -jar /var/app/app.jar'

This will make the JVM listen for a remote debugger on port 10000 which can be mapped to any desired host port as shown above.

What we got so far

With a local installation of Docker (on a Mac using boot2docker http://boot2docker.io/) and docker-compose, starting up the whole application after checking out the sources and building all JARs is now as easy as:

  • boot2docker start (follow instructions)
  • docker-compose up -d (this will also fetch / build all required images)
  • open http://$(boot2docker ip):8080/overview.html

Note that several problems originate from boot2docker on Mac. For example, containers cannot be accessed on `localhost`, but only via the IP of a VM, as boot2docker runs Docker inside a VirtualBox image.

In an upcoming blog post, I will outline one approach to migrate this setup to a more realistic environment with multiple hosts using Vagrant and Ansible on top. Until then, do not forget to check if Axel Fontaine’s Continuous Delivery Training at comSysto is just what you need to push your knowledge about modern deployment infrastructure to the next level.

Cross-language benchmarking – Gradle loves native binaries!

Last time I gave you an introduction to my ideas about benchmarking. I explained that comparing performance between different compilers and implementations is as old as programming itself, and, above all, not as simple to set up as it sounds. If you don’t know what I’m writing about, have a look at the former post on this topic. There, I explained how to set up a simple Gradle task which runs JMH benchmarks as part of a Gradle task chain. However, the first article is not a prerequisite for this one at all!

Today I want to start from the other side. As a C++ developer who wants to participate in a performance challenge, I want

  • a framework that outputs numbers that can be compared with other benchmark results
  • to run all benchmarks at once
  • to compare my results with other implementations or compiler assemblies
  • to execute all in one task chain

Continue reading

Cross-language benchmarking made easy?

There is this eternal fight between the different programming languages. “The code in ‘XYZlang’ runs much faster than in ‘ABClang’”. Well, this statement bears at least three misunderstandings. First of all, in most cases it is not the source code you wrote that actually gets executed; second, please define “faster” up front; third, as a general rule of experiments: do not draw conclusions based on a benchmark alone, but see it more as a hint that some things seem to make a difference.

In this article series, I will not discuss numbers and reasons why code X or language Y supersedes the other. There are many people out there who understand the background much better than me; as a short hint, see the very good article about Java vs. Scala benchmarking by Aleksey Shipilëv. There will be no machine code and no performance tweaks that make your code perform 100 times better. I want to present ideas on how you can set such micro benchmarks up in a simple, automated and user-friendly way. In detail, we will come across these topics:

  • How to set up one build that fits all requirements?
  • Gradle in action building across different languages
  • Benchmarking with JMH (Java) and Hayai (C/C++) to prove the concept
  • How to store the results?

Continue reading