What is GraphQL and why should you care?

Hello dear reader!

First of all, I would like to apologise for not having been able to post in the last few weeks; I have been swamped with work. Before continuing, I’d like to wish you all a very happy new year (better late than never), and I couldn’t think of a better post to start this year than this one.

As you all know, most apps depend on a backend system. This backend is responsible for communicating and serving data among the different users, which involves having an API (Application Programming Interface) to access the data. Most APIs are REST (Representational State Transfer) APIs. However, good old Facebook started to run into some limitations and thought they could come up with a better and more efficient system. In this post we will take a bird’s-eye view of how it works and what advantages it brings to the table.

In a nutshell, GraphQL is a syntax that describes how to ask for data, and is generally used to load data from a server to a client. GraphQL has three main characteristics:

  • It lets the client specify exactly what data it needs.
  • It makes it easier to aggregate data from multiple sources.
  • It uses a type system to describe data.
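To make the first point concrete, here is roughly what a query looks like. This is a hypothetical example: the field names (posts, likes, user, avatarUrl) are made up for illustration and are not from any real schema.

```graphql
# Hypothetical query: the client names exactly the fields it wants,
# and the server returns data in that exact shape -- nothing more.
query {
  posts {
    title
    likes {
      user {
        name
        avatarUrl
      }
    }
  }
}
```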

So how did this come to be?

For example, imagine you need to display a list of posts, and under each post a list of likes, including user names and avatars. So you tweak your likes array to contain avatars.
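The tweaked response could look something like this (a hypothetical JSON payload; the field names are purely illustrative):

```json
[
  {
    "title": "My first post",
    "likes": [
      {"name": "alice", "avatar": "https://example.com/alice.png"},
      {"name": "bob",   "avatar": "https://example.com/bob.png"}
    ]
  }
]
```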


But now it’s time to work on your mobile app, and it turns out loading all that extra data is slowing things down. In practice, this means you need two endpoints: one with the likes and one without them.

Furthermore, imagine the likes and the avatars were stored in different databases (say, MySQL for avatars and MongoDB for likes). Now we are in a bit of a mess. Extrapolate this scenario to however many data sources and API clients Facebook has to manage, and you can imagine why good old REST APIs were starting to show their limits.

The Solution

The solution Facebook came up with is conceptually very simple: instead of having multiple “dumb” endpoints, have a single “smart” endpoint that can take in complex queries, and then massage the data output into whatever shape the client requires.
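As a rough sketch of that idea, here is a toy “smart endpoint” in Python. It takes a query describing which fields the client wants and assembles the response from two separate in-memory stores (stand-ins for the MySQL and MongoDB sources mentioned earlier). All the names and data here are made up for illustration; real GraphQL servers do this with resolvers and a typed schema.

```python
# Two separate data sources, faked as dictionaries:
AVATARS = {"alice": "alice.png", "bob": "bob.png"}  # stand-in for MySQL
LIKES = {1: ["alice", "bob"], 2: ["alice"]}         # stand-in for MongoDB

def resolve(query: dict) -> dict:
    """One 'smart' endpoint: fetch only the fields the client asked for.

    query example: {"post_id": 1, "fields": ["likes", "avatars"]}
    """
    post_id = query["post_id"]
    result = {"post_id": post_id}
    if "likes" in query["fields"]:
        result["likes"] = LIKES.get(post_id, [])
    if "avatars" in query["fields"]:
        users = LIKES.get(post_id, [])
        result["avatars"] = {u: AVATARS[u] for u in users}
    return result

# The mobile client can ask for less...
print(resolve({"post_id": 1, "fields": ["likes"]}))
# ...while the web client gets everything in a single round trip.
print(resolve({"post_id": 1, "fields": ["likes", "avatars"]}))
```

The point is that the shaping logic lives in one place on the server, instead of being duplicated across one “dumb” endpoint per client.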

What this means is that there is a layer between the client and multiple data sources that is in charge of receiving requests and deciding how to serve them. A quick metaphor: the old REST model is like ordering pizza, then getting groceries delivered, then calling your dry cleaner to pick up your clothes. Three shops, three phone calls.


GraphQL on the other hand is like having a personal assistant: once you’ve given them the addresses to all three places, you can simply ask for what you want (“get me my dry cleaning, a large pizza, and two dozen eggs”) and wait for them to return.


In other words, GraphQL establishes a standard language for talking to this magical personal assistant. Why Facebook decided not to call it Siri is beyond me ;).

The best way to understand more about GraphQL and its possibilities is through practical examples. Let me know if you are interested and what you would like to see in the next post.

Thanks for reading!

Until next time!


Is Serverless the way to go? – Part 1

If you are currently in the IT business and rely on Internet infrastructure to serve data, you probably know what a pain it is to manage those IT systems. Constantly having to manage servers to increase throughput, lower costs, ensure autoscaling and keep latency at bay is a tremendous effort. It has even spawned a new role in the industry: DevOps.


Amazon Web Services (I’ll use them as the example since they are the leader, but other providers have similar resources) have made giant efforts to help developers with this. One of the resources they offer is Elastic Beanstalk: a system that lets you provision code to a number of instances sitting behind a load balancer. The load balancer is the key, as it can deploy code to instances by taking them out of the group until they are ready, and it handles auto scaling. Auto scaling is driven by triggers: with a CPU trigger, for instance, when overall CPU usage goes above X% the system spins up a new instance with your code and adds it to your group, and when it drops back below the threshold it removes instances. Basically, it helps you manage instances while you keep full control of everything. So, everything covered, no? Amazon thought it was so great that three years ago they made a strange video with a curious voice-over (honestly, with the budget these guys handle you would think they could do better), which you can watch below.
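The CPU trigger described above boils down to a simple decision rule, sketched here in Python. The thresholds and bounds are made-up illustrations; in a real Elastic Beanstalk environment you configure these through scaling policies, not hand-written code.

```python
def desired_instances(current: int, cpu_percent: float,
                      scale_up_at: float = 70.0,
                      scale_down_at: float = 30.0,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Return the instance count after one evaluation of the trigger."""
    if cpu_percent > scale_up_at and current < max_instances:
        return current + 1   # add an instance to the group
    if cpu_percent < scale_down_at and current > min_instances:
        return current - 1   # remove an instance from the group
    return current           # within bounds: leave the group alone

print(desired_instances(2, 85.0))  # spike: scales up to 3
print(desired_instances(3, 20.0))  # idle: scales down to 2
```

Note that each evaluation moves the group by only one instance, which hints at the weakness discussed next: a step-by-step rule reacts slowly to anything that doesn’t grow linearly.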

However, the truth is this system has serious limitations, the first of which is what happens when you get a traffic spike. When you know heavy traffic is coming you can get ready by provisioning more instances, but what happens when it catches you by surprise? Imagine an unexpected viral blog post or a mention by an influencer. You are basically screwed: the load balancer is not particularly agile at sensing traffic that doesn’t grow in a linear fashion (it typically relies on health checks every 1 to 5 minutes), so your system would probably go down as your instances got saturated.
So how do you solve this? How do you prepare for situations like this without spending vast amounts of money on infrastructure provisioning? One of the possibilities is going serverless, and we will analyse what it is and how it works in the next blog post 😉


Hope to see you there!

Parse closes down: Is Backend as a Service profitable?

Parse started sending messages last week that it would be winding down its services. This came as a shock to the many people who have come to rely on it as a way to reduce the complexity of maintaining, deploying and scaling their own backend solution. For those of you who haven’t seen it, you can take a look at the announcement below.

Parse closes

It is clear that Facebook is not joking around and is not keeping services and tools that are not as profitable as it needs them to be. This is a shock for companies and developers, as it demonstrates that, in the end, trusting your backend to an external system can have dire consequences: from the moment you don’t control all the variables, the risk of this type of problem always exists.

Parse is giving a full year until the sun sets on the platform, and it is even open sourcing the Parse Server (which is built on Node.js and requires a MongoDB database). But mind you, self-hosting a Parse Server is by no means easy and is no substitute for the ease of the original Parse dashboard, so other alternatives, like setting up your own APIs and infrastructure, or another Backend as a Service, should be considered.

A few years ago, I published a post named “Is Backend as a Service the new Gold Rush?”. In it I analysed whether systems like Parse, which simplified developing apps with a server side, were the new money-making scheme of the tech world. However, after the Parse announcement, one can’t help thinking that perhaps it is not such a profitable endeavour.

If you stop to think about it, big companies usually build their own backends and host them on systems like Amazon Web Services, and those are the ones that pay the big bills. Small developers that use Parse tend to stay on the free tier (which has a ridiculously high usage limit before it starts charging) or pay very little money.

So, is Backend as a Service really profitable? Will it survive as an open source, self-hosted alternative? Or will companies use this as a launch pad for new and reinforced Parse alternatives?

Let me know your thoughts!