While Serverless is not exactly a new idea, it has certainly been gaining traction in recent months. In this article I will explain what Serverless architecture is and what its advantages and disadvantages are. And most importantly – the use cases!
So what exactly are we talking about?
In a nutshell, going Serverless means offloading your always-on servers and outsourcing their work to short-lived, stateless jobs. These jobs (also called functions or lambdas) are created only when there is something to be done. As soon as they complete and there are no more incoming requests, they are torn down. As you can imagine, this opens the road to a completely new billing model for your business/clients: small, short-lived, stateless jobs allow us to pay per single execution of each “job”!
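To make the idea concrete, here is a minimal sketch of such a stateless function, written in the style of a Python lambda handler. The event shape and the greeting use case are my own illustrative assumptions, not something from a specific provider's API:

```python
def handler(event, context=None):
    """Process one request and return; no state survives the invocation.

    The platform creates an instance of this function when a request
    arrives, bills for the execution, and may tear it down afterwards.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because the function keeps no state between calls, the platform is free to run zero, one, or a thousand copies of it at any moment – which is exactly what makes per-execution billing possible.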
There is also the added benefit of the so-called “no-ops” approach. Since there has to be an efficient runtime and a platform creating and tearing down our jobs, there has to be someone taking care of its maintenance, updates and so on (this could be Google, Microsoft or Amazon). Technically, our short-lived jobs are provided to us in the form of cloud lambdas or cloud functions, depending on the platform you choose.
On Azure you have Azure Functions, Amazon users have AWS Lambda, and Google calls its offering Cloud Functions. Serverless, closely associated with the FaaS (Function as a Service) approach, has been adopted by many global companies, a few of which are Coca-Cola, Reuters, AOL, CodePen and Netflix. For the latter, Serverless architecture supports over a billion hours of streamed video every week! The Netflix use case is especially interesting and I encourage you to read about it here.
How does it really work under the hood?
Obviously, Serverless doesn’t really mean there is no server – except when, briefly, it does: if we are not doing any work, no Lambdas or Functions will be instantiated, and hence no business-logic executors will exist wasting electricity. However, lambdas are not the only thing driving Serverless. FaaS-based architectures require a few key components, and these components need to be managed by some more or less complex platform.
I’ve already told you about jobs/functions/lambdas (pick your favourite name – they’re all the same really). They are the place for most of your business logic, but they are not the only component of a Serverless architecture. Next to the business logic, most computer systems will also have some client application and some data store. Clients need a way of invoking the business logic, but since our lambdas are simple, short-lived task executors, they won’t have an IP address your client can reach them at. For the purpose of orchestration you will need a public API that passes client requests on to our lambdas. We often see systems composed of a bunch of microservices exposed to the rest of the world via one or a few public APIs. This is no different, except for the technology selected to build our private APIs.
So, we can now create a public gateway API that a client can call – this could be as simple as a RESTful Web API – and a couple of lambdas that get executed by this public API, depending on what needs to be done.
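The gateway's job is essentially routing: match an incoming method and path, then hand the payload to the right function. Here is a hypothetical sketch of that dispatch logic – the routes, function names and payload shapes are all invented for illustration:

```python
# Two short-lived "lambdas" holding the business logic.
def create_order(payload):
    return {"status": "created", "order": payload}

def get_order(payload):
    return {"status": "found", "id": payload.get("id")}

# The gateway's routing table: (HTTP method, path) -> function.
ROUTES = {
    ("POST", "/orders"): create_order,
    ("GET", "/orders"): get_order,
}

def gateway(method, path, payload):
    """Dispatch a client request to the matching function, if any."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return {"status": "not_found"}
    return fn(payload)
```

In a real deployment this table lives in the provider's gateway configuration rather than in your code, but the shape of the decision is the same.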
And that’s great! But we still need one final component to make this a functional solution, and that’s a data store. One key requirement for a data store to be fit for Serverless architectures is the ability to raise events whenever something interesting happens – not everything, though: only actions for which some functions have subscribed. Events are at the heart of Serverless. You will have events raised by the database as well as by the lambdas themselves. All this allows for very cost-effective operation, as the functions do not exist when they are not needed. The on-demand nature of lambdas has inspired some database vendors to incorporate it into their products, one of which is Amazon Aurora. Aurora can be configured to automatically start up and shut down if a business scenario requires it!
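A function subscribed to database events simply receives a batch of change records and reacts to the kinds it cares about. The sketch below is loosely modelled on the record batches that stream-enabled stores deliver; the `Records`, `eventName` and `new_image` field names are assumptions for illustration, not any vendor's exact schema:

```python
def on_record_changed(event, context=None):
    """React only to inserts; ignore the change types we didn't subscribe for."""
    processed = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            # "new_image" is a hypothetical field carrying the new row.
            processed.append(record["new_image"])
    return processed
```

The function exists only for the duration of the batch – between events there is nothing running, which is where the cost savings come from.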
There has to be a catch somewhere right?
As we have all learned at one point or another in life – everything comes at a price. This is also true for Serverless. The amazing cost-effectiveness and scalability gained by the use of lambdas and event sourcing come with some limitations:
- Functions, due to their statelessness, are difficult to debug. Extensive logging will often be your best way of diagnosing code problems.
- Cold start. Like all other environments, lambdas can take a while to start up, depending on the runtime configuration. You can tackle this startup time by selecting a runtime language with a smaller “footprint”, like Python. You can also periodically “ping” some of your functions to keep them alive; there are ready-made tools that can help you with this.
- Vendor lock-in. Since you don’t own the platform hosting the functions, you’ll be dependent on Microsoft’s, Google’s or Amazon’s choices of integrations and platform updates.
- Serverless is a relatively new technology, so hiring may be slightly more difficult as experts are few.
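The “ping to keep alive” trick from the list above usually works by having a scheduled trigger invoke the function with a marker event, so the runtime stays resident without doing real work. A minimal sketch, where the `warmup` key is an assumed convention rather than any provider's standard:

```python
def warmable_handler(event, context=None):
    """Handle real requests, but short-circuit scheduled warm-up pings."""
    if event.get("warmup"):
        # Scheduled ping: return immediately, keeping the runtime warm
        # without running any business logic.
        return {"warmed": True}
    return {"result": do_work(event)}

def do_work(event):
    # Placeholder business logic for the sketch.
    return event.get("value", 0) * 2
```

The early return matters: without it, every scheduled ping would execute (and bill for) the full business logic.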
So, is it worth it?
There are some limitations, yes. But the benefit of cost-effectiveness is well worth the inconvenience, as proven by tech giants like Netflix. The world’s favourite chill provider has trusted Serverless to drive its costs down, allowing it to scale quickly in over 190 countries. Lastly, remember that Serverless does not have to be difficult, and you can get started quicker with the help of provider-agnostic frameworks like the Serverless Framework.
I strongly encourage you to visit the Serverless Framework website whether you are interested in their help or not. It is a great library of use cases and other information. You will definitely find it useful on your journey to Serverlessness! Another useful resource you may want to check out is Martin Fowler’s Serverless article.
Hope you’ll find this article helpful and, as always, feel free to leave your thoughts and questions in the comments below.