The Pitfalls of Serverless Computing



Serverless is one of the major trends of the moment in software development and deployment. A promising technology, Serverless Computing is being adopted rapidly by companies. In this model, the cloud provider is fully responsible for launching and executing your application code, and the Serverless platform ensures that the resources necessary for its optimal operation are available.


Most studies show that Serverless Computing technologies are currently experiencing the strongest growth in the highly varied universe of cloud services. Datadog has published the results of a survey that deliberately limits its analysis to the Serverless FaaS (Function as a Service) approach and, more particularly, to its use through AWS Lambda.

The first key finding from the study is that half of AWS users have also adopted AWS Lambda. The research shows that in two years, the concept of Serverless Computing has moved within companies from the experimental or curiosity stage to much more extensive use, with a wide variety of companies already running part of their infrastructure on AWS.

But several realities hide behind this term. Last year's survey of 501 IT professionals by Cloud Foundry found that companies need to be careful when switching to a Serverless architecture. What is behind this warning? Here, we will focus on the disadvantages and possible pitfalls of these Serverless approaches.

Pitfalls of Serverless Computing

Everything is always a question of balance. The significant benefits of Serverless Computing necessarily come with limits and constraints that should not be overlooked or underestimated.

Architecture complexity

Serverless platforms are designed above all for scaling. The direct consequence of this design is that if a database (in the case of Database as a Service) or a function (in the case of Function as a Service) is only rarely called, it will face a longer boot time, especially compared with equivalent resources running on a dedicated server.

The Serverless infrastructure seeks to optimize the use of its underlying resources and therefore frees up anything that is not frequently used, resulting in a longer wake-up time (the caches must be refilled and the execution runtimes reloaded), commonly known as a "cold start."
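One common mitigation, sketched here in Python under the assumption of an AWS Lambda-style `handler(event, context)` signature (the `load_config` helper is a hypothetical stand-in), is to perform expensive initialization at module level so its cost is paid once per container rather than once per invocation:

```python
import json
import time

def load_config():
    # Hypothetical stand-in for expensive setup: opening database
    # connections, loading ML models, parsing large config files, etc.
    time.sleep(0.1)  # simulate slow I/O
    return {"greeting": "Hello"}

# Module-level initialization runs once per container (the "cold start"),
# then is reused by every subsequent "warm" invocation of the handler.
CONFIG = load_config()

def handler(event, context):
    # Warm invocations reuse CONFIG instead of rebuilding it.
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps(f"{CONFIG['greeting']}, {name}!")}
```

This does not eliminate the first slow invocation, but it keeps the slow path out of every call that follows it on the same container.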

Limited freedom

Each Serverless platform comes with its own constraints, which must be known and taken into account. Typically, your functions are limited in code size and, above all, in execution time.

Therefore, it is important to keep in mind that even if these platforms offer excellent scalability, they do not free developers from the obligation to deliver quality code.
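A defensive pattern against the execution-time limit is to check the remaining budget and hand unfinished work back to the caller for a later invocation. The sketch below assumes AWS Lambda's real `context.get_remaining_time_in_millis()` API; the item-doubling "work" and the `FakeContext` used for local testing are illustrative only:

```python
import time

def process_items(items, context, safety_margin_ms=2000):
    """Process only as many items as fit in the remaining time budget.

    Unprocessed items are returned so the caller can re-queue them
    for another invocation instead of being killed mid-work.
    """
    done = []
    for i, item in enumerate(items):
        if context.get_remaining_time_in_millis() < safety_margin_ms:
            return done, items[i:]  # stop early, hand back the rest
        done.append(item * 2)       # placeholder for real work
    return done, []

class FakeContext:
    """Minimal local stand-in for Lambda's context object."""
    def __init__(self, budget_ms):
        self.deadline = time.monotonic() + budget_ms / 1000
    def get_remaining_time_in_millis(self):
        return max(0, int((self.deadline - time.monotonic()) * 1000))
```

With a generous budget everything is processed; with a budget below the safety margin, all items come back untouched for re-queueing.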

Resource constraint for debugging and monitoring

The loss of control inherent in the very concept of Serverless makes it more complex to diagnose and monitor applications, particularly in terms of execution performance and resource use. At the same time, the "pay as you go" model requires you to have a good view of execution times and resources consumed, since these elements will be invoiced to you.

This essential aspect is gradually improving with the maturity of the monitoring tools integrated into the platforms, such as AWS CloudWatch, and the appearance of third-party tools specialized in monitoring serverless resources, such as Dashbird, Epsagon, Thundra, IOpipe, or Stackery, which bring some flexibility to monitoring.
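Since execution time is what gets billed, even lightweight instrumentation pays off. A minimal sketch, assuming nothing beyond the Python standard library: a decorator that logs each invocation's duration as one JSON line on stdout, which on AWS Lambda lands in CloudWatch Logs where metric filters or third-party tools can aggregate it:

```python
import functools
import json
import time

def timed(fn):
    """Log each invocation's duration as a single JSON line on stdout."""
    @functools.wraps(fn)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return fn(event, context)
        finally:
            duration_ms = (time.perf_counter() - start) * 1000
            print(json.dumps({"function": fn.__name__,
                              "duration_ms": round(duration_ms, 2)}))
    return wrapper

@timed
def handler(event, context):
    # Placeholder business logic: echo the incoming event.
    return {"ok": True, "echo": event}
```

Structured (JSON) log lines are easier to query and alert on than free-form text, whichever monitoring tool ends up consuming them.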




More complex security

The introduction of Serverless into your security policies adds even more heterogeneity and complexity. Serverless also tends to increase your attack surface by multiplying access points and technologies. Moreover, these technologies are still relatively immature and poorly understood by security teams and developers. In short, the security of your Serverless resources should not be overlooked and requires special attention, even if the underlying platforms and infrastructures are well protected and defended by the cloud operator.

Vendor lock-in

Serverless consists of relying entirely on a service provided by a third party, which necessarily increases the dependence of your developments on this supplier. Whether it is BaaS (Backend as a Service), FaaS, DBaaS, or to a lesser extent CaaS (Container as a Service), it will not be easy to change providers.

The development frameworks and languages supported by each provider differ considerably. Serverless is, without doubt, one of the cloud technologies where the "lock-in" effect is strongest. But the benefits are often strong enough to offset the risk of this increased dependence.
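One way to soften lock-in is to keep business logic free of provider-specific types and confine each provider's event shape to a thin adapter. A minimal sketch: the AWS adapter unpacks Lambda's real S3 event structure, while the GCP-style adapter and the thumbnail-naming logic are hypothetical illustrations:

```python
def make_thumbnail_name(object_key):
    """Pure business logic: no cloud SDK imports, trivially portable."""
    return object_key.rsplit(".", 1)[0] + "_thumb.jpg"

def aws_handler(event, context):
    """AWS-specific adapter: unpacks Lambda's S3 event shape."""
    key = event["Records"][0]["s3"]["object"]["key"]
    return make_thumbnail_name(key)

def gcp_handler(cloud_event):
    """Hypothetical GCP-style adapter for the same logic."""
    return make_thumbnail_name(cloud_event["name"])
```

A later migration then means rewriting a few adapters, not the core of the application.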

Deployment remains a concern

On paper, Serverless simplifies the deployment phases to the extreme. But when deploying interdependent functions or containers, procedures must be put in place to stop the event-generating services beforehand and then roll out all of the updated containers or functions simultaneously. This is nothing new, but it poses problems of organization, of rollback in case of trouble, and of service availability that Serverless alone does not solve.

Serverless is not a universal solution

Let us be clear: Serverless is not, and does not pretend to be, the comprehensive solution to all your problems. The very concept of "Function as a Service" is well suited to a specific type of development: event-driven programming, where the triggering of events dictates the execution of functionality.

On the other hand, it adapts less well to more monolithic scenarios, with long transactions and intensive computations, or to architectures designed around VMs or containers running in an orchestrator.

Is Serverless architecture a good choice for app development?

Serverless technologies allow a team to start an application by focusing on the business logic of the code rather than the underlying infrastructure. This not only shortens the time to market and improves agility but also leaves more room for team innovation.

Rapid development

The primary interest of Serverless is to shorten the time between the idea of a project and its production. Developers no longer have to worry about physical infrastructure, resources, and operating systems. They can focus their attention entirely on code quality and functionality without wasting time on the software plumbing required for deployment.

Scalability

IT developers and teams no longer have to worry about the complex problem of scalability. No design work, advanced settings, or endless tuning phases are needed to ensure the scalability of the application. It is the role of the cloud provider (and its Serverless platform) to provide the necessary resources according to the needs of the moment.

Reliability of executions

By design, Serverless platforms are generally extremely resilient and reliable. Because developers also have fewer lines of code to write, they can focus more on the quality of the functionality they implement. This generally results in more reliable executions, insofar as they rely on very elastic resources.

Economic efficiency

With Serverless, there are no costs related to acquiring and installing hardware, no operating system and license costs, no maintenance costs, and no BIOS or OS update costs. The company also stops paying for unused resources, like all those VMs too often left running and yet wholly abandoned.
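The "pay as you go" arithmetic is simple enough to sketch. The default rates below mirror AWS Lambda's historical public pricing (a per-million-requests fee plus a per-GB-second compute fee) but are illustrative only; free tiers and current prices vary by provider and region:

```python
def monthly_cost(invocations, avg_duration_s, memory_gb,
                 price_per_million=0.20,
                 price_per_gb_second=0.0000166667):
    """Estimate a month's FaaS bill from usage alone.

    Rates are illustrative, not current published prices.
    """
    request_cost = invocations / 1_000_000 * price_per_million
    compute_cost = (invocations * avg_duration_s * memory_gb
                    * price_per_gb_second)
    return request_cost + compute_cost

# 5 million invocations of a 200 ms function with 512 MB of memory:
estimate = monthly_cost(5_000_000, 0.2, 0.5)
```

The point of the sketch is that an idle function costs nothing: with zero invocations, both terms, and thus the bill, drop to zero, which is precisely where the savings over always-on VMs come from.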

Conclusion

Companies are optimistic about the use of Serverless, predicting that most of the challenges described will be met or are already being met. Serverless will gain popularity because, by offering a simplified programming environment, the platform makes the cloud much easier to use, thereby attracting more people.

Serverless architecture avoids the need for the manual resource management and optimization that today's server-based computing imposes on application developers, a maturation similar to the transition from assembly language to high-level languages. Even comparatively non-technical end users may be able to deploy functions without any clear understanding of the cloud infrastructure.



editorial
The Chief I/O

The team behind this website. We help IT leaders, decision-makers and IT professionals understand topics like Distributed Computing, AIOps & Cloud Native
