Serverless is a Security Nightmare: Here’s How to Protect Yourself

Image by Pete Linforth from Pixabay

Serverless has taken the world by storm. The sheer convenience of outsourced, fully managed, perfectly scalable servers is an offering that more and more companies are waking up to. However, the security complications that serverless entails have already claimed thousands of unwitting victims. As powerful as this new architecture is, serverless security is a brand-new beast that needs to be handled carefully and comprehensively.

What is Serverless?

Serverless describes a model of application development in which a cloud provider supplies servers ready for deployment. The provider maintains the computing resources behind those servers, separates its clients' instances, and allocates resources accordingly. So servers still exist in 'serverless' - they're just abstracted away from the app development process.

Because serverless apps scale their server requirements up and down automatically, they are highly reliable. If a function needs to run for multiple users, the outsourced servers spin up, run for as long as necessary, and shut down when each instance no longer needs the function. As a result, serverless infrastructure can absorb sudden spikes in user numbers and requirements, whereas a traditional application, dependent on a fixed quantity of server space, can buckle under the weight of a sudden user influx. Serverless can also save the savvy developer considerable money: cloud providers usually offer metered billing based on demand, so a serverless function is only paid for while it is actually running.
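To make the metered-billing point concrete, here is a minimal sketch of how such pricing tends to be structured - a per-request charge plus a compute charge measured in GB-seconds. The function name and rates are illustrative assumptions, not any provider's actual price list.

```python
# Illustrative sketch of metered serverless billing: a small per-request
# charge plus a compute charge billed per GB-second of execution.
# Rates below are hypothetical placeholders, not real provider pricing.
def estimate_monthly_cost(invocations, avg_duration_s, memory_gb,
                          price_per_million=0.20, price_per_gb_s=0.0000166667):
    """You pay only while code runs: requests + (duration x memory) compute."""
    request_cost = invocations / 1_000_000 * price_per_million
    compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_s
    return request_cost + compute_cost

# Example: 2 million requests a month, 200 ms each, 512 MB of memory.
cost = estimate_monthly_cost(2_000_000, 0.2, 0.5)
```

The key property is that idle time costs nothing: halve the invocations and the bill roughly halves with them, which is the opposite of a fixed server that bills around the clock.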

The convenience, cost-effectiveness and reliability of the serverless model have driven an explosion in its popularity over the last few years. Providers such as AWS, Azure and Google Cloud have grown substantially, benefitting from increasingly mainstream adoption. Over 50% of all cloud customers now rely on serverless applications. This is partially driven by the current trend of retrofitting existing applications with serverless architecture, giving them the same features as shiny new serverless builds.

Serverless Security 

As software development increases in efficiency and ease of use, its security demands will continue to change rapidly. The serverless application concept presents what many consider one of the biggest paradigm shifts in application security.

Serverless presents an entirely new list of vulnerabilities, with a far stronger lean toward code-related misconfigurations. Complicating matters further is the question of 'whose responsibility is it, anyway?' As a critical component of your attack surface, a serverless provider must guarantee that its own servers are safe from breaches and attackers. However, bespoke configurations are not its responsibility. As a rough guide, the cloud provider is only responsible for the overarching security loopholes: it periodically patches the infrastructure, configures the servers with proactive protection, securely handles account management, and ensures only supported operating systems and software are used.

However, the individual instances that you rely on are entirely up to you. This is arguably the harder task, as serverless functions run in entirely separate compute containers. This architecture creates a disjointed flow, not managed by any single server. Instead, a critical application is made up of hundreds of different functions that run separately. Each function has an individual role, is triggered by a unique event, and has no clue about the thousands of other moving parts.

Traditional application security saw organizations relying on each application's infrastructure - and perhaps some network-based tools. Each application's defenses could be assessed and bolstered with a firewall. The lack of clear boundaries within serverless applications creates a problem called Perimeter Blindness - where traditional protection methods struggle to recognize the function at hand.

The Worst Serverless Flaws

Let's imagine a serverless application that allows users to upload a file to cloud storage. Once the user chooses a file, a serverless function triggers - it reads the file, processes it, and stores it in the database. The process is super useful for legitimate users. However, it represents one of the most severe security flaws that serverless struggles with: an event data injection attack. If an attacker uploads a file laced with malicious code, the serverless function will still read and process the file, drawing no distinction between legitimate and malicious data. An attacker could upload a document file carrying XML External Entity (XXE) payloads. Once the serverless function parses this, it gives the attacker access to the backend variables of the function; this in turn can be leveraged to gain access to the cloud account itself.
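A minimal defensive sketch for this scenario is to validate the uploaded file before any parser ever sees it - for instance, rejecting XML that declares a DOCTYPE or entity, which is where XXE payloads live. The function names below are hypothetical; a production handler would combine this with a hardened parser and strict input schemas.

```python
# Sketch: screen an uploaded file for XXE-style payloads before parsing.
# XXE attacks rely on DOCTYPE/ENTITY declarations, so untrusted uploads
# that contain them are rejected outright. Names here are illustrative.
FORBIDDEN_MARKERS = (b"<!DOCTYPE", b"<!ENTITY")

def is_safe_xml(raw: bytes) -> bool:
    """Reject any document declaring a doctype or entity (case-insensitive)."""
    upper = raw.upper()
    return not any(marker in upper for marker in FORBIDDEN_MARKERS)

def handle_upload(raw: bytes) -> str:
    """Hypothetical entry point for the upload-triggered function."""
    if not is_safe_xml(raw):
        return "rejected"  # never hand entity declarations to a parser
    # ... parse the document and store it in the database ...
    return "stored"
```

The broader principle is the same one that applies to any injection class: the function must treat every triggering event as untrusted input, because the platform will happily invoke it on malicious data.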

Alongside injection flaws, broken authentication represents another major issue. The existence of multiple potential entry points and event sources introduces brand-new complexity. This, alongside insecure deployment, produces its own overlapping host of vulnerabilities. Attackers will snoop around for a forgotten resource, such as an abandoned API, and attempt to bypass authentication methods. For example, if a function is set to trigger on emails sent inside the organization, but bad actors can send spoofed emails that also trigger it, they can invoke internal functionality without ever authenticating.
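One common mitigation for the spoofed-trigger problem is to require every event that can invoke an internal function to carry a verifiable signature. The sketch below uses an HMAC over the event payload; the secret and function names are illustrative assumptions (in practice the secret would live in a secrets manager, not in code).

```python
import hmac
import hashlib

# Sketch: authenticate triggering events with an HMAC signature so that a
# spoofed email (or any forged event) cannot fire the internal function.
# The hard-coded secret is purely illustrative - store it in a secrets
# manager in any real deployment.
SECRET = b"rotate-me-in-a-secrets-manager"

def sign_event(payload: bytes) -> str:
    """Signature the legitimate event producer attaches to each event."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def handle_trigger(payload: bytes, signature: str) -> bool:
    """Run the internal function only for authentically signed events."""
    expected = sign_event(payload)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)
```

The point is that authentication moves from the perimeter to the event itself: each function verifies its own triggers rather than trusting that anything able to reach it must be legitimate.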

Secure and Serverless

The key security element of serverless deployments is testing. It's vital to recognize the exploded nature of serverless apps, and understand that these are now thousands of application components working together. Stress testing is the main way in which these loosely related components can be exercised together. For example, stress testing can probe the APIs that support deployments, flagging up misconfigurations and robustly working its way through authentication and permission settings. Individual focal points across permission and session management help to pinpoint potential issues.
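A toy version of that kind of probe might look like the following: hit every known endpoint concurrently without credentials and flag anything that answers successfully. The `call_endpoint` function here is a stub standing in for a real HTTP client, and the paths are invented - a real harness would drive actual deployed APIs.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Optional

# Sketch of an unauthenticated-access probe. `call_endpoint` is a stub
# simulating a deployment where /admin is misconfigured to ignore auth;
# swap in a real HTTP client to probe actual endpoints.
def call_endpoint(path: str, token: Optional[str]) -> int:
    if path == "/admin":          # simulated misconfiguration
        return 200
    return 200 if token else 401  # everything else enforces auth

def probe(paths, workers=8):
    """Return paths reachable with no auth token - likely misconfigurations."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: (p, call_endpoint(p, None)), paths)
    return [p for p, status in results if status == 200]

flagged = probe(["/upload", "/admin", "/profile"])
```

Running the same probe with deliberately expired tokens, wrong-scope tokens, and malformed session data extends it toward the permission and session-management checks described above.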

Statistical self-management techniques can also be used to detect penetration or denial-of-service attempts. For example, a social media application designed and released for the North American market could easily detect a sudden and unexpected spike in activity in the middle of the night. Once recognized as abnormal, this can be flagged as a possible attack and security teams alerted.
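The simplest form of that statistical check is an outlier test against a recent baseline: compare the latest request rate to the mean and standard deviation of normal traffic for that hour. The window, threshold, and numbers below are illustrative.

```python
import statistics

# Sketch of a basic statistical anomaly check: flag the latest request
# rate if it sits more than `z_threshold` standard deviations away from
# the baseline mean. Window size and threshold are illustrative choices.
def is_anomalous(baseline, latest, z_threshold=3.0):
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

# Hypothetical overnight baseline, in requests per minute.
night_traffic = [120, 130, 110, 125, 115, 128, 122]
```

Real deployments would layer seasonality (time of day, day of week) on top of this, but even a crude z-score makes a 40x overnight spike impossible to miss.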

Now more than ever, organizations need a new solution that views the security management process as understanding how these small, individual elements all fit together.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
* This is a contributed article and this content does not necessarily represent the views of itechpost.com
