Is it worth optimising your Lambda functions?

January 25th, 2021 Written by Ivan Seed

With serverless’ pay-as-you-go pricing, customers have benefited from more cost-effective architectures running on services like AWS Lambda. Because of this pricing model, if we can shave time off our Lambda execution time then we could be saving money.

This got me thinking, is it actually worth spending your time to optimise Lambda functions and how far should you go to save some money?

How memory impacts cost and performance

Before we get into things, let us quickly go through how Lambda’s allocated memory impacts cost & performance.

  • Lambda duration is billed per GB-second used. At the time of writing, eu-west-1 charges $0.0000166667 for every GB-second run.
    • e.g. a 128MB Lambda running for 1150ms costs 128/1024 × 0.0000166667 × 1150/1000 = $0.00000239583 per invocation
    • e.g. a 5GB Lambda running for 30ms costs 5120/1024 × 0.0000166667 × 30/1000 = $0.0000025 per invocation
  • vCPU allocation is linearly proportional to memory. At 1,769MB of memory a function has the equivalent of 1 vCPU, and roughly every ~1,695MB added after that is the equivalent of another vCPU.
    • This means single-threaded applications see diminishing returns when scaling past 1,769MB.
    • Multi-threaded applications benefit from scaling past 1,769MB, which can mean lower invocation costs.
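The per-invocation arithmetic above can be sketched as a small helper (the rate is the eu-west-1 price quoted above; check current AWS pricing for your region):

```python
# Per-invocation Lambda cost: (memory in GB) x (price per GB-second) x (duration in seconds).
PRICE_PER_GB_SECOND = 0.0000166667  # eu-west-1 at the time of writing

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Cost in dollars of a single Lambda invocation."""
    return (memory_mb / 1024) * PRICE_PER_GB_SECOND * (duration_ms / 1000)

# The two examples from the list above:
print(invocation_cost(128, 1150))  # ~ $0.0000024
print(invocation_cost(5120, 30))   # ~ $0.0000025
```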

It is easy to assume that more memory simply means faster performance at the expense of higher invocation costs. But because the Lambda has more resources, the execution time shortens, which can actually lower the cost per invocation; it becomes a balancing act.

To demonstrate this, here are the results for a single-threaded and a multi-threaded function:

Figure 1: AWS Power Tuning single-threaded function results

As expected with the single-threaded function, allocating any more memory after 1,769MB did not improve the execution time.

Figure 2: AWS Power Tuning multi-threaded function results

Here we can see that 1536MB was the optimal memory allocation for cost for this example multi-threaded function. Allocating more memory still improves performance at a very small additional cost.

You can view the results on the AWS Power Tuning visualisation site.

Pricing update

AWS gave us an early Christmas present in 2020 by updating Lambda billing to round up to the nearest 1ms, compared to the 100ms increments it did before, providing instant cost savings for customers.

The problem was that before this pricing update it did not matter whether your function took 1ms or 99ms to run, as it would be rounded up to 100ms. This made micro-optimisation for cost not really worth the time investment. Of course, if you have a time-sensitive workload you may still want to tweak and optimise your function.
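The difference between the two rounding schemes can be sketched in a couple of lines:

```python
import math

def billed_ms_old(duration_ms: float) -> int:
    """Pre-update billing: round up to the next 100ms increment."""
    return math.ceil(duration_ms / 100) * 100

def billed_ms_new(duration_ms: float) -> int:
    """Current billing: round up to the nearest 1ms."""
    return math.ceil(duration_ms)

# A fast 12ms function used to be billed as if it ran for 100ms:
print(billed_ms_old(12), billed_ms_new(12))  # 100 12
```

For a function averaging 12ms, that is an immediate ~88% reduction in billed duration with no code changes at all.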

Let’s see how the new pricing affected one of our client’s production functions:

Figure 3: Billed duration before and after pricing update

For this use case in particular, a backend API written in Golang, the average execution time was already low, and the pricing update brought significant savings in Lambda cost.

Why should we optimise our Lambdas?

Most of us want our applications to be performant. When comparing cost vs performance it tends to come down to a trade-off between the two, but with Lambda’s pay-for-what-you-use pricing you can usually have the best of both worlds.

Whether you have a customer-facing API or a Kinesis consumer, faster execution times can bring business value. For example, faster API response times, especially those critical for website page loads, reduce the probability of bounces. Even improving data ingest and processing speeds could mean your business can react faster to time-sensitive events.

If cost optimisation is important to you, you will probably consider tuning your functions' allocated memory to a sweet spot. If you are using Lambda for a core system, like a backend API, faster response times can also lead to a better user experience, which can outweigh the additional cost.

Optimising memory configuration

AWS recommends performance-testing your Lambda functions to find the right memory configuration, and this is the most straightforward way to optimise your Lambda.

AWS recommends using AWS Lambda Power Tuning. I found it to be quite painless to get up and running, and I was able to deploy this tool by using the Serverless Application Repository. Once it had created the state machine all I had to do was pass it my function’s ARN and give it a test payload. It took me less than 5 minutes to deploy the tool and test my first function, but functions that require more complex payloads might take more time to set up.

Figures 1 and 2, above, were generated from the tool’s state machine output making it easy to visualise the test results. The tool can also be configured to automatically update the memory configuration of the function under test and you also have control over the strategy weighting (cost vs performance).

For the time it takes to set up, it was definitely worth running this tool on the functions at least once. You don’t strictly need the tool since you can invoke your functions with test payloads manually and access the duration and billing metrics via CloudWatch Metrics, but using Lambda Power Tuning is a lot quicker.

If you don’t really care about a function, for example if it is not critical and infrequently called, you may not see a benefit in tweaking the memory, so leaving it over-provisioned may be a more cost-effective use of your time. Just remember that scaling past 1,769MB will have diminishing returns for single-threaded functions.

Long term, we ideally want to remove the human from the testing process and automate it so we can continuously test and tweak our Lambda functions. In this blog, Yan Cui talks about how you can use the lumigo-cli to integrate this automation into your CI/CD pipeline.

Optimising your code

So how about making our code more performant? Is it critical for business needs that the Lambda execution time is as low as possible, or are you looking to cost-optimise your function?

If it is for a business requirement, only you can answer whether there is value in doing this. However, if it is mainly or purely for cost optimisation, you may want to hold fire.

There are a few factors that we need to consider to figure out the return on investment… so I built a calculator.

It takes in a few variables:

  • Price per GB-second – this will be subject to current AWS pricing in your region.
  • Invocations per minute – how many times is your function invoked per minute on average?
  • Configured memory – what is the function’s configured memory size?
  • Current execution time – what is the current average execution time of your function?
  • Lifespan – how long do you anticipate the function being operational?
  • Development cost per hour – the hourly cost to the business of spending time refactoring this function.

With this we can get a rough estimate of how much you save, or lose, by spending time optimising your functions. For example, let’s say we have a 1024MB Lambda function that runs at an average of 200ms, is called on average 150 times a minute, has an expected lifespan of 18 months, and the development cost per hour is $25. Using these numbers, here are the cost savings we would see:

Figure 4: How much will you save optimising your Lambda function

Well, that was… anticlimactic. In the best-case scenario we save $330, and that is only if a developer spends one hour making a 90% code performance improvement, which is unrealistic. The issue is that it is not easy to know how long the work will take, or how it will affect the execution time (we could spend all that time and perhaps make it worse!).
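The core of the calculation can be sketched like this (my own rough reconstruction, not the calculator’s exact formula; it assumes 30-day months and the eu-west-1 rate, so the result lands near, not exactly on, the figures above):

```python
PRICE_PER_GB_SECOND = 0.0000166667  # eu-west-1 rate at the time of writing

def optimisation_savings(memory_mb: float, avg_ms: float,
                         invocations_per_minute: float, lifespan_months: float,
                         dev_cost_per_hour: float,
                         improvement: float, dev_hours: float) -> float:
    """Net dollars saved over a function's lifespan by a code optimisation.

    improvement is the fractional cut in execution time (0.9 = 90% faster).
    Assumes 30-day months.
    """
    gb_seconds_per_invocation = (memory_mb / 1024) * (avg_ms / 1000)
    invocations = invocations_per_minute * 60 * 24 * 30 * lifespan_months
    lifetime_cost = invocations * gb_seconds_per_invocation * PRICE_PER_GB_SECOND
    return lifetime_cost * improvement - dev_cost_per_hour * dev_hours

# The example from the text: 1024MB, 200ms, 150 invocations/min,
# 18-month lifespan, $25/hour, and a (very optimistic) 90% improvement
# achieved in a single hour of work:
print(round(optimisation_savings(1024, 200, 150, 18, 25, 0.9, 1), 2))
```

Even under these generous assumptions the lifetime Lambda spend is only a few hundred dollars, so a single day of developer time would wipe out the saving entirely.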

We also need to consider the opportunity cost of spending time tuning these functions, as there is more than likely other valuable work we could be focusing on, or other areas of our stacks where we could look for cost optimisations.

As with a lot of these fine-tuning strategies, the limiting factor seems to be developer time. We need to remember that one of the main benefits of Lambda is the time saved not having to worry about the management of infrastructure so that we can focus on the more important problems.

You can check out the calculator here and plug in your own numbers to see for yourself whether spending time micro-optimising is worth it.

Conclusion

We covered how memory configuration influences Lambda’s cost and execution time, how the new 1ms billing brought us noticeable cost savings, and how we can optimise our Lambda functions. So… is it worth optimising your Lambda functions? Let me break it down into two questions.

Should we spend some time optimising our allocated memory?

Yes, just spending a little bit of time using a tool like AWS Lambda Power Tuning can help you tune your functions to get an optimal memory setting.

Should we spend time optimising our code?

Probably not; it really depends on your reasons, but if you are looking for a cost saving, it will most likely not be worth the time investment.

The main lesson here is that unless you have a considerable Lambda bill per month, which is pretty unlikely, it is not going to be worth spending the time trying to micro-optimise it. If your Lambda cost is astronomical, perhaps you should be looking at more dedicated compute on ECS or EC2, with their diverse selection of instance types optimised for different use cases.