Effectively test your serverless applications

Ike Gabriel Yuson
Published in Towards AWS
Apr 7, 2024

Going serverless, like any other type of architecture, has its pros and cons. The idea of shipping only code to the cloud, with servers abstracted away from the user, can be a game-changer for some people. However, this relatively new technology, even though it offers a lot of amazing features at face value, also has its caveats.

The number one question of every developer trying to get their feet wet implementing serverless solutions on AWS is this:

How on earth do I invoke my APIs locally?

Monolithic backend applications and their corresponding hot reloading command

If you're a backend developer coming from a traditional background of building APIs with serverful frameworks such as Express or Django, you might be familiar with their corresponding hot reloading workflows: nodemon, python manage.py runserver, or even docker-compose up -d if you utilize container volumes during development. Arguably, however, with serverless you might notice that there is no hot reloading feature that fully encompasses, or even resembles, the developer experience of a traditional backend.

The main reason implementing serverless solutions gives developers a somewhat unique developer experience is that serverless is built for the cloud. AWS Lambda and all the other serverless services of AWS work best in the cloud, not in a local environment. However, this model can be quite problematic for some people.

So are you telling me that I must deploy to the cloud every time I make changes to my code?

Yes and no. In serverless, you have the power to create your own ephemeral environment in the cloud. The general pricing scheme of cloud computing is that you only pay for what you use. In serverless, although this also holds true to a certain extent, it is more accurate to say that you only pay for what your end users use. Hence, the serverless pricing scheme allows you to create and destroy environments in the cloud at little to no cost.

Being able to deploy your own development environment in the cloud is quite advantageous for mimicking the production environment. However, some might still find this a deal-breaker since you must deploy at every code change. If you have already been practicing serverless with existing IaC (Infrastructure as Code) solutions such as the Serverless Framework or AWS CDK, you might be familiar with the most annoying feedback loop:

code → deploy → invoke Lambda function → check CloudWatch logs → debug. Repeat until stable.

As stated above, deploying an ephemeral development environment to the cloud has its benefits: it gives you the confidence that if your code works as expected in your own ephemeral environment, then its behavior when shipped to production will be no different. This process will, without a doubt, give you optimal returns. However, the argument still stands that it is quite a hassle to deploy at every code change and to give every developer access to the cloud environment. Not to mention, each deployment takes about 3–10 minutes depending on how big your application is.

There have been efforts to improve this slow and cumbersome feedback loop. Features such as serverless-offline from the Serverless Framework enable you to emulate your API Gateway and Lambda functions locally, which, without a doubt, speeds up the development process. However, this feature is limited to emulating API Gateway and Lambda functions. There are other plugins that enable you to emulate other serverless services such as DynamoDB, SQS, SNS, or S3, but setting these up requires a significant amount of operational overhead. No local emulator today can fully mimic a cloud environment. Local environments may not perfectly mirror the production environment, potentially leading to discrepancies in behavior and security configurations. This approach may give you false negatives where “it works on my machine” but not in production.
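For reference, the basic serverless-offline setup itself is small. A sketch, assuming the plugin is installed as a dev dependency:

## serverless.yml

plugins:
  - serverless-offline

Running npm run sls -- offline start then serves your API Gateway routes locally; it is the emulators for everything else (DynamoDB, SQS, and so on) that pile on the operational overhead.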

So one question to ponder is: are you willing to go the extra mile and set up those rather complex and heavy emulators just to be able to invoke your functions locally?

Another example is cdk watch from AWS CDK. AWS CDK fully embraces the idea of ephemeral environments. Although there are features like the AWS SAM CLI's sam local invoke that behave similarly to the Serverless Framework's serverless-offline, cdk watch listens for every code change in your Lambda functions locally and deploys ONLY those changes. Although it doesn't reinvent the cumbersome feedback loop above, it enhances the developer experience. Instead of redeploying your whole serverless application, it redeploys only the Lambda function whose code changed, which makes the deployment faster (but not as fast as hot reloading), and your terminal listens to all the CloudWatch logs of the Lambda functions in your stack. This saves a lot of time since you no longer need to open the AWS Management Console, go to CloudWatch Logs, and check your logs there. It's all in the terminal!
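A typical cdk watch session, as a rough sketch, is just two commands:

# deploy the stack once, then watch for local code changes
# and redeploy only the changed Lambda assets
npx cdk deploy
npx cdk watch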

This approach is a step towards a good developer experience and optimal returns since it promotes the use of ephemeral environments. However, the waiting time for deployment is still not as fast as hot reloading in a traditional backend, and some people will still find this a deal-breaker since it greatly lessens their productivity.

So another question to ponder is: are you willing to sacrifice developer experience in exchange for a more confident and robust security posture?

It is highly recommended to use an Infrastructure as Code solution such as the Serverless Framework or AWS CDK, and choosing one over the other does not really matter that much. To answer the previous question: if we need to redeploy at every code change, it is assumed that you are already using one of these IaC solutions in your serverless projects. And to give you a more detailed answer: in order to test your serverless applications, yes, you need to deploy at every code change that tweaks the configuration of your infrastructure, but you don't need to redeploy your Lambda functions at every code change. This is made possible through testing.

Why should you test serverless applications?

Creating tests for your serverless application is another way of invoking it locally. It gives you the perfect balance between developer experience, productivity, and code confidence. This way, you don't need to sacrifice developer experience in exchange for code confidence when shipping code to production. You can have the best of both worlds!

So are you telling me I should just test locally? Maybe this works for my Lambda functions, but my application is composed of other services, such as the database layer with DynamoDB.

The kind of testing I'm going to introduce here is remocal testing, where local code (Lambda function code) is tested against remote, deployed AWS services. This leverages the use of ephemeral environments and, at the same time, local code invocation. So here's the proposed feedback loop:

code → deploy → run local test → check test logs → debug. Repeat until stable.

At first glance, if we compare this to the previous feedback loop, there isn't much difference. This one, however, is faster and at the same time gives you much more confidence that your local code will work in the cloud. Why? Because the deploy step above won't be executed as often as the other steps. This step is basically for deploying the services you need aside from your Lambda functions, such as your DynamoDB tables and S3 buckets. And again, our local code will be tested against these deployed services in the cloud, which provides a closer approximation to a production environment. Ultimately, the feedback loop you will encounter most of the time is the following:

code → run local test → check test logs → debug. Repeat until stable.

This proposed feedback loop is much more similar to that of a traditional backend, isn't it?

Since your local code depends on existing AWS resources to work, the only times you would need to deploy your whole serverless application are the following:

  1. When you start utilizing a new DynamoDB table, S3 bucket, or any other service you need aside from your Lambda functions.
  2. When you are ready to do some end-to-end testing.

So are you telling me that, instead of deploying at every code change, I just run a test? Isn't this the same thing?

The answer to that question is a resounding no. It is not the same. Deploying a full serverless application can take 5–10 minutes depending on the scale or the number of resources involved. Imagine redeploying due to a typo in a single Lambda function: you would need to wait 5–10 minutes for your changes to be reflected before you could test your function's behavior. Gruesome, isn't it? Local tests, on the other hand, run for at most a minute or so, depending on the number of tests being executed.

Additionally, if you want behavior similar to a traditional backend's hot reloading, modern testing frameworks such as Jest already have watch modes that run your tests interactively and watch for any changes to your files. So as soon as a code change happens in your local Lambda function code, the testing framework automatically reruns all tests related to that change.
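As a quick illustration, assuming your test scripts delegate to Jest, its two built-in watch flags are:

npx jest --watch      # rerun only the tests related to changed files
npx jest --watchAll   # rerun the entire suite on every change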

What types of tests should you be implementing?

Test Honeycomb

The answer to this question really depends on the return on investment a specific type of test gives you. In serverless systems, or arguably any other distributed system, the tests that give the most value are the integration tests. You can refer to the test honeycomb above.

Types of tests and their corresponding scopes

It's not about the quantity of tests or the test coverage. It's about the return on investment your tests give you. In a serverless architecture, which also promotes microservices, implementing integration tests makes more sense since most of the complexity lies outside of your Lambda code. The best way to test microservices is to test how your code interacts with other services.

End-to-end tests exercise the whole application from beginning to end. Although this gives us optimal confidence, the return on investment of this type of test is quite low since it takes time to execute. Unit tests, on the other hand, will most likely give us low returns and a small return on investment since they only exercise your Lambda function's domain logic. Unit tests mock all interactions with external layers, such as the database layer and other services, which won't give you a lot of value.

Hands-on example

The overall architecture of our hands-on example

Let's implement the following REST API that creates a blog post, together with its corresponding tests. In this example, I used the Serverless Framework as my IaC of choice. You may find the repository here to follow along.

Prerequisites

  1. Configure your AWS profile: export AWS_PROFILE=<your-profile-name>
  2. Install dependencies: npm install
  3. Deploy the serverless application: npm run sls -- deploy --stage <your-ephemeral-environment-name>. Don't forget to change <your-ephemeral-environment-name> to the name of your own ephemeral environment.
  4. Create your own .env.<your-ephemeral-environment-name>.

The following should be the format of your environment file.

API_BASE_URL=XXX
BLOG_POSTS_TABLE=XXX
AWS_REGION=XXX

You can get the values of these environment variables through the AWS Management Console or through plugins provided by the Serverless Framework.
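If you prefer to script this instead of clicking through the console, a small helper like the following sketch lists your stack's CloudFormation outputs so you can copy them into the env file. The file name and service name here are hypothetical; the Serverless Framework names stacks <service>-<stage> by default.

// scripts/printStackOutputs.ts (hypothetical helper)

import {
  CloudFormationClient,
  DescribeStacksCommand,
} from "@aws-sdk/client-cloudformation";

const stage = process.env.STAGE ?? "dev";
// Replace "blog-api" with your actual service name from serverless.yml
const stackName = `blog-api-${stage}`;

const client = new CloudFormationClient({});

client
  .send(new DescribeStacksCommand({ StackName: stackName }))
  .then(({ Stacks }) => {
    // Print every stack output as KEY=value, ready for your .env file
    for (const output of Stacks?.[0]?.Outputs ?? []) {
      console.log(`${output.OutputKey}=${output.OutputValue}`);
    }
  })
  .catch(console.error);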

The proposed feedback loop

First, we must create our Lambda function. This createBlogPost Lambda function gets the event body from API Gateway and stores the blog post title and content in the DynamoDB table. The following is the createBlogPost Lambda function code, written in TypeScript:

// src/functions/createBlogPost.ts

import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
import { ulid } from "ulid";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  console.log("Received event:", JSON.stringify(event, null, 2));
  const client = new DynamoDBClient({ region: process.env.AWS_REGION });

  const requestBody = JSON.parse(event.body || "{}");
  const title = requestBody.title;
  const content = requestBody.content;

  // Validate the request body before touching DynamoDB
  if (!title || !content) {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: "Title and content are required",
      }),
      headers: {
        "Content-Type": "application/json",
      },
    };
  }

  const newBlogPost = {
    id: ulid(),
    title,
    content,
    createdAt: new Date().toISOString(),
  };

  // Marshal the blog post into DynamoDB's attribute-value format
  const params = {
    TableName: process.env.BLOG_POSTS_TABLE,
    Item: {
      id: { S: newBlogPost.id },
      title: { S: title },
      content: { S: content },
      createdAt: { S: newBlogPost.createdAt },
    },
  };

  const command = new PutItemCommand(params);
  let response;

  try {
    response = await client.send(command);
  } catch (error) {
    console.error("Error creating item:", error);
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: "Error creating item",
        error: error,
      }),
    };
  }

  return {
    statusCode: 201,
    body: JSON.stringify({
      message: "Item created successfully",
      data: newBlogPost,
    }),
    headers: {
      "Content-Type": "application/json",
    },
  };
};

At this point, now that we have our Lambda function, the most common thing to do is to deploy it immediately to the cloud. But this would follow the cumbersome feedback loop stated above, where you must wait for the deployment to finish and only then can you test your application. So instead of deploying our application, we write an integration test.

// __tests__/test_cases/integration/createBlogPost.test.ts

require("dotenv").config({
  path: `.env.${process.env.STAGE}`,
});
const chance = require("chance").Chance();

describe("createBlogPost", () => {
  it("should create a blog post", async () => {
    const handler = require("@src/functions/createBlogPost").handler;

    const title = chance.sentence();
    const content = chance.paragraph();

    const event = {
      body: JSON.stringify({
        title,
        content,
      }),
    };

    const response = await handler(event);

    expect(response.statusCode).toEqual(201);
    expect(response.body).toBeDefined();

    const parsedResponse = JSON.parse(response.body);
    expect(parsedResponse.data.title).toBe(title);
    expect(parsedResponse.data.content).toBe(content);
  });
});

Do note that this integration test points to our local Lambda function code (const handler = require("@src/functions/createBlogPost").handler), not the Lambda function deployed in the cloud. So when we run our integration test, it's running our local Lambda function code, but all those DynamoDB operations are sent to the DynamoDB table in the cloud. This is remocal testing: local code (in this case, our local Lambda function) is tested against remote, deployed AWS services (in this case, our DynamoDB table).

The following are the steps to run our integration tests:

  1. export STAGE=<your-ephemeral-environment-name>. This is to reference your .env.<your-ephemeral-environment-name>.
  2. npm run integration-test
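One detail worth calling out: the @src alias in the test's require path has to be resolved by Jest. A minimal config sketch, which may differ from the repository's actual setup, looks like this:

// jest.config.ts (a sketch; adjust to match the repository)

import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",
  testEnvironment: "node",
  // Map the "@src" alias used by the tests to the src directory
  moduleNameMapper: {
    "^@src/(.*)$": "<rootDir>/src/$1",
  },
};

export default config;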

Now, if you want to edit your Lambda function, go ahead! Maybe save another field aside from the title and content of the blog post, such as a snippet field that holds the first 50 characters of the content, or add created_at and updated_at fields. Simply run the test again and check how your code behaves, as shown in the sketch below. No need to deploy and wait 3 to 5 minutes before you can check its behavior.
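For instance, the snippet idea is a small tweak to the object we build before the PutItemCommand. A sketch, not part of the repository:

const newBlogPost = {
  id: ulid(),
  title,
  content,
  // hypothetical new field: a short preview of the content
  snippet: content.slice(0, 50),
  createdAt: new Date().toISOString(),
};
// Remember to persist it too, e.g. snippet: { S: newBlogPost.snippet } in params.Item

Rerun the integration test and its assertions tell you immediately how the change behaves against the real table.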

On top of this, if you're familiar with using the step debugger in your IDE, go ahead and attach it while running your tests. These are only some of the huge advantages of testing your serverless application. It gives you the speed and agility to iterate on Lambda code, which ultimately boosts your productivity.

Caveats of integration testing

Although integration testing gives us the speed and productivity we wanted, it does not fully cover the whole user journey.

You might wonder: how on earth were we able to successfully put an item into our DynamoDB table in the cloud using our local Lambda code?

The answer lies in one of the steps we did during the prerequisites: exporting our AWS profile with export AWS_PROFILE=<your-aws-profile>. The local Lambda code assumes the AWS credentials of the current AWS profile.
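If you want to see exactly which identity your local code will act as, a quick sanity check like this sketch (the file name is hypothetical) prints whatever the default credential chain resolves to:

// scripts/whoami.ts (hypothetical helper)

import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({});

// GetCallerIdentity returns the ARN of the credentials in use,
// i.e. the profile you exported via AWS_PROFILE
sts
  .send(new GetCallerIdentityCommand({}))
  .then((identity) => console.log(`Running as: ${identity.Arn}`))
  .catch(console.error);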

So does this mean, if my profile had admin permissions, then the local Lambda code also has these permissions?

Yup! Scary, isn't it? That is why we must be careful when writing our local Lambda code. This, however, is not the case when we deploy our local Lambda code to the cloud: a deployed Lambda function always assumes its execution role. The Serverless Framework does a great job of abstracting this, but we still need to add specific permissions for our Lambda function to communicate with other services. Take a look at how we declared our Lambda function in the serverless.yml:

## serverless.yml

...

functions:
  createBlogPost:
    handler: src/functions/createBlogPost.handler
    environment:
      BLOG_POSTS_TABLE: !Ref BlogPostsTable
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:PutItem
        Resource: !GetAtt BlogPostsTable.Arn
    events:
      - httpApi: "POST /blogposts"

...

If we failed to indicate that the Lambda function should have the dynamodb:PutItem permission, the Lambda function in the cloud would throw an error saying it does not have permission to call the PutItem operation on the specified DynamoDB table.

This is something that our integration test couldn't have caught, which is why we resort to another type of test: the end-to-end test. The end-to-end test for this create-blog-post feature is the following:

// __tests__/test_cases/e2e/createBlogPost.test.ts

require("dotenv").config({
  path: `.env.${process.env.STAGE}`,
});
import axios from "axios";
const chance = require("chance").Chance();

describe("When we call POST /blogposts", () => {
  it("should create a new item", async () => {
    console.log(`API_BASE_URL: ${process.env.API_BASE_URL}`);

    const title = chance.sentence({ words: 5 });
    const content = chance.paragraph();
    let response;

    try {
      response = await axios.post(
        `${process.env.API_BASE_URL}/blogposts`,
        {
          title,
          content,
        },
        {
          headers: {
            "Content-Type": "application/json",
            Authorization: "test",
          },
        }
      );
    } catch (error: any) {
      console.error(error);
    }

    expect(response).toBeDefined();

    if (response) {
      expect(response.status).toBe(201);
      expect(response.data.data.title).toBe(title);
      expect(response.data.data.content).toBe(content);
    }
  });
});

First, it is important to know that you must deploy your Lambda function to the cloud for your end-to-end test to work. As you can see in the code above, it calls the Lambda function in the cloud via its API endpoint, POST /blogposts, not the local Lambda function code.

The following are the steps to run our e2e tests:

  1. Deploy the application: npm run sls -- deploy --stage <your-ephemeral-environment-name>
  2. export STAGE=<your-ephemeral-environment-name>. This is to reference your .env.<your-ephemeral-environment-name>.
  3. npm run e2e-test

End-to-end tests may give us a lot of value in making sure our APIs work. However, if we rely only on e2e tests, our productivity will drop significantly. This is because, as stated earlier, they force us to deploy at every code change, which defeats the purpose of improving the developer experience.

Next steps and conclusion

How about the unit tests?

In our example above, you might notice there were no unit tests. This is because, after careful consideration, creating a unit test for the createBlogPost Lambda function does not add much value. It simply takes the request body and stores its values in our database. There is no custom nor complex logic in it.

However, one might argue that this block of code in our Lambda function gives us the need to write a unit test:

...

const requestBody = JSON.parse(event.body || "{}");
const title = requestBody.title;
const content = requestBody.content;

if (!title || !content) {
  return {
    statusCode: 400,
    body: JSON.stringify({
      message: "Title and content are required",
    }),
    headers: {
      "Content-Type": "application/json",
    },
  };
}

...

And yes, I definitely agree. This block checks whether the title or content field is absent and, if so, returns an error. Writing a unit test for this block of code would definitely improve our test coverage. However, as mentioned a while ago, it's all about the return on investment. There are other areas of the whole “create a blog post” implementation, concerning the API Gateway and the DynamoDB table, that demand our immediate testing attention.
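That said, if you ever decide this branch is worth covering, a minimal unit test might look like the sketch below (not part of the repository). The handler returns before any DynamoDB call, so no deployed resources are needed:

// __tests__/test_cases/unit/createBlogPost.test.ts (a sketch)

describe("createBlogPost validation", () => {
  it("should return 400 when content is missing", async () => {
    const handler = require("@src/functions/createBlogPost").handler;

    // No content field, so the handler should fail validation early
    const event = { body: JSON.stringify({ title: "only a title" }) };
    const response = await handler(event);

    expect(response.statusCode).toBe(400);
    expect(JSON.parse(response.body).message).toBe(
      "Title and content are required"
    );
  });
});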

Wow this is all new.

Certainly, the transition to serverless represents a significant paradigm shift for developers, particularly those accustomed to the more traditional, monolithic infrastructure models. This shift, while offering the allure of scalability, cost-effectiveness, and operational efficiency, requires a foundational change in how applications are designed, deployed, and managed. The initial stages of this transition can be particularly challenging as developers grapple with the intricacies of stateless computing, event-driven architectures, and the integration of third-party services and APIs. Such a steep learning curve is unavoidable and demands a considerable investment of time and resources for learning and experimentation.

Everyone who has tried implementing serverless solutions has encountered these problems, and I myself took many months of research and development to improve my developer experience. Serverless really is beautiful, and it is for sure here to stay. For startups, especially MSMEs (micro, small, and medium enterprises), serverless might be the most cost-effective type of infrastructure, but again, there is a learning curve where you must embrace a completely new developer experience.

In conclusion, despite the initial hurdles, the journey towards mastering serverless architectures can be profoundly rewarding. For MSMEs, the payoffs in terms of scalability, cost reduction, and the ability to innovate quickly are undeniable. As the technology matures and the community around it grows, who knows, maybe our current feedback loop will improve further, easing this transition while further improving the developer experience. Therefore, while the upfront challenges are non-trivial, the long-term benefits of serverless computing make it a compelling choice for businesses looking to leverage the cloud for their digital transformation initiatives.

I want to give a shoutout to Yan Cui and all the other Serverless Heroes and Serverless Community Builders for introducing these techniques and concepts. I was about to give up on serverless computing because of its terrible developer experience back then but these guys saved me. This is now my way of giving back. I hope you had a good read. 😁


Hi, I am Iggy. A DevOps Engineer based in the Philippines and the current User Group Leader of AWS User Group Davao. https://www.linkedin.com/in/iggyyuson/