Simplifying serverless AWS development with Node tooling


Figure 1: Example of AWS serverless architecture

Using a serverless architecture to host your app, website, or API is a great way to avoid the cost and maintenance of a dedicated server. Through the use of AWS Lambda, API Gateway, and DynamoDB, we can create a fully featured, hosted web app (see figure 1) that services many of the common uses of server-based applications. However, this approach presents its own set of challenges: IAM permission management, environment configuration, and migration migraines. By utilizing tools such as Claudia.js, Dynamoose, and Dynograte, we can mitigate these concerns and develop serverless applications with much less overhead.

Claudia.js and Claudia API Builder



API Documentation

What is Claudia.js?

The single best tool to improve your serverless development experience in AWS is Claudia. Claudia is an AWS deployment tool and requires no changes to your existing code. It manages the configuration between API Gateway and Lambda. Unlike other services that build an abstraction on top of AWS, Claudia simply complements AWS by making it easier to configure and manage, without changing how you structure and run your projects. It performs actions such as creating endpoints, managing how data is passed from API Gateway to your Lambda functions, and configuring Lambda settings and IAM permissions. Claudia also wires `console.log` up to CloudWatch, allowing for much easier debugging of deployed code. Check out the Claudia website for more detailed information about Claudia and how it is useful.


Using Claudia.js

Claudia has many command-line options, which can be found in its documentation, but the two most important, and the ones I will discuss here, are `claudia create` and `claudia update`.


`claudia create`

`claudia create` is the command used to initialize API Gateway and create a Lambda function. This simple command is very powerful: it can create API endpoints for you, set up versioning and IAM policies, and more. Using it will save you several hours of work setting up API Gateway and Lambda.

Please note that some flags, such as `--policies`, can only be set on `claudia create`. It is important to configure these flags correctly, since they cannot be changed using `claudia update`; after a successful `claudia create`, changing them requires manual edits through the AWS console.
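As a rough sketch of a first deployment (the region, module name, and policies directory here are placeholders, not values from our project):

```shell
# One-time setup: creates the Lambda function and the API Gateway
# configuration, and records the generated IDs in claudia.json.
# Note: --policies only takes effect on create, not update.
claudia create \
  --region us-east-1 \
  --api-module app \
  --policies policies
```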


`claudia update`

`claudia update` is the command used to update an existing Claudia-configured AWS project and behaves much like `claudia create`: it packages your local project and pushes it to AWS, creating any new endpoints for you.


Using Claudia API Builder

Claudia API Builder complements an already powerful deployment tool by allowing you to configure and build your API in an Express-like format:



```javascript
let ApiBuilder = require('claudia-api-builder');
let api = new ApiBuilder();
module.exports = api;

api.get('/hello', function (request) {
  return 'Hello World!';
});
```

This simple example is all the code you need to write in order to have a working API endpoint. Simply deploying this using `claudia create --region [your region] --api-module app` will configure a working endpoint at `/hello` on AWS that can now be publicly accessed.

The request object passed to the above function allows you to access things like the query string, body, URL parameters, headers, and info about API Gateway. This provides a convenient and easy way of accessing the data needed to process any request.
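For illustration, a handler that echoes a few of these fields back might look like the sketch below. The field names (`pathParams`, `queryString`, `headers`, `body`) follow the Claudia API Builder request object; the route and response shape are made up for this example:

```javascript
// Hypothetical handler showing where common request data lives on the
// Claudia API Builder request object.
function userHandler(request) {
  return {
    id: request.pathParams.id,            // URL parameter, e.g. from /users/{id}
    verbose: request.queryString.verbose, // query string, e.g. ?verbose=true
    agent: request.headers['User-Agent'], // a request header
    body: request.body                    // request body (parsed for JSON)
  };
}
```

You would register it with something like `api.get('/users/{id}', userHandler)`.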
How we used it

We began using Claudia in the early stages of our project's development, after spending almost an entire day struggling with API Gateway setup, CORS issues, and data transmission to Lambda. Claudia helped us quickly and simply move past these issues without a second thought, allowing us to keep shipping features. Claudia API Builder helped us link and centralize our codebase, providing a clean and simple way to view all of our existing endpoints and follow the flow of their execution without needing to reference anything in the AWS console.

We used Claudia's versioning flag along with our internal system of GitHub branching (`--version $(git symbolic-ref --short -q HEAD)`) to manage the different stages of our product's development (dev, staging, and production) without much overhead. As long as we deployed from the appropriate branch, the corresponding stage on AWS was updated! This was instrumental in organizing our workflow and deliverables.
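Concretely, a deploy step along these lines publishes the currently checked-out branch as its own stage (this assumes `claudia.json` already exists from an earlier `claudia create`):

```shell
# Publish the current git branch name (e.g. dev, staging, production)
# as a separate Lambda version / API Gateway stage.
claudia update --version $(git symbolic-ref --short -q HEAD)
```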


Dynamoose


API Documentation

What is Dynamoose?

Dynamoose is a tool that abstracts away the complex and messy syntax of raw DynamoDB calls. It ensures a schema is created and maintained, and attaches great tools to that schema, such as field validation, required fields, and types, letting you ensure your data is always saved correctly. It also extends much of the AWS document client's functionality while abstracting away its rough edges, enabling things like conditional saves and batch GETs and PUTs with no per-request data or item limits for you to handle manually.

Using Dynamoose

A function for updating a user might look like this using the vanilla DynamoDB API:


```javascript
let user = { id: 71, first_name: 'Robert', last_name: 'Johnson', DOB: '1/1/91' /* ... */ };

let updateParams = {
  TableName: 'Users',
  Key: { id: user.id },
  // every column you want to update must be listed in the expression...
  UpdateExpression: 'SET first_name = :first_name, last_name = :last_name, DOB = :DOB',
  // ...and then repeated again in the attribute values
  ExpressionAttributeValues: {
    ':first_name': user.first_name,
    ':last_name': user.last_name,
    ':DOB': user.DOB
  }
};

AWSDocumentClient.update(updateParams, (err, data) => {
  if (err) {
    console.log('Error updating DynamoDB table');
  } else {
    console.log('Saved successfully!');
  }
});
```

This function isn’t very readable, contains a lot of repetition of variable names, and a single missing comma or delimiter will cause the entire call to fail. The process is particularly painful for tables with many columns, since every column you want to update must be spelled out in the UpdateExpression. Any data validation has to be done inline, and batch PUTs or GETs of large amounts of data need to be handled with Promise.all and a loop sending out multiple paginated requests.
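To make that last pain point concrete, here is a sketch of the manual chunking a plain document-client batch write requires (DynamoDB's BatchWriteItem caps each request at 25 items). The helper names are made up, and `docClient` stands in for an `AWS.DynamoDB.DocumentClient` instance:

```javascript
// Split an array into fixed-size chunks (BatchWriteItem allows max 25 items).
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Fan a large PUT out over multiple batchWrite requests with Promise.all.
// A production version would also retry any UnprocessedItems returned.
function batchPutAll(docClient, tableName, items) {
  const requests = chunk(items, 25).map((batch) => {
    const params = {
      RequestItems: {
        [tableName]: batch.map((item) => ({ PutRequest: { Item: item } }))
      }
    };
    return docClient.batchWrite(params).promise();
  });
  return Promise.all(requests);
}
```

Dynamoose performs this kind of batching for you.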

This is what the same function might look like using Dynamoose. Keep in mind that the schema only needs to be configured once and can be used to create multiple tables.

```javascript
// this would likely be written in a schema file somewhere and imported
let userSchema = new dynamoose.Schema({
  id: {
    type: Number,
    validate: function (id) { return id > 0; },
    hashKey: true
  },
  first_name: {
    type: String
  },
  last_name: {
    type: String
  },
  DOB: {
    type: String
  }
}, {
  throughput: { read: 10, write: 10 }
});

// what you would interface with in your code
let User = dynamoose.model('Users', userSchema);

let robert = new User({
  id: 71,
  first_name: 'Robert',
  last_name: 'Johnson',
  DOB: '1/1/91'
});

robert.save(function (err) {
  if (err) {
    console.log('Error updating DynamoDB table');
  } else {
    console.log('Saved successfully!');
  }
});
```

As you can see, this code is a few lines longer, but much more readable and explicit. It allows us to work with objects rather than long, complex, delimited strings. This format will feel much more natural to JavaScript developers.




Dynograte


What is Dynograte?

Dynograte is a simple DynamoDB migration tool that provides an easy way to write, log, and roll back migrations. It takes a DynamoDB object and either a function or a folder of files containing functions to run as migrations. The tool can also generate migration files for you, ensuring unique file names. It saves a record of each migration in a separate DynamoDB table of your choosing and only runs migrations it has no record of executing.


Using Dynograte

Dynograte is easy to set up. Simply create a directory where your dynograte migration files will be stored, and create a file to execute your migrations with the following code:

```javascript
const path = require('path');
const aws = require('aws-sdk');
const dynograte = require('dynograte');

// `config.dynamodb` holds your DynamoDB connection settings
const dynamodb = new aws.DynamoDB(config.dynamodb);

let migrationDir = path.resolve(__dirname, './dynamodb-migrations');

return dynograte.migrate({
  dynamodb: dynamodb,
  migrationTableName: 'my-awesome-migration-table',
  migrationDir: migrationDir
});
```

This will create a table to store migration records named `my-awesome-migration-table` and run the migrations stored at the relative path `./dynamodb-migrations`. You can also use Dynograte in conjunction with Dynamoose, although somewhat indirectly: you can't pass the Dynamoose object to Dynograte, so you need to use `dynamoose.ddb()` to obtain the underlying `AWS.DynamoDB` object. You can, however, re-import Dynamoose into each of your migration files. A migration file will look something like this:

```javascript
// import dynamoose here if your migration uses it

exports.up = function (dynograte) {
  // dynamoose code to perform a migration
};
```

Inside the function block is where all of your migration code goes. You can perform any action here that the AWS SDK allows for interacting with DynamoDB. You can also export other values to modify how the migration executes, such as `exports.retry = true` to retry the migration if it fails. Rolling back a migration is done by using `exports.down` in place of `exports.up`, but requires writing your own rollback logic.

How we used it

We used Dynograte to perform a significant change to our production DynamoDB tables. Using this tool, we were able to change the primary and secondary keys for a table (a task that is not possible through the AWS console). We accomplished this by creating a backup of table A, pulling its data, manipulating that data, deleting table A, and, after waiting for the deletion to complete, creating table B under the same name. This way we kept all current data while changing how it was accessed through the keys, without losing any associations to our table name stored in other tables.
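A hedged sketch of that sequence is below. The table name, the new key schema, and the single-scan read (fine for small tables; paginate otherwise) are all illustrative, and `docClient` is assumed to be an `AWS.DynamoDB.DocumentClient` for the same region as the `AWS.DynamoDB` client dynograte hands to `exports.up`:

```javascript
// Illustrative key-change migration: back up, read out, rebuild, reload.
async function migrateUserKeys(dynamodb, docClient) {
  // 1. Back up the table before touching anything.
  await dynamodb.createBackup({
    TableName: 'Users',
    BackupName: 'Users-before-key-change'
  }).promise();

  // 2. Pull the existing rows (and transform them here if needed).
  const { Items } = await docClient.scan({ TableName: 'Users' }).promise();

  // 3. Delete the old table and wait for the deletion to complete.
  await dynamodb.deleteTable({ TableName: 'Users' }).promise();
  await dynamodb.waitFor('tableNotExists', { TableName: 'Users' }).promise();

  // 4. Recreate it under the same name with the new key schema.
  await dynamodb.createTable({
    TableName: 'Users',
    AttributeDefinitions: [{ AttributeName: 'email', AttributeType: 'S' }],
    KeySchema: [{ AttributeName: 'email', KeyType: 'HASH' }],
    ProvisionedThroughput: { ReadCapacityUnits: 10, WriteCapacityUnits: 10 }
  }).promise();
  await dynamodb.waitFor('tableExists', { TableName: 'Users' }).promise();

  // 5. Write the rows back into the rebuilt table.
  for (const item of Items) {
    await docClient.put({ TableName: 'Users', Item: item }).promise();
  }
}
```

Inside a dynograte migration file, you would call something like this from `exports.up`.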



These three tools greatly simplify the experience of developing a serverless Node application. Utilizing Claudia provides a method for developers to deploy confidently and easily, manage settings and policies, and create API endpoints; Dynamoose simplifies database access by providing more compact and understandable functions to perform actions on your database; and Dynograte simplifies and tracks migrations performed on DynamoDB. By abstracting away the common pain points of AWS development, developers can spend less time managing AWS and more time developing their applications.