Golang REST API using Serverless framework

A step-by-step guide to building a Golang REST API using Serverless Framework, PostgreSQL and AWS Lambda.

Mario Arizaj
11 min read · Jan 19, 2021
Photo by https://github.com/MariaLetta/free-gophers-pack

In recent years, Go (Golang) has seen a large increase in usage and in the size of its community. Companies all over the world are starting new projects in Go, or even converting old ones. Serverless architectures are also getting more popular by the day, with more and more developers leaning towards them.

This story is a step-by-step guide about building a REST API using the Serverless Framework with Go, PostgreSQL in AWS RDS and AWS Lambda.

Prerequisites

To follow the guide you will need:

  1. An AWS account
  2. Go installed on your local machine
  3. The AWS CLI

Without wasting more time, let’s get into it.

Creating an IAM user

The service we are going to build for this example will require permissions to use some of our AWS resources. The easy, not-recommended way is to use our root account and give the service full root privileges. That is a bad idea for several reasons, security being the main one: if your service is compromised, the credentials of your AWS account are leaked and an attacker gains access to your root account. The recommended way to let a third-party service use your AWS resources is to create an IAM user.

Let’s log in to our AWS account and search for IAM in the search bar. This will take you to the IAM service screen. At the top left of the new screen there is an option called `Add user`, which will take us to the user creation page.

Image of Create User page on IAM

We need to choose a name for our user and also decide whether to give this user Programmatic access only (access to resources through the CLI or SDK) or AWS Management Console access as well. I am going to go for the first option, since I will never need to log in to the console with this user. Next comes the Permissions step, where we give this user access to certain resources on our AWS account. From the three presented options choose Attach existing policies directly and click Create Policy in the corner.

Set permissions for the IAM user

Then, on the following screen, you will be presented with two options: create the policy using the visual editor, or using a JSON policy definition. I like the JSON policy editor, so let’s copy and paste the policy definition I have created. I generated this policy using serverless-policy-generator by dancrumb, with a few slight modifications for some extra permissions we are going to need. Note that if you want to use other AWS resources like DynamoDB, the policy needs further modifications. Also make sure you replace the region in the policy if your AWS region is different from mine; for this example I have chosen eu-central-1, which is the closest to me.
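As a rough sketch (the action list and `Resource` wildcard here are illustrative assumptions, not my exact policy), a policy for Serverless deployments needs to cover CloudFormation, the S3 deployment bucket, Lambda, API Gateway, CloudWatch Logs, and IAM role handling. Note this is much broader than strict least privilege:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "s3:*",
        "lambda:*",
        "apigateway:*",
        "logs:*",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:PutRolePolicy",
        "iam:DeleteRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
```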

After the policy is created, go back to your Create User tab, refresh the policies list and search for our newly created policy. Then create the user, and you should be able to download the user’s credentials CSV.

Then spin up a terminal locally and run aws configure. This will prompt you to enter the credentials you just downloaded, a region and an output format. The region should match your policy’s region, and the output should be json. Now we should be good to go for our next step.

Creating the RDS PostgreSQL Database

Let’s go back to our AWS console and navigate to Security Groups. We are going to need a security group to attach to our RDS database so that we can access it remotely from our computers, and also give access to our service. For the sake of simplicity and time, I am going to allow all inbound and all outbound traffic on this security group. If you are going to keep the database beyond the scope of this article, I suggest you grant access only to your own devices in this security group. If you want to learn more, the official AWS documentation is a great place to broaden your knowledge about security groups.

Now we need to go to the RDS service in the console and click Create Database, which should take us to the database creation options page. I am going to go with PostgreSQL for the database engine. In the templates section, choose Free tier if your account is eligible, otherwise choose Dev/Test (be aware that some costs may apply to both options). Choose an identifier for your database instance, a master username and a master password; I recommend using an auto-generated password. Leave the defaults for the database instance size, and the only change under Storage should be to disable autoscaling.

Then, under Connectivity, choose Yes for Public access, and under VPC security group choose the security group you just created. If you do not see your security group, make sure you are in the same region as the security group you created. Leave the defaults for all the other options and create the database. It may take several minutes for the database to become active, but you should be able to see a screen like this one once it is available.

The database page on RDS service

At the top right corner you should be able to view your database credentials. Copy the database connectivity information and credentials somewhere you can retrieve them later in the guide.

Now it is time to connect to the database remotely from our laptop. I am going to use DataGrip to connect to the database; other good options for PostgreSQL are pgAdmin or the psql terminal client.
After you download DataGrip, open the application and create a new project. You can add a Data Source by clicking the plus sign at the top left corner and choosing PostgreSQL. This will prompt you with a form that requests our database credentials, host and port. For the Database name, type postgres, since that is the name of the default database AWS created for us.

Connecting to our Database remotely using DataGrip

After filling out the form, click on Test Connection which should confirm that the information is correct. If something goes wrong, go back to your database information on AWS console and make sure that your security group allows your computer to access the Database remotely.

After the connection to the database is successful, let’s jump to a query console and create our service database.

For our example service we are going to need two resources: authors, and the articles these authors have published. After the database creation is successful, let’s jump into our final phase: actually writing some code.
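A minimal sketch of that query-console step might look like the DDL below. The database name serverless_example matches what we reference later; the dedicated user name and the exact column lists are assumptions based on the models we define further down:

```sql
-- Run against the default "postgres" database
CREATE DATABASE serverless_example;
CREATE USER serverless_user WITH ENCRYPTED PASSWORD 'change-me';
GRANT ALL PRIVILEGES ON DATABASE serverless_example TO serverless_user;

-- Switch the console to serverless_example, then create the tables
CREATE TABLE authors (
    id         SERIAL PRIMARY KEY,
    name       TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE articles (
    id         SERIAL PRIMARY KEY,
    author_id  INTEGER NOT NULL REFERENCES authors (id),
    title      TEXT NOT NULL,
    body       TEXT,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```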

Install the Serverless Framework and generate the project

To install Serverless, make sure you have Node.js installed locally and run npm install -g serverless. This should install the framework on your computer, and you should be able to see the Serverless version by running serverless --version. You can also use the sls alias instead of serverless.

This article assumes you have Go installed on your computer and have correctly set up your Go workspace and $GOPATH. Navigate to your GOPATH and create a directory called github.com/{your-github-username}. If you use another code hosting solution, change the first directory accordingly.

Now run serverless create -t aws-go-dep -p serverless-example in the created directory. This will create a new project called serverless-example. Open the generated project with your code editor of choice; I am going to use GoLand. The folder structure generated by Serverless should be similar to the image below.

Folder structure generated by serverless

We need to modify a few things in the default template Serverless uses for Go. First, remove the Gopkg.toml file, which is used by dep to vendor dependencies; dep is deprecated, and Go modules are now the correct way to manage Go dependencies. Run go mod init at the root of the project, which will generate a file called go.mod where all our dependencies will be tracked. The other changes we are going to make are inside the Makefile, to make it compatible with Go modules. Copy and paste the contents of the Makefile below into your project. The changes to the build process make sure that we fetch and vendor all of our dependencies before building the binaries.
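A go-modules-compatible version looks roughly like this sketch; the target names follow the default aws-go-dep template, and the binary paths assume the template’s hello and world handlers:

```makefile
.PHONY: build clean deploy

build:
	go mod tidy
	go mod vendor
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/hello hello/main.go
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/world world/main.go

clean:
	rm -rf ./bin ./vendor

deploy: clean build
	sls deploy --verbose
```

Cross-compiling with GOOS=linux GOARCH=amd64 is required because the go1.x Lambda runtime executes Linux binaries, whatever OS you build on.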

For now, let’s leave the serverless.yml file as is and run make deploy at the root of the project. After the deployment process finishes, we will see some API Gateway endpoints in the deployment output. Let’s grab one of the endpoints and test it using Postman.

Sending a GET request to /hello endpoint

Now let’s start modifying the project so that we can GET and CREATE Author and Article resources.

Use Parameter Store to securely retrieve Database Credentials

First, let’s get back to our AWS console and navigate to Systems Manager Parameter Store. We are going to use the Parameter Store to hold our database connection parameters so our code can look them up in a secure manner. We need to create 5 parameters, all defined as SecureString. Note that for db-name we are going to use serverless_example, and for the user info, the user we created in the step above using DataGrip.

SSM Parameter store parameters
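If you prefer the CLI over the console, the same parameters can be created with aws ssm put-parameter. The parameter paths below are assumptions; any naming scheme works, as long as serverless.yml references the same paths later:

```
aws ssm put-parameter --name /serverless-example/db-host --type SecureString --value "<rds-endpoint>"
aws ssm put-parameter --name /serverless-example/db-port --type SecureString --value "5432"
aws ssm put-parameter --name /serverless-example/db-name --type SecureString --value "serverless_example"
aws ssm put-parameter --name /serverless-example/db-user --type SecureString --value "<db-user>"
aws ssm put-parameter --name /serverless-example/db-password --type SecureString --value "<db-password>"
```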

Connect to the database from our service

Now back to our IDE and the generated project. First, let’s create a new directory called api, and inside it another one called common. In this common directory we are going to store a few functions used by all of our Lambda functions. Next, let’s download a Postgres ORM for Go. My personal favourite is https://github.com/go-pg/pg. To install it, go to the root of the project and run go get github.com/go-pg/pg/v10. If you are familiar with other drivers or ORMs, then you are free to choose the best solution for you.

For our service to be able to pull the parameters stored in AWS Systems Manager and expose them as environment variables, we need to make some changes to our serverless.yml file.
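A sketch of that provider section might look like this; the SSM parameter paths and environment variable names here are assumptions, and must match whatever you created in the Parameter Store:

```yaml
provider:
  name: aws
  runtime: go1.x
  region: eu-central-1
  environment:
    DB_HOST: ${ssm:/serverless-example/db-host~true}
    DB_PORT: ${ssm:/serverless-example/db-port~true}
    DB_NAME: ${ssm:/serverless-example/db-name~true}
    DB_USER: ${ssm:/serverless-example/db-user~true}
    DB_PASSWORD: ${ssm:/serverless-example/db-password~true}
```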

As you can see, I have added an environment property to the provider object, and on that property I am setting some environment variables which are pulled from SSM. The ~true at the end of a parameter reference tells Serverless that the parameter is encrypted in the Parameter Store and should be decrypted.

Now let’s create a file called storage.go inside api/common, which is going to hold our database connection and our storage interface.
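A minimal sketch of that file, using go-pg as described above; the environment variable names are assumptions matching the serverless.yml changes, and the exact pg.Options you need may differ:

```go
package common

import (
	"os"

	"github.com/go-pg/pg/v10"
)

// StorageIFace will grow to hold all our storage function signatures;
// for now it stays empty.
type StorageIFace interface{}

// Storage wraps the database connection.
type Storage struct {
	db *pg.DB
}

// sto is package-level on purpose: warm Lambda invocations reuse the
// same execution context, so the connection survives between calls.
var sto StorageIFace

// ConnectToDB returns the cached storage, creating the connection only once.
func ConnectToDB() StorageIFace {
	if sto != nil {
		return sto
	}
	sto = &Storage{
		db: pg.Connect(&pg.Options{
			Addr:     os.Getenv("DB_HOST") + ":" + os.Getenv("DB_PORT"),
			User:     os.Getenv("DB_USER"),
			Password: os.Getenv("DB_PASSWORD"),
			Database: os.Getenv("DB_NAME"),
		}),
	}
	return sto
}
```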

The first thing defined in the storage.go file is the storage interface, where we will declare all the function signatures for our storage layer; for now, let’s just leave it as an empty interface. Then we have a Storage type with only one property, the database connection. For the Lambda to be able to reuse the database connection, I have defined the StorageIFace variable sto as a global, so that ConnectToDB can check it and only create a new connection when one is not already initialised. This is a common Lambda best practice: functions can take advantage of the execution context by reusing resources across warm invocations.

Create our models and Storage functions

Now it is time for us to define our models. As I mentioned earlier, we only need two: an Author and an Article. Start by creating a directory called models in the project directory, followed by two files, author.go and article.go. Then define the Author and Article structs in their respective files.

Next, create two more files in the common directory called author_storage.go and article_storage.go. The functions we define in these files will be responsible for creating, getting, updating and deleting authors and articles.

Now let’s change our StorageIFace so it includes all these function signatures.
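A sketch of the grown interface plus two of the author functions (the method names, the module path github.com/marioarizaj/serverless-example, and the go-pg calls are assumptions based on the descriptions above; the article functions are analogous):

```go
package common

import "github.com/marioarizaj/serverless-example/models"

// StorageIFace now lists the operations our handlers need.
type StorageIFace interface {
	CreateAuthor(author *models.Author) error
	GetAuthors() ([]models.Author, error)
	CreateArticle(article *models.Article) error
	GetArticles() ([]models.Article, error)
}

// CreateAuthor inserts a new author row (go-pg v10 query API).
func (s *Storage) CreateAuthor(author *models.Author) error {
	_, err := s.db.Model(author).Insert()
	return err
}

// GetAuthors loads all authors from the database.
func (s *Storage) GetAuthors() ([]models.Author, error) {
	var authors []models.Author
	err := s.db.Model(&authors).Select()
	return authors, err
}
```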

With our storage in place, the next thing to do is get rid of the hello and world directories. Once they are removed, we will create a few more directories under api. Each directory inside api, besides common, will hold an endpoint, which represents a Lambda function.

So let’s continue by creating some more files under api.

This first file create_author.go is responsible for decoding the request and creating an author in our database.

Create author
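A minimal sketch of such a handler, using the aws-lambda-go types; the module path, the helper names (ConnectToDB, CreateAuthor) and the status codes are assumptions carried over from the sketches above:

```go
package main

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"

	"github.com/marioarizaj/serverless-example/api/common"
	"github.com/marioarizaj/serverless-example/models"
)

// Handler decodes the request body into an Author and stores it.
func Handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	var author models.Author
	if err := json.Unmarshal([]byte(req.Body), &author); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusBadRequest, Body: err.Error()}, nil
	}

	sto := common.ConnectToDB()
	if err := sto.CreateAuthor(&author); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError, Body: err.Error()}, nil
	}

	body, _ := json.Marshal(author)
	return events.APIGatewayProxyResponse{StatusCode: http.StatusCreated, Body: string(body)}, nil
}

func main() {
	lambda.Start(Handler)
}
```

The other handlers follow the same decode-store-respond shape.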

This second file is used for… you guessed it, creating an article.

Create article

This one is used for getting all the authors and returning them as an array.

Get authors
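The GET handlers are simpler, since there is no request body to decode. A hedged sketch under the same assumptions as above:

```go
package main

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"

	"github.com/marioarizaj/serverless-example/api/common"
)

// Handler loads every author and returns them as a JSON array.
func Handler(ctx context.Context) (events.APIGatewayProxyResponse, error) {
	sto := common.ConnectToDB()
	authors, err := sto.GetAuthors()
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError, Body: err.Error()}, nil
	}

	body, _ := json.Marshal(authors)
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
}

func main() {
	lambda.Start(Handler)
}
```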

And this last one is used for retrieving all the articles from the database.

Now that we have all the handlers in place, we need to modify our serverless.yml file to include our other endpoints.
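A sketch of the functions section; the function names, binary paths and URL paths are assumptions, and each handler is wired to an API Gateway HTTP event:

```yaml
functions:
  create-author:
    handler: bin/create_author
    events:
      - http:
          path: authors
          method: post
  get-authors:
    handler: bin/get_authors
    events:
      - http:
          path: authors
          method: get
  create-article:
    handler: bin/create_article
    events:
      - http:
          path: articles
          method: post
  get-articles:
    handler: bin/get_articles
    events:
      - http:
          path: articles
          method: get
```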

We also need to change our Makefile so it can build all the functions into our bin directory.
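Assuming the directory layout sketched above, the build target grows to one line per handler:

```makefile
build:
	go mod tidy
	go mod vendor
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/create_author api/create_author/main.go
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/get_authors api/get_authors/main.go
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/create_article api/create_article/main.go
	env GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o bin/get_articles api/get_articles/main.go
```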

Now we can safely run make deploy to deploy our Serverless application in AWS. After the program is successfully deployed, we can confirm that all the endpoints work as expected using Postman.
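If you prefer the terminal over Postman, a quick smoke test might look like this, with the host replaced by the API Gateway URL from your own deploy output (the stage and paths below are assumptions):

```
curl -X POST https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/authors \
  -H "Content-Type: application/json" \
  -d '{"name":"Jane Doe"}'

curl https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/authors
```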

Next Steps

Obviously, this is just a dummy API and has a lot of room for improvement. Let me throw out some ideas so you can figure out the next steps by yourself:

  1. Add authentication to the endpoints.
  2. Use serverless-stage-manager to support multiple stages (dev, prod, offline).
  3. Implement the other CRUD operations for our resources, like GetOne, Update and Delete.
  4. If more handlers are added, automate the Makefile with a loop for building the binaries.

Some last words

As you might see from my profile, this is my first post on Medium, and I hope it won’t be the last. If you have any questions or something is not working as expected, let me know in the comments section below. If you want to check out the source code for this project, here is the link: https://github.com/marioarizaj/serverless-example .
