
I built and deployed my first go backend

July 18th, 2025

Recently I've fallen in love with Go. The language's simplicity, its powerful concurrency model and the exhaustiveness of its standard library set it apart. Over the last month, I built and deployed a blog app backend API in Go for a client. In this post, I will talk about the tools I used, the challenges I faced and compare cloud providers for deploying a Go backend.

The tech stack

I wanted to use something really lightweight, relying on the standard library where possible to avoid too many dependencies, and most importantly, something performant. I also wanted something well abstracted and decoupled in a way that makes it easy to test.

Considering those constraints, this is the set of tools I chose: Gin for the HTTP layer, PostgreSQL for the database, golang-migrate for migrations (later replaced by Atlas, as I explain below), Air for live reloading and Docker for both development and deployment.

Let's start with migrations. With golang-migrate, you hand-write a pair of up and down SQL files for every schema change:

000001_add_user_table.up.sql
create table "user" (
	id bigserial primary key,
	email varchar not null unique,
	created_at timestamp not null default now()
);
000001_add_user_table.down.sql
drop table "user";

This approach gives you a lot of control, but it's sometimes time-consuming and error-prone. Generally, you run the up migration locally, then the down migration, then up again, to make sure everything runs as expected. But the main reason I dislike it is that it's hard to see what your current database schema looks like without opening the database or reconstructing it in your head from the different migration files, and that's pretty challenging when your schema is frequently updated.
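Concretely, that up/down/up sanity check with the migrate CLI looks something like this (the connection string is a placeholder for whatever your local Postgres runs on):

migrate -source file://db/migrations -database "postgres://user:pass@localhost:5432/blog?sslmode=disable" up
migrate -source file://db/migrations -database "postgres://user:pass@localhost:5432/blog?sslmode=disable" down 1
migrate -source file://db/migrations -database "postgres://user:pass@localhost:5432/blog?sslmode=disable" up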

To run the migrations locally, I use a Docker container; I'll explain the dev and deployment workflow later. In production, I first tried executing the migrations directly at the entrypoint, from the code, which runs them once every time the app starts, but there are several issues with that. When a migration fails for any reason, you have two options: you can ignore the failure and continue running the application, which is not an option because the code will try to access fields that don't exist in the current database. Or you can fail the request or panic, which simply stops the running process, not an option either.

Based on that, the migrations should be run in the CI/CD pipeline. That way, when a migration fails, the build fails and the erroneous migration is never shipped to production.

That said, I can see particular cases where running migrations programmatically makes sense.
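For reference, the entrypoint approach looks roughly like this when golang-migrate is used as a library. This is a sketch of the idea, not the exact code I shipped; the DATABASE_URL variable and the fatal-on-failure choice are illustrative:

package main

import (
	"errors"
	"log"
	"os"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres" // registers the postgres driver
	_ "github.com/golang-migrate/migrate/v4/source/file"       // registers the file:// source
)

func runMigrations() error {
	m, err := migrate.New("file://db/migrations", os.Getenv("DATABASE_URL"))
	if err != nil {
		return err
	}
	// Up applies all pending migrations. ErrNoChange only means the
	// schema is already up to date, so it's not a real failure.
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		return err
	}
	return nil
}

func main() {
	if err := runMigrations(); err != nil {
		// And here is the dilemma: ignoring the error leaves the app running
		// against a stale schema, while exiting kills the whole process.
		log.Fatalf("migrations failed: %v", err)
	}
	// ... start the HTTP server as usual
}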

Finally, I switched from golang-migrate to atlas and I instantly got a better DX.

  1. You write the desired database schema
schema.sql
create table "user" (
  id bigserial primary key,
  email text not null unique,
  first_name text default null,
  last_name text default null,
  avatar_url text default null,
  role role default 'reader' not null, -- assumes a "role" enum type created elsewhere in the schema
  created_at timestamptz default now() not null,
  updated_at timestamptz default now() not null
);
  2. Atlas generates the migration steps needed
source .env.migrate && atlas migrate diff add_user_table --dir "file://db/migrations" --to "file://db/schema.sql" --dev-url "docker://postgres/17-alpine/dev?search_path=public"
20250712185056_add_user_table.sql
-- Create "user" table
CREATE TABLE "public"."user" (
  "id" bigserial NOT NULL,
  "email" text NOT NULL,
  "first_name" text NULL,
  "last_name" text NULL,
  "avatar_url" text NULL,
  "role" "public"."role" NOT NULL DEFAULT 'reader',
  "created_at" timestamptz NOT NULL DEFAULT now(),
  "updated_at" timestamptz NOT NULL DEFAULT now(),
  PRIMARY KEY ("id"),
  CONSTRAINT "user_email_key" UNIQUE ("email")
);
  3. You run the migrations
source .env.migrate && atlas migrate apply --dir "file://db/migrations" --url $DATABASE_URL

And voilà. You avoid writing migrations manually and can see the current database schema at a glance. Atlas is very powerful and provides a lot of features that I want to explore in the future. If you need to roll back a given migration, Atlas provides tools to do it automatically. From here, I'm pretty happy with the set of tools and how they all work together.
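Rolling back is a one-liner too; with a recent Atlas version it's something like:

source .env.migrate && atlas migrate down --dir "file://db/migrations" --url $DATABASE_URL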

Development and deployment workflow

I'm using Docker both for development and deployment. In development, I use Docker Compose to set up the app, the Postgres database and the migrations.

services:
  database:
    image: "postgres:17-alpine"
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: $POSTGRES_DB
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
 
  # still using golang-migrate. TODO: use atlas
  migrate:
    image: migrate/migrate
    volumes:
      - ./db/migrations:/migrations
    depends_on:
      - database
    command: -source=file://migrations -database postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@database:5432/$POSTGRES_DB?sslmode=disable up
 
  backend:
    build:
      context: .
      dockerfile: dev.Dockerfile
    volumes:
      - ./:/app
    ports:
      - "3000:3000"
    environment:
      GOOGLE_OAUTH_CLIENT_ID: $GOOGLE_OAUTH_CLIENT_ID
      GOOGLE_OAUTH_CALLBACK_URL: $GOOGLE_OAUTH_CALLBACK_URL
      GOOGLE_OAUTH_CLIENT_SECRET: $GOOGLE_OAUTH_CLIENT_SECRET
    depends_on:
      - migrate
 
volumes:
  postgres-data:
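Compose reads these variables from a .env file sitting next to it. Mine looks roughly like this; every value is a placeholder and the callback path is just an example:

.env
POSTGRES_DB=blog
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
GOOGLE_OAUTH_CLIENT_ID=your-client-id
GOOGLE_OAUTH_CLIENT_SECRET=your-client-secret
GOOGLE_OAUTH_CALLBACK_URL=http://localhost:3000/auth/google/callback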

This is the backend Dockerfile for development:

FROM golang:1.24.4-alpine AS build-stage
 
WORKDIR /app
 
# Refresh server on file change
RUN go install github.com/air-verse/air@latest
 
COPY go.mod go.sum ./
 
RUN go mod download
 
COPY . .
 
ENV PORT=3000
 
EXPOSE ${PORT}
 
CMD ["sh", "-c", "$GOPATH/bin/air ."]

As you can see, I'm using Air to automatically restart the server as the code changes.
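Air can be configured through an .air.toml file at the project root. A minimal one looks something like this; the build command and paths are assumptions about a typical layout:

.air.toml
root = "."
tmp_dir = "tmp"

[build]
  cmd = "go build -o ./tmp/main ."
  bin = "./tmp/main"
  include_ext = ["go"]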

The production Dockerfile is a little bit different.

FROM golang:1.24.4-alpine AS build-stage
 
WORKDIR /app
 
COPY go.mod go.sum ./
 
RUN go mod download
 
COPY . .
 
RUN CGO_ENABLED=0 GOOS=linux go build -o /binary
 
FROM gcr.io/distroless/base-debian12:nonroot AS build-release-stage
 
WORKDIR /
 
COPY --from=build-stage /binary /binary
 
ENV PORT=3000
ENV GIN_MODE=release
 
EXPOSE ${PORT}
 
USER nonroot:nonroot
 
ENTRYPOINT ["/binary"]
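A quick sanity check before pushing is to build and run the production image locally; the tag here is arbitrary:

docker build -t blog-backend .
docker run --rm -p 3000:3000 --env-file .env blog-backend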

And that's pretty much it. I have a Makefile to run most tasks in one simple command.

Makefile
run:
	docker-compose up
 
run-build:
	docker-compose up --build
 
dev:
	docker-compose -f dev-compose.yml up
 
dev-build:
	docker-compose -f dev-compose.yml up --build
 
test:
	go run -mod=mod github.com/rakyll/gotest@latest -v ./...
 
coverage:
	go test -v -coverprofile cover.out ./... && \
	go tool cover -html ./cover.out -o ./cover.html
 
generate-migration:
	source .env.migrate && atlas migrate diff "$(name)" --dir "file://db/migrations" --to "file://db/schema.sql" --dev-url "docker://postgres/17-alpine/dev?search_path=public"
 
migrate-up:
	source .env.migrate && atlas migrate apply --dir "file://db/migrations" --url $$DATABASE_URL

Deployment

The deployment is pretty easy and straightforward. I tested several cloud providers: a self-hosted VPS (Linode), Google Cloud Run, Fly.io, AWS Fargate, Render, DigitalOcean, etc., and compared them against what I needed.

From my experience, Railway is a pretty good option. It deploys Docker containers, the pricing is usage-based (memory, CPU, network egress and volume) and I find the general experience just nice. You can register now and get a free $5 credit to test it. Currently, I have the app, a Postgres database and a Redis instance (for everything caching and rate limiting) running, and it's not (yet?) costing me more than 5 dollars monthly. Of course, the website doesn't have a lot of traffic yet and that may change in the future. I also want to test more providers, but right now I'm just happy with Railway. If you want to try it, please consider using my referral link.


My Railway architecture

I'm also using Cloudflare as a security layer in front of the backend API.
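Cloudflare covers the edge, while the rate limiting mentioned above happens in the app through Redis. Here is a minimal sketch of a fixed-window limiter as Gin middleware, assuming go-redis; the limits, key scheme and fail-open choice are illustrative, not the exact code I run:

package middleware

import (
	"fmt"
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/redis/go-redis/v9"
)

// RateLimit allows `limit` requests per client IP per `window`,
// using a simple fixed-window counter stored in Redis.
func RateLimit(rdb *redis.Client, limit int64, window time.Duration) gin.HandlerFunc {
	return func(c *gin.Context) {
		key := fmt.Sprintf("ratelimit:%s", c.ClientIP())

		// Increment the counter; start the window on the first hit.
		count, err := rdb.Incr(c, key).Result()
		if err != nil {
			// If Redis is down, fail open instead of blocking all traffic.
			c.Next()
			return
		}
		if count == 1 {
			rdb.Expire(c, key, window)
		}
		if count > limit {
			c.AbortWithStatusJSON(http.StatusTooManyRequests, gin.H{
				"success": false,
				"error":   "rate limit exceeded",
			})
			return
		}
		c.Next()
	}
}

It plugs in with router.Use(middleware.RateLimit(rdb, 100, time.Minute)), and the same Redis instance handles caching.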

For the CI/CD pipeline, I just have a simple GitHub Action that runs the tests and the migrations. On Railway, you can wait for CI to finish successfully before building the container.

.github/workflows/run-tests-and-migrate-db.yml
name: Run tests and migrate database
 
on:
  push:
    branches: [prod]
 
jobs:
  test:
    runs-on: ubuntu-latest
 
    steps:
      - uses: actions/checkout@v4
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.24.x'
      - name: Install dependencies
        run: go get .
      - name: Test
        run: go test -v ./...
 
  deploy:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ariga/setup-atlas@v0
      - name: Deploy Atlas Migrations
        uses: ariga/atlas-action/migrate/apply@v1
        with:
          url: ${{ secrets.DATABASE_URL }}
          dir: file://db/migrations

GitHub Action run and migration summary

Tests

I'm not using TDD yet, but it's something I want to explore in the near future. Right now, I'm writing tests for internal functions and API endpoints. It looks like this:

func TestOAuthProviderHandler(t *testing.T) {
	err := godotenv.Load("../.env.example")
	if err != nil {
		t.Error(err)
	}
 
	err = internal.SetupEnv()
	if err != nil {
		t.Error(err)
	}
	expectedURL := "https://somewhere.com/redirectauth"
 
	router := internal.GetRouter()
 
	router.GET("/auth", func(c *gin.Context) {
		internal.OAuthProviderHandler(
			c,
			func(res http.ResponseWriter, req *http.Request) (string, error) {
				return expectedURL, nil
			},
		)
	})
 
	responseRecorder := httptest.NewRecorder()
 
	req := httptest.NewRequest(http.MethodGet, "/auth?provider=testprovider", nil)
	router.ServeHTTP(responseRecorder, req)
 
	responseBody, err := io.ReadAll(responseRecorder.Body)
	if err != nil {
		t.Error(err)
	}
 
	response := strings.TrimSpace(string(responseBody))
 
	expectedResponse, err := json.Marshal(struct {
		internal.Response
		AuthURL string `json:"auth_url"`
	}{
		internal.Response{Success: true},
		expectedURL,
	})
 
	if err != nil {
		t.Error(err)
	}
 
	assert.Equal(t, http.StatusOK, responseRecorder.Code)
	assert.Equal(
		t,
		string(expectedResponse),
		response,
	)
}

I'm mostly using the standard library's httptest utilities and testify's assert package.

Things I want to improve

One thing I particularly appreciated with Fly.io is the out-of-the-box Grafana dashboard. I want to add more telemetry and monitoring to the app in the future.

Conclusion

Writing this backend has been a joy and, most importantly, it got me excited again about writing code and loving the craft, a very different experience from the corporate TypeScript boreout of my previous two years. I had the opportunity to dive into the ecosystem and test a lot of tools and services. It also helped me get more comfortable with Go, and I see myself being very productive with it today. I'm looking forward to sharing more about Go and maybe open-sourcing some of the nice utilities I'm using internally. If you want to get started with Go, I recently wrote a quick language tour.

This post is a little technical and long; I tried my best to keep it short. Maybe I'll talk about performance, benchmarking and scaling in a future post.

I hope this post helps anyone trying to build a Go backend.

Thanks for reading.