README
¡Vamos!
A library for a Go HTTP server. It is configured with TLS 1.3, rate limiting, logging, metrics, health checks, & profiling. It is integrated with Openbao, Postgres, & Redis.
A virtual development environment is included in this repository.
A corporate development team can deploy a prototype into a production environment as a micro-service and expect operational maturity. Vamos hastens development and eases operation.
Quick Start
Provide the application a config file named dev.json or prod.json in the config directory. See _example/config/dev.json for a working example. The file is concerned with the following:
- The location of the server guarding secrets.
- Local file paths to read x509 cert & key, & intermediate CA.
- The location of a Postgres instance and its sensitive credential.
- Fake data to provide to Postgres for development & testing.
- Server details, e.g., host, port, timeouts.
- TLS config as server.
- TLS config as client.
- Optional rate limiter.
- Optional static files.
- Health evaluations.
- Logging level.
- Toggling of metrics.
- Location of a Redis server.
- Openbao HTTP endpoint & JSON key for password.
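The dev.json/prod.json convention can be pictured with a small sketch. This is an illustration of the idea only, not the library's actual config.Read implementation, and the Config fields here are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Config holds a hypothetical subset of the fields listed above.
type Config struct {
	Host string `json:"host"`
	Port int    `json:"port"`
}

// configFileName picks dev.json or prod.json from APP_ENV,
// mirroring the convention described in this README.
func configFileName(appEnv string) string {
	if strings.EqualFold(appEnv, "PROD") {
		return "prod.json"
	}
	return "dev.json" // default to development
}

// readConfig loads and parses the chosen file from the config directory.
func readConfig(dir string) (*Config, error) {
	path := dir + "/" + configFileName(os.Getenv("APP_ENV"))
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := new(Config)
	if err := json.Unmarshal(raw, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	fmt.Println(configFileName("DEV"))
}
```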
Sequence of Database List
Notice data.relational in _example/config/dev.json is an array. The sequence is preserved after the configuration is read. Accessing a database requires acknowledging its position in the array. In the example, the command to connect to a database includes a reference to its position in the array.
// _example/main.go
package main
// abbreviated for clarity...
const DB_FIRST = 0
func main() {
    cfg := config.Read()
    db1, _ := rdbms.ConnectDB(cfg, DB_FIRST)
}
TLS Configuration
Notice httpserver.tls_server and httpserver.tls_client represent different sets of certificates and keys in a TlsSecret struct. The former is for the Go application to establish TLS connections with clients, and the latter is for mutual TLS as a client inside a corporate network. The former will be used to create an X509 Certificate that will be included in the TLS configuration of http.Server. The latter can be used as an X509 Certificate in a Redis client, etc.
The field httpserver.tls_server.cert_path represents an HTTP endpoint offered by OpenBao, and httpserver.tls_server.cert_field represents a JSON key in the data read from OpenBao.
The SkeletonKey type in the Secrets package reads sensitive data from OpenBao and transforms it into a usable X509 certificate.
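As a toy illustration of the endpoint-plus-JSON-key split described above, consider extracting one field from a secret payload. This is not the library's implementation; the real SkeletonKey fetches the payload from OpenBao over TLS, while this sketch receives it directly:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// readKey pulls one field out of a JSON secret payload. The path
// (HTTP endpoint) selects the payload; the key selects the field.
func readKey(payload []byte, key string) (string, error) {
	var doc map[string]string
	if err := json.Unmarshal(payload, &doc); err != nil {
		return "", err
	}
	val, ok := doc[key]
	if !ok {
		return "", fmt.Errorf("key %q not found in secret payload", key)
	}
	return val, nil
}

func main() {
	secret := []byte(`{"password": "openbao777"}`)
	pw, _ := readKey(secret, "password")
	fmt.Println(pw)
}
```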
Overloaded TlsSecret struct
A notable problem with the current usage of the TlsSecret struct is that I'm forcing it to perform double duty. When configuring the OpenBao client, the TlsSecret field named cert_path is simply a file path, so that the client can read a local .pem file to build a TLS connection to the secrets storage. But when configuring other clients to communicate with Postgres and Redis, the fields represent HTTP endpoints hosted on the OpenBao server. So an executable reads locally hosted .pem files to build a secure connection to the OpenBao server, then reads subsequent secrets from OpenBao to build secure connections to other servers. I should probably reform the httpServer struct and the TlsSecret struct.
Local dev Openbao NOT rotating certs
The local dev Openbao isn't rotating X509 certificates. I should probably employ that feature, but currently I simply write certificates into secrets storage.
Build
After all that is defined, determine the version number of the application. This is a good opportunity to include a tool that reads the Git log and interprets Conventional Commits to determine the version.
Provide two environment variables: one to define whether this deployment exists in development or production, and another to offer the Openbao access token for secrets storage.
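The commit-to-version idea can be sketched as a tiny, hypothetical helper (not a tool shipped with this repository) that bumps a SemVer string according to Conventional Commits rules:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bump applies Conventional Commits rules to a semver string:
// "feat:" bumps minor, a "!" or BREAKING CHANGE marker bumps major,
// and anything else (fix:, chore:, docs:) bumps patch.
func bump(version, subject string) string {
	parts := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 3)
	if len(parts) != 3 {
		return version // not a recognizable semver; leave unchanged
	}
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	patch, _ := strconv.Atoi(parts[2])
	switch {
	case strings.Contains(subject, "!:") || strings.Contains(subject, "BREAKING CHANGE"):
		major, minor, patch = major+1, 0, 0
	case strings.HasPrefix(subject, "feat"):
		minor, patch = minor+1, 0
	default:
		patch++
	}
	return fmt.Sprintf("v%d.%d.%d", major, minor, patch)
}

func main() {
	fmt.Println(bump("v1.2.3", "feat: add rate limiter"))
}
```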
# Start the Dev Environment with the included makefile. See next section.
~/your_app $ go env -w GOEXPERIMENT=greenteagc
~/your_app $ go build -v -ldflags="-s -X 'github.com/Shoowa/vamos/config.AppVersion=v.0.0.0' " -o yourapp
~/your_app $ APP_ENV=DEV OPENBAO_TOKEN=token ./yourapp
Development Environment
This is for macOS. You will need two things: Podman and Go.
~/vamos $ make podman_create_vm
~/vamos $ podman ps -a
You will receive three things in this dev env.
- Openbao to hold passwords and certificates. This can be improved to handle cert-rotation.
- Postgres to permanently hold data.
- Redis to temporarily hold data.
You will receive a new instance of Postgres with a user and database, and an instance of Openbao with a loaded password kept at dev-postgres-test. That path matches the config field data.relational.[0].secret in the _example/ directory.
Postgres & Openbao will need a few minutes to start.
A natively installed instance of Postgres is fine when it is the only dependency, but I imagine anyone using this will have an existing installation of Postgres configured for a different development context. We can use Postgres inside a virtual machine to avoid disruptions. And we can add other databases and dependencies.
A virtual machine managed by podman^p1 will host databases needed by the application. The virtual machine runs Linux, specifically Fedora CoreOS.^p2 And systemD will manage containers hosting databases.
The included makefile offers a command that copies a few .container files from a directory named _linux/ to a new directory on the macOS host, along with a .sql initialization script for Postgres. It then uses podman to create a virtual machine named dev_vamos that can read the new directory, uses systemD to fetch container images and run them, and sets up the Postgres instance in a container.
Instead of using podman commands to manipulate the containers directly, we can use systemD inside the Linux virtual machine to start and stop containers.
~/vamos $ podman machine ssh dev_vamos "systemctl --user status dev_postgres"
● dev_postgres.service - Launch Postgres 18 with native UUIDv7
Loaded: loaded (/var/home/core/.config/containers/systemd/dev_postgres.container; generated)
Drop-In: /usr/lib/systemd/user/service.d
└─10-timeout-abort.conf
Active: active (running) since Fri 2025-07-04 09:49:39 EDT; 6s ago
Invocation: 3b0202c669c640a1a6a96bd8bab6f4d5
Main PID: 9034 (conmon)
Tasks: 24 (limit: 2155)
Memory: 39.4M (peak: 55.5M)
CPU: 287ms
CGroup: /user.slice/user-501.slice/[email protected]/app.slice/dev_postgres.service
├─libpod-payload-4f575acdf6c9155ee2e079ba37c9220e9aef7bb47430af6c9ad969d26cf12d30
│ ├─9036 postgres
│ ├─9062 "postgres: io worker 1"
│ ├─9063 "postgres: io worker 0"
│ ├─9064 "postgres: io worker 2"
│ ├─9065 "postgres: checkpointer "
│ ├─9066 "postgres: background writer "
│ ├─9068 "postgres: walwriter "
│ ├─9069 "postgres: autovacuum launcher "
│ └─9070 "postgres: logical replication launcher "
└─runtime
├─9018 rootlessport
├─9025 rootlessport-child
└─9034 /usr/bin/conmon --api-version 1 # removed for brevity
Connect to the database named test_data in the containerized Postgres instance from the MacOS host.
~/vamos $ psql -h localhost -U tester -d test_data
Inspect the condition of Openbao and whether or not it received a password.
~/vamos $ podman machine ssh dev_vamos "systemctl --user status secrets.target dev_openbao openbao_add_pw"
Change the password archived in Openbao as much as you want.
# httpie command
~/vamos $ http POST :8200/v1/secret/data/dev-postgres-test X-Vault-Token:token Content-Type:application/json data:='{ "password": "openbao777" }'
Postgres Database
A container image of Postgres 18 Beta is preferred for the native UUIDv7 feature. How is a container obtained and managed by podman in this development environment?
A special .container file is read from a user directory named .config/containers/systemd/ in the VM by a podman tool named quadlet. And quadlet parses the file to produce a systemD service file. The resulting .service file can download a container image and run it. More details can be studied in the makefile under the command podman_create_vm.
The quadlet .container file includes a few commands commonly used to run containers in both Docker and podman.
# _linux/dev_postgres.container
[Unit]
Description=Launch Postgres 18 with native UUIDv7
[Container]
Image=docker.io/library/postgres:18beta2-alpine3.22
ContainerName=postgres
Environment=POSTGRES_PASSWORD=password
Environment=POSTGRES_USERNAME=postgres
Environment=POSTGRES_HOST_AUTH_METHOD=trust
PublishPort=5432:5432
Volume=/data/postgres:/var/lib/postgresql/18/docker
Volume=/data/setup/setup_db1.sql:/docker-entrypoint-initdb.d/setup_db1.sql
PidsLimit=100
[Service]
Restart=on-failure
RestartSec=10
[Install]
RequiredBy=databases.target
The _example/testdata/setup_db1.sql file will be copied from the project on the host to the volume of the virtual machine, then mounted to the Postgres container. Postgres only reads this file once during its initialization. It will skip reading it whenever the container is started again.
-- _example/testdata/setup_db1.sql
DROP DATABASE IF EXISTS test_data;
CREATE DATABASE test_data;
CREATE USER tester WITH PASSWORD 'password';
\c test_data
GRANT ALL ON SCHEMA public TO tester;
Notice the command to switch from the default database to the newly created test_data database. The default user must be in the latter database to effectively grant privileges to another account.
To launch the Postgres development instance, simply ssh into the podman virtual machine and order systemD to start the service. Logs can be viewed via journalD.
~/ $ podman machine ssh dev_vamos "systemctl --user start dev_postgres"
~/ $ podman machine ssh dev_vamos "journalctl --user -u dev_postgres"
The extension .service is excluded from the commands for brevity.
Database Tooling
A couple of CLI tools that won't be imported into the application.
~/vamos $ go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
~/vamos $ go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
Database Migration
The CLI tool migrate creates numbered .sql files that we can fill in with SQL commands. Then it applies them in numbered order to a Postgres database.^d1
Create a .sql file that will hold the commands to create a table named authors.
~/vamos/_example $ migrate create -ext sql -dir sqlc/migrations/first -seq create_authors
~/vamos/_example $ tree sqlc/migrations/first
sqlc/migrations/first
├── 000001_create_authors.down.sql
└── 000001_create_authors.up.sql
In 000001_create_authors.up.sql, write the following SQL commands:
CREATE TABLE IF NOT EXISTS authors (
    id UUID DEFAULT uuidv7() PRIMARY KEY,
    name text NOT NULL,
    bio text
);
After writing a SQL command to create a table, apply the command. Notice the subdirectory associated with a particular database, in this case first. Notice the keyword up as the final token in the command.
~/vamos/_example $ export TEST_DB="postgres://tester@localhost:5432/test_data?sslmode=disable"
~/vamos/_example $ migrate -database $TEST_DB -path sqlc/migrations/first up
The creation of any tables and any adjustments offered by *.up.sql can be reversed by following the SQL commands written in *.down.sql files.
Database Code Generation
The command line tool sqlC reads .sql files and writes Go code we can import into the application.^d2
# sqlc/sqlc.yaml
version: "2"
sql:
  - engine: "postgresql"
    queries: "queries/first"
    schema: "migrations/first"
    gen:
      go:
        package: "first"
        out: "data/first"
        sql_package: "pgx/v5"
        emit_json_tags: true
In sqlc/sqlc.yaml, one or more entries can be listed under sql, which lets the Go application connect to multiple Postgres databases. Each entry relies on a directory of .sql files written for queries, and a directory named migrations of .sql files written for creating tables. sqlC reads these files as inputs.
The produced code will reside in the first package in a newly created subdirectory named data/first and another package can reside in a separate subdirectory, i.e., data/second. The code will use the pgx/v5 driver, and include JSON tags in the fields of the generated structs that represent data entities.
After we draft a .sql file for a hypothetical table of authors, like so:
-- sqlc/migrations/first/000001_create_authors.up.sql
CREATE TABLE IF NOT EXISTS authors (
    id UUID DEFAULT uuidv7() PRIMARY KEY,
    name text NOT NULL,
    bio text
);
We can execute the command to create Go code that will interact with the Postgres database.
~/vamos/_example $ sqlc generate -f sqlc/sqlc.yaml
The tool sqlC produces the following code in a models.go file:
// sqlc/data/first/models.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
package first
// abbreviated for clarity...
type Author struct {
    ID   pgtype.UUID `json:"id"`
    Name string      `json:"name"`
    Bio  pgtype.Text `json:"bio"`
}
Author will be accessible in a method of a struct named Queries.
// sqlc/data/first/authors.sql.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
// source: authors.sql
package first
// abbreviated for clarity...
const getAuthor = `-- name: GetAuthor :one
SELECT id, name, bio FROM authors WHERE name = $1 LIMIT 1
`
func (q *Queries) GetAuthor(ctx context.Context, name string) (Author, error) {
    row := q.db.QueryRow(ctx, getAuthor, name)
    var i Author
    err := row.Scan(&i.ID, &i.Name, &i.Bio)
    return i, err
}
And Queries is generated in sqlc/data/first/db.go. It holds the database handle, i.e., the connection pool.
// sqlc/data/first/db.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
package first
// abbreviated for clarity...
func New(db DBTX) *Queries {
    return &Queries{db: db}
}
The Postgres connection pool created in main() is transferred to Backbone when configuring the Backbone with the Options pattern.^o1
// _example/main.go
package main
// abbreviated for clarity...
func main() {
    db1, _ := rdbms.ConnectDB(cfg, DB_FIRST)
    backbone := router.NewBackbone(
        router.WithLogger(srvLogger),
        router.WithDbHandle(db1),
        router.WithCache(redis),
    )
}
The Backbone struct holds the dependencies needed by the HTTP Handlers. It resides in the Router package.
// router/backbone.go
package router
// abbreviated for clarity...
func WithDbHandle(dbHandle *pgxpool.Pool) Option {
    return func(b *Backbone) {
        b.DbHandle = dbHandle
    }
}
func WithCache(client *redis.Client) Option {
    return func(b *Backbone) {
        b.Cache = client
    }
}
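The Options pattern behind NewBackbone works roughly like this simplified, dependency-free sketch. The lowercase names and string fields are stand-ins for the real Backbone, which holds a *pgxpool.Pool and a *redis.Client:

```go
package main

import (
	"fmt"
	"log/slog"
	"os"
)

// backbone is a stripped-down stand-in for router.Backbone,
// using plain strings where the real struct holds client handles.
type backbone struct {
	logger *slog.Logger
	db     string
	cache  string
}

// option mutates one field of the struct under construction.
type option func(*backbone)

func withLogger(l *slog.Logger) option { return func(b *backbone) { b.logger = l } }
func withDb(db string) option          { return func(b *backbone) { b.db = db } }
func withCache(c string) option        { return func(b *backbone) { b.cache = c } }

// newBackbone applies each option in order to a zero-valued struct,
// so callers only pass the dependencies they actually have.
func newBackbone(opts ...option) *backbone {
	b := new(backbone)
	for _, opt := range opts {
		opt(b)
	}
	return b
}

func main() {
	b := newBackbone(
		withLogger(slog.New(slog.NewTextHandler(os.Stderr, nil))),
		withDb("pgx pool"),
		withCache("redis client"),
	)
	fmt.Println(b.db, b.cache)
}
```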
In a downstream executable that imports this library and leverages sqlC, the database handle will need to be transferred to the *Queries struct and held inside a wrapper.
// _example/routes/routes.go
package routes
// abbreviated for clarity...
type Deps struct {
    *router.Backbone
    Query *first.Queries
}
func WrapBackbone(b *router.Backbone) *Deps {
    d := &Deps{b, first.New(b.DbHandle)}
    return d
}
Develop
Create a feature with an existing SQL Table by following this process:
- Draft a SQL query.
- Generate Go code in sqlc/data/ based on the new SQL.
- Draft a new HTTP Handler.
- Register the new HTTP Handler with the Router.
- Add a log line.
- Add a metric line.
Draft A SQL Query
In the directory _example/sqlc/queries/first, add a file named authors.sql, then write this inside it.
-- name: GetAuthor :one
SELECT * FROM authors WHERE name = $1 LIMIT 1;
Then use sqlC to transform that SQL query into Go code.
~/vamos/_example $ sqlc generate -f sqlc/sqlc.yaml
sqlC will read the comment, then create a const with that name, and assign a query to it. Then it will create a method with the same name that uses the const.
// sqlc/data/first/authors.sql.go
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.28.0
// source: authors.sql
package first
const getAuthor = `-- name: GetAuthor :one
SELECT id, name, bio FROM authors WHERE name = $1 LIMIT 1
`
func (q *Queries) GetAuthor(ctx context.Context, name string) (Author, error) {
    row := q.db.QueryRow(ctx, getAuthor, name)
    var i Author
    err := row.Scan(&i.ID, &i.Name, &i.Bio)
    return i, err
}
The method GetAuthor() can be invoked inside an HTTP handler.
HTTP Handlers, Databases, & Errors
Developers can focus on the package router to create RESTful features.
Dependency injection is the technique used to provide database handles to the HTTP handlers on the web server. Handlers are simply methods of the struct Backbone, or methods of the struct wrapping Backbone in a downstream executable. Access a Postgres database in the field DbHandle or through a Queries struct residing in the wrapper built in a downstream executable.
A Backbone method named ServerError has been created to easily respond to errant HTTP requests.
// router/backbone.go
package router
// abbreviated for clarity...
func (b *Backbone) ServerError(w http.ResponseWriter, r *http.Request, err error) {
    method := r.Method
    path := r.URL.Path
    switch {
    case errors.Is(err, context.Canceled):
        b.Logger.Warn("HTTP", "status", StatusClientClosed, "method", method, "path", path)
    case errors.Is(err, context.DeadlineExceeded):
        b.Logger.Error("HTTP", "status", http.StatusGatewayTimeout, "method", method, "path", path)
        http.Error(w, "timeout", http.StatusGatewayTimeout)
    case errors.Is(err, sql.ErrNoRows):
        w.WriteHeader(http.StatusNoContent)
    default:
        b.Logger.Error("HTTP", "err", err.Error(), "method", method, "path", path)
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}
Error handling can be invoked in an executable's http.Handler like this:
// _example/routes/routes.go
package routes
// abbreviated for clarity...
func (d *Deps) readAuthorName(w http.ResponseWriter, req *http.Request) {
    surname := req.PathValue("surname")
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    result, err := d.Query.GetAuthor(timer, surname)
    // Pass err to the ServerError method, and return early.
    if err != nil {
        d.ServerError(w, req, err)
        return
    }
    w.Write([]byte(result.Name))
}
Add New http.Handler to Router
In a downstream executable, add a method named GetEndpoints() to the custom dependency struct that wraps the Backbone to conform to the library interface Gatherer. This is required for the router to adopt routes written in the executable.
Select the HTTP method that is most appropriate for the writing and reading of data. The ability to name GET or POST inside the route pattern is a feature introduced in Go 1.22.^r1
// _example/routes/routes.go
package routes
// abbreviated for clarity...
type Deps struct {
    *router.Backbone
    Query *first.Queries // NOT generated in this example.
}
func (d *Deps) GetEndpoints() []router.Endpoint {
    return []router.Endpoint{
        {"GET /test2", d.hndlr2},
        {"GET /readAuthorName/{surname}", d.readAuthorName},
    }
}
Developer Logs
Inside an http.Handler, record errors and extra data by simply invoking the Logger residing in the Backbone struct.
This is how a hypothetical http.Handler drafted in the library looks. Notice it can directly access a Backbone field.
package router
// abbreviated for clarity...
func (b *Backbone) doSomething(w http.ResponseWriter, req *http.Request) {
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    err := b.DbHandle.Ping(timer)
    if err != nil {
        b.Logger.Error("big_message", "err", err.Error())
        b.ServerError(w, req, err)
        return
    }
    b.Logger.Info("Did something important... but we should silently succeed.")
    w.Write([]byte("ok"))
}
This is a hypothetical http.Handler drafted in a downstream executable that imports the library. It is a method on a struct named Deps that wraps around the Backbone. And Deps holds a sqlC generated Queries struct in a custom field conveniently named Query.
// routes/features_v1.go
package routes
// abbreviated for clarity...
import (
    "_example/sqlc/data/first" // sqlC generated code
    "github.com/Shoowa/vamos/router"
)
type Deps struct {
    *router.Backbone
    Query *first.Queries
}
// In setup code, wrap the Backbone and add the sqlC *Queries struct:
// d := &Deps{backbone, first.New(backbone.DbHandle)}
func (d *Deps) readAuthorName(w http.ResponseWriter, req *http.Request) {
    surname := req.PathValue("surname")
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    result, err := d.Query.GetAuthor(timer, surname)
    if err != nil {
        d.ServerError(w, req, err)
        return
    }
    w.Write([]byte(result.Name))
}
Metrics
Metrics are created by Prometheus in the package metrics and scraped on the endpoint /metrics. The package captures Go runtime metrics, e.g., go_threads, go_goroutines, etc.^m2
A convenient function for creating a Counter and registering it is available to the downstream consumer of this library. Simply provide a name and description for the Counter.
// metrics/metrics.go
package metrics
import "github.com/prometheus/client_golang/prometheus"
func CreateCounter(name string, help string) prometheus.Counter {
    opts := prometheus.CounterOpts{
        Name: name,
        Help: help,
    }
    counter := prometheus.NewCounter(opts)
    prometheus.MustRegister(counter)
    return counter
}
New metrics need to be created in the executable, so they can be imported by an HTTP Handler. This example shows an executable package named metric that imports the library's metrics package. No creative naming here.
package metric
import "github.com/Shoowa/vamos/metrics"
var ReadAuthCount = metrics.CreateCounter("read_authorSurname_count", "no_help")
Then the local Counter is imported into the executable routes package.
// _example/routes/routes.go
package routes
// abbreviated for clarity...
import "metric/metric"
func (d *Deps) readAuthorName(w http.ResponseWriter, req *http.Request) {
    metric.ReadAuthCount.Inc()
    surname := req.PathValue("surname")
    // skipping body...
}
Observe the new data on the /metrics route.
~/vamos $ curl localhost:8080/metrics
# abbreviated for clarity...
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP read_authorSurname_count no_help
# TYPE read_authorSurname_count counter
read_authorSurname_count 0
Health Record
Applications usually receive a request for a health status, then perform some logic to evaluate the health of the application and the health of any dependencies, then answer. That flow of events doesn't happen in this web app.
Instead, the web server responds to any request for health by simply reading from a custom struct named Health.
// router/operations.go
package router
// abbreviated for clarity...
type Health struct {
    Rdbms    bool
    Heap     bool
    Routines bool
}
Health has several boolean fields. Any request for the status of health is answered by a http.Handler that reads from these fields and evaluates the totality of the boolean conditions.
// router/operations.go
package router
// abbreviated for clarity...
func (h *Health) PassFail() bool {
    return h.Rdbms && h.Heap && h.Routines
}
The answer is then provided as an HTTP status code -- either 204 or 503.
// router/routes_operations.go
package router
// abbreviated for clarity...
func assessHealth(health *Health, logger *slog.Logger) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        status := health.PassFail()
        if status {
            w.WriteHeader(http.StatusNoContent)
        } else {
            logger.Error("Failed health check")
            w.WriteHeader(http.StatusServiceUnavailable)
        }
    }
}
How is the health of those records evaluated? An individual function that determines the condition of a resource is inserted into a timed loop inside a goroutine. Notice the function named checkHeapSize is an argument to the beep function.
// router/operations.go
package router
// abbreviated for clarity...
func setupHealthChecks(cfg *config.Config, b *Backbone) {
    pingDbTimer := time.Duration(cfg.Health.PingDbTimer)
    heapTimer := time.Duration(cfg.Health.HeapTimer)
    health := new(Health)
    // Use closure to add the Health Record to the pinger.
    pingDB := func() { b.PingDB(health) }
    // Use closure to configure method CheckHeapSize.
    heapSize := 1024 * 1024 * cfg.Health.HeapSize
    checkHeapSize := func() { b.CheckHeapSize(health, heapSize, b.Logger) }
    go beep(pingDbTimer, pingDB)
    go beep(heapTimer, checkHeapSize)
}
And beep creates a Ticker^t1 that will emit a signal periodically. Then enters a loop that awaits the signal. Upon receiving the signal, a function represented by the parameter task is invoked. checkHeapSize will be invoked as the task.
// router/operations.go
package router
// abbreviated for clarity...
func beep(seconds time.Duration, task func()) {
    ticker := time.NewTicker(seconds * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            task()
        }
    }
}
What is the benefit of this convoluted setup? No matter how often an external service hammers the /health endpoint, it will be less taxing because it simply reads a boolean. The real work of evaluating any resource is held in a discrete function, and there can be a few or many. They all run in the background. They each update a particular health status on their own time. And the configuration of time is determined by the operator of this application.
Cache
Access the Redis client in the Backbone struct when constructing HTTP Handlers.
package router
// abbreviated for clarity...
func (b *Backbone) writeCache(w http.ResponseWriter, req *http.Request) {
    stuff := req.PathValue("item") // You'll probably use JSON instead.
    timer, cancel := context.WithTimeout(req.Context(), TIMEOUT_REQUEST)
    defer cancel()
    cacheErr := b.Cache.Set(timer, "KEY", stuff, 120*time.Second).Err()
    if cacheErr != nil {
        b.ServerError(w, req, cacheErr)
        return
    }
    w.Write([]byte("All good")) // Don't really do this in production.
}
Build
Generate a SemVer based on the Git Commit record, then provide that value as input to the build step. An informative record of Git Commits can aid any operator during an incident.
~/vamos/_example $ go env -w GOEXPERIMENT=greenteagc
~/vamos/_example $ go build -v -ldflags="-s -X 'github.com/Shoowa/vamos/config.AppVersion=v.0.0.0' "
The linker flag -s removes symbol table info and DWARF info to produce a smaller executable. And -X^b1 sets the value of a string variable named AppVersion that resides in the config package. This allows us to dynamically write the version of the application after each new commit & build.
package config
// abbreviated for clarity...
var AppVersion string
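Because AppVersion is an ordinary string variable, it stays empty unless the linker sets it. A hypothetical guard (not part of the library) can supply a fallback for local `go run` sessions; the variable is declared in main here only to keep the sketch self-contained:

```go
package main

import "fmt"

// AppVersion is set at build time via -ldflags "-X main.AppVersion=...".
// In Vamos the variable lives in the config package.
var AppVersion string

// versionOrDev returns the linked version, or a fallback when the
// binary was built without the -X flag (e.g., during development).
func versionOrDev(v string) string {
	if v == "" {
		return "dev"
	}
	return v
}

func main() {
	fmt.Println("starting app, version:", versionOrDev(AppVersion))
}
```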
Testing
Native Functions & Discrete Packages
Three natively written functions determine equality, the absence of errors, and truth. One less dependency in the application. Below is an example of a testing function residing in testhelper.go.
// testhelper/testhelper.go
func Equals(tb testing.TB, exp, act any) {
    if !reflect.DeepEqual(exp, act) {
        _, file, line, _ := runtime.Caller(1)
        fmt.Printf("\033[31m%s:%d:\n\n\texp: %#v\n\n\tgot: %#v\033[39m\n\n", filepath.Base(file), line, exp, act)
        tb.FailNow()
    }
}
These functions can be invoked by a test package. Use the dot at the beginning of the import path to avoid prefacing every invocation with the name of the testhelper package.
// secrets/secrets_test.go
package secrets_test
import (
    "testing"
    . "vamos/secrets"
    . "vamos/testhelper"
)
func Test_Configuration(t *testing.T) {
    // abbreviated for clarity...
    Equals(t, "token", openbao.Token)
}
Notice secrets_test is a separate package from the package secrets. All the tests reside in the former and the functionality resides in the latter. The package secrets_test needs to import the package secrets, and only public functions & fields can be tested. This encourages black box testing and clean code.
Integration Tests
A few steps are required to test interaction with a database.
- Apply SQL commands to change the local development database.
- Generate Go code in sqlc/data/ to interact with updated database.
- Run Go tests marked integration.
- Reverse SQL commands.
It is possible to write code into a *test package that can create tables, insert sample data, and then drop tables whenever a test is launched. Errors can force the test to halt and leave the database with the new state without reversing it. For this reason, it is easier to rely on a tool outside of the application test suite to create and delete Postgres tables. I rely on migrate. However, I prefer using code in the test suite to insert sample data.
Use a make command to easily perform the aforementioned tasks.
~/vamos $ make test_database
In a downstream executable, integration tests can be invoked like this:
~/vamos/_example $ PROJECT_NAME=_example go test ./... -count=1 -tags=integration
Test Suite Setup & Teardown
The application will amend the test suite by first repositioning the root of a test executable in order to read files that provide sample data and the configuration file. Then the test suite will write data to the database, then run the test functions. Lastly, the report is offered.
// _example/tests/data_test.go
package data_test
import (
// abbreviated for clarity...
"testing"
"github.com/Shoowa/vamos/testhelper"
)
func TestMain(m *testing.M) {
    // Direct app to read dev.json
    os.Setenv("APP_ENV", "DEV")
    // Reposition root of test executable.
    testhelper.Change_to_project_root()
    timer, cancel := context.WithTimeout(context.Background(), time.Second*5)
    // Setup common resource for all integration tests in only this package.
    dbErr := testhelper.CreateTestTable(timer)
    cancel()
    if dbErr != nil {
        panic(dbErr)
    }
    os.Unsetenv("APP_ENV")
    code := m.Run()
    os.Exit(code)
}
The first function tested is the one that creates a connection pool. No other test runs concurrently in this moment. The environment inside the test is adjusted to induce reading configuration data for the development environment.
func Test_ConnectDB(t *testing.T) {
    t.Setenv("APP_ENV", "DEV")
    t.Setenv("OPENBAO_TOKEN", "token")
    cfg := config.Read()
    db, dbErr := rdbms.ConnectDB(cfg, cfg.Test.DbPosition)
    Ok(t, dbErr)
    t.Cleanup(func() { db.Close() })
}
In the _example executable, concurrent reading operations are tested in tests/data_test.go. And they rely on a common connection pool created in the same test group. The final action of the test group is to close the connection pool.
func Test_ReadingData(t *testing.T) {
    t.Setenv("APP_ENV", "DEV")
    t.Setenv("OPENBAO_TOKEN", "token")
    cfg := config.Read()
    db, _ := rdbms.ConnectDB(cfg, cfg.Test.DbPosition)
    q := first.New(db) // return sqlC generated *Queries
    timer, cancel := context.WithTimeout(context.Background(), TIMEOUT_READ)
    defer cancel()
    t.Run("Read one author", func(t *testing.T) {
        readOneAuthor(t, q, timer)
    })
    t.Run("Read many authors", func(t *testing.T) {
        readManyAuthors(t, q, timer)
    })
    t.Run("Read most productive author", func(t *testing.T) {
        readMostProductiveAuthor(t, q, timer)
    })
    t.Run("Read most productive author & book", func(t *testing.T) {
        readMostProductiveAuthorAndBook(t, q, timer)
    })
    t.Cleanup(func() { db.Close() })
}
Reliable Qualities
Postgres Connection
The Postgres connection pool retains access to the Openbao secrets storage through a hook named BeforeConnect. This hook ensures that the connection pool can read fresh credentials, so it enables the security practice of revoking & rotating credentials.
// data/rdbms/rdbms.go
package rdbms
// abbreviated for clarity...
func configure(cfg *config.Config, dbPosition int) (*pgxpool.Config, error) {
    db := WhichDB(cfg, dbPosition)
    // abbreviated function body for clarity...
    pgxConfig.BeforeConnect = func(ctx context.Context, cc *pgx.ConnConfig) error {
        secretsReader := new(secrets.SkeletonKey)
        secretsReader.Create(cfg)
        pw, pwErr := secretsReader.ReadPathAndKey(db.Secret, db.SecretKey)
        if pwErr != nil {
            return pwErr
        }
        cc.Password = pw
        return nil
    }
    return pgxConfig, nil
}
Graceful Shutdown
Requests need to be terminated during a rolling deployment in a manner that preserves the data of the customer, enhances the user experience, and avoids alarms that can mistakenly summon staff.
The webserver is launched in a separate goroutine, then a channel is opened to receive termination signals. This blocks the main func until either signal 2 or signal 15 is received. Then the server is gracefully stopped. If that fails, the errors are logged and the server is forcefully stopped.
// server/server.go
package server
// abbreviated for clarity...
func Start(l *slog.Logger, s *http.Server) {
go gracefulIgnition(s)
l.Info("HTTP Server activated")
catchSigTerm()
l.Info("Begin decommissioning HTTP server.")
shutErr := GracefulShutdown(s)
if shutErr != nil {
l.Error("HTTP Server shutdown error", "ERR:", shutErr.Error())
killErr := s.Close()
if killErr != nil {
l.Error("HTTP Server kill error", "ERR:", killErr.Error())
}
}
l.Info("HTTP Server halted")
}
This is convenient to invoke as one line in an executable.
// _example/main.go
package main
// abbreviated for clarity...
func main() {
// skipping the setup...
server.Start(logger, webserver)
}
Signal 15 allows the program to close listening connections and idle connections while awaiting active connections. This is essential in a dynamic environment like a Kubernetes cluster. A kubelet transmits Signal 15 to a container, and the pod waits 30 seconds (by default) for application cleanup.^k1
// server/server.go
package server
// abbreviated for clarity...
func catchSigTerm() {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
<-sigChan
}
After Signal 15 is received, server.GracefulShutdown(webserver) is invoked. It wraps http.Server.Shutdown with a 15 second timer. The cancellation function stop() is also invoked during shutdown.
// server/server.go
package server
// abbreviated for clarity...
const GRACE_PERIOD = time.Second * 15
func GracefulShutdown(s *http.Server) error {
quitCtx, quit := context.WithTimeout(context.Background(), GRACE_PERIOD)
defer quit()
err := s.Shutdown(quitCtx)
if err != nil {
return err
}
return nil
}
stop() was assigned to the server during configuration. It signals all the child contexts derived from base, which the HTTP Handlers use, to terminate any active connections.
// server/server.go
package server
// abbreviated for clarity...
func NewServer(cfg *config.Config, router http.Handler) *http.Server {
base, stop := context.WithCancel(context.Background())
s := &http.Server{
BaseContext: func(lstnr net.Listener) context.Context { return base },
}
s.RegisterOnShutdown(stop) // Cancellation Func assigned to shutdown.
return s
}
Running the webserver in a goroutine is required to avoid a hasty shutdown. When http.Server.Shutdown() is invoked, http.Server.ListenAndServe() returns immediately.^s1 ListenAndServe() was blocking in a goroutine and becomes un-blocked. If ListenAndServe() had been invoked directly in main(), it would immediately un-block and main() would immediately return.
Operate
Two environment variables are needed by the application: one to select a configuration file and one to access the storage of sensitive credentials.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
Rate Limiting
A simple Token Bucket rate limiter from the official external library can be activated in the config file. Set the active field inside global_rate_limiter to true, define the number of tokens refilled per second in the average field, and define the maximum number of tokens that may be consumed in a single burst in the burst field.
{
"httpserver": {
"global_rate_limiter": {
"active" : true,
"average" : 100,
"burst" : 200
},
"port": "8443",
"timeout_read": 5,
"timeout_write": 10,
"timeout_idle": 60
}
}
Consider the number of goroutines monitored in health.routines_per_core when defining the number of tolerated requests.
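The average/burst semantics can be illustrated with a minimal, stdlib-only token bucket. The library itself uses the official external limiter (presumably golang.org/x/time/rate), so every name below is hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
	"time"
)

// tokenBucket illustrates the config semantics: avg tokens are
// refilled per second, and the bucket never holds more than burst
// tokens, so burst bounds how many requests may be served at once
// after an idle period.
type tokenBucket struct {
	mu     sync.Mutex
	tokens float64
	avg    float64
	burst  float64
	last   time.Time
}

func newTokenBucket(avg, burst float64) *tokenBucket {
	return &tokenBucket{tokens: burst, avg: avg, burst: burst, last: time.Now()}
}

// allow refills the bucket by elapsed time, then spends one token.
func (tb *tokenBucket) allow() bool {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	now := time.Now()
	tb.tokens += now.Sub(tb.last).Seconds() * tb.avg
	if tb.tokens > tb.burst {
		tb.tokens = tb.burst
	}
	tb.last = now
	if tb.tokens < 1 {
		return false
	}
	tb.tokens--
	return true
}

// limit rejects requests with 429 once the bucket is empty.
func limit(tb *tokenBucket, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !tb.allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	tb := newTokenBucket(1, 2) // tiny burst for demonstration
	h := limit(tb, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	// Three rapid requests print 200, 200, 429.
	for i := 0; i < 3; i++ {
		rec := httptest.NewRecorder()
		h.ServeHTTP(rec, httptest.NewRequest("GET", "/", nil))
		fmt.Println(rec.Code)
	}
}
```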
Router Creation Requires An Interface
The NewRouter function accepts a custom interface named Gatherer so that it can accept two different struct types. The first struct, Backbone, is used directly throughout the library. The second is used in a downstream executable as a wrapper around Backbone. Both conform to the Gatherer interface by implementing the methods enumerated in router/backbone.go.
Though an interface isn't required to use a wrapper in a downstream executable, it does ease testing. So I drafted one.
Metrics
Metrics are created by Prometheus in the package metrics and scraped on the endpoint /metrics.
Configuration
Several Prometheus Collectors^m3 and their sub-metrics can be toggled on or off in the config file. A set of runtime metrics measures garbage collection, memory, and the scheduler^m4, and even the CPU and Mutexes.^m5 A Process Collector measures the state of the CPU, MEM, file descriptors, and the start time of the process.^m6
"metrics": {
"garbage_collection": true,
"memory": true,
"scheduler": false,
"cpu": false,
"lock": false,
"process": false
}
The NewDBStatsCollector expects a DB struct from the standard library's database/sql^m7, so I can't implement it with the pgx connection pool struct.
HTTP Requests
New metrics need to be registered before they are activated.
The routing middleware in router/middleware.go counts the number of HTTP responses by HTTP verb & path.
Logging Configuration
Logging is configured as debug in development or as warn in production.
The level is read in logging.go.
// logging/logging.go
package logging
func configure(cfg *config.Config) *slog.HandlerOptions {
logLevel := &slog.LevelVar{}
if cfg.Logger.Debug {
logLevel.Set(slog.LevelDebug)
} else {
logLevel.Set(slog.LevelWarn)
}
opts := &slog.HandlerOptions{Level: logLevel}
return opts
}
The primary logger is configured to include two details that can aid anyone debugging an incident in production: the version of the language and the version of the application. Every child logger inherits these details.
// logging/logging.go
package logging
func CreateLogger(cfg *config.Config) *slog.Logger {
goVersion := slog.String("lang", runtime.Version())
appVersion := slog.String("app", config.AppVersion)
group := slog.Group("version", goVersion, appVersion)
opts := configure(cfg)
handler := slog.NewJSONHandler(os.Stdout, opts)
logger := slog.New(handler).With(group)
slog.SetDefault(logger)
return logger
}
This can be observed during startup.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
{"time":"2025-07-24T13:05:01.477738-04:00","level":"INFO","msg":"Begin logging","version":{"lang":"go1.24.0","app":"v.0.0.0"},"level":"DEBUG"}
When the APP_ENV environment variable is skipped, the app will deploy under the production conditions represented in config/prod.json.
Logging Middleware
The middleware is configured in router/middleware.go as a closure that passes a logger into a http.Handler.
// router/middleware.go
package router
// abbreviated for clarity...
func logRequests(logger *slog.Logger, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
logger.Info(
"Inbound",
"method", r.Method,
"path", r.URL.Path,
"uagent", r.Header.Get("User-Agent"),
)
next.ServeHTTP(w, r)
})
}
Details of every request are recorded. The HTTP method, path, and User-Agent header are highlighted. After those details are logged, the function continues to the next http.Handler.
Because the middleware satisfies the http.Handler interface, http.Server can treat it as a router.
// server/server.go
package server
// abbreviated for clarity...
func NewServer(cfg *config.Config, router http.Handler) *http.Server {
s := &http.Server{
Addr: ":" + cfg.HttpServer.Port,
Handler: router,
}
return s
}
The Backbone struct conforms to the custom interface Gatherer, so it can be accepted by the function NewRouter. Backbone holds the logger that can be used by HTTP Handlers and middleware.
// router/router.go
package router
// abbreviated for clarity...
func NewRouter(dependencies Gatherer) http.Handler {
mux := http.NewServeMux()
// Read list of HTTP methods & http.Handlers.
endpoints := dependencies.GetEndpoints()
// Add each HTTP path and handler to the router.
for _, endpoint := range endpoints {
mux.HandleFunc(endpoint.VerbAndPath, endpoint.Handler)
}
// Apply middleware to the router.
responseRecordingMW := recordResponses(mux)
loggingMW := logRequests(dependencies.GetLogger(), responseRecordingMW)
gaugingMW := gaugeRequests(loggingMW)
return gaugingMW
}
Then every incoming request is logged in a standard manner.
~/vamos $ APP_ENV=DEV OPENBAO_TOKEN=token ./vamos
# skipping other logs...
{"time":"2025-08-05T16:45:17.23609-04:00","level":"INFO","msg":"Inbound","version":{"lang":"go1.24.0","app":"v.0.0.0"},"server":{"method":"GET","path":"/health","uagent":"HTTPie/3.2.4"}}
Continuous Profiling
We can obtain useful data from the production environment during a memory problem.
A Backbone field named HeapSnapshot holds a pointer to a buffer that collects information generated by the runtime/pprof.WriteHeapProfile(io.Writer) function.
// router/operations.go
package router
// abbreviated for clarity...
type Backbone struct {
Logger *slog.Logger
DbHandle *pgxpool.Pool
HeapSnapshot *bytes.Buffer
}
The Backbone struct implements the method Write([]byte) (n int, err error) to comply with the io.Writer interface expected by WriteHeapProfile.^i1 The custom implementation resets the buffer before each write to avoid unbounded growth.
// router/operations.go
package router
// abbreviated for clarity...
func (b *Backbone) Write(p []byte) (n int, err error) {
b.HeapSnapshot.Reset()
return b.HeapSnapshot.Write(p)
}
After a configured threshold for memory is surpassed, heap data will be gathered.
// router/operations.go
package router
// abbreviated for clarity...
func (b *Backbone) checkHeapSize(health *Health, threshold uint64) {
var stats runtime.MemStats
runtime.ReadMemStats(&stats)
if stats.HeapAlloc < threshold {
health.Heap = true
return
}
health.Heap = false
b.Logger.Warn("Heap surpassed threshold!", "threshold", threshold, "allocated", stats.HeapAlloc)
err := pprof.WriteHeapProfile(b)
if err != nil {
b.Logger.Error("Error writing heap profile", "ERR:", err.Error())
}
}
Another method can be drafted that will read from the buffer and exfiltrate the data for review by developers & operations staff.
Directories
| Path | Synopsis |
|---|---|
| data | |
| metrics | Package METRICS provides Prometheus metrics. |
| router | Package ROUTER provides a sanely configured router. |
| server | Package SERVER provides a sanely configured webserver. |