
Go

Learning 🔗

Courses 🔗

Tricks / Best practices 🔗

Flags-first config 🔗

config.go:

package config

import (
	"flag"
	"log"
	"os"

	"github.com/peterbourgon/ff/v3"
)

type Config struct {
	Port   string
	DbHost string
	DbUser string
	DbPass string
}

func LoadConfig() *Config {
	// Define the flags
	fs := flag.NewFlagSet("mysvc", flag.ExitOnError)
	var (
		port   = fs.String("port", "8080", "Server port (ENV: PORT)")
		dbHost = fs.String("db_host", "localhost", "Database host (ENV: DB_HOST)")
		dbUser = fs.String("db_user", "", "Database user (ENV: DB_USER)")
		dbPass = fs.String("db_pass", "", "Database password (ENV: DB_PASS)")
	)

	// Parse flags, allowing values to be supplied via matching environment variables
	if err := ff.Parse(fs, os.Args[1:], ff.WithEnvVarPrefix("")); err != nil {
		log.Fatalf("parsing flags: %v", err)
	}

	// Return the populated configuration
	return &Config{
		Port:   *port,
		DbHost: *dbHost,
		DbUser: *dbUser,
		DbPass: *dbPass,
	}
}


main.go:

package main

import (
	"log"
	"mysvc/config"
)

func main() {
	// Load configuration
	cfg := config.LoadConfig()

	// Output configuration for demonstration (replace this with your app logic)
	log.Printf("Server starting on port %s", cfg.Port)
	log.Printf("Database host: %s", cfg.DbHost)
	log.Printf("Database user: %s", cfg.DbUser)
}
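Note that ff resolves each value by priority: a command-line flag wins over its environment variable, which wins over the flag's default (and over a config file, if one is configured), so PORT=9090 only takes effect when -port isn't passed explicitly.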

Memory/GC optimizations 🔗

GOGC=100 # default, consider increasing to 1000, 10000, or setting it to off
GOMEMLIMIT=2750MiB

Consider https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i-learnt-to-stop-worrying-and-love-the-heap/ as well.
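The same knobs can also be set from code via runtime/debug; a minimal sketch, with values mirroring the env vars above and the ballast size chosen purely for illustration:

package main

import (
	"runtime"
	"runtime/debug"
)

func main() {
	// Equivalent of GOGC=1000: only trigger the next GC cycle once the heap
	// has grown by 1000% over the live set (SetGCPercent(-1) turns GC off).
	debug.SetGCPercent(1000)

	// Equivalent of GOMEMLIMIT=2750MiB (Go 1.19+): a soft limit, in bytes,
	// that the runtime tries to keep total memory usage under.
	debug.SetMemoryLimit(2750 << 20)

	// Ballast trick from the Twitch post: a large allocation that is never
	// touched, inflating the GC's heap target so collections run less often.
	// Largely superseded by GOMEMLIMIT on Go 1.19+.
	ballast := make([]byte, 1<<30)

	run() // stand-in for the real application

	runtime.KeepAlive(ballast)
}

func run() {}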

Debugging GC 🔗

GODEBUG=gctrace=1
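gctrace output goes to stderr; if you want the same numbers from inside the process (e.g. to export as metrics), runtime.ReadMemStats exposes them. A minimal sketch:

package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	go func() {
		var m runtime.MemStats
		for range time.Tick(10 * time.Second) {
			runtime.ReadMemStats(&m)
			log.Printf("gc cycles=%d heap_alloc=%d MiB total_pause=%s",
				m.NumGC, m.HeapAlloc>>20, time.Duration(m.PauseTotalNs))
		}
	}()

	select {} // stand-in for the real application
}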

Formatting / Linting 🔗

Code layout 🔗

https://gist.github.com/gmcabrita/7f2d91545855571ecedb48aa1423a9f7

Auto reload app 🔗

You can use https://github.com/watchexec/watchexec or https://github.com/cosmtrek/air instead of the shell loop below.

while true
do
  go install ./cmd/mysvc && mysvc &
  fswatch -1 .
  pkill mysvc && wait
done
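fswatch -1 exits after the first batch of change events, so each pass through the loop rebuilds and backgrounds the binary, blocks until something in the tree changes, then kills the old process and starts over.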

sqlc 🔗

Defer and errdefer 🔗

Defer: From Basic To Traps

From: https://news.ycombinator.com/item?id=28095488

// must use named returns so the deferred func can observe (and modify) the returned error
func yourfunc() (thing thingtype, err error) {
  thing = constructIt()

  defer func() {
    // join any Close error into whatever error is being returned
    err = errors.Join(err, thing.Close())
  }()

  // use thing to do a few things that could error
  err = thing.setup()
  if err != nil {
    return thing, err
  }
  other, err := thing.other()
  if err != nil {
    return // naked returns are fine too
  }
  _ = other // etc.

  return
}
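As a concrete, compilable version of the same shape, here's the pattern applied to an *os.File (errors.Join needs Go 1.20+):

package main

import (
	"bufio"
	"errors"
	"fmt"
	"os"
)

// ReadFirstLine opens path and returns its first line. The deferred func
// joins any Close error into the named return value err, so a failed Close
// is never silently dropped.
func ReadFirstLine(path string) (line string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer func() {
		err = errors.Join(err, f.Close())
	}()

	scanner := bufio.NewScanner(f)
	if scanner.Scan() {
		line = scanner.Text()
	}
	return line, scanner.Err()
}

func main() {
	line, err := ReadFirstLine("/etc/hostname")
	fmt.Println(line, err)
}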

See also:

Interface guard 🔗

https://github.com/uber-go/guide/blob/master/style.md#verify-interface-compliance

type Handler struct {
	// ...
}

var _ http.Handler = (*Handler)(nil)
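The guard is free at runtime: (*Handler)(nil) is just a typed nil assigned to the blank identifier, so nothing is allocated, but compilation fails at this line as soon as *Handler stops satisfying http.Handler, rather than at some distant call site.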

Errors as exported constants 🔗

Export errors as constant values (sentinel errors) so callers can compare against them with errors.Is instead of matching on error message strings, which per Hyrum's Law would otherwise become part of your API.

https://pkg.go.dev/net/http#pkg-constants

type ProtocolError struct {
	ErrorString string
}

func (pe *ProtocolError) Error() string { return pe.ErrorString }

var (
	ErrNotSupported       = &ProtocolError{"feature not supported"}
	ErrUnexpectedEndState = &ProtocolError{"unexpected end state"}
)
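Callers can then compare against the exported value instead of inspecting error text; a minimal sketch, where doThing is a hypothetical function that may return (a possibly wrapped) ErrNotSupported:

if err := doThing(); errors.Is(err, ErrNotSupported) {
	// degrade gracefully; no brittle string match on err.Error()
	log.Println("feature not supported, skipping")
} else if err != nil {
	log.Fatal(err)
}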

Caveats 🔗

Netpoll socket buffering limits 🔗

Unfortunately, Netpoll only buffers at most 128 sockets in a single EPoll call, meaning we’re stuck making several EPoll calls in order to fetch all the sockets becoming available. In a CPU profile, this manifests as nearly 65% of our CPU time being spent on syscall.EpollWait in runtime.netpoll!


To resolve this issue, the solution is quite apparent: we need to run a larger number of Go runtimes per host and reduce their individual network I/O workloads to something the Go runtime can manage.

Thankfully, in our case, this was as easy as spinning up 8 application containers per host on different ports (skipping Docker NAT) and pointing our Typescript Frontend API at the additional addresses to route its requests.

After implementing this change, we saw a 4x increase in performance.

From a previous maximum throughput of ~1.3M Scylla queries per second across 3 containers on 3 hosts, we see a new maximum of ~2.8M Scylla queries per second (for now) across 24 containers on 3 hosts.

From a previous maximum throughput of ~90K requests served per second to the AppView Frontend, we saw a jump to ~185k requests served per second.

Our p50 and p99 latencies dropped by more than 50% during load tests, and at the same time the CPU utilization on each Dataplane host saw a reduction from 80% across all cores to under 40% across all cores.

This find finally allows us to get the full performance out of our baremetal hardware and will let us scale out much more effectively in the future.

This may change in the future! See https://github.com/golang/go/issues/65064.

State machines 🔗

Packages 🔗