
Memory Blocks

Go supports automatic memory management, including automatic memory allocation and automatic garbage collection, so Go programmers can program without handling the verbose underlying memory management details. This not only brings much convenience and saves Go programmers lots of time, but also helps them avoid many careless bugs.

Although knowing the underlying memory management implementation details is not necessary for writing Go code, understanding some concepts and facts of the memory management implementation in the standard Go compiler and runtime is very helpful for writing high quality Go code.

This article will explain some concepts and list some facts of the implementation of memory block allocation and garbage collection by the standard Go compiler and runtime. Other aspects of memory management, such as how memory is requested from and released back to the operating system, will not be covered in this article.

Memory Blocks

A memory block is a contiguous memory segment which hosts value parts at run time. Different memory blocks may have different sizes, to host different value parts. One memory block may host multiple value parts at the same time, but each value part is hosted within a single memory block, no matter how large that value part is. In other words, a value part never crosses memory blocks.

There are several reasons why one memory block may host multiple value parts. Some of them:

A Value References the Memory Blocks Which Host Its Value Parts

We have learned that a value part can reference another value part. Here, we extend the reference definition by saying a memory block is referenced by all the value parts it hosts. So if a value part v is referenced by another value part, then that other value part also references, indirectly, the memory block hosting v.

When Will Memory Blocks Be Allocated?

In Go, memory blocks may be allocated in, but are not limited to, the following situations:

Where Will Memory Blocks Be Allocated?

For every Go program compiled by the official standard Go compiler, at run time each goroutine maintains a stack, which is a memory segment. It acts as a memory pool for some memory blocks to be allocated from. Before Go Toolchain 1.19, the initial size of a stack was always 2KiB. Since Go Toolchain 1.19, the initial size is adaptive. The stack of a goroutine grows and shrinks as needed while the goroutine runs. The minimum stack size is 2KiB.

(Please note, there is a global limit on the stack size each goroutine may reach. If a goroutine exceeds the limit while growing its stack, the program crashes. As of Go Toolchain 1.22.n, the default maximum stack size is 1 GB on 64-bit systems, and 250 MB on 32-bit systems. We can call the SetMaxStack function in the runtime/debug standard package to change this limit. Also note that, in the current official standard Go compiler implementation, the actual allowed maximum stack size is the largest power of 2 which is not larger than the MaxStack setting. So with the default settings, the actual allowed maximum stack size is 512 MiB on 64-bit systems, and 128 MiB on 32-bit systems.)

Memory blocks can be allocated on stacks. Memory blocks allocated on the stack of a goroutine can only be used (referenced) internally by that goroutine. They are goroutine-localized resources; they are not safe to be referenced across goroutines. A goroutine can access or modify the value parts hosted on a memory block allocated on its own stack without using any data synchronization techniques.

The heap is a singleton in each program. It is a virtual concept: if a memory block is not allocated on any goroutine stack, then we say the memory block is allocated on the heap. Value parts hosted on memory blocks allocated on the heap can be used by multiple goroutines. In other words, they can be used concurrently. Their uses should be synchronized when needed.

The heap is a conservative place to allocate memory blocks. If the compiler detects that a memory block will be referenced across goroutines, or cannot easily confirm that the memory block is safe to be placed on the stack of a goroutine, then the memory block will be allocated on the heap at run time. This means some values which could be safely allocated on stacks may also be allocated on the heap.

In fact, stacks are not essential for Go programs: the Go compiler/runtime could allocate all memory blocks on the heap. Supporting stacks just makes Go programs run more efficiently:

If a memory block is allocated somewhere, we can also say the value parts hosted on the memory block are allocated on the same place.

If some value parts of a local variable declared in a function are allocated on the heap, we say the value parts (and the variable) escape to the heap. By using Go Toolchain, we can run go build -gcflags -m to check which local values (value parts) will escape to the heap at run time. As mentioned above, the current escape analyzer in the standard Go compiler is still not perfect: many local value parts which could be safely allocated on stacks will still escape to the heap.
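Here is a minimal sketch of a value which must escape (the function name newInt is made up for illustration). Running go build -gcflags -m on this file reports the escape of x, though the exact wording of the report varies between compiler versions:

```go
package main

import "fmt"

// newInt returns the address of its local variable, so x
// must outlive the call and therefore escapes to the heap.
func newInt() *int {
	x := 42
	return &x
}

func main() {
	p := newInt()
	fmt.Println(*p) // 42
}
```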

An active value part allocated on the heap which is still in use must be referenced by at least one value part allocated on a stack. If a value escaping to the heap is a declared local variable of type T, the Go runtime will create (a memory block for) an implicit pointer of type *T on the stack of the current goroutine. The value of the pointer stores the address of the memory block allocated for the variable on the heap (a.k.a., the address of the local variable of type T). The Go compiler also replaces all uses of the variable with dereferences of the pointer value at compile time. The *T pointer value on the stack may be marked as dead at some later point, after which the reference from it to the T value on the heap disappears. The reference relation from the *T value on the stack to the T value on the heap plays an important role in the garbage collection process, which will be described below.

Similarly, we can view each package-level variable as being allocated on the heap, and the variable as referenced by an implicit pointer allocated in a global memory zone. In fact, the implicit pointer references the direct part of the package-level variable, and the direct part of the variable references some other value parts.

A memory block allocated on heap may be referenced by multiple value parts allocated on different stacks at the same time.

Some facts:

A memory block created by calling the new function may be allocated on the heap or on a stack. This is different from C++.
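The following sketch illustrates both cases. Whether the first block really stays on the stack depends on the compiler version and its escape analysis, so this is an illustration of the principle rather than a guarantee:

```go
package main

import "fmt"

// The *int created here never escapes sum, so the standard
// Go compiler is free to allocate its memory block on the
// stack, despite being created with new. (In C++, new
// always allocates on the heap.)
func sum(a, b int) int {
	p := new(int)
	*p = a + b
	return *p
}

// Here the new-created memory block is returned, so it
// escapes and must be allocated on the heap.
func escapingNew() *int {
	return new(int)
}

func main() {
	fmt.Println(sum(1, 2))      // 3
	fmt.Println(*escapingNew()) // 0
}
```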

When the size of a goroutine stack changes (for stack growth or shrinkage), a new memory segment will be allocated for the stack. So the memory blocks allocated on the stack will very likely be moved, or their addresses will change. Consequently, the pointers, which must be also allocated on the stack, referencing these memory blocks also need to be modified accordingly. The following is such an example.
package main

// The following directive is to prevent
// calls to the function f being inlined.
//go:noinline
func f(i int) byte {
	var a [1<<20]byte // make stack grow
	return a[i]
}

func main() {
	var x int
	println(&x) // print the address of x
	f(100)      // make the stack of main grow
	println(&x) // x has been moved, so its address changes
}
We will find that the two printed addresses are different (as of the standard Go compiler v1.22.n).

When Can a Memory Block Be Collected?

Memory blocks allocated for direct parts of package-level variables will never be collected.

The stack of a goroutine will be collected as a whole when the goroutine exits. So there is no need to collect the memory blocks allocated on stacks, individually, one by one. Stacks are not collected by the garbage collector.

For a memory block allocated on the heap, it can be safely collected only if it is no longer referenced (directly or indirectly) by any value part allocated on a goroutine stack or the global memory zone. We call such memory blocks unused memory blocks. Unused memory blocks on the heap will be collected by the garbage collector.

Here is an example to show when some memory blocks can be collected:
package main

var p *int

func main() {
	done := make(chan bool)
	// "done" will be used in main and the following
	// new goroutine, so it will be allocated on heap.

	go func() {
		x, y, z := 123, 456, 789
		_ = z  // z can be allocated on stack safely.
		p = &x // Since x and y are both ever referenced
		p = &y // by the package-level p, they will both
		       // be allocated on heap.

		// Now, x is not referenced by anyone, so
		// its memory block can be collected now.

		p = nil
		// Now, y is also not referenced by anyone,
		// so its memory block can be collected now.

		done <- true
	}()

	<-done
	// Now the above goroutine has exited. The done
	// channel is not used any more, so a smart
	// compiler may deduce that it can be collected now.

	// ...
}

Sometimes, smart compilers, such as the standard Go compiler, may make some optimizations so that some references are removed earlier than we expect. Here is such an example.
package main

import "fmt"

func main() {
	// Assume the length of the slice is so large
	// that its elements must be allocated on heap.
	bs := make([]byte, 1 << 31)

	// The length and capacity of a slice are stored in
	// its direct part, so a smart compiler can detect
	// that the underlying (elements) part of the slice
	// bs will never be used after this point, and that
	// part can be garbage collected safely now.

	fmt.Println(len(bs), cap(bs))
}
Please read value parts to learn the internal structures of slice values.

By the way, sometimes we may want the slice bs to be guaranteed not to be garbage collected before fmt.Println is called. Then we can use a runtime.KeepAlive function call to tell the garbage collector that the slice bs and the value parts referenced by it are still in use.

For example,
package main

import "fmt"
import "runtime"

func main() {
	bs := make([]int, 1000000)

	defer runtime.KeepAlive(&bs[0])

	// A runtime.KeepAlive(bs) call is also
	// okay for this specified example.

	fmt.Println(len(bs), cap(bs))
}

How Are Unused Memory Blocks Detected?

The current standard Go runtime (v1.22.n) uses a concurrent, tri-color, mark-sweep garbage collector. This article will only give a simple explanation of the algorithm.

A garbage collection (GC) process is divided into two phases, the mark phase and the sweep phase. In the mark phase, the collector (a group of goroutines actually) uses the tri-color algorithm to analyze which memory blocks are unused.

The following quote is taken from a Go blog article and is modified a bit to make it clearer.
At the start of a GC cycle all heap memory blocks are white. The GC visits all roots, which are objects directly accessible by the application such as globals and things on the stack, and colors these grey. The GC then chooses a grey object, blackens it, and then scans it for pointers to other objects. When this scan finds a pointer to a white memory block, it turns that object grey. This process repeats until there are no more grey objects. At this point, white (heap) memory blocks are known to be unreachable and can be reused.

(About why the algorithm uses three colors instead of two, please search "write barrier golang" for details. Here are two references: eliminate STW stack re-scanning and mbarrier.go.)

In the sweep phase, the marked unused memory blocks will be collected.
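The following toy simulation sketches the tri-color mark phase over a hand-built object graph. It is only an illustration of the algorithm quoted above; the real collector works on actual memory blocks, runs concurrently with the program, and relies on write barriers, none of which is modeled here:

```go
package main

import "fmt"

type color int

const (
	white color = iota // not yet reached; collection candidate
	grey               // reached, but pointers not yet scanned
	black              // reached, pointers scanned
)

type object struct {
	name string
	col  color
	refs []*object // pointers hosted in this object
}

// mark runs tri-color marking from the given roots.
func mark(roots []*object) {
	var greys []*object
	for _, r := range roots { // color the roots grey
		r.col = grey
		greys = append(greys, r)
	}
	for len(greys) > 0 { // repeat until no grey objects remain
		o := greys[len(greys)-1]
		greys = greys[:len(greys)-1]
		o.col = black // blacken it, then scan its pointers
		for _, c := range o.refs {
			if c.col == white {
				c.col = grey
				greys = append(greys, c)
			}
		}
	}
}

func main() {
	a := &object{name: "a"}
	b := &object{name: "b"}
	c := &object{name: "c"} // unreachable from the roots
	a.refs = []*object{b}

	mark([]*object{a}) // a is the only root

	// Sweep: objects still white are unreachable.
	for _, o := range []*object{a, b, c} {
		if o.col == white {
			fmt.Println(o.name, "is collected")
		}
	}
	// prints: c is collected
}
```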

An unused memory block may not be released to the OS immediately after it is collected, so that it can be reused for some new value parts. Don't worry: the official Go runtime is much less memory greedy than most Java runtimes.

The GC algorithm is a non-compacting one, so it will not move memory blocks to rearrange them.

When Will a New Garbage Collection Process Start?

Garbage collection processes consume much CPU and some memory. So there is not always a garbage collection process running. A new garbage collection process will only be triggered when some run-time metrics reach certain conditions. How the conditions are defined is a garbage collection pacer problem.

The garbage collection pacer implementation of the official standard Go runtime is still being improved from version to version. So it is hard to describe the implementation precisely and keep the descriptions up-to-date at the same time. Here, I just list some reference articles on this topic:

