Defining methods in Go - Time & Space Complexity
When we define methods in Go, it's important to understand how their execution time scales as the input grows. In other words, we want to know how the time needed to run a method changes when it is called with different data sizes.
Analyze the time complexity of the following code snippet.
```go
package main

import "fmt"

type Counter struct {
	values []int
}

func (c *Counter) Sum() int {
	total := 0
	for _, v := range c.values {
		total += v
	}
	return total
}

func main() {
	c := Counter{values: []int{1, 2, 3, 4, 5}}
	fmt.Println(c.Sum())
}
```
This code defines a method Sum for a Counter type that adds up all numbers in its values slice.
Identify the parts of the code that repeat: loops, recursion, and traversals over slices or arrays.
- Primary operation: looping through the `values` slice inside the `Sum` method.
- How many times: once for each element in the `values` slice.
As the number of elements in values grows, the method does more additions, one per element.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 additions |
| 100 | 100 additions |
| 1000 | 1000 additions |
Pattern observation: The work grows directly with the number of elements. Double the elements, double the work.
Time Complexity: O(n)
This means the time to run the method grows in a straight line with the number of items it sums.
[X] Wrong: "Defining a method makes the code run slower regardless of what it does."
[OK] Correct: The method itself is just a way to organize code. The time depends on what the method does, not just that it is a method.
Understanding how methods work and how their time grows helps you explain your code clearly and think about efficiency in real projects.
What if the Sum method also called another method inside a loop? How would the time complexity change?