It's no real secret that I love the programming language Go. So I was delighted to see that Go does the right things when it comes to its time package, which handles time zones correctly by default instead of treating them as something bolted on after the fact, as most other languages do.
But for some reason it's surprisingly awkward to convert a millisecond-resolution Unix timestamp to a time.Time: the built-in time.Unix() function only accepts seconds and nanoseconds.
This means you either have to multiply the millis up to nanoseconds or split them into a seconds part and a nanoseconds part. So naturally my naive implementation was:
time.Unix(0, timestamp * int64(1000000))
But that code looked ugly to me - especially once it shows up a few times around a codebase - so I wrote a small helper function.
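A minimal sketch of what such a wrapper might look like (the package and function names are mine, not necessarily those of the final library):

package timemillis

import "time"

// ToTime converts a millisecond-resolution Unix timestamp to a time.Time
// by scaling the milliseconds up to nanoseconds.
// (Package and function names here are illustrative.)
func ToTime(ms int64) time.Time {
	return time.Unix(0, ms*int64(1000000))
}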
Since I'm currently working on a performance-critical piece of code, I also decided to benchmark the function. It turns out that the simple multiplication to turn millis into nanos is about 2x slower than dividing the millis into seconds and converting the remainder to nanoseconds:
time.Unix(ms/int64(millisInSecond), (ms%int64(millisInSecond))*int64(nsInSecond))
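That snippet references two constants that aren't defined in the post. A self-contained version, with the constant names taken from the snippet and everything else assumed, could look like this:

package timemillis

import "time"

const (
	millisInSecond = 1000    // milliseconds per second
	nsInSecond     = 1000000 // nanoseconds per millisecond (name kept from the snippet)
)

// FromUnixMilli splits a millisecond timestamp into whole seconds and the
// leftover milliseconds, converting the leftover part to nanoseconds.
// (The function name is illustrative.)
func FromUnixMilli(ms int64) time.Time {
	return time.Unix(ms/int64(millisInSecond), (ms%int64(millisInSecond))*int64(nsInSecond))
}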
Benchmark:
goos: darwin
goarch: amd64
pkg: github.com/tigraine/go-timemillis
BenchmarkMult-8   2000000000   0.50 ns/op
BenchmarkDiv-8    2000000000   0.25 ns/op
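For reference, a benchmark along these lines produces output in that shape; the function names match the output above, while the bodies are my guess at what was measured:

package timemillis

import (
	"testing"
	"time"
)

var ms = int64(1501700424000) // an arbitrary millisecond timestamp
var sink time.Time            // assigned to keep the compiler from eliminating the calls

// BenchmarkMult measures the single-multiplication conversion.
func BenchmarkMult(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = time.Unix(0, ms*int64(1000000))
	}
}

// BenchmarkDiv measures the divide-and-remainder conversion.
func BenchmarkDiv(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = time.Unix(ms/int64(1000), (ms%int64(1000))*int64(1000000))
	}
}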
So I packaged my findings into a library, which is now available on GitHub: go-timemilli.