
Investigate amino performance/mem usage #254

Open
liamsi opened this issue Mar 4, 2019 · 2 comments
liamsi commented Mar 4, 2019

While looking into adding amino to https://github.com/alecthomas/go_serialization_benchmarks (see this branch: https://github.com/Liamsi/go_serialization_benchmarks/tree/add_amino), I found that amino is quite slow compared to similar libraries:

BenchmarkAminoMarshal-12                         300000              5492 ns/op            4138 B/op         72 allocs/op
BenchmarkAminoUnmarshal-12                       1000000              1081 ns/op             194 B/op          7 allocs/op
BenchmarkProtobufMarshal-12                      2000000               676 ns/op             200 B/op          7 allocs/op
BenchmarkProtobufUnmarshal-12                    3000000               593 ns/op             192 B/op         10 allocs/op
BenchmarkGoprotobufMarshal-12                    5000000               325 ns/op              96 B/op          2 allocs/op
BenchmarkGoprotobufUnmarshal-12                  3000000               498 ns/op             200 B/op         10 allocs/op
BenchmarkGogoprotobufMarshal-12                 20000000               119 ns/op              64 B/op          1 allocs/op
BenchmarkGogoprotobufUnmarshal-12               10000000               166 ns/op              96 B/op          3 allocs/op

Hopefully, we can substantially improve this performance without completely reworking the library's structure.

Here is what the profiler reports about memory/CPU usage while running the above marshaling/unmarshaling benchmarks:

[Attached profiler screenshots: marshal_cpu_profile, marshal_mem_profile, unmarshal_cpu_profile, unmarshal_mem_profile]

Zaki suggested adding a hint to the README about the performance issues and the cases in which users might want to refrain from using amino.

@rickyyangz
Contributor

Two things are unfair to amino in your benchmark setup:

  1. Before marshaling, you should stop (or reset) the timer while generating the test data and setting up the codec, so that setup cost is not measured:

func BenchmarkAminoMarshal(b *testing.B) {
	b.StopTimer() // exclude test-data generation and codec setup from the timing
	data := generateAmino()
	s := AminoSerializer{amino.NewCodec()}
	b.ReportAllocs()
	b.StartTimer() // only the loop below is measured
	for i := 0; i < b.N; i++ {
		s.MustMarshalBinaryBare(data[rand.Intn(len(data))])
	}
}

  2. The type of BirthDay is time.Time in the amino case, while in the other cases it is int64.

@liamsi
Contributor Author

liamsi commented Jun 24, 2020

Good point @rickyyangz. Did you re-benchmark with your suggested changes? I would assume the performance is still much slower than generated protobuf.
