avro v0.4.3.0 Release Notes

Release Date: 2019-03-04
  • 🔄 Changes

    • Switch out pure-zlib for zlib by @null
    • 🆕 New Data.Avro.Codec module. This commit adds the new Data.Avro.Codec module, which offers a Codec framework and implements the null and deflate codecs required by the Avro spec. Great care was taken to construct the deflate decompressor in an incremental manner to avoid bloating memory too much. A sketch of such a codec framework appears after this list. by @null
    • Actually make use of new Codec functionality by @null
    • Encode container blocks with codec by @null
    • ✅ Make tests use nullCodec by @null
    • Comments by @alexbiehl
    • ✂ Remove redundant import by @alexbiehl
    • ➕ Add hspec-discover to build-tool-depends. new-test will resolve and provide hspec-discover for us if we ask nicely. by @null
    • ✅ Test codecs in containerSpec by @null
    • Implemented a benchmark suite, with some benchmarks encoding/decoding arrays. The initial benchmark suite measures encoding and decoding four kinds of arrays:
        * 10,000 booleans
        * 10,000 ints
        * 10,000 longs
        * 10,000 records with the booleans, ints and longs as fields
      This by itself gives us an interesting result: encoding/decoding a record takes ~2x as much time as encoding/decoding separate arrays with exactly the same data.
        encode/array/bools   mean 4.955 ms (+- 66.76 μs)
        encode/array/ints    mean 5.366 ms (+- 59.39 μs)
        encode/array/longs   mean 6.291 ms (+- 86.93 μs)
        encode/array/records mean 35.84 ms (+- 333.0 μs)
        decode/array/bools   mean 20.50 ms (+- 230.4 μs)
        decode/array/ints    mean 64.30 ms (+- 3.144 ms)
        decode/array/longs   mean 110.8 ms (+- 3.757 ms)
        decode/array/records mean 301.7 ms (+- 8.999 ms)
      You can run the benchmarks with this shorter output using the following command: cabal new-run bench-time -- -s
      I think I implemented these benchmarks correctly, but it would be great for somebody else to take a look and make sure that I'm actually measuring what I want to measure and that things like setup costs don't get measured. This is just a starting point; it would be great to have more benchmarks. We should also start benchmarking against the TH-generated types directly. Currently the TH instances go through the intermediate Value type, but I've found this approach to be really inefficient in some internal code I've been working on. In the near future we will want to implement the equivalent of Aeson's toEncoding method to avoid the intermediate representation, so we should have benchmarks in place to measure the impact of that change. (I'll open an issue about this as well.) A sketch of a criterion benchmark in this shape appears after this list. by @TikhonJelvis
    • 📜 Flexible parser for decodeRawBlocks by @null
    • 🛠 Fix test suite by @null
    • ⏪ Revert "Fix test suite" This reverts commit a355456. by @null
    • ⏪ Revert "Flexible parser for decodeRawBlocks" This reverts commit 1428f26. by @null
    • Simplify decompression function type by @null
    • Whitespace in RawBlockSpec by @null
    • Call out codec on encodeContainer by @null
    • Stricten getBoolean and getNonNegative (a sketch of the strictness technique appears after this list).
      Before:
        decode/array/bools   time 24.02 ms
        decode/array/ints    time 82.41 ms
        decode/array/longs   time 130.7 ms
        decode/array/records time 311.5 ms
      After:
        decode/array/bools   time 18.46 ms
        decode/array/ints    time 27.34 ms
        decode/array/longs   time 34.18 ms
        decode/array/records time 154.0 ms
      by @null
    • 🔀 Merge pull request #84 from alexbiehl/codecs Proper support for codecs by @AlexeyRaga
    • 🔀 Merge pull request #86 from TikhonJelvis/benchmark Implemented a benchmark suite, with some benchmarks encoding/decoding arrays. by @AlexeyRaga
    • 🔀 Merge pull request #89 from alexbiehl/stricten Stricten getBoolean and getNonNegative by @AlexeyRaga
    • Cleanup .cabal file by @AlexeyRaga
    • 🔀 Merge pull request #90 from haskell-works/cleanup-cabal Cleanup .cabal file by @AlexeyRaga
    • 🚀 Release v0.4.3.0 by @AlexeyRaga
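
The Data.Avro.Codec change above describes a small framework with a null codec and an incremental deflate codec. Below is a minimal sketch of what such a framework can look like, built on the zlib package's raw DEFLATE functions; the Codec record, its field names, and the roundTrips helper are illustrative assumptions, not the library's actual API.

    {-# LANGUAGE OverloadedStrings #-}
    -- Codec framework sketch (illustrative, not the library's real API).
    module CodecSketch where

    import qualified Codec.Compression.Zlib.Raw as Deflate  -- from the zlib package
    import qualified Data.ByteString.Lazy       as BL
    import           Data.Text                  (Text)

    -- A codec pairs the name used in the container metadata with a
    -- compression and a decompression function over lazy ByteStrings.
    data Codec = Codec
      { codecName       :: Text
      , codecCompress   :: BL.ByteString -> BL.ByteString
      , codecDecompress :: BL.ByteString -> BL.ByteString
      }

    -- The "null" codec required by the spec: blocks pass through untouched.
    nullCodec :: Codec
    nullCodec = Codec "null" id id

    -- The "deflate" codec: raw DEFLATE (RFC 1951), no zlib header.
    -- Lazy ByteStrings keep decompression incremental, so a large
    -- container block is not forced into memory all at once.
    deflateCodec :: Codec
    deflateCodec = Codec "deflate" Deflate.compress Deflate.decompress

    -- Round-trip check for a block of bytes.
    roundTrips :: Codec -> BL.ByteString -> Bool
    roundTrips c bytes = codecDecompress c (codecCompress c bytes) == bytes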
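
The benchmark item above measures encoding and decoding arrays with criterion and worries about accidentally timing setup costs. Below is a hedged sketch of a criterion suite in that shape; Data.Binary's encode/decode stand in for the avro encode/decode functions purely so the example is self-contained, and the group names mirror the output quoted above. criterion's env runs setup outside the timed section, which is one way to keep setup costs out of the measurement.

    module Main (main) where

    import           Criterion.Main
    import           Data.Binary          (decode, encode)
    import qualified Data.ByteString.Lazy as BL
    import           Data.Int             (Int32, Int64)

    bools :: [Bool]
    bools = replicate 10000 True

    ints :: [Int32]
    ints = [1 .. 10000]

    longs :: [Int64]
    longs = [1 .. 10000]

    main :: IO ()
    main = defaultMain
      [ bgroup "encode/array"
          [ bench "bools" $ nf encode bools
          , bench "ints"  $ nf encode ints
          , bench "longs" $ nf encode longs
          ]
        -- env pre-encodes the inputs outside the timed section,
        -- so only decoding is measured.
      , env (pure (encode bools, encode ints, encode longs)) $ \ ~(bs, is, ls) ->
          bgroup "decode/array"
            [ bench "bools" $ nf (decode :: BL.ByteString -> [Bool])  bs
            , bench "ints"  $ nf (decode :: BL.ByteString -> [Int32]) is
            , bench "longs" $ nf (decode :: BL.ByteString -> [Int64]) ls
            ]
      ]

As in the notes above, a suite like this can be wired up as a benchmark stanza and run with cabal new-run bench-time -- -s.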
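
The "Stricten getBoolean and getNonNegative" item above roughly halves decode times by making the decoders stricter. Below is a minimal sketch of that kind of change: force values as soon as they are read so that decoding a large array does not build up thunks. The function names match the commit title, but the bodies are illustrative, not the library's actual implementation.

    {-# LANGUAGE BangPatterns #-}
    module StrictenSketch where

    import Data.Binary.Get (Get, getWord8)
    import Data.Bits       (shiftL, testBit, (.&.), (.|.))
    import Data.Word       (Word64)

    -- Avro encodes a boolean as a single byte, 0 or 1.
    getBoolean :: Get Bool
    getBoolean = do
      !w <- getWord8              -- force the byte as soon as it is read
      pure (w == 1)

    -- Variable-length integer decoding: keep the shift and accumulator
    -- strict so no thunks pile up between bytes.
    getNonNegative :: Get Word64
    getNonNegative = go 0 0
      where
        go :: Int -> Word64 -> Get Word64
        go !shift !acc = do
          b <- getWord8
          let !acc' = acc .|. (fromIntegral (b .&. 0x7f) `shiftL` shift)
          if testBit b 7
            then go (shift + 7) acc'
            else pure acc'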