Behavior of Kafka GZIP, Snappy and LZ4 Compression

In a test where I generate a random 10 MB+ string, I see only gzip offering any compression value when set on the producer; both snappy and lz4 actually add bytes:

COMPRESSION TYPE: gzip, ORIGINAL: 10485019, COMPRESSED: 7364618
COMPRESSION TYPE: snappy, ORIGINAL: 10485019, COMPRESSED: 10488238
COMPRESSION TYPE: lz4, ORIGINAL: 10485019, COMPRESSED: 10485666
COMPRESSION TYPE: none, ORIGINAL: 10485019, COMPRESSED: 10485019
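For reference, the gzip ratio above is roughly what I'd expect for random printable text: each character carries well under 8 bits of entropy, which gzip's Huffman stage can exploit but which LZ-match-only codecs like snappy and lz4 cannot. Here is a minimal stdlib-only sketch of that effect (the class name, seed, and 1 MB size are mine, not from the original test):

```java
import java.io.ByteArrayOutputStream;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class GzipSizeCheck {

    // Compress a byte array with gzip and return the compressed length.
    static int gzipSize(byte[] input) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        } // close() finishes the deflate stream and writes the gzip trailer
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        // "Random" printable text: each byte is drawn from a-z, i.e. about
        // 4.7 bits of entropy stored in an 8-bit byte. gzip's entropy coding
        // can recover some of that slack; snappy/lz4 have no entropy coder.
        byte[] msg = new byte[1 << 20];
        Random rnd = new Random(42);
        for (int i = 0; i < msg.length; i++) {
            msg[i] = (byte) ('a' + rnd.nextInt(26));
        }
        System.out.println("ORIGINAL: " + msg.length
                + ", GZIP: " + gzipSize(msg));
    }
}
```

On truly uniform random bytes (full 0-255 range) even gzip would add a few bytes of overhead, so the exact behavior depends on how the random string is generated.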

To get the compressed size I'm doing what the KafkaProducer does, creating an OutputStream via CompressionType.wrapForOutput:

OutputStream outputStream =
            compressionType.wrapForOutput(byteBufferOutputStream, (byte) 0);

After writing and flushing the test message to the outputStream, I determine the compressed size from the underlying buffer position:

int compressed = byteBufferOutputStream.buffer().position();
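The measurement approach itself can be reproduced with only the JDK, substituting GZIPOutputStream for wrapForOutput and a tiny ByteBuffer-backed stream for Kafka's ByteBufferOutputStream. This is a sketch under those substitutions, not Kafka's actual classes; one subtlety it highlights is that the wrapping stream must be closed, not merely flushed, before reading the position, or codec trailer bytes may not have been written yet:

```java
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.zip.GZIPOutputStream;

public class BufferPositionMeasure {

    // Minimal stand-in for Kafka's ByteBufferOutputStream: an OutputStream
    // writing into a fixed-capacity ByteBuffer (no auto-growth in this sketch).
    static class ByteBufferStream extends OutputStream {
        final ByteBuffer buffer;
        ByteBufferStream(int capacity) { buffer = ByteBuffer.allocate(capacity); }
        @Override public void write(int b) { buffer.put((byte) b); }
        @Override public void write(byte[] b, int off, int len) { buffer.put(b, off, len); }
    }

    static int measureGzip(byte[] payload) throws Exception {
        ByteBufferStream bbos = new ByteBufferStream(payload.length + 1024);
        // Stand-in for compressionType.wrapForOutput(byteBufferOutputStream, magic)
        try (OutputStream out = new GZIPOutputStream(bbos)) {
            out.write(payload);
        } // close() flushes the deflate block and gzip trailer into the buffer
        // Compressed size = how far the codec advanced the underlying buffer
        return bbos.buffer.position();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[100_000]; // all zeros: highly compressible
        System.out.println("ORIGINAL: " + payload.length
                + ", COMPRESSED: " + measureGzip(payload));
    }
}
```

For very small payloads the fixed per-message codec overhead (headers, trailers, block framing) dominates, which is also why snappy and lz4 can come out larger than the input on data they cannot match against.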

I'm using the latest build from trunk, kafka-1.2.0-SNAPSHOT.

I have verified each time that the compression codec is correct in the topic-partition log using the DumpLogSegments tool:

bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/test-0/00000000000000000000.log --print-data-log

Am I missing something here?