This post is archived and probably outdated.

Efficient Node.js Buffer usage

2019-09-27 20:30:00

When building network libraries for Node.js, as we do at work, one quickly comes across Node's Buffer type. A Buffer gives raw access to a region of memory, which allows handling binary data and interpreting binary streams. The Buffer interface predates ES6 TypedArrays and comes with some optimizations.

Two optimizations are notable:

First, the slice() method does not copy data but returns a view onto the underlying memory. This makes it efficient to work on a window of the data, but one has to be careful when writing. A simple example:

const buffer = Buffer.from("hello");
const slice = buffer.slice(1,2);
slice[0] = 97;
console.log(buffer.toString('utf8')); // will print 'hallo'

The second optimization is that allocating a small Buffer won't actually go to the operating system to allocate a memory area; instead, Node.js keeps a pre-allocated memory region from which small Buffers can be carved out quickly. This actually works by reusing the slicing logic.

const buffer1 = Buffer.from("hello");
const buffer2 = Buffer.from("world");
console.log(buffer1.byteOffset);   // 0
console.log(buffer2.byteOffset);   // 8

This indicates that both buffers use the same memory region with an alignment of 8 bytes.

So how does this work? Underlying the Buffer, in modern versions of Node.js, is an ArrayBuffer. We can ask the Buffer for its underlying ArrayBuffer using the .buffer property. One thing to be careful about: for a slice, the returned ArrayBuffer is the full buffer and not only the sliced part. Given the two Buffers from above, we can see this.

const buffer3 = Buffer.from(buffer2.buffer);
console.log(buffer3.length);    // 8192
const buffer4 = buffer3.slice(0, 5);
console.log(buffer4.toString('utf8'));  // hello

A raw ArrayBuffer doesn't provide much, but we can create a new Buffer on top of it. This won't copy the data but uses the same memory as above. We can see that the block pre-allocated by Node (in the version I'm using for this test) is apparently 8192 bytes. 8k is a common size for such buffers. One factor in that choice is that many filesystems use 512-byte blocks, and 8k is a convenient multiple of that. Additionally, CPU cache sizes are often multiples of 8k. So this is not an entirely arbitrary choice.
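That pool size is actually exposed (and tweakable) as Buffer.poolSize; in the Node.js versions I've tested it defaults to exactly those 8192 bytes:

```javascript
// Buffer.poolSize is the size of Node's pre-allocated pool;
// Buffers smaller than half of it are carved out of the pool.
console.log(Buffer.poolSize); // 8192 by default
```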

Since most of that memory most likely contains garbage, we slice it and look only at the beginning of the data. Notably, we're not seeing "world" but "hello", confirming that these Buffers indeed share the same underlying memory.

As said, Buffer is a Node-specific type, and libraries not written specifically for Node.js can't handle it. One of those libraries we use is the Google Protocol Buffers library. In simple terms, protobuf is a format for serializing data, for example for exchange over a network. One could call it a typed and more efficient alternative to JSON.

Protobuf's deserializeBinary() function now won't work with Buffer instances but requires a TypedArray such as Uint8Array. A Uint8Array is the ES6 counterpart to Node's Buffer: a layer on top of an ArrayBuffer for accessing individual bytes.
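To illustrate what that layering means, here is a minimal standalone sketch of a Uint8Array giving byte access to an ArrayBuffer:

```javascript
// A Uint8Array is a byte-wise view onto an ArrayBuffer;
// writes through the view land in the ArrayBuffer's memory.
const ab = new ArrayBuffer(4);
const view = new Uint8Array(ab);
view[0] = 104; // 'h'
view[1] = 105; // 'i'
// A second view over the same ArrayBuffer sees the same bytes:
const view2 = new Uint8Array(ab);
console.log(view2[0], view2[1]); // 104 105
```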

The easy way to make it work is converting the Buffer to a Uint8Array like this:

const buf = stream.read(length);
const u8 = new Uint8Array(buf);
const msg = Message.deserializeBinary(u8);

However, this is inefficient, as the Uint8Array constructor will copy the data. But as we learned, both Buffer and Uint8Array are just views on top of an ArrayBuffer. The Buffer gives us access to that ArrayBuffer, and if we are careful about our slicing offsets, we can ask the Uint8Array to reuse the same memory without copying:

const buf = stream.read(length);
const u8 = new Uint8Array(buf.buffer, buf.byteOffset, buf.length);
const msg = Message.deserializeBinary(u8);
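One pitfall worth noting: passing only buf.buffer without the offset and length creates a view over the entire pooled ArrayBuffer, not just our slice. A small sketch (using a freshly created Buffer in place of stream.read()):

```javascript
const buf = Buffer.from("world"); // small, so carved from the pool
// Without offset/length the view spans the whole pooled ArrayBuffer:
const wrong = new Uint8Array(buf.buffer);
// With offset and length the view matches the Buffer exactly:
const right = new Uint8Array(buf.buffer, buf.byteOffset, buf.length);
console.log(wrong.length, right.length); // e.g. 8192 and 5
```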

A tiny bit more to type, but a notable gain in performance, which we can verify with a trivial benchmark:

console.time("A");
for (let i = 0; i < 1000; ++i) {
    const b = Buffer.alloc(10000);
    const u = new Uint8Array(b);
}
console.timeEnd("A");

console.time("B");
for (let i = 0; i < 1000; ++i) {
    const b = Buffer.alloc(10000);
    const u = new Uint8Array(b.buffer, b.byteOffset, b.length);
}
console.timeEnd("B");

On my machine I see results like:

A: 9.895ms
B: 5.216ms

The difference in a real application will of course vary.
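We can also confirm that variant B really shares memory rather than copying, by writing through the Buffer and reading through the view:

```javascript
const b = Buffer.from("hello");
const u = new Uint8Array(b.buffer, b.byteOffset, b.length);
b[0] = 72; // overwrite 'h' with 'H' through the Buffer
console.log(u[0]); // 72; the view sees the write, so no copy was made
```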