elasticsearch: distributing indices over multiple disk volumes

I have one index which is quite large (about 100 GB), so I had to extend the disk space on my DigitalOcean server by adding another volume (I run everything on a single node). I told Elasticsearch to use both disk locations by starting it with

/usr/share/elasticsearch/bin/elasticsearch -Epath.data=/var/lib/elasticsearch,/mnt/volume-sfo2-01/es_data
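
If I read the docs correctly, the equivalent persistent setting in elasticsearch.yml would be something like this (the location of the config file depends on the installation)

path.data:
  - /var/lib/elasticsearch
  - /mnt/volume-sfo2-01/es_data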

Elasticsearch does seem to have taken note of this, since it wrote some files to the new location

# ls /mnt/volume-sfo2-01/es_data/
nodes

Reading the Elasticsearch documentation, I got the impression that Elasticsearch will spread a new index over the two data locations, but will not split a single shard between them. So I created the index with 5 shards so that it can distribute the data across the volumes.
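
For completeness, this is roughly how I create the index from Python (the mapping is omitted here, and the replica count is simply what I use on a single node)

from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

# 5 primary shards so the data can be distributed over both data paths;
# no replicas because there is only one node anyway
es.indices.create(
    index="pubmed_paper",
    body={
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 0
        }
    }
)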

The server does seem to have detected the two data paths, since the log file shows

[2017-06-17T19:16:57,079][INFO ][o.e.e.NodeEnvironment    ] [WU6cQ-o] using [2] data paths, mounts [[/ (/dev/vda1), /mnt/volume-sfo2-01 (/dev/sda)]], net usable_space [29.6gb], net total_space [98.1gb], spins? [possibly], types [ext4]
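
The log only reports the combined usable space, so to see how much space each data path has left I query the per-path filesystem stats, roughly like this (Python client, default localhost connection)

from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

# the node filesystem stats list one entry per configured data path,
# each with its own free/total space
for node in es.nodes.stats(metric="fs")["nodes"].values():
    for data_path in node["fs"]["data"]:
        print(data_path["path"], data_path["available_in_bytes"])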

However, when I index documents into the new index, it constantly uses only the disk space on my original disk and eventually runs out of space with the error

raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.TransportError: TransportError(500, u'index_failed_engine_exception', u'Index failed for [pubmed_paper#25949809]')
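
The indexing itself goes through the Python client, roughly like this (the document body and the doc type name are placeholders; the id is the one from the error above)

from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

# placeholder document; the real papers have more fields
doc = {"title": "...", "abstract": "..."}
es.index(index="pubmed_paper", doc_type="paper", id=25949809, body=doc)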

Elasticsearch never places any of the shards on the second volume. Am I missing something? Can I manually control how the disk space is used?