Commit transfer performance for large files to HTTP+SVN server

I have an SVN repository behind an Apache HTTPS server that stores both small and large (>1 GB) files. When I commit a large file, the transfer speed is about 10 MB/s over a 1 Gbit network link. Looking at CPU utilization on the server, the CPU is saturated, with about 85% consumed by apache2 and some 15% by the disk driver.

I have already tried disabling Apache logging and SSL, but neither improved the transfer speed, which makes me suspect that mod_dav_svn itself is consuming most of the CPU. I have also tried increasing the number of cores available to the server (default = 1 core), but this mysteriously slows down the commits while httpd keeps using only one core. Setting SVNCompressionLevel 0 did not yield any noticeable speedup either.
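For reference, the server-side tweaks described above correspond to something like the following in the Apache site configuration. This is a minimal sketch: the `/svn` location and `/srv/svn` parent path are assumptions, not taken from the question.

```apache
# Sketch of the tweaks tried above (location and paths are assumptions).
<Location /svn>
    DAV svn
    SVNParentPath /srv/svn
    # Skip zlib compression of svndiff data sent over the wire:
    SVNCompressionLevel 0
</Location>
# Access logging is disabled by simply not declaring a CustomLog
# directive for this vhost; TLS is avoided by serving the vhost on
# port 80 instead of 443.
```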

Is there any way to significantly increase the transfer speed through parallelization or some other optimization?

Server:

  • Debian 9.3
  • Apache 2.4.25
  • libapache2-mod-svn 1.9.5
  • svn repository: default FSFS config (i.e. everything commented out in fsfs.conf). The HDD can write up to 30 MB/s (hardware-limited) without saturating the CPU (tested with a plain copy). The filesystem is NTFS, mounted via ntfs-3g with big_writes enabled, which consumes some 10-15% CPU while writing at 10 MB/s.
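The raw write figure quoted above can be reproduced with a quick `dd` run (the target path is an assumption; `conv=fsync` forces the data to disk before `dd` prints its summary so the page cache does not inflate the number):

```shell
# Write 256 MiB of zeros and let dd report the sustained throughput.
dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=256 conv=fsync
rm /mnt/data/ddtest
```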

Client:

  • svn 1.8.13

CPU: first-generation Intel Core @ 3.20 GHz

Obviously, I would be very pleased if I could transfer at 25-30 MB/s.

1 answer

  • answered 2018-01-12 07:02 bahrep

    Is there any way to significantly increase the transfer speed through parallelization or some other optimization?

    Yes, there is. However, the question lacks necessary details about the SVN client and server versions, the server's and FSFS repository's configuration, and the hardware they run on, so it is hard to tell which optimizations will help in your case. You may want to upgrade your server and client to the latest versions and disable compression in the server's config.
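    Compression can also be disabled on the client side via the Subversion runtime configuration (`~/.subversion/servers` on Unix-like systems); a sketch:

    ```ini
    [global]
    # Do not gzip-compress HTTP request/response bodies:
    http-compression = no
    ```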

    FYI: in my tests, VisualSVN Server can deliver 1 Gbps.