Why is my D code not as performant as expected?

I am doing a benchmark test for my own fun! I have written the same piece of code in several programming languages and benchmarked them using ab to see which is faster and by how much. I know the method may not be entirely valid and cannot be used as evidence for choosing one language over another, but I am doing it for my own information. The other thing I want to know is how easy or difficult it is to write the same sample in each language. I wrote the code in Python, Python (asyncio), Haskell, Go, Kotlin and D. I expected the D port to be faster than Go (or at least equal in speed), but unfortunately my D code is much slower than the Go one. Here I put both codes; please help me figure out why my code is not as fast as expected. Or am I completely wrong in my expectations?

import cbor;
import std.array : appender;
import std.format;
import std.json;
import vibe.vibe;


struct Location
{
    float latitude;
    float longitude;
    float altitude;
    float bearing;
}
RedisClient redis;


void main()
{
    auto settings = new HTTPServerSettings;
    redis = connectRedis("localhost", 6379);

    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, &hello);

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
    runApplication();
}

void hello(HTTPServerRequest req, HTTPServerResponse res)
{
    if (req.path == "/locations") {
        immutable auto data = req.json;
        immutable auto loc = deserializeJson!Location(data);
        auto buffer = appender!(ubyte[])();
        encodeCborAggregate!(Flag!"WithFieldName".yes)(buffer, loc);
        auto db = redis.getDatabase(0);

        db.set("Vehicle", cast(string) buffer.data);
        res.writeBody("Ok");
    }
}

And here is the Go code:

package main

import (
    "github.com/kataras/iris"
    "github.com/kataras/iris/context"
)

import "github.com/go-redis/redis"

import (
    "bytes"
    "github.com/2tvenom/cbor"
)

type Location struct {
    Latitude  float32 `json:"latitude"`
    Longitude float32 `json:"longitude"`
    Altitude  float32 `json:"altitude"`
    Bearing   float32 `json:"bearing"`
}

func main() {
    app := iris.New()
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    app.Post("/locations", func(ctx context.Context) {
        var loc Location
        ctx.ReadJSON(&loc)
        var buffTest bytes.Buffer
        encoder := cbor.NewEncoder(&buffTest)
        encoder.Marshal(loc)
        client.Set("vehicle", buffTest.Bytes(), 0)
        client.Close()
        ctx.Writef("ok")
    })
    app.Run(iris.Addr(":8080"), iris.WithCharset("UTF-8"))
}

Using ab, Go handles about 4200 req/sec, while D handles only about 2800 req/sec!

1 answer

  • answered 2017-06-17 20:05 Schwern

    You're not just benchmarking Go vs D. You're also benchmarking your particular choice of non-standard Go and D libraries against each other: cbor, vibe, iris, etc. And you're benchmarking your particular implementation which can easily vary by 1000x in performance.

    With this many variables, the raw benchmark numbers are pretty meaningless for comparing the performance of the two languages. It's possible any one of those 3rd-party libraries is causing a performance problem. Really, you're comparing just those two particular programs. This is the core problem with trying to compare anything but trivial programs across languages: there are too many variables.


    You can reduce the impact of some of these variables with performance profiling; in Go this would be go tool pprof. This will tell you which functions and lines are called, how many times, and how much time and memory they consume. With that you can find bottlenecks, places in the code which are consuming a lot of resources, and focus optimization efforts there.
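    For an HTTP server like yours, wiring up pprof is just an import of the standard net/http/pprof package, which registers /debug/pprof handlers on the default mux. A minimal, self-contained sketch (it uses httptest to hit the endpoint in-process purely so the example stands alone; in your real server you would simply browse to /debug/pprof/ on your listening port):

    ```go
    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
    )

    func main() {
    	// Serve the default mux, which now includes the pprof index page.
    	srv := httptest.NewServer(http.DefaultServeMux)
    	defer srv.Close()

    	// Fetch the profiling index; a real run would instead use
    	// `go tool pprof` against this endpoint while ab applies load.
    	resp, err := http.Get(srv.URL + "/debug/pprof/")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(resp.StatusCode)
    }
    ```

    While ab is running, `go tool pprof http://localhost:8080/debug/pprof/profile` will collect a CPU profile you can inspect interactively.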

    As you do profile and optimization rounds for each version, you'll get closer to comparing real, optimized implementations. Or you'll have a better understanding of what each language and library does efficiently, and what they don't.
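    When a profile points at one hot step, you can also time that step on its own with Go's testing.Benchmark, which works outside of go test. This sketch times stdlib JSON marshalling of the Location struct from the question as a stand-in for the CBOR encoding step (encoding/json is used only so the example needs no third-party library):

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"testing"
    )

    // Location is copied from the question, including its JSON field tags.
    type Location struct {
    	Latitude  float32 `json:"latitude"`
    	Longitude float32 `json:"longitude"`
    	Altitude  float32 `json:"altitude"`
    	Bearing   float32 `json:"bearing"`
    }

    func main() {
    	loc := Location{Latitude: 1, Longitude: 2, Altitude: 3, Bearing: 4}

    	// testing.Benchmark runs the closure enough times (b.N) to get a
    	// stable per-operation timing for just this serialization step.
    	res := testing.Benchmark(func(b *testing.B) {
    		for i := 0; i < b.N; i++ {
    			json.Marshal(loc)
    		}
    	})
    	fmt.Println(res.N > 0)
    	fmt.Println(res.NsPerOp() >= 0)
    }
    ```

    Comparing such micro-timings of the serialization, Redis round-trip, and HTTP handling separately tells you which layer accounts for the gap between the two servers.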


    The problem of comparing languages is heavily influenced by the particular problem and the particular programmer. X programmers invariably find X to be the best language, not because X is the best language, but because X programmers are at their best when writing in X and probably chose a problem they're comfortable with. Because of this, there are a number of projects to crowd-source the best implementation for each language.

    The one which immediately comes to mind is The Computer Language Benchmarks Game. They do Go, but not D. Maybe you can add it?