Increase HADOOP_HEAPSIZE in Amazon EMR to run a job with a few million input files

I am running into an issue with my EMR jobs where having too many input files throws out-of-memory errors. From my research, I think changing the HADOOP_HEAPSIZE config parameter is the solution. Old Amazon forum posts from 2010 say it cannot be done. Can it be done now, in 2018?

I run my jobs using the C# API for EMR, and I normally set configurations with statements like the ones below. Can I set HADOOP_HEAPSIZE using similar commands?

    config.Args.Insert(2, "-D");
    config.Args.Insert(3, "mapreduce.output.fileoutputformat.compress=true");
    config.Args.Insert(4, "-D");
    config.Args.Insert(5, "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec");
    config.Args.Insert(6, "-D");
    config.Args.Insert(7, "mapreduce.map.output.compress=true");
    config.Args.Insert(8, "-D");
    config.Args.Insert(9, "mapreduce.task.timeout=18000000");
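
In case it helps, my current guess is that HADOOP_HEAPSIZE is an environment variable rather than a `-D` Hadoop property, so it would have to go through EMR's configuration classifications (hadoop-env / export) when the cluster is created rather than through step args. Below is a rough, untested sketch of how I imagine that would look with the AWS SDK for .NET; the classification names, the 4096 MB value, and the release label are just my assumptions, so please correct me if this is off:

    // Rough sketch, not tested: setting HADOOP_HEAPSIZE via the hadoop-env/export
    // configuration classification when creating the cluster (AWS SDK for .NET).
    using System.Collections.Generic;
    using Amazon.ElasticMapReduce.Model;

    var heapConfig = new Configuration
    {
        Classification = "hadoop-env",
        Configurations = new List<Configuration>
        {
            new Configuration
            {
                Classification = "export",
                Properties = new Dictionary<string, string>
                {
                    // heap size in MB -- 4096 is a placeholder value
                    { "HADOOP_HEAPSIZE", "4096" }
                }
            }
        }
    };

    var request = new RunJobFlowRequest
    {
        Name = "my-job-flow",          // placeholder name
        ReleaseLabel = "emr-5.11.0",   // placeholder release label
        Configurations = new List<Configuration> { heapConfig },
        // ... instances, steps, roles, etc. as usual
    };

Is something like this the right direction, or does it have to be done through a bootstrap action?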

If I need to bootstrap using a file, I can do that too. Could someone show me what the contents of that file would need to be for this config change?

Thanks