Spark on YARN: is driver memory checked on the client side?


I thought I understood the YARN architecture well, but now I am surprised: when I launch

  spark-submit --master yarn-cluster --class com.domain.xxx.ddpaction.DdpApp --num-executors 24 --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --conf "spark.yarn.jar=/spark/lib/spark-assembly-1.0.0-hadoop2.4.0.jar" ddpaction-3.1.0.jar yarn-cluster config.yml

it fails with

  # Native memory allocation (malloc) failed to allocate 2863333376 bytes for committing reserved memory

The server from which I launch spark-submit has less than 2 GB of free memory, and this causes the error. But the ResourceManager node, where the driver should run in yarn-cluster mode, has more than 4 GB free, so the driver-memory setting should not be a problem there. Why is the driver-memory value, which I thought should be checked and allocated on the YARN cluster by the ResourceManager, instead being allocated on the server where spark-submit runs, even in yarn-cluster mode?
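One way to see the client-side allocation for yourself is to inspect the JVM that spark-submit starts on the submitting machine. This is a diagnostic sketch; the exact process name and flags may differ between Spark releases:

```shell
# Submit the job, then on the same machine look at the client-side JVM.
# On affected versions, the org.apache.spark.deploy.SparkSubmit process
# is started with -Xmx mirroring --driver-memory (here -Xmx4g), even
# though --deploy-mode is cluster.
ps -ef | grep '[S]parkSubmit'
# If you see "-Xmx4g" in that java command line, the heap is being
# reserved on the client host, which is why a machine with <2 GB free
# fails before YARN is ever contacted.
```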

This is a bug that was fixed in Spark 1.4.0: in yarn-cluster mode, spark-submit used to apply the --driver-memory value as the heap size of the local client JVM as well, even though the driver itself runs on the cluster.
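If upgrading is not immediately possible, a workaround worth trying (an assumption on my part, not something I have verified on every 1.x release) is to keep --driver-memory off the command line and pass the value as a Spark property instead, so the client JVM is not started with the large heap:

```shell
# Hypothetical pre-1.4.0 workaround: set spark.driver.memory via --conf
# instead of the --driver-memory flag. The property is still forwarded to
# the ApplicationMaster that hosts the driver, while the local client JVM
# keeps its default heap size.
spark-submit --master yarn-cluster \
  --class com.domain.xxx.ddpaction.DdpApp \
  --num-executors 24 \
  --deploy-mode cluster \
  --conf spark.driver.memory=4g \
  --executor-memory 2g \
  --executor-cores 1 \
  --conf "spark.yarn.jar=/spark/lib/spark-assembly-1.0.0-hadoop2.4.0.jar" \
  ddpaction-3.1.0.jar yarn-cluster config.yml
```

Otherwise, upgrading to Spark 1.4.0 or later removes the client-side allocation entirely.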

