I have created a cluster with 4 nodes; each node has 32GB RAM, and the nodes have 64GB, 64GB, 32GB, and 32GB SSDs respectively. We inserted 34GB of data and are trying to read all 34GB (22,400 records) back from the application. While reading, the master node gives an error after successfully reading 15,300 of the 22,400 records; the error code is 1043:CM_MEMORY_LIMIT_EXCEED. Looking at the gs_admin monitor window, the master's RAM is fully allocated while the other nodes have 90% free space. We cannot understand how RAM is distributed in a cluster environment.
Cluster config details are as follows:
CLUSTER NAME: GS_CLUSTER
Number of nodes in cluster: 4
StoreMemoryLimit: 10GB Notification,
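For reference, this is roughly how the limit is set in each node's gs_node.json — a minimal sketch, assuming the default GridDB layout; the path and value here are illustrative, not our exact file:

```json
{
  "dataStore": {
    "dbPath": "data",
    "storeMemoryLimit": "10240MB"
  }
}
```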
By my math, you have 37GB of memory used per replica, so 74GB in total, or 18.5GB per node. It is quite likely that the combined transaction log and checkpoint file sizes exceed the 32GB capacity of your smallest disks, especially if you're doing a batch load. This also does not account for whether the OS is installed on that disk, which would reduce the available space further.
The solution is likely bigger disks or replication level = 1.
We can confirm the issue further if you can provide the output of gs_stat and df -h from the nodes with the smaller disks.
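Something like the following on each small-disk node would give us what we need — a sketch only: the data directory path is the common GridDB default and may differ on your install, and the gs_stat credentials are placeholders:

```shell
# Disk usage for the filesystem holding the GridDB data directory
# (/var/lib/gridstore is a typical default path; fall back to all
# filesystems if it does not exist on this node).
df -h /var/lib/gridstore 2>/dev/null || df -h

# Node statistics as JSON -- includes store memory usage and checkpoint
# file sizes. Run on each node; replace admin/admin with your own
# credentials. Commented out here since it needs a running GridDB node:
# gs_stat -u admin/admin
```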