Memory management
Configure heap space and swap memory settings for Groundplex nodes.
This topic covers the following memory configuration options for Groundplex nodes:
- Heap space configuration — Set the maximum JVM heap size using the auto setting or a custom value.
- Compressed class space — Override the default compressed class space size to prevent pipeline preparation errors on large heap configurations.
- Swap memory for dynamic workloads — Configure swap memory to prevent OutOfMemory failures when pipeline memory usage is unpredictable.
- Swap configuration workflow — Steps to enable swap and tune health check timeouts to account for swap usage.
Memory and heap space configuration
To change the maximum heap space used by your Snaplex, edit the Maximum heap size field on the Update a Snaplex page.
The default is auto, meaning that SnapLogic automatically sets the maximum heap size based on the available memory. The auto setting uses the optimum fraction of physical memory available to the system and leaves sufficient memory for operating system usage, as follows:
- For RAM up to 4 GB, 75% is allocated to Java Virtual Machine (JVM).
- For RAM over 4 GB and up to 8 GB, 80% is allocated to JVM.
- For RAM over 8 GB and up to 32 GB, 85% is allocated to JVM.
- For RAM over 32 GB, 90% is allocated to JVM.
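As an illustrative sketch (not SnapLogic's actual implementation), the auto tiers above can be expressed as a small shell function that returns the JVM heap percentage for a given amount of physical RAM in GB:

```shell
#!/bin/sh
# Illustrative sketch of the auto heap tiers described above.
# Given physical RAM in GB, print the percentage allocated to the JVM.
heap_fraction() {
  ram_gb=$1
  if [ "$ram_gb" -le 4 ]; then
    echo 75        # up to 4 GB: 75%
  elif [ "$ram_gb" -le 8 ]; then
    echo 80        # over 4 GB and up to 8 GB: 80%
  elif [ "$ram_gb" -le 32 ]; then
    echo 85        # over 8 GB and up to 32 GB: 85%
  else
    echo 90        # over 32 GB: 90%
  fi
}

heap_fraction 16   # prints 85
```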
Custom heap setting
If you enter your own heap space value, a common approach is to set it to approximately 1 GB less than the amount of RAM available on the machine.
For example, to set the heap space to 5 GiB, enter the number followed immediately by g with no spaces: 5g.
Set the heap space appropriately for optimum Snaplex performance:
- If the heap space value is excessively high, this can cause the machine to swap memory to disk, degrading performance.
- If the heap space value is excessively low, this can cause pipelines that require higher memory to fail or degrade in performance.
Compressed class space
In Java, objects are instantiations of classes. Object data is stored on the heap, while class metadata is stored in non-heap space.
The default compressed class space size is automatically raised to 2 GB to prevent pipeline preparation errors when the JVM heap space size is more than 10 GB.
If a Pipeline Failed to Prepare error displays related to compressed class space, you can customize this setting in the Snaplex Global Properties.
To override the default, add the following global property, where N is the custom size of the compressed class space.
| Key | Value |
|---|---|
| jcc.jvm_options | -XX:CompressedClassSpaceSize=N |
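For illustration, a small shell helper can build the value string for this property; the helper name ccs_option is hypothetical, but the flag it emits matches the table above:

```shell
#!/bin/sh
# Hypothetical helper: build the jcc.jvm_options value for a given
# compressed class space size (for example, "2g").
ccs_option() {
  printf -- '-XX:CompressedClassSpaceSize=%s\n' "$1"
}

ccs_option 2g   # prints -XX:CompressedClassSpaceSize=2g
```

On a running JCC node, you can inspect the effective JVM flags with standard JDK tooling such as jcmd to confirm the setting took effect.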
Configure Snaplex memory for dynamic workloads
If your Snaplex workload is unpredictable or you can't plan the memory requirements, the default heap memory settings might not be suitable. Here are some situations where you might need to adjust these settings:
- The volume of data processed by the pipeline varies significantly.
- Multiple teams are involved in pipeline development or execution, making capacity planning difficult.
- Your test or development environment has fewer resources than the production Snaplex.
In these cases, setting the maximum heap size too low can cause memory issues. If your pipelines use more memory than anticipated, the JCC node might reach the maximum memory limit, causing an OutOfMemoryError and a restart. This fails all currently running pipelines.
To prevent this, you can configure the swap memory on the Snaplex nodes. This allows the JCC node to use the configured memory under normal conditions and switch to swap memory if needed, with minimal impact on performance.
For Linux-based Groundplexes, we recommend the swap values in the following table. The recommended swap is max(RAM × 0.5, 8 GB), where RAM is the amount of physical memory on the machine. For example:
- If RAM = 8 GB, recommended swap = max(8 × 0.5, 8) = max(4, 8) = 8 GB
- If RAM = 32 GB, recommended swap = max(32 × 0.5, 8) = max(16, 8) = 16 GB

| Node Name | Node Type | Minimum Swap | Recommended Swap: max(RAM × 0.5, 8 GB) |
|---|---|---|---|
| Medium Node | 2 vCPU 8 GB memory, 40 GB storage | 8 GB | 8 GB |
| Large Node | 4 vCPU 16 GB memory, 60 GB storage | 8 GB | 8 GB |
| X-Large Node | 8 vCPU 32 GB memory, 300 GB storage | 8 GB | 16 GB |
| 2X-Large Node | 16 vCPU 64 GB memory, 400 GB storage | 8 GB | 32 GB |
| Medium M.O. Node | 2 vCPU 16 GB memory, 40 GB storage | 8 GB | 8 GB |
| Large M.O. Node | 4 vCPU 32 GB memory, 60 GB storage | 8 GB | 16 GB |
| X-Large M.O. Node | 8 vCPU 64 GB memory, 300 GB storage | 8 GB | 32 GB |
| 2X-Large M.O. Node | 16 vCPU 128 GB memory, 400 GB storage | 8 GB | 64 GB |
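The recommended-swap rule behind the table can be sketched as a shell function (integer GB values, matching the node sizes above):

```shell
#!/bin/sh
# Sketch of the recommendation above: swap = max(RAM * 0.5, 8) in GB,
# so smaller nodes still get the 8 GB minimum.
recommended_swap_gb() {
  ram_gb=$1
  half=$((ram_gb / 2))
  if [ "$half" -gt 8 ]; then
    echo "$half"
  else
    echo 8
  fi
}

recommended_swap_gb 8    # prints 8
recommended_swap_gb 32   # prints 16
recommended_swap_gb 64   # prints 32
```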
When enabling swap space, it's important to ensure the volume's IO performance is high to maintain acceptable overall performance.
- Recommendations for AWS
  - Use Instance Store instead of an EBS volume to mount the swap data. For more details, refer to the Instance Store swap volumes documentation.
- Implications of swap usage
  - If your workload exceeds the physical memory and begins to use swap, the JCC node can slow down due to the additional IO overhead. To mitigate this, configure higher timeouts for jcc.status_timeout_seconds and jcc.jcc_poll_timeout_seconds for health checks.
- Resource exhaustion
  - Even with swap configured, if all available memory is used up, the JCC process may still run out of resources. This can cause the JCC process to restart, terminating all running pipelines. To prevent this, use larger nodes with more memory to ensure your workload can complete successfully.
- Swap configuration
  - Limit the swap size to the maximum that the JCC node will use. Setting a larger swap size can degrade performance, particularly during Java Runtime Environment (JRE) garbage collection operations.
  - Memory swapping can result in performance degradation because of disk IO, especially if the pipeline workload also uses local disk IO.
  - When the pipeline workload is dynamic and capacity planning is difficult, we strongly recommend the minimum swap configuration.
To utilize swap for the JCC process, use this workflow:

1. Enable swap on your host machine. The steps depend on your operating system (OS). For example, for Ubuntu Linux, you can use the steps in this tutorial.

2. Update your Maximum memory Snaplex setting to a lower percentage value, so that the absolute value is lower than the available memory. The load balancer uses this value when allocating pipelines. The default is 85%, which means that if node memory usage is above 85% of the maximum heap size, additional pipelines cannot start on the node.

3. If a node is swapping, execution times might slow over time. You can use the following two properties to prevent the Monitor from terminating the JCC process because of health check timeouts. While not specific to swapping, raising them can help when swap is in use. Set them in the Global properties section of the Node Properties tab on the Snaplex page:

   - jcc.jcc_poll_timeout_seconds is the timeout (default: 10 seconds) for each health check poll request from the Monitor.
   - jcc.status_timeout_seconds is the time period (default: 300 seconds) that the Monitor process waits before restarting the JCC if health check requests continuously fail.
| Key | Value |
|---|---|
| jcc.jcc_poll_timeout_seconds | 60 |
| jcc.status_timeout_seconds | 3600 |
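The swap-enable step for Ubuntu Linux can be sketched with standard util-linux commands (run as root). This is a generic Linux procedure, not SnapLogic-specific tooling; adjust the 8G size to the recommendation for your node:

```shell
#!/bin/sh
# Hedged sketch: create and enable an 8 GB swap file on Ubuntu (run as root).
fallocate -l 8G /swapfile                         # reserve the file
chmod 600 /swapfile                               # restrict access, required by swapon
mkswap /swapfile                                  # format the file as swap
swapon /swapfile                                  # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
swapon --show                                     # verify the swap is active
```

Because these commands modify system state and require root, run them on a test node first.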
A clearcache option is added to the jcc.sh/jcc.bat file to clear the cache files from the node. Ensure that the JCC is stopped before running the clearcache command on both Windows and Linux systems.
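On a Linux node, the stop/clear/start sequence might look like the following sketch; the /opt/snaplogic install path is an assumption and may differ in your environment:

```shell
#!/bin/sh
# Hypothetical example: clear the node cache with the clearcache option.
# The /opt/snaplogic path is an assumed install location; adjust as needed.
/opt/snaplogic/bin/jcc.sh stop        # the JCC must be stopped first
/opt/snaplogic/bin/jcc.sh clearcache  # remove cache files from the node
/opt/snaplogic/bin/jcc.sh start       # restart the JCC
```

On Windows, the equivalent sequence uses jcc.bat with the same options.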