Best practices and use case

Best Practices

  • When upgrading a Windows-based Groundplex to use the slpropz configuration file, update the Monitor process by running jcc.bat update_service (see the command sketch after this list). If the Monitor process is not updated, the maximum heap space for the JCC process may be set incorrectly.
  • If you cannot create an SLDB file with international characters (such as æ, Æ, Ø) in the file name, set the jcc.use_lease_urls property in the Snaplex Global Properties to False (shown in the properties sketch after this list). This workaround supports all UTF-8 characters, allowing you to use file names in any global language.
  • By default, if the first run of an Ultra Task fails, SnapLogic attempts to run the task up to five times. You can customize this retry limit for Ultra Tasks on a specific Snaplex by setting the ultra.max_redelivery_count parameter in the Global Properties of your Snaplex to the desired number of retry attempts for a failed Ultra Task (also shown in the properties sketch after this list).
  • For critical workloads in a production environment, we recommend using at least two worker nodes and, for Ultra Tasks, two FeedMaster nodes. This setup helps avoid service disruptions during Snaplex upgrades.
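
A minimal command sketch for the Windows Groundplex upgrade described in the first bullet above. The installation path is an assumed default and the restart note is an assumption; only jcc.bat update_service is the documented command.

    REM Assumed default install path; adjust to match your Groundplex installation.
    cd c:\opt\snaplogic\bin

    REM Update the Monitor process so the JCC heap settings from the slpropz file are applied.
    jcc.bat update_service

    REM A JCC restart may be required afterwards; follow your installation's documented restart procedure.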
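
The properties sketch below shows the two Snaplex Global Properties from the bullets above in plain key = value form for illustration. In the product, they are entered as key/value pairs in the Snaplex Global Properties, and the retry value of 10 is only an example.

    # Allow SLDB file names containing international (UTF-8) characters.
    jcc.use_lease_urls = False

    # Maximum number of retry attempts for a failed Ultra Task (example value).
    ultra.max_redelivery_count = 10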

Use case: displaying pipelines after a JCC node is killed

This use case describes how pipelines are displayed on the Dashboard after a JCC node has been terminated.

If multiple JCC nodes are running, a task with several instances is divided among them. For example, an Ultra Task with nine instances is split into three instances on each of the three JCC nodes. However, if one of the nodes crashes, the JCC state is not updated in the SLDB (SnapLogic Database). As a result, the instances on the crashed node remain in the RUNNING state, appearing as zombie instances on the Dashboard.

The zombie instances remain visible on the Dashboard for up to eight hours, or until they reach the maximum heartbeat limit specified in the cleanup pipelines method. After the cleanup process completes, the instances are no longer visible on the Dashboard.

Note: If the node crash is resolved within the eight-hour limit, the instances are cleaned up automatically and no longer appear on the Dashboard.