Starting in CrushFTP 11.2.2+, you can run remote job engines in a pool. Your main CrushFTP server acts as the manager for the job engines, distributing jobs round-robin across the pool.

!Requirements:
"jobs" config folder must be in a shared location.\\
"logs" for where jobs are storing their log files must be in a shared location or you won't be able to view them from the UI.\\
The pool of job engines cannot be changed dynamically, its loaded one time at main server startup.\\
The jobs key cannot be random, it must be a fixed value otherwise any restart of the main CrushFTP server would break communication with all job engines.\\
\\
!Configuration:
Edit prefs.XML and configure the jobs_host_param to point at one or more remote job engines. Also make sure the "jobs_location" in prefs.XML points to the common shared location. Example:

{{{
<jobs_host_param>192.168.1.51:2500,192.168.1.52:2500,192.168.1.53:2500</jobs_host_param>
<jobs_location>/mnt/shared_drive/</jobs_location>
<job_broker_key>MyPermanentJobKey</job_broker_key>
}}}
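Since the manager and every job engine must all see the same jobs_location, it can help to sanity-check the mount on each node before startup. The sketch below is a hypothetical check, not part of CrushFTP; the path matches the example above, so adjust it to your install:

{{{
#!/bin/sh
# Hypothetical pre-start check: confirm the shared jobs location exists
# and is writable on this node. The path is taken from the example
# prefs.XML above -- change it to match your environment.
JOBS_LOCATION=/mnt/shared_drive/

if [ -d "$JOBS_LOCATION" ] && [ -w "$JOBS_LOCATION" ]; then
  STATUS="ok"
else
  STATUS="missing or read-only"
fi
echo "jobs_location $JOBS_LOCATION: $STATUS"
}}}

Run this on the main server and on each job engine node; any node reporting "missing or read-only" will not be able to share job configs or logs.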

To start a job engine that listens for a control connection from the main server, use the following command line:\\
{{{
java -Djobs_location=/mnt/shared_drive/ --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED -jar plugins/lib/CrushFTPJarProxy.jar -JOB_BROKER 2500
}}}

You can customize a "crushftp_init.sh" script to reference that launch command so the job engine runs as a daemon process on Linux, etc.\\
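As one possible starting point, a launcher fragment for such a script might look like the sketch below. The directory, log file name, and PID file name are assumptions for illustration; the jobs_location and port must match your prefs.XML settings:

{{{
#!/bin/sh
# Hypothetical launcher sketch for a job engine node -- adapt paths and
# port before wiring this into your crushftp_init.sh.
JOBS_LOCATION=/mnt/shared_drive/   # must match jobs_location in prefs.XML
PORT=2500                          # must match the port in jobs_host_param

JOB_ENGINE_CMD="java -Djobs_location=$JOBS_LOCATION \
--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED \
-jar plugins/lib/CrushFTPJarProxy.jar -JOB_BROKER $PORT"

# Launch in the background and record the PID so the init script can
# stop the engine later. (Commented out so the sketch has no side effects.)
# nohup $JOB_ENGINE_CMD > job_engine.log 2>&1 &
# echo $! > job_engine.pid
echo "$JOB_ENGINE_CMD"
}}}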
\\
\\
This parameter simply allows the job engine to measure CPU metrics for the JVM: ''--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED''\\