__IMPORTANT: due to security updates, since CrushFTP version 10.5.2+ any JDBC driver jar file needs to be placed into the CrushFTP10/plugins/lib/ directory, or it won't load.__
\\ |
\\ |
This plugin allows you to post-process files using a powerful set of tasks. You can move files, rename them, copy them, execute external programs, write text files, use date variables, multithread operations, copy to FTP/FTPS/SFTP servers, etc. |
[Example Task Flow -- Looping through files and deleting copied files|CrushTaskExample12]\\
[Example Task Flow -- Copy or move folders to another location preserving the inner folder structure (Recursive Copy/Move)|CrushTaskExample14]\\ |
[Example Task Flow -- Find and Copy items older than and newer than X days|CrushTaskExample15]\\ |
[Example Task Flow -- One-click upload and notify a contact about it|CrushTaskExample16]\\ |
[Example Task Flow -- Setting up a Controller job for multiple Data Centers and redundancy|CrushTaskExample17]\\ |
[Example Task Flow -- Renew Azure SAS token via Azure User impersonation|CrushTaskExample18]
The source filter is on every task. It filters out items that don't match the source filter so they aren't included in the current task item. This lets you apply certain actions to specific file types on a per-task basis.
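For example, a hypothetical source filter of *.pdf on a Copy task would make that task act only on PDF files, while the other items in the list pass through that task untouched.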
The find task will scan a particular folder, or a remote FTP or SFTP server, getting directory listings recursively up to the depth allowed. These found items can then be used by future task items. This task is often the starting item in a list of tasks.
You can use Regular Expressions in the Find Filter. Regular expressions need to be prefixed with the "REGEX:" literal. Also, keep in mind we do a match against the full source URL. Finally, we only support Java-style regular expressions.
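For example, a hypothetical filter of REGEX:.*/incoming/.*\.csv would match only CSV files whose full source URL passes through an /incoming/ folder; the leading .* is needed because the whole URL is matched against the expression.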
|
If the FindCache option is used, the items found will further be filtered out so prior known items aren't found again. |
The wait amount is how long to wait to verify the file is no longer being written to by some outside process. The modified date and file size are both compared. It will wait up to the maximum amount of time before giving up and aborting the job.
If you want to work with the copied files, you can have them added to the list of files that are being processed for future tasks. Be careful with this because if you do this, and then do a delete, both the original and copy would then be deleted. |
!input/parameters: |
*__Command:__ the name of (or full path to) the external binary. In case of running scripts, this is the script interpreter binary name, e.g. cmd.exe for a Windows batch script\\
*__Argument:__ the list of arguments passed to Command, the list items separated by the __Separator__ character ";" (semicolon) by default. In case of running scripts, here comes\\
*__Working directory:__ the parent path of the binary or script file; we do a change directory into this folder before invoking Command. In case of running scripts, here comes\\
*__Environment variables:__ leave it empty, rarely needed\\ |
*__Separator:__ by default ; (__semicolon__), needed due to the specific way white spaces are handled. Use it instead of whitespace characters in the __Arguments__ list,\\
as the separators will be replaced at run time with white spaces.
the task will feed its output to the next task item, if any; it can be referenced by the {execute_log} server variable. Alternatively, you can always reference {last_execute_log}, which is the last one for the entire job, not specific to one particular execution. This is useful when just calling Execute and not passing in a list of items to that task.
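\\
A minimal sketch of how these fields could be filled in, assuming a hypothetical Windows batch script at C:\scripts\process.bat that takes the current item's path as its only argument ({path} is used here only as a placeholder for whichever variable you want to pass):
{{{
Command:           cmd.exe
Argument:          /c;C:\scripts\process.bat;{path}
Working directory: C:\scripts
Separator:         ;
}}}
At run time the semicolons are swapped for white spaces, so the command actually executed is roughly cmd.exe /c C:\scripts\process.bat followed by the item path.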
This task will instantly pass in any found files to the [preview] worker inside of CrushFTP requesting for the files to have their thumbnails generated. This is only useful if you are showing previews of image-type files on the [WebInterface].
This task will jump to a specified job and task item. If the job is left empty, then the task in the current job will be located and used. If a job is jumped to, the task that matches the name will be the starting point in that job. Jumping to a job is only possible for enterprise licenses. Once the jumped-to job is complete, the task will continue on to the next step of the current job.\\
Jump also has the special ability to "group" matching items together and call the "true" task with each grouping of items. This might be useful, for example, to group files together based on extension, or based on their modified date's month, etc. The syntax for grouping is to use a left side condition of "GROUPBY", then the matches pattern, then some expression on the right side, like {MMM} for example.
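As a sketch (assuming the usual left/right condition fields): a condition of GROUPBY matches {MMM} would put all items sharing the same modified-date month into one group and run the "true" task once per group.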
This allows you to make a variable that you will then reference in other future steps. So you might make an "archive" variable and then reference it in other steps as {archive}, so that if it changed, you would only need to change it in one location.
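As a hypothetical sketch, you could set a variable named archive to /mnt/archive once, point later Copy or Move steps at {archive}, and relocating the archive afterwards would only require editing that one variable.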
This will unzip a file that is in the list. It's suggested you filter down on *.zip to only get valid items in case of a mixed list of items. The external unzip method will call the OS's unzip utility to unzip. This will only work on OS X or Linux / Unix-based systems. Otherwise, the internal method will work too.
This will take all items in the list and zip them into a single file. It's suggested to then do an Exclude task to remove all items, and then go Find the single zip you just made if you want to do further processing.
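A hypothetical flow using that pattern: Find *.csv in a folder, Zip them into batch.zip, Exclude * to clear the list, then Find batch.zip again and continue with, say, a Copy task on just that one archive.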
This task allows for taking the items in the list and sending them on to an AS2 server. You can configure all the typical AS2 settings for encryption and signing.\\
The recipient URL can also have custom headers defined. Example:\\ |
''Example: https://other_domain.com/as2#Content-Type=text/plain&Another-Header-Name=something else&etc''
The HTTP task lets you post events to another HTTP server with information about files that were transferred. You can control various aspects of how the connection is made and put in your variables for the data you want posted. You have some additional variables to reference: {http_response_code}, {http_response_message}, {http_response_log}.
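For example, a later step in the same job could write {http_response_code} and {http_response_log} into a log file or an email body to record how the remote server answered (a sketch; adapt it to whatever follow-up task you use).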
}}}\\ |
\\ |
__PBE (Password-based encryption)__\\ |
\\ |
Set the key location to "password:".\\ |
Provide the password in the "Key password(if any)" input field.\\ |
This task can be used to run SQL statements or even full scripts against an SQL database server. The DB driver class name and the URL syntax are usually specific to a certain DB family; consult the documentation of the JDBC driver to find them.
[{Image src='sql_task.jpg' width='..' height='..' align='left' style='..' class='..' }] |
!parameters: |
*DB Driver: the JDBC driver class name.\\ |
*DB Driver File: the JDBC driver jar file path on server.\\ |
*DB URL: the SQL server URL we connect to.\\ |
*DB User, DB Password: the login credentials of the DB query account.\\ |
*Query: input field for the SQL statement or script.\\ |
If the __Rollback if a transaction fails__ flag is set, the task signals the SQL server before issuing the transaction to automatically roll back in case a runtime error is raised.
\\ |
!input: |
Generic list items passed from the previous task. Input fields also accept server variables, where it makes sense.
The SQL task will feed its output to the next task item, if any. The DB table column names can be used as server variables, enclosed within curly brackets, to reference the table cell values of a row. The next task will usually loop through the rows list.
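\\
A minimal sketch of a possible setup, assuming a hypothetical MySQL database (the driver class, jar name, URL, credentials, and table are all placeholders to adjust for your own environment):
{{{
DB Driver:      com.mysql.cj.jdbc.Driver
DB Driver File: CrushFTP10/plugins/lib/mysql-connector-j.jar
DB URL:         jdbc:mysql://db.example.com:3306/transfers
DB User:        crush_reporter
DB Password:    ********
Query:          SELECT file_name, recipient FROM pending_notifications;
}}}
A following task that loops over the returned rows could then reference the hypothetical columns as {file_name} and {recipient}.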