About Glacier: it is an online file storage web service that provides storage for data archiving and backup (for more info: [https://docs.aws.amazon.com/glacier/index.html#lang/en_us]). You need to download the [aws-java-sdk.jar] file and place it in your CrushFTP ▸ plugins ▸ lib folder.\\
The URL should look like this (replace the URL with your corresponding data):\\
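A purely illustrative placeholder is shown here; the glacier:// scheme and host layout are assumptions (not confirmed against an actual CrushFTP setup), and the access key id, secret access key, region endpoint and Vault name are dummy values to replace with your own:
{{{
glacier://ACCESS_KEY_ID:SECRET_ACCESS_KEY@glacier.us-east-1.amazonaws.com/myVault/
}}}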
\\ |
[attachments|glacier_vfs.png]\\ |
\\ |
Select the proper region from the Server combobox. The default region is: [us-east-1]\\
Enter the Vault name in the Vault Name field, or leave it empty to list all the Vaults you have in the given region. Upload is only allowed under a Vault folder. CrushFTP keeps a special "glacier" folder on the server which simulates the folder structure and contains "file" items that are XML pointers to the real Glacier archive data. Each archive will have the following archive description:\\
{{{
<m><v>4</v><p>[Base64 encoded path]</p><lm>[the current date]</lm></m>
}}}
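As an illustration of this format, here is a minimal sketch that builds such a description by hand; the UTF-8 charset, the ISO date format and the sample path are assumptions, not confirmed CrushFTP behavior:
{{{
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class GlacierArchiveDescription {
    // Sketch only: builds a description in the <m><v>4</v><p>...</p><lm>...</lm></m> form.
    public static String build(String path) {
        String encodedPath = Base64.getEncoder()
                .encodeToString(path.getBytes(StandardCharsets.UTF_8));
        String date = java.time.LocalDate.now().toString(); // assumed date format
        return "<m><v>4</v><p>" + encodedPath + "</p><lm>" + date + "</lm></m>";
    }

    public static void main(String[] args) {
        // Hypothetical virtual path inside the simulated "glacier" folder.
        System.out.println(build("/myVault/backups/2020-01-01.zip"));
    }
}
}}}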
You can turn off the XML reference store by checking the "Delete local representation after upload" flag. It will delete the XML pointer one second after the upload.\\
\\ |
!!! Glacier task |
\\ |
If you already have archives on Glacier that are not managed by CrushFTP, you can use this task to create the CrushFTP simulated folder and file (XML pointers to your archive data) structure.\\
It requires two steps: first it creates an Amazon job, then, once the Amazon job is finished (which usually takes 3-5 hours), it downloads the Glacier inventory result and creates CrushFTP's simulated structures based on the downloaded inventory. (for more info: [https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-inventory.html])\\
\\ |
[attachments|glacier_task.png]\\ |
\\ |
The Crush job needs to be run at least twice:\\ |
\\ |
1. It creates the Amazon job. The Amazon job id will be stored in the [glacier_info.XML] file located in the Cache folder (by default it points to the CrushFTP job folder, see the task settings); a sketch of the underlying AWS call follows the XML example below.\\
{{{ |
<?xml version="1.0" encoding="UTF-8"?> |
<GlacierTask type="properties"> |
<job_id>Amazon job id</job_id> |
</GlacierTask> |
}}} |
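Under the hood this first run roughly corresponds to an Amazon initiate-job call. A minimal sketch with the bundled aws-java-sdk, using the default AWS credential chain; the region and Vault name are placeholders:
{{{
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.InitiateJobResult;
import com.amazonaws.services.glacier.model.JobParameters;

public class StartInventoryJob {
    public static void main(String[] args) {
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")           // placeholder region
                .build();

        // Ask Glacier to prepare the vault inventory as JSON.
        InitiateJobRequest request = new InitiateJobRequest()
                .withVaultName("myVault")          // placeholder vault name
                .withJobParameters(new JobParameters()
                        .withType("inventory-retrieval")
                        .withFormat("JSON"));

        InitiateJobResult result = glacier.initiateJob(request);
        // This id is what ends up in glacier_info.XML.
        System.out.println("Amazon job id: " + result.getJobId());
    }
}
}}}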
2. It checks the Amazon job status, and downloads the inventory once the Amazon job is finished.\\
If the [glacier_info.XML] file exists, the task checks the result of the job based on the stored Amazon job id (roughly the describe-job call sketched after the variable example below). You can notify about the job result by placing an email task after the glacier task and using the Amazon job status variable (values: In progress, Failed, Succeeded):
{{{ |
{glacier_job_satus} |
}}} |
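The status check behind this second run roughly maps to the Amazon describe-job call; a minimal sketch, with the region, Vault name and job id as placeholders:
{{{
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.DescribeJobRequest;
import com.amazonaws.services.glacier.model.DescribeJobResult;

public class CheckInventoryJob {
    public static void main(String[] args) {
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")                  // placeholder region
                .build();

        DescribeJobResult job = glacier.describeJob(new DescribeJobRequest()
                .withVaultName("myVault")                 // placeholder vault name
                .withJobId("JOB_ID_FROM_glacier_info.XML"));

        // Amazon reports InProgress, Succeeded or Failed here.
        System.out.println("Status: " + job.getStatusCode()
                + ", completed: " + job.getCompleted());
    }
}
}}}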
Once the Amazon job status is Succeeded, it downloads the Glacier Vault Inventory and creates CrushFTP's glacier folder and file (XML pointers to your archive data) structures based on the Glacier inventory.
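Downloading the finished inventory corresponds to the Amazon get-job-output call. A minimal sketch that just prints the JSON inventory (region, Vault name and job id are placeholders again); each inventory entry carries the ArchiveId, Size, CreationDate and ArchiveDescription that CrushFTP turns into XML pointer files:
{{{
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.GetJobOutputResult;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class DownloadInventory {
    public static void main(String[] args) throws Exception {
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")                   // placeholder region
                .build();

        GetJobOutputResult output = glacier.getJobOutput(new GetJobOutputRequest()
                .withVaultName("myVault")                  // placeholder vault name
                .withJobId("JOB_ID_FROM_glacier_info.XML"));

        // Print the raw JSON inventory returned by Glacier.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(output.getBody(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
}}}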
The archive description should have the following format: |
{{{ |
<m><v>...</v><p>[Base64 encoded path]</p> ....</m> |
}}} |
If your Glacier archive descriptions do not have the format above, it will just create the XML pointers, using the archive description as the file name.