At line 88 changed one line |
Multipart form uploading is also possible, but far more complex of a topic. You should already be familiar with how multipart works. The parameters for uploading are:\\ |
Multipart form uploading is also possible, but it is a far more complex topic; you should already be familiar with how multipart works. The parameters for uploading are listed below. The email-related parameters are only needed if sendEmail is true. See also the example curl call at the end of this page; some headers can be left out.\\ |
At line 99 added 28 lines |
!!SHARE\\ |
Share a file or folder. The parameters for sharing are:\\ |
{{{ |
command: publish |
allowUploads: false |
attach: true |
hide_download: false |
attach_real: false |
baseUrl: http%253A%252F%252Fdc1.intranet.local%253A8080%252F |
emailBcc: |
emailReplyTo: |
emailBody: %253Cp%253EA%2520user%2520would%2520like%2520to%2520share%2520a%2520file%2520with%2520you.%2526nbsp%253B%253C%252Fp%253E%253Cp%253EThis%2520link%2520will%2520expire%2520on%2520%257Bdate%257D%2520at%2520%257Btime%257D%2520https%253A%252F%252Fyourdomain.com%252F%257Bweb_link_end%257D%253C%252Fp%253E%253Cp%253E%253Cbr%253E%253C%252Fp%253E |
share_comments: |
emailCc: |
emailFrom: admin%40intranet.local |
emailSubject: Sharing%253A%2520A%2520new%2520file%2520is%2520being%2520shared%2520with%2520you. |
emailTo: test%40intranet.local |
shareUsername: false |
shareUsernames: |
shareUsernamePermissions: (resume)(view)(slideshow) |
expire: 7%2F3%2F2021+23%3A59 |
paths: %252Ftestfile1.txt |
publishType: reference |
logins: -1 |
sendEmail: true |
direct_link: true |
|
}}} |
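Note that the parameter values in the listing above are double-URL-encoded: each value is percent-encoded twice, so `:` becomes %3A and then %253A. A minimal Python sketch of that encoding, checked against the baseUrl value from the listing:

```python
from urllib.parse import quote

def crush_encode(value: str) -> str:
    # Percent-encode twice: ":" -> "%3A" -> "%253A", "/" -> "%2F" -> "%252F"
    return quote(quote(value, safe=""), safe="")

# Reproduces the baseUrl value from the parameter listing above
print(crush_encode("http://dc1.intranet.local:8080/"))
# -> http%253A%252F%252Fdc1.intranet.local%253A8080%252F
```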
At line 103 changed one line |
command=logout&c2f=ABCD |
command=logout |
At line 133 added 43 lines |
|
|
!!Multiplexing Uploads:\\ |
\\ |
First you need to issue a command=openFile request. Include the upload_path parameter and a transfer_type parameter of upload or download. Also include upload_id (keep it short; six alphabetic characters are enough) and upload_size, and optionally start_resume_loc if you want to resume from a particular byte position. If the openFile succeeds, CrushFTP responds with the upload_id; otherwise it returns an error message. The response is in XML. Please note you can only have one multiplexed upload going at a time.\\ |
\\ |
{{{ |
command=openFile&upload_path=/test.txt&upload_size=52428800&upload_id=XhFtwp&start_resume_loc=0&c2f=ABCD |
}}} |
\\ |
Now the server is ready to accept chunks. To send a chunk, POST to the server at the path /U/transferid~chunk_num~chunk_size\\ |
The body of the POST is your chunk data. chunk_num starts from position 1, not zero. Chunk sizes should never exceed 10 MB.\\ |
{{{ |
POST /U/XhFtwp~1~5242880 HTTP/1.1 |
}}} |
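As a sketch, the chunk paths for the 52428800-byte upload opened above could be generated like this in Python (chunk_paths is a hypothetical helper, not part of the CrushFTP API; the 5 MB chunk size is an assumption that stays under the 10 MB limit):

```python
def chunk_paths(upload_id, total_size, chunk_size=5242880):
    """Yield the /U/ POST path for each chunk; chunk_num starts at 1, not 0."""
    offset, chunk_num = 0, 1
    while offset < total_size:
        size = min(chunk_size, total_size - offset)  # last chunk may be short
        yield f"/U/{upload_id}~{chunk_num}~{size}"
        offset += size
        chunk_num += 1

paths = list(chunk_paths("XhFtwp", 52428800))
print(paths[0])    # -> /U/XhFtwp~1~5242880, matching the example POST above
print(len(paths))  # -> 10
```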
\\ |
You signal the end of a transfer by issuing the closeFile request and providing the total number of chunks to expect. The order in which chunks arrive does not matter, but you should not get more than 100 MB ahead of the currently sent chunk or CrushFTP may start imposing speed limits to protect its memory usage. The response to this HTTP command may be delayed while the server waits for outstanding chunks to arrive. It may reply with a "Decompressing..." message, which may or may not be the reason for the delay; the server could also be flushing to an S3 backend, etc. Once everything is done, the XML response includes the MD5 hash of the file. If you got the "Decompressing..." message, you can keep calling closeFile until the transfer completes. lastModified is optional and sets the modified time of the uploaded file; the server may ignore your requested time depending on server config.\\ |
{{{ |
command=closeFile&transfer_type=upload&lastModified=0&total_chunks=17&c2f=ABCD |
}}} |
\\ |
\\ |
!!Multiplexing Downloads:\\ |
\\ |
Issue command=download with transfer_type set to download to start a multiplexed download. Include the path parameter, and optionally start_resume_loc if you want to start the download at a particular byte position. download_id is an optional id (keep it short if you include one; six alphabetic characters are enough); if you don't include one, the server creates a random six-character one for you. Please note you can only have one multiplexed download going at a time.\\ |
{{{ |
command=download&transfer_type=download&path=/test.txt&download_id=XhFtwp&start_resume_loc=0&c2f=ABCD |
}}} |
\\ |
Requesting a download chunk is similar, but the path for the POST request is /D/transferid~chunk_num\\ |
{{{ |
POST /D/XhFtwp~1 HTTP/1.1 |
}}} |
\\ |
If the HTTP response size is 0 bytes, that signals your chunk request was beyond the end of the file; you should not request additional chunks, as they will all result in 0-byte responses too. Otherwise, the HTTP response body is the chunk you requested. If you request a chunk_num too far in advance of what the server is allowing, you will get a 404 reply; when you request that same chunk later, you will get the chunk instead of an error. So try not to get too aggressive or the server may not be able to satisfy your request. A 200 response code from the server indicates success for these requests, with either a chunk or a 0-byte response indicating you're done; you can close the file once you have all your intermediate chunks.\\ |
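That chunk-request loop can be sketched in Python; fetch_chunk below is a hypothetical stand-in for the HTTP POST (it returns a status code and body), not part of the CrushFTP API:

```python
def download_all(fetch_chunk, download_id):
    """Request /D/ chunks in order until a 0-byte body signals the end."""
    data, chunk_num = b"", 1
    while True:
        status, body = fetch_chunk(f"/D/{download_id}~{chunk_num}")
        if status == 404:   # asked too far ahead; retry the same chunk later
            continue
        if len(body) == 0:  # past the end of the file: transfer is complete
            return data
        data += body
        chunk_num += 1

# Simulated server: a 12-byte file served in 5-byte chunks
FILE, CHUNK = b"hello world!", 5
def fake_fetch(path):
    n = int(path.rsplit("~", 1)[1])
    return 200, FILE[(n - 1) * CHUNK : n * CHUNK]

print(download_all(fake_fetch, "XhFtwp"))  # -> b'hello world!'
```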
\\ |
Closing the download is similar to closing the upload.\\ |
{{{ |
command=closeFile&transfer_type=download&c2f=ABCD |
}}} |
\\ |
It's recommended to use at least 10 concurrent threads when doing multiplexed transfers; otherwise you are no better off than with a single-threaded transfer.\\ |
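A sketch of that threading advice, again with a hypothetical stubbed fetch function in place of real HTTP POSTs; pool.map preserves chunk order, and real code would also back off on 404 replies as described above:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_download(fetch_chunk, download_id, total_chunks, workers=10):
    """Fetch chunks with a pool of 10 workers and reassemble them in order."""
    paths = [f"/D/{download_id}~{n}" for n in range(1, total_chunks + 1)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(fetch_chunk, paths))  # map preserves order

# Simulated server: 40 bytes served in 10-byte chunks
FILE, CHUNK = b"0123456789" * 4, 10
def fake_fetch(path):
    n = int(path.rsplit("~", 1)[1])
    return FILE[(n - 1) * CHUNK : n * CHUNK]

print(parallel_download(fake_fetch, "XhFtwp", 4) == FILE)  # -> True
```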
\\ |
At line 198 added 8 lines |
List:\\ |
{{{ |
curl --data "command=getXMLListing&path=/&format=jsonobj" -u user:pass http://127.0.0.1:8080/ |
}}} |
Create Share:\\ |
{{{ |
curl --data "command=publish" --data "emailTo=" --data "publishType=reference" --data "direct_link=true" --data "sendEmail=false" --data-urlencode "expire=7/3/2021+23:59" --data-urlencode "paths=/testfile1.txt" --data "logins=-1" --data-urlencode "baseUrl=https://yourdomain.com/" -u test:1234 http://192.168.3.101:8080/ |
}}} |