What I am trying to do:
The errors I am getting vary depending on location and chunk size. Here in the USA, I can upload a 220 MB file in 10 MB chunks with no issues. If I reduce the chunk size to the minimum/default of 5120 KB, I will sometimes get an error:
One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
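That S3 error usually means the (part number, ETag) pairs sent when completing the multipart upload don't match what S3 recorded for each part, or that a non-final part was smaller than S3's 5 MiB minimum (5 MiB = 5120 KB, which is why that is the driver's floor). As a rough illustration of the bookkeeping involved (a Python sketch, not the LabVIEW driver's actual code; `part_ranges` is a hypothetical helper):

```python
# Sketch of the bookkeeping a multipart upload must get right.
# S3 rejects CompleteMultipartUpload with "part could not be found /
# entity tag may not match" when a (PartNumber, ETag) pair disagrees
# with what S3 recorded when that part was uploaded.

MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB = 5120 KB: S3 minimum for every part but the last

def part_ranges(total_size, part_size):
    """Split a file of total_size bytes into (part_number, start, end) tuples.

    Part numbers are 1-based. Every part except the last must be at
    least MIN_PART_SIZE, or S3 rejects the upload.
    """
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below S3 minimum of 5 MiB")
    parts = []
    start, number = 0, 1
    while start < total_size:
        end = min(start + part_size, total_size)
        parts.append((number, start, end))
        start = end
        number += 1
    return parts

# The ETag returned by each UploadPart must then be echoed back
# verbatim, paired with the same PartNumber:
#   {"Parts": [{"PartNumber": n, "ETag": etag}, ...]}
```

A 220 MB file in 10 MB chunks yields 22 parts; a 67 MB file at the 5 MiB minimum yields 14, the last one smaller than the rest, which is allowed.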
The big issues arise when trying to upload files from Taiwan/China.
I am trying to upload a 67 MB file to S3. The standard 'Put' VI times out (error 56), so I tried uploading in 5120-10240 KB chunks. This results in the error above, a timeout error, or:
LabVIEW could not verify the authenticity of the server.
I have not been able to successfully upload any file over 1 MB through LabVIEW (in Asia). I can easily upload files of any size to S3 from the AWS browser console.
The most common error that I am getting is a general timeout error, code 56.
When I switch my AWS server region to Seoul, South Korea, uploads from locations in Asia work much better, with only occasional errors.
My main question is: Is there a way to increase the timeout in this VI?
The HTTP PUT operation is using the default 10 second timeout for the request. The PUT operation does take a timeout parameter, so it can be set to something other than the default.
Ideally, timeouts for the requests would be exposed in the API. A similar request was made in this post about not using the default -1 timeout on GET operations.
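On high-latency links, a single fixed 10 s timeout per chunk is fragile; beyond raising the timeout, retrying a failed chunk PUT with backoff is a common pattern. A minimal language-agnostic sketch in Python (the `put_chunk` callable is hypothetical; it stands in for whatever performs one part's HTTP PUT and returns its ETag):

```python
import time

def put_with_retry(put_chunk, max_attempts=3, backoff_s=1.0):
    """Call put_chunk() (e.g. the HTTP PUT of one part) with retries.

    put_chunk should raise on timeout/failure and return the part's
    ETag on success. Simple exponential backoff between attempts helps
    on high-latency links where one attempt with a short timeout fails.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return put_chunk()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))
```

Since only whole chunks are retried, a transient timeout costs one part's worth of transfer rather than the whole file.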
Thank you for this post, which solved my problem.
Indeed, I had to increase the timeout value to upload files up to 5 MB...
I regret that there is no new version of the NI AWS driver with this timeout control available on the high-level functions. I had to override a lot of VIs in my project to solve this, which is something we would like to avoid.
For those interested: I also overrode the very low-level VI SHA-2_hash chunk_U32_Overwrite.vi.
I disabled the OpenG "Build cluster Error" function inside it (it is not required for normal operation). When I download a lot of big files, this function is called a huge number of times, and I found that it was costing me a non-negligible amount of time.