I am trying to transfer a large file and am receiving an error that the file is too large.
We are using:
rc = http_req( 'PUT'
: wrkurl
: rcvFile
: *OMIT
: sndFile
: *OMIT );
In debug we see the error occur on the open() in procedure http_req:
if %parms >= 5 and %addr(SendStmf) <> *null;
sndFd = open( %trimr(SendStmf) : O_RDONLY );
if sndFd = -1;
SetError( HTTP_FDOPEN
:'open(): ' + %str(strerror(errno)) );
return -1;
endif;
The error message from the debug file (/tmp/httpapi_debug.txt):
New iconv() objects set, PostRem=819. PostLoc=0. ProtRem=819. ProtLoc=0
SetError() #22: open(): Object is too large to process.
Any suggestions appreciated.
Thanks
Don
Re: Object is too large to process
If the file is greater than 2GB, then you have to add O_LARGEFILE to the open() flags.
You can read more about it here:
https://www.ibm.com/docs/en/i/7.2?topic ... /open.html
In the IFSIO_H copy source it is defined like this:
Code: Select all
      * 00100000000000000000000000000000   Large file access
      *                                       (for >2GB files)
     D O_LARGEFILE     C                   536870912
An alternative could be to change the API used from open() to open64():
https://www.ibm.com/docs/en/i/7.2?topic ... pen64.html
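For example, with the definitions from IFSIO_H in place, the open() shown in the debug output above only needs the extra flag. This is just a sketch of the idea:
Code: Select all
// O_RDONLY and O_LARGEFILE are separate bits, so they can simply be added:
sndFd = open( %trimr(SendStmf) : O_RDONLY + O_LARGEFILE );
Or, if you go the open64() route instead, a rough prototype could look like the following. This is my own sketch, not something taken from IFSIO_H, so check it against the open64() documentation linked above before using it:
Code: Select all
      * open64(): same parameters as open(), but without the 2GB limit
     D open64          PR            10I 0 extproc('open64')
     D   path                          *   value options(*string)
     D   oflag                       10I 0 value
     D   mode                        10U 0 value options(*nopass)
     D   codepage                    10U 0 value options(*nopass)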
Re: Object is too large to process
Thank you Scott.
We are still progressing: we found stat64() and will review open64().
However, I am still looking for an example of chunking, as at the moment I am struggling with how to do this.
We needed to introduce stat64() and a larger field for the file size, which requires a considerable change: adding a new optional parameter for the large file size to use the new xxx64 functions. If our mods look good enough I will send you back an updated version.
If you had a need to transfer large files, would you use HTTPAPIR4 or would you consider a different approach, given FTP, TFTP and SFTP are not possible?
The file is being transferred to an Azure Blob.
As always, thank you for your enormous contribution to the IBM i community.
Don
Re: Object is too large to process
You should be thanking Peder. I am only reading this for the first time now.
You don't need open64(), just pass the O_LARGEFILE flag like Peder explained. stat64 would be needed in some circumstances.
Not clear what you mean, here. Is something wrong with the chunking in HTTPAPI that you need to change it?
msddcb wrote: ↑Fri Nov 08, 2024 8:22 am
We needed to introduce stat64() and a larger field for the file size, which requires a considerable change: adding a new optional parameter for the large file size to use the new xxx64 functions. If our mods look good enough I will send you back an updated version.
I wouldn't add any parameters. I would just change HTTPAPI to use large file support throughout. It will still work with smaller files, so there's no need for any extra parameters. It won't work in V3Rx releases (as it did when originally written), but support for those releases has long since been discontinued.
The size of the file would not be a factor in my decision of which network protocol to use. If you are using REST APIs, then HTTP is probably your only option. If you are using something else, I'd use the protocol that goes with that method.
I haven't worked with Azure blobs before, so couldn't tell you what your options are.
Re: Object is too large to process
Would it be easier for me to just make the mods?
Re: Object is too large to process
Scott wrote:
You should be thanking Peder. I am only reading this for the first time now.
Apologies Peder, I should have read who the response was from.
Thank you for your assistance.
Scott wrote:
Not clear what you mean, here. Is something wrong with the chunking in HTTPAPI that you need to change it?
I don't expect there is any problem with chunking in HTTPAPI; I just have never used it, could not find an example source member addressing it, and do not know how to enable it.
We are using http_req with PUT.
Scott wrote:
I wouldn't add any parameters. I would just change HTTPAPI to use large file support throughout. It will still work with smaller files, so there's no need for any extra parameters. It won't work in V3Rx releases (as it did when originally written), but support for those releases has long since been discontinued.
I am not following "enabling large file support". Is this replacing all the IFS APIs with the xxx64 versions and the associated data structures?
Thanks
Don
Re: Object is too large to process
Scott wrote:
Would it be easier for me to just make the mods?
Depends on the answer to my previous question on how to enable large file support.
It appears at first glance that the parameters for the xxx64 versions of the APIs are very similar, just with larger sizes.
So if you have time and would like to do the mods, that would be great; if not, I am happy to attempt it with some clarification.
Thanks
Don
Re: Object is too large to process
Honestly, I think it'd be quicker for me to make the mods than it would be to explain it.
You only need to change open() by adding the O_LARGEFILE flag, and replace stat() with stat64(). Changing the APIs is easy. The hard part is that the lengths received from stat64() are used throughout the program, and in some cases are even part of the public interface. So changing this needs to be done in a backward compatible way... otherwise existing programs could break.
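To make the stat() side concrete, it is roughly this kind of change. This is only a sketch: statBuf64 and sizeOfFile are placeholder names rather than the real HTTPAPI variables, and the layout of the stat64 buffer has to come from the system includes (or an extended IFSIO_H member):
Code: Select all
      * rough prototype; the buffer is passed by pointer here
     D stat64          PR            10I 0 extproc('stat64')
     D   path                          *   value options(*string)
     D   buf                           *   value

// statBuf64 is a placeholder DS mapped over the stat64 buffer
rc = stat64( %trimr(SendStmf) : %addr(statBuf64) );
sizeOfFile = statBuf64.st_size;   // st_size no longer fits in 10i 0,
                                  // so this has to be a 20i 0 field
That larger size then has to flow through everywhere the old 10i 0 length was used, which is where the backward compatibility work comes in.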
You don't need to do anything with chunking, it will work as-is. And it isn't something you enable. If the server sends data that is chunked, HTTPAPI must handle it. If it doesn't, HTTPAPI has to handle it without chunking.
Re: Object is too large to process
Scott wrote:
You don't need to do anything with chunking, it will work as-is. And it isn't something you enable. If the server sends data that is chunked, HTTPAPI must handle it. If it doesn't, HTTPAPI has to handle it without chunking.
We are using PUT, not GET. Wouldn't the use of chunking be part of sending the request? Sorry if this is a dumb question.
Thank you
Don
Re: Object is too large to process
HTTPAPI never uses chunking for sending data, only receiving.
It doesn't matter if it's PUT, GET, POST, etc..