Uploads a chunk of a file for an upload session.

The actual endpoint URL is returned by the Create upload session and Get upload session endpoints.
bytes 8388608-16777215/445856194
The byte range of the chunk.

Must not overlap with the byte range of a part already uploaded in this session. Each part must be exactly equal in size to the part size specified in the upload session that you created. The one exception is the last part of the file, which can be smaller.

When providing the value for content-range, remember that:

- the lower bound of each part's byte range must be a multiple of the part size, and
- the upper bound must be one less than a multiple of the part size, except for the last part, which ends at the final byte of the file.
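For illustration, a minimal plain-Python sketch (not part of any SDK) that derives a valid header value from the part offset, the session's part size, and the total file size:

def content_range(offset, part_size, file_size):
    # The last part may be smaller than part_size, so clamp to file_size.
    upper = min(offset + part_size, file_size) - 1
    return "bytes {}-{}/{}".format(offset, upper, file_size)

# Second 8 MiB part of a 445856194-byte file:
# -> 'bytes 8388608-16777215/445856194'
print(content_range(8388608, 8388608, 445856194))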
sha=fpRyg5eVQletdZqEKaFlqwBXJzM=
The RFC 3230 message digest of the chunk uploaded.

Only SHA1 is supported. The SHA1 digest must be Base64 encoded. The format of this header is sha=BASE64_ENCODED_DIGEST.
To get the value for the SHA digest, use the OpenSSL command to encode the file part:

openssl sha1 -binary <FILE_PART_NAME> | base64
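The same value can also be computed in code; a minimal Python sketch, assuming the chunk is already in memory as bytes:

import base64
import hashlib

# Equivalent to: openssl sha1 -binary <FILE_PART_NAME> | base64
def digest_header(chunk):
    sha1 = hashlib.sha1(chunk).digest()
    return "sha=" + base64.b64encode(sha1).decode("ascii")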
D5E3F7A
The ID of the upload session.
The binary content of the file chunk.
Chunk has been uploaded successfully.
Returns an error if the chunk conflicts with another chunk previously uploaded.
Returns an error if a precondition was not met.
Returns an error if the content range does not match a specified range for the session.
An unexpected client error.
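A hedged sketch of interpreting these outcomes in code; the specific status codes (409 for the chunk conflict, 412 for the failed precondition, 416 for the range mismatch) are conventional assumptions, since the codes themselves are not listed above:

import requests

# Interpret a part-upload response. The status codes below are
# assumptions matching the outcomes described above: 409 conflict,
# 412 precondition failed, 416 range not satisfiable.
def handle_part_response(resp: requests.Response) -> dict:
    if resp.status_code == 200:
        return resp.json()["part"]
    if resp.status_code in (409, 412, 416):
        raise RuntimeError(f"part rejected ({resp.status_code}): {resp.text}")
    resp.raise_for_status()  # any other unexpected client error
    raise RuntimeError(f"unexpected status {resp.status_code}")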
curl -i -X PUT "https://upload.box.com/2.0/files/upload_sessions/F971964745A5CD0C001BBE4E58196BFD" \
-H "authorization: Bearer <ACCESS_TOKEN>" \
-H "digest: sha=fpRyg5eVQletdZqEKaFlqwBXJzM=" \
-H "content-range: bytes 8388608-16777215/445856194" \
-H "content-type: application/octet-stream" \
--data-binary @<FILE_NAME>
await client.chunkedUploads.uploadFilePart(
acc.uploadSessionId,
generateByteStreamFromBuffer(chunkBuffer),
{
digest: digest,
contentRange: contentRange,
} satisfies UploadFilePartHeadersInput,
);
client.chunked_uploads.upload_file_part(
acc.upload_session_id,
generate_byte_stream_from_buffer(chunk_buffer),
digest,
content_range,
)
await client.ChunkedUploads.UploadFilePartAsync(
    uploadSessionId: acc.UploadSessionId,
    requestBody: Utils.GenerateByteStreamFromBuffer(buffer: chunkBuffer),
    headers: new UploadFilePartHeaders(digest: digest, contentRange: contentRange)
);
try await client.chunkedUploads.uploadFilePart(
    uploadSessionId: acc.uploadSessionId,
    requestBody: Utils.generateByteStreamFromBuffer(buffer: chunkBuffer),
    headers: UploadFilePartHeaders(digest: digest, contentRange: contentRange)
)
// Reading a large file. session and sessionInfo come from the create upload session
// step; fileSize is the total size of the file being uploaded.
FileInputStream fis = new FileInputStream("My_Large_File.txt");
// Create the digest input stream to calculate the digest for the whole file.
MessageDigest digest = MessageDigest.getInstance("SHA1");
DigestInputStream dis = new DigestInputStream(fis, digest);
List<BoxFileUploadSessionPart> parts = new ArrayList<BoxFileUploadSessionPart>();
// Get the part size. Each uploaded part should match the part size returned as part of the upload session.
// The last part of the file can be smaller than the part size if the remaining bytes of the last part are
// less than the given part size.
long partSize = sessionInfo.getPartSize();
// Start byte of the part
long offset = 0;
// Total bytes processed so far
long processed = 0;
while (processed < fileSize) {
    long diff = fileSize - processed;
    // The size of the last part of the file can be less than the part size.
    if (diff < partSize) {
        partSize = diff;
    }
    // Upload a part. Parts can also be uploaded asynchronously.
    BoxFileUploadSessionPart part = session.uploadPart(dis, offset, (int) partSize, fileSize);
    parts.add(part);
    // Advance the offset and processed-byte count used to calculate the Content-Range header.
    processed += partSize;
    offset += partSize;
}
upload_session = client.upload_session('11493C07ED3EABB6E59874D3A1EF3581')
offset = upload_session.part_size * 3
total_size = 26000000
part_bytes = b'abcdefgh'
part = upload_session.upload_part_bytes(part_bytes, offset, total_size)
print(f'Successfully uploaded part ID {part["part_id"]}')
// Upload the part starting at byte offset 8388608 to upload session '93D9A837B45F' with part ID 'feedbeef'
client.files.uploadPart('93D9A837B45F', part, 8388608, 2147483648, {part_id: 'feedbeef'}, callback);
{
"part": {
"offset": 16777216,
"part_id": "6F2D3486",
"sha1": "134b65991ed521fcfe4724b7d814ab8ded5185dc",
"size": 3222784
}
}
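Tying the headers and the response shape together, a minimal end-to-end sketch of the upload loop over plain HTTP with the Python requests library; the session ID, part size, and file path below are assumptions standing in for values from your own upload session:

import base64
import hashlib
import os

import requests

session_id = "F971964745A5CD0C001BBE4E58196BFD"  # assumption: from create upload session
part_size = 8388608                              # assumption: use the session's part_size
path = "My_Large_File.txt"                       # assumption: local file to upload
url = f"https://upload.box.com/2.0/files/upload_sessions/{session_id}"

file_size = os.path.getsize(path)
offset = 0
parts = []  # collected for the later commit call
with open(path, "rb") as f:
    while offset < file_size:
        chunk = f.read(part_size)
        sha1 = base64.b64encode(hashlib.sha1(chunk).digest()).decode("ascii")
        resp = requests.put(
            url,
            headers={
                "authorization": "Bearer <ACCESS_TOKEN>",
                "digest": f"sha={sha1}",
                "content-range": f"bytes {offset}-{offset + len(chunk) - 1}/{file_size}",
                "content-type": "application/octet-stream",
            },
            data=chunk,
        )
        resp.raise_for_status()
        parts.append(resp.json()["part"])
        offset += len(chunk)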