Struct: s3manager.Uploader

import "../ibm-cos-sdk-go/service/s3/s3manager"

Overview

Uploader is the structure used to call Upload(). It is safe to call Upload() on this structure for multiple objects and across concurrent goroutines. Mutating the Uploader's properties concurrently is not safe.

The ContentMD5 member for pre-computed MD5 checksums will be ignored for multipart uploads. For objects uploaded in a single part, the ContentMD5 will be used.

The Checksum members for pre-computed checksums will be ignored for multipart uploads. For objects uploaded in a single part, the checksum member will be included in the request.

Implemented Interfaces

s3crypto.Cipher, s3manager.ReadSeekerWriteTo, s3manageriface.UploadWithIterator, s3manageriface.UploaderAPI, s3manager.WriterReadFrom


Structure Field Details

BufferProvider ReadSeekerWriteToProvider

Defines the buffer strategy used when uploading a part.

Concurrency int

The number of goroutines to spin up in parallel per call to Upload when sending parts. If this is set to zero, the DefaultUploadConcurrency value will be used.

The concurrency pool is not shared between calls to Upload.

LeavePartsOnError bool

Setting this value to true will cause the SDK to avoid calling AbortMultipartUpload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

Note that storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
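
If parts are left behind, they can be removed manually. The following is a minimal sketch, assuming an existing *s3.S3 client named svc and a known bucket name; it lists the bucket's incomplete multipart uploads and aborts each one (a bucket lifecycle rule can automate the same cleanup):

    // Sketch: abort incomplete multipart uploads left behind when
    // LeavePartsOnError is true. "svc" and the bucket name are assumed.
    out, err := svc.ListMultipartUploads(&s3.ListMultipartUploadsInput{
        Bucket: aws.String("bucket"),
    })
    if err != nil {
        return err
    }
    for _, up := range out.Uploads {
        _, err := svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
            Bucket:   aws.String("bucket"),
            Key:      up.Key,
            UploadId: up.UploadId,
        })
        if err != nil {
            return err
        }
    }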

MaxUploadParts int

MaxUploadParts is the max number of parts which will be uploaded to S3. It is used to calculate the part size of the object to be uploaded. For example, a 5GB file with MaxUploadParts set to 100 will be uploaded as 100 parts of 50MB each, subject to the limit of s3.MaxUploadParts (10,000 parts).

MaxUploadParts must not be used to limit the total number of bytes uploaded. Use a type like io.LimitReader (golang.org/pkg/io/#LimitedReader) instead, as in the sketch below. An io.LimitedReader is helpful when uploading an unbounded reader to S3 and its maximum size is known. Otherwise, the reader must return io.EOF to signal the end of the stream.

Defaults to the package constant MaxUploadParts value.
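
As an illustration of the io.LimitReader advice above, a minimal sketch; the stream reader, uploader, and bucket/key names are placeholders:

    // Sketch: cap an unbounded stream at 1 GiB so the uploader never
    // consumes more than that many bytes. "stream" is a placeholder
    // io.Reader; Upload reads until the wrapped reader returns io.EOF.
    const maxBytes = 1 << 30 // 1 GiB upper bound (placeholder)
    limited := io.LimitReader(stream, maxBytes)

    result, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("bucket"),
        Key:    aws.String("key"),
        Body:   limited,
    })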

PartSize int64

The buffer size (in bytes) to use when buffering data into chunks and sending them as parts to S3. The minimum allowed part size is 5MB, and if this value is set to zero, the DefaultUploadPartSize value will be used.

RequestOptions []request.Option

List of request options that will be passed down to individual API operation requests made by the uploader.

S3 s3iface.S3API

The client to use when uploading to S3.
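
As a sketch of how the fields above are typically set, the following constructs an Uploader through NewUploader's functional options; sess is assumed to be an existing session:

    // Sketch: construct an Uploader with the fields described above.
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 64 * 1024 * 1024 // 64MB parts (minimum is 5MB)
        u.Concurrency = 5             // goroutines per call to Upload
        u.MaxUploadParts = 10000      // cap on the number of parts
        u.LeavePartsOnError = false   // abort and clean up parts on failure
    })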

Method Details

func (u Uploader) Upload(input *UploadInput, options ...func(*Uploader)) (*UploadOutput, error)

Upload uploads an object to S3, intelligently buffering large files into smaller chunks and sending them in parallel across multiple goroutines. You can configure the buffer size and concurrency through the Uploader's parameters.

Additional functional options can be provided to configure the individual upload. These options are applied to a copy of the Uploader instance that Upload is called on, so modifying them will not impact the original Uploader instance.

Use the WithUploaderRequestOptions helper function to pass in request options that will be applied to all API operations made with this uploader.

It is safe to call this method concurrently across goroutines.

Example:

    // Upload input parameters
    upParams := &s3manager.UploadInput{
        Bucket: &bucketName,
        Key:    &keyName,
        Body:   file,
    }

    // Perform an upload.
    result, err := uploader.Upload(upParams)

    // Perform an upload with options different from those in the Uploader.
    result, err = uploader.Upload(upParams, func(u *s3manager.Uploader) {
        u.PartSize = 10 * 1024 * 1024 // 10MB part size
        u.LeavePartsOnError = true    // Don't delete the parts if the upload fails.
    })
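
A sketch of the WithUploaderRequestOptions helper mentioned above; the header name is purely illustrative:

    // Sketch: apply a request option to all API operations in this upload.
    result, err = uploader.Upload(upParams, s3manager.WithUploaderRequestOptions(
        func(r *request.Request) {
            // Purely illustrative: tag every underlying API request.
            r.HTTPRequest.Header.Set("X-Example-Trace", "upload-123")
        },
    ))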


// File 'service/s3/s3manager/upload.go', line 275

    func (u Uploader) Upload(input *UploadInput, options ...func(*Uploader)) (*UploadOutput, error) {
        return u.UploadWithContext(aws.BackgroundContext(), input, options...)
    }

func (u Uploader) UploadWithContext(ctx aws.Context, input *UploadInput, opts ...func(*Uploader)) (*UploadOutput, error)

UploadWithContext uploads an object to S3, intelligently buffering large files into smaller chunks and sending them in parallel across multiple goroutines. You can configure the buffer size and concurrency through the Uploader's parameters.

UploadWithContext is the same as Upload with the additional support for Context input parameters. The Context must not be nil; a nil Context will cause a panic. Use the context to add deadlines, timeouts, etc. UploadWithContext may create sub-contexts for individual underlying requests.

Additional functional options can be provided to configure the individual upload. These options are applied to a copy of the Uploader instance that UploadWithContext is called on, so modifying them will not impact the original Uploader instance.

Use the WithUploaderRequestOptions helper function to pass in request options that will be applied to all API operations made with this uploader.

It is safe to call this method concurrently across goroutines.
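
For example, a minimal sketch that bounds an upload with a timeout, reusing upParams from the earlier example; aws.Context is satisfied by a standard context.Context:

    // Cancel the upload if it does not complete within five minutes.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()

    result, err := uploader.UploadWithContext(ctx, upParams)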



// File 'service/s3/s3manager/upload.go', line 297

    func (u Uploader) UploadWithContext(ctx aws.Context, input *UploadInput, opts ...func(*Uploader)) (*UploadOutput, error) {
        i := uploader{in: input, cfg: u, ctx: ctx}

        for _, opt := range opts {
            opt(&i.cfg)
        }

        i.cfg.RequestOptions = append(i.cfg.RequestOptions, request.WithAppendUserAgent("S3Manager"))

        return i.upload()
    }

func (u Uploader) UploadWithIterator(ctx aws.Context, iter BatchUploadIterator, opts ...func(*Uploader)) error

UploadWithIterator will upload a batch of objects to S3. This operation uses the iterator pattern to know which object to upload next. Since BatchUploadIterator is an interface, custom behavior can be supplied with your own implementation (see the sketch after the example below).

Example:

    svc := s3manager.NewUploader(sess)

    objects := []BatchUploadObject{
        {
            Object: &s3manager.UploadInput{
                Key:    aws.String("key"),
                Bucket: aws.String("bucket"),
            },
        },
    }

    iter := &s3manager.UploadObjectsIterator{Objects: objects}
    if err := svc.UploadWithIterator(aws.BackgroundContext(), iter); err != nil {
        return err
    }
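
Because BatchUploadIterator is an interface, objects can also be produced on the fly. A minimal sketch; the sliceIterator type is hypothetical, not part of the SDK:

    // Sketch: a hypothetical custom iterator that walks parallel slices
    // of keys and bodies, uploading each to a single bucket.
    type sliceIterator struct {
        bucket string
        keys   []string
        bodies []io.Reader
        index  int
    }

    func (it *sliceIterator) Next() bool { return it.index < len(it.keys) }
    func (it *sliceIterator) Err() error { return nil }

    func (it *sliceIterator) UploadObject() s3manager.BatchUploadObject {
        i := it.index
        it.index++
        return s3manager.BatchUploadObject{
            Object: &s3manager.UploadInput{
                Bucket: aws.String(it.bucket),
                Key:    aws.String(it.keys[i]),
                Body:   it.bodies[i],
            },
        }
    }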


// File 'service/s3/s3manager/upload.go', line 330

    func (u Uploader) UploadWithIterator(ctx aws.Context, iter BatchUploadIterator, opts ...func(*Uploader)) error {
        var errs []Error
        for iter.Next() {
            object := iter.UploadObject()
            if _, err := u.UploadWithContext(ctx, object.Object, opts...); err != nil {
                s3Err := Error{
                    OrigErr: err,
                    Bucket:  object.Object.Bucket,
                    Key:     object.Object.Key,
                }

                errs = append(errs, s3Err)
            }

            if object.After == nil {
                continue
            }

            if err := object.After(); err != nil {
                s3Err := Error{
                    OrigErr: err,
                    Bucket:  object.Object.Bucket,
                    Key:     object.Object.Key,
                }

                errs = append(errs, s3Err)
            }
        }

        if len(errs) > 0 {
            return NewBatchError("BatchedUploadIncomplete", "some objects have failed to upload.", errs)
        }
        return nil
    }