Package: s3manager

import "github.com/IBM/ibm-cos-sdk-go/service/s3/s3manager"

Overview

Package s3manager provides utilities to upload and download objects from S3 concurrently. It is helpful when working with large objects.

Sub-Packages

s3manageriface

Constants

const DefaultBatchSize = readonly

DefaultBatchSize is the batch size we initialize when constructing a batch delete client. This value is used when calling DeleteObjects. This represents how many objects to delete per DeleteObjects call.

Value:

100

const ErrDeleteBatchFailCode = readonly

ErrDeleteBatchFailCode represents an error code which will be returned only when DeleteObjects.Errors has an error that does not contain a code.

Value:

"DeleteBatchError"
const MaxUploadParts = readonly

MaxUploadParts is the maximum allowed number of parts in a multi-part upload on Amazon S3.

Value:

10000

const MinUploadPartSize int64 = readonly

MinUploadPartSize is the minimum allowed part size when uploading a part to Amazon S3.

Value:

1024 * 1024 * 5

const DefaultUploadPartSize = readonly

DefaultUploadPartSize is the default part size to buffer chunks of a payload into.

Value:

MinUploadPartSize

const DefaultUploadConcurrency = readonly

DefaultUploadConcurrency is the default number of goroutines to spin up when using Upload().

Value:

5

const DefaultDownloadPartSize = readonly

DefaultDownloadPartSize is the default range of bytes to get at a time when using Download().

Value:

1024 * 1024 * 5

const DefaultDownloadConcurrency = readonly

DefaultDownloadConcurrency is the default number of goroutines to spin up when using Download().

Value:

5
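
These constants interact: with the 5 MiB default part size, a multipart upload can span at most 10,000 parts, so sufficiently large objects require a larger part size. The sketch below mirrors that adjustment; `requiredPartSize` and the lowercase constants are illustrative only, not part of the SDK.

```go
package main

import "fmt"

// Mirror the package constants above (values only; not the exported names).
const (
	minUploadPartSize int64 = 1024 * 1024 * 5 // MinUploadPartSize
	maxUploadParts    int64 = 10000           // MaxUploadParts
)

// requiredPartSize returns the smallest part size, no less than
// minUploadPartSize, that fits objectSize into at most maxUploadParts parts.
func requiredPartSize(objectSize int64) int64 {
	partSize := minUploadPartSize
	if objectSize/partSize >= maxUploadParts {
		// Round up so the part count stays under the limit.
		partSize = (objectSize / maxUploadParts) + 1
	}
	return partSize
}

func main() {
	// A 5 GiB object fits comfortably in 5 MiB parts.
	fmt.Println(requiredPartSize(5 * 1024 * 1024 * 1024))
	// A 100 GiB object needs larger parts to stay under 10,000 parts.
	fmt.Println(requiredPartSize(100 * 1024 * 1024 * 1024))
}
```

The Uploader performs an equivalent adjustment internally, which is why uploads of very large objects succeed even with the default part size configured.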

Type Details

BatchDeleteObject struct

BatchDeleteObject is a wrapper object for calling the batch delete operation.

Structure Fields:

Object *s3.DeleteObjectInput

BatchDownloadObject struct

BatchDownloadObject contains all necessary information to run a batch operation once.

Structure Fields:

Object *s3.GetObjectInput
Writer io.WriterAt

BatchUploadObject struct

BatchUploadObject contains all necessary information to run a batch operation once.

Structure Fields:

Object *UploadInput

UploadInput struct

UploadInput provides the input parameters for uploading a stream or buffer to an object in an Amazon S3 bucket. This type is similar to the s3 package's PutObjectInput with the exception that the Body member is an io.Reader instead of an io.ReadSeeker.

The ContentMD5 member for pre-computed MD5 checksums will be ignored for multipart uploads. For objects uploaded in a single part, the ContentMD5 will be used.

The Checksum members for pre-computed checksums will be ignored for multipart uploads. For objects uploaded in a single part, the checksum member will be included in the request.

Structure Fields:

Bucket *string required

The bucket name to which the PUT action was initiated.

When using this action with an access point, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points (docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) in the Amazon S3 User Guide.

When you use this action with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts through the Amazon Web Services SDKs, you provide the Outposts access point ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see What is S3 on Outposts? (docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html) in the Amazon S3 User Guide.

Bucket is a required field

Key *string required

Object key for which the PUT action was initiated.

Key is a required field

_ struct{}

ACL *string enum

The canned ACL to apply to the object. For more information, see Canned ACL (docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#CannedACL).

This action is not supported by Amazon S3 on Outposts.

Enum Values:

(no defined enumerable values)

Body io.Reader

The readable body payload to send to S3.

CacheControl *string

Can be used to specify caching behavior along the request/reply chain. For more information, see www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9 (www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9).

ContentDisposition *string

Specifies presentational information for the object. For more information, see www.rfc-editor.org/rfc/rfc6266#section-4 (www.rfc-editor.org/rfc/rfc6266#section-4).

ContentEncoding *string

Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. For more information, see www.rfc-editor.org/rfc/rfc9110.html#field.content-encoding (www.rfc-editor.org/rfc/rfc9110.html#field.content-encoding).

ContentLanguage *string

The language the content is in.

ContentMD5 *string

The base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the Content-MD5 mechanism as an end-to-end integrity check. For more information about REST request authentication, see REST Authentication (docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html).

If the ContentMD5 is provided for a multipart upload, it will be ignored. For objects uploaded in a single part, the ContentMD5 will be used.
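
Note that the header carries the base64 encoding of the raw 16-byte MD5 digest, not its hex form, so it can be computed with the standard library alone. A minimal sketch (`contentMD5` is an illustrative helper, not an SDK function):

```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

// contentMD5 computes the base64-encoded 128-bit MD5 digest of the payload,
// the format S3 expects in the Content-MD5 header (RFC 1864).
func contentMD5(body []byte) string {
	sum := md5.Sum(body) // raw 16-byte digest, not hex
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	fmt.Println(contentMD5([]byte("hello world")))
}
```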

ContentType *string

A standard MIME type describing the format of the contents. For more information, see www.rfc-editor.org/rfc/rfc9110.html#name-content-type (www.rfc-editor.org/rfc/rfc9110.html#name-content-type).

Expires *time.Time

The date and time at which the object is no longer cacheable. For more information, see www.rfc-editor.org/rfc/rfc7234#section-5.3 (www.rfc-editor.org/rfc/rfc7234#section-5.3).

GrantFullControl *string

Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

This action is not supported by Amazon S3 on Outposts.

GrantRead *string

Allows grantee to read the object data and its metadata.

This action is not supported by Amazon S3 on Outposts.

GrantReadACP *string

Allows grantee to read the object ACL.

This action is not supported by Amazon S3 on Outposts.

GrantWriteACP *string

Allows grantee to write the ACL for the applicable object.

This action is not supported by Amazon S3 on Outposts.

Metadata map[string]*string

A map of metadata to store with the object in S3.

ObjectLockLegalHoldStatus *string enum

Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock (docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html).

Enum Values:

(no defined enumerable values)

ObjectLockMode *string enum

The Object Lock mode that you want to apply to this object.

Enum Values:

(no defined enumerable values)

ObjectLockRetainUntilDate *time.Time

The date and time when you want this object's Object Lock to expire.

RequestPayer *string enum

Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination Amazon S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets (docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) in the Amazon S3 User Guide.

Enum Values:

(no defined enumerable values)

RetentionExpirationDate *time.Time

Date on which it will be legal to delete or modify the object. This field can only be specified if Retention-Directive is REPLACE. You can only specify this or the Retention-Period header. If both are specified a 400 error will be returned. If neither is specified the bucket's DefaultRetention period will be used.

RetentionLegalHoldId *string

A single legal hold to apply to the object. This field can only be specified if Retention-Directive is REPLACE. A legal hold is a string with a maximum length of 64 characters. The object cannot be overwritten or deleted until all legal holds associated with the object are removed.

RetentionPeriod *int64

Retention period to store on the object in seconds. If this field and Retention-Expiration-Date are specified a 400 error is returned. If neither is specified the bucket's DefaultRetention period will be used. 0 is a legal value assuming the bucket's minimum retention period is also 0.

SSECustomerAlgorithm *string

Specifies the algorithm to use to when encrypting the object (for example, AES256).

SSECustomerKey *string

Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

SSECustomerKeyMD5 *string

Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

SSEKMSKeyId *string

If x-amz-server-side-encryption has a valid value of aws:kms or aws:kms:dsse, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the Key Management Service (KMS) symmetric encryption customer managed key that was used for the object. If you specify x-amz-server-side-encryption:aws:kms or x-amz-server-side-encryption:aws:kms:dsse, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3) to protect the data. If the KMS key does not exist in the same account that's issuing the command, you must use the full ARN and not just the ID.

ServerSideEncryption *string enum

The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms, aws:kms:dsse).

Enum Values:

(no defined enumerable values)

StorageClass *string enum

By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, see Storage Classes (docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) in the Amazon S3 User Guide.

Enum Values:

(no defined enumerable values)

Tagging *string

The tag-set for the object. The tag-set must be encoded as URL Query parameters. (For example, “Key1=Value1”)
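
A tag-set in this query-parameter encoding can be built with net/url. A minimal sketch (`tagSet` is an illustrative helper, not an SDK function); note that url.Values.Encode sorts keys and escapes a space as "+":

```go
package main

import (
	"fmt"
	"net/url"
)

// tagSet encodes object tags in the URL query-parameter form the Tagging
// field expects, e.g. "Key1=Value1&Key2=Value2".
func tagSet(tags map[string]string) string {
	v := url.Values{}
	for key, val := range tags {
		v.Set(key, val)
	}
	return v.Encode() // keys are emitted in sorted order
}

func main() {
	fmt.Println(tagSet(map[string]string{"project": "demo", "owner": "data team"}))
}
```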

WebsiteRedirectLocation *string

If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata (docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html).

In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:

x-amz-website-redirect-location: /anotherPage.html

In the following example, the request header sets the object redirect to another website:

x-amz-website-redirect-location: www.example.com/

For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 (docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html) and How to Configure Website Page Redirects (docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html).

UploadOutput struct

UploadOutput represents a response from the Upload() call.

Structure Fields:

Location string

The URL where the object was uploaded to.

VersionID *string

The version of the object that was uploaded. Only populated if the S3 bucket is versioned; otherwise this field will not be set.

UploadID string

The ID for a multipart upload to S3. In the case of an error the error can be cast to the MultiUploadFailure interface to extract the upload ID.

ETag *string

Entity tag of the object.

Function Details

func GetBucketRegion(ctx aws.Context, c client.ConfigProvider, bucket, regionHint string, opts ...request.Option) (string, error)

GetBucketRegion will attempt to get the region for a bucket using the regionHint to determine which AWS partition to perform the query on.

The request will not be signed, and will not use your AWS credentials.

A “NotFound” error code will be returned if the bucket does not exist in the AWS partition the regionHint belongs to. If the regionHint parameter is an empty string, GetBucketRegion will fall back to the ConfigProvider's region config. If the regionHint is empty and the ConfigProvider does not have a region value, an error will be returned.

For example, to get the region of a bucket that exists in “eu-central-1” you could provide a region hint of “us-west-2”.

sess := session.Must(session.NewSession())

bucket := "my-bucket"
region, err := s3manager.GetBucketRegion(ctx, sess, bucket, "us-west-2")
if err != nil {
    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "NotFound" {
        fmt.Fprintf(os.Stderr, "unable to find bucket %s's region\n", bucket)
    }
    return err
}
fmt.Printf("Bucket %s is in %s region\n", bucket, region)

By default the request will be made to the Amazon S3 endpoint using Path-style addressing.

s3.us-west-2.amazonaws.com/bucketname

This is not compatible with Amazon S3's FIPS endpoints. To override this behavior to use Virtual Host style addressing, provide a functional option that will set the Request's Config.S3ForcePathStyle to aws.Bool(false).

region, err := s3manager.GetBucketRegion(ctx, sess, "bucketname", "us-west-2", func(r *request.Request) {
    r.Config.S3ForcePathStyle = aws.Bool(false)
})

To configure GetBucketRegion to make a request directly to the Amazon S3 FIPS endpoints when a FIPS region name is not available (e.g. fips-us-gov-west-1), set the Config.Endpoint on the Session, or on the client the utility is called with. The hint region will be ignored if an endpoint URL is configured on the session or client.

sess, err := session.NewSession(&aws.Config{
    Endpoint: aws.String("https://s3-fips.us-west-2.amazonaws.com"),
})

region, err := s3manager.GetBucketRegion(context.Background(), sess, "bucketname", "")


// File 'service/s3/s3manager/bucket_region.go', line 62

func GetBucketRegion(ctx aws.Context, c client.ConfigProvider, bucket, regionHint string, opts ...request.Option) (string, error) {
    var cfg aws.Config
    if len(regionHint) != 0 {
        cfg.Region = aws.String(regionHint)
    }
    svc := s3.New(c, &cfg)
    return GetBucketRegionWithClient(ctx, svc, bucket, opts...)
}

func GetBucketRegionWithClient(ctx aws.Context, svc s3iface.S3API, bucket string, opts ...request.Option) (string, error)

GetBucketRegionWithClient is the same as GetBucketRegion with the exception that it takes an S3 service client instead of a Session. The regionHint is derived from the region the S3 service client was created in.

By default the request will be made to the Amazon S3 endpoint using Path-style addressing.

s3.us-west-2.amazonaws.com/bucketname

This is not compatible with Amazon S3's FIPS endpoints. To override this behavior to use Virtual Host style addressing, provide a functional option that will set the Request's Config.S3ForcePathStyle to aws.Bool(false).

region, err := s3manager.GetBucketRegionWithClient(ctx, client, "bucketname", func(r *request.Request) {
    r.Config.S3ForcePathStyle = aws.Bool(false)
})

To configure GetBucketRegion to make a request directly to the Amazon S3 FIPS endpoints when a FIPS region name is not available (e.g. fips-us-gov-west-1), set the Config.Endpoint on the Session, or on the client the utility is called with. The hint region will be ignored if an endpoint URL is configured on the session or client.

region, err := s3manager.GetBucketRegionWithClient(context.Background(), s3.New(sess, &aws.Config{
    Endpoint: aws.String("https://s3-fips.us-west-2.amazonaws.com"),
}), "bucketname")

See GetBucketRegion for more information.



// File 'service/s3/s3manager/bucket_region.go', line 103

func GetBucketRegionWithClient(ctx aws.Context, svc s3iface.S3API, bucket string, opts ...request.Option) (string, error) {
    req, _ := svc.HeadBucketRequest(&s3.HeadBucketInput{
        Bucket: aws.String(bucket),
    })
    req.Config.S3ForcePathStyle = aws.Bool(true)

    req.Config.Credentials = credentials.AnonymousCredentials
    req.SetContext(ctx)

    // Disable HTTP redirects to prevent an invalid 301 from eating the response
    // because Go's HTTP client will fail, and drop the response if a 301 is
    // received without a location header. S3 will return a 301 without the
    // location header for HeadObject API calls.
    req.DisableFollowRedirects = true

    var bucketRegion string
    req.Handlers.Send.PushBack(func(r *request.Request) {
        bucketRegion = r.HTTPResponse.Header.Get(bucketRegionHeader)
        if len(bucketRegion) == 0 {
            return
        }
        r.HTTPResponse.StatusCode = 200
        r.HTTPResponse.Status = "OK"
        r.Error = nil
    })

    // Replace the endpoint validation handler to not require a region if an
    // endpoint URL was specified. Since these requests are not authenticated,
    // requiring a region is not needed when an endpoint URL is provided.
    req.Handlers.Validate.Swap(
        corehandlers.ValidateEndpointHandler.Name,
        request.NamedHandler{
            Name: "validateEndpointWithoutRegion",
            Fn:   validateEndpointWithoutRegion,
        },
    )

    req.ApplyOptions(opts...)

    if err := req.Send(); err != nil {
        return "", err
    }

    bucketRegion = s3.NormalizeBucketLocation(bucketRegion)

    return bucketRegion, nil
}

func NewBatchDelete(c client.ConfigProvider, options ...func(*BatchDelete)) *BatchDelete

NewBatchDelete will return a new delete client that can delete a batch of objects.

Example:

batcher := s3manager.NewBatchDelete(sess)

objects := []s3manager.BatchDeleteObject{
    {
        Object: &s3.DeleteObjectInput{
            Key:    aws.String("key"),
            Bucket: aws.String("bucket"),
        },
    },
}

if err := batcher.Delete(aws.BackgroundContext(), &s3manager.DeleteObjectsIterator{
    Objects: objects,
}); err != nil {
    return err
}


// File 'service/s3/s3manager/batch.go', line 257

func NewBatchDelete(c client.ConfigProvider, options ...func(*BatchDelete)) *BatchDelete {
    client := s3.New(c)
    return NewBatchDeleteWithClient(client, options...)
}

func NewBatchDeleteWithClient(client s3iface.S3API, options ...func(*BatchDelete)) *BatchDelete

NewBatchDeleteWithClient will return a new delete client that can delete a batch of objects.

Example:

batcher := s3manager.NewBatchDeleteWithClient(client)

objects := []s3manager.BatchDeleteObject{
    {
        Object: &s3.DeleteObjectInput{
            Key:    aws.String("key"),
            Bucket: aws.String("bucket"),
        },
    },
}

if err := batcher.Delete(aws.BackgroundContext(), &s3manager.DeleteObjectsIterator{
    Objects: objects,
}); err != nil {
    return err
}


// File 'service/s3/s3manager/batch.go', line 223

func NewBatchDeleteWithClient(client s3iface.S3API, options ...func(*BatchDelete)) *BatchDelete {
    svc := &BatchDelete{
        Client:    client,
        BatchSize: DefaultBatchSize,
    }

    for _, opt := range options {
        opt(svc)
    }

    return svc
}

func NewBatchError(code, message string, err []Error) awserr.Error

NewBatchError will return a BatchError that satisfies the awserr.Error interface.



// File 'service/s3/s3manager/batch.go', line 74

func NewBatchError(code, message string, err []Error) awserr.Error {
    return &BatchError{
        Errors:  err,
        code:    code,
        message: message,
    }
}

func NewDeleteListIterator(svc s3iface.S3API, input *s3.ListObjectsInput, opts ...func(*DeleteListIterator)) BatchDeleteIterator

NewDeleteListIterator will return a new DeleteListIterator.



// File 'service/s3/s3manager/batch.go', line 145

func NewDeleteListIterator(svc s3iface.S3API, input *s3.ListObjectsInput, opts ...func(*DeleteListIterator)) BatchDeleteIterator {
    iter := &DeleteListIterator{
        Bucket: input.Bucket,
        Paginator: request.Pagination{
            NewRequest: func() (*request.Request, error) {
                var inCpy *s3.ListObjectsInput
                if input != nil {
                    tmp := *input
                    inCpy = &tmp
                }
                req, _ := svc.ListObjectsRequest(inCpy)
                return req, nil
            },
        },
    }

    for _, opt := range opts {
        opt(iter)
    }
    return iter
}

func NewDownloader(c client.ConfigProvider, options ...func(*Downloader)) *Downloader

NewDownloader creates a new Downloader instance to download objects from S3 in concurrent chunks. Pass in additional functional options to customize the downloader behavior. Requires a client.ConfigProvider in order to create an S3 service client. The session.Session satisfies the client.ConfigProvider interface.

Example:

// The session the S3 Downloader will use
sess := session.Must(session.NewSession())

// Create a downloader with the session and default options
downloader := s3manager.NewDownloader(sess)

// Create a downloader with the session and custom options
downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
    d.PartSize = 64 * 1024 * 1024 // 64MB per part
})


// File 'service/s3/s3manager/download.go', line 99

func NewDownloader(c client.ConfigProvider, options ...func(*Downloader)) *Downloader {
    return newDownloader(s3.New(c), options...)
}

func NewDownloaderWithClient(svc s3iface.S3API, options ...func(*Downloader)) *Downloader

NewDownloaderWithClient creates a new Downloader instance to download objects from S3 in concurrent chunks. Pass in additional functional options to customize the downloader behavior. Requires an S3 service client to make S3 API calls.

Example:

// The session the S3 Downloader will use
sess := session.Must(session.NewSession())

// The S3 client the S3 Downloader will use
s3Svc := s3.New(sess)

// Create a downloader with the s3 client and default options
downloader := s3manager.NewDownloaderWithClient(s3Svc)

// Create a downloader with the s3 client and custom options
downloader := s3manager.NewDownloaderWithClient(s3Svc, func(d *s3manager.Downloader) {
    d.PartSize = 64 * 1024 * 1024 // 64MB per part
})


// File 'service/s3/s3manager/download.go', line 137

func NewDownloaderWithClient(svc s3iface.S3API, options ...func(*Downloader)) *Downloader {
    return newDownloader(svc, options...)
}

func NewUploader(c client.ConfigProvider, options ...func(*Uploader)) *Uploader

NewUploader creates a new Uploader instance to upload objects to S3. Pass in additional functional options to customize the uploader's behavior. Requires a client.ConfigProvider in order to create an S3 service client. The session.Session satisfies the client.ConfigProvider interface.

Example:

// The session the S3 Uploader will use
sess := session.Must(session.NewSession())

// Create an uploader with the session and default options
uploader := s3manager.NewUploader(sess)

// Create an uploader with the session and custom options
uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
    u.PartSize = 64 * 1024 * 1024 // 64MB per part
})


// File 'service/s3/s3manager/upload.go', line 199

func NewUploader(c client.ConfigProvider, options ...func(*Uploader)) *Uploader {
    return newUploader(s3.New(c), options...)
}

func NewUploaderWithClient(svc s3iface.S3API, options ...func(*Uploader)) *Uploader

NewUploaderWithClient creates a new Uploader instance to upload objects to S3. Pass in additional functional options to customize the uploader's behavior. Requires an S3 service client to make S3 API calls.

Example:

// The session the S3 Uploader will use
sess := session.Must(session.NewSession())

// S3 service client the Upload manager will use.
s3Svc := s3.New(sess)

// Create an uploader with S3 client and default options
uploader := s3manager.NewUploaderWithClient(s3Svc)

// Create an uploader with S3 client and custom options
uploader := s3manager.NewUploaderWithClient(s3Svc, func(u *s3manager.Uploader) {
    u.PartSize = 64 * 1024 * 1024 // 64MB per part
})


// File 'service/s3/s3manager/upload.go', line 241

func NewUploaderWithClient(svc s3iface.S3API, options ...func(*Uploader)) *Uploader {
    return newUploader(svc, options...)
}

func WithDownloaderRequestOptions(opts ...request.Option) func(*Downloader)

WithDownloaderRequestOptions appends to the Downloader's API request options.



// File 'service/s3/s3manager/download.go', line 75

func WithDownloaderRequestOptions(opts ...request.Option) func(*Downloader) {
    return func(d *Downloader) {
        d.RequestOptions = append(d.RequestOptions, opts...)
    }
}

func WithUploaderRequestOptions(opts ...request.Option) func(*Uploader)

WithUploaderRequestOptions appends to the Uploader's API request options.



// File 'service/s3/s3manager/upload.go', line 116

func WithUploaderRequestOptions(opts ...request.Option) func(*Uploader) {
    return func(u *Uploader) {
        u.RequestOptions = append(u.RequestOptions, opts...)
    }
}