@Generated(value="com.ibm.cos.v2:codegen") public final class CopyObjectRequest extends S3Request implements ToCopyableBuilder<CopyObjectRequest.Builder,CopyObjectRequest>
| Modifier and Type | Class and Description |
|---|---|
| static interface | CopyObjectRequest.Builder |
| Modifier and Type | Method and Description |
|---|---|
| ObjectCannedACL | acl() The canned access control list (ACL) to apply to the object. |
| String | aclAsString() The canned access control list (ACL) to apply to the object. |
| String | bucket() Deprecated. |
| Boolean | bucketKeyEnabled() Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS). |
| static CopyObjectRequest.Builder | builder() |
| String | cacheControl() Specifies the caching behavior along the request/reply chain. |
| ChecksumAlgorithm | checksumAlgorithm() Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. |
| String | checksumAlgorithmAsString() Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. |
| String | contentDisposition() Specifies presentational information for the object. |
| String | contentEncoding() Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. |
| String | contentLanguage() The language the content is in. |
| String | contentType() A standard MIME type that describes the format of the object data. |
| String | copySource() Deprecated. The copySource parameter has been deprecated in favor of the more user-friendly sourceBucket, sourceKey, and sourceVersionId parameters. The copySource parameter will remain fully functional, but it must not be used in conjunction with its replacement parameters. |
| String | copySourceIfMatch() Copies the object if its entity tag (ETag) matches the specified tag. |
| Instant | copySourceIfModifiedSince() Copies the object if it has been modified since the specified time. |
| String | copySourceIfNoneMatch() Copies the object if its entity tag (ETag) is different than the specified ETag. |
| Instant | copySourceIfUnmodifiedSince() Copies the object if it hasn't been modified since the specified time. |
| String | copySourceSSECustomerAlgorithm() Specifies the algorithm to use when decrypting the source object (for example, AES256). |
| String | copySourceSSECustomerKey() Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. |
| String | copySourceSSECustomerKeyMD5() Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. |
| String | destinationBucket() The name of the destination bucket. |
| String | destinationKey() The key of the destination object. |
| boolean | equals(Object obj) |
| boolean | equalsBySdkFields(Object obj) Indicates whether some other object is "equal to" this one by SDK fields. |
| String | expectedBucketOwner() The account ID of the expected destination bucket owner. |
| String | expectedSourceBucketOwner() The account ID of the expected source bucket owner. |
| Instant | expires() The date and time at which the object is no longer cacheable. |
| <T> Optional<T> | getValueForField(String fieldName, Class<T> clazz) Used to retrieve the value of a field from any class that extends SdkRequest. |
| String | grantFullControl() Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. |
| String | grantRead() Allows grantee to read the object data and its metadata. |
| String | grantReadACP() Allows grantee to read the object ACL. |
| String | grantWriteACP() Allows grantee to write the ACL for the applicable object. |
| int | hashCode() |
| boolean | hasMetadata() For responses, this returns true if the service returned a value for the Metadata property. |
| String | key() Deprecated. Use destinationKey() |
| Map<String,String> | metadata() A map of metadata to store with the object in S3. |
| MetadataDirective | metadataDirective() Specifies whether the metadata is copied from the source object or replaced with metadata that's provided in the request. |
| String | metadataDirectiveAsString() Specifies whether the metadata is copied from the source object or replaced with metadata that's provided in the request. |
| ObjectLockLegalHoldStatus | objectLockLegalHoldStatus() Specifies whether you want to apply a legal hold to the object copy. |
| String | objectLockLegalHoldStatusAsString() Specifies whether you want to apply a legal hold to the object copy. |
| ObjectLockMode | objectLockMode() The Object Lock mode that you want to apply to the object copy. |
| String | objectLockModeAsString() The Object Lock mode that you want to apply to the object copy. |
| Instant | objectLockRetainUntilDate() The date and time when you want the Object Lock of the object copy to expire. |
| RequestPayer | requestPayer() Returns the value of the RequestPayer property for this object. |
| String | requestPayerAsString() Returns the value of the RequestPayer property for this object. |
| RetentionDirective | retentionDirective() |
| Instant | retentionExpirationDate() |
| String | retentionLegalHoldId() |
| Long | retentionPeriod() |
| Map<String,SdkField<?>> | sdkFieldNameToField() |
| List<SdkField<?>> | sdkFields() |
| static Class<? extends CopyObjectRequest.Builder> | serializableBuilderClass() |
| ServerSideEncryption | serverSideEncryption() The server-side encryption algorithm used when storing this object in Amazon S3. |
| String | serverSideEncryptionAsString() The server-side encryption algorithm used when storing this object in Amazon S3. |
| String | sourceBucket() The name of the bucket containing the object to copy. |
| String | sourceKey() The key of the object to copy. |
| String | sourceVersionId() Specifies a particular version of the source object to copy. |
| String | sseCustomerAlgorithm() Specifies the algorithm to use when encrypting the object (for example, AES256). |
| String | sseCustomerKey() Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. |
| String | sseCustomerKeyMD5() Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. |
| String | ssekmsEncryptionContext() Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for the destination object encryption. |
| String | ssekmsKeyId() Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. |
| StorageClass | storageClass() If the x-amz-storage-class header is not used, the copied object will be stored in the STANDARD Storage Class by default. |
| String | storageClassAsString() If the x-amz-storage-class header is not used, the copied object will be stored in the STANDARD Storage Class by default. |
| String | tagging() The tag-set for the object copy in the destination bucket. |
| TaggingDirective | taggingDirective() Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that's provided in the request. |
| String | taggingDirectiveAsString() Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that's provided in the request. |
| CopyObjectRequest.Builder | toBuilder() Take this object and create a builder that contains all of the current property values of this object. |
| String | toString() Returns a string representation of this object. |
| String | websiteRedirectLocation() If the destination bucket is configured as a website, redirects requests for this object copy to another object in the same bucket or to an external URL. |
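As a sketch of how the methods above fit together, a request is assembled through the nested Builder interface and is immutable once built; toBuilder() round-trips the current property values. The import package follows the AWS SDK for Java v2 layout, which this generated class mirrors; adjust the package for the SDK distribution you actually use. Bucket and key names are placeholders.

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

public class CopyObjectRequestExample {
    public static void main(String[] args) {
        // Build an immutable request; validation of required members
        // happens when the request is marshalled, not at build time.
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("example-source-bucket")       // placeholder names
                .sourceKey("reports/january.pdf")
                .destinationBucket("example-destination-bucket")
                .destinationKey("archive/january.pdf")
                .build();

        // toBuilder() creates a builder pre-populated with the current values,
        // so a variant request can be derived without repeating every field.
        CopyObjectRequest variant = request.toBuilder()
                .destinationKey("archive/february.pdf")
                .build();

        System.out.println(request.destinationKey());
        System.out.println(variant.destinationKey());
    }
}
```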
Methods inherited from class S3Request: overrideConfiguration

Methods inherited from class Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface ToCopyableBuilder: copy

public final ObjectCannedACL acl()
The canned access control list (ACL) to apply to the object.
When you copy an object, the ACL metadata is not preserved and is set to private by default. Only
the owner has full access control. To override the default ACL setting, specify a new ACL when you generate a
copy request. For more information, see Using ACLs.
If the destination bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object
Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept
PUT requests that don't specify an ACL or PUT requests that specify bucket owner full
control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL
expressed in the XML format. For more information, see Controlling ownership of
objects and disabling ACLs in the Amazon S3 User Guide.
If your destination bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
If the service returns an enum value that is not available in the current SDK version, acl will return
ObjectCannedACL.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
aclAsString().
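As a sketch, a canned ACL can be supplied as the enum via the builder, and read back either as the enum (acl()) or as the raw wire value (aclAsString()). The import package assumes the AWS SDK for Java v2 layout; bucket and key names are placeholders.

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.ObjectCannedACL;

public class AclExample {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("example-source-bucket")
                .sourceKey("reports/january.pdf")
                .destinationBucket("example-destination-bucket")
                .destinationKey("reports/january.pdf")
                // The canned ACL accepted by buckets that use the
                // bucket owner enforced Object Ownership setting:
                .acl(ObjectCannedACL.BUCKET_OWNER_FULL_CONTROL)
                .build();

        // acl() returns the enum; aclAsString() returns the raw header value.
        System.out.println(request.aclAsString());
    }
}
```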
See also: ObjectCannedACL

public final String aclAsString()
The canned access control list (ACL) to apply to the object.
When you copy an object, the ACL metadata is not preserved and is set to private by default. Only
the owner has full access control. To override the default ACL setting, specify a new ACL when you generate a
copy request. For more information, see Using ACLs.
If the destination bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object
Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept
PUT requests that don't specify an ACL or PUT requests that specify bucket owner full
control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL
expressed in the XML format. For more information, see Controlling ownership of
objects and disabling ACLs in the Amazon S3 User Guide.
If your destination bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
If the service returns an enum value that is not available in the current SDK version, acl will return
ObjectCannedACL.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
aclAsString().
See also: ObjectCannedACL

public final String cacheControl()
Specifies the caching behavior along the request/reply chain.
public final ChecksumAlgorithm checksumAlgorithm()
Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
When you copy an object, if the source object has a checksum, that checksum value will be copied to the new
object by default. If the CopyObject request does not include this
x-amz-checksum-algorithm header, the checksum algorithm will be copied from the source object to the
destination object (if it's present on the source object). You can optionally specify a different checksum
algorithm to use with the x-amz-checksum-algorithm header. Unrecognized or unsupported values will
respond with the HTTP status code 400 Bad Request.
For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum
algorithm that's used for performance.
If the service returns an enum value that is not available in the current SDK version, checksumAlgorithm
will return ChecksumAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from checksumAlgorithmAsString().
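A minimal sketch of opting into a specific checksum algorithm for the destination object; omitting the setting copies the source object's checksum algorithm, if one is present. The import package assumes the AWS SDK for Java v2 layout; bucket and key names are placeholders.

```java
import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

public class ChecksumExample {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("example-source-bucket")
                .sourceKey("reports/january.pdf")
                .destinationBucket("example-destination-bucket")
                .destinationKey("reports/january.pdf")
                // Sent as the x-amz-checksum-algorithm header.
                .checksumAlgorithm(ChecksumAlgorithm.SHA256)
                .build();

        // The AsString variant exposes the raw value even if the enum
        // constant is unknown to the current SDK version.
        System.out.println(request.checksumAlgorithmAsString());
    }
}
```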
See also: ChecksumAlgorithm

public final String checksumAlgorithmAsString()
Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
When you copy an object, if the source object has a checksum, that checksum value will be copied to the new
object by default. If the CopyObject request does not include this
x-amz-checksum-algorithm header, the checksum algorithm will be copied from the source object to the
destination object (if it's present on the source object). You can optionally specify a different checksum
algorithm to use with the x-amz-checksum-algorithm header. Unrecognized or unsupported values will
respond with the HTTP status code 400 Bad Request.
For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum
algorithm that's used for performance.
If the service returns an enum value that is not available in the current SDK version, checksumAlgorithm
will return ChecksumAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from checksumAlgorithmAsString().
See also: ChecksumAlgorithm

public final String contentDisposition()
Specifies presentational information for the object. Indicates whether an object should be displayed in a web browser or downloaded as a file. It allows specifying the desired filename for the downloaded file.
public final String contentEncoding()
Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
For directory buckets, only the aws-chunked value is supported in this header field.
public final String contentLanguage()
The language the content is in.
public final String contentType()
A standard MIME type that describes the format of the object data.
@Deprecated public final String copySource()

Deprecated. The copySource parameter has been deprecated in favor of the more user-friendly sourceBucket, sourceKey, and sourceVersionId parameters. The copySource parameter will remain fully functional, but it must not be used in conjunction with its replacement parameters.

Specifies the source object for the copy operation. The source object can be up to 5 GB. If the source object was uploaded by using a multipart upload, the object copy will be a single-part object after the source object is copied to the destination bucket.
You specify the value of the copy source in one of two formats, depending on whether you want to access the source object through an access point:
- For objects not accessed through an access point, specify the name of the source bucket and the key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the general purpose bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded. To copy the object reports/january.pdf from the directory bucket awsexamplebucket--use1-az5--x-s3, use awsexamplebucket--use1-az5--x-s3/reports/january.pdf. The value must be URL-encoded.
- For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL-encoded.

  Amazon S3 supports copy operations using access points only when the source and destination buckets are in the same Amazon Web Services Region. Access points are not supported by directory buckets.
- Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.
If your source bucket versioning is enabled, the x-amz-copy-source header by default identifies the
current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the
object was deleted. To copy a different version, use the versionId query parameter. Specifically,
append ?versionId=<version-id> to the value (for example,
awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893). If you don't
specify a version ID, Amazon S3 copies the latest version of the source object.
If you enable versioning on the destination bucket, Amazon S3 generates a unique version ID for the copied
object. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID
of the copied object in the x-amz-version-id response header in the response.
If you do not enable versioning or suspend it on the destination bucket, the version ID that Amazon S3 generates
in the x-amz-version-id response header is always null.
Directory buckets - S3 Versioning isn't enabled or supported for directory buckets.
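The URL-encoding requirement above is one reason the sourceBucket/sourceKey parameters are preferred: with them, the SDK handles encoding for you. If you must build a copy-source value yourself, a stdlib-only sketch follows; note that java.net.URLEncoder emits '+' for spaces (form encoding), so a replace is needed for header use, and the slash between bucket and key must not be encoded.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CopySourceEncoding {
    // Builds a bucket/key copy-source value with the key URL-encoded.
    // This is an illustrative sketch, not the SDK's internal encoder.
    static String copySource(String bucket, String key) {
        String encodedKey = URLEncoder.encode(key, StandardCharsets.UTF_8)
                .replace("+", "%20"); // header values use %20, not '+'
        return bucket + "/" + encodedKey;
    }

    public static void main(String[] args) {
        System.out.println(copySource("awsexamplebucket", "reports/january file.pdf"));
    }
}
```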
public final String copySourceIfMatch()
Copies the object if its entity tag (ETag) matches the specified tag.
If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

- x-amz-copy-source-if-match condition evaluates to true
- x-amz-copy-source-if-unmodified-since condition evaluates to false
public final Instant copySourceIfModifiedSince()
Copies the object if it has been modified since the specified time.
If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

- x-amz-copy-source-if-none-match condition evaluates to false
- x-amz-copy-source-if-modified-since condition evaluates to true
public final String copySourceIfNoneMatch()
Copies the object if its entity tag (ETag) is different than the specified ETag.
If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

- x-amz-copy-source-if-none-match condition evaluates to false
- x-amz-copy-source-if-modified-since condition evaluates to true
public final Instant copySourceIfUnmodifiedSince()
Copies the object if it hasn't been modified since the specified time.
If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

- x-amz-copy-source-if-match condition evaluates to true
- x-amz-copy-source-if-unmodified-since condition evaluates to false
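The interaction of the four conditional headers can be condensed into a small decision sketch (plain Java, no SDK types): when both if-match and if-unmodified-since are present, a matching ETag lets the copy proceed even if the unmodified-since check fails, per the documented behavior above. This is an illustrative model of the rules, not code the service or SDK runs.

```java
import java.time.Instant;

public class CopyPreconditions {
    // Models the documented evaluation for the if-match /
    // if-unmodified-since pair. Returns true when S3 would
    // answer 200 OK and copy the data.
    static boolean copyProceeds(String sourceEtag, Instant lastModified,
                                String ifMatch, Instant ifUnmodifiedSince) {
        boolean etagMatches = ifMatch != null && ifMatch.equals(sourceEtag);
        boolean unmodified = ifUnmodifiedSince != null
                && !lastModified.isAfter(ifUnmodifiedSince);
        if (ifMatch != null && ifUnmodifiedSince != null) {
            // if-match takes precedence when both headers are present
            return etagMatches;
        }
        if (ifMatch != null) return etagMatches;
        if (ifUnmodifiedSince != null) return unmodified;
        return true; // no preconditions supplied
    }

    public static void main(String[] args) {
        Instant modified = Instant.parse("2024-06-01T00:00:00Z");
        Instant cutoff = Instant.parse("2024-01-01T00:00:00Z");
        // ETag matches but the object changed after the cutoff:
        // the copy still proceeds, matching the documented table.
        System.out.println(copyProceeds("\"abc\"", modified, "\"abc\"", cutoff));
    }
}
```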
public final Instant expires()
The date and time at which the object is no longer cacheable.
public final String grantFullControl()
Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
public final String grantRead()
Allows grantee to read the object data and its metadata.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
public final String grantReadACP()
Allows grantee to read the object ACL.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
public final String grantWriteACP()
Allows grantee to write the ACL for the applicable object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
public final boolean hasMetadata()
For responses, this returns true if the service returned a value for the Metadata property. This DOES NOT check that the value is non-empty (for which, you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.

public final Map<String,String> metadata()
A map of metadata to store with the object in S3.
Attempts to modify the collection returned by this method will result in an UnsupportedOperationException.
This method will never return null. If you would like to know whether the service returned this field (so that
you can differentiate between null and empty), you can use the hasMetadata() method.
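The unmodifiable-view behavior described above can be reproduced with plain collections; this stand-in (not SDK code) shows the UnsupportedOperationException a caller would see when mutating the returned map.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MetadataMapExample {
    public static void main(String[] args) {
        // metadata() returns an unmodifiable view; Collections.unmodifiableMap
        // gives the same contract with stdlib types.
        Map<String, String> metadata = Collections.unmodifiableMap(
                new HashMap<>(Map.of("project", "alpha")));
        try {
            metadata.put("owner", "team-a"); // any mutation attempt fails
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException");
        }
    }
}
```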
public final MetadataDirective metadataDirective()
Specifies whether the metadata is copied from the source object or replaced with metadata that's provided in the
request. When copying an object, you can preserve all metadata (the default) or specify new metadata. If this
header isn’t specified, COPY is the default behavior.
General purpose bucket - For general purpose buckets, when you grant permissions, you can use the
s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are
uploaded. For more information, see Amazon S3 condition key
examples in the Amazon S3 User Guide.
x-amz-website-redirect-location is unique to each object and is not copied when using the
x-amz-metadata-directive header. To copy the value, you must specify
x-amz-website-redirect-location in the request header.
If the service returns an enum value that is not available in the current SDK version, metadataDirective
will return MetadataDirective.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from metadataDirectiveAsString().
See also: MetadataDirective

public final String metadataDirectiveAsString()
Specifies whether the metadata is copied from the source object or replaced with metadata that's provided in the
request. When copying an object, you can preserve all metadata (the default) or specify new metadata. If this
header isn’t specified, COPY is the default behavior.
General purpose bucket - For general purpose buckets, when you grant permissions, you can use the
s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are
uploaded. For more information, see Amazon S3 condition key
examples in the Amazon S3 User Guide.
x-amz-website-redirect-location is unique to each object and is not copied when using the
x-amz-metadata-directive header. To copy the value, you must specify
x-amz-website-redirect-location in the request header.
If the service returns an enum value that is not available in the current SDK version, metadataDirective
will return MetadataDirective.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from metadataDirectiveAsString().
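As a sketch of the directive behavior above, the following builds a copy request that replaces the source metadata and restates the redirect location. It assumes the AWS SDK for Java 2.x package layout (software.amazon.awssdk.services.s3.model), which this generated class mirrors; bucket and key names are hypothetical.

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.MetadataDirective;

import java.util.Map;

public class MetadataDirectiveExample {
    public static void main(String[] args) {
        // REPLACE discards the source object's metadata in favor of the map
        // below. x-amz-website-redirect-location is never copied, so it must
        // be restated explicitly even when the directive is COPY.
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("source-bucket")            // hypothetical names
                .sourceKey("reports/2024.csv")
                .destinationBucket("dest-bucket")
                .destinationKey("reports/2024-copy.csv")
                .metadataDirective(MetadataDirective.REPLACE)
                .metadata(Map.of("project", "quarterly-report"))
                .websiteRedirectLocation("/reports/index.html")
                .build();

        System.out.println(request.metadataDirectiveAsString()); // "REPLACE"
    }
}
```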
See Also: MetadataDirective

public final TaggingDirective taggingDirective()
Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that's provided in the request.
The default value is COPY.
Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set
is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a
501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive
a 501 Not Implemented response in any of the following situations:
When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.
When you attempt to REPLACE the tag-set of a source object and set a non-empty value to
x-amz-tagging.
When you don't set the x-amz-tagging-directive header and the source object has non-empty tags. This
is because the default value of x-amz-tagging-directive is COPY.
Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the
following situations are allowed:
When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a
general purpose bucket. It copies an empty tag-set to the destination object.
When you attempt to REPLACE the tag-set of a directory bucket source object and set the
x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty
tags and set the x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a directory bucket source object and don't set the
x-amz-tagging value of the directory bucket destination object. This is because the default value of
x-amz-tagging is the empty value.
If the service returns an enum value that is not available in the current SDK version, taggingDirective
will return TaggingDirective.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from taggingDirectiveAsString().
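Because directory buckets accept only the empty tag-set, a copy into one from a tagged general purpose object must pair REPLACE with an empty x-amz-tagging value, as in this sketch (AWS-SDK-v2-style builder assumed; bucket and key names are hypothetical):

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.TaggingDirective;

public class TaggingDirectiveExample {
    public static void main(String[] args) {
        // The source object may carry tags; the directory-bucket destination
        // cannot. REPLACE with an empty x-amz-tagging value is one of the
        // allowed combinations listed above; relying on the default COPY
        // directive would return 501 Not Implemented.
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("general-purpose-bucket")          // hypothetical
                .sourceKey("logs/app.log")
                .destinationBucket("my-bucket--usw2-az1--x-s3")  // hypothetical
                .destinationKey("logs/app.log")
                .taggingDirective(TaggingDirective.REPLACE)
                .tagging("")                                     // empty tag-set
                .build();

        System.out.println(request.taggingDirectiveAsString()); // "REPLACE"
    }
}
```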
See Also: TaggingDirective

public final String taggingDirectiveAsString()
Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that's provided in the request.
The default value is COPY.
Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set
is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a
501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive
a 501 Not Implemented response in any of the following situations:
When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.
When you attempt to REPLACE the tag-set of a source object and set a non-empty value to
x-amz-tagging.
When you don't set the x-amz-tagging-directive header and the source object has non-empty tags. This
is because the default value of x-amz-tagging-directive is COPY.
Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the
following situations are allowed:
When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a
general purpose bucket. It copies an empty tag-set to the destination object.
When you attempt to REPLACE the tag-set of a directory bucket source object and set the
x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty
tags and set the x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a directory bucket source object and don't set the
x-amz-tagging value of the directory bucket destination object. This is because the default value of
x-amz-tagging is the empty value.
If the service returns an enum value that is not available in the current SDK version, taggingDirective
will return TaggingDirective.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from taggingDirectiveAsString().
See Also: TaggingDirective

public final ServerSideEncryption serverSideEncryption()
The server-side encryption algorithm used when storing this object in Amazon S3. Requests with unrecognized or
unsupported values won't write a destination object and will receive a 400 Bad Request response.
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a different default encryption configuration, Amazon S3 uses the corresponding encryption key to encrypt the target object copy.
With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption in the Amazon S3 User Guide.
General purpose buckets
For general purpose buckets, there are the following supported options for server-side encryption: server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), and server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.
When you perform a CopyObject operation, if you want to use a different type of encryption setting
for the target object, you can specify appropriate encryption-related headers to encrypt the target object with
an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is
different from the default encryption configuration of the destination bucket, the encryption setting in your
request takes precedence.
Directory buckets
For directory buckets, there are only two supported options for server-side encryption: server-side encryption
with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (
aws:kms). We recommend that the bucket's default encryption uses the desired encryption
configuration and you don't override the bucket default encryption in your CreateSession requests or
PUT object requests. Then, new objects are automatically encrypted with the desired encryption
settings. For more information, see Protecting data
with server-side encryption in the Amazon S3 User Guide. For more information about the encryption
overriding behaviors in directory buckets, see Specifying
server-side encryption with KMS for new object uploads.
To encrypt new object copies to a directory bucket with SSE-KMS, we recommend you specify SSE-KMS as the
directory bucket's default encryption configuration with a KMS key (specifically, a customer managed
key). The Amazon Web Services
managed key (aws/s3) isn't supported. Your SSE-KMS configuration can only support 1 customer managed key
per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you
can't override the customer managed key for the bucket's SSE-KMS configuration. Then, when you perform a
CopyObject operation and want to specify server-side encryption settings for new object copies with
SSE-KMS in the encryption-related request headers, you must ensure the encryption key is the same customer
managed key that you specified for the directory bucket's default encryption configuration.
If the service returns an enum value that is not available in the current SDK version,
serverSideEncryption will return ServerSideEncryption.UNKNOWN_TO_SDK_VERSION. The raw value
returned by the service is available from serverSideEncryptionAsString().
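The precedence rule above can be exercised by requesting SSE-KMS explicitly on the copy, as in this sketch (AWS-SDK-v2-style builder assumed; bucket and key names are hypothetical):

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;

public class SseCopyExample {
    public static void main(String[] args) {
        // Request SSE-KMS for the new object copy. Because the request
        // specifies an encryption setting, it takes precedence over the
        // destination bucket's default encryption configuration.
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("source-bucket")            // hypothetical names
                .sourceKey("data.bin")
                .destinationBucket("dest-bucket")
                .destinationKey("data.bin")
                .serverSideEncryption(ServerSideEncryption.AWS_KMS)
                .build();

        System.out.println(request.serverSideEncryptionAsString()); // "aws:kms"
    }
}
```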
See Also: ServerSideEncryption

public final String serverSideEncryptionAsString()
The server-side encryption algorithm used when storing this object in Amazon S3. Requests with unrecognized or
unsupported values won't write a destination object and will receive a 400 Bad Request response.
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a different default encryption configuration, Amazon S3 uses the corresponding encryption key to encrypt the target object copy.
With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption in the Amazon S3 User Guide.
General purpose buckets
For general purpose buckets, there are the following supported options for server-side encryption: server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), and server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.
When you perform a CopyObject operation, if you want to use a different type of encryption setting
for the target object, you can specify appropriate encryption-related headers to encrypt the target object with
an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is
different from the default encryption configuration of the destination bucket, the encryption setting in your
request takes precedence.
Directory buckets
For directory buckets, there are only two supported options for server-side encryption: server-side encryption
with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (
aws:kms). We recommend that the bucket's default encryption uses the desired encryption
configuration and you don't override the bucket default encryption in your CreateSession requests or
PUT object requests. Then, new objects are automatically encrypted with the desired encryption
settings. For more information, see Protecting data
with server-side encryption in the Amazon S3 User Guide. For more information about the encryption
overriding behaviors in directory buckets, see Specifying
server-side encryption with KMS for new object uploads.
To encrypt new object copies to a directory bucket with SSE-KMS, we recommend you specify SSE-KMS as the
directory bucket's default encryption configuration with a KMS key (specifically, a customer managed
key). The Amazon Web Services
managed key (aws/s3) isn't supported. Your SSE-KMS configuration can only support 1 customer managed key
per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you
can't override the customer managed key for the bucket's SSE-KMS configuration. Then, when you perform a
CopyObject operation and want to specify server-side encryption settings for new object copies with
SSE-KMS in the encryption-related request headers, you must ensure the encryption key is the same customer
managed key that you specified for the directory bucket's default encryption configuration.
If the service returns an enum value that is not available in the current SDK version,
serverSideEncryption will return ServerSideEncryption.UNKNOWN_TO_SDK_VERSION. The raw value
returned by the service is available from serverSideEncryptionAsString().
See Also: ServerSideEncryption

public final StorageClass storageClass()
If the x-amz-storage-class header is not used, the copied object will be stored in the
STANDARD Storage Class by default. The STANDARD storage class provides high durability
and high availability. Depending on performance needs, you can specify a different Storage Class.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone
storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage
class) in Dedicated Local Zones. Unsupported storage class values won't write a destination object and will
respond with the HTTP status code 400 Bad Request.
Amazon S3 on Outposts - S3 on Outposts only uses the OUTPOSTS Storage Class.
You can use the CopyObject action to change the storage class of an object that is already stored in
Amazon S3 by using the x-amz-storage-class header. For more information, see Storage Classes in the
Amazon S3 User Guide.
Before using an object as a source object for the copy operation, you must restore a copy of it if it meets any of the following conditions:
The storage class of the source object is GLACIER or DEEP_ARCHIVE.
The storage class of the source object is INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is Archive Access or Deep Archive Access.
For more information, see RestoreObject and Copying Objects in the Amazon S3 User Guide.
If the service returns an enum value that is not available in the current SDK version, storageClass will
return StorageClass.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
storageClassAsString().
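A common use of the x-amz-storage-class header described above is an in-place copy that transitions an object's storage class, sketched here (AWS-SDK-v2-style builder assumed; bucket and key names are hypothetical):

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.StorageClass;

public class StorageClassCopyExample {
    public static void main(String[] args) {
        // Copy an object onto itself to move it out of the default STANDARD
        // class. Source and destination are the same bucket and key.
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("my-bucket")                // hypothetical names
                .sourceKey("archive/2020.tar")
                .destinationBucket("my-bucket")
                .destinationKey("archive/2020.tar")
                .storageClass(StorageClass.STANDARD_IA)
                .build();

        System.out.println(request.storageClassAsString()); // "STANDARD_IA"
    }
}
```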
See Also: StorageClass

public final String storageClassAsString()
If the x-amz-storage-class header is not used, the copied object will be stored in the
STANDARD Storage Class by default. The STANDARD storage class provides high durability
and high availability. Depending on performance needs, you can specify a different Storage Class.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone
storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage
class) in Dedicated Local Zones. Unsupported storage class values won't write a destination object and will
respond with the HTTP status code 400 Bad Request.
Amazon S3 on Outposts - S3 on Outposts only uses the OUTPOSTS Storage Class.
You can use the CopyObject action to change the storage class of an object that is already stored in
Amazon S3 by using the x-amz-storage-class header. For more information, see Storage Classes in the
Amazon S3 User Guide.
Before using an object as a source object for the copy operation, you must restore a copy of it if it meets any of the following conditions:
The storage class of the source object is GLACIER or DEEP_ARCHIVE.
The storage class of the source object is INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is Archive Access or Deep Archive Access.
For more information, see RestoreObject and Copying Objects in the Amazon S3 User Guide.
If the service returns an enum value that is not available in the current SDK version, storageClass will
return StorageClass.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
storageClassAsString().
See Also: StorageClass

public final String websiteRedirectLocation()
If the destination bucket is configured as a website, redirects requests for this object copy to another object
in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. This
value is unique to each object and is not copied when using the x-amz-metadata-directive header.
Instead, you may opt to provide this header in combination with the x-amz-metadata-directive header.
This functionality is not supported for directory buckets.
public final String sseCustomerAlgorithm()
Specifies the algorithm to use when encrypting the object (for example, AES256).
When you perform a CopyObject operation, if you want to use a different type of encryption setting
for the target object, you can specify appropriate encryption-related headers to encrypt the target object with
an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is
different from the default encryption configuration of the destination bucket, the encryption setting in your
request takes precedence.
This functionality is not supported when the destination bucket is a directory bucket.
public final String sseCustomerKey()
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to
store the object and then it is discarded. Amazon S3 does not store the encryption key. The key must be
appropriate for use with the algorithm specified in the
x-amz-server-side-encryption-customer-algorithm header.
This functionality is not supported when the destination bucket is a directory bucket.
public final String sseCustomerKeyMD5()
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported when the destination bucket is a directory bucket.
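The three SSE-C headers above travel together: the algorithm, the base64-encoded key, and the base64-encoded MD5 digest of the raw key bytes. A sketch, assuming the AWS-SDK-v2-style builder (bucket and key names are hypothetical):

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SseCustomerKeyExample {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit customer-provided key. In practice you must
        // persist this key yourself: Amazon S3 uses it and discards it.
        byte[] key = new byte[32];
        new SecureRandom().nextBytes(key);

        String keyB64 = Base64.getEncoder().encodeToString(key);
        // MD5 digest of the raw key bytes, base64-encoded, per RFC 1321 as
        // required by x-amz-server-side-encryption-customer-key-MD5.
        String keyMd5B64 = Base64.getEncoder().encodeToString(
                MessageDigest.getInstance("MD5").digest(key));

        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("source-bucket")            // hypothetical names
                .sourceKey("secret.dat")
                .destinationBucket("dest-bucket")
                .destinationKey("secret.dat")
                .sseCustomerAlgorithm("AES256")
                .sseCustomerKey(keyB64)
                .sseCustomerKeyMD5(keyMd5B64)             // integrity check
                .build();

        System.out.println(request.sseCustomerAlgorithm()); // "AES256"
    }
}
```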
public final String ssekmsKeyId()
Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an object protected by KMS will fail if they're not made via SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 User Guide.
Directory buckets - To encrypt data using SSE-KMS, we recommend setting the
x-amz-server-side-encryption header to aws:kms. Then, the
x-amz-server-side-encryption-aws-kms-key-id header implicitly uses the bucket's default KMS customer
managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id
header, it must match the bucket's default customer managed key (using key ID or ARN, not alias). Your SSE-KMS
configuration can only support 1 customer managed key
per directory bucket's lifetime. The Amazon Web Services
managed key (aws/s3) isn't supported. Incorrect key specification results in an HTTP
400 Bad Request error.
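A minimal sketch of an SSE-KMS copy request, assuming the AWS SDK for Java v2 package layout (software.amazon.awssdk.services.s3.model), which this generated request class mirrors; the bucket names and KMS key ARN are hypothetical.

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;

public class SseKmsCopySketch {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("amzn-s3-demo-source-bucket")
                .sourceKey("reports/summary.csv")
                .destinationBucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
                .destinationKey("reports/summary.csv")
                // Sends x-amz-server-side-encryption: aws:kms
                .serverSideEncryption(ServerSideEncryption.AWS_KMS)
                // For a directory bucket this must match the bucket's default
                // customer managed key (key ID or ARN, not an alias).
                .ssekmsKeyId("arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
                .build();

        System.out.println(request.serverSideEncryptionAsString());
    }
}
```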
public final String ssekmsEncryptionContext()
Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for the destination object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.
General purpose buckets - This value must be explicitly added to specify encryption context for
CopyObject requests if you want an additional encryption context for your destination object. The
additional encryption context of the source object won't be copied to the destination object. For more
information, see Encryption context in the Amazon S3 User Guide.
Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
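To illustrate the header encoding described above (the key-value pair is hypothetical), the encryption context is plain JSON, base64-encoded as UTF-8. Only the JDK standard library is used here.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KmsEncryptionContextHeader {
    public static void main(String[] args) {
        // Hypothetical encryption-context pair; the header carries
        // base64-encoded UTF-8 JSON of key-value pairs.
        String json = "{\"project\":\"alpha\"}";
        String headerValue = Base64.getEncoder()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));

        // Decoding the header value round-trips to the original JSON.
        String decoded = new String(Base64.getDecoder().decode(headerValue),
                StandardCharsets.UTF_8);
        System.out.println(headerValue);
    }
}
```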
public final Boolean bucketKeyEnabled()
Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS). If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object.
Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with
SSE-KMS. Specifying this header with a COPY action doesn’t affect bucket-level settings for S3 Bucket Key.
For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.
Directory buckets - S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
public final String copySourceSSECustomerAlgorithm()
Specifies the algorithm to use when decrypting the source object (for example, AES256).
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
public final String copySourceSSECustomerKey()
Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be the same one that was used when the source object was created.
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
public final String copySourceSSECustomerKeyMD5()
Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
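The three copy-source SSE-C values above can be supplied together as follows; a sketch assuming the AWS SDK for Java v2 package layout, with hypothetical bucket and key names, and an all-zero key standing in for the real key material.

```java
import java.security.MessageDigest;
import java.util.Base64;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

public class SseCSourceCopySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical 256-bit key -- it must be the same key that was used
        // when the source object was originally written with SSE-C.
        byte[] sourceKey = new byte[32]; // all-zero bytes, for illustration only
        String sourceKeyB64 = Base64.getEncoder().encodeToString(sourceKey);
        String sourceKeyMd5 = Base64.getEncoder()
                .encodeToString(MessageDigest.getInstance("MD5").digest(sourceKey));

        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("amzn-s3-demo-source-bucket")
                .sourceKey("data/input.bin")
                .destinationBucket("amzn-s3-demo-destination-bucket")
                .destinationKey("data/input.bin")
                // Decryption info so S3 can read the SSE-C source object.
                .copySourceSSECustomerAlgorithm("AES256")
                .copySourceSSECustomerKey(sourceKeyB64)
                .copySourceSSECustomerKeyMD5(sourceKeyMd5)
                .build();

        System.out.println(request.copySourceSSECustomerAlgorithm());
    }
}
```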
public final RequestPayer requestPayer()
If the service returns an enum value that is not available in the current SDK version, requestPayer will
return RequestPayer.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
requestPayerAsString().
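The enum/UNKNOWN_TO_SDK_VERSION pattern above can be sketched as follows (package names assume the AWS SDK for Java v2 layout that this generated class mirrors):

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.RequestPayer;

public class RequestPayerEnumSketch {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .requestPayer(RequestPayer.REQUESTER)
                .build();

        RequestPayer payer = request.requestPayer();
        String raw = request.requestPayerAsString();

        // A value this SDK version does not model maps to UNKNOWN_TO_SDK_VERSION,
        // but the raw service string remains available via the AsString accessor.
        if (payer == RequestPayer.UNKNOWN_TO_SDK_VERSION) {
            System.out.println("Unmodeled requestPayer value: " + raw);
        }
    }
}
```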
public final String requestPayerAsString()
If the service returns an enum value that is not available in the current SDK version, requestPayer will
return RequestPayer.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from
requestPayerAsString().
public final String tagging()
The tag-set for the object copy in the destination bucket. This value must be used in conjunction with the
x-amz-tagging-directive if you choose REPLACE for the
x-amz-tagging-directive. If you choose COPY for the
x-amz-tagging-directive, you don't need to set the x-amz-tagging header, because the
tag-set will be copied from the source object directly. The tag-set must be encoded as URL Query parameters.
The default value is the empty value.
Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set
is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a
501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive
a 501 Not Implemented response in any of the following situations:
When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.
When you attempt to REPLACE the tag-set of a source object and set a non-empty value to
x-amz-tagging.
When you don't set the x-amz-tagging-directive header and the source object has non-empty tags. This
is because the default value of x-amz-tagging-directive is COPY.
Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the
following situations are allowed:
When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a
general purpose bucket. It copies an empty tag-set to the destination object.
When you attempt to REPLACE the tag-set of a directory bucket source object and set the
x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty
tags and set the x-amz-tagging value of the directory bucket destination object to empty.
When you attempt to REPLACE the tag-set of a directory bucket source object and don't set the
x-amz-tagging value of the directory bucket destination object. This is because the default value of
x-amz-tagging is the empty value.
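As a sketch of the encoding rule above, the x-amz-tagging value is the tag-set written as URL query parameters; the tag names here are hypothetical, and only the JDK standard library is used.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TaggingHeaderValue {
    public static void main(String[] args) {
        // Hypothetical tag-set for the destination object (general purpose buckets;
        // directory buckets accept only the empty tag-set on CopyObject).
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("stage", "gamma");
        tags.put("team", "data platform");

        // x-amz-tagging: the tag-set encoded as URL query parameters.
        String tagging = tags.entrySet().stream()
                .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                        + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));

        System.out.println(tagging);
    }
}
```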
public final ObjectLockMode objectLockMode()
The Object Lock mode that you want to apply to the object copy.
This functionality is not supported for directory buckets.
If the service returns an enum value that is not available in the current SDK version, objectLockMode
will return ObjectLockMode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from objectLockModeAsString().
public final String objectLockModeAsString()
The Object Lock mode that you want to apply to the object copy.
This functionality is not supported for directory buckets.
If the service returns an enum value that is not available in the current SDK version, objectLockMode
will return ObjectLockMode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available
from objectLockModeAsString().
public final Instant objectLockRetainUntilDate()
The date and time when you want the Object Lock of the object copy to expire.
This functionality is not supported for directory buckets.
public final ObjectLockLegalHoldStatus objectLockLegalHoldStatus()
Specifies whether you want to apply a legal hold to the object copy.
This functionality is not supported for directory buckets.
If the service returns an enum value that is not available in the current SDK version,
objectLockLegalHoldStatus will return ObjectLockLegalHoldStatus.UNKNOWN_TO_SDK_VERSION. The raw
value returned by the service is available from objectLockLegalHoldStatusAsString().
public final String objectLockLegalHoldStatusAsString()
Specifies whether you want to apply a legal hold to the object copy.
This functionality is not supported for directory buckets.
If the service returns an enum value that is not available in the current SDK version,
objectLockLegalHoldStatus will return ObjectLockLegalHoldStatus.UNKNOWN_TO_SDK_VERSION. The raw
value returned by the service is available from objectLockLegalHoldStatusAsString().
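The Object Lock settings described above (mode, retain-until date, legal hold) can be combined in one request, as in this sketch assuming the AWS SDK for Java v2 package layout; bucket and key names are hypothetical.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.services.s3.model.ObjectLockLegalHoldStatus;
import software.amazon.awssdk.services.s3.model.ObjectLockMode;

public class ObjectLockCopySketch {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("amzn-s3-demo-source-bucket")
                .sourceKey("contracts/msa.pdf")
                .destinationBucket("amzn-s3-demo-destination-bucket")
                .destinationKey("contracts/msa.pdf")
                // Retention and legal hold apply to the object copy; none of
                // these settings are supported for directory buckets.
                .objectLockMode(ObjectLockMode.GOVERNANCE)
                .objectLockRetainUntilDate(Instant.now().plus(30, ChronoUnit.DAYS))
                .objectLockLegalHoldStatus(ObjectLockLegalHoldStatus.ON)
                .build();

        System.out.println(request.objectLockModeAsString());
    }
}
```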
public final String expectedBucketOwner()
The account ID of the expected destination bucket owner. If the account ID that you provide does not match the
actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
public final String expectedSourceBucketOwner()
The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual
owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access
denied).
@Deprecated public final String bucket()
Deprecated. Use destinationBucket() instead.
The name of the destination bucket.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style
requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability
Zone or Local Zone). Bucket names must follow the format
bucket-base-name--zone-id--x-s3 (for example,
amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming
restrictions, see Directory bucket
naming rules in the Amazon S3 User Guide.
Copying objects across different Amazon Web Services Regions isn't supported when the source or destination
bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon
Web Services Region. Otherwise, you get an HTTP 400 Bad Request error with the error code
InvalidRequest.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must use the Outpost bucket access
point ARN or the access point alias for the destination bucket. You can only copy objects within the same Outpost
bucket. It's not supported to copy objects across different Amazon Web Services Outposts, between buckets on the
same Outposts, or between Outposts buckets and any other bucket types. For more information about S3 on Outposts,
see What is S3 on Outposts?
in the S3 on Outposts guide. When you use this action with S3 on Outposts through the REST API, you must
direct requests to the S3 on Outposts hostname, in the format
AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com.
The hostname isn't required when you use the Amazon Web Services CLI or SDKs.
public final String destinationBucket()
The name of the destination bucket.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style
requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability
Zone or Local Zone). Bucket names must follow the format
bucket-base-name--zone-id--x-s3 (for example,
amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming
restrictions, see Directory bucket
naming rules in the Amazon S3 User Guide.
Copying objects across different Amazon Web Services Regions isn't supported when the source or destination
bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon
Web Services Region. Otherwise, you get an HTTP 400 Bad Request error with the error code
InvalidRequest.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must use the Outpost bucket access
point ARN or the access point alias for the destination bucket. You can only copy objects within the same Outpost
bucket. It's not supported to copy objects across different Amazon Web Services Outposts, between buckets on the
same Outposts, or between Outposts buckets and any other bucket types. For more information about S3 on Outposts,
see What is S3 on Outposts?
in the S3 on Outposts guide. When you use this action with S3 on Outposts through the REST API, you must
direct requests to the S3 on Outposts hostname, in the format
AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com.
The hostname isn't required when you use the Amazon Web Services CLI or SDKs.
@Deprecated public final String key()
Deprecated. Use destinationKey() instead.
The key of the destination object.
public final String destinationKey()
The key of the destination object.
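Putting the destination and source accessors together, a minimal copy request looks like the sketch below, assuming the AWS SDK for Java v2 package layout that this generated class mirrors; bucket and key names are hypothetical.

```java
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

public class BasicCopySketch {
    public static void main(String[] args) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("amzn-s3-demo-source-bucket")
                .sourceKey("photos/2024/beach.jpg")
                .destinationBucket("amzn-s3-demo-destination-bucket")
                .destinationKey("photos/archive/beach.jpg")
                .build();

        // With a configured client the copy would be executed as:
        //   try (S3Client s3 = S3Client.create()) { s3.copyObject(request); }
        // (requires credentials and network access)
        System.out.println(request.destinationBucket() + "/" + request.destinationKey());
    }
}
```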
public final String sourceBucket()
The sourceBucket, sourceKey, and sourceVersionId parameters must not be used in conjunction with the copySource parameter.
public final String sourceKey()
The sourceBucket, sourceKey, and sourceVersionId parameters must not be used in conjunction with the copySource parameter.
public final String sourceVersionId()
The sourceBucket, sourceKey, and sourceVersionId parameters must not be used in conjunction with the copySource parameter.
public final RetentionDirective retentionDirective()
public final Instant retentionExpirationDate()
public final String retentionLegalHoldId()
public final Long retentionPeriod()
public CopyObjectRequest.Builder toBuilder()
Specified by: toBuilder in interface ToCopyableBuilder&lt;CopyObjectRequest.Builder,CopyObjectRequest&gt;
Overrides: toBuilder in class S3Request
public static CopyObjectRequest.Builder builder()
public static Class<? extends CopyObjectRequest.Builder> serializableBuilderClass()
public final int hashCode()
Overrides: hashCode in class AwsRequest
public final boolean equals(Object obj)
Overrides: equals in class AwsRequest
public final boolean equalsBySdkFields(Object obj)
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
Specified by: equalsBySdkFields in interface SdkPojo
Parameters: obj - the object to be compared with
public final String toString()
public final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
Used to retrieve the value of a field from any class that extends SdkRequest. The field name specified should match the member name from the corresponding service-2.json model specified in the codegen-resources folder for a given service. The class specifies what class to cast the returned value to. If the returned value is also a modeled class, the SdkRequest.getValueForField(String, Class) method will again be available.
Overrides: getValueForField in class SdkRequest
Parameters: fieldName - The name of the member to be retrieved. clazz - The class to cast the returned object to.
public final Map&lt;String,SdkField&lt;?&gt;&gt; sdkFieldNameToField()
Specified by: sdkFieldNameToField in interface SdkPojo
Copyright © 2026. All rights reserved.