DataConnection Modules#

DataConnection#

class ibm_watson_machine_learning.helpers.connections.connections.DataConnection(location=None, connection=None, data_join_node_name=None, data_asset_id=None, connection_asset_id=None, **kwargs)[source]#

Bases: BaseDataConnection

Data Storage Connection class needed for WML training metadata (input data).

Parameters:
  • connection (NFSConnection or ConnectionAsset, optional) – connection parameters of specific type

  • location (Union[S3Location, FSLocation, AssetLocation]) – required location parameters of specific type

  • data_join_node_name (None or str or list[str], optional) –

    name(s) for node(s):

    • None - the data file name will be used as the node name

    • str - it will become the node name

    • list[str] - if multiple names are passed, several nodes will share the same data connection (used for Excel files with multiple sheets)

  • data_asset_id (str, optional) – data asset ID if the DataConnection should point to a data asset
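
Example

A minimal construction sketch; the data asset ID below is a hypothetical placeholder:

from ibm_watson_machine_learning.helpers.connections.connections import DataConnection

data_connection = DataConnection(data_asset_id='<data_asset_id>')  # hypothetical ID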

classmethod from_studio(path)[source]#

Create DataConnections from the credentials stored (connected) in Watson Studio. Only for COS.

Parameters:

path (str) – path in COS bucket to the training dataset

Returns:

list with DataConnection objects

Return type:

list[DataConnection]

Example

data_connections = DataConnection.from_studio(path='iris_dataset.csv')

read(with_holdout_split=False, csv_separator=',', excel_sheet=None, encoding='utf-8', raw=False, binary=False, read_to_file=None, number_of_batch_rows=None, sampling_type=None, sample_size_limit=None, sample_rows_limit=None, sample_percentage_limit=None, **kwargs)[source]#

Download the dataset stored in remote data storage. Returns a batch of up to 1 GB.

Parameters:
  • with_holdout_split (bool, optional) – if True, data will be split into train and holdout datasets, as it was split by AutoAI

  • csv_separator (str, optional) – separator / delimiter for CSV file

  • excel_sheet (str, optional) – Excel sheet name to use; use only when the input is an xlsx file. Support for passing the sheet number is deprecated.

  • encoding (str, optional) – encoding type of the CSV

  • raw (bool, optional) – if False, simple data preprocessing is applied (the same as in the backend); if True, data is not preprocessed

  • binary (bool, optional) – if True, data is retrieved in binary mode and the result is a Python bytes object

  • read_to_file (str, optional) – stream the read data to a file at the path given by this parameter; use it to avoid keeping the data in memory

  • number_of_batch_rows (int, optional) – number of rows to read in each batch when reading from a Flight connection

  • sampling_type (str, optional) – a sampling strategy for reading the data

  • sample_size_limit (int, optional) – upper limit, in bytes, on the total amount of data to download; default: 1 GB

  • sample_rows_limit (int, optional) – upper limit on the number of rows to download

  • sample_percentage_limit (float, optional) – upper limit on the fraction of the whole dataset to download, given as a float between 0 and 1; ignored when sampling_type is set to first_n_records

Note

If more than one of sample_size_limit, sample_rows_limit, and sample_percentage_limit is set, the downloaded data is limited to the lowest threshold; see the sketch below.
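
A hedged sketch of sampled, binary, and streamed reads (data_connection is assumed to be an initialized DataConnection with a WML client set):

# read at most the first 100 000 rows
sample_df = data_connection.read(sampling_type='first_n_records', sample_rows_limit=100000)

# retrieve the raw bytes instead of a pandas.DataFrame
content = data_connection.read(binary=True)

# stream the data to a local file instead of keeping it in memory
data_connection.read(read_to_file='local_copy.csv')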

Returns:

one of:

  • pandas.DataFrame : Xy_train, the dataset from remote data storage

  • Tuple[pandas.DataFrame, pandas.DataFrame, pandas.DataFrame, pandas.DataFrame] : X_train, X_holdout, y_train, y_holdout, the automatic holdout split of the training data (when with_holdout_split=True)

  • Tuple[pandas.DataFrame, pandas.DataFrame] : X_test, y_test, the user-provided test (holdout) data from remote storage

  • bytes object, when the data is read in binary mode

Examples

train_data_connections = optimizer.get_data_connections()

data = train_data_connections[0].read() # all train data

# or

X_train, X_holdout, y_train, y_holdout = train_data_connections[0].read(with_holdout_split=True) # train and holdout data

User-provided train and test data:

optimizer.fit(training_data_reference=[DataConnection],
              training_results_reference=DataConnection,
              test_data_reference=DataConnection)

test_data_connection = optimizer.get_test_data_connections()
X_test, y_test = test_data_connection.read() # only holdout data

# and

train_data_connections = optimizer.get_data_connections()
data = train_data_connections[0].read() # only train data

set_client(wml_client)[source]#

Set an initialized WML client on the connection to enable write and read operations with the service.

Parameters:

wml_client (APIClient) – WML client to connect to service

Example

data_connection.set_client(wml_client)

write(data, remote_name=None, **kwargs)[source]#

Upload a file to remote data storage.

Parameters:
  • data (str or pandas.DataFrame) – local path to the dataset or a pandas.DataFrame with the data

  • remote_name (str) – name under which the dataset should be stored in the remote data storage
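
Example

A short usage sketch; the client, file, and dataset names are assumptions:

data_connection.set_client(wml_client)  # enable write operations against the service
data_connection.write(data='local_dataset.csv', remote_name='dataset.csv')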

S3Location#

class ibm_watson_machine_learning.helpers.connections.connections.S3Location(bucket, path, **kwargs)[source]#

Bases: BaseLocation

Connection class to COS data storage in S3 format.

Parameters:
  • bucket (str) – COS bucket name

  • path (str) – COS data path in the bucket

  • excel_sheet (str, optional) – name of the Excel sheet, if the pointed dataset is an Excel file used for batch deployment scoring

  • model_location (str, optional) – path to the pipeline model in COS

  • training_status (str, optional) – path to the training status JSON in COS

get_location()[source]#
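
Example

A hedged construction sketch; the bucket name, path, and connection asset ID are hypothetical placeholders:

from ibm_watson_machine_learning.helpers.connections.connections import DataConnection, S3Location

training_data_connection = DataConnection(
    connection_asset_id='<connection_asset_id>',  # hypothetical COS connection asset
    location=S3Location(bucket='my-training-bucket',
                        path='iris_dataset.csv'))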

CloudAssetLocation#

class ibm_watson_machine_learning.helpers.connections.connections.CloudAssetLocation(asset_id)[source]#

Bases: AssetLocation

Connection class to data assets used as input data references for a batch deployment job on Cloud.

Parameters:

asset_id (str) – asset ID of the file loaded into a space on Cloud
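
Example

A minimal sketch of pointing a batch deployment job at a data asset; the asset ID is a hypothetical placeholder:

from ibm_watson_machine_learning.helpers.connections.connections import CloudAssetLocation, DataConnection

input_connection = DataConnection(location=CloudAssetLocation(asset_id='<data_asset_id>'))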

DeploymentOutputAssetLocation#

class ibm_watson_machine_learning.helpers.connections.connections.DeploymentOutputAssetLocation(name, description='')[source]#

Bases: BaseLocation

Connection class to data assets where the output of a batch deployment will be stored.

Parameters:
  • name (str) – name of the .csv file which will be saved as a data asset

  • description (str, optional) – description of the data asset
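
Example

A hedged sketch of declaring where batch deployment output should be stored; the file name and description are assumptions:

from ibm_watson_machine_learning.helpers.connections.connections import DataConnection, DeploymentOutputAssetLocation

output_connection = DataConnection(
    location=DeploymentOutputAssetLocation(name='predictions.csv',
                                           description='batch scoring output'))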