qbiocode package#
Subpackages#
- qbiocode.data_generation package
- Submodules
- qbiocode.data_generation.generator module
- qbiocode.data_generation.make_circles module
- qbiocode.data_generation.make_class module
- qbiocode.data_generation.make_moons module
- qbiocode.data_generation.make_s_curve module
- qbiocode.data_generation.make_spheres module
- qbiocode.data_generation.make_spirals module
- qbiocode.data_generation.make_swiss_roll module
- Module contents
- qbiocode.embeddings package
- qbiocode.evaluation package
- qbiocode.learning package
- Submodules
- qbiocode.learning.compute_dt module
- qbiocode.learning.compute_lr module
- qbiocode.learning.compute_mlp module
- qbiocode.learning.compute_nb module
- qbiocode.learning.compute_pqk module
- qbiocode.learning.compute_qnn module
- qbiocode.learning.compute_qsvc module
- qbiocode.learning.compute_rf module
- qbiocode.learning.compute_svc module
- qbiocode.learning.compute_vqc module
- qbiocode.learning.compute_xgb module
- Module contents
- Machine Learning Module for QBioCode
compute_dt(), compute_dt_opt(), compute_lr(), compute_lr_opt(), compute_mlp(), compute_mlp_opt(), compute_nb(), compute_nb_opt(), compute_pqk(), compute_qnn(), compute_qsvc(), compute_rf(), compute_rf_opt(), compute_svc(), compute_svc_opt(), compute_vqc(), compute_xgb(), compute_xgb_opt()
- qbiocode.utils package
- Submodules
- qbiocode.utils.combine_evals_results module
- qbiocode.utils.dataset_checkpoint module
- qbiocode.utils.find_duplicates module
- qbiocode.utils.find_string module
- qbiocode.utils.helper_fn module
- qbiocode.utils.ibm_account module
- qbiocode.utils.qc_winner_finder module
- qbiocode.utils.qutils module
- Module contents
- Utilities Module for QBioCode
checkpoint_restart(), combine_results(), feature_encoding(), find_duplicate_files(), find_string_in_files(), generate_qml_experiment_configs(), get_ansatz(), get_backend_session(), get_creds(), get_estimator(), get_feature_map(), get_optimizer(), get_sampler(), instantiate_runtime_service(), qml_winner(), scaler_fn(), track_progress()
- qbiocode.visualization package
Submodules#
qbiocode.version module#
Module contents#
QBioCode: Quantum Machine Learning for Biological Data Analysis#
QBioCode is a comprehensive Python package for quantum machine learning (QML) research and applications in biological data analysis. It provides tools for data generation, classical and quantum machine learning algorithms, evaluation metrics, and visualization utilities.
Main Modules#
learning: Classical and quantum machine learning algorithms
embeddings: Feature embedding and encoding methods
evaluation: Model and dataset evaluation tools
data_generation: Synthetic dataset generators
visualization: Result visualization and correlation analysis
utils: Helper functions and utilities
Quick Start#
>>> from qbiocode import compute_rf, generate_data
>>> # Generate synthetic data
>>> generate_data(type_of_data='circles', save_path='data/circles')
>>> # Train a random forest model
>>> results = compute_rf(X_train, y_train, X_test, y_test)
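The Quick Start assumes X_train, X_test, y_train, y_test already exist; one common way to produce them is a stratified split, sketched below with synthetic stand-in data (the arrays here are illustrative, not part of qbiocode):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-ins for features and binary labels loaded from your dataset
X = np.random.RandomState(0).rand(40, 4)
y = np.array([0, 1] * 20)

# Stratified 75/25 split so both classes appear in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
```

The resulting arrays plug directly into the `compute_*` functions shown above.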
- checkpoint_restart(previous_results_dir, completion_marker='RawDataEvaluation.csv', prefix_length=8, verbose=False)[source]#
Identify completed datasets from a previous run to enable checkpoint restart.
This function scans a results directory to find which datasets were fully processed in a previous run by checking for the presence of a completion marker file. This allows you to resume interrupted batch processing jobs without reprocessing completed datasets.
The function assumes that each dataset has its own subdirectory in the results directory, and that a specific file (completion marker) is created when processing completes successfully.
- Parameters:
previous_results_dir (str) – Path to the directory containing results from the previous (interrupted) run. Each subdirectory should correspond to one dataset.
completion_marker (str, optional) – Name of the file that indicates successful completion of a dataset. Default is ‘RawDataEvaluation.csv’ (used by QProfiler).
prefix_length (int, optional) – Number of characters to strip from the beginning of directory names to get the dataset name. Default is 8 (strips ‘dataset_’ prefix used by QProfiler). Set to 0 to use the full directory name.
verbose (bool, optional) – If True, print the list of completed datasets and count. Default is False.
- Returns:
List of dataset names that were fully processed in the previous run. These can be excluded when restarting the batch job.
- Return type:
List[str]
Examples
Basic usage with QProfiler default settings:
>>> completed = checkpoint_restart('/path/to/previous_results')
>>> print(f"Found {len(completed)} completed datasets")
Resume processing only incomplete datasets:
>>> import os
>>> all_datasets = [f for f in os.listdir('/path/to/data') if f.endswith('.csv')]
>>> completed = checkpoint_restart('/path/to/previous_results')
>>> remaining = [d for d in all_datasets if d not in completed]
>>> print(f"Need to process {len(remaining)} more datasets")
Custom completion marker and no prefix stripping:
>>> completed = checkpoint_restart(
...     '/path/to/results',
...     completion_marker='ModelResults.csv',
...     prefix_length=0,
...     verbose=True
... )
Integration with QProfiler batch processing:
>>> from qbiocode.utils.dataset_checkpoint import checkpoint_restart
>>>
>>> # Get list of completed datasets from previous run
>>> completed_datasets = checkpoint_restart(
...     previous_results_dir='./previous_run_results',
...     verbose=True
... )
>>>
>>> # Get all datasets to process
>>> all_datasets = [f.replace('.csv', '') for f in os.listdir('./data')
...                 if f.endswith('.csv')]
>>>
>>> # Filter to only incomplete datasets
>>> datasets_to_process = [d for d in all_datasets if d not in completed_datasets]
>>>
>>> # Run QProfiler only on remaining datasets
>>> # (use datasets_to_process in your batch processing loop)
Notes
The function only checks for the presence of the completion marker file, not its contents or validity
When restarting, you may need to manually combine results from the previous and current runs
Directory names are expected to have a consistent prefix (e.g., ‘dataset_’) that can be stripped using the prefix_length parameter
Non-directory entries in previous_results_dir are ignored
See also
qbiocode.evaluation.model_run – Main QProfiler batch processing function
- compute_dt(X_train, X_test, y_train, y_test, args, verbose=False, model='Decision Tree', data_key='', criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0, monotonic_cst=None)[source]#
This function generates a model using a Decision Tree (DT) Classifier method as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from config.yaml.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used, default is ‘Decision Tree’.
data_key (str) – Key for the dataset, if applicable.
criterion (str) – The function to measure the quality of a split. Default is ‘gini’.
splitter (str) – The strategy used to choose the split at each node. Default is ‘best’.
max_depth (int or None) – The maximum depth of the tree. Default is None.
min_samples_split (int) – The minimum number of samples required to split an internal node. Default is 2.
min_samples_leaf (int) – The minimum number of samples required to be at a leaf node. Default is 1.
min_weight_fraction_leaf (float) – The minimum weighted fraction of the sum total of weights required to be at a leaf node. Default is 0.0.
max_features (int, float, str or None) – The number of features to consider when looking for the best split. Default is None.
random_state (int or None) – Controls the randomness of the estimator. Default is None.
max_leaf_nodes (int or None) – Grow a tree with max_leaf_nodes in best-first fashion. Default is None.
min_impurity_decrease (float) – A node will be split if this split induces a decrease of the impurity greater than or equal to this value. Default is 0.0.
class_weight (dict or 'balanced' or None) – Weights associated with classes in the form {class_label: weight}. Default is None.
ccp_alpha (float) – Complexity parameter used for Minimal Cost-Complexity Pruning. Default is 0.0.
monotonic_cst (array-like of int or None) – Monotonic constraints for tree nodes, if applicable. Default is None.
- Returns:
A dictionary containing the evaluation metrics, model parameters, and time taken for training and validation.
- Return type:
modeleval (dict)
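Per the description, the wrapper follows a plain scikit-learn fit/predict/score cycle. The sketch below mirrors that pattern; the returned dictionary's shape and keys are an illustration, not the exact modeleval format:

```python
import time
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score

def dt_sketch(X_train, X_test, y_train, y_test, **params):
    # Fit a decision tree, time the run, and bundle evaluation metrics
    start = time.time()
    clf = DecisionTreeClassifier(**params)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "f1_score": f1_score(y_test, y_pred, average="weighted"),
        "time": time.time() - start,
        "params": clf.get_params(),
    }
```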
- compute_dt_opt(X_train, X_test, y_train, y_test, args, verbose=False, model='Decision Tree', cv=5, criterion=[], max_depth=[], min_samples_split=[], min_samples_leaf=[], max_features=[])[source]#
This function also generates a model using a Decision Tree (DT) Classifier method as implemented in scikit-learn. The difference here is that this function runs a grid search. The range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets, without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from config.yaml.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used, default is ‘Decision Tree’.
cv (int) – Number of cross-validation folds. Default is 5.
criterion (list) – List of criteria to consider for splitting. Default is empty list.
max_depth (list) – List of maximum depths to consider. Default is empty list.
min_samples_split (list) – List of minimum samples required to split an internal node. Default is empty list.
min_samples_leaf (list) – List of minimum samples required to be at a leaf node. Default is empty list.
max_features (list) – List of maximum features to consider when looking for the best split. Default is empty list.
- Returns:
A dictionary containing the evaluation metrics, best parameters, and time taken for training and validation.
- Return type:
modeleval (dict)
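The grid-search variant can be approximated with scikit-learn's GridSearchCV. In this hedged sketch the candidate grids are illustrative stand-ins for values normally read from config.yaml:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def dt_opt_sketch(X_train, y_train, cv=5, **grids):
    # grids maps parameter names to candidate lists, e.g. max_depth=[2, 4];
    # empty lists (the documented defaults) are skipped
    search = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid={k: v for k, v in grids.items() if v},
        cv=cv, scoring="f1_weighted")
    search.fit(X_train, y_train)
    return search.best_params_, search.best_estimator_
```

The best_params_ mapping plays the role of the best_params the qbiocode function returns for reuse on similar datasets.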
- compute_lr(X_train, X_test, y_train, y_test, args, model='Logistic Regression', data_key='', penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='saga', max_iter=10000, multi_class='deprecated', verbose=False, warm_start=False, n_jobs=None, l1_ratio=None)[source]#
This function generates a model using a Logistic Regression (LR) method as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training data features.
X_test (numpy.ndarray) – Test data features.
y_train (numpy.ndarray) – Training data labels.
y_test (numpy.ndarray) – Test data labels.
args (dict) – Additional arguments, such as dataset name and other configurations.
model (str) – Name of the model being used, default is ‘Logistic Regression’.
data_key (str) – Key for the dataset, default is an empty string.
penalty (str) – Regularization penalty, default is ‘l2’.
dual (bool) – Dual formulation, default is False.
tol (float) – Tolerance for stopping criteria, default is 0.0001.
C (float) – Inverse of regularization strength, default is 1.0.
fit_intercept (bool) – Whether to fit the intercept, default is True.
intercept_scaling (float) – Scaling factor for the intercept, default is 1.
class_weight (dict or None) – Weights associated with classes, default is None.
random_state (int or None) – Random seed for reproducibility, default is None.
solver (str) – Algorithm to use in the optimization problem, default is ‘saga’.
max_iter (int) – Maximum number of iterations for convergence, default is 10000.
multi_class (str) – Multi-class option, deprecated in this context.
verbose (bool) – Whether to print detailed logs, default is False.
warm_start (bool) – Whether to reuse the solution of the previous call to fit as initialization, default is False.
n_jobs (int or None) – Number of jobs to run in parallel for both fit and predict, default is None which means 1 unless in a joblib.parallel_backend context.
l1_ratio (float or None) – The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty=’elasticnet’, default is None.
- Returns:
A dictionary containing the evaluation metrics, model parameters, and time taken for training and validation.
- Return type:
modeleval (dict)
- compute_lr_opt(X_train, X_test, y_train, y_test, args, model='Logistic Regression', cv=5, penalty=[], C=[], solver=[], verbose=False, max_iter=[])[source]#
This function also generates a model using a Logistic Regression (LR) method as implemented in scikit-learn. The difference here is that this function runs a grid search. The range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets, without having to run the grid search. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training data features.
X_test (numpy.ndarray) – Test data features.
y_train (numpy.ndarray) – Training data labels.
y_test (numpy.ndarray) – Test data labels.
args (dict) – Additional arguments, such as dataset name and other configurations.
model (str) – Name of the model being used, default is ‘Logistic Regression’.
cv (int) – Number of cross-validation folds, default is 5.
penalty (list) – List of penalties to try, default is an empty list.
C (list) – List of inverse regularization strengths to try, default is an empty list.
solver (list) – List of solvers to try, default is an empty list.
verbose (bool) – Whether to print detailed logs, default is False.
max_iter (list) – List of maximum iterations to try, default is an empty list.
- Returns:
A dictionary containing the evaluation metrics, best parameters, and time taken for training and validation.
- Return type:
modeleval (dict)
- compute_mlp(X_train, X_test, y_train, y_test, args, verbose=False, model='Multi-layer Perceptron', data_key='', hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=10000, shuffle=True, random_state=None, tol=0.0001, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)[source]#
This function generates a model using a Multi-layer Perceptron (MLP) neural network as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training features.
X_test (numpy.ndarray) – Test features.
y_train (numpy.ndarray) – Training labels.
y_test (numpy.ndarray) – Test labels.
args (dict) – Additional arguments, such as config parameters.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used.
data_key (str) – Key for the dataset, if applicable.
hidden_layer_sizes (tuple) – The ith element represents the number of neurons in the ith hidden layer.
activation (str) – Activation function for the hidden layer.
solver (str) – The solver for weight optimization.
alpha (float) – L2 penalty (regularization term) parameter.
batch_size (int or str) – Size of minibatches for stochastic optimizers.
learning_rate (str) – Learning rate schedule for weight updates.
learning_rate_init (float) – Initial learning rate used.
power_t (float) – The exponent for inverse scaling learning rate.
max_iter (int) – Maximum number of iterations.
shuffle (bool) – Whether to shuffle samples in each iteration.
random_state (int or None) – Random seed for reproducibility.
tol (float) – Tolerance for stopping criteria.
warm_start (bool) – If True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution.
momentum (float) – Momentum for gradient descent update.
nesterovs_momentum (bool) – Whether to use Nesterov’s momentum or not.
early_stopping (bool) – Whether to use early stopping to terminate training when validation score is not improving.
validation_fraction (float) – Proportion of training data to set aside as validation set for early stopping.
beta_1 (float) – Exponential decay rate for estimates of the first moment vector in the Adam optimizer.
beta_2 (float) – Exponential decay rate for estimates of the second moment vector in the Adam optimizer.
epsilon (float) – Value for numerical stability in the Adam optimizer.
n_iter_no_change (int) – Number of iterations with no improvement after which training will be stopped.
max_fun (int) – Maximum number of function evaluations.
- Returns:
A dictionary containing the evaluation metrics of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model, along with the model parameters.
- Return type:
modeleval (dict)
- compute_mlp_opt(X_train, X_test, y_train, y_test, args, verbose=False, cv=5, model='Multi-layer Perceptron', hidden_layer_sizes=[], activation=[], max_iter=[], solver=[], alpha=[], learning_rate=[])[source]#
This function also generates a model using a Multi-layer Perceptron (MLP) neural network as implemented in scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html). The difference here is that this function runs a grid search. The range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets, without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training features.
X_test (numpy.ndarray) – Test features.
y_train (numpy.ndarray) – Training labels.
y_test (numpy.ndarray) – Test labels.
args (dict) – Additional arguments, such as config parameters.
verbose (bool) – If True, prints additional information during execution.
cv (int) – Number of cross-validation folds.
model (str) – Name of the model being used.
hidden_layer_sizes (tuple or list) – The ith element represents the number of neurons in the ith hidden layer.
activation (str or list) – Activation function for the hidden layer.
max_iter (int or list) – Maximum number of iterations.
solver (str or list) – The solver for weight optimization.
alpha (float or list) – L2 penalty (regularization term) parameter.
learning_rate (str or list) – Learning rate schedule for weight updates.
- Returns:
A dictionary containing the evaluation metrics of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model, along with the best parameters found during grid search.
- Return type:
modeleval (dict)
- compute_nb(X_train, X_test, y_train, y_test, args, verbose=False, model='Naive Bayes', data_key='', var_smoothing=1e-09)[source]#
This function generates a model using a Gaussian Naive Bayes (NB) Classifier method as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training features.
X_test (numpy.ndarray) – Test features.
y_train (numpy.ndarray) – Training labels.
y_test (numpy.ndarray) – Test labels.
args (dict) – Additional arguments, such as config parameters.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used.
data_key (str) – Key for the dataset, if applicable.
var_smoothing (float) – Portion of the largest variance of all features added to variances for calculation stability.
- Returns:
A dictionary containing the evaluation metrics of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model, along with the model parameters.
- Return type:
modeleval (dict)
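As a concrete illustration of the underlying estimator, here is scikit-learn's GaussianNB with the documented var_smoothing default on toy data (this is the estimator the wrapper builds on, not the qbiocode function itself):

```python
from sklearn.naive_bayes import GaussianNB

# Two well-separated 1-D classes; var_smoothing matches the documented default
clf = GaussianNB(var_smoothing=1e-09)
clf.fit([[0.0], [0.2], [5.0], [5.2]], [0, 0, 1, 1])
pred = clf.predict([[0.1], [5.1]])
```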
- compute_nb_opt(X_train, X_test, y_train, y_test, args, verbose=False, model='Naive Bayes', cv=5, var_smoothing=[1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 0.0001, 0.001, 0.01])[source]#
This function also generates a model using a Gaussian Naive Bayes (NB) Classifier method as implemented in scikit-learn. The difference here is that this function runs a grid search over the var_smoothing parameter. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets, without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (numpy.ndarray) – Training features.
X_test (numpy.ndarray) – Test features.
y_train (numpy.ndarray) – Training labels.
y_test (numpy.ndarray) – Test labels.
args (dict) – Additional arguments, such as config parameters.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used.
cv (int) – Number of cross-validation folds for grid search.
var_smoothing (list) – List of values for the var_smoothing parameter to be tested in grid search.
- Returns:
A dictionary containing the evaluation metrics of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model, along with the best parameters found during grid search.
- Return type:
modeleval (dict)
- compute_pqk(X_train, X_test, y_train, y_test, args, model='PQK', data_key='', verbose=False, encoding='Z', primitive='estimator', entanglement='linear', reps=2, classical_models=None)[source]#
This function generates quantum circuits, computes projections of the data onto these circuits, and evaluates the performance of classical machine learning models on the projected data. It uses a feature map to encode the data into quantum states and then measures the expectation values of Pauli operators to obtain the features. The classical models are trained on the projected training data and evaluated on the projected test data.
This function requires a quantum backend (simulator or real quantum hardware) for execution. It supports various configurations such as encoding methods, entanglement strategies, and repetitions of the feature map. The training and test projections are saved to files and reused if they already exist, avoiding redundant computation.
Part of the main quantum machine learning pipeline (QProfiler.py), this function is intended for supervised learning tasks, leveraging quantum computing to enhance feature extraction and classification performance on complex datasets. It returns the performance results, including accuracy, F1-score, AUC, and runtime, as well as model parameters and other relevant metrics.
- Parameters:
X_train (np.ndarray) – Training data features.
X_test (np.ndarray) – Test data features.
y_train (np.ndarray) – Training data labels.
y_test (np.ndarray) – Test data labels.
args (dict) – Arguments containing backend and other configurations.
model (str) – Model type, default is ‘PQK’.
data_key (str) – Key for the dataset, default is ‘’.
verbose (bool) – If True, print additional information, default is False.
encoding (str) – Encoding method for the quantum circuit, default is ‘Z’.
primitive (str) – Primitive type to use, default is ‘estimator’.
entanglement (str) – Entanglement strategy, default is ‘linear’.
reps (int) – Number of repetitions for the feature map, default is 2.
classical_models (list) – List of classical models to train on quantum projections. Options: ‘rf’, ‘mlp’, ‘svc’, ‘lr’, ‘xgb’. Default is [‘rf’, ‘mlp’, ‘svc’, ‘lr’, ‘xgb’].
- Returns:
A DataFrame containing evaluation metrics and model parameters for all models.
- Return type:
modeleval (pd.DataFrame)
- compute_qnn(X_train, X_test, y_train, y_test, args, model='QNN', data_key='', primitive='sampler', verbose=False, local_optimizer='COBYLA', maxiter=100, encoding='Z', entanglement='linear', reps=2, ansatz_type='amp')[source]#
This function computes a Quantum Neural Network (QNN) model on the provided training data and evaluates it on the test data. It constructs a QNN circuit with a specified feature map and ansatz, optimizes it using a chosen optimizer, and fits the model to the training data. It then predicts the labels for the test data and evaluates the model’s performance. The function returns the performance results, including accuracy, F1-score, AUC, runtime, as well as model parameters, and other relevant metrics.
- Parameters:
X_train (array-like) – Training feature set.
X_test (array-like) – Test feature set.
y_train (array-like) – Training labels.
y_test (array-like) – Test labels.
args (dict) – Dictionary containing configuration parameters for the QNN.
model (str, optional) – Model type. Defaults to ‘QNN’.
data_key (str, optional) – Key for the dataset. Defaults to ‘’.
primitive (Literal['estimator', 'sampler'], optional) – Type of primitive to use. Defaults to ‘sampler’.
verbose (bool, optional) – If True, prints additional information. Defaults to False.
local_optimizer (Literal['COBYLA', 'L_BFGS_B', 'GradientDescent'], optional) – Optimizer to use. Defaults to ‘COBYLA’.
maxiter (int, optional) – Maximum number of iterations for the optimizer. Defaults to 100.
encoding (str, optional) – Feature encoding method. Defaults to ‘Z’.
entanglement (str, optional) – Entanglement strategy for the circuit. Defaults to ‘linear’.
reps (int, optional) – Number of repetitions for the feature map and ansatz. Defaults to 2.
ansatz_type (str, optional) – Type of ansatz to use. Defaults to ‘amp’.
- Returns:
A dictionary containing the evaluation results, including accuracy, runtime, model parameters, and other relevant metrics.
- Return type:
modeleval (dict)
- compute_qsvc(X_train, X_test, y_train, y_test, args, model='QSVC', data_key='', C=1, gamma='scale', pegasos=False, encoding='ZZ', entanglement='linear', primitive='sampler', reps=2, verbose=False, local_optimizer='')[source]#
This function computes a quantum support vector classifier (QSVC) using the Qiskit Machine Learning library. It takes training and testing datasets, along with various parameters to configure the QSVC model. It initializes the quantum feature map, sets up the backend and session, and fits the QSVC model to the training data. It then predicts the labels for the test data and evaluates the model’s performance. The function returns the performance results, including accuracy, F1-score, AUC, runtime, as well as model parameters, and other relevant metrics.
- Parameters:
X_train (np.ndarray) – Training feature set.
X_test (np.ndarray) – Testing feature set.
y_train (np.ndarray) – Training labels.
y_test (np.ndarray) – Testing labels.
args (dict) – Dictionary containing arguments for the quantum backend and other settings.
model (str) – Model type, default is ‘QSVC’.
data_key (str) – Key for the dataset, default is an empty string.
C (float) – Regularization parameter for the SVM, default is 1.
gamma (str or float) – Kernel coefficient, default is ‘scale’.
pegasos (bool) – Whether to use Pegasos QSVC, default is False.
encoding (str) – Feature map encoding type, options are ‘ZZ’, ‘Z’, or ‘P’, default is ‘ZZ’.
entanglement (str) – Entanglement strategy for the feature map, default is ‘linear’.
primitive (str) – Primitive type to use, default is ‘sampler’.
reps (int) – Number of repetitions for the feature map, default is 2.
verbose (bool) – Whether to print additional information, default is False.
local_optimizer (str) – Local optimizer to use, default is an empty string.
- Returns:
A dictionary containing the evaluation results, including accuracy, runtime, model parameters, and other relevant metrics.
- Return type:
modeleval (dict)
- compute_results_correlation(results_df, correlation='spearman', thresh=0.7)[source]#
This function takes as input a Pandas DataFrame containing the results and data evaluations for a given dataset. It computes a Spearman correlation between the data evaluation characteristics (features) and instances where an F1 score was observed above a certain threshold (thresh). The correlation is computed for each model-embedding-dataset combination, and the results are aggregated. The features considered include data characteristics such as ‘Feature_Samples_ratio’ and ‘Intrinsic_Dimension’; the metrics considered include ‘accuracy’, ‘f1_score’, ‘time’, and ‘auc’. The function also calculates the median metric value and the fraction of instances above the specified threshold for each combination. The resulting DataFrame contains the model-embedding-dataset, metric, feature, median metric value, fraction above threshold, and the computed correlation. The function returns the input DataFrame with additional columns for datatype and model_embed_datatype, as well as a new DataFrame containing the computed correlations between metrics and features. This is useful for understanding how different data characteristics relate to model performance metrics, particularly for machine learning models applied across datasets.
- Parameters:
results_df (pd.DataFrame) – A DataFrame containing the results and data evaluations.
correlation (str) – The type of correlation to compute, default is ‘spearman’.
thresh (float) – The threshold for F1 score to consider, default is 0.7.
- Returns:
results_df (pd.DataFrame): The input DataFrame with additional columns for datatype and model_embed_datatype.
correlations_df (pd.DataFrame): A DataFrame containing the computed correlations between metrics and features.
- Return type:
results_df (pd.DataFrame)
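The core computation described above can be sketched with scipy directly. This is an illustrative example, not the package's actual implementation; the column names and the single-combination setup are assumptions for the sake of the sketch.

```python
# Hypothetical sketch of the statistic compute_results_correlation describes:
# for one model-embedding-dataset combination, relate a data characteristic
# to the F1 metric and record the fraction of runs above the threshold.
# Column names here are illustrative, not the package's actual schema.
import pandas as pd
from scipy.stats import spearmanr

results_df = pd.DataFrame({
    "Feature_Samples_ratio": [0.1, 0.2, 0.4, 0.8, 1.6, 3.2],
    "f1_score":              [0.91, 0.88, 0.84, 0.75, 0.66, 0.52],
})
thresh = 0.7

# Fraction of instances above the F1 threshold (aggregated per combination
# in the real function; computed once here for illustration).
frac_above = (results_df["f1_score"] > thresh).mean()

# Spearman rank correlation between the feature and the metric.
rho, pval = spearmanr(results_df["Feature_Samples_ratio"],
                      results_df["f1_score"])
print(f"fraction above threshold: {frac_above:.2f}, spearman rho: {rho:.2f}")
```

With these toy values the ranks are perfectly anti-monotone, so the Spearman correlation is -1.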
- compute_rf(X_train, X_test, y_train, y_test, args, verbose=False, model='Random Forest', data_key='', n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)[source]#
This function generates a model using a Random Forest (RF) Classifier method as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used, default is ‘Random Forest’.
data_key (str) – Key for identifying the dataset, default is an empty string.
n_estimators (int) – Number of trees in the forest, default is 100.
criterion (str) – The function to measure the quality of a split, default is ‘gini’.
max_depth (int or None) – Maximum depth of the tree, default is None.
min_samples_split (int) – Minimum number of samples required to split an internal node, default is 2.
min_samples_leaf (int) – Minimum number of samples required to be at a leaf node, default is 1.
min_weight_fraction_leaf (float) – Minimum weighted fraction of the sum total of weights required to be at a leaf node, default is 0.0.
max_features (str or int or float) – The number of features to consider when looking for the best split, default is ‘sqrt’.
max_leaf_nodes (int or None) – Grow trees with max_leaf_nodes in best-first fashion, default is None.
min_impurity_decrease (float) – A node will be split if this split induces a decrease of the impurity greater than or equal to this value, default is 0.0.
bootstrap (bool) – Whether bootstrap samples are used when building trees, default is True.
oob_score (bool) – Whether to use out-of-bag samples to estimate the generalization accuracy, default is False.
n_jobs (int or None) – Number of jobs to run in parallel for both fit and predict, default is None.
random_state (int or None) – Controls the randomness of the estimator, default is None.
warm_start (bool) – When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, default is False.
class_weight (dict or str or None) – Weights associated with classes in the form {class_label: weight}, default is None.
ccp_alpha (float) – Complexity parameter used for Minimal Cost-Complexity Pruning, default is 0.0.
max_samples (int or float or None) – Number of samples to draw from X to train each base estimator when bootstrap is True, default is None.
monotonic_cst (array-like or None) – Monotonicity constraints to enforce on each feature, default is None.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken to train and validate the model.
- Return type:
modeleval (dict)
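As a point of reference, the scikit-learn estimator that compute_rf wraps (per the description above) can be used directly as follows. The synthetic data and split are illustrative, not part of qbiocode.

```python
# Illustrative use of the underlying scikit-learn RandomForestClassifier
# that compute_rf is described as wrapping; not the qbiocode call itself.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Defaults mirror the documented compute_rf defaults.
clf = RandomForestClassifier(n_estimators=100, criterion="gini",
                             max_features="sqrt", random_state=42)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```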
- compute_rf_opt(X_train, X_test, y_train, y_test, args, verbose=False, cv=5, model='Random Forest', bootstrap=[], max_depth=[], max_features=[], min_samples_leaf=[], min_samples_split=[], n_estimators=[])[source]#
This function also generates a model using a Random Forest (RF) Classifier method as implemented in scikit-learn. The difference here is that this function runs a grid search. The range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets, without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
verbose (bool) – If True, prints additional information during execution.
cv (int) – Number of cross-validation folds, default is 5.
model (str) – Name of the model being used, default is ‘Random Forest’.
bootstrap (list) – List of bootstrap options for grid search.
max_depth (list) – List of maximum depth options for grid search.
max_features (list) – List of maximum features options for grid search.
min_samples_leaf (list) – List of minimum samples leaf options for grid search.
min_samples_split (list) – List of minimum samples split options for grid search.
n_estimators (list) – List of number of estimators options for grid search.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken for training and validation.
- Return type:
modeleval (dict)
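The grid-search behavior described above corresponds to scikit-learn's GridSearchCV. The sketch below is illustrative; the parameter grids are assumptions standing in for the ranges normally read from config.yaml.

```python
# Sketch of the grid search compute_rf_opt is described as performing,
# using scikit-learn directly; the grids below are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=150, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)  # cv=5 matches the documented default
search.fit(X_train, y_train)
best_params = search.best_params_  # reusable on similar datasets
test_score = search.score(X_test, y_test)
```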
- compute_svc(X_train, X_test, y_train, y_test, args, model='SVC', data_key='', C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)[source]#
This function generates a model using a Support Vector Classifier (SVC) method as implemented in scikit-learn. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
model (str) – The type of model to use, default is ‘SVC’.
data_key (str) – Key for the dataset, default is an empty string.
C (float) – Regularization parameter, default is 1.0.
kernel (str) – Specifies the kernel type to be used in the algorithm, default is ‘rbf’.
degree (int) – Degree of the polynomial kernel function (‘poly’), default is 3.
gamma (str or float) – Kernel coefficient for ‘rbf’, ‘poly’, and ‘sigmoid’, default is ‘scale’.
coef0 (float) – Independent term in kernel function, default is 0.0.
shrinking (bool) – Whether to use the shrinking heuristic, default is True.
probability (bool) – Whether to enable probability estimates, default is False.
tol (float) – Tolerance for stopping criteria, default is 0.001.
cache_size (int) – Size of the kernel cache in MB, default is 200.
class_weight (dict or None) – Weights associated with classes, default is None.
verbose (bool) – Whether to print detailed logs, default is False.
max_iter (int) – Hard limit on iterations within solver, -1 means no limit, default is -1.
decision_function_shape (str) – Determines the shape of the decision function, default is ‘ovr’.
break_ties (bool) – Whether to break ties in multiclass classification, default is False.
random_state (int or None) – Controls the randomness of the estimator, default is None.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken to train and validate the model.
- Return type:
modeleval (dict)
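For orientation, the scikit-learn SVC that compute_svc is described as wrapping can be used directly as below; the moons dataset and split are illustrative, not part of qbiocode.

```python
# Illustrative use of scikit-learn's SVC with the documented defaults;
# not the qbiocode call itself.
from sklearn.datasets import make_moons
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = SVC(C=1.0, kernel="rbf", gamma="scale")  # documented defaults
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

The RBF kernel handles the non-linearly separable moons shape that a linear kernel cannot.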
- compute_svc_opt(X_train, X_test, y_train, y_test, args, verbose=False, cv=5, model='SVC', C=[], gamma=[], kernel=[])[source]#
This function also generates a model using a Support Vector Classifier (SVC) method as implemented in scikit-learn. The difference here is that this function runs a grid search; the range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
verbose (bool) – Whether to print detailed logs, default is False.
cv (int) – Number of cross-validation folds, default is 5.
model (str) – The type of model to use, default is ‘SVC’.
C (list or float) – Regularization parameter(s), default is an empty list.
gamma (list or str) – Kernel coefficient(s) for ‘rbf’, ‘poly’, and ‘sigmoid’, default is an empty list.
kernel (list or str) – Specifies the kernel type(s) to be used in the algorithm, default is an empty list.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken for training and validation across the grid search.
- Return type:
modeleval (dict)
- compute_vqc(X_train, X_test, y_train, y_test, args, verbose=False, model='VQC', data_key='', local_optimizer='COBYLA', maxiter=100, encoding='Z', entanglement='linear', reps=2, primitive='sampler', ansatz_type='amp')[source]#
This function computes a Variational Quantum Classifier (VQC) using the Qiskit Machine Learning library. It takes training and testing datasets, along with various parameters to configure the VQC model. It initializes the quantum feature map, sets up the backend and session, and fits the VQC model to the training data. It then predicts the labels for the test data and evaluates the model’s performance. The function returns the performance results, including accuracy, F1-score, AUC, runtime, as well as model parameters, and other relevant metrics.
- Parameters:
X_train (array-like) – Training feature set.
X_test (array-like) – Testing feature set.
y_train (array-like) – Training labels.
y_test (array-like) – Testing labels.
args (dict) – Dictionary containing configuration parameters for the VQC.
verbose (bool, optional) – If True, prints additional information. Defaults to False.
model (str, optional) – Model type. Defaults to ‘VQC’.
data_key (str, optional) – Key for the dataset. Defaults to ‘’.
local_optimizer (str, optional) – Local optimizer to use. Defaults to ‘COBYLA’.
maxiter (int, optional) – Maximum number of iterations for the optimizer. Defaults to 100.
encoding (str, optional) – Feature map encoding type. Defaults to ‘Z’.
entanglement (str, optional) – Entanglement strategy. Defaults to ‘linear’.
reps (int, optional) – Number of repetitions for the feature map and ansatz. Defaults to 2.
primitive (str, optional) – Primitive type (‘sampler’ or ‘estimator’). Defaults to ‘sampler’.
ansatz_type (str, optional) – Type of ansatz to use. Defaults to ‘amp’.
- Returns:
Evaluation results including accuracy, time taken, and model parameters.
- Return type:
dict
- compute_xgb(X_train, X_test, y_train, y_test, args, verbose=False, model='xgb', data_key='', n_estimators=100, *, criterion='gini', max_depth=None, subsample=0.5, learning_rate=0.5, colsample_bytree=1, min_child_weight=1)[source]#
This function generates a model using an Extreme Gradient Boosting (XGBoost) Classifier method as implemented in xgboost. It takes in parameter arguments specified in the config.yaml file, but will use the default parameters specified above if none are passed. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
verbose (bool) – If True, prints additional information during execution.
model (str) – Name of the model being used, default is ‘xgb’.
data_key (str) – Key for identifying the dataset, default is an empty string.
n_estimators (int) – Number of boosting rounds (trees), default is 100.
max_depth (int or None) – Maximum depth of the tree, default is None.
subsample (float) – Subsample ratio of the training instances, default is 0.5.
learning_rate (float) – Step size shrinkage used in update to prevent overfitting, default is 0.5.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree, default is 1.
min_child_weight (int) – Minimum sum of instance weight (hessian) needed in a child, default is 1.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken for training and validation.
- Return type:
modeleval (dict)
- Raises:
ImportError – If XGBoost is not properly installed or configured.
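compute_xgb is described as wrapping xgboost's classifier. As a dependency-light stand-in, the sketch below uses scikit-learn's GradientBoostingClassifier, which exposes the analogous knobs (n_estimators, learning_rate, subsample, max_depth); colsample_bytree and min_child_weight are xgboost-specific and have no direct equivalent here. The data and values are illustrative.

```python
# Boosted-trees sketch using scikit-learn's GradientBoostingClassifier as a
# stand-in for xgboost's XGBClassifier; hyperparameter names mirror the
# documented compute_xgb defaults where they overlap.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.5,
                                 subsample=0.5, random_state=1)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```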
- compute_xgb_opt(X_train, X_test, y_train, y_test, args, verbose=False, cv=5, model='xgb', bootstrap=[], max_depth=[], max_features=[], learning_rate=[], subsample=[], colsample_bytree=[], n_estimators=[], min_child_weight=[])[source]#
This function also generates a model using an Extreme Gradient Boosting (XGBoost) Classifier method as implemented in xgboost. The difference here is that this function runs a grid search. The range of the grid search for each parameter is specified in the config.yaml file. The combination of parameters that led to the best performance is saved and returned as best_params, which can then be used on similar datasets without having to run the grid search. The model is trained on the training dataset and validated on the test dataset. The function returns the evaluation of the model on the test dataset, including accuracy, AUC, F1 score, and the time taken to train and validate the model across the grid search. This function is designed to be used in a supervised learning context, where the goal is to classify data points.
- Parameters:
X_train (array-like) – Training data features.
X_test (array-like) – Test data features.
y_train (array-like) – Training data labels.
y_test (array-like) – Test data labels.
args (dict) – Additional arguments, typically from a configuration file.
verbose (bool) – If True, prints additional information during execution.
cv (int) – Number of cross-validation folds, default is 5.
model (str) – Name of the model being used, default is ‘xgb’.
bootstrap (list) – List of bootstrap options for grid search.
max_depth (list) – List of maximum depth options for grid search.
subsample (list) – List of training-instance subsample ratio options for grid search.
learning_rate (list) – List of learning rate (step size shrinkage) options for grid search.
colsample_bytree (list) – List of column subsample ratio options for grid search.
n_estimators (list) – List of number of estimators options for grid search.
min_child_weight (list) – List of minimum child weight (sum of instance hessians) options for grid search.
- Returns:
A dictionary containing the evaluation metrics of the model, including accuracy, AUC, F1 score, and the time taken for training and validation.
- Return type:
modeleval (dict)
- Raises:
ImportError – If XGBoost is not properly installed or configured.
- evaluate(df, y, file)[source]#
This function evaluates a dataset and returns a transposed summary DataFrame with various statistical measures, derived from the dataset. Using the functions defined above, it computes intrinsic dimension, condition number, Fisher Discriminant Ratio, total correlation, mutual information, variance, coefficient of variation, data sparsity, low variance features, data density, fractal dimension, data distributions (skewness and kurtosis), entropy of the target variable, and manifold complexity. The summary DataFrame is transposed for easier readability and contains the dataset name, number of features, number of samples, feature-to-sample ratio, and various statistical measures. This function is useful for quickly summarizing the characteristics of a dataset, especially in the context of machine learning and data analysis, allowing you to correlate the dataset’s properties with its performance in predictive modeling tasks.
- Parameters:
df (pandas.DataFrame) – Dataset in pandas with observation in rows, features in columns
y (array-like) – Supervised binary class labels
file (str) – Name of the dataset file for identification in the summary DataFrame
- Returns:
Summary DataFrame containing various statistical measures of the dataset
- Return type:
transposed (pandas.DataFrame)
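A few of the characteristics that evaluate() summarizes can be computed directly with numpy, pandas, and scipy. The sketch below is illustrative only; the real function covers many more measures (intrinsic dimension, condition number, Fisher Discriminant Ratio, manifold complexity, etc.) and returns them as a transposed DataFrame.

```python
# Direct computation of a handful of the dataset statistics listed above;
# the random data and variable names are illustrative, not qbiocode's.
import numpy as np
import pandas as pd
from scipy.stats import entropy, kurtosis, skew

rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(100, 5)),
                  columns=[f"f{i}" for i in range(5)])
y = rng.integers(0, 2, size=100)  # binary class label

ratio = df.shape[1] / df.shape[0]              # feature-to-sample ratio
sparsity = float((df == 0).to_numpy().mean())  # fraction of exact zeros
skewness = df.apply(skew).mean()               # mean feature skewness
kurt = df.apply(kurtosis).mean()               # mean feature excess kurtosis
counts = np.bincount(y)
target_entropy = entropy(counts / counts.sum(), base=2)  # label entropy, bits
```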
- feature_encoding(feature1, sparse_output=False, feature_encoding='None')[source]#
Encode categorical features using various encoding strategies.
Transforms categorical features into numerical representations suitable for machine learning algorithms. Supports one-hot encoding, ordinal encoding, or no encoding.
- Parameters:
feature1 (array-like of shape (n_samples,)) – Input categorical feature to be encoded. Should be a 1D array.
sparse_output (bool, default=False) – If True and feature_encoding=’OneHotEncoder’, returns a sparse matrix. If False, returns a dense array. Ignored for other encoding methods.
feature_encoding ({'None', 'OneHotEncoder', 'OrdinalEncoder'}, default='None') –
Encoding method to apply:
’None’: No encoding, returns original feature
’OneHotEncoder’: Create binary columns for each category
’OrdinalEncoder’: Map categories to integer values
- Returns:
feature1_encoded – Encoded feature. Shape depends on encoding method:
’None’: shape (n_samples, 1)
’OrdinalEncoder’: shape (n_samples, 1)
’OneHotEncoder’: shape (n_samples, n_categories)
- Return type:
array-like
Notes
One-hot encoding creates a binary column for each unique category, useful when categories have no ordinal relationship. Ordinal encoding assigns integer values, suitable when categories have a natural order.
The function automatically reshapes the input to (-1, 1) format required by scikit-learn encoders.
Examples
>>> import numpy as np
>>> from qbiocode.utils import feature_encoding
>>> categories = np.array(['A', 'B', 'C', 'A', 'B'])
>>> # One-hot encoding
>>> encoded_onehot = feature_encoding(categories, feature_encoding='OneHotEncoder')
>>> # Ordinal encoding
>>> encoded_ordinal = feature_encoding(categories, feature_encoding='OrdinalEncoder')
See also
sklearn.preprocessing.OneHotEncoder – Encode categorical features as one-hot
sklearn.preprocessing.OrdinalEncoder – Encode categorical features as integers
- generate_circles_datasets(n_samples=[100, 120, 140, 160, 180, 200, 220, 240, 260, 280], noise=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], save_path=None, random_state=42)[source]#
Generate multiple concentric circles datasets with varying parameters.
Creates a series of 2D datasets where samples form two concentric circles, providing a classic non-linearly separable binary classification problem. Each configuration varies the number of samples and noise level.
- Parameters:
n_samples (list of int, default=range(100, 300, 20)) – List of sample sizes to generate for each configuration.
noise (list of float, default=[0.1, 0.2, ..., 0.9]) – List of noise standard deviations to apply to the data.
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘circles_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains binary labels (0 or 1)
Examples
>>> from qbiocode.data_generation import generate_circles_datasets
>>> generate_circles_datasets(n_samples=[100, 200], noise=[0.1, 0.3])
Generating circles dataset...
- generate_classification_datasets(n_samples, n_features, n_informative, n_redundant, n_classes, n_clusters_per_class, weights, save_path=None, random_state=42)[source]#
Generate multiple high-dimensional classification datasets with varying parameters.
Creates a series of synthetic datasets for multi-class classification problems with configurable feature characteristics including informative features, redundant features, and class distributions.
- Parameters:
n_samples (list of int) – List of sample sizes to generate for each configuration.
n_features (list of int) – List of total feature counts (must be >= n_informative + n_redundant).
n_informative (list of int) – List of informative feature counts that are useful for prediction.
n_redundant (list of int) – List of redundant feature counts (linear combinations of informative features).
n_classes (list of int) – List of class counts for multi-class classification.
n_clusters_per_class (list of int) – List of cluster counts per class.
weights (list of list of float) – List of class weight distributions (must sum to 1.0).
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘class_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains class labels
Only valid configurations where (n_informative + n_redundant) <= n_features are generated
Examples
>>> from qbiocode.data_generation import generate_classification_datasets
>>> generate_classification_datasets(
...     n_samples=[100], n_features=[20], n_informative=[5],
...     n_redundant=[2], n_classes=[2], n_clusters_per_class=[1],
...     weights=[[0.5, 0.5]], save_path='data'
... )
Generating classes dataset...
- generate_data(type_of_data=None, save_path=None, n_samples=[100, 120, 140, 160, 180, 200, 220, 240, 260, 280], noise=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], hole=[True, False], n_classes=[2], dim=[3, 6, 9, 12], rad=[3, 6, 9, 12], n_features=[10, 30, 50], n_informative=[2, 6], n_redundant=[2, 6], n_clusters_per_class=[1], weights=[[0.3, 0.7], [0.4, 0.6], [0.5, 0.5]], random_state=42)[source]#
Generate synthetic datasets for machine learning benchmarking.
Unified interface to generate various types of synthetic datasets with configurable parameters. Each dataset type creates multiple configurations by varying the specified parameters.
- Parameters:
type_of_data (str) – Type of dataset to generate. Options: ‘circles’, ‘moons’, ‘classes’, ‘s_curve’, ‘spheres’, ‘spirals’, ‘swiss_roll’.
save_path (str) – Directory path where datasets will be saved.
n_samples (list of int, default=range(100, 300, 20)) – Sample sizes for dataset configurations.
noise (list of float, default=[0.1, 0.2, ..., 0.9]) – Noise levels to apply.
hole (list of bool, default=[True, False]) – Whether to include hole (for swiss_roll only).
n_classes (list of int, default=[2]) – Number of classes (for spirals and classes).
dim (list of int, default=[3, 6, 9, 12]) – Dimensionalities (for spheres and spirals).
rad (list of float, default=[3, 6, 9, 12]) – Radii (for spheres only).
n_features (list of int, default=range(10, 60, 20)) – Feature counts (for classes only).
n_informative (list of int, default=range(2, 8, 4)) – Informative feature counts (for classes only).
n_redundant (list of int, default=range(2, 8, 4)) – Redundant feature counts (for classes only).
n_clusters_per_class (list of int, default=range(1, 2, 3)) – Clusters per class (for classes only).
weights (list of list of float, default=[[0.3, 0.7], [0.4, 0.6], [0.5, 0.5]]) – Class weight distributions (for classes only).
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves generated datasets to the specified path.
- Return type:
None
- Raises:
ValueError – If type_of_data is not one of the supported types.
Examples
>>> from qbiocode.data_generation import generate_data
>>> generate_data(type_of_data='circles', save_path='data/circles')
Generating circles dataset...
Dataset generation complete.
- generate_moons_datasets(n_samples=[100, 120, 140, 160, 180, 200, 220, 240, 260, 280], noise=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], save_path=None, random_state=42)[source]#
Generate multiple two-moons datasets with varying parameters.
Creates a series of 2D datasets where samples form two interleaving half-circles (moons), providing a challenging non-linearly separable binary classification problem. Each configuration varies the number of samples and noise level.
- Parameters:
n_samples (list of int, default=range(100, 300, 20)) – List of sample sizes to generate for each configuration.
noise (list of float, default=[0.1, 0.2, ..., 0.9]) – List of noise standard deviations to apply to the data.
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘moons_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains binary labels (0 or 1)
Two-moons datasets are commonly used to evaluate algorithms on interleaving patterns
Examples
>>> from qbiocode.data_generation import generate_moons_datasets
>>> generate_moons_datasets(n_samples=[100, 200], noise=[0.1, 0.3], save_path='data')
Generating moons dataset...
- generate_s_curve_datasets(n_samples=[100, 120, 140, 160, 180, 200, 220, 240, 260, 280], noise=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], save_path=None, random_state=42)[source]#
Generate multiple 3D S-curve datasets with varying parameters.
Creates a series of 3D datasets where samples lie on an S-shaped manifold, a classic benchmark for manifold learning and dimensionality reduction algorithms. Each configuration varies the number of samples and noise level.
- Parameters:
n_samples (list of int, default=range(100, 300, 20)) – List of sample sizes to generate for each configuration.
noise (list of float, default=[0.1, 0.2, ..., 0.9]) – List of noise standard deviations to apply to the data.
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘s_curve_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains the position along the manifold (continuous values)
S-curve is a standard benchmark for testing manifold learning algorithms
Examples
>>> from qbiocode.data_generation import generate_s_curve_datasets
>>> generate_s_curve_datasets(n_samples=[200], noise=[0.1], save_path='data')
Generating S Curve dataset...
- generate_spheres_datasets(n_s=[100, 125, 150, 175, 200, 225, 250, 275], dim=[5, 10], radius=[5, 10, 15], save_path=None, random_state=42)[source]#
Generate multiple concentric n-dimensional spheres datasets with varying parameters.
Creates a series of high-dimensional datasets where samples form two concentric spherical shells, providing a challenging non-linearly separable binary classification problem in high dimensions. Each configuration varies the number of samples, dimensionality, and sphere radii.
- Parameters:
n_s (list of int, default=range(100, 300, 25)) – List of sample sizes per class to generate for each configuration.
dim (list of int, default=range(5, 15, 5)) – List of dimensionalities for the spheres.
radius (list of float, default=range(5, 20, 5)) – List of outer sphere radii (inner sphere is 0.5 * outer radius).
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘spheres_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains binary labels (0 for outer, 1 for inner sphere)
Samples are generated in spherical shells (not solid spheres) for better separation
Examples
>>> from qbiocode.data_generation import generate_spheres_datasets
>>> generate_spheres_datasets(n_s=[100], dim=[5], radius=[10], save_path='data')
Generating spheres dataset...
- generate_spirals_datasets(n_s=[100, 150, 200, 250], n_c=[2], n_n=[0.3, 0.6, 0.9], n_d=[3, 6, 9, 12], save_path=None, random_state=42)[source]#
Generate multiple n-dimensional spiral datasets with varying parameters.
Creates a series of high-dimensional datasets where samples form intertwined spiral patterns, providing challenging non-linearly separable multi-class classification problems. Each configuration varies the number of samples, classes, noise level, and dimensionality.
- Parameters:
n_s (list of int, default=range(100, 300, 50)) – List of sample sizes to generate for each configuration.
n_c (list of int, default=[2]) – List of class counts (number of spiral arms).
n_n (list of float, default=[0.3, 0.6, 0.9]) – List of noise standard deviations to apply to the data.
n_d (list of int, default=[3, 6, 9, 12]) – List of dimensionalities (must be 3, 6, 9, or 12).
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘spirals_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains class labels
Spiral patterns become increasingly complex in higher dimensions
Examples
>>> from qbiocode.data_generation import generate_spirals_datasets
>>> generate_spirals_datasets(n_s=[200], n_c=[2], n_n=[0.3], n_d=[3], save_path='data')
Generating spirals dataset...
- generate_swiss_roll_datasets(n_samples=[100, 120, 140, 160, 180, 200, 220, 240, 260, 280], noise=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], hole=[True, False], save_path=None, random_state=42)[source]#
Generate multiple 3D Swiss roll datasets with varying parameters.
Creates a series of 3D datasets where samples lie on a Swiss roll manifold, a classic benchmark for manifold learning and dimensionality reduction algorithms. Each configuration varies the number of samples, noise level, and whether the roll has a hole in the center.
- Parameters:
n_samples (list of int, default=range(100, 300, 20)) – List of sample sizes to generate for each configuration.
noise (list of float, default=[0.1, 0.2, ..., 0.9]) – List of noise standard deviations to apply to the data.
hole (list of bool, default=[True, False]) – List of boolean values indicating whether to generate Swiss roll with hole.
save_path (str, optional) – Directory path where datasets and configuration files will be saved.
random_state (int, default=42) – Random seed for reproducibility.
- Returns:
Saves CSV files for each dataset configuration and a JSON file with all configuration parameters.
- Return type:
None
Notes
Each dataset is saved as ‘swiss_roll_data-{i}.csv’ where i is the configuration number
Configuration parameters are saved in ‘dataset_config.json’
The last column ‘class’ contains the position along the manifold (continuous values)
Swiss roll is a standard benchmark for testing manifold learning algorithms
Examples
>>> from qbiocode.data_generation import generate_swiss_roll_datasets
>>> generate_swiss_roll_datasets(n_samples=[200], noise=[0.1], hole=[False], save_path='data')
Generating swiss roll dataset...
- get_embeddings(embedding, X_train, X_test, n_neighbors=30, n_components=None, method=None)[source]#
This function applies the specified embedding technique to the training and test datasets.
- Parameters:
embedding (str) – The embedding technique to use. Options are ‘none’, ‘pca’, ‘nmf’, ‘lle’, ‘isomap’, ‘spectral’, or ‘umap’.
X_train (array-like) – The training dataset.
X_test (array-like) – The test dataset.
n_neighbors (int, optional) – Number of neighbors for certain embeddings. Defaults to 30.
n_components (int, optional) – Number of components for the embedding. If None, it defaults to the number of features in X_train.
method (str, optional) – Method for Locally Linear Embedding. Defaults to None.
- Returns:
Transformed training and test datasets.
- Return type:
tuple
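The train/test discipline behind get_embeddings can be sketched with plain NumPy. The helper below is an illustrative stand-in for the 'pca' branch only (the real function also dispatches to NMF, LLE, Isomap, spectral embedding, and UMAP): the projection is fitted on the training split and then reused, unchanged, on the test split.

```python
import numpy as np

def pca_embed(X_train, X_test, n_components=2):
    """Illustrative sketch of the 'pca' branch of get_embeddings:
    fit the projection on the training split only, then apply the
    same fitted projection to the test split."""
    mean = X_train.mean(axis=0)
    # Principal axes from the SVD of the centred training data.
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    components = Vt[:n_components]
    # The test split is centred with the *training* mean -- never re-fit on test.
    return (X_train - mean) @ components.T, (X_test - mean) @ components.T
```

The key point, which holds for every embedding option, is that no statistic of the test split leaks into the fitted transform.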
- model_run(X_train, X_test, y_train, y_test, data_key, args)[source]#
This function runs the ML methods, with or without a grid search, as specified in the config.yaml file. It returns a Python dictionary containing these results, which can then be parsed. It is designed to run the ML methods in parallel for each dataset (this is done by calling the Parallel module). The arguments X_train, X_test, y_train, y_test are passed in from the main script (qmlbench.py) as the input datasets are processed, while the remaining arguments come from the config.yaml file.
- Parameters:
X_train (pd.DataFrame) – Training features.
X_test (pd.DataFrame) – Testing features.
y_train (pd.Series) – Training labels.
y_test (pd.Series) – Testing labels.
data_key (str) – Key for the dataset being processed.
args (dict) – Dictionary containing configuration parameters, including: - model: List of models to run. - n_jobs: Number of parallel jobs to run. - grid_search: Boolean indicating whether to perform grid search. - cross_validation: Cross-validation strategy. - gridsearch_<model>_args: Arguments for grid search for each model. - <model>_args: Additional arguments for each model.
- Returns:
A dictionary containing the results of the models run, with keys as model names and values as their respective results. This dictionary can readily be converted to a pandas DataFrame, as seen in the ‘ModelResults.csv’ files that are produced in the results directory when the main profiler is run (qbiocode-profiler.py).
- Return type:
model_total_result (dict)
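The dispatch-and-collect pattern described above can be sketched as follows. This is an illustrative stand-in, not qbiocode's implementation: qbiocode uses the Parallel module, while a stdlib executor stands in here, and the per-model callables and the single `accuracy` entry are placeholders for the full result dictionaries.

```python
from concurrent.futures import ThreadPoolExecutor

def run_models(X_train, X_test, y_train, y_test, models, n_jobs=2):
    """Sketch of model_run's dispatch: run every requested model in
    parallel and collect the per-model results into one dictionary
    keyed by model name (ready for pandas.DataFrame conversion)."""
    def run_one(name):
        fit_predict = models[name]  # placeholder callable: (X_train, y_train, X_test) -> predictions
        y_pred = fit_predict(X_train, y_train, X_test)
        accuracy = sum(p == t for p, t in zip(y_pred, y_test)) / len(y_test)
        return name, {'accuracy': accuracy}

    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return dict(pool.map(run_one, models))
```

Each model runs independently, so a failure or slow fit in one model does not block the others from being collected.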
- modeleval(y_test, y_predicted, beg_time, params, args, model, verbose=True, average='weighted')[source]#
Evaluates the model performance using accuracy, F1 score, and AUC.
- Parameters:
y_test (array-like) – True labels for the test set.
y_predicted (array-like) – Predicted labels by the model.
beg_time (float) – Start time for measuring execution time.
params (dict) – Model parameters used during training.
args (dict) – Additional arguments, including grid search flag.
model (str) – Name of the model being evaluated.
verbose (bool) – If True, prints the evaluation results.
average (str) – Type of averaging to use for F1 score calculation. Default is ‘weighted’.
- Returns:
DataFrame containing the evaluation results, including accuracy, F1 score, AUC, and model parameters.
- Return type:
pd.DataFrame
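The scoring step can be sketched with scikit-learn's metrics. This is a simplified stand-in for modeleval: AUC, the model-parameter columns, the grid-search flag, and the verbose printout are omitted, and the column names are illustrative rather than the exact output schema.

```python
import time
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_test, y_predicted, beg_time, model, average='weighted'):
    """Minimal sketch of modeleval's core: score the predictions and
    return the results as a one-row DataFrame."""
    return pd.DataFrame([{
        'model': model,
        'accuracy': accuracy_score(y_test, y_predicted),
        'f1_score': f1_score(y_test, y_predicted, average=average),
        'time_s': time.time() - beg_time,  # wall-clock seconds since beg_time
    }])
```

Weighted F1 averages the per-class F1 scores by class support, which is why it is a sensible default for the imbalanced label distributions common in biological datasets.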
- plot_results_correlation(correlations_df, metric='f1_score', title='', correlation_type='', figsize=(6, 6), save_file_path='', size='correlation', xticks=True, key='model_embed_datatype', legend_offset=1.4)[source]#
This function plots the Spearman correlation dot plots using the previously generated correlations_df DataFrame. The larger the circle, the higher the F1 score for that particular dataset. The circle colors correspond to the correlations between the data characteristics (evaluations) and the F1 score: red corresponds to a positive correlation, while blue indicates an anti-correlation. The strength of either type of correlation is represented by the shade of coloring: the darker the circle, the more correlated/anticorrelated that particular characteristic is with the model’s performance.
- Parameters:
correlations_df (pd.DataFrame) – A DataFrame containing the computed correlations between metrics and features.
metric (str) – The metric to plot, default is ‘f1_score’.
title (str) – The title of the plot, default is an empty string.
correlation_type (str) – The type of correlation to display in the legend, default is an empty string.
figsize (tuple) – The size of the figure, default is (6, 6).
save_file_path (str) – The file path to save the plot, default is an empty string.
size (str) – The column name to use for the size of the dots, default is ‘correlation’.
- Returns:
Displays the plot and saves it to the specified file path if provided.
- Return type:
None
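The size-plus-color encoding described above can be sketched with matplotlib. The column names ('feature', 'key') below are illustrative, not the exact correlations_df schema; the essential choices are a diverging red-blue colormap pinned to [-1, 1] so that hue encodes the correlation sign and shade its strength, while dot area scales with the metric.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

def dot_plot(correlations_df, metric='f1_score', figsize=(6, 6)):
    """Sketch of the correlation dot plot: dot size tracks the metric
    (e.g. F1), dot colour tracks the signed correlation on a red-blue map."""
    fig, ax = plt.subplots(figsize=figsize)
    sc = ax.scatter(
        correlations_df['feature'],            # hypothetical column: data characteristic
        correlations_df['key'],                # hypothetical column: model/embedding combo
        s=200 * correlations_df[metric],       # larger dot = higher score
        c=correlations_df['correlation'],
        cmap='RdBu_r', vmin=-1, vmax=1,        # red = positive, blue = negative
    )
    fig.colorbar(sc, ax=ax, label='Spearman correlation')
    return fig
```

Pinning vmin/vmax to the full [-1, 1] range keeps shades comparable across plots, so two figures with different observed correlation ranges still read on the same scale.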
- pqk(X_train, X_test, args, store=False, data_key='', encoding='Z', data_map=True, primitive='estimator', entanglement='linear', reps=2)[source]#
This function generates quantum circuits and computes projections of the data through these circuits. It uses a feature map to encode the data into quantum states and then measures the expectation values of Pauli operators to obtain the features. This function requires a quantum backend (simulator or real quantum hardware) for execution. It supports various configurations such as encoding methods, entanglement strategies, and repetitions of the feature map. Optionally, the results are saved to files for the training and test projections.
- Parameters:
X_train (np.ndarray) – Training data features.
X_test (np.ndarray) – Test data features.
args (dict) – Arguments containing backend and other configurations.
store (bool) – If True, projections are stored, using data_key as identifier.
data_key (str) – Key for the dataset, default is ‘’.
encoding (str) – Encoding method for the quantum circuit, default is ‘Z’.
data_map (bool) – If True, ensures that all multiplicative factors of data features inside single-qubit gates are 1.0. Not applicable for Heisenberg feature maps.
primitive (str) – Primitive type to use, default is ‘estimator’.
entanglement (str) – Entanglement strategy, default is ‘linear’.
reps (int) – Number of repetitions for the feature map, default is 2.
- Returns:
A dictionary containing evaluation metrics and model parameters.
- Return type:
modeleval (dict)
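The projected-quantum-kernel idea behind pqk can be illustrated without a quantum backend by picking a toy encoding whose Pauli expectations are known in closed form. This NumPy sketch is not qbiocode's implementation (which builds a Qiskit feature map and runs it through an estimator/sampler primitive): it assumes a one-qubit-per-feature angle encoding RY(x)|0>, for which <Z> = cos(x), <X> = sin(x), and <Y> = 0, then builds a classical RBF kernel on those projected features.

```python
import numpy as np

def pqk_features(X):
    """Toy 'projection' of the encoded quantum states: the single-qubit
    Pauli expectations of RY(x)|0>, stacked per feature. <Y> is identically
    zero for this real-amplitude encoding, so it is dropped."""
    return np.concatenate([np.cos(X), np.sin(X)], axis=1)

def pqk_kernel(X1, X2, gamma=1.0):
    """RBF kernel evaluated on the projected features -- the classical
    half of the projected quantum kernel construction."""
    P1, P2 = pqk_features(X1), pqk_features(X2)
    sq_dists = ((P1[:, None, :] - P2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)
```

With a genuine entangling feature map the expectation values are no longer available in closed form, which is exactly why pqk needs a backend; the classical kernel step on top of the projections is unchanged.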
- qml_winner(results_df, rawevals_df, output_dir, tag)[source]#
This function finds datasets where QML was beneficial (higher F1 scores than CML) and creates new .csv files with the relevant evaluation and performance metrics for these specific datasets, for further analysis. It also computes the best results per method across all splits and the best results per dataset. It returns two DataFrames: one with the datasets where QML methods outperformed CML methods, and another with the evaluation scores for the best QML method on each of these datasets. Both DataFrames are also saved as .csv files in the specified output directory.
- Parameters:
results_df (pandas.DataFrame) – Dataset in pandas corresponding to ‘ModelResults.csv’
rawevals_df (pandas.DataFrame) – Dataset in pandas corresponding to ‘RawDataEvaluation.csv’
- Returns:
qml_winners (pandas.DataFrame) – Contains the input datasets for which at least one QML method performed better than CML; the DataFrame contains the scores of all the methods.
winner_eval_score (pandas.DataFrame) – Contains the input datasets, their evaluations, and the scores for the specific QML method that yielded the best score.
- Return type:
tuple of (pandas.DataFrame, pandas.DataFrame)
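The selection step can be sketched with pandas. This is an illustrative stand-in for qml_winner: the column names ('dataset', 'model', 'f1_score') and the QML model list are assumptions, not the exact ModelResults.csv schema, and the per-split bookkeeping, winner_eval_score output, and .csv writing are omitted.

```python
import pandas as pd

def find_qml_winners(results_df, qml_models=('qsvc', 'vqc', 'qnn', 'pqk')):
    """Sketch of the winner selection: for each dataset, compare the best
    QML F1 score against the best classical F1 score and keep only the
    datasets where QML comes out ahead."""
    is_qml = results_df['model'].isin(qml_models).rename('is_qml')
    # Best F1 per dataset, split into a classical (False) and a QML (True) column.
    best = results_df.groupby(['dataset', is_qml])['f1_score'].max().unstack()
    winners = best[best[True] > best[False]].index
    # Return all rows (every method's score) for the winning datasets.
    return results_df[results_df['dataset'].isin(winners)]
```

Keeping every method's row for the winning datasets, rather than just the winning QML row, is what lets the downstream correlation analysis compare QML and CML behavior on the same inputs.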
- scaler_fn(X, scaling='None')[source]#
Apply scaling transformation to input data.
Scales the input data using one of three methods: no scaling, standard scaling (z-score normalization), or min-max scaling to [0, 1] range.
- Parameters:
X (array-like of shape (n_samples, n_features)) – Input data to be scaled.
scaling ({'None', 'StandardScaler', 'MinMaxScaler'}, default='None') –
Scaling method to apply:
’None’: No scaling, returns original data
’StandardScaler’: Standardize features by removing mean and scaling to unit variance
’MinMaxScaler’: Scale features to [0, 1] range
- Returns:
X_scaled – Scaled data. If scaling=’None’, returns original data unchanged.
- Return type:
array-like of shape (n_samples, n_features)
Notes
StandardScaler transforms data to have mean=0 and variance=1:
\[z = \frac{x - \mu}{\sigma}\]
MinMaxScaler transforms data to [0, 1] range:
\[x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}\]
Examples
>>> import numpy as np
>>> from qbiocode.utils import scaler_fn
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> X_scaled = scaler_fn(X, scaling='StandardScaler')
>>> X_minmax = scaler_fn(X, scaling='MinMaxScaler')
See also
sklearn.preprocessing.StandardScaler – Standardize features by removing the mean and scaling to unit variance.
sklearn.preprocessing.MinMaxScaler – Scale features to a given range.