LBFGS
Local minimisation routines. Global optimisation and both single- and double-ended transition state searches all rely on local minimisation.
- topsearch.minimisation.lbfgs.minimise(func_grad, initial_position: NDArray[Any, Any], bounds: list, conv_crit: float = 1e-06, history_size: int = 5, n_steps: int = 200, args: list = None) → tuple[NDArray[Any, Any], float, dict]
Wrapper for the SciPy box-constrained L-BFGS-B implementation. Takes a combined function-and-gradient callable and an initial position, and performs local minimisation subject to the specified bounds.
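A minimal sketch of what this wrapper does internally, calling SciPy's L-BFGS-B directly. The quadratic test function and the keys of the results dictionary are illustrative, not the wrapper's actual return format:

```python
import numpy as np
from scipy.optimize import minimize

def func_grad(x):
    # Simple quadratic bowl: f(x) = sum(x^2), with gradient 2x.
    # Returning (value, gradient) together matches jac=True below.
    return np.sum(x**2), 2.0 * x

initial_position = np.array([1.5, -2.0])
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

res = minimize(func_grad, initial_position, method='L-BFGS-B',
               jac=True, bounds=bounds,
               options={'gtol': 1e-6,      # conv_crit: gradient tolerance
                        'maxcor': 5,       # history_size: stored correction pairs
                        'maxiter': 200})   # n_steps: iteration cap

min_coords = res.x                  # minimised coordinates, 1D array
f_val = res.fun                     # function value at the minimum
results_dict = {'success': res.success, 'message': res.message}
```

Note that `conv_crit` here maps onto SciPy's `gtol`, a tolerance on the projected gradient, in contrast to the force-based criterion used by `minimise_ase` below.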
- topsearch.minimisation.lbfgs.minimise_ase(potential, initial_position: NDArray[Any, Any], numbers: NDArray[Any, Any], conv_crit: float = 0.001, n_steps: int = 200, args: list = None, output_level=0) → tuple[NDArray[Any, Any], float, dict]
Wrapper for the ASE implementation of LBFGS.
- Parameters:
potential (MachineLearningPotential, or any other Potential type with an associated ASE calculator)
initial_position (1D NDArray with positions to be optimised)
numbers (atomic numbers)
conv_crit (float, maximum force below which the optimisation is considered converged. Note that this differs from the gradient-tolerance criterion used by the scipy version of this function)
n_steps (int, maximum number of steps)
args (list, currently unused; present for interface compatibility)
- Returns:
min_coords (minimised coordinates as a 1D array)
f_val (energy at the minimised coordinates)
results_dict (information on the success or failure of the optimisation)