sparse_ho.optimizers.GradientDescent
- class sparse_ho.optimizers.GradientDescent(n_outer=100, step_size=None, p_grad_norm=1, verbose=False, tol=1e-05, tol_decrease=None, t_max=10000)
Gradient descent for the outer problem. The scheme uses a (heuristic) adaptive stepsize, updating the log of the regularization parameter at each iteration:
log_alpha_{k+1} = log_alpha_k - p_grad_norm * grad_outer / ||grad_outer||
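For concreteness, here is a minimal NumPy sketch of that update; the helper name update_log_alpha is hypothetical and not part of sparse_ho's API:

```python
import numpy as np

def update_log_alpha(log_alpha, grad_outer, p_grad_norm=1.0):
    # Hypothetical helper mirroring the update rule above;
    # it is NOT part of sparse_ho itself.
    grad_norm = np.linalg.norm(grad_outer)
    if grad_norm == 0.0:
        # Zero gradient: log_alpha is already stationary.
        return log_alpha
    # Normalized step: the effective stepsize is p_grad_norm / ||grad_outer||.
    return log_alpha - p_grad_norm * grad_outer / grad_norm
```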
- Parameters
- n_outer: int, optional (default=100)
Maximum number of updates of alpha.
- step_size: float, optional (default=None)
Stepsize of the gradient descent. If None, the adaptive stepsize described above is used.
- p_grad_norm: float, optional (default=1)
Coefficient multiplying grad_outer / ||grad_outer|| in the gradient descent update.
- verbose: bool, optional (default=False)
Whether to print information about the hyperparameter optimization process.
- tol: float, optional (default=1e-5)
Tolerance for the inner optimization solver.
- tol_decrease: bool, optional (default=None)
Whether to use a tolerance decrease strategy in the gradient descent.
- t_max: float, optional (default=10000)
Maximum running time threshold in seconds.
- __init__(n_outer=100, step_size=None, p_grad_norm=1, verbose=False, tol=1e-05, tol_decrease=None, t_max=10000)
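A minimal construction sketch using the parameters documented above; the keyword values here are illustrative, and the surrounding hyperparameter search is only referenced in a comment since wiring a full model and criterion is beyond this page:

```python
from sparse_ho.optimizers import GradientDescent

optimizer = GradientDescent(
    n_outer=50,       # at most 50 updates of alpha
    step_size=None,   # fall back to the heuristic adaptive stepsize
    p_grad_norm=1.0,  # scale of each normalized gradient step
    verbose=True,     # print progress of the outer iterations
    tol=1e-5,         # tolerance for the inner optimization solver
    t_max=3600.0,     # stop after one hour of running time
)
# The optimizer is then passed to sparse_ho's hyperparameter search
# (e.g. grad_search) together with a model, criterion, and algorithm.
```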