cerebras.pytorch.optim#
Contains all Cerebras compliant Optimizer classes.
- class cerebras.pytorch.optim.Optimizer(params, defaults, enable_global_step=False)[source]#
Bases:
torch.optim.Optimizer, abc.ABC
The abstract Cerebras base optimizer class.
Enforces that the preinitialize method is implemented, wherein the optimizer state should be initialized ahead of time.
- Parameters
params (Union[Iterable[torch.Tensor], Iterable[Dict[str, Any]]]) – Specifies what Tensors should be optimized.
defaults (Dict[str, Any]) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
enable_global_step (bool) – If True, the optimizer will keep track of the global step for each parameter.
- increment_global_step(p)[source]#
Increments the global step by 1 and returns the current value of the global step tensor in torch.float32 format.
- register_zero_grad_pre_hook(hook)[source]#
Register an optimizer zero_grad pre hook which will be called before optimizer zero_grad. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
- Parameters
hook (Callable) – The user defined hook to be registered.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
- register_zero_grad_post_hook(hook)[source]#
Register an optimizer zero_grad post hook which will be called after optimizer zero_grad. It should have the following signature:
hook(optimizer, args, kwargs)
The optimizer argument is the optimizer instance being used.
- Parameters
hook (Callable) – The user defined hook to be registered.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
- zero_grad(*args, **kwargs)[source]#
Runs the optimizer's zero_grad method and calls any registered pre and post hooks.
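As an illustration of the hook API above, here is a minimal sketch (assuming the cstorch import alias used elsewhere in these docs and a toy torch.nn module; the surrounding Cerebras backend setup is omitted):
import torch
import cerebras.pytorch as cstorch

model = torch.nn.Linear(4, 2)
optimizer = cstorch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def log_zero_grad(optimizer, args, kwargs):
    # Returning None leaves args and kwargs unchanged; returning a tuple
    # (new_args, new_kwargs) would replace them for the zero_grad call.
    print("zero_grad called with", args, kwargs)

handle = optimizer.register_zero_grad_pre_hook(log_zero_grad)
optimizer.zero_grad()  # the pre hook fires before the gradients are cleared
handle.remove()        # detach the hook once it is no longer needed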
- class cerebras.pytorch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, maximize=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Adadelta optimizer implemented to perform the required pre-initialization of the optimizer state.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Optional[Callable]) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Adafactor(params, lr, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=False, warmup_init=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Adafactor optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-06, maximize=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Adagrad optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lr_decay (float, optional) – learning rate decay (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: http://jmlr.org/papers/v12/duchi11a.html
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Adamax(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0.0, maximize=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Adamax optimizer implemented to perform the required pre-initialization of the optimizer state.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Optional[Callable]) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0.0, amsgrad=False)[source]#
Bases:
cerebras.pytorch.optim.AdamBase.AdamBase
Adam specific overrides to AdamBase
- class cerebras.pytorch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0.0, correct_bias=True, amsgrad=False)[source]#
Bases:
cerebras.pytorch.optim.AdamBase.AdamBase
AdamW specific overrides to AdamBase
- class cerebras.pytorch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0, maximize=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
ASGD optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state.
For more details, see https://dl.acm.org/citation.cfm?id=131098
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Lamb(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0, adam=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Implements Lamb algorithm. It has been proposed in Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
adam (bool, optional) – always use trust ratio = 1, which turns this into Adam. Useful for comparison purposes.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Lion(params, lr=0.0001, betas=(0.9, 0.99), weight_decay=0.0)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Implements Lion algorithm. As proposed in Symbolic Discovery of Optimization Algorithms.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-4)
betas (Tuple[float, float], optional) – coefficients used for computing the update direction and the running average of the gradient (default: (0.9, 0.99))
weight_decay (float, optional) – weight decay coefficient (default: 0)
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.NAdam(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, momentum_decay=0.004)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Implements NAdam algorithm to execute within the constraints of the Cerebras WSE, including pre-initializing optimizer state.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 2e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
momentum_decay (float, optional) – momentum decay (default: 4e-3)
For further details regarding the algorithm refer to Incorporating Nesterov Momentum into Adam: https://openreview.net/forum?id=OM0jvwB8jIp57ZJjtNEZ
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0.0)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
RAdam optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
RMSprop optimizer implemented to perform the required pre-initialization of the optimizer state.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.Rprop(params, lr=0.001, etas=(0.5, 1.2), step_sizes=(1e-06, 50.0))[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
Rprop optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
etas (Tuple[float, float], optional) – step size multipliers
step_sizes (Tuple[float, float], optional) – Tuple of min, max step size values. Step size is clamped to be between these values.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
- class cerebras.pytorch.optim.SGD(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, maximize=False)[source]#
Bases:
cerebras.pytorch.optim.optimizer.Optimizer
SGD optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state
- Parameters
params (Iterable[torch.nn.Parameter]) – Model parameters
lr (float) – The learning rate to use
momentum (float) – momentum factor
dampening (float) – dampening for momentum
weight_decay (float) – weight decay (L2 penalty)
nesterov (bool) – enables Nesterov momentum
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
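A minimal usage sketch common to all of the optimizers above (shown with SGD and the cstorch import alias used elsewhere in these docs; the Cerebras backend setup and compiled step function that normally wrap this code are omitted):
import torch
import cerebras.pytorch as cstorch

model = torch.nn.Linear(16, 4)
optimizer = cstorch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

loss = model(torch.randn(8, 16)).sum()  # stand-in loss for illustration only
optimizer.zero_grad()                   # clear gradients (runs any registered hooks)
loss.backward()                         # populate parameter gradients
optimizer.step()                        # apply a single optimization step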
optim helpers#
Contains helper functions for configuring Cerebras compliant optimizers and schedulers.
- cerebras.pytorch.optim.configure_optimizer(optimizer_type, params, **kwargs)[source]#
Configures and returns an Optimizer specified using the provided optimizer type.
The optimizer class’s signature is inspected and relevant parameters are extracted from the keyword arguments.
- Parameters
optimizer_type (str) – The name of the optimizer to configure
params – The model parameters passed to the optimizer
For example,
optimizer_params = {
    "optimizer_type": "SGD",
    "lr": 0.001,
    "momentum": 0.5,
}
optimizer = cstorch.optim.configure_optimizer(
    optimizer_type=optimizer_params.pop("optimizer_type"),
    params=model.parameters(),
    **optimizer_params,
)
Deprecated since version 2.3: Use configure_scheduler instead.
- cerebras.pytorch.optim.configure_lr_scheduler(optimizer, learning_rate, adjust_learning_rate=None)[source]#
Configures a learning rate scheduler specified using the provided lr_scheduler type
The learning rate scheduler class’s signature is inspected and relevant parameters are extracted from the keyword arguments.
- Parameters
optimizer – The optimizer passed to the lr_scheduler
learning_rate – learning rate schedule
adjust_learning_rate (dict) – key: layer types, val: lr scaling factor
The following list describes the possible learning_rate parameter formats:
learning_rate is a Python scalar (int or float)
In this case, configure_lr_scheduler returns an instance of ConstantLR with the provided value as the constant learning rate.
learning_rate is a dictionary
In this case, the dictionary is expected to contain the key scheduler, whose value is the name of the scheduler you want to configure. The rest of the parameters in the dictionary are passed as keyword arguments to the specified scheduler’s init method.
learning_rate is a list of dictionaries
In this case, we assume what is being configured is a SequentialLR unless any one of the dictionaries contains the key main_scheduler with the corresponding value ChainedLR. In either case, each element of the list is expected to be a dictionary that follows the format outlined in case 2.
If what is being configured is indeed a SequentialLR, each dictionary entry is also expected to contain the key total_iters, specifying the total number of iterations each scheduler should be applied for.
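The three formats translate into calls roughly as follows (a sketch; optimizer is assumed to be an already constructed cstorch optimizer, and the scheduler names and keyword arguments follow the classes documented below):
import cerebras.pytorch as cstorch

# Case 1: a scalar yields a ConstantLR with that value.
lr_scheduler = cstorch.optim.configure_lr_scheduler(optimizer, learning_rate=0.001)

# Case 2: a dictionary names the scheduler; the remaining keys are passed
# as keyword arguments to that scheduler's init method.
lr_scheduler = cstorch.optim.configure_lr_scheduler(
    optimizer,
    learning_rate={
        "scheduler": "ExponentialLR",
        "initial_learning_rate": 0.001,
        "decay_rate": 0.8,
        "total_iters": 100,
    },
)

# Case 3: a list of dictionaries is treated as a SequentialLR; each entry
# carries total_iters so that the milestones can be derived.
lr_scheduler = cstorch.optim.configure_lr_scheduler(
    optimizer,
    learning_rate=[
        {"scheduler": "LinearLR", "initial_learning_rate": 0.01,
         "end_learning_rate": 0.001, "total_iters": 100},
        {"scheduler": "ExponentialLR", "initial_learning_rate": 0.001,
         "decay_rate": 0.8, "total_iters": 100},
    ],
)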
- cerebras.pytorch.optim.configure_optimizer_params(optimizer_type, kwargs)[source]#
Configures the Optimizer specified using the provided optimizer type and returns the optimizer class along with its initialization arguments.
The optimizer class’s signature is inspected and relevant parameters are extracted from the keyword arguments.
- Parameters
optimizer_type (str) – The name of the optimizer to configure
kwargs (dict) – Flattened optimizer params
- Returns
Optimizer cls, and args for initialization
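A hedged sketch of how the returned class and arguments might be combined (the two-step construction shown here is an assumption based on the documented return value; model stands for an existing torch.nn.Module):
import cerebras.pytorch as cstorch

optimizer_cls, optimizer_kwargs = cstorch.optim.configure_optimizer_params(
    optimizer_type="SGD",
    kwargs={"lr": 0.001, "momentum": 0.5},
)
optimizer = optimizer_cls(model.parameters(), **optimizer_kwargs)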
- cerebras.pytorch.optim.configure_scheduler_params(learning_rate)[source]#
Get the kwargs and LR class from params
- Parameters
learning_rate (dict) – learning rate config
- Returns
LR class and args
- Return type
cls, kw_args
- cerebras.pytorch.optim.configure_scheduler(optimizer, schedulers_params)[source]#
Configures a generic scheduler from scheduler params. The scheduler class’ signature is inspected and relevant parameters are extracted from the keyword arguments.
- Parameters
optimizer – The optimizer passed to each scheduler.
schedulers_params (dict) – A dict of scheduler params.
schedulers_params is expected to be a dictionary with a single key corresponding to the name of a Scheduler. The value at this key is a sub-dictionary containing key-value pairs matching the arguments of the scheduler (except optimizer).
Example:
LinearLR:
  initial_learning_rate: 0.01
  end_learning_rate: 0.001
  total_iters: 100
Some schedulers take other schedulers as an argument. In that case, nest the sub-scheduler dictionaries inside. For SequentialLR and SequentialWD, milestones is calculated by the function and can be ignored.
SequentialLR:
- LinearLR:
    initial_learning_rate: 0.01
    end_learning_rate: 0.001
    total_iters: 100
- ExponentialLR:
    initial_learning_rate: 0.001
    decay_rate: 0.8
    total_iters: 100
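The same configuration expressed as a Python dictionary, as it would be passed to configure_scheduler (a sketch mirroring the nesting of the YAML above; optimizer is assumed to be an already constructed cstorch optimizer):
import cerebras.pytorch as cstorch

schedulers_params = {
    "SequentialLR": [
        {"LinearLR": {"initial_learning_rate": 0.01,
                      "end_learning_rate": 0.001,
                      "total_iters": 100}},
        {"ExponentialLR": {"initial_learning_rate": 0.001,
                           "decay_rate": 0.8,
                           "total_iters": 100}},
    ],
}
scheduler = cstorch.optim.configure_scheduler(optimizer, schedulers_params)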
Generic Scheduler class in cerebras.pytorch#
optim.scheduler.Scheduler#
- class cerebras.pytorch.optim.scheduler.Scheduler(optimizer, total_iters, last_epoch=-1, param_group_tags=None)[source]#
Generic scheduler class for various optimizer params.
- Parameters
optimizer – The optimizer to schedule
total_iters – Number of steps to perform the decay
last_epoch – the initial step to start at
param_group_tags – param group tags to target update for
- abstract property param_group_key#
Key of the param group value to modify. For example, ‘lr’ or ‘weight_decay’.
Learning Rate Schedulers in cerebras.pytorch#
Available learning rate schedulers in the cerebras.pytorch package
optim.lr_scheduler.LRScheduler#
optim.lr_scheduler.ConstantLR#
- class cerebras.pytorch.optim.lr_scheduler.ConstantLR(*args, **kwargs)[source]#
Maintains a constant learning rate for each parameter group (no decaying).
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
val – The learning_rate value to maintain
total_iters (int) – The number of steps to decay for
- property val#
optim.lr_scheduler.PolynomialLR#
- class cerebras.pytorch.optim.lr_scheduler.PolynomialLR(*args, **kwargs)[source]#
Decays the learning rate of each parameter group using a polynomial function in the given total_iters.
This class is similar to the Pytorch PolynomialLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
end_learning_rate (float) – The final learning rate
total_iters (int) – Number of steps to perform the decay
power (float) – Exponent to apply to “x” (as in y=mx+b), where x is the ratio of step completion. Default: 1.0 (only linear is supported at the moment)
cycle (bool) – Whether to cycle
- property initial_val#
- property end_val#
optim.lr_scheduler.LinearLR#
optim.lr_scheduler.ExponentialLR#
- class cerebras.pytorch.optim.lr_scheduler.ExponentialLR(*args, **kwargs)[source]#
Decays the learning rate of each parameter group by decay_rate every step.
This class is similar to the Pytorch ExponentialLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
total_iters (int) – Number of steps to perform the decay
decay_rate (float) – The decay rate
staircase (bool) – If True decay the learning rate at discrete intervals
- property initial_val#
optim.lr_scheduler.InverseExponentialTimeDecayLR#
- class cerebras.pytorch.optim.lr_scheduler.InverseExponentialTimeDecayLR(*args, **kwargs)[source]#
Decays the learning rate inverse-exponentially over time, as described in the Keras InverseTimeDecay class.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
step_exponent (int) – Exponential value.
total_iters (int) – Number of steps to perform the decay.
decay_rate (float) – The decay rate.
staircase (bool) – If True decay the learning rate at discrete intervals.
- property initial_val#
optim.lr_scheduler.InverseSquareRootDecayLR#
- class cerebras.pytorch.optim.lr_scheduler.InverseSquareRootDecayLR(*args, **kwargs)[source]#
Decays the learning rate inverse-squareroot over time, as described in the following equation:
\[\begin{aligned} lr_t & = \frac{\text{scale}}{\sqrt{\max\{t, \text{warmup_steps}\}}}. \end{aligned}\]
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
scale (float) – Multiplicative factor to scale the result.
warmup_steps (int) – use initial_learning_rate for the first warmup_steps.
- property initial_val#
optim.lr_scheduler.CosineDecayLR#
- class cerebras.pytorch.optim.lr_scheduler.CosineDecayLR(*args, **kwargs)[source]#
Applies the cosine decay schedule as described in the Keras CosineDecay class.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
end_learning_rate (float) – The final learning rate
total_iters (int) – Number of steps to perform the decay
- property initial_val#
- property end_val#
optim.lr_scheduler.SequentialLR#
- class cerebras.pytorch.optim.lr_scheduler.SequentialLR(*args, **kwargs)[source]#
Receives the list of schedulers that are expected to be called sequentially during the optimization process, and milestone points that provide exact intervals reflecting which scheduler is supposed to be called at a given step.
This class is a wrapper around the Pytorch SequentialLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – Wrapped optimizer
schedulers (list) – List of chained schedulers.
milestones (list) – List of integers that reflects milestone points.
last_epoch (int) – The index of last epoch. Default: -1.
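A direct construction sketch (the keyword usage assumes the documented parameter names; when configuring through configure_scheduler the milestones are computed automatically):
import cerebras.pytorch as cstorch

warmup = cstorch.optim.lr_scheduler.ConstantLR(optimizer, val=0.01, total_iters=100)
decay = cstorch.optim.lr_scheduler.ExponentialLR(
    optimizer, initial_learning_rate=0.01, total_iters=900, decay_rate=0.9
)
# Run the constant warmup schedule for the first 100 steps, then switch to the decay.
lr_scheduler = cstorch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, decay], milestones=[100]
)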
optim.lr_scheduler.PiecewiseConstantLR#
- class cerebras.pytorch.optim.lr_scheduler.PiecewiseConstantLR(*args, **kwargs)[source]#
Adjusts the learning rate to a predefined constant at each milestone and holds this value until the next milestone. Notice that such adjustment can happen simultaneously with other changes to the learning rate from outside this scheduler.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
learning_rates (List[float]) – List of learning rates to maintain before/during each milestone.
milestones (List[int]) – List of step indices. Must be increasing.
optim.lr_scheduler.MultiStepLR#
- class cerebras.pytorch.optim.lr_scheduler.MultiStepLR(*args, **kwargs)[source]#
Decays the learning rate of each parameter group by gamma once the number of steps reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
This class is similar to the Pytorch MultiStepLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
gamma (float) – Multiplicative factor of learning rate decay.
milestones (List[int]) – List of step indices. Must be increasing.
- property initial_val#
optim.lr_scheduler.StepLR#
- class cerebras.pytorch.optim.lr_scheduler.StepLR(*args, **kwargs)[source]#
Decays the learning rate of each parameter group by gamma every step_size. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
This class is similar to the Pytorch StepLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
step_size (int) – Period of decay.
gamma (float) – Multiplicative factor of decay.
- property initial_val#
optim.lr_scheduler.CosineAnnealingLR#
- class cerebras.pytorch.optim.lr_scheduler.CosineAnnealingLR(*args, **kwargs)[source]#
Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr and \(T_{cur}\) is the number of steps since the last restart in SGDR:
\[\begin{split}\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}\end{split}\]
Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\]
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts.
This class is similar to the Pytorch CosineAnnealingLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
T_max (int) – Maximum number of iterations.
eta_min (float) – Minimum learning rate.
- property initial_val#
optim.lr_scheduler.LambdaLR#
- class cerebras.pytorch.optim.lr_scheduler.LambdaLR(*args, **kwargs)[source]#
Sets the learning rate of each parameter group to the initial lr times a given function (which is specified by overriding set_value_lambda).
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
- property initial_val#
optim.lr_scheduler.CosineAnnealingWarmRestarts#
- class cerebras.pytorch.optim.lr_scheduler.CosineAnnealingWarmRestarts(*args, **kwargs)[source]#
Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr, \(T_{cur}\) is the number of steps since the last restart and \(T_{i}\) is the number of steps between two warm restarts in SGDR:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)\]
When \(T_{cur}=T_{i}\), set \(\eta_t = \eta_{min}\). When \(T_{cur}=0\) after restart, set \(\eta_t=\eta_{max}\).
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts.
This class is similar to the Pytorch CosineAnnealingWarmRestarts LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
T_0 (int) – Number of iterations for the first restart.
T_mult (int) – A factor by which T_i increases after a restart. Currently, T_mult must be set to 1.0.
eta_min (float) – Minimum learning rate.
- property initial_val#
optim.lr_scheduler.MultiplicativeLR#
- class cerebras.pytorch.optim.lr_scheduler.MultiplicativeLR(*args, **kwargs)[source]#
Multiply the learning rate of each parameter group by the supplied coefficient.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – The initial learning rate.
coefficient (float) – Multiplicative factor of learning rate.
- property initial_val#
optim.lr_scheduler.ChainedScheduler#
optim.lr_scheduler.CyclicLR#
- class cerebras.pytorch.optim.lr_scheduler.CyclicLR(*args, **kwargs)[source]#
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
“triangular”: A basic triangular cycle without amplitude scaling.
“triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
“exp_range”: A cycle that scales initial amplitude by \(\text{gamma}^{\text{cycle iterations}}\) at each cycle iteration.
This class is similar to the Pytorch CyclicLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule.
base_lr (float) – Initial learning rate which is the lower boundary in the cycle.
max_lr (float) – Upper learning rate boundaries in the cycle.
step_size_up (int) – Number of training iterations in the increasing half of a cycle.
step_size_down (int) – Number of training iterations in the decreasing half of a cycle.
mode (str) – One of {‘triangular’, ‘triangular2’, ‘exp_range’}.
gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
scale_mode (str) – {‘cycle’, ‘iterations’} Defines whether scale_fn is evaluated on cycle number or cycle iterations.
- property base_val#
- property max_val#
optim.lr_scheduler.OneCycleLR#
- class cerebras.pytorch.optim.lr_scheduler.OneCycleLR(*args, **kwargs)[source]#
Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
This scheduler is not chainable.
This class is similar to the Pytorch OneCycleLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_learning_rate (float) – Initial learning rate. Compared with PyTorch, this is equivalent to max_lr / div_factor.
max_lr (float) – Upper learning rate boundaries in the cycle.
total_steps (int) – The total number of steps in the cycle.
pct_start (float) – The percentage of the cycle (in number of steps) spent increasing the learning rate.
final_div_factor (float) – Determines the minimum learning rate via min_lr = initial_lr/final_div_factor.
three_phase (bool) – If True, use a third phase of the schedule to annihilate the learning rate
anneal_strategy (str) – Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing.
- property initial_val#
- property max_val#
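For example, to reproduce a PyTorch OneCycleLR configured with max_lr=0.1 and div_factor=25, set initial_learning_rate = 0.1 / 25 = 0.004 and max_lr=0.1 here (an illustrative mapping based on the parameter description above).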
Weight Decay Schedulers in cerebras.pytorch#
Available weight decay schedulers in the cerebras.pytorch package
optim.weight_decay_scheduler.WeightDecayScheduler#
optim.weight_decay_scheduler.ConstantWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.ConstantWD(optimizer, val, total_iters=None, param_group_tags=None)[source]#
Maintains a constant weight decay for each parameter group (no decaying).
- Parameters
optimizer – The optimizer to schedule
val (float) – The weight decay value to maintain
total_iters (int) – The number of steps to decay for
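A brief usage sketch (optimizer is assumed to be an already constructed cstorch optimizer whose parameter groups define weight_decay; as with the cyclic schedulers described below, the scheduler’s step method is assumed to be called once per training batch):
import cerebras.pytorch as cstorch

# Hold the weight decay at 0.01 for the first 1000 steps of the run.
wd_scheduler = cstorch.optim.weight_decay_scheduler.ConstantWD(
    optimizer, val=0.01, total_iters=1000
)

# Inside the training loop, after optimizer.step():
wd_scheduler.step()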
optim.weight_decay_scheduler.PolynomialWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.PolynomialWD(optimizer, initial_val, end_val, total_iters, power=1.0, cycle=False, param_group_tags=None)[source]#
Decays the weight decay of each parameter group using a polynomial function in the given total_iters.
This class is similar to the Pytorch PolynomialLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_val (float) – The initial weight decay
end_val (float) – The final weight decay
total_iters (int) – Number of steps to perform the decay
power (float) – Exponent to apply to “x” (as in y=mx+b), where x is the ratio of step completion. Default: 1.0 (only linear is supported at the moment)
cycle (bool) – Whether to cycle
optim.weight_decay_scheduler.LinearWD#
optim.weight_decay_scheduler.ExponentialWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.ExponentialWD(optimizer, initial_val, total_iters, decay_rate, staircase=False, param_group_tags=None)[source]#
Decays the weight decay of each parameter group by decay_rate every step.
This class is similar to the Pytorch ExponentialLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
total_iters (int) – Number of steps to perform the decay
decay_rate (float) – The decay rate
staircase (bool) – If True decay the weight decay at discrete intervals
optim.weight_decay_scheduler.InverseExponentialTimeDecayWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.InverseExponentialTimeDecayWD(optimizer, initial_val, step_exponent, total_iters, decay_rate, staircase=False, param_group_tags=None)[source]#
Decays the weight decay inverse-exponentially over time, as described in the Keras InverseTimeDecay class.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
step_exponent (int) – Exponential weight decay.
total_iters (int) – Number of steps to perform the decay.
decay_rate (float) – The decay rate.
staircase (bool) – If True decay the weight decay at discrete intervals.
optim.weight_decay_scheduler.InverseSquareRootDecayWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.InverseSquareRootDecayWD(optimizer, initial_val=1.0, scale=1.0, warmup_steps=1.0, param_group_tags=None)[source]#
Decays the weight decay inverse-squareroot over time, as described in the following equation:
\[\begin{aligned} wd_t & = \frac{\text{scale}}{\sqrt{\max\{t, \text{warmup_steps}\}}}. \end{aligned}\]
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
scale (float) – Multiplicative factor to scale the result.
warmup_steps (int) – use initial_val for the first warmup_steps.
optim.weight_decay_scheduler.CosineDecayWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.CosineDecayWD(optimizer, initial_val, end_val, total_iters, param_group_tags=None)[source]#
Applies the cosine decay schedule as described in the Keras CosineDecay class.
- Parameters
optimizer – The optimizer to schedule
initial_val (float) – The initial weight decay
end_val (float) – The final weight decay
total_iters (int) – Number of steps to perform the decay
optim.weight_decay_scheduler.SequentialWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.SequentialWD(optimizer, schedulers, milestones, last_epoch=-1, param_group_tags=None)[source]#
Receives the list of schedulers that are expected to be called sequentially during the optimization process, and milestone points that provide exact intervals reflecting which scheduler is supposed to be called at a given step.
This class is similar to Pytorch SequentialLR LRS.
- Parameters
optimizer – Wrapped optimizer
schedulers (list) – List of chained schedulers.
milestones (list) – List of integers that reflects milestone points.
last_epoch (int) – The index of last epoch. Default: -1.
optim.weight_decay_scheduler.PiecewiseConstantWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.PiecewiseConstantWD(optimizer, vals, milestones, param_group_tags=None)[source]#
Adjusts the weight decay to a predefined constant at each milestone and holds this value until the next milestone. Notice that such adjustment can happen simultaneously with other changes to the weight decays from outside this scheduler.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
vals (List[float]) – List of weight decays to maintain before/during each milestone.
milestones (List[int]) – List of step indices. Must be increasing.
optim.weight_decay_scheduler.MultiStepWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.MultiStepWD(optimizer, initial_val, gamma, milestones, param_group_tags=None)[source]#
Decays the weight decay of each parameter group by gamma once the number of steps reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the weight decay from outside this scheduler.
This class is similar to the Pytorch MultiStepLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
gamma (float) – Multiplicative decay factor applied to the weight decay.
milestones (List[int]) – List of step indices. Must be increasing.
optim.weight_decay_scheduler.StepWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.StepWD(optimizer, initial_val, step_size, gamma, param_group_tags=None)[source]#
Decays the weight decay of each parameter group by gamma every step_size. Notice that such decay can happen simultaneously with other changes to the weight decay from outside this scheduler.
This class is similar to the Pytorch StepLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
step_size (int) – Period of decay.
gamma (float) – Multiplicative factor of decay.
optim.weight_decay_scheduler.CosineAnnealingWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWD(optimizer, initial_val, T_max, eta_min=0.0, param_group_tags=None)[source]#
Set the weight decay of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial wd and \(T_{cur}\) is the number of steps since the last restart in SGDR:
\[\begin{split}\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}\end{split}\]
Notice that because the schedule is defined recursively, the weight decay can be simultaneously modified outside this scheduler by other operators. If the weight decay is set solely by this scheduler, the weight decay at each step becomes:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\]
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts.
This class is similar to the Pytorch CosineAnnealingLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
T_max (int) – Maximum number of iterations.
eta_min (float) – Minimum weight decay.
optim.weight_decay_scheduler.LambdaWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.LambdaWD(optimizer, initial_val, param_group_tags=None)[source]#
Sets the weight decay of each parameter group to the initial wd times a given function (which is specified by overriding set_value_lambda).
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
optim.weight_decay_scheduler.CosineAnnealingWarmRestartsWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWarmRestartsWD(optimizer, initial_val, T_0, T_mult=1, eta_min=0.0, param_group_tags=None)[source]#
Set the weight decay of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial wd, \(T_{cur}\) is the number of steps since the last restart and \(T_{i}\) is the number of steps between two warm restarts in SGDR:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)\]
When \(T_{cur}=T_{i}\), set \(\eta_t = \eta_{min}\). When \(T_{cur}=0\) after restart, set \(\eta_t=\eta_{max}\).
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts.
This class is similar to the Pytorch CosineAnnealingWarmRestarts LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
T_0 (int) – Number of iterations for the first restart.
T_mult (int) – A factor by which T_i increases after a restart. Currently, T_mult must be set to 1.0.
eta_min (float) – Minimum weight decay.
optim.weight_decay_scheduler.MultiplicativeWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.MultiplicativeWD(optimizer, initial_val, coefficient, param_group_tags=None)[source]#
Multiply the weight decay of each parameter group by the supplied coefficient.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – The initial weight decay.
coefficient (float) – Multiplicative factor of weight decay.
optim.weight_decay_scheduler.ChainedWD#
optim.weight_decay_scheduler.CyclicWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.CyclicWD(optimizer, base_val, max_val, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_mode='cycle', param_group_tags=None)[source]#
Sets the weight decay of each parameter group according to a cyclical weight decay policy (CLR). The policy cycles the weight decay between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
Cyclical weight decay policy changes the weight decay after every batch. step should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
“triangular”: A basic triangular cycle without amplitude scaling.
“triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
“exp_range”: A cycle that scales initial amplitude by \(\text{gamma}^{\text{cycle iterations}}\) at each cycle iteration.
This class is similar to the Pytorch CyclicLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule.
base_val (float) – Initial weight decay which is the lower boundary in the cycle.
max_val (float) – Upper weight decay boundaries in the cycle.
step_size_up (int) – Number of training iterations in the increasing half of a cycle.
step_size_down (int) – Number of training iterations in the decreasing half of a cycle.
mode (str) – One of {‘triangular’, ‘triangular2’, ‘exp_range’}.
gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
scale_mode (str) – {‘cycle’, ‘iterations’} Defines whether scale_fn is evaluated on cycle number or cycle iterations.
optim.weight_decay_scheduler.OneCycleWD#
- class cerebras.pytorch.optim.weight_decay_scheduler.OneCycleWD(optimizer, initial_val, max_val, total_steps=1000, pct_start=0.3, final_div_factor=10000.0, three_phase=False, anneal_strategy='cos', param_group_tags=None)[source]#
Sets the weight decay of each parameter group according to the 1cycle weight decay policy. The 1cycle policy anneals the weight decay from an initial weight decay to some maximum weight decay and then from that maximum weight decay to some minimum weight decay much lower than the initial weight decay. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
This scheduler is not chainable.
This class is similar to the Pytorch OneCycleLR LRS.
- Parameters
optimizer (torch.optim.Optimizer) – The optimizer to schedule
initial_val (float) – Initial weight decay. Compared with PyTorch, this is equivalent to max_val / div_factor.
max_val (float) – Upper weight decay boundaries in the cycle.
total_steps (int) – The total number of steps in the cycle.
pct_start (float) – The percentage of the cycle (in number of steps) spent increasing the weight decay.
final_div_factor (float) – Determines the minimum weight decay via min_val = initial_val/final_div_factor.
three_phase (bool) – If True, use a third phase of the schedule to annihilate the weight decay
anneal_strategy (str) – Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing.