cerebras.modelzoo.common.utils.model.lora.LoraConfig
- class cerebras.modelzoo.common.utils.model.lora.LoraConfig(r=0, alpha=1, dropout=0.0, fan_in_fan_out=False, merge_weights=False, target_modules=None)[source]
Bases:
object
- r: Rank of the LoRA matrix projections
- alpha: Scaling factor (see the LoRA paper for additional details)
- dropout: Dropout to apply to LoRA updates
- fan_in_fan_out:
- merge_weights: Determines whether LoRA weights should be merged/folded into the underlying layers
- target_modules: A list of module names that must all exist in layers that will be converted to LoRA. For example, setting target_modules to ["TransformerDecoderLayer", "Linear"] means that all Linear layers that are children of a TransformerDecoderLayer will be converted to LoRA (see the sketch below).
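A minimal usage sketch, assuming only what the signature above documents: the import path and the constructor's keyword arguments. The specific values (r=8, alpha=16, dropout=0.05) are illustrative placeholders, not recommended settings.

```python
from cerebras.modelzoo.common.utils.model.lora import LoraConfig

# Illustrative values only; tune r, alpha, and dropout for your model.
lora_config = LoraConfig(
    r=8,                  # rank of the low-rank LoRA update matrices
    alpha=16,             # scaling factor applied to the LoRA updates
    dropout=0.05,         # dropout applied to the LoRA updates
    merge_weights=False,  # keep LoRA weights separate from the base weights
    # Every listed name must match a layer's module hierarchy, so this
    # converts only Linear layers that are children of a TransformerDecoderLayer.
    target_modules=["TransformerDecoderLayer", "Linear"],
)
```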
Methods
Attributes
alpha
dropout
fan_in_fan_out
merge_weights
r
target_modules