# Dropout techniques

Dropout is a regularization technique used primarily to combat overfitting in neural networks by randomly deactivating a fraction of a layer's activations during training. Because a different random subset of activations is dropped on each forward pass, the network cannot depend on any single unit and must learn redundant, robust features, which aids generalization. Dropout is controlled by a single parameter: the probability that each activation is deactivated. The technique can be represented mathematically as follows:

Y = M ⊙ X

where:

- X is the input data,
- Y is the output data after applying dropout,
- p is the probability of dropping a unit,
- M is a binary mask with the same shape as X, where each element is 0 with probability p or 1 with probability 1 − p, and ⊙ denotes element-wise multiplication.
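As a minimal NumPy sketch of the formula above (the function name `dropout` and the use of a seeded generator are illustrative choices, not part of the original text), the mask M can be sampled element-wise and multiplied into the input. Note that in practice, frameworks typically use "inverted dropout," which additionally scales the surviving activations by 1 / (1 − p) so that expected activations match at test time; the plain form below mirrors the equation as stated.

```python
import numpy as np

def dropout(X, p, rng=None):
    """Apply dropout: zero each element of X independently with probability p.

    Returns Y = M * X, where M is a binary mask whose elements are
    1 with probability 1 - p (keep) and 0 with probability p (drop).
    """
    rng = np.random.default_rng() if rng is None else rng
    M = (rng.random(X.shape) >= p).astype(X.dtype)  # 1 keeps, 0 drops
    return M * X

X = np.ones((4, 5))
Y = dropout(X, p=0.5, rng=np.random.default_rng(0))
```

With p = 0 the input passes through unchanged, and with p = 1 every activation is zeroed, matching the two extremes of the formula.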
