
Dropout techniques

Dropout is a technique used primarily to address overfitting in neural networks by randomly deactivating a portion of the activations, forcing the model to learn effectively from only a subset of them. Dropout is controlled by a single parameter: the proportion of activations deactivated within a dense layer. Because a different random subset of activations may be deactivated in each epoch, the model cannot rely on any particular activation during training, which aids generalization. The dropout technique can be represented mathematically as:

Y = M ⊙ X

where ⊙ denotes element-wise multiplication and:

  • X is the input data,

  • Y is the output data after applying the dropout technique,

  • p is the probability of dropping a unit,

  • M is a binary mask with the same shape as X, where each element is either 0 or 1 with probabilities p and 1-p, respectively.
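
As a concrete illustration, here is a minimal NumPy sketch of the masking step defined above. The function name dropout, the use of NumPy, and the example shapes are assumptions made for illustration, not Minerva AI's actual implementation:

```python
import numpy as np

def dropout(X, p, rng=None):
    """Zero each element of X with probability p (illustrative sketch).

    Follows the unscaled form Y = M ⊙ X given above; assumes X is a
    NumPy array of activations.
    """
    rng = np.random.default_rng() if rng is None else rng
    # M: binary mask with the same shape as X; each element is
    # 0 with probability p and 1 with probability 1 - p.
    M = (rng.random(X.shape) >= p).astype(X.dtype)
    # Y = M (element-wise) X: the product deactivates the masked units.
    return M * X

# Example: deactivate roughly half of the activations in a small input.
X = np.ones((2, 4))
Y = dropout(X, p=0.5)
```

Note that many practical implementations use "inverted dropout", which additionally rescales the surviving activations by 1 / (1 βˆ’ p) during training so that no adjustment is needed at inference time; the sketch above follows the unscaled form given in the equation.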
