Read the previous tip: Watch Your Own Data
There are some best practices you should follow regardless of the task or the type of DNN model you are using. For example, always split your data into three sets: training, validation, and testing. Monitor the training loss alongside the validation loss to determine whether your model is underfitting or overfitting, and be ready to adjust the model architecture and hyperparameters accordingly.
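As a minimal sketch of the three-way split, here is one way to do it in plain Python. The function name, the 70/15/15 fractions, and the fixed seed are illustrative choices, not a prescription; in practice a library helper (such as scikit-learn's `train_test_split`, applied twice) does the same job.

```python
import random

def three_way_split(samples, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle a dataset and split it into train/validation/test subsets.

    Shuffling before splitting matters: if the data is ordered (e.g. by
    class label), a plain slice would give non-representative subsets.
    """
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible split
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(range(100))
```

The test set is held out until the very end; only the validation set is used for the epoch-by-epoch monitoring discussed below.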
Be very mindful of the distinction between underfitting and overfitting, as this is an area where people easily become confused. If the training loss of your model does not reach a sufficiently low level, your model is underfitting, and the validation loss is not yet meaningful. If the training loss has reached a low level and continues to decrease with more epochs, but the validation loss plateaus or starts to rise instead of following it down, then you have an overfitting model.
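This decision rule can be sketched as a small heuristic over the recorded loss histories. The function name and the two thresholds (`low_enough`, `gap_tol`) are illustrative assumptions; sensible values depend entirely on your loss function and dataset.

```python
def diagnose(train_losses, val_losses, low_enough=0.1, gap_tol=0.05):
    """Crude heuristic: classify a training run from its loss curves."""
    # Underfitting: training loss never got low, so ignore validation loss.
    if train_losses[-1] > low_enough:
        return "underfitting"
    # Training loss is low; overfitting shows as a widening train/val gap
    # or a validation loss that has risen off its own minimum.
    if (val_losses[-1] - train_losses[-1] > gap_tol
            or val_losses[-1] > min(val_losses) + gap_tol):
        return "overfitting"
    return "ok"
```

For example, a run whose validation loss climbs from 0.4 back up to 0.45 while the training loss falls to 0.05 would be flagged as overfitting.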
To fix an underfitting model, increase its capacity, for example by adding one or more convolutional layers or by adding more filters to an existing convolutional layer. To fix an overfitting model, do the opposite: remove a convolutional layer or reduce the number of filters in one. Other techniques for combating overfitting include dropout and weight regularization.
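To make dropout concrete, here is a minimal pure-Python sketch of "inverted" dropout, the variant used by modern frameworks; in practice you would simply add a framework-provided dropout layer rather than write this yourself, and the 0.5 rate is just an illustrative default.

```python
import random

def dropout(activations, rate=0.5, rng=None, training=True):
    """Inverted dropout on a list of activations.

    During training, each activation is zeroed with probability `rate`,
    and the survivors are scaled by 1/(1 - rate) so the expected value
    of each activation is unchanged. At inference time it is a no-op,
    which is why no rescaling is needed when the model is deployed.
    """
    if not training or rate == 0.0:
        return list(activations)
    rng = rng or random.Random()
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Randomly silencing units this way prevents the network from relying on any single co-adapted feature, which is what makes it an effective regularizer.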
Read next tip: My Starting-Point CNN Model