An imbalanced dataset, or the imbalanced class problem, poses various challenges. One of the many possible ways of solving this problem is by oversampling the minority class. However, oversampling without addressing the following issues can be dangerous: usually, we begin by splitting the entire dataset into a training and a testing set. The training set is further split into training…
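A minimal sketch of the split-then-oversample order hinted at above. The 90/10 class ratio, the feature values, and the 80/20 split are illustrative assumptions, not the post's actual data:

```python
import random

random.seed(0)

# Toy imbalanced dataset: 90 majority (label 0) and 10 minority (label 1)
# samples. The feature values and the 90/10 ratio are assumptions.
data = [(random.random(), 0) for _ in range(90)] + \
       [(random.random(), 1) for _ in range(10)]
random.shuffle(data)

# Split FIRST, then oversample only the training portion. Oversampling
# before the split would leak copies of minority samples into the test
# set and inflate the evaluation scores.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

minority = [row for row in train if row[1] == 1]
majority = [row for row in train if row[1] == 0]

# Random oversampling with replacement until both classes are equal in size.
oversampled = train + random.choices(minority, k=len(majority) - len(minority))
```

The test set is never touched by the oversampling step, so it still reflects the real class distribution.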

# Tag: overfitting

## Error analysis in supervised machine learning

Every supervised learning problem encounters either bias error or variance error. Please refer to this page if you want more intuition about bias and variance error, as it will help in understanding this post. Once you know where (bias or variance) your model is going wrong, it becomes easier to decide the next direction. This…
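The bias-versus-variance diagnosis can be sketched as a simple comparison of training and validation error. The threshold and the error figures below are illustrative assumptions, not universal rules:

```python
def diagnose(train_error, val_error, acceptable_error=0.05):
    """Rough bias/variance diagnosis from train and validation error."""
    if train_error > acceptable_error:
        return "high bias"      # the model underfits even the training data
    if val_error - train_error > acceptable_error:
        return "high variance"  # large generalization gap => overfitting
    return "looks ok"

print(diagnose(0.20, 0.22))  # high bias
print(diagnose(0.01, 0.15))  # high variance
```

Knowing which regime you are in tells you what to try next: a bigger model or more features for high bias, more data or regularization for high variance.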

## Why are NLP-related models prone to overfitting?

Models trained on text data for any task, like sentiment classification or any other supervised problem, are prone to overfitting. This is not due to any technique used for building the model; it is more due to the usage of the OOV token in NLP-based models. OOV stands for Out-of-Vocabulary token…
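A minimal sketch of how an OOV token behaves: any word not seen when the vocabulary was built collapses to the same `<OOV>` index. The vocabulary and sentences here are illustrative assumptions:

```python
OOV = "<OOV>"

def build_vocab(corpus):
    # Assign each distinct word an integer id; id 0 is reserved for OOV.
    vocab = {OOV: 0}
    for sentence in corpus:
        for word in sentence.split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(sentence, vocab):
    # Unknown words fall back to the OOV index.
    return [vocab.get(word, vocab[OOV]) for word in sentence.split()]

vocab = build_vocab(["the movie was great", "the plot was dull"])

# "awful" was never seen during training, so it collapses to index 0,
# losing exactly the signal that distinguishes it from "great".
print(encode("the movie was awful", vocab))
```

Because every unseen word maps to one shared id, a model can fit the training vocabulary very closely yet generalize poorly to real-world text full of OOV words.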

## What are overfitting and underfitting? Why do they occur? How do you overcome them?

There are three parts to this answer: what overfitting and underfitting are, why they occur, and how you can overcome both of them. Overfitting is the result of overtraining the model, while underfitting is the result of keeping the model too simple; both lead to high generalization error. Overtraining leads to a more complex…
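The two failure modes can be sketched with a toy dataset (the numbers below are made up for illustration): a model that memorizes training points overfits, while a model that always predicts the mean underfits.

```python
# Roughly y = 2x, with a little noise; values are illustrative assumptions.
train = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.2)]
test = [(5, 10.1), (6, 11.8)]

mean_y = sum(y for _, y in train) / len(train)

def underfit(x):
    # Too simple: ignore x entirely and always predict the mean.
    return mean_y

table = dict(train)

def overfit(x):
    # Memorize the training points exactly (a lookup table),
    # falling back to the mean on anything unseen.
    return table.get(x, mean_y)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# The overfit model has zero training error but large test error;
# the underfit model has high error on both sets.
print(mse(overfit, train), mse(overfit, test))
print(mse(underfit, train), mse(underfit, test))
```

Both models end up with high error on unseen data, which is what "high generalization error" means in the excerpt above.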

## Overfitting is a result of which of the following causes:

1. Too little data
2. A simple model like a linear classifier
3. A complex model like a high-degree polynomial classifier
4. All of the above

Answer – (1), (3). Overfitting generally happens when the model tries to fit everything because it is too complex or there is too little data. When your model performs well on…

## What are the different ways of preventing over-fitting in a deep neural network? Explain the intuition behind each

- L2 norm regularization: makes the weights closer to zero to prevent overfitting.
- L1 norm regularization: makes the weights closer to zero and also induces sparsity in the weights; a less common form of regularization.
- Dropout regularization: ensures some of the hidden units are dropped out at random so the network does not overfit by…
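The three regularizers above can be sketched in a few lines. The weight values and the lambda/p hyperparameters below are illustrative assumptions:

```python
import random

random.seed(0)

# Illustrative weight vector, not from any real network.
weights = [0.5, -1.2, 3.0, 0.0, -0.7]

def l2_penalty(ws, lam=0.01):
    # Added to the loss: lam * sum(w^2) pushes all weights toward zero.
    return lam * sum(w * w for w in ws)

def l1_penalty(ws, lam=0.01):
    # lam * sum(|w|) also shrinks weights, and tends to drive some of
    # them exactly to zero, which is the sparsity-inducing effect.
    return lam * sum(abs(w) for w in ws)

def dropout(activations, p=0.5):
    # Inverted dropout: zero each unit with probability p during training,
    # scaling the survivors by 1/(1-p) so the expected activation is unchanged.
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

print(l2_penalty(weights))
print(l1_penalty(weights))
print(dropout([1.0, 1.0, 1.0, 1.0]))
```

Because dropout zeroes a different random subset of units on every forward pass, no single hidden unit can be relied on, which discourages co-adaptation and reduces overfitting.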