Perplexity is a measure used in probabilistic modeling. In NLP it is used to measure how well a probabilistic model explains the observed data. It is closely related to the likelihood, which is the joint probability of the observed data under the model.

Suppose the model generates data $x_1, x_2, \dots, x_n$; then the perplexity can be computed as

$$\mathrm{PPL}(x_1, \dots, x_n) = \exp\left(-\frac{1}{n}\sum_{i=1}^{n}\log p(x_i)\right),$$

i.e., the exponential of the negative average log-likelihood per observation.

When each observation $x_i$ is an independent word, as in many NLP tasks, this is called per-word perplexity. If each $x_i$ corresponds to a document, it is called per-document perplexity.
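The definition above translates directly into code. The following is a minimal sketch (the function name `perplexity` and the toy probabilities are illustrative, not from the original): it takes the model's probabilities $p(x_i)$ for each observation and returns the exponential of the negative mean log-probability.

```python
import math

def perplexity(probs):
    """Compute perplexity from per-observation model probabilities p(x_i).

    perplexity = exp(-(1/n) * sum_i log p(x_i))
    """
    n = len(probs)
    log_likelihood = sum(math.log(p) for p in probs)
    return math.exp(-log_likelihood / n)

# A model that assigns each of 4 equally likely words probability 0.25
# has perplexity 4: it is as "confused" as a uniform choice among 4 options.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

A uniform distribution over $k$ outcomes has perplexity exactly $k$, which is why perplexity is often read as the model's effective number of equally likely choices per observation: lower perplexity means higher likelihood and a better fit.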