
In statistics, **Poisson regression** is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable *Y* has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables.


**Negative binomial regression** is a popular generalization of Poisson regression because it loosens the Poisson model's highly restrictive assumption that the variance equals the mean. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution, which models the Poisson heterogeneity with a gamma distribution.
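The Poisson-gamma mixture can be sketched in a few lines of Python. This is an illustrative sketch, not code from the article: the function name and the parameterization (shape 1/*κ*, scale *κμ*, giving mean *μ* and variance *μ*(1 + *κμ*)) are assumptions chosen to match the negative binomial variance formula discussed below.

```python
import math
import random

def gamma_poisson_sample(mu, kappa, rng):
    # Draw a rate lambda ~ Gamma(shape=1/kappa, scale=kappa*mu),
    # so E[lambda] = mu and Var[lambda] = kappa * mu**2, then draw
    # Y ~ Poisson(lambda); marginally Y is negative binomial with
    # mean mu and variance mu * (1 + kappa * mu).
    lam = rng.gammavariate(1.0 / kappa, kappa * mu)
    # Knuth's Poisson sampler: multiply uniforms until below exp(-lam).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1
```

Sampling many draws shows the key feature motivating the model: the empirical variance clearly exceeds the empirical mean, which a plain Poisson cannot reproduce.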

Poisson regression models are generalized linear models with the logarithm as the (canonical) link function, and the Poisson distribution function as the assumed probability distribution of the response.

If $\mathbf{x} \in \mathbb{R}^n$ is a vector of independent variables, then the model takes the form

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \alpha + \boldsymbol{\beta}' \mathbf{x},$$

where $\alpha \in \mathbb{R}$ and $\boldsymbol{\beta} \in \mathbb{R}^n$. Sometimes this is written more compactly as

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \boldsymbol{\theta}' \mathbf{x},$$

where **x** is now an (*n* + 1)-dimensional vector consisting of *n* independent variables concatenated to the number one. Here *θ* is simply *α* concatenated to *β*.

Thus, when given a Poisson regression model *θ* and an input vector **x**, the predicted mean of the associated Poisson distribution is given by

$$\operatorname{E}(Y \mid \mathbf{x}) = e^{\boldsymbol{\theta}' \mathbf{x}}.$$

If *Y*_{i} are independent observations with corresponding values **x**_{i} of the predictor variables, then *θ* can be estimated by maximum likelihood. The maximum-likelihood estimates lack a closed-form expression and must be found by numerical methods. Because the log-likelihood for Poisson regression is always concave, Newton–Raphson or other gradient-based methods are appropriate estimation techniques.

Given a set of parameters *θ* and an input vector **x**, the mean of the predicted Poisson distribution, as stated above, is given by

$$\lambda := \operatorname{E}(Y \mid \mathbf{x}) = e^{\boldsymbol{\theta}' \mathbf{x}},$$

and thus, the Poisson distribution's probability mass function is given by

$$p(y \mid \mathbf{x}; \boldsymbol{\theta}) = \frac{\lambda^y}{y!} e^{-\lambda} = \frac{e^{y \boldsymbol{\theta}' \mathbf{x}} e^{-e^{\boldsymbol{\theta}' \mathbf{x}}}}{y!}.$$

Now suppose we are given a data set consisting of *m* vectors $\mathbf{x}_i \in \mathbb{R}^{n+1},\ i = 1, \ldots, m$, along with a set of *m* values $y_1, \ldots, y_m \in \mathbb{N}$. Then, for a given set of parameters *θ*, the probability of attaining this particular set of data is given by

$$p(y_1, \ldots, y_m \mid \mathbf{x}_1, \ldots, \mathbf{x}_m; \boldsymbol{\theta}) = \prod_{i=1}^m \frac{e^{y_i \boldsymbol{\theta}' \mathbf{x}_i} e^{-e^{\boldsymbol{\theta}' \mathbf{x}_i}}}{y_i!}.$$

By the method of maximum likelihood, we wish to find the set of parameters *θ* that makes this probability as large as possible. To do this, the equation is first rewritten as a likelihood function in terms of *θ*:

$$L(\boldsymbol{\theta} \mid X, Y) = \prod_{i=1}^m \frac{e^{y_i \boldsymbol{\theta}' \mathbf{x}_i} e^{-e^{\boldsymbol{\theta}' \mathbf{x}_i}}}{y_i!}.$$

Note that the expression on the right-hand side has not actually changed. A formula in this form is typically difficult to work with; instead, one uses the *log-likelihood*:

$$\ell(\boldsymbol{\theta} \mid X, Y) = \log L(\boldsymbol{\theta} \mid X, Y) = \sum_{i=1}^m \left( y_i \boldsymbol{\theta}' \mathbf{x}_i - e^{\boldsymbol{\theta}' \mathbf{x}_i} - \log(y_i!) \right).$$

Notice that the parameters *θ* only appear in the first two terms of each term in the summation. Therefore, given that we are only interested in finding the best value for *θ*, we may drop the $\log(y_i!)$ and simply write

$$\ell(\boldsymbol{\theta} \mid X, Y) = \sum_{i=1}^m \left( y_i \boldsymbol{\theta}' \mathbf{x}_i - e^{\boldsymbol{\theta}' \mathbf{x}_i} \right).$$

To find a maximum, we need to solve the equation $\frac{\partial \ell(\boldsymbol{\theta} \mid X, Y)}{\partial \boldsymbol{\theta}} = 0$, which has no closed-form solution. However, the negative log-likelihood, $-\ell(\boldsymbol{\theta} \mid X, Y)$, is a convex function, and so standard convex optimization techniques such as gradient descent can be applied to find the optimal value of *θ*.
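This estimation procedure can be sketched in pure Python. This is an illustrative implementation, not from the original article; the function names and the synthetic data are hypothetical, and a real analysis would use an established GLM routine.

```python
import math

def neg_log_likelihood(theta, X, Y):
    # sum_i ( exp(theta' x_i) - y_i * theta' x_i ); the constant log(y_i!) is dropped
    total = 0.0
    for x, y in zip(X, Y):
        eta = sum(t * xi for t, xi in zip(theta, x))
        total += math.exp(eta) - y * eta
    return total

def fit_poisson(X, Y, lr=0.05, steps=20000):
    # Plain gradient descent on the convex negative log-likelihood.
    # Gradient w.r.t. theta: sum_i (exp(theta' x_i) - y_i) * x_i.
    # Each x_i is assumed to start with a 1 for the intercept.
    theta = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(theta)
        for x, y in zip(X, Y):
            mu = math.exp(sum(t * xi for t, xi in zip(theta, x)))
            for j, xj in enumerate(x):
                grad[j] += (mu - y) * xj
        theta = [t - lr * g / len(X) for t, g in zip(theta, grad)]
    return theta
```

Because the objective is convex, gradient descent from any starting point converges to the unique maximum-likelihood estimate; in practice Newton–Raphson (equivalently, iteratively reweighted least squares) converges far faster.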

Poisson regression may be appropriate when the dependent variable is a count, for instance of events such as the arrival of a telephone call at a call centre.[1] The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day.

Poisson regression may also be appropriate for rate data, where the rate is a count of events divided by some measure of that unit's *exposure* (a particular unit of observation). For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person-years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person-years and unit time. In Poisson regression this is handled as an **offset**, where the exposure variable enters on the right-hand side of the equation, but with a parameter estimate (for log(exposure)) constrained to 1:

$$\log(\operatorname{E}(Y \mid \mathbf{x})) = \log(\text{exposure}) + \boldsymbol{\theta}' \mathbf{x},$$

which implies

$$\log(\operatorname{E}(Y \mid \mathbf{x})) - \log(\text{exposure}) = \log\left(\frac{\operatorname{E}(Y \mid \mathbf{x})}{\text{exposure}}\right) = \boldsymbol{\theta}' \mathbf{x}.$$

Offset in the case of a GLM in R can be achieved using the `offset()` function:

`glm(y ~ offset(log(exposure)) + x, family = poisson(link = log))`
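In terms of prediction, the offset simply scales the fitted rate by the exposure. A minimal Python sketch (the helper name is hypothetical; **x** is assumed to include a leading 1 for the intercept):

```python
import math

def predict_count(theta, x, exposure):
    # With an offset, log E[Y | x] = log(exposure) + theta' x,
    # so the predicted count is exposure * exp(theta' x): the rate
    # exp(theta' x) is modeled, and exposure rescales it to a count.
    return exposure * math.exp(sum(t * xi for t, xi in zip(theta, x)))
```

Doubling the exposure doubles the predicted count while the underlying rate is unchanged, which is exactly what constraining the log(exposure) coefficient to 1 enforces.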

A characteristic of the Poisson distribution is that its mean is equal to its variance. In certain circumstances, it will be found that the observed variance is greater than the mean; this is known as overdispersion and indicates that the model is not appropriate. A common reason is the omission of relevant explanatory variables, or dependent observations. Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead.[2][3]

Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: if *E*(*Y*) = *μ*, the quasi-Poisson model assumes var(*Y*) = *θμ* while the gamma-Poisson assumes var(*Y*) = *μ*(1 + *κμ*), where *θ* is the quasi-Poisson overdispersion parameter and *κ* is the shape parameter of the negative binomial distribution. For both models, parameters are estimated using iteratively reweighted least squares. For quasi-Poisson, the weights are *μ*/*θ*. For negative binomial, the weights are *μ*/(1 + *κμ*). With large *μ* and substantial extra-Poisson variation, the negative binomial weights are capped at 1/*κ*. Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals against the mean.[4]
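The two variance functions and their IRLS weights can be written out directly. This sketches only the formulas above, not the fitting itself, and the function names are hypothetical:

```python
def quasi_poisson_var(mu, theta):
    # Quasi-Poisson: var(Y) = theta * mu (variance proportional to the mean)
    return theta * mu

def neg_binomial_var(mu, kappa):
    # Gamma-Poisson / negative binomial: var(Y) = mu * (1 + kappa * mu)
    return mu * (1 + kappa * mu)

def quasi_poisson_weight(mu, theta):
    # IRLS weight mu/theta grows without bound as mu grows
    return mu / theta

def neg_binomial_weight(mu, kappa):
    # IRLS weight mu/(1 + kappa*mu) approaches the cap 1/kappa for large mu
    return mu / (1 + kappa * mu)
```

The contrast is visible immediately: for large *μ* the quasi-Poisson weight keeps growing linearly, while the negative binomial weight levels off at 1/*κ*, which is why the two models can fit overdispersed data quite differently.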

Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are zero events or any events, and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers.

Other generalized linear models such as the negative binomial model or zero-inflated model may function better in these cases.
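The two-process story can be made concrete with a sampler: a structural zero with probability *π* (the non-smoker), otherwise an ordinary Poisson draw. This is an illustrative sketch; the name and parameters are hypothetical.

```python
import math
import random

def zero_inflated_poisson_sample(pi_zero, lam, rng):
    # Process 1: with probability pi_zero the unit can produce no events at all
    # (e.g. a non-smoker smokes zero cigarettes regardless of the hour).
    if rng.random() < pi_zero:
        return 0
    # Process 2: otherwise an ordinary Poisson count with mean lam
    # (Knuth's sampler: multiply uniforms until below exp(-lam)).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1
```

A plain Poisson with mean *λ* predicts a zero with probability e^{−*λ*}; the mixture produces zeros with probability *π* + (1 − *π*)e^{−*λ*}, which is the "excess zeros" a Poisson regression cannot match.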

Poisson regression can be used to construct proportional hazards models, one class of survival analysis: see proportional hazards models for descriptions of Cox models.

When estimating the parameters for Poisson regression, one typically tries to find values for *θ* that maximize the likelihood of an expression of the form

$$\sum_{i=1}^m \log\left( p(y_i; e^{\boldsymbol{\theta}' \mathbf{x}_i}) \right),$$

where *m* is the number of examples in the data set, and $p(y_i; e^{\boldsymbol{\theta}' \mathbf{x}_i})$ is the probability mass function of the Poisson distribution with the mean set to $e^{\boldsymbol{\theta}' \mathbf{x}_i}$. Regularization can be added to this optimization problem by instead maximizing[5]

$$\sum_{i=1}^m \log\left( p(y_i; e^{\boldsymbol{\theta}' \mathbf{x}_i}) \right) - \lambda \left\lVert \boldsymbol{\theta} \right\rVert_2^2,$$

for some positive constant $\lambda$. This technique, similar to ridge regression, can reduce overfitting.
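For intuition, the intercept-only case can be solved directly: the penalized objective $m e^{\theta} - \theta \sum_i y_i + \lambda \theta^2$ has a strictly increasing derivative, so simple bisection finds the unique minimizer. An illustrative sketch (the function name is hypothetical):

```python
import math

def ridge_poisson_intercept(ys, lam, lo=-10.0, hi=10.0):
    # Intercept-only penalized Poisson fit: minimize
    #   m * exp(theta) - theta * sum(y) + lam * theta**2.
    # The derivative m*exp(theta) - sum(y) + 2*lam*theta is strictly
    # increasing in theta, so bisection on it finds the unique root.
    m, s = len(ys), sum(ys)
    for _ in range(200):
        mid = (lo + hi) / 2
        if m * math.exp(mid) - s + 2 * lam * mid < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With λ = 0 this recovers the unpenalized MLE θ = log(ȳ); a positive λ shrinks the estimate toward zero, mirroring the shrinkage effect of ridge regression.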


1. Greene, William H. (2003). *Econometric Analysis* (Fifth ed.). Prentice-Hall. pp. 740–752. ISBN 978-0130661890.
2. Paternoster, R.; Brame, R. (1997). "Multiple routes to delinquency? A test of developmental and general theories of crime". *Criminology*. **35**: 45–84. doi:10.1111/j.1745-9125.1997.tb00870.x.
3. Berk, R.; MacDonald, J. (2008). "Overdispersion and Poisson regression". *Journal of Quantitative Criminology*. **24** (3): 269–284. doi:10.1007/s10940-008-9048-4.
4. Ver Hoef, Jay M.; Boveng, Peter L. (2007). "Quasi-Poisson vs. negative binomial regression: how should we model overdispersed count data?". *Ecology*. **88** (11): 2766–2772. doi:10.1890/07-0043.1.
5. Perperoglou, Aris (2011). "Fitting survival data with penalized Poisson regression". *Statistical Methods & Applications*. **20** (4): 451–462. doi:10.1007/s10260-011-0172-1. ISSN 1618-2510.

- Cameron, A. C.; Trivedi, P. K. (1998). *Regression Analysis of Count Data*. Cambridge University Press. ISBN 978-0-521-63201-0.
- Christensen, Ronald (1997). *Log-linear Models and Logistic Regression*. Springer Texts in Statistics (Second ed.). New York: Springer-Verlag. ISBN 978-0-387-98247-2. MR 1633357.
- Gouriéroux, Christian (2000). "The Econometrics of Discrete Positive Variables: the Poisson Model". *Econometrics of Qualitative Dependent Variables*. New York: Cambridge University Press. pp. 270–283. ISBN 978-0-521-58985-7.
- Greene, William H. (2008). "Models for Event Counts and Duration". *Econometric Analysis* (8th ed.). Upper Saddle River: Prentice Hall. pp. 906–944. ISBN 978-0-13-600383-0.
- Hilbe, J. M. (2007). *Negative Binomial Regression*. Cambridge University Press. ISBN 978-0-521-85772-7.
- Jones, Andrew M.; et al. (2013). "Models for count data". *Applied Health Economics*. London: Routledge. pp. 295–341. ISBN 978-0-415-67682-3.
- Myers, Raymond H.; et al. (2010). "Logistic and Poisson Regression Models". *Generalized Linear Models With Applications in Engineering and the Sciences* (Second ed.). New Jersey: Wiley. pp. 176–183. ISBN 978-0-470-45463-3.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
