Updated Regression & Curve Fitting Tool

Polynomial Regression Calculator

Fit polynomial regression models of the form \(y = b_0 + b_1 x + b_2 x^2 + \dots + b_d x^d\) for a single predictor \(x\). Enter X and Y data, choose the polynomial degree (1–6), and get the coefficients, the regression equation, \(R^2\), SSE, MSE, and a prediction table.

Single-Variable Polynomial · Degree 1–6 · Coefficients & Equation · R², SSE, MSE

Fit y = b₀ + b₁x + b₂x² + … with Polynomial Regression

This Polynomial Regression Calculator uses the normal equations \(\beta = (X^\top X)^{-1} X^\top y\) to estimate the coefficients of a polynomial model with a single predictor \(x\). The design matrix \(X\) is built as a Vandermonde matrix with columns \(1, x, x^2, \dots, x^d\). When \(X^\top X\) is nearly singular and the normal equations become numerically unstable, a small regularization term is added to stabilize the solution while keeping the fitted curve close to the least-squares polynomial.

The calculator returns the polynomial coefficients \(b_0, \dots, b_d\), the regression equation, sums of squares (SSE, SST), coefficient of determination \(R^2\), mean squared error (MSE) and a table of fitted values and residuals for each \(x_i\).

Enter X and Y as comma-, space-, or line-separated lists of equal length. For a reliable polynomial fit, make sure you have more data points than coefficients, i.e. \(n > d + 1\).

Polynomial Regression Calculator – Complete Guide

Polynomial regression is a flexible extension of simple linear regression that allows the relationship between a predictor \(x\) and a response \(y\) to follow a curved pattern. Instead of fitting a straight line, you fit a polynomial of degree \(d\):

\[ y_i \approx b_0 + b_1 x_i + b_2 x_i^2 + \dots + b_d x_i^d, \]

where \(b_0, \dots, b_d\) are coefficients estimated from the data. When \(d = 1\) you recover ordinary simple linear regression; when \(d = 2\) you get a quadratic curve, and higher degrees allow more complex shapes.

1. Design Matrix and Normal Equations

Suppose you have \(n\) observations \((x_1, y_1), \dots, (x_n, y_n)\) and want to fit a polynomial of degree \(d\). The polynomial regression model can be written in matrix form as

\[ y = X \beta + \varepsilon, \]

where \(y\) is the \(n \times 1\) vector of responses, \(\beta\) is the \((d+1) \times 1\) vector of coefficients, and \(X\) is the \(n \times (d+1)\) design matrix

\[ X = \begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^d \\ 1 & x_2 & x_2^2 & \dots & x_2^d \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^d \end{bmatrix}. \]

In ordinary least squares (OLS) polynomial regression, the coefficient estimates \(\hat{\beta}\) minimize the sum of squared residuals

\[ \text{SSE} = \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \sum_{i=1}^n (y_i - b_0 - b_1 x_i - \dots - b_d x_i^d)^2. \]

The minimizing solution satisfies the normal equations

\[ X^\top X\, \hat{\beta} = X^\top y, \quad \text{so that} \quad \hat{\beta} = (X^\top X)^{-1} X^\top y, \]

provided \(X^\top X\) is invertible. The calculator builds the Vandermonde design matrix from your data, constructs \(X^\top X\) and \(X^\top y\), and solves these equations numerically. When \(X^\top X\) is close to singular, a small ridge-like term is added to the diagonal to stabilize the inverse while keeping the fitted curve very close to the pure least-squares solution.
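
As a rough illustration of this procedure, the following NumPy sketch builds the Vandermonde matrix and solves the normal equations. The condition-number threshold and the size of the ridge term are assumptions chosen for illustration; the calculator's exact values are not documented here.

```python
import numpy as np

def fit_polynomial(x, y, degree, ridge=1e-10):
    """Estimate b0, ..., bd by solving the normal equations X^T X beta = X^T y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # Vandermonde design matrix with columns 1, x, x^2, ..., x^degree
    X = np.vander(x, degree + 1, increasing=True)

    XtX = X.T @ X
    Xty = X.T @ y

    # If X^T X is close to singular, add a small ridge-like term to the diagonal
    # (threshold and magnitude are illustrative assumptions, not the tool's exact values)
    if np.linalg.cond(XtX) > 1e12:
        XtX = XtX + ridge * np.eye(degree + 1)

    return np.linalg.solve(XtX, Xty)   # coefficients ordered b0, b1, ..., bd
```

For example, fit_polynomial([1, 2, 3, 4], [1.1, 3.9, 9.2, 15.8], 2) would return three coefficients \(b_0, b_1, b_2\) for a quadratic fit.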

2. Fitted Values, Residuals and Sums of Squares

Once the coefficients \(\hat{\beta}\) are found, the fitted values are

\[ \hat{y}_i = \hat{b}_0 + \hat{b}_1 x_i + \hat{b}_2 x_i^2 + \dots + \hat{b}_d x_i^d, \]

and the residuals are

\[ e_i = y_i - \hat{y}_i. \]

The error sum of squares (also called residual sum of squares) is

\[ \text{SSE} = \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (y_i - \hat{y}_i)^2, \]

while the total sum of squares around the mean \(\bar{y}\) is

\[ \text{SST} = \sum_{i=1}^n (y_i - \bar{y})^2, \quad \text{where} \quad \bar{y} = \frac{1}{n} \sum_{i=1}^n y_i. \]

The calculator reports SSE and a mean squared error (MSE) defined by

\[ \text{MSE} = \begin{cases} \dfrac{\text{SSE}}{n - (d+1)}, & \text{if } n > d+1, \\ \dfrac{\text{SSE}}{n}, & \text{otherwise (no residual degrees of freedom).} \end{cases} \]
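
In code, these quantities follow directly from the formulas above. The sketch below assumes the coefficient vector beta is ordered \(b_0, \dots, b_d\), as in the fitting sketch from Section 1:

```python
import numpy as np

def fit_statistics(x, y, beta):
    """Fitted values, residuals, SSE, SST and MSE for a polynomial with coefficients beta."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = len(beta) - 1

    X = np.vander(x, d + 1, increasing=True)
    y_hat = X @ beta                       # fitted values
    residuals = y - y_hat                  # e_i = y_i - fitted value

    sse = float(np.sum(residuals ** 2))
    sst = float(np.sum((y - y.mean()) ** 2))

    n = len(y)
    dof = n - (d + 1)                      # residual degrees of freedom
    mse = sse / dof if dof > 0 else sse / n
    return y_hat, residuals, sse, sst, mse
```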

3. Coefficient of Determination \(R^2\)

The coefficient of determination \(R^2\) measures how much of the variability in \(y\) is explained by the polynomial model. It is defined as

\[ R^2 = 1 - \frac{\text{SSE}}{\text{SST}}, \]

when \(\text{SST} > 0\). If all \(y_i\) are equal, then \(\text{SST} = 0\) and \(R^2\) is not defined in the usual way; the calculator handles this case separately. Values of \(R^2\) close to 1 indicate that the model explains a large portion of the variance in the data, while values near 0 indicate that the polynomial adds little explanatory power beyond the mean of \(y\).
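
A minimal sketch of this calculation; returning NaN when \(\text{SST} = 0\) is an assumption here, since the calculator's exact handling of that edge case is not spelled out:

```python
def r_squared(sse, sst):
    """R^2 = 1 - SSE / SST; undefined in the usual sense when SST = 0."""
    if sst == 0.0:
        return float("nan")   # all y values equal; the calculator handles this case separately
    return 1.0 - sse / sst
```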

4. How to Use the Polynomial Regression Calculator

  1. Enter your data: paste X values and Y values into the corresponding text areas. You can use commas, spaces or line breaks. Both lists must have the same number of values.
  2. Choose the polynomial degree: select \(d\) between 1 and 6. Remember that the number of data points \(n\) should be greater than \(d + 1\) so that there are enough observations to estimate the coefficients.
  3. Run the regression: click the Compute Polynomial Regression button. The calculator constructs the Vandermonde design matrix, solves the normal equations and computes fitted values.
  4. Review the regression equation and coefficients: the main results section shows the polynomial equation in standard form and lists the coefficients \(b_0, \dots, b_d\).
  5. Check fit quality: examine the reported SSE, MSE and \(R^2\). For many applications, a higher \(R^2\) and smaller SSE/MSE indicate a better fit, but model complexity and overfitting should also be considered.
  6. Inspect the prediction table: use the predicted values and residuals table to see how well the polynomial matches each data point and to spot any outliers or systematic patterns in the residuals.
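
The six steps above can also be reproduced outside the calculator. The sketch below uses a small made-up dataset and NumPy's built-in np.polyfit (rather than the calculator's own normal-equations solver) to obtain the same kind of summary:

```python
import numpy as np

# Illustrative data, not taken from the calculator
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 9.0, 15.8, 25.2, 36.5])

degree = 2
coeffs = np.polyfit(x, y, degree)        # highest-degree coefficient first
y_hat = np.polyval(coeffs, x)

sse = np.sum((y - y_hat) ** 2)
sst = np.sum((y - y.mean()) ** 2)
r2 = 1 - sse / sst
mse = sse / (len(x) - (degree + 1))      # n > d + 1 here, so divide by n - (d + 1)

print("coefficients b0..bd:", coeffs[::-1])
print(f"R^2 = {r2:.4f}, SSE = {sse:.4f}, MSE = {mse:.4f}")
```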

5. Choosing the Polynomial Degree and Avoiding Overfitting

It is tempting to use very high-degree polynomials because they can closely follow the sample points. However, high-degree polynomials can suffer from numerical instability and overfitting: they may pass through the data but behave wildly between points or fail to generalize to new observations.

A common strategy is to start with a low degree (for example \(d = 1\) or \(2\)) and increase the degree only if there is clear evidence that a higher degree substantially improves the fit without introducing unreasonable oscillations. Practical knowledge of the underlying phenomenon should always guide the choice of degree.
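
One simple way to apply this strategy is to fit several candidate degrees to the same data and compare SSE and MSE side by side. SSE always shrinks as the degree grows, but the degrees-of-freedom divisor in MSE penalizes extra coefficients, so MSE can start rising again once additional terms stop paying for themselves. The data below are purely illustrative:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([1.8, 4.4, 9.1, 16.3, 24.7, 36.2, 48.9, 64.5])

for d in range(1, 7):
    coeffs = np.polyfit(x, y, d)
    sse = np.sum((y - np.polyval(coeffs, x)) ** 2)
    dof = len(x) - (d + 1)
    mse = sse / dof if dof > 0 else sse / len(x)
    print(f"degree {d}: SSE = {sse:.6f}, MSE = {mse:.6f}")
```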

Polynomial Regression Calculator FAQs

Quick answers to common questions about fitting polynomial regression models, interpreting coefficients and understanding \(R^2\), SSE and MSE.

What is the difference between linear regression and polynomial regression?

Linear regression fits a straight line of the form \(y = b_0 + b_1 x\). Polynomial regression extends this idea by adding powers of \(x\), such as \(x^2, x^3,\dots\), so that the model can capture curved relationships: \(y = b_0 + b_1 x + b_2 x^2 + \dots + b_d x^d\). Even though the curve is nonlinear in \(x\), the model is still linear in the coefficients, so standard least-squares methods apply.

How many data points do I need to fit a degree-\(d\) polynomial?

At a minimum you need \(d + 1\) distinct data points to estimate the \(d + 1\) coefficients in a degree-\(d\) polynomial. In practice it is better to have substantially more than \(d + 1\) observations so that the model can be evaluated and tested on multiple points rather than just interpolating them. The calculator will warn you if the effective degrees of freedom are zero or negative.

What does \(R^2\) tell me about the fit?

\(R^2\) measures the proportion of the variability in \(y\) that is explained by the polynomial model. It is computed as \(R^2 = 1 - \text{SSE}/\text{SST}\). Values close to 1 indicate that the model explains a large share of the variation, while values near 0 indicate that it provides little improvement over simply using the mean of \(y\). However, \(R^2\) alone does not guarantee that the model is appropriate or free of overfitting, especially for high-degree polynomials.

Why does the calculator sometimes add a small value to the diagonal of \(X^\top X\)?

When data points are nearly collinear in the polynomial feature space or when the degree is high, the matrix \(X^\top X\) can become nearly singular. Directly inverting such a matrix can lead to unstable estimates. To avoid this, the calculator adds a very small value to the diagonal of \(X^\top X\) when needed, which is similar to a light ridge regression. This improves numerical stability while keeping the fitted curve extremely close to the pure least-squares solution for typical datasets.

Can I use the fitted equation to predict \(y\) for new values of \(x\)?

Yes, once you have the coefficients \(b_0, \dots, b_d\), you can plug any new value \(x_{\text{new}}\) into the equation \(y = b_0 + b_1 x_{\text{new}} + b_2 x_{\text{new}}^2 + \dots + b_d x_{\text{new}}^d\) to compute a predicted value. At the moment the calculator focuses on fitting and summarizing the model for the data you entered, but the reported equation can be used directly in a spreadsheet or another tool for additional predictions.
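
For example, a fitted quadratic can be evaluated at a new \(x\) with a few lines of code; the coefficients below are placeholders standing in for whatever values the calculator reports:

```python
# Hypothetical coefficients b0, b1, b2 copied from the calculator's output
b = [1.25, -0.80, 0.35]

def predict(x_new, coeffs):
    """Evaluate y = b0 + b1*x + b2*x^2 + ... at x_new using Horner's method."""
    y = 0.0
    for c in reversed(coeffs):   # iterate from the highest-degree coefficient down
        y = y * x_new + c
    return y

print(predict(2.5, b))   # predicted y at x = 2.5
```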