
# As the goodness of fit for the estimated multiple regression equation increases, _____.


## TEST 3

University of Louisiana, Monroe, QMDS 2010 (test preview, pages 1-2 of 5)

Multiple Choice

1. As the goodness of fit for the estimated multiple regression equation increases,
a) the value of the adjusted multiple coefficient of determination decreases
b) the value of the regression equation's constant b0 decreases
c) the value of the multiple coefficient of determination increases
d) the value of the correlation coefficient increases

2. In a multiple regression analysis involving 15 independent variables and 200 observations, SST = 800 and SSE = 240. The coefficient of determination is

3. A multiple regression model has

4. A measure of goodness of fit for the estimated regression equation is the

5. The ratio of MSE/MSR yields
a) SST
b) the F statistic
c) SSR
d) None of these alternatives is correct.
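Question 2 is a direct computation: the coefficient of determination is R² = SSR/SST = 1 − SSE/SST. A quick check with the values given in the question:

```python
# Coefficient of determination from the sums of squares in question 2.
SST = 800            # total sum of squares
SSE = 240            # error (residual) sum of squares
SSR = SST - SSE      # regression sum of squares
R_squared = SSR / SST  # equivalently 1 - SSE/SST
print(R_squared)       # 0.7
```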


Term: Fall. Professor: Moore. Tags: Regression Analysis, Multicollinearity


Source : www.coursehero.com


## 71 THE REGRESSION EQUATION

Regression analysis is a statistical technique that can test the hypothesis that a variable is dependent upon one or more other variables. Further, regression analysis can provide an estimate of the magnitude of the impact of a change in one variable on another. This last feature, of course, is all-important in predicting future values.

Regression analysis is based upon a functional relationship among variables and, further, assumes that the relationship is linear. This linearity assumption is required because, for the most part, the theoretical statistical properties of non-linear estimation are not yet well worked out by the mathematicians and econometricians. This presents us with some difficulties in economic analysis because many of our theoretical models are nonlinear. The marginal cost curve, for example, is decidedly nonlinear, as is the total cost function, if we are to believe in the effect of specialization of labor and the Law of Diminishing Marginal Product. There are techniques for overcoming some of these difficulties, for example exponential and logarithmic transformations of the data, but at the outset we must recognize that standard ordinary least squares (OLS) regression analysis will always use a linear function to estimate what might be a nonlinear relationship.
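The logarithmic transformation mentioned above can be sketched briefly: a power-law relationship y = a·x^b becomes linear after taking logs, log(y) = log(a) + b·log(x), so OLS on the transformed data recovers the exponent. A minimal pure-Python illustration with noise-free synthetic data (the values of a and b are illustrative assumptions):

```python
import math

# A nonlinear (power-law) relationship y = a * x**b becomes linear after a
# log transformation: log(y) = log(a) + b * log(x).
a, b = 2.0, 1.5
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [a * x**b for x in xs]

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]

# Simple (one-regressor) OLS: slope = Sxy / Sxx
n = len(lx)
mean_x = sum(lx) / n
mean_y = sum(ly) / n
Sxy = sum((u - mean_x) * (v - mean_y) for u, v in zip(lx, ly))
Sxx = sum((u - mean_x) ** 2 for u in lx)
slope = Sxy / Sxx                       # recovers b = 1.5
intercept = mean_y - slope * mean_x     # recovers log(a)
print(slope, math.exp(intercept))       # 1.5 2.0
```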

The general linear regression model can be stated by the equation:

Yi = β0 + β1X1i + β2X2i + ⋯ + βkXki + εi

where β0 is the intercept, the βi's are the slopes between Y and the appropriate Xi, and ε (pronounced epsilon) is the error term that captures errors in measurement of Y and the effect on Y of any variables missing from the equation that would contribute to explaining variations in Y. This equation is the theoretical population equation and therefore uses Greek letters. The equation we will estimate will use the Roman equivalent symbols. This parallels how we kept track of the population parameters and sample parameters before: the symbol for the population mean was µ and for the sample mean x̄, and the population standard deviation was σ while the sample standard deviation was s. The equation that will be estimated with a sample of data for two independent variables will thus be:

ŷ = b0 + b1x1 + b2x2 + e
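A minimal sketch of estimating the sample coefficients b0, b1, b2 for two independent variables by ordinary least squares, using NumPy's least-squares solver on synthetic data (all numbers here are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 5, n)
# Population relationship with known coefficients plus a normal error term
y = 3.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(0, 1, n)

# Design matrix: a column of ones for the intercept b0, then x1 and x2
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates of b0, b1, b2
print(b)  # close to [3.0, 2.0, -1.5]
```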

As with our earlier work with probability distributions, this model works only if certain assumptions hold: that Y is normally distributed, that the errors are also normally distributed with a mean of zero and a constant standard deviation, and that the error terms are independent of the size of X and independent of each other.

### Assumptions of the Ordinary Least Squares Regression Model

Each of these assumptions needs a bit more explanation. If one of these assumptions fails to hold, it will affect the quality of the estimates. Some failures of these assumptions can be fixed, while others result in estimates that quite simply provide no insight into the questions the model is trying to answer, or, worse, give biased estimates.

The independent variables, Xi, are all measured without error and are fixed numbers that are independent of the error term. This assumption says in effect that Y is the sum of a fixed, deterministic component "X" and a random error component "ϵ."

The error term is a random variable with a mean of zero and a constant variance. The meaning of this is that the variance of the error term is the same regardless of the value of the independent variable. Consider the relationship between personal income and the quantity of a good purchased as an example of a case where the variance depends on the value of the independent variable, income. It is plausible that as income increases, the variation around the amount purchased will also increase, simply because of the flexibility provided by higher levels of income. The assumption of constant variance with respect to the magnitude of the independent variable is called homoscedasticity. If the assumption fails, the condition is called heteroscedasticity. (Figure) shows the case of homoscedasticity, where all three distributions have the same variance around the predicted value of Y regardless of the magnitude of X.
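The income example above can be simulated to show what heteroscedasticity looks like in the residuals. This is a sketch with made-up numbers: the error's standard deviation is set proportional to income, and a crude split-sample comparison of residual variances then reveals the growing spread (a formal test such as Breusch-Pagan would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
income = rng.uniform(1, 10, n)
# Heteroscedastic errors: the spread grows with income, as in the text's example
y = 5 + 2 * income + rng.normal(0, income)  # error sd proportional to income

# Fit OLS anyway, then inspect the residuals
X = np.column_stack([np.ones(n), income])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Crude check: residual variance in the lower vs upper half of income
low = resid[income < 5.5]
high = resid[income >= 5.5]
print(low.var(), high.var())  # the upper half shows a much larger variance
```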

While the independent variables are all fixed values, the Y values, through the normally distributed error term, come from a normal probability distribution. This can be seen in (Figure) in the shape of the distributions placed on the predicted line at the expected value of Y for each value of X.

The independent variables are independent of Y, but are also assumed to be independent of the other X variables. The model is designed to estimate the effects of independent variables on some dependent variable in accordance with a proposed theory. The case where two or more of the independent variables are correlated is not unusual. There may be no cause-and-effect relationship among the independent variables, but nevertheless they move together. Take the case of a simple supply curve where quantity supplied is theoretically related to the price of the product and the prices of inputs. There may be multiple inputs whose prices move together over time from general inflationary pressure. The input prices will therefore violate this assumption of regression analysis. This condition is called multicollinearity, which will be taken up in detail later.
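The supply-curve example can be sketched with synthetic data: two input prices that both follow a common inflation series will show a high sample correlation, which is the simplest warning sign of multicollinearity before fitting the model (the variables and coefficients here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
inflation = rng.normal(0, 1, n)
# Two input prices that "move together" because both follow inflation
price_labor = 2.0 * inflation + rng.normal(0, 0.3, n)
price_steel = 1.5 * inflation + rng.normal(0, 0.3, n)

# Sample correlation between the two regressors reveals the multicollinearity
r = np.corrcoef(price_labor, price_steel)[0, 1]
print(r)  # close to 1: a red flag for using both as regressors
```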

The error terms are uncorrelated with each other. A violation arises when one error term has an effect on another error term. While not exclusively a time-series problem, it is in time series that we most often see this case: an X variable in time period one has an effect on the Y variable, and this effect then carries over into the next time period, giving rise to a relationship among the error terms. This case is called autocorrelation, "self-correlation." The error terms are then not independent of each other, but rather have their own effect on subsequent error terms.
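Autocorrelated errors of this carry-over kind can be simulated as an AR(1) process, where each error keeps a fraction of the previous one. A sketch with an assumed carry-over of 0.8, checking the lag-1 correlation and the Durbin-Watson statistic (the standard formal test; DW ≈ 2(1 − r) for lag-1 correlation r, so values well below 2 signal positive autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# AR(1) errors: each error carries over 0.8 of the previous one plus a fresh shock
e = np.zeros(n)
shock = rng.normal(0, 1, n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + shock[t]

r = np.corrcoef(e[:-1], e[1:])[0, 1]        # lag-1 correlation of the errors
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)  # Durbin-Watson statistic
print(r, dw)  # r near 0.8; DW well below 2
```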

Source : opentextbc.ca
