A. We choose the one with the maximum population variance. | ||
B. We choose the one with the smallest population variance. | ||
C. It does not matter as long as both are unbiased. | ||
D. We choose the one that is consistent. |
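The options above turn on efficiency: among unbiased estimators, the one with the smaller sampling variance is preferred. A minimal simulation sketch (all data and numbers are synthetic, not taken from the question) comparing two unbiased estimators of a normal population mean:

```python
# Illustrative only: for a normal population both the sample mean and the
# sample median are (essentially) unbiased for the population mean, but the
# mean has the smaller sampling variance, so it is the efficient choice.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000
samples = rng.normal(loc=10.0, scale=2.0, size=(reps, n))

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("bias of mean:      ", means.mean() - 10.0)
print("bias of median:    ", medians.mean() - 10.0)
print("variance of mean:  ", means.var())
print("variance of median:", medians.var())   # roughly pi/2 times larger
```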
A. The random variables will have a zero covariance. | ||
B. Their variances are zero. | ||
C. Their variances are the same. | ||
D. The random variables are statistically independent. |
A. We would reject H0 at the 5% significance level but not at the 1% level. | ||
B. We would reject H0 at the 5% significance level and also at the 1% level. | ||
C. We would not reject H0 at the 5% level or at the 1% level. | ||
D. We would reject H0 at the 1% significance level but not at the 5% level. |
A. A Type I error was made. | ||
B. A Type II error was made. | ||
C. Both Type I and Type II errors were made. | ||
D. Neither Type I nor Type II error was made. |
A. A Type I error was made. | ||
B. A Type II error was made. | ||
C. Both Type I and Type II errors were made. | ||
D. Neither Type I nor Type II error was made. |
A. The standard deviation of the sample estimate is unknown. | ||
B. It is quicker to perform than any other tests. | ||
C. We have tabulated values to refer to. | ||
D. The mean of the population is unknown. |
A. There is a 95% chance that the true population mean lies within the given interval. | ||
B. If the procedure for computing a 95% confidence interval is used over and over, 95% of the time the interval will contain the true parameter value. | ||
C. We are 95% confident that the null hypothesis is true. | ||
D. We are 95% confident that our sample mean is the true population mean. |
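Option B states the frequentist coverage interpretation of a confidence interval. A rough simulation check (synthetic data; the 95% level, sample size, and parameter values are illustrative):

```python
# Build a 95% interval for the mean many times and count how often it
# contains the true population mean; the long-run proportion is about 0.95.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 3.0, 30, 10_000
tcrit = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half) <= mu <= (x.mean() + half)

print("empirical coverage:", covered / reps)   # close to 0.95
```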
A. Only I is correct. | ||
B. Both II and III are correct. | ||
C. Both I and III are correct. | ||
D. Only III is correct. |
A. Sometimes it is impossible to find an estimator that is unbiased for small samples, and having a consistent estimator is better than not having an estimator at all. | ||
B. The expected value rules are weak analytical instruments and as such, oftentimes, we are unable to say anything at all about the expectation of an estimator. | ||
C. All of the above | ||
D. None of the above |
A. The distribution of sample means will approximate a normal distribution as the sample size becomes large, even if the underlying population distribution is not normal. | ||
B. If the underlying population distribution is not normal, the distribution of sample means cannot approximate a normal distribution, even if the sample size becomes large. | ||
C. The sample means will approximate a normal distribution as the sample size becomes large, only if the underlying population distribution is normal. | ||
D. Irrespective of the sample size, the distribution of the sample mean will always approximate a normal distribution. |
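Option A states the central limit theorem. A short simulation sketch (synthetic exponential data, chosen only because it is clearly non-normal):

```python
# Draw samples from a skewed (exponential) population and look at the
# distribution of the sample mean as n grows: it becomes close to normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (2, 10, 100):
    means = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)
    print(f"n={n:4d}  skewness of sample means: {stats.skew(means):.3f}")
# The skewness shrinks toward 0 (the normal value) as n increases.
```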
A. Minimize the sum of the residuals: Σûᵢ | ||
B. Minimize the sum of Yᵢ across all observations: ΣYᵢ | ||
C. Minimize the sum of squares of residuals: Σûᵢ² | ||
D. Minimize the sum of squares of the non-stochastic component: ΣŶᵢ² |
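These options describe the least-squares criterion: OLS picks the coefficients that minimize the sum of squared residuals (option C), not the sum of residuals, which is identically zero whenever an intercept is included. A minimal sketch with synthetic data and illustrative coefficient values:

```python
# The closed-form OLS slope and intercept minimize the sum of squared
# residuals; nudging either estimate raises that sum.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=200)

b2 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # slope
b1 = y.mean() - b2 * x.mean()                 # intercept

def ssr(a, b):
    return np.sum((y - a - b * x) ** 2)

print("SSR at OLS estimates:", ssr(b1, b2))
print("SSR if slope nudged: ", ssr(b1, b2 + 0.05))       # always larger
print("sum of residuals:    ", np.sum(y - b1 - b2 * x))  # ~0 with an intercept
```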
A. For each extra year of schooling, earnings increase by $3.27. | ||
B. For each extra year of schooling, hourly earnings increase by $3.27. | ||
C. For each extra year of schooling, earnings increase by $6.27. | ||
D. For each extra year of schooling, hourly earnings increase by $6.27. |
A. A person with zero years of schooling is likely to earn $13 per hour. | ||
B. A person with zero years of schooling is likely to earn $3 per hour. | ||
C. It represents the hourly earnings of those with mean level of schooling. | ||
D. Not enough information is given to interpret the coefficient. |
A. About 60 percent of the variation in per person daily bread consumption is explained by the variation in the retail price of bread. | ||
B. About 40 percent of the variation in per person daily bread consumption is explained by the variation in the consumer's income. | ||
C. About 60 percent of the bread is consumed by an individual daily. | ||
D. If the price of one pound of bread goes up by a dollar, 60 percent of the daily consumption of bread per person is affected. |
A. The OLS regression coefficients are chosen such that the sum of the squares of the residuals is minimized. Goodness of Fit is maximized when the sum of the squares of the residuals is maximized. | ||
B. The OLS regression coefficients are chosen such that the sum of the squares of the residuals is minimized. Goodness of Fit is minimized when the sum of the squares of the residuals is minimized. | ||
C. The OLS regression coefficients are chosen such that the sum of the squares of the residuals is maximized. Goodness of Fit is also maximized when the sum of the squares of the residuals is maximized. | ||
D. The OLS regression coefficients are chosen such that the sum of the squares of the residuals is minimized. Goodness of Fit is also maximized when the sum of the squares of the residuals is minimized. |
A. The disturbance term has zero expectation. | ||
B. The disturbance term is heteroscedastic. | ||
C. The values of the disturbance term have independent distributions. | ||
D. The disturbance term has a normal distribution. |
A. They are minimum variance but biased estimators. | ||
B. They are not necessarily consistent. | ||
C. They are unbiased but do not necessarily have minimum variance. | ||
D. They are efficient estimators or minimum-variance unbiased estimators. |
A. t = 2.5. Reject H0 at the 5% level but not at the 1% level. | ||
B. t = 0.18. Do not reject H0 at either the 5% level or the 1% level. | ||
C. t = 2.5. Reject H0 at the 1% level but not at the 5% level. | ||
D. t = 25. Reject H0 at the 5% level and at the 1% level. |
A. t = 0.43. Do not reject H0 at the 1% level. | ||
B. t = 0.066. Do not reject H0 at the 1% level. | ||
C. t = 4.58. Reject H0 at the 1% level. | ||
D. t = 2.67. Reject H0 at the 5% level but not at the 1% level. |
A. t = 0.83. Do not reject H₀ at the 5% level. | ||
B. t = 0.83. Reject H₀ at the 5% level. | ||
C. t = 0.83. Reject H₀ at the 1% level. | ||
D. t = 1.20. Reject H₀ at the 5% level but not at the 1% level. |
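The three questions above all reduce to the same mechanics: form the t ratio from the estimate, its standard error, and the hypothesized value, then compare it with the two-tailed critical values at the 5% and 1% levels. A hedged sketch; the numbers below are placeholders, not the ones from the questions:

```python
# t ratio = (estimate - hypothesized value) / standard error, compared with
# the two-tailed critical values of the t distribution.
from scipy import stats

b_hat, se, b_null, df = 0.50, 0.20, 0.0, 25   # hypothetical values
t_ratio = (b_hat - b_null) / se

for level in (0.05, 0.01):
    crit = stats.t.ppf(1 - level / 2, df=df)
    decision = "reject" if abs(t_ratio) > crit else "do not reject"
    print(f"t = {t_ratio:.2f}, {int(level*100)}% critical value = {crit:.2f}: {decision} H0")
```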
A. The t-test | ||
B. The F-statistic | ||
C. R² | ||
D. R²/(1 - R²) |
A. The actual inflation rate on the average decreased by 1.46 percent for every percentage point increase in the unemployment rate in the given time period t. | ||
B. The actual inflation rate on the average increased by 1.46 percent for every percentage point increase in the unemployment rate in the given time period t. | ||
C. Holding the expected inflation rate constant, the actual inflation rate on the average decreased by 1.46 percent for every percentage point increase in the unemployment rate in the given time period t. | ||
D. Holding the expected inflation rate constant, the actual inflation rate on the average decreased by 1.59 percent for every percentage point increase in the unemployment rate in the given time period t. |
A. The R² value of 0.86 implies that about 86 percent of the variation in the actual inflation rate is explained by the two explanatory variables together. | ||
B. The R² value of 0.86 implies that about 86 percent of the variation in the two explanatory variables is explained by the actual inflation rate. | ||
C. The R² value of 0.86 implies that the degree of correlation between the two explanatory variables is 86 percent. | ||
D. There is not enough information to interpret R². |
A. The coefficients are biased. | ||
B. The variances of coefficients are very large. | ||
C. The t-statistics are very high. | ||
D. The coefficients are not consistent. |
A. Decreasing the number of observations | ||
B. Combining the correlated variables | ||
C. Dropping one of the collinear variables | ||
D. Combining cross-sectional and time-series data |
A. A two-sided t-test | ||
B. An F-test | ||
C. A one-sided t-test | ||
D. Not enough information is provided to make the conclusion. |
A. The coefficients are biased. | ||
B. The t-statistics are very high. | ||
C. The t-ratio of one or more coefficients is statistically insignificant, although the F-test for their joint explanatory power is highly significant. | ||
D. The variances of coefficients are very small. |
A. It measures the elasticity of y with respect to x. | ||
B. It measures the rate of growth of y with respect to x. | ||
C. It measures the change in y caused by a one-unit change in x. | ||
D. It measures the relative change in y caused by an absolute change in x. |
A. Over the given time period, real GDP in the US was constant at a rate of 3.5% per year. | ||
B. It measures the relative change in the real GDP in the US, due to a relative change in the time period. | ||
C. Over the given time period, real GDP in the US grew at a rate of 35% per year. | ||
D. Over the given time period, real GDP in the US grew at a rate of 3.5% per year. |
A. log Y = log β₁ + β₂ log X + u | ||
B. log Y = log β₁ + β₂ log X + log u | ||
C. log Y = log β₁ + β₂ log X + e log u | ||
D. log Y = β₁β₂ log X + log u |
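If the underlying model is multiplicative, say Y = β₁X^β₂ e^u (an assumption here, since the question stem is not shown), taking logs gives the form in option A and β₂ is the elasticity of Y with respect to X. A small sketch with synthetic data and illustrative parameter values:

```python
# Generate Y = b1 * X**b2 * exp(u); regressing log Y on log X recovers b2,
# the (constant) elasticity of Y with respect to X.
import numpy as np

rng = np.random.default_rng(4)
b1, b2 = 3.0, 0.8
x = rng.uniform(1, 50, size=500)
u = rng.normal(0, 0.1, size=500)
y = b1 * x**b2 * np.exp(u)

ly, lx = np.log(y), np.log(x)
slope = np.cov(lx, ly)[0, 1] / np.var(lx, ddof=1)
intercept = ly.mean() - slope * lx.mean()
print("estimated elasticity (b2):", slope)      # close to 0.8
print("estimated log(b1):        ", intercept)  # close to log(3)
```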
A. The disturbance term should necessarily be equal to zero. | ||
B. The disturbance term should necessarily be positive. | ||
C. The disturbance term should necessarily be negative. | ||
D. The disturbance term should satisfy the conditions of the simple regression model (i.e., they are identically and independently distributed, have zero expectation, and are homoscedastic) and be normally distributed. |
A. It is the effect of X₂ on Y, holding X₃ and X₂X₃ constant. | ||
B. It would be wrong to interpret it as the marginal effect of X₂ on Y, because it is not possible to keep both X₃ and X₂X₃ constant, when X₂ changes. | ||
C. It is the marginal effect of X₂ on Y. | ||
D. It is the effect of X₂ on Y, holding X₃ constant. |
A. Assume that one of the two is constant and then run the regression. | ||
B. Omit the interactive term, because excluding it will not change the results much. | ||
C. Rescale X₂ and X₃ so that they are measured from their sample means. | ||
D. Run a regression of X₂ on X₃ (or vice versa) and include the results in the above regression. |
A. β₂(1/XY) | ||
B. - β₂(1/XY) | ||
C. -β₂(1/Y) | ||
D. -β₂(1/X) |
A. It indicates the annual salary of a female sales manager for a given level of experience. | ||
B. It indicates the rate of change in the annual salary of a female sales manager for a given level of experience. | ||
C. It indicates the annual salary of a male sales manager for a given level of experience. | ||
D. It indicates the rate of change in the annual salary of a male sales manager for a given level of experience. |
A. $71 thousand | ||
B. $78 thousand | ||
C. $71.08 thousand | ||
D. $84 thousand |
A. The sales coefficient tells us that for every dollar increase in sales, the profits would increase by $56. | ||
B. The sales coefficient tells us that after taking into account the seasonal effect, a one dollar increase in sales would increase the profits by 8 cents. | ||
C. The sales coefficient tells us that for every dollar increase in sales, the profits would increase by $56.08. | ||
D. The sales coefficient tells us that in the first quarter, the profits were $56.08. |
A. If the qualitative variable has n categories, ensure that there are n dummy variables. | ||
B. Ensure that the dummy variables are perfectly collinear. | ||
C. If the qualitative variable has n categories, ensure that there are n - 1 dummy variables. | ||
D. If the qualitative variable has n categories, ensure that there are n + 1 dummy variables. |
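The n - 1 rule in option C avoids the dummy variable trap. A quick illustration with synthetic data of why including one dummy per category alongside an intercept produces perfect collinearity (a rank-deficient design matrix):

```python
# With an intercept, the n category dummies sum to the intercept column,
# so the design matrix loses rank; dropping one dummy restores full rank.
import numpy as np

rng = np.random.default_rng(5)
category = rng.integers(0, 3, size=100)          # 3 categories
dummies = np.eye(3)[category]                    # one column per category
intercept = np.ones((100, 1))

X_all = np.hstack([intercept, dummies])          # intercept + 3 dummies
X_drop = np.hstack([intercept, dummies[:, 1:]])  # intercept + 2 dummies

print("columns, rank (all dummies):", X_all.shape[1], np.linalg.matrix_rank(X_all))
print("columns, rank (n-1 dummies):", X_drop.shape[1], np.linalg.matrix_rank(X_drop))
```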
A. α₂ is the differential intercept indicating the difference in the wages between college graduates and non-college graduates. | ||
B. α₂ is the coefficient indicating the increase in wages of a college graduate if experience increases by one more year. | ||
C. α₂ is the coefficient indicating the increase in wages of a non-college graduate if experience increases by one more year. | ||
D. α₂ is the differential slope coefficient indicating by how much the slope coefficient of a college graduate's wage function differs from the slope coefficient of a non-college graduate's wage function. |
A. When comparing two regressions, the Chow test does not explicitly specify which coefficient (slope, intercept, or both) is different between the regressions, whereas the dummy variable can easily identify this difference. | ||
B. The Chow test increases the degrees of freedom, whereas the dummy variable approach reduces it. | ||
C. The dummy variable approach and the Chow test are equivalent, and there is no distinct advantage of one over the other. | ||
D. The Chow test is time consuming, whereas the dummy variable technique is quick. |
A. The coefficients are biased (in general), but the standard errors are valid. | ||
B. The coefficients are unbiased (in general), but the standard errors are invalid. | ||
C. The coefficients are biased (in general), and the standard errors are invalid. | ||
D. The coefficients are unbiased (in general), and the standard errors are valid. |
A. The coefficients are unbiased (in general) but inefficient, and the standard errors are valid (in general). | ||
B. The coefficients are biased (in general) and inefficient, and the standard errors are valid (in general). | ||
C. The coefficients are biased (in general) and inefficient, and the standard errors are invalid (in general). | ||
D. The coefficients are unbiased (in general) and efficient, and the standard errors are valid (in general). |
A. H₀ : RSS = 0 | ||
B. H₀ : β₂ = β₃ | ||
C. H₀ : β₂ = β₄ = β₅ = 0 | ||
D. H₀ : β₂ + β₃ = 2 |
A. The estimates of all coefficients in the model will be different if a proxy variable is included. | ||
B. The standard errors and t-statistics of all coefficients will be different if a proxy variable is included. | ||
C. The result of including a proxy variable is no different than estimating the model if data were available on X₃ except that with the proxy variable, it will not be possible to estimate the individual coefficients β₁ and β₃. | ||
D. R² of the model with a proxy variable changes compared to the true model. |
A. The t statistic for Z will be the same as that which would have been obtained for X₃, if it had been possible to regress Y on X₂, ..., Xk, so you are able to assess the significance of X₃, even if you are not able to estimate its coefficient. | ||
B. R² will be the same as it would have been if it had been possible to regress Y on X₂, ..., Xk. | ||
C. The standard errors and t statistics of the coefficients of X₂,X₄ ..., Xk will be the same as those that would have been obtained if it had been possible to regress Y on X₂, ..., Xk. | ||
D. The estimates of all the coefficients of the explanatory variables will be different from those that would have been obtained if it had been possible to regress Y on X₂, ..., Xk. |
A. H₀ : RSS = 0 | ||
B. H₀ : β₂ = β₃ | ||
C. H₀ : β₂ = β₄ = β₅ = 0 | ||
D. H₀ : β₂ = β₃ = 2 |
A. H₀ : β₃ = β₄ | ||
B. H₀ : β₃ + β₄ = 0 | ||
C. H₀ : β₄ = 0 | ||
D. H₀ : β₁ = β₄ |
A. β₄ > 0 | ||
B. β₄ < 0 | ||
C. β₄ = 0 | ||
D. This cannot be determined with the amount of information given. |
A. β₃ > 0 | ||
B. β₃ < 0 | ||
C. β₃ = 0 | ||
D. This cannot be determined with the amount of information given. |
A. The variable "Age" is not an important factor in the model for earnings. | ||
B. The coefficient is subject to omitted variable bias. | ||
C. Schooling is an important factor, and it is important to see its effect in exclusion. | ||
D. The coefficient is subject to multicollinearity. |
A. The bias is positive. | ||
B. The bias is negative. | ||
C. The bias cannot be determined with the information provided. | ||
D. There is no bias in the third model. |
A. It shows the covariance between all the explanatory variables in the model. | ||
B. It indicates the potential explanatory power of the missing variables in the model. | ||
C. It indicates the probability of Type I or Type II errors that could occur in the model. | ||
D. It exaggerates the explanatory power of the model, because it acts as a proxy for the missing variable. |
A. It means that the variance of the distribution of the disturbance term is the same for each observation. | ||
B. It implies that the distribution of the disturbance term associated with each observation should have a mean equal to zero. | ||
C. It implies that the distribution of the disturbance term associated with each observation should have a normal distribution. | ||
D. It means that the disturbance terms are not independently distributed. |
A. Both I and III are true. | ||
B. Both II and III are true. | ||
C. Both I and II are true. | ||
D. All three consequences listed are possible. |
A. An F-test for linear restriction | ||
B. A Chow test for an irrelevant variable | ||
C. A Durbin-Wu-Hausman test for measurement error | ||
D. A Goldfeld-Quandt test for heteroscedasticity |
A. An F-test for linear restriction | ||
B. A White test for heteroscedasticity | ||
C. A Durbin-Wu-Hausman test for measurement error | ||
D. A Goldfeld-Quandt test for heteroscedasticity |
A. Use weighted least squares, i.e. transform the model by using appropriate weights for the regression model so that the error variance becomes homoscedastic. | ||
B. Use White's heteroscedasticity-consistent variances and standard errors, at least in large samples, in which the t-tests and F-tests are asymptotically valid. | ||
C. Run a standard OLS regression, ignoring heteroscedasticity, because in large samples, this problem is not of any consequence and the t-tests and F-tests are asymptotically valid. | ||
D. Refine the data to get homoscedastic variances, otherwise the results are contaminated and misleading. |
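Option B refers to White's heteroscedasticity-consistent standard errors. A minimal sketch of the HC0 sandwich formula with synthetic data (the variable names and data-generating process are illustrative, not from the question):

```python
# Robust (HC0) standard errors: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1},
# compared with the conventional OLS formula sigma^2 (X'X)^{-1}.
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(1, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x)     # error variance grows with x
X = np.column_stack([np.ones(n), x])

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

sigma2 = (e @ e) / (n - 2)
usual = np.sqrt(np.diag(XtX_inv) * sigma2)                              # classical s.e.
robust = np.sqrt(np.diag(XtX_inv @ X.T @ np.diag(e**2) @ X @ XtX_inv))  # White HC0
print("OLS estimates:    ", beta)
print("conventional s.e.:", usual)
print("White (HC0) s.e.: ", robust)
```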
A. Nothing can be said unless more information is provided. | ||
B. The White test confirms the presence of heteroscedasticity in this case. | ||
C. Both the White test and the Goldfeld-Quandt test confirm that there is no heteroscedasticity in this case. | ||
D. The Goldfeld-Quandt test confirms the presence of heteroscedasticity in this case. |
A. The White test | ||
B. No test can be performed unless more information is provided. | ||
C. The Goldfeld-Quandt test | ||
D. A Monte-Carlo simulation |
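The Goldfeld-Quandt test mentioned above orders the sample by the variable suspected of driving the error variance, fits the model separately to the two outer groups, and compares the ratio of their residual sums of squares with an F distribution. A hedged sketch with synthetic data (group sizes and parameter values are illustrative):

```python
# Goldfeld-Quandt idea: RSS from the high-variance group over RSS from the
# low-variance group, referred to an F distribution.
import numpy as np
from scipy import stats

def rss(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return resid @ resid

rng = np.random.default_rng(7)
n = 90
x = np.sort(rng.uniform(1, 10, size=n))
y = 1.0 + 2.0 * x + rng.normal(0, 0.4 * x, size=n)   # heteroscedastic errors

k = 30                                               # size of each outer group
rss1, rss2 = rss(x[:k], y[:k]), rss(x[-k:], y[-k:])
F = rss2 / rss1                                      # same d.o.f. in both groups
p = 1 - stats.f.cdf(F, dfn=k - 2, dfd=k - 2)
print(f"GQ statistic = {F:.2f}, p-value = {p:.4f}")
```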
A. Both models have one constant and one explanatory variable. | ||
B. The first model has one explanatory variable, and the second has two explanatory variables. | ||
C. The first model has two explanatory variables, and the second has one explanatory variable. | ||
D. The slope and the explanatory variables of the two models are not affected by the transformation. |
A. The OLS estimators are biased in the first model due to heteroscedasticity. | ||
B. The OLS estimators are efficient in the first model despite heteroscedasticity. | ||
C. The OLS estimators are efficient and unbiased due to heteroscedasticity. | ||
D. The OLS estimators are unbiased but not efficient due to heteroscedasticity. |
A. The disturbance term may not be independent of the regressors, so it is possible for the regressors and the disturbance term to have a distributional relationship between them. | ||
B. The disturbance term may not have a zero expectation. | ||
C. The specification of the model is necessarily non-linear when the regressors are randomly selected from fixed populations. | ||
D. The regressors are all correlated such that the possibility of multicollinearity is very high. |
A. It is a biased estimator. | ||
B. It is an unbiased estimator. | ||
C. Biasedness cannot be determined in this case, because the weight aᵢ depends upon the values of the regressors. | ||
D. There is not enough information given to calculate the biasedness. |
A. Provided that the regression model assumptions are valid, the estimator is consistent. | ||
B. Provided that the regression model assumptions are valid, the OLS estimators are BLUE (best linear unbiased estimators), as assured by the Gauss-Markov theorem. | ||
C. Provided that the regression model assumptions are valid, the estimator has a zero mean. | ||
D. If the disturbance term has a normal distribution, the regression coefficients also have normal distributions. |
A. The estimator for the slope coefficient is biased and inconsistent. | ||
B. The standard errors, t-tests, and F-tests are invalid. | ||
C. All of the above | ||
D. None of the above |
A. Only I is correct. | ||
B. Both II and III are correct. | ||
C. Only II is correct. | ||
D. Only III is correct. |
A. A proxy variable | ||
B. A binary dependent variable | ||
C. An instrumental variable | ||
D. A dummy variable |
A. The IV estimator is biased and inconsistent. | ||
B. The IV estimator is unbiased and consistent. | ||
C. The IV estimator is unbiased but inconsistent. | ||
D. The IV estimator is biased but consistent. |
A. Two-stage least squares | ||
B. Ordinary least squares | ||
C. Maximum likelihood estimation | ||
D. Restricted least squares |
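Option A, two-stage least squares, can be sketched directly: regress the endogenous regressor on the instrument, then regress the dependent variable on the fitted values. Synthetic data with a single regressor and a single instrument; all names and parameter values are illustrative:

```python
# With x endogenous (correlated with u), OLS is inconsistent; using the
# instrument z in two stages recovers the true slope.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
z = rng.normal(size=n)                       # instrument
v = rng.normal(size=n)
u = rng.normal(size=n) + 0.8 * v             # error correlated with x
x = 1.0 + 1.5 * z + v                        # endogenous regressor
y = 2.0 + 0.5 * x + u                        # true slope is 0.5

def ols_slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

print("OLS slope (biased):", ols_slope(x, y))

# Stage 1: fitted values of x from the instrument
x_hat = x.mean() + ols_slope(z, x) * (z - z.mean())
# Stage 2: regress y on the fitted values
print("2SLS slope:        ", ols_slope(x_hat, y))
```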
A. Durbin Watson test | ||
B. Goldfeld-Quandt test | ||
C. Durbin-Wu-Hausman test | ||
D. White test |
A. A linear combination of two or more time series will be non-stationary if one or more of them is non-stationary, even if there is a long term relationship between the series. | ||
B. Even if a co-integrating relationship belongs to a system of simultaneous relationships, OLS may be used because any simultaneous equations bias tends to zero asymptotically. | ||
C. If there is a case of a co-integrating relationship, least squares estimators can be shown to be super consistent. | ||
D. To test for co-integration, it is necessary to evaluate whether the disturbance term is a stationary process. |
A. Cₜ = Yₜ | ||
B. Cₜ = β₁ + β₂Yₜ | ||
C. Cₜ + Iₜ = β₁ + β₂Yₜ + uₜ | ||
D. |
A. OLS method is biased, but the IV method is consistent. | ||
B. Both OLS and IV are biased. | ||
C. Both OLS and IV are inconsistent. | ||
D. IV is always preferred over OLS in this case. |
A. The demand function is identified, but the supply function is not. | ||
B. The supply function is identified, but the demand function is not. | ||
C. Both are identified. | ||
D. Neither is identified. |
A. The demand function is over identified, but the supply function is not identified. | ||
B. Both are just identified. | ||
C. The supply function is over identified, but the demand function is not identified. | ||
D. Neither is identified. |
A. The demand function is just identified, but the supply function is not identified. | ||
B. The supply function is just identified, but the demand function is not identified. | ||
C. Neither is identified. | ||
D. Both are just identified. |
A. The distribution of the disturbance terms is continuous. | ||
B. The variances of the disturbance terms are heteroscedastic. | ||
C. The OLS estimation of the linear probability model may predict probabilities of more than 1 or less than 0. | ||
D. The disturbance terms are not normally distributed, and hence the standard errors and test statistics are invalid. |
A. In both models, the estimated probability of an event occurring lies outside the [0,1] bounds. | ||
B. In both models, the estimated probability of an event occurring lies within the [0,1] range. | ||
C. Both models are linearly related to the explanatory variables. | ||
D. Both models are interchangeable with the linear probability model. |
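These options contrast the linear probability model with logit/probit: OLS on a 0/1 outcome can produce fitted values outside [0, 1], while a model whose index is passed through a CDF cannot. A small sketch with synthetic data (coefficients, sample size, and variable names are illustrative):

```python
# Fit a linear probability model by OLS and a logit by maximum likelihood,
# then count fitted values that fall outside the unit interval.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 400
x = rng.uniform(-4, 4, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 1.2 * x))))   # 0/1 outcome

# Linear probability model: OLS of the 0/1 outcome on x
b2 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
b1 = y.mean() - b2 * x.mean()
lpm_fit = b1 + b2 * x
print("LPM fitted values outside [0,1]:  ", int(np.sum((lpm_fit < 0) | (lpm_fit > 1))))

# Logit: the logistic CDF keeps fitted probabilities strictly inside (0, 1).
def neg_loglik(beta):
    p = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))
    p = np.clip(p, 1e-9, 1 - 1e-9)                 # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_hat = minimize(neg_loglik, x0=np.zeros(2)).x
logit_fit = 1 / (1 + np.exp(-(beta_hat[0] + beta_hat[1] * x)))
print("logit fitted values outside [0,1]:", int(np.sum((logit_fit < 0) | (logit_fit > 1))))
```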
A. Through a dummy variable | ||
B. Simultaneous equations: one equation having only w₁ observations and the other with (w₁ + w₂) observations | ||
C. Tobit model | ||
D. Logit model |
A. Autoregressive model | ||
B. Probit model | ||
C. Logit model | ||
D. Linear dependent model |
A. It cannot be interpreted individually without taking into account the impact of the other lagged variables. | ||
B. It is the short run effect of the influence of X on Y. | ||
C. It is the long run effect of the influence of X on Y. | ||
D. It is the constant influence of X on Y. |
A. It cannot be interpreted individually without taking into account the impact of the other lagged variables. | ||
B. It is the short run effect of the influence of X on Y. | ||
C. It is the long run effect of the influence of X on Y. | ||
D. It is the interim influence of X on Y. |
A. β₂ | ||
B. γ | ||
C. β₂ + γ | ||
D. β₂ γ |
A. Partial adjustment model | ||
B. Adaptive expectations model | ||
C. Rational expectations model | ||
D. Error correction model |
A. ADL(0,1) model | ||
B. ADL(0,0) model | ||
C. ADL(1,0) model | ||
D. It would remain as an ADL(1,1) model. |
A. The disturbance term does not have zero mean. | ||
B. The disturbance term is heteroscedastic. | ||
C. The values of the disturbance term do not have independent distributions. | ||
D. The disturbance term is not distributed independently of the regressors. |
A. Third-order moving average autocorrelation: MA(3) | ||
B. Autoregressive distributed lag (ADL) | ||
C. Third-order autoregressive autocorrelation: AR(3) | ||
D. Adaptive expectations |
A. The regression coefficients remain unbiased. | ||
B. Autocorrelation causes the standard errors to be estimated wrongly, often being biased downwards. | ||
C. OLS is unbiased and efficient and therefore the preferred choice for estimation. | ||
D. OLS estimators are biased if the disturbance term is subject to autocorrelation. |
A. Breusch-Godfrey test | ||
B. Durbin's d-test | ||
C. Durbin's h-test | ||
D. White test |
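The d statistic in option B (the Durbin-Watson statistic) is computed from the residuals as d = Σ(eₜ - eₜ₋₁)² / Σeₜ², which is roughly 2(1 - ρ) under AR(1) autocorrelation, so values near 2 suggest no first-order autocorrelation. A short sketch with synthetic residuals:

```python
# Compare the d statistic for white-noise residuals with that for
# positively autocorrelated AR(1) residuals.
import numpy as np

def durbin_watson(e):
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(10)
T, rho = 500, 0.7
eps = rng.normal(size=T)
e_ar1 = np.empty(T)
e_ar1[0] = eps[0]
for t in range(1, T):
    e_ar1[t] = rho * e_ar1[t - 1] + eps[t]     # positively autocorrelated

print("d for white-noise residuals:   ", durbin_watson(eps))     # near 2
print("d for AR(1) residuals (rho=.7):", durbin_watson(e_ar1))   # well below 2
```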
A. The disturbance term ε_t is independently distributed and satisfies all OLS assumptions, so the equation is free of autocorrelation. | ||
B. There is no multicollinearity in the last equation which is why it is free of autocorrelation. | ||
C. The coefficients now take into account the value of ρ, and thus autocorrelation is eliminated. | ||
D. The last equation takes into account the past values of the variables, thus eliminating autocorrelation. |
A. No. There is no test for this purpose. | ||
B. Yes, through the common factor test. | ||
C. Yes, by using Durbin's d-test. | ||
D. Yes, by using Durbin's h-test. |
A. The population variance of the distribution is independent of time. | ||
B. The population covariance between its values at any two time points depends only on the distance between those points and not on time. | ||
C. The population mean of the distribution is independent of time. | ||
D. The population variance of the distribution tends to zero as the time 't' becomes large. |
A. -1 < β₂ < 1 | ||
B. β₂ > 1 | ||
C. 0 < β₂ < 1 | ||
D. β₂ = 1 |
A. It would be stationary and referred to as random walk with drift. | ||
B. It would be non-stationary and referred to as a random walk. | ||
C. It would be non-stationary with a deterministic trend. | ||
D. It would be stationary and referred to as an integrated model. |
A. An error correction model | ||
B. A stationary model with a deterministic trend | ||
C. A non-stationary model called random walk with drift | ||
D. A stationary model called random walk with drift |
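The distinctions above can be seen by simulating Yₜ = β₁ + β₂Yₜ₋₁ + uₜ for different parameter values: |β₂| < 1 gives a stationary series, β₂ = 1 with β₁ = 0 a random walk, and β₂ = 1 with β₁ ≠ 0 a random walk with drift. A brief sketch with synthetic shocks (all numbers are illustrative):

```python
# Simulate a stationary AR(1), a random walk, and a random walk with drift
# from the same shocks and compare how far they wander.
import numpy as np

rng = np.random.default_rng(11)
T = 1000
u = rng.normal(size=T)

def simulate(b1, b2):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = b1 + b2 * y[t - 1] + u[t]
    return y

series = {
    "stationary AR(1)":       simulate(0.0, 0.5),   # mean-reverting around zero
    "random walk":            simulate(0.0, 1.0),   # variance grows with t
    "random walk with drift": simulate(0.2, 1.0),   # wanders plus a trend
}
for name, y in series.items():
    print(f"{name:24s} last value: {y[-1]:8.2f}  spread: {y.std():8.2f}")
```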
A. It implies that even if two or more time series are individually non-stationary, their linear combination can be stationary, suggesting that there exists a long term or equilibrium relationship between them. | ||
B. It refers to the regression of one time series variable on other time series variables. | ||
C. It refers to the graph which depicts the cumulative autocorrelation of disturbance terms at various lags. | ||
D. It implies that we can reconcile a trend-stationary process with a difference-stationary process. |
A. Panel data is often large in comparison to cross-section data. | ||
B. The problem of bias caused by unobserved heterogeneity is removed in panel data. | ||
C. They are relatively inexpensive compared to cross-section data. | ||
D. They are less susceptible to measurement error. |
A. Only I explains the fixed effects regression method. | ||
B. Only II explains the fixed effects regression method. | ||
C. Only III explains the fixed effects regression method. | ||
D. All three explain the fixed effects regression method. |
A. In the fixed effects model, the sample or the number of observations in the panel data is fixed over time, whereas in the random effects model, the number of observations in the panel data is not constant. | ||
B. In the fixed effects model, the unobserved effect in the model is correlated with the explanatory variable, whereas in the random effects model it is not. | ||
C. The fixed effects model uses time demeaning to estimate the coefficients, whereas the random effects model uses the first differences method. | ||
D. It is possible to use the least squares dummy variable method in the random effects model but not in the fixed effects model. |
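The within (time-demeaning) transformation behind the fixed effects estimator removes an unobserved, time-invariant entity effect even when it is correlated with the regressor. A minimal sketch on a synthetic panel (all names and parameter values are illustrative):

```python
# Pooled OLS is biased when the entity effect a_i is correlated with x;
# demeaning each entity's data (the within transformation) removes a_i.
import numpy as np

rng = np.random.default_rng(12)
N, T, beta = 200, 6, 1.5
a = rng.normal(size=N)                                 # unobserved entity effect
x = np.repeat(a, T) * 0.8 + rng.normal(size=N * T)     # x correlated with a
y = np.repeat(a, T) + beta * x + rng.normal(size=N * T)
entity = np.repeat(np.arange(N), T)

def demean_by(v, groups):
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, v)
    counts = np.bincount(groups)
    return v - (sums / counts)[groups]

x_w, y_w = demean_by(x, entity), demean_by(y, entity)
print("pooled OLS slope (biased):", np.cov(x, y)[0, 1] / np.var(x, ddof=1))
print("within (FE) slope:        ", x_w @ y_w / (x_w @ x_w))
```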
A. Only I needs to be considered. | ||
B. Both I and II need to be considered. | ||
C. Both I and III need to be considered. | ||
D. Both II and III need to be considered. |