Dear list,
I want to compare how well different nested models fit my data, but I
am not sure how to choose the parameters, and the more I google the
more confused I get. Since tests on my real data gave implausibly
small p-values, I decided to run the tests on random data instead. The
p-value is supposed to tell me whether the full model fits the data
significantly better, i.e. how much signal for the additional
parameters is hidden in the data.
Since I use random data (there is no signal at all), I would expect a
uniform distribution between 0 and 1 for the p-values if I compare
full vs nested models.
I do 10000 runs, each with 20 normally distributed random X and Y
values (using gsl_ran_gaussian), i.e. 20 data points per run. I do two fits:
M1: y = a1x + a0 -> params1=1 (a0 does not count), df1=20-1-1=18
M2: y = a2x^2 + a1x + a0 -> params2=2 (a0 does not count), df2=20-2-1=17
I then calculate both errors (sum of squared residuals) and calculate F:
F = ((err1-err2) / (df1-df2)) / (err2/df2)
and calculate the p-value using
p=gsl_cdf_fdist_Q(F, df1-df2, df2)
I would expect a uniform distribution between 0 and 1, but the
distribution is skewed and shows far more small values than large ones
(see attached file), suggesting that the full model is "better" in
most cases. Obviously, there is something wrong, so I have a couple of
questions:
- is it correct that constant values (a0) do not count as parameters?