[Eeglablist] nested hypothesis testing to decide whether to use one or two dipoles to fit a component

Maximilien Chaumon maximilien.chaumon at gmail.com
Mon Aug 13 05:16:56 PDT 2012


Hello eeglab & Fieldtrip,

I'm trying to find out whether it would be possible to use a nested
hypothesis testing approach to decide between a one-dipole and a two-dipole
model when estimating components' dipole locations.
The rationale I would like to follow is this: with two dipoles, we will
always obtain a better fit than with one dipole, but under the null
hypothesis that the second dipole adds nothing, the statistic built from the
decrease in the sum of squared errors (SSE) should follow an F distribution
with k (= Nparameters_2dipoles - Nparameters_1dipole) numerator degrees of
freedom. If the decrease in SSE is larger than expected under this F
distribution, then we conclude that two dipoles provide a sufficiently
better fit and decide to use them.
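
For concreteness, here is a minimal MATLAB sketch of the test I have in mind.
All names are placeholders: sse1 and sse2 would be the sums of squared errors
of the one- and two-dipole fits, k1 and k2 their parameter counts (question 1
below), and nchan the number of channels treated as observations; fcdf is the
F cumulative distribution function from the Statistics Toolbox.

% nested-model F test: one-dipole (reduced) vs. two-dipole (full) fit
df1 = k2 - k1;                % numerator df: extra parameters in the full model
df2 = nchan - k2 - 1;         % denominator df: residual df of the full model
F   = ((sse1 - sse2) / df1) / (sse2 / df2);
p   = 1 - fcdf(F, df1, df2);  % p-value under the null that the 2nd dipole adds nothing
useTwoDipoles = p < 0.05;     % keep the second dipole only if the improvement is significant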

I asked this question on the eeglablist and Scott pointed out that it is
difficult (impossible?) to determine whether the second dipole fits actual,
interesting data or just noise introduced by the imperfect head model.
Christian then said it would be worth a shot, and I agree, so here I am again
with two questions, or rather two requests for confirmation:

1) *How many parameters are estimated in ft_dipolefitting.m?* Especially in
the case of 2 dipoles. If I count correctly, we estimate 6 parameters for
one dipole and, depending on whether the orientation has to be the same for
the 2 dipoles, one (amplitude) or three (amplitude and orientation) more.
2) *Can I treat the relative residual variance as an SSE?* The function
rv.m does this: rv = sum((d1-d2).^2) ./ sum(d1.^2);
That is the sum of squared errors divided by the total sum of squares of the
original data. So if I multiply the rv by the sum of the squared component
map, I should recover the SSE, right? (See the sketch just below.)
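
In code, my understanding of question 2 would be the following (a sketch only,
and assuming, as I read rv.m, that d1 is the original component map and d2 the
map predicted by the dipole model, with rv the returned value):

% rv.m returns the relative residual variance: rv = sum((d1-d2).^2) ./ sum(d1.^2)
% multiplying by the total sum of squares of the data recovers the SSE
sse = rv * sum(d1.^2);   % sum of squared errors of the fitted dipole model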

Thanks a lot!
Max

2012/8/11 Christian Kothe <christiankothe at gmail.com>

> I can only speak from my armchair here, but it sounds like it should be
> worth a try - even if you don't get the # of parameters exactly right, it
> will probably still give you some degree of complexity control within its
> range of validity. If it works, it may inspire follow-up work (e.g.,
> Bayesian model selection or likelihood ratio tests).
>
> The number of parameters for a 2-dipole model seems to be 3 (xyz) + 4 (2x
> the orientation parameters). Not sure about the moment, though - you might
> look up the place where the actual function minimization is performed in
> dipfit (the fminunc call?) and see whether it is optimized together with
> the others.
>
> Christian
>
> On Aug 10, 2012, at 3:29 PM, Scott Makeig <smakeig at gmail.com> wrote:
>
> MAX - Unfortunately, in general using two dipoles rather than one will
> ~always improve the fit. Even if the source is a pure single dipole, a
> second dipole can be used to correct for noise or errors in the forward
> head model. This is less often the case for the constrained,
> spatially-symmetric dipole pairs allowed by dipfit().  However, we have not
> thought of an optimal way to decide between using one or (occasionally) two
> dipoles to fit, e.g., maps of ICA brain sources.  The goal would be to decide
> whether the two-dipole version is fitting noise/forward-model error vs.
> actual bilateral source generation...
>
> Scott Makeig
>
> On Thu, Aug 9, 2012 at 1:54 AM, Maximilien Chaumon <
> maximilien.chaumon at gmail.com> wrote:
>
>> Hello all,
>>
>> When fitting dipoles to components, we are all sooner or later puzzled by
>> the question of whether to use one dipole or two symmetrical dipoles.
>>
>> Would it be correct to frame the problem in terms of nested hypothesis
>> testing?
>>
>> We fit a scalp map with one or two dipoles and get a residual variance
>> after the fit.
>> Could we not use this residual variance as a measure of the SSE and
>> compute an F statistic to decide between the more complex (two-dipole) and
>> the simpler (one-dipole) of two nested models?
>> If yes, then how would we decide on the number of degrees of freedom? How
>> many free parameters do we have in each case? x, y, z, and two orientation
>> angles per dipole? How does the imposed symmetry affect that number? Can we
>> really map residual variance onto an SSE? How many "observations" do we
>> have in that case (see the formula below)?
>>
>> I found this formula for F (for nested models compared via their SSE):
>> F = ((SSE_R - SSE_F) / (k_F - k_R)) / (SSE_F / (N - k_F - 1))
>> where
>> SSE is the sum of squared errors,
>> k is the number of parameters,
>> N is the number of observations (what would that be in our case?),
>> and the F and R subscripts denote the full and reduced models respectively
>> (in our case two and one dipole).
>>
>>
>> Thanks a lot for any comment!
>> Best,
>> Max
>>
>
>
>
> --
> Scott Makeig, Research Scientist and Director, Swartz Center for
> Computational Neuroscience, Institute for Neural Computation; Prof. of
> Neurosciences (Adj.), University of California San Diego, La Jolla CA
> 92093-0559, http://sccn.ucsd.edu/~scott
>
>