<div dir="ltr">Hi Tim,<div><br></div><div>Yes, I was searching for approximate test statistics and p-values generated by creating pseudotrials within each trial. I'll try this out. I'm thinking it might be wise to leave a gap between the pseudotrials (i.e., not make them contiguous EEG segments) so that they are closer to independent. Leaving out every other pseudotrial might be a reasonable tradeoff. One could get two test statistics, or just two sample variances, this way: one from the even pseudotrials and one from the odd ones. </div><div><br></div><div>This is a bit hacky though, and I wonder if there are canonical methods to deal with the lack of independence. </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 22, 2016 at 12:59 PM, Tim Mullen <span dir="ltr"><<a href="mailto:mullen.tim@gmail.com" target="_blank">mullen.tim@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">Winslow, Makoto,</p>
<p dir="ltr">As a statistical principle, bootstrapping can only be used when you have multiple independent and identically distributed (i.i.d.) observations available. The observations are resampled with replacement from the original set to construct an empirical probability distribution of the statistic of interest. </p>
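<p dir="ltr">The resampling principle can be sketched in a few lines of code. This is a minimal NumPy illustration with a hypothetical bootstrap_ci helper, not SIFT's implementation:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(samples, stat=np.mean, n_boot=2000, alpha=0.05):
    """Resample observations with replacement to build an empirical
    distribution of `stat`, then return a percentile confidence interval.
    Valid only when `samples` are (approximately) i.i.d. observations."""
    samples = np.asarray(samples)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # Draw n observations with replacement from the original n.
        draw = rng.choice(samples, size=samples.size, replace=True)
        boots[b] = stat(draw)
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Simulated i.i.d. observations standing in for per-trial measurements.
data = rng.normal(loc=1.0, scale=0.5, size=50)
lo, hi = bootstrap_ci(data)
```

<p dir="ltr">The point is that the resampling unit is the independent observation (the trial); with only one observation there is nothing to resample.</p>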
<p dir="ltr">It is not possible to use bootstrapping to test for statistical differences between only two observations (i.e., two trials). In general, with any test, your statistical power will be extremely low if you have only one observation per condition. </p>
<p dir="ltr">You can try to mitigate this by segmenting your long continuous trials into short 'pseudo-trials' and then testing for differences in the pseudo-trials between conditions. Make sure that you average your causal measure over time within each pseudo-trial before computing your statistics. One concern is that the pseudo-trials may be far from i.i.d. within a condition, so if you use the bootstrap, the bootstrap distribution may not converge to the true distribution of the estimator and your statistics will be biased. </p>
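<p dir="ltr">The pseudo-trial approach might look like the following sketch. The helper names are hypothetical and simulated data stands in for a connectivity time course; note the caveat in the comments about serial dependence:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_trials(x, win, gap=0):
    """Cut a long single-trial time course `x` into non-overlapping windows
    of `win` samples, skipping `gap` samples between windows, and average
    the measure within each window (one value per pseudo-trial)."""
    step = win + gap
    starts = range(0, len(x) - win + 1, step)
    return np.array([x[s:s + win].mean() for s in starts])

def boot_diff_p(a, b, n_boot=2000):
    """Two-sided bootstrap p-value for a difference in means between two
    sets of pseudo-trials, resampling from the pooled values under the
    null of no condition difference. Caveat: if the pseudo-trials are
    serially dependent, this null distribution is too narrow and the
    p-values are optimistic."""
    obs = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        ra = rng.choice(pooled, size=a.size, replace=True)
        rb = rng.choice(pooled, size=b.size, replace=True)
        diffs[i] = ra.mean() - rb.mean()
    return (np.sum(np.abs(diffs) >= abs(obs)) + 1) / (n_boot + 1)

# Two simulated 'conditions' with different mean connectivity levels.
cond_a = rng.normal(0.5, 0.1, size=5000)
cond_b = rng.normal(0.3, 0.1, size=5000)
pa = pseudo_trials(cond_a, win=250, gap=250)  # gap between windows, as discussed
pb = pseudo_trials(cond_b, win=250, gap=250)
p = boot_diff_p(pa, pb)
```

<p dir="ltr">Leaving a gap between windows, as suggested above, reduces (but does not eliminate) the dependence between pseudo-trials.</p>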
<p dir="ltr">Depending on your specific null hypothesis and protocol, however, there may be alternative parametric and nonparametric tests you can apply.</p>
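<p dir="ltr">One common nonparametric option is a paired permutation (sign-flip) test across subjects or sessions, sketched here with hypothetical names. This illustrates the general idea behind permutation tests such as those in EEGLAB's statcond, not its actual code:</p>

```python
import numpy as np

rng = np.random.default_rng(2)

def paired_perm_p(cond1, cond2, n_perm=5000):
    """Paired permutation test: under the null of no condition effect, the
    sign of each subject's condition difference is exchangeable, so we
    randomly flip signs and compare the observed mean difference against
    the resulting null distribution."""
    d = np.asarray(cond1) - np.asarray(cond2)
    obs = d.mean()
    count = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=d.size)
        if abs((flips * d).mean()) >= abs(obs):
            count += 1
    # Add-one correction so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)

# Twelve simulated subjects with a consistent condition effect.
subj_c1 = rng.normal(0.6, 0.05, size=12)
subj_c2 = rng.normal(0.4, 0.05, size=12)
p = paired_perm_p(subj_c1, subj_c2)
```

<p dir="ltr">This requires multiple subjects (or repeated runs), which connects to the suggestion below.</p>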
<p dir="ltr">Otherwise, try to collect data from more subjects (then you can simply bootstrap across subjects, e.g., using statcond with SIFT matrices) or more trials (run your experiment more than once per condition).</p><span class="HOEnZb"><font color="#888888">
<p dir="ltr">Tim<br>
</p></font></span><div class="HOEnZb"><div class="h5">
<div class="gmail_extra"><br><div class="gmail_quote">On Aug 18, 2016 11:09 AM, "Makoto Miyakoshi" <<a href="mailto:mmiyakoshi@ucsd.edu" target="_blank">mmiyakoshi@ucsd.edu</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Dear Winslow,<div><br></div><div>Yes, unfortunately the bootstrap seems to be designed for use across trials.</div><div><br></div><div>Makoto</div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Aug 13, 2016 at 4:57 PM, Winslow Strong <span dir="ltr"><<a href="mailto:winslow.strong@gmail.com" target="_blank">winslow.strong@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I'd like to use a resampling technique (e.g., the bootstrap) to get p-values and test statistics for SIFT connectivity metrics for one subject across n conditions. <div><br></div><div>This is a steady-state condition study, hence there's only one trial per condition. I'm trying to analyze whether certain connectivity metrics (i.e., their averages over a condition) are statistically significantly different across the conditions. I was under the impression I could use SIFT's surrogate distribution generator to obtain the surrogate distribution for these calculations, but when I run that from the GUI for bootstrap, I get the error:</div><div><br></div><div>"Unable to compute bootstrap distributions for a single trial"</div><div><br></div><div>Is this surrogate function only designed to do bootstrapping over trials? Or is there a way to do it over windows within a condition?</div></div>
<br>_______________________________________________<br>
Eeglablist page: <a href="http://sccn.ucsd.edu/eeglab/eeglabmail.html" rel="noreferrer" target="_blank">http://sccn.ucsd.edu/eeglab/eeglabmail.html</a><br>
To unsubscribe, send an empty email to <a href="mailto:eeglablist-unsubscribe@sccn.ucsd.edu" target="_blank">eeglablist-unsubscribe@sccn.ucsd.edu</a><br>
For digest mode, send an email with the subject "set digest mime" to <a href="mailto:eeglablist-request@sccn.ucsd.edu" target="_blank">eeglablist-request@sccn.ucsd.edu</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div data-smartmail="gmail_signature"><div dir="ltr">Makoto Miyakoshi<br>Swartz Center for Computational Neuroscience<br>Institute for Neural Computation, University of California San Diego<br></div></div>
</div></div>
</blockquote></div></div>
</div></div></blockquote></div><br></div>