> How can it not be causality when it's a double-blind intervention?
The design of the study looks solid, and the results are certainly statistically significant. Still, with N=24 and the limitations quoted below, I think it's fair to take these results with a grain of salt.
But on balance, I found the criticism in GP's comment to be a little over the top.
- "By testing on subsequent days, it is possible that effects from one condition were reflected in the scores obtained on the next day."
- "The environmental factors that were not experimentally modified exhibited some variability owing to changes in outdoor conditions and participant behavior."
- "This study used a controlled environment to individually control certain contaminants. Assessments performed in actual office environments are important to confirm the findings in a noncontrolled setting."
It's not just N=24: those same participants each took 9 independent tests, which generates considerably more data. Also, the p-value should be interpreted jointly with the N. The sampling variance of the test statistic goes up when N is small, and the test's significance threshold already accounts for that, which partly offsets the limitation.
But, a larger N and replication studies are needed.
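To make the point concrete, here's a minimal simulation sketch. All numbers (baseline score, effect size, noise levels) are made up for illustration and do not come from the study; it just shows how 24 participants times 9 repeated tests can support a significant paired comparison even though N=24 looks small, because averaging over repeats shrinks each participant's measurement noise and the t distribution's 23 degrees of freedom already bake in the small-N penalty.

```python
# Hypothetical within-subject simulation: 24 participants, 9 tests per
# condition. Baseline, effect, and noise values are invented for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_tests = 24, 9

baseline = rng.normal(100, 10, size=n_participants)  # stable per-person skill
true_effect = 5.0                                    # assumed treatment effect

# Scores under control and treatment; averaging 9 noisy tests gives a much
# better estimate of each participant's mean than a single test would.
control = baseline[:, None] + rng.normal(0, 8, size=(n_participants, n_tests))
treated = (baseline[:, None] + true_effect
           + rng.normal(0, 8, size=(n_participants, n_tests)))

# Paired t-test on per-participant means: the small-N cost shows up as
# wider tails via 23 degrees of freedom, which the threshold accounts for.
t_stat, p_value = stats.ttest_rel(treated.mean(axis=1), control.mean(axis=1))
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The pairing matters: comparing each participant to themselves removes the large person-to-person baseline variance, which is exactly what a within-subject crossover design buys you.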