Bonus Friday link: a real use for fake data

Very interesting news article in Science this week on how physicists trying to detect the extremely subtle signal of gravitational waves use intentionally faked data in order to learn how to detect real signals. Once or twice per year there is a “blind injection”: without the knowledge of the physicists actually doing the data collection and analysis, the detector is subtly reconfigured so that it produces fake data that appear to contain a signal.

I was struck by the lengths to which these blind injections are taken. The physicists working on the data analysis are allowed to spend months chasing down fake signals, checking for errors, trying to confirm the signal via independent means, etc. They’ve gone so far as to have observatories look for the non-existent astronomical events that might have produced the fake signals, and in one case the team even wrote a paper and was ready to submit it when the deception was revealed. All of which perhaps just illustrates why fundamental physics, maybe alone among major fields of science, doesn’t need to worry about whether it has a systematic problem with its statistical tests. Other fields, quite likely including ecology, have a lot of catching up to do in terms of bringing their analytical practices closer to the textbook ideal.

I was also struck by how the fake data were produced: by physically manipulating the detector so that it produces exactly the sort of data that would be expected if a gravitational wave actually did occur. It’s not that unusual in ecology for people to test new analytical approaches by seeing if they can detect a signal in simulated (i.e. fake) data. But the problem with this approach is that it’s often not at all clear that the simulated “signals” in the simulated data actually resemble the signals that would be seen in real data. For instance, techniques for detecting non-random patterns of species co-occurrence in species × site matrices have been validated by testing them on simulated matrices known to contain “checkerboard” patterns (a given site contains species A, or species B, but never both). But it’s unclear when, if ever, interspecific competition actually would be expected to produce such a checkerboard pattern (and if you say “it’s obvious that it would”, sorry, no it’s not, as can easily be demonstrated with a simple competition model). Which is a problem, since we only care about “checkerboards” in the first place because they’re a putative “signal” of interspecific competition. Which perhaps just goes to show that, in order to properly test your signal-detection abilities with fake signals, you first have to know what constitutes a “signal”, and what exactly it’s a signal of.
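For concreteness, here is a minimal sketch of that kind of validation exercise (the matrix, null model, and parameter choices are all invented for illustration; published analyses use more constrained randomizations): plant a perfect checkerboard pair in an otherwise random species × site matrix and ask whether a standard co-occurrence statistic, the C-score, flags it as non-random. Note that this only tests whether the method can detect a checkerboard; it says nothing about whether competition would actually produce one, which is exactly the gap discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def c_score(m):
    """Mean number of Stone & Roberts (1990) 'checkerboard units' over
    all species pairs: (r_i - s_ij) * (r_j - s_ij), where r_i is species
    i's total occurrences and s_ij the number of sites shared with j."""
    r = m.sum(axis=1)           # occurrences per species (row totals)
    s = m @ m.T                 # shared-site counts for every species pair
    n = m.shape[0]
    units = [(r[i] - s[i, j]) * (r[j] - s[i, j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(units))

# Fake community: 10 species x 20 sites with random 40% occupancy...
m = (rng.random((10, 20)) < 0.4).astype(int)
# ...then plant a perfect checkerboard in species 0 and 1 (never co-occur).
m[0] = np.tile([1, 0], 10)
m[1] = 1 - m[0]

# Null model: independently shuffle each species across sites (fixed row
# sums, equiprobable sites -- a deliberately simple null, for brevity).
obs = c_score(m)
null = [c_score(np.array([rng.permutation(row) for row in m]))
        for _ in range(999)]
p = (1 + sum(x >= obs for x in null)) / 1000
print(f"observed C-score = {obs:.2f}, one-tailed p = {p:.3f}")
```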

9 thoughts on “Bonus Friday link: a real use for fake data”

  1. Nice link, thanks – that is a remarkable investment in testing the system! For this reason, I prefer to test statistical models by generating data from a more complex ecological model (like an individual-based model) that shares nothing structurally with the statistical model I’m testing. I call it Virtual Ecology (Volker Grimm calls it that too). Now I can cite Science in support of the idea!

    • That “virtual ecology” idea is definitely the right way to go. Far too many ecologists confuse statistical models and ecological models. This is one respect in which neutral theory (in evolution and ecology) deserves a lot of credit for setting a good example. Want to know what the world would look like in the absence of selection, so that you can validate analytical approaches for detecting selection? Build a process-based population genetic model that includes a parameter governing the strength of selection, and dial that parameter down to zero. As opposed to, say, doing some sort of constrained randomization of your observed data, or trying to generate “selection-free” data using some approach that doesn’t start from an explicit description of selection, drift, migration, etc.
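      A minimal sketch of the “dial it to zero” idea, in Python (the model structure and parameter values are illustrative, not from any particular paper): a haploid Wright–Fisher model with a selection coefficient s, where setting s = 0 yields the neutral, drift-only baseline against which selection-detection methods can be validated.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def wright_fisher(N=1000, s=0.05, p0=0.5, generations=200):
          """Haploid Wright-Fisher model: allele A has relative fitness
          1 + s. Setting s = 0 turns off selection, leaving pure drift."""
          p, traj = p0, [p0]
          for _ in range(generations):
              # Selection shifts the expected allele frequency deterministically...
              p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
              # ...then drift resamples N individuals binomially.
              p = rng.binomial(N, p_sel) / N
              traj.append(p)
          return np.array(traj)

      with_selection = wright_fisher(s=0.05)  # data with a known signal
      neutral = wright_fisher(s=0.0)          # same process, signal dialed to zero
      ```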

    • Another thought: one common objection to the sort of virtual ecology you’re suggesting is that it’s hard to know whether your virtual world is actually like the real world in the relevant respects. Now, that’s of course a spurious objection, because it’s always made in the service of justifying even *less* well-grounded approaches to validating one’s methods. But it occurs to me that one interesting way to address that objection would be to use *real* experiments to generate data that have the desired properties, and then test whether one’s analytical approach can detect the “signals” thereby generated. For instance, you want to validate an approach for detecting the effects of selection? Go out and impose artificial selection in nature, and see if your approach can detect its effects. This will of course be infeasible in some cases. But in many cases it won’t be.

      Just off the top of my head, I could see this as an easy way to get a nice paper on metacommunities. Using some tractable, manipulable model system like small ponds, manipulate environmental conditions and dispersal rates among ponds so that you have ponds in which species composition just reflects local environmental conditions, and others in which it reflects local environmental conditions plus dispersal from other ponds (indeed, such experiments have already been done). Then run the data through some of the ordination-based approaches that have been suggested to tease apart the effects of spatial factors (i.e. dispersal) and local environmental factors on local species composition. See if those ordination-based approaches actually pick up the treatment effects. Has someone already done this and I’ve just forgotten it? Because it seems like a really obvious idea to me…
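      To show the bare logic of that test, here is a stripped-down, single-response sketch (the published approaches use constrained ordination, e.g. RDA and varpart in R’s vegan package; everything below, including the effect sizes, is made up for illustration): simulate pond data in which abundance responds to both local environment and a spatial/dispersal covariate, then partition the explained variation into pure-environment, pure-space, and shared fractions.

      ```python
      import numpy as np

      rng = np.random.default_rng(7)

      def r2(X, y):
          """R-squared of an ordinary least-squares fit (intercept included)."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - resid.var() / y.var()

      # Fake pond experiment: 60 ponds, one environmental gradient, one
      # spatial/dispersal covariate; abundance responds to both by design.
      env = rng.normal(size=60)
      space = rng.normal(size=60)
      y = 1.0 * env + 0.5 * space + rng.normal(scale=0.5, size=60)

      r2_env = r2(env[:, None], y)
      r2_spa = r2(space[:, None], y)
      r2_both = r2(np.column_stack([env, space]), y)

      # Classic Borcard-style fractions: pure env, pure space, shared.
      pure_env = r2_both - r2_spa
      pure_space = r2_both - r2_env
      shared = r2_env + r2_spa - r2_both
      print(pure_env, pure_space, shared)
      ```

      If the partitioning method works, the recovered fractions should track the treatment effects you imposed; if it misattributes planted dispersal effects to environment (or vice versa), you have learned something important about the method.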

    • I’ve taken just this approach in my analysis of the analytical problems in extracting environmental signals from tree rings, by first building a tree growth and ring-response model that includes the essential elements affecting ring response (a toy version is sketched at the end of this comment). I address the issue Jeremy raised (how you know whether or not you’re testing a realistic, real-world situation) by essentially examining all realistic (and many unrealistic) scenarios, the set of which is constrained by certain basic tree growth processes. If the tested analytical methods cannot return a reliable, rather than spurious, signal in any situation, then you have very strong reason to believe that the methods have real problems.

      Anyway, I love this kind of thing, and think it is absolutely the way to go in observational science generally; thanks for the find, Jeremy. However, I wouldn’t necessarily want to be the one who’d invested hundreds of hours in research and in writing a paper thereon, only to find out I’d been part of a reality check!
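      A toy version of that workflow (my own drastic simplification; the actual simulation model is far more detailed): generate ring widths as an age-related growth trend modulated by a known climate signal, apply a conventional negative-exponential standardisation, and check whether the recovered ring-width index correlates with the true signal.

      ```python
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)

      years = np.arange(200.0)
      climate = rng.normal(size=200)           # the "true" environmental signal
      age_trend = 2.0 * np.exp(-years / 60)    # growth decline with tree age
      rings = age_trend * (1 + 0.3 * climate) + rng.normal(scale=0.02, size=200)

      # Conventional standardisation: fit and divide out a negative
      # exponential, then correlate the resulting index with climate.
      neg_exp = lambda t, a, b: a * np.exp(-t / b)
      (a, b), _ = curve_fit(neg_exp, years, rings, p0=(2.0, 60.0))
      index = rings / neg_exp(years, a, b)
      print(np.corrcoef(index, climate)[0, 1])  # strong positive: signal recovered
      ```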

  2. Some months ago I was curious about the application of Benford’s Law to detecting fraud in financial reports. Surprisingly, I was able to find many papers on applying Benford’s Law in data analysis, specifically to regression tables. Although I honestly do not think the regression coefficient is the best metric to evaluate (it has been the target of some of these papers), I believe the MS values from ANOVA tables would be a good candidate: they span more orders of magnitude than other reported statistics, and they are widely reported in papers. This would allow one to detect fraud in data analysis, which could serve as a surrogate for detecting fake data (a minimal sketch of the first-digit test appears below). The calculation needs many values, so it would be hard to tell whether any one person is cheating, but you could use it to compare reliability across journals. Do prestigious journals have more reliable data? Or does the pressure to produce clear results make them more prone to fraud?

    I’ll have to find some time to do this; I think it would be interesting.
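    A minimal sketch of the first-digit test (assumptions mine: in practice the input would be MS entries harvested from many published ANOVA tables; the placeholder data below are log-uniform, which follows Benford’s Law exactly):

    ```python
    import numpy as np
    from scipy.stats import chisquare

    def benford_test(values):
        """Chi-squared goodness of fit of first significant digits
        against Benford's Law, P(d) = log10(1 + 1/d) for d = 1..9."""
        values = np.asarray(values, dtype=float)
        # Scientific notation puts the first significant digit up front.
        first = np.array([int(f"{v:e}"[0]) for v in values[values > 0]])
        observed = np.bincount(first, minlength=10)[1:]   # counts of digits 1..9
        expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()
        return chisquare(observed, expected)

    # Placeholder for real mean-square values scraped from ANOVA tables:
    rng = np.random.default_rng(0)
    fake_ms = 10 ** rng.uniform(0, 5, size=500)   # log-uniform ~ Benford
    print(benford_test(fake_ms))                  # large p: no anomaly flagged
    ```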

  3. I once had a field tech take a huge leak in the field, next to one of my small plots, in the middle of an extended dry spell. (I learned about it later when we were just chatting; he didn’t think it was a big deal.) I asked him not to tell me which plot, because I wanted to see if I could figure it out by looking at the results. I couldn’t. Then he told me, and it was a minor outlier, which I then excluded.

    And that’s about as technical as it gets in ecology for error detection, huh?

  4. Pingback: Parasite Interactions: How can we detect them? | Parasite Ecology

  5. Pingback: Unrelated to all that, 03/21 edition | neuroecology

  6. Pingback: When, if ever, is it ok for a paper to gloss over or ignore criticisms of the authors’ approach? | Dynamic Ecology
