Friday links: statistics vs. TED talk, #scimom, Jeremy vs. Nate Silver, and more

Also this week: fake it ’til you make it (look like you work 80 hours/week), great Canadian minds think alike, evolutionary biologists vs. ecologists, E. O. Wilson vs. the OED, Wyoming vs. data, the evidence on anonymity and openness in peer review, subtle gender biases in award nomination, and much more. Lots of good stuff this week, so you might want to get comfortable first. Or skim, if you must. But whatever you do, stick with it until the end so you can read about a runaway trolley speeding towards Immanuel Kant. 🙂

From Brian (!):

A neat example of the importance of nomination criteria for gender equity is buried in this post about winning Jeopardy (an American television quiz show). For a long time only 1/3 of the winners were women. This might lead Larry Summers to conclude that men are just better at recalling facts (or at clicking the button to answer faster). But a natural experiment (scroll down to the middle of the post, to “The Challenger Pool Has Gotten Bigger”) shows that nomination criteria were the real problem. In 2006 Jeopardy changed how it selected contestants. Before 2006 you had to self-fund a trip to Los Angeles to participate in try-outs to get on the show. This required a certain chutzpah/cockiness to lay out several hundred dollars with no guarantee of even being selected. And 2/3 of the winners were male because more males were making the choice to take this risk. Then they switched to an online test. And suddenly more participants were female and suddenly half the winners were female. It seems so subtle and removed from the key point (who wins the quiz show), but airline flight vs. online test seems to make a huge difference. What accidental but poorly designed nomination criteria are lurking in academia? Several bloggers, including Meg and Morgan, have commented on how the nomination process can have a big impact on equitable gender outcomes in an academic context.

From Meg:

This article on how some men (and some, though fewer, women) fake 80-hour work weeks is interesting. To me, the most interesting part was the end:

But the fact that the consultants who quietly lightened their workload did just as well in their performance reviews as those who were truly working 80 or more hours a week suggests that in normal times, heavy workloads may be more about signaling devotion to a firm than really being more productive. The person working 80 hours isn’t necessarily serving clients any better than the person working 50.

The article is based on a study in the corporate world, but definitely applies to academia, too. (ht: Chris Klausmeier)

Apparently I wasn’t the only woman to have a post appear on Monday about how it’s possible to be a scientist and a mom, and about the importance of role models! I really enjoyed this piece by anthropologist and historian Carole McGranahan. (My piece from Monday is here.)

From Jeremy:

Hilda Bastian (an academic editor at PLOS) takes a deep dive into all of the comparative and experimental evidence on anonymity and openness in peer review. It’s a blog post rather than a paper and so hasn’t been reviewed itself, so I’m trusting her to have done it right (FWIW, it has all the signs of trustworthiness). I love that she’s up front about the serious design and sample size problems of many studies. That’s one of the main take-homes, actually–on several issues, the available evidence sucks, so you can’t draw conclusions on those issues. And I love that she’s looking at all the available evidence, not just focusing on whichever study (or appalling anecdote) gets talked about most or supports her views (she favors openness over anonymity). Among her conclusions:

  • Reviewers often see through author blinding
  • Revealing reviewer identities causes many reviewers to decline to review, but may make reviews somewhat better
  • Author blinding can reduce, increase (yes, increase), or have no effect on gender bias. But the evidence is pretty unreliable and hard to interpret.

Stephen Heard on why scientific grant funding should be spread fairly evenly among investigators. Echoes an old post of mine (we even independently came up with equivalent graphical models!), though Stephen goes beyond my post in considering how uncertainty in predicting PIs’ future productivity should affect funding allocation.

Caroline Tucker comments on the opposing papers deriving from the ASN meeting’s debate on ecological vs. evolutionary limits on continental-scale species richness. Haven’t read them myself yet, but judging from her comments I’m wondering if the competing hypotheses are too vaguely defined to actually be testable. Whenever people disagree on whether evidence X even counts as a test of hypothesis Y, that makes my spidey sense (sorry, my vague hypothesis sense) tingle.

The always-thoughtful Arjun Raj muses on when to retract a paper. Not as easy a call as you might think.

This is old but I missed it at the time: great This American Life episode on the fuzzy boundary between bold science and crackpottery, as exemplified by a collaboration between an NIH-funded cancer researcher and a musician. A meditation on the importance–and frustration–of looking for evidence against your ideas (“severe tests”) rather than evidence for them. Here are my related old posts on pseudoscience and scientific lost causes. (ht Andrew Gelman)

His own recent claim notwithstanding, no, E. O. Wilson did not coin the term “evolutionary biology”, though it’s possible that he helped to popularize it.

Dismantling the evidence behind the most-viewed TED talk ever. The first bit (before the p-curve stuff) would be a good example for an introductory stats course.

Speaking of good examples for an intro stats course, here’s Nate Silver committing the most common and serious statistical mistake made by people who should know better: letting the data tell you what hypothesis to test, and then testing it on the same data. This mistake goes by various names (circular reasoning, “double-dipping”, the “Texas sharpshooter fallacy”). Here, Silver notices an unusual feature of some ice hockey data, and then calculates a very low probability that the feature would occur by chance. Which is very wrong (and no, the fact that P is way less than 0.05 here does not make it ok). Every dataset has some “unusual” features, just by chance. You can’t notice whichever feature that happens to be, and then test whether that particular feature would be expected to occur by chance alone. Because if the dataset had happened to exhibit some other “unusual” feature, you’d have done the test on that feature instead (Andrew Gelman calls this “the garden of forking paths”). It’s the equivalent of hitting a golf ball down a fairway, and then declaring that it’s a miracle that the ball landed where it did, because the odds are astronomical that the ball would land on that particular spot by chance alone (can’t recall where I read that analogy…). Nate Silver’s on record saying that frequentist statistics led science astray for a century. But ignoring its basic principles (here, predesignation of hypotheses) isn’t such a hot idea either. Come on, Nate, you’re better than this.
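
To make the garden-of-forking-paths problem concrete, here is a minimal simulation sketch, not Silver’s actual calculation: the 50% chance that any given game is a 1-goal game, the league size, and the number of games per team are all made-up numbers for illustration. It searches pure noise for each team’s most striking streak and then computes the naive probability of that particular streak, the way a post-hoc test would:

```python
import random

random.seed(1)

P_ONE_GOAL = 0.5   # hypothetical per-game chance of a 1-goal margin (illustrative only)
N_TEAMS = 30       # NHL-sized league
N_GAMES = 100      # made-up number of playoff games per team over many seasons

def longest_run(flags):
    """Length of the longest run of True values in a sequence."""
    best = cur = 0
    for f in flags:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

# Simulate a league in which nothing interesting is going on: every game is
# independently a 1-goal game with probability P_ONE_GOAL.
naive_probs = []
for _ in range(N_TEAMS):
    games = [random.random() < P_ONE_GOAL for _ in range(N_GAMES)]
    streak = longest_run(games)
    # The post-hoc move: notice the team's longest streak, then ask
    # "what are the odds of THAT happening by chance?"
    naive_probs.append(P_ONE_GOAL ** streak)

print(f"smallest naive 'probability' across teams: {min(naive_probs):.1e}")
print(f"teams whose most striking streak looks 'significant' (naive p < 0.05): "
      f"{sum(p < 0.05 for p in naive_probs)} of {N_TEAMS}")
```

Under this null model every team has an “unusual” streak somewhere, and the naive probability of that streak usually comes out looking “significant”. That is exactly why noticing a pattern first and testing it second isn’t a reliable way to separate signal from noise.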

In praise of linear models. From economics, but non-technical and applicable to ecology.

Wyoming just criminalized gathering environmental data if you plan to share the data with the state or federal government. IANAL, but I can’t imagine this passing constitutional muster. But in a weird way, I’m kind of impressed with Wyoming here. Go big or go home, as the saying goes–even when it comes to data suppression. (ht Economist’s View)

This is from last month but I missed it at the time: Paige Brown Jarreau summarizes her doctoral dissertation on why science bloggers blog and what they blog about. Looks like Meg, Brian, and I are typical in some ways, but atypical in other ways.

And finally: little-known variants of the trolley problem:

There’s an out of control trolley speeding towards Immanuel Kant. You have the ability to pull a lever and change the trolley’s path so it hits Jeremy Bentham instead…

(ht Marginal Revolution)

24 thoughts on “Friday links: statistics vs. TED talk, #scimom, Jeremy vs. Nate Silver, and more”

  1. I don’t understand your criticism of Nate Silver’s calculation. There is no hypothesis testing here. All he did was calculate the probability of an event using empirical data. And the “event” was not arbitrary – 1 goal games are exciting in playoff hockey, especially when you consider that all games going to OT will by definition end in a 1 goal difference. There are not endless features or possibilities here that could have been tested.

    I suppose you could criticize the difference between the probability of an event happening 14 times in a row, versus the probability that any 1 of 30 teams experiences such a streak over 18 years. This latter probability is much higher, but the fact that it had never happened before this year still suggests it is rare.

    Admittedly, sports commentary is full of esoteric statistics that are created to fit narratives. Every time a player gets compared to a group of hall-of-famers for doing something rare, it’s comical how the definition of the event is full of arbitrary thresholds (e.g., averaging some decent number during some important period). But I don’t think Nate’s example here is problematic.

    • Sure there’s a hypothesis: Nate calculated the probability that the streak would occur under the hypothesis that the games are independent events with scoring margins drawn randomly from the distribution of all playoff-game scoring margins.

      “I suppose you could criticize the difference between the probability of an event happening 14 times in a row, versus the probability that any 1 of 30 teams experiences such a streak over 18 years. ”

      Yup. That’s part of the problem here (a rough sketch of how big a difference that correction makes is below).

      “There are not endless features or possibilities here that could have been tested.”

      Sure there are, though they’re impossible to enumerate. It just doesn’t seem like it because we humans only tend to notice some of them. Nobody ever says something like “My goodness, the scores in the last 5 Oilers games have been 2-1, 4-2, 0-1, 0-3, and 2-1–what are the odds of getting those 5 exact scores all in a row?” I highly doubt that Nate decided a priori to look for streaks of 1-goal games and merely forgot to correct for the fact that the Rangers are only one of 30 teams in the NHL. Rather, something caused him to notice that the Rangers had played a bunch of 1-goal playoff games in a row–as opposed to something else causing him to notice some other purportedly-unusual thing–and so that’s what he did his test on.
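
      To put rough numbers on that multiplicity point, here is a quick simulation sketch. Everything in it is hypothetical: the 50% chance that any given playoff game is decided by one goal, and the number of playoff games a team accumulates over ~18 years, are made-up figures for illustration, not taken from the 538 piece.

```python
import random

random.seed(2)

P_ONE_GOAL = 0.5      # hypothetical chance any given playoff game is a 1-goal game
STREAK_LEN = 14       # the streak length in question
N_TEAMS = 30          # NHL-sized league
GAMES_PER_TEAM = 90   # made-up count of playoff games per team over ~18 years
N_SIMS = 2_000

def has_streak(n_games, p, k):
    """True if k consecutive '1-goal' games occur anywhere in n_games independent games."""
    run = 0
    for _ in range(n_games):
        run = run + 1 if random.random() < p else 0
        if run >= k:
            return True
    return False

# Probability that 14 *specified* consecutive games are all 1-goal games:
print(f"a specific {STREAK_LEN}-game window: {P_ONE_GOAL ** STREAK_LEN:.1e}")

# Probability that *some* team, *somewhere* in its games, shows such a streak:
hits = sum(
    any(has_streak(GAMES_PER_TEAM, P_ONE_GOAL, STREAK_LEN) for _ in range(N_TEAMS))
    for _ in range(N_SIMS)
)
print(f"any of {N_TEAMS} teams, anywhere in their games: ~{hits / N_SIMS:.2f}")
```

      With these made-up numbers the second probability comes out roughly three orders of magnitude larger than the first; that gap is the difference between “this exact streak for this team” and “a streak like this happening to somebody, sometime.”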

      • This reminds me of Richard Feynman’s sarcastic quote, “You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”

      • Hmm… I would say the suggestion that games are independent events is an assumption, not a hypothesis being tested here. There is some discussion in the comments on 538 addressing the “Wyatt Earp effect” I referenced earlier, which is not the same as the Texas sharpshooter fallacy. There are no relationships being inferred here. Nate did not claim that the Rangers were somehow special or that the streak was not random, just that the streak had a low probability of occurring.

        An event needs to be interesting for anyone to care. The fact that one can conceive of endless game-score combinations is irrelevant because only a small subset of those game score combinations is actually interesting (e.g., 1-goal games, shutouts). Nate looked at this because it was an exciting 1-goal game and it just so happened that the Rangers set an NHL record for consecutive 1-goal playoff games. You can argue that record is somewhat arbitrary, but it still fits in a subset of “interesting” outcomes.

        Jeremy, you seem to be suggesting that if some interesting event happens, there is no proper way to calculate the probability of that event having happened.

      • “I would say the suggestion that games are independent events is an assumption, not a hypothesis being tested here.”

        I think that’s semantics. Nate specified a null model, a null hypothesis, some assumptions…use whatever term you like.

        ” There is some discussion in the comments on 538 addressing the “Wyatt Earp effect” I referenced earlier, which is not the same as the Texas sharpshooter fallacy.”

        They’re related, at least in my mind, but fair enough if you’d prefer to set the Texas sharpshooter fallacy to one side and think about it in terms of the Wyatt Earp effect.

        “An event needs to be interesting for anyone to care. The fact that one can conceive of endless game-score combinations is irrelevant because only a small subset of those game score combinations is actually interesting”

        Perhaps that’s where we disagree. You think it’s somehow meaningful that humans tend to notice certain things and find them interesting, while tending not to notice or find interesting other equally “improbable” things. I think your stance amounts to implicitly assuming that, had Nate Silver decided in advance what unusual feature of the data to look for, he’d have decided to look for “an unusually long streak of 1-goal NY Rangers playoff games”.

        “Jeremy, you seem to be suggesting that if some interesting event happens, there is no proper way to calculate the probability of that event having happened.”

        Well, that’s not the way I’d put it. What I’m suggesting is that the interpretation of the probability of some event happening (under some null model or hypothesis or assumptions) is greatly altered if you only chose to calculate that probability because you think the event is unexpected under that null model/hypothesis/assumptions. The point of calculating the probability is to distinguish signal from noise: to identify events that should cause you to reject your null model/hypothesis/assumptions in favor of the alternative that there really is something interesting going on, as opposed to your brain doing the equivalent of seeing patterns in tea leaves or a face on the surface of the moon. Noticing events that seem to be unusual or interesting, and then calculating probabilities of those events under some null model, isn’t a reliable procedure for distinguishing signal from noise. In the long run, it will cause you to mistake noise for signal very often.

        Put another way, Nate’s probability calculation is mostly if not entirely redundant. He’s already noticed something that seems unusual, and so his probability calculation is very probably going to “confirm” that. In which case, why bother doing the calculation? There’s little or no chance that you’ll ever discover that any event you thought was “interesting” actually wasn’t.

        But I feel like I’m repeating myself and it’s just not clicking with you, so I’m not sure what else to say…Does Sam’s Feynman anecdote click any better, or do you see that as just irrelevant too?

      • I think I understand Dan’s perspective on this one. Yes, asking what is the probability of a certain license plate occurring after having seen the license plate makes the question uninteresting, but I fail to see how that negates one’s right to ask it. Now that I heard about the Rangers’ streak, my curiosity has been piqued. What am I supposed to do? Not follow up on that curiosity? If I notice an interesting plant in my backyard, ask “how did it get there?”, and propose a series of hypotheses to answer that question, are all those hypotheses untestable because they were developed after first observing the phenomenon? Of course, one would want to test those hypotheses on other populations, species, etc., but would I not have to make some measurements on the particular individual plant in my backyard?

      • @Richard F:

        “Yes, asking what is the probability of a certain license plate occurring after having seen the license plate makes the question uninteresting, but I fail to see how that negates one’s right to ask it.”

        I never said Nate has no *right* to ask the question! He’s free to ask whatever question he wants. And I’m free to argue that the question he’s asked is a poor one that doesn’t actually serve the purpose that asking it is intended to serve, because asking such questions is a very unreliable way to distinguish signal from noise. And it sounds like you’d agree, since you say that his question is uninteresting?

        “Now that I heard about the Rangers’ streak, my curiosity has been piqued. What am I supposed to do? Not follow up on that curiosity?”

        No, not at all! You absolutely should follow up your curiosity–in a way that will reliably distinguish whether that curiosity was well founded or not. For instance, you could keep watching to see if the Rangers’ streak continues in future. That is, you treat the data you have, and the apparent signal you think you see in it, as a hypothesis to be tested with new data that will come in in the future.

        The broader issue you raise is much discussed in philosophy of science. It’s called the “old evidence” problem. When, if ever, is it the case that data that were already known when a hypothesis was developed can be used to test that hypothesis? Because there certainly are cases in science in which old evidence has been taken to provide very strong tests of hypotheses. For instance, the anomalous precession of Mercury’s perihelion was long known when Einstein developed his theory of general relativity, but the fact that general relativity correctly predicts that precession was taken as a very severe test of the theory (which it passed with flying colors). One standard (but nevertheless debatable) answer as to when tests based on old evidence are legit is that old evidence can be used to test a hypothesis if the evidence wasn’t “used” in developing the hypothesis. The intuition here is that if the evidence was used in developing the hypothesis, the hypothesis could hardly fail to fit the evidence. But this is a tricky issue, because it’s not easy to flesh out what “used” means here. Deborah Mayo is one philosopher I know of who’s written about this problem.

      • And I’m free to argue that the question he’s asked is a poor one that doesn’t actually serve the purpose that asking it is intended to serve…

        I guess my questions are: 1) what is that intended purpose? 2) what would be a good version of his question?

      • I can’t read Silver’s mind and divine his intentions. But as I said in a previous comment, if the purpose is something like “find out if the Rangers have an unusual propensity to play 1-goal playoff games”, well, that’s the version of the question you’d ask. And to answer it you’d see if they continue to play an unusually large number of 1-goal playoff games in the future (a toy version of that test is sketched below).

        That is, the unusual data prompt you to ask an interesting question–that you then go on to answer with *different* data.

        I’m sure one could imagine other sensible questions that might be inspired by Silver’s observation, and reliable ways to answer those questions.
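
        For concreteness, here is a toy sketch of that kind of follow-up test. Everything in it is hypothetical: the 50% baseline rate of 1-goal playoff games, the number of future games, and the imagined outcome are made-up numbers for illustration.

```python
from math import comb

# Hypothetical prospective test, specified BEFORE watching any new games.
# H0: the Rangers' future playoff games are 1-goal games at the baseline rate P0.
# H1: their rate is higher (a real propensity for 1-goal games).
P0 = 0.5              # assumed baseline share of 1-goal playoff games (illustrative)
N_NEW_GAMES = 20      # future Rangers playoff games we commit to scoring (made up)
K_ONE_GOAL = 12       # hypothetical count of 1-goal games among those new games

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for this test."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"one-sided p-value on the new data only: {binom_tail(K_ONE_GOAL, N_NEW_GAMES, P0):.3f}")
```

        Because the hypothesis and the test were fixed before those new games were played, the resulting p-value means what it claims to mean, unlike a probability computed on the very streak that suggested the hypothesis in the first place.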

  2. Hey Jeremy- This is an intriguing issue, and I am glad you brought it up, because it seems somewhat frequent that investigators disobey the most fundamental of rules concerning statistics. I was somewhat taken aback, for example, that in the past three dissertation defenses I attended, the students (and by extension, their committees) violated the basic concept of the control. One defense was in biochemistry, and two in ecology. Their “controls” were taken from the same pool as their “samples”. When I pointed this flaw out via a subtle question, smiles turned to frowns, because all of a sudden there was a realization that 4 to 6 years of work was not gonna get published in a top-tier journal. Ouch.

    I have a question concerning your example of inappropriate hypothesis development & testing. If an investigator develops and tests a hypothesis appropriately and obtains an outcome, would it be inappropriate to use the same data to model that outcome? This seems a subtle distinction, but I ask it because model construction can be, and often is, very complex, involving a great many steps. So, if one were forced to model each step of the mechanism using a fresh set of data, it could take a career to build one model. One could assert that each phase of the model was a hypothesis per se, because it is a process of investigation. On the other hand, one could argue that modeling the same data is simply a means to provide a more thorough explanation of the primary result. I know you have a depth of experience in statistics, so I am curious how you would adjudicate the issue. Thanks!

    • “If an investigator develops and tests a hypothesis appropriately and obtains an outcome, would it be inappropriate to use the same data to model that outcome?”

      Can you give a specific example of the sort of modeling exercise you have in mind?

      • I recall one project where we did something akin to what I describe. We were attempting to examine the role of FOXA1 expression in urologic disorders. To do so, we spent about $50K to conduct a microarray study. We were able to determine that male mice heterozygous for the gene (one copy of the defective allele) experienced profound ureter and bladder dysfunction. Then we used our data to map the developmental process (molecular pathway) impacted by the mutation. Thus, the molecular model was derived from the same pool of samples we used to test the primary hypothesis. One reviewer thought we should have used an entirely different set of samples for the model, because of what you mentioned concerning unusual artifacts in every data set. Practically speaking, we could not do that given budgetary constraints. However, we assessed the data using non-parametric approaches, which our statistician on the project said was appropriate.

      • Hmm, sorry, too far from my expertise for me to say much useful, as I don’t know enough about what it means to “map the developmental process impacted by the mutation”. In general, there of course is a big difference between being able to fit a model to data (estimate its parameters), and validating the model, for instance because of the possibility of overfitting (fitting the noise). Brian has some old posts on this issue and related statistical topics.
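
        In case it helps, here is a toy numerical sketch of that fitting-vs.-validating gap (nothing to do with microarrays; the flat “true” relationship, the polynomial degree, and the sample sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the gap between fitting a model and validating it.
# The "true" relationship is flat, so anything a flexible model picks up is noise.
n, degree, n_reps = 12, 8, 200
x = np.linspace(0, 1, n)

in_sample, out_of_sample = [], []
for _ in range(n_reps):
    y = rng.normal(0, 1, n)                  # noise-only "data"
    coeffs = np.polyfit(x, y, degree)        # flexible model fitted to those data
    y_new = rng.normal(0, 1, n)              # fresh data from the same process
    in_sample.append(np.mean((np.polyval(coeffs, x) - y) ** 2))
    out_of_sample.append(np.mean((np.polyval(coeffs, x) - y_new) ** 2))

print(f"mean in-sample squared error:     {np.mean(in_sample):.2f}")
print(f"mean out-of-sample squared error: {np.mean(out_of_sample):.2f}")
```

        The flexible model reproduces the data it was fitted to far better than it predicts fresh data from the same process, which is the overfitting worry in a nutshell.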

      • Perhaps a more pertinent situation was described by Shipley & Keddy (1987) concerning individualistic & community concepts as falsifiable hypotheses. They argued for using pattern to deduce mechanism. They assessed species ranges relative to environmental gradients but made no attempt to interpret results as a test of any causal hypotheses. However they argue pattern can subsequently be used to “prove or disprove” proposed mechanisms of community organization. (I put prove/ disprove in italics, as I know such terms are not appropriate for probability statistics). This situation seems somewhat indicative of what you mention concerning “double dipping” insomuch as no hypothesis was put forth prior to analysis, but then onbserved patterns were applied to confirm or deny a model. Thoughts?

      • Ah, ok. No, I don’t think inferring process from pattern is a very reliable procedure in most cases. It’s very rare for a pattern to be so informative as to be diagnostic of a process, at least in ecology. It’s far more common for many different processes to lead to similar patterns. The entire history of community ecology (my own subfield) is littered with failed attempts to infer process from pattern, and no clearly-successful attempts that I can think of. I have an old post on this:

        Has any “shortcut” method in ecology ever worked?

  3. I read your prior post with interest. Perhaps I do not fully understand the issue, but one example where observational/pattern (i.e., non-experimental) data appeared very useful was Darwin’s Theory of Evolution. Granted, he amassed an incredible breadth of observational data, but as best I recall, there were not any experimental data per se involved.

    Other examples coming to mind are analyses of flowering patterns to infer effects of climate change. Investigators have used herbarium specimens to document what appear to be changes in flowering times over the past couple of centuries. Again, no particular hypothesis was tested a priori concerning the flowering time of any particular species, but data were fitted after the fact. I know in many cases researchers are putting out data where the correlations of changes in flowering times over years are not especially strong, but they allege that moderate to weak linear relationships support their case.

    So the other question I would have is: let’s say one group of correlations in a pattern-to-process inference is on the order of about 0.65 (p < 0.05), while another group of correlations for a different inferential model is on the order of 0.95 (p < 0.001). Would your willingness to assert a mechanism from observational data increase with the stronger correlations, or does the strength of the correlations simply not matter?

    I appreciate your feedback because this is a particular line of reasoning I am currently pursuing. Thanks!

  4. Regarding the 538 Jeopardy post, Brian writes: “…And 2/3 of the winners were male because more males were making the choice to take this risk. Then they switched to an online test. And suddenly more participants were female and suddenly half the winners were female.”

    However, the 538 post reports this: “Almost half of returning champions this season have been women. In the year before Jennings’s streak, fewer than 1 in 3 winners were female.”

    I’m assuming “this season” is 2015, given the date of the 538 post. The post noted that Jennings’s streak occurred in 2004, so the year before is 2003. The post reported that Jeopardy instituted its online test in 2006. That’s all the relevant data that I found in the post.

    So, if I understand the 538 post correctly: in 2003, 2/3 of winners were men; in 2006, the rules changed to permit an online test for qualification; and, so far in 2015, half of the winners have been women.

    If that is the case, then it seems incorrect to use the word “suddenly” to describe the change between a data point from 2003 and a data point from 2015 based on an event that occurred in 2006. It also seems that we should use more data before making any inferences about the influence of the online test. It would be really interesting and important if the online test made a substantial difference in the gender ratio of Jeopardy winners, but we should probably use more than two possibly-nonrandom data points to make such an inference.

  5. Interesting points on the blind review process. I don’t have strong feelings for or against, but I do wonder how well it actually works. I recently submitted a paper, as sole author, to a journal that employed the double-blind process. I submitted to that journal simply because it was a good fit for my paper, not because it had a double-blind process. However, because my submission was the final paper from a series I had published from my PhD research, I had cited some of my previously-published papers in my submitted manuscript to help explain methods & context. So it felt kind of ridiculous jumping through all the formatting hoops for double-blind, & then citing my own work in the ms using the personal pronoun! Given that many researchers cite their own work, especially in multi-faceted studies where study systems or contexts may have already been described in previous work, how does this affect the success of author-blinding systems?

    • This kind of thing is why it’s common for reviewers to see through author blinding.

      Advocates of author blinding argue that blinding is worth it even if it does often get seen through. This could be for various reasons. Blinding could of course still be helpful in the cases where it doesn’t get seen through. The blinding could serve as a reminder to reviewers to be aware of their subtle biases. It could create slight uncertainty in the reviewer’s mind as to who the authors are, which might be enough to overcome any subtle biases. On the other hand, these arguments kind of seem like bank shots to me, or like slightly wishful thinking. Insofar as reviewers see through author blinding, then you’d think that would reduce or eliminate any positive (or negative!) effects of blinding.

      My other gut instinct is that any positive or negative effects author blinding has with regards to bias in peer review are likely to be small, because I suspect the biases themselves are small–the subtle effect of subtle stereotypes on the part of reviewers (obvious exceptions like the appalling anecdote I linked to aside). That could be why the empirical evidence is so mixed–you’re trying to estimate a small effect size. That’s not to say we shouldn’t worry about subtle biases in peer review–we should. And it’s not to say we shouldn’t try author blinding. In ecology & evolution, I think Am Nat’s experiment with it is worth trying not because the data show it’s a good idea, but precisely because the data are mixed and so we need more information.

      Re: formatting hoops, at Am Nat I believe it’s no work for authors at all–Am Nat just deletes their names from the title page.

  6. Hello Jeremy- I realize it was not a focus of the post, but in thinking about and revisiting some of the classic literature in ecology, I believe many of ecology’s central paradigms in fact originated in pattern to process approaches (GE Hutchinson comes to mind, among others).

    My education and mentors along the way also emphasized use of observational data to elucidate function. Admittedly many of them have long since retired, so they were part of a generation different from yours or mine. While I do not rule out the importance of experimental approaches, I believe they are exceptionally challenging in ecology due to the plethora of variables not controlled or measured in natural settings. I read your prior post concerning pattern to process issues, and I believe you have too narrowly defined what it is and how it is applied.

    As such, I kindly suggest you consider a hefty debate of experimental v. observational approaches in one of your future posts. I believe this would be a worthy and robust philosophical debate.

  7. I just wanted to post a link to another article on Wyoming’s new law: http://www.wyofile.com/blog/critics-say-wyoming-data-trespassing-law-criminalizes-science/

    While I understand researchers’ concerns, I also think this is a more objective evaluation of the law. The blog post linked to here has quotes from the pro bono representation of the Western Watersheds Project, which is accused of collecting data on private land without landowner permission. Whether or not that is true, I believe a better use of ecologists’, environmental scientists’, and conservationists’ energy is to try to understand this attitude. We strive to increase science literacy, yet I feel like we ignore the group who would benefit most. That means reaching out to people you don’t necessarily agree with, instead of making science a fight between scientists/researchers and people who maybe don’t understand the process or who question it. I think we should take a step back, ask ourselves why this law was voted into place, and honestly consider landowners’ perspectives as well as researchers’. I know that both scientists and landowners have likely been in the wrong at times, and I’d guess you know someone (scientist, landowner, or private company) who has been in the wrong, while also knowing what an over-reaction looks and feels like.

    In an effort to move forward I think that we really need to work WITH people. I know it is not easy. I know it is not always possible. But at the very least understanding the concerns of the other side is, I believe, the most professional thing to do and the only way we can increase scientific literacy.

  8. Hi Jeremy, just to follow up on the Nate Silver story – the Rangers have played 4 more games; two of them were 1-goal games and the other 2 weren’t. I think that’s just about what you would expect if 1-goal games happen 50% of the time. So, if Nate’s point was “Hey, look at the coincidental thing that happened,” then there is no arguing with his point. If his point was “the Rangers tend to have 1-goal games,” then there’s no evidence so far that that is true. Best, Jeff.

