Statistical Balkanization – is it a problem?

Aside from the question of which statistical methods are appropriate to use in ecology, there is a mostly independent question about how many statistical methods it is optimal to have in use across the field of ecology. That optimum might be driven by how many techniques we could reasonably expect people to be taught in grad school and to rigorously evaluate during peer review. Beyond that limit, the marginal benefits of a more perfect statistical technique could easily be outweighed by the fact that only a very small fraction of the audience could read or critique the method. To the extent we exceed that optimum and are using too many different methods, I think it is fair to talk about statistical Balkanization. Balkanization is of course a reference to the Balkans (the region in the former Yugoslavia) and how the increasing splintering into smaller geographic, linguistic and cultural groups became unsustainable and led to multiple wars. I think there is a pretty clear case that having too many statistical methods in use is bad for ecology and thus that labeling that state Balkanization is fair (I'll make that case below). I am less sure whether we are there yet or not.

After attending a recent conference I got to wondering about how many different statistical methods I had seen used to attack largely similar problems, but I wanted to be a little more rigorous. So I skimmed the methods of every research article in the June and July issues of Ecology Letters, for a total of 23 articles. Almost by definition, what appears in Ecology Letters is representative of "the mainstream of ecology". Below are my nutshell summaries (each bullet point is one article; some summaries are probably oversimplified or not completely accurate due to my skimming, but they're close):

  • GLM, ANOVA
  • Network statistics, Joint species distribution models (JSDM)
  • GLMM, Tukey HSD
  • RDA, ANOVA w/ permutation restrictions to control for temporal autocorrelation, 4th corner analysis with bootstrapping, GLS
  • ANOVA with varPower() correlation structure (=~GLS), LMM, stepAIC
  • LMM
  • LMM, Moran’s I, likelihood ratio tests, PGLS (Phylogenetic regression), variance partitioning
  • PERMANOVA, path analysis
  • ABC, uniform priors, model selection
  • Consensus phylogenetic trees, PCA, MCMC GLMM, brms (Bayesian), phylogenetic parameter estimation (Pagel's lambda)
  • weighted LMM, temporal autocorrelation regression, Moran's I
  • repeated measures ANOVA, corARMA covariance structure, network statistics
  • quantile regression, GAM, permutation
  • MCMC, Bayesian Inference, Orthogonal polynomials, uniform priors
  • GLM, Global Sensitivity Analysis
  • Kolmogorov-Smirnov, t-tests, bootstrapping
  • 3 level nested LMM, Nakagawa & Schielzeth R2
  • PERMANOVA, NMDS/RDA, LMM, SEM
  • nested 2-way ANOVA, Tukey HSD, bootstrapped confidence intervals
  • phylodiversity metrics, beta diversity metrics, LM, SAR (spatial autoregression), minRSA model selection
  • AICc, model selection from full suite of models, LMM
  • PCA, RDA, permutation tests, GAMM
  • LM, LMM on simulation output

Note that LM=Linear Model (regression), LMM=Linear Mixed Models, GLM=Generalized Linear Model (logistic and Poisson regression). And beyond that, how many of the acronyms you understood is perhaps part of the point of this post.

Does this represent statistical Balkanization?

I don't know. You tell me! On the surface I was somewhat reassured by the results. There seems to be a strong convergence on moving beyond linear models (regression) to add either random effects (linear mixed models or LMM) or generalized (non-normal) errors (GLM) or both (GLMM). Beyond that you could say there are a handful of multivariate papers using the basic PCA/RDA toolkit and a few specialty topics that will always be specialties (phylogenetics, spatial autocorrelation, networks), so pretty good. A unified core around LMM/GLM/GLMM plus some specialty methods sounds promising. The core shares a basic model structure y = a + b·x_1 + c·x_2 + … + ε (possibly with a link function). And the idea that everybody is interpreting linear coefficients is unifying.
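
To make that shared structure concrete, here is a minimal R sketch with invented data (none of it from the surveyed papers): the same linear predictor handed to three common fitting functions, with the coefficients interpreted the same way in each case. The data frame and variable names are mine for illustration.

    library(lme4)    # assumed installed; provides lmer()

    set.seed(1)
    dat <- data.frame(x1 = rnorm(90), x2 = rnorm(90),
                      site = factor(rep(1:9, each = 10)))
    dat$y       <- 2 + 0.5 * dat$x1 - 0.3 * dat$x2 + rnorm(9)[dat$site] + rnorm(90)
    dat$y_count <- rpois(90, lambda = exp(0.2 + 0.5 * dat$x1 - 0.3 * dat$x2))

    coef(lm(y ~ x1 + x2, data = dat))                           # LM: plain regression
    coef(glm(y_count ~ x1 + x2, family = poisson, data = dat))  # GLM: Poisson errors, log link
    fixef(lmer(y ~ x1 + x2 + (1 | site), data = dat))           # LMM: adds a random site intercept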

But it's when you dig into the details that the wide array of approaches appears. That basic model is being fitted using (a short sketch after this list illustrates a few of these routes):

  • sum of squares with F-tests (ANOVA)
  • OLS/MLE by direct deterministic computation (i.e. calculating a formula)
  • Maximum likelihood fit by direct integral quadrature (deterministic approximate computation)
  • Maximum likelihood fit by Monte Carlo Integration
  • Maximum Likelihood fit by Markov-Chain Monte Carlo integration
  • weighted LMM
  • a wide variety of error covariance structures
  • multiple methods to detect and deal with temporal autocorrelation
  • multiple methods to detect and deal with spatial autocorrelation
  • bootstrapped confidence intervals
  • permutation based confidence intervals (with several variations including constrained and unconstrained permutation)
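
To give a flavor of how differently that one little model can be fitted, here is a rough R sketch of a few of the routes above, run on one simulated data set. The function choices and data are mine for illustration only, not taken from any of the 23 papers.

    library(nlme)    # assumed installed; provides gls() and corAR1()

    set.seed(1)
    dat <- data.frame(x = rnorm(100))
    dat$y <- 2 + 0.5 * dat$x + rnorm(100)

    # 1. Sums of squares with F-tests
    anova(lm(y ~ x, data = dat))

    # 2. OLS by direct deterministic computation (the normal equations)
    X <- cbind(1, dat$x)
    solve(t(X) %*% X, t(X) %*% dat$y)

    # 3. GLS with an explicit error covariance structure (AR(1) in row order)
    gls(y ~ x, data = dat, correlation = corAR1(form = ~ 1))

    # 4. Bootstrapped confidence interval for the slope
    boot_slopes <- replicate(999, {
      i <- sample(nrow(dat), replace = TRUE)
      coef(lm(y ~ x, data = dat[i, ]))["x"]
    })
    quantile(boot_slopes, c(0.025, 0.975))

    # 5. Unconstrained permutation test for the slope (approximate two-sided p)
    obs  <- coef(lm(y ~ x, data = dat))["x"]
    perm <- replicate(999, coef(lm(sample(y) ~ x, data = dat))["x"])
    mean(abs(perm) >= abs(obs))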

And the inferential frame includes (again, see the brief sketch after this list):

  • traditional p-value from F-tests
  • traditional p-value from likelihood ratio tests (which can collapse to F-tests in some cases)
  • AIC model selection from a subset of models chosen by the author
  • somewhat related AIC selection on all possible models
  • minRSA model selection
  • Bayesian inference (the two Bayesian papers here both use uniform/uninformative priors, but we are increasingly seeing vague priors)
  • deviance partitioning (an extension of variance partitioning) done in several ways
  • different methods of calculating pseudo-R2 on GLMs
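
And a short sketch of a few of these inferential frames applied to one made-up mixed model. The packages named below are just common implementations (MuMIn's r.squaredGLMM() is one of several Nakagawa & Schielzeth R2 implementations), not necessarily what the surveyed authors used.

    library(lme4)
    library(MuMIn)   # assumed installed; r.squaredGLMM() implements a Nakagawa & Schielzeth R2

    set.seed(42)
    dat <- data.frame(x1 = rnorm(120), x2 = rnorm(120),
                      site = factor(rep(1:12, each = 10)))
    dat$y <- 1 + 0.5 * dat$x1 + rnorm(12)[dat$site] + rnorm(120)

    m_full <- lmer(y ~ x1 + x2 + (1 | site), data = dat, REML = FALSE)
    m_red  <- lmer(y ~ x1 + (1 | site),      data = dat, REML = FALSE)

    anova(m_red, m_full)     # traditional p-value from a likelihood ratio test
    AIC(m_red, m_full)       # AIC comparison of an author-chosen subset of models
    r.squaredGLMM(m_full)    # marginal / conditional pseudo-R2 (one of several definitions)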

And in just 23 articles in a very central journal we see:

  • multiple applications of GAM
  • one application of quantile regression
  • a handful of phylogenetic methods
  • a handful of network methods
  • a handful of multivariate methods
  • multiple uses of SEM or path analysis
  • 4th corner analysis

I want to emphasize that I don't think any of the methods were wrong. If I were a reviewer I would have passed all of these methods (at least based on the skim I did). So I am NOT calling out individual authors. Rather I am musing on the state of the field as a whole. It's a group fitness argument, not an individual fitness argument.

Part of what's happened is that the move to LMM and GLM (and GLMM) has meant we left the neat world of normal-statistics linear models, where the R2 truly was simultaneously the % of variance explained and the square of the Pearson correlation and there was a direct (matrix algebra) formula for the solution. Now we must use iterative methods to solve (and plenty of people can attest to the challenges of getting complex models to converge – I heard it mentioned multiple times at the conference that started all of this). And all kinds of extra assumptions are brought in. For example, what is the minimum number of levels on which one should estimate a random effect (debatable, but almost certainly higher than in many – most? – published analyses)? And although the mathematical machinery of GLM is elegant, it requires iterative fitting that can go wrong (e.g. notoriously on noisy logistic regression), and deviance is much less direct to work with than normal residual errors. There are probably a dozen different efforts to define an analogue of R2 in the GLM/deviance world. And despite these methods being in use for a couple of decades, far fewer people are expert at diagnosing violations of the additional assumptions that creep into GLM & LMM than were expert at diagnosing OLS.
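
To make that contrast concrete, here is a small sketch with simulated data: the normal-errors model has a one-line closed-form solution, while even a simple Poisson GLM is fitted iteratively (and carries its own convergence flag), with deviance standing in for the residual sum of squares and a pseudo-R2 that is only one of many competing definitions.

    set.seed(2)
    n <- 200
    x <- rnorm(n)
    y <- rpois(n, lambda = exp(0.2 + 0.8 * x))

    # Normal-errors linear model: one matrix formula, no iteration needed
    X <- cbind(1, x)
    beta_ols <- solve(t(X) %*% X, t(X) %*% y)

    # Poisson GLM: fitted by iteratively reweighted least squares
    m <- glm(y ~ x, family = poisson)
    m$iter; m$converged               # how many iterations, and whether the fit converged
    m$deviance                        # deviance replaces the residual sum of squares
    1 - m$deviance / m$null.deviance  # one deviance-based pseudo-R2 among many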

At that level of detail, it seems like there could be a real challenge in the high diversity of methods.

Why would Balkanization be a problem?

I can think of several angles from which using too many statistical methods in our field could be a problem. The reviewer and the graduate student are the two that strike me most.

Does anybody reading this blog feel competent to actually review every method listed above (even if, say, we leave out the phylogenetic and network methods)? There is of course more than one level to this. One level is whether you can read the stats well enough to judge whether the authors drew the right biological take-home message. But another level is whether you know that particular method well enough to know "where the bodies are buried", i.e. where things can go badly wrong: where wrong data structures or method parameters (in R packages) or assumption violations or poorly behaved distributions of the errors can completely break things. How many readers have poked into the control structure of lme (package nlme) that determines the optimization process? Everybody good with BFGS as the default optimization method (lmer has a control parameter too, with different defaults)? Do you have a good handle on whether the authors did proper diagnostics on MCMC burn-in? Whether an appropriate quadrature was used for integration (do you even know what quadrature for integration is or why it might matter)? On whether spatial autoregression was needed and properly used? The pros and cons of the half dozen major methods that have been developed to address spatial autocorrelation (many work well, but a few in the literature work very poorly)? On whether simpler GLMM fitting methods showed good or bad convergence? On whether assumptions of mixed models were violated? On how much deviation from a Poisson distribution is acceptable to still run a Poisson regression (like normality, very few ecological datasets are truly Poisson)? What chi-squared value signals a good fit for an SEM, or should a different measure of model quality be used? I would hazard a guess that very few (if any?) people could evaluate all the methods at this level. And so what happens during review? Are editors always able to get an expert in the method used among the 2-3 reviewers they land? I doubt it. Do people speak up when they are unsure about whether the complex stats are rigorously and appropriately applied? I doubt it. I think they mostly stay quiet and hope some other reviewer (or the author) knows what they are doing. This is a far cry from the days when everybody knew how to diagnose whether a (possibly transformed) OLS regression was well done or not. Maybe the default parameters of these more complex methods are always fine and we don't need to worry? But given how many students come to me with convergence errors, who don't know what a convergence error means and fix it by trial-and-error rejiggering of the data until the error goes away (not by tweaking optimization control parameters), I doubt it. So are the statistics being published now more or less reliable than in the days of OLS and ANOVA? Do we even know the answer to that question?
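
For concreteness, here is roughly where a few of those knobs live in R. Treat this purely as a sketch: the defaults differ across package versions (so check ?lmeControl and ?lmerControl on your own installation), and the model calls are commented out because the data frame and variables are hypothetical.

    library(nlme)
    library(lme4)

    # nlme::lme optimization settings
    ctrl_nlme <- lmeControl(opt = "optim", optimMethod = "BFGS", msMaxIter = 200)
    # lme(y ~ x, random = ~ 1 | site, data = dat, control = ctrl_nlme)

    # lme4::lmer / glmer optimization and quadrature settings
    ctrl_lmer <- lmerControl(optimizer = "bobyqa")
    # lmer(y ~ x + (1 | site), data = dat, control = ctrl_lmer)
    # glmer(y_count ~ x + (1 | site), family = poisson, data = dat,
    #       nAGQ = 10)   # number of adaptive Gauss-Hermite quadrature points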

To be sure, we could train and build expertise in any one of these things; but can we expect reviewer expertise in ALL of these things? If not, that is statistical Balkanization.

And what about the graduate student? On the one hand I was psyched. The list of methods in the 23 papers above almost exactly matches the content of a 2nd-semester graduate stats course I teach (except for ABC and 4th corner, which I don't cover yet). But I can tell you from teaching that course that my course is way more exposure than most graduate students get. And the exposure I can give in a semester to most of these topics is VERY superficial. I spend ~15 minutes on MCMC. One lecture on Bayesian inference. A week on GLM, a week on LMM. Very little additional time on GLMM. One lecture on non-independent errors (GLS). A week in which non-linear regression, GAM, quantile regression and machine learning are all smashed together. And I guarantee you that at that level of coverage the students are not qualified to review, let alone use, these techniques themselves (and the students would be the first to agree with me). So what the heck is our strategy for graduate statistics education in this day and age? What is reasonable to expect students to learn? Where do those course slots come from? And how do we expect students to learn the rest?

And if freshly minted PhD students cannot read those papers critically, who can?

To repeat myself, I don't object to any specific method. They all exist for good reasons. And I've probably been a co-author on papers using almost all of these techniques and a first/senior author on papers using more than half of them. So I'm not singling out individuals. Nor do I have easy answers for putting the genie back in the bottle. But I definitely think as a field we have moved past the optimum degree of complexity. The volume of the convex hull spanning all statistics commonly used in journals seems to have grown exponentially, and I don't think that is good for the field. It worries me about the quality of peer review on statistics. And it makes me feel bad for graduate students. We just might have achieved statistical Balkanization.

What do you think?


32 thoughts on “Statistical Balkanization – is it a problem?”

  1. Dear Brian,

    I think what you describe only reflects on the state of the field; many articles in Ecology are method papers. Whenever I attend a conference where someone is pushing for a new statistical approach, I often ask the same questions: How wrong am I for not using this new approach? Am I at risk of misinterpreting my results because I use other statistical tools? In most circumstances the answer to the latter is a resounding NO, especially as we are moving beyond the dichotomous p-value interpretation (and I believe we are, although slowly).

    In my research field, magnitude, direction and generalization of effects are what matter most. As a reviewer, I get suspicious when I feel that the main results are buried under layers of fancy statistics. I get suspicious of the conclusions, not the stats, unless they are plain wrong (e.g., strong pseudo-replication problems).

  2. Well, thanks for using and introducing the term “Balkanization” to the ecology/stats audience, and propagating this narrative even wider. And yes, it is a problem.

    Sincerely,
    a researcher from w̶a̶r̶-̶t̶o̶r̶n̶ balkanized Sarajevo following this blog.

  3. An excellent article Brian, as thought provoking as ever. I think your conclusions lead to a question I often find myself asking when reading ecology journals – what is the value of many of today’s ecological research papers? As an adviser on ecology and nature conservation, I see the value of ecological research in its applied nature – i.e. in helping societies to understand the impacts of their choices in order to help them choose more sustainable policies. And I consider this value relating to choices at both a large scale (e.g. resource efficiency choices at a societal scale) and a small scale (e.g. informing decisions on how best to manage a parcel of land to conserve a particular species/habitat). I also recognise that an ecological research project does not itself need to be applied to have an applied contribution, as research to test an ecological theory can have significant applied impact if it subsequently contributes to changing our understanding of ecosystem functioning. Thinking in this way, I can’t help but question whether many of the articles in leading ecological journals do meaningfully increase our ecological understanding (and therefore add value to the field)? Yes a new statistical technique may help to fractionally reduce uncertainty in the magnitude and direction of an effect. But has our biological understanding meaningfully increased as a result? And was it the most efficient use of scarce resources (both human & financial) to get this marginal reduction in uncertainty? Often, it seems not. Societies require improved understanding of ecological systems if they are to increase their likelihood of making the most sustainable choices. However, I suspect that much of the ecological knowledge required to make such choices will not be gained from the development of elaborate new methodological techniques, but rather from the application of existing techniques at different scales and in different systems – i.e. in worthy but non-sexy research that is unlikely to be published in the leading journals. Ultimately society pays for ecological research. But if ecology increasingly deviates from providing useful knowledge for society will it continue this subsidy? Only time will tell…

      • And reading about the Apollo moon landing on its 50th anniversary earlier this week, I was interested to learn that even at the time the Apollo program was criticized for wasting money that could’ve been spent on solving social and economic problems here on earth.

        So, question: With the benefit of hindsight, was the Apollo program a good use of money? If so, why? Because of what we learned about the moon, because of technological spinoffs, because it gave people good jobs at NASA, because of the can-do pro-science spirit it instilled in the populace, because discovery and exploration are just inherently Good Things, or what? Because if even the Apollo program is hard to justify, isn’t it going to be even harder to justify, say, the sort of microcosm-based fundamental ecological research someone like me does?

    • I do agree with Jeremy that we have to be careful to value basic research and not value all science only in the context of direct social benefits. But I’m going to differ from Jeremy a little and agree with you in that we have gotten into this world where *everybody* has to publish *all* the time, which cannot help but lead to the publication of more trivial papers, and I don’t think that is good for ecology.

    • Danny, there are two ways that ecology can fall short in contributing solutions to societal ills – first, not addressing questions that are relevant to societal problems, or second, asking relevant questions but not providing useful answers. I think our problem is more the second than the first.

  4. Hi Brian,
    I consider myself “okay” at stats (not an expert and not a novice); and as such I have two questions:
    1. Firstly, is the ability to analyse data in ever more complex/specific ways going to be to the detriment of designing studies well?
    2. Secondly, does the choice of these stats methods really impact the results/conclusions, and if it does, is that not more due to an under-powered study?

    • Although you phrase them as questions, if I had to guess your opinions I would agree with both of them.

      1) Absolutely techno-statistical skills have seemed to come at the expense of careful experimental design. And that is not a winning trade-off.
      2) Very rarely do these more complex methods give qualitatively different answers (other than causing p values to wander from slightly one side of the p<0.05 fence to slightly the other, which is not really a qualitative difference). And if they did it should cause us to worry! A big effect is a big effect no matter how it is analyzed.

  5. This post really strikes a chord as I find myself increasingly frustrated in efforts to fully understand the analyses I have to evaluate (in the “do I know where the bodies are buried?” sense), or even the analyses in papers I’m an author on (mostly large group papers rather than in-house ones).

    I like the question “is the state of the field good?”, but I think it can only be answered in reference to alternative states. Every decision involves tradeoffs, so I wonder what the alternatives to Balkanization are and what tradeoffs they entail? Do we ban the use of analyses that have essentially equivalent but more widely understood alternatives? If so, who gets to decide what’s allowed and what’s not? Do we set a higher bar for new methods papers, insisting that they propose more than a slightly reduced chance of making a small error in a narrow range of circumstances? That’s problematic because 3-4 people (editor + reviewers) will make a call on what the broader community even gets to consider. So, while my first instinct was to say “no” to the question, I’m not sure that’s my answer relative to any realistic alternatives I can think of.

    On the topic of tradeoffs, one concern is that more hours/day/years spent mastering every new analysis method is less time doing everything else, such as thinking deeply about what questions to ask, what field and lab methods might be best suited to the question (possibly ones that don’t yet exist), what qualitative conclusions one might draw from the massive literature on topic X (i.e., synthetic thinking), how to identify species in the field, etc. This concerns me. One solution is specialization. Every department hires one or more quantitative specialists whose job it is to know all this stuff and to help others navigate statistics (not just once you have data, but throughout the whole process). Scientists routinely make use of services for things like DNA sequencing, soil/tissue chemical analyses, satellite image processing, etc., because the efficiency of the whole operation is improved by doing so. So why not statistics?

    • Well I agree you’re not going to have some top down centralized determination of what is acceptable. But reviewers could speak up about “out of the mainstream” statistics a lot more often than they do. Or tell AEs that they cannot vouch for the statistics used a lot more often than they do.

      Medical science does use statistical consulting heavily. I think the reason we don’t in ecology is as plain as not enough money.

  6. Hi Brian, it seems to me that for most of the linear models (generalized, mixed, incorporating spatial and temporal autocorrelation, Bayesian approaches), the whole point of adding increased complexity is the assumption that we will end up with better models and/or better estimates of our uncertainty. Presumably, the additional complexity helps us (1) decide what variables to include in the model, (2) identify the functional relationships that link driver and response variables, (3) estimate parameters and (4) know how good or bad our models are. You’ve outlined the costs of the additional complexity – is there any work being done to see if we’re getting the benefits we think we’re getting? Couldn’t this be, at least in part, answered with data? Jeff.

    • Jeff – unintentionally you are trying to draw me down a road I’ve been before: https://dynamicecology.wordpress.com/2012/09/11/statistical-machismo/

      But most of the complexity is around non-normality of errors (GLM) and non-independence of errors (LMM & GLS). Mathematically we know that ignoring this non-independence does not bias the estimates (e.g. of coefficients); it just leads to reporting smaller-than-correct confidence intervals (and hence p-values). So as long as you don’t live in the world where p=0.045 is radically different than p=0.055, it is unlikely to have huge impacts on the outcome. But you do get better estimates of uncertainty. I’ve been asking for examples where the ecological interpretation changes for 6 years now and have yet to hear of one. The one case where it can is where the errors are confounded with the variables of interest, but you are never going to untangle those with a regression design.
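
      A minimal simulation sketch of that point, with entirely made-up data: when both the predictor and the errors are temporally autocorrelated, OLS and GLS usually give very similar slope estimates, but the naive OLS standard error tends to be too small while the GLS one is more honest.

        library(nlme)
        set.seed(3)
        n   <- 200
        x   <- as.numeric(arima.sim(list(ar = 0.8), n))   # autocorrelated predictor
        e   <- as.numeric(arima.sim(list(ar = 0.8), n))   # AR(1) errors
        dat <- data.frame(x = x, y = 1 + 0.5 * x + e)

        m_ols <- lm(y ~ x, data = dat)
        m_gls <- gls(y ~ x, data = dat, correlation = corAR1(form = ~ 1))

        c(coef(m_ols)["x"], coef(m_gls)["x"])   # point estimates: usually nearly identical
        sqrt(diag(vcov(m_ols)))["x"]            # OLS standard error (typically too narrow here)
        sqrt(diag(vcov(m_gls)))["x"]            # GLS standard error (typically larger)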

      The bottom line though is a large effect is going to show up as a large effect no matter what method you use. And a small effect is going to be small (and variously significant or not depending on methods unless you have a lot of power, in which case it will be consistently statistically significant and biologically insignificant).

      I think two viable theories for chasing more and more diverse statistical methods are:
      1) just because (it’s fun, it’s impressive, it’s something, but not really scientifically motivated)
      2) we’re chasing smaller and smaller signals and I’m not sure that’s a good thing

      • Hi Brian,

        I think whether or not better characterizing uncertainty “matters” depends on your goal. If we’re talking inference, where you want a good estimate of the beta_hat effect and we’re OK with treating p = 0.045 and p = 0.055 as equivalent, then it might not change the outcome. If we’re talking about prediction, with a focus on y_hat, then properly characterizing uncertainty may (will) be very important. Kind of tangential to the post, though…

  7. Very thought-provoking post and comments. I think I agree that ecology is experiencing statistical Balkanization, and that this has downsides, insofar as we’re talking about several different flavors of linear models being used to do hypothesis testing and description of patterns. I’m not talking strictly about null hypothesis statistical testing here, I’m talking generally about evaluating evidence for a hypothesized relationship or mechanism. As you’ve pointed out, Brian, a large effect is going to be large. That said, at least at present it seems like this is more of a source of confusion than a source of conflict, so I don’t think we’re playing the Balkanization metaphor to its full potential yet. Thankfully. I don’t mean to downplay a certain amount of gatekeeping around methods and other manifestations of statistical machismo, because there are certainly zealots for particular approaches; my experiences just haven’t indicated that ecology, as a field, is well characterized as being divided into a number of different statistical camps that are each in conflict with the others. I think most ecologists just use whatever approaches they or their colleagues already are comfortable with, and this degree of Balkanization is sub-optimal because it hinders communication, makes work difficult to evaluate, and to some degree slows the pace of discovery, since there is an opportunity cost to learning and executing complex statistical methods for marginal gains.

    One arena where better estimates of uncertainty probably have meaningful consequences is ecological forecasting. I’m not a forecaster so I’m not entirely certain what role this plays, but I hope someone else can comment on this.

    The other point I’ll make is that especially outside the different flavors of linear modeling, it seems like complex methods are valuable when they really expand the kinds of information ecologists can glean from data. You mentioned network analysis above, I think this is one example. Also, wavelet analyses. These are techniques that have real power but only relatively small fractions of ecologists are familiar with them, and so there are many of the same problems with communication and review that you’ve pointed out. It’s not really possible to be well-versed (especially in the “where the bodies are buried” sense) in all of these approaches, even for quantitative ecologists (I consider myself one) or statistical ecologists, so I think that some level of specialization is inevitable, and to some degree beneficial. Whether that rises to the level of Balkanization I’m not sure.

    • Jon, your recent paper using wavelets to look for signals of “detuning” in spatial synchrony is one of a relatively modest number of ecology papers where I’d say “sophisticated stats were essential to detecting this effect, which is large enough that it’s definitely real and important, yet is also an effect that would’ve been overlooked by simpler methods”.

      It’s perhaps telling (?) that the example of using wavelets to detect spatial synchrony of population fluctuations of a particular wavelength is very different than “explicitly modeling non-normal, non-independent errors”. Somewhat like Brian, I’d like to be pointed to an example in which using a generalized linear mixed model (rather than dealing with non-normal, non-independent errors in some simpler, easier-to-understand way) altered the substantive ecological conclusions in some important way.

      • Yeah, I’m thinking of working a version of that “many analysts” exercise into my advanced biostats course. Great teaching tool for getting students to appreciate all the judgment calls that go into a typical statistical analysis.

      • Thanks, Jeremy, for your positive comments about our paper. I definitely strive to work on things that will move the needle in terms of what we can learn from data, so it’s really gratifying that you think we’ve succeeded!

  8. As a PhD student I feel overwhelmed by all the models and ways of doing things. When I’m attempting to analyse my data, trying to find the ‘best’ way gets confusing, because you’ll get different answers from different experts, and when you go to the literature there are even more possible ways of doing things, but in the Methods they don’t explain how they did it, just a brief summary, so then you have to figure out how to write the code, input your data in the right way, check whether it meets the model assumptions, and figure out what to do if it doesn’t. Yet in undergrad all I learnt was t-tests, regressions, and ANOVAs, all in Excel, plus multivariate analysis in PRIMER (which is unfortunately not free). So yes, for someone starting out, I think this is an issue. It’s impossible to learn all methods, and then, going forwards, unless you’re familiar with the particular method, how can you review the paper (or, as I’m sure often happens, because it’s not the reviewer’s method of choice, they’ll say the way it was analysed was ‘wrong’ even if it is fine to do it that way)?

  9. I foresee one possible way to elicit an admission of humility from reviewers: check boxes they must complete when submitting the review. Examples:
    * Please rate your familiarity with statistical techniques used in this manuscript: {NA, 1, 2, 3, 4, 5}
    * Please rate your familiarity with the ecological system (species, ecosystem, ecological process of interest): {NA, 1, 2, 3, 4, 5}
    …where 1 = maybe have heard of it before and 5 = proficient enough to find the bodies. Ideally the answers to the checkboxes would accompany the printed/posted article if accepted, even if the reviewers remain anonymous.

  10. Brian, thank you for talking about a perplexing situation in ecology: the statistical toolbox is getting increasingly heavy for teachers to handle, for reviewers to comprehend, for students to master, for practicing ecologists to master AND to use AND to explain to non-ecologists.
    Despite being a practicing ecologist/consultant who longs to use good statistical information, I want to admit that I have inadequate knowledge (if any knowledge at all) of most of Brian’s ECOLOGY LETTERS nutshell list. And this despite the fact that I am an avid enthusiast of statistical learning… statistics has been in my weekly learning plan (aka Jef’s Friday 19th of July 2019 post) for years.

    Second, I am not an exception: most of my practicing colleagues (of whom none had been lucky enough to have Brian or Jeff or others of you as a teacher in under- or post-graduate studies) are even worse with statistics. They are even ready to quit participation in a paid project if anything more than the absolutely elementary level of statistics is required. Note that several of them hold a PhD.

    Third, is it just me that bumps into so many statistical errors in peer-reviewed papers??? How can these errors have slipped the notice of the reviewers? Brian’s anxiety seems to have become fully embodied in the ecological literature regardless of how old/new the statistical methods are. The repercussions of this may be worth considering: consultants cannot understand papers and therefore gain little knowledge about new (and old) statistical tools, thus they cannot treat their data optimally, thus they frequently create reports of low validity which are unconvincing on issues of ecological management. Decision making on a variety of issues is still based on such reports.

    One potential way out of the tunnel is to defy statistical machismo, as Brian eloquently described it some years ago… and try to do our best with the little statistics we know. A second way out is to create opportunities for ecologists to enter guided learning without having to spend thousands of dollars and hundreds of person-days to learn: short courses (the MOOC type) on topics like model selection, Bayesian methods in ecology, etc. would be so welcome to not-so-young ecology professionals!!

    Last but not least… some geography: as a Balkan (Greek) ecologist, let me point out that the Balkans are not just “the region in the former Yugoslavia” but the entire peninsula at the southeastern corner of mainland Europe, comprising Romania, Bulgaria, Albania, Greece, the former Yugoslavia countries and European Turkey.

    https://en.wikipedia.org/wiki/Balkans

    Nowadays these countries/regions share many cultural features and many happy moments …

    • I am a Balkan and I do not feel offended at all.

      I cannot deny the recent and not-so-recent history of several wars in the area… although I am not sure that wars before the 20th century serve as an adequate analogy for the current “divide into smaller pieces” feature Brian wants to emphasize in ecological statistics.
      Yet our point of concern is ecological statistics and not Balkan history.

