What new technology will soon change ecology the most?

Many changes in the content and practice of science arise in response to changes in technology. Think for instance of how the advent of the internet has changed how scientists publish and communicate their results. Or think of how the advent of cheap computers changed statistics. PCR is a third example. Tiny GPS tags for monitoring animal movement are a fourth.*

So, what do you think is the next big technological advance that’s about to change ecology in a big way?

Here are a couple of candidates off the top of my head:

  • Drones. Potentially let you sample larger and more inaccessible areas more quickly than you could in other ways. See here and here for a couple of recent randomly-googled examples.
  • Smartphone apps for identifying organisms. With all due respect for drones, this is my pick for the Next Big Technological Thing in ecology. There are already apps that will identify common birds via their songs, even automatically without user input. I bet the day isn’t too far away when you’ll be able to use your phone to identify a lot of species of plants and animals as quickly and reliably as an expert taxonomist or natural historian could, just by taking pictures or recordings of them with your phone. What’s that going to do for ecology? What new research avenues will it open up? And what does it mean to be a taxonomist or natural historian in a world in which accurately identifying organisms (and then looking up all that’s known about them) is something anyone with a phone can do instantly?

What do you think will be the next technological advance to change ecology in a big way? Looking forward to your comments!

*Totally random aside: way back when I was a grad student, I saw a talk about the movement behavior of large lizards such as monitor lizards. I still remember the speaker’s passing remark that one of the great things about working with monitor lizards is that “you can cover them with sensors and they don’t care.” This remark was accompanied by a picture of a monitor lizard with a chunky radio collar and a big battery pack duct-taped to the base of its tail. Perhaps one day (or even today!), my old school science cred will be that I saw someone give a talk on movement behavior that involved attaching heavy kit to animals large enough not to be bothered by this.

47 thoughts on “What new technology will soon change ecology the most?”

  1. Speaking for insects (especially bees), I think the day when smartphones can identify species as reliably and quickly as experts is far, far away. For a small group of insects (maybe 20%) it should be possible, but not for the average brown bug around the corner.
    1) A lot of the characteristics used for identification will not be visible in a photo (e.g. sexual organs). So somebody would have to develop a totally new key, if that is even possible.
    2) Characteristics are often ambiguous or variable to some degree, and require a lot of experience as well as comparison with reference collections. I am not sure how this can be implemented in an app (but maybe it can).
    3) Even if it is possible, it may require too much time and money to cover enough species to replace experts.

    So I think smartphones will help us identify species better in the future, but I doubt they will be the next big technology within the next 20 years.

    My pick is real-time portable DNA barcode scanners. They are already available, and I think they are going to get cheaper and better.

    • Yes, your #1 is a stumbling block.

      Your #2 seems like the sort of thing machine learning algorithms were made to deal with.

      Not sure if your #3 is a stumbling block or not. I’d be interested to know how much it cost to build those birdsong ID apps.

    • I think the idea behind smartphone ID is that machines can use algorithms humans cannot, examining ratios of body parts more precisely and so on. In short, machine learning could invent whole new keys. Results to date have been a mixed bag, but I wouldn’t rule it out. When I was a grad student, a member of my cohort got great results just from scanning the shape of a leaf, which is something most human-written keys can only go so far with.
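      The “ratios of body parts” idea can be sketched as a toy nearest-neighbour key. Everything below is invented for illustration – the species names, the measurements, and the two-ratio feature space are all hypothetical, and real ID apps learn features from images rather than hand-measured ratios – but it shows how a machine can classify from quantitative ratios no human key lists.

```python
# Toy 1-nearest-neighbour "key" that classifies specimens by body-part
# ratios. Species names and measurements are invented for illustration.
import math

# Hypothetical reference specimens: (wing length / body length,
# antenna length / body length) -> species label.
reference = [
    ((0.80, 0.30), "species_A"),
    ((0.82, 0.28), "species_A"),
    ((0.55, 0.60), "species_B"),
    ((0.52, 0.63), "species_B"),
]

def identify(ratios):
    """Return the species label of the nearest reference specimen."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(reference, key=lambda r: dist(r[0], ratios))[1]

print(identify((0.79, 0.31)))  # falls in the species_A cluster
```

      A real system would use many more features (or learned image features) and a proper classifier, but the principle – measure, compare to labeled references, return the closest match – is the same.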

    • None of these are big stumbling blocks for artificial intelligence. Automated identification of insects is already happening, even for some of the most difficult insects to key, e.g. stonefly larvae: http://grail.cs.washington.edu/wp-content/uploads/2015/08/larios2007aii.pdf. That paper is nearly 2 years old. I’ve keyed aquatic insect larvae before (I took a course in aquatic insect biology), and I can tell you that if keying stonefly larvae can be automated, the vast majority of insects could be keyed automatically, very soon.

      Your stumbling block #2 is actually what solves stumbling block #1. Combinations of vague judgement calls are what AI is designed for (as Jeremy points out), and AI will pick up on differences that aren’t even listed in human-written keys.

      As for #3, human labor will always be expensive, while machine learning gets cheaper and cheaper to implement.

      I think the only real stumbling block of AI will be dealing with very rare species where training data sets are too small.

      • Ok, now I’m back to being pleased with my guess of phone apps for identifying species.

        (Can you tell I don’t actually know anything about AI and so am easily swayed by whoever commented most recently? 🙂 )

        Thanks for the link to the paper. Always helps to have concrete examples to consider!

      • I agree that AI should be able to solve most of these problems given enough time. I have a hunch, however, that advances in AI may pose a threat to the willingness of potential participants.

        Taxonomic identification already offers a pretty clear reward structure for anyone willing to delve into it: one gets faster and more efficient at identifying organisms with a key, and at some point one can assign an organism to a family or genus based on characteristics shared by the group. The process rewards people for the time they invest in learning it.

        AI takes away this learning structure and reduces the person willing to help to a mere motile extension of a computer, if in the end all one does is point the camera of one’s phone at a bug and get told by the app what species of Coleoptera one is now looking at. That would be dull. It would take away the volunteers’ sense of purpose by removing immediate and intermediate rewards, and eventually render the whole exercise boring.

    • I think the tech helps increase the speed and resolution of data, and that certainly helps with the application of science… but for science as a whole, I don’t think there is much benefit. As with the big-stats movement, this will lead to a proliferation of “where do we apply this new tech?” races.

      Having said that – increasing a scientist’s reach is not a bad thing… and any tech that does so should be embraced. I think more and better ecologically guided data mining is probably a good thing. But the problem with all the flashy tech is that some people will rely on the tech and algorithms to avoid field time, thinking, and inductively looking at nature. You have to develop intuitive thinking and feeling in ecology, in nature – because of the complexity. You mustn’t strive to reduce this complexity from the get-go; you must just limit the scale of the complexity for the student, initially.

      I also don’t think phones are that useful for ID. Mostly we are trying to distinguish species within a genus, which would be hard for software, and honestly for phone hardware too… sensors and lenses change slowly; noise-reduction and lens-correction algorithms change fast, and they are not all that great… and you can’t bypass them.
      My bias comes from being an ecologist who used species lists on “palm devices” in the field: they encouraged you to allocate a specimen to whatever it MOST LOOKED LIKE on the list. Your mindset changed from “what species is this?” to “which of these species is this?” I also don’t like the idea of such skill-savers. Photograph with a decent camera, collect a sample, key it out, and then you know – and then take more photographs and submit them to image-based servers. Great idea… it will help a lot, especially for drawing images (of live flowers) off a server for expected species lists of an area, for prior learning. That doesn’t help for science, but it does help for speedy application. The server does the same thing: allocate to what a specimen most likely looks like, not “do I know this species, or do I not?”

      Drones with good GPS and cameras are very useful for consulting purposes – they take better images, at a variety of scales, right now. So they are very useful in rapidly changing environments, like riparian and flooding zones or rehabilitated land, though you still need an adequate sample size of data. In the end you can map more clearly which areas need what attention from maintenance crews… but this will not advance science – merely the application of it, which is no small thing.

      I would like to use the tech for, say, an experiment with fixed-point photography of micropatches under various treatments on a weekly basis, with extra sensors tracking soil nutrient trends… or drones reading and recording RFIDs of sensors in the field, to reduce the sensors’ battery requirements… capturing pedohydrology variables and vegetation imagery at the same time in harsh or inaccessible environments. But I would hate for it to get in the way of a person walking around, looking intently at the ground, spotting things, and connecting dots in a new way.

  2. Arguing against myself: the common thread of both of my suggestions (and jk’s above) is new technologies that let you collect more data more efficiently. Which raises the question: is the rate of progress in ecology really data-limited? I mean, sure, for some questions it probably is! But for others, probably not. For instance, is more observational data on, say, presence/absence of more species at more sites really a game-changer for questions about species distribution and coexistence in community ecology?

    • “is the rate of progress in ecology really data-limited?”


      With one qualification – we are missing consistent long-term, broad, deep monitoring data (same taxa, same locations for 30+ years, at many locations and for many taxonomic groups). That could become cheaper with new technologies. But it is also very doable with today’s technologies. It’s mostly a lack of forethought/willpower/focus.

      Otherwise, to overgeneralize: we need more thinking, not more data.

  3. I don’t know enough details to provide a great answer, but I’d say satellite technology that results in more frequent and more detailed data. For example, LIDAR data for entire states is now available, and that’s great for locating everything from trees to barriers to fish passage. But I second what Brian said about not needing more data so much as better data and more thinking.

  4. Neat post, I have two comments about smart phones.

    1). I’m a bit skeptical about the power of smartphones to revolutionize species identification. Modern image recognition works in the opposite way from classical taxonomy: deep neural nets require massive volumes of labeled data, whereas taxonomy relies on the type specimen. The need for massive amounts of labeled data creates a catch-22. The species we have a ton of labeled data on are the ones we can already easily identify (hence why we have so many labeled images). It’s easy to train a model to identify birch trees, but doing the same for a species of weevil becomes much harder because of the paucity of data. Which brings us to the second issue: it could widen the gulf between charismatic fauna and the rest. Doing this for birds would be relatively easy. People love birds; there’s an army of mechanical turks out there to help you label data, and people love to take photos of birds. But as someone who used to spend a fair amount of time looking at caddisfly larvae under the scope, there aren’t a ton of people taking high-res photos that capture the sclerites on the thorax for a CNN to use. Even if you did decide to build your algorithm to identify caddisflies, you’d need taxonomists to create that massive labeled data set to train it on, so if anything they’ll be more in demand.

    2). I think the bigger way smartphones could revolutionize ecology is through new sensors on the phones, and the ability to attach other sensors to your phone. Here’s an off-the-cuff example. Right now we use lidar from planes (soon to be drones?) to map the 3D structure of forests. Could you use the IR camera on the iPhone X to map the 3D structure of different microhabitats on the forest floor? Instead of spending hours identifying plants in transects in the field, could you use smartphone cameras to create VR environments to explore later? Could you imagine an AR headset attached to the phone that you use during fieldwork? Some of this is pretty far afield, but I’d say AR will become really big in the next 5 years.
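    The labeled-data catch-22 in point 1) has a simple mechanical illustration: a supervised classifier can only ever output species it has training data for, so a rare species with no labeled images is guaranteed to be mislabeled as something common. The sketch below uses a nearest-centroid toy classifier; all species names and feature values are invented.

```python
# Toy illustration of the labeled-data catch-22: a trained classifier
# can only return classes it has training data for, so a specimen of an
# unrepresented rare species is forced into a known class.

# Mean feature vectors ("centroids") learned from labeled photos of two
# well-photographed species; no labeled data exists for the rare species.
centroids = {
    "birch": (0.2, 0.1),
    "oak": (0.8, 0.9),
}

def classify(features):
    """Nearest-centroid prediction; can only ever return a trained class."""
    return min(
        centroids,
        key=lambda sp: (centroids[sp][0] - features[0]) ** 2
        + (centroids[sp][1] - features[1]) ** 2,
    )

# A specimen that belongs to neither class still gets a confident-looking
# label from the closed set of trained species.
prediction = classify((0.45, 0.55))
print(prediction)
```

    Open-set recognition (e.g. rejecting specimens whose distance to every known class is too large) can mitigate this, but it still requires deciding a threshold, and it only flags the rare specimen as “unknown” rather than identifying it.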

  5. Here are some less obvious ones.

    There are quite a few systems, previously only used by physical scientists and single-species studies, that are probably on the brink of being utilized by ecologists (primarily for trait-based ecomorphology, etc), due to their increasing affordability, miniaturization, and more efficient/easy data processing.
    1) We’re already beginning to see multispectral photography (in combination with spectrophotometry) take off as a comparative tool (just look at the number of papers that have sprung up in the last few months), which will no doubt expand from visual ecology to other areas. Next in line is hyperspectral photography, but the data processing isn’t quite there yet (not simple or quick enough to process many specimens).
    2) Raman spectroscopy is another system that will soon be used for wide scale comparisons of species traits (e.g. ant hydrocarbon profiles, concentration of pigments). It provides quick measurements and handheld devices are already here.
    3) Finally, desktop EMs are increasingly common in ecology departments, and are already very useful for wide-scale comparisons that do not require the high resolution of more sophisticated/expensive instruments.
    4) Micro-CT imaging has revolutionized morphological work, and we will no doubt see it pop up in more and more comparative work soon.
    5) The ‘Oxford nanopod’? This is something I have heard will revolutionize ecological genetics, but I haven’t read into it very much. It’s apparently some sort of inexpensive USB device.

    • EMs? Electron microscopes?

      You’re totally right about microCT in paleontology. Haven’t seen it being applied in ecology yet and it seems like it would only be suitable for certain specialized lines of research, but maybe that just shows the limits of my reading and imagination?

      No idea what the Oxford nanopod is either. Any readers know more about this?

      • “Oxford nanopore” is a company that makes a DNA sequencer the size of a USB stick called a MinION. They’re also developing a sequencer that plugs directly into a phone! Currently, the tech seems pretty finicky and error-prone, but is a promising step towards a real-life Tricorder.

        Also, anecdotally, I’ve tried a number of those iPhone bird call apps and they were all pretty shoddy.

      • “Also, anecdotally, I’ve tried a number of those iPhone bird call apps and they were all pretty shoddy.”

        My prediction that phone apps are about to revolutionize organismal identification is looking worse and worse! Even in my own mind, actually.

  6. I was indeed referring to electron microscopes. They are commonly used for reasonably rapid species identification (not as convenient as a smartphone app for insect ID, but they can image structures that current phones would struggle with, and images can be taken of intact specimens seconds after you place them in the chamber… depending on how quickly you can get the specimen into focus) and for morphological trait measurements in ecological work. They can also be used for things like comparisons of color-producing structures, insect visual systems, etc. Of course, they are also important for any studies that focus on organisms too small to be compared by other methods.

    I agree that most of the techniques mentioned, including microCT, are primarily applicable to certain ‘mechanistic’ disciplines within ecology (ecomorphology, functional trait ecology, visual ecology, chemical ecology, etc.), but I would argue that this is equally true of the techniques you have mentioned (e.g. the usefulness of drones beyond specific macroecology questions). If a sub-discipline exhibits unprecedentedly large and rapid development (producing science of much higher impact) thanks to a technological advance – and for visual and chemical ecology this seems to have already begun (we should see amusing papers with ‘revolution’ or ‘renaissance’ in the titles in no time) – then any technology integral to those endeavors should be considered to have changed ecology as a whole in a ‘big way.’
    It should also be pointed out that the ‘mechanistic’ sub-disciplines can often be used to tackle broad ecological questions from a different perspective, or, more commonly, to unravel some of the underlying factors responsible for trends and interpretations that can appear ‘wishy-washy’ in high-impact macroecology papers. However, this is rarely done, with most authors skipping straight to genetics for any mechanistic interpretation, which generally just leaves you with more correlations, perhaps some sequences, that are equally uninformative when it comes to the mechanism behind the change. Another piece of the puzzle, certainly, but one that leaves out an important, if unpopular and difficult-to-measure, component… but I digress.*

    I certainly agree that smartphones have the potential to be very useful for species ID. There are some interesting devices around that turn a phone into a thermal infrared camera (e.g. the FLIR One), though they have some huge limitations compared to the real thing. I have even seen (unconvincing) attempts to capture multispectral images with a smartphone camera! Needless to say, that would be something.

    Finally, the ‘Oxford Nanopod’ business was mentioned by an inebriated ecologist of considerable fame at a conference, who I shall leave unnamed. A quick google search revealed that he was likely referring to the ‘Oxford Nanopore,’ which is a portable low-cost device for DNA sequencing, but doesn’t seem quite as exciting as he was suggesting (then again, I’m no geneticist). He was certainly adamant at the time that this was the next big thing, but there may have been other factors behind his extreme enthusiasm.

    *It is rare to see ecology papers exploring a ‘big’ question that combine a nice broad-scale study with any sort of test to explore possible mechanisms and/or determine useful variables to include. Instead, there is a huge number of papers that grab variables that are easily measured and slam them into a model, hoping this will sort out the important variables (ideally, the scientists involved will use their experience/knowledge of the system to improve their models, but it’s hard to imagine that this is always the case). The conclusions that can be reached at the end of such papers are often rather weak, generally consisting of the same broad speculation found at the end of other papers exploring a similar problem (see the bulk of large-scale climate-change studies exploring insect responses). There’s nothing wrong with this sort of study (sometimes anything more than a broad brush is unnecessary or irrelevant to the question), but it doesn’t really resolve all that much, at best providing a slightly narrower range of possibilities for the next study. However, because these are the sorts of papers considered high-impact ecology, they make up the vast majority of papers produced, and they relegate those with potentially useful perspectives, such as the now ‘unfashionable’ ecomorphology, to the margins. Perhaps this is because such research was difficult to achieve without expensive/large/difficult-to-master equipment?

    • Oops…I was beaten to it.
      Apologies for the long-winded rant. I hope it doesn’t sound dismissive, smug, or aggressive to anyone. That wasn’t my intention. Feel free to ignore it (or ban it for spreading nonsensical bunk), but any challenges to the opinions expressed are most welcome.
      I must admit, I find myself riled at the mere mention of drones. Whilst they are undoubtedly of future importance, I can’t help but resent the fact that presentations made up almost entirely of drone videos (or occasionally camera-trap videos) walk away with prizes at conferences, repeatedly beating out well-thought-out explorations of important concepts or true methodological innovations.

      • Going to agree that drones are more cool than advancing science at this exact moment.

        In fact, drones are going to have their biggest impact only when the other aspects of image processing mentioned throughout this post (hyperspectral, lidar, machine learning) all come together to do quick species ID. I’m not as pessimistic as some, but it is still 10 years away (and it relies on higher-dimensional imaging than current smartphones offer, not just machine learning).

      • “In fact drones are going to have their biggest impact only when the other aspects of image processing ”

        Yeah, that’s a further thought I just had in response to Ted Hart’s earlier comment. I’m not sure what the advantage is to ecology of capturing a massive amount of augmented-reality imagery (or whatever) to process later if a lone human still has to process the images. This gets back to Margaret Kosmala’s old post on crowdsourced processing of Snapshot Serengeti’s massive camera trap image collection: https://dynamicecology.wordpress.com/2013/05/30/citizen-science-for-creating-large-usable-ecology-datasets-guest-post/ Which in turn arguably illustrates Ted’s point that it’s easier to find some way to speed the processing of images of charismatic popular organisms. Snapshot Caddisfly would not have been nearly as successful, I don’t think.

      • “Going to agree that drones are more cool than advancing science at this exact moment.”

        It’s been a while since a conversation starter post of mine has resulted in a conversation that thought so little of my initial suggestions. It’s refreshing, albeit accidental. Perhaps in future I should make it deliberate–start conversations by tossing out deliberately-rubbish ideas. 😉 (just kidding) (mostly)

      • “Apologies for the long-winded rant. I hope it doesn’t sound dismissive, smug, or aggressive to anyone.”

        No worries, and no apologies needed. Your comments were completely fine and very interesting, I thought. It’s refreshing to have some productive disagreement around here (we’re also getting some in the thread on whether ecologists’ statistical methods are too complicated these days). Productive disagreement with the posts has gotten rarer around here over the years, which is unfortunate because those are the comment threads from which I learn the most.

        Re: talks comprised mostly of drone videos and camera trap images winning prizes at conferences, you’re clearly attending different conferences than me! Which may not be saying much. I basically only go to the ESA annual meeting, so I don’t get out much, conference-wise.

    • “I think if a sub-discipline exhibits an unprecedentedly large and rapid development, (producing science of a much higher impact) due to a technological advancement, and for visual and chemical ecology this seems to have already begun (we should see amusing papers with ‘revolution’ or ‘renaissance’ in the titles in no time), then any technology integral to these endeavors should be considered to have changed ecology as a whole in a ‘big way.’”

      Good point. It’s interesting to think about past cases in which subfield-specific technological revolutions have increased the prominence of the subfield within ecology as a whole. For instance, do folks think that movement ecology has gained in prominence within ecology as a whole thanks to the miniaturization of tagging technology? Honest question.

      Although technologies that revolutionize many subfields will always be a bigger deal than those revolutionizing only one. PCR and associated gene sequencing tech, for instance.

      Re: ecology papers that explore “big” general questions, but also nail the underlying mechanisms in a convincing way in a specific study system, that’s often what Mercer Award-winning papers do: https://dynamicecology.wordpress.com/2014/12/01/hoisted-from-the-comments-what-sort-of-papers-win-the-mercer-award/. It is indeed rarely done and hard to emulate, though I wish more people would try. Though I don’t think it’s because such papers tend to rely on expensive/large/difficult-to-master equipment.

      • I just realised that I’ve been posting these comments as ‘tea-stained musings’. This is very embarrassing. This is a page on which I write posts as they come to mind, without the limitations of traditional grammatical rules or any sense of decency. Please ignore this.

        I am an Australian PhD student. If I ever feel the compulsion to post again, it shall be under my name.

      • No worries James. We allow anonymous comments and pseudonyms. So long as you’re not impersonating anyone else, or using a bunch of different aliases to try to fool others into thinking you’re several different people, you’re fine.

  7. Dumb/lazy question: can someone explain to me what multispectral images are and the areas of ecological research for which it would be game-changing to be able to obtain them easily/cheaply?

  8. Let’s see if I get there first this time.

    Multispectral photography is an increasingly common way to measure colour-related traits. It involves using an ordinary SLR that has been modified (by having its UV/NIR-blocking filter removed) to measure a specimen’s body patterns, as well as brightness, hue, saturation, etc., in the invisible ultraviolet and near-infrared spectra. Easy-to-use software exists (http://www.sensoryecology.com/image-analysis-tools/) that enables calibrated images to be converted into animal vision (e.g. those cool images of flowers as seen by a bee). This is an important innovation because previously, the only ecological research that used colour measures relied on subjective colour scores, which of course were not quantitative and didn’t sample any of the important variation in the invisible wavelengths – wavelengths that may be important components of predator vision and are associated with a number of abiotic factors (e.g. NIR reflectance and the ability to tolerate heat in ants). Scores were still used in ecological studies as late as last year (http://onlinelibrary.wiley.com/doi/10.1111/geb.12516/full).

    This technology is especially good for ecological studies because image processing is very fast, and you obtain spatial information not possible with spectrophotometry (using both would provide even more information, but few attempt this). However, the spectrum obtained is only as wide as the sensor allows (SLRs are built with human vision in mind, after all), and whilst the range is more than adequate for most visual systems, it doesn’t cover the full spectral range you can get with spectrophotometry and hyperspectral imaging. As I mentioned, hyperspectral imaging would be ideal, but image processing is not yet efficient or easy enough for it to be widely employed. It is also pretty expensive.

    Multispectral photography is primarily useful for visual ecology (e.g. predator-prey interactions and pollinator ecology), which is where it is employed most in the literature (utilising only UV-VIS), with some interesting within-community comparisons around (primarily birds and insects). However, it is the impending development of large-scale interdisciplinary comparisons over large climatic gradients that represents a major development. This isn’t just ecology, but it represents a very large step forward in a major interdisciplinary study area, in which ecological components play a major role.

    I’m always surprised when people assume research into biological colouration is well established. It has certainly long been of interest (Aristotle, Darwin, Wallace, and even Newton speculated on the subject), but it was only a short time ago, around 2015, that reproducible, quantitative methods became commonplace, and their use in ecology is still in its infancy.

    This is already too long, so I’ll leave you with a quote from Cuthill et al.’s (2017) Science review, which sums up what I’m trying to get across and confirms that I’m not pulling this out of the air.

    “…Only recently have visual physiologists, sensory ecologists, behavioural ecologists, and evolutionary biologists with shared interests in coloration come together to study the mechanisms of production and perception, the intricacies of function, and patterns of evolution…We are on the threshold of a new era…”

    • Great description of multispectral. I would also lump in hyperspectral imaging (breaking the UV-visible-IR spectrum into dozens, if not over 100, channels). Hyperspectral sensors exist in handheld form (to measure a leaf), as airplane/drone instruments, and on satellites.

      One can debate how strong the claims are (and it varies from chemical to chemical), but because of differential absorbances, ratios of different channels can pull out chlorophyll amounts (NDVI is an example of this, and it requires near-infrared), water concentration, nitrogen, cellulose, etc. And the ability to control for the background light is important and complex; it can be done experimentally (e.g. measure a leaf against black velvet under standardized lighting) or mathematically (using ratios is a simple attempt to control for light brightness).

      One can also just throw that data into a machine-learning algorithm and let the machine find predictive power in interactions between bands, even if we don’t know the specific chemical signatures for traits like LMA or even species identity (again, with varying degrees of success). While one can and should be skeptical of the idea that throwing hundreds more variables into a machine-learning algorithm guarantees a fantastic outcome, it is also hard to argue – given the basic fact that different chemicals have different light-absorbance spectra – that multi/hyperspectral data don’t add a lot to basic black-and-white image factors like shape and size, both for studying species properties and IDs and for studying the physiological condition of whole communities.
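      The band-ratio idea can be made concrete with NDVI, the example mentioned above: a per-pixel ratio of near-infrared and red reflectance, where healthy vegetation (strong NIR reflectance, strong red absorption) scores high. The reflectance values below are made up for illustration; real workflows start from calibrated imagery.

```python
# Minimal per-pixel NDVI from near-infrared and red reflectance bands.
# Values are invented: top row ~ healthy leaves, bottom row ~ bare soil.
import numpy as np

nir = np.array([[0.60, 0.55], [0.30, 0.05]])  # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.25, 0.04]])  # red reflectance

# NDVI = (NIR - red) / (NIR + red); ranges from -1 to 1.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

      Using the ratio rather than raw band values is exactly the simple brightness control described above: scaling both bands by the same illumination factor leaves NDVI unchanged.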

  9. I’m not sure that drones will ever take off as standard ecological sampling tech, due to expense and airspace issues. Apps have huge potential to reduce the costs of sampling large ranges.
    Not sure if you would class this as tech, but I reckon the development of open-source/freeware analysis programs that perform specific subsets of ecological analyses without needing coding skills is a good thing. There are plenty of good ecologists who are not code-savvy, and I wonder how much the focus on coding-based analysis programs limits the advancement of some fields.

  10. NGS in all its forms seems the most obvious one to me – especially in combination with eDNA, including ancient DNA. It is already completely revolutionising ecology (and biology and medicine, in the case of NGS), but I predict that the big revolution still lies ahead of us. I also think ever-cheaper and more reliable barcoding will be ready for species ID before smartphones are up to the task. So for studying the diversity of e.g. insects, eDNA will soon be the most reliable and complete method around (at least for presence/absence data) – or so I think. The same is true for studying anything that is not visible to the naked eye (barcoding in general, not only eDNA), which I think will become increasingly important in ecology (look e.g. at the rise of ‘microbial’ ecology papers in traditional ecology journals), or (for eDNA) anything that might be visible but is either hard to identify (e.g. insects) or has a low detection probability (basically everything). Other examples of the power of barcoding are food web studies (gut contents), and maybe even the reconstruction of time series (e.g. by sequencing eDNA from anoxic soil cores, especially in marine environments). There will be a need for a lot of method validation, but I still predict it will continue to be huge.

  11. Last summer I noticed a plant identification app in China. It had about 80–90% accuracy. You basically snap a picture of a flower, and it uses machine learning with user feedback to identify the species. I was really impressed and have seen nothing even approaching this in English.

  12. Flowers, that is. I forgot to mention that it also works very well for grasses, which is impressive. And my point is that it is already available, but only in Chinese, although the Latin names are also given.

  13. Jeremy, this is a great question, and one that I think community and population ecologists really should think more about. Many of us collect data in the field in much the same way our advisors did (a pencil, a notebook and a meter tape), and much the same way their advisors did before them. This is just not the case in most other fields of science. From my cursory understanding of the history of science, many dramatic expansions of knowledge in physics, astronomy, chemistry and biology have been preceded by advances in observational power that came from a new technology. What technologies have produced new leaps in our ability to observe populations and communities? Perhaps there aren’t many:

    As one minor example of how technology affects ecology, I would offer the case of the flatbed scanner. It’s tempting to see a connection between the surge of interest among plant community ecologists in traits related to leaf area (leaf size, SLA, LDMC) and the availability of cheap flatbed scanners in the mid to late 1990s, along with the development of free image-processing software such as ImageJ. A PhD student in the early 2000s could do in a few minutes, with only a $100 piece of equipment, what would have taken days back in the early 1990s. (Tracing leaves, cutting out paper in the shape of the leaf and then weighing the paper cutout was one method of getting leaf area prior to scanners.) Once flatbed scanners became consumer grade, it got easier, and much more tempting, to measure the leaf area of hundreds of species from thousands of individuals.
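What the scanner-plus-ImageJ workflow boils down to can be sketched in a few lines: threshold the scanned image, count leaf pixels, and convert to area using the scanner’s resolution. The “scan” below is a synthetic array standing in for a real image file, with made-up dimensions and pixel values.

```python
import numpy as np

# Minimal sketch of ImageJ-style leaf area measurement: a dark leaf on a
# white scanner background, segmented by a simple global threshold.
DPI = 300                      # scanner resolution (dots per inch)
CM_PER_INCH = 2.54

# Fake 300x300-pixel grayscale scan: white background (255) with a dark
# rectangular "leaf" (value 40) covering 100 x 150 pixels.
scan = np.full((300, 300), 255, dtype=np.uint8)
scan[50:150, 60:210] = 40

leaf_mask = scan < 128                     # pixels darker than threshold
pixel_area_cm2 = (CM_PER_INCH / DPI) ** 2  # area of one pixel in cm^2
leaf_area_cm2 = leaf_mask.sum() * pixel_area_cm2

print(f"Leaf area: {leaf_area_cm2:.2f} cm^2")
```

Real leaves need a bit more care (shadows, petioles, touching leaves), which is where ImageJ’s interactive tools earn their keep, but the core computation really is this simple.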
    Keeping with my flatbed scanner example, I think we will see ecologists start to utilize 3D scanners more and more as they become cheaper, potentially quantifying 3D traits and architecture of organisms in new ways. For instance: https://phys.org/news/2017-02-bird-lovers-scientists-secrets-beak.html

    I would also second Fabian Roger’s comments about eDNA. It is already becoming a quantitative tool for detecting not just the presence but also the abundance of species. Its real value will be in creating quantitative time series of species abundance, and potentially in recovering paleoecological data from soil, aquatic sediments or caves.

    But I agree with the suggestion that the next biggest thing is going to be image recognition. I am surprised no one has mentioned iNaturalist yet. It has an automatic image-recognition function in its smartphone app, and it correctly suggests the genus or species of most common species I’ve encountered (at least in California). And it does this with relatively few training images per species.
    Yes, in theory many species need to be dissected or examined microscopically (flowers, insects, fungi) to determine the species with a taxonomic key. But it could be the case that subtle differences exist that would allow them to be distinguished from an image (Brian’s point earlier). Combined with super-high-resolution images, depth scanners and multispectral data, I have no doubt AI will soon be able to outpace human naturalists in almost every way. (And by the way, many field ecologists are not the most gifted at field identification; we’re talking about ecology, NOT taxonomy here!)

    So the question is: are we data-limited in this regard (Jeremy’s other question)? I would argue that many of the respondents to this post are far too skeptical of image recognition. We probably don’t need more data on the occurrence of organisms across space. I agree with Brian that the real need is detailed data on organism abundance and behavior collected over time rather than across space, and I think that image recognition will help. In the future, ecologists will train their own neural networks to recognize the key study species at their field site(s). This is already feasible, but beyond the technical expertise of most ecologists. (Software built around programs such as Google’s TensorFlow will make this easier and easier.) So the objection that image recognition will not be able to distinguish between every pair of closely related species is beside the point: we only need it to distinguish between the species that co-occur and interact in our study.

    Now imagine a camera array distributed across your field site (or you could use a drone for low-altitude flyovers). Assuming you can afford to store all the terabytes of image data, I think image recognition will allow plant ecologists to track the growth and phenology of many of the dominant plant species in that field. For us plant ecologists, data on the recruitment, growth and mortality of thousands of individuals will become much easier to collect, not just annually but monthly or weekly. It will also be relatively inexpensive to keep such a project running for many years; no need to hire field technicians to measure plant cover and growth year after year. My hope is that we’ll then have reliable streams of observational data at the population and community level on which we can test predictive theories of ecology. Imagine the canonical datasets in ecology, such as BCI, Cedar Creek or the Park Grass experiment, except at higher temporal resolution and for cheaper.

    I don’t see why this couldn’t be applied to other taxa of interest as well, such as corals, fish (in clear water at least), and birds. Of course it won’t work for everything, but that’s to be expected. Check this out to see an example of accurate street-tree recognition from Google Street View: http://www.vision.caltech.edu/registree/
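The “train a recognizer for just the species at your site” idea can be caricatured with a nearest-centroid classifier over image feature vectors. In a real workflow the features would come from a pretrained network (e.g. via TensorFlow); here they are simulated vectors for two hypothetical species, invented purely for illustration.

```python
import numpy as np

# Toy species recognizer: store each species' mean feature vector from
# "training" images, then assign new images to the nearest centroid.
rng = np.random.default_rng(42)

def make_features(center, n=30, spread=0.5):
    """Simulate n image feature vectors clustered around a species' centroid."""
    return center + rng.normal(0, spread, size=(n, len(center)))

species_centers = {"species_1": np.array([0.0, 0.0, 3.0]),
                   "species_2": np.array([3.0, 3.0, 0.0])}
train = {sp: make_features(c) for sp, c in species_centers.items()}

# "Training" here is just computing each species' centroid.
centroids = {sp: feats.mean(axis=0) for sp, feats in train.items()}

def classify(feature_vec):
    """Assign a new image's features to the nearest species centroid."""
    return min(centroids,
               key=lambda sp: np.linalg.norm(feature_vec - centroids[sp]))

print(classify(np.array([0.2, -0.1, 2.8])))  # near species_1's centroid
```

The point of the caricature is the one made above: the classifier only ever has to separate the handful of species that actually co-occur at the site, which is a far easier problem than identifying every species on Earth.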

    Philosophically, I believe that collecting this kind of data will push ecologists to focus on predicting processes rather than describing spatial patterns. With this kind of data streaming in every month or so, ecology will become a bit more like meteorology: the goal will be to predict the future dynamics of the system from the data we have on its current state. (Shout-out to Ethan White’s lab for leading the way in this regard: https://www.biorxiv.org/content/early/2017/09/20/191130)

    • Ok, now I have no idea what to think about my original predictions. I’ll be over here watching our commenters make wagers with each other on whether/when species ID apps will totally change ecology. 🙂

      Interesting re: flatbed scanners. Though that story also illustrates just how high a bar a technology has to jump to revolutionize all of ecology rather than just research on one topic. Stuff like computers or PCR changed ecology (and the rest of science) waaaaay more than flatbed scanners or miniaturized GPS tags or etc.

      Interesting suggestion that image recognition is going to change ecologists’ focus from spatial patterns to temporal patterns. I think a lot of people, including me, would say just the opposite (and have said the opposite in old comment threads here that I can’t find just now). The reason being that technological advances make it easier to sample more locations within any given period of time, but don’t speed time up. It’ll always take X units of time to collect a time series X units of time long (leaving aside reconstructions of past time series from sediment cores or whatever). But I suppose we could both be right; it’s just that the revolution in time series analysis (assuming that it comes) will have to wait until sufficient time passes for the data to be collected. How long that will be depends on the question and study system, of course–some variables change faster than others.

      • My humble opinion is that none of the things mentioned will be a game-changer by itself, but that a (creative) combination of some of them will bring new advances. What about a drone that can take plant samples, determine the species with a mini barcoding module directly on board, and submit the location and results directly to the scientist’s computer…? (a little bit of kidding…)

      • Re-reading my comment, I’m probably a bit too gung-ho, and I probably didn’t really answer exactly the question you asked. I’m just too excited about your great question! 🙂

        I think we all interpret your original question a bit differently. What type of “change” and what type of “ecology” are you asking about?

        My bias is towards understanding how the abundances of organisms that we can see and count change over time (at the scale of years to decades), and how they respond to exogenous factors (especially anthropogenic changes). I would contend this vague description encompasses many of the big questions in community ecology, and most of the applied questions in ecology. (It probably leaves out big questions in biogeography, paleoecology and evolution.) Within this realm of ecology, I think that automatic detection and counting of organisms from images is imminent, and I predict it will be a huge source of new data. I think that governments, non-profits and even businesses will begin collecting this data regardless of whether ecologists think it’s useful. Once it begins to become available, ecologists will use it; let no data go unanalyzed, that should be the motto of ecology. The wider availability of streams of image data and the possibility of quickly analyzing them will certainly “change” how ecologists do ecology regardless of whether it “advances” ecology (probably I’m answering a different question than what you had in mind). Also, I’m emphasizing computer vision and mining images for data broadly, not the use of smartphone apps narrowly, which I agree will have limited impact.

        I’m not contending that flatbed scanners changed ecology in a big way (although there certainly have been A LOT of papers published in plant ecology on leaf traits over the past 10 years!). My point was that we underestimate the stealthy way in which technology and convenience drive research questions, rather than research questions driving our choice of technology (I think this is especially true for students). As Army adviser James Willbanks says in Ken Burns’s recent documentary about the Vietnam War: “If you can’t count what’s important, you make what you can count important.”

        And this isn’t necessarily a bad thing (at least initially). Sometimes we do discover new things when we apply a new tool to something, even when we don’t have a good reason to do so.
