The Top Five Data Innovations Transforming Wildlife Conservation: Artificial Intelligence for Wildlife Conservation Workshop 2018

As part of the ICML and IJCAI conferences, I was honored to give the invited keynote talk at the Artificial Intelligence for Wildlife Conservation (AIWC) workshop.  The workshop featured a strong agenda of compelling speakers presenting their work using machine learning to solve challenges in wildlife conservation. 

The topic of my talk was innovations in data collection for wildlife conservation.  Data is the heart of machine learning, and some aspects of data collection for wildlife conservation are particularly troubling, such as tagging methods that can harm animals and poachers exploiting data published by wildlife scientists to target and kill rare species.  However, all is not lost: I have seen some interesting innovations in wildlife data collection, many of them used by our AI for Earth grant recipients.  I spoke about these five areas of data innovation:

  • UAV/drone imagery – projects such as FarmBeats have created optimizations for drone data collection, like an iPhone app for autonomous flights that allows a farmer or biologist to specify a path or area to cover, after which the algorithm calculates the path that minimizes battery usage, for example by exploiting the wind to speed up or slow down (a toy coverage-path sketch appears after this list).  FarmBeats also introduced the innovation of leveraging TV white spaces (unused television channels) for connectivity; television signals occupy much lower frequencies than traditional Wi-Fi and travel much farther, so one TV white space router can provide connectivity to a large farm. 
  • Camera traps – researchers at the California Institute of Technology (Caltech) have compiled a custom dataset of camera trap images to help develop computer vision models for identifying animals. Camera traps produce massive amounts of raw image data that must be sorted to find the useful photos, a task that is ideal for automated computer vision systems. The Caltech dataset helps computer vision researchers refine their models to handle previously unseen locations by generalizing across backgrounds (see the location-split sketch after this list). When the models learn to distinguish what is typically in the background, they can ignore it and correctly pick out photos with the desired animals, leading to an efficient and scalable way to take advantage of all that camera trap data.
  • Simulation – one difficulty in training computer vision models is providing them with good data. Someone first has to go through the time-consuming manual process of assessing thousands and thousands of images to produce a dataset that will be effective for machine learning, as Caltech did for their camera trap project. However, advanced 3D computer modeling now offers a different possibility: creating simulated images of real-world animal habitats. Simulating images with tools such as AirSim (see the capture snippet after this list) would allow us to quickly build custom datasets covering the wide variety of conditions and situations a computer vision model might encounter, train the models faster, and move on sooner to processing real imagery and getting useful results.
  • Crowdsourcing – applications like iNaturalist allow citizen scientists (everyday people who care about the environment) to take pictures of wildlife (both flora and fauna) on their mobile phones and submit these observations to a database for biologists to consume.  The pictures can easily be augmented with date and location metadata from the phone (a short metadata-reading sketch follows this list), and a machine learning classifier can help non-experts identify species by suggesting the top matches for the photographed organism based on computer vision.  The work of cataloguing local wildlife is therefore crowdsourced, and the computer vision helps people label their data properly.  
  • Social media – the organization Wild Me created a platform called Wildbook, which powers websites such as https://whaleshark.org.  The site relies on social media for its data.  At 10pm every night, an intelligent agent searches social media sites such as YouTube for pictures and videos of whale sharks (the search sketch after this list shows the general shape of such a crawl).  When it finds footage from people posting their whale-watching trips, it can identify the animal down to the individual level: not just recognizing that it is a whale shark, but which specific whale shark it is.  This helps with tracking the migration paths of individual animals and estimating population densities.  The agent also posts a comment on the YouTube video letting the poster know that their footage was used for conservation purposes, with a link to an informational page on that animal listing the locations and times it has been sighted and the other animals with which it has been seen. 
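
To make the coverage-path idea from the UAV/drone bullet concrete, here is a minimal boustrophedon ("lawnmower") sweep over a rectangular area. It is purely illustrative: the function name and parameters are my own, and it ignores the battery and wind models that FarmBeats actually optimizes for.

```python
from typing import List, Tuple

def lawnmower_path(width_m: float, height_m: float,
                   swath_m: float) -> List[Tuple[float, float]]:
    """Generate waypoints that sweep a width x height rectangle in
    parallel passes spaced one sensor swath apart (boustrophedon)."""
    waypoints = []
    x = 0.0
    going_up = True
    while x <= width_m:
        # Alternate the sweep direction each pass so the drone never
        # backtracks over ground it has already covered.
        if going_up:
            waypoints += [(x, 0.0), (x, height_m)]
        else:
            waypoints += [(x, height_m), (x, 0.0)]
        going_up = not going_up
        x += swath_m
    return waypoints

# Example: cover a 100 m x 60 m field with a 20 m camera footprint.
print(lawnmower_path(100, 60, 20))
```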
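The Caltech camera trap work evaluates how well models generalize to camera locations they have never seen. One simple way to approximate that evaluation on your own data is to split images by location rather than at random; the sketch below assumes a hypothetical list of (image_path, label, location_id) records and is not the Caltech pipeline itself.

```python
import random
from collections import defaultdict

def split_by_location(records, test_fraction=0.2, seed=0):
    """Hold out entire camera locations for testing so evaluation measures
    generalization to unseen backgrounds rather than memorization."""
    by_location = defaultdict(list)
    for image_path, label, location_id in records:
        by_location[location_id].append((image_path, label))

    locations = sorted(by_location)
    random.Random(seed).shuffle(locations)
    n_test = max(1, int(len(locations) * test_fraction))
    test_locations = set(locations[:n_test])

    train, test = [], []
    for loc, items in by_location.items():
        (test if loc in test_locations else train).extend(items)
    return train, test
```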
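As a sense of what generating simulated training images looks like, here is a minimal sketch using AirSim's Python client. Exact details such as the camera name and image settings vary by AirSim release and environment, so treat them as assumptions; a running AirSim simulation is required.

```python
import numpy as np
import airsim  # pip install airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# Request an uncompressed RGB frame from camera "0" (front-facing by default).
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene,
                        pixels_as_float=False, compress=False)
])
frame = responses[0]

# Decode the raw bytes into an H x W x 3 array for labeling or training.
rgb = np.frombuffer(frame.image_data_uint8, dtype=np.uint8)
rgb = rgb.reshape(frame.height, frame.width, 3)
print("captured simulated frame:", rgb.shape)
```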
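The date and location metadata mentioned in the crowdsourcing bullet typically travel with the photo as EXIF tags. Below is a rough sketch of reading them with Pillow; the file name is hypothetical, and the GPS values come back as raw EXIF tags that still need conversion to decimal degrees.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def observation_metadata(path):
    """Pull the capture timestamp and raw GPS tags out of a photo's EXIF data."""
    exif = Image.open(path).getexif()
    date_taken = exif.get(306)          # tag 306 = DateTime
    gps_ifd = exif.get_ifd(0x8825)      # tag 0x8825 = GPSInfo sub-IFD
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    return date_taken, gps

print(observation_metadata("observation.jpg"))
```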
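Wildbook's nightly agent is a good example of mining social media as a data source. The sketch below shows the general shape of such a crawl using the public YouTube Data API v3 via google-api-python-client; it is not Wild Me's implementation, and the API key and query are placeholders.

```python
from googleapiclient.discovery import build  # pip install google-api-python-client

API_KEY = "YOUR_YOUTUBE_DATA_API_KEY"  # placeholder

def recent_whale_shark_videos(published_after_iso8601, max_results=25):
    """Search YouTube for recent whale shark footage to feed an
    identification pipeline (e.g. Wildbook-style spot matching)."""
    youtube = build("youtube", "v3", developerKey=API_KEY)
    response = youtube.search().list(
        q="whale shark",
        part="snippet",
        type="video",
        publishedAfter=published_after_iso8601,
        maxResults=max_results,
    ).execute()
    return [(item["id"]["videoId"], item["snippet"]["title"])
            for item in response["items"]]

print(recent_whale_shark_videos("2018-07-01T00:00:00Z"))
```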

The AIWC organizers also released a challenge problem on determining an optimal flight path for a drone to fly through a simulated African environment to collect data on elephant location density.  Submissions for the challenge are still being accepted.   

[Image: elephants in the simulated African environment used for the challenge]

The full list of accepted papers is at https://sites.google.com/a/usc.edu/aiwc/accepted-papers.  Check them out; there is some compelling work being done using machine learning to address hard problems in wildlife conservation. 

Computing Robust Strategies for Managing Invasive Plants
Andreas Lydakis, Jenica Allen, Marek Petrik and Tim Szewczyk

Recognition for Camera Traps in Unfamiliar Territory
Sara Beery, Grant Van Horn and Pietro Perona

Crowdsourcing mountain images for water conservation
Darian Frajberg, Piero Fraternali and Rocio Nahime Torres

Exploiting Data and Human Knowledge for Predicting Wildlife Poaching
Swaminathan Gurumurthy, Lantao Yu, Chenyan Zhang, Yongchao Jin, Weiping Li, Xiaodong Zhang and Fei Fang

Towards Automatic Identification of Elephants in the Wild
Matthias Körschens, Björn Barz and Joachim Denzler

Deep Reinforcement Learning for Green Security Game with Online Information
Lantao Yu, Yi Wu, Zheyuan Ryan Shi, Rohit Singh, Lucas Joppa and Fei Fang

Designing the Game to Play: Optimizing Payoff Structure in Security Games
Zheyuan Ryan Shi, Ziye Tang, Long Tran-Thanh, Rohit Singh and Fei Fang

Counting Caribou from Aerial Imagery
Evan Shelhamer, Nathan Pamperin and Trevor Darrell

An agent-based model of an endangered population of the Arctic fox from Mednyi Island
Angelina Brilliantova, Anton Pletenev, Liliya Doronina, and Hadi Hosseini

The Great Grevy’s Rally: A Review on Procedure
Jason Parham, Charles Stewart, Tanya Berger-Wolf, Daniel Rubenstein and Jason Holmberg

Convolutional Neural Networks for Detecting Great Whales from Orbit in Multispectral Satellite Imagery
Patrick Gray and David Johnston

Green Security Game with Community Engagement
Taoan Huang, Rohit Singh and Fei Fang

Simulation for Wildlife Conservation with UAVs
Elizabeth Bondi, Debadeepta Dey, Ashish Kapoor, Jim Piavis, Shital Shah, Fei Fang, Bistra Dilkina, Robert Hannaford, Arvind Iyer, Lucas Joppa and Milind Tambe

Ruling the Roost with CNNs: Detecting and Tracking Communal Bird Roosts in Weather Radar Data
Zezhou Cheng, Saadia Gabriel, Pankaj Bhambhani, Daniel Sheldon, Subhransu Maji, Andrew Laughlin and David W. Winkler

Probabilistic Inference with Generating Functions for Population Models
Kevin Winner, Daniel Sheldon and Debora Sujono

Inferring Latent Velocities from Weather Radar Data using Gaussian Processes
Rico Angell, Daniel Sheldon and Eric Johnson