Authors: Sazida B. Islam, Damian Valles, Toby J. Hibbitts, Wade A. Ryberg, Danielle K. Walkup, Michael R.J. Forstner

Accurate identification of animal species is necessary to understand biodiversity richness, monitor endangered species, and study the impact of climate change on species distribution within a specific region. Camera traps are a passive monitoring technique that generates millions of ecological images. These vast numbers of images make automated analysis essential, because manual assessment of large datasets is laborious, time-consuming, and expensive. Deep learning networks have advanced rapidly in recent years, providing state-of-the-art results on object and species identification tasks in computer vision. In our work, we trained and tested machine learning models to classify three animal groups (snakes, lizards, and toads) from camera trap images. We experimented with two pretrained models, VGG16 and ResNet50, and a self-trained convolutional neural network (CNN-1) with varying CNN layers and augmentation parameters. For multiclass classification, CNN-1 achieved 72% accuracy, whereas VGG16 reached 87% and ResNet50 attained 86%. These results demonstrate that the transfer learning approach outperforms the self-trained model. The models showed promising results in identifying species, even those whose body sizes or surrounding vegetation make detection challenging.
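To make the model setup concrete, the following is a minimal PyTorch sketch of a small three-class convolutional classifier in the spirit of the self-trained CNN-1 described above. The layer counts and channel sizes here are illustrative assumptions, not the authors' exact architecture, and the paper's stronger results came from fine-tuning pretrained VGG16/ResNet50 backbones rather than training such a network from scratch.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Illustrative three-class CNN (snake / lizard / toad).

    Hypothetical layer sizes; not the published CNN-1 configuration.
    """

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Three conv blocks followed by global average pooling.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One logit per animal class.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
# A batch of 4 random RGB "camera trap" images, 128x128 pixels.
logits = model(torch.randn(4, 3, 128, 128))
print(tuple(logits.shape))
```

A transfer learning variant would instead load a pretrained backbone (e.g. `torchvision.models.resnet50`), freeze or fine-tune its convolutional layers, and replace only the final fully connected layer with a three-class head.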

Suggested Citation

Islam, S.B., D. Valles, T.J. Hibbitts, W.A. Ryberg, D.K. Walkup, and M.R.J. Forstner. 2023. Animal species recognition with deep convolutional neural networks from ecological camera trap images. Animals 13:1526.