
How RunPod & GPU Computing Help Protect the Himalayan Ecosystem


Imagine having to manually sort through tens of thousands of camera trap images for every study you conduct. That is how many biologists and conservationists in the Himalayan region spend much of their valuable time, time they could otherwise spend out in the field actually protecting endangered animals. Tedious as it is, sorting these images is key to a proper understanding of the ecosystem and can reveal the most effective ways to protect it; it is an essential technique that conservation efforts depend on, and there is currently no faster way to do it. With Project Trinity, the Kashmir World Foundation set out to improve the process of studying wildlife with camera trap technology. The Artificial Intelligence Team (alias Team Draco) has been tasked with eliminating the need for manual image sorting by developing an A.I. model with similar capabilities. We are proud to announce that we have partnered with RunPod to tackle these problems and help protect endangered species.


Developing an A.I. model is generally an iterative process: a multitude of different models are created, each one slightly better than its predecessor. ‘Creating’ a model is usually referred to as training it, and training requires enormous amounts of computing power. We start with a pre-trained version of the model and customize it for our specific purpose by exposing it to data from the Himalayas.
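
To make “starting from a pre-trained model” a little more concrete, here is a minimal sketch of what that step can look like with PyTorch and the publicly available YOLOv5 weights (the model family shown in our team presentations). The dataset configuration file named in the comment is a hypothetical placeholder, not our actual setup.

```python
# Minimal sketch: load a pre-trained YOLOv5 model as the starting point for fine-tuning.
# Assumes PyTorch and an internet connection; 'himalaya.yaml' is a hypothetical dataset config.
import torch

# Download a small YOLOv5 model whose weights were already trained on a large public dataset.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Fine-tuning ("training") is then typically launched through the YOLOv5 repository,
# pointing it at the custom camera-trap dataset and the pre-trained weights, e.g.:
#   python train.py --data himalaya.yaml --weights yolov5s.pt --epochs 300
```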

During one epoch, the entire dataset is analyzed once for training purposes

Let's illustrate the compute requirements: during a single training run, the model analyzes the entire dataset of roughly 7,500 images a total of 300 times, which means over 2 million images have to be processed. For Project Trinity, Team Draco has already finished around 50 of these training runs, meaning we've processed over 112 million images!
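
Those figures follow from straightforward multiplication; the short calculation below reproduces them under the stated assumptions of roughly 7,500 images, 300 passes per run, and about 50 runs.

```python
# Rough back-of-the-envelope check of the figures quoted above.
images_per_epoch = 7_500   # approximate dataset size
epochs_per_run   = 300     # passes over the dataset in one training run
training_runs    = 50      # runs completed so far

images_per_run = images_per_epoch * epochs_per_run   # 2,250,000 -> "over 2 million"
images_total   = images_per_run * training_runs      # 112,500,000 -> "over 112 million"
print(f"{images_per_run:,} images per run, {images_total:,} images in total")
```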


Luckily for us, the problem of processing images at high speed is far from unexplored. This capability has always been an essential feature of modern computers: without you noticing it, your display refreshes at least 30 times per second to ensure smooth, realistic movement, which means the computer is already rendering dozens of images every second. And with the rising popularity of video games, graphics on consumer computers have been held to ever higher standards.


The component responsible for rendering graphics is called the GPU (graphics processing unit), not to be confused with the CPU (central processing unit). The CPU is optimized to perform a very broad range of complicated tasks and is therefore in charge of running an operating system such as Windows; a common analogy is that it is the brain of the computer. A GPU, in contrast, performs best when it handles large quantities of relatively simple computations. This includes rendering graphics during computer games as well as training our Artificial Intelligence models at high speed. Continuing the analogy, the GPU would be the muscle of the computer.
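
In practice this division of labour is something you choose explicitly in code. As a minimal sketch (using PyTorch as an example framework, not necessarily our exact configuration), the same model and data can be placed on either the CPU or the GPU:

```python
# Minimal sketch: run the same computation on the CPU or the GPU, whichever is available.
import torch
import torch.nn as nn

# Pick the GPU if PyTorch can see one, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy convolutional layer standing in for a full detection model.
model = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3).to(device)

# A batch of 8 random 320x320 RGB "images"; real training would use camera-trap photos.
images = torch.randn(8, 3, 320, 320, device=device)

outputs = model(images)
print(f"Ran on {device}, output shape {tuple(outputs.shape)}")
```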


Arjun Bakhale, Sasanka Sreedevi-Naresh, and Julian Pallanez from Team Draco put this to the test by training our A.I. model with identical settings on both a CPU and a GPU. They found that the CPU completed the run after 1 hour and 40 minutes, whereas the GPU took only 7 minutes. Sasanka: “Since CPUs are made to perform computations for a variety of tasks such as working in spreadsheets and browsing the internet, while GPUs are so specialized to do tasks like rendering images rapidly for increased frames per second in games, it can be identified that a GPU is much better for training our model compared to a CPU.”
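
For readers who want a feel for that gap on their own hardware, the rough sketch below times a few training steps of a small stand-in network on the CPU and, if one is present, on the GPU. It is an illustration only, not Team Draco's actual benchmark, and the exact numbers will vary from machine to machine.

```python
# Rough CPU-vs-GPU timing sketch; illustrative only, not Team Draco's exact experiment.
import time
import torch
import torch.nn as nn

def time_training_steps(device: torch.device, steps: int = 20) -> float:
    """Time a few forward/backward passes of a small conv net on the given device."""
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    ).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    images = torch.randn(8, 3, 320, 320, device=device)

    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = model(images).mean()
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work to finish before stopping the clock
    return time.perf_counter() - start

print(f"CPU: {time_training_steps(torch.device('cpu')):.2f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_training_steps(torch.device('cuda')):.2f} s")
```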

GPU vs CPU, YOLOv5 training presentation by Arjun Bakhale and Sasanka Sreedevi-Naresh

It can easily be concluded that GPUs are essential for the efficient development of any artificially intelligent system. However, everything comes at a price, and a GPU comes at a very high one. A high-end consumer GPU can easily set you back around 1,200 dollars, and not all laptops support one. It is therefore not surprising that one of Draco’s greatest limitations was the lack of available GPUs. Fortunately, this is where RunPod has been of great help to us. Recently Daan Eeltink, team lead of the A.I. Team, had the honor of discussing Project Trinity with Zhen Lu, who founded RunPod together with Pardeep Singh. They were intrigued by our project, and by the fact that our team consists solely of highly motivated interns with a passion for A.I. and wildlife conservation, and graciously offered to provide us with access to their cloud GPUs. (Cloud GPUs are GPUs you can use but that are not physically inside your computer; they sit in data centers, for example, and you control them over an internet connection.)
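
Once you are logged in to one of these rented machines, it behaves like any other computer with a GPU attached. A quick sanity check such as the sketch below (again assuming PyTorch) confirms that the remote GPU is visible before a training run is launched:

```python
# Quick sanity check on a rented cloud GPU machine before launching a training run.
import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    memory_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"Cloud GPU detected: {name} with {memory_gb:.0f} GB of memory")
else:
    print("No GPU visible - training would fall back to the (much slower) CPU")
```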

A RunPod + YOLOv5 training setup, A.I. Team Meeting, August 3rd, 2022

RunPod’s main objective is to revolutionize the cloud GPU market and make these services available to both hobbyists and professionals. At the moment, the market is dominated by companies like Google and Amazon, which offer cloud computing services at high prices. “The current landscape is full of tools that you get stuck with using because there are no alternatives. There isn't enough competition and innovation and we plan to change that.” To this end, RunPod offers a multitude of different high-end GPUs at very affordable prices. We encourage everybody to take a look at their website runpod.io and check out what they have to offer.


Thanks to RunPod’s passionate founders, biologists will have a more efficient way of protecting the Himalayan ecosystem, snow leopards and other wildlife will have a better chance of survival, and KwF interns will have the opportunity to work with fast and efficient A.I. training. We are very grateful to RunPod for their dedication to Project Trinity and hope to continue our partnership for a long time to come.

