[DISCLAIMER]
This is a post from 2017 that I’ve recently translated from Spanish to English.
[/DISCLAIMER]
I’m doing the Deep Learning for Coders MOOC from fast.ai, and the exercise in the first week consists of using a script to tell apart images of dogs and cats. It’s always nice to apply what you learn in a personal project, so I came up with the idea of detecting whether the cars I can see from my balcony are cabs or not.
In order to apply Deep Learning to this problem we need data, so I’ll have to start taking pictures of the street from my balcony. I’m a lazy guy, so I’d like to avoid taking the pictures manually.
I had a Raspberry Pi and an old webcam collecting dust in a closet. The camera is pretty old, with a low resolution (640×480), so a model trained on this kind of picture will have more trouble making sense of them, but that’s what I’ve got for the moment.
My equipment:

The Raspberry Pi runs an OS similar to distributions like Debian/Ubuntu, which makes it easy to run bash scripts. This is the one I used to take the pictures:
#webcam.sh
[code language="bash"]
#!/bin/bash
# take a single picture and name it after the current timestamp
DATE=$(date +"%Y-%m-%d_%H%M%S")
fswebcam -r 640x480 --no-banner "/home/pi/webcam/$DATE.jpg"
[/code]
The above script simply takes a picture and saves it using the creation timestamp as its filename.
#getpics.sh
[code language="bash"]
#!/bin/bash
# call webcam.sh in an endless loop, one picture every five seconds
while true
do
    /home/pi/webcam.sh
    sleep 5
done
[/code]
And with this script we run the previous one every five seconds.
$ nohup ./getpics.sh &
Finally, we run the above command so the script keeps running in the background, even after we close the session.
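When run from a terminal like this, nohup writes the loop’s output to nohup.out by default, so we can check that file if something goes wrong, and stop the loop by name once we have enough pictures:
[code language="bash"]
$ tail -f nohup.out      # watch fswebcam's output for errors
$ pkill -f getpics.sh    # stop the capture loop when done
[/code]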
Once enough images have been taken, we have to label them manually, putting each image in the correct folder (taxi / no taxi).
The folder “train” contains the folders “notaxi” and “taxi”, and we have to do the same with the folder “validation”.
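Creating that layout is a one-liner thanks to bash’s brace expansion (the top-level “data” folder name is my own choice; only the train/validation and taxi/notaxi split matters):
[code language="bash"]
mkdir -p data/{train,validation}/{taxi,notaxi}
[/code]
The labeling itself is a matter of looking at each picture and moving it by hand, but a small helper loop along these lines could speed it up. This is just a sketch, assuming the feh image viewer is installed:
[code language="bash"]
#!/bin/bash
# show each picture, ask taxi / no taxi, move it to the matching folder
for img in *.jpg; do
    feh "$img" &              # open the picture in the background
    viewer=$!
    read -p "taxi? [y/N] " answer
    kill "$viewer" 2>/dev/null  # close the viewer
    if [ "$answer" = "y" ]; then
        mv "$img" data/train/taxi/
    else
        mv "$img" data/train/notaxi/
    fi
done
[/code]
A portion of each class can then be moved over to the corresponding “validation” folders.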

As you can see, parked cars also appear in the images, something that clearly doesn’t help the model training process. One really simple solution is to crop the images; in this case the bottom strip is the problem:
for i in *.jpg ; do convert -gravity South -chop 0x140 "$i" cropped/"${i%.*}.jpg" ; done
For each image with the “.jpg” extension, ImageMagick’s convert chops 140 pixels off the bottom and saves the result in the “cropped” folder (which has to exist beforehand).
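To sanity-check the result, we can ask ImageMagick for the new geometry; every cropped file should now be 640x340:
$ identify cropped/*.jpg | head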
Here are some before and after examples:
Before:
After:
Yes!! Now we are ready to train our taxi detector!
For the training I used the code from the fast.ai course I mentioned earlier. It’s all in a GitHub repo, where you’ll also find a brief explanation of what the code does. If you run it, you’ll train a model that outputs something like the following:
Resources
Link to repository.
Link to data.