Wednesday, July 31, 2013

28. Motion Detection

2000 page views!!
The aim here is to record only frames where motion has caused detectable differences from previous frames.  With this method the many frames in which there is no motion are omitted, and lots of image storage space is therefore saved.

A very clever post was made by brainflakes on the Raspberry Pi Forum in May this year, and it works a treat.  The Python program uses, as you will see, the Python Imaging Library (PIL) and other imports.  To install PIL, use the following on the Pi:

sudo aptitude install python-imaging-tk

I didn't need to install any other packages, as the StringIO, subprocess, os and time modules imported by the Python program were already on my system.

The Python program captures a thumbnail image with the captureTestImage() function, giving a pixel buffer, buffer1, and compares it with a second one, buffer2.  If the number of changedPixels is greater than the pre-defined value of sensitivity, then a corresponding high resolution image is saved onto the SD card.  To determine whether a pixel has changed or not, the difference in its green channel value between the two buffers is compared with the pre-defined value of threshold.  The green channel is the 'highest quality' channel because the human eye is more sensitive to green light than it is to red or blue, and so green is usually allotted an extra binary digit (bit) for its values in a pixel (as in the 16-bit RGB565 format, with 5 bits for red, 6 for green and 5 for blue).
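To make this concrete, here is a minimal, self-contained sketch of the changed-pixel count described above.  The frame contents are synthetic stand-ins for the green channels of the two 100 x 75 thumbnails, and the patch of "motion" is invented purely for illustration:

```python
# Synthetic stand-in for the green channels of two 100x75 thumbnails.
WIDTH, HEIGHT = 100, 75
threshold = 10     # per-pixel change needed to count as "changed"
sensitivity = 300  # changed pixels needed to trigger a full-size capture

frame1 = [[120] * HEIGHT for _ in range(WIDTH)]   # uniform background
frame2 = [row[:] for row in frame1]

# Pretend something moved: brighten a 25 x 20 patch by 50.
for x in range(10, 35):
    for y in range(10, 30):
        frame2[x][y] += 50

changed_pixels = 0
for x in range(WIDTH):
    for y in range(HEIGHT):
        if abs(frame1[x][y] - frame2[x][y]) > threshold:
            changed_pixels += 1

print(changed_pixels)                 # 500 (25 * 20 pixels changed)
print(changed_pixels > sensitivity)   # True - a full-size image would be saved
```

With threshold = 10 the 50-level jump counts every pixel in the patch as changed, and 500 > 300, so this frame pair would trigger a capture.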

So there are two variables that you can tweak - "threshold" and "sensitivity".  The values I used are in the code below.  I arrived at them by trying different values, so that the motion detection was neither too sensitive (triggering exposures for anything that vaguely moved, and so capturing too many images) nor too insensitive (causing long gaps in time between exposures).  pageauc is working towards a web based graphical user interface (GUI) in which you can vary all the camera's parameters, and put the images on the web, so I'm going to keep a close eye on him.  His explanatory videos are excellent.

Here's my experimental arrangement:

The bowl of apples, pears and nectarines isn't part of the experimental equipment, but can be useful for refreshment purposes. 

The camera board is held on to the eyepiece cover with wires, as before.  You can see them sticking out. 

And now for a close-up...

You may not recognise my Raspberry Pi in its new clothes -  a Gamble Mix 'N Match case from ModMyPi.  You get it cheap (£2.49), but you take a chance on getting any colour - I think my blue top and black bottom look snazzy, and it sits quite comfortably on top of the Mighty Midget field scope.  Thank goodness it wasn't pink! Note that I'm still using the 25 cm cable for more versatile positioning of the WiFi dongle.

Once again, I used the very wonderful ImageJ for 
  • importing the images
  • trimming them
  • rotating and flipping them (the raspistill call in the Python program isn't given the -vf and -hf options)
  • image stabilization
  • optimizing image contrast and brightness and
  • labelling images with their filenames, which reflect the date and time they were taken.
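The flipping step could also be scripted in Python rather than done in ImageJ.  Here is a hedged sketch using Pillow (the maintained successor to PIL); the folder name "captures" is just an assumption for illustration:

```python
import os
from PIL import Image

def flip_captures(folder="captures"):
    """Rotate every captured JPEG by 180 degrees.

    Flipping vertically and horizontally together (raspistill's -vf plus -hf)
    is the same as a 180-degree rotation.
    """
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".jpg"):
            continue
        path = os.path.join(folder, name)
        im = Image.open(path)
        im = im.transpose(Image.ROTATE_180)
        im.save(path, quality=90)
```

Run flip_captures() once over the capture folder before importing the sequence into ImageJ.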
Here is brainflakes' very neat Python code:

import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image

# Motion detection settings:
# Threshold (how much a pixel has to change by to be marked as "changed")
# Sensitivity (how many changed pixels before capturing an image)
# ForceCapture (whether to force an image to be captured every forceCaptureTime seconds)
threshold = 10
sensitivity = 300
forceCapture = True
forceCaptureTime = 60 * 60 # Once an hour

# File settings
saveWidth = 1280
saveHeight = 960
diskSpaceToReserve = 40 * 1024 * 1024 # Keep 40 MB free on disk

# Capture a small test image (for motion detection)
def captureTestImage():
    command = "raspistill -w %s -h %s -t 0 -e bmp -o -" % (100, 75)
    imageData = StringIO.StringIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer

# Save a full size image to disk
def saveImage(width, height, diskSpaceToReserve):
    keepDiskSpaceFree(diskSpaceToReserve)
    time = datetime.now()
    filename = "capture-%04d%02d%02d-%02d%02d%02d.jpg" % (time.year, time.month, time.day, time.hour, time.minute, time.second)
    subprocess.call("raspistill -w 1296 -h 972 -t 0 -e jpg -q 10 -o %s" % filename, shell=True)
    print "Captured %s" % filename

# Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
    if (getFreeSpace() < bytesToReserve):
        for filename in sorted(os.listdir(".")):
            if filename.startswith("capture") and filename.endswith(".jpg"):
                os.remove(filename)
                print "Deleted %s to avoid filling disk" % filename
                if (getFreeSpace() > bytesToReserve):
                    return

# Get available disk space
def getFreeSpace():
    st = os.statvfs(".")
    du = st.f_bavail * st.f_frsize
    return du

# Get first image
image1, buffer1 = captureTestImage()

# Reset last capture time
lastCapture = time.time()

while (True):

    # Get comparison image
    image2, buffer2 = captureTestImage()

    # Count changed pixels
    changedPixels = 0
    for x in xrange(0, 100):
        for y in xrange(0, 75):
            # Just check green channel as it's the highest quality channel
            pixdiff = abs(buffer1[x,y][1] - buffer2[x,y][1])
            if pixdiff > threshold:
                changedPixels += 1

    # Check force capture
    if forceCapture:
        if time.time() - lastCapture > forceCaptureTime:
            changedPixels = sensitivity + 1

    # Save an image if pixels changed
    if changedPixels > sensitivity:
        lastCapture = time.time()
        saveImage(saveWidth, saveHeight, diskSpaceToReserve)

    # Swap comparison buffers
    image1 = image2
    buffer1 = buffer2

Here is my first motion detection movie:

It has condensed more than an hour's recording into about 38 seconds, by only capturing frames triggered by motion (and I cheated a little by keeping only every 5th frame out of the huge number of images captured).  See if you can identify the different types of tits (: })
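The "keep only every 5th frame" thinning can be scripted rather than done by hand.  Here is a sketch; deleting (rather than copying) the unwanted frames is an assumption of this version, so run it on a copy of the capture folder:

```python
import os

def keep_every_nth(folder, n=5):
    """Delete all but every nth JPEG in folder, keeping frames 0, n, 2n, ..."""
    jpgs = sorted(f for f in os.listdir(folder) if f.endswith(".jpg"))
    for i, name in enumerate(jpgs):
        if i % n != 0:
            os.remove(os.path.join(folder, name))
    return sorted(f for f in os.listdir(folder) if f.endswith(".jpg"))
```

Because the filenames encode the date and time, sorting them keeps the frames in capture order before thinning.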

And here's my second...
This time I used a threshold of 10 and a sensitivity of 150, and condensed about 30 minutes' images into about 25 seconds.  Enjoy the dancing boats!

In Summary (when the Pi is running headless and remotely):

Set the Field Scope to view something which is likely to have some, but not much movement to detect. Connect the RasPiCam to the eyepiece and turn on the Pi.

Run Xming

Log in with PuTTY

Run the VNCSession.vnc config file mentioned above.  This opens the Pi's desktop on the PC's monitor.

On the Pi's desktop, double-click Geany, the program text editor

Open the motion detection Python program and run it.  The LXTerminal will open and display a line for every image that has been recorded

It may be necessary to adjust the threshold and sensitivity variables in the program

When ready to view and move the images from the Pi to the PC, run WinSCP and copy the files over to the Capture folder on the PC

On the PC, run ImageJ, File -> Import -> Image Sequence, and direct this to the Capture folder on the PC.

Now you can see and play the sequence of images via ImageJ

Thursday, July 11, 2013

27. TightVNC - a Lightweight Server for a Remote Pi

I remembered that some time ago, Raspberry π IV Beginners told us how to set the Raspberry Pi up as a VNC (Virtual Network Computing) server.  VNC is a platform-independent "graphical desktop sharing system".  It uses the Remote Frame Buffer (RFB) protocol, in this case to transmit my Windows PC's keyboard and mouse events to the Raspberry Pi, and to transmit the Linux machine's (RasPi's) desktop and its responses back to the PC, thus allowing the Pi to be controlled remotely and 'headless', meaning without a mouse, keyboard or monitor.  'Network' includes the internet, so in theory this should be possible over the web, but we'll see later how we get on with that...  TightVNC uses a special type of encoding, tight encoding, which is useful for low-bandwidth connections.

So I looked this up on their new website, and what follows is really a reproduction of the excellent Raspberry π IV Beginners' instructions:

As the Pi is currently headless, and I have been using Xming and PuTTY (see previous post No 21) to communicate with it, there is no reason why I wouldn't be able to do all the following stuff through PuTTY:

You can see the Pi Cam on the top left, mounted in a Pimoroni Ltd Raspberry Pi Camera Mount, which is very useful.  

The camera is now without a scope, but it's also upside-down, so the video images have to be flipped both horizontally and vertically.  Incidentally, as my Pi has recently been having difficulty picking up WiFi in the area where it currently is, I thought that a short (0.25 m) USB 2.0 A Male to A Female cable would make it easier to orientate the Edimax Wireless Nano USB Adapter, and it works beautifully, with enough length to twist the Edimax to face the WiFi router.  See the Edimax at the bottom of the picture, connected to the short USB cable.

The next picture also shows the 25 cm cable going down to the Edimax at the bottom.  The Edimax is Duct-Taped, so you can't actually see it, but its broad face, the one which shows the flashing blue LED, is facing the router, which is downstairs.

On the Pi, after doing a 

sudo apt-get update

to ensure the Pi's software is up to date, I did a

sudo apt-get install tightvncserver

to install this version of VNC Server and any dependencies it may have, like Java etc.

Then to run this, I did a

tightvncserver

to get it running.  You are then asked for passwords, and to verify them.

I then entered

vncserver :1 -geometry 800x600 -depth 24

to set up a socket - in this case, number 1 (you can set up socket 2, 3 etc to start up more VNC Server sessions at the same time).
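For reference, each of these "socket" (display) numbers maps to its own TCP port: a VNC display :N listens on port 5900 + N, so the :1 session above is reachable on port 5901.  A trivial sketch of the mapping:

```python
# VNC display-to-port mapping: display :N listens on TCP port 5900 + N.
VNC_BASE_PORT = 5900

def vnc_port(display):
    return VNC_BASE_PORT + display

print(vnc_port(1))  # 5901 - the session started with "vncserver :1"
print(vnc_port(2))  # 5902 - a second, simultaneous session
```

This is why appending the display number to the Pi's IP address in the viewer (as below) finds the right session.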

Now, back on the PC, you need VNC installed, so the instructions said to go to the TightVNC download page (clicking the link there immediately downloads the Windows installer package to your computer) to download the appropriate TightVNC files (which are free, and open-source, as usual), including the tvnserver and tvnviewer programs.  When these were fully installed, I opened tvnviewer and the following came up:

As you can see above, I entered the Pi's IP address, appended with the socket number (1), and after clicking "Connect" I was asked for the password I had given above.

Then up pops the Pi's desktop:

You will see from the cursor label above (at the top left, in tiny print) that you can save this session to a .vnc file (which I named VNCSession.vnc), so that when you want to reconnect, clicking the icon on the PC will display the Pi's desktop, provided you are already logged on to the Pi with PuTTY.  Now use:

vncserver :1 -geometry 800x600 -depth 24

followed by starting 

tvnviewer

allowing you now to open LXTerminal as follows:

Here I have shown that I have double-clicked on the Pi's LXTerminal, which opens the 'pi@raspberrypi' window.  Then I opened the Command Prompt program on the PC, changed to the mplayer and netcat directory, and entered the command

nc -l -p 5001 | mplayer -fps 31 -cache 1024 -

This, of course, runs the netcat program (nc) listening on port 5001 and pipes whatever arrives there to the mplayer program, as before.  When I run the raspivid program in the pi@raspberrypi window, 

raspivid -vf -hf -t 86400000 -o - | nc <PC-IP-address> 5001

raspivid pipes its video output to netcat, and MPlayer magically opens up on the PC, showing the vertically flipped (-vf), horizontally flipped (-hf) video (because my PiCam is currently upside-down), sent to port 5001 at the PC's IP address, for a time of 86400000 milliseconds (24 hours).
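The timeout arithmetic is easy to check: raspivid's -t option takes milliseconds, so 24 hours works out to the value used in the command above:

```python
# raspivid's -t option takes milliseconds; 24 hours expressed in ms:
hours = 24
timeout_ms = hours * 60 * 60 * 1000
print(timeout_ms)  # 86400000
```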

And the result:
You can see that the video image has opened up to fill the screen (automagically, as Lady Ada would say), and the windows can be minimised.

From now on, provided you are logged in with PuTTY, simply clicking the .vnc icon on the PC, mentioned above, which I called VNCSession.vnc, brings up the RasPi desktop immediately.  
If you need to reboot the Pi, you may need to go back and run tvnviewer to set up the socket again.  

So the whole procedure for running raspivid on a remote headless Pi, is:

Log in with PuTTY

Run the VNCSession.vnc config file mentioned above.  This opens the TightVNC Viewer with my pre-defined parameters, to give the Pi's desktop on the PC's monitor.

Back to the PC, run the Windows Command Prompt program.  Then

cd mplayer and netcat 

to change directory.  Then run the command:

nc -l -p 5001 | mplayer -fps 31 -cache 1024 -

Finally, on the Pi desktop, open the LXTerminal, and enter the command

raspivid -vf -hf -t 86400000 -o - | nc <PC-IP-address> 5001

MPlayer should, after a minute or so, open up and show full-screen real-time video from the remote PiCam.

The verdict?  Well, since I made the above changes (putting the Edimax dongle on a 25 cm cable, and using VNC) the RasPi stays on the WiFi network much better - in fact - it hasn't dropped out yet, after several days of continuous use, except when the heat of the sun caused the Duct Tape to slip, allowing the Edimax to point in a different direction.  Once again - magic!