The aim here is to record only those frames where motion has caused detectable differences from previous frames. With this method, the many frames in which nothing moves are omitted, so a lot of image storage space is saved.
A very clever post was made by brainflakes on the Raspberry Pi Forum in May this year. The link is here: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=45235 and it works a treat. As you will see, the Python program uses the Python Imaging Library (PIL) among other imports. To install PIL, use the following on the Pi:
sudo aptitude install python-imaging-tk
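If you want to reassure yourself that PIL went in properly before running the motion detection program, a quick sanity check from Python is enough (this little snippet is just my suggestion, not part of brainflakes' program):

# Quick check that PIL is installed and working - not part of the motion detection program
from PIL import Image

im = Image.new("RGB", (100, 75))   # same size as the thumbnails used for motion detection
print im.size                      # should print (100, 75)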
I didn't need to install any other packages, as the StringIO, subprocess, os and time modules imported by the Python program already seemed to be on my system.
The Python program captures a thumbnail image, buffer1, with the captureTestImage() function and compares it with a second one, buffer2. If the number of changedPixels is greater than the pre-defined value of sensitivity, a corresponding high-resolution image is saved to the SD card. To determine whether a pixel has changed, its green channel value is compared with the pre-defined value of threshold. The green channel is the 'highest quality' channel because the human eye is more sensitive to green light than it is to red or blue, so green is usually allotted an extra binary digit (bit) for its values in a pixel.
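To make that concrete, here is the comparison in isolation - just a sketch of the core of the full program below, with thumb1.bmp and thumb2.bmp standing in as placeholder thumbnails:

# Sketch of the green-channel comparison only - the full program below does this 'for real'
from PIL import Image

threshold = 10      # how much the green value must change for a pixel to count as changed
sensitivity = 300   # how many changed pixels are needed to trigger a capture

buffer1 = Image.open("thumb1.bmp").load()   # placeholder 100 x 75 thumbnails
buffer2 = Image.open("thumb2.bmp").load()

changedPixels = 0
for x in xrange(100):
    for y in xrange(75):
        # index [1] is the green channel of an (R, G, B) pixel
        if abs(buffer1[x, y][1] - buffer2[x, y][1]) > threshold:
            changedPixels += 1

if changedPixels > sensitivity:
    print "Motion detected - a full-size image would be saved here"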
So there are two variables you can tweak - "threshold" and "sensitivity". The values I used are in the code below; I arrived at them by trial and error, aiming for motion detection that was neither too sensitive - triggering an exposure for anything that vaguely moved and capturing far too many images - nor too insensitive, leaving long gaps between exposures. pageauc is working towards a web-based graphical user interface (GUI) in which you can vary all the camera's parameters and put the images on the web, so I'm going to keep a close eye on him at https://www.youtube.com/watch?v=ZuHAfwZlzqY. His explanatory videos are excellent.
Here's my experimental arrangement:
The bowl of apples, pears and nectarines isn't part of the experimental equipment, but can be useful for refreshment purposes.
The camera board is held on to the eyepiece cover with wires, as before. You can see them sticking out.
And now for an up-close...
You may not recognise my Raspberry Pi in its new clothes - a Gamble Mix 'N Match case from ModMyPi. You get it cheap (£2.49), but you take a chance on which colours you get - I think my blue top and black bottom look snazzy, and it sits quite comfortably on top of the Mighty Midget field scope. Thank goodness it wasn't pink! Note that I'm still using the 25 cm cable for more versatile positioning of the WiFi dongle.
Once again, I used the very wonderful ImageJ for
- importing the images
- trimming them
- rotating and flipping them (the Python program won't take the -vf and -hf commands; see the sketch after this list for a PIL alternative)
- image stabilization
- optimizing image contrast and brightness and
- labelling images with their filenames, which reflect the date and time they were taken.
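By the way, since the Python program won't take the -vf and -hf commands, another way to do the rotating and flipping - if you'd rather not use ImageJ for that step - would be to batch-process the saved captures with PIL afterwards. This is only a sketch (it assumes the captures, named capture-....jpg as in the program below, are in the current folder):

# Possible alternative to flipping in ImageJ: batch-flip the saved captures with PIL
import glob
from PIL import Image

for name in sorted(glob.glob("capture-*.jpg")):
    im = Image.open(name)
    im = im.transpose(Image.FLIP_TOP_BOTTOM)   # vertical flip
    im = im.transpose(Image.FLIP_LEFT_RIGHT)   # horizontal flip (both together = a 180 degree rotation)
    im.save(name, quality=90)                  # overwrite in place; adjust quality to taste

Anyway, here is the motion detection program itself: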
import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image

# Motion detection settings:
# Threshold (how much a pixel has to change by to be marked as "changed")
# Sensitivity (how many changed pixels before capturing an image)
# ForceCapture (whether to force an image to be captured every forceCaptureTime seconds)
threshold = 10
sensitivity = 300
forceCapture = True
forceCaptureTime = 60 * 60  # Once an hour

# File settings
saveWidth = 1280
saveHeight = 960
diskSpaceToReserve = 40 * 1024 * 1024  # Keep 40 MB free on disk

# Capture a small test image (for motion detection)
def captureTestImage():
    command = "raspistill -w %s -h %s -t 0 -e bmp -o -" % (100, 75)
    imageData = StringIO.StringIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer

# Save a full size image to disk
def saveImage(width, height, diskSpaceToReserve):
    keepDiskSpaceFree(diskSpaceToReserve)
    now = datetime.now()
    # Prefix with "capture-" so keepDiskSpaceFree() can find and delete old images
    filename = "capture-%04d%02d%02d-%02d%02d%02d.jpg" % (now.year, now.month, now.day, now.hour, now.minute, now.second)
    # Note: this call uses a fixed 1296 x 972 size rather than the width/height arguments
    subprocess.call("raspistill -w 1296 -h 972 -t 0 -e jpg -q 10 -o %s" % filename, shell=True)
    print "Captured %s" % filename

# Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
    if (getFreeSpace() < bytesToReserve):
        for filename in sorted(os.listdir(".")):
            if filename.startswith("capture") and filename.endswith(".jpg"):
                os.remove(filename)
                print "Deleted %s to avoid filling disk" % filename
                if (getFreeSpace() > bytesToReserve):
                    return

# Get available disk space
def getFreeSpace():
    st = os.statvfs(".")
    du = st.f_bavail * st.f_frsize
    return du

# Get first image
image1, buffer1 = captureTestImage()

# Reset last capture time
lastCapture = time.time()

while (True):

    # Get comparison image
    image2, buffer2 = captureTestImage()

    # Count changed pixels
    changedPixels = 0
    for x in xrange(0, 100):
        for y in xrange(0, 75):
            # Just check green channel as it's the highest quality channel
            pixdiff = abs(buffer1[x, y][1] - buffer2[x, y][1])
            if pixdiff > threshold:
                changedPixels += 1

    # Check force capture
    if forceCapture:
        if time.time() - lastCapture > forceCaptureTime:
            changedPixels = sensitivity + 1

    # Save an image if pixels changed
    if changedPixels > sensitivity:
        lastCapture = time.time()
        saveImage(saveWidth, saveHeight, diskSpaceToReserve)

    # Swap comparison buffers
    image1 = image2
    buffer1 = buffer2
Here is my first motion detection movie:
It has condensed more than an hour's recording into about 38 seconds by capturing only the frames triggered by motion (and I cheated a little by keeping only every 5th frame out of the huge number of images captured). See if you can identify the different types of tits (: })
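If you want to do the same sort of thinning - keeping only every 5th frame - before importing into ImageJ, a few lines of Python will do it. A rough sketch, assuming the captures are in the current folder and the kept frames go into a new subfolder called kept:

# Keep only every 5th capture - a one-off thinning step, not part of the motion detection program
import glob
import os
import shutil

os.mkdir("kept")                              # destination folder for the kept frames
files = sorted(glob.glob("capture-*.jpg"))    # the filenames sort into date/time order
for i, name in enumerate(files):
    if i % 5 == 0:                            # keep the 1st, 6th, 11th... frame
        shutil.copy(name, "kept")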
And here's my second...
In Summary (when the Pi is running headless and remotely):
Set the Field Scope to view something which is likely to have some, but not too much, movement to detect. Connect the RasPiCam to the eyepiece and turn on the Pi.
Run Xming
Log in with PuTTY
Run the VNCSession.vnc config file mentioned above. This opens the Pi's desktop on the PC's monitor.
On the Pi's desktop, double-click Geany, the program text editor
Open MotionDetect.py and run it. The LXTerminal will open and display a line for every image that has been recorded
It may be necessary to adjust the variables threshold and sensitivity in the program
When ready to view and move the images from the Pi to the PC, run WinSCP and copy the files over to the Capture folder on the PC
On the PC, run ImageJ, File -> Import -> Image Sequence, and direct this to the Capture folder on the PC.
Now you can see and play the sequence of images via ImageJ
Super!!