Here are some of the results of my work on the GaugeCam water level height measurement project.
The problem: Measurements are accurate as long as the target stays fixed relative to the camera, but if the camera gets bumped, the target sinks or tilts in the water, or anything else alters the geometric relationship between the camera and the target, the measurements come out wrong.
The solution: Adjust the calibration to accommodate some of the changes between the camera and the target. We cannot handle big changes, but we can handle those that most commonly occur in the field.
We have finally gotten around to working on the problem of handling motion between the camera and the calibration target. After the system has been calibrated, if the camera has moved, we need to make adjustments so our water-level search does not measure improperly. We have finished the first step in the process. If we know the nominal position of a calibration feature in an image such as that shown in Figure 1, we can find the change in position of the target in a subsequent image such as the one shown in Figure 2. Figure 3 shows the reference image overlaid with the moved image. As can be seen, they do not line up. All we need to do is rotate and translate the image so the targets in the moved image are aligned with the reference image; then we can perform the search accurately as before. Figure 4 shows the moved image overlaid on the reference image after the adjustment has taken place.
The next step will be to integrate this into our GRIM software, followed by integration into our web service.
Figure 1. Reference image
Figure 2. Target find of moved image
Figure 3. Unaligned images
Figure 4. Adjusted image overlaid on the reference image
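The alignment step described above amounts to fitting a two-dimensional rigid transform (rotation plus translation) between matched fiducial positions in the reference and moved images. Here is a minimal pure-Python sketch of that fit — an illustration of the idea, not the GRIM implementation; the function names and point lists are ours:

```python
import math

def estimate_rigid_transform(ref_pts, moved_pts):
    """Least-squares rotation + translation mapping moved_pts onto ref_pts.

    Both inputs are equal-length lists of (x, y) pairs, matched by index
    (each fiducial found in the moved image paired with its reference
    position). Returns (theta, tx, ty).
    """
    n = len(ref_pts)
    # Centroids of each point set
    cx_r = sum(p[0] for p in ref_pts) / n
    cy_r = sum(p[1] for p in ref_pts) / n
    cx_m = sum(p[0] for p in moved_pts) / n
    cy_m = sum(p[1] for p in moved_pts) / n
    # Accumulate cross- and dot-products of the centered points;
    # their ratio gives the optimal rotation angle.
    s_cross = s_dot = 0.0
    for (xr, yr), (xm, ym) in zip(ref_pts, moved_pts):
        ar, br = xr - cx_r, yr - cy_r
        am, bm = xm - cx_m, ym - cy_m
        s_cross += am * br - bm * ar
        s_dot += am * ar + bm * br
    theta = math.atan2(s_cross, s_dot)
    # Translation that carries the rotated moved centroid onto the reference centroid
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_r - (c * cx_m - s * cy_m)
    ty = cy_r - (s * cx_m + c * cy_m)
    return theta, tx, ty

def apply_transform(pt, theta, tx, ty):
    """Rotate a point by theta about the origin, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = pt
    return (c * x - s * y + tx, s * x + c * y + ty)
```

Once the transform is estimated from the fiducials, applying it to the moved image brings the targets back into registration with the reference image, and the water-level search proceeds as before.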
Here is a brief video that describes the current state of the GaugeCam Remote Image Manager (GRIM) software GUI. Most of these changes are “usability” changes, but there will be more changes in the near future to accommodate some of the advanced image processing techniques to improve on the already impressive suite that is currently in place.
Here are some images that have failed in the past, but that can now be handled by our new (not yet released) waterline finding algorithm. We have some more work to do, particularly on images with shadows in them, but we are definitely moving up the curve in terms of our ability to handle more difficult images. The last dirty image is particularly impressive. We will add a few more improvements to this, then start working on handling minor camera movements that cause problems for the find algorithm.
Dirty Image 1
Dirty Image 2
Dirty Image 3
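The new find algorithm is more involved than we can show here, but the core intuition — the waterline appears as a strong horizontal edge — can be sketched as a per-column vertical-gradient search with a median vote across columns. This is a simplified stand-in, not the released algorithm; the grayscale image is assumed to be a list of rows of pixel values:

```python
def find_waterline_row(img):
    """Estimate the waterline row in a grayscale image (list of rows).

    For each column, pick the row with the largest vertical intensity
    change, then take the median across columns so that a few columns
    confused by dirt or shadows do not throw off the estimate.
    """
    h, w = len(img), len(img[0])
    rows = []
    for x in range(w):
        best_r, best_g = 0, -1
        for y in range(1, h):
            g = abs(img[y][x] - img[y - 1][x])
            if g > best_g:
                best_g, best_r = g, y
        rows.append(best_r)
    rows.sort()
    return rows[w // 2]  # median row is robust to outlier columns
```

A median (rather than a mean) is one simple way to tolerate the dirty and shadowed regions discussed above, since only a majority of columns need to find the true edge.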
Even though the blog has been fairly quiet lately, a lot of work has been performed to improve the capability of the GaugeCam water level measurement camera system. We will now start posting more frequently to discuss some of these improvements and talk about planned future improvements. There are three categories of improvements where we have made significant advancements. These are:
- The real-time web interface – This is the movement of images from the camera to the web server, the application of the algorithm to create measurements, and the presentation of graphics and measurements on the internet. You can see some of the results of this work on this web page that shows the water level in a tidal marsh on the North Carolina coast as measured by one of our cameras. Andrew is responsible for our software systems and infrastructure.
- Camera, remote power, mounting, and target hardware – One of our hardest tasks is to develop a truly remote camera system that generates its own power, withstands the weather, provides its own light at night, is physically stable, etc., etc. François is responsible for this in addition to his maintenance of the lab and our test cameras. Up until now, his improvements to the hardware have taken place behind the scenes, but expect to see a dramatically improved set of hardware on this blog in the very near future as we move to our first prototype camera production run.
- Vision algorithms – Up until the marsh camera was put into place and started shoveling images out to our web site, the requirements of the image processing software were not really well known because there were no images with which to work other than what we gathered in the lab. Therefore, we made our best guess at what was required, wrote the algorithms, and deployed them. They really work quite well, but now that we have a “real” and continuing stream of images, we know a lot more about what the vision algorithms will have to handle. I (Ken) am responsible for making the improvements to handle things like fog (see Image 1 below) and dirty high water marks (see Image 2 below). I will write about these and other improvements to the vision algorithms as they are developed and deployed.
Image 1. Fog
Image 2. High water line
The following is a video of our new auto-calibration capability. Previously, calibrating the GaugeCam water measurement software took a big effort because it was necessary to specify a region of interest for each of the calibration target fiducials. In addition, the previous algorithm struggled with degraded images. This video demonstrates the new ease of use and calibration robustness even with bad images.
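As a simplified illustration of how the fiducials can be located automatically — without hand-drawn regions of interest — here is a connected-components pass over a thresholded binary image that returns blob centroids. This is a pure-Python sketch of the general technique only; our production algorithm is more involved and more robust to degraded images:

```python
from collections import deque

def find_fiducial_centroids(img, min_area=3):
    """Locate dark blobs in a thresholded image and return their centroids.

    `img` is a list of rows, each a list of 0/1 values (1 = fiducial pixel).
    Blobs smaller than `min_area` pixels are discarded as noise.
    Returns a list of (x, y) centroids in scan order.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and not seen[y][x]:
                # Breadth-first flood fill over the 4-connected neighborhood
                queue = deque([(y, x)])
                seen[y][x] = True
                pixels = []
                while queue:
                    py, px = queue.popleft()
                    pixels.append((py, px))
                    for ny, nx in ((py - 1, px), (py + 1, px),
                                   (py, px - 1), (py, px + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    mean_y = sum(p[0] for p in pixels) / len(pixels)
                    mean_x = sum(p[1] for p in pixels) / len(pixels)
                    centroids.append((mean_x, mean_y))
    return centroids
```

Once every fiducial centroid is found automatically, no per-fiducial region of interest needs to be specified by the user, which is what makes the one-click calibration in the video possible.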
As we move toward the release of our beta software, I thought it would be a good time to reflect on our progress so far. As you would see if you cared to review this entire blog, we originally started with the idea that we could measure stream stage (water depth) using a camera. Why was this an attractive method when researchers and government agencies (USGS) already use a number of other methods, such as transducers and bubbler gauges? Well, from our collective experience, we know that field measurements are often erroneous due to instrument drift, infrequent or incorrect instrument calibration, or technician inexperience, just to name a few reasons. We felt the GaugeCam concept could address these error sources, while also providing a way to visually verify measurements.
After completing a brief proof of concept in the laboratory, we deployed a camera in the field near Pullen Park, Raleigh, NC. We chose this approach because we anticipated that the field application would involve many challenges we would never address in a lab-only study. We were correct! Our camera and communications system, which worked beautifully in the lab setting, was not as robust in the field as we had hoped. We were able to compile a list of issues associated with our field application, which we have addressed in the beta version of our software. The field application gave us impetus to develop a functional daemon for processing images in real time on the GaugeCam server. Additionally, we were able to gather data for comparison with USGS stream stage data measured at Pullen Park.
While the Pullen Park deployment was underway, we stayed busy in the lab, assessing the capabilities of our camera and software. The camera was tested at a variety of distances and angles relative to the water level bench. We were encouraged by the results but knew that to minimize the need for highly experienced technicians, we would need to automate our calibration process. To test the automated calibration, we have modified the water level bench using a white background with horizontal black bars substituting for water level. This was required to reduce the noise introduced by the water meniscus. Once the automatic calibration is verified, we will repeat our earlier study of water level detection from a variety of distances and angles. We will also deploy the system at alpha sites, which have already been identified. The transition from manual calibration to automatic calibration has been a little more difficult than anticipated (as seen in several recent posts). I feel we are encountering a typical challenge for machine vision projects: the abilities of the human eye are very difficult to emulate using an algorithm!
We continue to work on automated calibration, as Ken has recently described. In the lab we have installed thick horizontal lines on the bench background at known positions relative to our calibration fiducials. We are evaluating the effects of perspective and camera posture angle with the intention of determining limitations associated with this calibration method. Below is an image at an extreme angle that clearly illustrates lens distortion, effects of perspective and effects of posture angle. There are known machine vision techniques to minimize these effects, which we may consider implementing in our beta software package.
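One of the known techniques alluded to above is correcting radial lens distortion. A minimal sketch, assuming the common one-term radial model (the coefficient `k1` and image-center values below are illustrative, not measured for our camera; the inverse has no closed form, so a fixed-point iteration is used):

```python
def distort(x, y, cx, cy, k1):
    """One-term radial model: map an ideal point to its distorted position.

    (cx, cy) is the distortion center; k1 scales the radial term.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    f = 1.0 + k1 * r2
    return cx + dx * f, cy + dy * f

def undistort(xd, yd, cx, cy, k1, iters=10):
    """Invert the one-term radial model by fixed-point iteration.

    Each pass re-estimates the radial factor from the current guess and
    divides it back out; for small k1*r^2 this converges quickly.
    """
    x, y = xd, yd
    for _ in range(iters):
        dx, dy = x - cx, y - cy
        f = 1.0 + k1 * (dx * dx + dy * dy)
        x = cx + (xd - cx) / f
        y = cy + (yd - cy) / f
    return x, y
```

Applying such a correction before calibration would remove the curved appearance of straight bench lines seen in the image above; whether we fold this into the beta package is still an open question.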
Addition of functionality has been cut off for the Version 0.4 Beta release of the GaugeCam Remote Image Manager (GRIM) software. Any new functionality will be relegated to future releases. That being said, this release of the software holds everything necessary to accurately measure water level in streams, lakes, and other bodies of water. We expect to be able to put up the release version of the software for free download before the end of the year. We will also put up documentation, images, and a video or two that describe the installation and use of the software in the next few weeks. The functionality of the software includes the following:
- Both manual and automatic methods to calibrate a scene to convert pixel positions to inches/feet/meters.
- Batch processing of all the images in a directory, calculating a water height for each image and producing a .csv file with the results for all the images.
- Adjustable image processing parameters to deal with noisy images.
- Configuration files to quickly switch between images taken at different localities.
- Setup file preparation for images sent to a website to be processed in real time and displayed on the internet.
- Test images to verify the camera has not been bumped (which throws off the calibration) since the last time it was calibrated.
- Saved result images with color overlays of the line position and the points at the water level that were successfully found.
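As a simplified sketch of the pixel-to-unit conversion in the first bullet above, assuming fiducials at known heights and a roughly fronto-parallel camera (the actual GRIM calibration also accounts for perspective), a least-squares line can map a pixel row to a water height:

```python
def pixel_row_to_height(row, cal):
    """Convert a pixel row to a water height in world units.

    `cal` is a list of (pixel_row, world_height) pairs taken from the
    calibration fiducials. Fits height = a*row + b by least squares,
    then evaluates the fit at `row`. Heights here are illustrative
    (e.g. meters above a datum).
    """
    n = len(cal)
    mean_r = sum(r for r, _ in cal) / n
    mean_h = sum(h for _, h in cal) / n
    num = sum((r - mean_r) * (h - mean_h) for r, h in cal)
    den = sum((r - mean_r) ** 2 for r, _ in cal)
    a = num / den          # units per pixel row (negative: rows grow downward)
    b = mean_h - a * mean_r
    return a * row + b
```

For the batch-run feature, a loop over the image files in a directory would apply the waterline find, pass the found row through a mapping like this, and append one line per image to the .csv results file.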
We would be happy to work with anyone who might find this software useful. We have started the lab tests of the software at the NCSU BAE labs. We are working with a third-party vendor to develop a solar powered remote camera that transmits its images via cellphone to our website for processing. That camera and an internet processing service should be available to whoever needs it by the end of the second quarter of 2011. If you have any questions about this and/or would like to participate with us in the testing or perform testing on your own, please do not hesitate to contact us.
Andrew, François, and I all met this Saturday for our regular bi-weekly meeting of the GaugeCam team at François’ NCSU BAE lab. We talked about a lot of things and were able to perform the first test of the automatic vision calibration technique. François measured the exact position of the calibration dots and the water level with the laser system he and Troy built for that purpose. We captured a couple of images of the test apparatus and calibration dots with the (really cheesy) webcam on my laptop. While we were at the lab, we got VERY good results.
I evaluated those images and some others when I got home and found that when the camera is not close to the level of the water, the calibration gets thrown slightly off because the viewing angle distorts the dots. We could do the math to back out those distortions, but it is much easier to change the shape of the fiducial. Currently we use a circular dot. Our short-term solution will be to change the fiducial shape to horizontal lines on a vertical ruler. That should work very well for the time being, but we will eventually need to go to a checkerboard calibration target and template matching to find the square intersections. I will discuss the benefits of such an approach when we get to that in a future version of the GaugeCam image processing software.
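To illustrate the template-matching idea mentioned above — not our eventual implementation, which would need sub-pixel refinement — here is a brute-force sum-of-squared-differences matcher in pure Python, with grayscale images stored as lists of rows. Sliding a small checkerboard-corner template over the image and taking the lowest-difference position finds a square intersection:

```python
def match_template(img, tpl):
    """Return the (row, col) offset where `tpl` best matches `img`.

    Exhaustively slides the template over every valid position and
    scores each with the sum of squared pixel differences (SSD);
    the smallest score wins. A checkerboard-corner template would be
    a small 2x2 light/dark pattern like [[0, 255], [255, 0]].
    """
    ih, iw = len(img), len(img[0])
    th, tw = len(tpl), len(tpl[0])
    best, best_score = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum((img[r + i][c + j] - tpl[i][j]) ** 2
                        for i in range(th) for j in range(tw))
            if score < best_score:
                best_score, best = score, (r, c)
    return best
```

The appeal of checkerboard intersections over circular dots is exactly the problem described above: an X-shaped corner stays an X-shaped corner under a change of viewing angle, whereas a circle smears into an ellipse and shifts its apparent centroid.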
Andrew continued his work on the web interface/database elements of the software and we discussed some of the commercialization issues. GaugeCam plans to provide an Alpha version of the software to NCSU for one of their research projects. We are in the process of identifying 6-8 beta partners with whom we hope to work when a product offering is available. We hope the Alpha program will start sometime this fall with the Beta program to start in late spring or early fall.