Even though the blog has been fairly quiet for the last while, a great deal of work has gone into improving the capability of the GaugeCam water level measurement camera system. We will now start posting more frequently to discuss some of these improvements and talk about planned future work. There are three categories in which we have made significant advancements. These are:
- The real-time web interface – This covers the movement of images from the camera to the webserver, the application of the algorithm to create measurements, and the presentation of graphics and measurements on the internet. You can see some of the results of this work on this web page, which shows the water level in a tidal marsh on the North Carolina coast as measured by one of our cameras. Andrew is responsible for our software systems and infrastructure.
- Camera, remote power, mounting, and target hardware – One of our hardest tasks is to develop a truly remote camera system that generates its own power, withstands the weather, provides its own light at night, is physically stable, and so on. François is responsible for this in addition to his maintenance of the lab and our test cameras. Up until now, his improvements to the hardware have taken place behind the scenes, but expect to see a dramatically improved set of hardware on this blog in the very near future as we move to our first prototype camera production run.
- Vision algorithms – Until the marsh camera was put into place and started shoveling images out to our web site, the requirements of the image processing software were not really well known because there were no images to work with other than what we gathered in the lab. Therefore, we made our best guess at what was required, wrote the algorithms, and deployed them. They work quite well, but now that we have a “real” and continuing stream of images, we know a lot more about what the vision algorithms will have to handle. I (Ken) am responsible for making the improvements to handle things like fog (see Image 1 below) and dirty high water marks (see Image 2 below). I will write about these and other improvements to the vision algorithms as they are developed and deployed.
Image 1. Fog
Image 2. High water line
GaugeCam began to investigate ways to actuate weirs, pumps, lighting, and other equipment at our remote camera locations a couple of years ago. As part of the investigation, we purchased several Arduino micro-controllers. At the time, the NCSU BAE department used expensive PLCs (programmed in ladder logic) to control their equipment out in the field. A low-end system typically cost them around $600. The list price for an Arduino micro-controller with similar functionality is between $35 and $80 (USD). The Arduinos we use have between 14 and 64 digital I/O, several microsecond timers, and a staggering array of optional add-ons. They are programmed in C, so there is not much of a learning curve.
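To give a flavor of how little code is involved, here is a minimal sketch along the lines of what such a controller runs, assuming a relay wired to a hypothetical digital pin; the pin number and timing are illustrative only:

```cpp
// Minimal sketch: pulse a relay on digital pin 7 once per second and
// timestamp each pulse with the microsecond timer. The pin is a guess.
const int RELAY_PIN = 7;  // hypothetical relay wiring

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  unsigned long t = micros();     // the microsecond timer mentioned above
  digitalWrite(RELAY_PIN, HIGH);  // energize the relay
  delay(100);                     // hold for 100 ms
  digitalWrite(RELAY_PIN, LOW);
  Serial.println(t);              // log when the pulse started
  delay(900);
}
```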
We have a variety of micro-controllers that work with GaugeCam cameras and control light position on some of our lab setups. When Dr. Birgand saw this, he wondered whether we could use the much less expensive micro-controllers to run some pumps and other equipment both in the lab and out in the field. He put one of the lab assistants to work learning to program the equipment, and now, several months later, they have developed a couple of solutions that are more flexible, smaller, lighter, and more capable, not to mention much cheaper than the previous solution.
This is an example of one of several synergies that have occurred as a result of the collaboration between GaugeCam and the NCSU BAE department. Even though we do not plan to use Arduinos as actuators on our remote camera products (we are several months away from an exciting announcement in that regard), the research has contributed to the effectiveness and capability of the lab in the performance of its on-going research.
Hurricane Irene is expected to pass through North Carolina in the next day or so. GaugeCam is located in Raleigh, which is expected to receive one to two inches of rain and winds in the range of 30-40 mph. The tidal marsh camera could get hit hard. You can see from François’s previous post that it is directly in the hurricane’s path and could experience up to a foot of rain and 100+ mph winds. François sent his team out to the marsh to temporarily remove the camera. Thanks for the help, guys!
CORRECTION!: Troy just popped me a note to let me know the team has MOVED the camera to a higher location in the marsh where it will weather the storm. It will be interesting to see how all this works out.
Troy has the GRIM (GaugeCam Remote IMage processor) program in his hands and is in the throes of final testing. We plan to release it as soon as we get the documentation together and fix any last-minute bugs. We are pretty pleased with how it turned out. There is one known issue, but it is not a show stopper. Troy found that if the image exhibits extreme fish-eye, the calibration is thrown off a little. This is not a problem with the calibration itself; rather, it is a problem with the positioning of the fiducials in the scene. The solution is to use optics that minimize fish-eye, which is not too much of a challenge. We believe this software is sufficiently robust to provide the level of accuracy we require for the research (and refereed journal articles) we are writing. We will discuss the release schedule during our Saturday meeting and report back here.
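For what it's worth, radial distortion can also be corrected in software before the fiducials are located. GRIM does not do this today, but here is a minimal sketch of the idea using OpenCV, with placeholder intrinsics standing in for a real one-time lens calibration (e.g., from cv::calibrateCamera with a checkerboard):

```cpp
// Sketch only: correct radial lens distortion before locating fiducials.
// cameraMatrix and distCoeffs would come from a one-time lens calibration;
// the values below are placeholders, not measurements.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat raw = cv::imread("scene.png");           // hypothetical input image
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        800, 0, raw.cols / 2.0,
        0, 800, raw.rows / 2.0,
        0, 0, 1);                                    // placeholder intrinsics
    cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) <<
        -0.30, 0.10, 0, 0, 0);                       // placeholder radial terms
    cv::Mat corrected;
    cv::undistort(raw, corrected, cameraMatrix, distCoeffs);
    cv::imwrite("scene_undistorted.png", corrected); // feed this to calibration
    return 0;
}
```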
TWO NEW PROJECTS FOR GAUGECAM!!!
Really Cheap GUI Programmable Microcontroller – The first is what we will call the Arduino controller project. One of François’s students has designed a system that moves water through a series of tubes for measurement by a spectroscope. The system needs to be flushed, and water needs to be moved according to a very specific pattern. The student built a fairly sophisticated control system based on a Programmable Logic Controller (PLC). The problem is that several more of these systems are required, and the PLCs cost $750 each. I looked at the system and commented that I knew of a low-cost (<$70) micro-controller that could replace the PLC. The microcontroller we have chosen is the Arduino Mega 2560. It features 54 digital I/O (14 PWM) and 6 analog I/O. We could just write the program in the controller and give it to NCSU, but there is a better long-term solution. GaugeCam would like to write an easy-to-use, GUI-based program that researchers can use to develop control code for their projects without having to learn ladder logic for a PLC or C/C++ for a microcontroller. We will need to write a program that runs in the microcontroller and a program that runs on a Windows or Linux PC. I will lay that program out more fully in the coming weeks. François bought the microcontroller this week, so we should be able to start on the project as soon as we get GRIM and a few other things off our plate. We are aiming at a <$100 solution including the development software. Tentatively, we are calling the new program GaugeCam Arduino Programmer (GAP).
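To make the replacement concrete, a control pattern like the student's flush-and-sample cycle reduces to a handful of lines on the Arduino. This is only an illustrative sketch; the pin assignments and timings are invented, not taken from the actual system:

```cpp
// Hypothetical pump/valve sequence on an Arduino Mega: flush the tubing,
// then draw a sample toward the spectroscope. Pins and durations are
// invented for illustration; a real system would match its own wiring.
const int FLUSH_VALVE = 22;
const int SAMPLE_VALVE = 24;
const int PUMP = 26;

// Open one valve, run the pump for a prescribed time, then close up.
void runStep(int valve, unsigned long ms) {
  digitalWrite(valve, HIGH);
  digitalWrite(PUMP, HIGH);
  delay(ms);
  digitalWrite(PUMP, LOW);
  digitalWrite(valve, LOW);
}

void setup() {
  pinMode(FLUSH_VALVE, OUTPUT);
  pinMode(SAMPLE_VALVE, OUTPUT);
  pinMode(PUMP, OUTPUT);
}

void loop() {
  runStep(FLUSH_VALVE, 30000UL);   // 30 s flush
  runStep(SAMPLE_VALVE, 10000UL);  // 10 s sample draw
  delay(600000UL);                 // wait 10 min before the next cycle
}
```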
Really Cheap Machine Vision System – This second project has been in the works for a while. We made a lot of progress on it when it was called KamVu. For a number of reasons, we have dusted the program back off and started working on it again. The idea is to develop a full-blown vision system that runs either on a Linux or Windows PC (netbook, desktop, or laptop) or on a BeagleBoard. With the BeagleBoard option, we believe it should be possible to do sophisticated machine vision for <$200 if you use a webcam and <$350 if you use an industrial machine vision camera with trigger inputs, strobe outputs, and an industrial housing. We are working with The Imaging Source on this project, as they are the leading provider of inexpensive but capable industrial machine vision cameras. This vision system will be aimed at those who need a cheap vision solution but are not programmers. At least initially, the idea will not be to sell the systems, but to provide a free, probably open source program to develop applications and to show people how to buy the system and put it together in a step-by-step fashion. We hope to do our own Debian-based Linux distribution specifically for the BeagleBoard so that a user can just install it, plug in a camera, and start developing. We already have a Windows installer that discovers the digital I/O and cameras and is getting close to being able to develop simple presence/absence applications. Here is a screen shot of the application in its current state of development.
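For readers wondering what a presence/absence application boils down to, here is a rough sketch of one common approach: difference a live frame against an empty-scene reference and count the changed pixels. The file names and thresholds are illustrative; this is not the code behind our installer:

```cpp
// Rough presence/absence sketch: compare the current frame against a
// reference image of the empty scene and count how many pixels changed.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat reference = cv::imread("empty_scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame = cv::imread("current_frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat diff, mask;
    cv::absdiff(frame, reference, diff);           // per-pixel change
    cv::threshold(diff, mask, 40, 255, cv::THRESH_BINARY);
    int changed = cv::countNonZero(mask);
    bool present = changed > 500;                  // tune for the application
    std::cout << (present ? "present" : "absent") << std::endl;
    return 0;
}
```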
As we move toward the release of our beta software, I thought it would be a good time to reflect on our progress so far. As you would see if you cared to review this entire blog, we originally started with the idea that we could measure stream stage (water depth) using a camera. Why was this an attractive method when researchers and government agencies (USGS) already use a number of other methods, such as transducers and bubbler gauges? Well, from our collective experience, we know that field measurements are often erroneous due to instrument drift, infrequent or incorrect instrument calibration, or technician inexperience, just to name a few reasons. We felt the GaugeCam concept could address these error sources, while also providing a way to visually verify measurements.
After completing a brief proof of concept in the laboratory, we deployed a camera in the field near Pullen Park, Raleigh, NC. We chose this approach because we anticipated that the field application would involve many challenges we would never encounter in a lab-only study. We were correct! Our camera and communications system, which worked beautifully in the lab setting, was not as robust in the field as we had hoped. We were able to compile a list of issues associated with our field application, which we have addressed in the beta version of our software. The field application gave us impetus to develop a functional daemon for processing images in real time on the GaugeCam server. Additionally, we were able to gather data for comparison with USGS stream stage data measured at Pullen Park.
While the Pullen Park deployment was underway, we stayed busy in the lab, assessing the capabilities of our camera and software. The camera was tested at a variety of distances and angles relative to the water level bench. We were encouraged by the results but knew that to minimize the need for highly experienced technicians, we would need to automate our calibration process. To test the automated calibration, we modified the water level bench, using a white background with horizontal black bars substituting for the water level. This was required to reduce the noise introduced by the water meniscus. Once the automatic calibration is verified, we will repeat our earlier study of water level detection from a variety of distances and angles. We will also deploy the system at alpha sites, which have already been identified. The transition from manual calibration to automatic calibration has been a little more difficult than anticipated (as seen in several recent posts). I feel we are encountering a typical challenge for machine vision projects: the abilities of the human eye are very difficult to emulate with an algorithm!
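To illustrate why the black-bar target helps, here is a minimal sketch of one way automatic calibration might locate the bars: average each pixel row of the target image and flag rows darker than a threshold. This is a simplification of what our software actually does, and the file name and threshold are illustrative:

```cpp
// Locate horizontal black bars on a white background by row intensity.
// Real code would need sub-pixel refinement and outlier rejection.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat gray = cv::imread("calib_target.png", cv::IMREAD_GRAYSCALE);
    for (int r = 0; r < gray.rows; ++r) {
        double rowMean = cv::mean(gray.row(r))[0]; // average intensity of row r
        if (rowMean < 100)                          // dark row => part of a bar
            std::cout << "bar pixel row: " << r << std::endl;
    }
    return 0;
}
```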
The bi-weekly GaugeCam meeting went particularly well on Saturday. Troy and François made plenty of good suggestions about how I should improve the software and fix some bugs. One of the coolest was the addition of an overlay scale. It is not finished yet, but from the image below, you can get the general idea.
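For the curious, drawing such an overlay is straightforward. This is not GRIM's actual drawing code, just a sketch of the idea with OpenCV; the tick spacing, position, and color are invented:

```cpp
// Sketch: render a vertical scale overlay with tick marks every 10 pixels
// and a label at every 5th tick. Placement values are illustrative only.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::Mat img = cv::imread("scene.png");
    int x = 50;                                   // x position of the scale
    for (int y = 0; y < img.rows; y += 10) {
        int len = (y % 50 == 0) ? 12 : 6;         // longer tick every 5th mark
        cv::line(img, cv::Point(x, y), cv::Point(x + len, y),
                 cv::Scalar(0, 255, 255), 1);     // yellow tick mark
        if (y % 50 == 0)
            cv::putText(img, std::to_string(y), cv::Point(x + 15, y + 4),
                        cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(0, 255, 255));
    }
    cv::imwrite("scene_overlay.png", img);
    return 0;
}
```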
Feature additions have been frozen for the Version 0.4 Beta release of the GaugeCam Remote Image Manager (GRIM) software; any new functionality will be relegated to future releases. That being said, this release of the software holds everything necessary to accurately measure water level in streams, lakes, and other bodies of water. We expect to be able to put up the release version of the software for free download before the end of the year. We will also put up documentation, images, and a video or two describing the installation and use of the software in the next few weeks. The functionality of the software includes the following:
- Both manual and automatic methods to calibrate a scene, converting pixel positions to inches/feet/meters (a sketch of the pixel-to-world idea follows this list).
- Batch processing of all the images in a directory, calculating a water height for each image and writing a .csv file with the results.
- Adjustable image processing parameters to deal with noisy images.
- Configuration files to quickly switch between images taken at different localities.
- Preparation of setup files so that images sent to a website can be processed in real time and displayed on the internet.
- Tests on images to ensure the camera has not been bumped (which would throw off the calibration) since the last time it was calibrated.
- Saving of result images with color overlays of the line position and of the points successfully found at the water level.
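As promised above, here is a sketch of the pixel-to-world-coordinate idea behind scene calibration, assuming four fiducials whose real-world positions are known. This is not GRIM's internal code, and the point coordinates are placeholders:

```cpp
// Map pixel positions to world units via a plane homography fitted from
// four known fiducials. All coordinate values below are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<cv::Point2f> pixels;                    // fiducials found in image
    pixels.push_back(cv::Point2f(120, 400));
    pixels.push_back(cv::Point2f(520, 405));
    pixels.push_back(cv::Point2f(530, 80));
    pixels.push_back(cv::Point2f(115, 75));
    std::vector<cv::Point2f> world;                     // same points, in meters
    world.push_back(cv::Point2f(0.0f, 0.0f));
    world.push_back(cv::Point2f(1.0f, 0.0f));
    world.push_back(cv::Point2f(1.0f, 0.8f));
    world.push_back(cv::Point2f(0.0f, 0.8f));
    cv::Mat H = cv::findHomography(pixels, world);      // pixel -> world mapping

    std::vector<cv::Point2f> found, meters;             // a detected water-line point
    found.push_back(cv::Point2f(318, 222));
    cv::perspectiveTransform(found, meters, H);
    std::cout << "water level y = " << meters[0].y << " m" << std::endl;
    return 0;
}
```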
We would be happy to work with anyone who might find this software useful. We have started lab tests of the software at the NCSU BAE labs. We are working with a third-party vendor to develop a solar-powered remote camera that transmits its images via cellphone to our website for processing. That camera and an internet processing service should be available to whoever needs it by the end of the second quarter of 2011. If you have any questions about this and/or would like to participate with us in the testing or perform testing on your own, please do not hesitate to contact us.
We’ve added a small study to our to-do list. Jeff built a lighting apparatus that is capable of illuminating the water level bench from multiple nodes on a three-dimensional grid. Based on a program built for an Arduino board, a light source automatically traverses a radial pattern while pausing at multiple nodes in the vertical direction. Our hypothesis is that changes in illumination could shift the water level measurement by as much as two to three millimeters, or the approximate height of the water meniscus at the water-background interface. Jeff will post more details about the Arduino program, the lighting apparatus, and our use of the IOBridge board.
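Until Jeff posts the details, here is an illustrative sketch of the kind of traversal logic involved, assuming two stepper motors driven with the standard Arduino Stepper library; the pins, node counts, and pause length are guesses, not the actual apparatus:

```cpp
// Illustrative traversal: step a light around a radial path, pausing at
// vertical nodes for image capture. Wiring and counts are assumptions.
#include <Stepper.h>

const int STEPS_PER_REV = 200;
Stepper radial(STEPS_PER_REV, 8, 9, 10, 11);    // hypothetical driver pins
Stepper vertical(STEPS_PER_REV, 4, 5, 6, 7);

void setup() {
  radial.setSpeed(30);      // RPM
  vertical.setSpeed(30);
}

void loop() {
  for (int angle = 0; angle < 8; ++angle) {     // 8 radial positions
    radial.step(STEPS_PER_REV / 8);             // advance 45 degrees
    for (int node = 0; node < 5; ++node) {      // 5 vertical nodes
      vertical.step(40);                        // move up one node
      delay(5000);                              // pause for image capture
    }
    vertical.step(-200);                        // return to the bottom
  }
}
```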
Here is a short time lapse video. It’s a quick look at the challenging lighting conditions we’re encountering in the field.