The metrology of the GaugeCam system was presented on Tuesday morning at the 2011 ASABE International Conference. The presentation featured images from our field and lab cameras, but the emphasis was on optimal system performance in the lab. We won’t present the specific findings here, since we intend to publish them formally. However, we are pleased with the lab results and feel that we’ve found a “sweet spot” for camera location, angle, and lens.
Since the conference had a strong academic presence, we focused on the fine details of our lab analysis. We pointed out several sources of error that we encountered during our experimentation. Some of these sources we know how to correct (e.g., glare from IR lighting), while others (e.g., image distortion, unless you are willing to fit a nonlinear distortion model to the image) must simply be avoided when applying this technology. Much of this material is basic knowledge in the machine vision community, but it is important to discuss with hydrologists who may want to apply the technology.
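To illustrate why image distortion calls for a nonlinear model, here is a toy sketch of the standard one-parameter radial (Brown) distortion model. This is textbook material, not GRIME code; the coefficient `k1`, the image center, and the point values are made-up numbers for illustration only.

```python
import numpy as np

def distort(points, k1, center):
    """Apply a one-parameter radial (barrel) distortion to Nx2 pixel points.
    Each point is scaled away from the image center by (1 + k1 * r^2)."""
    p = np.asarray(points, dtype=float) - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    return p * (1.0 + k1 * r2) + center

def undistort(points, k1, center, iters=10):
    """Invert the distortion. There is no closed-form inverse, so we use
    fixed-point iteration: u = d / (1 + k1 * |u|^2), which converges
    quickly for the small k1 values typical of real lenses."""
    d = np.asarray(points, dtype=float) - center
    u = d.copy()
    for _ in range(iters):
        r2 = np.sum(u ** 2, axis=1, keepdims=True)
        u = d / (1.0 + k1 * r2)
    return u + center
```

The need for iteration in `undistort` is exactly the nonlinearity mentioned above: there is no simple linear map that removes radial distortion.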
We were pleased with the turnout for the presentation, which included attendees from government agencies and universities, both domestic and international. The audience also asked some good questions about system maintenance and whether our results rival other measurement techniques.
On the official ASABE program, the contributors to this presentation were Troy Gilmore, François Birgand, Ken Chapman and Andrew Brown. Kelly Chapman contributed significantly as a lab volunteer this summer and she is currently finalizing our documentation so we can release the version of GRIME used in the study. Christian Chapman set up the original GaugeCam server. Randall Etheridge and Brad Smith worked to set up the salt marsh camera, which gave us access to some nice demonstration images. Thanks to everyone for their hard work. We look forward to seeing our results in print!
GaugeCam will be presented at the ASABE International Conference in Louisville, KY on Aug 7-10, 2011.
GRIME has been put to the test lately. We’ve been processing loads of images and GRIME is performing well! We’re looking forward to compiling all of this data and really assessing our performance.
Some of our schedules are opening up soon, so we’re anticipating a major GaugeCam push in the next couple months. We’re implementing some interesting camera technology right now. Stay tuned!
We continued working toward our beta release today. We need to do more debugging, but we’re moving forward.
Our GaugeCam team members successfully completed the Krispy Kreme Challenge today! One team member ate all the donuts and finished in under an hour. A fun time for a good cause.
As we move toward the release of our beta software, I thought it would be a good time to reflect on our progress so far. As you will see if you care to review this entire blog, we started with the idea that we could measure stream stage (water depth) using a camera. Why was this an attractive method when researchers and government agencies (such as the USGS) already use a number of other methods, such as transducers and bubbler gauges? Well, from our collective experience, we know that field measurements are often erroneous due to instrument drift, infrequent or incorrect instrument calibration, or technician inexperience, to name a few causes. We felt the GaugeCam concept could address these error sources while also providing a way to visually verify measurements.
After completing a brief proof of concept in the laboratory, we deployed a camera in the field near Pullen Park, Raleigh, NC. We chose this approach because we anticipated that the field application would involve many challenges we would never encounter in a lab-only study. We were correct! Our camera and communications system, which worked beautifully in the lab setting, was not as robust in the field as we had hoped. We compiled a list of issues associated with our field application, which we have addressed in the beta version of our software. The field application also gave us the impetus to develop a functional daemon for processing images in real time on the GaugeCam server. Additionally, we were able to gather data for comparison with USGS stream stage data measured at Pullen Park.
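We haven’t described the daemon’s internals on this blog, but for readers curious about the general shape of such a component, here is a minimal, hypothetical polling loop. The function names, extensions, and polling interval are illustrative assumptions, not the actual GaugeCam server code.

```python
import os
import time

def find_new_images(watch_dir, seen, exts=(".jpg", ".png")):
    """Return image files in watch_dir not yet processed, oldest-name first."""
    names = [n for n in sorted(os.listdir(watch_dir))
             if n.lower().endswith(exts) and n not in seen]
    seen.update(names)
    return [os.path.join(watch_dir, n) for n in names]

def run_daemon(watch_dir, handler, poll_seconds=60, max_cycles=None):
    """Poll watch_dir and pass each newly arrived image to handler
    (e.g., a stage-estimation routine). max_cycles=None runs forever."""
    seen = set()
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for path in find_new_images(watch_dir, seen):
            handler(path)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(poll_seconds)
```

A real deployment would add logging and crash recovery, but the core idea is the same: new images arrive from the field camera, and each one is handed off for processing as soon as it is seen.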
While the Pullen Park deployment was underway, we stayed busy in the lab, assessing the capabilities of our camera and software. The camera was tested at a variety of distances and angles relative to the water level bench. We were encouraged by the results, but knew that to minimize the need for highly experienced technicians, we would need to automate our calibration process. To test the automated calibration, we have modified the water level bench, using a white background with horizontal black bars substituting for the water level. This was required to reduce the noise introduced by the water meniscus. Once the automatic calibration is verified, we will repeat our earlier study of water level detection from a variety of distances and angles. We will also deploy the system at alpha sites, which have already been identified. The transition from manual calibration to automatic calibration has been a little more difficult than anticipated (as seen in several recent posts). I feel we are encountering a typical challenge for machine vision projects: the abilities of the human eye are very difficult to emulate with an algorithm!
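To give a feel for why the black bars simplify detection, here is a hypothetical sketch: against a white background, a bar’s top edge is simply the first dark row in an image column, and a calibrated reference row converts that row to a level. The function names, threshold, and pixels-per-centimeter scale are illustrative, not our production algorithm.

```python
import numpy as np

def bar_edge_row(column, threshold=128):
    """Return the first row index where a bright background turns dark,
    i.e. the top edge of a black bar against a white background.
    Returns None if no dark pixel is found."""
    dark = np.asarray(column) < threshold
    idx = np.flatnonzero(dark)
    return int(idx[0]) if idx.size else None

def row_to_level(row, row_ref, pixels_per_cm):
    """Convert an image row to a level in cm relative to a calibrated
    reference row, assuming a rectified (distortion-free) image."""
    return (row_ref - row) / pixels_per_cm
```

A real water line is far noisier than this: the meniscus smears the bright-to-dark transition over several rows, which is exactly why we substituted crisp black bars while verifying the calibration itself.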
We continue to work on automated calibration, as Ken has recently described. In the lab we have installed thick horizontal lines on the bench background at known positions relative to our calibration fiducials. We are evaluating the effects of perspective and camera posture angle with the intention of determining limitations associated with this calibration method. Below is an image at an extreme angle that clearly illustrates lens distortion, effects of perspective and effects of posture angle. There are known machine vision techniques to minimize these effects, which we may consider implementing in our beta software package.
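For readers unfamiliar with the machine vision techniques mentioned, perspective effects can be removed with a planar homography estimated from four (or more) fiducial correspondences. The direct linear transform (DLT) sketch below is a standard textbook method, not necessarily what we will implement in the beta; the sample trapezoid-to-rectangle points are invented for illustration.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (4+ correspondences, no 3 collinear) via the direct linear transform:
    stack two linear constraints per correspondence and take the SVD
    null vector as the flattened matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to Nx2 points (homogeneous multiply, then divide)."""
    p = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

In our setting, the four fiducials at known bench positions would play the role of `src`/`dst`, warping the tilted camera view back to a front-on view of the background. Note that a homography only corrects perspective and posture angle; lens distortion is a separate, nonlinear effect and must be handled on its own.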
In preparation for a presentation on Friday, I’ve been thinking back over the GaugeCam project. I’ve learned a few lessons about developing instrumentation over the past year or so. These items may be just a reminder to veterans of this kind of process, but might also be helpful to novices out there.
Ask yourself a few questions to break down the process:
- What are the absolute minimum functions we need?
- What can we already do? (As opposed to “What can’t we do?”, which would be a very long list.)
- Are there parts of this process/instrument that should be done by another party – is it more efficient to find someone with specific expertise than to learn it ourselves?
Cast a wide net to understand the problem, but learn to focus on an individual task:
- We knew we would encounter challenges in the field that would not exist in the lab. We ran a prototype in the field to determine what some of those challenges were. We simultaneously ran preliminary experiments in the lab. (Cast a wide net)
- We’ve decided to bring in some outside expertise to tackle some of the field issues. While they’re working through those challenges, we’re able to focus on fine-tuning performance in the lab. (Focus on an individual task)
I’ve also learned a lot about the software development process, but that will have to wait until a future post.
The students in the NCSU Biogeochemistry for Ecological Engineering lab are just finishing a Theory of Drainage course. GaugeCam would have been very useful for recording accurate and repeatable liquid level measurements during this lab experiment!