Visual Line Following
The following tutorial is one way to use a vision system to identify and follow a line.
The system uses:
* a single CCD camera
* a USB digitizer
* an on-board Pentium CPU
* left and right differential drive motors
To get a better idea of what we're trying to accomplish, let's first look at some sample pictures that the BucketBot took. The images are from the CCD camera mounted on the front of the BucketBot, angled towards the ground.
Note that we have not done any calibration or lighting adjustment. The images are straight from the camera.
It is worth mentioning that the lines are created by black electrical tape stuck on movable floor tiles. This lets us move the tiles around and experiment with different shapes quite easily.
Bad lighting can really cause problems with most image analysis techniques (esp. when thresholding). Imagine if a robot were to suddenly move under a shadow and lose all control! The best vision techniques try to be more robust to lighting changes.
To understand some of these issues, let's look at the histograms of two straight-line images from the previous slide. Next to each image is its histogram. The histogram of an image is a graphical representation of the count of pixels at each intensity in the image.
As you can see, the histograms of lighter images lean towards the right (towards 255, the highest value), whereas darker images have histograms with most pixels closer to zero. This is obvious, but the histogram representation lets us better understand how transformations to the image change the underlying pixels.
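As a rough sketch of what the histogram computes, the following snippet counts pixel intensities in an 8-bit grayscale image. The original tutorial does not name a library; NumPy is used here purely as an illustration.

```python
import numpy as np

def histogram(gray):
    """Count how many pixels fall at each intensity 0..255
    of an 8-bit grayscale image."""
    return np.bincount(gray.ravel(), minlength=256)

# A toy 2x2 "image": two dark pixels, two light pixels.
img = np.array([[10, 10], [200, 250]], dtype=np.uint8)
h = histogram(img)
# h[10] == 2, h[200] == 1, h[250] == 1
```

A dark image would pile its counts up at the low end of `h`, a bright one at the high end, which is exactly the skew described above.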
The next step is to see if we can correct these two images so that they look closer to one another. We do that by normalizing the images ...
To counter the effects of bad lighting we have to normalize the image. Image normalization spreads the pixel intensities over the entire range of intensities. Thus, if you have a very dark image, normalization will replace many of the dark pixels with lighter pixels while keeping their relative order the same, i.e. two pixels may both be made lighter, but the darker of the two will still be darker relative to the other.
By evenly distributing the image intensities, other image-processing functions such as thresholding, which are based on a single cutoff intensity, become less sensitive to lighting.
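A minimal sketch of this kind of normalization is a linear contrast stretch: map the darkest pixel to 0, the brightest to 255, and everything else proportionally in between. This preserves the relative ordering of pixels, as described above. (The exact method the tutorial's software uses is not stated; this is one common implementation, shown with NumPy.)

```python
import numpy as np

def normalize(gray):
    """Linearly stretch pixel intensities to cover the full 0..255 range,
    preserving the relative order of pixel values."""
    lo, hi = int(gray.min()), int(gray.max())
    if lo == hi:                 # flat image: nothing to stretch
        return gray.copy()
    stretched = (gray.astype(float) - lo) * 255.0 / (hi - lo)
    return stretched.round().astype(np.uint8)

# A dark image whose intensities only span 40..100:
dark = np.array([[40, 60], [80, 100]], dtype=np.uint8)
out = normalize(dark)
# 40 -> 0 and 100 -> 255; the pixels in between keep their order.
```

After stretching, a fixed threshold such as 128 separates dark tape from light floor much more reliably than it would on the raw dark image.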
For example, the following images from the previous page show what happens during normalization. It is important to note that any image transformation meant to improve bad images must also preserve already good ones. When testing image-processing functions, be sure to always understand the negative side effects any function can have.
Normalization did not change the first image much because it was well lit to begin with.
The badly lit image experienced a large amount of change, as its intensities did not cover the entire intensity range. You can see from the histogram that the image intensities are now more evenly distributed.
Also note how the new histogram appears less solid than the original. This is due to how the intensity values are stretched: since the new image has exactly the same number of pixels as the old one, many intensity values still do not occur in the new image and therefore show up as gaps in the histogram. Adding another filter such as a mean blur would make the histogram more solid again, as the gaps would be filled in by the smoothing of the image.
Next we need to start focusing on extracting the actual lines in the images.
In order to follow the line we need to extract properties from the image that we can use to steer the robot in the right direction. The next step is to identify or highlight the line with respect to the rest of the image. We can do this by detecting the transition from...
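The tutorial's text is cut off here, but one simple way to turn "where is the line?" into a steering signal, consistent with the setup described above (dark tape on a lighter floor), is to threshold the dark pixels and compute their horizontal centroid. Everything below (the threshold value, the offset convention) is an illustrative assumption, not the tutorial's actual method:

```python
import numpy as np

def line_offset(gray, cutoff=128):
    """Threshold dark pixels as 'line' and return the horizontal offset of
    their centroid from the image center, scaled to [-1, 1].
    Negative means the line is to the left; None means the line was lost.
    cutoff=128 is an assumed threshold for a normalized image."""
    line = gray < cutoff                 # dark tape on a light floor
    if not line.any():
        return None                      # no line pixels found
    cols = np.nonzero(line)[1]           # column index of each line pixel
    center = (gray.shape[1] - 1) / 2.0
    return (cols.mean() - center) / center
```

The returned offset could then drive the left/right differential motors: a positive offset slows the right motor to steer right, a negative one slows the left.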