Navigation to a target

I have been working hard lately on getting my robot to do something a bit more interesting than just wandering around without bumping into things. I decided I wanted the robot to move with purpose, towards a goal of some description, and I thought that vision would be a good way for it to detect a target it could then navigate towards. I went through various options, carrying out experiments on each to determine what would make an easily identifiable target. I considered using natural landmarks in the robot's environment as targets, but decided that purpose-made visual targets would allow for more reliable detection.

Coloured objects are easy to detect using a camera and OpenCV, so that was my first option: a certain shape of a certain colour could act as a target. When experimenting, though, I found that a lot of false positives occur in a natural environment; any object of a similar colour and shape triggers as a target. I reasoned that the target should carry more information for the robot than a simple shape.

I then started playing around with QR codes using a Python library called zbar. Using an online QR code generator, I was able to make QR codes to act as targets. Zbar is great, and I could reliably read a QR code and interpret the information it contained. The issue I ran into was the distance at which the code can be seen: when the QR code was further than around 1 metre from the camera, my robot's camera could not read it at all. That is not ideal for navigation when the robot could be several metres from the target; it would never see it unless it got close enough by chance. I extended the QR code idea by surrounding the code with a coloured border, so that the robot could detect the border and drive towards it until the QR code became readable. This worked to an extent, but I have since developed a personal issue with QR codes: I can't read them! They only mean something to my robot.
If I place these symbols around a room, I don't know what each one is. I wanted a target that was easily readable both by the robot and by me, or anyone else who looks at it. I settled on a solution using a coloured border with a simple symbol inside, detected using OpenCV, as shown below.

Home symbol

Food symbol

Detecting the border is quite straightforward: threshold the image to isolate the green colour, then find the contours in the thresholded image. I went a bit further and looked for a contour that contains a child contour, the child being the symbol within the border. This means that only green shapes with a non-green area inside them are treated as potential targets. I then approximated the contour forming the outer edge of the border to leave just the coordinates of the four corners, ignoring any shape with more or fewer than four corners, which again improves detection reliability. Having the four corners also meant I could apply a perspective correction to the detected symbol, giving an image I could match against a known symbol.

I read an issue of The MagPi magazine that had an article about using OpenCV to detect symbols, which can be found here. It describes more or less what I am trying to achieve, although I prepared the camera image in a slightly different way. The section on matching the detected symbol to a known image, however, is exactly what I did, so I will let you read the article rather than duplicate it all here.

What I was left with is a function that captures an image and checks it for green borders that are square in shape. If a border is found, it checks the contents of the border and matches them against known images. At the moment I have two symbols, one for home and one for food, and the robot can distinguish between the two. As an added bonus, because the green border is a known size, I was able to calculate an approximate distance to the target from the lengths of the border's sides. Comparing the lengths of the left and right sides of the border also gives an indication of which way the target symbol is facing relative to the robot's heading.

Armed with all of this information, I was able to get the robot to drive towards, and align itself to, the target symbol. A video of this in action is shown below.
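The distance estimate follows from the pinhole-camera model: distance = focal length (in pixels) × real border size ÷ apparent border size in pixels. A minimal sketch of the drive-towards decision, where the border size, focal length, frame width, and stopping distance are all assumed values rather than my robot's actual calibration:

```python
BORDER_SIZE_M = 0.20     # real side length of the green border (assumed)
FOCAL_LENGTH_PX = 620.0  # camera focal length in pixels (assumed, from calibration)
FRAME_WIDTH_PX = 640     # horizontal camera resolution (assumed)

def target_distance(side_length_px):
    """Pinhole model: distance = focal_px * real_size / apparent_size."""
    return FOCAL_LENGTH_PX * BORDER_SIZE_M / side_length_px

def steering_command(corners, stop_distance=0.3):
    """Turn/drive decision from border corners ordered
    top-left, top-right, bottom-right, bottom-left (x, y pixels)."""
    tl, tr, br, bl = corners
    left_side = bl[1] - tl[1]   # pixel length of the border's left edge
    right_side = br[1] - tr[1]  # and of its right edge
    # A left/right length mismatch indicates which way the target faces
    facing = left_side / right_side
    distance = target_distance((left_side + right_side) / 2.0)
    centre_x = (tl[0] + tr[0] + br[0] + bl[0]) / 4.0
    # Steer towards the target when it is off-centre in the frame
    turn = (centre_x - FRAME_WIDTH_PX / 2) / (FRAME_WIDTH_PX / 2)  # -1..1
    forward = distance > stop_distance
    return forward, turn, distance, facing
```

For example, a border that appears 100 px tall and centred in the frame gives no turn, a facing ratio of 1 (square-on), and a distance of about 1.24 m with the numbers assumed above.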

At the moment the navigation side of the code needs more work, particularly obstacle avoidance. I am planning to combine the obstacle detection using OpenCV with the detection of targets, to give a robust way of navigating to a target whilst avoiding objects on the floor. At the moment, any target found that contains the incorrect symbol is ignored; I want to add a way to log where all targets (or potential targets) are, for future reference by the robot. Some sort of map will be required, but that is a project for another day. The code for my robot can be found on GitHub. Be aware that this is very much a work in progress and subject to change at any time.

 
