Navigation to a target

I have been working hard lately on getting my robot to do something a bit more interesting than just wandering around not bumping into things. I decided I wanted the robot to move with purpose, towards a goal of some description, and I thought that vision would be a good way for the robot to detect a target it could then navigate towards. I went through various options, carrying out experiments on each to determine what would make an easily identifiable target. I considered using natural landmarks in the robot's environment as targets, but decided that purpose-made visual targets would allow for more reliable detection.

Coloured objects are easy to detect using a camera and OpenCV, so that was my first option: a certain shape of a certain colour could act as a target. When experimenting, however, I found that a lot of false positives occur in a natural environment, because any object of a similar colour and shape triggers as a target. I reasoned that the target should carry more information for the robot than a simple shape.

I then started playing around with QR codes using a Python library called zbar. Using an online QR code generator, I made QR codes to act as targets. Zbar is great, and I could reliably read a QR code and interpret the information it contained. The issue I ran into is the distance at which a code can be read: beyond around 1 metre from my robot's camera, the code could no longer be decoded. That is not ideal for navigation, as a robot several metres from the target would never see it unless it got close enough by chance. I extended the idea by surrounding the QR code with a coloured border, so the robot could detect the border and drive towards it until the QR code became readable. This worked to an extent, but I have since developed a personal issue with QR codes: I can't read them! They only mean something to my robot. If I place these symbols around a room, I don't know what each one is. I wanted a target that was easily readable by the robot and by me, or anyone else who looks at it. I settled on a solution using a coloured border with a simple symbol inside, detected using OpenCV, as shown below.
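Before moving on to the symbols themselves, here is roughly what reading a QR code with zbar looks like. This is only a sketch, using the pyzbar bindings (a more recent Python interface to zbar than the module I used) together with OpenCV; the image file stands in for a frame grabbed from the robot's camera.

    import cv2
    from pyzbar import pyzbar

    # Decode any QR codes visible in a captured frame
    frame = cv2.imread('target.jpg')
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for code in pyzbar.decode(gray):
        print('%s: %s' % (code.type, code.data.decode('utf-8')))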

Home symbol

Food symbol

Detecting the border is quite straightforward: threshold the image to isolate the green colour, then find the contours in the thresholded image. I went a bit further and looked for a contour that contains a child contour, the child being the symbol within the border. This means that only green objects with a non-green area inside them are detected as potential targets. I then approximated the contour forming the outer edge of the border to leave just the coordinates of the four corners, ignoring any shapes with more or fewer than four corners, which again improves detection reliability. This also meant that I could apply a perspective correction to the detected symbol, giving an image I could match against a known symbol.

I read an issue of The MagPi magazine with an article about using OpenCV to detect symbols, which can be found here. It is more or less what I am trying to achieve, although I prepared the image from the camera in a slightly different way. The section on matching the detected symbol to a known image, however, is exactly what I did, so I will let you read the article rather than duplicate it all here.

What I was left with is a function that captures an image and checks it for green borders that are square in shape. If a border is found, it checks the contents of the border and matches them against known images. At the moment I have two symbols, one for home and one for food, and the robot can distinguish between the two. As an added bonus, because the green border is a known size, I was able to calculate an approximate distance to the target from the lengths of the sides of the border. I could also compare the lengths of the left and right sides of the border to give an indication of which way the target symbol is facing relative to the robot's heading.
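To make the pipeline concrete, below is a minimal sketch of the border detection in Python with OpenCV. This is not the exact code from my robot: the hue range, warp size and distance calibration constant are illustrative values that would need tuning for a particular camera and border, and it assumes the OpenCV 2.x findContours signature.

    import cv2
    import numpy as np

    # Illustrative values: hue range for the green border and a distance
    # calibration constant (focal length in pixels * real border size)
    GREEN_LO = np.array([45, 80, 80])
    GREEN_HI = np.array([75, 255, 255])
    DIST_CAL = 20000.0

    def find_green_borders(frame):
        # Isolate green pixels, then find contours and their hierarchy
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)
        contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP,
                                               cv2.CHAIN_APPROX_SIMPLE)
        found = []
        for i, contour in enumerate(contours):
            # Keep only contours with a child contour: a green border
            # enclosing a non-green region (the symbol inside it)
            if hierarchy[0][i][2] == -1:
                continue
            # Reduce the outline to its corners; keep only quadrilaterals
            perimeter = cv2.arcLength(contour, True)
            approx = cv2.approxPolyDP(contour, 0.05 * perimeter, True)
            if len(approx) != 4:
                continue
            corners = np.float32(approx.reshape(4, 2))
            # Perspective-correct the contents for matching against known
            # symbols (in practice the corners need sorting into a
            # consistent order first, or the warp can come out rotated)
            dst = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
            matrix = cv2.getPerspectiveTransform(corners, dst)
            warp = cv2.warpPerspective(frame, matrix, (100, 100))
            # Rough distance from the apparent side length (pinhole model)
            distance = DIST_CAL / (perimeter / 4.0)
            found.append((corners, warp, distance))
        return found

The warped patch can then be thresholded and compared against the known home and food images, as described in the MagPi article.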

Armed with all of this information, I was able to get the robot to drive towards, and align itself to, the target symbol. A rough sketch of the behaviour and a video of it in action are shown below.
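As an illustration only, the loop below turns until the target is centred in the frame and then drives until the estimated distance drops below a threshold. find_green_borders is the sketch from above; camera, drive and turn are hypothetical stand-ins for the robot's actual camera and motor interfaces.

    def seek_target(camera, drive, turn, stop_cm=30.0):
        # Centre the target in the image, then close the distance to it
        while True:
            frame = camera.read()
            hits = find_green_borders(frame)
            if not hits:
                turn(15)    # nothing in view: rotate and keep scanning
                continue
            corners, warp, distance = hits[0]
            # Horizontal offset of the target from the image centre
            offset = corners[:, 0].mean() - frame.shape[1] / 2.0
            if abs(offset) > 20:
                turn(5 if offset > 0 else -5)   # align first
            elif distance > stop_cm:
                drive(10)                       # then drive closer
            else:
                break                           # aligned and arrived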

At the moment the navigation side of the code needs more work, particularly obstacle avoidance. I am planning to combine obstacle detection using OpenCV with the detection of targets, to give a robust way of navigating to a target whilst avoiding objects on the floor. At the moment, any targets found that contain the wrong symbol are ignored; I want to add a way to log where all targets (or potential targets) are for future reference by the robot. Some sort of map will be required, but that is a project for another day. The code for my robot can be found on GitHub. Be aware that this is very much a work in progress and subject to change at any time.

Carbon Fibre shell for BFRMR1 mobile robot

As mentioned in my last post, I have been hoping for a while to have a custom carbon fibre shell made for my robot, to replace the aluminium shell. That time has come! A friend of mine has been making carbon fibre parts for some time and kindly offered to make a part for my robot. The first step was to create a model of the part I wanted, which I did myself: I used blue styrofoam to make a model of the shell at the exact size required, hot-gluing several pieces together to give me a rough shape.

Rough shape styrofoam model of robot shell

This was then shaped by hand using sandpaper to leave the final shape required. The shaping involved rounding the corners and ensuring the top of the shell was as smooth as possible.

The finished styrofoam model of the robot shell

At this point the styrofoam model was given to my friend, who spent quite a bit of time getting it ready for mould making. This involved sealing the part with epoxy resin, covering it with body filler and sanding it to shape, and applying several coats of a special pattern-making resin that can be sanded and finished to a high standard. A mould was then taken from the finished pattern and used to make the carbon fibre part, which was handed back to me ready for fitting to the robot.

I completely dismantled the robot to allow the new shell to be fitted. I had to round the corners of the base plate to match the rounded corners of the shell, and I fabricated some angled brackets, fixed to the shell with epoxy resin, to attach it to the robot. I had to cut the shell to accommodate the head servo, the TFT screen and access panels at the front and rear, and I fabricated some additional brackets, also attached with epoxy resin, to hold the access panels on. To protect the lovely shiny surface of the carbon fibre, I cocooned the shell in masking tape before any cutting or drilling took place. With the shell finished, I rebuilt the robot, mounted all the parts to the shell and fitted the shell to the robot base. Take a look at the finished product!

BFRMR1 with custom carbon fibre shell

Close up of the carbon fibre shell

Side View of BFRMR1 with carbon fibre shell

Switch array mounted to new shell

I am very pleased with the look of the robot with the carbon fibre shell, and it is very tough. The other advantage is that the shell is now very light: the aluminium parts of the old shell that I removed weighed 750g, while the carbon fibre shell weighs in at 260g. This is a considerable weight saving, especially for a robot driven by modified servo motors, and it should reduce the load on the servos and extend battery life.

I have also been working on the software for the robot. I have modified the way the Arduino and the Raspberry Pi interact, moving some of the real-time processing to the Arduino. The Raspberry Pi now sends commands to the Arduino over serial to instruct the robot to carry out a particular movement, such as moving the head to a given position, driving forward or turning a set distance. When the move is complete, the Arduino returns a packet of data containing up-to-date sensor readings. On top of this, the Arduino monitors the sensors as the robot moves, detecting potential collisions and stopping the robot if necessary. The Raspberry Pi can inspect the returned data packet to check whether the robot moved the required distance and, if not, check which sensor triggered and act accordingly. This allows much more accurate control of distances moved and of the sensor thresholds that stop the robot, and it frees up the Pi to do other tasks if required.
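A sketch of the Pi side of this exchange is below, using pyserial. The port name, baud rate, command codes and packet sizes are placeholders, not the robot's real protocol, which is defined in the code on GitHub.

    import serial

    # Placeholder port and baud rate, not the robot's actual settings
    ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=5)

    def send_move(command, value):
        # Send one movement command with a little-endian 16-bit argument,
        # then block until the Arduino returns its sensor/status packet,
        # which it sends once the move finishes or a sensor threshold trips
        ser.write(bytearray([command, value & 0xFF, (value >> 8) & 0xFF]))
        return ser.read(16)     # assumed fixed-size reply packet

    # Hypothetical command 0x01: drive forward by 'value' encoder counts
    sensors = send_move(0x01, 500)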

I have also been playing with the TFT display; nothing particularly special at the moment, but I can switch between modes, start and stop the robot using the buttons, and display the robot's status on the screen. Some pictures are below.

Mode select displayed on the TFT screen

Status displayed on the TFT screen

I am currently improving the obstacle avoidance code and working on some basic behaviours for the robot. One of these, as shown above, is a find-food mode: the robot will search out objects of a certain colour, identify them as food, and navigate to them to satisfy its hunger. Other modes, such as play, may involve the robot looking for a ball or similar object. When I am happy with the obstacle avoidance mode I will make a video, so stay tuned!
