Navigation to a target

I have been working hard lately on getting my robot to do something a bit more interesting than just wandering around not bumping into things. I decided I wanted the robot to move with purpose, towards a goal of some description, and I thought that vision would be a good way for the robot to detect a target it could then navigate towards. I went through various options, carrying out experiments on each to determine what would make an easily identifiable target. I considered using natural landmarks in the robot’s environment as targets, but decided that purpose-made visual targets would allow for more reliable detection.

Coloured objects are easy to detect using a camera and OpenCV, so they were my first option. A certain shape of a certain colour could act as a target, but when experimenting I found that a lot of false positives occur in a natural environment: any object of a similar colour and shape will trigger as a target. I reasoned that the target should contain more information for the robot than a simple shape.

I started playing around with QR codes using a Python library called zbar. Using an online QR code generator I was able to make QR codes to act as targets, and zbar could reliably read a code and interpret the information it contained. The issue I ran into was the distance at which the code can be read: when the QR code was further than around 1 metre from the camera, my robot’s camera could not read it. That is not ideal for navigation when the robot could be several metres from the target; it would never see it unless it got close enough by chance. I added to the QR code idea by surrounding the code with a coloured border, so the robot could detect the border and drive towards it until the QR code was readable. This worked to an extent, but I have since developed a personal issue with QR codes: I can’t read them! They only mean something to my robot. If I place these symbols around a room, I don’t know what each one is. I wanted a target that was easily readable by the robot and by me, or anyone else who looks at it. I settled on a solution using a coloured border with a simple symbol inside that I would detect using OpenCV, as shown below.
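
As an aside, reading a QR code with zbar from Python only takes a few lines. Below is a minimal sketch using the old zbar Python bindings together with OpenCV; the camera index and the image handling are my assumptions for illustration, not lifted from the robot’s code.

```python
import cv2
import zbar

# Grab a frame from the webcam and convert it to greyscale,
# since zbar works on single-channel images.
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
height, width = gray.shape

# Wrap the raw greyscale bytes in a zbar image and scan it.
scanner = zbar.ImageScanner()
scanner.parse_config('enable')
image = zbar.Image(width, height, 'Y800', gray.tobytes())
scanner.scan(image)

# Report the type and contents of any codes found in the frame.
for symbol in image:
    print('%s: %s' % (symbol.type, symbol.data))
```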

Home symbol

Food symbol

Detecting the border is quite straightforward: threshold the image to isolate the green colour, then find the contours in the thresholded image. I went a bit further with this and looked for a contour that contained a child contour, the child contour being the symbol within the border. This meant that only green objects with a non-green area within them were detected as potential targets. I then approximated the contour that forms the outer edge of the border to leave just the coordinates of the four corners. I ignore any shapes that have more or fewer than four corners, again improving detection reliability. This also meant that I could do a perspective correction on the detected symbol to give me an image that I could match to a known symbol.

I read an issue of The MagPi magazine that had an article about using OpenCV to detect symbols, which can be found here. This is more or less the same as what I am trying to achieve, although I prepared the image from the camera in a slightly different way. The section on matching the detected symbol to a known image, however, is exactly what I did, so I will let you read the article rather than duplicate it all here.

What I was left with is a function that can capture an image and check it for green borders that are square in shape. If a border is found, it can then check the contents of the border and match it to known images. At the moment I have two symbols, one for home and one for food, and the robot can distinguish between the two. As an added bonus, since the green border is a known size, I was able to calculate an approximate distance to the target using the lengths of the sides of the border. I was also able to compare the lengths of the left and right sides of the border to give an indication of which way the target symbol is facing relative to the robot’s heading.
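
To give a flavour of how this fits together, here is a minimal sketch of the approach, written against the OpenCV 2.x API I am using on the Pi. The colour thresholds, border size and focal length are placeholder values for illustration, not my tuned numbers.

```python
import cv2
import numpy as np

def order_corners(pts):
    # Order points top-left, top-right, bottom-right, bottom-left
    # using the usual coordinate sum/difference trick.
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def find_targets(frame, border_mm=150.0, focal_px=500.0):
    # Threshold the image to isolate the green colour.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([40, 80, 80]),
                             np.array([80, 255, 255]))

    # Find contours along with their hierarchy, so we can insist
    # on a child contour: the symbol inside the border.
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for i, cnt in enumerate(contours):
        # hierarchy[0][i][2] is the first child; -1 means no hole inside.
        if hierarchy is None or hierarchy[0][i][2] == -1:
            continue
        # Approximate the outer edge down to its corners and ignore
        # anything that doesn't have exactly four of them.
        approx = cv2.approxPolyDP(cnt, 0.05 * cv2.arcLength(cnt, True), True)
        if len(approx) != 4:
            continue
        corners = order_corners(approx.reshape(4, 2).astype(np.float32))

        # Perspective-correct the border contents to a square image
        # ready for matching against the known symbols.
        dst = np.float32([[0, 0], [99, 0], [99, 99], [0, 99]])
        M = cv2.getPerspectiveTransform(corners, dst)
        symbol = cv2.warpPerspective(frame, M, (100, 100))

        # The border is a known size, so the apparent side length gives
        # an approximate distance via the pinhole camera model.
        side_px = np.linalg.norm(corners[0] - corners[1])
        distance_mm = border_mm * focal_px / side_px
        targets.append((symbol, distance_mm))
    return targets
```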

Armed with all of this information, I was able to get the robot to drive towards, and align itself with, the target symbol. A video of this in action is shown below.

At the moment the navigation side of the code needs more work, particularly obstacle avoidance. I am planning to combine the obstacle detection using OpenCV with the detection of targets to give a robust way of navigating to a target whilst avoiding objects on the floor. Currently, any target found that contains the incorrect symbol is simply ignored; I want to add a way to log where all targets (or potential targets) are for future reference by the robot. Some sort of map will be required, but this is a project for another day. The code for my robot can be found on GitHub. Be aware that this is very much a work in progress and subject to change at any time.

 


Obstacle detection using OpenCV

I have been working on a way to detect obstacles on the floor in front of the robot using the webcam and OpenCV. I have had some success and I have made a short video of the obstacle detection in action.

The method I am using involves capturing an image, converting it to grayscale, blurring it slightly and then using Canny edge detection to highlight the edges in the image. Using the edge-detected image, starting from the left and moving along the width of the image in intervals, I scan from the bottom of the image until I reach a white pixel, indicating the first edge encountered. I am left with an array that contains the coordinates of the first edges found in front of the robot. Using this array, I then look for changes in the direction of the slope of the edges that may indicate an object is present. At the moment I am ignoring anything in the top half of the image, as anything found there will probably be far enough away from the robot not to be of immediate concern. This will change depending on the angle of the head: if the head is looking down towards the ground, obviously everything in the scene may be of interest. With the changes of slope found, I then scan in both directions to try and find the edges of the object, indicated by a sharp change in values in the array of nearest edges.
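
A minimal sketch of the scanning step is shown below; the Canny thresholds and the scan interval are placeholder values for illustration, not my tuned numbers.

```python
import cv2

def nearest_edges(frame, step=16):
    # Convert to greyscale, blur slightly, then run Canny
    # edge detection to highlight the edges in the image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    height, width = edges.shape
    nearest = []
    # Move along the width of the image in intervals...
    for x in range(0, width, step):
        y_found = None  # no edge seen in this column
        # ...scanning upwards from the bottom row and stopping at the
        # first white pixel. The top half is ignored, as anything there
        # is probably far enough away not to worry about yet.
        for y in range(height - 1, height // 2, -1):
            if edges[y, x] > 0:
                y_found = y
                break
        nearest.append((x, y_found))
    return nearest
```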

This method seems to work quite well but some tweaking may be required for it to work in all environments. I am planning on using the coordinates of the edges of the obstacles found to create a map of some kind of the area in front of the robot. Combined with the coordinates of the target the robot is heading for, I hope to be able to plan a path for the robot to follow to reach the target.

For anyone that is interested, I have put my code on GitHub. It is a work in progress but may be worth a look!

 

BFRMR1 video

I have finally got around to making a video of BFRMR1 in action. The video shows some of the features of the robot, with obstacle avoidance and colour tracking in action.

During testing of the robot I found that at higher speeds, when the robot stops, the front of the robot dips a little. This causes the IR sensors to see the floor and leads the robot to believe there is an obstacle where there isn’t one. This was always a possibility, as the drive wheels are quite close to the centre of the robot and I was relying on the weight of the batteries to keep the back end of the robot weighted down. I put the wheels where they are so that when the robot turns, it does so within its own footprint, which was to hopefully stop the robot bumping into things when it turns. To overcome this issue I decided to add some “legs” to the front of the robot that do not contact the ground during normal running but are there to stop the robot tipping forward when it stops from high speed. These are pictured below.

"Legs" at the front of the robot

I chose this solution as the issue only occurs when running the robot quickly and I didn’t want to add any weight to the rear of the robot after all the effort of lightening the robot with the carbon fibre shell! These “legs” can be easily removed if necessary and I hope to solve the issue in software in the future.

Carbon Fibre shell for BFRMR1 mobile robot

As mentioned in my last post, I have been hoping for a while now to have a custom carbon fibre shell made for my robot, to replace the aluminium shell. That time has come! A friend of mine has been making carbon fibre parts for some time, and he kindly offered to make a part for my robot. The first step was to create a model of the part I wanted, which I did myself. I used blue styrofoam to create a model of the shell of the exact size required, hot-gluing several pieces together to give me a rough shape.

Rough shape styrofoam model of robot shell

This was then shaped by hand using sandpaper to leave the final shape required. The shaping involved rounding the corners and ensuring the top of the shell was as smooth as possible.

The finished styrofoam model of the robot shell

At this point the styrofoam model was given to my friend, who spent quite a bit of time getting it ready for making a mould. This process included sealing the part with epoxy resin, covering it with body filler and sanding it to shape, and applying several coats of a special pattern-making resin that can be sanded and finished to a high standard. A mould of the part was then made that could be used to lay up the carbon fibre. The finished carbon fibre part was given to me ready for fitting to the robot.

I completely dismantled the robot to allow the new shell to be fitted. I had to round the corners of the base plate to match the rounded corners of the shell, and I fabricated some angled brackets to attach the shell to the robot, which were fixed to the shell with epoxy resin. To protect the lovely shiny surface of the carbon fibre, I cocooned the shell in masking tape before any cutting or drilling took place. I then had to cut the shell to accommodate the head servo, the TFT screen and access panels at the front and rear. I fabricated some additional brackets to hold the access panels on to the shell, also attached with epoxy resin. With the shell finished I rebuilt the robot, mounted all the parts to the shell and fitted the shell to the robot base. Take a look at the finished product!

BFRMR1 with custom carbon fibre shell

Close up of the carbon fibre shell

Side View of BFRMR1 with carbon fibre shell

Switch array mounted to new shell

I am very pleased with the look of the robot with the carbon fibre shell, and it is very tough. The other advantage is that the shell is now very light: the aluminium parts of the old shell that I took off weighed 750g, while the carbon fibre shell weighs in at 260g. A saving of 490g is considerable, especially for a robot driven by modified servo motors, and it should reduce the load on the servos and extend battery life.

I have also been working on the software for the robot. I have modified the way the Arduino and the Raspberry Pi interact, moving some of the real-time processing to the Arduino. The Raspberry Pi now sends commands to the Arduino, via serial, to instruct the robot to carry out a particular movement. This could be moving the head to a given position or driving forward/turning a certain distance. When the move is complete, the Arduino returns a packet of data containing up-to-date sensor readings. On top of this, the Arduino also monitors the sensors as the robot moves, detecting potential collisions and stopping the robot if necessary. The Raspberry Pi can inspect the returned data packet to check whether the robot moved the required distance and, if not, check which sensor triggered and act accordingly. This allows much more accurate control of distances moved and of the sensor thresholds that stop the robot, and it frees up the Pi to do other tasks if required.
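
As a flavour of the Pi side of this, the sketch below shows the general command/reply pattern. The byte framing, command codes and packet size here are hypothetical, chosen just to illustrate the idea; the real layout is defined by the Arduino firmware.

```python
import serial

# Hypothetical command codes; the real values live in the firmware.
MOVE_HEAD, DRIVE_FORWARD, TURN = 0x01, 0x02, 0x03

def send_command(port, command, value):
    # Send a movement command with a 16-bit argument...
    packet = bytearray([0xFF, command, (value >> 8) & 0xFF, value & 0xFF])
    port.write(packet)
    # ...then block until the Arduino returns its packet of
    # up-to-date sensor readings once the move is complete.
    return port.read(16)  # assumed fixed-size reply packet

ser = serial.Serial('/dev/ttyACM0', 115200, timeout=5)
reply = send_command(ser, DRIVE_FORWARD, 500)  # e.g. drive forward 500 mm
# The reply can then be inspected to see whether the robot moved the
# required distance and, if not, which sensor triggered the stop.
```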

I have also been playing with the TFT display. Nothing particularly special at the moment, but I can switch between modes and start and stop the robot using the buttons, and display the status on the screen. Some pictures below.

Mode select displayed on the TFT screen

Status displayed on the TFT screen

I am currently improving the obstacle avoidance code and working on some basic behaviours for the robot, one of which, as shown above, is a finding-food mode. My idea is that the robot will search out objects of a certain colour, which it will identify as food, and then navigate to the object to satisfy its hunger. Other modes, such as play, may involve the robot looking for a ball or similar object. When I am happy with the obstacle avoidance mode of the robot I will make a video, stay tuned!

A re-design of BFR-MR1

I realised that the design of my robot didn’t meet one of my initial goals for the project. Although a sturdy and reliable design, most of the components were still on show and the design didn’t make it easy to add a cover to enclose the electronics. I redesigned and made a new base plate for the robot. The brackets that fix the servos for the drive wheels to the base were moved from the top of the base plate to the bottom. This freed up some space on the top of the base plate and gave the robot much bigger ground clearance. The Raspberry Pi, Arduino and batteries were then fixed to the top of the base plate, and I could then fabricate an enclosure to cover everything. I made separate sides, top, front and back for the enclosure out of aluminium sheet. The head pan servo was mounted into the top of the enclosure, reusing the pan/tilt camera mount from the previous robot design. I had to fabricate some spacers for the rear castors and the IR sensor bar to space everything off the base plate correctly. I also treated myself to a small TFT screen (Adafruit 2.2″ TFT) that is mounted into the top of the enclosure. Combined with a home-made array of tactile switches, this means I can start/stop the robot or switch between modes without connecting to the robot from another computer. The front and back of the enclosure had to be removable to access connectors and batteries for charging. A coat of paint and the robot was ready to go.

New design of BFR-MR1

 

BFR-MR1 TFT screen and switch array

 

Modified IR sensor bar attached to front of robot

This design keeps a lot of the electronics tucked away whilst still allowing access to the necessary connectors with the ends of the enclosure removed. I like this design but I would still like to replace the aluminium enclosure with a custom carbon fibre one. I have ordered some foam to carve into shape which can then be used to make a mould to create the carbon fibre piece. This will hopefully happen quite soon and I promise a blog entry outlining the process if it is a success!

Meanwhile I have been developing some software for the robot. The TFT screen is attached to the Raspberry Pi, as are the tactile switches. A bit of time was spent working out how to use the SPI interface through Python to send pixel data to the screen. I found a series of tutorials online that were very helpful: http://w8bh.net/pi/TFT1.pdf. I can now write text to the screen, and I have interfaced the buttons so that I can cycle through options on the screen and select what I would like the robot to do, or display sensor data.
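
The gist of driving the screen from Python, following the approach in that tutorial series, looks something like the sketch below. The data/command pin number is my assumption for illustration; the display needs a GPIO line toggled alongside the SPI transfers to distinguish commands from pixel data.

```python
import spidev
import RPi.GPIO as GPIO

DC_PIN = 24  # assumed wiring for the display's data/command line

GPIO.setmode(GPIO.BCM)
GPIO.setup(DC_PIN, GPIO.OUT)

# Open SPI bus 0, chip-select 0, where the TFT is assumed to be wired.
spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 16000000

def write_command(cmd):
    # D/C held low tells the display controller this byte is a command.
    GPIO.output(DC_PIN, GPIO.LOW)
    spi.xfer2([cmd])

def write_data(data):
    # D/C held high marks the bytes as parameter or pixel data.
    GPIO.output(DC_PIN, GPIO.HIGH)
    spi.xfer2(list(data))
```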

I have been thinking about what I would like the robot to do and a lot of it revolves around vision so that is going to be the focus of future work. Identifying targets to reach and obstacle avoidance to allow the robot to get to its goals are my main aims for the future.

 

BFRMR1 build complete

This robot came together very quickly, partly because I already had all the parts I needed and partly because this design is a relatively simple one. From my last post to having a robot ready to test took about three weeks. Since then I have been modifying software from my humanoid robot for use on a mobile robot. The parts that needed fabricating for my new robot included a base plate to mount everything to, new brackets for the wheel servos and encoders, a bracket for three Sharp IR sensors, a mounting plate for the Arduino and a new pan/tilt arrangement for the camera and sonar sensor. I wanted to use 40mm castors at the rear of the robot, as I have found these roll really well on carpet. This led to possibly the trickiest part of the design: I had to raise two sections at the back of the base plate to mount the castors to, so that the robot was level and didn’t have a huge ground clearance, which might have made the robot look odd. The picture below shows the finished base plate, painted black.

Finished robot base plate

The other brackets and components were fabricated and painted black as well. I decided to modify two servos for use in the head pan/tilt arrangement. I wanted a signal from the internal pot to use as feedback of servo position, which involved opening the case of each servo and soldering a wire to the centre pin of the internal potentiometer; this can then be read by the Arduino via an ADC. I knew that this would not be as accurate as using an external potentiometer, as I did with my humanoid robot, but I was willing to make this sacrifice at this stage to simplify the mechanical design of the pan/tilt arrangement. With all the parts ready for assembly, I took a picture of all the components laid out, because I’m sad like that and I think it’s cool!

BFRMR1 components ready for assembly

The next task was to assemble the robot. I also needed to make and attach encoder disks to the wheels. These were designed on the computer, printed onto card and glued to the wheels. The assembled robot is shown below.

BFRMR1 ready to roll

Another view of BFRMR1

I still plan to make, or have made, a cover for the robot to hide and contain all the wires and electrical components. This is a work in progress but testing and software development can still continue on the robot as it is.

At the moment I am still testing and modifying software to display data from the robot on a screen and control the robot manually from either a PC or the Raspberry Pi. My goal with this robot is to have several modes of operation that can be selected between, for example obstacle avoidance, wall following and colour tracking. This is very much a work in progress, but when I have something working I will make and post a video. That’s it for now, back to software!

A New Robot

It’s a bittersweet moment, the start of a new project. On the downside, my humanoid project will be put on hold and the robot will be robbed of many of its parts. That project has run its course with me, and I have found myself at a dead end with its progress. I was happy with the robot build and the software development I did, but it’s time for a new challenge. This leads me nicely on to the sweet side of the situation: the start of a new robot project! I am returning to mobile robots, which I found to be a lot of fun.

This project has some simple goals from the outset. I want to build a sturdy, reliable mobile robot platform, and ultimately I want as many of the electronics and delicate components as possible covered or hidden away. The reason for this is a certain 14-month-old chap who takes great interest in anything and everything and is currently learning about the world by testing things to destruction! I will be using many of the parts from my humanoid robot, so I will be sticking with the Arduino Mega, my custom interface board and the Raspberry Pi. I have purchased a new battery for powering the Pi, a 12000mAh LiPo of the type used to recharge your phone when its battery goes flat. The wheels will be driven using modified servos with external encoders, and for sensors I want to use a combination of Sharp IR sensors, sonar and a webcam. My intention is to design and build an aluminium base for everything to attach to, and to enclose the electronics with a cover made from aluminium, or possibly fibreglass or carbon fibre.

It’s early days, but I have purchased some new wheels, which I have machined to allow me to attach a servo horn, and my humanoid robot has been robbed of many of the parts I need for this new project. I am currently designing the base plate, but I have a couple of pictures showing a very rough outline of how everything may end up fitting together. The first picture shows the new wheels and the second shows the components laid out to help me visualise where everything will go.

 

New wheels

 

Early development

Plenty for me to be getting on with but I will be back with more as and when I make some progress.
