BFRCode

With the redesign of my robot BFR4WD complete I have moved back to developing the software to control the robot. As I have alluded to in previous posts, I have been working on a protocol for sending commands from the Raspberry Pi to the Arduino. The idea is that the Pi carries out the high level control (move forward 50cm, turn 30 degrees, etc.) as well as image processing, while the Arduino is in charge of the low level control associated with these commands. I took inspiration from Gcode and started developing what I’m now calling BFRCode. I’m sure this isn’t a new idea, but it is my take on it and I can tailor it to meet the requirements of my projects. BFRCode consists of a list of alpha-numeric command strings that can be sent by the Pi and interpreted and executed by the Arduino. The current list of commands (BFRCode_commands.xls), along with all of the other code I am working on, can be found on github here. The command to move the robot forward 50cm, for example, looks like W1D50; an anti-clockwise turn of 30 degrees would be W3D30. I have also added functions for driving an arc-shaped path and for turning the robot to face a given direction as measured by the compass. The code also allows the head to be moved, sensor readings to be returned and power to the servos to be turned on and off. All move commands currently return a status code to indicate whether the move was completed successfully or not, as may be the case if an obstacle was encountered during the movement. The Arduino is in charge of detecting obstacles during movements.
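To give a flavour of how this looks from the Pi side, here is a stripped-down Python sketch. The serial port name, baud rate and the ‘A0’ status code are just placeholders for illustration, not the exact values the robot uses:

```python
import serial

# Port name and baud rate are placeholders - use whatever the Arduino shows up as.
arduino = serial.Serial('/dev/ttyACM0', 57600, timeout=5)

def send_command(command):
    """Send a newline-terminated BFRCode string and wait for the status reply."""
    arduino.write((command + '\n').encode())
    return arduino.readline().decode().strip()

# Drive forward 50cm, then turn anti-clockwise 30 degrees,
# checking each move completed before issuing the next.
for cmd in ['W1D50', 'W3D30']:
    status = send_command(cmd)
    if status != 'A0':                 # 'A0' stands in for the "move OK" code
        print('Move %s stopped early (status %s)' % (cmd, status))
        break
```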

This control scheme has the benefit of separating the high level control from the low level. Functions can be developed on the Arduino and tested in isolation to make sure they do what they should, and can then simply be called by the Python script running on the Pi. Likewise, when developing the high level code on the Raspberry Pi, very little thought needs to be put into moving the robot: just issue a command and check that it was executed correctly. Complex sequences of movements can be created by putting together a list of commands and storing them as a text file, much as you would expect from a Gcode file. A Python script can then read through the file and issue the commands one at a time, checking each has completed successfully before issuing the next. I have found that sending strings with a newline termination is a very reliable method of exchanging data and can be done at a reasonably high baud rate. The other advantage to controlling the robot like this is that data is only sent between the Raspberry Pi and the Arduino when a command is issued or data is required. This is in contrast to previous approaches I have taken, where data was constantly being sent back and forth.
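Replaying a stored BFRCode file then only takes a few more lines, roughly like this (re-using the send_command helper from the sketch above; the ‘#’ comment convention, the ‘A0’ status and the square_path.txt filename are again just placeholders):

```python
def run_bfrcode_file(filename):
    """Read a BFRCode text file and issue the commands one at a time,
    stopping if a move does not complete successfully."""
    with open(filename) as script:
        for line in script:
            command = line.strip()
            if not command or command.startswith('#'):
                continue                       # skip blank lines and comments
            status = send_command(command)
            if status != 'A0':                 # placeholder "completed OK" code
                print('Stopping: %s returned %s' % (command, status))
                return False
    return True

run_bfrcode_file('square_path.txt')
```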

To send commands, up until now, I have been using a Python script I wrote that takes typed commands from the command line and sends them to the Arduino. This was OK, but I decided I wanted a more user friendly and fun way to control the robot manually, both for testing purposes and to show people what the robot can do whilst I’m working on more autonomous functions. I have started making a GUI in Tkinter that will send commands at the touch of a button. If I use VNC to connect to the Raspberry Pi, it means I can control the robot manually using any device I choose (laptop, phone or tablet). I have also set the Raspberry Pi up as a wi-fi access point so I can access it without connecting to a network, ideal if I take the robot anywhere to show it off. Below is a screenshot of the GUI I am working on.

BFRGui

I created some custom graphics, saved as .gif images so that a Tkinter canvas can display them. There are controls for moving and turning the robot and pan and tilt controls for the head. The compass graphic shows the current compass reading. If the compass graphic is clicked, the user can drag a line to a new bearing and, on release of the mouse button, a command will be issued to turn the robot to the new heading. I have buttons for turning servo power on and off and a display showing the current sonar reading. I have also incorporated a display for the image captured by the webcam: I am using OpenCV to grab the image and then converting it to be displayed on a Tkinter canvas. I’m really pleased with the way that BFRCode and the GUI are turning out. My three year old boy has had his first go at manually controlling a robot with the GUI and that is a success in itself!
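For anyone curious, getting the OpenCV image onto the Tkinter canvas boils down to converting the colour order and wrapping the result in a PhotoImage. Roughly like this (the canvas size, refresh rate and the use of PIL’s ImageTk are example choices rather than the exact code in the GUI):

```python
import cv2
import Tkinter as tk                 # 'tkinter' on Python 3
from PIL import Image, ImageTk

root = tk.Tk()
canvas = tk.Canvas(root, width=320, height=240)
canvas.pack()
capture = cv2.VideoCapture(0)

def update_image():
    ok, frame = capture.read()
    if ok:
        frame = cv2.resize(frame, (320, 240))
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # OpenCV is BGR, Tkinter wants RGB
        photo = ImageTk.PhotoImage(image=Image.fromarray(rgb))
        canvas.create_image(0, 0, image=photo, anchor=tk.NW)
        canvas.photo = photo         # keep a reference or it gets garbage collected
    root.after(100, update_image)    # refresh roughly ten times a second

update_image()
root.mainloop()
```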

I have made a very quick video of me controlling the robot using the GUI after connecting to the Raspberry Pi using VNC from a tablet.

Something I would like to develop is a BFRCode generator that allows a path to be drawn on the screen and then turned into a BFRCode file. The generated file could then be run by the robot. Head moves and image capture could be incorporated into the instructions. This could be useful for security robots that patrol an area in a fixed pattern. I am still very keen to develop some mapping software so the robot can plot a map of its environment autonomously. The map could then be used in conjunction with the BFRCode generator to plot a path that relates to the real world.

3D printing a new robot

I was very pleased with the way BFRMR1 turned out, but it had some design flaws that I needed to address. The servos used for the drive wheels were a bit too slow. The drive wheels were positioned near the centre of the robot to help limit the size of the turning circle, but this meant that the robot would tip forwards when stopping. It also meant that I couldn’t mount anything to the front of the robot, such as a gripper. Wheel encoder resolution was also a bit limited. An idea started forming in my mind for a new robot.

The idea was to make a four wheeled robot, with each wheel driven independently. I wanted to stick with using servos to drive the wheels. I love servos. They are cheap and very easy to control. But they can be slow! My idea evolved into making a gearbox to speed the servos up a bit, whilst taking the hit of losing a bit of torque. However, for this new robot it wouldn’t matter too much, as I was doubling the number of drive wheels. I thought of several ways of gearing the servos. A drive belt and pulleys was the first option, but I decided to go for gears instead. I could have bought the required gears, but I thought that this project was as good an excuse as any to invest in a new piece of equipment: a 3D printer!

After a bit of research I decided on a Prusa i3 printer, bought as a kit. I painted the aluminium frame and, after a few days and a couple of long nights, I had my printer assembled and working.

Prusa i3 3d printer

After calibration and a number of test prints, I set about designing some gears to form a gearbox. I used OpenSCAD to design all the parts for the robot. To design the gears I downloaded a gear generator from thingiverse http://www.thingiverse.com/thing:3575. I started with a 25 tooth gear that connects directly to a servo horn, driving a 14 tooth gear on the wheel drive shaft. Attached to the 14 tooth gear is a finer-pitched 45 tooth gear, which drives a 14 tooth gear on the encoder disc. All of this together would increase the top speed of the servo and give me an encoder resolution of 180 pulses per wheel revolution. It took a few tries to get each of these gears right and some of the prototypes are shown in the picture below.

3D printed gear prototypes
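For anyone checking the numbers, the ratios work out roughly as follows (the 28 slot encoder disc is described further down, and the 180 figure assumes both edges of each slot are counted):

```python
servo_to_wheel = 25.0 / 14.0             # wheel turns per servo turn, ~1.79x faster
wheel_to_encoder = 45.0 / 14.0           # encoder disc turns per wheel turn, ~3.21x
slots_per_wheel_rev = 28 * wheel_to_encoder      # = 90 slots per wheel revolution
counts_per_wheel_rev = 2 * slots_per_wheel_rev   # = 180, counting both edges of each slot
```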

With all of the gears designed I made a trial gearbox. I wanted to use aluminium rectangular box section to house the gearbox. This means all of the gears are hidden and contained, and also means that the gearbox could form part of the robot’s chassis. The prototype gearbox just used a short section of the aluminium box as a test. The picture below shows the gearbox from the end, with the encoder disc nearest the camera.

Prototype gearbox

The final gearbox design used a long length of aluminium box section with two servos attached and two gearboxes within. This forms the drive for one side of the robot. Access holes were cut into the box section to allow assembly and adjustment of the gearbox, and the picture below shows the view into one of the access holes. You can see the two servos mounted with gears attached and the drive shaft passing through the box section with its gear attached. The encoder discs are hidden.

One completed gearbox

I also designed and printed some bushes for the drive shaft to run in, which clip into holes drilled in the aluminium box section. The hole through the middle of these is slightly undersized so that they can be drilled out to exactly the right size for the shaft to fit in.

3D printed bushes

Each encoder consists of a 28 slot encoder disc and a photo-interrupter to detect each of the slots as the disc turns. I decided on Sharp GP1A52LRJ00F slotted optical switches. These have photologic outputs, so only a minimum of external circuitry is required to interface them with the Arduino. In fact only one resistor is needed, so I used stripboard to make four encoder circuits that were then mounted inside the aluminium box section with the encoder discs turning between the sensors.

With two gearbox/chassis sections made I had two sides of the chassis. To join these together and make a complete chassis I needed to design some brackets. These brackets attached aluminium box section cross members to the gearbox sections to make a rectangular chassis. These are shown in the picture below.

Chassis bracket

One feature I wanted for this robot was the ability to separate the electronics and sensors from the chassis easily. To achieve this I decided to mount the Arduino Mega, the Raspberry Pi, the batteries and the USB hub on a sheet of HDPE plastic that is then bolted to the chassis with four bolts. Should I need to work on the chassis in the future, I can just undo these four bolts, disconnect the encoders and drive servos from the Arduino and remove the electronics board. I also decided to mount the head pan/tilt mechanism to this board. The picture below shows the chassis with the electronics board attached.

Assembled chassis and electronics

The head pan/tilt mechanism consists of two regular servos and some 3D printed brackets. The picture below shows the bracket that attaches the pan servo to the electronics board.

Servo bracket for head pan/tilt

Attached to the pan servo is the tilt servo via another 3D printed bracket. I designed a further piece that fixes to the tilt servo that the head can be bolted to, all shown in the picture below.

Head tilt servo bracket

The head of the robot houses a sonar sensor and a webcam. See the picture below showing the assembled head attached to the pan/tilt mechanism.

3D printed head

With all of this done the robot is almost mechanically complete. I need to design and print some mounts for two IR sensors, which will probably mount to the electronics board on either side of the pan/tilt mechanism. The other job to do is to design and print a housing for a small screen and some buttons, for controlling the robot without having to connect to it from another PC.

BFR4WD almost complete

I have been developing software for the new robot alongside the mechanical build. I have modified the wheel control loop software from my previous robot to now control all four wheels at the same time. A lot of the software from BFRMR1 can be used in this project, but one thing that I knew needed work was the communications between the Arduino and the Raspberry Pi. I was using serial communications but I never really liked the protocol I was using, which I developed myself, so I can’t even blame anyone else for it. I am sticking with serial comms but wanted an improved protocol. Inspired by G-code as used on 3D printers, I decided to come up with my own protocol to send commands in the form of strings to the robot. I’m calling it BFR-Code for now! The basic idea is that movement commands or requests for data can be sent to the robot along with some data to determine how to move. So a move command string will start with a capital letter M, followed by a number to determine the type of move and then any data required, preceded by a capital D. The command M1 D200, for example, would drive the robot forward 200 encoder ticks. Error codes and data can be returned to the Raspberry Pi in a similar manner. This whole thing is a work in progress and I will make a blog post in the future with full details if this works out well.
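The real parsing happens in the Arduino sketch, but the idea of the format is easy to show in a few lines of Python (illustration only, not the robot’s actual code):

```python
import re

def parse_bfrcode(command):
    """Split a command like 'M1 D200' into {'M': 1, 'D': 200}.
    Illustration of the format only - the robot parses this on the Arduino side."""
    return {letter: int(number)
            for letter, number in re.findall(r'([A-Z])(-?\d+)', command)}

print(parse_bfrcode('M1 D200'))    # {'M': 1, 'D': 200}
```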

For now I am continuing work on the software but I am near to making a video of the robot in action so check in again soon!

Navigation to a target

I have been working hard lately on getting my robot to do something a bit more interesting than just wandering around not bumping into things. I decided I wanted the robot to move with purpose, towards a goal of some description. I thought that using vision would be a good way for the robot to detect a target that it could then navigate towards. I went through various options, carrying out some experiments on each to determine what would make an easily identifiable target. I thought about using natural landmarks in the robot’s environment to act as targets, but decided that purpose-made visual targets would allow for more reliable detection. Coloured objects are easy to detect using a camera and OpenCV, and this was my first option. A certain shape of a certain colour could act as a target, but when experimenting I found that a lot of false positives occur in a natural environment: any object of a similar colour and shape will trigger as a target. I reasoned that the target should contain more information for the robot than a simple shape.

I started playing around with QR codes using a library called zbar for Python. Using an online QR code generator I was able to make QR codes to act as a target. Zbar is great and I could reliably read a QR code and interpret the information it contained. The issue I ran into with this is the distance at which the code can be seen. When the QR code was further than around 1 metre from the camera it could not be read with my robot’s camera. That is not ideal for navigation, when the robot could be several metres from the target; it would never see it unless it got close enough by chance. I added to the QR code idea by surrounding the QR code with a coloured border. This meant that the robot could detect the coloured border and drive towards it until the QR code was readable. This worked to an extent, but I have since developed a personal issue with QR codes: I can’t read them! They only mean something to my robot. If I place these symbols around a room, I don’t know what each one is. I wanted to find a target that was easily readable by the robot and by me, or anyone else who looks at it. I settled on a solution using a coloured border with a simple symbol inside that I would detect using OpenCV, as shown below.

Home symbol

Food symbol

Detecting the border is quite straightforward: threshold the image to isolate the green colour and then find the contours in the thresholded image. I went a bit further with this and looked for a contour that contains a child contour, the child contour being the symbol within the border. This means that only green objects with a non-green area inside them are detected as potential targets. I then approximated the contour that is the outer edge of the border to leave just the coordinates of the four corners. I ignore any shapes that have more or fewer than four corners, again improving detection reliability. This also means that I can do a perspective correction on the detected symbol to give me an image that I can match to a known symbol. I read an issue of the MagPi magazine that had an article about using OpenCV to detect symbols, which can be found here. This is more or less the same as what I am trying to achieve, although I prepared the image from the camera in a slightly different way. The section on matching the detected symbol to a known image, however, is exactly what I did, so I will let you read the article rather than duplicate it all here. What I was left with is a function that can capture an image and check it for green borders that are square in shape. If a border is found, it can then check the contents of the border and match it to known images. At the moment I have two symbols, one for home and one for food, and the robot can distinguish between the two. As an added bonus, because the green border is a known size I was able to calculate an approximate distance to the target using the lengths of the sides of the border. I was also able to compare the lengths of the left and right sides of the border to give an indication of which way the target symbol is facing compared to the robot’s heading.
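Stripped right down, the border detection looks something like this. The HSV limits and the 100 pixel output size are example values, and the corner ordering is glossed over before the perspective transform:

```python
import cv2
import numpy as np

def find_target(frame):
    """Look for a green square border with something inside it.
    Returns (corners, perspective-corrected contents) or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))
    # findContours returns 2 or 3 values depending on the OpenCV version
    found = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    contours, hierarchy = found[-2], found[-1]
    if hierarchy is None:
        return None
    for i, contour in enumerate(contours):
        if hierarchy[0][i][2] == -1:
            continue                      # no child contour, so nothing inside the border
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) != 4:
            continue                      # only accept four-cornered shapes
        corners = approx.reshape(4, 2).astype(np.float32)
        # Perspective-correct the border contents ready for matching against
        # the known symbols (corner ordering would need sorting out properly).
        square = np.array([[0, 0], [99, 0], [99, 99], [0, 99]], dtype=np.float32)
        warp = cv2.getPerspectiveTransform(corners, square)
        symbol = cv2.warpPerspective(frame, warp, (100, 100))
        return corners, symbol
    return None
```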

Armed with all of this information I was able to get the robot to drive towards, and align itself to the target symbol. A video of this in action is shown below.

At the moment the navigation side of the code needs more work, particularly obstacle avoidance. I am planning to combine the obstacle detection using OpenCV with the detection of targets to give a robust way of navigating to a target whilst avoiding objects on the floor. At the moment all targets found that contain the incorrect symbol are ignored. I want to add a way to log where all targets (or potential targets) are for future reference by the robot. Some sort of map will be required but this is a project for another day. The code for my robot can be found on github. Be aware that this is very much a work in progress and subject to change at any time.

 

Obstacle detection using OpenCV

I have been working on a way to detect obstacles on the floor in front of the robot using the webcam and OpenCV. I have had some success and I have made a short video of the obstacle detection in action.

The method I am using involves capturing an image, converting it to grayscale, blurring it slightly and then using canny edge detection to highlight the edges in the image. Using the edge detected image, starting from the left and moving along the width of the image in intervals, I scan from the bottom of the image until I reach a pixel that is white, indicating the first edge encountered. I am left with an array that contains coordinates of the first edges found in front of the robot. Using this array, I then look for changes in the direction of the slope of the edges that may indicate an object is present. At the moment I am ignoring anything in the top half of the image as anything found here will probably be far enough away from the robot to not be too concerned about. This will change depending on the angle of the head. If the head is looking down towards the ground, obviously everything in the scene may be of interest. With the changes of slope found, I then scan in both directions to try and find the edge of the object, indicated by a sharp change in values in the array of nearest edges.
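In outline, the edge-scanning part looks like this (the column spacing and Canny thresholds are example values):

```python
import cv2
import numpy as np

def nearest_edges(frame, step=10):
    """For every 'step' pixels across the image, find the lowest edge pixel,
    i.e. the first edge in front of the robot."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    height, width = edges.shape
    nearest = []
    for x in range(0, width, step):
        rows = np.nonzero(edges[:, x])[0]   # rows in this column containing an edge pixel
        if len(rows) == 0:
            nearest.append(0)               # no edge found, treat as clear
        else:
            nearest.append(rows[-1])        # largest row index = lowest = closest
    return nearest

# Sudden jumps in np.diff(nearest_edges(frame)) hint at the sides of an obstacle.
```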

This method seems to work quite well but some tweaking may be required for it to work in all environments. I am planning on using the coordinates of the edges of the obstacles found to create a map of some kind of the area in front of the robot. Combined with the coordinates of the target the robot is heading for, I hope to be able to plan a path for the robot to follow to reach the target.

For anyone that is interested I have put my code on Github. It is a work in progress but may be worth a look!

 

BFRMR1 video

I have finally got around to making a video of BFRMR1 in action. The video shows some of the features of the robot and obstacle avoidance and colour tracking in action.

During testing of the robot I found that at higher speeds, when the robot stops, the front of the robot dips a little. This causes the IR sensors to see the floor and leads the robot to believe there is an obstacle where there isn’t one. This was always a possibility, as the drive wheels are quite close to the centre of the robot and I was relying on the weight of the batteries to keep the back end of the robot weighted down. I put the wheels where they are so that when the robot turns, it does so within its own footprint, to hopefully stop the robot bumping into things when it turns. To overcome this issue I decided to add some “legs” to the front of the robot that do not contact the ground during normal running but are there to stop the robot tipping forward when it stops from high speed. These are pictured below.

"Legs" at the front of the robot

“Legs” at the front of the robot

I chose this solution as the issue only occurs when running the robot quickly and I didn’t want to add any weight to the rear of the robot after all the effort of lightening the robot with the carbon fibre shell! These “legs” can be easily removed if necessary and I hope to solve the issue in software in the future.

Raspberry Pi and Wiimote

It’s been a while between posts again but I have made some time for my projects recently.  I have been playing around with hand switches and adding some functionality to my python software. Menu bars have been added with options for enabling different functions such as continuous mode (servos move as soon as the slider is moved) and head tracking mode.

I was working away on improving my code when I saw this cheeky little post on the Raspberry Pi website.  http://www.raspberrypi.org/archives/3298.  I was inspired and my Raspberry Pi has been waiting patiently for me to do something with it for a while now and I thought this may be a good opportunity to have a play. The plan was to use a Wiimote to control my robot, using the Raspberry Pi as the middle man. It happened that I had a couple of Wiimotes knocking about so it seemed too good a project to pass up. I acquired a Bluetooth adapter and promptly threw it in the bin because it caused me no end of trouble on the Pi and on my other computers both running Ubuntu. I purchased a different Bluetooth adapter (cheaper than the first I may add) and everything worked wonderfully. I installed blueman as I have found it works well on my Ubuntu machines. To test the connection with the Wiimote I installed wmgui (sudo apt-get install wmgui). This is a little program that connects to the Wiimote and shows button presses and accelerometer data coming from the Wiimote on the screen. So far so good, Bluetooth working and  Wiimote connected.

The next job was to work out how to get the data from the Wiimote into a Python program so I could use it to control my robot. Luckily, it would appear someone has done all of the hard work in the form of CWiiD. I installed CWiiD onto the Pi (sudo apt-get install python-cwiid). I found the following page more than helpful in getting started with using CWiiD in my Python program: http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/wiimote/

I think Python is great. Everything works really well and all of the code I had written for my desktop to control the robot could be used with the Raspberry Pi with very little modification. All I had to do was comment out the OpenCV stuff for now, as the SD card I’m using has a clean install of wheezy on it and I haven’t got round to building OpenCV again. I added some code to connect the Wiimote to the Pi. The three-axis accelerometer in the Wiimote gives values of acceleration in the x, y and z directions. Using two of these values, it’s easy to detect whether the Wiimote is being tilted front to back or rolled left to right. Rotation in the third plane is more complicated, so I decided that only two servos should be controlled at a time. To move each pair of servos, a button must be held down before the accelerometer data is used to move the servos: button B for the head, and the direction buttons to switch between pairs of servos on the arms. For an added touch I used the home button on the Wiimote to send all of the robot’s servos to their home position.
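The core of the Wiimote code is only a few lines with CWiiD. Roughly like this, where the centre values and scaling are approximations and actually sending the positions to the robot is left out:

```python
import time
import cwiid

print('Press 1+2 on the Wiimote to pair...')
wiimote = cwiid.Wiimote()                         # blocks until the Wiimote connects
wiimote.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC  # report buttons and accelerometer

PAN_CENTRE, TILT_CENTRE = 128, 128                # placeholder servo positions

while True:
    buttons = wiimote.state['buttons']
    acc_x, acc_y, acc_z = wiimote.state['acc']    # values sit around 120 when level

    if buttons & cwiid.BTN_B:                     # hold B to move the head servos
        pan_target = PAN_CENTRE + (acc_x - 120) * 2
        tilt_target = TILT_CENTRE + (acc_y - 120) * 2
        # ...then send the new positions to the robot over serial

    if buttons & cwiid.BTN_HOME:
        pass                                      # ...issue the "all servos home" command

    time.sleep(0.05)
```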

This project was perfect for the Raspberry Pi and there is a lot of documentation out there to help. What I would like to do next is incorporate the storing of robot positions using the Wiimote to allow sequences to be programmed and replayed.

I’ve put together a video showing the Wiimote control in action. Enjoy!

It’s been a while…

It’s been too long since my last post, sorry about that! I’ve had a lot of non-robot related stuff going on, but I have managed to make some progress. I’ve modified my robot’s head to include a Sharp GP2D12 IR sensor to give the robot some basic depth perception. This required fabrication of a new head bracket and some trial and error to get the sensor positioned correctly. The first attempt used a mini servo to move the IR sensor up and down. I figured that if an object is close to the camera then the IR sensor would need to be angled further down in order to be looking straight at the object; on the other hand, if the object is further away, the IR sensor needs to be angled slightly higher to see the object. I fabricated this set-up but found through some experimentation that the horizontal beam of the IR sensor is wide enough to pick up an object near the centre of the camera’s field of view anywhere within the usable range of the sensor. As such, this first design went in the bin and I fabricated a new head bracket with the IR sensor mounted above the camera, with some adjustment to alter the angle. Have a look at the picture below to see the final design.

 

The IR sensor gives an analogue output relating to distance of the object from the sensor. The Arduino code was modified to read the analogue input pin that the sensor is connected to, scale the value and then output this via serial with all the other data from the potentiometers.

On the software side of things, a lot has changed. I started playing around with the Raspberry Pi, initially using qtonpi. This seemed to have a lot of potential but at this stage appears to be in the early stages of development. Being a bit lazy, I didn’t feel the effort required was worth it. I may revisit this in the future. I decided to use the recommended Raspbian distribution instead, which works pretty much out of the box. I also decided to delve into the world of Python programming due to the strong links between it and the Raspberry Pi.
On the back of this, and in an attempt to get to grips with using Linux, I’ve replaced Windows on my desktop PC with Ubuntu. I’ve re-written a lot of my code in Python and built OpenCV with the Python bindings. I have to admit, I think Ubuntu is great and I’m getting on really well with Python. I created a GUI using Tkinter and used the Python serial library to allow the servos of my robot to be controlled from the PC. The program only has basic functionality at the moment but only took a few days to write. The longest part of the process was building OpenCV with all the required dependencies and getting to grips with using it in a Python program. Check out the screenshot below to see the program running. I’ve implemented colour tracking in OpenCV by converting the webcam image to HSV and applying a threshold to give a greyscale image. The centre of the coloured object is then identified using moments. The ten blue sliders on the GUI set the desired position of the servos and the edit box below each one shows the actual position received from the potentiometer. The other six (green) sliders allow the OpenCV threshold values to be altered to identify different colours.


I did run into one problem using OpenCV moments with Python. I had a major memory leak when trying to find the moments of the image. I think this was to do with converting an iplimage to Mat, which was required before the image could be passed to cv.moments. I ended up converting the image to a numpy array before passing it to the newer cv2.moments function, which worked just as well and got rid of the memory leak. It took me two days to get to the bottom of that one!
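For reference, the working version of the colour tracking boils down to something like this, with the HSV limits coming from the green sliders on the GUI:

```python
import cv2
import numpy as np

def track_colour(frame, lower_hsv, upper_hsv):
    """Return the (x, y) centre of the thresholded colour blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask)              # mask is already a numpy array, so there is
    if m['m00'] == 0:                  # no iplimage-to-Mat conversion (and no leak)
        return None
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
```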

I also built OpenCV on the Raspberry Pi, which takes about 10 hours to complete! I’ve tried running my Python software on the Raspberry Pi. The GUI on its own will run well and allow control of the servos and display their actual position. However, when image display and processing is added into the mix, the Pi seems to struggle. The problem seems to be a delay in getting the image data from the camera as opposed to the Pi struggling with the processing side of things. I didn’t manage to get more than about 2 frames per second and making any more progress using the Pi was hard work. As a result the Pi is back in its box for the moment and I’m continuing development using Ubuntu on my desktop.

As you can see, I’ve made a few changes to the tools I’m using to develop the software for my robot. It was a decision that was surprisingly difficult to make. I like Qt and the C++ language and was really starting to get to grips with it. You can make some great looking GUIs, and I’ve had a play with QML, which opens up more possibilities for custom, great looking interfaces. However, due to the fact that I’m not a professional programmer and I tend to dip in and out of programming as and when I get time, it would take me a long time to implement new functionality in my programs. Python, on the other hand, allows new functionality to be added quickly, meaning a new idea or hardware addition can be included without too much fuss. I’ve seen comments that Python runs slower than a compiled language, but I can’t say that I’ve noticed any real slowdown when the program is running, although I appreciate that my program is fairly basic at the moment. I think I’m going to stick with Python for the moment while the robot is very much in development.

My next step is to add a visual representation of the reading from the IR sensor to the GUI and do a bit more testing. I also need to add the head tracking functionality to my Python code along with the ability to program moves and replay them. Hopefully it won’t be quite as long between posts for anyone following my progress. I’ll leave you with a picture of the whole robot with his new head. See you soon!

 
