Modelling the robot arm – Denavit Hartenberg parameters

Part 7 of my YouTube series, documenting the build and software development of my desktop robot head and arm, is now available.


I have designed a new attachment for the end of the robot arm. For the time being I have removed the touch-sensitive ‘hand’ and replaced it with a sonar sensor, mounted via an additional micro-servo. The idea is that the sonar can measure the environment and use this information, in conjunction with the camera mounted in the head, to build up a visual and spatial model of its surroundings. I’ll be honest, I’m still a bit fuzzy on how this is going to work, but it should keep me busy for a while.

This episode also demonstrates the progress I have made in modelling the robot arm. With the model in place, I have been able to generate a mimic of the robot arm, embedded in the GUI, using matplotlib to produce a 3D plot that displays the model.

The first step in modelling the arm was to find its Denavit Hartenberg parameters. There are lots of great resources online that detail how this is done, so I will only cover it briefly. I assigned reference frames to each of the robot arm joints, namely the base servo, lower arm servo, elbow servo and end effector (sonar) servo. From these reference frames, and some measurements taken from the robot, I was able to find the Denavit Hartenberg parameters shown in the table below.


Description   Link   a                α (deg)   d       θ
Base          0      15mm             90        102mm   θBase
Lower         1      150mm            0         0       θLower
Elbow         2      161mm            0         0       θElbow
End           3      38mm             0         0       θEnd
Sonar         4      Sonar distance   0         0       0


The variables in this case are the θ values, which are the joint angles. You will notice the addition of the Sonar ‘link’ in the table. I will explain more about this in a moment. With these values identified, it is then a case of using them in a Denavit Hartenberg matrix to find the Cartesian coordinates of each joint. Multiplying the coordinates of a joint by the successive Denavit Hartenberg matrix gives the coordinates of the next joint on the arm.
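
To make this concrete, here is a minimal sketch of the forward kinematics, using the parameters from the table above. The function and variable names are my own, angles are assumed to be in radians, and only the numbers come from the table.

```python
import numpy as np

def dh_matrix(a, alpha, d, theta):
    """Standard Denavit Hartenberg transform from one link frame to the next."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def arm_joint_positions(theta_base, theta_lower, theta_elbow, theta_end):
    """Return the XYZ coordinates of each joint, starting at the base frame (mm)."""
    links = [
        (15.0,  np.pi / 2, 102.0, theta_base),   # Base
        (150.0, 0.0,       0.0,   theta_lower),  # Lower arm
        (161.0, 0.0,       0.0,   theta_elbow),  # Elbow
        (38.0,  0.0,       0.0,   theta_end),    # End effector (sonar servo)
    ]
    T = np.eye(4)
    points = [T[:3, 3].copy()]
    for a, alpha, d, theta in links:
        T = T @ dh_matrix(a, alpha, d, theta)    # chain the successive transforms
        points.append(T[:3, 3].copy())
    return np.array(points)                      # 5 x 3 array of joint coordinates

# Print the joint coordinates for one example set of joint angles
print(arm_joint_positions(0.0, np.pi / 2, 0.0, 0.0))
```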

I was able to calculate these coordinates for the arm, initially using joint angles given by sliders in the GUI, and plot them on a 3D matplotlib plot. This was embedded in the GUI and the robot arm model moved as the sliders were altered. It was then possible to read live joint angles from the robot so that the model reflected the actual position of the robot arm at any time.
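
For anyone curious how the mimic is drawn, a plot along these lines (a single 3D line through the joint coordinates) is all that is needed. The coordinates below are made up for illustration, and the plot is shown standalone with plt.show() rather than embedded in the GUI; in practice the points would come from the forward kinematics above.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

# Made-up joint coordinates (mm) purely for illustration
points = np.array([[0, 0, 0], [15, 0, 102], [110, 0, 220], [230, 0, 300], [265, 0, 315]])

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot(points[:, 0], points[:, 1], points[:, 2], "-o")  # links as lines, joints as markers
ax.set_xlabel("X (mm)")
ax.set_ylabel("Y (mm)")
ax.set_zlabel("Z (mm)")
plt.show()
```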

The additional sonar ‘joint’ in the table above was used to calculate the point in 3D space that the arm-mounted sonar sensor was measuring. I treated the sonar sensor as an additional prismatic joint on the robot, so the variable is the distance measured by the sonar sensor whilst the angle of the joint remains constant. I was then able to plot a line representing the sonar sensor reading on the robot model.
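
Here is a rough sketch of that sonar ‘link’ idea: the measured distance is substituted for the link length a, with the angles fixed at zero, so the measured point is simply the sonar reading pushed out along the end effector frame. T_end would be the 4x4 end effector transform from the forward kinematics above; sonar_mm is the reading from the sensor.

```python
import numpy as np

def sonar_target(T_end, sonar_mm):
    """XYZ of the point the sonar is measuring, given the end effector transform."""
    sonar_link = np.array([
        [1.0, 0.0, 0.0, sonar_mm],   # prismatic 'link': translation only, no rotation
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    return (T_end @ sonar_link)[:3, 3]
```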

I plan to do the same bit of work for the robot head and have a model of that on the screen as well. This is likely to be the content for the next video in the series. I also want to explore a way to use the sonar readings to plot the environment as the robot arm moves around. At the moment I am thinking either a point cloud type data structure or a 3D occupancy grid type approach, but this is very early days so the approach may change.

For now, please enjoy the most recent video and subscribe to my YouTube channel for notifications of future videos. Any feedback or recommendations are welcome.


Colour detection and tracking

Part 6 of my YouTube series was posted last weekend, and I have been working on tracking coloured objects.


As mentioned in my last post, I decided to develop the tracking function using the detection of a coloured ball instead of faces. To detect a certain colour, I converted an image from the webcam to HSV colour space and then applied a threshold to isolate the coloured object of interest. Using contours, I was able to detect the position of the coloured ball in the image. I could then use the x and y screen coordinates of the detected ball to calculate x and y error values. Multiplying these errors by a gain value (currently 0.04) and adding each result to the current yaw and pitch servo positions allowed the head to move to track the object. I have been experimenting with different gain values, but this seems to give a reasonable result at the moment.
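
For reference, a sketch of this kind of colour-tracking loop is below. The HSV threshold values, the camera index and the servo command are placeholders; the 0.04 gain and the overall approach are as described above. (The findContours call uses the OpenCV 4 return signature.)

```python
import cv2
import numpy as np

GAIN = 0.04
LOWER_HSV = np.array([100, 120, 70])    # placeholder threshold for a blue ball
UPPER_HSV = np.array([130, 255, 255])

def find_ball(frame):
    """Return the (x, y) centre of the largest blob of the target colour, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
yaw, pitch = 90.0, 90.0                       # current servo angles, degrees
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ball = find_ball(frame)
    if ball is not None:
        h, w = frame.shape[:2]
        x_err = ball[0] - w // 2              # error in screen coordinates
        y_err = ball[1] - h // 2
        yaw += GAIN * x_err                   # proportional correction
        pitch += GAIN * y_err
        # send_head_angles(yaw, pitch)        # placeholder for the real servo command
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```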

Using the same code, but with the threshold values changed, I was also able to get the head to track the end of the arm. Although there is always some room for improvement, this ticks another item off the wish list.

Making a start on the to-do list… Face detection and tracking

I have started work on my wish list from the last post. I decided to jump straight in with some face detection and tracking. I have implemented object tracking on previous projects, but not face tracking. Using OpenCV with Python, getting some basic face detection working was quite straightforward and I had it up and running quickly. There are so many great tutorials out there for getting this working that I don’t need to repeat it here. With detection working, I was able to get basic face tracking going by simply finding the position of the face in camera coordinates and moving the pitch and yaw servos, by increasing or decreasing the angle set-point, to keep the face centred in the camera image. I have uploaded Part 5 of my YouTube series, and this video shows the face detection and basic tracking in action.
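
For anyone wanting to try the detection side, a minimal sketch using one of OpenCV's bundled Haar cascades is below; the tracking then just nudges the pitch and yaw set-points to drive this error towards zero. The cascade file choice and the largest-face rule are my own assumptions.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_error(frame):
    """Return (x_err, y_err) of the largest detected face from the image centre, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # pick the largest face
    frame_h, frame_w = frame.shape[:2]
    return (x + w // 2) - frame_w // 2, (y + h // 2) - frame_h // 2
```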


As well as discussing my wish list of features for this project, the video also features my son, Vinnie. He loves helping out in the workshop and can’t wait to start some projects of his own. He has watched this project unfold and enjoys watching the robot move. I asked him to watch the robot as it replayed some pre-programmed sequences to see if he could tell what the robot was “thinking”. He did a good job at interpreting what the robot was doing and I am sure he will be keen to help me again in the future. Aside from a bit of fun and getting my son involved in the project, I think this exercise is useful as I want this robot to become a social robot. It needs to be able to interact with people and communicate and entertain to a certain degree. If a 4 year old can interpret what the robot is doing then I think I’m on the right track.

I need to do some more work on the face detection and tracking. At the moment the frame rate is a little slow and I need to work out why. It may be the face detection itself, or it could be the conversion of the image for displaying in the GUI that is slowing it down. I also need to improve the tracking. In the past I have tracked coloured objects, again using OpenCV, so I may write some code to detect a coloured ball, and use this to develop the tracking code. I think if I can get this right then I can just substitute detecting a coloured ball with a face and the tracking process is the same. I will calculate an x and y error in camera coordinates and use this to calculate an amount to move the pitch and yaw servos, in the same way a PID loop works. I am hoping to get a smooth tracking motion rather than the jerky movements I have currently.

All being well I will return very soon with a video showing this in action.

Replaying sequences, and some thoughts…

Part 4 of my YouTube series on my desktop robot head and arm build is now available. This episode shows the robot replaying some sequences of movements, along with some new facial expressions. I was exploring how well the robot was able to convey emotions and, even with quite basic movements and facial expressions, I think it does a reasonable job. Check out the video here.


Now for some thoughts…

It’s at this point in a project that I normally reach a crossroads. On the one hand the mechanical build is complete and the electronics, although there is more to come in this project, are working as required. These are the aspects of robot building that I enjoy the most. However, I really want the robot to do something cool. I find myself shying away from the software in favour of starting a new project where I can fire up the 3D printer and the mill. I often put this down to not having an end goal in mind for the robot and the associated software. So I am going to use this space to jot down some thoughts that may help me keep on track. I have made notes about some features that I would like to implement in the project, which I list below. Some are quick and fairly easy; others are going to take some time. Whether I ever get them all completed remains to be seen, but this will prove a helpful reminder should I need to come back to the list.

  • Program and replay poses/sequences from the GUI
  • 3D model of the robot on the GUI for offline programming or real-time monitoring
  • Face detection and tracking
  • Face expression detection (smile or frown) and react accordingly
  • Automatically stop the arm when the hand switch is activated
  • Detect someone and shake their hand
  • Gripper?
  • Remote control/input to the robot via bluetooth (I have a module on the way) maybe an android app?
  • Program the robot to play simple games
  • Object detection and recognition
  • Voice input and sound/speech output
  • Mapping of visual surroundings and reacting to changes
  • Use the robot as a platform for AI development. I have worked on this in the past, trying to use neural networks to allow the robot to have some hand-eye coordination
  • Sonar/IR sensors to sense the area in front of the robot and react to changes

This is just a preliminary list of features that I think would be interesting. I will certainly return to this list myself as a reminder to keep me on track. If anyone has any other suggestions, please leave a comment as I am interested in what other people would consider useful/fun.

My ultimate goal for this project is to have a robot that can sit on my desk or in my lounge, and interact with me and my family. It may sit idle, waiting for someone to activate it, before offering a suggested game or activity. It may begin moving under its own volition, learning its environment, building a model of its world and own self, ready to react to any sensed changes. It may get bored and lonely, and make noises until someone comes to see what the robot wants and play a game. I am not sure but this is where I want the project to head. Ultimately, I will want all processing to be done on-board, so that the robot can be moved around (a Raspberry Pi is a likely candidate to achieve this).

I will keep you all updated on the progress of this project. I think small steps are required to implement each of the features above in turn. I am hoping that eventually I will be able to join all of the features together into an interesting final product.  Until next time, thanks for visiting!

EDIT: I have put my code on GitHub here. It is early days in this project, but I like to share, especially if it helps someone out!

YouTube series Parts 2 and 3 now available

Part 2 of my YouTube series following the development of my latest robot project, a desktop robot head and arm, has been up for a week or so now, and I have just finished Part 3. Part 3 covers the testing of the hand switch and how I am starting to develop the code for both the Arduino and the controlling PC.


If you enjoy the videos, please subscribe as I plan to continue making these as often as time allows. If you would like more information on any aspects of the robot, drop me a message and I can go into more detail in a future video.

BFRCode

With the redesign of my robot BFR4WD complete, I have moved back to developing the software to control the robot. As I have alluded to in previous posts, I have been working on a protocol for sending commands from the Raspberry Pi to the Arduino. The idea is that the Pi carries out the high level control, i.e. move forward 50cm, turn 30 degrees, etc., as well as image processing, and the Arduino is in charge of the low level control associated with these commands. I took inspiration from Gcode and started developing what I’m now calling BFRCode. I’m sure this isn’t a new idea, but it is my take on it and I can tailor it to meet the requirements of my projects.

BFRCode consists of a list of alpha-numeric command strings that can be sent by the Pi and interpreted and executed by the Arduino. The current list of commands (BFRCode_commands.xls), along with all of the other code I am working on, can be found on GitHub here. The command to move the robot forward 50cm, for example, would look like W1D50, and a turn anti-clockwise of 30 degrees would be W3D30. I have also added functions for driving an arc-shaped path and turning the robot to face a given direction as measured by the compass. The code also allows the head to be moved, sensor readings to be returned and power to the servos to be turned on and off. Currently all move commands return a status code to indicate whether the move was completed successfully or not, as may be the case if an obstacle was encountered during the movement. The Arduino is in charge of detecting obstacles during movements.
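
As a rough illustration of the Pi side, assuming pyserial, sending a BFRCode command and reading back the status could look something like this. The port name, baud rate and reply handling are placeholders; W1D50 and W3D30 are the real commands described above.

```python
import serial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=5)   # placeholder port and baud rate

def send_command(cmd):
    """Send one newline-terminated BFRCode string and return the Arduino's reply."""
    ser.write((cmd + "\n").encode("ascii"))
    return ser.readline().decode("ascii").strip()         # status code or requested data

print(send_command("W1D50"))   # drive forward 50cm
print(send_command("W3D30"))   # turn anti-clockwise 30 degrees
```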

This control scheme has the benefit of separating the high level control from the low level. Functions can be developed on the Arduino and tested in isolation to make sure they do what they should, and can simply be called by the Python script running on the Pi. Likewise, when developing the high level code on the Raspberry Pi, very little thought needs to be put into moving the robot: just issue a command and check that it was executed correctly. Complex sequences of movements can be created by putting together a list of commands and storing them as a text file, as you would expect from a Gcode file. A Python script can then read through the file and issue the commands one at a time, checking each has completed successfully before issuing the next. I have found that sending strings with a newline termination is a very reliable method of exchanging data and can be done at a reasonably high baud rate. The other advantage of controlling the robot like this is that data is only sent between the Raspberry Pi and the Arduino when a command is issued or data is required. This is in contrast to previous approaches I have taken, where data was constantly being sent back and forth.
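
A sketch of that file-based approach might look like the following: read a stored list of commands and issue them one at a time, stopping if a move reports a failure. The file name and the "OK" status string are assumptions.

```python
import serial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=10)   # placeholder port and baud rate

with open("sequence.bfr") as f:                           # hypothetical BFRCode file
    for line in f:
        cmd = line.strip()
        if not cmd:
            continue                                      # skip blank lines
        ser.write((cmd + "\n").encode("ascii"))
        status = ser.readline().decode("ascii").strip()
        if status != "OK":                                # e.g. an obstacle stopped the move
            print("Command", cmd, "failed with status", status)
            break
```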

Up until now, to send commands I have been using a Python script that I wrote that takes typed commands from the command line and sends them to the Arduino. This was OK, but I decided I wanted a more user-friendly and fun way to control the robot manually, for testing purposes and to show people what the robot can do whilst I’m working on more autonomous functions. I have started making a GUI in Tkinter that will send commands at the touch of a button. If I use VNC to connect to the Raspberry Pi, it means I can control the robot manually using any device I choose (laptop, phone or tablet). I have also set the Raspberry Pi up as a wi-fi access point so I can access it without connecting to a network, ideal if I take the robot anywhere to show it off. Below is a screenshot of the GUI I am working on.

BFRGui

I created some custom graphics that are saved as .gif images so that a Tkinter canvas can display them. There are controls for moving and turning the robot and pan and tilt controls for the head. The compass graphic shows the current compass reading. If the compass graphic is clicked, the user can drag a line to a new bearing and on release of the mouse button, a command will be issued to turn the robot to the new heading. I have buttons for turning servo power on and off and a display showing the current sonar reading. I have incorporated a display for the image captured by the webcam. I am using OpenCV to grab the image and then converting it to be displayed on a Tkinter canvas. I’m really pleased with the way that BFRCode and the GUI are turning out. My 3 year old boy has had his first go at manually controlling a robot with the GUI and that is a success in itself!
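
The webcam display is the fiddliest part of that GUI, so here is a minimal sketch of the OpenCV-to-Tkinter conversion, shown standalone rather than inside the full GUI. It assumes Pillow is available for the image conversion; the refresh rate and widget sizes are placeholders.

```python
import cv2
import tkinter as tk
from PIL import Image, ImageTk

root = tk.Tk()
canvas = tk.Canvas(root, width=640, height=480)
canvas.pack()
cap = cv2.VideoCapture(0)

def update_image():
    ok, frame = cap.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)        # OpenCV gives BGR, Tk wants RGB
        photo = ImageTk.PhotoImage(Image.fromarray(rgb))
        canvas.create_image(0, 0, image=photo, anchor=tk.NW)
        canvas.photo = photo                                # keep a reference so it is not garbage collected
    root.after(50, update_image)                            # refresh roughly every 50ms

update_image()
root.mainloop()
```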

I have made a very quick video of me controlling the robot using the GUI after connecting to the Raspberry Pi using VNC from a tablet.

Something I would like to develop is a BFRCode generator that allows a path to be drawn on the screen and then turned into a BFRCode file. The generated file could then be run by the robot. Head moves and image capture could be incorporated into the instructions. This could be useful for security robots that patrol an area in a fixed pattern. I am still very keen to develop some mapping software so the robot can plot a map of its environment autonomously. The map could then be used in conjunction with the BFRCode generator to plot a path that relates to the real world.

3D printing a new robot

I was very pleased with the way BFRMR1 turned out, but it had some design flaws that I needed to address. The servos used for the drive wheels were a bit too slow. The drive wheels were positioned near the centre of the robot to help limit the size of the turning circle, but this meant that the robot would tip forwards when stopping. It also meant that I couldn’t mount anything to the front of the robot, such as a gripper. Wheel encoder resolution was also a bit limited. An idea started forming in my mind for a new robot.

The idea was to make a four-wheeled robot, with each wheel driven independently. I wanted to stick with using servos to drive the wheels. I love servos. They are cheap and very easy to control. But they can be slow! My idea evolved into making a gearbox to speed the servos up a bit, whilst taking the hit of losing a bit of torque. However, for this new robot it wouldn’t matter too much, as I was doubling the number of drive wheels. I thought of several ways of gearing the servos. A drive belt and pulleys was the first option, but I decided to go for gears instead. I could have bought the required gears, but I thought that this project was as good an excuse as any to invest in a new piece of equipment: a 3D printer!

After a bit of research I decided on a Prusa i3 printer, bought as a kit. I painted the aluminium frame, and after a few days and a couple of long nights I had my printer assembled and working.

Prusa i3 3D printer

After calibration and a number of test prints, I set about designing some gears to form a gearbox. I used OpenSCAD to design all the parts for the robot. To design the gears I downloaded a gear generator from Thingiverse (http://www.thingiverse.com/thing:3575). I started with a 25 tooth gear that would be connected directly to a servo horn, then a 14 tooth gear that would be connected to the wheel drive shaft. Attached to the 14 tooth gear is a 45 tooth gear with a finer pitch, which drives an encoder disc with a 14 tooth gear attached. All of this together would increase the top speed of the servo and give me an encoder resolution of 180 pulses per wheel revolution. It took a few tries to get each of these gears right, and some of the prototypes are shown in the picture below.

3D printed gear prototypes

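As a quick sanity check on those numbers (and assuming both the rising and falling edges of the 28-slot encoder disc, described further down, are counted), the gearing works out as follows.

```python
servo_teeth = 25        # gear on the servo horn
wheel_teeth = 14        # gear on the wheel drive shaft
wheel_enc_teeth = 45    # finer-pitch gear on the wheel shaft
enc_disc_teeth = 14     # gear attached to the encoder disc
slots = 28              # slots in the encoder disc

speed_up = servo_teeth / wheel_teeth                          # wheel turns ~1.79x servo speed
disc_revs_per_wheel_rev = wheel_enc_teeth / enc_disc_teeth    # encoder disc turns ~3.21x per wheel rev
pulses_per_wheel_rev = slots * disc_revs_per_wheel_rev * 2    # both edges counted -> 180

print(round(speed_up, 2), int(pulses_per_wheel_rev))
```
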
With all of the gears designed, I made a trial gearbox. I wanted to use aluminium rectangular box section to house the gearbox. This means all of the gears are hidden and contained, and also means that the gearbox could form part of the robot’s chassis. The prototype gearbox just used a short section of the aluminium box as a test. The picture below shows the gearbox from the end, with the encoder disc nearest the camera.

Prototype gearbox

The final gearbox design used a long length of aluminium box section with two servos attached and two gearboxes within. This would form the drive for one side of the robot. Access holes were cut into the box section to allow assembly and adjustment of the gearbox, and the picture below shows the view into one of the access holes. You can see the two servos mounted with gears attached, and the drive shaft passing through the box section with its gear attached. The encoder discs are hidden.

One completed gearbox

I also designed and printed some bushes for the drive shaft to run in, which clip into holes drilled in the aluminium box section. The hole through the middle of these is slightly undersized so that they can be drilled out to exactly the right size for the shaft to fit in.

3D printed bushes

Each encoder consists of a 28-slot encoder disc and a photo-interrupter to detect each of the slots as the disc turns. I decided on using Sharp GP1A52LRJ00F slotted optical switches. These have photologic outputs, so only a minimum of external circuitry is required to interface them with the Arduino. In fact, only one resistor is needed, so I used stripboard to make four encoder circuits that were then mounted inside the aluminium box section, with the encoder discs turning between the sensors.

With two gearbox/chassis sections made I had two sides of the chassis. To join these together and make a complete chassis I needed to design some brackets. These brackets attached aluminium box section cross members to the gearbox sections to make a rectangular chassis. These are shown in the picture below.

Chassis bracket

One feature I wanted for this robot was the ability to separate the electronics and sensors from the chassis easily. To achieve this I decided to mount the Arduino mega, the Raspberry Pi, the batteries and the USB hub on a sheet of HDPE plastic that would then be bolted to the chassis with four bolts. Should I need to work on the chassis in the future I could just undo these four bolts, disconnect the encoders and drive servos from the Arduino and remove the electronics board. I also decided to mount the head pan/tilt mechanism to this board as well. The picture below shows the chassis with the electronics board attached.

Assembled chassis and electronics

The head pan/tilt mechanism consists of two regular servos and some 3D printed brackets. The picture below shows the bracket that attaches the pan servo to the electronics board.

Servo bracket for head pan/tilt

Attached to the pan servo is the tilt servo via another 3D printed bracket. I designed a further piece that fixes to the tilt servo that the head can be bolted to, all shown in the picture below.

Head tilt servo bracket

The head of the robot houses a sonar sensor and a webcam. See the picture below showing the assembled head attached to the pan/tilt mechanism.

3D printed head

With all of this done the robot is almost mechanically complete. I need to design and print some mounts for two IR sensors that will probably mount to the electronics board either side of the pan/tilt mechanism. The other job to do is to design and print a housing for a small screen and some buttons for controlling the robot without having to connect to it with another PC.

BFR4WD almost complete

I have been developing software for the new robot alongside the mechanical build. I have modified the wheel control loop software from my previous robot to now control 4 wheels at the same time. A lot of the software from BFRMR1 can be used in this project, but one thing that I knew needed work was the communications between the Arduino and the Raspberry Pi. I was using serial communications, but I never really liked the protocol I was using, which I developed myself, so I can’t even blame anyone else for it. I am sticking with serial comms but wanted an improved protocol. Inspired by G-code as used on 3D printers, I decided to come up with my own protocol to send commands in the form of strings to the robot. I’m calling it BFR-Code for now! The basic idea is that movement commands or requests for data can be sent to the robot along with some data to determine how to move. So a move command string will start with a capital letter M, followed by a number to determine the type of move, and then any data required, preceded by a capital D. So the command M1 D200 would drive the robot forward 200 encoder ticks. Error codes and data can be returned to the Raspberry Pi in a similar manner. This whole thing is a work in progress and I will make a blog post in the future with full details if it works out well.
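
Purely as an illustration (the real parsing lives in the Arduino sketch, in C), splitting a BFR-Code string such as "M1 D200" into its move type and data value amounts to something like this.

```python
def parse_bfr_code(line):
    """Return (move_type, data) from a command string such as 'M1 D200'."""
    move_type, data = None, None
    for token in line.strip().split():
        if token.startswith("M"):
            move_type = int(token[1:])   # which type of move to perform
        elif token.startswith("D"):
            data = int(token[1:])        # the data for the move, e.g. encoder ticks
    return move_type, data

print(parse_bfr_code("M1 D200"))   # (1, 200): drive forward 200 encoder ticks
```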

For now I am continuing work on the software but I am near to making a video of the robot in action so check in again soon!
