Robot Head Mark 2

Around 3 months ago I started a new robot project. I decided that I wanted to build another robot head that could sit on a desktop. I had a few goals in mind when starting the project. I wanted to 3D print the parts and try out some finishing techniques on them. Nothing too fancy, just sanding them smooth and painting, to see how good a finish I could achieve. I also wanted the robot to be low-cost and simple. I mean really simple, just two servos and a couple of sensors. This was very intentional. I want to expand on the work of my previous project, the desktop robot head and arm, by concentrating on the software. I found I was too often dealing with mechanical issues or limitations, and these were an all too welcome distraction from the software.

I like making things and having something physical to show for the hard work, whereas with software there isn't always that sense of satisfaction of having created something. However, the software is where the robot really comes to life and I need to get better and more focused at writing it and implementing the ideas that I have. To that end I wanted a simple robot with very little to go wrong that can serve as a platform to develop some interesting functions. This project will hopefully pull together a lot of the work from previous projects into one robot, and I intend to craft the software with care. I want to squeeze as much functionality out of this simple robot platform as possible. I also wanted to take the opportunity of a new project to switch from Tkinter to wxPython for the GUI.

I started the process by designing the robot in FreeCAD. I did more design work on this than in previous projects and really spent my time on the modelling. I even put the parts together into an assembly to help decide on the colour scheme. I have started another YouTube series following the design and build of the robot. Part 1 is below.

This video shows the design and printing of the robot, and at the end of it I had a pile of 3D printed parts ready for finishing. The design is a simple pan/tilt robot head, with a camera and a sonar sensor mounted in the head. I want to build on my previous work with point clouds, so I wanted a sonar mounted on the head. I don't think I will end up creating point clouds with this robot, but I will likely use the same techniques to come up with a different way to represent the environment. However, I didn't want the sonar sensor looking like the robot's 'eyes', as they often do, so I tried something new with this project. I printed a mesh to mount in front of the sonar and, combined with a piece of filter fabric, made a cover for the sensor. Later testing would reveal whether this was successful or not. The only other feature of the head is the robot's actual 'eye'. In place of the TFT screen used in the last project, I opted for an RGB LED in a custom enclosure that would furnish the robot with a single colour-changing eye that it can use to attract interest or convey emotions.

Part 2 shows the finishing of the 3D printed parts.

This involved a lot of sanding and priming before finally painting the parts. Overall I was really pleased with the finished parts. It became clear from this process that the quality of the final finish is directly proportional to the amount of finishing work put in. I also took away a few lessons about designing parts with finishing in mind. Sharp corners and deep recesses make it difficult to sand away all of the build lines without removing too much material. Smooth, rounded surfaces or large flat surfaces are easier to sand and look good when painted.

Part 3 of the video series documents the assembly of the robot.

With the robot assembled I was able to do some testing. I decided to use an Arduino Nano for this project, in keeping with the minimalism and simplicity goals of the project. I knocked up a breadboard circuit for testing and set about testing the RGB LED eye and the sonar sensor. The good news was that the sonar still seemed to function just fine from behind its cover, and the RGB LED worked as expected.

I initially connected the RGB LED to digital outputs, thinking that the few colours that this yields would suffice. I subsequently decided to use PWM outputs, so that a wider range of colours could be generated. Given that the robot was so simple, I thought this was a reasonable extravagance.

Part 4 shows testing of the RGB LED connected to PWM outputs and the first test of the servos.

This video also features my first simple programs written using wxPython. So far, I like it. The GUIs look better than their Tkinter counterparts and overall it is not much more complex, if at all. I created a couple of test programs: one that sends commands via serial to set the R, G and B values for the LED, and another that sends servo positions. Code for the Arduino has also been written to receive these commands and set the RGB outputs and servo positions accordingly. I can also send 'get' commands to the Arduino and it will return values as appropriate.
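
For anyone curious what the PC side of that serial link might look like, here is a minimal sketch in Python. The command strings ("RGB r g b", "SERVO id angle", "GET name"), the port name and the baud rate are all assumptions for illustration rather than the robot's actual protocol, and it assumes the pyserial package is installed.

```python
# Minimal sketch of a PC-to-Arduino serial link (illustrative command format only).
import serial


def open_link(port="/dev/ttyUSB0", baud=9600):
    # Port name and baud rate are assumptions; match them to the Nano's sketch.
    return serial.Serial(port, baud, timeout=1)


def set_eye_colour(link, r, g, b):
    # Send one line per command; the Arduino parses it and drives the PWM pins.
    link.write(f"RGB {r} {g} {b}\n".encode())


def set_servo(link, servo_id, angle):
    link.write(f"SERVO {servo_id} {angle}\n".encode())


def get_value(link, name):
    # 'Get' commands return a single line containing the requested value.
    link.write(f"GET {name}\n".encode())
    return link.readline().decode().strip()


if __name__ == "__main__":
    with open_link() as link:
        set_eye_colour(link, 0, 128, 255)   # a calm blue eye
        set_servo(link, 0, 90)              # centre the pan servo
        print(get_value(link, "SONAR"))     # read back the sonar distance
```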

This brings us almost up to date on this project. There will be a new video soon, so be sure to subscribe to my YouTube channel. I have been working on embedding OpenCV images and matplotlib plots in wxPython windows, as well as making a more permanent circuit for the robot's electronics.
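
As a taster of the OpenCV-in-wxPython side, here is a minimal sketch of one common way to do it: convert the BGR frame to RGB and wrap it in a wx.Bitmap. This is my own illustration of the general technique rather than the project's code, and it assumes wxPython (Phoenix) and opencv-python are installed; the CameraFrame class name is just for the example.

```python
# Show a single OpenCV webcam frame inside a wxPython window.
import cv2
import wx


class CameraFrame(wx.Frame):
    def __init__(self, frame_bgr):
        h, w = frame_bgr.shape[:2]
        super().__init__(None, title="Camera view", size=(w, h))
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # wx expects RGB byte order
        bmp = wx.Bitmap.FromBuffer(w, h, rgb)               # wrap the pixel buffer
        wx.StaticBitmap(self, bitmap=bmp)                    # simple static display


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # first webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        app = wx.App()
        CameraFrame(frame).Show()
        app.MainLoop()
```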


Point clouds using the robot arm

Several months ago I posted the final video (Part 8) in the series looking at the build of a Desktop robot head and arm.  I say final video as, for the time being, I have shelved this project and have moved on to something else. I learnt a lot from this project and I will likely revisit it at some point in the future.

The video above takes you through the process of using the robot arm, along with the model described in the previous post, to build a point cloud representation of the robot's surroundings. I added feedback to the final servo in the robot arm assembly, the sonar tilt servo, and I was then able to calculate the position and orientation of the sonar mounted on the end of the robot arm. Using the sonar reading, I could then apply the same technique used to calculate the robot joint positions to find the x, y and z coordinates of the object that the sonar was detecting. This position could then be stored as a point. I programmed the robot arm to move to a series of positions and record a measurement from the sonar sensor as a point, which was added to a large array. This array could then be plotted as a 3D plot in the matplotlib window in the GUI. I pushed this as far as I could and ended up with a point cloud of several thousand points. At that point my PC was struggling to display the points and allow the plot to be rotated or zoomed smoothly. Gathering the data for the largest point cloud I created took somewhere in the region of 30-40 minutes.
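
In case it helps, here is a minimal sketch of the core calculation, assuming the kinematic model already provides the sonar's position and pointing direction in the arm's base frame. The sonar_point and plot_cloud helpers are my own names, and the data at the bottom is fake, standing in for real (pose, reading) pairs gathered from the arm; it only needs numpy and matplotlib.

```python
# Turn sonar readings into 3D points and plot them as a point cloud.
import numpy as np
import matplotlib.pyplot as plt


def sonar_point(sonar_pos, sonar_dir, distance):
    """Convert one sonar reading into an (x, y, z) point in the arm's base frame."""
    direction = sonar_dir / np.linalg.norm(sonar_dir)
    return sonar_pos + distance * direction


def plot_cloud(points):
    """Scatter the accumulated points on a 3D matplotlib plot."""
    points = np.asarray(points)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=2)
    ax.set_xlabel("x (mm)")
    ax.set_ylabel("y (mm)")
    ax.set_zlabel("z (mm)")
    plt.show()


if __name__ == "__main__":
    # Fake data: a sonar sweeping through half a circle with a wobbly wall in front of it.
    cloud = [sonar_point(np.array([0.0, 0.0, 102.0]),
                         np.array([np.cos(a), np.sin(a), 0.0]),
                         200.0 + 20.0 * np.sin(3 * a))
             for a in np.linspace(0, np.pi, 500)]
    plot_cloud(cloud)
```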

The point cloud experiment was interesting as it showed that the positions of objects could be estimated and stored in an array. One of the reasons that I moved on from this project was that I wanted to revisit previous work combining a distance sensor with a camera. I would like to be able to capture an image, extract one or more objects of interest from the image and measure a distance to each object. This information can then be stored in some form. Similar to the way the point cloud represents distance points, I would like to build up a cloud of objects and their positions that the robot can later use to cross-reference what it is currently seeing.

I will be back very soon with another post as I have made good progress with the next project that I will be sharing here.

Modelling the robot arm – Denavit-Hartenberg parameters

Part 7 of my YouTube series, documenting the building and software development of my desktop robot head and arm, is now available.


I have designed a new attachment for the end of the robot arm. For the time being I have removed the touch-sensitive 'hand' and replaced it with a sonar sensor, mounted via an additional micro-servo. The idea is that the sonar can measure the environment and use this information, in conjunction with the camera mounted in the head, to build up a visual and spatial model of its surroundings. I'll be honest, I'm still a bit fuzzy on how this is going to work, but it should keep me busy for a while.

This episode also demonstrates the progress that I have made in modelling the robot arm. With the model in place, I have been able to generate a mimic of the robot arm, embedded in the GUI. I used matplotlib to generate a 3D plot that displays the model of the robot arm.

The first step in modelling the arm was to find its Denavit-Hartenberg parameters. There are lots of great resources online that detail how this is done, so I will only cover it briefly. I assigned reference frames to each of the robot arm joints, namely the base servo, lower arm servo, elbow servo and end effector (sonar) servo. From these reference frames, and some measurements taken from the robot, I was able to find the Denavit-Hartenberg parameters shown in the table below.

Description  Link  a               α (deg)  d      θ
Base         0     15mm            90       102mm  θBase
Lower        1     150mm           0        0      θLower
Elbow        2     161mm           0        0      θElbow
End          3     38mm            0        0      θEnd
Sonar        4     Sonar Distance  0        0      0

The variables in this case are the θ values, which are the joint angles. You will notice the addition of the Sonar 'link' in the table; I will explain more about this in a moment. With these values identified, it is then a case of using them in the Denavit-Hartenberg transformation matrices to find the Cartesian coordinates of each joint. Multiplying the transform for a joint by the next link's Denavit-Hartenberg matrix gives the coordinates of the next joint along the arm.
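
To make that concrete, here is a minimal sketch of the calculation in Python, using the parameters from the table above. The dh_matrix and joint_positions names and the example pose are my own for illustration, not the code from the video; it only needs numpy.

```python
# Forward kinematics of the arm using standard Denavit-Hartenberg transforms.
import numpy as np


def dh_matrix(a, alpha, d, theta):
    """Standard Denavit-Hartenberg homogeneous transform for one link (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])


def joint_positions(thetas):
    """Chain the link transforms and return the (x, y, z) of each joint in the base frame."""
    # (a, alpha, d) per link from the table above; lengths in mm, alpha in radians.
    links = [(15.0, np.radians(90), 102.0),  # base
             (150.0, 0.0, 0.0),              # lower arm
             (161.0, 0.0, 0.0),              # elbow
             (38.0, 0.0, 0.0)]               # end effector (sonar tilt)
    T = np.eye(4)
    points = [T[:3, 3].copy()]
    for (a, alpha, d), theta in zip(links, thetas):
        T = T @ dh_matrix(a, alpha, d, theta)
        points.append(T[:3, 3].copy())
    return points


if __name__ == "__main__":
    # Example pose: all four joints at 45 degrees.
    for p in joint_positions(np.radians([45.0, 45.0, 45.0, 45.0])):
        print(np.round(p, 1))
```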

I was able to calculate these coordinates for the arm, initially using joint angles given by sliders in the GUI, and plot them on a 3D matplotlib plot. This was embedded in the GUI and the robot arm model moved as the sliders were altered. It was then possible to read live joint angles from the robot so that the model reflected the actual position of the robot arm at any time.

The additional sonar 'joint' in the table above was used to calculate where in 3D space the sonar sensor, mounted on the arm, was measuring to. I treated the sonar as an additional prismatic joint on the robot, so the variable is the distance measured by the sonar sensor whilst the angle of the joint remains constant. I was then able to plot a line representing the sonar sensor reading onto the robot model.
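
Geometrically, that extra 'link' just pushes the measured distance out along the direction the sonar is facing. Here is a short, self-contained sketch of that step (again my own illustration): the end_transform argument stands for the 4x4 matrix obtained by chaining the four joint transforms as in the previous sketch, and sonar_point is a made-up helper name.

```python
# Locate the point the sonar is measuring to, given the end-effector transform.
import numpy as np


def sonar_point(end_transform, distance):
    """Point in the base frame that the sonar reading corresponds to."""
    origin = end_transform[:3, 3]    # position of the sonar on the end of the arm
    x_axis = end_transform[:3, 0]    # local x-axis, i.e. the direction the sonar faces
    return origin + distance * x_axis


if __name__ == "__main__":
    # With the identity transform the sonar simply looks along the base frame's x-axis.
    print(sonar_point(np.eye(4), 250.0))   # -> [250.   0.   0.]
```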

I plan to do the same bit of work for the robot head and have a model of that on the screen as well. This is likely to be the content for the next video in the series. I also want to explore a way to use the sonar readings to plot the environment as the robot arm moves around. At the moment I am thinking either a point cloud type data structure or a 3D occupancy grid type approach, but this is very early days so the approach may change.

For now, please enjoy the most recent video and subscribe to my YouTube channel for notifications of future videos. Any feedback or recommendations are welcome.

Colour detection and tracking

Part 6 of my YouTube series was posted last weekend and I have been working on tracking coloured objects.


As mentioned in my last post, I decided to develop the tracking function using the detection of a coloured ball instead of faces. To detect a certain colour I converted each image from the webcam to the HSV colour space, then applied a threshold to isolate the coloured object of interest. Using contours I was able to find the position of the coloured ball in the image. I could then use the x and y screen coordinates of the detected ball to calculate x and y error values. Multiplying these errors by a gain value (currently 0.04) and adding each result to the current yaw and pitch servo positions allowed the head to move to track the object. I have been experimenting with different gain values, but this seems to give a reasonable result at the moment.
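
For illustration, here is a minimal sketch of that loop in Python with OpenCV. The HSV bounds are placeholders for whatever colour the ball happens to be, the find_ball and track names are my own, and the sign of the correction depends on how the servos are mounted, so treat this as a sketch of the technique rather than the exact code running on the robot.

```python
# HSV thresholding, largest-contour detection and proportional servo correction.
import cv2
import numpy as np

GAIN = 0.04                      # same order as the gain mentioned above
LOWER = np.array([40, 80, 80])   # example HSV bounds for a green ball (assumption)
UPPER = np.array([80, 255, 255])


def find_ball(frame_bgr):
    """Return the (x, y) centre of the largest blob in the colour range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x + w // 2, y + h // 2


def track(frame_bgr, yaw, pitch):
    """Nudge the yaw/pitch set-points so the ball moves towards the image centre."""
    target = find_ball(frame_bgr)
    if target is None:
        return yaw, pitch
    h, w = frame_bgr.shape[:2]
    err_x = target[0] - w // 2
    err_y = target[1] - h // 2
    # The sign of each correction depends on servo orientation; flip if needed.
    return yaw + GAIN * err_x, pitch + GAIN * err_y
```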

Using the same code, but with the threshold values changed, I was also able to get the head to track the end of the arm. Although there is always some room for improvement, this ticks another item off the wish list.

Making a start on the to do list… Face detection and tracking

I have started work on my wish list from the last post. I decided to jump straight in with some face detection and tracking. I have implemented object tracking on previous projects, but not face tracking. Using OpenCV with Python, getting some basic face detection working was quite straightforward and I had it up and running quickly. There are so many great tutorials out there for getting this working that I don't need to repeat them here. With detection working, I was able to get basic face tracking going by simply finding the position of the face in camera coordinates and moving the pitch and yaw servos, increasing or decreasing the angle set-point, to keep the face centred in the camera image. I have uploaded Part 5 of my YouTube series and this video shows the face detection and basic tracking in action.
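
For reference, basic detection along these lines can be done with OpenCV's bundled Haar cascade. The post doesn't show its exact code, so the sketch below is just one minimal, common way of doing it with opencv-python; the detect_faces helper is my own name.

```python
# Basic webcam face detection using OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(frame_bgr):
    """Return a list of (x, y, w, h) face rectangles found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_faces(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```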


As well as discussing my wish list of features for this project, the video also features my son, Vinnie. He loves helping out in the workshop and can't wait to start some projects of his own. He has watched this project unfold and enjoys watching the robot move. I asked him to watch the robot as it replayed some pre-programmed sequences, to see if he could tell what the robot was "thinking". He did a good job of interpreting what the robot was doing and I am sure he will be keen to help me again in the future. Aside from being a bit of fun and getting my son involved in the project, I think this exercise is useful because I want this to become a social robot. It needs to be able to interact with people, and to communicate and entertain to a certain degree. If a 4-year-old can interpret what the robot is doing then I think I'm on the right track.

I need to do some more work on the face detection and tracking. At the moment the frame rate is a little slow and I need to work out why. It may be the face detection itself, or it could be the conversion of the image for display in the GUI that is slowing things down. I also need to improve the tracking. In the past I have tracked coloured objects, again using OpenCV, so I may write some code to detect a coloured ball and use that to develop the tracking code. If I can get this right, I should be able to substitute a face for the coloured ball and the tracking process will stay the same. I will calculate x and y errors in camera coordinates and use them to calculate how far to move the pitch and yaw servos, in much the same way a PID loop works. I am hoping to get a smooth tracking motion rather than the jerky movements I have currently.

All being well I will return very soon with a video showing this in action.

Replaying sequences, and some thoughts…

Part 4 of my YouTube series on my desktop robot head and arm build is now available. This episode shows the robot replaying some sequences of movements, along with some new facial expressions. I was exploring how well the robot was able to convey emotions and, even with quite basic movements and facial expressions, I think it does a reasonable job. Check out the video here.


Now for some thoughts…

It's at this point in a project that I normally reach a crossroads. On the one hand, the mechanical build is complete and the electronics, although there is more to come in this project, are working as required. These are the aspects of robot building that I enjoy the most. However, I really want the robot to do something cool. I find myself shying away from the software in favour of starting a new project where I can fire up the 3D printer and the mill. I often put this down to not having an end goal in mind for the robot and the associated software. So I am going to use this space to jot down some thoughts that may help me keep on track. I have made notes about some features that I would like to implement in the project, which I list below. Some are quick and fairly easy; others are going to take some time. Whether I ever get them all completed remains to be seen, but this will prove a helpful reminder should I need to come back to the list.

  • Program and replay poses/sequences from the GUI
  • 3D model of the robot on the GUI for offline programming or real-time monitoring
  • Face detection and tracking
  • Face expression detection (smile or frown) and react accordingly
  • Automatically stop the arm when the hand switch is activated
  • Detect someone and shake their hand
  • Gripper?
  • Remote control/input to the robot via Bluetooth (I have a module on the way), maybe an Android app?
  • Program the robot to play simple games
  • Object detection and recognition
  • Voice input and sound/speech output
  • Mapping of visual surroundings and reacting to changes
  • Use the robot as a platform for AI development. I have worked on this in the past, trying to use neural networks to allow the robot to have some hand-eye coordination
  • Sonar/IR sensors to sense the area in front of the robot and react to changes

This is just a preliminary list of features that I think would be interesting. I will certainly return to this list myself as a reminder to keep me on track. If anyone has any other suggestions, please leave a comment as I am interested in what other people would consider useful/fun.

My ultimate goal for this project is to have a robot that can sit on my desk or in my lounge, and interact with me and my family. It may sit idle, waiting for someone to activate it, before offering a suggested game or activity. It may begin moving under its own volition, learning its environment, building a model of its world and own self, ready to react to any sensed changes. It may get bored and lonely, and make noises until someone comes to see what the robot wants and play a game. I am not sure but this is where I want the project to head. Ultimately, I will want all processing to be done on-board, so that the robot can be moved around (a Raspberry Pi is a likely candidate to achieve this).

I will keep you all updated on the progress of this project. I think small steps are required to implement each of the features above in turn. I am hoping that eventually I will be able to join all of the features together into an interesting final product.  Until next time, thanks for visiting!

EDIT: I have put my code on GitHub here. This is early days for this project, but I like to share, especially if it helps someone out!

YouTube series Parts 2 and 3 now available

Part 2 of my YouTube series following the development of my latest robot project, a desktop robot head and arm, has been up for a week or so now, and I have just finished Part 3. Part 3 covers the testing of the hand switch and how I am starting to develop the code for both the Arduino and the controlling PC.


If you enjoy the videos, please subscribe as I plan to continue making these as often as time allows. If you would like more information on any aspects of the robot, drop me a message and I can go into more detail in a future video.
