Colour detection and tracking

Part 6 of my YouTube series was posted last weekend, and I have been working on tracking coloured objects.

As mentioned in my last post, I decided to develop the tracking function by detecting a coloured ball instead of faces. To detect a particular colour, I converted each image from the webcam to the HSV colour space and applied a threshold to isolate the coloured object of interest. Using contours, I was able to find the position of the coloured ball in the image. I could then use the x and y screen coordinates of the detected ball to calculate x and y error values. Multiplying these errors by a gain value (currently 0.04) and adding the results to the current yaw and pitch servo positions allowed the head to move and track the object. I have been experimenting with different gain values, but this one seems to give a reasonable result at the moment.
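For anyone curious, a minimal sketch of that loop looks something like this. The HSV bounds here are placeholders rather than my actual threshold values, and I am assuming the OpenCV 4.x findContours signature:

```python
import cv2
import numpy as np

LOWER_HSV = np.array([29, 86, 50])     # placeholder lower bound for the ball colour
UPPER_HSV = np.array([64, 255, 255])   # placeholder upper bound
GAIN = 0.04                            # the error-to-angle gain mentioned above

def track_step(frame, yaw, pitch):
    """Return updated (yaw, pitch) servo angles for one webcam frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return yaw, pitch                       # nothing detected: hold position
    ball = max(contours, key=cv2.contourArea)   # assume the largest blob is the ball
    (x, y), _radius = cv2.minEnclosingCircle(ball)
    h, w = frame.shape[:2]
    x_err = (w / 2) - x                         # ball left (+) or right (-) of centre
    y_err = (h / 2) - y                         # ball above (+) or below (-) of centre
    return yaw + GAIN * x_err, pitch + GAIN * y_err
```

Because the correction is proportional to the error, the head moves quickly when the ball is far from centre and settles gently as it closes in, which is what gives the smoother motion compared with fixed-step tracking.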

Using the same code, but with the threshold values changed, I was also able to get the head to track the end of the arm. Although there is always room for improvement, this ticks another item off the wish list.


Making a start on the to-do list… Face detection and tracking

I have started work on my wish list from the last post. I decided to jump straight in with some face detection and tracking. I have implemented object tracking on previous projects, but not face tracking. Using OpenCV with Python, getting basic face detection working was quite straightforward, and I had it up and running quickly. There are so many great tutorials out there for getting this working that I don’t need to repeat them here. With detection working, I was able to get basic face tracking going by finding the position of the face in camera coordinates and then increasing or decreasing the pitch and yaw servo angle set-points to keep the face centred in the camera image. I have uploaded Part 5 of my YouTube series, and the video shows the face detection and basic tracking in action.
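Rather than repeat the tutorials, here is a rough sketch of the tracking side of things. The bundled Haar cascade is the standard OpenCV approach; the fixed step size, its sign, and the servo plumbing are placeholder assumptions rather than my actual code:

```python
import cv2

# The frontal-face Haar cascade ships with the opencv-python package
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
STEP = 1.0  # degrees to nudge the set-point per frame (a guess, not my real value)

def track_face(frame, yaw, pitch):
    """Nudge the yaw/pitch set-points to keep the largest detected face centred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return yaw, pitch                                # no face: hold position
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face wins
    cx, cy = x + w / 2, y + h / 2                        # face centre in camera coordinates
    frame_h, frame_w = frame.shape[:2]
    # The signs depend on how the servos are mounted; flip them if the head
    # turns away from the face instead of towards it.
    yaw += STEP if cx < frame_w / 2 else -STEP
    pitch += STEP if cy < frame_h / 2 else -STEP
    return yaw, pitch
```

Stepping the set-point by a fixed amount each frame is simple, but it is also part of what makes the motion a little jerky, which I come back to below.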


As well as discussing my wish list of features for this project, the video also features my son, Vinnie. He loves helping out in the workshop and can’t wait to start some projects of his own. He has watched this project unfold and enjoys watching the robot move. I asked him to watch the robot as it replayed some pre-programmed sequences, to see if he could tell what the robot was “thinking”. He did a good job of interpreting what the robot was doing, and I am sure he will be keen to help me again in the future. Aside from being a bit of fun and getting my son involved in the project, I think this exercise is useful because I want this robot to become a social robot: it needs to be able to interact with people, and to communicate and entertain to a certain degree. If a four-year-old can interpret what the robot is doing, then I think I’m on the right track.

I need to do some more work on the face detection and tracking. At the moment the frame rate is a little slow, and I need to work out why. It may be the face detection itself, or it could be the conversion of the image for display in the GUI that is slowing things down. I also need to improve the tracking. In the past I have tracked coloured objects, again using OpenCV, so I may write some code to detect a coloured ball and use that to develop the tracking code. If I can get this right, I can simply substitute a face for the coloured ball and the tracking process stays the same. I will calculate x and y errors in camera coordinates and use them to calculate how far to move the pitch and yaw servos, in the same way a PID loop works (see the sketch below). I am hoping to get a smooth tracking motion rather than the jerky movements I have currently.
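Something along these lines is what I have in mind: a minimal PID controller, one instance per axis. The gains here are placeholders, and the proportional term alone may turn out to be enough to start with:

```python
class PID:
    """Minimal PID controller; one instance each for the x and y errors."""

    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """Turn a camera-coordinate error into a servo angle adjustment."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Placeholder usage, once per frame:
# yaw_pid, pitch_pid = PID(kp=0.04), PID(kp=0.04)
# yaw += yaw_pid.update(x_err, dt)
# pitch += pitch_pid.update(y_err, dt)
```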

All being well, I will return very soon with a video showing this in action.

Replaying sequences, and some thoughts…

Part 4 of my YouTube series on my desktop robot head and arm build is now available. This episode shows the robot replaying some sequences of movements, along with some new facial expressions. I was exploring how well the robot can convey emotions and, even with quite basic movements and facial expressions, I think it does a reasonable job. Check out the video here.


Now for some thoughts…

It’s at this point in a project that I normally reach a crossroads. On the one hand, the mechanical build is complete and the electronics, although there is more to come in this project, are working as required. These are the aspects of robot building that I enjoy the most. However, I really want the robot to do something cool. I find myself shying away from the software in favour of starting a new project where I can fire up the 3D printer and the mill. I often put this down to not having an end goal in mind for the robot and the associated software. So I am going to use this space to jot down some thoughts that may help me keep on track. Below is a list of features that I would like to implement in this project. Some are quick and fairly easy; others are going to take some time. Whether I ever get them all completed remains to be seen, but the list will prove a helpful reminder should I need to come back to it.

  • Program and replay poses/sequences from the GUI
  • 3D model of the robot on the GUI for offline programming or real-time monitoring
  • Face detection and tracking
  • Facial expression detection (smile or frown), reacting accordingly
  • Automatically stop the arm when the hand switch is activated
  • Detect someone and shake their hand
  • Gripper?
  • Remote control/input to the robot via Bluetooth (I have a module on the way), maybe via an Android app
  • Program the robot to play simple games
  • Object detection and recognition
  • Voice input and sound/speech output
  • Mapping of visual surroundings and reacting to changes
  • Use the robot as a platform for AI development. I have worked on this in the past, trying to use neural networks to allow the robot to have some hand-eye coordination
  • Sonar/IR sensors to sense the area in front of the robot and react to changes

This is just a preliminary list of features that I think would be interesting. I will certainly return to it as a reminder to keep me on track. If anyone has any other suggestions, please leave a comment, as I am interested in what other people would consider useful or fun.

My ultimate goal for this project is a robot that can sit on my desk or in my lounge and interact with me and my family. It may sit idle, waiting for someone to activate it, before suggesting a game or activity. It may begin moving under its own volition, learning its environment and building a model of its world and of itself, ready to react to any sensed changes. It may get bored and lonely, and make noises until someone comes to see what it wants and plays a game. I am not sure yet, but this is where I want the project to head. Ultimately, I will want all processing to be done on-board, so that the robot can be moved around (a Raspberry Pi is a likely candidate for this).

I will keep you all updated on the progress of this project. Small steps will be needed to implement each of the features above in turn, but I am hoping that eventually I will be able to join them all together into an interesting final product. Until next time, thanks for visiting!

EDIT: I have put my code on GitHub here. It is early days for this project, but I like to share, especially if it helps someone out!
