Colour detection and tracking

Part 6 of my YouTube series went up last weekend, and I have been working on tracking coloured objects.

As mentioned in my last post, I decided to develop the tracking function using the detection of a coloured ball rather than faces. To detect a particular colour I convert the image from the webcam to HSV colour space and apply a threshold to isolate the coloured object of interest. Finding contours in the thresholded image gives me the position of the coloured ball. I then use the ball's x and y screen coordinates to calculate x and y error values from the centre of the frame. Multiplying each error by a gain value (currently 0.04) and adding the result to the current yaw and pitch servo positions allows the head to move to track the object. I have been experimenting with different gain values, but this one gives a reasonable result at the moment.
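The loop described above can be sketched as follows. This is a minimal outline rather than my exact code: the HSV limits, frame size and sign conventions are placeholders that depend on the ball colour and how the servos are mounted; only the gain value of 0.04 comes from the text above.

```python
GAIN = 0.04  # degrees of servo movement per pixel of error (value from the post)

def find_ball(frame_bgr, hsv_lo, hsv_hi):
    """Threshold an HSV range and return the centre of the largest contour, or None."""
    import cv2  # imported here so the proportional maths below runs without OpenCV
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # OpenCV 3 and 4
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def tracking_update(ball_x, ball_y, yaw_deg, pitch_deg, frame_w=640, frame_h=480):
    """Proportional update: nudge the yaw/pitch set-points toward the ball.
    The signs assume yaw decreases to look right and pitch decreases to look down."""
    x_err = ball_x - frame_w / 2  # positive when the ball is right of centre
    y_err = ball_y - frame_h / 2  # positive when the ball is below centre
    return yaw_deg - GAIN * x_err, pitch_deg - GAIN * y_err
```

Each frame, `find_ball` locates the ball and `tracking_update` produces the new servo set-points; a larger gain tracks faster but overshoots, which is presumably why 0.04 landed where it did.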

Using the same code, but with the threshold values changed, I was also able to get the head to track the end of the arm. Although there is always room for improvement, this ticks another item off the wish list.


Making a start on the to-do list… Face detection and tracking

I have started work on my wish list from the last post. I decided to jump straight in with some face detection and tracking. I have implemented object tracking in previous projects, but not face tracking. Using OpenCV with Python, getting basic face detection working was quite straightforward and I had it up and running quickly. There are so many great tutorials out there that I don't need to repeat them here. With detection working, I added basic face tracking by finding the position of the face in camera coordinates and moving the pitch and yaw servos, increasing or decreasing each angle set-point, to keep the face centred in the camera image. I have uploaded Part 5 of my YouTube series, and this video shows the face detection and basic tracking in action.

As well as discussing my wish list of features for this project, the video also features my son, Vinnie. He loves helping out in the workshop and can't wait to start some projects of his own. He has watched this project unfold and enjoys watching the robot move. I asked him to watch the robot as it replayed some pre-programmed sequences to see if he could tell what the robot was "thinking". He did a good job of interpreting what the robot was doing and I am sure he will be keen to help me again in the future. Aside from a bit of fun and getting my son involved in the project, I think this exercise is useful because I want this robot to become a social robot. It needs to be able to interact with people, and to communicate and entertain to a certain degree. If a 4-year-old can interpret what the robot is doing then I think I'm on the right track.

I need to do some more work on the face detection and tracking. At the moment the frame rate is a little slow and I need to work out why: it may be the face detection itself, or it could be the conversion of the image for display in the GUI. I also need to improve the tracking. In the past I have tracked coloured objects, again using OpenCV, so I may write some code to detect a coloured ball and use that to develop the tracking code. If I can get this right, I can simply substitute the coloured ball with a face and the tracking process stays the same. I will calculate an x and y error in camera coordinates and use this to calculate an amount to move the pitch and yaw servos, in the same way a PID loop works. I am hoping for a smooth tracking motion rather than the jerky movements I have currently.

All being well I will return very soon with a video showing this in action.

Replaying sequences, and some thoughts…

Part 4 of my YouTube series on my desktop robot head and arm build is now available. This episode shows the robot replaying some sequences of movements, along with some new facial expressions. I was exploring how well the robot is able to convey emotions and, even with quite basic movements and facial expressions, I think it does a reasonable job. Check out the video here.

Now for some thoughts…

It’s at this point in a project that I normally reach a crossroads. On the one hand, the mechanical build is complete and the electronics, although there is more to come in this project, are working as required. These are the aspects of robot building that I enjoy the most. However, I really want the robot to do something cool. I find myself shying away from the software in favour of starting a new project where I can fire up the 3D printer and the mill. I often put this down to not having an end goal in mind for the robot and the associated software. So I am going to use this space to jot down some thoughts that may help me keep on track. I have made notes about some features that I would like to implement in the project, listed below. Some are quick and fairly easy; others are going to take some time. Whether I ever get them all completed remains to be seen, but this will be a helpful reminder should I need to come back to the list.

  • Program and replay poses/sequences from the GUI
  • 3D model of the robot on the GUI for offline programming or real-time monitoring
  • Face detection and tracking
  • Face expression detection (smile or frown) and react accordingly
  • Automatically stop the arm when the hand switch is activated
  • Detect someone and shake their hand
  • Gripper?
  • Remote control/input to the robot via Bluetooth (I have a module on the way), maybe an Android app?
  • Program the robot to play simple games
  • Object detection and recognition
  • Voice input and sound/speech output
  • Mapping of visual surroundings and reacting to changes
  • Use the robot as a platform for AI development. I have worked on this in the past, trying to use neural networks to allow the robot to have some hand-eye coordination
  • Sonar/IR sensors to sense the area in front of the robot and react to changes

This is just a preliminary list of features that I think would be interesting. I will certainly return to this list myself as a reminder to keep me on track. If anyone has any other suggestions, please leave a comment as I am interested in what other people would consider useful/fun.

My ultimate goal for this project is to have a robot that can sit on my desk or in my lounge, and interact with me and my family. It may sit idle, waiting for someone to activate it, before offering a suggested game or activity. It may begin moving under its own volition, learning its environment, building a model of its world and own self, ready to react to any sensed changes. It may get bored and lonely, and make noises until someone comes to see what the robot wants and play a game. I am not sure but this is where I want the project to head. Ultimately, I will want all processing to be done on-board, so that the robot can be moved around (a Raspberry Pi is a likely candidate to achieve this).

I will keep you all updated on the progress of this project. I think small steps are required to implement each of the features above in turn. I am hoping that eventually I will be able to join all of the features together into an interesting final product.  Until next time, thanks for visiting!

EDIT: I have put my code on GitHub here. It is early days for this project, but I like to share, especially if it helps someone out!

YouTube series Parts 2 and 3 now available

Part 2 of my YouTube series following the development of my latest robot project, a desktop robot head and arm, has been up for a week or so now, and I have just finished Part 3, which covers the testing of the hand switch and how I am starting to develop the code for both the Arduino and the controlling PC.

If you enjoy the videos, please subscribe as I plan to continue making these as often as time allows. If you would like more information on any aspects of the robot, drop me a message and I can go into more detail in a future video.

YouTube video series

Happy Robot

I am managing to find a bit more time lately to progress some of my projects, in particular the desktop robot. I have started work on a robot arm to accompany the robot head, as I suggested in the last post. So far the base and lower joint of the robot arm have been designed in FreeCAD and 3D printed, and I am currently working towards completing the arm. At the moment I don't intend to fit a gripper to the end of the arm, but instead use a custom-designed touch sensor to allow the arm to get feedback from the environment. But you can bet that a gripper will be on the cards at some point in the future! To document and share the work on the project, I am making a series of YouTube videos to show you what I am up to. I admit I would have liked to have started doing this at the beginning of the project, but better late than never, right? I will also admit that I am not a fan of talking in my videos, or of the sound of my own voice, but it's the easiest way to explain what I am doing. I'm sure I will get more comfortable as time goes on.

If anyone out there would like more information on any part of the build, let me know via a comment on here or on YouTube. I do intend to focus on particular parts of the project in future videos, with a sort of tutorial feel to show you how I implemented various features.

Below is the first video of the series, which gives an introduction to the project and shows some of the work that I have been doing. My aim is to release a video every week, but time will tell how realistic this aim is. I hope you enjoy the videos and I am working on the next one now, which should be available tomorrow.

Latest robot project – A desktop social robot

I have been working on a new robot project for the last few months. I like desktop robots and, having read about social robots like Jibo and Buddy, I decided to try to create a low-cost version that can sit next to my computer and maybe interact with me and my children. I wanted to design some more complex components for 3D printing and thought that this project would be a good opportunity. I decided to give FreeCAD a shot as it looked like a promising open-source 3D design package. After negotiating the expected learning curve, I was impressed with the range of functionality that FreeCAD has and I was able to design some cool-looking parts for my new robot. The idea was to build a desktop robot with a display and a camera integrated into the head, using servos to pan and tilt the head so the robot can look around the room. As is often the case, the project evolved as it progressed, and the robot ended up with an extra servo that enables the head to roll as well. I wanted to incorporate an accelerometer into the head to track its position and went with a cheap MPU-6050. For the display I used an Adafruit 2.2″ TFT screen that I have used in previous projects. I also mounted a webcam in the head, borrowed from my mobile robot. I used this project as an excuse to learn KiCad for PCB design as well, designing a shield for the Arduino Mega to interface all of the sensors and servos.

Below is a picture of the finished robot.

Social robot

I was particularly pleased with the lettering on the bottom servo housing, easy to do with FreeCad. I was also pleased with the bracket that connects the roll and tilt servos, as shown below.

Roll to tilt head bracket

The accelerometer mounts to the head and, using this library, spits out roll, pitch and yaw angles.

MPU-6050 mounted to head

After some calibration I have been able to control the servo positions using the accelerometer readings very reliably. A simple control loop for each servo is all that is required to position the head at any combination of roll, pitch and yaw angles.
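The per-servo loop amounts to a proportional correction of the servo command based on the measured angle. A minimal sketch of one iteration (the gain `kp` is a placeholder, and the real loop runs on the Arduino rather than in Python):

```python
def control_step(setpoint_deg, measured_deg, servo_cmd_deg, kp=0.5):
    """One loop iteration: move the servo command in proportion to the
    difference between the requested angle and the accelerometer reading."""
    error = setpoint_deg - measured_deg
    return servo_cmd_deg + kp * error
```

Run once per control tick for each of the roll, pitch and yaw servos; the command settles when the measured angle matches the set-point.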

I have a lot of work to do on the software, but I wanted to share the project now that the mechanical and electronic build is complete. I was going to use the Raspberry Pi for this project but have decided to use my desktop computer for now; I may move to the Pi at a later date. Previously I was driving the TFT screen from the Pi, but I am now writing to the screen from the Arduino. The screen will display a face, probably just simple shapes for now, to allow the robot to display some emotions.

I am also planning to design a robot arm at some point, and I would like this robot to be able to control it. I am thinking of having a few modular parts, like arms or other sensors, that can work together. I am not sure how this will happen at the moment, but it's fun to think about.

Come back soon, as I hope to have a video up here of the head moving around and controlling its position once it's working.

BFRCode

With the redesign of my robot BFR4WD complete, I have moved back to developing the software to control the robot. As I have alluded to in previous posts, I have been working on a protocol for sending commands from the Raspberry Pi to the Arduino. The idea is that the Pi carries out the high-level control (move forward 50cm, turn 30 degrees, etc.) as well as image processing, and the Arduino is in charge of the low-level control associated with these commands. I took inspiration from G-code and started developing what I'm now calling BFRCode. I'm sure this isn't a new idea, but it is my take on it and I can tailor it to meet the requirements of my projects. BFRCode consists of a list of alphanumeric command strings that can be sent by the Pi and interpreted and executed by the Arduino. The current list of commands (BFRCode_commands.xls), along with all of the other code I am working on, can be found on GitHub here. The command to move the robot forward 50cm, for example, looks like W1D50; a turn anti-clockwise of 30 degrees is W3D30. I have also added functions for driving an arc-shaped path and turning the robot to face a given direction as measured by the compass. The code also allows the head to be moved, sensor readings to be returned, and power to the servos to be turned on and off. All move commands return a status code to indicate whether the move completed successfully, as it may not if an obstacle was encountered during the movement; the Arduino is in charge of detecting obstacles during movements.
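Going by the two examples above (W1D50, W3D30), a command is a word plus an optional D-prefixed value. A sketch of building and parsing that format — the full grammar in BFRCode_commands.xls may allow more than this:

```python
import re

def build_cmd(word, value=None):
    """Compose a BFRCode string, e.g. a 50cm forward move or a 30 degree turn."""
    return f"{word}D{value}" if value is not None else word

def parse_cmd(text):
    """Split a BFRCode string into (word, value); value is None when absent."""
    m = re.fullmatch(r"([A-Z]\d+)(?:D(\d+))?", text.strip())
    if not m:
        raise ValueError(f"not a BFRCode command: {text!r}")
    word, value = m.groups()
    return word, None if value is None else int(value)
```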

This control scheme has the benefit of separating the high-level control from the low-level. Functions can be developed on the Arduino and tested in isolation to make sure they do what they should, then simply called by the Python script running on the Pi. Likewise, when developing the high-level code on the Raspberry Pi, very little thought needs to be put into moving the robot: just issue a command and check that it was executed correctly. Complex sequences of movements can be created by putting together a list of commands and storing them as a text file, like you would expect from a G-code file. A Python script can then read through the file and issue the commands one at a time, checking each has completed successfully before issuing the next. I have found that sending strings with a newline termination is a very reliable method of exchanging data and can be done at a reasonably high baud rate. The other advantage of controlling the robot like this is that data is only sent between the Raspberry Pi and the Arduino when a command is issued or data is required, in contrast to previous approaches I have taken where data was constantly being sent back and forth.
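The file-driven sequence runner described above might look like this. The transport is injected as a callable so it can be a pyserial write/readline pair on the real robot; the status convention ("0" meaning success) is an assumption for the sketch:

```python
def run_sequence(lines, send):
    """Issue newline-terminated commands one at a time, stopping at the first
    failure. `send` takes the full command string and returns the status code
    the Arduino replies with; here "0" is assumed to mean success.
    Returns (failed_command, status), or (None, "0") if everything completed."""
    for raw in lines:
        cmd = raw.strip()
        if not cmd or cmd.startswith("#"):  # skip blanks and comments
            continue
        status = send(cmd + "\n")
        if status != "0":
            return cmd, status  # e.g. an obstacle was encountered mid-move
    return None, "0"
```

On the Pi, `send` would write to the serial port and block on the newline-terminated reply; for testing it can be any stub.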

Until now, to send commands I have been using a Python script I wrote that takes typed commands from the command line and sends them to the Arduino. This was OK, but I wanted a more user-friendly and fun way to control the robot manually, both for testing and to show people what the robot can do whilst I work on more autonomous functions. I have started making a GUI in Tkinter that sends commands at the touch of a button. If I use VNC to connect to the Raspberry Pi, I can control the robot manually using any device I choose (laptop, phone or tablet). I have also set the Raspberry Pi up as a Wi-Fi access point so I can access it without connecting to a network, ideal if I take the robot anywhere to show it off. Below is a screenshot of the GUI I am working on.

BFRGui

I created some custom graphics, saved as .gif images so that a Tkinter canvas can display them. There are controls for moving and turning the robot, and pan and tilt controls for the head. The compass graphic shows the current compass reading; if it is clicked, the user can drag a line to a new bearing and, on release of the mouse button, a command is issued to turn the robot to the new heading. I have buttons for turning servo power on and off, a display showing the current sonar reading, and a display for the image captured by the webcam. I am using OpenCV to grab the image and then converting it for display on a Tkinter canvas. I'm really pleased with the way that BFRCode and the GUI are turning out. My 3-year-old boy has had his first go at manually controlling a robot with the GUI, and that is a success in itself!
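The compass drag boils down to turning a mouse position relative to the compass centre into a heading. A sketch of that bit of geometry (helper name hypothetical; note that Tkinter canvas y grows downward):

```python
import math

def drag_to_bearing(centre_x, centre_y, mouse_x, mouse_y):
    """Bearing of the dragged line: 0 = up (north), 90 = right (east),
    measured clockwise, usable as the heading for a turn command."""
    dx = mouse_x - centre_x
    dy = centre_y - mouse_y  # flip sign: canvas y increases downward
    return math.degrees(math.atan2(dx, dy)) % 360
```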

I have made a very quick video of me controlling the robot using the GUI after connecting to the Raspberry Pi using VNC from a tablet.

Something I would like to develop is a BFRCode generator that allows a path to be drawn on the screen and then turned into a BFRCode file, which the robot could run. Head moves and image capture could be incorporated into the instructions. This could be useful for security robots that patrol an area in a fixed pattern. I am still very keen to develop some mapping software so the robot can plot a map of its environment autonomously. The map could then be used in conjunction with the BFRCode generator to plot a path that relates to the real world.
