Making a start on the to-do list… Face detection and tracking

I have started work on my wish list from the last post, and I decided to jump straight in with face detection and tracking. I have implemented object tracking on previous projects, but not face tracking. Using OpenCV with Python, getting basic face detection working was quite straightforward and I had it up and running quickly; there are so many great tutorials out there that I don’t need to repeat the steps here. With detection working, I was able to get basic face tracking going by finding the position of the face in camera coordinates and moving the pitch and yaw servos, increasing or decreasing the angle set-point, to keep the face centred in the camera image. I have uploaded Part 5 of my YouTube series, and this video shows the face detection and basic tracking in action.

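For anyone curious what that loop looks like, here is a minimal sketch of the approach, using the frontal-face Haar cascade that ships with OpenCV. The set-point variables and step size are placeholders for illustration, not the robot's real servo interface, which lives elsewhere in my code.

```python
import cv2

# Minimal face detection plus "nudge the servos" sketch. The set-point
# variables and STEP value are illustrative placeholders, not the real
# servo interface on the robot.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
yaw_setpoint = 90.0    # placeholder servo angles (degrees)
pitch_setpoint = 90.0
STEP = 1.0             # degrees to nudge a set-point per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        x, y, w, h = faces[0]                      # track the first face found
        face_cx, face_cy = x + w // 2, y + h // 2  # face centre in pixels
        frame_cy, frame_cx = frame.shape[0] // 2, frame.shape[1] // 2

        # Nudge the set-points to bring the face back towards the image centre.
        if face_cx < frame_cx:
            yaw_setpoint += STEP
        elif face_cx > frame_cx:
            yaw_setpoint -= STEP
        if face_cy < frame_cy:
            pitch_setpoint += STEP
        elif face_cy > frame_cy:
            pitch_setpoint -= STEP

        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Nudging the angle by a fixed step each frame is what gives the jerky motion you see in the video, which is why I want to move to an error-based approach, described further down.
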
As well as discussing my wish list of features for this project, the video also features my son, Vinnie. He loves helping out in the workshop and can’t wait to start some projects of his own. He has watched this project unfold and enjoys watching the robot move. I asked him to watch the robot as it replayed some pre-programmed sequences to see if he could tell what the robot was “thinking”. He did a good job of interpreting what the robot was doing, and I am sure he will be keen to help me again in the future. Aside from being a bit of fun and getting my son involved in the project, I think this exercise is useful because I want this to become a social robot: it needs to be able to interact with people and to communicate and entertain to a certain degree. If a 4-year-old can interpret what the robot is doing, then I think I’m on the right track.

I need to do some more work on the face detection and tracking. At the moment the frame rate is a little slow and I need to work out why: it may be the face detection itself, or it could be the conversion of the image for display in the GUI that is slowing things down. I also need to improve the tracking. In the past I have tracked coloured objects, again using OpenCV, so I may write some code to detect a coloured ball and use that to develop the tracking code. If I can get this right, I can simply swap the coloured ball for a face and the tracking process stays the same: I will calculate an x and y error in camera coordinates and use it to work out how far to move the pitch and yaw servos, in the same way a PID loop works (a rough sketch of this idea is below). I am hoping to get a smooth tracking motion rather than the jerky movements I have at the moment.

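Here is a rough sketch of the proportional-style update I have in mind, where the pixel error is scaled by a gain to give a servo movement. The gain, dead-band and function name are just placeholders for illustration; the real numbers will come from tuning on the robot.

```python
# Proportional (the "P" of PID) update: convert pixel error into a servo
# movement. KP and DEADBAND are illustrative tuning values, not final ones.
KP = 0.02        # degrees of servo movement per pixel of error
DEADBAND = 10    # ignore errors smaller than this many pixels to avoid jitter


def update_setpoints(target_cx, target_cy, frame_w, frame_h, yaw, pitch):
    """Return new yaw/pitch set-points given the target position in the image."""
    err_x = target_cx - frame_w / 2   # positive when the target is right of centre
    err_y = target_cy - frame_h / 2   # positive when the target is below centre

    if abs(err_x) > DEADBAND:
        yaw -= KP * err_x             # sign depends on how the servos are mounted
    if abs(err_y) > DEADBAND:
        pitch += KP * err_y

    # Clamp to the servo's usable range (assumed 0-180 degrees here).
    yaw = max(0.0, min(180.0, yaw))
    pitch = max(0.0, min(180.0, pitch))
    return yaw, pitch
```

The nice thing about this shape is that the same function works whether the target centre comes from a coloured-ball detector or a face detector, which is exactly the substitution I am hoping to make.
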
All being well I will return very soon with a video showing this in action.
