Obstacle detection using OpenCV

I have been working on a way to detect obstacles on the floor in front of the robot using the webcam and OpenCV. I have had some success, and I have made a short video of the obstacle detection in action.

The method I am using involves capturing an image, converting it to grayscale, blurring it slightly, and then applying Canny edge detection to highlight the edges in the image. Starting from the left of the edge-detected image and moving along its width at regular intervals, I scan upwards from the bottom of the image until I reach a white pixel, which marks the first edge encountered. This leaves me with an array containing the coordinates of the first edges found in front of the robot.

Using this array, I then look for changes in the direction of the slope of the edges, which may indicate that an object is present. At the moment I ignore anything in the top half of the image, as anything found there will probably be far enough away from the robot not to be of concern. This will change depending on the angle of the head: if the head is looking down towards the ground, obviously everything in the scene may be of interest. For each change of slope found, I then scan in both directions to try and find the edges of the object, indicated by a sharp change in the values in the array of nearest edges.

This method seems to work quite well, but some tweaking may be required for it to work in all environments. I am planning on using the coordinates of the edges of the obstacles found to create a map of the area in front of the robot. Combined with the coordinates of the target the robot is heading for, I hope to be able to plan a path for the robot to follow to reach the target.
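As a rough illustration of the mapping idea, the detected edge coordinates could be dropped into a coarse occupancy grid. This sketch maps image coordinates straight into grid cells, which is a naive placeholder; a real map would project each pixel onto the ground plane using the camera height and head angle, neither of which I am assuming here:

```python
import numpy as np


def edges_to_grid(edge_points, image_width, image_height, grid_size=20):
    """Mark obstacle-edge pixels in a boolean occupancy grid.

    edge_points: (x, y) image coordinates of detected obstacle edges.
    The image-plane-to-grid mapping is purely illustrative."""
    grid = np.zeros((grid_size, grid_size), dtype=bool)
    for x, y in edge_points:
        gx = int(x * grid_size / image_width)
        gy = int(y * grid_size / image_height)  # lower rows are closer
        grid[min(gy, grid_size - 1), min(gx, grid_size - 1)] = True
    return grid
```

A grid like this could then feed a standard planner (A*, for example) to find a route around the occupied cells towards the target.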

For anyone who is interested, I have put my code on GitHub. It is a work in progress but may be worth a look!


BFRMR1 video

I have finally got around to making a video of BFRMR1 in action. The video shows some of the robot's features, with obstacle avoidance and colour tracking in action.

During testing of the robot I found that at higher speeds, when the robot stops, the front of the robot dips a little. This causes the IR sensors to see the floor and leads the robot to believe there is an obstacle where there isn’t one. This was always a possibility, as the drive wheels are quite close to the centre of the robot and I was relying on the weight of the batteries to keep the back end of the robot weighted down. I put the wheels where they are so that when the robot turns, it does so within its own footprint, which should stop the robot bumping into things when it turns.

To overcome this issue I decided to add some “legs” to the front of the robot. They do not contact the ground during normal running but are there to stop the robot tipping forward when it stops from high speed. These are pictured below.

"Legs" at the front of the robot

“Legs” at the front of the robot

I chose this solution as the issue only occurs when running the robot quickly and I didn’t want to add any weight to the rear of the robot after all the effort of lightening the robot with the carbon fibre shell! These “legs” can be easily removed if necessary and I hope to solve the issue in software in the future.
