P2 Rescue People

  This time we have to program a drone to perform area reconnaissance. We are over the ocean and we have received an emergency call from a sinking ship, whose last known position we have. Our mission is to send the robot to that area and look for people; the drone must also report the positions of the people it finds.





I am going to divide the solution of this project into three parts:

1. Image processing

2. Accident position

3. Movement control


1. Image Processing

The drone should be able to detect faces, which can easily be done with a Haar cascade. I followed this link to detect faces locally on my computer. Once it was correctly detecting faces in the random images I fed to the code, it was time to try the same code inside Unibotics.
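As a rough sketch of what that local test looked like (assuming OpenCV and its bundled haarcascade_frontalface_default.xml; the exact arguments I use in the exercise differ slightly):

```python
import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(image):
    # Haar cascades work on grayscale images
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) rectangles, one per detected face
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return image, faces
```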

With this the drone could detect some faces, but not all of them. This is because the drone was "seeing" faces in random directions: the faces are not upright when the drone passes above them, so it could find faces like:





The Haar cascade is not trained to detect faces in those positions, so in order to detect them the drone should rotate the images. In a loop, the image is rotated +90°, +180°, +270° and +360°, and for each rotated image the program checks whether there is any face. If there is, it should draw the rectangle on the "real" image the drone is getting, not on the rotated one. For this, a coordinate transform is necessary.
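A sketch of that loop, assuming the faceCascade from the snippet above and OpenCV's cv2.rotate (the +360° case is just the original image). The rectangles it returns still have to be mapped back to the original image, which is what the coordinate transform below is for:

```python
import cv2

# Rotation codes for +90, +180 and +270 degrees
ROTATIONS = {
    90: cv2.ROTATE_90_CLOCKWISE,
    180: cv2.ROTATE_180,
    270: cv2.ROTATE_90_COUNTERCLOCKWISE,
}

def detect_in_all_orientations(image, faceCascade):
    detections = []  # list of (angle, (x, y, w, h)) in rotated-image coordinates
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for angle in (0, 90, 180, 270):
        rotated = gray if angle == 0 else cv2.rotate(gray, ROTATIONS[angle])
        faces = faceCascade.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=5)
        for rect in faces:
            detections.append((angle, tuple(rect)))
    return detections
```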

Regardless of the image, I establish these coordinates for the transformation:






So if we rotate the image by the four angles mentioned before, the coordinates (for us) would be:




But when the program gives us the coordinates, they are expressed with respect to the original axes (when the image rotates, the x and y axes do not move with it). Let's work through an example to explain this. We want to find the position of the yellow dot from the rotated images in the original one. The original image is this:




Here the red arrows are our rotated axes and the black x and y are the image's axes. The yellow dot is at position (0, 1) in the original picture. Let's rotate 90°:



Now, for us, the dot is at position (3, 0). We want to find the transformation that allows the drone to convert that position (in the rotated image) into the original one. So if it has rotated the image 90° and finds a face at position (3, 0), in the original image it is at the point (0, 1):

true_x = y_rotated
true_y = height - x_rotated

For the "true" y it's going to be the height, which is 4 (of the original image) - x (3), getting position on y equal to 1, which is correct. Also we should adjust the width and height of the rectangle, for the "true" width it would be -height and for the "true" width would be height , as we can suppouse looking both pisctures above.

With the rest of the cases we can do the same exercise to find the transformation:


By looking at where the dot is in the rotated image and where it is in the original, we can calculate the transform. It is a little bit confusing, so I recommend drawing it yourself to understand it better.

With this in mind we should be able to write a series of conditions to draw the rectangle correctly on the original image.
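A sketch of those conditions, assuming the rotations were produced with cv2.rotate as in the loop above (clockwise angles). The idea is the same as true_x = y_rotated, true_y = height - x_rotated, but applied to the rectangle's top-left corner so it can be drawn directly:

```python
def to_original(angle, rect, orig_w, orig_h):
    """Map an (x, y, w, h) detection found in a rotated image back to the
    original image coordinates."""
    x, y, w, h = rect
    if angle == 0:
        return x, y, w, h
    if angle == 90:
        # Width and height swap; the corner moves to height - (x + w)
        return y, orig_h - x - w, h, w
    if angle == 180:
        return orig_w - x - w, orig_h - y - h, w, h
    if angle == 270:
        return orig_w - y - h, x, h, w

# Usage: draw every detection on the original (unrotated) ventral image
image = HAL.get_ventral_image()
orig_h, orig_w = image.shape[:2]
for angle, rect in detect_in_all_orientations(image, faceCascade):
    x, y, w, h = to_original(angle, rect, orig_w, orig_h)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```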


Here are two examples of how it works. On the left you can see the original image (the one the drone would receive from HAL.get_ventral_image()). On the right is what the drone does internally: it rotates the image by the angles mentioned before.



As you can see, when it detects faces in the rotated image, it places the green squares correctly on the original image.






2. Accident position

The positions given in the exercise are in GPS coordinates: both the initial drone position and the position of the survivors (the last position of the sinking ship).
To be able to move the drone we have to handle those positions in UTM, so we can compute the distance between the two points. To obtain the given positions in UTM I used an online converter.

I obtained that 40°16'48.2''N, 3°49'03.5''W and 40°16'47.23''N, 3°49'01.78''W are (430492, 4459162) and (430532, 4459132) respectively.

Now that we have this, let's calculate the distance between the initial position and the target one (by subtracting them). With this we know where the survivors are in a form that lets me command the drone to go there using HAL.set_cmd_pos(pos_x, pos_y, high, 0) from JdeRobot.

One important thing about this command is that its y axis points in the opposite direction, so we have to keep this in mind to avoid commanding wrong positions to the drone.
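A sketch of that computation, using the UTM coordinates above. The negated northing offset reflects the flipped y axis described above, and the flight height is a hypothetical value:

```python
# UTM coordinates obtained from the online converter (easting, northing)
initial_utm = (430492, 4459162)    # drone take-off position
survivors_utm = (430532, 4459132)  # last known position of the sinking ship

# Offset from the start point to the accident zone, in metres
dx = survivors_utm[0] - initial_utm[0]   # +40 m east
dy = survivors_utm[1] - initial_utm[1]   # -30 m north

# HAL.set_cmd_pos works in the drone's local frame, where (0, 0) is the
# take-off point and the y axis points the opposite way, so the northing
# offset is negated before commanding the position.
SEARCH_HEIGHT = 2  # hypothetical flight height in metres
HAL.set_cmd_pos(dx, -dy, SEARCH_HEIGHT, 0)
```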

Once we command the drone to take off using the function provided by Unibotics, it is time to make the robot go to the accident zone (I use the function mentioned above). I check that the robot has reached that position before giving any other order (that is why in some videos you can see my robot oscillating before it starts searching: it is trying to reach the goal position with the minimum error).
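A sketch of that check, assuming HAL.get_position() returns the drone's local (x, y, z) and using a hypothetical tolerance:

```python
import math

GOAL_TOLERANCE = 0.5  # hypothetical tolerance in metres

def reached(goal_x, goal_y):
    # Compare the commanded position with the drone's current local position
    x, y, z = HAL.get_position()
    return math.hypot(goal_x - x, goal_y - y) < GOAL_TOLERANCE

# Keep commanding the accident position until the error is small enough
HAL.set_cmd_pos(dx, -dy, SEARCH_HEIGHT, 0)
if reached(dx, -dy):
    pass  # only then move on to the search phase (this runs in the main loop)
```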

3. Movement control

Now the robot has reached the accident zone and we have programmed everything necessary to process the images and look for survivors. Let's start moving the drone to find the maximum number of people.
I will program a spiral movement to sweep the largest area in the shortest time possible.

My first approach was based on incrementing the linear velocities in x and y while keeping the angular velocity constant.




To increment the linear velocity I check the orientation. When the orientation of the drone matches its initial value again, it means it has completed a turn; when this happens, it is time to increase both velocities. I know the orientation keeps changing because I am commanding an angular velocity, as I said before.

Over time the circles of the spiral increase their radius, but the drone's velocity also grows a lot, so it ends up making circles at a very high speed. I tried to tune both velocities as well as I could to avoid ending up so fast. Here is a demonstration:



As you can see, besides going fast, it does not make a correct spiral. This is because I have to increment twice per "circle":



once at the initial orientation, as mentioned before, and again in its negative (opposite) direction. With this the drone started to make better spirals.
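A sketch of this first approach, with hypothetical velocity values. It assumes HAL.get_yaw() gives the drone's yaw and HAL.set_cmd_vel(vx, vy, vz, az) commands velocities; a latch avoids incrementing repeatedly while the yaw stays near the mark:

```python
import math

vx, vy = 0.5, 0.5        # hypothetical initial linear velocities
WZ = 0.7                 # constant angular velocity (rad/s)
INCREMENT = 0.1          # hypothetical velocity increment per half turn
reference_yaw = HAL.get_yaw()   # assumed accessor for the drone's yaw
crossed = False

def near(a, b, tol=0.05):
    # Compare two angles, accounting for wrap-around
    return abs(math.atan2(math.sin(a - b), math.cos(a - b))) < tol

while True:
    yaw = HAL.get_yaw()
    # Increment twice per circle: at the reference yaw and at its opposite
    at_mark = near(yaw, reference_yaw) or near(yaw, reference_yaw + math.pi)
    if at_mark and not crossed:
        vx += INCREMENT
        vy += INCREMENT
    crossed = at_mark
    HAL.set_cmd_vel(vx, vy, 0, WZ)
```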

I also had to find a better solution to avoid the high speed: instead of incrementing the linear velocities, I would try decreasing the angular velocity.

With a constant linear velocity and a high angular velocity at the start, the drone turns faster at the beginning, but as the angular velocity decreases the circles grow.

But with this I found another problem: decreasing the angular velocity improved the behaviour, but it reached a point where it became zero or negative, and the drone started doing things I did not want it to do.

So I created two cases:

  • At the beginning the movement is led by the angular velocity and its value. I decrease the angular velocity as slowly as I can while still describing spirals: decrementing it slowly means the drone does not leave gaps unswept when the circle radius grows, and moving slowly also allows better face detection. The linear velocity increases, but very, very little.
  • Once the angular velocity drops below a certain value, the movement is led by the linear velocity. It increases just the minimum needed to keep describing a spiral. The angular velocity keeps decreasing, but very little, and when it reaches its lowest value it is clamped to a fixed value. The increment is slow for the same reasons as in the previous case (a sketch of both cases follows).
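A sketch of those two cases, with hypothetical tuning values; the thresholds and increments are placeholders, not the exact ones I ended up using:

```python
# Hypothetical tuning values for the two-phase spiral
v = 1.0                  # linear velocity, starts roughly constant
w = 1.5                  # start with a high angular velocity (rad/s)
W_SWITCH = 0.6           # below this value the linear velocity takes the lead
W_MIN = 0.3              # the angular velocity is never allowed below this
W_DECREMENT_FAST = 0.01
W_DECREMENT_SLOW = 0.001
V_INCREMENT_SLOW = 0.001
V_INCREMENT_MIN = 0.01

def spiral_step():
    global v, w
    if w > W_SWITCH:
        # Phase 1: led by the angular velocity, which decreases slowly so no
        # area is left unswept and faces are easier to detect
        w -= W_DECREMENT_FAST
        v += V_INCREMENT_SLOW
    else:
        # Phase 2: led by the linear velocity; the angular velocity keeps
        # decreasing very slightly until it is clamped to a fixed minimum
        v += V_INCREMENT_MIN
        w = max(w - W_DECREMENT_SLOW, W_MIN)
    HAL.set_cmd_vel(v, 0, 0, w)
```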

With this I get this behaviour:



As you can see it behaves better, but there is one face it is not able to recognize as a person. To improve this I changed some of the arguments of the faceCascade.detectMultiScale function so that the drone can detect it as a face.
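The change was in the scale and neighbour arguments; as an illustration (these values are hypothetical, not necessarily the ones I ended up with):

```python
# A lower scaleFactor and fewer required neighbours make the detector more
# sensitive, at the cost of more false positives
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.05,   # finer steps in the image pyramid
    minNeighbors=3,     # accept detections with fewer supporting neighbours
    minSize=(20, 20),   # ignore very small candidates
)
```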

Another thing I want to mention is that in the video you can sometimes see the drone detect a face while the people counter does not increment. This is because, to increment the counter, one condition must hold: the new person has to be at a certain distance from any other person already detected. Otherwise the drone would think it has detected the same person twice.
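A sketch of that condition, with a hypothetical minimum separation and positions expressed in the drone's local frame:

```python
import math

MIN_SEPARATION = 2.0   # hypothetical minimum distance (metres) between people
people_positions = []  # positions of the people already counted

def register_detection(person_x, person_y):
    """Count a detection only if it is far enough from every person already
    registered; otherwise assume it is the same person seen again."""
    for (px, py) in people_positions:
        if math.hypot(person_x - px, person_y - py) < MIN_SEPARATION:
            return False
    people_positions.append((person_x, person_y))
    return True
```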

To finish the explanation of this exercise: the drone ends the mission after a certain amount of time, as if its battery were running low. It then has to return to its initial position, which I command with HAL.set_cmd_pos(pos_x, pos_y, high, 0). I check that the error between the drone's position and the landing position is below a threshold, to prevent my drone from landing in the water. That is why it sometimes takes a little while to land: it has to make small movements to reduce the error.
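A sketch of that ending, assuming the take-off point is the local origin, that HAL.land() is available, and using hypothetical mission time and tolerance values:

```python
import math
import time

MISSION_TIME = 240        # hypothetical "battery" duration in seconds
LANDING_TOLERANCE = 0.3   # maximum position error allowed before landing (m)
start_time = time.time()

# ... spiral search and face detection run here ...

if time.time() - start_time > MISSION_TIME:
    # "Battery low": command the drone back to its initial position
    HAL.set_cmd_pos(0, 0, SEARCH_HEIGHT, 0)
    x, y, z = HAL.get_position()
    if math.hypot(x, y) < LANDING_TOLERANCE:
        # Close enough to the take-off point: safe to land out of the water
        HAL.land()
```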

Here are some videos of the final result:








So in the end I get a slower movement, which I think is better for a mission of this kind.

I hope this has helped you. See you in the next practice!

