Posts

P3B Car junction

Image
  In this practice we have to make our car able to negotiate a junction. In order to solve it, let's divide it into three parts:

1. Detect the stop sign
2. Look both ways
3. Turn left

1. Detect the stop sign

First of all we have to make our car stop when it reaches the stop sign. The car drives at a constant linear velocity, and when it detects the stop sign it starts decelerating. To detect the stop sign I first used a red colour filter, but some buildings also have red parts, so the car detected a sign where there really wasn't one. To truly confirm the stop sign, once the colour filter detects something I crop it with its bounding box and compare it against the shape of a real stop sign. Shape of the stop sign. ...
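As a rough sketch of that first detection step, the red filter and bounding-box crop could look like this in NumPy (the thresholds and helper names are illustrative, not my exact code, and the final shape comparison against the reference sign is left out):

```python
import numpy as np

def red_mask(img, r_min=150, gb_max=80):
    """Boolean mask of 'stop-sign red' pixels in an RGB image.

    The thresholds are illustrative; in practice they need tuning
    to the simulator's lighting.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= r_min) & (g <= gb_max) & (b <= gb_max)

def bounding_box(mask):
    """Return (y0, x0, y1, x1) of the mask's bounding box, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# The crop img[y0:y1+1, x0:x1+1] is what would then be compared
# against the reference stop-sign shape.
```

With the box in hand, the crop can be resized and compared against the reference shape (OpenCV's cv2.matchShapes is one option for that comparison).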

P3 Autoparking

Image
 In this practice we have to make an autonomous car park in a spot between two cars. For this task we have three lidars that will help it manoeuvre into place without crashing into anything. To solve this exercise I'm going to divide the problem into three main parts:

1. Detect the parking spot
2. Position the car to start the manoeuvre
3. Manoeuvres

1. Detect the parking spot

Before being able to detect the spot, we should know where the lidars are mounted and what information they give us. The platform provides us three functions:

HAL.getFrontLaserData() - to obtain the front laser sensor data. It is composed of 180 pairs of values: (0-180º, distance in millimeters)
HAL.getRightLaserData() - to obtain the right laser sensor data. It is composed of 180 pairs of values: (0-180º, distance in millimeters)
HAL.getBackLaserData() - to obtain the back laser sensor data. It is composed of 180 pairs of values: (0-180º, distance in millimeters)

But, where ...
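For the first part, one way to find the spot in those (angle, distance) pairs is to look for a wide enough angular span where the right laser suddenly reads deep. This is only an illustrative sketch: the find_parking_gap helper and its thresholds are made up, not part of the platform's API, and it assumes the 180 pairs can be iterated directly:

```python
def find_parking_gap(laser_pairs, depth_mm=2500, min_span_deg=30):
    """Find the first angular span at least `min_span_deg` degrees wide
    where every reading is deeper than `depth_mm` (i.e. a hole between
    the parked cars). `laser_pairs` is an iterable of (angle_deg,
    distance_mm) pairs. Returns (start_deg, end_deg) or None.
    Both thresholds are illustrative and need tuning."""
    start = prev = None
    for angle, dist in laser_pairs:
        if dist > depth_mm:
            if start is None:
                start = angle        # gap opens here
            prev = angle             # gap still open
        else:
            if start is not None and prev - start >= min_span_deg:
                return start, prev   # gap closed and it was wide enough
            start = None             # too narrow, keep scanning
    if start is not None and prev - start >= min_span_deg:
        return start, prev           # gap ran to the end of the sweep
    return None
```

Assuming the right laser's pairs come in that shape, this would run on the output of HAL.getRightLaserData().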

P2 Rescue People

Image
  This time we have to program a drone to perform an area reconnaissance. In this case we are over the ocean and have received an emergency call from a sinking ship; we know the last position of that ship. Our mission is to send the robot to that area and look for the people, and the drone must also report the positions of the people it finds. I'm going to divide the solution of this project into three parts:

1. Image processing
2. Accident position
3. Movement control

1. Image processing

The drone should be able to detect faces, which can easily be done with a Haar cascade. I followed this link to detect faces locally on my computer. Once it was able to correctly detect faces in the random images I fed into the code, it was time to try the same code inside Unibotics. With this the drone could detect some faces, but not all of them. This is because the drone was "seeing" faces at random orientations: the faces aren't upright when the drone passes above them, so it could find faces like: The haa...
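The fix for those tilted faces can be sketched as re-running the detector on rotated copies of the frame. This is a simplified illustration using only 90º steps via np.rot90 (the real code would likely use finer rotation angles); the `detector` argument stands in for the actual Haar-cascade call, and all names here are made up:

```python
import numpy as np

def detect_any_rotation(img, detector, steps=4):
    """Try `detector` on the image rotated 0, 90, 180 and 270 degrees
    (counter-clockwise) and return (angle, boxes) for the first rotation
    that yields detections, or (None, []) if none does.

    `detector` stands in for the real Haar-cascade call, e.g. a small
    wrapper around cv2.CascadeClassifier.detectMultiScale.
    """
    for k in range(steps):
        boxes = detector(np.rot90(img, k))  # rotate k * 90 degrees CCW
        if len(boxes) > 0:
            return k * 90, boxes
    return None, []
```

The boxes found on a rotated copy must of course be mapped back to the original frame before reporting a person's position, and finer angle steps would need an affine rotation (e.g. with OpenCV) instead of np.rot90.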

P1 Vacumm cleaner

Image
  The first challenge of this subject is to implement the vacuum cleaner we did last year, but this time with self-localization. To implement this we are going to use the BSA algorithm, a coverage algorithm that uses a mesh to cover the whole space. It is apparently really simple and can be split into 4 steps:

1. Create the map mesh. It will help us know where the obstacles are, which positions the robot has visited, and so on.
2. The robot movement. Every visited position is marked in the mesh; we also save the visited positions that have unvisited neighbours (these will be our return points). The robot will move in one direction until it finds an obstacle or a visited position in front of it.
3. When the robot is stuck because it finds an obstacle or a visited cell and there is no other free cell around it, we will have to find the return point nearest to the robot and calculate how to reach that point.
4. Once we reach the return point we will be back to t...
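Step 3 above, reaching the nearest return point, is essentially a breadth-first search over the already-visited cells of the mesh. Here is an illustrative sketch (the cell encoding and names are made up, not my exact code):

```python
from collections import deque

FREE, VISITED, OBSTACLE = 0, 1, 2  # illustrative cell encoding

def nearest_return_point(grid, start):
    """BFS from `start` through VISITED cells to the closest visited
    cell that still has a FREE neighbour (a 'return point').

    `grid` is a 2D list of FREE/VISITED/OBSTACLE values; returns the
    (row, col) of the return point, or None if the reachable area is
    fully covered."""
    rows, cols = len(grid), len(grid[0])

    def nbrs(r, c):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if any(grid[nr][nc] == FREE for nr, nc in nbrs(r, c)):
            return (r, c)  # nearest cell with an unvisited neighbour
        for nr, nc in nbrs(r, c):
            if (nr, nc) not in seen and grid[nr][nc] == VISITED:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None
```

Because BFS expands cells in order of distance, the first qualifying cell it pops is guaranteed to be the nearest return point; the path back to it can be recovered by storing each cell's parent during the search.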

P0 Follow Line

Image
  Hello world! This is going to be my blog for the Service Robotics subject of the Software Robotics Engineering degree. Here you're going to find all the steps I take to complete the subject's exercises. To start, we have to make the follow-line code we wrote last year work on the Unibotics platform. Starting from my code of last year I could make my car reach the objective; it only took me some time to readjust all the constants and velocities (because the values I had in the other code were too big and the car did strange things with them). Also, as you can see in the videos, I added two circles to the image: red to indicate that the car is detecting a curve, and green when it is on a straight line. This is basically to make debugging the code easier: instead of checking all the information in the terminal about what is going on with the line detection in the camera, I only have to watch the camera feed. Here I leave a video with the ex...
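For anyone curious, the heart of a follow-line controller like this is usually just a proportional term on the line's horizontal deviation in the image. A minimal sketch (the gains here are made up and would need exactly the kind of retuning I describe above):

```python
def steering(line_x, img_width=640, kp=0.005, w_max=1.5):
    """Proportional steering: map the detected line centre's horizontal
    pixel position to an angular velocity.

    kp and w_max are made-up values that have to be retuned for each
    platform/simulator; positive w means turning left in this convention.
    """
    error = img_width / 2 - line_x      # pixels off-centre
    w = kp * error                      # proportional response
    return max(-w_max, min(w_max, w))   # clamp to the actuator limits
```

A line centred in the image gives zero turn; the farther it drifts to either side, the harder the car steers, saturating at w_max.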