diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
new file mode 100644
index 0000000..b7aa492
--- /dev/null
+++ b/.github/CODEOWNERS
@@ -0,0 +1,25 @@
+
+# @Home
+/docs/home/ @RoBorregos/home-integracion
+
+/docs/home/Areas/Navigation.md @RoBorregos/home-nav
+/docs/home/Areas/HRI.md @RoBorregos/home-hri
+/docs/home/Areas/Integration\ and\ Networks.md @RoBorregos/home-integracion
+/docs/home/Areas/Mechanics.md @RoBorregos/home-mecanica
+/docs/home/Areas/Computer\ Vision.md @RoBorregos/home-vision
+/docs/home/Areas/Manipulation.md @RoBorregos/home-manipulacion
+/docs/home/Areas/Electronics\ and\ Control.md @RoBorregos/home-electronica
+
+/docs/home/Aug\ 2022\ -\ Jun\ 2023/Human\ Robot\ Interaction/ @RoBorregos/home-hri
+/docs/home/Aug\ 2022\ -\ Jun\ 2023/Mechanics/ @RoBorregos/home-mecanica
+/docs/home/Aug\ 2022\ -\ Jun\ 2023/Integration\ and\ Networks/ @RoBorregos/home-integracion
+/docs/home/Aug\ 2022\ -\ Jun\ 2023/Electronics\ and\ Control/ @RoBorregos/home-electronica
+/docs/home/Aug\ 2022\ -\ Jun\ 2023/Computer\ Vision/ @RoBorregos/home-vision
+
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Human\ Robot\ Interaction/ @RoBorregos/home-hri
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Mechanics/ @RoBorregos/home-mecanica
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Integration\ and\ Networks/ @RoBorregos/home-integracion
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Electronics\ and\ Control/ @RoBorregos/home-electronica
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Computer\ Vision/ @RoBorregos/home-vision
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Manipulation/ @RoBorregos/home-manipulacion
+/docs/home/Aug\ 2023\ -\ Jun\ 2024/Navigation/ @RoBorregos/home-nav
diff --git a/.github/workflows/format.yml b/.github/workflows/format.yml
new file mode 100644
index 0000000..4bd3b78
--- /dev/null
+++ b/.github/workflows/format.yml
@@ -0,0 +1,23 @@
+name: Test Format
+
+on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - name: Set up Python
+ uses: actions/setup-python@v5
+ with:
+ python-version: '3.x'
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements_test.txt
+ - name: Test with pytest
+ run: pytest
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..ff11275
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+site/
+__pycache__/
\ No newline at end of file
diff --git a/README.md b/README.md
index c04e928..7d2cc44 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ Welcome to the RoBorregos Official documentation. This documentation is based on
## Add new page
To add a new page, locate the docs directory.
-```{bash}
+```bash
ROBORREGOS-DOCS
│ mkdocs.yml
│ requirements.txt
@@ -37,25 +37,38 @@ To add new images, add them to the assets folder. Preferably, use the same name
To run the documentation locally, you need to have python installed.
1. Clone the repository
-```{bash}
+```bash
git clone https://github.com/RoBorregos/RoBorregos-Docs.git
```
2. Install the requirements
-```{bash}
+```bash
pip install -r requirements.txt
```
3. Run the server
-```{bash}
+```bash
mkdocs serve
```
-4. Open the browser and go to http://localhost:8000
+4. If you encounter issues with the command not being found, try the following:
+```bash
+python -m mkdocs serve
+```
+
+5. Open the browser and go to http://localhost:8000
+
+## Test
+Run the formatting tests:
+```bash
+pip install -r requirements_test.txt
+# At root directory of project
+pytest
+```
## Deploy
 Please do not deploy without permission from the repo maintainer.
-```{bash}
+```bash
mkdocs gh-deploy
```
diff --git a/docs/LARC/Mechanics/Version1.md b/docs/LARC/2023/Mechanics/Version1.md
similarity index 100%
rename from docs/LARC/Mechanics/Version1.md
rename to docs/LARC/2023/Mechanics/Version1.md
diff --git a/docs/LARC/Vision/ArUco_detection.md b/docs/LARC/2023/Vision/ArUco_detection.md
similarity index 96%
rename from docs/LARC/Vision/ArUco_detection.md
rename to docs/LARC/2023/Vision/ArUco_detection.md
index 1f30a62..9112e69 100644
--- a/docs/LARC/Vision/ArUco_detection.md
+++ b/docs/LARC/2023/Vision/ArUco_detection.md
@@ -58,4 +58,4 @@ cv2.destroyAllWindows()
```
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/LARC/Vision/Color_detection.md b/docs/LARC/2023/Vision/Color_detection.md
similarity index 98%
rename from docs/LARC/Vision/Color_detection.md
rename to docs/LARC/2023/Vision/Color_detection.md
index 8be1ace..ed6f30b 100644
--- a/docs/LARC/Vision/Color_detection.md
+++ b/docs/LARC/2023/Vision/Color_detection.md
@@ -81,4 +81,4 @@ cv2.destroyAllWindows()
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/LARC/Vision/Letter_Clasification.md b/docs/LARC/2023/Vision/Letter_Clasification.md
similarity index 95%
rename from docs/LARC/Vision/Letter_Clasification.md
rename to docs/LARC/2023/Vision/Letter_Clasification.md
index 09920eb..28b883e 100644
--- a/docs/LARC/Vision/Letter_Clasification.md
+++ b/docs/LARC/2023/Vision/Letter_Clasification.md
@@ -12,7 +12,7 @@ To install Tensorflow Lite you need to run the following command:
pip install tflite-model-maker
```
 ### Dataset structure
-
+
### Usage
@@ -107,4 +107,4 @@ print(data[max])
```
- 
\ No newline at end of file
+ 
\ No newline at end of file
diff --git a/docs/LARC/Vision/index.md b/docs/LARC/2023/Vision/index.md
similarity index 100%
rename from docs/LARC/Vision/index.md
rename to docs/LARC/2023/Vision/index.md
diff --git a/docs/LARC/2023/index.md b/docs/LARC/2023/index.md
new file mode 100644
index 0000000..f9f21c3
--- /dev/null
+++ b/docs/LARC/2023/index.md
@@ -0,0 +1,20 @@
+# @LARC - 2023
+
+The main developments during 2023 with respect to previous years are the following:
+
+
+TODO: modify this.
+## Mechanics
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+- d
+
+## Electronics
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+-
+
+## Programming
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+- d
diff --git a/docs/LARC/2024/Electronics/Description.md b/docs/LARC/2024/Electronics/Description.md
new file mode 100644
index 0000000..2f4c1ac
--- /dev/null
+++ b/docs/LARC/2024/Electronics/Description.md
@@ -0,0 +1,48 @@
+## Introduction
+
+For all the devices that will interact with the computer in the final product, a microcontroller and a microprocessor are necessary. The Arduino MEGA 2560 was chosen as the microcontroller for this purpose, since it is a platform with worldwide support that provides the connections needed for all the devices mentioned in the index. In addition, the Arduino MEGA 2560 has a large number of digital inputs, as well as PWM pins, which supports the decision to use it on the robot. A microprocessor, the Raspberry Pi 4, was used so that the robot could detect the packages through the cameras and so that the Arduino and the Raspberry Pi could communicate bidirectionally.
+
+
+
+
+## Connections
+
+The connections were made so that the sensors and motor drivers that were needed connect to the Arduino, with the exception of the power supplies for the motor drivers, which are powered by separate LiPo batteries. With that in mind, the first LiPo battery (11.1 volts) powers the Arduino through a voltage regulator, which outputs 5 volts to the microcontroller. The second LiPo battery (11.1 volts) is connected to the wheel motors, and the third one (of the same voltage) powers a stepper motor.
+
+The XT60 connectors were chosen for all the voltage inputs since they internally have diodes, which would help protect the overall circuit in case the current flowed the opposite way at any moment.
+
+
+
+There were also three LED indicators for some voltage sources, such as the Arduino's regulated supply voltage (3.6V in the schematic, but ultimately 5V), the Arduino's unregulated supply voltage (the 22V line in the schematic, now 11.1V), and the motors' supply voltage (the 12V line, 11.1V in the end).
+
+
+
+There are some support devices for the components, like the push button to reset the camera. At first, an OpenMV Cam H7 Plus was used, but it stopped turning on, so the team had to switch to regular USB cameras to process all the packages for the challenge. This button was later repurposed as a reset for the Arduino MEGA.
+
+The final schematic (designed in EasyEDA) looked like this:
+
+
+
+
+
+
+
+
+## PCB Development
+
+A software tool such as EasyEDA can turn electronic schematics into Printed Circuit Boards (PCBs), which makes it possible to integrate many devices on a single board without making more physical electrical connections than the ones that are strictly required. In addition, a PCB gives more confidence that there will not be mistakes such as overvoltages or short circuits.
+
+The PCB was designed so that the board could be used modularly, in case any other challenge needed the same connections. This decision was made because many of the components used in this PCB are basic to making an autonomous robot work. It was also decided that the PCB would have female pin headers, since it was better to attach the components that way instead of soldering them directly to the board.
+
+Every voltage via was routed so that no device would receive more voltage than it could handle, which also applies to the microcontroller and the microprocessor. It is important to note that the Raspberry Pi is not included in the PCB, since it is powered externally by the same battery as the Arduino. Another fact to consider is that the PCB was initially intended to have three different voltage rails: 22 volts, 12 volts, and 19 volts. In the end, it was decided that the best option would be to use three 11.1 volt batteries.
+
+
+
+The final electronics result can be seen in this picture, taken on the day of the competition:
+
+
+
+## Modifications
+
+Other modifications included adding a new voltage regulator for the gripper’s servos, since they needed at least 6 volts to operate. Thus, it was decided that the battery supplying the Arduino would also power those servos.
+One of the difficulties encountered throughout the development of this challenge was that both the Raspberry Pi and the TMC2209 stepper driver were overheating, so two cooling fans were added to fix this. Both fans are powered by almost 12 volts (output from one of the stepper motor driver's slots).
\ No newline at end of file
diff --git a/docs/LARC/2024/Mechanics/MDF_fixtures.md b/docs/LARC/2024/Mechanics/MDF_fixtures.md
new file mode 100644
index 0000000..1760cad
--- /dev/null
+++ b/docs/LARC/2024/Mechanics/MDF_fixtures.md
@@ -0,0 +1,24 @@
+## MDF
+
+Generally speaking, Medium-Density Fibreboard (MDF) is a great material for building a simple structure or prototype. Its key advantages are easy manufacturability and low cost. However, when designing the base of a robot, which needs to be sturdy, there are some limitations. Feel free to mix and combine fixture elements such as nuts and bolts. These types of fixtures are a great choice since the forces they generate in response to stress are mainly perpendicular to the material and act across all of its layers. Just as with 3D-printed parts, the material is most vulnerable when parallel loads are applied to some, but not all, of its layers. Therefore, adhesive fixtures that act only on the surface are not recommended.
+
+
+
+## Basic Fixtures
+
+Taking these considerations into account, designing a fixture between two MDF parts is simple. In this case, basic types of fixtures refer to situations in which the two parts are perpendicular to one another. Furthermore, there are 4 basic types of contact between two MDF parts where some kind of movement is blocked. These are divided depending on where a "plug" type part acts upon the "socket" type part, and whether or not the "socket" part covers both edges of the "plug" part. Generally speaking, the goal of an MDF structure is to block as many degrees of freedom as possible. As you can already tell, these types of fixtures can only go so far on their own. However, when used in combination with one another, a group of these basic fixtures can do most of the job. Keep in mind that because laser cutting works so well with MDF, tolerances can mostly be neglected. As long as you always take into account the width of the material you will cut, you can design to your heart's content!
+
+
+
+## Half-Lap fixtures
+
+A half-lap fixture happens when both parts are both "plug" and "socket", and they do not assemble perpendicular to one another but rather along an axis parallel to both pieces. Both parts have a 'U' shape and these interlock with one another. They are particularly useful because the increased contact area produces a greater friction force that helps keep the pieces in place. However, the main advantage of this type of fixture is that it locks movement in almost all directions. Therefore, it works best in structural beams and as an 'adapter' between two pieces.
+
+
+
+## Mix and Match
+
+If the goal is to use as few nuts and bolts as possible, do not be afraid to make as many components as you like; the limit is your imagination. This is a part of the LARC 2024 chassis of the robot, where the middle beam (transparent) is held in place against the wheel wall (pink outline) by a lock piece (blue) that goes into the "plug" part of the beam and locks movement parallel to the beam. Since the contact surface is relatively large, the friction works so well that it is even a hassle to disassemble it!
+
+
+
diff --git a/docs/LARC/2024/Programming/Robot Control/Velocity Control.md b/docs/LARC/2024/Programming/Robot Control/Velocity Control.md
new file mode 100644
index 0000000..319a804
--- /dev/null
+++ b/docs/LARC/2024/Programming/Robot Control/Velocity Control.md
@@ -0,0 +1,49 @@
+## Holonomic Robot Overview
+
+The microcontroller oversees the mechanical movements expected by the main engine. This entails controlling four motors for movement, along with their respective encoders, a BNO055 IMU, floor phototransistors, an elevator, and a gripper. The control system supports mecanum-wheel drive spanning forward/backward, left/right, and rotation.
+
+
+## Encoder for feedback
+
+At the core of the control lies the use of encoders to maintain uniform motor speeds, alongside continuous odometry updates for spatial consistency. The controllability of the wheel speeds depends on a PID control loop implemented for each wheel. Odometry is the driving force of this method and makes mecanum wheels viable. Given the myriad factors influencing chassis movement, such as weight distribution and mecanum wheel precision, IMU feedback supplements movement control, aiding directional precision.
+
+## Kinematic Equations
+
+As described below, these equations govern the direction and speed control of mecanum drive, facilitating precise movement across the field.
+
+\[ \text{front\_left} = \frac{1}{r}(v_x - v_y - (l_x + l_y)z) \]
+
+\[ \text{front\_right} = \frac{1}{r}(v_x + v_y + (l_x + l_y)z) \]
+
+\[ \text{back\_left} = \frac{1}{r}(v_x + v_y - (l_x + l_y)z) \]
+
+\[ \text{back\_right} = \frac{1}{r}(v_x - v_y + (l_x + l_y)z) \]
+
+where
+
+\[ v_x = \text{linear velocity in the X-axis} \]
+
+\[ v_y = \text{linear velocity in the Y-axis} \]
+
+\[ l_x = \text{Wheel base distance} \]
+
+\[ l_y = \text{Wheel track distance} \]
+
+\[ z = \text{Angular speed} \]
+
+\[ r = \text{Wheel radius} \]
+
+
+
+
+For autonomous driving, inverse kinematic equations are used. The equations are derived from the mathematical model of a 4-mecanum-wheeled robot and are used to calculate the required velocity of each individual motor at every instant of robot movement.
+
+
+The previous equations produce a velocity for each wheel, which must then be transformed into rotational speed. A linear velocity to RPM conversion is used, where $v_i$ is the linear velocity of wheel $i$:
+
+\[
+\text{RPM}_i = \frac{v_i \cdot 60}{\text{wheelCircumference}}
+\]
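+
+As a rough illustration of the whole chain (chassis command -> wheel speeds -> RPM setpoints), here is a hedged C++ sketch. The names (`WheelRPM`, `chassisToWheelRPM`) are illustrative and not taken from the team's firmware; it treats each bracketed term of the kinematic equations as the wheel's linear surface speed and converts it to RPM through the wheel circumference.
+
+```cpp
+// Illustrative sketch only: converts a chassis velocity command (vx, vy, wz)
+// into per-wheel RPM targets using the mecanum inverse kinematics above.
+struct WheelRPM { float frontLeft, frontRight, backLeft, backRight; };
+
+WheelRPM chassisToWheelRPM(float vx, float vy, float wz,
+                           float lx, float ly, float wheelRadius) {
+  const float k = lx + ly;
+  // Linear surface speed of each wheel (m/s if vx/vy are m/s and wz is rad/s).
+  const float v[4] = {
+      vx - vy - k * wz,   // front left
+      vx + vy + k * wz,   // front right
+      vx + vy - k * wz,   // back left
+      vx - vy + k * wz};  // back right
+  const float circumference = 2.0f * 3.14159265f * wheelRadius;
+  WheelRPM out;
+  out.frontLeft  = v[0] * 60.0f / circumference;
+  out.frontRight = v[1] * 60.0f / circumference;
+  out.backLeft   = v[2] * 60.0f / circumference;
+  out.backRight  = v[3] * 60.0f / circumference;
+  return out;  // These four values become the per-motor PID setpoints.
+}
+```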
+
+
+The resulting RPM then serves as the setpoint fed back into the embedded control system. At the robot's core, a PID controller for each motor works to maintain the robot's stability over its entire trajectory. Furthermore, the phototransistors and both cameras bolster precision and help confirm the orientation, alongside measuring the distance to objects.
+
+
diff --git a/docs/LARC/2024/index.md b/docs/LARC/2024/index.md
new file mode 100644
index 0000000..b8550f5
--- /dev/null
+++ b/docs/LARC/2024/index.md
@@ -0,0 +1,32 @@
+# @LARC - 2024
+
+The main developments during 2024 with respect to previous years are the following:
+
+
+TODO: modify this.
+## Mechanics
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+- d
+
+## Electronics
+
+- Arduino MEGA 2560
+- Servomotors
+- Raspberry Pi 4 Model B+
+- Pololu Gearmotor Encoder
+- 16-Channel Analog Digital Multiplexer
+- Reflectance Sensor Array: 4-Channel Analog Output QTR-HD-O4A
+- XL4015 Voltage & Current Regulator
+- 3 11.1V LiPo Batteries
+- IR Sensor
+- Stepper motor
+- LEDs
+- Resistors
+- TMC2209 Stepper Motor Driver
+- USB Cameras
+
+## Programming
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+- d
diff --git a/docs/LARC/index.md b/docs/LARC/index.md
index 87a87bd..23cf5f2 100644
--- a/docs/LARC/index.md
+++ b/docs/LARC/index.md
@@ -1,3 +1,35 @@
-#LARC 2023
-## Latin American Robotics Competition
-### Un challenge
+# Latin American Robotics Competition Open Challenge
+
+The competition context is aimed at automating an environment with a large number of packages
+to be organized (Figure 1). The essence of the competition is drawn from environments such
+as warehouses, product distribution centers, store stock rooms, etc.
+
+Warehouse automation is already a reality in large companies like Amazon and Alibaba, and it
+should soon be a reality in midsize companies as well. Think of a possible solution: participants must
+build an agile and fast robot to organize as many packages as possible in a limited time.
+
+
+
+## The goal
+
+The robot can move freely in the scenario but cannot collide with or push a package out of the
+packages' area. To meet the challenges of the competition, the robot must take each package
+and deliver it to its destination. The robot will not know its initial position in the scenario, nor the
+positions of the packages in the packages' area. The objective is to take packages from a specific
+location and move them to predefined locations, so that, at the end, the packages are in the desired
+arrangement in the proposed scenario. The specific objectives are:
+ 1. Take colored packages (yellow, red, green, blue) and move them to the unloading regions
+ with equivalent colors.
+ 2. Pick up packages containing 2D codes and move them to any respective position on the
+ shelves.
+ 3. Pick up packages with alphabetical values and take them to any respective position on
+ the shelves.
+
+## Packages
+
+Packages can be marked by color, 2D codes or alphabetical values. The possible colors for the
+packages are green, yellow, blue and red. The 2D code is a bi-dimensional representation
+containing 9 combinations, from one to nine, according to the markers that can be obtained from the
+site https://chev.me/arucogen. The alphabetical packages are white and the 2D code packages
+are black. There is a specific region in the scenario where the packages are initially positioned,
+called the loading region.
diff --git a/docs/RescueMaze/Algorithm.md b/docs/RescueMaze/2023/Algorithm.md
similarity index 97%
rename from docs/RescueMaze/Algorithm.md
rename to docs/RescueMaze/2023/Algorithm.md
index b5a310b..6367e2f 100644
--- a/docs/RescueMaze/Algorithm.md
+++ b/docs/RescueMaze/2023/Algorithm.md
@@ -10,7 +10,7 @@ The algorithm's main unit is the __tile__, where each one has different properti
The algorithm uses a __graph__, a data structure that contains different nodes that can have connections to other nodes. In this case, the nodes are the different tiles and the connections are the physical space between them, so a connection exists if the tiles are adjacent to each other. This data structure was chosen for its flexibility, since it can be scaled freely as the robot explores, without having limits on the size of the map.
There isn't a specific interface for the graph, instead it is embedded in the tiles, using __pointers__ to have a current tile at all times, as well as a _std::map_ in the tiles that has the pointers to the adjacent tiles in each of the four directions. There's also another _std::map_ at the top level that relates a position to the corresponding tile's pointer, added for a faster lookup.
-
+
## Engine
At the start of the execution it assigns the current direction to the north and uses that as a __reference__ for the map. It also initializes relevant variables (_map of tiles, first tile, unvisited tiles vector_). At every new tile it checks for victims, and stores them if there are, so as to not give additional kits if it passes the victim again.
@@ -25,4 +25,4 @@ Also, during the loop there's a variable that keeps track of the __best unvisite
Having the best next tile, it only needs to know the path it needs to take to go there, which is obtained from the path variable, where it uses a grasp of __recursion__, since the path would be the path to go to the adjacent tile plus a movement to get there. And to actually do that it goes backwards, adding movements to a _stack_, and then updating the tile to the previous one in the path.
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/RescueMaze/Control/PID.md b/docs/RescueMaze/2023/Control/PID.md
similarity index 95%
rename from docs/RescueMaze/Control/PID.md
rename to docs/RescueMaze/2023/Control/PID.md
index fa0b5e2..1420af7 100644
--- a/docs/RescueMaze/Control/PID.md
+++ b/docs/RescueMaze/2023/Control/PID.md
@@ -19,7 +19,7 @@ To visualize the effectiveness of the PID controller, the [following code](https
3. Remove all serial.print() calls from the Arduino code (otherwise, serial communication will fail)
4. From the arduino, call a routine similar to this:
-
+
Where robot is an instance of [Movement](https://github.com/RoBorregos/rescuemaze-2023/blob/pidrotation/navSensors/main_code/Movement.h) and [Plot](https://github.com/RoBorregos/rescuemaze-2023/blob/pidrotation/navSensors/main_code/Plot.h) is a class that handles serial communication and plotting. A method to update the motors movement should also be called. In this example, [updateStraightPID(RPMs)](https://github.com/RoBorregos/rescuemaze-2023/blob/pidrotation/navSensors/main_code/Movement.cpp#L552) updates the data and state of the motor, which is sent through serial by the plot class.
@@ -28,9 +28,9 @@ Where robot is an instance of [Movement](https://github.com/RoBorregos/rescuemaz
**Example 1**
-
+
**Example 2**
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/RescueMaze/Control/Sensors/BNO55.md b/docs/RescueMaze/2023/Control/Sensors/BNO055.md
similarity index 100%
rename from docs/RescueMaze/Control/Sensors/BNO55.md
rename to docs/RescueMaze/2023/Control/Sensors/BNO055.md
diff --git a/docs/RescueMaze/Control/Sensors/TCS34725.md b/docs/RescueMaze/2023/Control/Sensors/TCS34725.md
similarity index 100%
rename from docs/RescueMaze/Control/Sensors/TCS34725.md
rename to docs/RescueMaze/2023/Control/Sensors/TCS34725.md
diff --git a/docs/RescueMaze/Control/Sensors/VLX53L0X.md b/docs/RescueMaze/2023/Control/Sensors/VLX53L0X.md
similarity index 100%
rename from docs/RescueMaze/Control/Sensors/VLX53L0X.md
rename to docs/RescueMaze/2023/Control/Sensors/VLX53L0X.md
diff --git a/docs/RescueMaze/Control/Sensors/index.md b/docs/RescueMaze/2023/Control/Sensors/index.md
similarity index 87%
rename from docs/RescueMaze/Control/Sensors/index.md
rename to docs/RescueMaze/2023/Control/Sensors/index.md
index c14f8c9..24d8d0c 100644
--- a/docs/RescueMaze/Control/Sensors/index.md
+++ b/docs/RescueMaze/2023/Control/Sensors/index.md
@@ -2,9 +2,9 @@
The following sensors were used in Rescue Maze 2023:
-- Time of flight distance sensor: [VLX53L0X](./VLX53L0X) (Adafruit)
-- RGB Color Sensor: [TCS34725](./TCS34725) (Adafruit)
-- Absolute Orientation Sensor: [BNO055](./BNO055) (Adafruit)
+- Time of flight distance sensor: [VLX53L0X](./VLX53L0X.md) (Adafruit)
+- RGB Color Sensor: [TCS34725](./TCS34725.md) (Adafruit)
+- Absolute Orientation Sensor: [BNO055](./BNO055.md) (Adafruit)
## Using i2c devices
diff --git a/docs/RescueMaze/Control/index.md b/docs/RescueMaze/2023/Control/index.md
similarity index 95%
rename from docs/RescueMaze/Control/index.md
rename to docs/RescueMaze/2023/Control/index.md
index 6e2ed5c..a2ba47e 100644
--- a/docs/RescueMaze/Control/index.md
+++ b/docs/RescueMaze/2023/Control/index.md
@@ -9,7 +9,7 @@ One of the most important elements needed to develop a stable control is the use
sensors which give feedback to the robot about its current state. The sensor input is then
used to move the robot accordingly and reach the target positions.
-See [Sensors](./Sensors) for information about the sensors used.
+See [Sensors](Sensors/index.md) for information about the sensors used.
## PID
diff --git a/docs/RescueMaze/Jetson Nano/RunningJetson.md b/docs/RescueMaze/2023/Jetson Nano/RunningJetson.md
similarity index 100%
rename from docs/RescueMaze/Jetson Nano/RunningJetson.md
rename to docs/RescueMaze/2023/Jetson Nano/RunningJetson.md
diff --git a/docs/RescueMaze/Jetson Nano/USBRules.md b/docs/RescueMaze/2023/Jetson Nano/USBRules.md
similarity index 82%
rename from docs/RescueMaze/Jetson Nano/USBRules.md
rename to docs/RescueMaze/2023/Jetson Nano/USBRules.md
index e768eea..c41dd46 100644
--- a/docs/RescueMaze/Jetson Nano/USBRules.md
+++ b/docs/RescueMaze/2023/Jetson Nano/USBRules.md
@@ -3,7 +3,7 @@
 Here is an example of how we can see the behaviour of the USB ports.
-
+
## Udev Rules
@@ -13,7 +13,7 @@ To automate the USB ports, we need to create a rule for each port. The rule will
Here is the hard investigation we did to determine the rules:
-
+
-
+
diff --git a/docs/RescueMaze/ROS/Navigation.md b/docs/RescueMaze/2023/ROS/Navigation.md
similarity index 98%
rename from docs/RescueMaze/ROS/Navigation.md
rename to docs/RescueMaze/2023/ROS/Navigation.md
index 8719f6a..7bf7d3e 100644
--- a/docs/RescueMaze/ROS/Navigation.md
+++ b/docs/RescueMaze/2023/ROS/Navigation.md
@@ -53,7 +53,7 @@ The velocity commands were initially published to the `/cmd_vel` topic, which is
 The goals are sent to the navigation stack using the [move_base](http://wiki.ros.org/move_base) package. In this case the goals were limited to the goals sent by the [algorithm](../Algorithm.md), being only 90 degree turns and 30 cm movements forward or backward. In order to send accurate goals, a custom transform was used, which represents the ideal position of the robot at any given moment, compensating for inaccuracies in the robot's translational and rotational movement.
-This transform was calculated by using the IMU yaw data as well as the [localization_grid](/docs/RescueMaze/ROS/LocalizationGrid.md) data and was published by a transform broadcaster.
+This transform was calculated by using the IMU yaw data as well as the [localization_grid](LocalizationGrid.md) data and was published by a transform broadcaster.
- IMU data: Stored when the robot is initialized, and used to calculate the angle of each cardinal direction, which are then used to update the transform.
- Localization grid: Used to get the distance from the robot to the center of the current tile. Used to update the transform to send goals from the center of the tile.
diff --git a/docs/RescueMaze/ROS/SerialCommunication.md b/docs/RescueMaze/2023/ROS/SerialCommunication.md
similarity index 100%
rename from docs/RescueMaze/ROS/SerialCommunication.md
rename to docs/RescueMaze/2023/ROS/SerialCommunication.md
diff --git a/docs/RescueMaze/ROS/index.md b/docs/RescueMaze/2023/ROS/index.md
similarity index 100%
rename from docs/RescueMaze/ROS/index.md
rename to docs/RescueMaze/2023/ROS/index.md
diff --git a/docs/RescueMaze/Vision/Openmv.md b/docs/RescueMaze/2023/Vision/Openmv.md
similarity index 100%
rename from docs/RescueMaze/Vision/Openmv.md
rename to docs/RescueMaze/2023/Vision/Openmv.md
diff --git a/docs/RescueMaze/Vision/TensorFlowLite.md b/docs/RescueMaze/2023/Vision/TensorFlowLite.md
similarity index 100%
rename from docs/RescueMaze/Vision/TensorFlowLite.md
rename to docs/RescueMaze/2023/Vision/TensorFlowLite.md
diff --git a/docs/RescueMaze/Vision/index.md b/docs/RescueMaze/2023/Vision/index.md
similarity index 100%
rename from docs/RescueMaze/Vision/index.md
rename to docs/RescueMaze/2023/Vision/index.md
diff --git a/docs/RescueMaze/2023/index.md b/docs/RescueMaze/2023/index.md
new file mode 100644
index 0000000..657b442
--- /dev/null
+++ b/docs/RescueMaze/2023/index.md
@@ -0,0 +1,14 @@
+# @RescueMaze - 2023
+
+The main developments during 2023 with respect to previous years are the following:
+
+
+TODO: modify this.
+
+## Mechanics
+
+## Electronics
+
+## Programming
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
diff --git a/docs/RescueMaze/2024/Algorithm.md b/docs/RescueMaze/2024/Algorithm.md
new file mode 100644
index 0000000..0c2d16f
--- /dev/null
+++ b/docs/RescueMaze/2024/Algorithm.md
@@ -0,0 +1,36 @@
+Similar to last year's implementation, the algorithm of choice for traversing the maze was a **Depth First Search** (DFS) routine using a **Dijkstra's shortest path** implementation for calculating routes between tiles.
+
+## Implementation
+During planning, it was decided to first work on a self-contained C++ iteration of the code in order to test it using only the terminal. This was because the programmer in charge of this area wasn't going to have access to the robot prototype during winter break.
+Once this was done, a migration to Arduino was worked on next. Problems arose because testing was first conducted using an Arduino Mega, while the final robot was supposed to use an ESP32 microcontroller, and incompatibilities between libraries were found. Most of these libraries were used to imitate data structures from the Standard Template Library, which were not available on Arduino, which was also a big setback.
+
+## Tiles
+A map of the tiles is saved in the robot's memory, and the information can be accessed using the x, y, and z positions. Each tile stores data such as the available adjacent tiles, the weight (cost) to visit the tile, whether a victim has already been detected, whether its floor is black, and whether it is a checkpoint.
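+
+As a rough illustration (hypothetical names, not the team's definitions), the tile map can be sketched with the coordinates as the key and the per-direction adjacency stored inside each tile:
+
+```cpp
+#include <array>
+#include <map>
+#include <tuple>
+
+// Hypothetical coordinate key: ordered so it can index a std::map.
+struct Coord {
+  int x, y, z;
+  bool operator<(const Coord& o) const {
+    return std::tie(x, y, z) < std::tie(o.x, o.y, o.z);
+  }
+};
+
+// Hypothetical per-tile record mirroring the data listed above.
+struct Tile {
+  std::array<bool, 4> adjacent{};  // Open neighbor in each direction (N, E, S, W).
+  int weight = 1;                  // Cost to visit (higher for ramps, bumpers, blue tiles).
+  bool victimDetected = false;     // Avoid dropping a second kit for the same victim.
+  bool blackFloor = false;         // Black tiles must never be entered.
+  bool checkpoint = false;
+};
+
+std::map<Coord, Tile> maze;  // Grows freely as the robot explores.
+```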
+
+## Depth First Search
+This type of exploration algorithm prioritizes visiting new tiles until a dead end is found before backtracking to visit another tile. This was preferred over a Breadth First Search algorithm which, instead of searching a complete area one tile at a time, seeks to visit other parts of the maze first.
+
+## Dijkstra's shortest path
+This famous algorithm is used to determine the best available path between two coordinates, avoiding blue tiles, ramps, and bumpers, which would make the shortest path slower. This is a huge time saver compared to simple recursive backtracking.
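+
+A compact sketch of that route search over the hypothetical tile map above (again illustrative, reusing the `Coord`, `Tile`, and `maze` definitions from the previous snippet):
+
+```cpp
+#include <functional>
+#include <map>
+#include <queue>
+#include <utility>
+#include <vector>
+
+// Returns the lowest total weight from `start` to `goal`, or -1 if unreachable.
+int shortestCost(const Coord& start, const Coord& goal) {
+  const int dx[4] = {0, 1, 0, -1}, dy[4] = {1, 0, -1, 0};  // N, E, S, W
+  using Item = std::pair<int, Coord>;  // (cost so far, coordinate)
+  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;
+  std::map<Coord, int> dist;
+  dist[start] = 0;
+  open.push({0, start});
+  while (!open.empty()) {
+    auto [cost, cur] = open.top();
+    open.pop();
+    if (cur.x == goal.x && cur.y == goal.y && cur.z == goal.z) return cost;
+    if (cost > dist[cur]) continue;  // Stale queue entry.
+    const Tile& tile = maze[cur];
+    for (int d = 0; d < 4; ++d) {
+      if (!tile.adjacent[d]) continue;  // Wall or unexplored neighbor.
+      Coord next{cur.x + dx[d], cur.y + dy[d], cur.z};
+      auto it = maze.find(next);
+      if (it == maze.end() || it->second.blackFloor) continue;
+      int newCost = cost + it->second.weight;  // Heavier for ramps, bumpers, blue tiles.
+      auto distIt = dist.find(next);
+      if (distIt == dist.end() || newCost < distIt->second) {
+        dist[next] = newCost;
+        open.push({newCost, next});
+      }
+    }
+  }
+  return -1;
+}
+```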
+
+## Engine
+Saving the information of the maze is crucial, because it avoids wasting time going to already visited tiles, dropping unnecessary kits, etc. Keeping track of the robot's position at all times lets us know the current and adjacent tiles' information and find the way back to the start in order to earn bonus points.
+
+## C++ terminal testing
+Using a two-dimensional character array, we were able to simulate the robot's behavior virtually and easily test the algorithm's functionality in various scenarios.
+
+| Symbol | Represents |
+| ----- | ----- |
+| # | Wall |
+| > | Move right |
+| < | Move left |
+| ^ | Move up |
+| / | Move down |
+| S | Start of path |
+| E | End of path |
+| r | Ramp inclined right |
+| l | Ramp inclined left |
+| u | Ramp inclined up |
+| d | Ramp inclined down |
+
+
\ No newline at end of file
diff --git a/docs/RescueMaze/2024/Control/PID.md b/docs/RescueMaze/2024/Control/PID.md
new file mode 100644
index 0000000..53470cc
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/PID.md
@@ -0,0 +1,26 @@
+# PID
+
+PID control was used to reach the requested RPMs for the motors. To tune it, the response was plotted, which made it possible to observe how much the speed oscillated before settling at the setpoint; after many modifications to the constants, the desired behavior was reached. The constants are kP, kI, and kD, which indicate the weight of the error in the proportional, integral, and derivative terms.
+
+This control uses as its reference the speed calculated from the ticks (interrupts) of the encoders; for this, it was checked that each motor had the same number of ticks per revolution.
+
+## How does it work?
+
+The encoder can be thought of as an infrared beam that crosses from side to side, so every time something passes through the middle of this "laser" there is an interrupt. To get an approximation of how many ticks there are in a revolution, you can write code that counts the interrupts on the required pin with the Arduino function attachInterrupt().
+
+attachInterrupt() function:
+[here](https://www.arduino.cc/reference/en/language/functions/external-interrupts/attachinterrupt/)
+
+
+In this way the speed of each motor is obtained, and the PID control adjusts the PWM variable to reach the desired speed so that all motors run at the same speed.
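+
+As a rough sketch of this loop (placeholder pins, ticks per revolution, and gains; the team's actual implementation is linked at the bottom of this page):
+
+```cpp
+// Placeholder values: adjust to the actual encoder wiring and gearbox.
+const byte kEncoderPin = 2;
+const byte kMotorPwmPin = 9;
+const float kTicksPerRev = 500.0;
+
+// Placeholder gains, tuned by plotting the response as described above.
+const float kP = 1.0, kI = 0.5, kD = 0.01;
+
+volatile long ticks = 0;
+void onTick() { ticks++; }  // ISR: one tick per encoder pulse.
+
+float errorSum = 0, lastError = 0;
+
+// Measure RPM from the tick count, then run one PID step on the PWM output.
+void updateMotor(float targetRPM, float dtSeconds) {
+  noInterrupts();
+  long count = ticks;
+  ticks = 0;
+  interrupts();
+
+  float rpm = (count / kTicksPerRev) * (60.0 / dtSeconds);
+  float error = targetRPM - rpm;
+  errorSum += error * dtSeconds;
+  float dError = (error - lastError) / dtSeconds;
+  lastError = error;
+
+  float output = kP * error + kI * errorSum + kD * dError;
+  analogWrite(kMotorPwmPin, constrain((int)output, 0, 255));
+}
+
+void setup() {
+  pinMode(kEncoderPin, INPUT_PULLUP);
+  attachInterrupt(digitalPinToInterrupt(kEncoderPin), onTick, RISING);
+}
+
+void loop() {
+  updateMotor(120.0, 0.1);  // Example: hold 120 RPM, updating every 100 ms.
+  delay(100);
+}
+```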
+
+## PID Orientation
+
+To make the robot hold a specific orientation, a control loop was implemented with the gyroscope (BNO) as its reference; for this one as well, the constants had to be tuned so that the setpoint was reached as soon as possible without much oscillation. The current angle (the BNO reading) and the desired angle were used: if the robot drifted more to one side, the speed of the motors on that same side was increased to reach the target angle.
+
+## PID Walls
+
+In certain scenarios, such as the ramp, there are always walls on both sides, so a control loop with respect to the side distances was devised. That is, if the robot was told to keep a 7 cm distance from the wall, it oscillated around that point until it reached it and left the ramp; of course, the constants had to be tuned to reach the desired behavior, as well as the desired distance.
+
+Implementation of the PID:
+[here](https://github.com/RoBorregos/RescueMaze2024/blob/nacionalNewMovement/PIDmotores/PID.cpp)
\ No newline at end of file
diff --git a/docs/RescueMaze/2024/Control/Sensors/BNO055.md b/docs/RescueMaze/2024/Control/Sensors/BNO055.md
new file mode 100644
index 0000000..f060140
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/Sensors/BNO055.md
@@ -0,0 +1,42 @@
+# BNO055
+
+The BNO055 is an intelligent MEMS sensor developed by Bosch Sensortec. It combines a 9-axis absolute-orientation sensor and sensor fusion capabilities into a single package. This compact device simplifies the process of obtaining meaningful sensor data by handling sensor fusion internally, saving developers from the complexities of implementing fusion algorithms themselves.
+
+[Link](https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/overview)
+
+[Library](https://github.com/adafruit/Adafruit_BNO055)
+
+## Functionality
+
+Accelerometer: Measures three-axis acceleration, including both gravity and linear motion, in meters per second squared (m/s²) at a rate of 100Hz.
+
+Magnetometer: Detects three-axis magnetic field strength in microtesla (µT) at 20Hz.
+
+Gyroscope: Reports three-axis rotation speed (angular velocity) in radians per second (rad/s) at 100Hz. The high-speed ARM Cortex-M0 based processor within the BNO055 handles sensor fusion and real-time requirements.
+
+## Outputs
+
+Absolute Orientation (Euler Vector): This format gives three-axis orientation data based on a 360° sphere. It provides stable orientation output at a rate of 100Hz.
+
+Absolute Orientation (Quaternion): If you need more accurate data manipulation, the BNO055 outputs a four-point quaternion. Quaternions are useful for complex rotations and are also available at 100Hz.
+
+Angular Velocity Vector: This vector represents three-axis rotation speed (angular velocity) in rad/s.
+
+Linear Acceleration Vector: Excluding gravity, it provides three-axis linear acceleration data at 100Hz.
+
+Gravity Vector: Represents three-axis gravitational acceleration (minus any movement) at 100Hz.
+
+Temperature: The BNO055 also reports ambient temperature in degrees Celsius at 1Hz.
+
+## Calibration
+
+Before using the BNO055 sensor, it’s essential to calibrate it properly. The calibration process ensures accurate data from the gyroscope, accelerometer, and magnetometer.
+[Here](https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/device-calibration) is the explanation of the process. Depending on the axis you move the BNO055, it calibrates a specific axis.
+
+For example:
+
+- To calibrate the gyroscope, keep the device stationary in any position.
+- For the magnetometer, recent devices perform fast magnetic compensation without requiring specific ‘figure 8’ motions.
+- To calibrate the accelerometer, place the BNO055 in six standing positions: +X, -X, +Y, -Y, +Z, and -Z. Use a block of wood or a similar object to maintain alignment during calibration.
+
+Remember that the BNO055 starts supplying sensor data as soon as it’s powered on, even before the calibration process is complete. In NDOF mode, discard data while the system calibration status is 0 (indicating incomplete calibration). Once the system calibration status reaches 1 or higher, the heading will reflect the absolute value once the BNO055 finds magnetic north.
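+
+As a reference sketch (following the standard Adafruit_BNO055 example pattern, not the team's code), readings can be discarded until the system calibration status rises above 0:
+
+```cpp
+#include <Adafruit_BNO055.h>
+#include <Adafruit_Sensor.h>
+#include <Wire.h>
+
+Adafruit_BNO055 bno = Adafruit_BNO055(55);  // Default sensor ID and I2C address.
+
+void setup() {
+  Serial.begin(115200);
+  if (!bno.begin()) {
+    Serial.println("BNO055 not detected, check wiring.");
+    while (true) {}
+  }
+}
+
+void loop() {
+  uint8_t sys, gyro, accel, mag;
+  bno.getCalibration(&sys, &gyro, &accel, &mag);
+  if (sys == 0) return;  // Discard data while system calibration is incomplete.
+
+  sensors_event_t event;
+  bno.getEvent(&event);
+  Serial.println(event.orientation.x);  // Heading in degrees once calibrated.
+  delay(100);
+}
+```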
+
diff --git a/docs/RescueMaze/2024/Control/Sensors/TCS34725.md b/docs/RescueMaze/2024/Control/Sensors/TCS34725.md
new file mode 100644
index 0000000..3d3697b
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/Sensors/TCS34725.md
@@ -0,0 +1,51 @@
+# TCS34725
+
+The TCS34725 is a color light-to-digital converter that provides digital readings for red, green, blue (RGB), and clear light sensing values. In a specific application, it was utilized to detect black and blue tiles. The sensor was strategically positioned at the bottom front of the robot, allowing it to have a time buffer for timely responses.
+
+[Link](https://www.adafruit.com/product/1334)
+
+[Library](https://github.com/adafruit/Adafruit_TCS34725)
+
+The main methods used were:
+```cpp
+tcs.getRawData(&red_r, &green_r, &blue_r, &clear_r);
+tcs.getRGB(&red, &green, &blue);
+```
+
+## Processing RGB input
+
+The idea behind our implementation to relate an RGB output to a color was very simple:
+
+- For each color, establish minimum and maximum expected values for each color channel.
+
+```cpp
+static constexpr char colorList[colorAmount + 1] = {"NAG"}; // List of color initials
+// Each row represents the upper and lower limits for detecting a color.
+// colorThresholds[0] = {redMin, redmax, greenMin, greenMax, blueMin, blueMax}
+static constexpr int colorThresholds[colorAmount][6] = {
+{0, 40, 0, 40, 0, 40},
+{60, 100, 120, 150, 125, 175},
+{180, 240, 190, 210, 180, 210}};
+```
+
+- After each reading, check if the values are within the expected range for each color.
+
+- Return a value indicating a specific color if all three channels match the expected range.
+
+```cpp
+char TCS::getColorWithThresholds()
+{
+ if (colorThresholds == nullptr)
+ return 'u';
+
+ updateRGBC(); // Update the RGB values
+
+ for (uint8_t i = 0; i < colorAmount; i++) {
+ if (inRangeThreshold(colorThresholds[i][0], red, colorThresholds[i][1]) && inRangeThreshold(colorThresholds[i][2], green, colorThresholds[i][3]) && inRangeThreshold(colorThresholds[i][4], blue, colorThresholds[i][5])){
+ return colorList[i];
+ }
+ }
+
+ return 'u'; // In case no color is detected.
+}
+```
\ No newline at end of file
diff --git a/docs/RescueMaze/2024/Control/Sensors/VLX53L0X.md b/docs/RescueMaze/2024/Control/Sensors/VLX53L0X.md
new file mode 100644
index 0000000..3edcba4
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/Sensors/VLX53L0X.md
@@ -0,0 +1,16 @@
+# VL53L0X
+
+The VL53L0X is a time-of-flight distance sensor. In Rescue Maze, it was used to detect walls and measure robot
+displacement.
+
+Product link: [https://www.adafruit.com/product/3317](https://www.adafruit.com/product/3317)
+
+Library used: [https://github.com/adafruit/Adafruit_VL53L0X](https://github.com/adafruit/Adafruit_VL53L0X)
+
+The sensor has multiple modes (see [Adafruit_VL53L0X.cpp](https://github.com/adafruit/Adafruit_VL53L0X/blob/master/src/Adafruit_VL53L0X.cpp) lines 210-219).
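+
+As a reference, a minimal single-sensor reading sketch following the standard Adafruit example pattern (not the team's code):
+
+```cpp
+#include <Adafruit_VL53L0X.h>
+
+Adafruit_VL53L0X lox;
+
+void setup() {
+  Serial.begin(115200);
+  if (!lox.begin()) {  // Uses the default I2C address (0x29).
+    Serial.println("VL53L0X not detected, check wiring.");
+    while (true) {}
+  }
+}
+
+void loop() {
+  VL53L0X_RangingMeasurementData_t measure;
+  lox.rangingTest(&measure, false);  // false = no debug printout.
+  if (measure.RangeStatus != 4) {    // 4 means the reading is out of range.
+    Serial.println(measure.RangeMilliMeter);  // Distance in millimeters.
+  }
+  delay(50);
+}
+```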
+
+## Considerations
+
+While this sensor gives accurate readings, it is not very reliable for long distances. Specifically, its maximum range
+is ~1.3 meters, and its accuracy decreases the further away the object is. In some scenarios, the sensor wasn't reliable enough to
+measure robot displacement. That's why I recommend the black VL53L0X breakout, which I tested and which reached a longer distance than the blue one.
\ No newline at end of file
diff --git a/docs/RescueMaze/2024/Control/Sensors/index.md b/docs/RescueMaze/2024/Control/Sensors/index.md
new file mode 100644
index 0000000..2ca12f9
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/Sensors/index.md
@@ -0,0 +1,26 @@
+# Sensor Overview
+
+The following sensors were used in Rescue Maze 2024:
+
+- Time of flight distance sensor: [VLX53L0X](./VLX53L0X.md) (Adafruit)
+- RGB Color Sensor: [TCS34725](./TCS34725.md) (Adafruit)
+- Absolute Orientation Sensor: [BNO055](./BNO055.md) (Adafruit)
+
+## Using i2c devices
+
+i2c is a protocol used by many of the sensors used in Rescue Maze. The i2c protocol is a serial communication
+protocol that allows multiple devices to be connected to the same bus. The advantage of this protocol is that
+devices only need two wires to communicate (SDA and SCL, in addition to GND and VCC). The disadvantage is
+that each device needs a unique address to be able to communicate with the master device (in this case, the Arduino).
+
+However, since many devices of the same type are used (in the case of Rescue Maze, 4 VLX53L0X distance sensors),
+a multiplexer is needed to avoid address conflicts. The multiplexer connects to the i2c bus and allows the
+master device to select which device it wants to communicate with. Devices with the same address should be
+connected to different multiplexer channels.
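+
+For example, with a TCA9548A-style i2c multiplexer (assumed here at its default address 0x70; the exact part used is not named on this page), selecting a channel before reading a sensor looks like this:
+
+```cpp
+#include <Wire.h>
+
+const uint8_t kMuxAddress = 0x70;  // Assumed default TCA9548A address.
+
+// Enable a single multiplexer channel (0-7) so that the next i2c reads and
+// writes reach the sensor wired to that channel.
+void selectMuxChannel(uint8_t channel) {
+  if (channel > 7) return;
+  Wire.beginTransmission(kMuxAddress);
+  Wire.write(1 << channel);
+  Wire.endTransmission();
+}
+
+void setup() {
+  Wire.begin();
+  selectMuxChannel(0);  // e.g. talk to the VLX wired to channel 0.
+}
+
+void loop() {}
+```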
+
+## Sensor abstraction
+
+In order to abstract the use of the sensors, the classes BNO, VLX, and TCS were created. These classes are in charge of initializing
+most of the sensors and providing a clean interface to use them. Instead of accessing the sensor data
+directly, it is best to do so through the sensor class. That way, filters can be applied more easily, and if the
+type of sensor changes, the only thing that needs to change is the underlying implementation.
diff --git a/docs/RescueMaze/2024/Control/index.md b/docs/RescueMaze/2024/Control/index.md
new file mode 100644
index 0000000..a2ba47e
--- /dev/null
+++ b/docs/RescueMaze/2024/Control/index.md
@@ -0,0 +1,29 @@
+# Control overview
+
+Robot control is the system in charge of moving (and controlling) the robot as expected.
+This system is of relevance in Rescue Maze as it helps the robot move through obstacles and
+locate itself, which is fundamental for exploring the whole maze using the algorithm.
+
+## Sensors
+One of the most important elements needed to develop a stable control is the use of reliable
+sensors which give feedback to the robot about its current state. The sensor input is then
+used to move the robot accordingly and reach the target positions.
+
+See [Sensors](Sensors/index.md) for information about the sensors used.
+
+## PID
+
+>The PID controller is a control loop feedback mechanism widely used in industrial control systems.
+A PID controller continuously calculates an error value as the difference between a desired setpoint
+and a measured process variable and applies a correction based on proportional, integral, and derivative
+terms (sometimes denoted P, I, and D respectively) which give their name to the controller type.
+
+- Copilot
+
+In RescueMaze, a PID controller was used to make the robot move straight and rotate to the desired angles.
+The PID control regulated the PWM signal sent to the motors such that they approached a target RPM.
+In addition, the RPM targets were increased or decreased depending on the error between the current angle
+and the desired orientation angle.
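+
+As a minimal sketch of that adjustment (illustrative gain and function name, not the team's code):
+
+```cpp
+// Nudge the per-side RPM targets using the heading error.
+// kAngle is a placeholder gain; the sign convention depends on the motor wiring.
+void headingCorrectedTargets(float baseRPM, float angleErrorDeg,
+                             float& leftTargetRPM, float& rightTargetRPM) {
+  const float kAngle = 2.0f;
+  leftTargetRPM  = baseRPM + kAngle * angleErrorDeg;
+  rightTargetRPM = baseRPM - kAngle * angleErrorDeg;
+}
+```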
+
+See [PID](PID.md) for information about the PID implementation.
+
diff --git a/docs/RescueMaze/2024/Dispenser.md b/docs/RescueMaze/2024/Dispenser.md
new file mode 100644
index 0000000..62a2e49
--- /dev/null
+++ b/docs/RescueMaze/2024/Dispenser.md
@@ -0,0 +1,11 @@
+# Dispenser
+
+The rescue kit deployment mechanism consists of a rack-and-pinion design that can deploy kits on different sides of the robot. The design went through quite a few changes and problems, such as the difficulty of centering the servo motor to grab more kits from the storage tower; however, we mechanically increased the margin the servo could have and, through coding, managed to get a more reliable system.
+
+Even so, it was never as reliable as we expected, starting with how careful we sometimes had to be just to load the kits.
+
+To avoid this, I think heavier kits would help somewhat, but above all, try to make the mechanism as simple as possible over any other advantage.
+
+Among the good things about the design were the small space it occupied and how “unobtrusive” it was, using only a free area below the base and without needing a prominent tower. However, aside from avoiding a tower that could interfere when using a lidar, these advantages were mostly aesthetic.
+
+
\ No newline at end of file
diff --git a/docs/RescueMaze/2024/Overall CAD.md b/docs/RescueMaze/2024/Overall CAD.md
new file mode 100644
index 0000000..5b351f0
--- /dev/null
+++ b/docs/RescueMaze/2024/Overall CAD.md
@@ -0,0 +1,23 @@
+# Overall CAD
+
+The entire robot was designed from scratch using SolidWorks so that each element along with all the components to be used were integrated and had their own place.
+
+The chassis was built with the goal of making the robot as low as possible while keeping a minimum clearance of 2 cm below its base to be able to pass the speed bumps.
+
+**Keep this minimum clearance and the robot's diagonal size in mind as important measures when designing the robot.**
+
+We included a box to store the batteries; storing them at the bottom was useful to lower the center of mass and avoid overturning on the ramps. The box was meant to be retractable, a great idea but with a poor implementation, since the 3D-printed part tended to break; in the end we used screws to attach it to the main MDF frame.
+
+Keep in mind that MDF is brittle and easy to break, but it is rapidly manufactured with the laser cutter. 3D printing, although providing more design freedom, takes longer to print and to iterate between different designs; remember that it is also heat-deformable, and a hot Jetson may be enough to deform PLA.
+
+Try to do modular design, especially with sensors. Although they can always be placed easily with hot glue and a hole in the chassis, a modular piece that facilitates changing between different types of sensors (ToF or ultrasonic, or different ToF models) can be a good idea that looks more professional and clean.
+
+**Cables always take more space than expected; good communication between electronics and mechanics is essential.**
+
+**Place the color sensor as low and as front as possible**
+
+
+
+
+
+
diff --git a/docs/RescueMaze/2024/Wheels.md b/docs/RescueMaze/2024/Wheels.md
new file mode 100644
index 0000000..ebfd67e
--- /dev/null
+++ b/docs/RescueMaze/2024/Wheels.md
@@ -0,0 +1,16 @@
+# Wheels
+
+The wheels were 3D printed in TPU to provide flexibility, which is very useful when passing speed bumps. They went through multiple iterations until their final version, which achieved good friction with the track through the use of plastic bracelets and a reliable attachment to the motor shaft.
+
+The plastic bracelets came off in the middle of one of the competition rounds :c
+
+If you use an extra part to improve grip with the floor, make sure it is well attached.
+
+TPU wheels work “well” but not as expected; there are many other wheels that can also work well. If you want to use this material, it would take a lot more research and testing to find the right amount of flexibility in the wheels. Keep in mind that as the weight of the robot changes, so does the behavior, so try to estimate the weight of the robot and do your testing with that.
+
+We never tested it, but a silicone cover made to size with a mold could solve the problem of grip with the floor.
+
+Big wheels are useful when passing stairs and speed bumps, but they make it tricky to avoid climbing walls or overturning.
+
+
+
diff --git a/docs/RescueMaze/2024/index.md b/docs/RescueMaze/2024/index.md
new file mode 100644
index 0000000..3432e73
--- /dev/null
+++ b/docs/RescueMaze/2024/index.md
@@ -0,0 +1,19 @@
+# @RescueMaze - 2024
+
+The main developments during 2024 with respect to previous years are the following:
+
+
+TODO: modify this.
+## Mechanics
+
+- [Overall CAD](Overall CAD/)
+- [Wheels](Wheels/)
+- [Dispenser](Dispenser/)
+
+## Electronics
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+
+## Programming
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
\ No newline at end of file
diff --git a/docs/RescueMaze/index.md b/docs/RescueMaze/index.md
index 614f8ab..c18555b 100644
--- a/docs/RescueMaze/index.md
+++ b/docs/RescueMaze/index.md
@@ -5,10 +5,3 @@ Simulation of a disaster area where the robot has to navigate through the majori
## Competition
See the [rules](https://junior.robocup.org/wp-content/uploads/2023/01/RCJRescueMaze2023RulesFinal.pdf) for Rescue Maze 2023.
-
-## Sections
-
-- [Jetson Nano](Jetson Nano/RunningJetson/)
-- [Algorithm](Algorithm)
-- [ROS](ROS)
-- [Control](Control/)
\ No newline at end of file
diff --git a/docs/soccer/.DS_Store b/docs/SoccerLightweight/.DS_Store
similarity index 100%
rename from docs/soccer/.DS_Store
rename to docs/SoccerLightweight/.DS_Store
diff --git a/docs/soccer/Electronics/1 General.md b/docs/SoccerLightweight/2023/Electronics/1 General.md
similarity index 100%
rename from docs/soccer/Electronics/1 General.md
rename to docs/SoccerLightweight/2023/Electronics/1 General.md
diff --git a/docs/soccer/Electronics/2 Power Supply.md b/docs/SoccerLightweight/2023/Electronics/2 Power Supply.md
similarity index 100%
rename from docs/soccer/Electronics/2 Power Supply.md
rename to docs/SoccerLightweight/2023/Electronics/2 Power Supply.md
diff --git a/docs/soccer/Electronics/3 Printed Circuit Boards (PCB).md b/docs/SoccerLightweight/2023/Electronics/3 Printed Circuit Boards (PCB).md
similarity index 100%
rename from docs/soccer/Electronics/3 Printed Circuit Boards (PCB).md
rename to docs/SoccerLightweight/2023/Electronics/3 Printed Circuit Boards (PCB).md
diff --git a/docs/soccer/Electronics/4 Dribbler Implementation.md b/docs/SoccerLightweight/2023/Electronics/4 Dribbler Implementation.md
similarity index 100%
rename from docs/soccer/Electronics/4 Dribbler Implementation.md
rename to docs/SoccerLightweight/2023/Electronics/4 Dribbler Implementation.md
diff --git a/docs/soccer/Mechanics/.DS_Store b/docs/SoccerLightweight/2023/Mechanics/.DS_Store
similarity index 100%
rename from docs/soccer/Mechanics/.DS_Store
rename to docs/SoccerLightweight/2023/Mechanics/.DS_Store
diff --git a/docs/soccer/Mechanics/0.General.md b/docs/SoccerLightweight/2023/Mechanics/0.General.md
similarity index 68%
rename from docs/soccer/Mechanics/0.General.md
rename to docs/SoccerLightweight/2023/Mechanics/0.General.md
index 38b642c..8f2477e 100644
--- a/docs/soccer/Mechanics/0.General.md
+++ b/docs/SoccerLightweight/2023/Mechanics/0.General.md
@@ -1,6 +1,6 @@
# General
-
+
## Materials
@@ -8,14 +8,14 @@ This is the list of mechanical materials we used while developing the robot:
| Name | Use | Product Image |
| ---- | --- | ------------- |
-| 3mm MDF | Made up earlier parts of the robot |
|
-| ABS filament | Make up most CAD parts of the robot |
|
-| PLA filament | Make up few CAD parts of the robot |
|
-| Male-female nylon spacers | Connecting separated robot pieces |
|
-| M3 6mm round-head nylon screws | Fixing in place the robot pieces |
|
-| M3 10mm flat-head steel screws | Fixing in place the robot pieces |
|
-| M3 nylon nuts | Fixing in place the robot pieces |
|
-| M3 steel nuts | Fixing in place the robot pieces |
|
+| 3mm MDF | Made up earlier parts of the robot |
|
+| ABS filament | Make up most CAD parts of the robot |
|
+| PLA filament | Make up few CAD parts of the robot |
|
+| Male-female nylon spacers | Connecting separated robot pieces |
|
+| M3 6mm round-head nylon screws | Fixing in place the robot pieces |
|
+| M3 10mm flat-head steel screws | Fixing in place the robot pieces |
|
+| M3 nylon nuts | Fixing in place the robot pieces |
|
+| M3 steel nuts | Fixing in place the robot pieces |
|
- Piece materials
@@ -35,10 +35,10 @@ This is the list of tools we used to manufacture the robot:
| Name | Use | Product Image |
| ---- | --- | ------------- |
-| Fusion360 | CAD program |
|
-| Laser cutter | Cutting MDF |
|
-| Ender 3 V2 | Printing PLA |
|
-| Artillery Sidewinder X2 | Printing ABS |
|
+| Fusion360 | CAD program |
|
+| Laser cutter | Cutting MDF |
|
+| Ender 3 V2 | Printing PLA |
|
+| Artillery Sidewinder X2 | Printing ABS |
|
- Software
@@ -56,4 +56,4 @@ We would definitely recommend using the Zortrax material so that the
These are the robot's different versions as we progressed in its design:
-
+
diff --git a/docs/soccer/Mechanics/1.Robot_Lower_Design.md b/docs/SoccerLightweight/2023/Mechanics/1.Robot_Lower_Design.md
similarity index 87%
rename from docs/soccer/Mechanics/1.Robot_Lower_Design.md
rename to docs/SoccerLightweight/2023/Mechanics/1.Robot_Lower_Design.md
index 3a1cd0c..3c9e288 100644
--- a/docs/soccer/Mechanics/1.Robot_Lower_Design.md
+++ b/docs/SoccerLightweight/2023/Mechanics/1.Robot_Lower_Design.md
@@ -1,6 +1,6 @@
# Lower Design
-
+
## Base
@@ -8,17 +8,17 @@
 We started by considering a base that could let us maximize the area permitted by RCJ rules. We didn't make the exact dimensions because the rules state that the robot needs to fit *smoothly* into a cylinder of this diameter, so we had to leave a bit of leeway.
-
+
We also considered for a long time to use bases to support the PCBs. This was to keep them in a rigid piece of the robot and also to possibly give a padding to it to prevent vibrations in the PCBs. In the final implementation however, we ended up not using them because we never ended up using the padding, and connecting the PCBs alone wasn't as rigid but still maintained them correctly in place.
-
+
- Implementation
We ended up following the main ideas on our initial considerations, with some slight adjustments
-
+
The main base has the holes to support the following modules and pieces: the main PCB, the line PCBs, the voltage regulator, the IR ring, the dribbler, the ultrasonic sensor supports, the motor supports and zipties.
diff --git a/docs/soccer/Mechanics/2.Robot_Upper_Design.md b/docs/SoccerLightweight/2023/Mechanics/2.Robot_Upper_Design.md
similarity index 66%
rename from docs/soccer/Mechanics/2.Robot_Upper_Design.md
rename to docs/SoccerLightweight/2023/Mechanics/2.Robot_Upper_Design.md
index b8faeba..9f039fb 100644
--- a/docs/soccer/Mechanics/2.Robot_Upper_Design.md
+++ b/docs/SoccerLightweight/2023/Mechanics/2.Robot_Upper_Design.md
@@ -1,6 +1,6 @@
# Upper Design
-
+
## IR Ring Cover
diff --git a/docs/soccer/Programming/General.md b/docs/SoccerLightweight/2023/Programming/General.md
similarity index 92%
rename from docs/soccer/Programming/General.md
rename to docs/SoccerLightweight/2023/Programming/General.md
index 28c3bd3..b04b0a0 100644
--- a/docs/soccer/Programming/General.md
+++ b/docs/SoccerLightweight/2023/Programming/General.md
@@ -18,10 +18,10 @@ It is also important to mention that the structure of the code worked as a state
### Attacking Robot
 The main objective of this robot was to gain possession of the ball using the dribbler as fast as possible and then go towards the goal using vision. Therefore, the algorithm used is the following:
-
+
### Defending Robot
On the other hand, the defending robot should always stay near the goal and go towards it if the ball is in a 20cm radius. The algorithm for this robot is shown in the following image:
-
+
diff --git a/docs/soccer/Programming/IR_Detection.md b/docs/SoccerLightweight/2023/Programming/IR_Detection.md
similarity index 100%
rename from docs/soccer/Programming/IR_Detection.md
rename to docs/SoccerLightweight/2023/Programming/IR_Detection.md
diff --git a/docs/soccer/Programming/Line_Detection.md b/docs/SoccerLightweight/2023/Programming/Line_Detection.md
similarity index 90%
rename from docs/soccer/Programming/Line_Detection.md
rename to docs/SoccerLightweight/2023/Programming/Line_Detection.md
index 57f0328..d7c1ec7 100644
--- a/docs/soccer/Programming/Line_Detection.md
+++ b/docs/SoccerLightweight/2023/Programming/Line_Detection.md
@@ -4,13 +4,13 @@ To obtain the phototransistor values from the multiplexors, a function was creat
## Attacking Robot
Since this robot would move in all directions to search for the ball and score, it should never cross any white lines. Therefore, the phototransistor PCBs were used to estimate the angle at which the robot was touching the line, so it could then move in the opposite direction. This was complemented with ultrasonic sensors to avoid crashing into the walls, other robots and the ball itself (to avoid scoring an own goal).
-
+
The calibration for this robot was automatic and done when the robot started. Here, the robot would capture about 100 values for each sensor when standing on green to get the average measurement. Then, to check if there was a line, the robot would compare the current value with the average value and if the difference was greater than a threshold, then a line was detected and it was possible to see which phototransistor had seen it.
## Defending Robot
The defending robot worked essentially as a line follower, using two phototransistors (one on each side) to move horizontally. Additionally, it used the front phototransistor PCB to check whether it had gone too far back, in which case it would move forward. Three ultrasonic sensors were also used to avoid crashing into the walls and the ball.
-
+
For this robot, it was important to check the two line-following phototransistors before the match started, since the robot moved proportionally according to these sensors, aiming to stay at the average value between white and green. Therefore, the calibration was done manually, by reading and then setting the white and green values for each sensor. The front sensor was calibrated automatically using the method explained for the attacking robot.
\ No newline at end of file
diff --git a/docs/soccer/Programming/Movement.md b/docs/SoccerLightweight/2023/Programming/Movement.md
similarity index 94%
rename from docs/soccer/Programming/Movement.md
rename to docs/SoccerLightweight/2023/Programming/Movement.md
index 42e60d4..3de0ac2 100644
--- a/docs/soccer/Programming/Movement.md
+++ b/docs/SoccerLightweight/2023/Programming/Movement.md
@@ -62,7 +62,7 @@ void Motors::moveToAngle(int degree, int speed, int error) {
Ideally, to take full advantage of the HP motors, the robot should move as fast as possible. However, after a lot of testing we found that the robot could not fully control the ball at high speeds, as it would usually push the ball out of bounds instead of capturing it with the dribbler. Therefore, the speed was regulated depending on the distance to the ball (measured with the IR ring) using the following function:
-
+
$v(r) = 1.087 + 1/(r-11.5)$, where $r$ is the distance to the ball $\in [0.00,10.0]$
@@ -73,6 +73,6 @@ This equation was experimentally established with the goal of keeping speed at m
The idea for this robot was to keep it on the goal line, always trying to keep the ball in front of it to block any goal attempts. Therefore, speed was regulated according to the angle and the x-component of the ball's position. This meant that if the ball was directly in front, the robot didn't have to move; however, if the ball was far to the right or left, speed was increased proportionally to the x-component of the ball, as shown in the following image:
-
+
diff --git a/docs/soccer/Programming/Vision.md b/docs/SoccerLightweight/2023/Programming/Vision.md
similarity index 100%
rename from docs/soccer/Programming/Vision.md
rename to docs/SoccerLightweight/2023/Programming/Vision.md
diff --git a/docs/SoccerLightweight/2023/index.md b/docs/SoccerLightweight/2023/index.md
new file mode 100644
index 0000000..380b241
--- /dev/null
+++ b/docs/SoccerLightweight/2023/index.md
@@ -0,0 +1,35 @@
+# @SoccerLightweight - 2023
+
+The main developments during 2023 with respect to previous years are the following:
+
+## Sections
+
+### Mechanics
+
+- [General](Mechanics/0.General.md)
+
+- [Robot Lower Design](Mechanics/1.Robot_Lower_Design.md)
+
+- [Robot Upper Design](Mechanics/2.Robot_Upper_Design.md)
+
+### Electronics
+
+- [General](Electronics/1 General.md)
+
+- [Power Supply](Electronics/2 Power Supply.md)
+
+- [Printed Circuit Boards (PCB)](Electronics/3 Printed Circuit Boards (PCB).md)
+
+- [Dribbler Implementation](Electronics/4 Dribbler Implementation.md)
+
+### Programming
+
+- [General](Programming/General.md)
+
+- [IR Detection](Programming/IR_Detection.md)
+
+- [Line Detection](Programming/Line_Detection.md)
+
+- [Movement](Programming/Movement.md)
+
+- [Vision](Programming/Vision.md)
diff --git a/docs/SoccerLightweight/2024/Electronics/1 General.md b/docs/SoccerLightweight/2024/Electronics/1 General.md
new file mode 100644
index 0000000..2e47e6f
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Electronics/1 General.md
@@ -0,0 +1,5 @@
+# General
+
+For the TMR 2024, these are some general ideas:
+
+For the power supply we used two separate LiPo supplies: an 11.1 V, 2250 mAh battery for the motors, and two 3.7 V, 2500 mAh LiPo batteries connected in series to obtain 7.4 V for the logic, stepped down to 5 V with a voltage regulator. These batteries were chosen for their high current capability and long runtime in competition. The PCBs, designed with EasyEDA, include a main board with the microcontroller and sensors, boards with TEPT5700 phototransistors connected directly to the microcontroller for line detection, and a reused IR ring with TSSP58038 receivers and an ATmega328P that processes the infrared signals. The Arduino Mega Pro microcontroller was chosen for its ease of use and large number of pins. For movement, 2200 RPM HP motors with TB6612FNG drivers were used, configured to handle up to 2.4 A continuous and peaks of 6 A to match the motors' high current demand.
diff --git a/docs/SoccerLightweight/2024/Electronics/2 Power Supply.md b/docs/SoccerLightweight/2024/Electronics/2 Power Supply.md
new file mode 100644
index 0000000..78f0898
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Electronics/2 Power Supply.md
@@ -0,0 +1,13 @@
+# Power Supply
+
+For power, two batteries were used.
+
+## Movement Power
+For the motors, an 11.1 V, 2250 mAh battery was used; this is because the motors require 12 V, and we chose a high capacity so that we would not have to worry about how long it would last.
+
+## Logic Power
+For the logic, two 3.7 V LiPo batteries with a capacity of 2500 mAh were connected in series to obtain 7.4 V.
+
+
+
+We decided to use this type of battery because it works very well for electronics projects, and its runtime proved more than enough for the competition.
diff --git a/docs/SoccerLightweight/2024/Electronics/3 PCBs Designs.md b/docs/SoccerLightweight/2024/Electronics/3 PCBs Designs.md
new file mode 100644
index 0000000..1716775
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Electronics/3 PCBs Designs.md
@@ -0,0 +1,12 @@
+# PCB Designs
+
+For the design of the PCBs we used EasyEDA. Three types of boards were designed with it.
+
+## Main Board
+The main board included the microcontroller, as well as the sensors that would be used; additional digital/analog pins were added in case they were needed.
+
+## Phototransistors Board
+The other boards were made for the phototransistors, which are mounted on the lower part of the robot for line detection. We considered reading them through a multiplexer, but to keep things simple they were connected directly to the analog pins of the microcontroller.
+
+## IR Ring Board
+We decided to reuse last year's IR ring because of its complexity and the short time available to develop something different. This IR ring was easy to work with on the electronics side because it connects to the microcontroller's serial port.
diff --git a/docs/SoccerLightweight/2024/Electronics/4 Electronic Components.md b/docs/SoccerLightweight/2024/Electronics/4 Electronic Components.md
new file mode 100644
index 0000000..137073d
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Electronics/4 Electronic Components.md
@@ -0,0 +1,13 @@
+# Electronic Components
+
+## Microcontroller
+For the microcontroller, the Arduino Mega Pro was chosen due to its ease of use for both electronics and programming. The Arduino Mega Pro has 54 digital pins, 15 of which provide PWM output, and 16 analog pins, among other features.
+
+## Sensors
+For the orientation of the robot we use a BNO055 sensor, which offers very good performance and accuracy and connects to the microcontroller over I2C.
+For line detection, TEPT5700 phototransistors were used, since their spectral range does not include infrared light; this is crucial so that they do not interfere with the signals from the IR ball.
+
+TSSP58038 digital IR receivers were used to detect the IR signals emitted by the ball, and a custom PCB was designed for them. The IR ring is made up of 12 IR receivers, and an ATmega328P processes the infrared signals and computes the resulting vector.
+
+## Drivers and motors
+For movement, we use 2200 RPM HP motors with TB6612FNG drivers. We bridge each driver's input and output channels to obtain up to 2.4 A continuous and up to 6 A peak, since the motors draw a high current.
diff --git a/docs/SoccerLightweight/2024/Programming/General.md b/docs/SoccerLightweight/2024/Programming/General.md
new file mode 100644
index 0000000..b144e2d
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Programming/General.md
@@ -0,0 +1,34 @@
+# General Overview & Strategy
+
+"Soccer Lightweight 2024" is an autonomous robot competition featuring 2 vs 2 soccer playoffs. The Soccer LighWeight team merged our knowledge in robotics and their passion for sports to design a robot that plays soccer with agility and precision.
+
+The strategy for the Soccer Lightweight competition was a blend of offensive and defensive tactics, realized with programming techniques and meticulous strategic planning. Core aspects of this strategy include:
+
+## Key Elements of the Strategy
+
+### Role Assignment
+Each robot is assigned a specific role, such as a striker or a goalkeeper. This specialization allows for focused development of skills and tactics suitable for each position.
+
+### Real-Time Vision Processing
+Using the Pixy2 camera, the robots can identify and track the ball and goals in real-time.
+
+### Holonomic Movement
+Implemented through kinematic equations, this allows the robots to move smoothly and rapidly in any direction.
+
+### Line Detection
+Utilizing phototransistors and IR sensors, the robots can detect field boundaries, ensuring they stay within the playing area and avoid penalties.
+
+## Tools and Technologies
+The main tools and software used in the development of the Soccer Lightweight robot include:
+
+- **Arduino**: Used for programming the microcontroller that controls the robot's actions.
+- **Visual Studio Code**: The primary integrated development environment (IDE) for writing and debugging code.
+- **PixyMon IDE**: Utilized for the calibration and detection of color blobs through the camera, essential for the robot's vision system.
+
+## Abstract
+The Soccer Lightweight robot exemplifies robotics through its integration of advanced motion algorithms and sensory systems. Designed for optimum performance on the soccer field, this robot utilizes real-time vision and sensory integration to accurately identify the ball and goals, enhancing its competitiveness. The engineering prioritizes agility within strict weight constraints, enabling the robot to execute complex maneuvers and strategies effectively during matches.
+
+
+## Algorithm of Attacking and Defending Robot
+
+
diff --git a/docs/SoccerLightweight/2024/Programming/IRDetection.md b/docs/SoccerLightweight/2024/Programming/IRDetection.md
new file mode 100644
index 0000000..3385139
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Programming/IRDetection.md
@@ -0,0 +1,53 @@
+# IR Detection
+
+In the Soccer Lightweight project, the precision of robot movement and positioning on the field was really important for successful gameplay. Essential to this was accurately determining both the angle and distance to the ball. To achieve this, an IR ring utilizing 12 TSSP-58038 IR receivers was designed.
+
+
+## IR Ring Design and Functionality
+
+The IR ring consists of 12 TSSP-58038 IR receivers arranged in a circular ring. This configuration enables the robot to detect the angle of the ball relative to its position and estimate the distance to the ball. The primary objectives of the IR detection system include:
+
+
+- **Angle Detection**: To determine the direction of the ball.
+- **Distance Measurement**: To estimate how far the ball is from the robot.
+
+## Mathematical Calculations
+
+The IR detection system processes and filters the signals received from the IR sensors to determine the ball's angle and signal strength. Referencing the Yunit team's research from 2017, the calculations can be summarized as follows:
+
+- **Signal Processing**: The IR sensors detect the ball and send angle and strength data to the Arduino.
+- **Filtering**: We apply an exponential moving average (EMA) filter to smooth the data, ensuring stable and accurate readings.
+- **Angle Adjustment**: The raw angle data is adjusted to account for any offsets and converted to a 0-360 degree format for easier interpretation.
+- **Strength Calculation**: The strength of the signal indicates the distance to the ball, with stronger signals meaning the ball is closer.
+
+## Code Implementation
+
+Here is a brief overview of the core code responsible for processing the IR data:
+
+```cpp
+void IR::updateData() {
+  if (Serial3.available()) {
+    // Packets from the IR ring arrive as lines prefixed with 'a' (angle) or 'r' (strength).
+    String input = Serial3.readStringUntil('\n');
+    if (input[0] == 'a') {
+      angle = input.substring(2).toDouble() + offset;  // apply mounting offset
+      filterAngle.AddValue(angle);
+    } else if (input[0] == 'r') {
+      strength = input.substring(2).toDouble();
+      filterStr.AddValue(strength);
+    }
+  }
+}
+
+double IR::getAngle() {
+  return filterAngle.GetLowPass();  // smoothed angle
+}
+
+double IR::getStrength() {
+  return filterStr.GetLowPass();    // smoothed strength (distance proxy)
+}
+```
+This code reads data from the IR sensors, applies filtering to the angle and strength values, and adjusts them for precise detection.
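+
+The `filterAngle` and `filterStr` objects used above come from a filtering class whose source is not shown here. As a reference, a minimal exponential moving average filter exposing the same `AddValue`/`GetLowPass` interface could look like the sketch below; the class name and smoothing factor are assumptions, not the team's actual implementation.
+
+```cpp
+// Minimal EMA filter sketch: each new sample is blended with the running
+// value by a factor alpha, producing a simple low-pass output.
+class EMAFilter {
+ public:
+  explicit EMAFilter(double alpha = 0.2) : alpha_(alpha) {}
+
+  void AddValue(double sample) {
+    if (!initialized_) {            // first sample seeds the filter
+      value_ = sample;
+      initialized_ = true;
+    } else {
+      value_ = alpha_ * sample + (1.0 - alpha_) * value_;
+    }
+  }
+
+  double GetLowPass() const { return value_; }  // smoothed value
+
+ private:
+  double alpha_;                // smoothing factor in (0, 1]
+  double value_ = 0.0;          // current filtered value
+  bool initialized_ = false;
+};
+```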
+
+## Implementation and Testing
+
+Comprehensive testing was conducted to calibrate the system and verify its accuracy. By employing TSSP-58038 IR sensors and advanced filtering techniques, we achieved reliable and precise ball detection, enabling the robot to execute complex movements and strategies with effectiveness.
\ No newline at end of file
diff --git a/docs/SoccerLightweight/2024/Programming/LineDetection.md b/docs/SoccerLightweight/2024/Programming/LineDetection.md
new file mode 100644
index 0000000..e34401c
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Programming/LineDetection.md
@@ -0,0 +1,30 @@
+# Line Detection
+
+In the Soccer Lightweight project, detecting the field's lines is essential for maintaining the robot's position and ensuring it stays within the playing boundaries. The line detection system differentiates between the white lines on the field and the green background, enabling precise navigation.
+
+
+## Line Detection Strategy
+
+The line detection strategy employs a combination of analog sensors and a multiplexer to read values from different parts of the field. The key components and steps in this strategy include:
+
+
+- **Analog Sensors**: Multiple analog sensors are positioned around the robot to read the field's color, distinguishing between white lines and the green background.
+- **Multiplexer (MUX)**: A multiplexer switches between different sensor inputs, allowing the robot to monitor several sensors using a single analog input pin on the Arduino.
+- **Threshold Values**: Threshold values are set to differentiate between white lines and the green field, determining whether the sensor is over a line or the field.
+
+## Implementation
+
+The implementation involves reading sensor values and comparing them to predefined thresholds. The key functions responsible for line detection are:
+
+
+### muxSensor()
+Reads values from the multiplexer and direct sensor pins, comparing them to threshold values to determine if they are over a white line or green field.
+
+### calculateDirection()
+Determines the direction of the detected line and adjusts the robot's heading accordingly.
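+
+The source of these functions is not reproduced here. The sketch below only illustrates the threshold comparison that `muxSensor()` performs; the pin numbers, sensor count and calibration values are assumptions for illustration.
+
+```cpp
+// Illustrative only: read line sensors through a multiplexer and flag which
+// ones see a white line (a reading above their calibrated threshold).
+const int SELECT_PINS[3] = {2, 3, 4};   // MUX select lines (assumed wiring)
+const int MUX_SIGNAL_PIN = A0;          // shared analog output of the MUX
+const int NUM_SENSORS = 8;
+
+int whiteThreshold[NUM_SENSORS];        // filled in during calibration
+bool overWhiteLine[NUM_SENSORS];
+
+void muxSensorSketch() {
+  for (int channel = 0; channel < NUM_SENSORS; channel++) {
+    // Select the MUX channel by writing its binary index on the select pins.
+    for (int bit = 0; bit < 3; bit++) {
+      digitalWrite(SELECT_PINS[bit], (channel >> bit) & 1);
+    }
+    int reading = analogRead(MUX_SIGNAL_PIN);
+    // Readings past the calibrated threshold are treated as "over the line".
+    overWhiteLine[channel] = reading > whiteThreshold[channel];
+  }
+}
+```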
+
+## Testing and Calibration
+The line detection system underwent extensive testing and calibration to ensure accuracy. Thresholds for distinguishing between white and green were fine-tuned based on experimental data, ensuring the robot can reliably detect lines under various lighting conditions and on different field surfaces.
+
+
+
diff --git a/docs/SoccerLightweight/2024/Programming/Movement.md b/docs/SoccerLightweight/2024/Programming/Movement.md
new file mode 100644
index 0000000..3f6c9e6
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Programming/Movement.md
@@ -0,0 +1,62 @@
+# Motion Control: Holonomic Movement
+
+## Kinematic Model
+
+The kinematic model was crucial for accurately calculating the speed of each motor, ensuring precise movement in the desired direction. The key principles applied include:
+
+- **Motor Speed Calculation**: The speed of each motor was determined using kinematic equations based on the desired movement angle.
+- **Consistent Orientation**: Orientation data from sensors was used to correct the robot's movement, ensuring it always faced the goal.
+
+
+## Sensors and PID Controller
+
+We employed BNO-055 and MPU sensors to capture the robot's current orientation. A simplified PID (Proportional-Integral-Derivative) controller was implemented to correct any deviations from the desired orientation. This controller minimized the error between the current orientation and the target direction, facilitating smooth and accurate movement.
+
+## Implementation
+
+Here’s the core code that shows the kinematic equations and the corrections implemented using the PID controller:
+
+```cpp
+double PID::calculateError(int angle, int set_point) {
+  unsigned long time = millis();
+  double delta_time = (time - previous_time) / 1000.0;
+
+  control_error = set_point - angle;
+  double delta_error = (control_error - previous_error) / delta_time;
+  sum_error += control_error * delta_time;
+
+  sum_error = (sum_error > max_error) ? max_error : (sum_error < -max_error) ? -max_error : sum_error;
+
+  double control = (kP * control_error) + (kI * sum_error) + (kD * delta_error);
+
+  previous_error = control_error;
+  previous_time = time;
+
+  return control;
+}
+
+void Drive::linealMovementError(int degree, int speed, int error) {
+  float m1 = sin(((60 - degree) * PI / 180));
+  float m2 = sin(((180 - degree) * PI / 180));
+  float m3 = sin(((300 - degree) * PI / 180));
+
+  int speedA = (m1 * speed);
+  int speedB = (m2 * speed);
+  int speedC = (m3 * speed);
+
+  speedA -= error;
+  speedB -= error;
+  speedC -= error;
+
+  motor_1.setSpeed(speedA);
+  motor_2.setSpeed(speedB);
+  motor_3.setSpeed(speedC);
+}
+```
+
+### PID::calculateError
+The calculateError function computes the control signal using the PID algorithm, considering the discrepancy between the desired and current orientation.
+
+### Drive::linealMovementError
+The linealMovementError function calculates the speed for each motor based on the desired movement direction and applies corrections using the error from the PID controller.
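+
+For context, a hypothetical control loop tying both functions together could look like the sketch below. The `pid` and `drive` objects, the orientation helper and the set point are placeholders for how the classes might be wired, not the team's actual main loop.
+
+```cpp
+// Hypothetical wiring of the PID correction into the kinematic movement.
+PID pid;        // assumed default-constructible for this sketch
+Drive drive;
+
+int readOrientationDegrees();   // e.g. yaw from the BNO-055, placeholder
+
+void controlStep(int desiredDegree, int desiredSpeed) {
+  const int setPoint = 0;                    // keep facing the goal
+  int currentAngle = readOrientationDegrees();
+
+  // The PID output becomes the per-motor correction term.
+  double correction = pid.calculateError(currentAngle, setPoint);
+
+  // Translate toward the desired direction while compensating rotation error.
+  drive.linealMovementError(desiredDegree, desiredSpeed, (int)correction);
+}
+```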
+
diff --git a/docs/SoccerLightweight/2024/Programming/Vision.md b/docs/SoccerLightweight/2024/Programming/Vision.md
new file mode 100644
index 0000000..bf54cbf
--- /dev/null
+++ b/docs/SoccerLightweight/2024/Programming/Vision.md
@@ -0,0 +1,43 @@
+# Vision System: Goal Detection
+
+In our Soccer Lightweight project, detecting the goal is essential for strategic gameplay. We employed a Pixy2 camera, which facilitated blob color detection using the PixyMon IDE. This system enabled our robots to identify the bounding box of the goal and transmit the relevant data to the Arduino for processing.
+
+## Pixy2 Camera and PixyMon IDE
+
+The Pixy2 camera, configured through the PixyMon IDE, allowed for the detection of colored blobs representing the goals. The bounding box coordinates of these detected blobs were then transmitted to the Arduino, enabling the robots to navigate towards the goal or estimate the distance to it.
+
+## Implementation
+
+Here's the core code that shows how the vision system updates goal data and checks for detected goals:
+
+```cpp
+void Goals::updateData() {
+  pixy.ccc.getBlocks();
+  numGoals = pixy.ccc.numBlocks > 2 ? 2 : pixy.ccc.numBlocks;
+  for (uint8_t i = 0; i < numGoals; i++) {
+    goals[i].x = pixy.ccc.blocks[i].m_x;
+    goals[i].y = pixy.ccc.blocks[i].m_y;
+    goals[i].width = pixy.ccc.blocks[i].m_width;
+    goals[i].height = pixy.ccc.blocks[i].m_height;
+    goals[i].color = pixy.ccc.blocks[i].m_signature;
+  }
+}
+
+bool Goals::detected(uint8_t color) {
+  for (uint8_t i = 0; i < numGoals; i++) {
+    if (goals[i].color == color) {
+      return true;
+    }
+  }
+  return false;
+}
+```
+
+### updateData()
+This function retrieves the detected blocks from the Pixy2 camera and updates the goal data by storing the coordinates, dimensions, and color signatures of up to two detected goals.
+
+### detected()
+This function checks if a goal of the specified color has been detected. It iterates through the detected goals and returns true if a match is found.
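+
+As a usage reference, a hypothetical loop could refresh the data each cycle and branch on whether the opponent's goal is visible. The `goals` object, the signature constant and the reactions are placeholders, not the actual match code.
+
+```cpp
+// Hypothetical usage of the Goals class above.
+Goals goals;                                 // assumed default-constructible
+const uint8_t OPPONENT_GOAL_SIGNATURE = 1;   // assumed PixyMon color signature
+
+void visionStep() {
+  goals.updateData();                        // pull the latest Pixy2 blocks
+  if (goals.detected(OPPONENT_GOAL_SIGNATURE)) {
+    // Goal in sight: steer toward it (placeholder for the real behavior).
+  } else {
+    // Goal not visible: fall back to searching or holding position.
+  }
+}
+```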
+
+
+
diff --git a/docs/SoccerLightweight/2024/index.md b/docs/SoccerLightweight/2024/index.md
new file mode 100644
index 0000000..0755f02
--- /dev/null
+++ b/docs/SoccerLightweight/2024/index.md
@@ -0,0 +1,30 @@
+# @SoccerLightweight - 2024
+
+The main developments during 2024 with respect to previous years are the following:
+
+
+TODO: modify this.
+## Mechanics
+
+- [Jetson Nano](Jetson Nano/RunningJetson/)
+
+## Electronics
+
+- [General](Electronics/1%20General.md)
+
+- [Power Supply](Electronics/2%20Power%20Supply.md)
+
+- [PCB Designs](Electronics/3%20PCBs%20Designs.md)
+
+- [Electronic Components](Electronics/4%20Electronic%20Components.md)
+
+
+
+## Programming
+
+- [General](Programming/General.md)
+- [IR Detection](Programming/IRDetection.md)
+- [Line Detection](Programming/LineDetection.md)
+- [Motion Control](Programming/Movement.md)
+- [Vision Processing](Programming/Vision.md)
diff --git a/docs/SoccerLightweight/index.md b/docs/SoccerLightweight/index.md
new file mode 100644
index 0000000..9621c7e
--- /dev/null
+++ b/docs/SoccerLightweight/index.md
@@ -0,0 +1,12 @@
+# @SoccerLightweight
+
+
+
+A 2 vs 2 autonomous robot competition in which opposing teams' robots play soccer playoffs. The twist of this competition is that each robot has to weigh less than 1.1 kg, hence the name "Soccer Lightweight".
+
+
+## Competition
+
+See the [rules](https://robocupjuniortc.github.io/soccer-rules/master/rules.pdf) for Soccer Lightweight.
+
+
diff --git a/docs/SoccerOpen/2024/Communication/index.md b/docs/SoccerOpen/2024/Communication/index.md
new file mode 100644
index 0000000..da7f61b
--- /dev/null
+++ b/docs/SoccerOpen/2024/Communication/index.md
@@ -0,0 +1,24 @@
+Since we used a dual-microcontroller setup, we relied on several serial communication channels. Our pipeline follows this structure:
+
+
+
+Note that the first data package originates in the camera, which sends the following variables to the Raspberry Pi Pico:
+
+```
+filtered_angle
+ball_distance
+ball_angle
+goal_angle
+distance_pixels
+```
+
+In our case the Raspberry Pi Pico simply formats the values into a String and prepares the data package so that the ESP32 can interpret and work with it.
+
+Once the data reaches the ESP32, it is handled in the following steps (a minimal parsing sketch follows the list):
+
+1. Reading data from the serial input
+2. Initializing an array and an index
+3. Tokenizing the string
+4. Converting each token to an integer and storing it in the array
+5. Repeating the process until the 5 values have been populated
+6. Assigning the parsed values to variables
\ No newline at end of file
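+
+A minimal sketch of these steps on the ESP32 side might look like the following; the serial port, buffer size and delimiter are assumptions based on the package described above.
+
+```cpp
+// Sketch of the ESP32-side parsing: read one comma-separated line from the
+// Pico, tokenize it, and store the 5 values in order.
+#include <Arduino.h>
+
+int values[5];   // filtered_angle, ball_distance, ball_angle, goal_angle, distance_pixels
+
+void readDataPackage() {
+  if (!Serial2.available()) return;              // assumed UART connected to the Pico
+  String line = Serial2.readStringUntil('\n');   // 1. read data from serial input
+
+  char buffer[64];
+  line.toCharArray(buffer, sizeof(buffer));
+
+  int index = 0;                                 // 2. initialize the array index
+  char *token = strtok(buffer, ",");             // 3. tokenize the string
+  while (token != NULL && index < 5) {
+    values[index++] = atoi(token);               // 4.-5. convert and store each token
+    token = strtok(NULL, ",");
+  }
+
+  // 6. assign the parsed values to named variables
+  int filtered_angle  = values[0];
+  int ball_distance   = values[1];
+  int ball_angle      = values[2];
+  int goal_angle      = values[3];
+  int distance_pixels = values[4];
+  (void)filtered_angle; (void)ball_distance; (void)ball_angle;
+  (void)goal_angle; (void)distance_pixels;       // values would be used by the game logic
+}
+```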
diff --git a/docs/SoccerOpen/2024/Control/index.md b/docs/SoccerOpen/2024/Control/index.md
new file mode 100644
index 0000000..1e23d3b
--- /dev/null
+++ b/docs/SoccerOpen/2024/Control/index.md
@@ -0,0 +1,45 @@
+### Kinematics
+
+For robot control we decided to create our motor libraries following these categories:
+
+>> 📁 motor
+
+>> 📁 motors
+
+Below is a UML diagram showing the relation between both classes and their interaction:
+
+
+
+All of our kinematic logic is found in the Motors class, which contains two relevant methods:
+
+```cpp
+MoveMotors(int degree, uint8_t speed)
+MoveMotorsImu(int degree, uint8_t speed, double speed_w)
+```
+
+Both methods rely on the following kinematic equations:
+
+```cpp
+float m1 = cos(((45 + degree) * PI / 180));
+float m2 = cos(((135 + degree) * PI / 180));
+float m3 = cos(((225 + degree) * PI / 180));
+float m4 = cos(((315 + degree) * PI / 180));
+```
+These equations model the robot shown below:
+
+
+
+For the case of using the IMU sensor, we implemented an omega PID controller to regulate the robot's angle while moving in a specific direction. We also implemented a translational PID to regulate speed when approaching the ball. To simplify this, we created a PID class to make our code reusable.
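+
+As a rough sketch of how these pieces fit together, the omega PID output can be added as a common rotational term on top of the translation components computed by the equations above. This is an Arduino-style illustration only: the `setMotorSpeed` helper and the sign conventions are assumptions, not the actual library code.
+
+```cpp
+// Sketch: translation kinematics for the four wheels plus a shared
+// rotational correction (speed_w) coming from the omega PID controller.
+void setMotorSpeed(int motor, float speed);   // assumed low-level helper
+
+void moveMotorsImuSketch(int degree, uint8_t speed, double speed_w) {
+  // Translation components for wheels mounted at 45, 135, 225 and 315 degrees.
+  float m1 = cos((45 + degree) * PI / 180);
+  float m2 = cos((135 + degree) * PI / 180);
+  float m3 = cos((225 + degree) * PI / 180);
+  float m4 = cos((315 + degree) * PI / 180);
+
+  // Adding the same speed_w to every wheel produces the corrective rotation.
+  setMotorSpeed(1, m1 * speed + speed_w);
+  setMotorSpeed(2, m2 * speed + speed_w);
+  setMotorSpeed(3, m3 * speed + speed_w);
+  setMotorSpeed(4, m4 * speed + speed_w);
+}
+```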
+
+### IMU Sensor
+
+For the IMU sensor we used the [Adafruit_BNO055](https://github.com/adafruit/Adafruit_BNO055/tree/master) library, and we implemented our own wrapper class around it. Using the yaw reading we were able to calculate the setpoint and error for our PID controller; below is the logic followed to match angles between the robot frame and the real world.
+
+
+
+Note that yaw is calculated from the orientation quaternion as shown in the equation below:
+
+`var yaw = atan2(2.0*(q.y*q.z + q.w*q.x), q.w*q.w - q.x*q.x - q.y*q.y + q.z*q.z);`
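+
+With the Adafruit_BNO055 library this computation can be written as in the sketch below; the `bno` object is assumed to have been initialized elsewhere.
+
+```cpp
+// Sketch: read the quaternion from the BNO055 and apply the yaw formula above.
+#include <Adafruit_BNO055.h>
+
+Adafruit_BNO055 bno = Adafruit_BNO055(55);   // sensor ID 55, default I2C address
+
+double readYawRadians() {
+  imu::Quaternion q = bno.getQuat();
+  return atan2(2.0 * (q.y() * q.z() + q.w() * q.x()),
+               q.w() * q.w() - q.x() * q.x() - q.y() * q.y() + q.z() * q.z());
+}
+```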
+
+With these classes we are able to control our robot's movement logically.
\ No newline at end of file
diff --git a/docs/SoccerOpen/2024/Logic/index.md b/docs/SoccerOpen/2024/Logic/index.md
new file mode 100644
index 0000000..2ace29f
--- /dev/null
+++ b/docs/SoccerOpen/2024/Logic/index.md
@@ -0,0 +1,63 @@
+For the algorithm design, which contains the main logic, we used two main files: one for the ESP32 and another for the Raspberry Pi Pico. Our logic hierarchy follows this structure:
+
+>> Goalkeeper
+
+>> - ESP32_Goalkeeper
+
+>> - Pico_Goalkeeper
+
+
+>> Striker
+
+>> - ESP32_Striker
+
+>> - Pico_Striker
+
+### Goalkeeper Logic
+
+The diagram below shows the logic flow for Goalkeeper in the Raspberry Pi Pico:
+
+
+
+For the logic that centers the robot in the goal, observe the pathway below (a simplified sketch follows the list):
+
+1. Check if the ball is found:
+
+ - If the ball is found (`ball_found` is true), proceed to the next step.
+
+2. Determine robot movement based on ball angle:
+
+ - If the ball's angle is within -15 to 15 degrees (indicating the ball is almost directly ahead), move the robot forward towards the ball using a specific speed (`speed_t_ball`) and rotation speed (`speed_w`).
+ - If the ball's angle is outside this range, adjust the ball angle to ensure it's within a 0-360 degree range and calculate a "differential" based on the ball angle. This differential is used to adjust the ball angle to a "`ponderated_angle`," which accounts for the ball's position relative to the robot. The robot then moves in the direction of this adjusted angle with the same speed and rotation speed as before.
+
+3. Adjust robot position if the ball is not found but the goal angle is known:
+
+ - If the goal angle is positive and the ball is not found:
+ - If the goal angle is less than a certain threshold to the left, move the robot to the left (angle 270 degrees) to align with the goal using a specific speed (`speed_t_goal`) and rotation speed (`speed_w`).
+ - If the goal angle is more than a certain threshold to the right, move the robot to the right (angle 90 degrees) to align with the goal using the same speeds.
+ - If the goal angle is within the threshold, stop the robot's lateral movement but continue rotating at the current speed (`speed_w`).
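+
+A simplified sketch of step 3 above, the re-centering behavior when the ball is not visible; `moveRobot` and the alignment window values are placeholders, not the actual Pico code.
+
+```cpp
+// Sketch of the goalkeeper's re-centering on the goal when the ball is lost.
+void moveRobot(double angle, int speed, double speed_w);   // assumed helper
+
+void centerOnGoal(bool ball_found, double goal_angle,
+                  int speed_t_goal, double speed_w) {
+  if (ball_found || goal_angle <= 0) return;   // handled by the other branches
+
+  const double kLeftLimit = 80, kRightLimit = 100;   // illustrative alignment window
+  if (goal_angle < kLeftLimit) {
+    moveRobot(270, speed_t_goal, speed_w);     // goal seen to the left: slide left
+  } else if (goal_angle > kRightLimit) {
+    moveRobot(90, speed_t_goal, speed_w);      // goal seen to the right: slide right
+  } else {
+    moveRobot(0, 0, speed_w);                  // aligned: stop sliding, keep rotating
+  }
+}
+```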
+
+### Striker Logic
+
+The diagram below shows the logic flow for Striker in the Raspberry Pi Pico:
+
+
+
+For the logic applied when the ball is found, observe the pathway below (a simplified sketch follows the list):
+
+1. Check for Ball Detection or Distance Conditions:
+
+ - The robot checks if the ball is found, if the distance to the ball is greater than 100 units, or if the distance is exactly 0 units. If any of these conditions are true, it proceeds with further checks; otherwise, it moves to the else block.
+
+2. Log Ball Found:
+
+ - If the condition is true, it logs "pelota found" to the Serial monitor, indicating that the ball has been detected or the distance conditions are met.
+
+3. Direct Approach or Adjust Angle:
+
+ - Direct Approach: If the ball's angle from the robot (considering a 180-degree field of view) is between -15 and 15 degrees, it means the ball is almost directly in front of the robot. The robot then moves directly towards the ball. The movement command uses an angle of 0 degrees, the absolute value of a predefined speed towards the ball (`speed_t_ball`), and a rotation speed (`speed_w`).
+ - Adjust Angle: If the ball's angle is outside the -15 to 15-degree range, the robot needs to adjust its angle to approach the ball correctly. It calculates a "differential" based on the ball's angle (after adjusting the angle to a 0-360 range) and a factor of 0.09. This differential is used to calculate a "`ponderated_angle`," which adjusts the robot's movement direction either by subtracting or adding this differential, depending on the ball's angle relative to 180 degrees. The robot then moves in this adjusted direction with the same speed and rotation speed.
+
+4. Default Movement:
+
+ - If none of the conditions for the ball being found or the specific distance conditions are met, the robot executes a default movement. It turns around (180 degrees) with a speed of 170 units and the predefined rotation speed (`speed_w`), then pauses for 110 milliseconds.
\ No newline at end of file
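+
+A simplified sketch of steps 3 and 4 above; `moveRobot`, the exact thresholds and the sign convention for the differential are placeholders for the actual Pico code.
+
+```cpp
+// Sketch of the striker's ball-approach logic described above.
+void moveRobot(double angle, int speed, double speed_w);   // assumed helper
+
+void strikerStep(bool ball_found, double ball_angle, double ball_distance,
+                 int speed_t_ball, double speed_w) {
+  if (ball_found || ball_distance > 100 || ball_distance == 0) {
+    if (ball_angle > -15 && ball_angle < 15) {
+      // Ball almost directly ahead: drive straight at it.
+      moveRobot(0, abs(speed_t_ball), speed_w);
+    } else {
+      // Ball off to a side: weight the approach angle toward it.
+      if (ball_angle < 0) ball_angle += 360;       // normalize to 0-360
+      double differential = ball_angle * 0.09;     // factor from the description above
+      double ponderated_angle = (ball_angle < 180)
+                                    ? ball_angle - differential
+                                    : ball_angle + differential;
+      moveRobot(ponderated_angle, abs(speed_t_ball), speed_w);
+    }
+  } else {
+    // Default: turn around and search for the ball.
+    moveRobot(180, 170, speed_w);
+    delay(110);
+  }
+}
+```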
diff --git a/docs/SoccerOpen/2024/Mechanics/ACADFile.md b/docs/SoccerOpen/2024/Mechanics/ACADFile.md
new file mode 100644
index 0000000..55de421
--- /dev/null
+++ b/docs/SoccerOpen/2024/Mechanics/ACADFile.md
@@ -0,0 +1,3 @@
+# CAD (STEP)
+
+Full STEP file: [STEP](https://drive.google.com/file/d/1yx2i0Vv63EeMujKTLEmpld-wuCgQxQ_m/view?usp=sharing).
\ No newline at end of file
diff --git a/docs/SoccerOpen/2024/Mechanics/ARobotSystems.md b/docs/SoccerOpen/2024/Mechanics/ARobotSystems.md
new file mode 100644
index 0000000..ed37d57
--- /dev/null
+++ b/docs/SoccerOpen/2024/Mechanics/ARobotSystems.md
@@ -0,0 +1,21 @@
+# Robot Systems
+
+## Structure Materials
+
+Robot chassis: Primarily MDF with acrylic reinforcements, secured with brass spacers and screws. 3D printed pieces were used for non-critical areas.
+
+
+
+
+## Dribbler
+
+Made from 1/16 in aluminum, with some pieces bent to shape. It was designed for easy adjustment to accommodate manufacturing error.
+The "roller" component is a 3D-printed pulley paired with a black molded silicone shape chosen for its grip.
+
+
+
+
+## Kicker
+A 5 V solenoid was used due to its size and availability. The 3D-printed kicker piece was heat-inserted.
+
+
diff --git a/docs/SoccerOpen/2024/Mechanics/CAD.md b/docs/SoccerOpen/2024/Mechanics/CAD.md
new file mode 100644
index 0000000..55c49b5
--- /dev/null
+++ b/docs/SoccerOpen/2024/Mechanics/CAD.md
@@ -0,0 +1,86 @@
+
+
+
+
+
+# Mirror design
+The following code was used to get a rough estimate of what our field of view would look like, even though the result changes slightly with the thickness added by the chroming process. To adjust for calculation errors, we used a linear regression model for greater accuracy over the theoretical results.
+
+    # Mirror profile used for the estimate:
+    #   parabola:   y = (0.045 + 0.45*x**2) ** (1/2)
+    #   derivative: dy/dx = (1/2) * (0.9*x) * (0.045 + 0.45*x**2) ** (-1/2)
+    import math
+
+    import matplotlib.pyplot as plt
+
+    AlturaCamara_Curva = 5   # cm from the camera to the closest point of the parabola
+    Altura_suelo = 15.4      # cm of height with respect to the floor
+
+    # x values from 0 to 2.5 (half of the parabola), in steps of 0.1
+    ran = []
+    x = 0
+    print('-------------------------- x values:')
+    for i in range(25 + 1):
+        ran.append(x)
+        x = x + 0.1
+    for i in ran:
+        print(i)
+
+    # Tangent angle at each x, obtained from the derivative of the parabola
+    print('-------------------------- tangent angles:')
+    anguTans = []
+    for x in ran:
+        tangente = (1 / 2) * (0.9 * x) * (0.045 + 0.45 * x**2) ** (-1 / 2)
+        angulo = math.degrees(math.atan(tangente))
+        anguTans.append(angulo)
+        print(angulo)
+
+    # Angle from the camera to each point of the parabola
+    print('-------------------------- camera angles:')
+    ai = []
+    for x in ran:
+        y = ((0.1 + x**2) * 0.45) ** (1 / 2) + AlturaCamara_Curva
+        if x == 0:
+            ai.append(0)   # looking straight down at the vertex (value unused below)
+            print(90)
+        else:
+            angulo = math.degrees(math.atan(y / x))
+            ai.append(angulo)
+            print(angulo)
+
+    # Reflected-ray angle and total distance seen on the floor for each x
+    print('-------------------------- triangle angles:')
+    dists = []
+    orden = 0
+    for x in ran:
+        if x == 0:
+            dists.append(0)
+            print(0)
+        else:
+            Y = 2 * (180 - 90 - (ai[orden] - anguTans[orden]))
+            X = (180 - Y) / 2
+            anguTriangulo2 = X - anguTans[orden]
+            print(anguTriangulo2)
+            distTotal = x + (Altura_suelo / math.tan(math.radians(anguTriangulo2)))
+            dists.append(distTotal)
+        orden = orden + 1
+
+    print('-------------------------- total distances:')
+    for i in dists:
+        print(i)
+
+    plt.figure(figsize=(8, 5))
+    plt.plot(ran, dists, label="Total Distance")
+    plt.xlabel("Mirror cm")
+    plt.ylabel("Total Distance")
+    plt.title("Total Distance vs. Mirror cm")
+    plt.grid(True)
+    plt.legend()
+    plt.show()
+
+This is the graph produced by the code, comparing the visible distance with the position along the x-axis of the mirror parabola.
+
+
+
+
+
+**Design simulation in Blender**
+
diff --git a/docs/SoccerOpen/2024/SoccerOpen - 2024.md b/docs/SoccerOpen/2024/SoccerOpen - 2024.md
new file mode 100644
index 0000000..29837ad
--- /dev/null
+++ b/docs/SoccerOpen/2024/SoccerOpen - 2024.md
@@ -0,0 +1,12 @@
+# Soccer Open 2024 Sections
+
+### Electronics
+
+### Mechanics
+
+### Programming
+
+- [📁 Robot Communication](Communication/index.md)
+- [🎮 Robot Control](Control/index.md)
+- [🤖 Algorithm Design](Logic/index.md)
+- [📸 Robot Vision](Vision/index.md)
\ No newline at end of file
diff --git a/docs/SoccerOpen/2024/Vision/index.md b/docs/SoccerOpen/2024/Vision/index.md
new file mode 100644
index 0000000..cd6bf28
--- /dev/null
+++ b/docs/SoccerOpen/2024/Vision/index.md
@@ -0,0 +1,58 @@
+### Blob Detection
+
+For detection we first had to set some constants using VAD values to detect the orange blob; we also added brightness, saturation and contrast filters to make the image obtained from the camera clearer.
+
+Our vision algorithm followed these steps:
+
+1. Initialize sensor
+2. Locate blob
+3. Calculate distance
+4. Calculate opposite distance
+5. Calculate goal distance
+6. Calculate angle
+7. Main algorithm
+8. Send data package to esp32 via UART
+
+The first step when using the camera is to reduce the field of view by placing a black blob to reduce image noise; we also locate the center of the frame using constants such as `FRAME_HEIGHT`, `FRAME_WIDTH` and `FRAME_ROBOT`.
+
+For the game we must be able to differentiate between three types of blobs:
+
+- Yellow goal
+- Blue goal
+- Orange ball
+
+We created the `locate_blob` method, which uses `find_blobs` from the OpenMV `image` module to locate blobs in the image snapshot for each specific threshold set and returns a list of blob objects for each one. In the case of the ball we set the `area_threshold` to 1; this small value means that even a tiny area of this color is marked as the orange ball, widening the effective detection range. For the goals the `area_threshold` is set to 1000, because goals are much larger.
+
+Observe the image below to see the different blobs detected from the camera's POV.
+
+
+
+Once we have our different blobs detected we can calculate the distance to a blob using the hypotenuse. First we calculate the relative center on the x and y axes, then find the magnitude of that distance, and finally use an exponential regression model to convert the pixel measurement to a real distance. The expression was obtained by taking measurements comparing cm and pixels and modeling the data in Excel.
+
+```python
+magnitude_distance = math.sqrt(relative_cx**2 + relative_cy**2)
+total_distance = 11.83 * math.exp((0.0245) * magnitude_distance)
+```
+
+For the `goal_distance` we needed to calculate the opposite distance, using sine.
+
+```python
+distance_final = goal_distance*math.sin (math.radians(goal_angle))
+```
+
+To calculate the angle we used the inverse tangent and then converted the result to degrees.
+
+```python
+angle = math.atan2(relative_cy, relative_cx)
+angle_degrees = math.degrees(angle)
+```
+
+Finally, depending on the blob detected in each image snapshot, we perform the corresponding methods and send the data package as two floating-point values separated by a comma.
+
+### Line Detection
+
+To avoid crossing the lines located outside the goals we implemented the `pixel_distance` measurement: when the robot reaches a certain distance it automatically moves backward, limiting its movement so that it does not cross the white lines.
+
+See the image below to observe the line limitations from the camera's POV.
+
+
\ No newline at end of file
diff --git a/docs/SoccerOpen/index.md b/docs/SoccerOpen/index.md
new file mode 100644
index 0000000..a2c6927
--- /dev/null
+++ b/docs/SoccerOpen/index.md
@@ -0,0 +1,12 @@
+# SoccerOpen
+
+>In the RoboCupJunior Soccer challenge, teams of young engineers design, build, and program two fully autonomous mobile robots to compete against another team in matches. The robots must detect a ball and score into a color-coded goal on a special field that resembles a human soccer field.
+
+
+
+
+- Soccer Open is played using a passive, brightly colored orange ball. Robots may weigh up to 2.2 kg and may have a ball-capturing zone of up to 1.5 cm.
+
+See the official game manual [here](https://robocup-junior.github.io/soccer-rules/master/rules.html).
+
+
diff --git a/docs/assets/Dribbler.png b/docs/assets/Dribbler.png
new file mode 100644
index 0000000..09c44de
Binary files /dev/null and b/docs/assets/Dribbler.png differ
diff --git a/docs/assets/LARC/ArduinoMEGA.webp b/docs/assets/LARC/ArduinoMEGA.webp
new file mode 100644
index 0000000..acb3dd1
Binary files /dev/null and b/docs/assets/LARC/ArduinoMEGA.webp differ
diff --git a/docs/assets/LARC/PCBLARC2024.png b/docs/assets/LARC/PCBLARC2024.png
new file mode 100644
index 0000000..b603607
Binary files /dev/null and b/docs/assets/LARC/PCBLARC2024.png differ
diff --git a/docs/assets/LARC/Raspberry Pi4.jpg b/docs/assets/LARC/Raspberry Pi4.jpg
new file mode 100644
index 0000000..429254b
Binary files /dev/null and b/docs/assets/LARC/Raspberry Pi4.jpg differ
diff --git a/docs/assets/LARC/arduino mega.jpg b/docs/assets/LARC/arduino mega.jpg
new file mode 100644
index 0000000..d773575
Binary files /dev/null and b/docs/assets/LARC/arduino mega.jpg differ
diff --git a/docs/assets/LARC/generalviewlarc.png b/docs/assets/LARC/generalviewlarc.png
new file mode 100644
index 0000000..0c08660
Binary files /dev/null and b/docs/assets/LARC/generalviewlarc.png differ
diff --git a/docs/assets/LARC/indicator.jpg b/docs/assets/LARC/indicator.jpg
new file mode 100644
index 0000000..a9c2d44
Binary files /dev/null and b/docs/assets/LARC/indicator.jpg differ
diff --git a/docs/assets/LARC/overallpcb.png b/docs/assets/LARC/overallpcb.png
new file mode 100644
index 0000000..e5208a7
Binary files /dev/null and b/docs/assets/LARC/overallpcb.png differ
diff --git a/docs/assets/LARC/schematic1.png b/docs/assets/LARC/schematic1.png
new file mode 100644
index 0000000..f1d6f4d
Binary files /dev/null and b/docs/assets/LARC/schematic1.png differ
diff --git a/docs/assets/LARC/schematic2.png b/docs/assets/LARC/schematic2.png
new file mode 100644
index 0000000..62c91f2
Binary files /dev/null and b/docs/assets/LARC/schematic2.png differ
diff --git a/docs/assets/LARC/schematic3.png b/docs/assets/LARC/schematic3.png
new file mode 100644
index 0000000..e36e6b4
Binary files /dev/null and b/docs/assets/LARC/schematic3.png differ
diff --git a/docs/assets/LARC/xt60 connector.jpg b/docs/assets/LARC/xt60 connector.jpg
new file mode 100644
index 0000000..62afb22
Binary files /dev/null and b/docs/assets/LARC/xt60 connector.jpg differ
diff --git a/docs/assets/MirrorGraph.png b/docs/assets/MirrorGraph.png
new file mode 100644
index 0000000..7bfc82b
Binary files /dev/null and b/docs/assets/MirrorGraph.png differ
diff --git a/docs/assets/Robot.png b/docs/assets/Robot.png
new file mode 100644
index 0000000..2353bb7
Binary files /dev/null and b/docs/assets/Robot.png differ
diff --git a/docs/assets/SLW2024/Algorithm_SL2024.png b/docs/assets/SLW2024/Algorithm_SL2024.png
new file mode 100644
index 0000000..3174e63
Binary files /dev/null and b/docs/assets/SLW2024/Algorithm_SL2024.png differ
diff --git a/docs/assets/maze/Dispenser.png b/docs/assets/maze/Dispenser.png
new file mode 100644
index 0000000..e0ffa20
Binary files /dev/null and b/docs/assets/maze/Dispenser.png differ
diff --git a/docs/assets/maze/TMRrobot.jpg b/docs/assets/maze/TMRrobot.jpg
new file mode 100644
index 0000000..3611b5a
Binary files /dev/null and b/docs/assets/maze/TMRrobot.jpg differ
diff --git a/docs/assets/maze/Wheel.png b/docs/assets/maze/Wheel.png
new file mode 100644
index 0000000..9a9896d
Binary files /dev/null and b/docs/assets/maze/Wheel.png differ
diff --git a/docs/assets/maze/c++maze.png b/docs/assets/maze/c++maze.png
new file mode 100644
index 0000000..d345a14
Binary files /dev/null and b/docs/assets/maze/c++maze.png differ
diff --git a/docs/assets/maze/final CAD.png b/docs/assets/maze/final CAD.png
new file mode 100644
index 0000000..ba1b800
Binary files /dev/null and b/docs/assets/maze/final CAD.png differ
diff --git a/docs/assets/maze/render2.JPG b/docs/assets/maze/render2.JPG
new file mode 100644
index 0000000..5a4b36f
Binary files /dev/null and b/docs/assets/maze/render2.JPG differ
diff --git a/docs/assets/mirrorsim.png b/docs/assets/mirrorsim.png
new file mode 100644
index 0000000..be692b4
Binary files /dev/null and b/docs/assets/mirrorsim.png differ
diff --git a/docs/assets/soccer/Programming/MotorsUML.png b/docs/assets/soccer/Programming/MotorsUML.png
new file mode 100644
index 0000000..bc12127
Binary files /dev/null and b/docs/assets/soccer/Programming/MotorsUML.png differ
diff --git a/docs/assets/soccer/Programming/bnodiagram.png b/docs/assets/soccer/Programming/bnodiagram.png
new file mode 100644
index 0000000..40f37d9
Binary files /dev/null and b/docs/assets/soccer/Programming/bnodiagram.png differ
diff --git a/docs/assets/soccer/Programming/goalkeeper_pico_diagram.png b/docs/assets/soccer/Programming/goalkeeper_pico_diagram.png
new file mode 100644
index 0000000..74d0706
Binary files /dev/null and b/docs/assets/soccer/Programming/goalkeeper_pico_diagram.png differ
diff --git a/docs/assets/soccer/Programming/line_robot_view.jpg b/docs/assets/soccer/Programming/line_robot_view.jpg
new file mode 100644
index 0000000..4d3884f
Binary files /dev/null and b/docs/assets/soccer/Programming/line_robot_view.jpg differ
diff --git a/docs/assets/soccer/Programming/robot.png b/docs/assets/soccer/Programming/robot.png
new file mode 100644
index 0000000..23e2414
Binary files /dev/null and b/docs/assets/soccer/Programming/robot.png differ
diff --git a/docs/assets/soccer/Programming/robot_vision_view.jpg b/docs/assets/soccer/Programming/robot_vision_view.jpg
new file mode 100644
index 0000000..3bc8e51
Binary files /dev/null and b/docs/assets/soccer/Programming/robot_vision_view.jpg differ
diff --git a/docs/assets/soccer/Programming/serialdiagram.png b/docs/assets/soccer/Programming/serialdiagram.png
new file mode 100644
index 0000000..309a506
Binary files /dev/null and b/docs/assets/soccer/Programming/serialdiagram.png differ
diff --git a/docs/assets/soccer/Programming/striker_pico_diagram.png b/docs/assets/soccer/Programming/striker_pico_diagram.png
new file mode 100644
index 0000000..aa33f60
Binary files /dev/null and b/docs/assets/soccer/Programming/striker_pico_diagram.png differ
diff --git a/docs/assets/topViewLowerLvl.png b/docs/assets/topViewLowerLvl.png
new file mode 100644
index 0000000..e62dab5
Binary files /dev/null and b/docs/assets/topViewLowerLvl.png differ
diff --git a/docs/home/.pages b/docs/home/.pages
new file mode 100644
index 0000000..838c483
--- /dev/null
+++ b/docs/home/.pages
@@ -0,0 +1,6 @@
+nav:
+ - index.md
+ - Overview
+ - Areas
+ - Aug 2023 - Jun 2024
+ - Aug 2022 - Jun 2023
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Face Detection and Recognition.md b/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Face Detection and Recognition.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Human Attributes.md b/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Human Attributes.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/NVIDIA-Tao.md b/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/NVIDIA-Tao.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/TF Object Detection API.md b/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/TF Object Detection API.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/home/2022-Jun 2023/Human Robot Interaction/index.md b/docs/home/2022-Jun 2023/Human Robot Interaction/index.md
deleted file mode 100644
index cdf4c18..0000000
--- a/docs/home/2022-Jun 2023/Human Robot Interaction/index.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Human Robot Interaction
-
-Human-Robot Interaction (HRI) refers to the study and design of interactions between humans and robots. It encompasses the development of technologies, interfaces, and systems that enable effective communication, collaboration, and cooperation between humans and robots. HRI aims to create intuitive and natural ways for humans to interact with robots, allowing for seamless integration of robots into various domains such as healthcare, manufacturing, entertainment, and personal assistance.
-
-In Roborregos the previous work was develop only considering [speech](speech/index.md), a subset of HRI that involves the use of speech recognition and speech synthesis to enable verbal communication between humans and robots. Also the hardware setup of the area and the software modules that are used were developed
-
-The last implementation to get the entities of the text was using GPT-3 API.
-
-Also [Human Analysis]("../vision/Human Analysis/index.md") area was develop, a basis for the development of natural behavior between humans and robots using computer vision techniques.
-
-Currently we are working on the development of the area considering features such as gestures, facial expressions, and body language recognition to enhance the human-robot interaction experience. By incorporating these features, robots can better understand and respond to non-verbal cues from humans, leading to more effective communication and collaboration.
-
diff --git a/docs/home/2022-Jun 2023/Integration and Networks/index.md b/docs/home/2022-Jun 2023/Integration and Networks/index.md
deleted file mode 100644
index 2f67b9d..0000000
--- a/docs/home/2022-Jun 2023/Integration and Networks/index.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Overview
-
-The area of integration and networks is concerned about how all the modules are connected, including hardware and software.
-
-The area is in charge of do the state machines of the tasks for Robocup@Home, remember to read frecuently the [Robocup@Home Rulebook](https://robocupathome.github.io/RuleBook/rulebook/master.pdf) to be aware of the rules of the competition.
-
-In the following sections, we will discuss the following topics:
-
-- [Jetson Xavier AGX](Jetson Xavier Agx.md)
-- [Jetson Nano](Jetson Nano.md)
-- [Network](Network.md)
-
-Integration is a key part of the project, since it is the way to connect all the modules and make them work together. You must have an idea of how the modules are connected and how they communicate with each other. And also how all the modules are working.
-
-Also you need to be aware of hardware issues and when you need to change some hardware configuration, for example, if you need more jetson nano devices, in order to have more local processing power.
-
-## Considerations of all the modules in the network
-
-### Navigation
-
-The navigation module is based on a ROS package called [ROS Navigation Stack](http://wiki.ros.org/navigation), it is a 2D navigation stack that takes in information from odometry, sensor streams, and a goal pose and outputs safe velocity commands that are sent to a mobile base. Regularly you have a move_base node that is the one that is in charge of the navigation, it takes the goal pose and the odometry information and it sends the velocity commands to the mobile base.
-
-### Human Robot Interaction
-
-The natural interaction with the user is something that you need to consider in the integration, you need to know how the user is going to interact with the robot, and how the robot is going to respond to the user. Also what the robot will do if he doesn't understand the user.
-
-#### Speech
-
-You need to fully understand the calls of the speech module, you can check specific detalis in the [Speech Module](../Human%20Robot%20Interaction/speech/index.md). Remember all the control topics and be aware of any misspelling thats why is recommended to have a confirmation call for every command given by the user.
-
-#### Human Analysis
-
-The detection or the analysis of the human should be storaged and tagged correctly, the natural implementation of the HRI area should be consider when integrating the system
-
-### Manipulation
-
-Remember to always check the hardware configurations for any manipulation task and remember that the area of manipulation is in charge of the manipulation of the objects, remember the safety of the user and include routines for different cases from a the object falling or the user not getting the object.
-
-### Vision
-
-With vision you can check different objects of the environment, and integrate them to any behavior, just rememberthe latency of the vision module and consider the control topics of this module.
\ No newline at end of file
diff --git a/docs/home/Areas/Computer Vision.md b/docs/home/Areas/Computer Vision.md
new file mode 100644
index 0000000..7ffd060
--- /dev/null
+++ b/docs/home/Areas/Computer Vision.md
@@ -0,0 +1,9 @@
+# Computer Vision
+
+The different challenges in the @Home competition require several computer vision techniques to solve. This section shows some of the techniques most used in the competition.
+
+First of all, it's important to understand the general structure of the vision modules for the @Home competition. We can divide the vision modules into two main categories:
+
+- Object Detection
+- Human Analysis
+
diff --git a/docs/home/2022-Jun 2023/Electronics and Control/index.md b/docs/home/Areas/Electronics and Control.md
similarity index 58%
rename from docs/home/2022-Jun 2023/Electronics and Control/index.md
rename to docs/home/Areas/Electronics and Control.md
index 503437c..7e93b2d 100644
--- a/docs/home/2022-Jun 2023/Electronics and Control/index.md
+++ b/docs/home/Areas/Electronics and Control.md
@@ -1,8 +1,3 @@
-# Overview
+# Electronics and Control
-Electronics and Control is the area in charge of the design, construction and application of all the circuits at any level, but also of the control of the movements in the robot at a low level, hence it interacts directly with the mechanics area.
-
-## Sections
-
-- [Electronics](Electronics.md)
-- [Control](Control.md)
\ No newline at end of file
+Electronics and Control is the area in charge of the design, construction and application of all circuits at any level, as well as the low-level control of the robot's movements; hence it interacts directly with the Mechanics area.
\ No newline at end of file
diff --git a/docs/home/Areas/HRI.md b/docs/home/Areas/HRI.md
new file mode 100644
index 0000000..3029170
--- /dev/null
+++ b/docs/home/Areas/HRI.md
@@ -0,0 +1,3 @@
+# Human Robot Interaction
+
+Human-Robot Interaction (HRI) refers to the study and design of interactions between humans and robots. It encompasses the development of technologies, interfaces, and systems that enable effective communication, collaboration, and cooperation between humans and robots. HRI aims to create intuitive and natural ways for humans to interact with robots, allowing for seamless integration of robots into various domains such as healthcare, manufacturing, entertainment, and personal assistance.
diff --git a/docs/home/Areas/Integration and Networks.md b/docs/home/Areas/Integration and Networks.md
new file mode 100644
index 0000000..964b7f3
--- /dev/null
+++ b/docs/home/Areas/Integration and Networks.md
@@ -0,0 +1,7 @@
+# Integration and Networks
+
+The area of Integration and Networks is concerned with how all the modules are connected, including hardware and software.
+
+Integration is a key part of the project, since it is what connects all the modules and makes them work together. You must have an idea of how the modules are connected, how they communicate with each other, and how each of them works.
+
+You also need to be aware of hardware issues and of when a hardware configuration needs to change, for example adding more Jetson Nano devices to gain local processing power.
diff --git a/docs/home/Areas/Manipulation.md b/docs/home/Areas/Manipulation.md
new file mode 100644
index 0000000..36d7b97
--- /dev/null
+++ b/docs/home/Areas/Manipulation.md
@@ -0,0 +1,5 @@
+# Manipulation
+
+Dynamic manipulation systems are crucial for advancing robotics because they allow robots to interact with their environments in ways that go far beyond pre-programmed actions. This capability is particularly important for service robotics, where robots must navigate unstructured and unpredictable environments. Dynamic manipulation helps robots sense changes, modify trajectories, and grasp objects with variations in shape, size, and material. This opens the door to more complex tasks, from assisting with meal preparation to handling delicate objects in healthcare settings. The ability to adapt in real-time also promotes safe and seamless human-robot collaboration.
+
+Our physical implementation consists of a 6-DOF Xarm6 robotic arm utilizing MoveIt for trajectory planning. We developed custom picking and placing functionalities, incorporating additional libraries and algorithms, and leveraging feedback from a Zed2 stereo camera.
diff --git a/docs/home/Areas/Mechanics.md b/docs/home/Areas/Mechanics.md
new file mode 100644
index 0000000..995f582
--- /dev/null
+++ b/docs/home/Areas/Mechanics.md
@@ -0,0 +1 @@
+# Mechanics
diff --git a/docs/home/Areas/Navigation.md b/docs/home/Areas/Navigation.md
new file mode 100644
index 0000000..69c0d5e
--- /dev/null
+++ b/docs/home/Areas/Navigation.md
@@ -0,0 +1 @@
+# Navigation
\ No newline at end of file
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md
similarity index 97%
rename from docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md
index 78bfeec..0e5b229 100644
--- a/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md
+++ b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Human Analysis/Pose Estimation.md
@@ -90,7 +90,7 @@ As a result, you'll not only be able to get the pose estimation array. but also
Example:
-
+
### Using pose estimation with ROS
@@ -164,5 +164,5 @@ with mp_pose.Pose(
Here is an example of the result:
-
+
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Human Analysis/index.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Human Analysis/index.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Computer Vision/Human Analysis/index.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Human Analysis/index.md
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md
similarity index 96%
rename from docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md
index 38ec1e1..7345ffb 100644
--- a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md
+++ b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/TFLite Model Maker.md
@@ -75,4 +75,4 @@ model.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL, ExportForm
Sample results:
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md
similarity index 93%
rename from docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md
index 81d7946..06d4f6f 100644
--- a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md
+++ b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Custom Models/yolov5.md
@@ -87,7 +87,7 @@ yolov5l.pt LARGE
yolov5x.pt EXTRA LARGE
```
With the following precision and speed each:
-
+
In testing, yolov5m has proven accurate enough while occupying less than 2GB of VRAM allocated at runtime.
After training has finished, a new folder is created under runs/train. There, the created models are contained in the weights folder, including the one generated in the last epoch of training and the one with the best average precision. The folder also includes the results statistics in graphical form and as a CSV file.
@@ -96,7 +96,7 @@ To validate and test the model, it is recommended to use the tutorial.ipynb note
Sample results:
-
+
Detection was made using the YOLO ROS Wrapper available at:
```
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md
similarity index 85%
rename from docs/home/2022-Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md
index 4cfefb0..71bd456 100644
--- a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md
+++ b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/Dataset Automatization.md
@@ -26,17 +26,17 @@ With the scripts provided in the dataset, the object is cut to be used for the c
- First, the object is detected using available YOLOv5 models (this remains one of the most important improvements to be made, since for some objects it can be difficult to find an adequate model to detect them). Then the object is segmented and cut out using the Segment Anything model from META.
-
+
After cut:
-
+
With this photo, the object remains at its original position in the image (which can also be used as labels for training with real images).
- Then, the image is cut to the object size, which will be the one used in the creation of datasets.
-
+
## Dataset creation
@@ -45,15 +45,15 @@ Using a dataset of various backgrounds (including the area of the competition an
It then exports the result in a format readable by the training models, currently with versions exporting annotations in the COCO JSON format and in the YOLOv5 format. The notebook exports the COCO format with segmentation, so it can be used with models that support training with segmentation labels.
Example of an image produced:
-
+
Segmentation visualized:
-
+
## Results Obtained
While images from the created datasets may sometimes look curious, with objects shown in positions and places where they would never be found, this process proved successful to a great extent. An example of this is how one of the first models was trained, using only backgrounds from the table on which the model was going to detect. The results of this model, while accurate in that space, struggled in other areas:
-
+
After retraining with a dataset using diverse backgrounds (both taken from the work area and others), this problem was mostly solved:
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/index.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/index.md
similarity index 91%
rename from docs/home/2022-Jun 2023/Computer Vision/Object Detection/index.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/index.md
index c620d31..7aec858 100644
--- a/docs/home/2022-Jun 2023/Computer Vision/Object Detection/index.md
+++ b/docs/home/Aug 2022 - Jun 2023/Computer Vision/Object Detection/index.md
@@ -2,13 +2,13 @@
The Object Detection module represents the challenge of identifying and locating objects in the environment. This module is used in computer vision and image processing to detect objects in an image or video sequence. The goal is to identify each object and its location within the image or video frame. The module uses various techniques such as feature extraction, object recognition, and machine learning algorithms to achieve this task.
-
+
## Challenges and Tasks
The Object Detection module is focused on producing a highly accurate yet fast object detection system. The first approach, in the first year of participation, was to use a pre-trained model based on the TensorFlow Object Detection API. This model was trained with the COCO dataset, which contained at most 4 different classes of objects. The process was to generate the dataset from a manually labeled set of images and train the model with it.
-
+
This approach had several problems listed below:
diff --git a/docs/home/2022-Jun 2023/Computer Vision/index.md b/docs/home/Aug 2022 - Jun 2023/Computer Vision/index.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Computer Vision/index.md
rename to docs/home/Aug 2022 - Jun 2023/Computer Vision/index.md
diff --git a/docs/home/2022-Jun 2023/Electronics and Control/Boards/Boards.md b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Boards/Boards.md
similarity index 89%
rename from docs/home/2022-Jun 2023/Electronics and Control/Boards/Boards.md
rename to docs/home/Aug 2022 - Jun 2023/Electronics and Control/Boards/Boards.md
index 319a211..02a6953 100644
--- a/docs/home/2022-Jun 2023/Electronics and Control/Boards/Boards.md
+++ b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Boards/Boards.md
@@ -5,20 +5,20 @@
This board's main goal is to control the motors of the base, which are four 12V/6V DC motors with a 270:1 gearbox reduction, model [CQGB37Y001](http://www.cqrobot.wiki/index.php/Metal_DC_Geared_Motor_w/Encoder_CQGB37Y001). These motors come with magnetic encoders already installed.
The board has an [ATmega2560](https://ww1.microchip.com/downloads/en/devicedoc/atmel-2549-8-bit-avr-microcontroller-atmega640-1280-1281-2560-2561_datasheet.pdf) as the microcontroller and two voltage supplies: a 12V supply that can be connected through an XT-60 connector or a terminal block.
-
+
It also has a 5V supply provided by the serial port, which also allows communication with the main computer. This port is exposed through a female micro-USB connector wired to the FT232RQ chip, which also makes it possible to access the microcontroller with an FTDI adapter.
-
+
Voltage can be interrupted manually with the headers shown below.
-
+
For the drivers we have two Dual [MC33926](https://www.pololu.com/product/1213) Motor Driver Carriers, so each one controls two motors, covering the four motors we need. This driver was selected because of its reliable protection features and its voltage and current range.
For orientation feedback we have the Adafruit IMU sensor [BNO055](https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/overview).
-
+
This board also has some miscellaneous features such as extra voltage outputs (5V/12V), I2C pins for communication, bootloader pins, 6 extra digital pins for general purposes (an emergency stop, for example), indicator LEDs and a reset button.
@@ -38,12 +38,12 @@ For the pinout to the drivers you can download the next [Arduino](stepperPruebas
This board also supports feedback for the stepper motors through the [AS5600](https://pdf1.alldatasheet.com/datasheet-pdf/view/621657/AMSCO/AS5600.html) contactless potentiometer, which makes it possible to know the position of the motor using a magnet fixed to its back.
-
+
This encoder model required the design of a housing so that the encoder and the motor stay together. It communicates via I2C, so to use several of these encoders correctly it is also necessary to add a multiplexer that allows reading multiple encoders.
-
+
For the servo motors we mainly use two models: the [1245MG](https://www.pololu.com/file/0J706/HD-1235MG.pdf) when more torque is needed, or the smaller and less current-demanding [MG995](https://pdf1.alldatasheet.com/datasheet-pdf/view/1132435/ETC2/MG995.html).
This board supports up to six 7V servomotors, and the pinout to the motors is also declared in the Arduino file shown before.
@@ -55,7 +55,7 @@ And also has some miscelanous features such as extra voltage Outputs (5v/7v/12V)
Lastly, for the power supply board we designed a distribution circuit that can be interrupted via Start and Stop buttons. The interruption is implemented with a two-relay latch circuit.
-
+
NOTE: The connections on the normally closed and normally open contacts are done like this because the blueprint and the real pinout are reversed, so in the real circuit the connections go to the normally open contact.
diff --git a/docs/home/2022-Jun 2023/Electronics and Control/Boards/stepperPruebas.ino b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Boards/stepperPruebas.ino
similarity index 100%
rename from docs/home/2022-Jun 2023/Electronics and Control/Boards/stepperPruebas.ino
rename to docs/home/Aug 2022 - Jun 2023/Electronics and Control/Boards/stepperPruebas.ino
diff --git a/docs/home/2022-Jun 2023/Electronics and Control/Control.md b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Control.md
similarity index 87%
rename from docs/home/2022-Jun 2023/Electronics and Control/Control.md
rename to docs/home/Aug 2022 - Jun 2023/Electronics and Control/Control.md
index 6d9ef10..81588f3 100644
--- a/docs/home/2022-Jun 2023/Electronics and Control/Control.md
+++ b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Control.md
@@ -3,7 +3,7 @@ On the this side, we mainly develop reliable control, for the basic movements of
First of all we need to explain the communication between the main computer and the microcontrollers. We can understand it more easily with this diagram:
-
+
The serial communication is done through [pyserial](https://pypi.org/project/pyserial/). The control logic is that the microcontroller constantly runs a PID control system
diff --git a/docs/home/2022-Jun 2023/Electronics and Control/Electronics.md b/docs/home/Aug 2022 - Jun 2023/Electronics and Control/Electronics.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Electronics and Control/Electronics.md
rename to docs/home/Aug 2022 - Jun 2023/Electronics and Control/Electronics.md
diff --git a/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/index.md b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/index.md
new file mode 100644
index 0000000..93ee114
--- /dev/null
+++ b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/index.md
@@ -0,0 +1,10 @@
+# Human Robot Interaction
+
+In RoBorregos, previous work considered only [speech](speech/index.md), a subset of HRI that involves the use of speech recognition and speech synthesis to enable verbal communication between humans and robots. The hardware setup of the area and the software modules in use were also developed.
+
+The latest implementation for extracting the entities from the text used the GPT-3 API.
+
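+As a rough illustration, entity extraction with the GPT-3 Completions API could look like the sketch below; the prompt, model name and example command are assumptions for illustration, not the exact implementation used.
+
+```python
+import os
+import openai
+
+openai.api_key = os.environ["OPENAI_API_KEY"]
+
+command = "take the apple from the kitchen table to the living room"
+
+# Ask the model to answer only with the entities of the command (prompt is illustrative)
+prompt = (
+    "Extract the action, object, source and destination from the command "
+    f"and answer only with a JSON object.\nCommand: {command}\nJSON:"
+)
+
+response = openai.Completion.create(
+    model="text-davinci-003",  # GPT-3-era completion model (assumed)
+    prompt=prompt,
+    max_tokens=100,
+    temperature=0,
+)
+print(response.choices[0].text.strip())
+```
+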
+The [Human Analysis](../vision/Human Analysis/index.md) area was also developed, as a basis for natural behavior between humans and robots using computer vision techniques.
+
+Currently we are working on extending the area with features such as gestures, facial expressions, and body language recognition to enhance the human-robot interaction experience. By incorporating these features, robots can better understand and respond to non-verbal cues from humans, leading to more effective communication and collaboration.
+
diff --git a/docs/home/2022-Jun 2023/Human Robot Interaction/speech/GPT-3 API.md b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/GPT-3 API.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Human Robot Interaction/speech/GPT-3 API.md
rename to docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/GPT-3 API.md
diff --git a/docs/home/2022-Jun 2023/Human Robot Interaction/speech/index.md b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/index.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Human Robot Interaction/speech/index.md
rename to docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/index.md
diff --git a/docs/home/2022-Jun 2023/Human Robot Interaction/speech/speech_to_text.md b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/speech_to_text.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Human Robot Interaction/speech/speech_to_text.md
rename to docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/speech_to_text.md
diff --git a/docs/home/2022-Jun 2023/Human Robot Interaction/speech/text_to_speech.md b/docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/text_to_speech.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Human Robot Interaction/speech/text_to_speech.md
rename to docs/home/Aug 2022 - Jun 2023/Human Robot Interaction/speech/text_to_speech.md
diff --git a/docs/home/2022-Jun 2023/Integration and Networks/Jetson Nano.md b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/Jetson Nano.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Integration and Networks/Jetson Nano.md
rename to docs/home/Aug 2022 - Jun 2023/Integration and Networks/Jetson Nano.md
diff --git a/docs/home/2022-Jun 2023/Integration and Networks/Jetson Xavier Agx.md b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/Jetson Xavier Agx.md
similarity index 89%
rename from docs/home/2022-Jun 2023/Integration and Networks/Jetson Xavier Agx.md
rename to docs/home/Aug 2022 - Jun 2023/Integration and Networks/Jetson Xavier Agx.md
index 470b0d2..c9e3fe8 100644
--- a/docs/home/2022-Jun 2023/Integration and Networks/Jetson Xavier Agx.md
+++ b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/Jetson Xavier Agx.md
@@ -6,10 +6,10 @@ To flash the Jetson AGX Xavier, you need to download the [JetPack SDK](https://d
1. Download the JetPack SDK from Nvidia. You will need to create an account to download the SDK in [Nvidia Developer](https://developer.nvidia.com/drive/downloads).
2. Put the jetson AGX Xavier on recovery mode. There is a recovery button on the board, which is in the middle of three buttons. Hold the recovery button and then power it up, which will enter the Force Recovery Mode
-3. You should connect the Jetson AGX Xavier to your computer with a USB to USB-C cable. The USB-C port is the one that is closer to the power button. It should detect the board
-4. You Can select the components you want to install. We recommend to install all of them.
-5. Then you should flash the board (if you are using SSD storage you should change the Storage Device).
-6. When you get to install additional components, you will get a promt, at that moment the board has already a initial OS, if you get an error in the connection change to another method and check with a display the real IP of the board.
+3. Connect the Jetson AGX Xavier to your computer with a USB to USB-C cable. The USB-C port is the one closer to the power button. The board should then be detected.
+4. Select the components you want to install. We recommend installing all of them.
+5. Flash the board (if you are using SSD storage, change the Storage Device accordingly).
+6. When you get to the installation of additional components, you will get a prompt; at that moment the board already has an initial OS. If you get a connection error, switch to another connection method and check the board's real IP with a display.
## Setting up the Jetson AGX Xavier
diff --git a/docs/home/2022-Jun 2023/Integration and Networks/Network.md b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/Network.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Integration and Networks/Network.md
rename to docs/home/Aug 2022 - Jun 2023/Integration and Networks/Network.md
diff --git a/docs/home/Aug 2022 - Jun 2023/Integration and Networks/index.md b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/index.md
new file mode 100644
index 0000000..fc760e2
--- /dev/null
+++ b/docs/home/Aug 2022 - Jun 2023/Integration and Networks/index.md
@@ -0,0 +1,7 @@
+# Overview
+
+In the following sections, we will discuss the following topics:
+
+- [Jetson Xavier AGX](Jetson Xavier Agx.md)
+- [Jetson Nano](Jetson Nano.md)
+- [Network](Network.md)
diff --git a/docs/home/2022-Jun 2023/Mechanics/DashGO x ARM/Design.md b/docs/home/Aug 2022 - Jun 2023/Mechanics/DashGO x ARM/Design.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Mechanics/DashGO x ARM/Design.md
rename to docs/home/Aug 2022 - Jun 2023/Mechanics/DashGO x ARM/Design.md
diff --git a/docs/home/2022-Jun 2023/Mechanics/RBGS/Base.md b/docs/home/Aug 2022 - Jun 2023/Mechanics/RBGS/Base.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Mechanics/RBGS/Base.md
rename to docs/home/Aug 2022 - Jun 2023/Mechanics/RBGS/Base.md
diff --git a/docs/home/2022-Jun 2023/Team Members.md b/docs/home/Aug 2022 - Jun 2023/Team Members.md
similarity index 100%
rename from docs/home/2022-Jun 2023/Team Members.md
rename to docs/home/Aug 2022 - Jun 2023/Team Members.md
diff --git a/docs/home/2022-Jun 2023/index.md b/docs/home/Aug 2022 - Jun 2023/index.md
similarity index 100%
rename from docs/home/2022-Jun 2023/index.md
rename to docs/home/Aug 2022 - Jun 2023/index.md
diff --git a/docs/home/Aug 2023-Present/Computer Vision/Human Recognition/Face detection.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Face Recognition.md
similarity index 100%
rename from docs/home/Aug 2023-Present/Computer Vision/Human Recognition/Face detection.md
rename to docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Face Recognition.md
diff --git a/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Counting.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Counting.md
new file mode 100644
index 0000000..7231c44
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Counting.md
@@ -0,0 +1,14 @@
+# Person Counting and Finding
+
+For the GPSR task, it was necessary to count and/or identify people who meet certain characteristics. Thus, a node offering these services was developed.
+
+## Person Counting
+For the person counting module, `YOLOv8` was used to detect people and `MediaPipe` was used to detect the pose of each person. By implementing the `REID` module, each person is counted only once, keeping a vector with the different poses so that only the final count for a single pose is returned.
+
+## Person Identifying
+A similar process was followed; however, once the pose is identified, the coordinates of the person are published.
+
+## Node structure
+
+For the counting process, the node has two services: one to begin counting and one to end the process and return the count. The finding service, on the other hand, is called once and publishes the coordinates of the identified person.
+
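+As a rough sketch of the counting logic described above (the function, the `(embedding, pose)` detection pairs and the similarity threshold are assumptions for illustration, not the exact implementation):
+
+```python
+from scipy.spatial.distance import cosine
+
+
+def count_people_with_pose(detections, requested_pose, known_people, threshold=0.6):
+    # detections: list of (embedding, pose) pairs from YOLOv8 + MediaPipe + the re-ID model
+    # known_people: embeddings of people already counted (updated in place)
+    count = 0
+    for embedding, pose in detections:
+        # Re-identification: only count a person the first time they are seen
+        if any(1 - cosine(embedding, prev) >= threshold for prev in known_people):
+            continue
+        known_people.append(embedding)
+        if pose == requested_pose:
+            count += 1
+    return count
+```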
diff --git a/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Tracking.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Tracking.md
new file mode 100644
index 0000000..371341a
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Human Analysis/Person Tracking.md
@@ -0,0 +1,101 @@
+# Person Tracking
+
+Person tracking requires the robot to identify a person and be able to follow them as they move around the environment. This capability is needed in several tasks, such as Carry My Luggage and GPSR (General Purpose Service Robot). Therefore, computer vision is required to detect, identify and track a person.
+
+## Person detection
+
+In order to detect people, the `YOLO v8` model was used, obtaining the bounding boxes of each person in each frame. Additionally, the `ByteTrack` algorithm was used to automatically assign a track_id to each person who is still visible from the previous frame. However, using the default tracker from YOLO was not consistent enough: people who go behind large objects, or even behind other people, are assigned new ids, since this tracker does not keep track of people who leave and re-enter the frame. This introduced the need for a re-identification model that could recognize people who have appeared in previous frames, especially because the tracked person could exit the frame at any moment and it is necessary to identify them again to continue following.
+
+```python
+# Get the results from the YOLOv8 model
+results = self.model.track(frame, persist=True, tracker='bytetrack.yaml', classes=0, verbose=False)
+```
+
+## Person re-identification
+
+To re-identify people, the repository [Person_reID_baseline_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch) was used as a base to train different models using the `Market1501` dataset. More specifically, the models `ResNet50`, `Swin`, `PCB` and `DenseNet` were trained and tested, eventually opting for the `DenseNet` model as it was the lightest. With the trained model, it was possible to obtain a feature vector from an image of a person.
+
+```python
+# Crop the image using the bbox coordinates
+cropped_image = frame[y1:y2, x1:x2]
+
+# Convert the image array to a PIL image
+pil_image = PILImage.fromarray(cropped_image)
+
+# Get feature
+with torch.no_grad():
+ new_feature = extract_feature_from_img(pil_image, self.model_reid)
+```
+
+Nonetheless, extracting the embeddings of every person in every frame is not efficient, so an array of previous detections is kept to check whether a new person has been detected. In that case, the embeddings are extracted and compared to the embeddings of the tracked person using a cosine similarity threshold.
+
+```python
+from scipy.spatial.distance import cosine
+
+def compare_images(features1, features2, threshold=0.6):
+ if features1.ndim != 1 or features2.ndim != 1:
+ return False
+
+ # Compute cosine similarity between feature vectors
+ similarity_score = 1 - cosine(features1, features2)
+
+ # Compare similarity score with threshold
+ if similarity_score >= threshold:
+ return True # Same person
+ else:
+ return False # Different person
+```
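+
+Around this comparison, the node keeps a record of previously seen detections; a simplified sketch of that bookkeeping is shown below. The `update_tracked_person` helper, the `tracked` dictionary and the `prev_ids` set are illustrative assumptions, reusing `extract_feature_from_img` and `compare_images` from the snippets above.
+
+```python
+import torch
+from PIL import Image as PILImage
+
+# extract_feature_from_img and compare_images are the helpers shown above
+
+def update_tracked_person(track_id, bbox, frame, tracked, prev_ids, model_reid):
+    # Only extract an embedding the first time a track_id appears
+    if track_id in prev_ids:
+        return
+    prev_ids.add(track_id)
+
+    x1, y1, x2, y2 = bbox
+    pil_image = PILImage.fromarray(frame[y1:y2, x1:x2])
+    with torch.no_grad():
+        feature = extract_feature_from_img(pil_image, model_reid)
+
+    # If the new detection matches the stored features, recover the tracked person's id
+    if tracked.get("features") is not None and compare_images(tracked["features"], feature):
+        tracked["id"] = track_id
+```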
+
+## MediaPipe implementation
+
+An issue encountered with the detection and re-identification process was that the YOLO model would sometimes identify parts of people as a person, meaning that bounding boxes could contain only hands or heads, for example. This became problematic, as the re-identification model could return unexpected results for these cases. Therefore, the `MediaPipe` library was used to verify that a person's chest is visible, and only then is the re-identification model used.
+
+```python
+import cv2
+import mediapipe as mp
+
+pose_model = mp.solutions.pose.Pose(min_detection_confidence=0.8)
+
+def check_visibility(poseModel, image):
+ pose = poseModel
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+
+ # Process the image
+ results = pose.process(image)
+
+ # Check if the pose landmarks are detected
+ if results.pose_landmarks is not None:
+
+ # Get the x and y coordinates of the chest
+ chest_x = results.pose_landmarks.landmark[11].x
+ chest_y = results.pose_landmarks.landmark[11].y
+ chest_visibility = results.pose_landmarks.landmark[11].visibility
+
+ # Check if the chest is in the frame
+ if (chest_x < 0 or chest_x > 1 or chest_y < 0 or chest_y > 1) and chest_visibility < 0.95:
+ return False
+ else:
+ return True
+```
+
+## Node structure
+
+In order to start tracking, the service should be called, receiving a boolean to start or end the process. Once the service is called to begin tracking, the person detected with the largest bounding box area is assigned as the tracked person. While tracking is enabled, the node publishes the coordinates of the person relative to the image (in pixels) as long as the tracked person is in frame; otherwise it does not publish anything. A minimal skeleton of this wiring is sketched after the topic lists below.
+
+The ROS node is structured in the following way:
+
+### Subscriber topics
+
+- `/zed2/zed_node/left/image_rect_color`: ZED camera topic
+
+### Publisher topics
+
+- `/vision/person_detection`: Publishes a Point with the x and y coordinates of the tracked person. (Does not publish if the tracked person is not visible.)
+- `/vision/img_tracking`: Publishes an annotated image for debugging and display purposes.
+
+### Service topics
+
+- `/vision/change_person_tracker_state`: Of type `SetBool` to enable or disable tracking.
+
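+Putting the pieces above together, a minimal skeleton of the node could look like the sketch below; only the topic and service names from the lists above are used, and the callback internals are simplified placeholders:
+
+```python
+import rospy
+from std_srvs.srv import SetBool, SetBoolResponse
+from geometry_msgs.msg import Point
+from sensor_msgs.msg import Image
+
+
+class PersonTrackingNode:
+    def __init__(self):
+        rospy.init_node("person_tracking")
+        self.enabled = False
+        self.point_pub = rospy.Publisher("/vision/person_detection", Point, queue_size=10)
+        rospy.Subscriber("/zed2/zed_node/left/image_rect_color", Image, self.image_callback)
+        rospy.Service("/vision/change_person_tracker_state", SetBool, self.change_state)
+
+    def change_state(self, req):
+        # Enable or disable tracking; when enabling, the largest detection becomes the target
+        self.enabled = req.data
+        return SetBoolResponse(success=True, message="tracking " + ("enabled" if req.data else "disabled"))
+
+    def image_callback(self, msg):
+        if not self.enabled:
+            return
+        # Placeholder: run detection, re-identification and the MediaPipe check here,
+        # then publish the tracked person's pixel coordinates:
+        # self.point_pub.publish(Point(x=cx, y=cy, z=0))
+
+
+if __name__ == "__main__":
+    PersonTrackingNode()
+    rospy.spin()
+```
+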
+### Launch file
+
+For easier execution, the node can be launched using the following command:
+
+```bash
+roslaunch vision receptionist.launch
+```
diff --git a/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/Seat Detection.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/Seat Detection.md
new file mode 100644
index 0000000..f6d77a5
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/Seat Detection.md
@@ -0,0 +1,124 @@
+# Seat detection
+
+Detecting an available seat is a procedure necessary for the Receptionist Task, where there are either couches or chairs and the robot should be able to point to a position where there is a free seat, considering that there could be people already sitting on some of them.
+
+## Detection
+
+In order to detect people, chairs and couches, the `YOLO v8` model was used, filtering by classes.
+
+- Person: class 0
+- Chair: class 56
+- Couch: class 57
+
+```python
+# Get the results from the YOLOv8 model
+results = self.model(frame, verbose=False, classes=[0,56,57])
+
+people = []
+chairs = []
+couches = []
+
+for out in results:
+ for box in out.boxes:
+ x1, y1, x2, y2 = [round(x) for x in box.xyxy[0].tolist()]
+ class_id = box.cls[0].item()
+ label = self.model.names[class_id]
+ bbox = (x1, y1, x2, y2)
+
+ if class_id == 0:
+ people.append({"bbox": bbox, "label": label, "class": class_id})
+
+ elif class_id == 56:
+ chairs.append({"bbox": bbox, "label": label, "class": class_id})
+
+ elif class_id == 57:
+ couches.append({"bbox": bbox, "label": label, "class": class_id})
+```
+
+## Selection
+
+To select a seat, choosing an available chair was the safer option, so the chair array is traversed to see if there are any chairs without people sitting on them. For this, the bounding box of each chair is compared with the center point of each person to check whether that center point falls inside the chair bounding box, meaning the seat is taken. Additionally, since there could be people and chairs behind the living room area, the available chairs are added to a priority queue sorted by bounding box width. Finally, if the queue is not empty, the node returns the center pixel of the chair with the largest area (approximated by its bounding box width).
+
+```python
+chair_q = queue.PriorityQueue()
+for chair in chairs:
+ occupied = False
+ xmin = chair["bbox"][0]
+ xmax = chair["bbox"][2]
+ y_center_chair = (chair["bbox"][1] + chair["bbox"][3]) / 2
+
+ # Check if there is a person sitting on the chair
+ for person in people:
+ center_x = (person["bbox"][0] + person["bbox"][2]) / 2
+ person_y = person["bbox"][3]
+
+ if center_x >= xmin and center_x <= xmax and person_y > y_center_chair:
+ occupied = True
+ break
+
+ if not occupied:
+ area = xmax - xmin
+ output = (chair["bbox"][0] + chair["bbox"][2]) / 2
+ chair_q.put((-1*area, output, chair["bbox"][0],chair["bbox"][1],chair["bbox"][2],chair["bbox"][3]))
+
+
+# Check the queue (not the raw chair list) so we do not block when every chair is occupied
+if not chair_q.empty():
+ space, output, a,b,c,d = chair_q.get()
+ print(space)
+ return self.getAngle(output, frame.shape[1])
+
+```
+
+On the other hand, if there were no available chairs or the seats were all couches, a different method was used, since a large couch can have multiple people sitting on it. In that case, the largest available space should be selected. Therefore, a zero array with the size of the image width was created. Then, for each couch detected, the pixels occupied by people were marked as 1, according to their bounding boxes. Finally, the array was traversed between the bounds of the couch (xmin and xmax), scanning for the runs filled with 0s, which were added to another priority queue to find the largest available space.
+
+```python
+available_spaces = queue.PriorityQueue()
+for couch in couches:
+
+ # Bounds of the couch
+ couch_left = couch["bbox"][0]
+ couch_right = couch["bbox"][2]
+
+ # Create a space array
+ space = np.zeros(frame.shape[1], dtype=int)
+
+ # Fill the space with 1 if there is a person
+ for person in people:
+ xmin = person["bbox"][0]
+ xmax = person["bbox"][2]
+
+ space[xmin:xmax+1] = 1
+
+ # Set bounds pixels to 0
+ left = couch_left
+ space[couch_left] = 0
+ space[couch_right] = 0
+
+ # Traverse the space array to find empty spaces
+ for i in range(couch_left, couch_right):
+ if space[i] == 0:
+ if left is None:
+ left = i
+
+ else:
+ if left is not None:
+ available_spaces.put((-1*(i - left), left, i))
+ left = None
+
+
+ if left is not None:
+ available_spaces.put((-1*(couch_right - left), left, couch_right))
+
+print(f"Found {len(couches)} couches")
+
+# Get largest space, return center:
+if available_spaces.qsize() > 0:
+ max_space, left, right = available_spaces.get()
+ output = (left + right) / 2
+ print("Space found", output)
+ return self.getAngle(output, frame.shape[1])
+
+else:
+ print("No couch or chair found")
+ return -1
+```
diff --git a/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/ShelfDetection.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/ShelfDetection.md
new file mode 100644
index 0000000..3b74700
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Object Detection/ShelfDetection.md
@@ -0,0 +1,38 @@
+# Shelf Object detection
+
+For the Storing Groceries task, objects need to be stored in a shelf where each level holds items of the same category. Therefore, it was first necessary to identify the objects on the shelf and group them by level so that HRI could determine the category of each level.
+
+## Detection
+
+To detect the objects on the shelf, the `YOLO v8` model (80 classes) was combined with a `YOLO v5` model (360 objects). This way, an array of detections was obtained containing the bounding box, class, name and score of each object.
+
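+As a rough sketch of this step, assuming both models are loaded through the `ultralytics` interface (the weight file names and the dictionary keys below are illustrative):
+
+```python
+from ultralytics import YOLO
+
+# Weight files are placeholders for illustration
+model_coco = YOLO("yolov8n.pt")
+model_365 = YOLO("custom_yolov5_365.pt")
+
+
+def detect_objects(frame):
+    detections = []
+    for model in (model_coco, model_365):
+        for result in model(frame, verbose=False):
+            for box in result.boxes:
+                x1, y1, x2, y2 = [round(v) for v in box.xyxy[0].tolist()]
+                cls_id = int(box.cls[0].item())
+                detections.append({
+                    "bbox": (x1, y1, x2, y2),
+                    "class": cls_id,
+                    "name": model.names[cls_id],
+                    "score": float(box.conf[0].item()),
+                })
+    return detections
+```
+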
+## Clustering
+
+To group the objects by level, the `KMeans` clustering algorithm from [sklearn](https://scikit-learn.org/stable/) was used. The objects were clustered according to the minimum y-coordinate of their bounding boxes, since all objects on the same level should start at similar y-coordinates, whereas the center y-coordinate would vary if the objects had different sizes.
+
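+A minimal sketch of this grouping, assuming the detection dictionaries from the step above and a known (or estimated) number of shelf levels, here fixed to 3 for illustration:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+
+def group_by_level(detections, n_levels=3):
+    # Cluster on the top (minimum) y-coordinate of each bounding box
+    top_y = np.array([[det["bbox"][1]] for det in detections])
+    labels = KMeans(n_clusters=n_levels, n_init=10).fit_predict(top_y)
+
+    levels = {}
+    for det, label in zip(detections, labels):
+        levels.setdefault(int(label), []).append(det["name"])
+    return levels
+```
+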
+## Areas of improvement
+
+Combining several models worked for detecting a wide variety of objects. However, some classes would sometimes overlap and the results could contain the same object twice. Therefore, it would be ideal to use a model trained on the objects from the competition.
+
+## ROS Node
+
+The node begins the clustering process as soon as the service is called, receiving a boolean value and returning a message of type shelf, which contains an array of shelf_levels; each level contains a height value (the average of the heights in that level), a label, and the objects detected in that level (a string array).
+
+The ROS node is structured in the following way:
+
+### Subscriber topics
+
+- `/zed2/zed_node/left/image_rect_color`: ZED camera topic
+- `/zed2/zed_node/depth/depth_registered`: ZED depth topic
+- `/zed2/zed_node/depth/camera_info`: ZED camera info topic
+
+### Publisher topics
+
+- `/vision/3D_shelf_detection`: Publishes markers for 3D visualization in RVIZ.
+- `/vision/img_shelf_detection`: Publishes an annotated image for debugging and display purposes.
+
+### Service topics
+
+- `/vision/shelf_detector`: Of type `ShelfDetections`, receiving a bool and returning a message of type shelf.
+
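+As a hypothetical example of how the service could be consumed from another node (the package providing the service type and the response field names are assumptions based on the description above):
+
+```python
+import rospy
+# The service type name comes from the list above; the package providing it is assumed
+from vision.srv import ShelfDetections
+
+rospy.init_node("shelf_client")
+rospy.wait_for_service("/vision/shelf_detector")
+detect_shelf = rospy.ServiceProxy("/vision/shelf_detector", ShelfDetections)
+
+response = detect_shelf(True)
+# Field names are assumptions based on the description of the shelf message
+for level in response.levels:
+    print(level.label, level.height, list(level.objects))
+```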
+
diff --git a/docs/home/Aug 2023 - Jun 2024/Computer Vision/Utils/ZED_Simulation.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Utils/ZED_Simulation.md
new file mode 100644
index 0000000..3a1bd27
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/Utils/ZED_Simulation.md
@@ -0,0 +1,41 @@
+# ZED_Simulation
+
+Node to simulate the ZED camera topic from a webcam, useful for testing and debugging without needing the camera.
+
+```python
+#!/usr/bin/env python3
+import rospy
+
+import time
+from sensor_msgs.msg import Image
+import cv2
+from cv_bridge import CvBridge
+
+class ZedSimulation():
+
+ def __init__(self):
+ rospy.init_node('zed_simulation')
+ self.bridge = CvBridge()
+ self.image_pub = rospy.Publisher("/zed2/zed_node/rgb/image_rect_color", Image, queue_size=10)
+
+ def run(self):
+ while not rospy.is_shutdown():
+ cap = cv2.VideoCapture(0)
+
+ while cap.isOpened():
+ ret, frame = cap.read()
+
+ if not ret:
+ break
+
+ ros_image = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')
+ self.image_pub.publish(ros_image)
+ cv2.imshow('frame', frame)
+ if cv2.waitKey(1) & 0xFF == ord("q"):
+ break
+
+ cap.release()
+ cv2.destroyAllWindows()
+
+if __name__ == '__main__':
+    p = ZedSimulation()
+    p.run()
+```
\ No newline at end of file
diff --git a/docs/home/Aug 2023-Present/Computer Vision/index.md b/docs/home/Aug 2023 - Jun 2024/Computer Vision/index.md
similarity index 95%
rename from docs/home/Aug 2023-Present/Computer Vision/index.md
rename to docs/home/Aug 2023 - Jun 2024/Computer Vision/index.md
index 115fcd6..0b7bbe4 100644
--- a/docs/home/Aug 2023-Present/Computer Vision/index.md
+++ b/docs/home/Aug 2023 - Jun 2024/Computer Vision/index.md
@@ -1,5 +1,5 @@
# Computer Vision
-### Human Recognition
+### Human Analysis
- Replaced DeepFace for face_recognition from dlib, allowing for faster and more accurate face recognition.
- Developed a custom human attribute recognition using the PETA dataset.
diff --git a/docs/home/Aug 2023-Present/Electronics and Control/index.md b/docs/home/Aug 2023 - Jun 2024/Electronics and Control/index.md
similarity index 100%
rename from docs/home/Aug 2023-Present/Electronics and Control/index.md
rename to docs/home/Aug 2023 - Jun 2024/Electronics and Control/index.md
diff --git a/docs/home/Aug 2023 - Jun 2024/Human Robot Interaction/Human Physical Analysis/Face Following.md b/docs/home/Aug 2023 - Jun 2024/Human Robot Interaction/Human Physical Analysis/Face Following.md
new file mode 100644
index 0000000..e5054e8
--- /dev/null
+++ b/docs/home/Aug 2023 - Jun 2024/Human Robot Interaction/Human Physical Analysis/Face Following.md
@@ -0,0 +1,7 @@
+# Face following
+
+To allow for a more human-like interaction, the robot was programmed to follow a person's face when it receives new instructions. This was achieved by using the face [detection and recognition node](../../Computer%20Vision/Human%20Analysis/Face%20Recognition.md), which publishes the position of the largest face detected in the frame. This way, joints from the arm are adjusted to keep the person's face centered in the camera's field of view.
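+
+A simplified sketch of this centering logic is shown below; the topic names, the joint command interface and the gain value are assumptions for illustration:
+
+```python
+import rospy
+from geometry_msgs.msg import Point
+from std_msgs.msg import Float64
+
+IMAGE_WIDTH = 1280   # assumed camera resolution
+KP = 0.002           # proportional gain, purely illustrative
+
+rospy.init_node("face_following_sketch")
+# Hypothetical position controller for the arm joint used for panning
+pan_pub = rospy.Publisher("/arm/pan_joint_position_controller/command", Float64, queue_size=10)
+current_pan = 0.0
+
+def face_callback(face):
+    global current_pan
+    # Horizontal error between the detected face and the image center, in pixels
+    error_x = face.x - IMAGE_WIDTH / 2
+    # Adjust the pan joint proportionally to bring the face back to the center
+    current_pan -= KP * error_x
+    pan_pub.publish(Float64(current_pan))
+
+# Hypothetical topic where the face node publishes the largest face position
+rospy.Subscriber("/vision/face_position", Point, face_callback)
+rospy.spin()
+```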
+
+
+
+
-
+
-
-
export ROS_IP=YOUR_IP
-export ROS_MASTER_URI=JETSON_IP
-
-rosrun rviz rviz -d $(rospack find robot_description)/rviz/urdf.rviz
-
If new code was implemented outside of the jetson, the file(s) can be copied using the following command:
-# use -r (recursive flag) for folders
-scp -r SOURCE/ DESTINATION/
-# e.g pass files from laptop to jetson (run command on laptop terminal with ssh connected to jetson)
-scp -r /home/oscar/maze_ws/src/devices/ jetson@IP:/home/jetson/maze_ws/src/
-
Also, you may want to consider deleting the files from the jetson first before using scp with the new files:
- -Finally, use catkin_make to apply changes.
- - - - - - - -Here is an example of how we can se the behaviour of the usb ports.
-For the USB port automatization, we use udev rules. These rules are located in the rules.d
folder. The rules are loaded by the udev daemon when the system starts. The udev daemon monitors the kernel for events and executes the rules when a device is added or removed.
To automate the USB ports, we need to create a rule for each port. The rule will be executed when the port is connected to the computer. The rule will execute a script that will set the port to the desired mode.
-Here is the hard investigation we did to determine the rules:
-Simulation of a disaster area where the robot has to navigate through the majority of a maze, detect victims through different stimuli (visual images), and evade obstacles. The maze may have multiple floors and the robot must be autonomous.
-See the rules for Rescue Maze 2023.
-@Home is one of the main competitions for RoBorregos, since it contains a lot of the knowledge that we have acquired throughout the years. It's a complex competitions in multiple levels, which makes it a great challenge for the team.
-The competition consists of a series of tasks that the robot must complete. ...
-Computer Vision is one of the main areas of development for RoBorregos in the @Home competition. It is a very important area, since it is the one that allows the robot to perceive the environment and interact with it.
-Pose estimation was implemented using MediaPipe for the RoboCup 2022 @Home Simulation competition. The pose estimation algorithm is based on the MediaPipe Pose solution.
-It's very simple, acurate and fast. It's also very easy to use, since it's a pre-trained model that can be used directly.
-First of all, you need to install MediaPipe. You can do it by running the following command:
- -Then, you can use the following code to get the pose estimation:
-import mediapipe as mp
-
-# Calling the pose solution from MediaPipe
-mp_pose = mp.solutions.pose
-
-# Opening the image source to be used
-image = cv2.imread("image.jpg")
-
-# Calling the pose detection model
-with mp_pose.Pose(
- min_detection_confidence=0.5,
- min_tracking_confidence=0.5) as pose:
- # Detecting the pose with the image
- poseResult = pose.process(image)
-
As a result, you'll have a poseResult
array of points. That each point represent a joint of the body, as shown in the following image:
You can also use pose estimation with a webcam to get streamed video. You can use the following code to do it:
-import mediapipe as mp
-
-import cv2
-
-# Calling the pose solution from MediaPipe
-mp_pose = mp.solutions.pose
-
-# Calling the solution for image drawing from MediaPipe
-mp_drawing = mp.solutions.drawing_utils
-mp_drawing_styles = mp.solutions.drawing_styles
-
-
-# Opening the webcam
-cap = cv2.VideoCapture(0)
-
-# Calling the pose detection model
-with mp_pose.Pose(
- min_detection_confidence=0.5,
- min_tracking_confidence=0.5) as pose:
- # Looping through the webcam frames
- while cap.isOpened():
- # Reading the webcam frame
- success, image = cap.read()
- if success:
-
- # Managing the webcam frame
- image.flags.writeable = False
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
- # Detecting the pose with the image
- results = pose.process(image)
-
- # Drawing the pose detection results
- image.flags.writeable = True
- image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
- mp_drawing.draw_landmarks(
- image,
- results.pose_landmarks,
- mp_pose.POSE_CONNECTIONS,
- landmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())
- cv2.imshow('MediaPipe Pose', cv2.flip(image, 1))
- if cv2.waitKey(5) & 0xFF == 27:
- break
-cap.release()
-
As a result, you'll not only be able to get the pose estimation array. but also the stream with the drawing of the pose estimation.
-Example:
-You can receive the image source from a ROS topic. You can use the following code to do it:
-import mediapipe as mp
-from time import sleep
-from typing import Tuple
-import cv2
-import numpy as np
-import rospy
-from cv_bridge import CvBridge
-from sensor_msgs.msg import Image
-
-# Calling the pose solution from MediaPipe
-mp_pose = mp.solutions.pose
-
-# Calling the solution for image drawing from MediaPipe
-mp_drawing = mp.solutions.drawing_utils
-mp_drawing_styles = mp.solutions.drawing_styles
-
-# Declaring the CvBridge for image conversion from ROS to OpenCV
-bridge = CvBridge()
-
-# Declaring the image and its callback for the ROS topic
-imageReceved = None
-def image_callback(data):
- global imageReceved
- imageReceved = data
-
-# Initializing the ROS node
-rospy.init_node('ImageRecever', anonymous=True)
-
-# Subscribing to the ROS topic
-imageSub = rospy.Subscriber(
- "/hsrb/head_center_camera/image_raw", Image, image_callback)
-
-# Calling the pose detection model
-with mp_pose.Pose(
- min_detection_confidence=0.5,
- min_tracking_confidence=0.5) as pose:
- # Looping through the image frames
- while not rospy.is_shutdown():
- if imageReceved is not None:
- # Converting the ROS image to OpenCV
- image = bridge.imgmsg_to_cv2(imageReceved, "rgb8")
-
- # Detecting the pose with the image
- image.flags.writeable = False
- results = pose.process(image)
-
- # Drawing the pose detection results
- image.flags.writeable = True
- image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
- mp_drawing.draw_landmarks(
- image,
- results.pose_landmarks,
- mp_pose.POSE_CONNECTIONS,
- landmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())
-
-
- cv2.imshow('MediaPipe Pose', image)
- if cv2.waitKey(5) & 0xFF == 27:
- break
- else:
- print("Image not recived")
- sleep(1)
-
Here is an example of the result:
-This is the documentation for RoBorregos. Here you can find information about the team, the projects we have worked on, and the tools we use.
-RoBorregos is the Tecnológico de Monterrey's International Robotics Representative Team. We are a team of students passionate about robotics and technology that have several projects in which we participate. To learn more about us, visit our website.
-Name | -Github | -Role | -|
---|---|---|---|
Iván Romero | -i.wells.ar@gmail.com | -@IvanRomero03 | -Software Developer, Repo Mantainer and Automatization | -
Vic | -@gmail.com | -@ | -Software Developer, Repo Mantainer and Automatization | -
Kevin | -@gmail.com | -@ | -PM | -
This is the documentation for RoBorregos. Here you can find information about the team, the projects we have worked on, and the tools we use.
"},{"location":"#roborregos","title":"RoBorregos","text":"RoBorregos is the Tecnol\u00f3gico de Monterrey's International Robotics Representative Team. We are a team of students passionate about robotics and technology that have several projects in which we participate. To learn more about us, visit our website.
"},{"location":"#sections","title":"Sections","text":"Simulation of a disaster area where the robot has to navigate through the majority of a maze, detect victims through different stimuli (visual images), and evade obstacles. The maze may have multiple floors and the robot must be autonomous.
"},{"location":"RescueMaze/#competition","title":"Competition","text":"See the rules for Rescue Maze 2023.
"},{"location":"RescueMaze/#sections","title":"Sections","text":"roslaunch nav_main launch_jetson.launch\n
roslaunch exploration main\n
export ROS_IP=YOUR_IP\nexport ROS_MASTER_URI=JETSON_IP\n\nrosrun rviz rviz -d $(rospack find robot_description)/rviz/urdf.rviz\n
"},{"location":"RescueMaze/Jetson%20Nano/RunningJetson/#connecting-to-jetson-using-ssh","title":"Connecting to Jetson using SSH","text":"sudo nmap -sn YOUR_IP/24\nssh username_jetson@JETSON_IP\n
"},{"location":"RescueMaze/Jetson%20Nano/RunningJetson/#debug-using-teleop","title":"Debug using teleop","text":"rosrun teleop_twist_keyboard teleop_twist_keyboard.py _speed:=0.8 _turn:=2.4 _repeat_rate:=10\n
"},{"location":"RescueMaze/Jetson%20Nano/RunningJetson/#add-new-files-to-jetson","title":"Add new files to jetson","text":"If new code was implemented outside of the jetson, the file(s) can be copied using the following command:
# use -r (recursive flag) for folders\nscp -r SOURCE/ DESTINATION/\n# e.g pass files from laptop to jetson (run command on laptop terminal with ssh connected to jetson)\nscp -r /home/oscar/maze_ws/src/devices/ jetson@IP:/home/jetson/maze_ws/src/\n
Also, you may want to consider deleting the files from the jetson first before using scp with the new files:
# e.g. deleting devices folder before scp\n# in jetson\nrm -rf /home/jetson/maze_ws/src/devices\n
Finally, use catkin_make to apply changes.
cd ~/maze_ws\ncatkin_make\n
"},{"location":"RescueMaze/Jetson%20Nano/USBRules/","title":"USB Port automatization","text":""},{"location":"RescueMaze/Jetson%20Nano/USBRules/#this-file-is-part-of-the-roborregos-rescuemaze-project","title":"This file is part of the RoBorregos RescueMaze project.","text":"Here is an example of how we can se the behaviour of the usb ports.
"},{"location":"RescueMaze/Jetson%20Nano/USBRules/#udev-rules","title":"Udev Rules","text":"For the USB port automatization, we use udev rules. These rules are located in the rules.d
folder. The rules are loaded by the udev daemon when the system starts. The udev daemon monitors the kernel for events and executes the rules when a device is added or removed.
To automate the USB ports, we need to create a rule for each port. The rule will be executed when the port is connected to the computer. The rule will execute a script that will set the port to the desired mode.
Here is the hard investigation we did to determine the rules:
"},{"location":"home/","title":"@Home","text":"@Home is one of the main competitions for RoBorregos, since it contains a lot of the knowledge that we have acquired throughout the years. It's a complex competitions in multiple levels, which makes it a great challenge for the team.
"},{"location":"home/#competition","title":"Competition","text":"The competition consists of a series of tasks that the robot must complete. ...
"},{"location":"home/#sections","title":"Sections","text":"Computer Vision is one of the main areas of development for RoBorregos in the @Home competition. It is a very important area, since it is the one that allows the robot to perceive the environment and interact with it.
"},{"location":"home/vision/#sections","title":"Sections","text":"Pose estimation was implemented using MediaPipe for the RoboCup 2022 @Home Simulation competition. The pose estimation algorithm is based on the MediaPipe Pose solution.
It's very simple, acurate and fast. It's also very easy to use, since it's a pre-trained model that can be used directly.
"},{"location":"home/vision/pose_estimation/#how-to-use-it","title":"How to use it","text":"First of all, you need to install MediaPipe. You can do it by running the following command:
pip install mediapipe\n
Then, you can use the following code to get the pose estimation:
import mediapipe as mp\n# Calling the pose solution from MediaPipe\nmp_pose = mp.solutions.pose\n# Opening the image source to be used\nimage = cv2.imread(\"image.jpg\")\n# Calling the pose detection model\nwith mp_pose.Pose(\nmin_detection_confidence=0.5,\nmin_tracking_confidence=0.5) as pose:\n# Detecting the pose with the image\nposeResult = pose.process(image)\n
As a result, you'll have a poseResult
array of points. That each point represent a joint of the body, as shown in the following image:
You can also use pose estimation with a webcam to get streamed video. You can use the following code to do it:
import mediapipe as mp\nimport cv2\n# Calling the pose solution from MediaPipe\nmp_pose = mp.solutions.pose\n# Calling the solution for image drawing from MediaPipe\nmp_drawing = mp.solutions.drawing_utils\nmp_drawing_styles = mp.solutions.drawing_styles\n# Opening the webcam\ncap = cv2.VideoCapture(0)\n# Calling the pose detection model\nwith mp_pose.Pose(\nmin_detection_confidence=0.5,\nmin_tracking_confidence=0.5) as pose:\n# Looping through the webcam frames\nwhile cap.isOpened():\n# Reading the webcam frame\nsuccess, image = cap.read()\nif success:\n# Managing the webcam frame\nimage.flags.writeable = False\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n# Detecting the pose with the image\nresults = pose.process(image)\n# Drawing the pose detection results\nimage.flags.writeable = True\nimage = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)\nmp_drawing.draw_landmarks(\nimage,\nresults.pose_landmarks,\nmp_pose.POSE_CONNECTIONS,\nlandmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())\ncv2.imshow('MediaPipe Pose', cv2.flip(image, 1))\nif cv2.waitKey(5) & 0xFF == 27:\nbreak\ncap.release()\n
As a result, you'll not only be able to get the pose estimation array. but also the stream with the drawing of the pose estimation.
Example:
"},{"location":"home/vision/pose_estimation/#using-pose-estimation-with-ros","title":"Using pose estimation with ROS","text":"You can receive the image source from a ROS topic. You can use the following code to do it:
import mediapipe as mp\nfrom time import sleep\nfrom typing import Tuple\nimport cv2\nimport numpy as np\nimport rospy\nfrom cv_bridge import CvBridge\nfrom sensor_msgs.msg import Image\n# Calling the pose solution from MediaPipe\nmp_pose = mp.solutions.pose\n# Calling the solution for image drawing from MediaPipe\nmp_drawing = mp.solutions.drawing_utils\nmp_drawing_styles = mp.solutions.drawing_styles\n# Declaring the CvBridge for image conversion from ROS to OpenCV\nbridge = CvBridge()\n# Declaring the image and its callback for the ROS topic\nimageReceved = None\ndef image_callback(data):\nglobal imageReceved\nimageReceved = data\n# Initializing the ROS node\nrospy.init_node('ImageRecever', anonymous=True)\n# Subscribing to the ROS topic\nimageSub = rospy.Subscriber(\n\"/hsrb/head_center_camera/image_raw\", Image, image_callback)\n# Calling the pose detection model\nwith mp_pose.Pose(\nmin_detection_confidence=0.5,\nmin_tracking_confidence=0.5) as pose:\n# Looping through the image frames\nwhile not rospy.is_shutdown():\nif imageReceved is not None:\n# Converting the ROS image to OpenCV\nimage = bridge.imgmsg_to_cv2(imageReceved, \"rgb8\")\n# Detecting the pose with the image\nimage.flags.writeable = False\nresults = pose.process(image)\n# Drawing the pose detection results\nimage.flags.writeable = True\nimage = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)\nmp_drawing.draw_landmarks(\nimage,\nresults.pose_landmarks,\nmp_pose.POSE_CONNECTIONS,\nlandmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())\ncv2.imshow('MediaPipe Pose', image)\nif cv2.waitKey(5) & 0xFF == 27:\nbreak\nelse:\nprint(\"Image not recived\")\nsleep(1)\n
Here is an example of the result:
"},{"location":"soccer/","title":"@SoccerLightweight","text":"Simulation of a disaster area where the robot has to navigate through the majority of a maze, detect victims through different stimuli (visual images), and evade obstacles. The maze may have multiple floors and the robot must be autonomous.
"},{"location":"soccer/#competition","title":"Competition","text":"See the rules for Rescue Maze 2023.
"},{"location":"soccer/#sections","title":"Sections","text":"For the dribbler, a 1000KV brushless motor was used with its respective speed controller that is controlled through a PWM signal from the microcontroller.
"},{"location":"soccer/Electronics/General/","title":"General","text":""},{"location":"soccer/Electronics/General/#design-software","title":"Design Software","text":"For the PCB Design the EasyEda software was used. The robot electronics are made up of 5 custom PCBs designed specifically for a specific use. The design of the PCBs was carried out in the EasyEda software.
Initially, the design of the main PCB began. which has measurements of 10 x 10 cm.
For the main processing of our robot, an Arduino Mega Pro of 32 Bits and 2 Clocks of 16 MHz was used. It was mainly chosen for its light weight and its size reduction.
"},{"location":"soccer/Electronics/General/#drivers","title":"Drivers","text":"HP Pololu 12V Motors at 2200 RPM were used. Our tires are GTF Robots that were in our laboratory from past competitions. For these motors we use Mosfet TB6612FNG type drivers. These drivers are bridged in parallel on the PCB, since HP motors demand a lot of current, especially when forcing between robots, giving us a current continuity of 1.2 and peaks of up to 3 Amps.
"},{"location":"soccer/Electronics/General/#how-can-we-handle-this-motors-with-this-drivers","title":"How can we handle this motors with this drivers?","text":"By bridging the motor outputs and input signals, the output channel can be increased up to 2.4 Amps continuous and up to 6 Amps peak. What allows us to use HP motors without danger of having damage to our PCB, drivers, or in the worst case, the motors themselves.
"},{"location":"soccer/Electronics/General/#sensors","title":"Sensors","text":"For the control of the robot a BNO 055 gyroscope sensor was used due to its reliability, performance and size. Ultrasonics were used for speed regulation with respect to the detected distance to prevent the robot from getting out of line.
"},{"location":"soccer/Electronics/Power%20Supply/","title":"Power Supply","text":"3.7V LIPO batteries were used For the power supply, 3 lipo batteries of 3.7V at 2500 maH were used in a series arrangement, giving them approximately 11.1 v in nominal state.
For the logic supply, lipo batteries of 3.7V at 1200 maH were used in a serious arrangement, giving them approximately 7 V in nominal state.
"},{"location":"soccer/Electronics/Printed%20Circuit%20Boards%20%28PCB%29/","title":"PCB Designs","text":""},{"location":"soccer/Electronics/Printed%20Circuit%20Boards%20%28PCB%29/#ir-ring","title":"IR Ring","text":"Digital IR receivers: TSSP58038 were used to detect the IR signals emitted by the ball and a custom PCB was also designed.
The Ir ring is made up of 12 IR receivers, and an Atmega328p was used for processing and vectoring the infrared signals.
"},{"location":"soccer/Electronics/Printed%20Circuit%20Boards%20%28PCB%29/#line-detection-boards","title":"Line Detection Boards","text":"For the detection of the lines, independent PCBs were also designed for each of the three sides in the circumference of the robot.
These boards consist of 4 phototransistors each, getting an analog reading by processing our microcontroller on the main board.
The phototransistors we used were the TEPT5700. The reason we use these phototransistors is because of their gain, which allows the color reading to have a high difference range. In our case, when we detect white, the phototransistor gives a value close to 300 units. On the other hand, when it detects green, the value decreases to a value of 30, so we have a fairly reliable interval to distinguish between colors.
One setback with the choice of phototransistors was the color and incidence of light that had to be thrown at it. Initially the NJL7502L phototransistors were considered, but because these were in a deteriorated state they were discarded. Subsequently, we proceeded to search for those phototransistors that had a peak close to 600 nm, thus preventing them from detecting the infrared signal that is above 700 nm and causing us problems when detecting the lines.
"},{"location":"soccer/Electronics/Printed%20Circuit%20Boards%20%28PCB%29/#main-board","title":"Main Board","text":""},{"location":"soccer/Mechanics/More_Characteristics/1.Materials/","title":"Materials","text":"This is the list of mechanical materials we used in the final robot:
Name Use Product Image Robot Image ABS filament Printing of most CAD parts of the robot PLA Filament Printing of few CAD parts of the robot Male-female nylon spacers Used for interconnecting the CAD pieces 3x10mm flat-head steel screw Printing of most CAD parts of the robot 3x10mm flat-head steel screw Printing of most CAD parts of the robot"},{"location":"soccer/Mechanics/More_Characteristics/1.Materials/#breakdown-of-the-materials","title":"Breakdown of the materials","text":""},{"location":"soccer/Mechanics/More_Characteristics/1.Materials/#spacers-screws-and-nuts","title":"Spacers, screws and nuts","text":"The materials used to support the robot parts worked very well. The
"},{"location":"soccer/Mechanics/More_Characteristics/2.Tools/","title":"Tools","text":""},{"location":"soccer/Mechanics/Robot_Lower_Design/1.Base/","title":"Base","text":""},{"location":"soccer/Mechanics/Robot_Lower_Design/2.Motors/","title":"Motors","text":""},{"location":"soccer/Mechanics/Robot_Lower_Design/3.Wheels/","title":"Wheels","text":""},{"location":"soccer/Mechanics/Robot_Lower_Design/4.Dribbler/","title":"Dribbler","text":""},{"location":"soccer/Mechanics/Robot_Lower_Design/5.Kicker/","title":"Kicker","text":""},{"location":"soccer/Programming/General/","title":"General","text":""},{"location":"soccer/Programming/General/#tools","title":"Tools","text":"The main tools used to program the robot are:
During the regional competition, we decided to have two attacking robots running the same code due to time constraints and other setbacks. However, we learned that this was not a good strategy, since both robots would sometimes crash into each other while searching for the ball, making scoring very difficult. Therefore, for the national competition we chose to develop an attacking robot and a defending robot. Ideally, both robots would be able to swap roles during the game; however, the defending robot had its camera facing backwards, so this was not possible.
It is also important to mention that the code was structured as a state machine, only advancing to the next state once the previous one was completed. This was necessary to enforce certain priorities; for example, the attacking robot should first check that it isn't on a line before doing anything else.
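A minimal sketch of how such a priority-driven state machine could look for the attacking robot. The state names, helpers, and transitions are hypothetical; only the line-check-first priority is taken from the text above.

```cpp
// Hypothetical states mirroring the priorities described above.
enum class State { CheckLine, SearchBall, GetPossession, GoToGoal };

State state = State::CheckLine;

bool onLine();         // assumed helpers, implemented elsewhere
bool hasPossession();

void stepAttacker() {
  switch (state) {
    case State::CheckLine:
      // Highest priority: never act while standing on a field line.
      state = onLine() ? State::CheckLine : State::SearchBall;
      break;
    case State::SearchBall:
      // Turn/translate towards the ball angle given by the IR ring.
      state = State::GetPossession;
      break;
    case State::GetPossession:
      // Approach until the possession sensor confirms the ball is held.
      state = hasPossession() ? State::GoToGoal : State::CheckLine;
      break;
    case State::GoToGoal:
      // Drive towards the goal reported by the camera, then start over.
      state = State::CheckLine;
      break;
  }
}
```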
"},{"location":"soccer/Programming/General/#algorithm","title":"Algorithm","text":""},{"location":"soccer/Programming/General/#attacking-robot","title":"Attacking Robot","text":"The main objective of this robot was to gain possesion of the ball using the dribbler as fast as possible and then go towards the goal using vision. Therefore, the algorithm used is the following:
"},{"location":"soccer/Programming/General/#defending-robot","title":"Defending Robot","text":"On the other hand, the defending robot should always stay near the goal and go towards it if the ball is in a 20cm radius. The algorithm for this robot is shown in the following image:
"},{"location":"soccer/Programming/IR_Detection/","title":"IR Detection","text":"For the robot's movements, it was very important to know both the angle and the distance to the ball, so an IR ring was made with 12 IR receivers. However, before calculating these values, it was first necessary to obtain the pulse width of all TPS-58038s, which should be obtained during a ball emission cycle. However, this would imply a time of \\(833 \\mu s \\times 12\\) sensors. Therefore, a better approach was to obtain a reading from all the sensors all at once and repeat this process during the cycle \\((833 \\mu s)\\). Once the method was chosen, the distance and angle had to be calculated. Nonetheless, each had different methods:
"},{"location":"soccer/Programming/IR_Detection/#angle","title":"Angle","text":"To obtain the angle towards the ball, there were two main options:
Since we wanted more precise values for the angle, we chose to use vector addition to obtain the resulting vector and, hence, its angle. This was possible because each sensor has a unit vector according to its position in the ring, with the pulse width as its magnitude. Therefore, after getting all sensor values in a cycle, the vectors were added using their x and y components, and the angle was finally obtained as the inverse tangent of the resulting vector.
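A minimal sketch of that vector-addition step, assuming the 12 pulse widths from the previous read are available and the receivers are spaced every 30 degrees around the ring.

```cpp
#include <math.h>

// Each receiver contributes a unit vector scaled by its pulse width;
// the ball angle is the arctangent of the summed vector.
float ballAngleDegrees(const int pulseWidth[12]) {
  float x = 0.0f, y = 0.0f;
  for (int i = 0; i < 12; i++) {
    float sensorAngle = i * 30.0f * (float)M_PI / 180.0f;  // sensor position in the ring
    x += pulseWidth[i] * cos(sensorAngle);
    y += pulseWidth[i] * sin(sensorAngle);
  }
  return atan2(y, x) * 180.0f / (float)M_PI;  // resulting angle in degrees
}
```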
"},{"location":"soccer/Programming/IR_Detection/#distance","title":"Distance","text":"For the distance, there were also different methods:
Even though vector calculation seemed to be the best method, we faced several issues: it was not very consistent, and the resulting value usually only varied within a range from 20 to 40. In addition, the first and second methods were also inefficient by themselves, likewise producing a small distance range. Therefore, both options were combined, which provided the best results, giving a distance range from 20 to 100.
It is also important to mention that we used the research of the Yunit team (2017) as a reference. Several tests were carried out, and their conclusions can be found in the following document: IR Research.
"},{"location":"soccer/Programming/IR_Detection/#detect-possession","title":"Detect Possession","text":"Ideally, we wanted to use the IR Ring to check if the robots had possesion of the ball. However during some tests, we discovered that due to the bright colors, the goals could reflect infrared light emitted by the ball. Therefore, the IR Ring was placed on the robot at a height slightly above the goals to avoid reflections. Nonetheless, this did not allow us to get precise distance measurements when the ball was very close, so we could'nt know if we had possesion or not. For this reason, another phototransistor was used with the only purpose to determine ball possesion. Similarly to the ring, the sensor counts the readings per cycle to determine the pulse width. However, to reduce noise and get more stable measurements, an Exponential Moving Average (EMA).
int detect() {\nint pulseWidth = 0;\nint deltaPulseWidth = 5;\nconst unsigned long startTime_us = micros();\ndo { filterAnalog.AddValue(analogRead(Constantes::analogo)); //Add value to filter\nif(filterAnalog.GetLowPass() > 700) { //Only consider the reading if the \npulseWidth += deltaPulseWidth; //infrared emission is actually significant\n}\n} while((micros() - startTime_us) < 833);\nreturn pulseWidth;\n}\n
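The `filterAnalog` object above is the EMA/low-pass filter, whose implementation is not shown. A minimal version consistent with the `AddValue()`/`GetLowPass()` calls could look like the following sketch; the smoothing constant is an assumption, not the team's actual value.

```cpp
// Minimal exponential moving average matching the AddValue()/GetLowPass()
// interface used in detect().
class EmaFilter {
 public:
  explicit EmaFilter(float alpha = 0.3f) : alpha_(alpha), value_(0.0f), initialized_(false) {}

  void AddValue(int raw) {
    if (!initialized_) { value_ = raw; initialized_ = true; }  // seed with the first sample
    else value_ = alpha_ * raw + (1.0f - alpha_) * value_;     // EMA update
  }

  float GetLowPass() const { return value_; }

 private:
  float alpha_;        // smoothing constant (illustrative)
  float value_;        // current filtered value
  bool initialized_;
};

EmaFilter filterAnalog;  // used as in detect() above
```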
"},{"location":"soccer/Programming/Movement/","title":"Movement","text":""},{"location":"soccer/Programming/Movement/#holonomic-movemnt","title":"Holonomic movemnt","text":"In order to take advantage of the holonomic drivetrain, the robot had to be able to move in any direction given the desired angle. Therefore, a kinematic model was used in order to determine the speed of each motor. In addition, part of our strategy required our robots to be always facing towards the goal, so the movement had to be corrected if the orientation changed. For this, the BNO-055 was used to get the current orientation and with a simplified PID controller (only P), the error was corrected. The following image shows the kinematic ecuations and corrections implemented:
double PID::getError(int target, int cur, int speed) {\nerror = abs(target - cur);\nerror = min(error, 100); //Limit error to have a max value of 100\nerror *= kP; //Constant of proportionality\nreturn error;\n}\n
void Motors::moveToAngle(int degree, int speed, int error) {\n//Define each speed (values from 0-1)\nfloat m1 = cos(((150 - degree) * PI / 180));\nfloat m2 = cos(((30 - degree) * PI / 180));\nfloat m3 = cos(((270 - degree) * PI / 180));\n//Multiply by given speed (0-255)\nint speedA = (int(m1 * speed));\nint speedB = (int(m2 * speed));\nint speedC = (int(m3 * speed));\n//Add error\nspeedA += error;\nspeedB += error;\nspeedC += error;\n//Define absolute values\nint abSpeedA = abs(speedA);\nint abSpeedB = abs(speedB);\nint abSpeedC = abs(speedC);\n//Normalize values (to not exceed 255)\nint maxSpeed = max(abSpeedA, max(abSpeedB, abSpeedC));\nif (maxSpeed > 255) {\nabSpeedA = map(abSpeedA, 0, maxSpeed, 0, 255);\nabSpeedB = map(abSpeedB, 0, maxSpeed, 0, 255);\nabSpeedC = map(abSpeedC, 0, maxSpeed, 0, 255);\n}\n//Set speed to each motor\nanalogWrite(motor1.getPwmPin(), abSpeedA);\nanalogWrite(motor2.getPwmPin(), abSpeedB);\nanalogWrite(motor3.getPwmPin(), abSpeedC);\n//Move motors depending on the direction needed\n(speedA >= 0) ? motor1.motorForward() : motor1.motorBackward();\n(speedB >= 0) ? motor2.motorForward() : motor2.motorBackward();\n(speedC >= 0) ? motor3.motorForward() : motor3.motorBackward();\n}\n
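As a hypothetical usage example (not taken from the original code), the two functions above could be combined in the main loop roughly like this, assuming `pid` and `motors` objects of the classes shown exist and the current heading comes from the BNO-055.

```cpp
// Hypothetical glue code: keep the robot facing the goal (target heading 0)
// while translating towards the ball angle from the IR ring.
void controlStep(PID &pid, Motors &motors, int currentHeading, int ballAngle) {
  int correction = (int)pid.getError(0, currentHeading, 0);  // P-only heading correction
  motors.moveToAngle(ballAngle, 200, correction);            // translate at speed 200 (assumed)
}
```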
"},{"location":"soccer/Programming/Movement/#attacking-robot","title":"Attacking robot","text":"In order to take advantage of the HP motors, ideally, the robot should go as fast as possible, however, after a lot of testing, we found that the robot was not able to fully control the ball at high speeds, as it would usually push the ball out of bounds instead of getting it with the dribbler. Therefore, the speed was regulated depending on the distance to the ball (measured with the IR ring) using the following function:
\\(v(r) = 1.087 + 1/(r-11.5)\\), where \\(r\\) is the distance to the ball \\(\\in [0.00,10.0]\\)
This equation was experimentally established with the goal of keeping speed at maximum until the robot gets very close to the ball, when the speed is quickly reduced.
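A direct sketch of that regulation function is shown below; the formula is the one given above, while the clamping of \(r\) to its stated domain and the scaling of the result to a 0-255 PWM value are illustrative assumptions.

```cpp
// v(r) = 1.087 + 1/(r - 11.5), with r the IR-ring distance value in [0.0, 10.0].
int regulatedSpeed(float r) {
  if (r < 0.0f) r = 0.0f;
  if (r > 10.0f) r = 10.0f;                // keep r inside the stated domain
  float v = 1.087f + 1.0f / (r - 11.5f);   // the denominator never reaches zero in this range
  if (v < 0.0f) v = 0.0f;
  if (v > 1.0f) v = 1.0f;
  return (int)(v * 255.0f);                // scale the 0-1 factor to a PWM value (assumed)
}
```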
"},{"location":"soccer/Programming/Movement/#defending-robot","title":"Defending robot","text":"The idea for this robot was to keep it on the line line of the goal, always looking to keep the ball in front of it to block any goal attempts.Therefore, speed was regulated according to the angle and x-component to the ball. This meant that if the ball was in front of it, then it didn't have to move. However if the ball was far to the right or left, then speed had to be increased proportionally to the x-component of the ball, as shown in the following image:
"},{"location":"soccer/Programming/Vision/","title":"Vision","text":""},{"location":"soccer/Programming/Vision/#target-detection","title":"Target detection","text":"For goal detections, an Open MV H7 camera was used. Using the Open MV IDE, blob color detection was possible using micropyhton scripts. With this, the bounding box was identified and sent to the arduino. This measures were then used by both robots when going towards the goal or estimating the distance to the goal.
"},{"location":"soccer/Programming/Vision/#uart-communication","title":"UART communication","text":"When sending information to the arduino, we faced several issues as the program would sometimes get stuck. We eventually realized that this was due to the protocol that we were using, as the buffer would sometimes receive an incomplete message and get an error when trying to process it. Therefore, to solve this issue, we changed the way to send and receive messages. In python the message was sent in the following format:
uart.write(f\"{tag},{x},{y},{w},{h}\\n\")\n
Here, the tag value was either an a or a b according to the color of the goal, the x and y values were the center of the blob, and w and h were the width and height. This message was then received by the Arduino on the Serial3 port, using the following code:
void updateGoals() {\nif (Serial3.available()) {\nString input1 = Serial3.readStringUntil('\\n');\nif (input1[0] == 'a')\nyellowGoal.update(input1);\nelse if (input1[0] == 'b')\nblueGoal.update(input1);\n}\n}\n
In this first function, we received the bounding box; if the message began with an a, the yellow goal object was updated, and likewise the blue goal if it began with a b. It was important to first identify where the message began, since, due to the buffer size, messages could sometimes be cut and start with numbers or commas. Then, for the object update, the following code was used:
void Goal::update(String str) {\nint arr[4];\nString data = \"\";\nint index = 0;\nfor (int i = 2; i < str.length() && index < 4; i++) {\nif (str[i] != ',') {\ndata += str[i]; //Accumulate digits of the current value\n} else {\narr[index++] = data.toInt(); //Comma found: store the completed value\ndata = \"\";\n}\n}\n//Assign the parsed values once the whole message has been read\nx = arr[0];\ny = arr[1];\nw = arr[2];\nh = data.toInt(); //The last value has no trailing comma\narea = w * h;\n}\n
In this function, it was important for the loop to run until either the end of the string was reached or the index indicated that all 4 values had been read. Keeping this counter mattered because, once again, due to the buffer size, messages could be cut and combined with other values, resulting in a longer string with more commas that would eventually cause an error."},{"location":"util/markdown/","title":"Getting Started with Markdown","text":"Markdown is a simple markup language that allows you to write using a simple syntax. It's used in many places because of how easy it is to use, understand and read.
"},{"location":"util/markdown/#headings","title":"Headings","text":"Headings are created using the #
symbol. The more #
you use, the smaller the heading will be.
Example:
# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n
"},{"location":"util/markdown/#text","title":"Text","text":"Text is written as it is. You can use bold and italic text. You can also use ~~strikethrough~~ text.
Example:
This is a normal text. You can use **bold** and *italic* text. You can also use ~~strikethrough~~ text. \n
"},{"location":"util/markdown/#lists","title":"Lists","text":"You can create lists using the -
symbol. You can also create numbered lists using the 1.
symbol.
Example:
- Item 1\n- Item 2\n - Item 2.1\n - Item 2.2\n1. Item 1\n2. Item 2\n 1. Item 2.1\n 2. Item 2.2\n
Example output: You can create links using the [text](link)
syntax.
Example:
[RoBorregos](\n https://www.roborregos.com\n)\n
Example output: RoBorregos"},{"location":"util/markdown/#images","title":"Images","text":"Similar to links, you can add images using the 
syntax.
Example:
\n
Example output: "},{"location":"util/markdown/#code","title":"Code","text":"You can add code using the `
symbol. You can also add code blocks using the ``` symbol.
Example:
`print(\"Hello World\")`\n
Example output: print(\"Hello World\")
Example:
```python\nprint(\"Hello World\")\n ```\n
Example output: print(\"Hello World\")\n
"},{"location":"util/markdown/#tables","title":"Tables","text":"You can create tables using the |
symbol.
Example:
| Name | Email | Role |\n| ---- | ----- | ---- |\n| Ivan | [i.wells.ar@gmail.com](mailto:i.wells.ar@gmail.com) | Software Developer, Repo Maintainer and Automation |\n
Example output:
Name Email Role Ivan i.wells.ar@gmail.com Software Developer, Repo Maintainer and Automation"},{"location":"util/markdown/#quotes","title":"Quotes","text":"You can create quotes using the >
symbol.
Example:
> This is a quote\n
Example output: This is a quote
"},{"location":"util/markdown/#horizontal-rule","title":"Horizontal Rule","text":"You can create a horizontal rule using the ---
symbol.
Example:
---\n
Example output:"},{"location":"util/markdown/#todo-list","title":"ToDo List","text":"You can create a task list using the - [ ]
symbol.
Example:
- [ ] ToDo \n- [x] Done ToDo\n
Example output: