
Onboarding

For the computer vision area, here are some recommendations to get started:

Python3

Python is the main language used in the vision area, so it is a good idea to get familiar with it if you are new to the language. You can check the following resources:
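To give a feel for the kind of Python you will write day to day, here is a short snippet with a few common idioms (list comprehensions, f-strings, dictionaries), using made-up detection data:

```python
# A few Python idioms worth knowing before diving into vision code.

detections = [("person", 0.91), ("chair", 0.45), ("person", 0.62)]

# List comprehension: keep only confident detections
confident = [(label, conf) for label, conf in detections if conf > 0.5]

# f-strings for readable formatting
for label, conf in confident:
    print(f"{label}: {conf:.0%}")

# Dictionaries: count detections per label
counts = {}
for label, _ in detections:
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'person': 2, 'chair': 1}
```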

Vision tools

Here are some of the libraries and tools used in the vision area:

A recommended exercise is to use OpenCV and YOLO to detect a person and draw the bounding box.
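The exercise could be sketched roughly like this. It assumes the `ultralytics` and `opencv-python` packages are installed; the model name (`yolov8n.pt`) and COCO class id 0 for "person" follow ultralytics conventions, so double-check them against the version you install:

```python
# Sketch of the exercise: detect people with YOLO and draw bounding boxes.
# Assumes `pip install ultralytics opencv-python`; "yolov8n.pt" and class id 0
# ("person") are ultralytics/COCO conventions -- verify for your version.

def person_boxes(detections, person_class=0):
    """Keep only 'person' boxes from (class_id, x1, y1, x2, y2) tuples."""
    return [(x1, y1, x2, y2) for cls, x1, y1, x2, y2 in detections
            if cls == person_class]

def run_demo():
    """Call this to run the live webcam demo (press 'q' to quit)."""
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")   # downloads the weights on first run
    cap = cv2.VideoCapture(0)    # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame)[0]
        dets = [(int(b.cls), *map(int, b.xyxy[0])) for b in result.boxes]
        for x1, y1, x2, y2 in person_boxes(dets):
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("people", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Separating the filtering logic (`person_boxes`) from the camera loop makes it easy to test without a webcam.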

Terminal

You will probably be using the terminal a lot, so it is recommended to get familiar with the most common commands. Here are some resources:
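As a quick taste, here are a few of the everyday commands you will run constantly, executed in a throwaway directory so nothing important is touched:

```shell
cd "$(mktemp -d)"                     # create and enter a temporary directory
mkdir data                            # make a directory
echo "hello robot" > data/notes.txt   # write a file
ls data                               # list the directory's contents
grep robot data/notes.txt             # search inside a file
cat data/notes.txt                    # print the whole file
```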

ROS2

We currently use the ROS2 (Humble) framework. This is a very useful tool in robotics, used mostly for communication between different modules and devices. It allows us to create nodes that can publish and subscribe to topics, making it easier to integrate the different components of the robot.
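To get a feel for the publish/subscribe pattern before installing anything, here is the topic idea in miniature. Note that this is plain Python, not ROS2 itself; a real node would use `rclpy` (covered in the tutorials below):

```python
# Miniature publish/subscribe, illustrating the topic concept used by ROS2.
# This is plain Python, not ROS2: real nodes are written with rclpy.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        # A node registers a callback to receive every message on the topic.
        self.subscribers.append(callback)

    def publish(self, message):
        # The publisher does not know who is listening; it just sends.
        for callback in self.subscribers:
            callback(message)

received = []
detections = Topic("/vision/detections")      # hypothetical topic name
detections.subscribe(received.append)         # e.g. a navigation node listening
detections.publish({"label": "person", "box": (10, 20, 110, 220)})
print(received)
```

The key point is the decoupling: the publisher and subscriber only share the topic, never a direct reference to each other, which is what makes it easy to swap modules in and out.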

It is recommended to install Ubuntu to use ROS2. It is also possible to run it with Docker, but that is not recommended when starting out. Make sure to check the official documentation for installation and tutorials:

Also, make sure to check out how we use ROS2 in the home2 repo. If possible, try to run some of the examples:

Docker

Docker is a tool that we use to run the modules in containers. This gives us a consistent environment across different machines and gets everything set up and ready to use. It is not necessary to understand it at first, because we already have bash scripts that run everything automatically, but it is a powerful tool worth learning.
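For a rough idea of what a container definition looks like, here is a hypothetical minimal Dockerfile for a Python module. This is not the actual home2 setup (the file names and base image are assumptions for illustration); see the repo's own Dockerfiles and scripts for the real thing:

```dockerfile
# Hypothetical minimal image for a Python vision module (illustrative only;
# the home2 repo has its own Dockerfiles and run scripts).
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the module's source code into the image
COPY . .

CMD ["python", "main.py"]
```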

You can check the official documentation for installation and tutorials:

Then, check out how to run the vision module using Docker in the home2 repo:

To ensure everything runs properly, try to run the zed-simulator (this will simulate the ZED camera using your own webcam) and the face recognition node.

Check out the docs

Finally, check out our documentation to see the different subareas, current implementations and areas of improvement.