YAN, Yamin
January 18, 2023 2:45 pm
The project sounds interesting! What are the biggest novelties of this project in your view?
This project is a collaboration with group SL-05b to construct one finished product; our group focuses on visual recognition and the robot-arm engagement module.
The main novelty of this project is the integration and interaction between its different parts: the modules are designed and built both bottom-up and top-down so that the delivery vehicle can easily be refitted for different purposes. Another novelty is cross-floor package delivery by an offline robot that rides the elevator, which, to the best of our knowledge, no other product has achieved (existing products require the robot to stay connected online). The third novelty is the use of deep-learning-based computer vision: integrating a recognition model into the robot enables high-level tasks such as object localization and scene segmentation. The underlying potential is substantial.
Awesome.
What is the progress of the hardware implementation and software development, respectively?
Hardware section:
Our partner group is in charge of the hardware implementation of the vehicle mechanics, so here we report the current progress of the robotic arm module. We have implemented and tested the arm's basic functions, such as grabbing and sorting. We plan to guide the arm's button-pressing engagement once we have solved the problem of estimating the relative distance between the camera and the target.
Software section:
We have completed a QR-code scanning and localization program that detects a QR code in the camera's video stream and provides the location of the elevator buttons that the robotic arm is to press. We have also completed a basic AI-based object recognition program, which performs a preliminary environment scan in combination with the SLAM navigation module.
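For illustration, the core of the QR-code step looks roughly like the sketch below. It assumes OpenCV's QRCodeDetector; the camera index and the way the detected centre is handed to the arm planner are placeholders, not our exact implementation.

# Minimal sketch of the QR-code scanning/localization step: grab frames from
# the camera, detect/decode a QR code, and report its corner points and centre
# in pixel coordinates. `camera_index` and the use of the centre are illustrative.
import cv2

def locate_button_qr(camera_index: int = 0):
    """Yield (decoded_text, corner_points, center_pixel) for frames containing a QR code."""
    cap = cv2.VideoCapture(camera_index)
    detector = cv2.QRCodeDetector()
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            data, points, _ = detector.detectAndDecode(frame)
            if points is not None and data:
                corners = points.reshape(-1, 2)   # 4 corner pixels of the code
                center = corners.mean(axis=0)     # pixel position passed on to the arm planner
                yield data, corners, center
    finally:
        cap.release()

# Usage: for text, corners, center in locate_button_qr(): ...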
What is the biggest challenge?
As the group in charge of visual recognition and robotic-arm engagement, we encountered many difficulties, including comparing and choosing among different SLAM models, optimizing the elevator-button recognition algorithm, building the mechanical structures (the robotic arm and the hardware frame), and integrating the software and hardware parts. The most difficult part is measuring the relative position between the camera and the elevator buttons and then guiding the robotic arm to press them. There is no existing package that estimates this relative position for us, so we are trying to implement it ourselves using the camera's focal length and the apparent image size of the target. This approach is not yet accurate enough, so further investigation is needed.
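To make the geometry concrete, the estimate we are experimenting with follows the pinhole camera model: distance ≈ focal length (in pixels) × real code width ÷ measured width in pixels. The numbers in the sketch below are placeholder values, not our calibrated parameters.

# Hedged sketch of the focal-length-based distance estimate described above.
# FOCAL_LENGTH_PX and QR_REAL_WIDTH_M are assumed values; the real ones come
# from camera calibration and the printed code's physical size.
FOCAL_LENGTH_PX = 600.0   # assumed focal length in pixels (from calibration)
QR_REAL_WIDTH_M = 0.05    # assumed printed QR-code width in metres

def estimate_distance(qr_width_px: float,
                      focal_length_px: float = FOCAL_LENGTH_PX,
                      qr_real_width_m: float = QR_REAL_WIDTH_M) -> float:
    """Approximate camera-to-code distance along the optical axis, in metres."""
    return focal_length_px * qr_real_width_m / qr_width_px

# Example: a 50 mm code spanning 120 px in the image is roughly 0.25 m away.
print(estimate_distance(120.0))   # -> 0.25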