Wow! So cool that you used this on Sophia the robot!
Thank you so much for the appreciation!
What hardware did you use? I see you tried to run the code on Ubuntu.
I use Arch btw.
Hi,
Here are some system requirements for Sophia Robot.
OS: Ubuntu 16.04 (amd64) or later LTS
Processor: Quad Core 3.5 GHz or better
Memory: 16 GB RAM
GPU: Nvidia GeForce GTX 1050 or better
Network: Broadband Internet connection
Storage: 35 GB available space
Other ports available on host:
1024, 1025, 1026, 3306, 5555, 5556, 8000, 8103,
8104, 8105, 8106, 8110, 8111, 9002, 9090, 10001, 11311
Our system configuration is listed below:
OS: Ubuntu 18.04.5 LTS (64-bit)
Processor: Intel Core i5-7400 CPU @ 3.00GHz × 4
GPU: Nvidia GeForce GTX 1060 6GB/PCIe/SSE2
Storage: 313.9 GB
I hope this answers the question.
Do you use GPIO to control the robot, or is it wireless?
Hi,
We are using GPIO to control the robot through a USB port. The motors can be controlled by a set of Python code provided by the manufacturer: we control a motor by specifying the USB port, motor ID, goal position, and some other parameters. Alternatively, we can also use the SDK provided by Hanson Robotics to control the robot after proper calibration.
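For illustration, here is a minimal sketch of that kind of position command. It assumes Dynamixel-style servos driven through the ROBOTIS dynamixel_sdk Python package; the port name, baud rate, motor ID, and control-table addresses below are placeholder assumptions, not our actual configuration.

from dynamixel_sdk import PortHandler, PacketHandler

PORT = "/dev/ttyUSB0"       # USB port (placeholder)
BAUD = 1000000              # baud rate (placeholder)
DXL_ID = 1                  # motor ID (placeholder)
ADDR_TORQUE_ENABLE = 64     # X-series Protocol 2.0 control table (example)
ADDR_GOAL_POSITION = 116
GOAL_POSITION = 2048        # goal position in encoder ticks (placeholder)

port = PortHandler(PORT)
packet = PacketHandler(2.0)  # protocol version

port.openPort()
port.setBaudRate(BAUD)

# Enable torque, then write the goal position to the motor's control table.
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
result, error = packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, GOAL_POSITION)
if result != 0 or error != 0:
    print("write failed:", result, error)

port.closePort()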
CAO, Xuanyu
January 19, 2022 3:08 pm
What algorithm did you use to extract and utilize the eye-gaze signals?
Hi,
For extraction, we will try two methods, MPIIGaze and GEDDnet; both return the pitch and yaw of the person's gaze.
Here are some references:
1. Chen, Z. and Shi, B., 2020. GEDDnet: A Network for Gaze Estimation with Dilation and Decomposition. arXiv preprint arXiv:2001.09284
2. https://github.com/hysts/pytorch_mpiigaze
Currently, we are not integrating any existing algorithm to utilize these signals. We will develop our own algorithm to make use of the pitch and yaw values so that the eye motors can saccade to a specific person (the one who makes eye contact with the robot).
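As a rough illustration of the direction we have in mind, a hypothetical eye-contact test could look like the snippet below; the angle convention and the 10-degree tolerance are assumptions for the sketch, not values from our pipeline.

def is_making_eye_contact(pitch_deg, yaw_deg, tolerance_deg=10.0):
    # The estimator (MPIIGaze or GEDDnet) returns gaze pitch and yaw.
    # Assuming both are in degrees relative to the camera axis, a person
    # looking straight at the robot's camera has both angles near zero.
    return abs(pitch_deg) < tolerance_deg and abs(yaw_deg) < tolerance_deg

A person flagged this way would then become the target the eye motors saccade to.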
Thanks a lot.
SINGH, Dilsher
January 19, 2022 2:28 pm
Hello Param,
I had a question about the wide range of techniques brought together here. With that in mind, what is the status of the position feedback and load feedback updates? Are they linear or progressive, and what is the time complexity?
Hi Dilsher,
During calibration, we compare the current load feedback to a predetermined load limit while the motor is moving. The position feedback is used to determine the next goal position in the calibration process, as well as to locate the position limit. We are trying both linear and progressive updates, and the complexity should be on the order of O(n).
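As a sketch of that loop (reusing the hypothetical dynamixel_sdk setup from the GPIO reply above; the control-table addresses, load limit, and step size are placeholders), each step performs one position write and one load read, which is why the scan is O(n) in the number of steps:

import time

ADDR_GOAL_POSITION = 116      # X-series Protocol 2.0 addresses (example)
ADDR_PRESENT_LOAD = 126
ADDR_PRESENT_POSITION = 132
LOAD_LIMIT = 200              # assumed raw load threshold
STEP = 10                     # encoder ticks per calibration step

def find_position_limit(packet, port, dxl_id):
    # Step the motor in one direction until the load feedback exceeds the
    # predetermined load limit; the last safe position is the position limit.
    pos, _, _ = packet.read4ByteTxRx(port, dxl_id, ADDR_PRESENT_POSITION)
    while True:
        load, _, _ = packet.read2ByteTxRx(port, dxl_id, ADDR_PRESENT_LOAD)
        if load > LOAD_LIMIT:          # load limit reached: stop here
            return pos
        pos += STEP                    # next goal from the position feedback
        packet.write4ByteTxRx(port, dxl_id, ADDR_GOAL_POSITION, pos)
        time.sleep(0.05)               # let the motor move before re-reading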
JAIN, Prabhansh
January 19, 2022 2:28 pm
Hi Param,
May I know which technologies were used in this project?
Thank you
Hi Prabhansh,
Thanks for your question.
An Active Efficient Coding (AEC) algorithm is used to complete the saccade calibration. Given a reference point in the video stream captured by the eye camera, the robot tries to saccade the eye motors to that point; a rough sketch follows the reference below.
Reference:
Q. Zhu, J. Triesch and B. E. Shi, "Integration of Vergence, Cyclovergence, and Saccades through Active Efficient Coding," 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2020, pp. 1-6, doi: 10.1109/ICDL-EpiRob48136.2020.9278126.
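The learned AEC policy itself does not fit in a comment, but a rough geometric sketch of a single saccade step is below; the camera resolution, focal length, and ticks-per-degree conversion are placeholder assumptions.

import math

IMG_W, IMG_H = 640, 480    # assumed eye-camera resolution
FOCAL_PX = 600.0           # assumed focal length in pixels
TICKS_PER_DEG = 11.38      # assumed motor resolution (placeholder)

def saccade_command(u, v):
    # Map a reference point (u, v) in the eye-camera image to yaw/pitch
    # motor deltas that would bring that point to the image centre.
    yaw_deg = math.degrees(math.atan((u - IMG_W / 2) / FOCAL_PX))
    pitch_deg = math.degrees(math.atan((v - IMG_H / 2) / FOCAL_PX))
    return yaw_deg * TICKS_PER_DEG, pitch_deg * TICKS_PER_DEG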
GEDDnet is integrated for eye-gaze estimation; it returns pitch and yaw values that are used to decide whether a person is looking at the robot, and the robot will then make eye contact with that person.
Reference:
Chen, Z. and Shi, B., 2020. GEDDnet: A Network for Gaze Estimation with Dilation and Decomposition. arXiv preprint arXiv:2001.09284
I hope this answers your question.
Thanks a lot Param, this helped!