YIP, Kam Wai
January 19, 2022 10:41 am
What are the accuracy and precision of the calibration?
How about the precision of the location of a marker in an image, e.g., the standard deviation of the center? It is a bit odd to use such a powerful algorithm to do the detection. This kind of object detection algorithm typically does not focus on precision measurements. Neural networks are notorious for over-fitting and being noisy. Besides, the loss function of a detection algorithm is typically a classification/detection loss rather than something more sensible for your use case, such as an absolute error or a least-squares error.
Thank you for your comments. We haven’t tested the precision yet; we will do so as soon as possible, and we will also try other algorithms. We are currently trying to use QR codes as our reference points, using pyzbar to locate the centers, but we are facing a problem when zooming in on the QR code.
Why not use a simple template matching, e.g., cross-correlation? OpenCV has it.
I see the problem; it seems better to use a non-machine-learning method for the detection. Thank you for the reminder!
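The cross-correlation approach suggested above can be sketched as follows. This is an illustrative brute-force implementation of normalized cross-correlation, not the project's actual code; in practice OpenCV's cv2.matchTemplate with cv2.TM_CCOEFF_NORMED does the same thing far more efficiently.

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation.

    Returns the (row, col) of the top-left corner of the best match
    of `template` inside `image` (both 2-D float arrays).
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = tnorm * np.sqrt((w * w).sum())
            if denom == 0:
                continue  # flat window, correlation undefined
            score = float((t * w).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

For the precision question raised earlier, the correlation peak can also be refined to sub-pixel accuracy (e.g., by fitting a parabola or taking a centroid around the peak), which is where a simple method like this tends to beat a detection network.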
The title does not seem to reflect the objective of the work.
Thank you for your comment. We will change the title to one that better reflects the objective of the work.
The original title was “ToF and Color Multicamera system”. However, we found that the time-of-flight camera is not operable in the analyzer environment, so we were forced to change the objective and settled on this temporary project title.
What are the accuracy and precision of the calibration?
To ensure normal operation, the maximum error should not exceed 0.2 mm.
Is the project past the stage that you can take some photos and locate the markers in terms of pixel coordinates?
Yes, we can locate the center point of the markers, and we compare it with the center point of the captured frame to obtain the offsets.
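The offset computation described here amounts to a simple difference in pixel coordinates; a minimal sketch (function and default frame size are illustrative, matching the 1280×720 resolution mentioned later in the thread):

```python
def center_offset(marker_center, frame_size=(1280, 720)):
    """Offset (dx, dy) in pixels of a detected marker center
    from the center of the captured frame."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    return marker_center[0] - cx, marker_center[1] - cy
```

Multiplying these pixel offsets by the physical resolution (mm/px) would then give the offset in machine units, once the mapping to physical coordinates is known.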
How about mapping them to physical coordinates?
I think we need to know the machine’s coordinates first.
What is the physical resolution (mm/px) of the camera?
0.0042 mm per pixel
Does it mean that the camera is shooting at a distance of about 2 to 3mm and the FOV is about 5.4mm * 3mm?
No, the camera shoots at a distance of a few centimeters; that value is just the pixel size of the camera.
Then what is the physical width being captured corresponding to the width of a pixel?
I think the focal length is 2.8 mm and the FOV is 100°; if shooting at 2 mm, we can get a 2 cm × 1 cm image. This may answer the question.
720p (1280×720), 100 degrees, 30 frames per second, and the pixel size is 4.2 µm × 4.2 µm.
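As a rough way to answer the "physical width per pixel" question above, assuming an ideal pinhole model (an assumption, not stated in the thread): with a horizontal FOV of 100°, the captured scene width at working distance d is 2·d·tan(50°), and dividing by the 1280-pixel width gives mm/px at that distance.

```python
import math

def mm_per_pixel(distance_mm, hfov_deg=100.0, width_px=1280):
    """Physical width covered by one pixel at a given working
    distance, under an ideal pinhole-camera model."""
    scene_width_mm = 2.0 * distance_mm * math.tan(math.radians(hfov_deg / 2.0))
    return scene_width_mm / width_px
```

For example, at a working distance of 30 mm (a few centimeters, as stated above), this gives roughly 0.056 mm/px, which is the quantity relevant to the 0.2 mm accuracy requirement, rather than the 4.2 µm sensor pixel pitch.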