CN116132806A - Camera calibration method and device for robot, computer equipment and storage medium - Google Patents

Camera calibration method and device for robot, computer equipment and storage medium

Info

Publication number
CN116132806A
CN116132806A (application CN202211619058.5A)
Authority
CN
China
Prior art keywords
camera
base station
information
robot
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211619058.5A
Other languages
Chinese (zh)
Inventor
赵永亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Topband Co Ltd
Original Assignee
Shenzhen Topband Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Topband Co Ltd filed Critical Shenzhen Topband Co Ltd
Priority to CN202211619058.5A priority Critical patent/CN116132806A/en
Publication of CN116132806A publication Critical patent/CN116132806A/en
Pending legal-status Critical Current



Abstract

The application relates to a camera calibration method and device for a robot, a computer device, a storage medium and a computer program product. The method comprises the following steps: in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station; recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on those coordinates to obtain the actual position information of the camera; when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining compensation information for the camera based on the difference between the actual position information and preset position information; and calibrating the camera according to the compensation information. Because the compensation information is obtained by image recognition and calculation, calibration can be completed with the original hardware design, which reduces the calibration cost.

Description

Camera calibration method and device for robot, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of intelligent robots, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for calibrating a camera of a robot.
Background
With the continuous improvement of living standards, intelligent robots are used in daily life more and more.
An intelligent robot uses a vision camera to recognize obstacles, and the recognition accuracy depends on the installation accuracy of the camera. The camera position is calibrated at the factory, but as the robot is used, collisions and other causes can shift the camera away from its factory position, so the original camera calibration no longer holds.
There is a need for a camera calibration method for robots.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a camera calibration method, apparatus, computer device, computer-readable storage medium, and computer program product for a robot that can improve recognition accuracy.
In a first aspect, the present application provides a camera calibration method of a robot, the method comprising:
in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station;
recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera;
when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining compensation information for the camera based on the difference between the actual position information and preset position information;
and calibrating the camera according to the compensation information of the camera.
In one embodiment, obtaining the compensation information of the camera based on the difference between the actual position information of the camera and the preset position information when the calibration condition is satisfied includes:
calculating the position error of the camera from the difference between the actual position information of the camera and the preset position information;
if the position errors calculated from at least three base station images are all larger than a preset value, determining that the calibration condition is satisfied;
calculating, from the position error of the camera in each base station image, the compensation information corresponding to that image;
and taking the average value and/or variance of the pieces of compensation information to obtain the final compensation information of the camera.
In one embodiment, the method further comprises:
and when the acquisition quantity of the base station images reaches a set value, determining that the calibration condition is met.
In one embodiment, when the number of acquired base station images reaches a set value, taking the average value and/or variance of the pieces of compensation information to obtain the final compensation information of the camera includes:
sorting all the compensation information in descending order of absolute value;
removing the first N pieces of compensation information in that order;
and taking the average value and/or variance of the remaining compensation information to obtain the final compensation information of the camera.
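The trimming procedure above can be sketched in a few lines. A minimal illustration (function and parameter names are not from the patent, and the choice of N is left to the caller):

```python
def final_compensation(values, n_drop):
    """Average the compensation values after discarding the n_drop
    entries with the largest absolute value (the presumed outliers)."""
    ordered = sorted(values, key=abs, reverse=True)  # largest |value| first
    kept = ordered[n_drop:]                          # drop the first N
    if not kept:
        raise ValueError("n_drop removes every sample")
    return sum(kept) / len(kept)
```

For example, `final_compensation([0.5, -0.4, 5.0, 0.6], n_drop=1)` averages only the three plausible values, so a single 5.0 outlier (perhaps from a badly docked recharge) does not skew the final compensation.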
In one embodiment, the actual position information includes actual altitude information and actual angle information; the preset position information comprises preset height information and preset angle information; the position errors include a height error and an angle error;
the calculating of the position error of the camera from the difference between the preset position information and the actual position information of the camera includes:
calculating the height error of the camera from the difference between the actual height information and the preset height information, and calculating the angle error of the camera from the difference between the actual angle information and the preset angle information;
and the determining that the calibration condition is satisfied if the position errors calculated from at least three base station images are all larger than a preset value includes: determining that the calibration condition is satisfied if the height errors and/or the angle errors calculated from at least three base station images are all larger than the preset value.
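As a rough sketch of this condition (names and thresholds are illustrative assumptions, not values from the patent), calibration is triggered only when at least three images were captured and every one of them exceeds a height or angle tolerance:

```python
def calibration_needed(errors, height_tol, angle_tol, min_images=3):
    """errors: one (height_error, angle_error) pair per base-station image.
    Requiring every image to exceed a tolerance keeps a single bad
    recharge (robot not flush with the limit position) from triggering
    a spurious calibration."""
    if len(errors) < min_images:
        return False
    return all(abs(h) > height_tol or abs(a) > angle_tol for h, a in errors)
```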
In one embodiment, calculating the compensation information corresponding to the camera from the position error of the camera in the base station image includes:
calculating, with trigonometric functions, the compensation information corresponding to the camera from the preset horizontal position information of the identifier and the camera, the actual height information of the camera, the actual angle information of the camera, the preset height information of the camera, and the preset angle information of the camera.
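The patent names a trigonometric-function calculation but gives no formula. One plausible sketch, assuming the compensation combines the direct height difference with the marker shift that the pitch deviation projects over the known horizontal distance (all names are illustrative):

```python
import math

def compensation(horiz_dist, actual_h, actual_pitch, preset_h, preset_pitch):
    """Height/angle compensation for one base-station image.
    horiz_dist: known horizontal distance from camera to identifier;
    pitch angles are in radians. Returns (height difference, angle
    difference, projected marker shift caused by the pitch deviation)."""
    d_h = preset_h - actual_h
    d_angle = preset_pitch - actual_pitch
    # opposite side of the pitch triangle over the known horizontal distance
    d_proj = horiz_dist * (math.tan(preset_pitch) - math.tan(actual_pitch))
    return d_h, d_angle, d_proj
```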
In a second aspect, the present application provides a camera calibration apparatus for a robot, the apparatus comprising:
the shooting module is used for, in response to the calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station;
the recognition module is used for recognizing the coordinates of the identifier in the base station image and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera;
the compensation module is used for obtaining compensation information of the camera based on the difference between the actual position information of the camera and preset position information when it is determined from the actual position information that the calibration condition is satisfied;
and the calibration module is used for calibrating the camera according to the compensation information of the camera.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the following steps: in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station;
recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera;
when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining compensation information for the camera based on the difference between the actual position information and preset position information;
and calibrating the camera according to the compensation information of the camera.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps: in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station;
recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera;
when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining compensation information for the camera based on the difference between the actual position information and preset position information;
and calibrating the camera according to the compensation information of the camera.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, performs the following steps: in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to photograph the identifier of the base station when the robot enters and exits, so as to obtain a base station image containing the identifier for each entry into and exit from the base station;
recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera;
when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining compensation information for the camera based on the difference between the actual position information and preset position information;
And calibrating the camera according to the compensation information of the camera.
According to the camera calibration method, device, computer equipment, storage medium and computer program product of the robot, the robot is controlled to enter and exit the base station multiple times to capture several groups of base station images containing the identifier; the identifier in each base station image is recognized, and its coordinates are converted to obtain the actual position information of the camera; when the actual position of the camera satisfies the calibration condition, compensation information is obtained from the difference between the preset position information and the actual position information, and the position calibration of the camera is realized. On the one hand, calibrating the camera position safeguards the recognition accuracy of the robot; on the other hand, because the compensation information is obtained by image recognition and calculation, calibration can be completed with the original hardware design, the camera does not need to be replaced, and the calibration cost is reduced.
Drawings
FIG. 1 is an application environment diagram of a camera calibration method of a robot in one embodiment;
FIG. 2 is a flow chart of a method for calibrating a camera of a robot in one embodiment;
FIG. 3 is a schematic diagram of an identification at a base station in one embodiment;
FIG. 4 is a flowchart illustrating a process for determining compensation information of a camera according to an embodiment;
fig. 5 is a flowchart illustrating a process of determining compensation information of a camera according to another embodiment;
FIG. 6 is a schematic diagram showing a relationship between a camera and a display in an embodiment;
FIG. 7 is a flow chart of a method for a user to select an auto-calibration camera in one embodiment;
FIG. 8 is a block diagram of a camera calibration device of a robot in one embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Visual ranging relies on the installation precision of the camera; when the precision is low, the ranging error increases and the obstacle avoidance function suffers. The current approach is to guarantee part tolerances and installation precision during production, and then calibrate the camera to eliminate ranging error as far as possible. However, after the product is shipped, collisions, structural wear and other causes often change the camera position, so the ranging error grows and the obstacle avoidance effect is greatly degraded. At present a user has no good remedy for the camera's ranging error: one can only hope that the structure stays stable, and if ranging error appears, the product has to be returned to the factory for after-sales treatment.
In view of this, the camera calibration method for a robot provided in the embodiments of the present application may be applied in the environment shown in fig. 1, where the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The server 104 communicates with the robot 106. The terminal 102 and the robot 106 may also communicate directly, for example by wireless transmission methods such as Bluetooth, NFC, or ANT+.
In response to the calibration instruction, the robot 106 enters and exits the base station at least once and, each time it enters or exits, controls a camera on the robot 106 to photograph the identifier of the base station, obtaining a base station image containing the identifier for each entry and exit. The robot 106 recognizes the coordinates of the identifier in the base station image and performs coordinate conversion on them to obtain the actual position information of the camera. When the robot 106 determines from the actual position information that the calibration condition is satisfied, it obtains compensation information for the camera based on the difference between the actual position information and preset position information. The server 104 calibrates the camera according to the compensation information of the camera.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers. The robot 106 may be an intelligent sweeping robot, an intelligent mowing robot, or other intelligent robots with certain image acquisition and recognition capabilities.
In one embodiment, as shown in fig. 2, a method for calibrating a camera of a robot is provided, and the method is applied to the robot in fig. 1 for illustration, and includes the following steps:
s202, responding to a calibration instruction, controlling the times of the robot entering and exiting the base station to be at least one time, and controlling a camera on the robot to shoot the identification of the base station when the robot enters and exits the base station, so as to obtain a base station image comprising the identification shot each time the robot enters and exits the base station.
The calibration instruction is one of the robot control instructions, which direct the robot to move along a preset taught trajectory and execute corresponding programs, such as performing a shooting action, an obstacle avoidance action, a recharging action, or a camera calibration action. Specifically, the calibration instruction may be sent by the user through the terminal, or a control button may be provided on the robot and operated by the user to send the calibration instruction.
It should be noted that the calibration instruction may also be issued by the robot itself: for example, if the robot detects through its collision sensor that the number of collisions within a certain time exceeds a preset number, it issues the calibration instruction on its own. The preset number can be chosen flexibly according to the robot model and usage scenario, and is not limited here.
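A minimal sketch of this self-triggering logic (the window, threshold, and names are illustrative assumptions, not values from the patent):

```python
def should_self_calibrate(collision_times, window_s, max_collisions, now):
    """collision_times: collision-sensor timestamps in seconds.
    Issue a calibration instruction when more than max_collisions
    collisions fall within the last window_s seconds."""
    recent = [t for t in collision_times if 0 <= now - t <= window_s]
    return len(recent) > max_collisions
```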
The base station may be a fixed location for providing charging, storing, calibrating, etc. for the robot, and specifically, the base station may be a recharging base station for the robot, or may be a temporary storing point for the robot, or a station with identification information on a path of the robot. For a single robot, the number of base stations may be one or more, depending on the complexity of the operation of the robot, the duration of the operation of the robot, and the endurance of the robot.
The camera is a camera installed at the recognition end of the robot, and can be used for shooting the surrounding environment of the robot.
The camera photographs the surrounding environment at a certain refresh rate, forming a continuous video stream; a common camera captures about 30 images per second. A depth camera can capture color images and also read the distance from each pixel to the camera, at a higher material cost.
Cameras can be divided into monocular, binocular, and depth cameras.
A monocular camera has a simple structure and low material cost. It essentially photographs a scene by leaving a projection on the imaging plane of the camera, reflecting the three-dimensional world in two dimensions.
A binocular camera generally consists of two monocular cameras separated by a fixed distance called the baseline. The spatial position of each pixel is estimated through the baseline; the larger the baseline, the farther the measurable distance. A binocular camera obtains the depth information of an image by computation, and after acceleration on GPU or FPGA devices the depth can be computed in real time.
A depth camera, also called an RGB-D camera, measures the distance between an object and the camera on the principle of structured infrared light or a TOF sensor: it actively emits infrared structured light toward the object and receives the returned light, ranging by triangulation. The camera type depends on the required measurement precision: a TOF camera provides centimeter-level ranging, an RGB binocular camera provides millimeter-level ranging, and a structured-infrared-light camera achieves short-range precision of 0.01–1 mm.
Specifically, the camera installed at the robot recognition end may be a monocular camera; photographing the identifier of the base station with it yields the base station image containing the identifier for each entry into and exit from the base station.
The identifier is a marker of specific shape applied when the robot leaves the factory. Specifically, it may be a two-dimensional picture, as in the schematic of the identifier on the base station shown in fig. 3: one identifier contains several shapes of specific form, which may be rectangles or polygons with sides parallel to the boundary of the picture, and the colors of the picture may be chosen according to the actual test effect, for example white.
The identifier may be placed at a predetermined distance from the limit position of the base station, for example 10 cm from the limit position.
When the camera of the robot has deviated, the actual position of the camera can be judged by recognizing the identifier of specific shape, achieving the aim of calibration.
The shooting moment can be when the robot reaches the limit position of the base station. The limit position fixes the robot so that its position relative to the identifier stays constant, and it may lie inside or outside the base station.
Specifically, in response to the calibration instruction the robot immediately and automatically enters and exits the base station, simulating its normal working actions of returning to the base station to charge and leaving it for specific work.
When the robot is at the limit position of the base station, it can photograph the identifier of the base station and obtain a base station image, whether it has just returned to the base station from outside or is about to leave the base station.
Specifically, in some cases one entry and exit is enough to place the robot at the limit position; in other cases the recharging action has errors and the robot does not reach the limit position accurately, so it must be controlled to enter and exit the base station at least once more. The more times the robot enters and exits the base station, the more base station images it captures and the higher the recognition precision of the identifier.
It should be noted that after its work is finished the robot must return to the base station to charge. By photographing the identifier each time the robot returns to or leaves the base station, the base station images can be saved, providing a basis for subsequent recognition and calibration.
S204, recognizing the coordinates of the identifier in the base station image, and performing coordinate conversion on the coordinates of the identifier to obtain the actual position information of the camera.
The coordinates of the identifier in the base station image may be pixel coordinates, obtained by photographing the identifier with the camera of the robot. In the field of image processing, pixel coordinates in the pixel coordinate system can be converted, through the conversion relations between coordinate systems, into coordinates in other systems: coordinates in the image coordinate system, camera coordinates in the camera coordinate system, and world coordinates in the world coordinate system.
Specifically, the pixel coordinates of the identifier A in the pixel coordinate system uv are (u, v). These can be converted into the image coordinates (x, y) of the identifier in the image coordinate system O-xy; the image coordinates (x, y) can be converted into the camera coordinates (X_c, Y_c, Z_c) of the identifier in the camera coordinate system O_c-X_cY_cZ_c; and the camera coordinates (X_c, Y_c, Z_c) can be converted into the world coordinates (X_w, Y_w, Z_w) of the identifier in the world coordinate system O_w-X_wY_wZ_w.
Here the origin O_c of the camera coordinate system may be the optical center of the camera, and the origin O_w of the world coordinate system may be the optical center corresponding to a camera at the preset position.
It should be noted that the world coordinates are real-world, manually measurable coordinates, while the pixel coordinates, image coordinates and camera coordinates are coordinates internal to the camera: they cannot be obtained by manual measurement and are virtual.
In particular, from the world coordinates (X_w, Y_w, Z_w) of the identifier, the world coordinates (X_wc, Y_wc, Z_wc) of the actual position of the camera can be calculated in reverse.
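The conversion chain described in S204 is the standard pinhole-camera model. The patent does not give the matrices, so the sketch below uses illustrative intrinsics (fx, fy, cx, cy) and a generic rotation/translation, with the depth of the identifier assumed known:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Pixel coordinates plus known depth -> camera-frame coordinates
    (pinhole model: X_c = (u - cx) * Z / fx, Y_c = (v - cy) * Z / fy)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def camera_to_world(pt_c, rotation, translation):
    """Apply a 3x3 rotation (row-major nested lists) and a translation
    to move a point from the camera frame into the world frame."""
    x, y, z = pt_c
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z
        + translation[i]
        for i in range(3)
    )
```

With an identity rotation and zero translation, a pixel at the principal point maps to a point straight ahead of the optical center at the given depth.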
S206, when it is determined from the actual position information of the camera that the calibration condition is satisfied, obtaining the compensation information of the camera based on the difference between the actual position information and the preset position information.
The preset position information of the camera may be the world coordinates (X_w1, Y_w1, Z_w1) of the camera calibrated when the robot left the factory.
Specifically, the distance between the world coordinates (X_w1, Y_w1, Z_w1) of the preset position and the world coordinates (X_wc, Y_wc, Z_wc) of the actual position of the camera is calculated; when this distance exceeds a preset threshold, the calibration condition is satisfied. The preset threshold is chosen flexibly according to actual conditions and is not limited here.
The compensation information of the camera may be a difference between preset position information and actual position information of the camera.
Specifically, the position-information difference is obtained by calculating the difference between the preset world coordinates (X_w1, Y_w1, Z_w1) of the camera and the world coordinates (X_wc, Y_wc, Z_wc) of its actual position. Based on this difference and the coordinate-system conversion formulas, the position difference is pushed back to the virtual camera coordinates (X_c, Y_c, Z_c) in the camera coordinate system O_c-X_cY_cZ_c, the virtual image coordinates (x, y) in the image coordinate system O-xy, and the virtual pixel coordinates (u, v) in the pixel coordinate system uv.
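Pushing the world-frame difference back through the conversion formulas is the inverse of the pixel-to-world chain described in S204. A minimal sketch under the same pinhole assumptions (names are illustrative):

```python
def world_to_camera(pt_w, rotation, translation):
    """Inverse rigid transform for an orthonormal rotation R:
    pt_c = R^T (pt_w - t)."""
    dx = [pt_w[i] - translation[i] for i in range(3)]
    return tuple(sum(rotation[j][i] * dx[j] for j in range(3))
                 for i in range(3))

def camera_to_pixel(pt_c, fx, fy, cx, cy):
    """Project camera-frame coordinates to (virtual) pixel coordinates."""
    xc, yc, zc = pt_c
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```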
And S208, calibrating the camera according to the compensation information of the camera.
Specifically, the camera is calibrated according to the virtual camera coordinates (X_c, Y_c, Z_c), the virtual image coordinates (x, y), and the virtual pixel coordinates (u, v).
According to the above camera calibration method of the robot, the robot is controlled to enter and exit the base station multiple times to capture several groups of base station images containing the identifier; the identifier in each image is recognized and its coordinates are converted to obtain the actual position information of the camera; when the actual position of the camera satisfies the calibration condition, compensation information is obtained from the difference between the preset and actual position information, and the position calibration of the camera is realized. On the one hand, calibrating the camera position safeguards the recognition accuracy of the robot; on the other hand, because the compensation information is obtained by image recognition and calculation, calibration can be completed with the original hardware design, the camera does not need to be replaced, and the calibration cost is reduced.
When the robot recharges, it may fail to dock in place because it is not pressed tightly against the limit position of the base station. In that case, although the robot can still photograph the identifier of the base station and charge, the camera position error calculated from the difference between the preset position information and the actual position information of the camera has little reference value. It is therefore necessary to avoid the unnecessary errors caused by the robot not reaching the recharging position or not being in close contact with the limit position of the base station. In view of this, in one embodiment, when it is determined from the actual position information of the current camera that the calibration condition is satisfied, the compensation information of the camera is obtained based on the difference between the actual position information of the current camera and the preset position information, as shown in the flowchart of the method for determining the compensation information of the camera in fig. 4, which includes:
S402, calculating to obtain the position error of the current camera according to the difference between the actual position information of the current camera and the preset position information.
The camera position error calculated from the base station image can be obtained by comparing the preset position information of the camera with the actual position information of the camera.
Specifically, the camera position error calculated from a base station image can be obtained as the distance, in cm, between the preset position information (X_w1, Y_w1, Z_w1) and the actual position information (X_wc, Y_wc, Z_wc). For example, if the preset position information of the camera is (0, 0, 0) and the actual position information of the camera is (0, 0, 2), the distance between the two is 2 cm, meaning the actual position of the camera has shifted upward by 2 cm relative to the preset factory position.
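As a minimal sketch of this step, the position error is simply the Euclidean distance between the two world coordinates (the centimetre unit is taken from the example above):

```python
import math

def position_error_cm(preset, actual):
    """Euclidean distance between the preset and actual camera world
    coordinates (both assumed to be given in centimetres)."""
    return math.dist(preset, actual)
```

With the example values, `position_error_cm((0, 0, 0), (0, 0, 2))` yields the 2 cm error described in the text.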
The preset value is a judgment threshold value of the position error, and the size of the preset value can be flexibly selected according to practical situations, for example, any value between 1cm and 5 cm.
Taking a preset value of 1 cm as an example: when the camera position errors calculated from at least three base station images are all greater than 1 cm, it is determined that the calibration condition is satisfied, i.e., the degree of camera offset is considered to affect the normal workflow of the robot under the current conditions.
S404, if the position errors of the current cameras calculated according to the at least three base station images are all larger than a preset value, determining that the calibration condition is met.
And if the position error of the current camera calculated according to the at least three base station images is not greater than a preset value, continuing to acquire the base station images.
The at least three base station images may be three consecutive base station images captured by the camera while the robot performs the recharging operation, or three non-consecutive base station images.
S406, calculating to obtain compensation information corresponding to each current camera according to the position errors of the current cameras in the base station images.
The processed camera position error serves as the basis for the compensation information, and the compensation information is calculated from this basis.
And S408, taking an average value and/or variance of each piece of compensation information to obtain the final piece of compensation information of the camera.
In this embodiment, by setting a pre-judgment step before calibration, the calibration condition is determined to be satisfied only when the camera position errors calculated by recognizing three base station images are all greater than the preset value. This avoids erroneous calibration caused by accidental factors, and also reduces the time spent on image acquisition, improving the action consistency and working efficiency of the robot.
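The pre-judgment gate described above can be sketched as follows; the threshold of 1 cm is one of the allowed 1-5 cm choices, and both default values are illustrative rather than prescribed:

```python
def calibration_condition_met(errors_cm, threshold_cm=1.0, min_images=3):
    """The calibration condition holds when at least min_images base-station
    images have been evaluated and every computed position error exceeds
    the threshold. Both defaults are illustrative placeholders."""
    return len(errors_cm) >= min_images and all(e > threshold_cm for e in errors_cm)
```

One sample at or below the threshold, or fewer than three samples, keeps the robot in the image-acquisition phase rather than triggering calibration.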
In one embodiment, the calibration condition is determined to be satisfied when the number of acquisitions of the base station image reaches a set value.
The flowchart of the process of determining the compensation information of the camera shown in fig. 5 includes:
S502, calculating to obtain the position error of the current camera according to the difference between the actual position information of the current camera and the preset position information;
S504, if the position errors of the current camera calculated according to at least three base station images are all larger than a preset value, determining that the calibration condition is met;
if the position error of the current camera calculated according to the at least three base station images is not greater than the preset value, S506 is executed.
S506, continuing to acquire the base station images, and determining that the calibration condition is met when the acquisition quantity of the base station images reaches a set value.
The set value may be chosen according to the processing capability of the server; for example, the set value may be 10.
S508, calculating to obtain the compensation information corresponding to each current camera according to the position errors of the current cameras in the base station images.
The camera position errors from the 10 base station images may be averaged, or their variance taken, to obtain a processed position error for the current camera. This processed error replaces the position error of the current camera in each base station image, and the compensation information corresponding to each current camera is then calculated from the processed position error.
S510, average value and/or variance are obtained for each piece of compensation information, and final piece of compensation information of the camera is obtained.
It should be noted that the 10 base station images yield 10 groups of camera position errors, whose values may be positive or negative. Before averaging the camera position errors of the 10 base station images or taking their variance, the two groups with the largest and smallest position errors may first be screened out, leaving 8 groups; the average or variance is then computed from the remaining 8 groups to obtain a processed camera position error, which serves as the basis for the compensation information.
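The screening-then-averaging step amounts to a trimmed mean; a minimal sketch (the 10-sample count is the example from the text, not a requirement of the method):

```python
def trimmed_mean_error(errors):
    """Screen out the single largest and single smallest position error
    (e.g. 10 samples -> 8 remaining), then average what is left."""
    if len(errors) <= 2:
        raise ValueError("need more than two samples to drop both extremes")
    kept = sorted(errors)[1:-1]  # drop the minimum and maximum values
    return sum(kept) / len(kept)
```

A single badly mis-docked shot (a large positive or negative outlier) is discarded before it can skew the processed error.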
In this embodiment, multiple groups of camera position errors are obtained by collecting multiple base station images, and the two extreme values among the groups of camera position errors are screened out. This reduces the impact of oversized camera position errors caused by the robot not recharging in place or by other reasons, and improves the calculation accuracy of the position error.
In one embodiment, when the number of base station images acquired reaches a set value, averaging and/or variance is performed on each piece of compensation information to obtain final compensation information of the camera, including: ordering all the compensation information in descending order according to the absolute value of the compensation information; removing the top N pieces of compensation information sequenced in the front; and taking the average value and/or variance of the reserved compensation information to obtain the final compensation information of the camera.
The larger the absolute value of a piece of compensation information calculated from a base station image, the farther the actual camera position deviates from the preset position, i.e., the greater the probability that the robot was not recharged in place.
Specifically, the value of N is at least 1, and the specific value of N can be flexibly selected according to practical situations.
And taking the average value or variance of the absolute value of the reserved compensation information to obtain the final compensation information of the camera.
In this embodiment, the first N pieces of compensation information with the largest absolute values among the multiple groups of camera compensation information are screened out, avoiding distorted position errors caused by the robot failing to reach the recharging limit position, or by other causes, at the moment of shooting, which improves the calculation accuracy of the compensation information.
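The descending sort by absolute value followed by top-N removal can be sketched as follows; n_drop=1 is an illustrative minimum, since the text only requires N to be at least 1:

```python
def final_compensation(comps, n_drop=1):
    """Sort compensation values by absolute value in descending order,
    drop the top n_drop (the shots most likely taken while mis-docked),
    and average the rest. n_drop=1 is an illustrative choice."""
    kept = sorted(comps, key=abs, reverse=True)[n_drop:]
    return sum(kept) / len(kept)
```

A variance of the retained values could be computed the same way from `kept` where the embodiment calls for one.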
In one embodiment, the actual position information includes actual altitude information and actual angle information; the preset position information comprises preset height information and preset angle information; the position errors include altitude errors and angle errors; according to the difference between the preset position information and the actual position information of the camera, calculating to obtain the position error of the camera, including: calculating to obtain the height error of the camera according to the difference between the actual height information and the preset height information of the current camera, and calculating to obtain the angle error of the camera according to the difference between the actual angle information and the preset angle information of the current camera; if the position errors of the cameras calculated according to the at least three base station images are all larger than a preset value, determining that the calibration conditions are met comprises: and if the height errors and/or the angle errors of the cameras calculated according to the at least three base station images are larger than the preset value, determining that the calibration condition is met.
When the robot recharges and reaches the limit position of the base station, the horizontal distance between the camera and the identifier of the base station is relatively fixed, so the position information can be decomposed into height information, angle information, and the horizontal distance between the camera and the identifier of the base station.
Specifically, the actual height information may be the height of the current camera from the ground, and the actual angle information may be the angle formed by the current camera and the ground.
Determining that the calibration condition is satisfied when the height error and/or the angle error of the camera calculated from at least three base station images is greater than the preset value covers three cases: the difference between the height of the current camera and the preset camera height is greater than the preset height threshold; the difference between the angle formed by the current camera and the ground and the angle formed by the preset camera and the ground is greater than the preset angle threshold; or both differences exceed their respective thresholds.
Specifically, when the calibration condition is met, compensating information of the camera is obtained according to a difference value between the ground clearance height of the camera and a preset ground clearance height of the camera and a difference value between an angle formed by the camera and the ground and an angle formed by the preset camera and the ground.
In this embodiment, converting the position information into height information and angle information simplifies the calculation. Considering that the camera may shift after the robot leaves the factory due to structural wear or other conditions, different ways of generating compensation information are set for different offset phenomena, which improves the accuracy of camera calibration.
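A sketch of the height/angle variant of the pre-judgment gate; the thresholds and minimum image count are illustrative placeholders, not values fixed by the patent:

```python
def height_angle_condition_met(samples, h_thresh_cm=1.0, a_thresh_deg=1.0, min_images=3):
    """samples: (height_error, angle_error) pairs, one per base-station
    image. The condition is met when at least min_images images each show
    a height error and/or an angle error above its threshold; both
    thresholds here are illustrative assumptions."""
    if len(samples) < min_images:
        return False
    return all(dh > h_thresh_cm or da > a_thresh_deg for dh, da in samples)
```

The "and/or" in the text is read here as: each image must exceed at least one of the two thresholds.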
Since the camera may shift in the height direction or in the angle direction, different calculation modes need to be determined for the different types of camera offset to obtain the camera compensation. In one embodiment, calculating the compensation information corresponding to the current camera according to the position error of the current camera in the base station image includes: calculating, using trigonometric functions, the compensation information corresponding to the current camera from the preset horizontal position information of the identifier and the camera, the actual height information of the current camera, the actual angle information of the current camera, the preset height information of the camera, and the preset angle information of the camera.
The preset horizontal position information of the identifier and the camera represents the horizontal distance between the camera and the identifier, which is generally a fixed value.
Specifically, the schematic diagram of the positional relationship between the camera and the identifier shown in fig. 6 includes: the preset horizontal position information X between the identifier and the camera, the actual height information H of the current camera, the actual angle information α of the current camera, the preset height information h of the camera, and the preset angle information β of the camera, where A is the preset camera position, B is the current camera position, and C is the identifier position on the base station.
In this case, the camera has both an offset in the angular direction (from β to α) and an offset in the height direction (from h to H). Taking the compensation information corresponding to the preset camera height and the preset camera angle as 0 as an example, the compensation information of the camera at the current moment is calculated as: compensation information = (H - h) + X * (tan α - tan β).
In this embodiment, the compensation information is calculated according to a trigonometric formula, which provides a basis for the subsequent camera calibration.
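The trigonometric formula above can be sketched directly; treating the angles as degrees is an assumption made here for readability, since the patent does not state the unit:

```python
import math

def compensation(H, h, alpha_deg, beta_deg, X):
    """compensation = (H - h) + X * (tan(alpha) - tan(beta)): the height
    offset plus the height equivalent of the angular offset, projected
    over the fixed horizontal distance X to the identifier. Degree units
    are an assumption, not specified in the source."""
    return (H - h) + X * (math.tan(math.radians(alpha_deg)) - math.tan(math.radians(beta_deg)))
```

With no angular offset (α = β), the compensation reduces to the pure height offset H - h; with no height offset, it reduces to the tangent term alone.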
In one embodiment, as shown in fig. 7, there is provided a method for automatically calibrating the camera upon user selection, comprising:
s702, responding to the calibration instruction, controlling the times of the robot entering and exiting the base station to be at least one time, and controlling a camera on the robot to shoot the identification of the base station when the robot enters and exits the base station, so as to obtain a base station image comprising the identification shot each time the robot enters and exits the base station.
S704, judging whether 10 base station images have been collected; if yes, executing S708; if not, executing S706.
S706, controlling the robot to enter an idle mode.
The idle mode means that the camera of the robot enters a sleep mode: the camera does not collect images and the robot does not process images. In the idle mode the robot may perform a charging operation.
S708, recognizing the coordinates of the identifier in the base station image and writing the coordinates into the coordinate.txt file.
The coordinate.txt file is used to temporarily store the coordinates of the identifier in the base station image.
S710, after recharging is completed, checking whether the coordinate.txt file exists; if so, executing S712; if not, no calibration is performed and the flow ends.
S712, performing coordinate conversion on the coordinates of the identifier in the base station image to obtain the actual position information of the current camera.
S714, calculating to obtain the position error of the camera according to the difference between the preset position information and the actual position information of the camera.
S716, if the position errors of the cameras calculated according to the at least three base station images are all larger than the preset value, determining that the calibration condition is met.
And when the acquisition quantity of the base station images reaches a set value, determining that the calibration condition is met.
When the collection number of the base station images reaches a set value, taking an average value and/or a variance of each piece of compensation information to obtain final piece of compensation information of the camera, wherein the method comprises the following steps: ordering all the compensation information in descending order according to the absolute value of the compensation information; removing the top N pieces of compensation information sequenced in the front; and taking the average value and/or variance of the reserved compensation information to obtain the final compensation information of the camera.
Wherein the actual position information includes actual altitude information and actual angle information; the preset position information comprises preset height information and preset angle information; the position errors include altitude errors and angle errors;
according to the difference between the preset position information and the actual position information of the camera, calculating to obtain the position error of the camera, including:
calculating to obtain the height error of the camera according to the difference between the actual height information and the preset height information of the current camera, and calculating to obtain the angle error of the camera according to the difference between the actual angle information and the preset angle information of the current camera;
if the position errors of the cameras calculated according to the at least three base station images are all larger than a preset value, determining that the calibration conditions are met comprises: and if the height errors and/or the angle errors of the cameras calculated according to the at least three base station images are larger than the preset value, determining that the calibration condition is met.
The method for calculating the compensation information corresponding to the current camera according to the position error of the current camera in the base station image comprises the following steps:
and calculating by using a trigonometric function according to the preset horizontal position information of the mark and the camera, the actual height information of the current camera, the actual angle information of the current camera, the preset height information of the camera and the preset angle information of the camera to obtain the compensation information corresponding to the current camera.
S718, calculating to obtain compensation information corresponding to each current camera according to the position errors of the current cameras in the base station images.
S720, taking average value and/or variance of each piece of compensation information to obtain final piece of compensation information of the camera.
S722, calibrating the camera according to the compensation information of the camera.
S724, saving the calibration parameters to the calibration.txt file and deleting the coordinate.txt file.
The calibration.txt file is used to store the changes in the camera's internal parameters during the process of calibrating the camera according to the camera compensation information.
In this embodiment, the robot is controlled to enter and exit the base station multiple times to capture multiple groups of base station images containing the identifier; the identifier in each base station image is recognized, and the coordinates of the identifier in the base station image are converted to obtain the actual position information of the current camera. When it is determined from the actual position of the current camera that the calibration condition is satisfied, the compensation information of the camera is obtained based on the difference between the preset position information and the actual position information of the camera, thereby achieving position calibration of the camera. On the one hand, calibrating the camera position ensures the recognition accuracy of the robot; on the other hand, this calibration approach obtains the compensation information through image recognition and computation, so the calibration can be completed with the original hardware design, without replacing the camera, which reduces the calibration cost.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a camera calibration device of the robot for realizing the camera calibration method of the robot. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the camera calibration device for one or more robots provided below may be referred to the limitation of the camera calibration method for a robot in the above description, and will not be repeated here.
In one embodiment, as shown in fig. 8, there is provided a camera calibration apparatus of a robot, including: a capture module 802, an identification module 804, a compensation module 806, and a calibration module 808, wherein:
the shooting module 802 is configured to respond to the calibration instruction, control the number of times of the robot entering and exiting the base station to be at least one, and control a camera on the robot to shoot the identifier of the base station when the robot enters and exits the base station, so as to obtain a base station image including the identifier shot each time the robot enters and exits the base station;
the identifying module 804 is configured to identify the coordinates of the identifier in the base station image, and perform coordinate conversion on the coordinates of the identifier in the base station image to obtain the actual position information of the current camera;
the compensation module 806 is configured to obtain compensation information of the camera based on a difference between the actual position information of the current camera and the preset position information when it is determined that the calibration condition is satisfied according to the actual position information of the current camera;
and the calibration module 808 is used for calibrating the camera according to the compensation information of the camera.
In one embodiment, the compensation module 806 is further configured to calculate, according to a difference between the actual position information of the current camera and the preset position information, a position error of the current camera; if the position errors of the current cameras calculated according to the at least three base station images are all larger than a preset value, determining that the calibration conditions are met; calculating to obtain compensation information corresponding to each current camera according to the position error of the current camera in each base station image; and (5) taking average value and/or variance of each piece of compensation information to obtain final compensation information of the camera.
In one embodiment, the camera calibration device of the robot further includes an acquisition module, configured to determine that the calibration condition is satisfied when the number of acquired base station images reaches a set value.
In one embodiment, the acquisition module is further configured to sort all compensation information in descending order of absolute values of the compensation information; removing the top N pieces of compensation information sequenced in the front; and taking the average value and/or variance of the reserved compensation information to obtain the final compensation information of the camera.
In one embodiment, the actual position information includes actual altitude information and actual angle information; the preset position information comprises preset height information and preset angle information; the position error comprises a height error and an angle error, and the camera calibration device of the robot further comprises a calculation module, wherein the calculation module is used for calculating the height error of the camera according to the difference between the actual height information of the current camera and the preset height information, and calculating the angle error of the camera according to the difference between the actual angle information of the current camera and the preset angle information; and if the height errors and/or the angle errors of the cameras calculated according to the at least three base station images are larger than the preset value, determining that the calibration condition is met.
In one embodiment, the calculating module is further configured to calculate, according to the preset horizontal position information of the identifier and the camera, the actual height information of the current camera, the actual angle information of the current camera, the preset height information of the camera, and the preset angle information of the camera, and by using a trigonometric function, obtain compensation information corresponding to the current camera.
The modules in the camera calibration device of the robot can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing base station image data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method for calibrating a camera of a robot.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method for calibrating a camera of a robot, the method comprising:
in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to shoot an identification on the base station while the robot enters and exits the base station, so as to obtain, for each entry into or exit from the base station, a base station image comprising the identification;
recognizing the coordinates of the identification in the base station image, and performing coordinate conversion on the coordinates of the identification to obtain actual position information of the current camera;
when the fact that the calibration condition is met is determined according to the actual position information of the current camera, obtaining compensation information of the camera based on the difference between the actual position information of the current camera and preset position information;
and calibrating the camera according to the compensation information of the camera.
2. The method according to claim 1, wherein the obtaining the compensation information of the camera based on a difference between the actual position information of the current camera and the preset position information when it is determined that the calibration condition is satisfied according to the actual position information of the current camera includes:
calculating the position error of the current camera according to the difference between the actual position information of the current camera and the preset position information;
if the position errors of the current camera calculated from at least three base station images are all larger than a preset value, determining that the calibration condition is met;
calculating the compensation information corresponding to the current camera according to the position error of the current camera in each base station image;
and taking the average value and/or variance of the pieces of compensation information to obtain final compensation information of the camera.
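The error-and-averaging pipeline of claims 2 and 3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold, preset position, sample values, and all function names are assumptions for the example.

```python
# Hypothetical sketch of claims 2-3: compute a per-image position error,
# check the calibration condition (at least three images all exceed the
# preset value), then aggregate per-image compensation. Units are arbitrary.
from statistics import mean, pvariance

PRESET_POSITION = 100.0   # preset (design) camera position; illustrative
ERROR_THRESHOLD = 2.0     # the "preset value" of claim 2; illustrative

def position_error(actual, preset=PRESET_POSITION):
    """Position error of the current camera for one base-station image."""
    return actual - preset

def calibration_needed(errors, threshold=ERROR_THRESHOLD, min_images=3):
    """Condition is met when at least min_images images all show an error
    larger in magnitude than the threshold."""
    return len(errors) >= min_images and all(abs(e) > threshold for e in errors)

def final_compensation(per_image_compensation):
    """Mean of the per-image compensation values, with the variance
    reported as a rough quality indicator."""
    return mean(per_image_compensation), pvariance(per_image_compensation)

actual_positions = [103.1, 102.9, 103.0]   # from three base-station images
errors = [position_error(a) for a in actual_positions]
if calibration_needed(errors):
    comp, var = final_compensation(errors)
```

Here the three images agree that the camera sits about 3 units off its preset position, so the calibration condition is met and the averaged offset becomes the compensation.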
3. The method according to claim 2, wherein the method further comprises:
and when the acquisition quantity of the base station images reaches a set value, determining that the calibration condition is met.
4. The method according to claim 3, wherein, when the number of acquired base station images reaches the set value, taking the average value and/or variance of the compensation information to obtain the final compensation information of the camera comprises:
sorting all pieces of compensation information in descending order of their absolute values;
removing the first N pieces of compensation information in the sorted order;
and taking the average value and/or variance of the remaining compensation information to obtain the final compensation information of the camera.
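The outlier-trimming step of claim 4 can be sketched as follows; the sample values and the choice of N are illustrative assumptions.

```python
# Hypothetical sketch of claim 4: sort the per-image compensation values in
# descending order of absolute value, discard the first N (the largest-
# magnitude outliers), and average what remains.
from statistics import mean

def trimmed_compensation(values, n_drop):
    """Drop the n_drop largest-magnitude values, then average the rest."""
    kept = sorted(values, key=abs, reverse=True)[n_drop:]
    return mean(kept)

samples = [0.4, 0.5, 5.0, 0.6, -4.0]   # 5.0 and -4.0 are obvious outliers
result = trimmed_compensation(samples, n_drop=2)
```

Dropping the two largest-magnitude samples leaves [0.4, 0.5, 0.6], so the trimmed compensation settles near 0.5 rather than being dragged by the outliers.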
5. The method of claim 4, wherein the actual position information comprises actual altitude information and actual angle information; the preset position information comprises preset height information and preset angle information; the position errors include a height error and an angle error;
wherein calculating the position error of the camera according to the difference between the preset position information and the actual position information of the camera comprises:
calculating the height error of the camera according to the difference between the actual height information of the current camera and the preset height information, and calculating the angle error of the camera according to the difference between the actual angle information of the current camera and the preset angle information;
and determining that the calibration condition is met if the position errors of the camera calculated from at least three base station images are all larger than the preset value comprises: determining that the calibration condition is met if the height errors and/or the angle errors of the camera calculated from at least three base station images are all larger than the preset value.
6. The method of claim 5, wherein calculating the compensation information corresponding to the current camera according to the position error of the current camera in the base station image comprises:
calculating, by using trigonometric functions, the compensation information corresponding to the current camera according to the preset horizontal position information of the identification and the camera, the actual height information of the current camera, the actual angle information of the current camera, the preset height information of the camera, and the preset angle information of the camera.
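One plausible reading of the trigonometric computation in claim 6 — assuming the identification (marker) sits at a known horizontal distance from the camera — can be sketched as follows. The geometry, names, and numbers below are illustrative assumptions only, not the patent's actual formulas.

```python
# Hypothetical sketch of claim 6. If the marker is a known horizontal
# distance d from the camera, the pitch angle at which the marker is
# observed determines the camera height: h = d * tan(pitch). Comparing
# observed height/angle against the presets yields the compensation.
import math

def height_from_angle(horizontal_distance, pitch_rad):
    """Recover the camera height relative to the marker from the observed pitch."""
    return horizontal_distance * math.tan(pitch_rad)

def compensation(d, actual_pitch, preset_height, preset_pitch):
    """Height and angle offsets to apply when calibrating the camera."""
    actual_height = height_from_angle(d, actual_pitch)
    return preset_height - actual_height, preset_pitch - actual_pitch

d = 0.30   # metres from camera to marker (assumed known from the base station design)
dh, da = compensation(d, math.radians(46.0), 0.30, math.radians(45.0))
```

With a preset pitch of 45° the preset height equals d; observing the marker at 46° implies the camera sits slightly high, so the height compensation comes out negative and the angle compensation is minus one degree.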
7. A camera calibration apparatus for a robot, the apparatus comprising:
the shooting module is used for, in response to a calibration instruction, controlling the robot to enter and exit the base station at least once, and controlling a camera on the robot to shoot the identification on the base station while the robot enters and exits the base station, so as to obtain, for each entry into or exit from the base station, a base station image comprising the identification;
the identification module is used for recognizing the coordinates of the identification in the base station image and performing coordinate conversion on the coordinates of the identification to obtain the actual position information of the current camera;
the compensation module is used for obtaining compensation information of the camera based on the difference between the actual position information of the current camera and preset position information when the fact that the calibration condition is met is determined according to the actual position information of the current camera;
and the calibration module is used for calibrating the camera according to the compensation information of the camera.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202211619058.5A 2022-12-13 2022-12-13 Camera calibration method and device for robot, computer equipment and storage medium Pending CN116132806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211619058.5A CN116132806A (en) 2022-12-13 2022-12-13 Camera calibration method and device for robot, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116132806A true CN116132806A (en) 2023-05-16

Family

ID=86298349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211619058.5A Pending CN116132806A (en) 2022-12-13 2022-12-13 Camera calibration method and device for robot, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116132806A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination