CN115861431A - Camera registration method and device, communication equipment and storage medium - Google Patents

Publication number: CN115861431A
Application number: CN202111117364.4A
Authority: CN (China)
Legal status: Pending
Inventor: 张超
Assignee: Beijing Xiaomi Mobile Software Co Ltd
Classification: Measurement Of Optical Distance (AREA)
Abstract

The present disclosure relates to a camera registration method, apparatus, device, and medium applied to an electronic device that includes at least one RGB camera and a depth camera. The method includes: correcting an RGB image collected by the RGB camera and an energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera, to obtain an RGB correction map and an energy correction map; determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, where the difference between the first parallax of the first feature point and the second parallax of the second feature point in the target feature point pair is greater than a preset threshold; based on the difference, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment ratio; and correcting the RGB image and the energy map again according to the adjusted homography matrix. The registration accuracy between the RGB camera and the depth camera can thereby be improved.

Description

Camera registration method and device, communication equipment and storage medium
Technical Field
The present disclosure relates to the field of camera technologies, and in particular, to a camera registration method and apparatus, a communication device, and a storage medium.
Background
In scenarios where an RGB camera and a depth camera are used together, the relative position relationship between the two cameras needs to be registered to improve the accuracy of image acquisition. In a related scenario, registration is achieved by performing distortion removal and row alignment on the left and right views corresponding to the two cameras, so that the imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a camera registration method, apparatus, communication device and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a camera registration method applied to an electronic device including at least one RGB camera and at least one depth camera, the method including:
correcting the RGB image collected by the RGB camera and the energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera to obtain an RGB correction map and an energy correction map;
determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, wherein the feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and the difference value of a first parallax of the first feature point in the target feature point pair and a second parallax of the second feature point in the target feature point pair is greater than a preset threshold value;
based on the difference value, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion;
and correcting the RGB image and the energy map again according to the adjusted homography matrix.
Optionally, the determining a target feature point pair from the RGB correction map and the matched pair of feature points on the energy correction map includes:
regarding any one of the feature point pairs, in a case where the difference between the first disparity of the first feature point and the second disparity of the second feature point in the feature point pair is greater than the preset threshold, taking the feature point pair as the target feature point pair; alternatively,
and determining, from the feature point pairs, candidate feature point pairs whose confidence is greater than a preset confidence threshold, and, for any one candidate feature point pair, taking the candidate feature point pair as the target feature point pair in a case where the difference between the first parallax of the first feature point and the second parallax of the second feature point in the candidate feature point pair is greater than the preset threshold.
Optionally, the adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment ratio based on the difference includes:
determining the adjustment quantity of parameters according to the baseline distance and the preset adjustment proportion;
increasing the homography matrix by the parameter adjustment amount if the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax; alternatively,
reducing the homography matrix by the parameter adjustment amount if a difference between the first disparity and the second disparity is less than the preset threshold and the first disparity is less than the second disparity.
Optionally, the adjusting, based on the difference, the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment ratio includes:
determining target parameters in the homography matrix for representing translation and rotation relations between the RGB camera and the depth camera;
and adjusting the target parameters in the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion based on the difference.
Optionally, the method comprises:
and after the RGB image and the energy map are corrected again according to the adjusted homography matrix, returning to the step of determining the target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, until no target feature point pair exists among the feature point pairs on the new RGB correction map and the new energy correction map.
Optionally, the determining of the first disparity of the first feature point comprises:
determining RGB coordinates of the first feature point in the RGB correction map and energy coordinates in the energy correction map;
taking a difference between the RGB coordinates and the energy coordinates as the first disparity.
Optionally, the determining of the second disparity of the second feature point comprises:
determining first imaging coordinates of the second feature point in the depth camera and second imaging coordinates of the second feature point in the RGB camera;
and taking the difference value between the first imaging coordinate and the second imaging coordinate as the second parallax.
Optionally, the determining the first imaging coordinate of the second feature point in the depth camera includes:
acquiring the depth value of the second feature point, and determining a first distance from the second feature point to the optical axis of the depth camera in a world coordinate system;
and determining a first imaging coordinate of the second feature point in the depth camera according to the focal length of the depth camera, the depth value and the first distance.
Optionally, the determining second imaging coordinates of the second feature point in the RGB camera includes:
determining, according to the center distance between the RGB camera and the depth camera, a second distance from the second feature point to the optical axis of the RGB camera in a world coordinate system;
and determining a second imaging coordinate of the second feature point in the RGB camera according to the focal length of the RGB camera, the depth value and the second distance.
According to a second aspect of the embodiments of the present disclosure, there is provided a camera registration apparatus applied to an electronic device including at least one RGB camera and at least one depth camera, the apparatus including:
the first correction module is configured to correct the RGB image acquired by the RGB camera and the energy map acquired by the depth camera according to a homography matrix between the RGB camera and the depth camera to obtain an RGB correction map and an energy correction map;
a determining module configured to determine a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, wherein the feature point pair includes a first feature point on the RGB correction map and a second feature point on the energy correction map, and a difference value between a first disparity of the first feature point in the target feature point pair and a second disparity of the second feature point in the target feature point pair is greater than a preset threshold;
an adjusting module configured to adjust the homography matrix according to a baseline distance between the RGB camera and the depth camera and a preset adjustment ratio based on the difference;
and the second correction module is configured to correct the RGB image and the energy map again according to the adjusted homography matrix.
Optionally, the determining module is configured to: regarding any one of the feature point pairs, in a case where a difference between a first disparity of the first feature point and a second disparity of the second feature point in the feature point pair is greater than the preset threshold, regarding the feature point pair as the target feature point pair; alternatively, the first and second electrodes may be,
and determining, from the feature point pairs, candidate feature point pairs whose confidence is greater than a preset confidence threshold, and, for any one candidate feature point pair, taking the candidate feature point pair as the target feature point pair in a case where the difference between the first parallax of the first feature point and the second parallax of the second feature point in the candidate feature point pair is greater than the preset threshold.
Optionally, the adjusting module is configured to determine a parameter adjustment amount according to the baseline distance and the preset adjustment ratio;
increasing the homography matrix by the parameter adjustment amount if the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax; alternatively,
reducing the homography matrix by the parameter adjustment amount if a difference between the first disparity and the second disparity is less than the preset threshold and the first disparity is less than the second disparity.
Optionally, the adjusting module is configured to determine target parameters in the homography matrix for characterizing a translation and rotation relationship between the RGB camera and the depth camera;
and adjusting the target parameters in the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion based on the difference.
Optionally, the determining module is further configured to, after the RGB image and the energy map are corrected again according to the adjusted homography matrix, return to performing the step of determining the target feature point pair from the matching feature point pairs on the RGB correction map and the energy correction map until the target feature point pair does not exist in each feature point pair on the new RGB correction map and the new energy correction map.
Optionally, the determining module is configured to determine RGB coordinates of the first feature point in the RGB correction map and energy coordinates in the energy correction map;
taking a difference between the RGB coordinates and the energy coordinates as the first disparity.
Optionally, the determining module is configured to determine first imaging coordinates of the second feature point in the depth camera and second imaging coordinates in the RGB camera;
and taking the difference value between the first imaging coordinate and the second imaging coordinate as the second parallax.
Optionally, the determining module is configured to obtain a depth value of the second feature point, and determine a first distance from the second feature point to an optical axis of the depth camera in a world coordinate system;
and determining a first imaging coordinate of the second feature point in the depth camera according to the focal length of the depth camera, the depth value and the first distance.
Optionally, the determining module is configured to determine, according to a center distance between the RGB camera and the depth camera, a second distance from the second feature point to an optical axis of the RGB camera in a world coordinate system;
and determining a second imaging coordinate of the second feature point in the RGB camera according to the focal length of the RGB camera, the depth value and the second distance.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correcting the RGB image collected by the RGB camera and the energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera to obtain an RGB correction map and an energy correction map;
determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, wherein the feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and the difference value of a first parallax of the first feature point in the target feature point pair and a second parallax of the second feature point in the target feature point pair is greater than a preset threshold value;
based on the difference value, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion;
and correcting the RGB image and the energy map again according to the adjusted homography matrix.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of the first aspects.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the steps of correcting an RGB image and an energy map through a homography matrix, determining a target feature point pair from feature point pairs matched with the obtained RGB correction map and the energy correction map, and adjusting the homography matrix according to a baseline distance between an RGB camera and a depth camera and a preset adjustment proportion on the basis of a difference value under the condition that the difference value between a first parallax error of a first feature point in the target feature point pair and a second parallax error of a second feature point in the target feature point pair is larger than a preset threshold value, so that the registration accuracy between the RGB camera and the depth camera can be improved, and the accuracy of related application can be improved under the condition that the RGB camera and the depth camera are simultaneously configured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a camera registration method according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a depth map derived from an energy map in accordance with an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a camera calibration according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of determining a first disparity for a first feature point according to an example embodiment.
Fig. 5 is a flow chart illustrating a method of determining a second disparity for a second feature point according to an example embodiment.
FIG. 6 is a schematic diagram illustrating pixel point imaging according to an example embodiment.
Fig. 7 is a flowchart illustrating an implementation of step S13 in fig. 1 according to an exemplary embodiment.
Fig. 8 is a flow chart illustrating another camera registration method according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a camera registration apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating an apparatus for camera registration in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Also, it should be noted that, for simplicity of description, the method embodiments provided in the present disclosure are described as a series of combined actions, but those skilled in the art should understand that the present disclosure is not limited by the described order of actions. For example, labels such as "S131" and "S132" are used to distinguish method steps and are not necessarily to be construed as describing a particular order of execution. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the actions involved are not all necessarily required by the present disclosure.
The inventor has found that when registration is achieved by performing distortion removal and row alignment on the left and right views, the limited number and uneven distribution of the detected feature points lead to low registration accuracy between the RGB camera and the depth camera, and therefore to low accuracy in related application scenarios.
Therefore, the present disclosure provides a camera registration method that aims to improve the registration accuracy between an RGB camera and a depth camera, and thereby improve the accuracy of related applications in scenarios where both cameras are configured. For example, in a gesture recognition scenario, a higher matching degree between the RGB image and the depth map at target edges or at optimal feature points improves the gesture recognition accuracy; in a distance determination scenario, it improves the accuracy of the determined distance.
Fig. 1 is a flowchart illustrating a camera registration method according to an exemplary embodiment, applied to an electronic device including at least one RGB camera and at least one depth camera, where the method is used to determine a relative position relationship between the RGB camera and the depth camera, and for example, if the electronic device is configured with two RGB cameras and one depth camera, the method may determine the relative position relationship between the two RGB cameras and the depth camera respectively. As shown in fig. 1, the following steps are included.
In step S11, the RGB image collected by the RGB camera and the energy map collected by the depth camera are corrected according to the homography matrix between the RGB camera and the depth camera, so as to obtain an RGB correction map and an energy correction map.
In the method, an RGB image is acquired by the RGB camera and an energy map is acquired by the depth camera. From the energy map, a depth value and a confidence value are obtained for each pixel point; the depth map is obtained from the depth values of the pixel points, and the confidence map from their confidence values. The energy map reflects the energy of each pixel point collected by the depth camera, the depth map reflects the depth value of each pixel point, and the confidence map reflects the reliability of the depth value predicted for each pixel point.
Referring to fig. 2, the energy map may be converted into a gray map, and based on a mapping function corresponding to the depth camera, a depth value for each pixel in the collected energy map is calculated according to a gray value of each pixel in the gray map, and then the depth map is obtained according to the depth value of each pixel.
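As a rough illustration of the conversion described above, the following sketch turns an energy map into a gray map and then into a depth map. The linear `gray_to_depth` mapping and its constants are hypothetical placeholders; the actual mapping function is specific to the depth camera.

```python
# Illustrative sketch: energy map -> gray map -> depth map.
# The linear gray-to-depth mapping below is a made-up placeholder.

def energy_to_gray(energy, max_energy=4095):
    """Normalize raw energy values to 8-bit gray values."""
    return [[round(255 * e / max_energy) for e in row] for row in energy]

def gray_to_depth(gray, scale=10.0, offset=200.0):
    """Hypothetical mapping from gray value to a depth value (mm)."""
    return [[offset + scale * g for g in row] for row in gray]

energy_map = [[1000, 2000], [3000, 4095]]
gray_map = energy_to_gray(energy_map)
depth_map = gray_to_depth(gray_map)
```

A confidence map could be derived from the same energy values by an analogous per-pixel mapping.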
It can be understood that, referring to fig. 3, the homography matrix is a matrix describing the position mapping relationship of a pixel point between the world coordinate system and the pixel coordinate system. It is obtained by calibrating the parameters between the RGB camera and the depth camera online, based on the camera intrinsic and extrinsic parameters calibrated at the factory, in combination with the actual application scenario.
The camera intrinsic parameters include the focal lengths, the coordinate-axis skew parameters, and the principal point coordinates in the imaging planes of the RGB camera and the depth camera; the camera extrinsic parameters include a rotation matrix describing the rotational transformation from the world coordinate system to the camera coordinate system, and a translation matrix describing the corresponding translational transformation.
In specific implementation, the homography matrix is applied to the pixel coordinates of the RGB image acquired by the RGB camera, so that the RGB coordinates of the pixel points in the RGB image are corrected to obtain the RGB correction map; similarly, the homography matrix is applied to the pixel coordinates of the energy map collected by the depth camera, so that the energy coordinates of the pixel points in the energy map are corrected to obtain the energy correction map.
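The correction step can be illustrated on a single pixel coordinate: a 3x3 homography matrix multiplies the homogeneous pixel coordinate [u, v, 1], and the result is de-homogenized. The matrix `H` below is an illustrative translation-only example, not a calibrated value.

```python
# A minimal sketch of correcting one pixel coordinate with a homography.

def apply_homography(H, u, v):
    """Map pixel (u, v) through the 3x3 homography H (nested lists)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # de-homogenize

# Illustrative homography: shifts every pixel by (+5, -3).
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, -3.0],
     [0.0, 0.0, 1.0]]

corrected = apply_homography(H, 100, 50)  # -> (105.0, 47.0)
```

Applying this mapping to every pixel of the RGB image and of the energy map yields the RGB correction map and the energy correction map.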
In step S12, a target feature point pair is determined from the matched pair of feature points on the RGB correction map and the energy correction map.
The feature point pairs comprise first feature points on the RGB correction map and second feature points on the energy correction map, and the difference value between the first parallax of the first feature points in the target feature point pairs and the second parallax of the second feature points in the target feature point pairs is larger than a preset threshold value.
Wherein, the difference of the first parallax of the first feature point in the target feature point pair and the second parallax of the second feature point in the target feature point pair is greater than a preset threshold, including: the difference value of the first parallax minus the second parallax is larger than a preset threshold value, or the difference value of the second parallax minus the first parallax is larger than the preset threshold value. That is, the absolute value of the difference between the first parallax and the second parallax is greater than the preset threshold.
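The selection criterion above, namely that the absolute value of the disparity difference exceeds the preset threshold, can be sketched as:

```python
def is_target_pair(d1, d2, threshold):
    """A feature point pair is a target pair when the absolute difference
    between the first disparity d1 and the second disparity d2 exceeds
    the preset threshold."""
    return abs(d1 - d2) > threshold
```

For example, with a threshold of 3.0, disparities (10.0, 6.0) mark a target pair while (10.0, 9.0) do not.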
During implementation, feature point detection is performed on the RGB correction map to obtain first feature points, and on the energy correction map to obtain second feature points. The first feature points in the RGB correction map are then matched with the second feature points in the energy correction map to obtain matched feature point pairs, where a matched feature point pair represents the same pixel point in the world coordinate system, together with its RGB coordinates as projected in the RGB correction map and its energy coordinates as projected in the energy correction map.
On this basis, fig. 4 is a flowchart illustrating a method for determining a first disparity of a first feature point according to an exemplary embodiment, and referring to fig. 4, the method includes the following steps:
in step S41, RGB coordinates of the first feature point in the RGB correction map and energy coordinates in the energy correction map are determined.
And determining the RGB coordinates of the first feature point after feature matching in the RGB correction map and the energy coordinates in the energy correction map.
In step S42, the difference between the RGB coordinates and the energy coordinates is taken as a first parallax.
In practice, the first parallax D1 is equal to the difference between the RGB coordinates and the energy coordinates.
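As a minimal sketch, assuming the disparity is taken along the horizontal image axis (the text speaks only of "the difference between the RGB coordinates and the energy coordinates"), the first parallax could be computed as:

```python
def first_disparity(rgb_coord, energy_coord):
    """First parallax D1: difference between a matched feature point's
    coordinate in the RGB correction map and in the energy correction map.
    Using the x-axis component here is an assumption for illustration."""
    return rgb_coord[0] - energy_coord[0]

d1 = first_disparity((120.0, 80.0), (112.5, 80.0))  # -> 7.5
```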
On the basis of the foregoing embodiments, fig. 5 is a flowchart illustrating a method for determining a second disparity of a second feature point according to an exemplary embodiment, and referring to fig. 5, the method includes the following steps:
in step S51, first imaging coordinates of the second feature point in the depth camera and second imaging coordinates in the RGB camera are determined.
In this step, determining first imaging coordinates of the second feature point in the depth camera includes:
and acquiring the depth value of the second characteristic point, and determining a first distance from the second characteristic point to the optical axis of the depth camera under a world coordinate system. And acquiring the depth value of the second characteristic point from the depth map.
A first imaging coordinate of the second feature point in the depth camera is then determined according to the focal length of the depth camera, the depth value, and the first distance.
Specifically, referring to fig. 6, in this embodiment, the right eye camera is a depth camera, the left eye camera is an RGB camera, and the first imaging coordinates of the second feature point in the depth camera are calculated by the following formula:
x_r = f_r · X / Z

where f_r is the focal length of the depth camera, Z is the depth value of the second feature point p, X is the first distance from the second feature point p to the optical axis of the depth camera in the world coordinate system, and x_r is the first imaging coordinate of the imaging point p_r of the second feature point in the depth camera.
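A quick numeric check of this pinhole projection, with illustrative (not calibrated) values, f_r in pixels and X, Z in meters:

```python
def first_imaging_coordinate(f_r, X, Z):
    """x_r = f_r * X / Z: projection of the second feature point into
    the depth camera. The values used below are illustrative only."""
    return f_r * X / Z

x_r = first_imaging_coordinate(500.0, 0.2, 1.0)
```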
In this step, determining second imaging coordinates of the second feature point in the RGB camera includes:
and determining a second distance from the second characteristic point to the optical axis of the RGB camera in the world coordinate system according to the central distance between the RGB camera and the depth camera.
Referring to fig. 6, the center distance is the baseline distance. Taking as an example a second feature point p lying outside the two cameras, the second distance is equal to the first distance minus the center distance.
In addition to the above embodiment, if the second feature point is a feature point between the depth camera and the RGB camera, the second distance is the center distance minus the first distance.
A second imaging coordinate of the second feature point in the RGB camera is then determined according to the focal length of the RGB camera, the depth value, and the second distance.
Specifically, referring to fig. 6, the second imaging coordinates of the second feature point in the RGB camera are calculated by the following formula:
x_l = f_l · (X - b) / Z

where f_l is the focal length of the RGB camera (in this embodiment equal to the focal length of the depth camera), b is the center distance, and x_l is the second imaging coordinate of the imaging point p_l of the second feature point in the RGB camera.
In step S52, the difference between the first imaged coordinates and the second imaged coordinates is taken as the second parallax.
In practice, the second parallax D2 is the difference between the first imaging coordinate and the second imaging coordinate.

Specifically, the second parallax D2 is calculated by the following formula:

D2 = x_r - x_l = f · b / Z

That is, in the case where the focal lengths of the depth camera and the RGB camera are both equal to f, the second parallax D2 equals the focal length multiplied by the ratio of the center distance to the depth value.
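The relation D2 = x_r - x_l = f * b / Z can be checked numerically: X cancels out, so D2 depends only on the focal length, the center distance b, and the depth Z. The values below are illustrative, with equal focal lengths assumed.

```python
def second_disparity(f, b, X, Z):
    """D2 = x_r - x_l for equal focal lengths f (illustrative values)."""
    x_r = f * X / Z          # first imaging coordinate (depth camera)
    x_l = f * (X - b) / Z    # second imaging coordinate (RGB camera)
    return x_r - x_l

# Both calls give f * b / Z = 500 * 0.05 / 2.0 = 12.5, independent of X.
d2_a = second_disparity(500.0, 0.05, 0.30, 2.0)
d2_b = second_disparity(500.0, 0.05, 0.75, 2.0)
```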
On the basis of the foregoing embodiment, in step S12, the determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map includes:
and regarding any characteristic point pair, taking the characteristic point pair as a target characteristic point pair when the difference value between the first parallax of the first characteristic point and the second parallax of the second characteristic point in the characteristic point pair is larger than a preset threshold value.
Specifically, a feature point pair is selected arbitrarily, and the difference between the first disparity of its first feature point and the second disparity of its second feature point is calculated. When the difference is less than or equal to the preset threshold, a new feature point pair is selected from the unselected pairs, and so on, until a feature point pair whose difference is greater than the preset threshold is found; that pair is taken as the target feature point pair and the traversal ends.
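The traversal described above can be sketched as follows, where each pair is represented simply by its two disparities (an illustrative simplification):

```python
# Iterate over matched pairs; stop at the first pair whose disparity
# difference exceeds the threshold.

def find_target_pair(pairs, threshold):
    """pairs: iterable of (d1, d2) disparity tuples."""
    for pair in pairs:
        d1, d2 = pair
        if abs(d1 - d2) > threshold:
            return pair          # target pair found; end traversal
    return None                  # no target pair exists

pairs = [(5.0, 4.5), (8.0, 3.0), (6.0, 6.2)]
target = find_target_pair(pairs, 2.0)  # -> (8.0, 3.0)
```

Returning `None` corresponds to the termination condition of the iterative correction: no target feature point pair remains.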
In one embodiment, candidate feature point pairs with a confidence greater than a preset confidence threshold are determined from the feature point pairs, and for any candidate feature point pair, the candidate feature point pair is taken as the target feature point pair in the case that a difference value between a first disparity of a first feature point and a second disparity of a second feature point in the candidate feature point pair is greater than the preset threshold.
In one embodiment, the preset confidence threshold is in one-to-one correspondence with the application environment type; for example, an indoor environment corresponds to a first preset confidence threshold, and an outdoor environment corresponds to a second preset confidence threshold. By comparing the confidence with the preset confidence threshold, candidate feature points can be screened out, which improves the accuracy of the feature point pairs and thus the accuracy of registration.
Specifically, for any feature point pair, the confidence of the second feature point in the pair is determined according to the confidence map. When the confidence is less than or equal to the preset confidence threshold, the pair is determined to be a non-candidate feature point pair; when the confidence is greater than the preset confidence threshold, the pair is determined to be a candidate feature point pair. The manner of determining the target feature point pair from the candidate feature point pairs is the same as that described above, and is not repeated here.
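The screening just described can be sketched as follows (the data layout is hypothetical; this is a sketch of the selection logic, not the patented implementation):

```python
def find_target_pair(pairs, d1, d2, conf, thresh, conf_thresh):
    """Return the index of the first candidate pair whose parallax difference
    exceeds the preset threshold, or None if no target pair exists."""
    for i in pairs:
        if conf[i] <= conf_thresh:
            continue                       # non-candidate feature point pair
        if abs(d1[i] - d2[i]) > thresh:
            return i                       # target feature point pair found
    return None
```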
In step S13, based on the difference, the homography matrix is adjusted according to the baseline distance between the RGB camera and the depth camera and a preset adjustment ratio.
In a possible implementation manner, the preset threshold may be 0, that is, when the first parallax and the second parallax are equal, the adjustment is not performed, and when the first parallax and the second parallax are not equal, the adjustment is performed on the homography matrix.
On the basis of the foregoing embodiment, fig. 7 is a flowchart illustrating an implementation of step S13 in fig. 1 according to an exemplary embodiment, and referring to fig. 7, in step S13, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio based on the difference includes:
in step S131, the parameter adjustment amount is determined according to the baseline distance and the preset adjustment ratio.
Specifically, the product of the baseline distance and the preset adjustment ratio is used as the parameter adjustment amount. For example, in the case where the baseline distance is 100mm and the preset adjustment ratio is one hundredth, the parameter adjustment amount is determined to be 1mm.
In step S132, if the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax, the homography matrix is increased by the parameter adjustment amount.
When the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax, it indicates that the acquired depth value is smaller than the true value, and therefore the external parameters between the RGB camera and the depth camera need to be increased. The homography matrix is therefore increased by the parameter adjustment amount. Continuing the above example, the homography matrix would be increased by 1 mm.
In step S133, if the difference between the first parallax and the second parallax is smaller than the preset threshold and the first parallax is smaller than the second parallax, the homography matrix is reduced by the parameter adjustment amount.
Similarly, when the difference between the first parallax and the second parallax is smaller than the preset threshold and the first parallax is smaller than the second parallax, it indicates that the acquired depth value is larger than the true value, and therefore the external parameters between the RGB camera and the depth camera need to be reduced. The homography matrix is reduced by the parameter adjustment amount. Continuing the above example, the homography matrix would be reduced by 1 mm.
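Steps S131 through S133 can be sketched as the following update (hedged: the patent adjusts entries of the homography matrix itself; here a single scalar extrinsic parameter stands in for it, and the default threshold of 0 matches the possible implementation noted above):

```python
def adjust_parameter(value, baseline, ratio, d1, d2, thresh=0.0):
    """Adjust one extrinsic parameter by baseline * ratio in the direction
    indicated by the sign of the parallax difference."""
    step = baseline * ratio               # e.g. 100 mm * 1/100 = 1 mm
    if d1 - d2 > thresh:                  # measured depth smaller than true value
        return value + step               # increase the parameter
    if d2 - d1 > thresh:                  # measured depth larger than true value
        return value - step               # decrease the parameter
    return value                          # parallaxes agree: no adjustment
```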
On the basis of the foregoing embodiment, in step S13, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio based on the difference includes:
and determining target parameters used for representing translation and rotation relations between the RGB camera and the depth camera in the homography matrix.
Specifically, a translation matrix in the homography matrix is determined as a target parameter and/or a rotation matrix in the homography matrix is determined as a target parameter.
And adjusting the target parameters in the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion based on the difference value.
Specifically, when the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax, it indicates that the acquired depth value is smaller than the true value, and the external parameters between the RGB camera and the depth camera need to be increased; the target parameters in the homography matrix are therefore increased by the parameter adjustment amount.

When the difference between the first parallax and the second parallax is smaller than the preset threshold and the first parallax is smaller than the second parallax, it indicates that the acquired depth value is larger than the true value, and the external parameters between the RGB camera and the depth camera need to be reduced; the target parameters in the homography matrix are therefore reduced by the parameter adjustment amount.
In step S14, the RGB image and the energy map are again corrected based on the adjusted homography matrix.
Specifically, the adjusted homography matrix is multiplied by the RGB image to obtain a new RGB image, and the adjusted homography matrix is multiplied by the energy map to obtain a new energy map.
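Correcting an image with a homography amounts to multiplying each homogeneous pixel coordinate by the 3×3 matrix. A minimal NumPy sketch of that mapping (illustrative only; a full image warp would also resample intensities):

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]   # back from homogeneous coordinates
```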
According to the above technical solution, the RGB image and the energy map are corrected through the homography matrix, and the target feature point pair is determined from the matched feature point pairs on the resulting RGB correction map and energy correction map. In the case where the difference between the first parallax of the first feature point and the second parallax of the second feature point in the target feature point pair is greater than the preset threshold, the homography matrix is adjusted according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio based on the difference, so that a more accurate relative position relationship between the RGB camera and the depth camera is obtained. The registration accuracy between the RGB camera and the depth camera can thus be improved, and in a scene where an RGB camera and a depth camera are configured simultaneously, the accuracy of related applications can be improved.
On the basis of the above embodiment, the method includes:
and after the RGB image and the energy map are corrected again according to the adjusted homography matrix, returning to execute the step of determining the target characteristic point pairs from the matched characteristic point pairs on the RGB correction map and the energy correction map until the target characteristic point pairs do not exist in the characteristic point pairs on the new RGB correction map and the new energy correction map.
Specifically, the confidence of the new second feature point is determined according to the confidence map of the depth camera. In the case where the confidence of a new second feature point is less than or equal to the preset confidence threshold, the corresponding feature point pair is determined to be a non-candidate target feature point pair; if the confidences of the second feature points of all new feature point pairs are less than or equal to the preset confidence threshold, it is determined that no target feature point pair exists among the feature point pairs on the new RGB correction map and the new energy correction map.
In the case where the confidence of the new second feature point is greater than the preset confidence threshold, the corresponding feature point pair is determined to be a candidate target feature point pair. For the candidate target feature point pair, the new RGB coordinates of its first feature point in the new RGB correction map and the new energy coordinates in the new energy correction map are determined, and the difference between the new RGB coordinates and the new energy coordinates is calculated as the first parallax.
Further, for the candidate target feature point pair, the second parallax of the new second feature point is calculated in the same manner as in the above embodiment, and the difference between the first parallax and the second parallax is calculated. In the case where the difference corresponding to every candidate target feature point pair is not greater than the preset threshold, it is determined that no target feature point pair exists among the feature point pairs on the new RGB correction map and the new energy correction map.
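The correct-check-adjust cycle described above can be sketched generically as follows (the rectify/find/adjust callables are placeholders for the steps of this disclosure; `max_iter` is an added safety bound not stated in the source):

```python
def register(H, rectify, find_target, adjust, max_iter=20):
    """Iterate rectification and adjustment until no target pair remains."""
    for _ in range(max_iter):
        rgb_rect, energy_rect = rectify(H)
        target = find_target(rgb_rect, energy_rect)
        if target is None:          # no target pair left: registration finished
            break
        H = adjust(H, target)       # adjust homography, then re-rectify
    return H
```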
The technical solution of the present disclosure is described below by a specific embodiment, and referring to fig. 8, the method includes the following steps:
before equipment leaves a factory, internal parameters of the RGB camera and internal parameters of the depth camera are calibrated through a factory, and then a double-shot calibration result, namely external parameters between the RGB camera and the depth camera, is completed according to the internal parameters of the RGB camera and the internal parameters of the depth camera, so that a homography matrix is obtained.
The RGB image is acquired through the RGB camera, and the corresponding energy map, depth map and confidence map are acquired through the depth camera. And aiming at the RGB image and the energy map, carrying out three-dimensional correction on the pixel points through the homography matrix to obtain an RGB correction map and an energy correction map.
Feature point detection is performed on the RGB correction map and the energy correction map, first feature points and second feature points are detected from the respective pixel points, and feature matching is performed on the first feature points and the second feature points to obtain feature point pairs.
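Feature matching between the two rectified images can be sketched as brute-force nearest-descriptor matching (a NumPy sketch under assumed inputs; a real system would typically use an ORB/SIFT-style detector with additional ratio tests):

```python
import numpy as np

def match_descriptors(desc_rgb, desc_energy, max_dist):
    """Pair each RGB descriptor with its nearest energy-map descriptor."""
    pairs = []
    for i, d in enumerate(np.asarray(desc_rgb, float)):
        dists = np.linalg.norm(np.asarray(desc_energy, float) - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:        # reject weak matches
            pairs.append((i, j))
    return pairs
```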
The second feature point in each feature point pair is checked according to the depth map and the confidence map to determine whether the feature point pair is matched again. In the case where the feature point pair is matched, the confidence of the second feature point is determined according to the confidence map, and it is judged whether this confidence is greater than the preset confidence threshold. In the case where the confidence of the second feature point is less than or equal to the preset confidence threshold, a new feature point pair is reselected; if the confidences of the second feature points of all feature point pairs are less than or equal to the preset confidence threshold, no target feature point pair exists and the camera registration ends.
In the case where the confidence of the second feature point is greater than the preset confidence threshold, the feature point pair is determined to be a candidate feature point pair, and the second parallax is calculated according to the basic information of the depth camera and the RGB camera, where the basic information is the internal parameters of the two cameras.
Meanwhile, for the first feature point in the feature point pair, the RGB coordinates are determined according to the RGB correction map, the energy coordinates are determined according to the energy correction map, and the first parallax is calculated from the RGB coordinates and the energy coordinates. Then, in the case where the first parallax is equal to the second parallax, feature point pairs are reselected; if the first parallax and the second parallax of all feature point pairs are equal, it is determined that the camera does not need to be registered. In the case where the first parallax and the second parallax are not equal, the parameter adjustment amount for adjusting the homography matrix is determined according to the center distance and the preset adjustment ratio, the homography matrix is adjusted by the parameter adjustment amount based on the relative size of the first parallax and the second parallax, and stereo correction is performed on the RGB image and the energy map again according to the adjusted homography matrix, until no target feature point pair exists and the camera registration is finished.
Based on the same inventive concept, the present disclosure also provides a camera registration apparatus, which is applied to an electronic device including at least one RGB camera and at least one depth camera, and can implement all or part of the steps of the camera registration method in software, hardware, or a combination of the two. Fig. 9 is a block diagram illustrating a camera registration apparatus 100 according to an exemplary embodiment. As shown in fig. 9, the apparatus 100 comprises: a first correction module 110, a determination module 120, an adjustment module 130, and a second correction module 140.
The first correction module 110 is configured to correct, according to a homography matrix between the RGB camera and the depth camera, an RGB image acquired by the RGB camera and an energy map acquired by the depth camera, so as to obtain an RGB correction map and an energy correction map;
the determining module 120 is configured to determine a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, where the feature point pair includes a first feature point on the RGB correction map and a second feature point on the energy correction map, and a difference between a first disparity of the first feature point in the target feature point pair and a second disparity of the second feature point in the target feature point pair is greater than a preset threshold;
the adjusting module 130 is configured to adjust the homography matrix according to a baseline distance between the RGB camera and the depth camera and a preset adjustment ratio based on the difference value;
the second rectification module 140 is configured to rectify the RGB image and the energy map again according to the adjusted homography matrix.
The device can improve the registration accuracy between the RGB camera and the depth camera, and further can improve the accuracy of related applications in a scene where the RGB camera and the depth camera are simultaneously configured.
Optionally, the determining module 120 is configured to: regarding any one of the feature point pairs, in a case where a difference between a first disparity of the first feature point and a second disparity of the second feature point in the feature point pair is greater than the preset threshold, regarding the feature point pair as the target feature point pair; alternatively,
and determining candidate characteristic point pairs with the confidence degrees larger than a preset confidence degree threshold value from each characteristic point pair, and regarding any one candidate characteristic point pair, taking the candidate characteristic point pair as the target characteristic point pair when the difference value between the first parallax of the first characteristic point and the second parallax of the second characteristic point in the candidate characteristic point pair is larger than the preset threshold value.
Optionally, the adjusting module 130 is configured to determine a parameter adjustment amount according to the baseline distance and the preset adjustment ratio;
increasing the homography matrix by the parameter adjustment amount if the difference between the first parallax and the second parallax is greater than the preset threshold and the first parallax is greater than the second parallax; alternatively,
reducing the homography matrix by the parameter adjustment amount if a difference between the first disparity and the second disparity is less than the preset threshold and the first disparity is less than the second disparity.
Optionally, the adjusting module 130 is configured to determine target parameters in the homography matrix for characterizing a translational and rotational relationship between the RGB camera and the depth camera;
and adjusting the target parameters in the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion based on the difference value.
Optionally, the determining module 120 is further configured to, after the RGB image and the energy map are corrected again according to the adjusted homography matrix, return to performing the step of determining the target feature point pair from the matching feature point pairs on the RGB correction map and the energy correction map until the target feature point pair does not exist in each feature point pair on the new RGB correction map and the new energy correction map.
Optionally, the determining module 120 is configured to determine RGB coordinates of the first feature point in the RGB correction map and energy coordinates in the energy correction map;
taking a difference between the RGB coordinates and the energy coordinates as the first disparity.
Optionally, the determining module 120 is configured to determine first imaging coordinates of the second feature point in the depth camera and second imaging coordinates in the RGB camera;
and taking the difference value between the first imaging coordinate and the second imaging coordinate as the second parallax.
Optionally, the determining module 120 is configured to obtain a depth value of the second feature point, and determine a first distance from the second feature point to an optical axis of the depth camera in a world coordinate system;
and determining a first imaging coordinate of the second feature point in the depth camera according to the focal length of the depth camera, the depth value and the first distance.
Optionally, the determining module 120 is configured to determine a second distance from the second feature point to the optical axis of the RGB camera in a world coordinate system according to a center distance between the RGB camera and the depth camera;
and determining a second imaging coordinate of the second feature point in the RGB camera according to the focal length of the RGB camera, the depth value and the second distance.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that, in the above embodiments, the modules may be independent devices or may be the same device in specific implementation, for example, the first correction module 110 and the second correction module 140 may be the same module or may be two modules, which is not limited in this disclosure.
The present disclosure also provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correcting the RGB images collected by the RGB camera and the energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera to obtain an RGB correction map and an energy correction map;
determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, wherein the feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and the difference value between the first parallax of the first feature point in the target feature point pair and the second parallax of the second feature point in the target feature point pair is greater than a preset threshold value;
based on the difference value, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion;
and correcting the RGB image and the energy diagram again according to the adjusted homography matrix.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the camera registration method provided by the present disclosure.
Fig. 10 is a block diagram illustrating an apparatus 800 for camera registration according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera photographing operations, and camera registration operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the camera registration method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 806 provides power to the various components of device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera are configured with at least one RGB camera and at least one depth camera, and can receive external multimedia data when the device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the camera registration methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the apparatus 800 to perform the camera registration method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the camera registration method described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A camera registration method applied to an electronic device comprising at least one RGB camera and at least one depth camera, the method comprising:
correcting the RGB images collected by the RGB camera and the energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera to obtain an RGB correction map and an energy correction map;
determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, wherein the feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and the difference value of a first parallax of the first feature point in the target feature point pair and a second parallax of the second feature point in the target feature point pair is greater than a preset threshold value;
based on the difference value, adjusting the homography matrix according to the baseline distance between the RGB camera and the depth camera and a preset adjustment proportion;
and correcting the RGB image and the energy diagram again according to the adjusted homography matrix.
2. The method according to claim 1, wherein the determining target pairs of feature points from the pairs of matched feature points on the RGB correction map and the energy correction map comprises:
regarding any one of the feature point pairs, taking the feature point pair as the target feature point pair when a difference value between a first parallax of the first feature point and a second parallax of the second feature point in the feature point pair is greater than the preset threshold value; alternatively,
and determining candidate characteristic point pairs with the confidence degrees larger than a preset confidence degree threshold value from each characteristic point pair, and regarding any one candidate characteristic point pair, taking the candidate characteristic point pair as the target characteristic point pair when the difference value between the first parallax of the first characteristic point and the second parallax of the second characteristic point in the candidate characteristic point pair is larger than the preset threshold value.
3. The method of claim 2, wherein the adjusting, based on the difference, the homography matrix according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio comprises:
determining a parameter adjustment amount according to the baseline distance and the preset adjustment ratio;
increasing the homography matrix by the parameter adjustment amount when the difference between the first disparity and the second disparity is greater than the preset threshold and the first disparity is greater than the second disparity; or
decreasing the homography matrix by the parameter adjustment amount when the difference between the first disparity and the second disparity is greater than the preset threshold and the first disparity is less than the second disparity.
4. The method of claim 1, wherein the adjusting, based on the difference, the homography matrix according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio comprises:
determining target parameters in the homography matrix that represent a translation and rotation relation between the RGB camera and the depth camera; and
adjusting, based on the difference, the target parameters in the homography matrix according to the baseline distance between the RGB camera and the depth camera and the preset adjustment ratio.
5. The method of claim 2, further comprising:
after correcting the RGB image and the energy map again according to the adjusted homography matrix, returning to the step of determining a target feature point pair from the matched feature point pairs on the RGB correction map and the energy correction map, until no target feature point pair exists among the feature point pairs on the new RGB correction map and the new energy correction map.
6. The method according to any one of claims 2-5, wherein the determination of the first disparity for the first feature point comprises:
determining RGB coordinates of the first feature point in the RGB correction map and energy coordinates in the energy correction map;
taking a difference between the RGB coordinates and the energy coordinates as the first disparity.
7. The method according to any one of claims 2-5, wherein the determination of the second disparity for the second feature point comprises:
determining first imaging coordinates of the second feature point in the depth camera and second imaging coordinates of the second feature point in the RGB camera; and
taking a difference between the first imaging coordinates and the second imaging coordinates as the second disparity.
8. The method of claim 7, wherein the determining first imaging coordinates of the second feature point in the depth camera comprises:
acquiring a depth value of the second feature point, and determining a first distance from the second feature point to an optical axis of the depth camera in a world coordinate system; and
determining the first imaging coordinates of the second feature point in the depth camera according to a focal length of the depth camera, the depth value, and the first distance.
9. The method of claim 8, wherein the determining second imaging coordinates of the second feature point in the RGB camera comprises:
determining a second distance from the second feature point to an optical axis of the RGB camera in the world coordinate system according to a center distance between the RGB camera and the depth camera; and
determining the second imaging coordinates of the second feature point in the RGB camera according to a focal length of the RGB camera, the depth value, and the second distance.
10. A camera registration apparatus applied to an electronic device including at least one RGB camera and at least one depth camera, the apparatus comprising:
a first correction module configured to correct an RGB image acquired by the RGB camera and an energy map acquired by the depth camera according to a homography matrix between the RGB camera and the depth camera, to obtain an RGB correction map and an energy correction map;
a determining module configured to determine a target feature point pair from matched feature point pairs on the RGB correction map and the energy correction map, wherein each feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and a difference between a first disparity of the first feature point and a second disparity of the second feature point in the target feature point pair is greater than a preset threshold;
an adjusting module configured to adjust, based on the difference, the homography matrix according to a baseline distance between the RGB camera and the depth camera and a preset adjustment ratio; and
a second correction module configured to correct the RGB image and the energy map again according to the adjusted homography matrix.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correct an RGB image collected by the RGB camera and an energy map collected by the depth camera according to a homography matrix between the RGB camera and the depth camera, to obtain an RGB correction map and an energy correction map;
determine a target feature point pair from matched feature point pairs on the RGB correction map and the energy correction map, wherein each feature point pair comprises a first feature point on the RGB correction map and a second feature point on the energy correction map, and a difference between a first disparity of the first feature point and a second disparity of the second feature point in the target feature point pair is greater than a preset threshold;
adjust, based on the difference, the homography matrix according to a baseline distance between the RGB camera and the depth camera and a preset adjustment ratio; and
correct the RGB image and the energy map again according to the adjusted homography matrix.
12. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 9.
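Read together, method claims 1-9 describe an iterative registration loop: a first disparity is computed directly from a feature point's coordinates in the two correction maps (claim 6), a second disparity is computed by projecting the point into each camera with the pinhole model from its depth value, the focal lengths, and its distances to the optical axes (claims 7-9), and the homography is nudged by baseline × adjustment-ratio whenever the two disparities disagree by more than the threshold (claims 2-4). The Python sketch below is illustrative only: the function names, the use of the horizontal coordinate as the disparity, and the choice of the translation entry `H[0, 2]` as the adjusted "target parameter" are assumptions of this sketch, not details fixed by the claims.

```python
import numpy as np

def first_disparity(rgb_xy, energy_xy):
    # Claim 6: difference between the feature point's coordinate in the
    # RGB correction map and in the energy correction map (the horizontal
    # coordinate is chosen here; the claim only says "coordinates").
    return rgb_xy[0] - energy_xy[0]

def second_disparity(depth, dist_depth_axis, dist_rgb_axis, f_depth, f_rgb):
    # Claims 7-9: pinhole projection x = f * X / Z for each camera, then
    # the difference of the two imaging coordinates. dist_depth_axis and
    # dist_rgb_axis are the point's distances to each camera's optical
    # axis in the world coordinate system (claim 9 derives the latter
    # from the center distance between the two cameras).
    x_in_depth_cam = f_depth * dist_depth_axis / depth
    x_in_rgb_cam = f_rgb * dist_rgb_axis / depth
    return x_in_depth_cam - x_in_rgb_cam

def adjust_translation(H, baseline, ratio, d1, d2, threshold):
    # Claims 3-4: the parameter adjustment amount is baseline * ratio;
    # increase the adjusted entry when d1 > d2 and decrease it when
    # d1 < d2, acting only when |d1 - d2| exceeds the preset threshold.
    H = H.copy()
    if abs(d1 - d2) > threshold:
        delta = baseline * ratio
        H[0, 2] += delta if d1 > d2 else -delta  # H[0, 2] is an assumption
    return H
```

In the spirit of claim 5, a caller would alternate `adjust_translation` with re-rectification of the RGB image and energy map until no feature point pair on the new correction maps exceeds the threshold.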
CN202111117364.4A 2021-09-23 2021-09-23 Camera registration method and device, communication equipment and storage medium Pending CN115861431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111117364.4A CN115861431A (en) 2021-09-23 2021-09-23 Camera registration method and device, communication equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115861431A true CN115861431A (en) 2023-03-28

Family

ID=85652407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111117364.4A Pending CN115861431A (en) 2021-09-23 2021-09-23 Camera registration method and device, communication equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115861431A (en)

Similar Documents

Publication Publication Date Title
JP6348611B2 (en) Automatic focusing method, apparatus, program and recording medium
CN111105454B (en) Method, device and medium for obtaining positioning information
CN106778773B (en) Method and device for positioning target object in picture
CN113643356B (en) Camera pose determination method, virtual object display method, device and electronic equipment
CN112188096A (en) Photographing method and device, terminal and storage medium
CN115861431A (en) Camera registration method and device, communication equipment and storage medium
CN112702514B (en) Image acquisition method, device, equipment and storage medium
CN114418865A (en) Image processing method, device, equipment and storage medium
CN114390189A (en) Image processing method, device, storage medium and mobile terminal
WO2023240401A1 (en) Camera calibration method and apparatus, and readable storage medium
WO2023245378A1 (en) Method and apparatus for determining confidence level of depth information of image, and storage medium
CN111986097B (en) Image processing method and device
CN112070681B (en) Image processing method and device
CN111985280B (en) Image processing method and device
CN116863162A (en) Parameter optimization method and device of camera module, electronic equipment and storage medium
CN115965675A (en) Image processing method and device, electronic device and storage medium
CN116805285A (en) Image processing method and device, electronic equipment and storage medium
CN115809958A (en) Image processing method and device and calibration method and device
CN117392232A (en) Image calibration method, device, terminal and storage medium
CN117974772A (en) Visual repositioning method, device and storage medium
CN117291823A (en) Image processing method, device and storage medium
CN115222818A (en) Calibration verification method, calibration verification device and storage medium
CN115690221A (en) Abnormal positioning method and device for camera calibration and storage medium
CN114898074A (en) Three-dimensional information determination method and device, electronic equipment and storage medium
CN117132657A (en) Pose correction method, pose correction device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination