CN116863162A - Parameter optimization method and device of camera module, electronic equipment and storage medium - Google Patents


Publication number
CN116863162A
Authority
CN
China
Prior art keywords
point set
feature point
image
feature
characteristic
Prior art date
Legal status
Pending
Application number
CN202210303687.0A
Other languages
Chinese (zh)
Inventor
张超 (Zhang Chao)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202210303687.0A
Publication of CN116863162A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a parameter optimization method and apparatus for a camera module, an electronic device, and a storage medium, where the camera module includes a first camera and a second camera. The method includes: acquiring a color image captured by the first camera, and a depth image, a grayscale image, and a confidence image captured by the second camera; and determining a position parameter, where the position parameter characterizes the relative positional relationship between the first camera and the second camera. Feature point sets of the depth image, confidence image, grayscale image, and color image captured by the camera module are acquired respectively and confidence-filtered against a first preset condition; the feature point sets that satisfy the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter is obtained and the autofocus and optical image stabilization performance of the camera module is improved.

Description

Parameter optimization method and device of camera module, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of imaging technologies, and in particular to a parameter optimization method and device for a camera module, an electronic device, and a storage medium.
Background
Currently, cameras with 3D imaging capability, chiefly RGB-D cameras, are widely used in augmented reality, virtual reality, simultaneous localization and mapping, and related fields. During image acquisition, functions such as autofocus and optical image stabilization reduce visual positioning errors caused by differing focus positions between lenses or by pose changes and shake of the carrying camera; the effectiveness of these functions depends on the relative positional relationship between the lenses in the camera, i.e., on the camera's parameters.
At present, camera parameters are mainly obtained by computing a homography matrix through feature detection and matching between an acquired grayscale image and a color image. The resulting homography matrix often contains errors introduced by the shooting scene, so the derived relative positional relationship between the lenses in the camera is inaccurate, which degrades the autofocus and optical image stabilization functions.
Disclosure of Invention
In view of this, the disclosure provides a parameter optimization method and device for a camera module, an electronic device, and a readable storage medium, so as to at least solve the problem of errors in calculating camera parameters in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a parameter optimization method for a camera module, where the camera module includes a first camera and a second camera, and the method includes:
acquiring a color image captured by the first camera, and a depth image, a grayscale image, and a confidence image captured by the second camera, where a first correspondence exists among the pixels of the depth image, the pixels of the grayscale image, and the pixels of the confidence image;
acquiring a first feature point set of the color image and a second feature point set of the grayscale image that have a second correspondence;
determining, according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image, a third feature point set of the color image, a fourth feature point set of the grayscale image, and a fifth feature point set of the depth image that have a third correspondence, where the confidence of the pixel of the confidence image corresponding to each feature point in the fourth feature point set satisfies a first preset condition;
and determining a position parameter according to the third feature point set, the fourth feature point set, and the fifth feature point set, where the position parameter characterizes the relative positional relationship between the first camera and the second camera.
In combination with any one of the embodiments of the present disclosure, the acquiring the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence includes:
performing feature matching on the color image and the grayscale image according to the current position parameter of the camera module to obtain the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence;
after the position parameter is determined, the method further includes:
updating the current position parameter with the determined position parameter.
In combination with any one of the embodiments of the present disclosure, the feature matching of the color image and the grayscale image includes:
adjusting the color image and the grayscale image to coplanar line alignment;
and performing feature matching on the color image and the grayscale image that have been adjusted to coplanar line alignment.
In combination with any one of the embodiments of the present disclosure, the determining, according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image, the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence includes:
determining, according to the first correspondence and the second feature point set, a sixth feature point set of the depth image that has a fourth correspondence with the second feature point set;
determining the confidence of each feature point in the sixth feature point set and the confidence of each feature point in the second feature point set according to the confidence of the pixels of the confidence image and the first correspondence;
determining the confidence of each feature point in the first feature point set according to the confidence of each feature point in the second feature point set and the second correspondence;
and determining, according to the confidence of the first feature point set, the confidence of the second feature point set, the confidence of the sixth feature point set, and the first preset condition, the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence.
In combination with any one of the embodiments of the present disclosure, the determining the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence includes:
removing the feature points whose confidence is below a confidence threshold from the first feature point set to obtain the third feature point set;
removing the feature points whose confidence is below the confidence threshold from the second feature point set to obtain the fourth feature point set;
and removing the feature points whose confidence is below the confidence threshold from the sixth feature point set to obtain the fifth feature point set.
In combination with any one of the embodiments of the present disclosure, the position parameter includes a baseline;
the determining a position parameter according to the third feature point set, the fourth feature point set, and the fifth feature point set includes:
acquiring a baseline for each feature point pair between the third feature point set and the fourth feature point set;
and clustering the baselines of the feature point pairs, and determining the clustering result as the baseline of the camera module.
In combination with any one of the embodiments of the present disclosure, the acquiring a baseline for each feature point pair between the third feature point set and the fourth feature point set includes:
acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set;
determining, according to the third correspondence, the depth value of each feature point in the fifth feature point set as the depth value of the corresponding feature point pair between the third and fourth feature point sets;
and determining the baseline of each feature point pair between the third and fourth feature point sets according to the coordinate distance and depth value of each feature point pair and the focal length of the camera module.
In combination with any one of the embodiments of the present disclosure, the acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set includes:
acquiring a disparity set between the first feature point set and the second feature point set according to the second correspondence, where the disparity set includes the coordinate distance of each feature point pair between the first and second feature point sets;
and acquiring, from the disparity set, the coordinate distance of each feature point pair between the third and fourth feature point sets.
In combination with any one of the embodiments of the present disclosure, the position parameter includes a rotation angle;
the determining a position parameter according to the third feature point set, the fourth feature point set, and the fifth feature point set includes:
determining, according to the third correspondence and the depth value of each feature point in the fifth feature point set, a seventh feature point set of the color image and an eighth feature point set of the grayscale image that have a fifth correspondence, where the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set satisfies a second preset condition;
acquiring a rotation angle for each feature point pair between the seventh feature point set and the eighth feature point set;
and clustering the rotation angles of the feature point pairs, and determining the clustering result as the rotation angle of the camera module.
In combination with any one of the embodiments of the present disclosure, the determining the seventh feature point set of the color image and the eighth feature point set of the grayscale image that have the fifth correspondence includes:
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold to obtain the seventh feature point set and the eighth feature point set; or,
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are less than a third depth threshold to obtain the seventh feature point set and the eighth feature point set.
According to a second aspect of the embodiments of the present disclosure, there is provided a parameter optimization apparatus for a camera module, the camera module including a first camera and a second camera, the apparatus including:
an image acquisition module, configured to acquire a color image captured by the first camera, and a depth image, a grayscale image, and a confidence image captured by the second camera, where a first correspondence exists among the pixels of the depth image, the pixels of the grayscale image, and the pixels of the confidence image;
an online calibration module, configured to acquire a first feature point set of the color image and a second feature point set of the grayscale image that have a second correspondence;
a confidence filtering module, configured to determine, according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image, a third feature point set of the color image, a fourth feature point set of the grayscale image, and a fifth feature point set of the depth image that have a third correspondence, where the confidence of the pixel of the confidence image corresponding to each feature point in the fourth feature point set satisfies a first preset condition;
and a parameter determining module, configured to determine a position parameter according to the third, fourth, and fifth feature point sets, where the position parameter characterizes the relative positional relationship between the first camera and the second camera.
In combination with any one of the embodiments of the present disclosure, the online calibration module acquires the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence by being configured to:
perform feature matching on the color image and the grayscale image according to the current position parameter of the camera module to obtain the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence;
the parameter determining module further includes a parameter updating module, configured to:
update the current position parameter with the determined position parameter.
In combination with any one of the embodiments of the present disclosure, for the feature matching of the color image and the grayscale image, the apparatus is configured to:
adjust the color image and the grayscale image to coplanar line alignment;
and perform feature matching on the color image and the grayscale image that have been adjusted to coplanar line alignment.
In combination with any one of the embodiments of the present disclosure, the confidence filtering module determines, according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image, the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence by being configured to:
determine, according to the first correspondence and the second feature point set, a sixth feature point set of the depth image that has a fourth correspondence with the second feature point set;
determine the confidence of each feature point in the sixth feature point set and the confidence of each feature point in the second feature point set according to the confidence of the pixels of the confidence image and the first correspondence;
determine the confidence of each feature point in the first feature point set according to the confidence of each feature point in the second feature point set and the second correspondence;
and determine, according to the confidence of the first feature point set, the confidence of the second feature point set, the confidence of the sixth feature point set, and the first preset condition, the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence.
In combination with any one of the embodiments of the present disclosure, the confidence filtering module determines the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence by being configured to:
remove the feature points whose confidence is below a confidence threshold from the first feature point set to obtain the third feature point set;
remove the feature points whose confidence is below the confidence threshold from the second feature point set to obtain the fourth feature point set;
and remove the feature points whose confidence is below the confidence threshold from the sixth feature point set to obtain the fifth feature point set.
In combination with any one of the embodiments of the present disclosure, the position parameter includes a baseline; the parameter determining module determines a position parameter according to the third, fourth, and fifth feature point sets by being configured to:
acquire a baseline for each feature point pair between the third feature point set and the fourth feature point set;
and cluster the baselines of the feature point pairs, and determine the clustering result as the baseline of the camera module.
In combination with any one of the embodiments of the present disclosure, for the acquiring a baseline for each feature point pair between the third feature point set and the fourth feature point set, the apparatus is configured to:
acquire the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set, where a feature point pair consists of two feature points belonging respectively to the third and fourth feature point sets;
acquire the depth value of each feature point pair between the third and fourth feature point sets according to the third correspondence and the fifth feature point set;
and acquire the baseline of each feature point pair between the third and fourth feature point sets according to the coordinate distance and depth value of each feature point pair and the focal length of the camera module.
In combination with any one of the embodiments of the present disclosure, for the acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set, the apparatus is configured to:
acquire a disparity set between the first feature point set and the second feature point set according to the second correspondence, where the disparity set includes the coordinate distance of each feature point pair between the first and second feature point sets;
and acquire, from the disparity set, the coordinate distance of each feature point pair between the third and fourth feature point sets.
In combination with any one of the embodiments of the present disclosure, the position parameter includes a rotation angle;
the parameter determining module determines a position parameter according to the third, fourth, and fifth feature point sets by being configured to:
determine, according to the third correspondence and the depth value of each feature point in the fifth feature point set, a seventh feature point set of the color image and an eighth feature point set of the grayscale image that have a fifth correspondence, where the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set satisfies a second preset condition;
acquire a rotation angle for each feature point pair between the seventh feature point set and the eighth feature point set;
and cluster the rotation angles of the feature point pairs, and determine the clustering result as the rotation angle of the camera module.
In combination with any one of the embodiments of the present disclosure, for the determining the seventh feature point set of the color image and the eighth feature point set of the grayscale image that have the fifth correspondence, the apparatus is configured to:
remove, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold to obtain the seventh feature point set and the eighth feature point set; or,
remove, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are less than a third depth threshold to obtain the seventh feature point set and the eighth feature point set.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
a memory, configured to store instructions executable by a processor;
and the processor, configured to execute the executable instructions in the memory to implement the steps of the method in any one of the embodiments of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the method in any one of the embodiments of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a camera module including the above electronic device.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
feature point sets of the depth image, the confidence image, the grayscale image, and the color image captured by the camera module are acquired respectively, confidence filtering is performed on the feature point sets based on a first preset condition, and the feature point sets that satisfy the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter can be obtained and the autofocus and optical image stabilization performance of the camera module is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1A is a flowchart of a method for optimizing parameters of a camera module according to an exemplary embodiment of the present disclosure;
FIG. 1B is a flowchart of another method for optimizing parameters of a camera module according to an exemplary embodiment of the present disclosure;
FIG. 2A is a flowchart of a baseline determination method according to an exemplary embodiment of the present disclosure;
FIG. 2B is a flowchart of another baseline determination method according to an exemplary embodiment of the present disclosure;
FIG. 2C is a flowchart of another baseline determination method according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart of a rotation angle determination method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a parameter optimization apparatus of a camera module according to an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations set forth in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
FIG. 1A illustrates a flowchart of a parameter optimization method for a camera module according to an exemplary embodiment of the present disclosure.
In step S101, a color image captured by the first camera, and a depth image, a grayscale image, and a confidence image captured by the second camera are acquired, where a first correspondence exists among the pixels of the depth image, the pixels of the grayscale image, and the pixels of the confidence image.
In the present disclosure, the camera module includes at least two cameras. Where the camera module is an RGB-D camera, the first camera may be a color camera and the second camera may be a depth camera: the color image is captured by the color camera, and the depth image, grayscale image, and confidence image are captured by the depth camera. The color image records the color (RGB) information of each pixel in the shot scene, the depth image records the distance from the second camera to the shot scene, the grayscale image records the gray value of each pixel, and the confidence image records the confidence level of the depth value at each pixel. Because the depth image, the grayscale image, and the confidence image are all captured by the second camera, they have the first correspondence: the same resolution and the same number of pixels, with each pixel position in the three images referring to the same scene point and carrying its depth value and confidence value.
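For illustration only, the Python sketch below shows one way to hold the four frames together; the class and field names are assumptions made for this example, and only the pixel-level alignment constraint (the first correspondence) comes from the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModuleFrame:
    color: np.ndarray        # H x W x 3 image from the first (color) camera
    depth: np.ndarray        # h x w depth map from the second (depth) camera
    gray: np.ndarray         # h x w grayscale image from the depth camera
    confidence: np.ndarray   # h x w per-pixel depth confidence

    def __post_init__(self):
        # Depth, grayscale, and confidence images come from one sensor, so
        # they must be pixel-aligned: same resolution, same pixel grid.
        assert self.depth.shape == self.gray.shape == self.confidence.shape
```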
In step S102, a first feature point set of the color image and a second feature point set of the grayscale image that have a second correspondence are acquired.
Because the color image and the grayscale image are captured by the first camera and the second camera respectively, their resolutions may differ and the correspondence between their pixels is not known in advance. A second correspondence between the color image and the grayscale image, i.e., a correspondence between feature points of the two images, can be established through feature detection and matching, yielding the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence: the two sets contain the same number of feature points, and corresponding feature points depict the same scene point and therefore share its depth value and confidence value. The feature detection may be implemented with algorithms such as SIFT or SURF, which are not described in detail here.
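As a concrete illustration, the Python sketch below performs the feature detection and matching step with OpenCV's SIFT (the disclosure names SIFT and SURF as options). The ratio-test matching and the 0.75 threshold are common practice and are assumptions here, not part of the disclosed method.

```python
import cv2
import numpy as np

def match_color_gray(color_bgr, gray):
    # Detect SIFT features in both images.
    color_gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(color_gray, None)
    kp2, des2 = sift.detectAndCompute(gray, None)
    # Match descriptors and keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    # First/second feature point sets with the second correspondence:
    # pts_color[i] and pts_gray[i] depict the same scene point.
    pts_color = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_gray = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_color, pts_gray
```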
In step S103, a third feature point set of the color image, a fourth feature point set of the grayscale image, and a fifth feature point set of the depth image that have a third correspondence are determined according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image, where the confidence of the pixel of the confidence image corresponding to each feature point in the fourth feature point set satisfies a first preset condition.
The second feature point set of the grayscale image has the first correspondence with the depth image and the confidence image, and the second correspondence with the first feature point set of the color image, so the confidence value of each feature point in the first feature point set can be determined from the confidence of the pixels of the confidence image. In a shot scene, the confidence of the depth values obtained by the camera module at the edges of the shot object is typically poor, which is likely to introduce errors into the parameter calculation of the camera module. After the confidence value of each pixel of the confidence image is obtained, confidence filtering is performed, based on the confidence of the depth values, on the feature point sets of the depth image and on the first and second feature point sets, i.e., the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image whose confidence satisfies the first preset condition are obtained.
In step S104, a position parameter is determined according to the third feature point set, the fourth feature point set, and the fifth feature point set, where the position parameter characterizes the relative positional relationship between the first camera and the second camera.
The third, fourth, and fifth feature point sets each consist of feature points whose confidence satisfies the first preset condition. Based on these feature point sets, the position parameter of the camera module can be redetermined, which improves the accuracy of the position parameter and reduces calculation error. The position parameter characterizes the relative positional relationship between the first camera and the second camera and may include the baseline and the rotation angle of the camera module. Where the camera module is an RGB-D camera, the position parameter may include any extrinsic parameter of the RGB-D camera.
With the above method, feature point sets of the depth image, confidence image, grayscale image, and color image captured by the camera module are acquired respectively, confidence filtering is performed on them based on the first preset condition, and the feature point sets satisfying the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter is obtained and the autofocus and optical image stabilization performance of the camera module is improved.
In an alternative embodiment, the acquiring the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence includes:
performing feature matching on the color image and the grayscale image according to the current position parameter of the camera module to obtain the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence.
Before the new position parameter is determined, feature detection and matching can be performed on the color image and the grayscale image using the current relative positional relationship between the first camera and the second camera, i.e., the current position parameter, to obtain the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence. The current position parameter may include the current baseline and rotation angle of the camera module.
After the position parameter is determined, the method further includes:
updating the current position parameter with the determined position parameter.
After the new position parameter is determined, it can be used to update the current position parameter, which improves subsequent feature detection and matching between the grayscale image and the color image, and gives the first camera and the second camera a more accurate relative positional relationship, improving the autofocus and optical image stabilization performance of the camera module. In one example, the determined position parameter may be used to update the current position parameter in real time or at a set time interval. In another example, the determined position parameter may be used to update the current position parameter after the camera module detects shake.
With this method, the first feature point set of the color image and the second feature point set of the grayscale image that have the second correspondence are obtained through feature matching of the color image and the grayscale image, confidence filtering is performed on the feature point sets based on the first preset condition and the current position parameter, and the feature point sets satisfying the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter is obtained.
In an alternative embodiment, the feature matching of the color image and the grayscale image includes: adjusting the color image and the grayscale image to coplanar line alignment; and performing feature matching on the color image and the grayscale image that have been adjusted to coplanar line alignment.
Before feature matching, the color image captured by the first camera and the grayscale image captured by the second camera usually lie in different planes and exhibit shape distortion. The color image and the grayscale image can be adjusted to a coplanar, line-aligned state so that they lie in the same plane and the y-axis coordinates of feature point pairs in the two images coincide, which reduces the computational load of feature matching and position parameter calculation. Coplanar line alignment is achieved by a stereo rectification algorithm that maps the image planes captured by the different cameras onto one common plane and unifies the vertical coordinates of the images to achieve row alignment.
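A hedged sketch of this rectification step with OpenCV's standard stereo rectification routines follows; the intrinsic matrices, distortion coefficients, and the current rotation and translation between the cameras are assumed to come from calibration and are not specified by the disclosure.

```python
import cv2

def rectify_pair(color, gray, K1, d1, K2, d2, R, T, size):
    # Compute rectifying rotations (R1, R2) and projections (P1, P2) that
    # put both image planes into one plane with aligned rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1 = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2 = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    color_rect = cv2.remap(color, m1[0], m1[1], cv2.INTER_LINEAR)
    gray_rect = cv2.remap(gray, m2[0], m2[1], cv2.INTER_LINEAR)
    return color_rect, gray_rect  # rows now correspond across the pair
```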
With this method, feature point sets of the depth image, confidence image, grayscale image, and color image captured by the camera module are acquired respectively, the color image and the grayscale image are adjusted to coplanar line alignment, confidence filtering is performed on the feature point sets based on the first preset condition and the current position parameter, and the feature point sets satisfying the first preset condition are used to determine the position parameter of the camera module. This yields a more accurate position parameter, reduces the computational load of feature matching and position parameter calculation, and, by updating the current position parameter with the determined one, gives the first camera and the second camera a more accurate relative positional relationship, improving the autofocus and optical image stabilization performance of the camera module.
FIG. 1B illustrates a flowchart of another parameter optimization method for a camera module according to an exemplary embodiment of the present disclosure.
In step S1031, the determining the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image according to the first correspondence, the second correspondence, and the confidence of the pixels of the confidence image includes:
determining, according to the first correspondence and the second feature point set, a sixth feature point set of the depth image that has a fourth correspondence with the second feature point set.
After feature detection and matching, the second feature point set of the grayscale image may be geometrically distorted relative to the grayscale image originally captured by the depth camera. To establish a correspondence between the second feature point set and the depth image, the sixth feature point set of the depth image can be obtained by locating the pixels of the depth image that correspond to the second feature point set. In one example, if the grayscale image was warped during feature detection and matching, each pixel of the depth image can be transformed into the same coordinate system as the second feature point set to obtain the sixth feature point set. The sixth feature point set and the second feature point set have the fourth correspondence: they contain the same number of feature points, and corresponding feature points carry the same depth value and confidence value.
In step S1032, the confidence of each feature point in the sixth feature point set and the confidence of each feature point in the second feature point set are determined according to the confidence of the pixels of the confidence image and the first correspondence.
The depth image and the confidence image have the first correspondence, so the confidence of each pixel in the confidence image gives the confidence of each feature point in the sixth feature point set. Similarly, the second feature point set of the grayscale image and the sixth feature point set have the fourth correspondence, so the confidence of each feature point in the sixth feature point set gives the confidence of each point in the second feature point set.
In step S1033, the confidence of the first feature point set is determined according to the confidence of the second feature point set and the second correspondence.
The second feature point set of the grayscale image and the first feature point set of the color image have the second correspondence, so the confidence of each point in the second feature point set gives the confidence of each point in the first feature point set of the color image.
In step S1034, the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence are determined according to the confidence of the first feature point set, the confidence of the second feature point set, the confidence of the sixth feature point set, and the first preset condition.
After the confidence of each feature point in the first, second, and sixth feature point sets is obtained, confidence filtering can be performed based on the confidence of the depth values, i.e., the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence and whose confidence satisfies the first preset condition are obtained. The first preset condition may include a confidence range chosen to raise the overall confidence level of the feature point sets.
With this method, the confidence of the first feature point set, the confidence of the second feature point set, and the confidence of the sixth feature point set are filtered based on the first preset condition, and the feature point sets satisfying the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter is obtained and the autofocus and optical image stabilization performance of the camera module is improved.
In an alternative embodiment, the determining the third feature point set of the color image, the fourth feature point set of the grayscale image, and the fifth feature point set of the depth image that have the third correspondence includes:
removing the feature points whose confidence is below a confidence threshold from the first feature point set to obtain the third feature point set;
removing the feature points whose confidence is below the confidence threshold from the second feature point set to obtain the fourth feature point set;
and removing the feature points whose confidence is below the confidence threshold from the sixth feature point set to obtain the fifth feature point set.
The confidence threshold is a preset lower bound on the confidence level of a feature point. In the present disclosure, the first, second, and sixth feature point sets can be scanned, the feature points whose confidence is below the confidence threshold are identified as points with a low depth confidence level and removed, and the feature points whose confidence is above the threshold are retained as the third, fourth, and fifth feature point sets.
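A minimal sketch of this confidence filter follows; the threshold value of 0.8 is an illustrative assumption, since the disclosure only requires a preset confidence threshold.

```python
import numpy as np

def confidence_filter(pts_color, pts_gray, confidence_img, thresh=0.8):
    # Look up the confidence at each grayscale feature point; the first
    # correspondence makes gray/depth/confidence images pixel-aligned.
    cols = np.round(pts_gray[:, 0]).astype(int)
    rows = np.round(pts_gray[:, 1]).astype(int)
    keep = confidence_img[rows, cols] >= thresh
    # Third/fourth feature point sets; the same mask also yields the fifth
    # (and sixth) sets when applied to the depth-image feature points.
    return pts_color[keep], pts_gray[keep], keep
```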
With this method, the feature points whose confidence is below the confidence threshold are removed from each feature point set, and the feature point sets satisfying the first preset condition are used to determine the position parameter of the camera module, so that a more accurate position parameter is obtained and the autofocus and optical image stabilization performance of the camera module is improved.
FIG. 2A is a flowchart of a baseline determination method according to an exemplary embodiment of the present disclosure.
In step S201, the position parameter includes a baseline, and the determining a position parameter according to the third feature point set, the fourth feature point set, and the fifth feature point set includes: acquiring a baseline for each feature point pair between the third feature point set and the fourth feature point set.
The baseline length characterizes the positional relationship between the cameras; it is a component of the position parameter of the camera module and is positively correlated with the measuring range of the module. From the third feature point set of the color image and the fourth feature point set of the grayscale image, a baseline estimate is obtained for each feature point pair, where a feature point pair consists of two feature points belonging respectively to the third feature point set and the fourth feature point set.
In step S202, the baselines of the feature point pairs are clustered, and the clustering result is determined as the baseline of the camera module.
The baselines of the feature point pairs between the third feature point set of the color image and the fourth feature point set of the grayscale image are clustered, and the resulting baseline length represents the distance between the optical centers of the cameras. In one example, after a new baseline is determined, it may be used to update the current baseline to optimize subsequent feature detection and matching between the grayscale image and the color image, or to give the first camera and the second camera a more accurate relative positional relationship, improving the autofocus and optical image stabilization performance of the camera module.
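The disclosure does not name a clustering algorithm. One reasonable reading for the one-dimensional baseline samples, sketched below, is to take the densest histogram bin as the dominant cluster and use its mean as the module baseline; the bin count is an assumption.

```python
import numpy as np

def cluster_baseline(baselines, bins=50):
    baselines = np.asarray(baselines, dtype=float)
    hist, edges = np.histogram(baselines, bins=bins)
    i = np.argmax(hist)                        # densest bin = dominant cluster
    in_bin = (baselines >= edges[i]) & (baselines <= edges[i + 1])
    return float(baselines[in_bin].mean())     # cluster center = new baseline
```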
With this method, the feature point sets satisfying the first preset condition are used to determine the baseline of the camera module, yielding a more accurate position parameter; updating the current baseline gives the first camera and the second camera a more accurate relative positional relationship and improves the autofocus and optical image stabilization performance of the camera module.
FIG. 2B is a flowchart of another baseline determination method according to an exemplary embodiment of the present disclosure.
In step S2011, the acquiring the baseline of each feature point pair between the third feature point set and the fourth feature point set includes: acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set.
The coordinate distance of each feature point pair between the third and fourth feature point sets is used to calculate the baseline of that pair. When the color image and the grayscale image have been adjusted to a line-aligned state through stereo rectification, the y-axis coordinates of the two points of each feature point pair in the third and fourth feature point sets are identical, so the coordinate distance may consist only of the difference of the x-axis coordinates, i.e., the coordinate distance of each pair along the x-axis, which reduces the computation needed to obtain the baseline.
In step S2012, according to the third correspondence, the depth value of each feature point in the fifth feature point set is determined as the depth value of the corresponding feature point pair between the third and fourth feature point sets.
The depth value of each feature point pair between the third and fourth feature point sets is also used to calculate the baseline of that pair. The third, fourth, and fifth feature point sets satisfying the first preset condition have the third correspondence; based on it, the depth value of each feature point in the third and fourth feature point sets can be read from the fifth feature point set of the depth image. The depth value represents the distance from the feature point to the camera module. When the color image and the grayscale image have been adjusted to a coplanar state through stereo rectification, the two feature points of each pair, belonging respectively to the third and fourth feature point sets, share the same depth value, which can therefore be determined as the depth value of the pair.
In step S2013, the baseline of each feature point pair between the third and fourth feature point sets is determined according to the coordinate distance and depth value of each feature point pair and the focal length of the camera module.
The baseline of each feature point pair between the third feature point set and the fourth feature point set can be obtained by equation (1):
B = z · (x_right - x_left) / f    (1)
where B is the baseline of the feature point pair, z is the depth value of the pair, x_right - x_left is the coordinate distance of the pair between the third and fourth feature point sets, and f is the focal length of the camera module. In the camera module, the focal length of the first camera may differ from that of the second camera; by equivalent transformation, the focal length of either camera, or any value between the two focal lengths, may be used to obtain the baseline of each feature point pair. The focal length of the camera can be obtained by offline calibration, which is not described in detail here.
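As a direct transcription of equation (1), the sketch below computes a baseline estimate for every rectified feature point pair at once; the argument names are illustrative.

```python
import numpy as np

def per_pair_baselines(x_left, x_right, z, f):
    # x_left, x_right: x coordinates of each pair in the two rectified
    # images; z: depth value of each pair; f: focal length in pixels.
    return z * (x_right - x_left) / f  # equation (1), vectorized
```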
With this method, the feature point sets satisfying the first preset condition are used to determine the baseline of the camera module, yielding a more accurate position parameter; updating the current baseline gives the first camera and the second camera a more accurate relative positional relationship and improves the autofocus and optical image stabilization performance of the camera module.
FIG. 2C is a flowchart of another baseline determination method according to an exemplary embodiment of the present disclosure.
In step S2011-1, a disparity set between the first feature point set and the second feature point set is acquired according to the second correspondence, where the disparity set includes the coordinate distance of each feature point pair between the first and second feature point sets.
The second correspondence between the second feature point set of the grayscale image and the first feature point set of the color image includes the mapping of each feature point between the world coordinate system and the pixel coordinate system, and the translation mapping between each feature point pair in the grayscale image and the color image. A transformation matrix (homography matrix) between the matched feature point sets can be obtained from this mapping, and from the distance of each feature point pair under the homography a first disparity set between the first feature point set and the second feature point set can be obtained, characterizing the coordinate distance of each feature point pair in the first and second feature point sets.
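The sketch below illustrates this step with OpenCV's findHomography and reads the per-pair disparities as x-coordinate differences, which is consistent with the row-aligned setup above; the RANSAC method and the reprojection threshold are assumptions.

```python
import cv2
import numpy as np

def disparity_set(pts_color, pts_gray):
    # Transformation (homography) matrix between the matched sets.
    H, mask = cv2.findHomography(pts_gray, pts_color, cv2.RANSAC, 3.0)
    inliers = mask.ravel().astype(bool)
    # After rectification the rows agree, so the coordinate distance of
    # each inlier pair reduces to the x-axis difference (the disparity).
    disparities = pts_color[inliers, 0] - pts_gray[inliers, 0]
    return H, disparities
```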
In step S2011-2, the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set is acquired from the parallax set.
In the shooting scene, the confidence level of the depth value of the edge of the shot object obtained by the second camera is poor, and errors are likely to occur in parameter calculation of the shooting module. In one example, the second set of disparities whose confidence level satisfies the first preset condition may be obtained by confidence filtering the set of disparities based on the confidence level of the depth value after the set of feature points of the confidence level image is obtained. The first preset condition may include a confidence level range that is set to be capable of improving a confidence level of the feature point set. In one example, feature points in the first disparity set that are below the confidence threshold may be identified as feature points with low depth confidence levels and removed. And reserving characteristic points in the parallax set, which are higher than the confidence threshold. The feature points in each of the second disparity sets are used to characterize the coordinate distance of each feature point pair in the third feature point set and the fourth feature point set. And acquiring the coordinate distance of each feature point pair relative to the coordinate distance acquired according to the third feature point set and the fourth feature point set. The calculation pressure may be further reduced by acquiring the coordinate distances of the feature point pairs from the second parallax set.
According to the method, the parallax set of the color image and the grayscale image is confidence-filtered based on the first preset condition and the current position parameter, and the feature point set meeting the first preset condition is used to determine the baseline of the camera module, so that a position parameter with higher accuracy is obtained and the current baseline is updated. The first camera and the second camera thus acquire a more accurate relative position relation, which improves the effects of the automatic focusing and optical anti-shake functions of the camera module.
FIG. 3 is a flowchart of a rotation angle determination method according to an exemplary embodiment of the present disclosure;
in step S301, the position parameter includes a rotation angle, and the determining a position parameter according to the third feature point set, the fourth feature point set, and the fifth feature point set includes:
and determining a seventh feature point set of the color image and an eighth feature point set of the gray scale image with a fifth corresponding relation according to the third corresponding relation and the depth value of each feature point in the fifth feature point set, wherein the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set meets a second preset condition.
The rotation angle is used for representing the angle difference between the camera lenses and is a component of the position parameters of the camera module. The third feature point set of the color image, the fourth feature point set of the grayscale image and the fifth feature point set of the depth image have the third correspondence, so the depth value of each feature point in the third and fourth feature point sets may be determined from the depth value of the corresponding feature point in the fifth feature point set. In a shooting scene, as the distance from the photographed object to the camera module increases, errors are more likely to occur in the parameter calculation of the camera module. Therefore, after the fifth feature point set of the depth image is acquired, depth value filtering is performed on the third and fourth feature point sets based on the depth value, that is, the distance from the photographed object to the camera module, to obtain a seventh feature point set of the color image and an eighth feature point set of the grayscale image having a fifth correspondence, where the depth values of the corresponding feature points satisfy the second preset condition. Based on the fifth correspondence, a depth value of each feature point in the seventh feature point set and the eighth feature point set may be obtained through the fifth feature point set.
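A minimal sketch of the depth value filtering with hypothetical names: pairs whose depth violates the second preset condition are dropped, producing the seventh and eighth feature point sets from the third and fourth ones.

```python
import numpy as np

def depth_filter(pts_color, pts_gray, depths, max_depth, min_depth=None):
    # Discard pairs beyond the detection range; optionally also enforce a
    # lower bound so that only a depth band is retained.
    keep = depths <= max_depth
    if min_depth is not None:
        keep &= depths >= min_depth
    return pts_color[keep], pts_gray[keep], keep
```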
In step S302, a rotation angle of each feature point pair between the seventh feature point set and the eighth feature point set is acquired.
The fifth correspondence between the seventh feature point set and the eighth feature point set includes a rotation mapping relation between each feature point pair. A homography matrix of each feature point pair between the seventh feature point set and the eighth feature point set is acquired according to this mapping relation. By decomposing the homography matrix, a rotation matrix and a translation matrix of each feature point pair can be obtained, and from the rotation matrix, the rotation angle of each feature point pair between the seventh feature point set and the eighth feature point set can be acquired.
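As an illustrative sketch of this decomposition using OpenCV: the intrinsic matrix K is an assumption, and selecting the physically valid candidate among the returned solutions (e.g. by the positive-depth constraint) is omitted here.

```python
import numpy as np
import cv2

def rotation_angles_from_homography(H, K):
    # decomposeHomographyMat returns up to four (R, t, n) candidates.
    _, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, K)
    angles = []
    for R in Rs:
        rvec, _ = cv2.Rodrigues(R)  # rotation matrix -> axis-angle vector
        angles.append(np.degrees(np.linalg.norm(rvec)))  # magnitude = angle
    return angles
```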
In step S303, the rotation angle of each feature point pair is clustered, and the clustering result is determined as the rotation angle of the camera module.
The rotation angles of the feature point pairs in the seventh feature point set and the eighth feature point set are clustered, and the obtained rotation angle characterizes the lens angle difference between the first camera and the second camera. In one example, after a new rotation angle is determined, it may be used to update the current rotation angle so as to optimize subsequent feature detection and matching results for the grayscale image and the color image. Alternatively, the first camera and the second camera can acquire a more accurate relative position relation, improving the effects of the automatic focusing and optical anti-shake functions of the camera module.
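The disclosure does not fix a particular clustering algorithm; as one plausible sketch, the per-pair angles can be histogrammed and the mean of the densest bin taken as the module rotation angle (the bin width is a hypothetical choice):

```python
import numpy as np

def cluster_rotation_angle(angles_deg, bin_width=0.1):
    angles = np.asarray(angles_deg, dtype=float)
    edges = np.arange(angles.min(), angles.max() + bin_width, bin_width)
    idx = np.digitize(angles, edges)                      # assign bins
    densest = max(set(idx.tolist()), key=lambda b: int(np.sum(idx == b)))
    return float(angles[idx == densest].mean())           # cluster centre
```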
According to the method, the feature point sets of the depth image, the confidence image, the grayscale image and the color image collected by the camera module are acquired respectively, and confidence filtering is performed on the feature point sets based on the first preset condition and the current position parameter. The feature point set meeting the first preset condition is used to determine the rotation angle of the camera module, so that a position parameter with higher accuracy is obtained and the current rotation angle is updated. The first camera and the second camera thus acquire a more accurate relative position relation, which improves the effects of the automatic focusing and optical anti-shake functions of the camera module.
In an optional embodiment, the determining a seventh feature point set of the color image and an eighth feature point set of the grayscale image having a fifth correspondence includes:
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold, to obtain a seventh feature point set and an eighth feature point set; or removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are smaller than a third depth threshold, to obtain a seventh feature point set and an eighth feature point set.
In one example, feature points in the third feature point set and the fourth feature point set whose depth values are greater than the depth threshold may be identified as feature points beyond the detection range of the camera module and removed, while the feature points whose depth values are smaller than the depth threshold are retained as the seventh feature point set and the eighth feature point set. The depth threshold may be determined according to the current baseline length of the camera module; in one example, it may be set to 25 times the current baseline, so that with a current baseline length of 20 mm the depth threshold is 50 cm, that is, feature points beyond 50 cm are filtered out.
In another example, for the process of acquiring the rotation angle, the homography matrices of only part of the feature point pairs whose depth values are smaller than the depth threshold may be acquired, instead of those of all such pairs, so as to reduce the computational load of homography estimation. In one example, with a depth threshold of 50 cm, after the feature points beyond 50 cm are filtered out, the feature points within 40 cm may additionally be removed and the feature points with depths ranging from 40 cm to 50 cm retained; alternatively, without the previous depth threshold, the feature points in any sub-range of depths may be retained. The rotation angle of each feature point pair between the seventh feature point set and the eighth feature point set whose depth value satisfies the preset depth range is then acquired.
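A worked numeric illustration of the band filtering just described, with hypothetical per-pair depths in millimetres: a 50 cm upper threshold (25 times a 20 mm baseline) combined with a 40 cm lower bound keeps only the 400-500 mm band.

```python
import numpy as np

depths = np.array([320.0, 420.0, 480.0, 510.0, 450.0])  # mm, hypothetical
keep = (depths >= 400.0) & (depths <= 500.0)            # 40 cm - 50 cm band
print(np.flatnonzero(keep))                             # -> [1 2 4]
```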
According to the method, the feature point sets of the depth image, the confidence image, the grayscale image and the color image collected by the camera module are acquired respectively, and confidence filtering is performed on the feature point sets based on the first preset condition and the current position parameter. The feature point set meeting the first preset condition is used to determine the rotation angle of the camera module, so that a position parameter with higher accuracy is obtained and the current rotation angle is updated. The first camera and the second camera thus acquire a more accurate relative position relation, which improves the effects of the automatic focusing and optical anti-shake functions of the camera module.
For the foregoing method embodiments, for simplicity of explanation, the methodologies are shown as a series of acts, but one of ordinary skill in the art will appreciate that the present disclosure is not limited by the order of acts described, as some steps may occur in other orders or concurrently in accordance with the disclosure.
Further, those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments, and that the acts and modules referred to are not necessarily required by the present disclosure.
Corresponding to the embodiments of the foregoing method, the present disclosure also provides embodiments of a corresponding apparatus and terminal.
An apparatus block diagram of parameter optimization of a camera module according to an exemplary embodiment of the present disclosure is shown in fig. 4, where the camera module includes a first camera and a second camera, and the apparatus includes:
Image acquisition module 401: configured to acquire a color image collected by the first camera, and a depth image, a gray level image and a confidence image collected by the second camera, wherein a first correspondence exists among the pixel points of the depth image, the pixel points of the gray level image and the pixel points of the confidence image;
On-line calibration module 402: configured to acquire a first feature point set of the color image and a second feature point set of the gray level image having a second correspondence;
Confidence filtering module 403: configured to determine, according to the first correspondence, the second correspondence and the confidence of the pixel points of the confidence image, a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, wherein the confidence of the pixel point of the confidence image corresponding to each feature point in the fourth feature point set meets a first preset condition;
Parameter determination module 404: configured to determine a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, wherein the position parameter is used for representing the relative position relation of the first camera and the second camera.
In combination with any one of the embodiments of the present disclosure, the online calibration module obtains a first feature point set of the color image and a second feature point set of the gray image, where the first feature point set and the second feature point set have a second correspondence, for:
performing feature matching on the color image and the gray level image according to the current position parameters of the camera module to obtain a first feature point set of the color image and a second feature point set of the gray level image with a second corresponding relationship;
the parameter determining module further comprises a parameter updating module for:
the current location parameter is updated using the determined location parameter.
In combination with any one of the embodiments of the present disclosure, the feature matching is performed on the color image and the grayscale image, for:
adjusting the color image and the grayscale image to a coplanar line alignment;
and performing feature matching on the color image and the gray level image that have been adjusted to coplanar line alignment, as sketched below.
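As an illustrative sketch of the coplanar line alignment step using OpenCV's stereo rectification, with hypothetical intrinsics and extrinsics standing in for the current position parameters of the camera module:

```python
import numpy as np
import cv2

# Hypothetical intrinsics and extrinsics of the two cameras.
K1 = K2 = np.array([[500.0, 0.0, 320.0],
                    [0.0, 500.0, 240.0],
                    [0.0,   0.0,   1.0]])
d1 = d2 = np.zeros(5)               # assume negligible distortion
R = np.eye(3)                       # current rotation between the cameras
T = np.array([20.0, 0.0, 0.0])      # current baseline, e.g. 20 mm along x
# stereoRectify yields the rotations and projections that bring both
# views to coplanar line alignment before feature matching.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, (640, 480), R, T)
```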
In combination with any one of the embodiments of the present disclosure, the confidence filtering module determines, according to the first correspondence, the second correspondence and the confidence of the pixel points of the confidence image, a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, and is configured to:
according to the first corresponding relation and the second characteristic point set, a sixth characteristic point set of the depth image with a fourth corresponding relation with the second characteristic point set is determined;
determining the confidence coefficient of each feature point of the sixth feature point set and the confidence coefficient of each feature point in the second feature point set according to the confidence coefficient of the pixel point of the confidence coefficient image and the first corresponding relation;
determining the confidence coefficient of each feature point in the first feature point set according to the confidence coefficient of each feature point in the second feature point set and the second corresponding relation;
and determining a third feature point set of the color image, a fourth feature point set of the gray image and a fifth feature point set of the depth image with a third corresponding relation according to the confidence coefficient of the first feature point set, the confidence coefficient of the second feature point set, the confidence coefficient of the sixth feature point set and the first preset condition.
In combination with any one of the embodiments of the present disclosure, the confidence filtering module determines a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, and is configured to:
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the first feature point set to obtain a third feature point set;
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the second feature point set to obtain a fourth feature point set;
and removing the feature points with the confidence coefficient lower than the confidence coefficient threshold value in the sixth feature point set to obtain a fifth feature point set.
In combination with any of the embodiments of the present disclosure, the location parameter includes a baseline; the parameter determining module determines a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, and the position parameter is used for:
acquiring a baseline of each characteristic point pair between the third characteristic point set and the fourth characteristic point set;
and clustering the baselines of each characteristic point pair, and determining the clustering result as the baselines of the camera module.
In combination with any one of the embodiments of the present disclosure, the acquiring a baseline of each feature point pair between the third feature point set and the fourth feature point set is configured to:
Acquiring the coordinate distance of each characteristic point pair between the third characteristic point set and the fourth characteristic point set, wherein the characteristic point pair comprises two characteristic points respectively belonging to the third characteristic point set and the fourth characteristic point set;
acquiring a depth value of each feature point pair between the third feature point set and the fourth feature point set according to the third corresponding relation and the fifth feature point set;
and acquiring a base line of each feature point pair between the third feature point set and the fourth feature point set according to the coordinate distance and the depth value of each feature point pair and the focal length of the camera module.
In combination with any one of the embodiments of the present disclosure, the acquiring a coordinate distance of each feature point pair between the third feature point set and the fourth feature point set is configured to:
acquiring a parallax set between the first characteristic point set and the second characteristic point set according to the second corresponding relation, wherein the parallax set comprises a coordinate distance of each characteristic point pair between the first characteristic point set and the second characteristic point set;
and acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set from the parallax set.
In combination with any of the embodiments of the present disclosure, the position parameter includes a rotation angle;
the parameter determining module determines a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, and the position parameter is used for:
determining a seventh feature point set of the color image and an eighth feature point set of the gray scale image with a fifth corresponding relation according to the third corresponding relation and the depth value of each feature point in the fifth feature point set, wherein the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set meets a second preset condition;
acquiring a rotation angle of each characteristic point pair between the seventh characteristic point set and the eighth characteristic point set;
and clustering the rotation angle of each characteristic point pair, and determining the clustering result as the rotation angle of the camera module.
In combination with any one of the embodiments of the present disclosure, the determining the seventh feature point set of the color image and the eighth feature point set of the gray scale image having the fifth correspondence is configured to:
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold, to obtain a seventh feature point set and an eighth feature point set; or,
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are smaller than a third depth threshold, to obtain a seventh feature point set and an eighth feature point set.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art can understand and implement them without undue burden.
Fig. 5 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Referring to fig. 5, a block diagram of an electronic device is shown. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 5, an apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 504 is configured to store various types of data to support operations at device 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen between the device 500 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 500 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in memory 504 or transmitted via communication component 516. In some embodiments, the audio component 510 includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessment of various aspects of the apparatus 500. For example, the sensor assembly 514 may detect the on/off state of the device 500, the relative positioning of the components, such as the display and keypad of the device 500, the sensor assembly 514 may detect a change in position of the device 500 or a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G or 5G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the above-described parameter optimization method of the camera module.
In an exemplary embodiment, the present disclosure provides a non-transitory computer storage medium including instructions, such as the memory 504 including instructions, which are executable by the processor 520 of the apparatus 500 to perform the above-described parameter optimization method of the camera module. For example, the non-transitory computer storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

1. A parameter optimization method of a camera module, wherein the camera module comprises a first camera and a second camera, and the method comprises the following steps:
acquiring a color image acquired by the first camera, and a depth image, a gray level image and a confidence level image acquired by the second camera, wherein a first corresponding relation is arranged among the pixel points of the depth image, the pixel points of the gray level image and the pixel points of the confidence level image;
Acquiring a first characteristic point set of the color image and a second characteristic point set of the gray image with a second corresponding relation;
determining a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image with a third corresponding relation according to the first corresponding relation, the second corresponding relation and the confidence degree of the pixel point of the confidence degree image, wherein the confidence degree of the pixel point of the confidence degree image corresponding to each feature point in the fourth feature point set meets a first preset condition;
and determining a position parameter according to the third characteristic point set, the fourth characteristic point set and the fifth characteristic point set, wherein the position parameter is used for representing the relative position relation of the first camera and the second camera.
2. The method of claim 1, wherein the acquiring the first set of feature points of the color image and the second set of feature points of the grayscale image having the second correspondence comprises:
performing feature matching on the color image and the gray level image according to the current position parameters of the camera module to obtain a first feature point set of the color image and a second feature point set of the gray level image with a second corresponding relationship;
After the position parameter is determined, the method further comprises:
the current location parameters are updated using the determined location parameters.
3. The method of claim 2, wherein the feature matching the color image and the grayscale image comprises:
adjusting the color image and the grayscale image to a coplanar line alignment;
and performing feature matching on the color image and the gray scale image which are adjusted to be aligned in a coplanar line.
4. The method of claim 1, wherein determining the third set of feature points of the color image, the fourth set of feature points of the grayscale image, and the fifth set of feature points of the depth image having the third correspondence based on the first correspondence, the second correspondence, and the confidence of the pixel points of the confidence image comprises:
according to the first corresponding relation and the second characteristic point set, a sixth characteristic point set of the depth image with a fourth corresponding relation with the second characteristic point set is determined;
determining the confidence coefficient of each feature point of the sixth feature point set and the confidence coefficient of each feature point in the second feature point set according to the confidence coefficient of the pixel point of the confidence coefficient image and the first corresponding relation;
Determining the confidence coefficient of each feature point in the first feature point set according to the confidence coefficient of each feature point in the second feature point set and the second corresponding relation;
and determining a third feature point set of the color image, a fourth feature point set of the gray image and a fifth feature point set of the depth image with a third corresponding relation according to the confidence coefficient of the first feature point set, the confidence coefficient of the second feature point set, the confidence coefficient of the sixth feature point set and the first preset condition.
5. The method of claim 4, wherein the determining the third set of feature points for the color image, the fourth set of feature points for the grayscale image, and the fifth set of feature points for the depth image having the third correspondence comprises:
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the first feature point set to obtain a third feature point set;
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the second feature point set to obtain a fourth feature point set;
and removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the sixth feature point set to obtain a fifth feature point set.
6. The method of claim 1, wherein the location parameter comprises a baseline;
The determining a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set includes:
acquiring a baseline of each characteristic point pair between the third characteristic point set and the fourth characteristic point set;
and clustering the baselines of each characteristic point pair, and determining the clustering result as the baselines of the camera module.
7. The method of claim 6, wherein the obtaining a baseline for each feature point pair between the third feature point set and the fourth feature point set comprises:
acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set;
determining the depth value of each feature point in the fifth feature point set as the depth value of the corresponding feature point pair between the third feature point set and the fourth feature point set according to the third corresponding relation;
and determining a base line of each feature point pair between the third feature point set and the fourth feature point set according to the coordinate distance and the depth value of each feature point pair and the focal length of the camera module.
8. The method of claim 7, wherein the obtaining the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set comprises:
Acquiring a parallax set between the first characteristic point set and the second characteristic point set according to the second corresponding relation, wherein the parallax set comprises a coordinate distance of each characteristic point pair between the first characteristic point set and the second characteristic point set;
and acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set from the parallax set.
9. The method of claim 1, wherein the position parameter comprises a rotation angle;
the determining a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set includes:
determining a seventh feature point set of the color image and an eighth feature point set of the gray scale image with a fifth corresponding relation according to the third corresponding relation and the depth value of each feature point in the fifth feature point set, wherein the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set meets a second preset condition;
acquiring a rotation angle of each characteristic point pair between the seventh characteristic point set and the eighth characteristic point set;
and clustering the rotation angle of each characteristic point pair, and determining the clustering result as the rotation angle of the camera module.
10. The method of claim 9, wherein the determining the seventh set of feature points of the color image and the eighth set of feature points of the grayscale image having the fifth correspondence comprises:
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold, to obtain a seventh feature point set and an eighth feature point set; or,
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are smaller than a third depth threshold, to obtain a seventh feature point set and an eighth feature point set.
11. A parameter optimization apparatus of a camera module, wherein the camera module comprises a first camera and a second camera, and the apparatus comprises:
an image acquisition module, configured to acquire a color image collected by the first camera, and a depth image, a gray level image and a confidence image collected by the second camera, wherein a first correspondence exists among the pixel points of the depth image, the pixel points of the gray level image and the pixel points of the confidence image;
an on-line calibration module, configured to acquire a first feature point set of the color image and a second feature point set of the gray level image having a second correspondence;
a confidence filtering module, configured to determine, according to the first correspondence, the second correspondence and the confidence of the pixel points of the confidence image, a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, wherein the confidence of the pixel point of the confidence image corresponding to each feature point in the fourth feature point set meets a first preset condition;
and a parameter determination module, configured to determine a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, wherein the position parameter is used for representing the relative position relation of the first camera and the second camera.
12. The apparatus of claim 11, wherein the online calibration module obtains a first set of feature points of the color image and a second set of feature points of the grayscale image having a second correspondence for:
performing feature matching on the color image and the gray level image according to the current position parameters of the camera module to obtain a first feature point set of the color image and a second feature point set of the gray level image with a second corresponding relationship;
The parameter determining module further comprises a parameter updating module for:
the current location parameter is updated using the determined location parameter.
13. The apparatus of claim 12, wherein the feature matching of the color image and the grayscale image is for:
adjusting the color image and the grayscale image to a coplanar line alignment;
and performing feature matching on the color image and the gray scale image which are adjusted to be aligned in a coplanar line.
14. The apparatus of claim 11, wherein the confidence filtering module determines, according to the first correspondence, the second correspondence and the confidence of the pixel points of the confidence image, a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, and is configured to:
according to the first corresponding relation and the second characteristic point set, a sixth characteristic point set of the depth image with a fourth corresponding relation with the second characteristic point set is determined;
determining the confidence coefficient of each feature point of the sixth feature point set and the confidence coefficient of each feature point in the second feature point set according to the confidence coefficient of the pixel point of the confidence coefficient image and the first corresponding relation;
Determining the confidence coefficient of each feature point in the first feature point set according to the confidence coefficient of each feature point in the second feature point set and the second corresponding relation;
and determining a third feature point set of the color image, a fourth feature point set of the gray image and a fifth feature point set of the depth image with a third corresponding relation according to the confidence coefficient of the first feature point set, the confidence coefficient of the second feature point set, the confidence coefficient of the sixth feature point set and the first preset condition.
15. The apparatus of claim 14, wherein the confidence filtering module determines a third feature point set of the color image, a fourth feature point set of the gray level image and a fifth feature point set of the depth image having a third correspondence, and is configured to:
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the first feature point set to obtain a third feature point set;
removing the feature points with the confidence coefficient lower than a confidence coefficient threshold value in the second feature point set to obtain a fourth feature point set;
and removing the feature points with the confidence coefficient lower than the confidence coefficient threshold value in the sixth feature point set to obtain a fifth feature point set.
16. The apparatus of claim 11, wherein the location parameter comprises a baseline; the parameter determining module determines a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, and the position parameter is used for:
acquiring a baseline of each characteristic point pair between the third characteristic point set and the fourth characteristic point set;
and clustering the baselines of each characteristic point pair, and determining the clustering result as the baselines of the camera module.
17. The apparatus of claim 16, wherein the obtaining a baseline for each feature point pair between the third feature point set and the fourth feature point set is configured to:
acquiring the coordinate distance of each characteristic point pair between the third characteristic point set and the fourth characteristic point set, wherein the characteristic point pair comprises two characteristic points respectively belonging to the third characteristic point set and the fourth characteristic point set;
acquiring a depth value of each feature point pair between the third feature point set and the fourth feature point set according to the third corresponding relation and the fifth feature point set;
and acquiring a base line of each feature point pair between the third feature point set and the fourth feature point set according to the coordinate distance and the depth value of each feature point pair and the focal length of the camera module.
18. The apparatus of claim 17, wherein the obtaining the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set is configured to:
acquiring a parallax set between the first characteristic point set and the second characteristic point set according to the second corresponding relation, wherein the parallax set comprises a coordinate distance of each characteristic point pair between the first characteristic point set and the second characteristic point set;
and acquiring the coordinate distance of each feature point pair between the third feature point set and the fourth feature point set from the parallax set.
19. The apparatus of claim 11, wherein the position parameter comprises a rotation angle;
the parameter determining module determines a position parameter according to the third feature point set, the fourth feature point set and the fifth feature point set, and the position parameter is used for:
determining a seventh feature point set of the color image and an eighth feature point set of the gray scale image with a fifth corresponding relation according to the third corresponding relation and the depth value of each feature point in the fifth feature point set, wherein the depth value of the feature point in the fifth feature point set corresponding to each feature point in the eighth feature point set meets a second preset condition;
Acquiring a rotation angle of each characteristic point pair between the seventh characteristic point set and the eighth characteristic point set;
and clustering the rotation angle of each characteristic point pair, and determining the clustering result as the rotation angle of the camera module.
20. The apparatus of claim 19, wherein the determining the seventh set of feature points of the color image and the eighth set of feature points of the grayscale image having the fifth correspondence is configured to:
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a first depth threshold, to obtain a seventh feature point set and an eighth feature point set; or,
removing, from the third feature point set and the fourth feature point set respectively, the feature points whose depth values are greater than a second depth threshold and the feature points whose depth values are smaller than a third depth threshold, to obtain a seventh feature point set and an eighth feature point set.
21. An electronic device, the electronic device comprising:
a memory for storing processor-executable instructions;
a processor configured to execute executable instructions in the memory to implement the steps of the method of any one of claims 1 to 10.
22. A computer storage medium having stored thereon a computer program, characterized in that the program when executed by a processor realizes the steps of the method according to any of claims 1-10.
23. A camera module comprising the electronic device of claim 21.
CN202210303687.0A 2022-03-24 2022-03-24 Parameter optimization method and device of camera module, electronic equipment and storage medium Pending CN116863162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210303687.0A CN116863162A (en) 2022-03-24 2022-03-24 Parameter optimization method and device of camera module, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116863162A true CN116863162A (en) 2023-10-10

Family

ID=88234568

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination