CN109559353B - Camera module calibration method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN109559353B
Authority
CN
China
Prior art keywords
image
camera module
parallax
depth
calibration
Prior art date
2018-11-30
Legal status
Active
Application number
CN201811455209.1A
Other languages
Chinese (zh)
Other versions
CN109559353A (en)
Inventor
方攀
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
2018-11-30
Filing date
2018-11-30
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811455209.1A
Publication of CN109559353A
Application granted
Publication of CN109559353B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a camera module calibration method and device, an electronic device, and a computer-readable storage medium. The method is applied to an electronic device having a first camera module, a second camera module, and a depth camera module. In the same scene, a first image is acquired through the first camera module, a second image of the scene is acquired through the second camera module, and a depth image is acquired through the depth camera module; the same pixel points of the first image and the second image are extracted to obtain a first parallax; the same pixel points of the first image and the depth image are extracted to obtain a second parallax; the first parallax is compared with the second parallax, and if the difference between them is smaller than a preset threshold, a prompt signal indicating that the calibration test is passed is generated. By comparing the parallax information of the images captured by the calibrated camera modules, whether the calibration result of the camera modules is qualified is checked, which improves the calibration accuracy of the camera modules.

Description

Camera module calibration method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to a method and an apparatus for calibrating a camera module, an electronic device, and a computer-readable storage medium.
Background
Before a camera leaves the factory, it needs to be calibrated to obtain its calibration parameters, and those parameters must pass a qualification test, so that the camera can process images according to qualified calibration parameters and the processed images can restore objects in three-dimensional space. However, during use of the camera, different shooting conditions affect the imaging of the captured images, so camera calibration accuracy is often low.
Disclosure of Invention
Accordingly, it is desirable to provide a camera module calibration method, device, electronic device and computer-readable storage medium for solving the problem of low camera module calibration accuracy.
A camera module calibration method, applied to an electronic device having a first camera module, a second camera module, and a depth camera module, includes:
in the same scene, acquiring a first image through the first camera module, acquiring a second image of the scene through the second camera module, and acquiring a depth image through the depth camera module;
extracting the same pixel points of the first image and the second image to obtain a first parallax;
extracting the same pixel points of the first image and the depth image to obtain a second parallax;
and comparing the first parallax with the second parallax, and if the difference between the first parallax and the second parallax is smaller than a preset threshold, generating a prompt signal that the calibration test is passed.
A camera module calibration apparatus, applied to an electronic device having a first camera module, a second camera module, and a depth camera module, includes:
an image acquisition module, configured to acquire, in the same scene, a first image through the first camera module, a second image of the scene through the second camera module, and a depth image through the depth camera module;
a first extraction module, configured to extract the same pixel points of the first image and the second image to obtain a first parallax;
a second extraction module, configured to extract the same pixel points of the first image and the depth image to obtain a second parallax;
and a calibration test module, configured to compare the first parallax with the second parallax and, if the difference between the first parallax and the second parallax is smaller than a preset threshold, generate a prompt signal that the calibration test is passed.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps:
in the same scene, acquiring a first image through the first camera module, acquiring a second image of the scene through the second camera module, and acquiring a depth image through the depth camera module;
extracting the same pixel points of the first image and the second image to obtain a first parallax;
extracting the same pixel points of the first image and the depth image to obtain a second parallax;
and comparing the first parallax with the second parallax, and if the difference between the first parallax and the second parallax is smaller than a preset threshold, generating a prompt signal that the calibration test is passed.
A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the following steps:
in the same scene, acquiring a first image through the first camera module, acquiring a second image of the scene through the second camera module, and acquiring a depth image through the depth camera module;
extracting the same pixel points of the first image and the second image to obtain a first parallax;
extracting the same pixel points of the first image and the depth image to obtain a second parallax;
and comparing the first parallax with the second parallax, and if the difference between the first parallax and the second parallax is smaller than a preset threshold, generating a prompt signal that the calibration test is passed.
According to the camera module calibration method and apparatus, the electronic device, and the computer-readable storage medium described above, in the same scene a first image is acquired through the first camera module, a second image of the scene is acquired through the second camera module, and a depth image is acquired through the depth camera module; the same pixel points of the first image and the second image are extracted to obtain a first parallax; the same pixel points of the first image and the depth image are extracted to obtain a second parallax; and the first parallax is compared with the second parallax, a prompt signal that the calibration test is passed being generated if the difference between them is smaller than a preset threshold. By comparing the parallax information of the images acquired by the calibrated camera modules, whether the calibration result of the camera modules is qualified is checked, which improves the calibration accuracy of the camera modules.
Drawings
Fig. 1a is a schematic view of an application environment of a camera module calibration method according to an embodiment of the present invention;
fig. 1b is a schematic view of an application environment of a camera module calibration method according to another embodiment of the present invention;
fig. 1c is a schematic view of an application environment of a camera module calibration method according to another embodiment of the present invention;
FIG. 2 is a flowchart of a camera module calibration method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a camera module calibration method according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating a depth image acquisition process performed by the depth camera module according to an embodiment of the present invention;
fig. 5 is a block diagram of a camera module calibration apparatus according to an embodiment of the present invention;
FIG. 6 is a block diagram of the internal structure of an electronic device in one embodiment of the invention;
FIG. 7 is a diagram of an image processing circuit according to an embodiment of the invention.
Detailed Description
In order to make the objects, features, and advantages of the present invention comprehensible, embodiments are described in detail below with reference to the accompanying figures. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and many modifications may be made by those skilled in the art without departing from the spirit of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., and "several" means at least one, e.g., one, two, etc., unless specifically limited otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Figs. 1a to 1c are schematic diagrams of application environments of the camera module calibration method in various embodiments. As shown in the figures, the application environment includes an electronic device 110 having a first camera module 111, a second camera module 112, and a depth camera module 113. The mechanical arrangement of the first camera module 111, the second camera module 112, and the depth camera module 113 may be: the first camera module 111, the second camera module 112, and the depth camera module 113 arranged in sequence, as shown in fig. 1a; or the first camera module 111, the depth camera module 113, and the second camera module 112 arranged in sequence, as shown in fig. 1b; or the second camera module 112, the first camera module 111, and the depth camera module 113 arranged in sequence, as shown in fig. 1c; or the second camera module 112, the depth camera module 113, and the first camera module 111 arranged in sequence (not shown in the figures); or the depth camera module 113, the second camera module 112, and the first camera module 111 arranged in sequence (not shown in the figures); or the depth camera module 113, the first camera module 111, and the second camera module 112 arranged in sequence (not shown in the figures).
The first camera module 111 and the second camera module 112 may be any camera modules known in the art and are not limited herein. For example, the first camera module 111 and the second camera module 112 may be visible-light camera modules (RGB cameras) that acquire RGB images using RGB sensors. The depth camera module 113 is a Time-of-Flight (TOF) camera or a structured light camera.
Fig. 2 is a flowchart of a camera module calibration method according to an embodiment of the present invention; the camera module calibration method in this embodiment is described taking the electronic device of figs. 1a to 1c as an example. As shown in fig. 2, the camera module calibration method includes steps 201 to 204.
Step 201, in the same scene, acquiring a first image through a first camera module, acquiring a second image of the scene through a second camera module, and acquiring a depth image through a depth camera module;
the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
Step 202, extracting the same pixel points of the first image and the second image to obtain a first parallax;
Image recognition is a classification process that distinguishes an image from images of other classes. Pixel points of the first image and the second image are extracted using the Scale-Invariant Feature Transform (SIFT) method or the Speeded-Up Robust Features (SURF) method; the pixel points extracted from the first image are matched against those extracted from the second image using a stereo matching algorithm to obtain a matched pixel-point map; and the first parallax is obtained from the matched points.
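Before the background notes on SIFT, SURF, and stereo matching below, here is a hedged sketch of this feature-matching step using OpenCV: SIFT keypoints are extracted from both images, matched, and the mean horizontal offset of the matched points is taken as the first parallax. The file names, the ratio-test threshold, and the use of the mean offset are illustrative assumptions, not requirements of this disclosure.

```python
# Sketch of step 202: SIFT feature matching between the first and second images,
# with the mean horizontal offset of good matches taken as the first parallax.
import cv2
import numpy as np

def mean_disparity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Lowe's ratio test to keep only unambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # Disparity of each matched pixel pair along the baseline (x) axis.
    dx = [kp_a[m.queryIdx].pt[0] - kp_b[m.trainIdx].pt[0] for m in good]
    return float(np.mean(dx))

first = cv2.imread("first.png", cv2.IMREAD_GRAYSCALE)    # assumed file names
second = cv2.imread("second.png", cv2.IMREAD_GRAYSCALE)
first_parallax = mean_disparity(first, second)
```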
SIFT is a machine vision algorithm for detecting and describing local features in an image: it searches for extreme points in scale space and extracts features that are invariant to position, scale, and rotation. Its applications include object recognition, robot mapping and navigation, image stitching, 3D model construction, gesture recognition, image tracking, and motion matching.
SURF is a feature point extraction algorithm proposed by H. Bay after the SIFT algorithm. It performs block feature matching using an integral-image technique, which further accelerates the computation, and it uses a feature descriptor generated from second-order multi-scale templates, which improves the robustness of feature point matching.
The stereo matching algorithm is one of the most active research topics in computer vision. The process is as follows. First, matching cost computation: for each pixel p on the reference image, the cost of matching I_R(p) against the corresponding point I_T(p+d) on the target image is computed over all disparity candidates d, and the computed cost values are stored in a three-dimensional array, generally called the Disparity Space Image (DSI). Next, cost aggregation: the matching costs within a support window are aggregated by summing, averaging, or other methods to obtain the accumulated cost C_A(p, d) of point p on the reference image at disparity d; cost aggregation reduces the influence of outliers and improves the Signal-to-Noise Ratio (SNR), thereby improving matching precision. Then, disparity computation: a Winner-Take-All (WTA) strategy is adopted, i.e., the disparity with the optimal accumulated cost within the disparity search range is selected as the matching disparity for each point. Finally, the left and right images are each taken in turn as the reference image; after the three preceding steps, left and right disparity maps are obtained, and the disparity maps are optimized and corrected by further post-processing. Common post-processing methods include interpolation, sub-pixel enhancement, refinement, and image filtering, whose specific steps are not repeated here.
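The cost-volume pipeline described above can be condensed into a short sketch: absolute-difference matching costs are stored in a Disparity Space Image, aggregated over a support window, and resolved per pixel by Winner-Take-All. The window size, disparity range, and choice of absolute-difference cost are assumptions for illustration; post-processing is omitted.

```python
# Sketch of a local stereo pipeline: DSI construction, box-window cost
# aggregation, and Winner-Take-All disparity selection.
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(ref: np.ndarray, tgt: np.ndarray,
                  max_d: int = 64, win: int = 9) -> np.ndarray:
    h, w = ref.shape
    dsi = np.full((max_d, h, w), np.inf, dtype=np.float32)  # Disparity Space Image
    for d in range(max_d):
        # Matching cost |I_R(x, y) - I_T(x - d, y)| for the valid columns.
        diff = np.abs(ref[:, d:].astype(np.float32) - tgt[:, :w - d])
        # Cost aggregation over a support window (mean filter).
        dsi[d, :, d:] = uniform_filter(diff, size=win)
    return np.argmin(dsi, axis=0)  # Winner-Take-All over the cost volume
```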
Step 203, extracting the same pixel points of the first image and the depth image to obtain a second parallax;
Pixel points of the first image and the depth image are extracted using the SIFT or SURF method; the pixel points extracted from the first image are matched against those extracted from the depth image using a stereo matching algorithm to obtain a matched pixel-point map; and the second parallax is obtained from the matched points. The SIFT, SURF, and stereo matching procedures are the same as those described for step 202.
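For the second parallax, it may help to recall the standard stereo identity linking a depth value to an expected disparity, d = f·B/Z, where f is the focal length in pixels and B the baseline. The sketch below applies this identity under assumed f and B; the disclosure itself only requires that a second parallax be obtained from the points matched between the first image and the depth image.

```python
# Sketch: expected disparity implied by a depth value, d = f * B / Z.
def disparity_from_depth(depth_m: float, focal_px: float = 1000.0,
                         baseline_m: float = 0.025) -> float:
    """focal_px and baseline_m are illustrative assumptions for a phone-sized rig."""
    return focal_px * baseline_m / depth_m

# A point 0.5 m away yields 50 px of disparity under these assumptions.
print(disparity_from_depth(0.5))
```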
Step 204, comparing the first parallax with the second parallax, and if the difference between the first parallax and the second parallax is smaller than a preset threshold, generating a prompt signal that the calibration test is passed.
The absolute value of the difference between the first parallax and the second parallax determined from the disparity maps is obtained and compared with a preset threshold. The preset threshold is set by an engineer during the camera module calibration process; it is not limited herein and is determined according to the specific situation. If the absolute difference is smaller than the preset threshold, the actual error of the camera module calibration result is within the allowable error range, and a calibration-test-passed prompt signal is generated; the prompt signal notifies the calibration test processing unit of the electronic device that the camera module calibration result has passed the calibration test.
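A minimal sketch of this comparison step follows; the threshold value is an assumption standing in for whatever the engineer set during calibration.

```python
# Sketch of step 204: compare the two parallax values against the preset threshold.
PRESET_THRESHOLD_PX = 2.0  # assumed tolerance, fixed by the engineer at calibration time

def calibration_test(first_parallax: float, second_parallax: float) -> str:
    if abs(first_parallax - second_parallax) < PRESET_THRESHOLD_PX:
        return "calibration test passed"            # prompt signal: pass
    return "calibration test failed: recalibrate"   # see step 305 below
```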
In one embodiment, as shown in fig. 3, the camera module calibration method further includes:
Step 305, if the absolute value of the difference between the first parallax and the second parallax is greater than or equal to the preset threshold, generating a calibration-test-failed prompt signal, and calibrating the camera module a second time.
The absolute value of the difference between the first parallax and the second parallax determined from the disparity maps is obtained and compared with the preset threshold, which, as above, is set by the engineer during calibration and determined according to the specific situation. If the absolute difference is greater than or equal to the preset threshold, the actual error of the camera module calibration result exceeds the allowable error range; a calibration-test-failed prompt signal is generated to notify the calibration test processing unit of the electronic device that the calibration result has not passed the calibration test, and the camera module needs to be calibrated a second time. Camera module calibration serves to restore objects in space from the images shot by the camera module: a linear relationship exists between an image shot by the camera and the object in three-dimensional space, i.e., the image coordinates equal a projection matrix applied to the world coordinates, and this matrix can be regarded as the geometric model of camera imaging. The parameters of this matrix are the camera parameters, and the process of solving for them is called camera calibration. The camera module calibration algorithm can be briefly described as follows: print a template and attach it to a plane; shoot several template images from different angles; detect the feature points in the images; solve for the intrinsic and extrinsic parameters of the camera; solve for the distortion coefficients; and optimize and refine the result.
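The template-based procedure listed above corresponds to Zhang's planar calibration method as implemented in OpenCV; the sketch below follows that reading. The board dimensions, square size, and file pattern are illustrative assumptions.

```python
# Sketch of planar-template calibration (Zhang's method via OpenCV).
import glob
import cv2
import numpy as np

BOARD = (9, 6)   # assumed inner-corner count of the printed template
SQUARE = 0.025   # assumed square size in metres

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("template_*.png"):   # images shot from different angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solves the intrinsic matrix, distortion coefficients, and per-view extrinsics
# in one call; rms is the reprojection error used to judge the refinement.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```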
In one embodiment, as shown in fig. 4, acquiring the depth image through the depth camera module includes: step 401, identifying an object in the depth image and acquiring a recognition confidence for the object; step 402, comparing the recognition confidence with a set threshold and acquiring the difference between them; and step 403, if the difference between the recognition confidence and the set threshold meets a preset condition, performing optical zooming and/or digital zooming on the object to acquire the depth image a second time.
In this embodiment, because the depth camera module determines the distance between each object in the image and the camera module, it is particularly important that objects in the image be identified accurately. When identifying an object in an image, if the object is severely distorted or occupies too small a fraction of the picture, it is often difficult to identify it accurately. A depth image is acquired through the depth camera module, the similarity between each object region in the depth image and the actual object picture is computed, and the maximum similarity is determined; the recognition confidence of the object under test is then computed from the maximum similarity. It is then judged whether the recognition confidence of the object under test is smaller than the set threshold. If the object is recognized but the recognition confidence is smaller than the set threshold, the optical focal length of the depth camera module is adjusted to shoot the image again, or digital zooming is performed on the existing image, and object recognition is performed again on the newly obtained image until the recognition confidence is greater than the set threshold, whereupon the depth image of the object is output. If the recognition confidence is greater than or equal to the set threshold, the depth image of the object is output directly. It should be noted that if the object is not recognized at all, the optical focal length of the camera module is likewise adjusted and the image is shot again. The set threshold for the recognition confidence is determined by the engineer at software design time according to the hardware conditions and the specifics of the design, and is not limited herein.
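The re-acquisition loop described above can be summarized as follows. The capture, recognition, and zoom callables are hypothetical placeholders, and the confidence threshold is an assumed design-time value.

```python
# Sketch of the confidence loop: re-zoom and re-shoot until recognition
# confidence clears the set threshold.
SET_THRESHOLD = 0.8  # assumed confidence threshold fixed at software design time

def acquire_depth_image(capture, recognize, zoom):
    """capture() -> depth image; recognize(img) -> (found, confidence); zoom() adjusts focal length."""
    depth = capture()
    found, confidence = recognize(depth)
    while not found or confidence < SET_THRESHOLD:
        zoom()                   # optical and/or digital zoom
        depth = capture()        # acquire the depth image a second time
        found, confidence = recognize(depth)
    return depth
```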
In one embodiment, the depth camera module optically zooms on the object, which includes: the depth camera module obtaining the optical focal length to use for the optical zoom according to the depth information in the depth image and the duty ratio of the object in the depth image. In this embodiment, the duty ratio refers to the proportion of the depth image occupied by the object under test. Optical zoom works by changing the positions of the lens, the object, and the focal point: when the imaging plane moves, the angle of view and the focal length change, and farther scenery becomes clearer. The optical focal length to use is obtained from the depth information in the depth image and the object's duty ratio. For example, if a suspected object occupying 1/10 of the depth image picture is detected in the current image and it is desired to scale the suspected object up to 1/2 of the picture, the position coordinates of several vertices on the contour of the suspected object in the depth image can be obtained and a candidate focal length computed.
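Under a pinhole-camera assumption (ours, not stated in the disclosure), the object's linear size on the sensor scales with focal length, so its area fraction (the duty ratio) scales with the square of the focal length. A candidate focal length for the example above can then be sketched as:

```python
# Sketch: candidate focal length from current and target duty ratios,
# assuming a pinhole model (area fraction grows as focal length squared).
import math

def target_focal_length(current_focal_mm: float,
                        current_duty: float, target_duty: float) -> float:
    return current_focal_mm * math.sqrt(target_duty / current_duty)

# Growing the duty ratio from 1/10 to 1/2 needs roughly sqrt(5) ~ 2.24x
# the current focal length; 4 mm becomes about 8.94 mm.
print(target_focal_length(4.0, 1 / 10, 1 / 2))
```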
In one embodiment, the depth camera module digitally zooms on the object, which includes: adaptively adjusting the image sharpness parameters and the image size of the depth image during the digital zoom process of the depth camera module.
Digital zoom enlarges part of the pixels on the original Charge-Coupled Device (CCD) image sensor by interpolation; that is, part of the image on the CCD image sensor is enlarged to fill the whole picture. Digital zoom can improve the accuracy of object recognition to a certain extent: it scales the pixels of an existing image, so the image does not need to be shot again, and it computes faster than optical zoom. However, the improvement it brings to object recognition is limited. In this embodiment, the image sharpness parameters and the image size of the depth image are adaptively adjusted during digital zooming, which keeps the object in the depth image as clear as possible while improving the recognition result. The interpolation method for digital zoom may be nearest-neighbor interpolation, bilinear interpolation, or bicubic interpolation.
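A minimal sketch of such interpolation-based digital zoom with OpenCV: crop a region of interest and resize it back to the full frame. The crop box is an illustrative parameter; bicubic is one of the interpolation choices listed above.

```python
# Sketch of digital zoom: enlarge a crop back to the full frame by interpolation.
import cv2

def digital_zoom(image, x, y, w, h):
    """Enlarge the (x, y, w, h) crop to the original frame size (bicubic)."""
    crop = image[y:y + h, x:x + w]
    return cv2.resize(crop, (image.shape[1], image.shape[0]),
                      interpolation=cv2.INTER_CUBIC)
```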
In one embodiment, the depth camera module performs both optical zooming and digital zooming on the object, which includes: performing the optical zoom and the digital zoom according to a preset brightness.
In one embodiment, performing the optical zoom and the digital zoom at the preset brightness includes: performing digital zooming at a first brightness; performing optical zooming until a predetermined proportion of the maximum optical zoom is reached, and then performing digital zooming, at a second brightness; and performing optical zooming until the maximum optical zoom is reached, and then performing digital zooming, at a third brightness, wherein the first brightness, the second brightness, and the third brightness are defined by the preset brightness.
The preset brightness is set via limits on the light level of the illuminated area: a light level of 1-100 lux is defined as the first brightness; a light level of 100-1000 lux is defined as the second brightness; and a light level greater than 1000 lux is defined as the third brightness. The skilled person will understand that these illumination levels are given for exemplary purposes.
At the first brightness, digital zoom is used directly. Compared with optical zoom, digital zoom preserves more light in the final depth image, so only digital zoom is used up to a reasonable zoom level; only if zooming beyond that level is required is optical zoom used as well. In other words, at the first brightness, optical zooming is avoided because the reduction in light level caused by optical magnification is more pronounced than that caused by digital zooming.
At the second brightness, optical zoom can be used first, after which digital zoom is performed if further zooming is required. The amount of optical zoom available to the camera module may vary, but as an example, optical zoom may be used until it reaches approximately half of its maximum, or some other predetermined proportion. As mentioned, the proportion is a predetermined amount, which may be 40%, 50%, 60%, 70%, 80%, 90%, or any value in the range 30%-100% depending on the capabilities of the imaging apparatus, and is not limited herein. In other words, under the second brightness condition some amount of optical zoom may be used, but to avoid losing too much light, part of the zoom is done digitally.
At the third brightness, optical zoom may be used, for example, up to its maximum; after the maximum optical zoom is reached, digital zoom may be performed if further zooming is required. In other words, in bright light conditions the reduction in light reaching the optical image sensor caused by optical zoom can be tolerated, which avoids the disadvantage of digital zoom, namely that not all sensor pixels contribute to the final image quality.
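The three brightness bands can be condensed into a small policy sketch. The lux boundaries follow the text; the maximum optical zoom factor, the predetermined proportion, and the single-function interface are our assumptions, and the first band is simplified to digital zoom only.

```python
# Sketch of the three-band zoom policy: split a requested zoom factor into an
# optical part and a digital part according to the measured light level.
MAX_OPTICAL = 5.0          # assumed maximum optical zoom factor
PREDETERMINED_RATIO = 0.5  # e.g. half of the maximum, per the text

def zoom_plan(lux: float, requested: float) -> tuple[float, float]:
    """Return (optical_factor, digital_factor) whose product is `requested`."""
    if lux <= 100:                                        # first brightness: digital only
        optical = 1.0
    elif lux <= 1000:                                     # second: optical up to the proportion
        optical = min(requested, MAX_OPTICAL * PREDETERMINED_RATIO)
    else:                                                 # third: optical up to its maximum
        optical = min(requested, MAX_OPTICAL)
    return optical, requested / optical

print(zoom_plan(500, 4.0))  # (2.5, 1.6) under these assumptions
```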
Fig. 5 is a block diagram of a camera module calibration apparatus provided in an embodiment. An embodiment of the present application further provides a camera module calibration apparatus, applied to an electronic device having a first camera module, a second camera module, and a depth camera module, which includes: an image acquisition module 501, a first extraction module 502, a second extraction module 503, and a calibration test module 504.
The image acquisition module 501 is configured to acquire, in the same scene, a first image through the first camera module, a second image of the scene through the second camera module, and a depth image through the depth camera module.
the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
The first extraction module 502 is configured to extract the same pixel points of the first image and the second image to obtain the first parallax. It extracts pixel points of the first image and the second image using the SIFT or SURF method, matches the pixel points extracted from the first image against those extracted from the second image using a stereo matching algorithm to obtain a matched pixel-point map, and obtains the first parallax, as described for step 202.
The second extraction module 503 is configured to extract the same pixel points of the first image and the depth image to obtain the second parallax. It extracts pixel points of the first image and the depth image using the SIFT or SURF method, matches the pixel points extracted from the first image against those extracted from the depth image using a stereo matching algorithm to obtain a matched pixel-point map, and obtains the second parallax, as described for step 203.
The calibration test module 504 is configured to compare the first parallax with the second parallax and, if the absolute value of the difference between them is smaller than the preset threshold, generate a prompt signal that the calibration test is passed. The calibration test module 504 obtains the absolute difference between the first parallax and the second parallax determined from the disparity maps and compares it with the preset threshold, which is set by the engineer during calibration and determined according to the specific situation. If the absolute difference is smaller than the preset threshold, the actual error of the calibration result is within the allowable error range, and a calibration-test-passed prompt signal is generated to notify the calibration test processing unit of the electronic device that the camera module calibration result has passed the calibration test.
It should be understood that although the steps in the flowcharts of FIGS. 2 to 4 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 to 4 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 6 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 6, the electronic device includes a processor and a memory connected by a system bus. The processor provides the computing and control capability that supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the camera module calibration method provided in the embodiments of this application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The modules in the camera module calibration apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by it may be stored in the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiment of the present application also provides an electronic device. The electronic device includes a first camera module, a second camera module, a depth camera module, a memory, and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the camera module calibration method of any of the embodiments above. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. FIG. 7 is a schematic diagram of the image processing circuit in one embodiment. As shown in fig. 7, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 7, the image processing circuit includes a first ISP processor 730, a second ISP processor 740 and a control logic 750. The first camera module 710 includes one or more first lenses 712 and a first image sensor 714. The first image sensor 714 may include a color filter array (e.g., a Bayer filter), and the first image sensor 714 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 714 and provide a set of image data that may be processed by the first ISP processor 730. The second camera module 720 includes one or more second lenses 722 and a second image sensor 724. The second image sensor 724 may include a color filter array (e.g., a Bayer filter), and the second image sensor 724 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 724 and provide a set of image data that may be processed by the second ISP processor 740.
The first image collected by the first camera module 710 is transmitted to the first ISP processor 730 for processing. After the first ISP processor 730 processes the first image, statistical data of the first image (such as image brightness, image contrast, image color, etc.) can be sent to the control logic 750, and the control logic 750 can determine control parameters of the first camera module 710 from the statistical data, so that the first camera module 710 can perform operations such as auto-focus and auto-exposure according to those control parameters. The first image may be stored in the image memory 760 after being processed by the first ISP processor 730, and the first ISP processor 730 may also read and process images stored in the image memory 760. In addition, the first image may be transmitted directly to the display 770 after being processed by the first ISP processor 730, or the display 770 may read and display the image in the image memory 760.
Wherein the first ISP processor 730 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 730 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image memory 760 may be a part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct memory access) feature.
Upon receiving image data from the interface of the first image sensor 714, the first ISP processor 730 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 760 for additional processing before being displayed. The first ISP processor 730 receives the processed data from the image memory 760 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 730 may be output to the display 770 for viewing by the user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 730 may also be sent to the image memory 760, and the display 770 may read image data from the image memory 760. In one embodiment, the image memory 760 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 730 may be sent to the control logic 750. For example, the statistical data may include statistics of the first image sensor 714 such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, shading correction for the first lens 712, and the like. The control logic 750 may include a processor and/or microcontroller executing one or more routines (e.g., firmware) that determine control parameters for the first camera module 710 and for the first ISP processor 730 based on the received statistical data. For example, the control parameters of the first camera module 710 may include gain, integration time for exposure control, anti-shake parameters, flash control parameters, control parameters of the first lens 712 (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as shading correction parameters for the first lens 712.
Similarly, the second image captured by the second camera module 720 is transmitted to the second ISP processor 740 for processing. After the second ISP processor 740 processes the second image, statistical data of the second image (such as image brightness, image contrast, image color, etc.) can be sent to the control logic 750, and the control logic 750 can determine control parameters of the second camera module 720 from the statistical data, so that the second camera module 720 can perform operations such as auto-focus and auto-exposure according to those control parameters. The second image may be stored in the image memory 760 after being processed by the second ISP processor 740, and the second ISP processor 740 may also read and process images stored in the image memory 760. In addition, the second image may be transmitted directly to the display 770 after being processed by the second ISP processor 740, or the display 770 may read and display the image in the image memory 760. The second camera module 720 and the second ISP processor 740 may also implement the processing described for the first camera module 710 and the first ISP processor 730.
In the embodiment of the present application, the image processing technology in fig. 7 is used to implement the steps of the camera module calibration method:
under the same scene, a first image is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image to obtain a first parallax;
extracting the same pixel points of the first image and the depth image to obtain a second parallax;
and comparing the first parallax with the second parallax, and if the difference value of the first parallax and the second parallax is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the camera module calibration method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a camera module calibration method.
Any reference to memory, storage, database, or other medium used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Dynamic RAM (RDRAM), and Direct Rambus Dynamic RAM (DRDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification. It should be noted that phrases such as "in one embodiment", "for example", and "as another example" are intended to illustrate the application and are not intended to limit it.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A camera module calibration method, applied to an electronic device having a first camera module, a second camera module, and a depth camera module, characterized by comprising:
in the same scene, acquiring a first image through the first camera module, acquiring a second image of the scene through the second camera module, and acquiring a depth image through the depth camera module;
extracting the same pixel points of the first image and the second image to obtain a first parallax;
extracting the same pixel points of the first image and the depth image to obtain a second parallax;
and comparing the first parallax with the second parallax, and if the absolute value of the difference between the first parallax and the second parallax is smaller than a preset threshold, generating a prompt signal that the calibration test is passed.
2. The camera module calibration method according to claim 1, further comprising:
if the absolute value of the difference between the first parallax and the second parallax is greater than or equal to the preset threshold, generating a calibration-test-failed prompt signal; and calibrating the camera module a second time.
3. The camera module calibration method according to claim 1 or 2, wherein the obtaining of the depth image by the depth camera module comprises:
identifying an object on the depth image, and acquiring an identification confidence coefficient of the object;
comparing the recognition confidence coefficient with a set threshold value, and acquiring a difference value;
and if the difference between the recognition confidence and the set threshold meets a preset condition, performing optical zooming and/or digital zooming on the object to acquire the depth image a second time.
4. The camera module calibration method according to claim 3, wherein performing the optical zooming on the object by the depth camera module comprises:
acquiring, by the depth camera module, an optical focal length used in the optical zooming according to depth information in the depth image and the proportion of the object in the depth image.
5. The camera module calibration method according to claim 3, wherein performing the digital zooming on the object by the depth camera module comprises: adaptively adjusting image sharpness parameters and an image size of the depth image during the digital zooming of the depth camera module.
6. The camera module calibration method according to claim 3, wherein performing the optical zooming and the digital zooming on the object by the depth camera module comprises: performing the optical zooming and the digital zooming according to a preset brightness.
7. The camera module calibration method according to claim 6, wherein performing the optical zooming and the digital zooming according to the preset brightness comprises:
at a first brightness, performing the digital zooming;
at a second brightness, performing the optical zooming until a predetermined proportion of a maximum optical zoom is reached, and then performing the digital zooming; and
at a third brightness, performing the optical zooming until the maximum optical zoom is reached, and then performing the digital zooming, wherein the first brightness, the second brightness and the third brightness are defined by the preset brightness.
8. A camera module calibration device, applied to an electronic device having a first camera module, a second camera module and a depth camera module, characterized in that the device comprises:
an image acquisition module, configured to obtain, in a same scene, a first image through the first camera module, a second image of the scene through the second camera module, and a depth image through the depth camera module;
a first extraction module, configured to extract same pixel points of the first image and the second image to obtain a first parallax;
a second extraction module, configured to extract same pixel points of the first image and the depth image to obtain a second parallax; and
a calibration test module, configured to compare the first parallax with the second parallax, and if an absolute value of a difference between the first parallax and the second parallax is smaller than a preset threshold, generate a prompt signal indicating that the calibration test is passed.
9. An electronic device comprising a camera module, a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of the camera module calibration method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
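
For illustration, a minimal Python sketch of the parallax-consistency check of claims 1 and 2 follows. Everything in it is an assumption made for illustration rather than a value or formula fixed by the patent: the matched pixel points, the pinhole stereo relation d = f * B / Z used to convert depth into an expected disparity, and the numeric baseline, focal length and threshold.

import numpy as np

def first_parallax(pts_first: np.ndarray, pts_second: np.ndarray) -> float:
    # Mean horizontal offset of the same pixel points extracted from
    # the first and second images (the "first parallax" of claim 1).
    return float(np.mean(pts_first[:, 0] - pts_second[:, 0]))

def second_parallax(depths_mm: np.ndarray, baseline_mm: float,
                    focal_px: float) -> float:
    # Disparity implied by the depth image at the same pixel points,
    # via the standard stereo relation d = f * B / Z (an assumption;
    # the patent does not specify a particular conversion).
    return float(np.mean(focal_px * baseline_mm / depths_mm))

def calibration_test(pts_first, pts_second, depths_mm,
                     baseline_mm=12.0, focal_px=1400.0,
                     threshold_px=1.5) -> str:
    # Claim 1: pass iff |first parallax - second parallax| < preset threshold.
    d1 = first_parallax(pts_first, pts_second)
    d2 = second_parallax(depths_mm, baseline_mm, focal_px)
    if abs(d1 - d2) < threshold_px:
        return "calibration test passed"
    # Claim 2: otherwise signal failure and calibrate a second time.
    return "calibration test failed; recalibrate camera modules"

# Example: two matched points ~16 px apart; depth 1050 mm implies an
# expected disparity of 1400 * 12 / 1050 = 16 px, so the test passes.
pts1 = np.array([[420.0, 300.0], [515.0, 312.0]])
pts2 = np.array([[404.0, 300.0], [499.0, 312.0]])
z = np.array([1050.0, 1050.0])
print(calibration_test(pts1, pts2, z))

In the same hedged spirit, the brightness-staged zooming of claim 7 can be sketched as a function that splits a requested zoom factor into optical and digital parts. The mapping of the first, second and third brightness to dim, medium and bright scenes, and every numeric cut-off, are illustrative assumptions; the claim only defines the three-stage structure.

def staged_zoom(brightness, requested_zoom, max_optical=5.0,
                first_level=50.0, second_level=200.0,
                optical_fraction=0.6):
    # Claim 7 structure: digital zoom only at a first brightness;
    # optical zoom up to a proportion of the maximum, then digital, at
    # a second brightness; full optical zoom, then digital, at a third.
    if brightness < first_level:            # first brightness (dim)
        optical_cap = 1.0
    elif brightness < second_level:         # second brightness (medium)
        optical_cap = optical_fraction * max_optical
    else:                                   # third brightness (bright)
        optical_cap = max_optical
    optical = min(requested_zoom, optical_cap)
    digital = requested_zoom / optical      # remainder handled digitally
    return optical, digital

print(staged_zoom(30.0, 4.0))    # dim scene    -> (1.0, 4.0)
print(staged_zoom(120.0, 4.0))   # medium scene -> (3.0, ~1.33)
print(staged_zoom(300.0, 4.0))   # bright scene -> (4.0, 1.0)

Both sketches keep the pass/fail decision and the optical/digital split as pure functions of their inputs, so they could be exercised against recorded calibration data without camera hardware.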
CN201811455209.1A 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium Active CN109559353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811455209.1A CN109559353B (en) 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN109559353A CN109559353A (en) 2019-04-02
CN109559353B true CN109559353B (en) 2021-02-02

Family

ID=65868321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811455209.1A Active CN109559353B (en) 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109559353B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223338A (en) * 2019-06-11 2019-09-10 中科创达(重庆)汽车科技有限公司 Depth information calculation method, device and electronic equipment based on image zooming-out
CN112862880A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Depth information acquisition method and device, electronic equipment and storage medium
CN111768434A (en) * 2020-06-29 2020-10-13 Oppo广东移动通信有限公司 Disparity map acquisition method and device, electronic equipment and storage medium
CN112098044B (en) * 2020-09-07 2022-07-26 深圳惠牛科技有限公司 Detection method, system, monocular module detection equipment and storage medium
CN112734859A (en) * 2021-01-11 2021-04-30 Oppo广东移动通信有限公司 Camera module parameter calibration method and device, electronic equipment and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013079660A1 (en) * 2011-11-30 2013-06-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Disparity map generation including reliability estimation
US10540784B2 (en) * 2017-04-28 2020-01-21 Intel Corporation Calibrating texture cameras using features extracted from depth images

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101185322A (en) * 2005-05-31 2008-05-21 诺基亚公司 Optical and digital zooming for an imaging device
CN101630406A (en) * 2008-07-14 2010-01-20 深圳华为通信技术有限公司 Camera calibration method and camera calibration device
CN101753824A (en) * 2008-12-19 2010-06-23 三洋电机株式会社 Image sensing apparatus
CN103546686A (en) * 2012-07-17 2014-01-29 奥林巴斯映像株式会社 Camera device and shooting method
CN108780504A (en) * 2015-12-22 2018-11-09 艾奎菲股份有限公司 Depth-perceptive trinocular camera system
CN107424196A (en) * 2017-08-03 2017-12-01 江苏钜芯集成电路技术股份有限公司 Stereo matching method, apparatus and system based on weakly calibrated multi-view cameras
CN107730462A (en) * 2017-09-30 2018-02-23 努比亚技术有限公司 Image processing method, terminal and computer-readable storage medium
CN108288294A (en) * 2018-01-17 2018-07-17 视缘(上海)智能科技有限公司 Extrinsic parameter calibration method for a 3D camera cluster

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Huahua Chen et al., "Local 3D Map Building and Error Analysis Based on Stereo Vision," 31st Annual Conference of IEEE Industrial Electronics Society, Jan. 16, 2006, pp. 379-382. *
Guo Aixia et al., "Camera Parameter Calibration Method Using Binocular Parallax Theory," Computer Engineering and Applications, Dec. 31, 2009, vol. 45, no. 13, pp. 240-242. *
Ju Xuan, "Joint Calibration of Depth and Color Cameras and Its Application in Augmented Reality," China Master's Theses Full-text Database, Information Science and Technology, Jun. 15, 2014, vol. 2014, no. 6, Abstract and Chapter 3. *

Also Published As

Publication number Publication date
CN109559353A (en) 2019-04-02

Similar Documents

Publication Title
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN109767467B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107948519B (en) Image processing method, device and equipment
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN109089047B (en) Method and device for controlling focusing, storage medium and electronic equipment
CN107977940B (en) Background blurring processing method, device and equipment
CN107945105B (en) Background blurring processing method, device and equipment
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP3480784B1 (en) Image processing method, and device
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110661977B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN107948617B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN112004029B (en) Exposure processing method, exposure processing device, electronic apparatus, and computer-readable storage medium
CN108053438B (en) Depth of field acquisition method, device and equipment
CN109559352B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109584312B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN110660090A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN109951641B (en) Image shooting method and device, electronic equipment and computer readable storage medium
CN112261292B (en) Image acquisition method, terminal, chip and storage medium
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
CN112866553B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109584311B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109697737B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant