CN109040745B - Camera self-calibration method and device, electronic equipment and computer storage medium - Google Patents

Camera self-calibration method and device, electronic equipment and computer storage medium

Info

Publication number
CN109040745B
CN109040745B
Authority
CN
China
Prior art keywords
camera
image
rgb
point set
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810866119.5A
Other languages
Chinese (zh)
Other versions
CN109040745A (en)
Inventor
郭子青
周海涛
欧锦荣
谭筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810866119.5A (CN109040745B)
Publication of CN109040745A
Priority to PCT/CN2019/075375 (WO2020024576A1)
Priority to EP19178034.5A (EP3608875A1)
Priority to TW108120211A (TWI709110B)
Priority to US16/524,678 (US11158086B2)
Application granted
Publication of CN109040745B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a camera self-calibration method and apparatus, an electronic device and a computer-readable storage medium. The method comprises the following steps: when it is detected that the image sharpness is below a sharpness threshold, acquiring a target infrared image and a target RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting equally spaced feature points from the target infrared image to obtain a first feature point set, extracting equally spaced feature points from the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points. Because equally spaced feature points are used for matching, the matching accuracy is high, the transformation relation between the two cameras is recalibrated, and the sharpness of subsequently captured images is improved.

Description

Camera self-calibration method and device, electronic equipment and computer storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to a camera self-calibration method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic devices and imaging technologies, more and more users use the cameras of electronic devices to acquire images. An infrared camera and an RGB camera are calibrated before leaving the factory. However, when the temperature changes or the electronic device is dropped, the camera module may deform, which degrades the sharpness of captured images.
Disclosure of Invention
The embodiments of the application provide a camera self-calibration method and apparatus, an electronic device and a computer-readable storage medium, which can improve the sharpness of captured images.
A camera self-calibration method, comprising:
when it is detected that the image sharpness is below a sharpness threshold, acquiring a target infrared image and a target RGB image obtained by an infrared camera and an RGB camera shooting the same scene;
extracting equally spaced feature points from the target infrared image to obtain a first feature point set, extracting equally spaced feature points from the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set;
and obtaining a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
A camera self-calibration apparatus, comprising:
an image acquisition module, configured to acquire a target infrared image and a target RGB image obtained by an infrared camera and an RGB camera shooting the same scene when it is detected that the image sharpness is below a sharpness threshold;
a matching module, configured to extract equally spaced feature points from the target infrared image to obtain a first feature point set, extract equally spaced feature points from the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set;
and a parameter determining module, configured to obtain a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the camera self-calibration method.
A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the camera self-calibration method.
According to the camera self-calibration method and apparatus, the electronic device and the computer-readable storage medium, when it is detected that the image sharpness is below the sharpness threshold, a target infrared image and a target RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. A first feature point set is extracted from the target infrared image, a second feature point set is extracted from the RGB image, and the two sets are matched. The matched feature points are then used to obtain the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera. The extrinsic parameters between the infrared camera and the RGB camera are thereby recalibrated, the transformation relation between the two cameras is corrected, and the image sharpness is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic application environment diagram of a camera self-calibration method in one embodiment.
FIG. 2 is a flow diagram of a method for self-calibration of a camera in one embodiment.
Fig. 3 is a block diagram of a self-calibration apparatus of a camera in one embodiment.
Fig. 4 is a schematic diagram of an internal structure of the electronic device in one embodiment.
Fig. 5 is a schematic diagram of the internal structure of the electronic device in another embodiment.
FIG. 6 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, the first calibration image may be referred to as a second calibration image, and similarly, the second calibration image may be referred to as the first calibration image, without departing from the scope of the present application. Both the first calibration image and the second calibration image are calibration images, but they are not the same calibration image.
Fig. 1 is a schematic application environment diagram of a camera self-calibration method in one embodiment. As shown in fig. 1, the application environment includes an electronic device 110 and a scene 120. The electronic device 110 includes an infrared camera (Infrared Radiation Camera) and an RGB (Red, Green, Blue) camera. The electronic device 110 may capture the scene 120 through the infrared camera and the RGB camera to obtain a target infrared image and a target RGB image, perform feature point matching according to the target infrared image and the RGB image, and then calculate the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera. The electronic device 110 may be a smartphone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a method for self-calibration of a camera in one embodiment. As shown in fig. 2, a camera self-calibration method includes:
Step 202: when it is detected that the image sharpness is below the sharpness threshold, acquiring a target infrared image and a target RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
Specifically, the electronic device may periodically check the images captured by the camera; when the sharpness of an image is detected to be below the sharpness threshold, the camera needs self-calibration. The sharpness threshold may be set as desired, e.g., 80% or 90%. Image sharpness may be computed with a Brenner gradient function, which sums the squared gray-level difference between pixels two positions apart, or with a Tenengrad gradient function, which uses the Sobel operator to extract the gradients in the horizontal and vertical directions and scores sharpness from their magnitude. Image sharpness may also be measured with a variance function, an energy gradient function, a Vollath function, an entropy function, an EAV point-sharpness function, a Laplacian gradient function, an SMD (gray-level variance) function, and the like.
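As a concrete illustration of this step, the following is a minimal Python/OpenCV sketch of the Brenner and Tenengrad scores. The function names, the choice of which metric triggers calibration, and the threshold convention are assumptions of this sketch; the embodiment only names the candidate functions.

```python
import cv2
import numpy as np

def brenner_sharpness(gray: np.ndarray) -> float:
    # Brenner gradient: sum of squared gray-level differences between
    # pixels two columns apart.
    diff = gray[:, 2:].astype(np.float64) - gray[:, :-2].astype(np.float64)
    return float(np.sum(diff ** 2))

def tenengrad_sharpness(gray: np.ndarray) -> float:
    # Tenengrad: Sobel gradients in the horizontal and vertical directions;
    # the score is the mean squared gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def needs_self_calibration(gray: np.ndarray, threshold: float) -> bool:
    # Trigger self-calibration when the sharpness score falls below the
    # threshold; the absolute scale of the threshold depends on the metric.
    return tenengrad_sharpness(gray) < threshold
```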
Step 204: extracting equally spaced feature points from the target infrared image to obtain a first feature point set, extracting equally spaced feature points from the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set.
Specifically, Scale-Invariant Feature Transform (SIFT) detection is performed on the target infrared image to obtain feature points, and equally spaced feature points are selected from them and added to the first feature point set; SIFT detection is likewise performed on the RGB image, and equally spaced feature points are selected and added to the second feature point set. SIFT is a feature descriptor used in image processing; it is scale-invariant, and keypoints can be detected in an image with it. The feature points in the first feature point set and the second feature point set are then matched through SIFT. Equal spacing means that the distance between adjacent feature points is equal.

SIFT feature detection comprises the following steps: searching image locations over all scales; determining position and scale by fitting at each candidate location; assigning one or more orientations to each keypoint location based on the local gradient directions of the image; and measuring the local gradients of the image at the selected scale in the neighborhood around each keypoint.

SIFT feature matching comprises: extracting feature vectors that are invariant to scaling, rotation and brightness changes from the images to obtain SIFT feature vectors; and judging the similarity of keypoints in the two images by the Euclidean distance between their SIFT feature vectors. The smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is below a set threshold, the match can be judged successful.
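A minimal sketch of this detection-and-matching step in Python with OpenCV follows. The grid-based selection is one plausible way to obtain approximately equally spaced keypoints, and the grid size and distance threshold are assumptions; the patent does not specify how the equal spacing is enforced.

```python
import cv2
import numpy as np

def equally_spaced_sift(gray: np.ndarray, grid: int = 40):
    # Detect SIFT keypoints, then keep at most one (the strongest) per grid
    # cell so the retained points are approximately equally spaced.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    best = {}
    for kp, desc in zip(keypoints, descriptors):
        cell = (int(kp.pt[0]) // grid, int(kp.pt[1]) // grid)
        if cell not in best or kp.response > best[cell][0].response:
            best[cell] = (kp, desc)
    kept = list(best.values())
    return [kp for kp, _ in kept], np.float32([desc for _, desc in kept])

def match_feature_sets(desc_ir: np.ndarray, desc_rgb: np.ndarray,
                       max_dist: float = 250.0):
    # Brute-force matching on the Euclidean (L2) distance between SIFT
    # descriptors; a pair is accepted when the distance is below the
    # (assumed) threshold, mirroring the criterion described above.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(desc_ir, desc_rgb)
    return [m for m in matches if m.distance < max_dist]
```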
Step 206: obtaining a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
Let P_ir be the spatial coordinates of a point in the infrared camera coordinate system, p_ir the projection coordinates of the point on the image plane (x and y in pixels; z equal to the depth value, in mm), and H_ir the intrinsic matrix of the depth camera. From the pinhole imaging model, they satisfy the following relation:

p_ir = H_ir P_ir    (1)

Let P_rgb be the spatial coordinates of the same point in the RGB camera coordinate system, p_rgb the projection coordinates of the point on the RGB image plane, and H_rgb the intrinsic matrix of the RGB camera. Because the coordinate systems of the infrared camera and the RGB camera are different, the two are linked by a rotation-translation transformation, namely:

P_rgb = R P_ir + T    (2)

where R is a rotation matrix and T is a translation vector. Finally, H_rgb projects P_rgb to obtain the RGB pixel coordinates of the point:

p_rgb = H_rgb P_rgb    (3)

It is to be noted that p_ir and p_rgb are homogeneous coordinates: when constructing p_ir, the original pixel coordinates (x, y) are multiplied by the depth value, and the final RGB pixel coordinates are obtained by dividing p_rgb by its z component, i.e. (x/z, y/z), where the z component is the distance (in millimeters) from the point to the RGB camera.

The extrinsic parameters of the infrared camera comprise a rotation matrix R_ir and a translation vector T_ir, and the extrinsic parameters of the RGB camera comprise a rotation matrix R_rgb and a translation vector T_rgb. The extrinsic parameters transform a point P in the world coordinate system into the camera coordinate system, so for the infrared camera and the RGB camera respectively:

P_ir = R_ir P + T_ir
P_rgb = R_rgb P + T_rgb    (4)

Rearranging the first line of formula (4) gives:

P = R_ir^-1 (P_ir - T_ir)    (5)

Substituting (5) into the second line of (4) and comparing with formula (2) gives:

R = R_rgb R_ir^-1
T = T_rgb - R_rgb R_ir^-1 T_ir    (6)
the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera comprises a rotation matrix and a translation vector between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
According to the camera self-calibration method, when it is detected that the image sharpness is below the sharpness threshold, a target infrared image and a target RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. Equally spaced feature points are extracted from the target infrared image to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the two sets are matched. The matched feature points are used to obtain the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera. Because equally spaced feature points are used, the matching is more accurate, the extrinsic parameters between the infrared camera and the RGB camera are recalibrated, the transformation relation between the two cameras is corrected, and the image sharpness is improved.
In one embodiment, when it is detected that the image sharpness is below the sharpness threshold, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes: when it is detected that the image sharpness is below the sharpness threshold, detecting the operating state of the electronic device; and when the operating state of the electronic device is a preset operating state, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
The preset operating state is an operating state set in advance. It may include at least one of the following conditions: an opening instruction for the infrared camera and the RGB camera is received; it is detected that the face entered for unlocking the screen fails to match the pre-stored face; or it is detected that the number of times the face entered for unlocking the screen fails to match the pre-stored face exceeds a first preset number.
Specifically, the opening instruction for the infrared camera and the RGB camera may be triggered by launching the camera application. When the electronic device receives the user's trigger of the camera application, it starts the infrared camera and the RGB camera, and then acquires the target infrared image and the target RGB image obtained by the two cameras shooting the same scene. Obtaining the transformation relation between the coordinate systems of the infrared camera and the RGB camera at camera start-up ensures the accuracy of the images subsequently captured by the cameras.
The screen may be unlocked by face verification. When face matching fails, the electronic device may acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene, and obtain the transformation relation between the infrared camera and RGB camera coordinate systems through matching calculation. Calibrating the infrared camera and the RGB camera after a face unlock failure improves image accuracy and facilitates a subsequent unlock attempt.
Further, when the number of face matching failures exceeds a first preset number, the electronic device may acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene, and obtain the transformation relation between the infrared camera and RGB camera coordinate systems through matching calculation.

Calculating the transformation relation only when the number of matching failures exceeds the first preset number avoids recalibrating the infrared camera and the RGB camera when face matching fails due to other factors.
In one embodiment, when the operating state of the electronic device is the preset operating state, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes: when it is detected that the face entered for unlocking the screen fails to match the pre-stored face, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene; and when an opening instruction for the infrared camera and the RGB camera is received, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene.

Specifically, when it is detected that the operating state of the electronic device is a failure to match the face entered for unlocking the screen against the pre-stored face, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the first feature point set is matched with the second feature point set; and the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.

When it is then detected that the operating state of the electronic device is the reception of an opening instruction for the infrared camera and the RGB camera, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the first feature point set is matched with the second feature point set; and the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.
In one embodiment, when the operating state of the electronic device is a preset operating state, acquiring a target infrared image and a target RGB image obtained by shooting the same scene with an infrared camera and an RGB camera, including:
when it is detected that the number of times the face entered for unlocking the screen fails to match the pre-stored face exceeds a first preset number, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene; and when an opening instruction for the infrared camera and the RGB camera is received, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene.

Specifically, when it is detected that the operating state of the electronic device is that the number of times the face entered for unlocking the screen fails to match the pre-stored face exceeds the first preset number, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the first feature point set is matched with the second feature point set; and the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.

When it is then detected that the operating state of the electronic device is the reception of an opening instruction for the infrared camera and the RGB camera, a target infrared image and a target RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the first feature point set is matched with the second feature point set; and the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.
In one embodiment, the camera self-calibration method further includes: acquiring, in a first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in a second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; extracting equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, extracting equally spaced feature points from the RGB image in the first operating environment to obtain a second feature point set, and matching the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.

Specifically, the first operating environment may be an REE (Rich Execution Environment) and the second operating environment may be a TEE (Trusted Execution Environment). The security level of the TEE is higher than that of the REE. Matching the feature points in the second operating environment improves security and ensures the safety of the data.
In one embodiment, the camera self-calibration method further includes: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; transmitting the RGB image to the second operating environment; and extracting equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, extracting equally spaced feature points from the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.

Extracting and matching the feature points in the second operating environment improves security and ensures the safety of the data.
In one embodiment, the camera self-calibration method further includes: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; extracting equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, and transmitting the first feature point set to the first operating environment; and extracting equally spaced feature points from the RGB image in the first operating environment to obtain a second feature point set, and matching the first feature point set with the second feature point set in the first operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.

Matching the feature points in the first operating environment gives high data processing efficiency.
Fig. 3 is a block diagram of a self-calibration apparatus of a camera in one embodiment. As shown in fig. 3, the camera self-calibration apparatus includes an image acquisition module 310, a matching module 320, and a parameter determination module 330. Wherein:
the image obtaining module 310 is configured to obtain an infrared image and an RGB image of a target obtained by shooting the same scene with the infrared camera and the RGB camera when it is detected that the image definition is smaller than the definition threshold.
The matching module 320 is configured to extract feature points with equal distances between the target infrared image to obtain a first feature point set, extract feature points with equal distances between the RGB image to obtain a second feature point set, and match the first feature point set and the second feature point set.
The parameter determining module 330 is configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
According to the camera self-calibration device, when the condition that the preset condition is met is detected, a target infrared image and an RGB image which are obtained by shooting the same scene through the infrared camera and the RGB camera are obtained, feature points with equal intervals in the target infrared image are extracted to obtain a first feature point set, feature points with equal intervals in the RGB image are extracted to obtain a second feature point set, the first feature point set and the second feature point set are matched, the matched feature points are adopted to obtain the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera, the feature points with equal intervals are used for matching, the matching is more accurate, the calibration of external parameters between the infrared camera and the RGB camera is realized, the transformation relation between the two cameras is calibrated, and the image definition is improved.
In one embodiment, the camera self-calibration apparatus further includes a detection module. The detection module is configured to detect the operating state of the electronic device when it is detected that the image sharpness is below the sharpness threshold.
The image acquisition module 310 is further configured to acquire the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the operating state of the electronic device is the preset operating state.

The preset operating state includes at least one of the following conditions: an opening instruction for the infrared camera and the RGB camera is received; it is detected that the face entered for unlocking the screen fails to match the pre-stored face; or it is detected that the number of times the face entered for unlocking the screen fails to match the pre-stored face exceeds a first preset number.
In one embodiment, the image acquisition module 310 is further configured to: when it is detected that the face entered for unlocking the screen fails to match the pre-stored face, acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; and when an opening instruction for the infrared camera and the RGB camera is received, acquire the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
In one embodiment, the image acquisition module 310 is configured to acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene when it is detected that the operating state of the electronic device is a failure to match the face entered for unlocking the screen against the pre-stored face; the matching module 320 is configured to extract equally spaced feature points from the target infrared image to obtain a first feature point set, extract equally spaced feature points from the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set; and the parameter determining module 330 is configured to obtain the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.

The image acquisition module 310 is further configured to acquire the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene when it is detected that the operating state of the electronic device is the reception of an opening instruction for the infrared camera and the RGB camera; the matching module 320 is further configured to extract equally spaced feature points from the target infrared image to obtain a first feature point set, extract equally spaced feature points from the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set; and the parameter determining module 330 is further configured to obtain the transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
In one embodiment, the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;

the matching module 320 is further configured to extract equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, extract equally spaced feature points from the RGB image in the first operating environment to obtain a second feature point set, and match the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;

the matching module 320 is further configured to transmit the RGB image to the second operating environment, extract equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, extract equally spaced feature points from the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;

the matching module 320 is further configured to extract equally spaced feature points from the target infrared image in the second operating environment to obtain a first feature point set, transmit the first feature point set to the first operating environment, extract equally spaced feature points from the RGB image in the first operating environment to obtain a second feature point set, and match the first feature point set with the second feature point set in the first operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.
The embodiments of the application further provide an electronic device. The electronic device includes a memory and a processor; the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the camera self-calibration method.

The embodiments of the application further provide a non-volatile computer-readable storage medium. The non-transitory computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the camera self-calibration method described above.
Fig. 4 is a schematic diagram of the internal structure of the electronic device in one embodiment. As shown in fig. 4, the electronic device includes a processor, a memory and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory is used to store data, programs and the like; the memory stores at least one computer program which can be executed by the processor to implement the camera self-calibration method provided in the embodiments of the application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the computer program can be executed by the processor to implement the camera self-calibration method provided in the embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, etc., for communicating with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device or the like.
Each module in the camera self-calibration apparatus provided in the embodiments of the application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored on the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the application are performed.

A computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the camera self-calibration method.
Fig. 5 is a schematic diagram of the internal structure of the electronic device in another embodiment. As shown in fig. 5, the electronic device 50 may include a camera module 510, a second processor 520 and a first processor 530. The second processor 520 may be a CPU (Central Processing Unit) module. The first processor 530 may be an MCU (Micro Controller Unit) module or the like. The first processor 530 is connected between the second processor 520 and the camera module 510; the first processor 530 can control the laser camera 512, the floodlight 514 and the laser lamp 518 in the camera module 510, and the second processor 520 can control the RGB (Red/Green/Blue color mode) camera 516 in the camera module 510.

The camera module 510 includes a laser camera 512, a floodlight 514, an RGB camera 516 and a laser lamp 518. The laser camera 512 is an infrared camera used to acquire infrared images. The floodlight 514 is a surface light source capable of emitting infrared light; the laser lamp 518 is a point light source capable of generating laser light, and carries a pattern. When the floodlight 514 emits a surface light source, the laser camera 512 can obtain an infrared image from the reflected light. When the laser lamp 518 emits a point light source, the laser camera 512 can obtain a speckle image from the reflected light. The speckle image is an image in which the patterned point light source emitted by the laser lamp 518 is reflected and its pattern is deformed.
The second processor 520 may include a CPU core operating in a TEE (Trusted Execution Environment) and a CPU core operating in an REE (Rich Execution Environment). The TEE and the REE are both operating modes of an ARM (Advanced RISC Machines) module. The security level of the TEE is high, and only one CPU core in the second processor 520 can operate in the TEE at a time. Generally, operations with a higher security level in the electronic device 50 need to be executed in the CPU core in the TEE, while operations with a lower security level can be executed in the CPU core in the REE.
The first processor 530 includes a PWM (Pulse Width Modulation) module 532, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) interface 534, a RAM (Random Access Memory) module 536 and a depth engine 538. The PWM module 532 can transmit pulses to the camera module to control the floodlight 514 or the laser lamp 518 to turn on, so that the laser camera 512 can collect infrared images or speckle images. The SPI/I2C interface 534 is used to receive image acquisition instructions sent by the second processor 520. The depth engine 538 can process the speckle images to obtain a depth disparity map.
When the second processor 520 receives a data acquisition request from an application, for example when the application needs to perform face unlocking or face payment, an image acquisition instruction may be sent to the first processor 530 through the CPU core operating in the TEE. After the first processor 530 receives the image acquisition instruction, the PWM module 532 emits pulse waves to control the floodlight 514 in the camera module 510 to turn on and acquire an infrared image through the laser camera 512, and to control the laser lamp 518 in the camera module 510 to turn on and acquire a speckle image through the laser camera 512. The camera module 510 can transmit the collected infrared image and speckle image to the first processor 530. The first processor 530 can process the received infrared image to obtain an infrared disparity map, and process the received speckle image to obtain a speckle disparity map or a depth disparity map. Here, processing the infrared image and the speckle image means correcting them to remove the influence of the internal and external parameters of the camera module 510 on the images. The first processor 530 can be set to different modes, and different modes output different images. When the first processor 530 is set to speckle map mode, it processes the speckle image to obtain a speckle disparity map, from which a target speckle image can be obtained; when set to depth map mode, it processes the speckle image to obtain a depth disparity map, from which a depth image, i.e. an image with depth information, is obtained. The first processor 530 can transmit the infrared disparity map and the speckle disparity map to the second processor 520, or transmit the infrared disparity map and the depth disparity map to the second processor 520. The second processor 520 can obtain a target infrared image from the infrared disparity map and a depth image from the depth disparity map. Further, the second processor 520 can perform face recognition, face matching and living-body detection, and acquire depth information of the detected face, according to the target infrared image and the depth image.
The first processor 530 and the second processor 520 communicate through a fixed secure interface to ensure the security of the transmitted data. As shown in fig. 5, data sent by the second processor 520 to the first processor 530 passes through SECURE SPI/I2C 540, and data sent by the first processor 530 to the second processor 520 passes through SECURE MIPI (Mobile Industry Processor Interface) 550.
In an embodiment, the first processor 530 may also obtain a target infrared image according to the infrared disparity map, calculate and obtain a depth image according to the depth disparity map, and send the target infrared image and the depth image to the second processor 520.
In one embodiment, the first processor 530 may perform face recognition, face matching, liveness detection, and depth information acquisition on the detected face according to the target infrared image and the depth image. Wherein, the first processor 530 sending the image to the second processor 520 means that the first processor 530 sends the image to a CPU core in the second processor 520 under the TEE environment.
In one embodiment, an RGB image obtained by the RGB camera shooting the scene may be acquired in the REE, and a target infrared image obtained by the infrared camera shooting the same scene may be acquired in the TEE; equally spaced feature points are extracted from the target infrared image in the TEE to obtain a first feature point set, equally spaced feature points are extracted from the RGB image in the REE to obtain a second feature point set, and the first feature point set is matched with the second feature point set in the TEE, the security level of the TEE being higher than that of the REE.

In one embodiment, an RGB image obtained by the RGB camera shooting the scene may be acquired in the REE, and a target infrared image obtained by the infrared camera shooting the same scene may be acquired in the TEE; the RGB image is transmitted to the TEE, equally spaced feature points are extracted from the target infrared image in the TEE to obtain a first feature point set, equally spaced feature points are extracted from the RGB image to obtain a second feature point set, and the first feature point set is matched with the second feature point set in the TEE.

In one embodiment, an RGB image obtained by the RGB camera shooting the scene may be acquired in the REE, and a target infrared image obtained by the infrared camera shooting the same scene may be acquired in the TEE; equally spaced feature points are extracted from the target infrared image in the TEE to obtain a first feature point set, and the first feature point set is transmitted to the REE; equally spaced feature points are extracted from the RGB image in the REE to obtain a second feature point set, and the first feature point set is matched with the second feature point set in the REE.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 6 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 6, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 6, the image processing circuit includes a first ISP processor 630, a second ISP processor 640 and a control logic 650. The first camera 610 includes one or more first lenses 612 and a first image sensor 614. The first image sensor 614 may include a color filter array (e.g., a Bayer filter), and the first image sensor 614 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 614 and provide a set of image data that may be processed by the first ISP processor 630. The second camera 620 includes one or more second lenses 622 and a second image sensor 624. Second image sensor 624 may include a color filter array (e.g., a Bayer filter), and second image sensor 624 may acquire light intensity and wavelength information captured with each imaging pixel of second image sensor 624 and provide a set of image data that may be processed by second ISP processor 640.
The first image collected by the first camera 610 is transmitted to the first ISP processor 630 for processing. After the first ISP processor 630 processes the first image, statistical data of the first image (such as image brightness, image contrast, image color, etc.) may be sent to the control logic 650, and the control logic 650 may determine control parameters of the first camera 610 according to the statistical data, so that the first camera 610 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in the image memory 660 after being processed by the first ISP processor 630, and the first ISP processor 630 may also read and process images stored in the image memory 660. In addition, the first image may be sent directly to the display 670 after being processed by the first ISP processor 630, and the display 670 may also read and display the images in the image memory 660.
Wherein the first ISP processor 630 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 630 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth calculation accuracy.
The image Memory 660 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 614, the first ISP processor 630 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 660 for additional processing before being displayed. The first ISP processor 630 receives the processed data from the image memory 660 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 630 may be output to the display 670 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 630 may also be sent to the image memory 660, and the display 670 may read image data from the image memory 660. In one embodiment, the image memory 660 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 630 may be sent to the control logic 650. For example, the statistical data may include first image sensor 614 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 612 shading correction, and the like. Control logic 650 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 610 and control parameters for first ISP processor 630 based on the received statistical data. For example, the control parameters of the first camera 610 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 612 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 612 shading correction parameters.
Similarly, the second image collected by the second camera 620 is transmitted to the second ISP processor 640 for processing. After the second ISP processor 640 processes the second image, statistical data of the second image (such as image brightness, image contrast, image color, etc.) may be sent to the control logic 650, and the control logic 650 may determine control parameters of the second camera 620 according to the statistical data, so that the second camera 620 can perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 660 after being processed by the second ISP processor 640, and the second ISP processor 640 may also read and process images stored in the image memory 660. In addition, the second image may be sent directly to the display 670 after being processed by the second ISP processor 640, and the display 670 may also read and display the images in the image memory 660. The second camera 620 and the second ISP processor 640 may also implement the processing described for the first camera 610 and the first ISP processor 630.
The first camera 610 may be an infrared camera and the second camera 620 may be an RGB camera.
The image processing technology shown in fig. 6 may be used to implement the steps of the camera self-calibration method described above.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and bus dynamic RAM (RDRAM).
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. A camera self-calibration method is characterized by comprising the following steps:
when the image definition is detected to be smaller than a definition threshold value, acquiring a target infrared image and a target RGB image which are obtained by shooting the same scene by an infrared camera and an RGB camera;
performing scale invariant feature transform detection on the target infrared image to obtain feature points, extracting feature points with equal intermediate distances from the target infrared image to obtain a first feature point set, performing SIFT detection on the RGB image to obtain feature points, extracting feature points with equal intermediate distances from the RGB image to obtain a second feature point set, and matching the first feature point set and the second feature point set;
acquiring a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points, wherein the transformation relation comprises a rotation matrix and a translation vector between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
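(Illustration, not claim language: one conventional way to obtain such a rotation matrix and translation vector from the matched feature points is essential-matrix decomposition, sketched below in Python with OpenCV. The intrinsic matrices `K_ir` and `K_rgb` are assumed to be known from a prior calibration, and the claim does not prescribe this particular algorithm.)

```python
import cv2
import numpy as np

def recover_extrinsics(pts_ir, pts_rgb, K_ir, K_rgb):
    """Estimate the rotation matrix R and translation vector t between the
    infrared and RGB camera coordinate systems from matched points.

    pts_ir, pts_rgb: (N, 2) float32 arrays of matched pixel coordinates, N >= 5.
    K_ir, K_rgb: 3x3 intrinsic matrices (assumed known beforehand).
    """
    # Normalize pixel coordinates with each camera's own intrinsics so that
    # a single essential matrix relates the two views.
    n_ir = cv2.undistortPoints(pts_ir.reshape(-1, 1, 2), K_ir, None)
    n_rgb = cv2.undistortPoints(pts_rgb.reshape(-1, 1, 2), K_rgb, None)
    E, inliers = cv2.findEssentialMat(n_ir, n_rgb, np.eye(3),
                                      method=cv2.RANSAC, prob=0.999,
                                      threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, n_ir, n_rgb, np.eye(3), mask=inliers)
    return R, t
```

The translation recovered this way is determined only up to scale; fixing a metric scale would require additional information, such as the known physical baseline between the two cameras.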
2. The method of claim 1, wherein when it is detected that the image definition is smaller than the definition threshold, acquiring the target infrared image and the target RGB image obtained by the infrared camera and the RGB camera shooting the same scene comprises:
when the image definition is detected to be smaller than a definition threshold value, detecting the working state of the electronic equipment;
and when the working state of the electronic equipment is a preset working state, acquiring a target infrared image and a target RGB image which are obtained by shooting the same scene by the infrared camera and the RGB camera.
3. The method of claim 2, wherein the preset working state comprises at least one of:
receiving an instruction to turn on the infrared camera and the RGB camera;
detecting that a face input for unlocking the screen fails to match a prestored face; and
detecting that the number of times a face input for unlocking the screen fails to match the prestored face exceeds a first preset number.
4. The method according to any one of claims 1 to 3, further comprising:
acquiring, in a first operating environment, a target RGB image obtained by the RGB camera shooting a scene, and acquiring, in a second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;
extracting feature points with equal distances in the target infrared image in the second operating environment to obtain a first feature point set, extracting feature points with equal distances in the target RGB image in the first operating environment to obtain a second feature point set, and matching the first feature point set and the second feature point set in the second operating environment, wherein a security level of the second operating environment is higher than a security level of the first operating environment.
5. The method according to any one of claims 1 to 3, further comprising:
acquiring, in a first operating environment, a target RGB image obtained by the RGB camera shooting a scene, and acquiring, in a second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;
transmitting the target RGB image to the second operating environment, extracting feature points with equal distances in the target infrared image in the second operating environment to obtain a first feature point set, extracting feature points with equal distances in the target RGB image to obtain a second feature point set, and matching the first feature point set and the second feature point set in the second operating environment, wherein a security level of the second operating environment is higher than a security level of the first operating environment.
6. The method according to any one of claims 1 to 3, further comprising:
acquiring, in a first operating environment, a target RGB image obtained by the RGB camera shooting a scene, and acquiring, in a second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;
extracting feature points with equal distances in the target infrared image in the second operating environment to obtain a first feature point set, and transmitting the first feature point set to the first operating environment; and
extracting feature points with equal distances in the target RGB image in the first operating environment to obtain a second feature point set, and matching the first feature point set and the second feature point set in the first operating environment, wherein a security level of the second operating environment is higher than a security level of the first operating environment.
7. A camera self-calibration apparatus, comprising:
an image acquisition module, configured to acquire a target infrared image and a target RGB image obtained by the infrared camera and the RGB camera shooting the same scene when it is detected that the image definition is smaller than a definition threshold;
a matching module, configured to perform scale-invariant feature transform (SIFT) detection on the target infrared image to obtain feature points, extract feature points with equal distances in the target infrared image to obtain a first feature point set, perform SIFT detection on the target RGB image to obtain feature points, extract feature points with equal distances in the target RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set; and
a parameter determining module, configured to acquire a transformation relation between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points, wherein the transformation relation comprises a rotation matrix and a translation vector between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
8. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the camera self-calibration method according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the camera self-calibration method according to any one of claims 1 to 6.

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201810866119.5A CN109040745B (en) 2018-08-01 2018-08-01 Camera self-calibration method and device, electronic equipment and computer storage medium
PCT/CN2019/075375 WO2020024576A1 (en) 2018-08-01 2019-02-18 Camera calibration method and apparatus, electronic device, and computer-readable storage medium
EP19178034.5A EP3608875A1 (en) 2018-08-01 2019-06-04 Camera calibration method and apparatus, electronic device, and computer-readable storage medium
TW108120211A TWI709110B (en) 2018-08-01 2019-06-12 Camera calibration method and apparatus, electronic device
US16/524,678 US11158086B2 (en) 2018-08-01 2019-07-29 Camera calibration method and apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN109040745A CN109040745A (en) 2018-12-18
CN109040745B true CN109040745B (en) 2019-12-27

Family

ID=64647588

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022159244A1 (en) * 2021-01-19 2022-07-28 Google Llc Cross spectral feature mapping for camera calibration

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024576A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Camera calibration method and apparatus, electronic device, and computer-readable storage medium
CN110619656B (en) * 2019-09-05 2022-12-02 杭州宇泛智能科技有限公司 Face detection tracking method and device based on binocular camera and electronic equipment
CN110599550A (en) * 2019-09-09 2019-12-20 香港光云科技有限公司 Calibration system of RGB-D module and equipment and method thereof
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102782721A (en) * 2009-12-24 2012-11-14 康耐视公司 System and method for runtime determination of camera miscalibration
CN106340045A (en) * 2016-08-25 2017-01-18 电子科技大学 Calibration optimization method based on binocular stereoscopic vision in three-dimensional face reconstruction
CN108230395A (en) * 2017-06-14 2018-06-29 深圳市商汤科技有限公司 Stereoscopic image is calibrated and image processing method, device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109040745B (en) Camera self-calibration method and device, electronic equipment and computer storage medium
CN109767467B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
TWI709110B (en) Camera calibration method and apparatus, electronic device
CN108716983B (en) Optical element detection method and device, electronic equipment, storage medium
CN109544620B (en) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN108716982B (en) Optical element detection method, optical element detection device, electronic equipment and storage medium
CN108924426B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108600740B (en) Optical element detection method, optical element detection device, electronic equipment and storage medium
CN109327626B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN107948617B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN109040746B (en) Camera calibration method and apparatus, electronic equipment, computer readable storage medium
CN109584312B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
US11218650B2 (en) Image processing method, electronic device, and computer-readable storage medium
CN109683698B (en) Payment verification method and device, electronic equipment and computer-readable storage medium
CN109068060B (en) Image processing method and device, terminal device and computer readable storage medium
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
CN108760245B (en) Optical element detection method and device, electronic equipment, readable storage medium storing program for executing
CN110689007B (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN109658459B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN108737733B (en) Information prompting method and device, electronic equipment and computer readable storage medium
CN109120846A (en) Image processing method and device, electronic equipment, computer readable storage medium
JP7321772B2 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant