WO2020024576A1 - Camera calibration method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Camera calibration method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2020024576A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
rgb
image
point set
feature point
Prior art date
Application number
PCT/CN2019/075375
Other languages
English (en)
French (fr)
Inventor
郭子青
周海涛
欧锦荣
谭筱
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810866119.5A external-priority patent/CN109040745B/zh
Priority claimed from CN201810867079.6A external-priority patent/CN109040746B/zh
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020024576A1 publication Critical patent/WO2020024576A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Definitions

  • the present application relates to the field of imaging technology, and in particular, to a camera calibration method, a camera calibration device, an electronic device, and a computer-readable storage medium.
  • the infrared camera and RGB camera need to be calibrated before leaving the factory.
  • in use, however, the camera module may become deformed, which affects the clarity of the captured image.
  • the embodiments of the present application provide a camera calibration method, a camera calibration device, an electronic device, and a computer-readable storage medium, which can improve the sharpness of a captured image.
  • a camera calibration method includes: when it is detected that the image sharpness is less than a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining, according to the matched feature points, a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
  • a camera calibration device includes an image acquisition module, a matching module, and a parameter determination module.
  • the image acquisition module is configured to, when the image sharpness is less than a sharpness threshold, acquire a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene.
  • the matching module is configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and to match the first feature point set with the second feature point set.
  • the parameter determination module is configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • An electronic device includes a memory and a processor.
  • a computer program is stored in the memory.
  • when the computer program is executed by the processor, the processor performs the following steps: when it is detected that the image sharpness is less than a sharpness threshold, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • a non-transitory computer-readable storage medium has a computer program stored thereon.
  • when the computer program is executed by a processor, the following steps are implemented: when it is detected that the image sharpness is less than a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • FIG. 1 is a schematic diagram of an application environment of a camera calibration method in some embodiments of the present application.
  • FIG. 2 and FIG. 3 are flowcharts of a camera calibration method in some embodiments of the present application.
  • FIG. 4 is a structural block diagram of a camera calibration apparatus in some embodiments of the present application.
  • FIG. 5 is a schematic diagram of an internal structure of an electronic device in some embodiments of the present application.
  • FIG. 6 is a schematic diagram of another application environment of a camera calibration method in some embodiments of the present application.
  • FIG. 7 is a schematic diagram of an image processing circuit in some embodiments of the present application.
  • the terms first, second, and the like used in this application can be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
  • first calibration image may be referred to as a second calibration image
  • second calibration image may be referred to as a first calibration image. Both the first calibration image and the second calibration image are calibration images, but they are not the same calibration image.
  • FIG. 1 is a schematic diagram of an application environment of a camera calibration method according to an embodiment.
  • the application environment includes an electronic device 110 and a scenario 120.
  • the electronic device 110 includes an infrared camera (Infrared Radiation Camera) and an RGB (Red, Green, Blue) camera.
  • the electronic device 110 can capture the target infrared image and RGB image by shooting the scene 120 through the infrared camera and the RGB camera, perform feature point matching according to the target infrared image and the RGB image, and then calculate the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
  • the electronic device 110 may be a smart phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
  • FIG. 2 is a flowchart of a camera calibration method according to an embodiment. As shown in FIG. 2, a camera calibration method includes:
  • step 202 when it is detected that the image sharpness is less than the sharpness threshold, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera capturing the same scene are acquired.
  • the electronic device 110 may periodically detect images captured by its own camera, and when it is detected that the sharpness of the image is less than the sharpness threshold, it indicates that it is necessary to calibrate its own camera.
  • the sharpness threshold can be set as needed, such as 80%, 90%, etc.
  • the Brenner gradient function can be used to calculate the square of the gray-level difference between two adjacent pixels to obtain the image sharpness.
  • the Sobel operator of the Tenengrad gradient function can be used to extract the horizontal and vertical gradient values, and the image sharpness is then obtained from these gradients.
  • the variance function, energy gradient function, Vollath function, entropy function, EAV point sharpness algorithm, Laplacian gradient function, SMD gray variance function, and the like can also be used to detect image sharpness.
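The Brenner and Tenengrad measures described above can be sketched with NumPy alone; the function names, the checkerboard test image, and the pixel offset of two in the Brenner measure are illustrative assumptions rather than details taken from the patent:

```python
import numpy as np

def brenner_sharpness(gray):
    """Brenner gradient: sum of squared gray-level differences between
    pixels two columns apart (the classic Brenner offset)."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]
    return float(np.sum(diff ** 2))

def tenengrad_sharpness(gray):
    """Tenengrad: mean squared gradient magnitude from 3x3 Sobel
    operators in the horizontal and vertical directions."""
    g = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T  # vertical Sobel kernel

    def cross_corr3(img, k):
        # Valid 3x3 cross-correlation written as shifted sums (no SciPy).
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        return out

    gx, gy = cross_corr3(g, kx), cross_corr3(g, ky)
    return float(np.mean(gx ** 2 + gy ** 2))

# A sharp checkerboard scores higher than a uniform gray image,
# so a sharpness threshold can separate the two cases.
sharp = np.kron(np.eye(8), np.ones((8, 8))) * 255.0
flat = np.full((64, 64), 128.0)
assert brenner_sharpness(sharp) > brenner_sharpness(flat) == 0.0
assert tenengrad_sharpness(sharp) > tenengrad_sharpness(flat) == 0.0
```

In step 202, a score of this kind would be compared against the sharpness threshold to decide whether recalibration is needed.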
  • Step 204 Extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set.
  • a first feature point set is obtained by applying the scale-invariant feature transform (SIFT) to the target infrared image, and a second feature point set is obtained by applying SIFT detection to the RGB image.
  • SIFT is a descriptor used in image processing; the descriptor is scale-invariant and allows key points to be detected in an image.
  • the feature points in the first feature point set and the second feature point set are matched by SIFT.
  • SIFT feature detection includes: searching image positions over all scales; determining position and scale by fitting at each candidate position; assigning one or more orientations to each keypoint position based on the local gradient directions of the image; and measuring the local image gradients at a selected scale in the area around each keypoint.
  • SIFT feature matching includes: extracting feature vectors that are invariant to scaling, rotation, and brightness changes from multiple images to obtain SIFT feature vectors, and using the Euclidean distance between SIFT feature vectors to measure the similarity of key points in the two images. The smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is less than a set threshold, the match can be determined to be successful.
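The matching rule above (nearest neighbour by Euclidean distance between SIFT descriptors, accepted only below a threshold) can be sketched as follows; in practice the descriptors would come from a SIFT implementation such as OpenCV's, so the small stand-in arrays and the `max_dist` value here are assumptions for illustration:

```python
import numpy as np

def match_features(desc1, desc2, max_dist=0.5):
    """For each descriptor in desc1, find its nearest neighbour in desc2
    by Euclidean distance; keep the pair only if the distance is below
    max_dist (smaller distance = higher similarity)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

# Stand-in 4-D descriptors: rows 0 and 1 of `a` have close counterparts
# in `b`, row 2 does not, so only two pairs survive the threshold.
a = np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0], [5.0, 5.0, 5.0, 5.0]])
b = np.array([[1.0, 1.0, 0.0, 0.1], [0.1, 0.0, 0.0, 0.0]])
print(match_features(a, b))  # → [(0, 1), (1, 0)]
```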
  • Step 206 Obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • Let P_ir be the spatial coordinate of a point in the infrared camera coordinate system, p_ir be the projected coordinate of the point on the image plane (the x and y units are pixels; z equals the depth value, in mm), and H_ir be the internal parameter matrix of the infrared (depth) camera.
  • Let P_rgb be the spatial coordinate of the same point in the RGB camera coordinate system, p_rgb be the projected coordinate of the point on the RGB image plane, and H_rgb be the internal parameter matrix of the RGB camera. Because the coordinate system of the infrared camera and that of the RGB camera are different, the two can be linked by a rotation and translation transformation, that is: P_rgb = R · P_ir + T,
  • where R is the rotation matrix and T is the translation vector.
  • H_rgb is then used to project P_rgb to obtain the RGB pixel coordinates corresponding to the point.
  • Both p_ir and p_rgb use homogeneous coordinates, so when constructing p_ir, the original pixel coordinates (x, y) are multiplied by the depth value; the final RGB pixel coordinates are obtained by dividing p_rgb by its z component, that is, (x/z, y/z), where the value of the z component is the distance from the point to the RGB camera (unit: mm).
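Putting the homogeneous-coordinate steps above together, a minimal sketch of mapping one infrared pixel with known depth to its RGB pixel looks like this (the toy intrinsic matrix and pixel values are assumptions for illustration):

```python
import numpy as np

def ir_pixel_to_rgb_pixel(u, v, depth_mm, H_ir, H_rgb, R, T):
    """Map a pixel (u, v) with depth (mm) from the infrared image to the
    RGB image, following the homogeneous-coordinate steps above:
    back-project with H_ir^-1, transform by R and T, project with H_rgb,
    then divide by the z component."""
    p_ir = np.array([u * depth_mm, v * depth_mm, depth_mm], dtype=float)
    P_ir = np.linalg.inv(H_ir) @ p_ir  # 3-D point in the IR camera frame
    P_rgb = R @ P_ir + T               # rotate/translate into the RGB frame
    p_rgb = H_rgb @ P_rgb              # project onto the RGB image plane
    return p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]

# Toy intrinsics (focal length 500 px, principal point at (320, 240));
# with identity rotation and zero translation the pixel maps to itself.
H = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
u, v = ir_pixel_to_rgb_pixel(100, 80, 1000.0, H, H, np.eye(3), np.zeros(3))
print(round(u), round(v))  # → 100 80
```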
  • the external parameters of the infrared camera include the rotation matrix R ir and the translation vector T ir .
  • the external parameters of the RGB camera include the rotation matrix R rgb and the translation vector T rgb .
  • the external parameters of a camera describe how a point P in the world coordinate system is transformed into the camera coordinate system. For the infrared camera and the RGB camera the transformations are, respectively: P_ir = R_ir · P + T_ir and P_rgb = R_rgb · P + T_rgb. Eliminating P gives the relation between the two cameras: R = R_rgb · R_ir^-1 and T = T_rgb - R · T_ir.
  • the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera includes a rotation matrix and a translation vector between the two.
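Under the standard extrinsic relations sketched above, the rotation matrix and translation vector between the two cameras follow directly from each camera's own extrinsics; the toy rotation, translation, and test point below are assumptions for illustration:

```python
import numpy as np

def relative_extrinsics(R_ir, T_ir, R_rgb, T_rgb):
    """From each camera's extrinsics (world -> camera), derive the
    rotation R and translation T taking IR-frame points to the RGB
    frame: R = R_rgb @ R_ir^-1, T = T_rgb - R @ T_ir."""
    R = R_rgb @ np.linalg.inv(R_ir)
    T = T_rgb - R @ T_ir
    return R, T

# Check on a random world point: transforming it into each camera frame
# directly agrees with going IR frame -> RGB frame via (R, T).
rng = np.random.default_rng(0)
R_ir, T_ir = np.eye(3), rng.normal(size=3)
theta = 0.3  # RGB camera rotated 0.3 rad about the z axis
R_rgb = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
T_rgb = rng.normal(size=3)
P = rng.normal(size=3)

R, T = relative_extrinsics(R_ir, T_ir, R_rgb, T_rgb)
P_ir = R_ir @ P + T_ir
P_rgb = R_rgb @ P + T_rgb
print(np.allclose(R @ P_ir + T, P_rgb))  # → True
```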
  • the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; feature points are extracted from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set.
  • the first feature point set and the second feature point set are matched.
  • the matched feature points are used to obtain the transformation relationship between the infrared camera coordinate system and the RGB camera coordinate system.
  • calibrating the external parameters between the infrared camera and the RGB camera calibrates the transformation relationship between the two cameras and improves the sharpness of the image.
  • acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes: detecting the working state of the electronic device 110; and when the working state of the electronic device 110 is a preset working state, acquiring a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
  • the preset working state is a working state that is set in advance.
  • the preset working state may include at least one of the following situations: it is detected that the face entered to unlock the screen fails to match the pre-stored face; it is detected that the number of times the face entered to unlock the screen fails to match the pre-stored face reaches a first preset number of times; an instruction to turn on the infrared camera and the RGB camera is received; it is detected that the difference between the current temperature of the light transmitter and the initial temperature exceeds a temperature threshold; it is detected a second preset number of consecutive times that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold; or a three-dimensional processing request for a preview image is received.
  • the start instruction of the infrared camera and the RGB camera may be triggered by starting a camera application.
  • the electronic device 110 receives a user trigger of the camera application program to start the infrared camera and the RGB camera. After the cameras are started, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. Obtaining the transformation relationship between the coordinate systems of the infrared camera and the RGB camera when the camera is turned on can ensure the accuracy of images captured subsequently.
  • the light emitter may be a laser light or the like.
  • the initial temperature is the temperature of the light emitter at the time of the last calibration.
  • the temperature of the light transmitter is detected by a temperature sensor.
  • a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired.
  • the conversion relationship between the infrared camera and the RGB camera is calibrated to ensure the accuracy of the collected images.
  • the second preset number of times can be set as needed, such as 2 times, 3 times, 4 times, and so on.
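The consecutive-exceedance trigger described above can be sketched as follows; the function, its parameters, and the example temperatures are illustrative assumptions rather than details from the patent:

```python
def needs_recalibration(current_temp, initial_temp, threshold, history, n_consecutive):
    """Record whether the latest reading of the light transmitter exceeds
    the temperature threshold, and report True once that has happened
    n_consecutive times in a row (the second preset number of times)."""
    history.append(abs(current_temp - initial_temp) > threshold)
    recent = history[-n_consecutive:]
    return len(recent) == n_consecutive and all(recent)

readings = []
print(needs_recalibration(30.0, 25.0, 10.0, readings, 2))  # → False (within threshold)
print(needs_recalibration(40.0, 25.0, 10.0, readings, 2))  # → False (first exceedance)
print(needs_recalibration(41.0, 25.0, 10.0, readings, 2))  # → True  (second in a row)
```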
  • the temperature of the light transmitter is detected by the temperature sensor.
  • a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired.
  • the face unlock method can be used for face verification.
  • the electronic device 110 can obtain the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene and perform matching calculation to obtain the transformation relationship between the coordinate systems of the infrared camera and the RGB camera.
  • calibration of the infrared camera and RGB camera can improve the accuracy of the image and facilitate the face unlock again.
  • the electronic device 110 may obtain a target infrared image and an RGB image obtained by shooting the same scene with the infrared camera and the RGB camera, and perform matching calculation to obtain the transformation relationship between the coordinate systems of the infrared camera and the RGB camera.
  • the transformation relationship between the infrared camera and RGB camera coordinate systems is calculated only then, to avoid calibrating the infrared camera and the RGB camera when a face matching failure is caused by other factors.
  • the three-dimensional processing request may be generated by the user by clicking a button on the display screen, or may be generated by the user by pressing a control on the touch screen, and the electronic device 110 may obtain a three-dimensional processing instruction for the initial preview image.
  • Three-dimensional processing refers to processing the image in three dimensions, that is, there are three dimensions: length, width, and height.
  • the electronic device 110 may detect depth information of an object or a human face in an image by using a depth image or an infrared image, and perform three-dimensional processing on the image.
  • the three-dimensional processing may be a beautifying process on the image; the electronic device 110 may determine the area to be beautified according to the depth information of the human face, so that the beautifying effect of the image is better.
  • the electronic device 110 may receive a three-dimensional processing instruction for the initial preview image.
  • the initial preview image refers to an image of the surrounding environment information collected by the electronic device 110 through a camera, and the initial preview image is displayed on the display screen of the electronic device 110 in real time.
  • the electronic device 110 After receiving the three-dimensional processing instruction for the initial preview image, the electronic device 110 performs corresponding three-dimensional processing on the initial preview image.
  • In some embodiments, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, extracting feature points in the target infrared image to obtain the first feature point set and feature points in the RGB image to obtain the second feature point set, matching the first feature point set with the second feature point set, and obtaining the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points includes:
  • when the working state of the electronic device 110 is that the face entered to unlock the screen fails to match the pre-stored face, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting the feature points in the target infrared image to obtain the first feature point set and the feature points in the RGB image to obtain the second feature point set, and matching the first feature point set with the second feature point set; and obtaining, according to the matched feature points, the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera;
  • when the working state of the electronic device 110 is that an opening instruction of the infrared camera and the RGB camera is received, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene, and performing the same extraction, matching, and transformation steps;
  • when it is detected that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene, and performing the same extraction, matching, and transformation steps.
  • Calibration can thus be performed in different working states of the electronic device 110, providing multiple occasions for calibration and ensuring timely calibration and the sharpness of captured images.
  • In some embodiments, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes:
  • when the working state of the electronic device 110 is that an opening instruction of the infrared camera and the RGB camera is received, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting the feature points in the target infrared image to obtain the first feature point set and the feature points in the RGB image to obtain the second feature point set, and matching the first feature point set with the second feature point set; and obtaining, according to the matched feature points, the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera;
  • when it is detected that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene, and performing the same extraction, matching, and transformation steps.
  • Calibration can thus be performed in different working states of the electronic device 110, providing multiple occasions for calibration and ensuring timely calibration and the sharpness of captured images.
  • the camera calibration method shown in FIG. 2 further includes: acquiring an RGB image obtained by an RGB camera shooting a scene in a first operating environment, and acquiring a target infrared image obtained by an infrared camera shooting the same scene in a second operating environment; extracting feature points in the target infrared image in the second operating environment to obtain a first feature point set, and extracting feature points in the RGB image in the first operating environment to obtain a second feature point set; and transmitting the second feature point set to the second operating environment, in which the first feature point set and the second feature point set are matched.
  • the security level of the second operating environment is higher than that of the first operating environment.
  • the first operating environment may be a REE (Rich Execution Environment) environment
  • the second operating environment may be a TEE (Trusted Execution Environment).
  • TEE has a higher security level than REE. Matching feature points in the second operating environment improves security and ensures data security.
  • the camera calibration method shown in FIG. 2 further includes: acquiring an RGB image obtained by an RGB camera shooting a scene in a first operating environment, and acquiring a target infrared image obtained by an infrared camera shooting the same scene in a second operating environment; transmitting the RGB image to the second operating environment; and extracting, in the second operating environment, feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set and the second feature point set in the second operating environment.
  • the security level of the second operating environment is higher than that of the first operating environment. Extracting and matching the feature points in the second operating environment improves security and ensures data security.
  • the camera calibration method shown in FIG. 2 further includes: acquiring an RGB image obtained by an RGB camera shooting a scene in a first operating environment, and acquiring a target infrared image obtained by an infrared camera shooting the same scene in a second operating environment; extracting feature points in the target infrared image in the second operating environment to obtain a first feature point set, and transmitting the first feature point set to the first operating environment; and extracting, in the first operating environment, feature points in the RGB image to obtain a second feature point set, and matching the first feature point set and the second feature point set in the first operating environment.
  • the security level of the second operating environment is higher than that of the first operating environment. Matching feature points in the first operating environment has high data processing efficiency.
  • In some embodiments, a camera calibration method includes:
  • when the working state of the electronic device 110 is that the face entered to unlock the screen fails to match the pre-stored face, acquiring the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting the first feature point set from the target infrared image and the second feature point set from the RGB image, and matching the first feature point set with the second feature point set; and obtaining the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
  • when the working state of the electronic device 110 is that an opening instruction of the infrared camera and the RGB camera is received, performing the same acquisition, extraction, matching, and transformation steps;
  • when it is detected that the difference between the current temperature of the light transmitter and the initial temperature exceeds a temperature threshold, performing the same acquisition, extraction, matching, and transformation steps;
  • when it is detected a second preset number of consecutive times that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold, performing the same acquisition, extraction, matching, and transformation steps.
  • FIG. 3 is a flowchart of a camera calibration method in an embodiment. As shown in Figure 3, the camera calibration method includes:
  • Step 302 When it is detected that the image sharpness is less than the sharpness threshold, obtain a target infrared image and an RGB image obtained by the infrared camera and the RGB camera and shooting the same scene.
  • the electronic device 110 may periodically detect images captured by its own camera, and when it is detected that the sharpness of the image is less than the sharpness threshold, it indicates that it is necessary to calibrate its own camera.
  • the sharpness threshold can be set as needed, such as 80%, 90%, etc.
• The Brenner gradient function can be used, which calculates the square of the gray-level difference between neighboring pixels to obtain the image sharpness.
• The Sobel operator of the Tenengrad gradient function can be used to extract the horizontal and vertical gradient values, and the image sharpness is then obtained based on the Tenengrad gradient function. The variance function, energy gradient function, Vollath function, entropy function, EAV point-sharpness algorithm function, Laplacian gradient function, SMD gray-variance function, and so on can also be used to detect image sharpness.
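As an illustrative sketch (not taken from the disclosure; the function name is an assumption, and the common Brenner formulation that differences pixels two columns apart is used), the Brenner sharpness measure can be computed as:

```python
import numpy as np

def brenner_sharpness(gray):
    """Brenner gradient sharpness: sum of squared gray-level differences
    between each pixel and the pixel two columns to its right."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]
    return float(np.sum(diff ** 2))
```

A perfectly flat image scores 0, and images with strong edges score higher, so a threshold on this value can serve as the sharpness check described in step 302.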
• Step 304: Extract feature points with equal spacing in the target infrared image to obtain a first feature point set, extract feature points with equal spacing in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set.
• The target infrared image is detected with the Scale-Invariant Feature Transform (SIFT) to obtain feature points, and feature points with equal spacing are selected and added to the first feature point set. SIFT detection is likewise applied to the RGB image to obtain feature points, and feature points with equal spacing are selected and added to the second feature point set.
• SIFT is a descriptor used in image processing. The descriptor is scale-invariant, and key points can be detected in the image with it.
  • the feature points in the first feature point set and the second feature point set are matched by SIFT.
  • Equal spacing means that the spacing between adjacent feature points is equal.
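One hypothetical way to realize the equal-spacing selection (the grid-cell strategy and function name are illustrative assumptions, not taken from the disclosure) is to keep at most one detected feature point per cell of a regular grid:

```python
def select_equally_spaced(points, spacing):
    """Keep at most one (x, y) feature point per grid cell of side `spacing`,
    so the kept points are roughly equally spaced across the image."""
    kept = {}
    for x, y in points:
        cell = (int(x // spacing), int(y // spacing))
        if cell not in kept:  # the first point seen in this cell wins
            kept[cell] = (x, y)
    return list(kept.values())
```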
• SIFT feature detection includes: searching image positions over all scales; determining position and scale by fitting at each candidate location; assigning one or more orientations to each keypoint based on the local image gradient directions; and measuring the local image gradients at the selected scale in the region around each keypoint.
• SIFT feature matching includes: extracting feature vectors that are independent of scaling, rotation, and brightness changes from multiple images to obtain SIFT feature vectors, and using the Euclidean distance between SIFT feature vectors to determine the similarity of key points in the two images. The smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is less than a set threshold, the match can be judged successful.
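The Euclidean-distance matching rule above can be sketched as follows (an illustrative implementation; the function name and threshold parameter are assumptions):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b
    by Euclidean distance; a pair counts as a match only if the distance
    is below max_dist."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

In practice the descriptors would be the 128-dimensional SIFT vectors computed for the infrared and RGB images; the same nearest-neighbour-under-threshold logic applies regardless of dimension.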
• Step 306: Obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
• Let P_ir be the spatial coordinate of a point in the infrared camera coordinate system, p_ir be the projected coordinate of that point on the image plane (the x and y units are pixels, z is equal to the depth value, and its unit is mm), and H_ir be the internal parameter matrix of the depth (infrared) camera; then p_ir = H_ir · P_ir.
• Let P_rgb be the spatial coordinate of the same point in the RGB camera coordinate system, p_rgb be the projected coordinate of the point on the RGB image plane, and H_rgb be the internal parameter matrix of the RGB camera. Because the coordinate system of the infrared camera and the coordinate system of the RGB camera are different, they can be linked by a rotation-translation transformation, that is: P_rgb = R · P_ir + T, where R is the rotation matrix and T is the translation vector.
• H_rgb is then used to project P_rgb, obtaining the RGB pixel coordinates corresponding to the point: p_rgb = H_rgb · P_rgb.
• Note that both p_ir and p_rgb use homogeneous coordinates, so when constructing p_ir the original pixel coordinates (x, y) should be multiplied by the depth value, and the final RGB pixel coordinates are obtained by dividing p_rgb by its z component, that is, (x/z, y/z); the value of the z component is the distance from the point to the RGB camera (unit: mm).
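The projection chain just described (IR pixel → IR camera space → RGB camera space → RGB pixel) can be sketched as follows, under the stated homogeneous-coordinate conventions; the 3×3 intrinsic matrices and the function name are illustrative assumptions:

```python
import numpy as np

def map_ir_pixel_to_rgb(x, y, depth_mm, H_ir, R, T, H_rgb):
    """Map an infrared/depth pixel (x, y) with depth value depth_mm to RGB
    pixel coordinates via p_ir -> P_ir -> P_rgb -> p_rgb."""
    # Homogeneous IR pixel: (x, y) multiplied by the depth value, z = depth.
    p_ir = np.array([x * depth_mm, y * depth_mm, depth_mm], dtype=np.float64)
    P_ir = np.linalg.inv(H_ir) @ p_ir   # back-project into IR camera space
    P_rgb = R @ P_ir + T                # rotate/translate into RGB camera space
    p_rgb = H_rgb @ P_rgb               # project with the RGB intrinsics
    return p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]
```

With identity intrinsics and a zero transform, a pixel maps to itself, which is a convenient sanity check on the conventions.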
  • the external parameters of the infrared camera include the rotation matrix R ir and the translation vector T ir .
  • the external parameters of the RGB camera include the rotation matrix R rgb and the translation vector T rgb .
• The external parameters of a camera indicate how a point P in the world coordinate system is transformed into the camera coordinate system. The infrared camera and the RGB camera each apply their own transformation, with the following relations: P_ir = R_ir · P + T_ir and P_rgb = R_rgb · P + T_rgb. Eliminating P gives the rotation and translation linking the two cameras: R = R_rgb · R_ir⁻¹ and T = T_rgb − R · T_ir.
  • the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera includes a rotation matrix and a translation vector between the two.
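A minimal sketch (the function name is assumed) of deriving the infrared-to-RGB rotation and translation from each camera's external parameters, assuming the world-to-camera relations P_ir = R_ir · P + T_ir and P_rgb = R_rgb · P + T_rgb:

```python
import numpy as np

def ir_to_rgb_transform(R_ir, T_ir, R_rgb, T_rgb):
    """Return (R, T) such that P_rgb = R @ P_ir + T, derived by eliminating
    the world point P from the two extrinsic relations."""
    R = R_rgb @ np.linalg.inv(R_ir)
    T = T_rgb - R @ T_ir
    return R, T
```

For a rotation matrix, `np.linalg.inv(R_ir)` could equivalently be replaced by the transpose `R_ir.T`, which is cheaper and numerically safer.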
• The target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; feature points with equal spacing in the target infrared image are extracted to obtain a first feature point set, and feature points with equal spacing in the RGB image are extracted to obtain a second feature point set. The first feature point set and the second feature point set are matched, and the matched feature points are used to obtain the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera. Because equally spaced feature points are used, the matching is more accurate; the external parameters between the infrared camera and the RGB camera are calibrated, the conversion relationship between the two cameras is calibrated, and the sharpness of the image is improved.
• Acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes: when it is detected that the image sharpness is less than the sharpness threshold, detecting the working state of the electronic device 110; and when the working state of the electronic device 110 is a preset working state, acquiring a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The preset working state is a working state that is set in advance.
• The preset working state may include at least one of the following situations: an opening instruction for the infrared camera and the RGB camera is received; it is detected that the face entered to unlock the screen fails to match the pre-stored face; or it is detected that the number of times the face entered to unlock the screen has failed to match the pre-stored face exceeds the first preset number of times.
  • the start instruction of the infrared camera and the RGB camera may be triggered by starting a camera application.
• The electronic device 110 receives a user trigger of a camera application that starts the infrared camera and the RGB camera. After the cameras are started, the target infrared image and RGB image obtained by the infrared camera and the RGB camera capturing the same scene are acquired. Obtaining the transformation relationship between the coordinate systems of the infrared camera and the RGB camera when the cameras are turned on can ensure the accuracy of the images captured by the cameras subsequently.
  • the face unlock method can be used for face verification.
• The electronic device 110 can obtain the target infrared image and RGB image obtained by the infrared camera and the RGB camera capturing the same scene and perform a matching calculation to obtain the transformation relationship between the infrared camera and RGB camera coordinate systems.
• Calibrating the infrared camera and the RGB camera can improve the accuracy of the image and facilitate subsequent face unlocking.
• The electronic device 110 may obtain a target infrared image and an RGB image obtained by shooting the same scene with the infrared camera and the RGB camera, and perform a matching calculation to obtain the transformation relationship between the coordinate systems of the infrared camera and the RGB camera.
• Calculating the transformation relationship between the infrared camera and RGB camera coordinate systems in this case avoids calibrating the infrared camera and the RGB camera when a face matching failure is caused by other factors.
• Acquiring a target infrared image and an RGB image obtained by shooting the same scene with the infrared camera and the RGB camera includes: when the matching of the entered face with the pre-stored face fails, obtaining the target infrared image and RGB image obtained by the infrared camera and the RGB camera capturing the same scene; and when an opening instruction for the infrared camera and the RGB camera is received, obtaining the target infrared image and RGB image obtained by the infrared camera and the RGB camera capturing the same scene.
• The target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; feature points with equal spacing in the target infrared image are extracted to obtain the first feature point set, and feature points with equal spacing in the RGB image are extracted to obtain the second feature point set; the first feature point set and the second feature point set are matched, and the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.
• The target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; feature points with equal spacing in the target infrared image are extracted to obtain the first feature point set, and feature points with equal spacing in the RGB image are extracted to obtain the second feature point set; the first feature point set and the second feature point set are matched, and the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.
• Acquiring a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes:
• When the working state of the electronic device 110 is that an opening instruction for the infrared camera and the RGB camera is received, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; feature points with equal spacing in the target infrared image are extracted to obtain the first feature point set, and feature points with equal spacing in the RGB image are extracted to obtain the second feature point set; the first feature point set and the second feature point set are matched, and the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera is obtained according to the matched feature points.
• The camera calibration method shown in FIG. 3 further includes: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera photographing the same scene; extracting feature points with equal spacing in the target infrared image in the second operating environment to obtain a first feature point set, and extracting feature points with equal spacing in the RGB image in the first operating environment to obtain a second feature point set; transmitting the second feature point set to the second operating environment, and matching the first feature point set with the second feature point set in the second operating environment.
  • the security level of the second operating environment is higher than that of the first operating environment.
• the first operating environment may be an REE (Rich Execution Environment)
  • the second operating environment may be a TEE (Trusted Execution Environment).
  • TEE has a higher security level than REE. Matching feature points in the second operating environment improves security and ensures data security.
• The camera calibration method shown in FIG. 3 further includes: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera photographing the same scene; transmitting the RGB image to the second operating environment; extracting feature points with equal spacing in the target infrared image in the second operating environment to obtain a first feature point set, and extracting feature points with equal spacing in the RGB image to obtain a second feature point set; and matching the first feature point set with the second feature point set in the second operating environment, where the security level of the second operating environment is higher than that of the first operating environment. Feature points are extracted and matched in the second operating environment, which improves security and ensures data security.
• The camera calibration method shown in FIG. 3 further includes: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera photographing the same scene; extracting feature points with equal spacing in the target infrared image in the second operating environment to obtain a first feature point set, and transmitting the first feature point set to the first operating environment; extracting feature points with equal spacing in the RGB image in the first operating environment to obtain a second feature point set, and matching the first feature point set with the second feature point set in the first operating environment, where the security level of the second operating environment is higher than that of the first operating environment. Matching feature points in the first operating environment gives high data-processing efficiency.
  • FIG. 4 is a structural block diagram of a camera calibration apparatus 300 in an embodiment.
  • the camera calibration apparatus 300 includes an image acquisition module 310, a matching module 320, and a parameter determination module 330.
• The image acquisition module 310 is configured to acquire, when it is detected that the image sharpness is less than the sharpness threshold, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
  • the matching module 320 is configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in an RGB image to obtain a second feature point set, and match the first feature point set and the second feature point set.
  • the parameter determining module 330 is configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
• When the camera calibration device 300 of the present application detects that the sharpness of the image is less than the sharpness threshold, it acquires the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; a first feature point set is extracted from the target infrared image, and a second feature point set is extracted from the RGB image; the first feature point set and the second feature point set are matched, and the matched feature points are used to obtain the transformation relationship between the infrared camera coordinate system and the RGB camera coordinate system; the transformation relationship between the two cameras is thereby calibrated, improving the sharpness of the image.
  • the camera calibration apparatus 300 further includes a detection module 340.
  • the detection module 340 is configured to detect the working state of the electronic device 110 (shown in FIG. 1) when it is detected that the image sharpness is less than the sharpness threshold.
• The image acquisition module 310 is further configured to acquire, when the electronic device 110 is in a preset working state, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The preset working state includes at least one of the following situations: it is detected that the face entered to unlock the screen fails to match the pre-stored face; it is detected that the number of times the face entered to unlock the screen has failed to match the pre-stored face exceeds the first preset number of times; an instruction to turn on the infrared camera and the RGB camera is received; it is detected that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold; it is detected for a second preset number of consecutive times that the difference between the current temperature and the initial temperature of the light transmitter exceeds the temperature threshold; or a 3D processing request for a preview image is received.
• The image acquisition module 310 is further configured to acquire, when the working state of the electronic device 110 is that the face entered to unlock the screen is detected to fail to match the pre-stored face, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene;
  • the matching module 320 is further configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, The first feature point set and the second feature point set are matched;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
• The image acquisition module 310 is further configured to acquire, when it is further detected that the working state of the electronic device 110 is that a start instruction for the infrared camera and the RGB camera is received, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene;
  • the matching module 320 is further configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and obtain the first feature point set and the second feature point set Perform matching;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
• The image acquisition module 310 is further configured to acquire, when it is further detected that the working state of the electronic device 110 is that the difference between the current temperature of the light transmitter and the initial temperature exceeds the temperature threshold, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene;
  • the matching module 320 is further configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in an RGB image to obtain a second feature point set, and combine the first feature point set and the second feature The point set is used for matching;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
  • the image acquisition module 310 is further configured to detect that the working state of the electronic device is receiving a three-dimensional processing request for a preview image, and acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera to capture the same scene;
• The matching module 320 is also used to extract the feature points in the target infrared image to obtain the first feature point set and the feature points in the RGB image to obtain the second feature point set, and match the first feature point set with the second feature point set;
  • the parameter determination module 330 is also used to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
• Calibration can be performed in different working states of the electronic device 110, providing multiple occasions for calibration and ensuring timely calibration and the sharpness of captured images.
• The image acquisition module 310 may be further configured to acquire, when the working state of the electronic device 110 is that the number of times the face entered to unlock the screen has failed to match the pre-stored face is detected to exceed the first preset number of times, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The image acquisition module 310 is further configured to acquire, when the working state of the electronic device 110 is that the face entered to unlock the screen is detected to fail to match the pre-stored face, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene;
  • the matching module 320 is further configured to extract the feature points in the target infrared image to obtain a first feature point set and the feature points in the RGB image to obtain a second feature point set, and the first feature point set and The second feature point set is used for matching;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
  • the matching module 320 is further configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in an RGB image to obtain a second feature point set, and match the first feature point set and the second feature point set.
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
• The image acquisition module 310 is further configured to acquire, when it is further detected that the working state of the electronic device 110 is that, for a second preset number of consecutive times, the difference between the current temperature and the initial temperature of the light transmitter exceeds the temperature threshold, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; the matching module 320 is further configured to extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points;
• The image acquisition module 310 is further configured to acquire, when the working state of the electronic device 110 is that a three-dimensional processing request for a preview image is received, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; the matching module 320 is also used to extract feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set; the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
• Calibration can be performed in different working states of the electronic device 110, providing multiple occasions for calibration and ensuring timely calibration and the sharpness of captured images.
• The image acquisition module 310 may be configured to acquire, when the working state of the electronic device 110 is that the number of times the face entered to unlock the screen has failed to match the pre-stored face is detected to exceed the first preset number of times, the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract feature points in the target infrared image in the second operating environment to obtain a first feature point set, extract feature points in the RGB image in the first operating environment to obtain a second feature point set, and transmit the second feature point set to the second operating environment, in which the first feature point set and the second feature point set are matched.
  • the security level of the second operating environment is higher than that of the first operating environment.
• the first operating environment may be an REE (Rich Execution Environment)
  • the second operating environment may be a TEE (Trusted Execution Environment).
  • TEE has a higher security level than REE. Matching feature points in the second operating environment improves security and ensures data security.
• The image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to transmit the RGB image to the second operating environment, extract feature points in the target infrared image to obtain a first feature point set, and extract feature points in the RGB image to obtain a second feature point set, where the first feature point set and the second feature point set are matched in the second operating environment.
• The security level of the second operating environment is higher than that of the first operating environment. Feature points are extracted and matched in the second operating environment, which improves security and ensures data security.
• The image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting the scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract feature points in the target infrared image in the second operating environment to obtain a first feature point set and transmit the first feature point set to the first operating environment; feature points in the RGB image are extracted in the first operating environment to obtain a second feature point set, and the first feature point set and the second feature point set are matched in the first operating environment.
  • the security level of the second operating environment is higher than that of the first operating environment. Matching feature points in the first operating environment has high data processing efficiency.
• The image acquisition module 310 is configured to acquire, when the image sharpness is less than the sharpness threshold, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
  • the matching module 320 is configured to extract feature points with equal intervals in the target infrared image to obtain a first feature point set, and extract feature points with equal intervals in the RGB image to obtain a second feature point set, and combine the first feature point set with The second feature point set is matched.
  • the parameter determining module 330 is configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
• When the camera calibration device 300 of the present application detects that the preset conditions are met, it acquires the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, extracts feature points with equal spacing in the target infrared image to obtain a first feature point set, extracts feature points with equal spacing in the RGB image to obtain a second feature point set, matches the first feature point set with the second feature point set, and uses the matched feature points to obtain the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera. Matching with equally spaced feature points makes the matching more accurate; the external parameters between the infrared camera and the RGB camera are calibrated, the conversion relationship between the two cameras is calibrated, and the sharpness of the image is improved.
  • the detection module 340 in the camera calibration apparatus 300 is configured to detect the working state of the electronic device when it is detected that the image sharpness is less than the sharpness threshold.
• The image acquisition module 310 is further configured to acquire, when the electronic device is in a preset working state, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The preset working state includes at least one of the following situations: an opening instruction for the infrared camera and the RGB camera is received; it is detected that the face entered to unlock the screen fails to match the pre-stored face; or it is detected that the number of times the face entered to unlock the screen has failed to match the pre-stored face exceeds the first preset number of times.
• The image acquisition module 310 is further configured to acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene when it is detected that the face entered to unlock the screen fails to match the pre-stored face; and, when an instruction to turn on the infrared camera and the RGB camera is received, to acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.
• The image acquisition module 310 is configured to acquire, when it is detected that the working state of the electronic device 110 is that the face entered for unlocking the screen fails to match the pre-stored face, a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; the matching module 320 is configured to extract feature points with equal spacing in the target infrared image to obtain a first feature point set and feature points with equal spacing in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set;
  • the parameter determining module 330 is configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
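As a rough illustration of what "extracting feature points with equal intervals" and matching the two sets could look like, here is a minimal Python sketch. The grid sampling and nearest-neighbour matching are assumptions for illustration; the patent does not fix these algorithms.

```python
# Minimal sketch of equal-interval feature extraction and point-set matching.
# Grid sampling and nearest-neighbour matching are illustrative assumptions.
def grid_feature_points(width, height, spacing):
    """Sample candidate points at equal intervals across an image plane."""
    return [(x, y)
            for y in range(spacing // 2, height, spacing)
            for x in range(spacing // 2, width, spacing)]

def match_point_sets(first_set, second_set, max_dist=8.0):
    """Pair each point in the first set with its nearest neighbour in the
    second set, keeping only pairs closer than max_dist pixels."""
    matches = []
    for p in first_set:
        best, best_d = None, max_dist
        for q in second_set:
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = q, d
        if best is not None:
            matches.append((p, best))
    return matches
```

In practice the matching would compare local descriptors rather than raw coordinates, but the data flow (two point sets in, a list of matched pairs out) is the same.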
  • the image acquisition module 310 is further configured to acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the working state of the electronic device 110 is that a start instruction to the infrared camera and the RGB camera is received;
  • the matching module 320 is further configured to extract feature points with equal spacing in the target infrared image to obtain a first feature point set, extract feature points with equal spacing in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set;
  • the parameter determining module 330 is further configured to obtain a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;
  • the matching module 320 is further configured to extract feature points with equal intervals in the target infrared image in the second operating environment to obtain a first feature point set, extract feature points with equal intervals in the RGB image in the first operating environment to obtain a second feature point set, transmit the second feature point set to the second operating environment, and match the first feature point set with the second feature point set in the second operating environment.
  • the security level of the second operating environment is higher than that of the first operating environment.
  • the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to transmit the RGB image to the second operating environment, and, in the second operating environment, extract feature points with equal intervals in the target infrared image to obtain a first feature point set, extract feature points with equal intervals in the RGB image to obtain a second feature point set, and match the first feature point set with the second feature point set, wherein the security level of the second operating environment is higher than that of the first operating environment.
  • the image acquisition module 310 is further configured to acquire, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquire, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene;
  • the matching module 320 is further configured to extract feature points with equal intervals in the target infrared image in the second operating environment to obtain a first feature point set, transmit the first feature point set to the first operating environment, extract feature points with equal intervals in the RGB image in the first operating environment to obtain a second feature point set, and match the first feature point set with the second feature point set in the first operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.
  • an embodiment of the present application further provides an electronic device.
  • the electronic device includes a memory and a processor.
  • a computer program is stored in the memory.
  • when the computer program is executed by the processor, the processor is caused to perform the operations in the camera calibration method according to any one of the foregoing embodiments.
  • the processor may perform the following steps: when it is detected that the image sharpness is less than a sharpness threshold, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
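The embodiments trigger recalibration when image sharpness falls below a threshold, but do not specify the sharpness metric. A common stand-in is the mean squared gradient, sketched below; both the metric choice and the threshold value are assumptions, not part of the disclosure.

```python
# Simple sharpness score as the mean squared image gradient. This is an
# illustrative stand-in metric; the patent does not specify how sharpness
# is computed.
def image_sharpness(gray):
    """Mean squared horizontal/vertical gradient of a 2-D grayscale array
    (list of rows); larger values indicate a sharper image."""
    height, width = len(gray), len(gray[0])
    total, count = 0.0, 0
    for y in range(height - 1):
        for x in range(width - 1):
            gx = gray[y][x + 1] - gray[y][x]   # horizontal gradient
            gy = gray[y + 1][x] - gray[y][x]   # vertical gradient
            total += gx * gx + gy * gy
            count += 1
    return total / count

def needs_recalibration(gray, sharpness_threshold):
    """Gate for the recalibration flow: true when the score is below threshold."""
    return image_sharpness(gray) < sharpness_threshold
```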
  • the processor may further perform the following steps: when detecting that the image sharpness is less than the sharpness threshold, detecting the working state of the electronic device; when the working state of the electronic device is a preset working state, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the processor may perform the following steps: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; extracting feature points in the target infrared image in the second operating environment to obtain a first feature point set, and extracting feature points in the RGB image in the first operating environment to obtain a second feature point set; and transmitting the second feature point set to the second operating environment, in which the first feature point set and the second feature point set are matched.
  • the security level of the second operating environment is higher than that of the first operating environment.
  • the processor may perform the following steps: when it is detected that the image sharpness is less than a sharpness threshold, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points with equal spacing in the target infrared image to obtain a first feature point set, extracting feature points with equal spacing in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the processor may perform the following steps: when detecting that the image sharpness is less than the sharpness threshold, detecting the working state of the electronic device 110; when the working state of the electronic device 110 is a preset working state, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points with equal spacing in the target infrared image to obtain a first feature point set, extracting feature points with equal spacing in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the processor may perform the following steps: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; transmitting the RGB image to the second operating environment; extracting, in the second operating environment, feature points with equal spacing in the target infrared image to obtain a first feature point set, and extracting feature points with equal spacing in the RGB image to obtain a second feature point set; and matching the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.
  • an embodiment of the present application provides a non-volatile computer-readable storage medium.
  • a non-transitory computer-readable storage medium stores a computer program that, when executed by a processor, implements the operations in the camera calibration method described in any one of the above examples.
  • the following steps can be implemented: when it is detected that the image sharpness is less than the sharpness threshold, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera capturing the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the following steps may be implemented: when the image sharpness is detected to be less than the sharpness threshold, detecting the working state of the electronic device; when the working state of the electronic device is a preset working state, acquiring a target infrared image and an RGB image obtained by the infrared camera and the RGB camera capturing the same scene; extracting feature points in the target infrared image to obtain a first feature point set and feature points in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the following steps may be implemented: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; extracting feature points in the target infrared image in the second operating environment to obtain a first feature point set, and extracting feature points in the RGB image in the first operating environment to obtain a second feature point set; and transmitting the second feature point set to the second operating environment, in which the first feature point set and the second feature point set are matched.
  • the security level of the second operating environment is higher than that of the first operating environment.
  • the following steps may be implemented: when it is detected that the image sharpness is less than the sharpness threshold, obtaining a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points with equal spacing in the target infrared image to obtain a first feature point set, extracting feature points with equal spacing in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the following steps may be implemented: when detecting that the image sharpness is less than the sharpness threshold, detecting the working state of the electronic device 110; when the working state of the electronic device 110 is a preset working state, obtaining the target infrared image and RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points with equal spacing in the target infrared image to obtain a first feature point set, extracting feature points with equal spacing in the RGB image to obtain a second feature point set, and matching the first feature point set with the second feature point set; and obtaining a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the following steps may be implemented: acquiring, in the first operating environment, an RGB image obtained by the RGB camera shooting a scene, and acquiring, in the second operating environment, a target infrared image obtained by the infrared camera shooting the same scene; transmitting the RGB image to the second operating environment; extracting, in the second operating environment, feature points with equal spacing in the target infrared image to obtain a first feature point set, and extracting feature points with equal spacing in the RGB image to obtain a second feature point set; and matching the first feature point set with the second feature point set in the second operating environment, wherein the security level of the second operating environment is higher than that of the first operating environment.
  • FIG. 5 is a schematic diagram of an internal structure of an electronic device according to an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor.
  • the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program may be executed by a processor to implement the camera calibration methods provided by the foregoing embodiments.
  • the internal memory provides a cached running environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • each module in the camera calibration apparatus 300 provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or server.
  • the program module constituted by the computer program can be stored in the memory of the terminal or server.
  • the computer program is executed by a processor, the steps of the camera calibration method described in any one of the embodiments of the present application are implemented.
  • An embodiment of the present application further provides a computer program product containing instructions, which when executed on a computer, causes the computer to execute the camera calibration method described in any one of the embodiments of the present application.
  • FIG. 6 is a schematic diagram of the internal structure of the electronic device 50 in another embodiment.
  • the electronic device 50 may include a camera module 510, a second processor 520, and a first processor 530.
  • the second processor 520 may be a CPU (Central Processing Unit) module.
  • the first processor 530 may be an MCU (Microcontroller Unit) module or the like.
  • the first processor 530 is connected between the second processor 520 and the camera module 510.
  • the first processor 530 can control the laser camera 512, flood light 514, and laser light 518 in the camera module 510.
  • the second processor 520 can control an RGB (Red/Green/Blue) camera 516 in the camera module 510.
  • the electronic device 50 in FIG. 6 and the electronic device in FIG. 5 may be the same electronic device.
  • the functions of both the first processor 530 and the second processor 520 in FIG. 6 are the same as those of the processor in FIG. 5.
  • the camera module 510 includes a laser camera 512, a flood light 514, an RGB camera 516, and a laser light 518.
  • the above-mentioned laser camera 512 is an infrared camera, and is used to acquire an infrared image or a speckle image.
  • the flood light 514 is a surface light source that can emit infrared light; the laser light 518 is a point light source that can emit laser light and the emitted laser light can form a pattern.
  • when the flood light 514 is turned on, the laser camera 512 can acquire an infrared image according to the reflected light.
  • when the laser light 518 is turned on, the laser camera 512 can obtain a speckle image according to the reflected light.
  • the above speckle image is an image formed when the pattern of infrared laser light emitted by the laser light 518 is deformed after being reflected.
  • the second processor 520 may include a CPU core running in a TEE (Trusted Execution Environment) environment and a CPU core running in a REE (Rich Execution Environment) environment.
  • the TEE environment and the REE environment are operating modes of an ARM (Advanced RISC Machines) processor.
  • the TEE environment has a higher security level, and only one CPU core in the second processor 520 can run in the TEE environment at the same time.
  • the higher security level operation behavior in the electronic device 50 needs to be executed in the CPU core in the TEE environment, and the lower security level operation behavior can be executed in the CPU core in the REE environment.
  • the first processor 530 includes a PWM (Pulse Width Modulation) module 532, a SPI / I2C (Serial Peripheral Interface / Inter-Integrated Circuit) interface 534, and RAM (Random Access Memory) module 536 and depth engine 538.
  • the PWM module 532 can transmit a pulse to the camera module 510 and control the flood light 514 or the laser light 518 to be turned on, so that the laser camera 512 can acquire an infrared image or a speckle image.
  • the SPI / I2C interface 534 is configured to receive an image acquisition instruction sent by the second processor 520.
  • the depth engine 538 may process the speckle image to obtain a depth disparity map.
  • the RAM module 536 may store the infrared image and speckle image acquired by the laser camera 512, as well as the images obtained after the depth engine 538 processes the infrared image or the speckle image.
  • the image acquisition instruction may be sent to the first processor 530 through the CPU core running in the TEE environment.
  • the first processor 530 can transmit a pulse through the PWM module 532 to control the flood light 514 in the camera module 510 to turn on and collect an infrared image through the laser camera 512, and to control the laser light 518 in the camera module 510 to turn on and collect a speckle image through the laser camera 512.
  • the camera module 510 may send the acquired infrared image and speckle image to the first processor 530.
  • the first processor 530 may process the received infrared image to obtain an infrared disparity map; and process the received speckle image to obtain a speckle disparity map or a depth disparity map.
  • the processing performed by the first processor 530 on the infrared image and the speckle image refers to correcting the infrared image or the speckle image to remove the influence of the internal and external parameters in the camera module 510 on the image.
  • the first processor 530 may be set to different modes, and images output by different modes are different. When the first processor 530 is set to the speckle pattern mode, the first processor 530 processes the speckle image to obtain a speckle disparity map, and obtain a target speckle image according to the speckle disparity map.
  • when the first processor 530 is set to the depth map mode, the first processor 530 processes the speckle image to obtain a depth disparity map, and obtains a depth image according to the depth disparity map.
  • the depth image refers to an image with depth information.
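For context, depth and disparity are related through the standard pinhole-stereo formula Z = f·B/d. The sketch below only illustrates this textbook relationship; the actual algorithm inside the depth engine 538 is not disclosed, and the names and units here are assumptions.

```python
# Illustrative conversion from a disparity map to a depth image using the
# pinhole-stereo relation Z = focal * baseline / disparity. Not the depth
# engine's actual (undisclosed) algorithm.
def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Depth (mm) for one pixel: focal length (px) times baseline (mm)
    divided by disparity (px); non-positive disparity maps to infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_mm / disparity_px

def depth_image(depth_disparity_map, baseline_mm, focal_px):
    """Apply the per-pixel conversion over a whole disparity map."""
    return [[depth_from_disparity(d, baseline_mm, focal_px) for d in row]
            for row in depth_disparity_map]
```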
  • the first processor 530 may send the infrared disparity map and speckle disparity map to the second processor 520, and the first processor 530 may also send the infrared disparity map and depth disparity map to the second processor 520.
  • the second processor 520 may acquire an infrared image according to the infrared disparity map and a depth image according to the depth disparity map. Further, the second processor 520 may perform face recognition, face matching, living body detection, and acquire depth information of the detected human face according to the infrared image and the depth image.
  • the communication between the first processor 530 and the second processor 520 is through a fixed security interface to ensure the security of the transmitted data.
  • the data sent by the second processor 520 to the first processor 530 is through SECURE SPI / I2C 540
  • the data sent by the first processor 530 to the second processor 520 is through SECURE MIPI (Mobile Industry Processor Interface) 550.
  • an infrared image, a speckle image, an infrared disparity image, a speckle disparity image, a depth disparity image, a depth image, and the like can all be transmitted to the second processor 520 through SECURE MIPI 550.
  • the first processor 530 may also obtain a target infrared image according to the infrared disparity map, and obtain a depth image by calculating the depth disparity map, and then send the target infrared image and depth image to the second processor 520.
  • the first processor 530 may perform face recognition, face matching, living body detection, and acquiring depth information of the detected human face according to the target infrared image and depth image.
  • the sending of the image by the first processor 530 to the second processor 520 means that the first processor 530 sends the image to a CPU core in the TEE environment in the second processor 520.
  • the second processor 520 detects whether the sharpness of the image is less than the sharpness threshold, and when it is detected that the sharpness of the image is less than the sharpness threshold, acquires a target infrared image and an RGB image obtained by the infrared camera (that is, the laser camera 512) and the RGB camera 516 shooting the same scene. Subsequently, the second processor 520 extracts feature points with equal intervals in the target infrared image to obtain a first feature point set, extracts feature points with equal intervals in the RGB image to obtain a second feature point set, and matches the first feature point set with the second feature point set.
  • the second processor 520 obtains a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
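One common way to turn matched feature points into a transformation between the two camera coordinate systems is a least-squares affine fit. The affine model and the tiny normal-equation solver below are illustrative assumptions; the patent does not specify the form of the transformation.

```python
# Least-squares affine fit from matched (IR point, RGB point) pairs.
# Illustrative only: the transformation's true form is not disclosed.
def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                for c in range(col, 4):
                    a[r][c] -= f * a[col][c]
    return [a[i][3] / a[i][i] for i in range(3)]

def estimate_affine(ir_points, rgb_points):
    """Least-squares affine map (x, y) -> (u, v) from matched pairs:
    u = ax[0]*x + ax[1]*y + ax[2],  v = ay[0]*x + ay[1]*y + ay[2]."""
    ata = [[0.0] * 3 for _ in range(3)]       # normal-equation matrix A^T A
    atu, atv = [0.0] * 3, [0.0] * 3           # right-hand sides A^T u, A^T v
    for (x, y), (u, v) in zip(ir_points, rgb_points):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atu[i] += row[i] * u
            atv[i] += row[i] * v
    return solve3(ata, atu), solve3(ata, atv)
```

With at least three non-collinear matched pairs this recovers, for example, a pure translation between the two images exactly; a full calibration would typically use many more pairs plus outlier rejection.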
  • the RGB image is captured by the second processor 520 controlling the RGB camera 516.
  • the second processor 520 may send an instruction to acquire an infrared image to the first processor 530.
  • the first processor 530 controls the laser camera 512 (i.e., the infrared camera) to capture an infrared image, and the first processor 530 sends the infrared image to the second processor 520 through SECURE MIPI 550.
  • the second processor 520 detects the working state of the electronic device 50 when detecting that the image sharpness is less than the sharpness threshold; when the working state of the electronic device 50 is a preset working state, it acquires a target infrared image and an RGB image obtained by the infrared camera and the RGB camera 516 shooting the same scene. Subsequently, the second processor 520 extracts feature points with equal intervals in the target infrared image to obtain a first feature point set, extracts feature points with equal intervals in the RGB image to obtain a second feature point set, and matches the first feature point set with the second feature point set. Finally, the second processor 520 obtains a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the target infrared image may be acquired in a TEE environment, that is, after the second processor 520 receives the infrared image, the infrared image is stored in the TEE.
  • the RGB image may be acquired in the REE environment, that is, after the second processor 520 receives the RGB image sent by the RGB camera, the RGB image is stored in the REE.
  • the second processor 520 sends the RGB image to the TEE environment, extracts the first feature point set in the target infrared image and the second feature point set in the RGB image in the TEE environment, and matches the first feature point set with the second feature point set; the transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera is then obtained according to the matched feature points.
  • the target infrared image may be acquired in a TEE environment, that is, after the second processor 520 receives the infrared image, the infrared image is stored in the TEE.
  • the RGB image may be acquired in the REE environment, that is, after the second processor 520 receives the RGB image sent by the RGB camera, the RGB image is stored in the REE.
  • the second processor 520 may extract the first feature point set in the target infrared image under the TEE environment, send the first feature point set to the REE, then extract the second feature point set in the RGB image under the REE environment, and match the first feature point set with the second feature point set.
  • the second processor 520 obtains a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the second processor 520 detects whether the sharpness of the image is less than the sharpness threshold, and when detecting that the sharpness of the image is less than the sharpness threshold, acquires a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene. Subsequently, the second processor 520 extracts feature points with equal intervals in the target infrared image to obtain a first feature point set, extracts feature points with equal intervals in the RGB image to obtain a second feature point set, and matches the first feature point set with the second feature point set. Finally, the second processor 520 obtains a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the RGB image is captured by the second processor 520 controlling the RGB camera 516.
  • the second processor 520 may send an instruction to acquire an infrared image to the first processor 530.
  • the first processor 530 controls the laser camera 512 (i.e., the infrared camera) to capture an infrared image, and the first processor 530 sends the infrared image to the second processor 520 through SECURE MIPI 550.
  • the second processor 520 detects the working state of the electronic device 50 when detecting that the image sharpness is less than the sharpness threshold; when the working state of the electronic device 50 is a preset working state, it acquires a target infrared image and an RGB image obtained by the infrared camera and the RGB camera 516 shooting the same scene.
  • the second processor 520 extracts feature points with equal intervals in the target infrared image to obtain a first feature point set, extracts feature points with equal intervals in the RGB image to obtain a second feature point set, and matches the first feature point set with the second feature point set.
  • the second processor 520 obtains a transformation relationship between the coordinate system of the infrared camera and the coordinate system of the RGB camera according to the matched feature points.
  • the RGB image obtained by the RGB camera shooting a scene can be acquired in the REE, and the target infrared image obtained by the infrared camera shooting the same scene can be acquired in the TEE; that is, after the second processor 520 receives the RGB image sent by the RGB camera, the RGB image is stored in the REE, and after receiving the infrared image, the second processor 520 stores the infrared image in the TEE. Subsequently, the second processor 520 extracts feature points with equal intervals in the target infrared image in the TEE to obtain a first feature point set, and extracts feature points with equal intervals in the RGB image in the REE to obtain a second feature point set. The second processor 520 then transmits the second feature point set to the TEE, and matches the first feature point set with the second feature point set in the TEE. Among them, the security level of the TEE is higher than that of the REE.
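The data flow of this variant (RGB image in the REE, infrared image in the TEE, only the second feature point set crossing over) can be modelled schematically. The dict-based "environments" below are purely illustrative and do not model the secure transport interface between the two environments.

```python
# Schematic model of the REE/TEE split described above. The two dicts stand
# in for the two operating environments; `extract` and `match` are whatever
# feature-extraction and matching routines are used.
def cross_environment_match(rgb_image, ir_image, extract, match):
    ree = {"rgb": rgb_image}                 # RGB image stored in the REE
    tee = {"ir": ir_image}                   # infrared image stored in the TEE
    tee["first_set"] = extract(tee["ir"])    # first feature point set, in TEE
    ree["second_set"] = extract(ree["rgb"])  # second feature point set, in REE
    tee["second_set"] = ree["second_set"]    # transmit second set into the TEE
    return match(tee["first_set"], tee["second_set"])  # match inside the TEE
```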
  • the RGB image obtained by the RGB camera shooting a scene can be acquired in the REE, and the target infrared image obtained by the infrared camera shooting the same scene can be acquired in the TEE; that is, after the second processor 520 receives the RGB image sent by the RGB camera, the RGB image is stored in the REE, and after receiving the infrared image, the second processor 520 stores the infrared image in the TEE. Subsequently, the second processor 520 transmits the RGB image to the TEE, extracts feature points with equal intervals in the target infrared image to obtain a first feature point set, and extracts feature points with equal intervals in the RGB image to obtain a second feature point set. Finally, the second processor 520 matches the first feature point set with the second feature point set in the TEE.
  • the RGB image obtained by the RGB camera shooting a scene can be acquired in the REE, and the target infrared image obtained by the infrared camera shooting the same scene can be acquired in the TEE; that is, after the second processor 520 receives the RGB image sent by the RGB camera, the RGB image is stored in the REE, and after receiving the infrared image, the second processor 520 stores the infrared image in the TEE. Subsequently, the second processor 520 extracts feature points with equal intervals in the target infrared image in the TEE to obtain a first feature point set, and transmits the first feature point set to the REE. The second processor 520 then extracts feature points with equal intervals in the RGB image in the REE to obtain a second feature point set, and matches the first feature point set with the second feature point set in the REE.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by using hardware and / or software components, and may include various processors that define an ISP (Image Signal Processing) pipeline.
  • FIG. 7 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 7, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes a first ISP processor 630, a second ISP processor 640, a control logic 650, and an image memory 660.
  • the first camera 610 includes one or more first lenses 612 and a first image sensor 614.
  • the first camera 610 may be an infrared camera.
  • the first image sensor 614 may include an infrared filter.
  • the first image sensor 614 can acquire the light intensity and wavelength information captured by each imaging pixel in the first image sensor 614 and provide a set of image data that can be processed by the first ISP processor 630; the set of image data is used to generate an infrared image.
  • the second camera 620 includes one or more second lenses 622 and a second image sensor 624.
  • the second camera may be an RGB camera.
  • the second image sensor 624 may include a color filter array (such as a Bayer filter).
  • the second image sensor 624 can acquire the light intensity and wavelength information captured by each imaging pixel in the second image sensor 624 and provide a set of image data that can be processed by the second ISP processor 640; the set of image data is used to generate an RGB image.
  • the first ISP processor 630 may be the first processor 530 in FIG. 6, and the second ISP processor 640 may be the second processor 520 in FIG. 6.
  • the image memory 660 includes two independent memories, one memory is located in the first ISP processor 630 (the memory may be the random access memory 536 in FIG. 6 at this time), and the other memory is located in the second ISP processor 640 in.
  • the control logic 650 also includes two independent control logics. One control logic is located in the first ISP processor 630 and is used to control the first camera 610 to obtain the first image (ie, infrared image). The other control logic is located in The second ISP processor 640 is configured to control the second camera 620 to obtain a second image (that is, an RGB image).
  • the first image collected by the first camera 610 is transmitted to the first ISP processor 630 for processing.
  • statistical data of the first image (such as image brightness, image contrast, etc.) can be sent to the control logic 650, and the control logic 650 can determine the control parameters of the first camera 610 from the statistical data, so that the first camera 610 can perform operations such as autofocus and auto-exposure according to the control parameters.
  • the first image may be stored in the image memory 660 after being processed by the first ISP processor 630. For example, the received infrared image is processed to obtain an infrared disparity map.
  • the first ISP processor 630 may also read and process the images stored in the image memory 660.
  • the first ISP processor 630 processes image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 630 may perform one or more image processing operations on the image data and collect statistical information on the image data.
  • image processing operations can calculate sharpness at the same or different bit depths.
  • the second image collected by the second camera 620 is transmitted to the second ISP processor 640 for processing.
  • statistical data of the second image (such as image brightness, image contrast, image color, etc.) is sent to the control logic 650.
  • the control logic 650 can determine the control parameters of the second camera 620 according to the statistical data, so that the second camera 620 can perform operations such as autofocus and automatic exposure according to the control parameters .
  • the statistical data may include auto exposure, auto white balance, auto focus, flicker detection, black level compensation, second lens 622 shading correction, etc.
  • control parameters may include gain, integration time for exposure control, image stabilization parameters, flash control parameters, first lens 612 control parameters (for example, focal length for focusing or zooming), or a combination of these parameters.
  • the second image may be stored in the image memory 660 after being processed by the second ISP processor 640, and the second ISP processor 640 may also read the image stored in the image memory 660 for processing.
  • the second image may be output to the display 670 for display after being processed by the second ISP processor 640, for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit).
  • the first ISP processor 630 receives the image data acquired by the first camera 610 and processes the image data to form a target infrared image.
  • the first ISP processor 630 sends the target infrared image to the second ISP processor 640, and the second ISP processor 640 receives the image data acquired by the second camera 620 and processes the image data to form an RGB image.
  • the second ISP processor 640 extracts feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matches the first feature point set against the second feature point set; the transformation between the coordinate system of the infrared camera (i.e., the first camera 610) and the coordinate system of the RGB camera (i.e., the second camera 620) is obtained from the matched feature points.
  • the first ISP processor 630 receives the image data acquired by the first camera 610 and processes the image data to form a target infrared image.
  • the first ISP processor 630 sends the target infrared image to the second ISP processor 640, and the second ISP processor 640 receives the image data acquired by the second camera 620 and processes the image data to form an RGB image.
  • the second ISP processor 640 extracts equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, matches the first feature point set against the second feature point set, and then obtains the transformation between the coordinate system of the infrared camera (i.e., the first camera 610) and the coordinate system of the RGB camera (i.e., the second camera 620) from the matched feature points.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

A camera calibration method, a camera calibration apparatus (300), an electronic device, and a computer-readable storage medium. The camera calibration method includes: (202) when the detected image sharpness is below a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; (204) extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the first feature point set against the second feature point set; (206) obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

Description

Camera calibration method and apparatus, electronic device, and computer-readable storage medium

Priority information

This application claims priority to and the benefit of Chinese patent applications No. 201810867079.6 and No. 201810866119.5, filed with the China National Intellectual Property Administration on August 1, 2018, the entire contents of which are incorporated herein by reference.

Technical field

This application relates to the field of imaging technology, and in particular to a camera calibration method, a camera calibration apparatus, an electronic device, and a computer-readable storage medium.

Background

With the development of electronic devices and imaging technology, more and more users capture images with the cameras of electronic devices. The parameters of infrared cameras and RGB cameras need to be calibrated before the devices leave the factory. When temperature changes, drops of the electronic device, or similar events deform the camera module, the sharpness of captured images suffers.

Summary

Embodiments of the present application provide a camera calibration method, a camera calibration apparatus, an electronic device, and a computer-readable storage medium that can improve the sharpness of captured images.

A camera calibration method includes: when the detected image sharpness is below a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the first feature point set against the second feature point set; and obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

A camera calibration apparatus includes an image acquisition module, a matching module, and a parameter determination module. The image acquisition module is configured to acquire, when the detected image sharpness is below a sharpness threshold, a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene. The matching module is configured to extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and to match the first feature point set against the second feature point set. The parameter determination module is configured to obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

An electronic device includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: when the detected image sharpness is below a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the first feature point set against the second feature point set; and obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

A non-volatile computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps: when the detected image sharpness is below a sharpness threshold, acquiring a target infrared image and an RGB image obtained by an infrared camera and an RGB camera shooting the same scene; extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the first feature point set against the second feature point set; and obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an application environment of the camera calibration method in some embodiments of the present application.

FIG. 2 and FIG. 3 are flowcharts of camera calibration methods in some embodiments of the present application.

FIG. 4 is a structural block diagram of a camera calibration apparatus in some embodiments of the present application.

FIG. 5 is a schematic diagram of the internal structure of an electronic device in some embodiments of the present application.

FIG. 6 is a schematic diagram of another application environment of the camera calibration method in some embodiments of the present application.

FIG. 7 is a schematic diagram of an image processing circuit in some embodiments of the present application.
Detailed description

To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present application and do not limit it.

It can be understood that the terms "first", "second", and the like used in the present application may describe various elements herein, but these elements are not limited by these terms; the terms only distinguish one element from another. For example, without departing from the scope of the present application, a first calibration image may be called a second calibration image, and similarly a second calibration image may be called a first calibration image. The first calibration image and the second calibration image are both calibration images, but they are not the same calibration image.

FIG. 1 is a schematic diagram of the application environment of the camera calibration method in one embodiment. As shown in FIG. 1, the application environment includes an electronic device 110 and a scene 120. The electronic device 110 includes an infrared (Infrared Radiation) camera and an RGB (red, green, blue) camera. The electronic device 110 can shoot the scene 120 with the infrared camera and the RGB camera to obtain a target infrared image and an RGB image, match feature points between the target infrared image and the RGB image, and then compute the transformation between the coordinate system of the infrared camera and that of the RGB camera. The electronic device 110 may be a smartphone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flowchart of the camera calibration method in one embodiment. As shown in FIG. 2, a camera calibration method includes:

Step 202: when the detected image sharpness is below a sharpness threshold, acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.

Specifically, the electronic device 110 may periodically check the images captured by its cameras; when the detected sharpness of an image is below the sharpness threshold, the cameras need to be calibrated. The sharpness threshold can be set as required, for example 80% or 90%. Image sharpness may be computed with the Brenner gradient function, which sums the squared gray-level differences of nearby pixels; or with the Tenengrad gradient function, which applies the Sobel operator to extract horizontal and vertical gradient values and evaluates sharpness from them; or with the variance function, the energy gradient function, the Vollath function, the entropy function, the EAV point-sharpness function, the Laplacian gradient function, the SMD (gray-level variance) function, and so on.
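As an illustration of the sharpness check described above, the following sketch implements the Brenner gradient (sum of squared gray-level differences between pixels two columns apart) and a simple trigger against a reference score. The function names, the reference score, and the 0.8 ratio are assumptions for illustration, not part of the original method:

```python
import numpy as np

def brenner_sharpness(gray: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels two columns apart."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]
    return float(np.sum(diff ** 2))

def needs_calibration(gray: np.ndarray, reference_score: float,
                      threshold_ratio: float = 0.8) -> bool:
    """Flag calibration when sharpness falls below a fraction of a reference score
    (the ratio plays the role of the patent's configurable sharpness threshold)."""
    return brenner_sharpness(gray) < threshold_ratio * reference_score
```

A flat (blurred) image scores zero, while any image with horizontal structure scores higher, which is why the score can serve as a degradation signal between calibrations.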
Step 204: extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and match the first feature point set against the second feature point set.

Specifically, the first feature point set is obtained by applying scale-invariant feature transform (SIFT) detection to the target infrared image, and the second feature point set by applying SIFT detection to the RGB image. SIFT is a descriptor used in image processing. The descriptor is scale-invariant and can detect key points in an image. The feature points in the first feature point set and the second feature point set are matched via SIFT.

SIFT feature detection includes: searching image locations over all scales; determining position and scale at each candidate location by fitting; assigning one or more orientations to each key point location based on the local gradient direction of the image; and measuring the local image gradient at the selected scale in the region around each key point.

SIFT feature matching includes: extracting, from multiple images, feature vectors that are invariant to scaling, rotation, and brightness changes to obtain SIFT feature vectors; and using the Euclidean distance between SIFT feature vectors to judge the similarity of key points in two images. The smaller the Euclidean distance, the higher the similarity. When the Euclidean distance is below a set threshold, the match is judged successful.
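The Euclidean-distance matching rule described above can be sketched as follows. This is a minimal nearest-neighbour matcher over descriptor vectors (the actual SIFT descriptors are 128-dimensional; the function name and threshold are illustrative assumptions):

```python
import numpy as np

def match_descriptors(desc1: np.ndarray, desc2: np.ndarray, max_distance: float):
    """For each descriptor in desc1, find the nearest descriptor in desc2 by
    Euclidean distance; keep the pair only when the distance is below the
    threshold (smaller distance = higher similarity)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        j = int(np.argmin(dists))                  # nearest neighbour
        if dists[j] < max_distance:                # accept only close matches
            matches.append((i, j))
    return matches
```

In practice a ratio test between the best and second-best distance is often added to reject ambiguous matches, but the patent's criterion is the absolute distance threshold shown here.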
Step 206: obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

Let P_ir be the spatial coordinates of a point in the infrared camera's coordinate system, p_ir the projection of the point on the image plane (x and y in pixels; z equal to the depth value, in millimeters), and H_ir the intrinsic matrix of the depth camera. From the pinhole imaging model they satisfy:

p_ir = H_ir P_ir  formula (1)

Further, let P_rgb be the spatial coordinates of the same point in the RGB camera's coordinate system, p_rgb the projection of the point on the RGB image plane, and H_rgb the intrinsic matrix of the RGB camera. Since the coordinates of the infrared camera differ from those of the RGB camera, the two are related by a rotation-translation transform:

P_rgb = R P_ir + T  formula (2)

where R is the rotation matrix and T is the translation vector. Finally, projecting P_rgb with H_rgb gives the RGB coordinates corresponding to the point:

p_rgb = H_rgb P_rgb  formula (3)

Note that both p_ir and p_rgb use homogeneous coordinates: when constructing p_ir, the original pixel coordinates (x, y) should be multiplied by the depth value, and the final RGB pixel coordinates are obtained by dividing p_rgb by its z component, i.e. (x/z, y/z); the value of the z component is the distance from the point to the RGB camera (in millimeters).
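The pipeline of formulas (1)-(3), including the homogeneous-coordinate handling just described, can be sketched as a single function. The matrices here are placeholders (identity intrinsics in the test); real values would come from calibration:

```python
import numpy as np

def ir_pixel_to_rgb_pixel(x, y, depth_mm, H_ir, H_rgb, R, T):
    """Map a depth-camera pixel (x, y) with depth z (mm) to RGB pixel coordinates:
    p_ir = H_ir P_ir, then P_rgb = R P_ir + T, then p_rgb = H_rgb P_rgb."""
    p_ir = np.array([x * depth_mm, y * depth_mm, depth_mm], dtype=float)  # homogeneous, scaled by depth
    P_ir = np.linalg.inv(H_ir) @ p_ir   # back-project into IR camera space
    P_rgb = R @ P_ir + T                # rotate/translate into RGB camera space
    p_rgb = H_rgb @ P_rgb               # project with the RGB intrinsics
    return p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]  # divide out the z component
```

With identity intrinsics and an identity transform the pixel maps to itself, which is a convenient sanity check on the coordinate conventions.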
The extrinsic parameters of the infrared camera comprise a rotation matrix R_ir and a translation vector T_ir, and those of the RGB camera a rotation matrix R_rgb and a translation vector T_rgb. A camera's extrinsic parameters transform a point P in the world coordinate system into the camera's coordinate system. Applying the transforms of the infrared camera and the RGB camera respectively gives:

P_ir = R_ir P + T_ir,  P_rgb = R_rgb P + T_rgb  formula (4)

Rearranging formula (4) yields:

P = R_ir⁻¹ (P_ir − T_ir)  formula (5)

Combining this with formula (2) gives:

P_rgb = R_rgb R_ir⁻¹ P_ir + (T_rgb − R_rgb R_ir⁻¹ T_ir), i.e. R = R_rgb R_ir⁻¹ and T = T_rgb − R_rgb R_ir⁻¹ T_ir  formula (6)

The transformation between the coordinate system of the infrared camera and that of the RGB camera comprises the rotation matrix and the translation vector between the two.
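The relative rotation and translation R = R_rgb R_ir⁻¹, T = T_rgb − R_rgb R_ir⁻¹ T_ir derived above translate directly into code. The function name is an illustrative assumption; the test verifies the derivation by pushing a world point through both cameras:

```python
import numpy as np

def relative_extrinsics(R_ir, T_ir, R_rgb, T_rgb):
    """Rotation and translation taking IR-camera coordinates to RGB-camera
    coordinates: R = R_rgb @ R_ir^{-1}, T = T_rgb - R @ T_ir."""
    R = R_rgb @ np.linalg.inv(R_ir)
    T = T_rgb - R @ T_ir
    return R, T
```

For a proper rotation matrix, `np.linalg.inv(R_ir)` equals `R_ir.T`, so the transpose is a cheaper and numerically safer choice in production code.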
With the above camera calibration method, when the preset condition is detected, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; a first feature point set is extracted from the target infrared image and a second feature point set from the RGB image; the two sets are matched; and the matched feature points are used to compute the transformation between the coordinate system of the infrared camera and that of the RGB camera. This calibrates the extrinsic parameters between the infrared camera and the RGB camera, corrects the transformation between the two cameras, and improves image sharpness.
In one embodiment, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the detected image sharpness is below the sharpness threshold includes: when the detected image sharpness is below the sharpness threshold, detecting the working state of the electronic device 110; and when the working state of the electronic device 110 is a preset working state, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene.

The preset working state is a working state set in advance. It may include at least one of the following: a face entered to unlock the screen fails to match the pre-stored face; the number of times a face entered to unlock the screen fails to match the pre-stored face exceeds a first preset number; an instruction to turn on the infrared camera and the RGB camera is received; the difference between the current temperature of the light emitter and its initial temperature exceeds a temperature threshold; the difference between the current temperature of the light emitter and its initial temperature is detected to exceed the temperature threshold a second preset number of consecutive times; or a three-dimensional processing request for a preview image is received.

Specifically, the instruction to turn on the infrared camera and the RGB camera may be triggered by launching the camera application. When the electronic device 110 receives the user's trigger of the camera application, it starts the infrared camera and the RGB camera. After the cameras start, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. Computing the transformation between the coordinate systems of the infrared camera and the RGB camera when the cameras are turned on ensures the accuracy of the images the cameras capture afterwards.

The light emitter may be a laser lamp or the like. The initial temperature is the temperature of the light emitter at the last calibration. The temperature of the light emitter is monitored with a temperature sensor; when the difference between its current temperature and the initial temperature exceeds the temperature threshold, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. Through temperature monitoring, the transformation between the infrared camera and the RGB camera is recalibrated when the temperature change exceeds the threshold, which guarantees the accuracy of the captured images.

The second preset number can be set as required, e.g. 2, 3, or 4 times. The temperature of the light emitter is monitored with the temperature sensor; when the difference between the current temperature and the initial temperature is detected to exceed the temperature threshold the second preset number of consecutive times, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired. Requiring a set number of detections lowers the calibration frequency while still ensuring the accuracy of the captured images, and also reduces the number of calibrations and saves power.
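The consecutive-detection trigger just described can be sketched as a small state machine. The class name, the reset-on-calibration behaviour, and the numeric values are illustrative assumptions:

```python
class TemperatureTrigger:
    """Request recalibration only after the emitter temperature has exceeded the
    initial (last-calibration) temperature by more than `delta` for
    `n_required` consecutive readings."""

    def __init__(self, initial_temp: float, delta: float, n_required: int):
        self.initial_temp = initial_temp
        self.delta = delta
        self.n_required = n_required
        self.count = 0

    def update(self, current_temp: float) -> bool:
        if current_temp - self.initial_temp > self.delta:
            self.count += 1          # one more consecutive over-threshold reading
        else:
            self.count = 0           # streak broken
        if self.count >= self.n_required:
            self.initial_temp = current_temp  # calibration resets the reference
            self.count = 0
            return True              # trigger a recalibration
        return False
```

Requiring the streak (rather than a single reading) is what lowers the calibration frequency and saves power, as the paragraph above notes.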
The screen may be unlocked via face verification. When face matching fails, the electronic device 110 may acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, match them, and compute the transformation between the infrared-camera and RGB-camera coordinate systems. Calibrating the infrared camera and the RGB camera after a failed face unlock improves image accuracy and facilitates a subsequent face unlock attempt.

Further, when the number of face matching failures exceeds the first preset number, the electronic device 110 may acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, match them, and compute the transformation between the two coordinate systems. Computing the transformation only after the number of matching failures exceeds the first preset number avoids recalibrating the infrared camera and the RGB camera for face matching failures caused by other factors.

The three-dimensional processing request may be generated by the user tapping a button on the display or pressing a control on the touch screen; the electronic device 110 can thereby obtain a three-dimensional processing instruction for the initial preview image. Three-dimensional processing means processing the image in three dimensions, i.e. length, width, and height. Specifically, the electronic device 110 can detect the depth information of an object or face in the image via a depth image or an infrared image and process the image three-dimensionally. For example, the three-dimensional processing may be beautification: the electronic device 110 can determine the region to beautify from the depth information of the face, improving the beautification result. It may also be three-dimensional face modeling, i.e. building a three-dimensional face model from the face in the image. The electronic device 110 can receive the three-dimensional processing instruction for the initial preview image. The initial preview image is an image of the surrounding environment collected by the camera of the electronic device 110 and displayed in real time on its screen. After receiving the three-dimensional processing instruction for the initial preview image, the electronic device 110 performs the corresponding three-dimensional processing on it.
In one embodiment, when the working state of the electronic device 110 is the preset working state, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, matching the first feature point set against the second feature point set, and obtaining from the matched feature points the transformation between the coordinate systems of the infrared camera and the RGB camera includes:

when the working state of the electronic device 110 is that a face entered to unlock the screen has failed to match the pre-stored face, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the two sets; obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera;

when the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems;

when the working state of the electronic device 110 is again detected to be that the difference between the current temperature of the light emitter and the initial temperature exceeds the temperature threshold, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems;

when the working state of the electronic device 110 is again detected to be that a three-dimensional processing request for the preview image has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems.

Calibration can thus be performed in the different working states of the electronic device 110, providing multiple opportunities for calibration and ensuring timely calibration and the sharpness of captured images.

In the above embodiment, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene may alternatively be acquired when the working state of the electronic device 110 is that the number of times the face entered to unlock the screen has failed to match the pre-stored face exceeds the first preset number.
In one embodiment, when the working state of the electronic device is the preset working state, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes:

when the working state of the electronic device 110 is that a face entered to unlock the screen has failed to match the pre-stored face, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and matching the two sets; obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera;

when the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems;

when the working state of the electronic device 110 is again detected to be that the difference between the current temperature of the light emitter and the initial temperature has been detected to exceed the temperature threshold a second preset number of consecutive times, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems;

when the working state of the electronic device 110 is again detected to be that a three-dimensional processing request for the preview image has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting the first and second feature point sets as above and matching them; obtaining, from the matched feature points, the transformation between the two coordinate systems.

Calibration can thus be performed in the different working states of the electronic device 110, providing multiple opportunities for calibration and ensuring timely calibration and the sharpness of captured images.

In the above embodiment, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene may alternatively be acquired when the working state of the electronic device 110 is that the number of times the face entered to unlock the screen has failed to match the pre-stored face exceeds the first preset number.
In one embodiment, the camera calibration method shown in FIG. 2 further includes: acquiring, in a first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in a second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extracting, in the second execution environment, feature points from the target infrared image to obtain the first feature point set, and extracting, in the first execution environment, feature points from the RGB image to obtain the second feature point set; transmitting the second feature point set to the second execution environment; and matching the first and second feature point sets in the second execution environment. The security level of the second execution environment is higher than that of the first.

Specifically, the first execution environment may be a REE (Rich Execution Environment) and the second a TEE (Trusted Execution Environment). The security level of the TEE is higher than that of the REE. Matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the camera calibration method shown in FIG. 2 further includes: acquiring, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; transmitting the RGB image to the second execution environment; extracting, in the second execution environment, feature points from the target infrared image to obtain the first feature point set and feature points from the RGB image to obtain the second feature point set; and matching the two sets in the second execution environment. The security level of the second execution environment is higher than that of the first. Extracting and matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the camera calibration method shown in FIG. 2 further includes: acquiring, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extracting, in the second execution environment, feature points from the target infrared image to obtain the first feature point set and transmitting the first feature point set to the first execution environment; extracting, in the first execution environment, feature points from the RGB image to obtain the second feature point set; and matching the two sets in the first execution environment. The security level of the second execution environment is higher than that of the first. Matching the feature points in the first execution environment gives high data-processing efficiency.
In one embodiment, a camera calibration method includes:

First, when the working state of the electronic device 110 is that a face entered to unlock the screen has failed to match the pre-stored face, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extracting the first feature point set from the target infrared image and the second feature point set from the RGB image, and matching the first feature point set against the second feature point set; obtaining, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

Second, when the working state of the electronic device 110 is that an instruction to turn on the infrared camera and the RGB camera has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting and matching the first and second feature point sets as above; obtaining, from the matched feature points, the transformation between the two coordinate systems.

Optionally, when the working state of the electronic device 110 is that the difference between the current temperature of the light emitter and the initial temperature is detected to exceed the temperature threshold, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting and matching the first and second feature point sets as above; obtaining, from the matched feature points, the transformation between the two coordinate systems.

Optionally, when the working state of the electronic device 110 is that the difference between the current temperature of the light emitter and the initial temperature is detected to exceed the temperature threshold a second preset number of consecutive times, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting and matching the first and second feature point sets as above; obtaining, from the matched feature points, the transformation between the two coordinate systems.

Optionally, when the working state of the electronic device 110 is that a three-dimensional processing request for the preview image has been received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extracting and matching the first and second feature point sets as above; obtaining, from the matched feature points, the transformation between the two coordinate systems.
FIG. 3 is a flowchart of the camera calibration method in another embodiment. As shown in FIG. 3, the camera calibration method includes:

Step 302: when the detected image sharpness is below a sharpness threshold, acquire a target infrared image and an RGB image obtained by the infrared camera and the RGB camera shooting the same scene.

Specifically, the electronic device 110 may periodically check the images captured by its cameras; when the detected sharpness of an image is below the sharpness threshold, the cameras need to be calibrated. The sharpness threshold can be set as required, for example 80% or 90%. Image sharpness may be computed with the Brenner gradient function, which sums the squared gray-level differences of nearby pixels; or with the Tenengrad gradient function, which applies the Sobel operator to extract horizontal and vertical gradient values and evaluates sharpness from them; or with the variance function, the energy gradient function, the Vollath function, the entropy function, the EAV point-sharpness function, the Laplacian gradient function, the SMD (gray-level variance) function, and so on.

Step 304: extract equally spaced feature points from the target infrared image to obtain a first feature point set, extract equally spaced feature points from the RGB image to obtain a second feature point set, and match the first feature point set against the second feature point set.

Specifically, SIFT (scale-invariant feature transform) detection is applied to the target infrared image to obtain feature points, from which equally spaced feature points are selected and added to the first feature point set; SIFT detection is likewise applied to the RGB image to obtain feature points, from which equally spaced feature points are selected and added to the second feature point set. SIFT is a descriptor used in image processing; it is scale-invariant and can detect key points in an image. The feature points in the two sets are matched via SIFT. Equally spaced means that the spacing between adjacent feature points is equal.
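One simple way to realize the equal-spacing selection described in step 304 is to lay a grid of the desired spacing over the image and keep at most one detected feature point per cell, so the selected neighbours are approximately equally spaced. This is an illustrative sketch, not the patent's specified selection rule:

```python
def select_equally_spaced(points, spacing):
    """From detected feature points, keep at most one per cell of a grid with
    the given spacing, so selected neighbours are roughly equally spaced."""
    chosen = {}
    for (x, y) in points:
        cell = (int(x // spacing), int(y // spacing))
        if cell not in chosen:        # keep the first point in each grid cell
            chosen[cell] = (x, y)
    return list(chosen.values())
```

A more refined variant would keep the strongest-response keypoint per cell instead of the first one encountered, which tends to give more repeatable matches.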
SIFT feature detection includes: searching image locations over all scales; determining position and scale at each candidate location by fitting; assigning one or more orientations to each key point location based on the local gradient direction of the image; and measuring the local image gradient at the selected scale in the region around each key point.

SIFT feature matching includes: extracting, from multiple images, feature vectors that are invariant to scaling, rotation, and brightness changes to obtain SIFT feature vectors; and using the Euclidean distance between SIFT feature vectors to judge the similarity of key points in two images. The smaller the Euclidean distance, the higher the similarity. When the Euclidean distance is below a set threshold, the match is judged successful.
Step 306: obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and the coordinate system of the RGB camera.

Let P_ir be the spatial coordinates of a point in the infrared camera's coordinate system, p_ir the projection of the point on the image plane (x and y in pixels; z equal to the depth value, in millimeters), and H_ir the intrinsic matrix of the depth camera. From the pinhole imaging model they satisfy:

p_ir = H_ir P_ir  formula (7)

Further, let P_rgb be the spatial coordinates of the same point in the RGB camera's coordinate system, p_rgb the projection of the point on the RGB image plane, and H_rgb the intrinsic matrix of the RGB camera. Since the coordinates of the infrared camera differ from those of the RGB camera, the two are related by a rotation-translation transform:

P_rgb = R P_ir + T  formula (8)

where R is the rotation matrix and T is the translation vector. Finally, projecting P_rgb with H_rgb gives the RGB coordinates corresponding to the point:

p_rgb = H_rgb P_rgb  formula (9)

Note that both p_ir and p_rgb use homogeneous coordinates: when constructing p_ir, the original pixel coordinates (x, y) should be multiplied by the depth value, and the final RGB pixel coordinates are obtained by dividing p_rgb by its z component, i.e. (x/z, y/z); the value of the z component is the distance from the point to the RGB camera (in millimeters).
The extrinsic parameters of the infrared camera comprise a rotation matrix R_ir and a translation vector T_ir, and those of the RGB camera a rotation matrix R_rgb and a translation vector T_rgb. A camera's extrinsic parameters transform a point P in the world coordinate system into the camera's coordinate system. Applying the transforms of the infrared camera and the RGB camera respectively gives:

P_ir = R_ir P + T_ir,  P_rgb = R_rgb P + T_rgb  formula (10)

Rearranging formula (10) yields:

P = R_ir⁻¹ (P_ir − T_ir)  formula (11)

Combining this with formula (8) gives:

P_rgb = R_rgb R_ir⁻¹ P_ir + (T_rgb − R_rgb R_ir⁻¹ T_ir), i.e. R = R_rgb R_ir⁻¹ and T = T_rgb − R_rgb R_ir⁻¹ T_ir  formula (12)

The transformation between the coordinate system of the infrared camera and that of the RGB camera comprises the rotation matrix and the translation vector between the two.
With the above camera calibration method, when the preset condition is detected, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set; the two sets are matched; and the matched feature points are used to compute the transformation between the coordinate system of the infrared camera and that of the RGB camera. Matching with equally spaced feature points is more accurate; the extrinsic parameters between the infrared camera and the RGB camera are calibrated, the transformation between the two cameras is corrected, and image sharpness is improved.
In one embodiment, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the detected image sharpness is below the sharpness threshold includes: when the detected image sharpness is below the sharpness threshold, detecting the working state of the electronic device 110; and when the working state of the electronic device 110 is a preset working state, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene.

The preset working state is a working state set in advance. It may include at least one of the following: an instruction to turn on the infrared camera and the RGB camera is received; a face entered to unlock the screen fails to match the pre-stored face; or the number of times a face entered to unlock the screen fails to match the pre-stored face exceeds a first preset number.

Specifically, the instruction to turn on the infrared camera and the RGB camera may be triggered by launching the camera application. When the electronic device 110 receives the user's trigger of the camera application, it starts the infrared camera and the RGB camera. After the cameras start, the target infrared image and the RGB image obtained by the two cameras shooting the same scene are acquired. Computing the transformation between the coordinate systems of the infrared camera and the RGB camera when the cameras are turned on ensures the accuracy of the images the cameras capture afterwards.

The screen may be unlocked via face verification. When face matching fails, the electronic device 110 may acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene, match them, and compute the transformation between the infrared-camera and RGB-camera coordinate systems. Calibrating the infrared camera and the RGB camera after a failed face unlock improves image accuracy and facilitates a subsequent face unlock attempt.

Further, when the number of face matching failures exceeds the first preset number, the electronic device 110 may acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene, match them, and compute the transformation between the two coordinate systems.

Computing the transformation only after the number of matching failures exceeds the first preset number avoids recalibrating the infrared camera and the RGB camera for face matching failures caused by other factors.
In one embodiment, when the working state of the electronic device 110 is the preset working state, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes: when a face entered to unlock the screen is detected to have failed to match the pre-stored face, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; and when an instruction to turn on the infrared camera and the RGB camera is received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene.

Specifically, when the working state of the electronic device 110 is detected to be that the face entered to unlock the screen has failed to match the pre-stored face, the target infrared image and the RGB image obtained by the two cameras shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set, and the two sets are matched; the transformation between the coordinate system of the infrared camera and that of the RGB camera is obtained from the matched feature points.

When the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, the target infrared image and the RGB image obtained by the two cameras shooting the same scene are acquired; equally spaced feature points are extracted and matched as above; the transformation between the two coordinate systems is obtained from the matched feature points.

In one embodiment, when the working state of the electronic device 110 is the preset working state, acquiring the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene includes:

when the number of times the face entered to unlock the screen has failed to match the pre-stored face is detected to exceed the first preset number, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene; and when an instruction to turn on the infrared camera and the RGB camera is received, acquiring the target infrared image and the RGB image obtained by the two cameras shooting the same scene.

Specifically, when the working state of the electronic device 110 is detected to be that the number of face matching failures for screen unlock exceeds the first preset number, the target infrared image and the RGB image obtained by the two cameras shooting the same scene are acquired; equally spaced feature points are extracted from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set, and the two sets are matched; the transformation between the two coordinate systems is obtained from the matched feature points.

When the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, the target infrared image and the RGB image obtained by the two cameras shooting the same scene are acquired; equally spaced feature points are extracted and matched as above; the transformation between the two coordinate systems is obtained from the matched feature points.
In one embodiment, the camera calibration method shown in FIG. 3 further includes: acquiring, in a first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in a second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extracting, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set, and extracting, in the first execution environment, equally spaced feature points from the RGB image to obtain the second feature point set; transmitting the second feature point set to the second execution environment; and matching the first and second feature point sets in the second execution environment. The security level of the second execution environment is higher than that of the first.

Specifically, the first execution environment may be a REE (Rich Execution Environment) and the second a TEE (Trusted Execution Environment). The security level of the TEE is higher than that of the REE. Matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the camera calibration method shown in FIG. 3 further includes: acquiring, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; transmitting the RGB image to the second execution environment; extracting, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and equally spaced feature points from the RGB image to obtain the second feature point set; and matching the two sets in the second execution environment. The security level of the second execution environment is higher than that of the first. Extracting and matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the camera calibration method shown in FIG. 3 further includes: acquiring, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquiring, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extracting, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and transmitting it to the first execution environment; extracting, in the first execution environment, equally spaced feature points from the RGB image to obtain the second feature point set; and matching the two sets in the first execution environment. The security level of the second execution environment is higher than that of the first. Matching the feature points in the first execution environment gives high data-processing efficiency.
FIG. 4 is a structural block diagram of the camera calibration apparatus 300 in one embodiment. As shown in FIG. 4, the camera calibration apparatus 300 includes an image acquisition module 310, a matching module 320, and a parameter determination module 330. The image acquisition module 310 is configured to acquire, when the detected image sharpness is below the sharpness threshold, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene. The matching module 320 is configured to extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and to match the first feature point set against the second feature point set. The parameter determination module 330 is configured to obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

When the camera calibration apparatus 300 of the present application detects that the image sharpness is below the sharpness threshold, it acquires the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene, extracts a first feature point set from the target infrared image and a second feature point set from the RGB image, matches the two sets, and uses the matched feature points to compute the transformation between the coordinate systems of the infrared camera and the RGB camera. This calibrates the extrinsic parameters between the two cameras, corrects the transformation between them, and improves image sharpness.

In one embodiment, as shown in FIG. 4, the camera calibration apparatus 300 further includes a detection module 340. The detection module 340 is configured to detect the working state of the electronic device 110 (shown in FIG. 1) when the detected image sharpness is below the sharpness threshold. The image acquisition module 310 is further configured to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when the working state of the electronic device 110 is the preset working state.

In one embodiment, the preset working state includes at least one of the following: a face entered to unlock the screen fails to match the pre-stored face; the number of times a face entered to unlock the screen fails to match the pre-stored face exceeds a first preset number; an instruction to turn on the infrared camera and the RGB camera is received; the difference between the current temperature of the light emitter and its initial temperature exceeds a temperature threshold; the difference between the current temperature of the light emitter and its initial temperature is detected to exceed the temperature threshold a second preset number of consecutive times; or a three-dimensional processing request for a preview image is received.
In one embodiment, as shown in FIG. 4, the image acquisition module 310 is further configured to acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the working state of the electronic device 110 is that a face entered to unlock the screen has failed to match the pre-stored face; the matching module 320 is further configured to extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and to match the two sets; and the parameter determination module 330 is further configured to obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

When the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

When the working state of the electronic device 110 is again detected to be that the difference between the current temperature of the light emitter and the initial temperature exceeds the temperature threshold, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

When the working state of the electronic device 110 is again detected to be that a three-dimensional processing request for the preview image has been received, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

Calibration can thus be performed in the different working states of the electronic device 110, providing multiple opportunities for calibration and ensuring timely calibration and the sharpness of captured images.

In the above embodiment, the image acquisition module 310 may also be configured to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when the working state of the electronic device 110 is that the number of face matching failures for screen unlock exceeds the first preset number.
In one embodiment, the image acquisition module 310 is further configured to acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene when the working state of the electronic device 110 is that a face entered to unlock the screen has failed to match the pre-stored face; the matching module 320 is further configured to extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and to match the two sets; and the parameter determination module 330 is further configured to obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

When the working state of the electronic device 110 is again detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

When the working state of the electronic device 110 is again detected to be that the difference between the current temperature of the light emitter and the initial temperature has been detected to exceed the temperature threshold a second preset number of consecutive times, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

When the working state of the electronic device 110 is again detected to be that a three-dimensional processing request for the preview image has been received, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the first and second feature point sets as above; and the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

Calibration can thus be performed in the different working states of the electronic device 110, providing multiple opportunities for calibration and ensuring timely calibration and the sharpness of captured images.

In the above embodiment, the image acquisition module 310 may be configured to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when the working state of the electronic device 110 is that the number of face matching failures for screen unlock exceeds the first preset number.
In one embodiment, the image acquisition module 310 is further configured to acquire, in a first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in a second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract, in the second execution environment, feature points from the target infrared image to obtain the first feature point set, to extract, in the first execution environment, feature points from the RGB image to obtain the second feature point set, to transmit the second feature point set to the second execution environment, and to match the two sets in the second execution environment. The security level of the second execution environment is higher than that of the first.

Specifically, the first execution environment may be a REE (Rich Execution Environment) and the second a TEE (Trusted Execution Environment). The security level of the TEE is higher than that of the REE. Matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to transmit the RGB image to the second execution environment, to extract, in the second execution environment, feature points from the target infrared image to obtain the first feature point set and feature points from the RGB image to obtain the second feature point set, and to match the two sets in the second execution environment. The security level of the second execution environment is higher than that of the first. Extracting and matching the feature points in the second execution environment improves security and keeps the data safe.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract, in the second execution environment, feature points from the target infrared image to obtain the first feature point set and transmit it to the first execution environment, to extract, in the first execution environment, feature points from the RGB image to obtain the second feature point set, and to match the two sets in the first execution environment. The security level of the second execution environment is higher than that of the first. Matching the feature points in the first execution environment gives high data-processing efficiency.
Referring to FIG. 4, in one embodiment, the image acquisition module 310 is configured to acquire, when the detected image sharpness is below the sharpness threshold, the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene. The matching module 320 is configured to extract equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, and to match the first feature point set against the second feature point set. The parameter determination module 330 is configured to obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

When the camera calibration apparatus 300 of the present application detects that the preset condition is met, it acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene, extracts equally spaced feature points from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set, matches the two sets, and uses the matched feature points to compute the transformation between the coordinate systems of the infrared camera and the RGB camera. Matching with equally spaced feature points is more accurate; the extrinsic parameters between the two cameras are calibrated, the transformation between them is corrected, and image sharpness is improved.

In one embodiment, the detection module 340 of the camera calibration apparatus 300 is configured to detect the working state of the electronic device when the detected image sharpness is below the sharpness threshold. The image acquisition module 310 is further configured to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when the working state of the electronic device is the preset working state.

Here, the preset working state includes at least one of the following: an instruction to turn on the infrared camera and the RGB camera is received; a face entered to unlock the screen fails to match the pre-stored face; or the number of times a face entered to unlock the screen fails to match the pre-stored face exceeds a first preset number.

In one embodiment, the image acquisition module 310 is further configured to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when a face entered to unlock the screen is detected to have failed to match the pre-stored face, and to acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene when an instruction to turn on the infrared camera and the RGB camera is received.

Specifically, when the working state of the electronic device 110 is detected to be that the face entered to unlock the screen has failed to match the pre-stored face, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts equally spaced feature points from the target infrared image to obtain a first feature point set and from the RGB image to obtain a second feature point set, and matches the two sets; the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

Further, when the working state of the electronic device 110 is detected to be that an instruction to turn on the infrared camera and the RGB camera has been received, the image acquisition module 310 acquires the target infrared image and the RGB image obtained by the two cameras shooting the same scene; the matching module 320 extracts and matches the equally spaced feature point sets as above; the parameter determination module 330 obtains, from the matched feature points, the transformation between the two coordinate systems.

In one embodiment, the image acquisition module 310 is further configured to acquire, in a first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in a second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set, to extract, in the first execution environment, equally spaced feature points from the RGB image to obtain the second feature point set, to transmit the second feature point set to the second execution environment, and to match the two sets in the second execution environment. The security level of the second execution environment is higher than that of the first.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to transmit the RGB image to the second execution environment, to extract, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and equally spaced feature points from the RGB image to obtain the second feature point set, and to match the two sets in the second execution environment, the security level of the second execution environment being higher than that of the first.

In one embodiment, the image acquisition module 310 is further configured to acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and to acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; the matching module 320 is further configured to extract, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and transmit it to the first execution environment, to extract, in the first execution environment, equally spaced feature points from the RGB image to obtain the second feature point set, and to match the two sets in the first execution environment, the security level of the second execution environment being higher than that of the first.
Referring to FIG. 5, an embodiment of the present application further provides an electronic device. The electronic device includes a memory and a processor; the memory stores a computer program which, when executed by the processor, causes the processor to perform the operations of the camera calibration method described in any of the above embodiments.

For example, when the computer program is executed by the processor, the processor may perform the following steps: when the detected image sharpness is below the sharpness threshold, acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and match the first feature point set against the second feature point set; obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

As another example, the processor may also perform the following steps: when the detected image sharpness is below the sharpness threshold, detect the working state of the electronic device; when the working state of the electronic device is the preset working state, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, the processor may perform the following steps: acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extract, in the second execution environment, feature points from the target infrared image to obtain the first feature point set, and extract, in the first execution environment, feature points from the RGB image to obtain the second feature point set; transmit the second feature point set to the second execution environment and match the two sets there. The security level of the second execution environment is higher than that of the first.

As another example, the processor may perform the following steps: when the detected image sharpness is below the sharpness threshold, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, the processor may perform the following steps: when the detected image sharpness is below the sharpness threshold, detect the working state of the electronic device 110; when the working state of the electronic device 110 is the preset working state, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, the processor may perform the following steps: acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; transmit the RGB image to the second execution environment; extract, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and equally spaced feature points from the RGB image to obtain the second feature point set; and match the two sets in the second execution environment, the security level of the second execution environment being higher than that of the first.
Referring to FIG. 5, an embodiment of the present application further provides a non-volatile computer-readable storage medium. The non-volatile computer-readable storage medium stores a computer program which, when executed by a processor, implements the operations of the camera calibration method described in any of the above examples.

For example, when executed by the processor, the computer program may implement the following steps: when the detected image sharpness is below the sharpness threshold, acquire the target infrared image and the RGB image obtained by the infrared camera and the RGB camera shooting the same scene; extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and match the first feature point set against the second feature point set; obtain, from the matched feature points, the transformation between the coordinate system of the infrared camera and that of the RGB camera.

As another example, when executed by the processor, the computer program may also implement the following steps: when the detected image sharpness is below the sharpness threshold, detect the working state of the electronic device; when the working state of the electronic device is the preset working state, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract feature points from the target infrared image to obtain a first feature point set and feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, when executed by the processor, the computer program may also implement the following steps: acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; extract, in the second execution environment, feature points from the target infrared image to obtain the first feature point set, and extract, in the first execution environment, feature points from the RGB image to obtain the second feature point set; transmit the second feature point set to the second execution environment and match the two sets there. The security level of the second execution environment is higher than that of the first.

As another example, when executed by the processor, the computer program may also implement the following steps: when the detected image sharpness is below the sharpness threshold, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, when executed by the processor, the computer program may also implement the following steps: when the detected image sharpness is below the sharpness threshold, detect the working state of the electronic device 110; when the working state of the electronic device 110 is the preset working state, acquire the target infrared image and the RGB image obtained by the two cameras shooting the same scene; extract equally spaced feature points from the target infrared image to obtain a first feature point set and equally spaced feature points from the RGB image to obtain a second feature point set, and match the two sets; obtain, from the matched feature points, the transformation between the two coordinate systems.

As another example, when executed by the processor, the computer program may also implement the following steps: acquire, in the first execution environment, the RGB image obtained by the RGB camera shooting the scene, and acquire, in the second execution environment, the target infrared image obtained by the infrared camera shooting the same scene; transmit the RGB image to the second execution environment; extract, in the second execution environment, equally spaced feature points from the target infrared image to obtain the first feature point set and equally spaced feature points from the RGB image to obtain the second feature point set; and match the two sets in the second execution environment, the security level of the second execution environment being higher than that of the first.
FIG. 5 is a schematic diagram of the internal structure of the electronic device in one embodiment. As shown in FIG. 5, the electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capability and supports the operation of the entire electronic device. The memory stores data, programs, and so on; at least one computer program is stored on the memory and can be executed by the processor. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the camera calibration methods provided by the above embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.

Each module in the camera calibration apparatus 300 provided by the embodiments of the present application may be implemented in the form of a computer program. The computer program can run on a terminal or a server. The program modules constituted by the computer program can be stored on the memory of the terminal or server. When the computer program is executed by the processor, the steps of the camera calibration method described in any embodiment of the present application are implemented.

An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, causes the computer to execute the camera calibration method described in any embodiment of the present application.
图6为另一个实施例中电子设备50的内部结构示意图。如图6所示,电子设备50可包括摄像头模组510、第二处理器520,第一处理器530。第二处理器520可为CPU(Central Processing Unit,中央处理器)模块。第一处理器530可为MCU(Microcontroller Unit,微控制器)模块等。其中,第一处理器530连接在第二处理器520和摄像头模组550之间,第一处理器530可控制摄像头模组510中激光摄像头512、泛光灯514和镭射灯518,第二处理器520可控制摄像头模组510中的RGB(Red/Green/Blue,红/绿/蓝色彩模式)摄像头516。在一个实施例中,图6中的电子设备50与图5中的电子设备可为同一个电子设备。图6中的第一处理器530和第二处理器520二者的功能与图5中的处理器的功能一致。
摄像头模组510包括激光摄像头512、泛光灯514、RGB摄像头516和镭射灯518。上述激光摄像头512为红外摄像头,用于获取红外图像或散斑图像。上述泛光灯514为可发射红外光的面光源;上述镭射灯518为可发射激光且发射的激光可形成图案的点光源。其中,当泛光灯514发射红外光时,激光摄像头512可根据反射回的光线获取红外图像。当镭射灯518发射红外激光时,激光摄像头512可根据反射回的光线获取散斑图像。上述散斑图像是镭射灯518发射的红外激光被反射后图案发生形变的图像。
第二处理器520可包括在TEE(Trusted execution environment,可信运行环境)环境下运行的CPU内核和在REE(Rich Execution Environment,自然运行环境)环境下运行的CPU内核。其中,TEE环境和REE环境均为ARM模块(Advanced RISC Machines,高级精简指令集处理器)的运行模式。其中,TEE环境的安全级别较高,第二处理器520中有且仅有一个CPU内核可同时运行在TEE环境下。通常情况下,电子设备50中安全级别较高的操作行为需要在TEE环境下的CPU内核中执行,安全级别较低的操作行为可在REE环境下的CPU内核中执行。
第一处理器530包括PWM(Pulse Width Modulation,脉冲宽度调制)模块532、SPI/I2C(Serial Peripheral Interface/Inter-Integrated Circuit,串行外设接口/双向二线制同步串行接口)接口534、RAM(Random Access Memory,随机存取存储器)模块536和深度引擎538。PWM模块532可向摄像头模组510发射脉冲,控制泛光灯514或镭射灯518开启,使得激光摄像头512可采集到红外图像或散斑图像。SPI/I2C接口534用于接收第二处理器520发送的图像采集指令。深度引擎538可对散斑图像进行处理得到深度视差图。随机存取存储器536可以存储激光摄像头512获取的红外图像、散斑图像、深度引擎538处理红外图像或散斑图像得到的处理后的图像等。
当第二处理器520接收到应用程序的数据获取请求时,例如,当应用程序需要进行人脸解锁、人脸支付时,可通过运行在TEE环境下的CPU内核向第一处理器530发送图像采集指令。当第一处理器530接收到图像采集指令后,可通过PWM模块532发射脉冲波控制摄像头模组510中泛光灯514开启并通过激光摄像头512采集红外图像、控制摄像头模组510中镭射灯518开启并通过激光摄像头512采集散斑图像。摄像头模组510可将采集到的红外图像和散斑图像发送给第一处理器530。第一处理器530可对接收到的红外图像进行处理得到红外视差图;对接收到的散斑图像进行处理得到散斑视差图或深度视差图。其中,第一处理器530对上述红外图像和散斑图像进行处理是指对红外图像或散斑图像进行校正,去除摄像头模组510中内外参数对图像的影响。其中,第一处理器530可设置成不同的模式,不同模式输出的图像不同。当第一处理器530设置为散斑图模式时,第一处理器530对散斑图像处理得到散斑视差图,根据上述散斑视差图可得到目标散斑图像;当第一处理器530设置为深度图模式时,第一处理器530对散斑图像处理得到深度视差图,根据上述深度视差图可得到深度图像,上述深度图像是指带有深度信息的图像。第一处理器530可将上述红外视差图和散斑视差图发送给第二处理器520,第一处理器530也可将上述红外视差图和深度视差图发送给第二处理器520。第二处理器520可根据上述红外视差图获取红外图像、根据上述深度视差图获取深度图像。进一步的,第二处理器520可根据红外图像、深度图像来进行人脸识别、人脸匹配、活体检测以及获取检测到的人脸的深度信息。
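上述深度引擎由视差得到深度所依据的三角测量关系,通常可写作 depth = f·B / d(f 为焦距、B 为两摄像头基线、d 为视差)。以下为示意性片段(函数名与参数取值均为说明用假设,并非深度引擎538的实际实现):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm, eps=1e-6):
    """按 depth = focal * baseline / disparity 将视差图转换为深度图;
    视差接近 0 的像素视为无效,深度记为 0。"""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```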
第一处理器530与第二处理器520之间通过固定的安全接口进行通信,用以确保传输数据的安全性。如图6所示,第二处理器520发送给第一处理器530的数据通过SECURE SPI/I2C 540传输,第一处理器530发送给第二处理器520的数据通过SECURE MIPI(Mobile Industry Processor Interface,移动产业处理器接口)550传输。例如,红外图像、散斑图像、红外视差图、散斑视差图、深度视差图、深度图像等均可以通过SECURE MIPI 550传送给第二处理器520。
在一个实施例中,第一处理器530也可根据上述红外视差图获取目标红外图像,以及根据上述深度视差图计算得到深度图像,再将上述目标红外图像、深度图像发送给第二处理器520。
在一个实施例中,第一处理器530可根据上述目标红外图像、深度图像进行人脸识别、人脸匹配、活体检测以及获取检测到的人脸的深度信息。其中,第一处理器530将图像发送给第二处理器520是指第一处理器530将图像发送给第二处理器520中处于TEE环境下的CPU内核。
在一个实施例中,第二处理器520检测图像清晰度是否小于清晰度阈值,并在检测到图像清晰度小于清晰度阈值时,获取红外摄像头(即激光摄像头512)和RGB摄像头516拍摄同一场景所得到的目标红外图像和RGB图像。随后,第二处理器520提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及提取该RGB图像中间距相等的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配。最后,第二处理器520根据匹配后的特征点获取该红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。其中,RGB图像由第二处理器520控制RGB摄像头516拍摄得到。第二处理器520可以发送获取红外图像的指令至第一处理器530,第一处理器530控制激光摄像头512(即红外摄像头)拍摄红外图像,第一处理器530再将红外图像通过SECURE MIPI发送给第二处理器520。
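其中“图像清晰度”的一种常见度量是图像拉普拉斯响应的方差:清晰图像高频成分多、方差大,失焦或标定漂移导致的模糊图像方差小。以下为纯NumPy的示意性实现(度量方式与阈值取值均为说明用假设,并非本申请限定的清晰度定义):

```python
import numpy as np

def laplacian_variance(img):
    """以四邻域拉普拉斯算子响应的方差作为清晰度度量。"""
    img = np.asarray(img, dtype=float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def needs_recalibration(img, threshold):
    """清晰度小于清晰度阈值时返回 True,提示可触发摄像头校准流程。"""
    return laplacian_variance(img) < threshold
```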
在一个实施例中,第二处理器520在检测到图像清晰度小于清晰度阈值时,检测电子设备50的工作状态;当电子设备50的工作状态为预设工作状态时,获取红外摄像头和RGB摄像头516拍摄同一场景所得到的目标红外图像和RGB图像。随后,第二处理器520提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及提取该RGB图像中间距相等的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配。最后,第二处理器520根据匹配后的特征点获取该红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。
在一个实施例中,可在TEE环境中获取目标红外图像,即第二处理器520接收到红外图像后,将红外图像存储在TEE中。在REE环境下获取RGB图像,即,第二处理器520接收到RGB摄像头发送的RGB图像后,将RGB图像存储在REE中。随后,第二处理器520将RGB图像发送到TEE环境中,在TEE环境中提取该目标红外图像中的第一特征点集合和RGB图像中的第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配;根据匹配后的特征点获取红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。
在一个实施例中,可在TEE环境中获取目标红外图像,即第二处理器520接收到红外图像后,将红外图像存储在TEE中。在REE环境下获取RGB图像,即,第二处理器520接收到RGB摄像头发送的RGB图像后,将RGB图像存储在REE中。随后,第二处理器520可在TEE环境下提取目标红外图像中的第一特征点集合,并将第一特征点集合发送到REE中,再在REE环境下提取RGB图像中的第二特征点集合,并将第一特征点集合和第二特征点集合进行匹配。最后,第二处理器520根据匹配后的特征点获取红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。
在一个实施例中,第二处理器520检测图像清晰度是否小于清晰度阈值,并在检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像。随后,第二处理器520提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及提取该RGB图像中间距相等的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配。最后,第二处理器520根据匹配后的特征点获取该红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。其中,RGB图像由第二处理器520控制RGB摄像头516拍摄得到。第二处理器520可以发送获取红外图像的指令至第一处理器530,第一处理器530控制激光摄像头512(即红外摄像头)拍摄红外图像,第一处理器530再将红外图像通过SECURE MIPI发送给第二处理器520。
在一个实施例中,第二处理器520在检测到图像清晰度小于清晰度阈值时,检测电子设备50的工作状态;当电子设备50的工作状态为预设工作状态时,获取红外摄像头和RGB摄像头516拍摄同一场景所得到的目标红外图像和RGB图像。第二处理器520提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及提取该RGB图像中间距相等的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配。最后,第二处理器520根据匹配后的特征点获取该红外摄像头的坐标系和RGB摄像头的坐标系之间的变换关系。
在一个实施例中,可在REE中获取RGB摄像头拍摄同一场景得到的RGB图像,以及在TEE中获取红外摄像头拍摄同一场景得到的目标红外图像,即第二处理器520接收到RGB摄像头发送的RGB图像后,将RGB图像存储在REE中,第二处理器520接收到红外图像后,将红外图像存储在TEE中。随后,第二处理器520在TEE中提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及在REE中提取该RGB图像中间距相等的特征点得到第二特征点集合。随后,第二处理器520将第二特征点集合传输给TEE,并在TEE中将第一特征点集合和第二特征点集合进行匹配。其中,TEE的安全级别高于REE。
在一个实施例中,可在REE中获取RGB摄像头拍摄同一场景得到的RGB图像,以及在TEE中获取红外摄像头拍摄同一场景得到的目标红外图像,即第二处理器520接收到RGB摄像头发送的RGB图像后,将RGB图像存储在REE中,第二处理器520接收到红外图像后,将红外图像存储在TEE中。随后,第二处理器520将RGB图像传输给TEE,在TEE中提取目标红外图像中间距相等的特征点得到第一特征点集合,以及提取RGB图像中间距相等的特征点得到第二特征点集合。最后,第二处理器520在TEE中将第一特征点集合和第二特征点集合进行匹配。
在一个实施例中,可在REE中获取RGB摄像头拍摄同一场景得到的RGB图像,以及在TEE中获取红外摄像头拍摄同一场景得到的目标红外图像,即第二处理器520接收到RGB摄像头发送的RGB图像后,将RGB图像存储在REE中,第二处理器520接收到红外图像后,将红外图像存储在TEE中。随后,第二处理器520在TEE中提取目标红外图像中间距相等的特征点得到第一特征点集合,将第一特征点集合传输给REE。随后,第二处理器520在REE中提取所述RGB图像中间距相等的特征点得到第二特征点集合,并在REE中将第一特征点集合和第二特征点集合进行匹配。
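上述几种跨运行环境的数据流,可用如下概念性片段示意(以Python字典分别模拟REE与TEE两个运行环境,extract、match由调用方提供;该片段仅示意“RGB图像先传输给TEE、特征提取与匹配均在TEE内完成”这一种流程,其余流程同理,名称均为说明用假设):

```python
def calibrate_across_envs(rgb_image, ir_image, extract, match):
    """示意:RGB 图像存于安全级别较低的 REE,红外图像存于安全级别
    较高的 TEE;匹配前将 RGB 图像传输给 TEE,特征提取与匹配均在
    TEE 内完成。"""
    ree = {"rgb": rgb_image}        # 第一运行环境(REE)
    tee = {"ir": ir_image}          # 第二运行环境(TEE)
    tee["rgb"] = ree.pop("rgb")     # 将 RGB 图像传输给 TEE
    first = extract(tee["ir"])      # TEE 内提取第一特征点集合
    second = extract(tee["rgb"])    # TEE 内提取第二特征点集合
    return match(first, second)     # TEE 内完成匹配
```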
本申请实施例还提供一种电子设备。上述电子设备包括图像处理电路,图像处理电路可以利用硬件和/或软件组件实现,可包括定义ISP(Image Signal Processing,图像信号处理)管线的各种处理器。图7为一个实施例中图像处理电路的示意图。如图7所示,为便于说明,仅示出与本申请实施例相关的图像处理技术的各个方面。
如图7所示,图像处理电路包括第一ISP处理器630、第二ISP处理器640、控制逻辑器650和图像存储器660。
第一摄像头610包括一个或多个第一透镜612和第一图像传感器614。第一摄像头610可为红外摄像头。此时,第一图像传感器614可包括红外滤光片。第一图像传感器614可获取第一图像传感器614中的每个成像像素捕捉的光强度和波长信息,并提供可由第一ISP处理器630处理的一组图像数据,该组图像数据用于生成红外图像。
第二摄像头620包括一个或多个第二透镜622和第二图像传感器624。第二摄像头可为RGB摄像头。此时,第二图像传感器624可包括色彩滤镜阵列(如Bayer滤镜)。第二图像传感器624可获取第二图像传感器624中的每个成像像素捕捉的光强度和波长信息,并提供可由第二ISP处理器640处理的一组图像数据,该组图像数据用于生成RGB图像。
在一个例子中,第一ISP处理器630可为图6中的第一处理器530,第二ISP处理器640可为图6中的第二处理器520。此时,图像存储器660包括两个独立的存储器,一个存储器位于第一ISP处理器630中(此时存储器可为图6中的随机存取存储器536),另一个存储器位于第二ISP处理器640中。控制逻辑器650也包括两个独立的控制逻辑器,一个控制逻辑器位于第一ISP处理器630中,用于控制第一摄像头610获取第一图像(即红外图像),另一个控制逻辑器位于第二ISP处理器640中,用于控制第二摄像头620获取第二图像(即RGB图像)。
第一摄像头610采集的第一图像传输给第一ISP处理器630进行处理,第一ISP处理器630处理第一图像后,可将第一图像的统计数据(如图像的亮度、图像的反差值等)发送给控制逻辑器650,控制逻辑器650可根据统计数据确定第一摄像头610的控制参数,从而第一摄像头610可根据控制参数进行自动对焦、自动曝光等操作。第一图像经过第一ISP处理器630进行处理后可存储至图像存储器660中,例如,对接收到的红外图像进行处理得到红外视差图等。第一ISP处理器630也可以读取图像存储器660中存储的图像并进行处理。
其中,第一ISP处理器630按多种格式逐个像素地处理图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,第一ISP处理器630可对图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度计算清晰度。
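对于 8、10、12 或 14 比特位深度的像素数据,逐像素处理前常见的做法是先线性归一化到统一的数值范围。以下为示意性片段(normalize_bit_depth为说明用假设名称,归一化方式仅为一种可能实现):

```python
import numpy as np

def normalize_bit_depth(raw, bit_depth):
    """将位深度为 bit_depth(8/10/12/14)的原始像素值
    线性归一化到 [0.0, 1.0] 区间。"""
    if bit_depth not in (8, 10, 12, 14):
        raise ValueError("unsupported bit depth")
    max_val = (1 << bit_depth) - 1
    return np.asarray(raw, dtype=float) / max_val
```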
同样地,第二摄像头620采集的第二图像传输给第二ISP处理器640进行处理,第二ISP处理器640处理第二图像后,可将第二图像的统计数据(如图像的亮度、图像的反差值、图像的颜色等)发送给控制逻辑器650,控制逻辑器650可根据统计数据确定第二摄像头620的控制参数,从而第二摄像头620可根据控制参数进行自动对焦、自动曝光等操作。其中,统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、第二透镜622阴影校正等,控制参数可包括增益、曝光控制的积分时间、防抖参数、闪光控制参数、第二透镜622控制参数(例如聚焦或变焦用焦距)、或这些参数的组合等。第二图像经过第二ISP处理器640进行处理后可存储至图像存储器660中,第二ISP处理器640也可以读取图像存储器660中存储的图像以对其进行处理。另外,第二图像经过第二ISP处理器640进行处理后可输出给显示器670进行显示,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图形处理器)进一步处理。
在一个例子中,当第二ISP处理器640检测到图像清晰度小于清晰度阈值时,第一ISP处理器630接收第一摄像头610获取的图像数据并处理图像数据以形成目标红外图像。第一ISP处理器630将目标红外图像发送给第二ISP处理器640,第二ISP处理器640接收第二摄像头620获取的图像数据并处理图像数据以形成RGB图像。第二ISP处理器640提取该目标红外图像中的特征点得到第一特征点集合和该RGB图像中的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配;根据匹配后的特征点获取该红外摄像头(即第一摄像头610)的坐标系和RGB摄像头(即第二摄像头620)的坐标系之间的变换关系。
在一个例子中,当第二ISP处理器640检测到图像清晰度小于清晰度阈值时,第一ISP处理器630接收第一摄像头610获取的图像数据并处理图像数据以形成目标红外图像。第一ISP处理器630将目标红外图像发送给第二ISP处理器640,第二ISP处理器640接收第二摄像头620获取的图像数据并处理图像数据以形成RGB图像。第二ISP处理器640提取该目标红外图像中间距相等的特征点得到第一特征点集合,以及提取该RGB图像中间距相等的特征点得到第二特征点集合,并将该第一特征点集合和第二特征点集合进行匹配,再根据匹配后的特征点获取该红外摄像头(即第一摄像头610)的坐标系和RGB摄像头(即第二摄像头620)的坐标系之间的变换关系。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (44)

  1. 一种摄像头校准方法,其特征在于,包括:
    当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像;
    提取所述目标红外图像中的特征点得到第一特征点集合和所述RGB图像中的特征点得到第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配;
    根据匹配后的特征点获取所述红外摄像头的坐标系和所述RGB摄像头的坐标系之间的变换关系。
  2. 根据权利要求1所述的摄像头校准方法,其特征在于,所述当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像,包括:
    当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  3. 根据权利要求2所述的摄像头校准方法,其特征在于,所述预设工作状态包括以下情况中至少一种:
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数;
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到光发射器的当前温度与初始温度之差超过温度阈值;
    连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值;
    接收到对预览图像的三维处理请求。
  4. 根据权利要求2所述的摄像头校准方法,其特征在于,所述当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像,包括:
    当所述电子设备的工作状态为检测到用于解锁屏幕录入的人脸与预存人脸匹配失败,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    当再次检测到所述电子设备的工作状态为接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  5. 根据权利要求4所述的摄像头校准方法,其特征在于,所述当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像,还包括:
    当再次检测到所述电子设备的工作状态为检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  6. 根据权利要求4所述的摄像头校准方法,其特征在于,所述当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像,还包括:
    当再次检测到所述电子设备的工作状态为连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  7. 根据权利要求4至6中任一项所述的摄像头校准方法,其特征在于,所述当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像,还包括:
    当再次检测到所述电子设备的工作状态为接收到对预览图像的三维处理请求,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  8. 根据权利要求1所述的摄像头校准方法,其特征在于,所述提取所述目标红外图像中的特征点得到第一特征点集合和所述RGB图像中的特征点得到第二特征点集合,并将所述第一特征点集合和第二特征点集合进行匹配,包括:
    提取所述目标红外图像中间距相等的特征点得到所述第一特征点集合,以及提取所述RGB图像中间距相等的特征点得到所述第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配。
  9. 根据权利要求8所述的摄像头校准方法,其特征在于,所述当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像,包括:
    当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  10. 根据权利要求9所述的摄像头校准方法,其特征在于,所述预设工作状态包括以下情况中至少一种:
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数。
  11. 根据权利要求9所述的摄像头校准方法,其特征在于,所述当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像,包括:
    当检测到用于解锁屏幕录入的人脸与预存人脸匹配失败时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    以及当接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像。
  12. 根据权利要求8至11中任一项所述的摄像头校准方法,其特征在于,所述摄像头校准方法还包括:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,以及在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  13. 根据权利要求8至11中任一项所述的摄像头校准方法,其特征在于,所述摄像头校准方法还包括:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    将所述RGB图像传输给所述第二运行环境,在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  14. 根据权利要求8至11中任一项所述的摄像头校准方法,其特征在于,所述摄像头校准方法还包括:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,将所述第一特征点集合传输给所述第一运行环境;
    在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第一运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  15. 一种摄像头校准装置,其特征在于,包括:
    图像获取模块,用于当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像;
    匹配模块,用于提取所述目标红外图像中的特征点得到第一特征点集合和所述RGB图像中的特征点得到第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配;
    参数确定模块,用于根据匹配后的特征点获取所述红外摄像头的坐标系和所述RGB摄像头的坐标系之间的变换关系。
  16. 根据权利要求15所述的摄像头校准装置,其特征在于,
    所述匹配模块还用于提取所述目标红外图像中间距相等的特征点得到所述第一特征点集合,以及提取所述RGB图像中间距相等的特征点得到所述第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配。
  17. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行以下步骤:
    当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像;
    提取所述目标红外图像中的特征点得到第一特征点集合和所述RGB图像中的特征点得到第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配;
    根据匹配后的特征点获取所述红外摄像头的坐标系和所述RGB摄像头的坐标系之间的变换关系。
  18. 根据权利要求17所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  19. 根据权利要求18所述的电子设备,其特征在于,所述预设工作状态包括以下情况中至少一种:
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数;
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到光发射器的当前温度与初始温度之差超过温度阈值;
    连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值;
    接收到对预览图像的三维处理请求。
  20. 根据权利要求18所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当所述电子设备的工作状态为检测到用于解锁屏幕录入的人脸与预存人脸匹配失败,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    当再次检测到所述电子设备的工作状态为接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  21. 根据权利要求20所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当再次检测到所述电子设备的工作状态为检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  22. 根据权利要求20所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当再次检测到所述电子设备的工作状态为连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  23. 根据权利要求20至22任一项所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当再次检测到所述电子设备的工作状态为接收到对预览图像的三维处理请求,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  24. 根据权利要求17所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    提取所述目标红外图像中间距相等的特征点得到所述第一特征点集合,以及提取所述RGB图像中间距相等的特征点得到所述第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配。
  25. 根据权利要求24所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  26. 根据权利要求25所述的电子设备,其特征在于,所述预设工作状态包括以下情况中至少一种:
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数。
  27. 根据权利要求25所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    当检测到用于解锁屏幕录入的人脸与预存人脸匹配失败时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    以及当接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像。
  28. 根据权利要求24至27任一项所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,以及在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  29. 根据权利要求24至27任一项所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    将所述RGB图像传输给所述第二运行环境,在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  30. 根据权利要求24至27任一项所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,将所述第一特征点集合传输给所述第一运行环境;
    在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第一运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  31. 一种非易失性计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现以下步骤:
    当检测到图像清晰度小于清晰度阈值时,获取红外摄像头和RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像;
    提取所述目标红外图像中的特征点得到第一特征点集合和所述RGB图像中的特征点得到第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配;
    根据匹配后的特征点获取所述红外摄像头的坐标系和所述RGB摄像头的坐标系之间的变换关系。
  32. 根据权利要求31所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  33. 根据权利要求32所述的非易失性计算机可读存储介质,其特征在于,所述预设工作状态包括以下情况中至少一种:
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数;
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到光发射器的当前温度与初始温度之差超过温度阈值;
    连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值;
    接收到对预览图像的三维处理请求。
  34. 根据权利要求32所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当所述电子设备的工作状态为检测到用于解锁屏幕录入的人脸与预存人脸匹配失败,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    当再次检测到所述电子设备的工作状态为接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  35. 根据权利要求34所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当再次检测到所述电子设备的工作状态为检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  36. 根据权利要求34所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当再次检测到所述电子设备的工作状态为连续第二预设次数检测到光发射器的当前温度与初始温度之差超过温度阈值,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  37. 根据权利要求34至36任一项所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当再次检测到所述电子设备的工作状态为接收到对预览图像的三维处理请求,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  38. 根据权利要求31所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    提取所述目标红外图像中间距相等的特征点得到所述第一特征点集合,以及提取所述RGB图像中间距相等的特征点得到所述第二特征点集合,并将所述第一特征点集合和所述第二特征点集合进行匹配。
  39. 根据权利要求38所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当检测到所述图像清晰度小于所述清晰度阈值时,检测电子设备的工作状态;
    当所述电子设备的工作状态为预设工作状态时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像。
  40. 根据权利要求39所述的非易失性计算机可读存储介质,其特征在于,所述预设工作状态包括以下情况中至少一种:
    接收到对所述红外摄像头和所述RGB摄像头的开启指令;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败;
    检测到用于解锁屏幕录入的人脸与预存人脸匹配失败的次数超过第一预设次数。
  41. 根据权利要求39所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    当检测到用于解锁屏幕录入的人脸与预存人脸匹配失败时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的所述目标红外图像和所述RGB图像;
    以及当接收到对所述红外摄像头和所述RGB摄像头的开启指令时,获取所述红外摄像头和所述RGB摄像头拍摄同一场景所得到的目标红外图像和RGB图像。
  42. 根据权利要求38至41任一项所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,以及在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  43. 根据权利要求38至41任一项所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    将所述RGB图像传输给所述第二运行环境,在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第二运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
  44. 根据权利要求38至41任一项所述的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下步骤:
    在第一运行环境中获取所述RGB摄像头拍摄同一场景得到的所述RGB图像,以及在第二运行环境中获取所述红外摄像头拍摄同一场景得到的所述目标红外图像;
    在所述第二运行环境中提取所述目标红外图像中间距相等的所述特征点得到所述第一特征点集合,将所述第一特征点集合传输给所述第一运行环境;
    在所述第一运行环境中提取所述RGB图像中间距相等的所述特征点得到所述第二特征点集合,在所述第一运行环境中将所述第一特征点集合和所述第二特征点集合进行匹配,其中,所述第二运行环境的安全级别高于所述第一运行环境。
PCT/CN2019/075375 2018-08-01 2019-02-18 摄像头校准方法和装置、电子设备、计算机可读存储介质 WO2020024576A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810867079.6 2018-08-01
CN201810866119.5A CN109040745B (zh) 2018-08-01 2018-08-01 摄像头自校准方法和装置、电子设备、计算机存储介质
CN201810867079.6A CN109040746B (zh) 2018-08-01 2018-08-01 摄像头校准方法和装置、电子设备、计算机可读存储介质
CN201810866119.5 2018-08-01

Publications (1)

Publication Number Publication Date
WO2020024576A1 true WO2020024576A1 (zh) 2020-02-06

Family

ID=66951761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075375 WO2020024576A1 (zh) 2018-08-01 2019-02-18 摄像头校准方法和装置、电子设备、计算机可读存储介质

Country Status (4)

Country Link
US (1) US11158086B2 (zh)
EP (1) EP3608875A1 (zh)
TW (1) TWI709110B (zh)
WO (1) WO2020024576A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024575A1 (zh) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
US11172139B2 (en) * 2020-03-12 2021-11-09 Gopro, Inc. Auto exposure metering for spherical panoramic content
US11418719B2 (en) 2020-09-04 2022-08-16 Altek Semiconductor Corp. Dual sensor imaging system and calibration method which includes a color sensor and an infrared ray sensor to perform image alignment and brightness matching
CN112232278B (zh) * 2020-11-04 2024-02-20 上海菲戈恩微电子科技有限公司 一种3d结构光自适应精度实现方法及系统
KR20230130704A (ko) * 2021-01-19 2023-09-12 구글 엘엘씨 카메라 캘리브레이션을 위한 교차 스펙트럼 기능 매핑
CN112819901B (zh) * 2021-02-26 2024-05-14 中国人民解放军93114部队 基于图像边缘信息的红外相机自标定方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156189A1 (en) * 2002-01-16 2003-08-21 Akira Utsumi Automatic camera calibration method
CN102782721A (zh) * 2009-12-24 2012-11-14 康耐视公司 用于相机校准误差的运行时测定的系统和方法
US20140348416A1 (en) * 2013-05-23 2014-11-27 Himax Media Solutions, Inc. Stereo image rectification apparatus and method
CN105701827A (zh) * 2016-01-15 2016-06-22 中林信达(北京)科技信息有限责任公司 可见光相机与红外相机的参数联合标定方法及装置
CN106340045A (zh) * 2016-08-25 2017-01-18 电子科技大学 三维人脸重建中基于双目立体视觉的标定优化方法
CN108230395A (zh) * 2017-06-14 2018-06-29 深圳市商汤科技有限公司 双视角图像校准及图像处理方法、装置、存储介质和电子设备
CN109040745A (zh) * 2018-08-01 2018-12-18 Oppo广东移动通信有限公司 摄像头自校准方法和装置、电子设备、计算机存储介质
CN109040746A (zh) * 2018-08-01 2018-12-18 Oppo广东移动通信有限公司 摄像头校准方法和装置、电子设备、计算机可读存储介质

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002090887A2 (en) * 2001-05-04 2002-11-14 Vexcel Imaging Gmbh Digital camera for and method of obtaining overlapping images
US9582889B2 (en) * 2009-07-30 2017-02-28 Apple Inc. Depth mapping based on pattern matching and stereoscopic information
EP3506644B1 (en) * 2012-01-19 2020-12-30 Vid Scale, Inc. Methods and systems for video delivery supporting adaption to viewing conditions
US10819962B2 (en) * 2012-12-28 2020-10-27 Apple Inc. Method of and system for projecting digital information on a real object in a real environment
EP3151048A1 (en) * 2014-07-01 2017-04-05 FotoNation Limited An image capture device
WO2016205419A1 (en) * 2015-06-15 2016-12-22 Flir Systems Ab Contrast-enhanced combined image generation systems and methods
US20170064287A1 (en) * 2015-08-24 2017-03-02 Itseez3D, Inc. Fast algorithm for online calibration of rgb-d camera
WO2017117517A1 (en) * 2015-12-30 2017-07-06 The Johns Hopkins University System and method for medical imaging
EP3411755A4 (en) * 2016-02-03 2019-10-09 Sportlogiq Inc. SYSTEMS AND METHODS FOR AUTOMATED CALIBRATION OF PHOTOGRAPHIC APPARATUS
CA2958010C (en) * 2016-02-19 2021-09-07 Covidien Lp System and methods for video-based monitoring of vital signs
KR20180071589A (ko) * 2016-12-20 2018-06-28 삼성전자주식회사 홍채 인식 기능 운용 방법 및 이를 지원하는 전자 장치
US10540784B2 (en) * 2017-04-28 2020-01-21 Intel Corporation Calibrating texture cameras using features extracted from depth images
CN107679481B (zh) * 2017-09-27 2021-09-14 Oppo广东移动通信有限公司 解锁控制方法及相关产品
CN109754427A (zh) * 2017-11-01 2019-05-14 虹软科技股份有限公司 一种用于标定的方法和装置
CN107820011A (zh) 2017-11-21 2018-03-20 维沃移动通信有限公司 拍照方法和拍照装置
US10762664B2 (en) * 2017-12-29 2020-09-01 Intel Corporation Multi-camera processor with feature matching
CN108090477A (zh) * 2018-01-23 2018-05-29 北京易智能科技有限公司 一种基于多光谱融合的人脸识别方法与装置

Also Published As

Publication number Publication date
US20200043198A1 (en) 2020-02-06
TWI709110B (zh) 2020-11-01
TW202008306A (zh) 2020-02-16
EP3608875A1 (en) 2020-02-12
US11158086B2 (en) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2020024576A1 (zh) 摄像头校准方法和装置、电子设备、计算机可读存储介质
CN109767467B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
TWI736883B (zh) 影像處理方法和電子設備
CN108805024B (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
CN109040745B (zh) 摄像头自校准方法和装置、电子设备、计算机存储介质
WO2019037088A1 (zh) 一种曝光的控制方法、装置以及无人机
CN108924426B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN109040746B (zh) 摄像头校准方法和装置、电子设备、计算机可读存储介质
WO2019205887A1 (zh) 控制拍摄的方法、装置、电子设备及计算机可读存储介质
CN109327626A (zh) 图像采集方法、装置、电子设备和计算机可读存储介质
US11019325B2 (en) Image processing method, computer device and readable storage medium
TWI509466B (zh) 物件辨識方法與裝置
CN111345029A (zh) 一种目标追踪方法、装置、可移动平台及存储介质
CN109584312B (zh) 摄像头标定方法、装置、电子设备和计算机可读存储介质
US11218650B2 (en) Image processing method, electronic device, and computer-readable storage medium
CN109068060B (zh) 图像处理方法和装置、终端设备、计算机可读存储介质
WO2020134123A1 (zh) 全景拍摄方法及装置、相机、移动终端
CN112470192A (zh) 双摄像头标定方法、电子设备、计算机可读存储介质
CN109559353A (zh) 摄像模组标定方法、装置、电子设备及计算机可读存储介质
US9807340B2 (en) Method and apparatus for providing eye-contact function to multiple points of attendance using stereo image in video conference system
WO2020015403A1 (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
CN109120846B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN109658459B (zh) 摄像头标定方法、装置、电子设备和计算机可读存储介质
JP7321772B2 (ja) 画像処理装置、画像処理方法、およびプログラム
CN109584313B (zh) 摄像头标定方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19843719

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19843719

Country of ref document: EP

Kind code of ref document: A1