CN110728714B - Image processing method and device, storage medium and electronic equipment - Google Patents

Image processing method and device, storage medium and electronic equipment

Info

Publication number
CN110728714B
CN110728714B (application CN201810778734.0A)
Authority
CN
China
Prior art keywords
camera
calibration
tee
image
result
Prior art date
Legal status
Active
Application number
CN201810778734.0A
Other languages
Chinese (zh)
Other versions
CN110728714A (en)
Inventor
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810778734.0A priority Critical patent/CN110728714B/en
Publication of CN110728714A publication Critical patent/CN110728714A/en
Application granted granted Critical
Publication of CN110728714B publication Critical patent/CN110728714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image processing method and apparatus, an electronic device and a storage medium, applied to an electronic device that supports a rich execution environment (REE) and a trusted execution environment (TEE). Images obtained by a camera shooting calibration plates at different angles are acquired from the REE; the acquired images are transmitted into the TEE through a data channel between the REE and the TEE; camera calibration is performed in the TEE according to the acquired images to obtain a camera calibration result; and the camera calibration result is output to the REE through the data channel between the REE and the TEE and stored in a secure area. Because the TEE is isolated from the REE and the REE can only communicate with the TEE through a specific entry, carrying out the whole camera calibration process in the TEE improves the security of information processing during camera calibration.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
Conventional intelligent terminals provide a common execution environment, the rich execution environment (Rich Execution Environment, REE), in which a wide variety of general-purpose operating systems can run. While such an execution environment brings great flexibility and functionality, intelligent terminals are also beginning to face a wide variety of security threats. In an age of increasing emphasis on network security, improving security during information processing has become a problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a storage medium and electronic equipment, which can improve the safety of information processing in the camera calibration process.
An image processing method applied to an electronic device supporting a rich execution environment, REE, and a trusted execution environment, TEE, the method comprising:
acquiring, from the REE, images obtained by a camera shooting calibration plates at different angles;
transmitting the acquired image into the TEE through a data channel between the REE and the TEE;
performing camera calibration in the TEE according to the acquired image to obtain a camera calibration result;
and outputting the camera calibration result to the REE through a data channel between the REE and the TEE, and storing the camera calibration result in a secure area.
An image processing apparatus, the apparatus comprising:
the calibration plate image acquisition module is used for acquiring, from the REE, images obtained by a camera shooting calibration plates at different angles;
a transmission module for transmitting the acquired image into the TEE through a data channel between the REE and the TEE;
the camera calibration module is used for performing camera calibration in the TEE according to the acquired image to obtain a camera calibration result;
and the output storage module is used for outputting the camera calibration result to the REE through a data channel between the REE and the TEE and storing the camera calibration result in a secure area.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the image processing method described above.
An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing the steps of the image processing method described above when executing the computer program.
The image processing method and apparatus, storage medium and electronic device above are applied to an electronic device that supports a rich execution environment REE and a trusted execution environment TEE. Images obtained by a camera shooting calibration plates at different angles are acquired from the REE; the acquired images are transmitted into the TEE through a data channel between the REE and the TEE; camera calibration is performed in the TEE according to the acquired images to obtain a camera calibration result; and the camera calibration result is output to the REE through the data channel between the REE and the TEE and stored in a secure area. Because the TEE is isolated from the REE and the REE can only communicate with the TEE through a specific entry, carrying out the whole camera calibration process in the TEE improves the security of information processing during camera calibration.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an application environment for dual camera calibration in one embodiment;
FIG. 2 is a flow chart of an image processing method in one embodiment;
FIG. 3 is a flow chart of the method of FIG. 2 for camera calibration based on acquired images in the TEE to obtain a camera calibration result;
FIG. 4 (a) is a flow chart of camera calibration of a camera of an electronic device at a computer PC end;
FIG. 4 (b) is a flow chart of camera calibration in a TEE environment of an electronic device;
FIG. 5 is a flowchart of a method for performing monocular calibration based on the Zhang's calibration method according to the result of feature point matching to obtain a monocular calibration matching result in one embodiment;
FIG. 6 is a flowchart of an image processing method in another embodiment;
FIG. 7 is a schematic diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 8 is a schematic structural view of an image processing apparatus in another embodiment;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
FIG. 1 is a schematic diagram of an application environment for dual camera calibration in one embodiment. As shown in fig. 1, the application environment includes a dual camera jig 110 and a calibration plate 120. The dual camera jig 110 is used for holding a dual camera module or an electronic device equipped with a dual camera module. The calibration plate 120 (chart) carries a chart pattern and can rotate to hold poses at different angles. The dual camera module or the electronic device on the jig 110 shoots the chart pattern on the calibration plate 120 at different distances and different angles, usually capturing images at no fewer than 3 angles. For example, in fig. 1 the optical axis of the dual camera module is perpendicular to the rotation axis of the calibration plate, and the calibration plate 120 rotates about the Y axis through three angles: 0 degrees and two further rotation angles of ±θ degrees, with θ greater than 15 degrees to ensure decoupling between the poses. If the calibration plate is a three-dimensional calibration plate, its three perpendicular planes can be shot directly to obtain calibration images at different angles.
In one embodiment, as shown in fig. 2, an image processing method is provided and applied to an electronic device that supports a rich execution environment REE and a trusted execution environment TEE. Taking the electronic device of fig. 1 as an example, the method includes:
and 220, acquiring images obtained by shooting calibration plates with different angles by a camera from the REE.
REE is short for Rich Execution Environment; common mobile phone software and PC software run in this environment, but the REE often suffers from information leakage. Hence a new concept emerged: the trusted execution environment (Trusted Execution Environment, TEE), which coexists with the rich execution environment and is dedicated to providing a secure area of the device for executing trusted code. To achieve real security, the TEE must ensure that all code executing inside it is highly reliable. The TEE has the following characteristics:
1. Protected by a hardware mechanism: the TEE is isolated from the REE, and the REE can only communicate with the TEE through a specific entry; 2. High performance: the TEE uses the full (exclusive) performance of the CPU at runtime; 3. Fast communication mechanism: the TEE can access the memory of the REE, while the REE cannot access the hardware-protected TEE memory; 4. Multiple Trusted Applications (TAs) may run simultaneously in the TEE. Based on these characteristics, using the TEE in fields such as mobile payment, fingerprint identification and face unlocking greatly improves information security.
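For orientation only, the "specific entry" between REE and TEE resembles the GlobalPlatform TEE Client API used by systems such as OP-TEE; the sketch below shows how an REE-side client might hand image data to a trusted application, assuming a hypothetical TA UUID and command ID (neither is specified by the patent):

```cpp
// Hedged sketch: an REE client passing calibration images to a TA through
// the GlobalPlatform TEE Client API. UUID and command ID are placeholders.
#include <tee_client_api.h>
#include <cstddef>
#include <cstdint>

static const TEEC_UUID kCalibTaUuid = {  // hypothetical TA UUID
    0x12345678, 0x0000, 0x0000, {0, 0, 0, 0, 0, 0, 0, 1}};
static const uint32_t kCmdCalibrate = 1; // hypothetical command ID

bool send_images_to_tee(const uint8_t* images, size_t len) {
    TEEC_Context ctx;
    TEEC_Session sess;
    uint32_t origin = 0;
    if (TEEC_InitializeContext(nullptr, &ctx) != TEEC_SUCCESS) return false;
    if (TEEC_OpenSession(&ctx, &sess, &kCalibTaUuid, TEEC_LOGIN_PUBLIC,
                         nullptr, nullptr, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return false;
    }
    TEEC_Operation op{};
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = const_cast<uint8_t*>(images);
    op.params[0].tmpref.size = len;
    TEEC_Result res = TEEC_InvokeCommand(&sess, kCmdCalibrate, &op, &origin);
    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return res == TEEC_SUCCESS;
}
```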
When calibrating the camera, the calibration plates at different angles are first shot by the camera to obtain the captured images. Generally, images of the calibration plate at no fewer than three different angles are required so that the subsequent camera calibration is more accurate. When changing the angle, one may move the calibration plate, move the camera, or move both at the same time. In the embodiment of the application, the calibration plate is moved while the camera stays still, so that camera shake does not affect the photographed images.
The geometric model of the camera can be obtained by shooting a flat plate bearing a fixed-spacing pattern array and running a calibration algorithm on the images, which yields high-precision measurement and reconstruction results. That flat plate with the fixed-spacing pattern array is the calibration plate. Typically the pattern is a checkerboard of alternating black and white rectangles, manufactured to high precision. Capturing images of the calibration plate at different angles is normally performed in the REE environment, so those images must first be obtained from the REE before they can be processed.
In step 240, the acquired image is transmitted to the TEE via a data channel between the REE and the TEE.
A specific data channel exists between the REE and the TEE, through which data meeting the required conditions can be transferred. The acquired images obtained by shooting the calibration plate are transmitted into the TEE through this channel. Camera calibration is then performed in the TEE on those images, so the whole calibration process and all data generated during it run in the TEE environment, which protects the data and reduces the risk of the data being attacked.
Step 260, performing camera calibration in the TEE according to the acquired image to obtain a camera calibration result.
Camera calibration is, simply put, the process of going from the world coordinate system to the image coordinate system, i.e., of finding the final projection matrix. The basic coordinate systems involved in camera calibration are: the world coordinate system, the camera coordinate system, and the image coordinate system, where the image coordinate system comprises an image physical coordinate system and an image pixel coordinate system. The goal of camera calibration is to obtain the camera's internal parameters, external parameters and distortion coefficients.
In the previous step, the image obtained by shooting the calibration plate in the REE was transmitted into the TEE through the data channel between the REE and the TEE. After the camera of the electronic device is calibrated in the TEE, a camera calibration result is obtained.
In step 280, the camera calibration result is output to the REE through a data channel between the REE and the TEE, and stored in a secure area.
After the camera of the electronic device is calibrated in the TEE and the camera calibration result obtained, the result is output to the REE through the data channel between the REE and the TEE and stored in a secure area, typically a storage space in the electronic device, for other applications to call.
In the embodiment of the application, the electronic device supports a rich execution environment REE and a trusted execution environment TEE. Images obtained by a camera shooting calibration plates at different angles are acquired from the REE; the acquired images are transmitted into the TEE through a data channel between the REE and the TEE; camera calibration is performed in the TEE according to the acquired images to obtain a camera calibration result; and the result is output to the REE through the data channel between the REE and the TEE and stored in a secure area. Because the TEE is isolated from the REE, which can only communicate with the TEE through a specific entry, performing the entire camera calibration process in the TEE improves the security of information processing during camera calibration.
In one embodiment, the camera includes a first camera and a second camera; acquiring, from the REE, images obtained by the camera shooting calibration plates at different angles includes:
acquiring, from the REE, images obtained by the first camera and the second camera simultaneously shooting the calibration plates at different angles.
Specifically, the camera is a dual camera comprising a first camera and a second camera. When calibrating the two cameras, the calibration plates at different angles are first shot simultaneously by the first camera and the second camera to obtain multiple groups of images. Generally, at least three groups of images of the calibration plate at three different angles are required so that the subsequent calibration is more accurate. Usually the calibration plate is adjusted to 10-20 different angles, and at each angle it is photographed by the first camera and the second camera, giving 10-20 groups of images.
In this embodiment of the application, the first camera and the second camera each shoot the calibration plate at the different angles to acquire multiple groups of images. Typically 10-20 groups are used for camera calibration, because too few images easily lead to an inaccurate camera calibration result, while selecting 10-20 groups greatly improves the accuracy of the result.
In one embodiment, as shown in fig. 3, performing camera calibration in the TEE according to the acquired image to obtain a camera calibration result, including:
in step 262, feature point matching is performed on the acquired image in the TEE.
Camera calibration in the TEE according to the acquired images can be performed as three-dimensional calibration, which establishes the camera's geometric imaging model from image feature points. Feature point matching is performed on each group of images obtained by the first camera and the second camera shooting the calibration plates at different angles. Call the image the first camera captures of the calibration plate at a certain angle the first image, and the image the second camera captures at the same angle the second image. Matching feature points between the first image and the second image yields a feature point matching result.
First, the feature points in the calibration image shot by the first camera that correspond to feature points on the calibration plate are detected; the corresponding feature points in the calibration image shot by the second camera are detected next; and for each feature point in the first camera's image, the corresponding same-name feature point is searched for in the second camera's image. Same-name feature points are defined as follows: a first feature point in the first camera's calibration image corresponds to a feature point on the calibration plate, and a second feature point in the second camera's calibration image corresponds to that same feature point on the plate; the plate feature point, the first feature point and the second feature point are same-name feature points. Such same-name feature points are detected in the calibration images shot by the first and second cameras.
If the pattern on the calibration plate is a checkerboard, detecting the feature points in a calibration image may include: obtaining initial corner values with the Harris corner detection operator; detecting edge information in the calibration image and grouping the obtained corner points to obtain an edge point set; and performing curve fitting on the selected edge points, both globally and locally, the intersections of the global and local curves giving the final corner points, which are the feature points of the calibration image.
In one embodiment, if the pattern of the calibration plate is elliptical or circular, detecting the feature points in a calibration image includes: extracting the elliptical edge information with Canny edge detection, then fitting the general equation of the ellipse by least squares to obtain its center point; the position of each ellipse in the image is represented by its center coordinates, by which the ellipse centers can be ordered.
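As a rough illustration of both detection paths above (OpenCV's built-in detectors stand in for the Harris and Canny procedures described; this is not the patent's own implementation):

```cpp
// Hedged sketch: detecting calibration-plate feature points with OpenCV.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Point2f> detect_plate_points(const cv::Mat& gray,
                                             cv::Size patternSize,
                                             bool isCircleGrid) {
    std::vector<cv::Point2f> pts;
    bool found = isCircleGrid
        ? cv::findCirclesGrid(gray, patternSize, pts)        // circle centers
        : cv::findChessboardCorners(gray, patternSize, pts); // corners
    if (found && !isCircleGrid) {
        // Refine checkerboard corners to sub-pixel accuracy.
        cv::cornerSubPix(gray, pts, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS +
                                          cv::TermCriteria::COUNT, 30, 0.01));
    }
    if (!found) pts.clear();
    return pts;
}
```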
Feature points are extracted from the calibration images shot by the first camera and the second camera respectively, giving the feature points of each calibration image. For any feature point in a calibration image of the first camera at a certain angle, the epipolar constraint can be used to search for its match in the second camera's calibration image at that same angle. For example, a point p in three-dimensional space projected onto two different planes L1 and L2 gives projected points p1 and p2 respectively, and p, p1 and p2 form a plane S in three-dimensional space. The intersection line n1 of S with plane L1 passes through p1 and is called the epipolar line corresponding to p2. The epipolar constraint says that, for the mappings of the same spatial point in two images, given the mapped point p1, the mapped point p2 lies on the epipolar line corresponding to p1.
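In standard notation (a general fact of stereo geometry rather than something specific to this patent), the constraint on the homogeneous pixel coordinates of matched points can be written as:

$$ \tilde{p}_2^{\,T} F\, \tilde{p}_1 = 0, \qquad F = K_2^{-T}\, [t]_{\times}\, R\, K_1^{-1} $$

where $K_1$ and $K_2$ are the two cameras' intrinsic matrices, R and t their relative rotation and translation, $[t]_{\times}$ the skew-symmetric matrix of t, and F the fundamental matrix; the match for $\tilde{p}_1$ is searched along the epipolar line $F\tilde{p}_1$ in the second image.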
Step 264, monocular calibration based on the Zhang calibration method is performed according to the feature point matching result, obtaining a monocular calibration result.
Zhang Zhengyou proposed a new method for solving the internal and external parameters of a camera that also handles radial distortion, namely the Zhang calibration method. The method lies between traditional calibration and self-calibration: the camera only needs to shoot a calibration plate from several different directions, and calibration proceeds through the correspondence between each feature point on the calibration plate and its image point on the image plane, i.e., the homography matrix of each image. Specifically, a homography matrix is calculated from the feature point matching result, and then the internal parameters, external parameters and distortion parameters of the two cameras are calculated from the homography matrices.
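For orientation, the monocular step just described (homographies from matched points, then intrinsics, extrinsics and distortion, refined by maximum likelihood) is what OpenCV's Zhang-style calibrateCamera performs; a minimal sketch, assuming the per-image feature points and their calibration-plate coordinates are already collected:

```cpp
// Hedged sketch: Zhang-style monocular calibration via OpenCV.
#include <opencv2/calib3d.hpp>
#include <vector>

double monocular_calibrate(
    const std::vector<std::vector<cv::Point3f>>& boardPoints, // per image
    const std::vector<std::vector<cv::Point2f>>& imagePoints, // per image
    cv::Size imageSize,
    cv::Mat& K, cv::Mat& distCoeffs,                // intrinsics, distortion
    std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& tvecs) { // extrinsics
    // Returns the RMS reprojection error of the final refinement.
    return cv::calibrateCamera(boardPoints, imagePoints, imageSize,
                               K, distCoeffs, rvecs, tvecs);
}
```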
Step 266, binocular calibration based on the Zhang calibration method is performed according to the monocular calibration result, obtaining a binocular calibration result.
The external parameters of the dual-camera module are calculated from the internal parameters, external parameters and distortion parameters of the two cameras. The dual-camera module consists of the first camera and the second camera, and its external parameters comprise the rotation matrix and the translation matrix between the first camera and the second camera.
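As a rough analogue of this binocular step (again, not the patent's implementation), OpenCV's stereoCalibrate estimates the inter-camera rotation R and translation T, assuming the monocular results for both cameras are available:

```cpp
// Hedged sketch: binocular calibration yielding the rotation R and
// translation T between the first and second cameras.
#include <opencv2/calib3d.hpp>
#include <vector>

double binocular_calibrate(
    const std::vector<std::vector<cv::Point3f>>& boardPoints,
    const std::vector<std::vector<cv::Point2f>>& firstPts,
    const std::vector<std::vector<cv::Point2f>>& secondPts,
    cv::Mat& K1, cv::Mat& d1, cv::Mat& K2, cv::Mat& d2,
    cv::Size imageSize, cv::Mat& R, cv::Mat& T) {
    cv::Mat E, F; // essential and fundamental matrices, also estimated
    // CALIB_FIX_INTRINSIC keeps the monocular intrinsics fixed and solves
    // only for the inter-camera extrinsics, matching the flow above.
    return cv::stereoCalibrate(boardPoints, firstPts, secondPts,
                               K1, d1, K2, d2, imageSize, R, T, E, F,
                               cv::CALIB_FIX_INTRINSIC);
}
```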
As shown in fig. 4(a), a flowchart of calibrating the camera of an electronic device on a computer (PC) is provided. The whole process runs in the REE environments of the PC and the electronic device. The PC controls the main camera and secondary camera of the electronic device (such as a smartphone) to shoot several groups of images of the calibration plates at different angles. The captured images are then transmitted to the PC, the entire camera calibration process is carried out there, and finally the camera calibration result is imported into the corresponding location on the electronic device for storage.
As shown in fig. 4(b), a flowchart of camera calibration in the TEE environment of an electronic device is provided. First, the two cameras of the electronic device shoot the calibration plates at different angles in the REE environment to obtain several groups of images; the electronic device reads the corresponding images inside the TEE and performs preprocessing and related steps; feature points are searched for in the TEE; monocular calibration based on the Zhang method is carried out in the TEE; binocular calibration based on the Zhang method is carried out in the TEE; and the calibration information is stored in the corresponding Persist partition. The source data, intermediate data and calibration result information are all handled inside the safer TEE environment, so the method is not easily attacked.
In the embodiment of the application, monocular calibration based on the Zhang calibration method is performed first, followed by binocular calibration. The Zhang method's calibration template is easy to manufacture, convenient to use, low-cost, robust and accurate, so the accuracy of the resulting rotation matrix and translation matrix between the first camera and the second camera is improved.
In one embodiment, as shown in fig. 5, monocular calibration based on the Zhang calibration method is performed according to the feature point matching result to obtain a monocular calibration result, which includes:
Step 264a, calculating a homography matrix according to the result of feature point matching;
step 264b, respectively calculating internal parameters, external parameters and distortion parameters of the two cameras according to the homography matrix.
Feature point matching is performed on each group of images obtained by the first camera and the second camera simultaneously shooting the calibration plates at different angles. A feature must be found that remains unchanged across each group of images, and that invariant feature is used to locate the same object in every image of the group.
For better image matching, representative regions should be selected in each group of images, for example corner points, edges and certain blocks in the image; of these, corner points are the easiest to identify, that is, their degree of recognition is highest. Therefore, many computer vision methods extract corner points as the features with which images are matched.
The internal parameters of a single camera may include $f_x$, $f_y$, $c_x$ and $c_y$, where $f_x$ is the focal length expressed in unit pixels along the x-axis of the image coordinate system, $f_y$ is the focal length expressed in unit pixels along the y-axis, and $(c_x, c_y)$ are the principal point coordinates of the image plane, the principal point being the intersection of the optical axis and the image plane. Here $f_x = f/d_x$ and $f_y = f/d_y$, where f is the focal length of the single camera, $d_x$ is the width of one pixel along the x-axis of the image coordinate system and $d_y$ is the width of one pixel along the y-axis. The image coordinate system is established on the two-dimensional image shot by the camera and specifies the position of an object in that image. The origin of its (x, y) coordinate system lies at the intersection $(c_x, c_y)$ of the camera optical axis with the imaging plane, in units of length (meters), while the origin of the (u, v) pixel coordinate system lies at the upper-left corner of the image, in units of pixels. (x, y) characterizes the perspective projection of the object from the camera coordinate system onto the image plane, and (u, v) gives the pixel coordinates. The conversion between (x, y) and (u, v) is formula (1):
$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & c_x \\ 0 & 1/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (1) $$
perspective projection refers to a single-sided projection view obtained by projecting a body onto a projection surface by a center projection method, thereby achieving a visual effect.
The external parameters of a single camera comprise the rotation matrix and the translation matrix that convert coordinates in the world coordinate system into coordinates in the camera coordinate system. The world coordinate system reaches the camera coordinate system through a rigid transformation, and the camera coordinate system reaches the image coordinate system through a perspective projection transformation. A rigid transformation is the motion of rotating and translating a geometric object in three-dimensional space without deforming it. The rigid body transformation is formula (2):
$$ X_c = R X + T \quad (2) $$
or, in homogeneous form,
$$ \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} $$
where $X_c$ denotes coordinates in the camera coordinate system, X denotes coordinates in the world coordinate system, R denotes the rotation matrix from the world coordinate system to the camera coordinate system, and T denotes the translation matrix from the world coordinate system to the camera coordinate system. The offset between the origin of the world coordinate system and the origin of the camera coordinate system is jointly controlled by the components along the x, y and z directions, giving three degrees of freedom, and R is the combined effect of rotations about the X, Y and Z axes respectively. $t_x$, $t_y$ and $t_z$ denote the translation along the x-axis, y-axis and z-axis respectively.
The world coordinate system is an absolute coordinate system of objective three-dimensional space and can be established at any position. For example, for each calibration image the world coordinate system may take the upper-left corner point of the calibration plate as origin, the calibration plate plane as the XY plane, and the Z axis pointing up perpendicular to the plate. The camera coordinate system takes the camera optical center as origin and the camera optical axis as Z axis, with its X and Y axes parallel to the X and Y axes of the image coordinate system respectively. The principal point, i.e., the intersection of the optical axis and the image plane, is the origin of the image coordinate system, while the pixel coordinate system has its origin at the upper-left corner of the image plane.
Calibration images are obtained by shooting the calibration plates at different angles with a single camera, and feature points are extracted from them. The 5 internal parameters and 2 external parameters of the single camera are first calculated assuming no distortion, the distortion coefficients are then obtained by least squares, and maximum likelihood optimization finally yields the single camera's final internal and external parameters.
First, the camera model is established, giving formula (3):
$$ s\,\tilde{m} = A\,[R\ \ T]\,\tilde{M} \quad (3) $$
where the homogeneous vector $\tilde{m} = (u, v, 1)^T$ represents the pixel coordinates of the image plane, the homogeneous vector $\tilde{M} = (X, Y, Z, 1)^T$ represents a coordinate point of the world coordinate system, A denotes the internal reference matrix, R denotes the rotation matrix from the world coordinate system to the camera coordinate system, and T denotes the corresponding translation matrix. The internal reference matrix is formula (4):
$$ A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \quad (4) $$
where $\alpha = f/d_x$ and $\beta = f/d_y$, f is the focal length of the single camera, $d_x$ is the width of one pixel along the x-axis of the image coordinate system and $d_y$ is the width of one pixel along the y-axis. γ represents the skew of the pixel axes in the x and y directions, and $(u_0, v_0)$ are the principal point coordinates of the image plane, the principal point being the intersection of the optical axis and the image plane.
The world coordinate system is constructed on the plane Z = 0; substituting Z = 0 and carrying out the homography calculation converts formula (3) into formula (5):
$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[r_1\ \ r_2\ \ t] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \quad (5) $$
A homography, as defined in computer vision, is a projective mapping from one plane to another. Let $H = \lambda A [r_1\ r_2\ t]$; H is the homography matrix. H is a 3×3 matrix defined in homogeneous coordinates (up to scale), so it has 8 unknowns to solve. Writing the homography matrix as three column vectors, $H = [h_1\ h_2\ h_3]$, gives formula (6):
$$ [h_1\ \ h_2\ \ h_3] = \lambda A\,[r_1\ \ r_2\ \ t] \quad (6) $$
Two constraints are applied to formula (6). First, $r_1$ and $r_2$ are orthogonal, so $r_1^T r_2 = 0$ ($r_1$ and $r_2$ are the rotation components about the x and y axes respectively). Second, the rotation vectors have unit modulus: $\|r_1\| = \|r_2\| = 1$. Expressing $r_1$ and $r_2$ through $h_1$, $h_2$ and A, i.e., $r_1 = \lambda A^{-1} h_1$ and $r_2 = \lambda A^{-1} h_2$, the two constraints yield formula (7):
$$ h_1^T A^{-T} A^{-1} h_2 = 0, \qquad h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2 \quad (7) $$
order the
Figure BDA0001732037590000113
B is a symmetrical array, so that the effective elements of B are 6, and 6 elements form a vector B.
b=[B 11 ,B 12 ,B 22 ,B 13 ,B 23 ,B 33 ] T
Figure BDA0001732037590000114
Can calculate V ij =[h i1 h j1 ,h i1 h j2 +h i2 h j1 ,h i2 h j2 ,h i3 h j1 +h i1 h j3 ,h i3 h j2 +h i2 h j3 ,h i3 h j3 ] T
Obtaining an equation set by using constraint conditions:
Figure BDA0001732037590000115
and (3) estimating B through at least three images by applying a formula (8), and decomposing B to obtain an initial value of an internal reference matrix A of the camera.
The external reference matrix is then calculated from the internal reference matrix, giving its initial value by formula (9):
$$ r_1 = \lambda A^{-1} h_1, \quad r_2 = \lambda A^{-1} h_2, \quad r_3 = r_1 \times r_2, \quad t = \lambda A^{-1} h_3 \quad (9) $$
where $\lambda = 1/\|A^{-1} h_1\| = 1/\|A^{-1} h_2\|$.
The complete geometric model of the camera uses formula (10):
$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ 0 \end{bmatrix} + T \quad (10) $$
Formula (10) is the geometric model obtained by constructing the world coordinate system on the plane Z = 0; X and Y are the world coordinates of the feature points on the planar calibration plate, and x, y, z are the physical coordinates of those feature points in the camera coordinate system. R is the rotation matrix from the calibration plate's world coordinate system to the camera coordinate system, and T is the translation matrix from the calibration plate's world coordinate system to the camera coordinate system.
The physical coordinates [x, y, z] of the feature points in the camera coordinate system are normalized to obtain the target coordinate points (x', y'), formula (11):
$$ x' = x/z, \qquad y' = y/z \quad (11) $$
The distortion model is then applied to the image points of the camera coordinate system, formula (12) (written here with the two radial coefficients $k_1$, $k_2$ that also appear in the maximum likelihood formula below):
$$ x'' = x'(1 + k_1 r^2 + k_2 r^4), \qquad y'' = y'(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x'^2 + y'^2 \quad (12) $$
The internal references then convert the physical coordinates into image coordinates, formula (13):
$$ u = f_x x'' + c_x, \qquad v = f_y y'' + c_y \quad (13) $$
The initial values of the internal and external reference matrices are imported into the maximum likelihood formula to obtain the final internal and external reference matrices, i.e., the minimum of
$$ \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| m_{ij} - \hat{m}(A, k_1, k_2, R_i, t_i, M_j) \right\|^2 $$
is sought, where $m_{ij}$ is the observed projection of calibration point $M_j$ in image i and $\hat{m}$ is the projection predicted by the model.
In this embodiment, the above process of computing the internal parameters, external parameters and distortion parameters by the Zhang calibration method is performed in the TEE environment. Because the TEE environment is a safer software environment than the REE environment, the whole computation and the data information generated during it can be protected, reducing the risk of the data being attacked or tampered with.
In one embodiment, performing binocular calibration based on a Zhang's calibration method according to a monocular calibration result to obtain a binocular calibration result, including:
The external parameters of the dual-camera module are calculated from the internal parameters, external parameters and distortion parameters of the two cameras; the dual-camera module consists of the first camera and the second camera, and its external parameters comprise the rotation matrix and the translation matrix between the first camera and the second camera.
Specifically, the dual-camera module includes a first camera and a second camera. Both may be color cameras, both may be black-and-white cameras, or one may be black-and-white and the other color. Dual-camera calibration means determining the external parameter values of the dual-camera module, namely the rotation matrix between the two cameras and the translation matrix between the two cameras; both can be obtained by formula (14):
$$ R' = R_r R_l^{-1}, \qquad T' = T_r - R' T_l \quad (14) $$
where R' is the rotation matrix between the two cameras and T' is the translation matrix between the two cameras. $R_r$ is the rotation matrix of the first camera relative to the calibration object obtained by calibration (i.e., the rotation matrix converting the calibration object's coordinates in the world coordinate system into coordinates in the first camera's camera coordinate system), and $T_r$ is the translation matrix of the first camera relative to the calibration object obtained by calibration (i.e., the translation matrix converting the calibration object's world coordinates into the first camera's coordinate system). $R_l$ is the rotation matrix of the second camera relative to the calibration object (converting the calibration object's world coordinates into the second camera's coordinate system), and $T_l$ is the translation matrix of the second camera relative to the calibration object (converting the calibration object's world coordinates into the second camera's coordinate system).
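A tiny sketch of formula (14) itself, assuming the per-camera extrinsics for one calibration-plate pose are available as cv::Mat:

```cpp
// Hedged sketch: formula (14), inter-camera extrinsics from the two
// cameras' per-view extrinsics (R_r, T_r) and (R_l, T_l).
#include <opencv2/core.hpp>

void inter_camera_extrinsics(const cv::Mat& Rr, const cv::Mat& Tr,
                             const cv::Mat& Rl, const cv::Mat& Tl,
                             cv::Mat& R, cv::Mat& T) {
    R = Rr * Rl.t();  // R' = Rr * Rl^{-1}; Rl is orthonormal, so inverse = transpose
    T = Tr - R * Tl;  // T' = Tr - R' * Tl
}
```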
In this embodiment, the above process of calculating the rotation matrix and translation matrix between the two cameras by the Zhang calibration method is likewise performed in the TEE environment, so the whole computation and the data information it generates are protected, and the risk of the data being attacked or tampered with is reduced.
In one embodiment, the secure area is a Persist partition.
In this embodiment, after the camera calibration result is obtained, it is output to the REE through the data channel between the REE and the TEE and stored in the secure area, typically a storage space such as a Persist partition. The Persist partition stores information for the FRP (Factory Reset Protect) function, such as protected account numbers and passwords, so that they are not wiped after a factory reset. Because the camera calibration result is stored in the Persist partition, it is not cleared even if the user restores factory settings or roots the electronic device, ensuring that image processing based on the camera calibration result still works afterwards.
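Purely for illustration (the patent does not specify a file layout, and the path below is a hypothetical placeholder), persisting the result could be as simple as writing the calibration blob to a file on that partition:

```cpp
// Hedged sketch: storing the calibration blob on the Persist partition.
// "/persist/camera_calib.bin" is a hypothetical path, not from the patent.
#include <fstream>
#include <vector>
#include <cstdint>

bool store_calibration(const std::vector<uint8_t>& blob) {
    std::ofstream out("/persist/camera_calib.bin",
                      std::ios::binary | std::ios::trunc);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(blob.data()),
              static_cast<std::streamsize>(blob.size()));
    return out.good();
}
```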
In one embodiment, as shown in fig. 6, after the camera calibration result is output to the REE through a data channel between the REE and the TEE and stored in the secure area, the method includes:
step 620, obtaining a camera calibration result;
step 640, transmitting the obtained camera calibration result to the TEE through a data channel between the REE and the TEE;
step 660, parsing the camera calibration result in the TEE to obtain a parsed result, and processing the shot image according to the parsed result to obtain a processing result;
in step 680, the processing result is transmitted to the REE through the data channel between the REE and the TEE.
Specifically, after the camera is calibrated in the TEE environment, a camera calibration result is generated and stored in a secure area. The acquired camera calibration result is later transmitted back into the TEE through the data channel between the REE and the TEE, the specific channel through which qualifying data can pass between the two environments. The camera calibration result is parsed in the TEE environment to obtain the distortion correction information and the binocular calibration information, i.e., the rotation and translation information.
The captured original image is acquired and processed in the TEE according to the parsed result to obtain a processing result, which is transmitted to the REE through the data channel between the REE and the TEE. Because processing the original image according to the parsed result also runs in the TEE environment, data security is further improved. For example, for applications such as face unlocking, after the original image of the photographed face is acquired, the camera calibration result is parsed, and the reconstruction and correction of the original image according to the parsed distortion correction and binocular calibration information are carried out in the TEE environment. Only the face unlocking result needs to be output, for example whether unlocking succeeded or failed.
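Purely as an illustration of "processing the shot image according to the parsed result" (the patent does not prescribe a specific API), the parsed intrinsics, distortion coefficients and stereo extrinsics can drive standard undistortion and rectification, sketched here with OpenCV:

```cpp
// Hedged sketch: using a parsed calibration result to rectify a stereo pair.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

void rectify_pair(const cv::Mat& K1, const cv::Mat& d1,
                  const cv::Mat& K2, const cv::Mat& d2,
                  const cv::Mat& R, const cv::Mat& T, cv::Size size,
                  const cv::Mat& img1, const cv::Mat& img2,
                  cv::Mat& out1, cv::Mat& out2) {
    cv::Mat R1, R2, P1, P2, Q, m1x, m1y, m2x, m2y;
    // Compute rectification transforms from the binocular calibration result.
    cv::stereoRectify(K1, d1, K2, d2, size, R, T, R1, R2, P1, P2, Q);
    cv::initUndistortRectifyMap(K1, d1, R1, P1, size, CV_32FC1, m1x, m1y);
    cv::initUndistortRectifyMap(K2, d2, R2, P2, size, CV_32FC1, m2x, m2y);
    cv::remap(img1, out1, m1x, m1y, cv::INTER_LINEAR);
    cv::remap(img2, out2, m2x, m2y, cv::INTER_LINEAR);
}
```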
In this embodiment, the camera calibration process is performed in the TEE environment, generating a camera calibration result that is stored in the secure area. The processes of parsing and using the camera calibration result are likewise performed in the TEE environment, so the security of the whole flow, from camera calibration to use of the calibration result, is guaranteed to the greatest extent.
In one embodiment, as shown in fig. 7, there is provided an image processing apparatus 700 including: a calibration plate image acquisition module 710, a transmission module 720, a camera calibration module 730, and an output storage module 740. Wherein:
the calibration plate image acquisition module 710 is configured to acquire, from the REE, images obtained by a camera shooting calibration plates at different angles;
the transmission module 720 is configured to transmit the acquired images into the TEE through a data channel between the REE and the TEE;
the camera calibration module 730 is configured to perform camera calibration in the TEE according to the acquired images to obtain a camera calibration result;
and the output storage module 740 is configured to output the camera calibration result to the REE through the data channel between the REE and the TEE and store it in a secure area.
In one embodiment, the calibration plate image acquisition module 710 is further configured to acquire, from the REE, images obtained by the first camera and the second camera simultaneously shooting the calibration plates at different angles.
In one embodiment, the camera calibration module 730 is further configured to perform feature point matching on the acquired images in the TEE; perform monocular calibration based on the Zhang calibration method according to the feature point matching result to obtain a monocular calibration result; and perform binocular calibration based on the Zhang calibration method according to the monocular calibration result to obtain a binocular calibration result.
In one embodiment, the camera calibration module 730 is further configured to calculate a homography matrix according to the result of the feature point matching; and respectively calculating internal parameters, external parameters and distortion parameters of the two cameras according to the homography matrix.
In one embodiment, the camera calibration module 730 is further configured to calculate an external parameter of the dual-camera module according to the internal parameter, the external parameter, and the distortion parameter of the two cameras, where the dual-camera module is composed of the first camera and the second camera, and the external parameter of the dual-camera module includes a rotation matrix and a translation matrix between the first camera and the second camera.
In one embodiment, as shown in fig. 8, the image processing apparatus 700 further comprises: a camera calibration result acquisition module, a camera calibration result transmission module, an analysis module and an image processing result transmission module, wherein:
a camera calibration result obtaining module 750, configured to obtain a camera calibration result;
the camera calibration result transmission module 760 is configured to transmit the obtained camera calibration result to the TEE through a data channel between the REE and the TEE;
the analysis module 770 is configured to analyze the camera calibration result in the TEE to obtain an analysis result, and process the captured image according to the analysis result to obtain a processing result;
The image processing result transmission module 780 is configured to transmit the processing result to the REE through a data channel between the REE and the TEE.
The above-described division of the respective modules in the image processing apparatus is merely for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to accomplish all or part of the functions of the above-described image processing apparatus.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the image processing method provided by the above embodiments.
In one embodiment, an electronic device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the image processing method provided in each of the above embodiments when the computer program is executed by the processor.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the steps of the image processing method provided by the above embodiments.
The embodiment of the application also provides an electronic device, which may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. Taking a mobile phone as an example: the electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes a first ISP processor 930, a second ISP processor 940, and a control logic 950. The first camera 910 includes one or more first lenses 912 and a first image sensor 914. The first image sensor 914 may include a color filter array (e.g., bayer filters), and the first image sensor 914 may obtain light intensity and wavelength information captured with each imaging pixel of the first image sensor 914 and provide a set of image data that may be processed by the first ISP processor 930. The second camera 920 includes one or more second lenses 922 and a second image sensor 924. The second image sensor 924 may include a color filter array (e.g., bayer filter), and the second image sensor 924 may obtain light intensity and wavelength information captured with each imaging pixel of the second image sensor 924 and provide a set of image data that may be processed by the second ISP processor 940.
The first image collected by the first camera 910 is transmitted to the first ISP processor 930 for processing. After the first ISP processor 930 processes the first image, statistical data of the first image (such as image brightness, image contrast value, image color, etc.) may be sent to the control logic 950, and the control logic 950 may determine the control parameters of the first camera 910 according to the statistical data, so that the first camera 910 may perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in the image memory 960 after being processed by the first ISP processor 930, and the first ISP processor 930 may also read the image stored in the image memory 960 for processing. In addition, the first image may be processed by the first ISP processor 930 and then sent directly to the display 970 for display, and the display 970 may also read the image in the image memory 960 for display.
Wherein the first ISP processor 930 processes the image data on a pixel-by-pixel basis in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 930 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth calculation accuracy.
Image memory 960 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include DMA (Direct Memory Access ) features.
Upon receiving image data from the interface of the first image sensor 914, the first ISP processor 930 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 960 for additional processing before being displayed. The first ISP processor 930 receives the processing data from the image memory 960 and performs image data processing in the RGB and YCbCr color spaces on it. The image data processed by the first ISP processor 930 may be output to the display 970 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the first ISP processor 930 may also be sent to the image memory 960, and the display 970 may read image data from the image memory 960. In one embodiment, the image memory 960 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 930 may be sent to the control logic 950. For example, the statistics may include first image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the first camera 910 and control parameters of the first ISP processor 930 based on the received statistics. For example, the control parameters of the first camera 910 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 912 control parameters (e.g., focal length for focusing or zooming), combinations of these parameters, or the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balancing and color adjustment (e.g., during RGB processing), as well as first lens 912 shading correction parameters.
Similarly, the second image collected by the second camera 920 is transmitted to the second ISP processor 940 for processing. After the second ISP processor 940 processes the second image, statistical data of the second image (such as image brightness, image contrast value, image color, etc.) may be sent to the control logic 950, and the control logic 950 may determine the control parameters of the second camera 920 according to the statistical data, so that the second camera 920 may perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 960 after being processed by the second ISP processor 940, and the second ISP processor 940 may also read the image stored in the image memory 960 for processing. In addition, the second image may be processed by the second ISP processor 940 and then sent directly to the display 970 for display, and the display 970 may also read the image in the image memory 960 for display. The second camera 920 and the second ISP processor 940 may also implement the processes described for the first camera 910 and the first ISP processor 930.
The steps of the image processing method above can be implemented using the image processing circuit of fig. 9.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The above examples represent only a few embodiments of the present application; although they are described in some detail, they are not to be construed as limiting the scope of the application. It should be noted that various modifications and improvements may be made by those skilled in the art without departing from the spirit of the present application, and such modifications fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (10)

1. An image processing method, applied to an electronic device, the electronic device supporting a rich execution environment REE and a trusted execution environment TEE, the method comprising:
acquiring, from the REE, images obtained by a camera shooting calibration plates at different angles;
transmitting the acquired images to the TEE through a data channel between the REE and the TEE;
performing feature point matching on the acquired images in the TEE to obtain a feature point matching result; calculating a homography matrix according to the feature point matching result; calculating a monocular calibration result for each camera according to the homography matrix, the monocular calibration results comprising internal parameters, external parameters, and distortion parameters of the two cameras; and performing binocular calibration based on Zhang's calibration method according to the monocular calibration results to obtain a binocular calibration result;
outputting the binocular calibration result to the REE through the data channel between the REE and the TEE, and storing the binocular calibration result in a secure area, wherein the secure area is a Persist partition.
2. The method of claim 1, wherein the camera comprises a first camera and a second camera;
wherein acquiring, from the REE, images obtained by the camera shooting calibration plates at different angles comprises:
acquiring, from the REE, a first image and a second image obtained by the first camera and the second camera simultaneously shooting the calibration plates at different angles.
3. The method according to claim 2, wherein performing feature point matching on the acquired images in the TEE to obtain a feature point matching result comprises:
detecting, in the TEE, first feature points in the first image corresponding to the feature points in the calibration plate;
detecting, in the TEE, second feature points in the second image corresponding to the feature points in the calibration plate, and obtaining a plurality of homonymous feature points based on a plurality of groups of the first feature points and the second feature points;
and taking the homonymous feature points as the feature point matching result.
4. The method according to claim 2, wherein performing feature point matching on the acquired images in the TEE to obtain a feature point matching result comprises:
extracting feature points from the first image and the second image respectively, to obtain the feature points in the first image and the feature points in the second image;
and for any feature point in the first image, searching and matching in the second image under an epipolar constraint to obtain a second feature point matched with that feature point (an illustrative sketch follows the claims).
5. The method according to claim 1, wherein performing the binocular calibration based on Zhang's calibration method according to the monocular calibration results to obtain the binocular calibration result comprises:
calculating external parameters of the dual-camera module according to the internal parameters, external parameters, and distortion parameters of the two cameras, wherein the dual-camera module consists of the first camera and the second camera, and the external parameters of the dual-camera module comprise a rotation matrix and a translation matrix between the first camera and the second camera.
6. The method of claim 1, wherein after outputting the camera calibration result to the REE through the data channel between the REE and the TEE and storing it in the secure area, the method further comprises:
acquiring the camera calibration result;
transmitting the acquired camera calibration result to the TEE through the data channel between the REE and the TEE;
analyzing the camera calibration result in the TEE to obtain an analysis result, and processing a captured image according to the analysis result to obtain a processing result;
and transmitting the processing result to the REE through the data channel between the REE and the TEE.
7. An image processing apparatus, characterized in that the apparatus comprises:
a calibration plate image acquisition module, used for acquiring, from the REE, images obtained by the camera shooting calibration plates at different angles;
a transmission module, used for transmitting the acquired images to the TEE through a data channel between the REE and the TEE;
a camera calibration module, used for performing feature point matching on the acquired images in the TEE to obtain a feature point matching result; calculating a homography matrix according to the feature point matching result; calculating a monocular calibration result for each camera according to the homography matrix, the monocular calibration results comprising internal parameters, external parameters, and distortion parameters of the two cameras; and performing binocular calibration based on Zhang's calibration method according to the monocular calibration results to obtain a binocular calibration result;
and an output storage module, used for outputting the binocular calibration result to the REE through the data channel between the REE and the TEE and storing the binocular calibration result in a secure area.
8. The apparatus of claim 7, wherein the camera comprises a first camera and a second camera; the calibration plate image acquisition module is further used for acquiring, from the REE, a first image and a second image obtained by the first camera and the second camera shooting the calibration plates at different angles.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
10. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image processing method according to any of claims 1 to 6.
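The epipolar-constrained search recited in claim 4 may be sketched, by way of illustration only, as follows. The fundamental matrix F is assumed to be available (e.g., from the calibration results), and the descriptor-distance matcher, band width, and array layouts are assumptions for illustration rather than a claimed implementation:

```python
# Illustrative sketch only (cf. claim 4): matching one feature point in the
# first image against second-image candidates near its epipolar line.
# The fundamental matrix F and the descriptor arrays are assumed inputs.
import numpy as np

def epipolar_match(pt1, desc1, pts2, descs2, F, band=2.0):
    # Epipolar line l2 = F @ [x, y, 1] in image 2 for the point in image 1
    a, b, c = F @ np.array([pt1[0], pt1[1], 1.0])
    norm = np.hypot(a, b)
    best, best_dist = None, np.inf
    for p2, d2 in zip(pts2, descs2):
        # Point-to-line distance |a*x + b*y + c| / sqrt(a^2 + b^2)
        if abs(a * p2[0] + b * p2[1] + c) / norm > band:
            continue                       # violates the epipolar constraint
        dist = np.linalg.norm(desc1 - d2)  # descriptor distance
        if dist < best_dist:
            best, best_dist = p2, dist
    return best
```

Restricting candidates to a narrow band around the epipolar line reduces the two-dimensional search to an approximately one-dimensional one, which both speeds up matching and rejects many false correspondences.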
CN201810778734.0A 2018-07-16 2018-07-16 Image processing method and device, storage medium and electronic equipment Active CN110728714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810778734.0A CN110728714B (en) 2018-07-16 2018-07-16 Image processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110728714A (en) 2020-01-24
CN110728714B (en) 2023-06-20

Family

ID=69217301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810778734.0A Active CN110728714B (en) 2018-07-16 2018-07-16 Image processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110728714B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046846A (en) * 2006-12-31 2007-10-03 北京交通大学 Collection, recognition device and method for palm print image information
CN101231750A (en) * 2008-02-21 2008-07-30 南京航空航天大学 Calibrating method of binocular three-dimensional measuring system
CN104350733A (en) * 2012-07-13 2015-02-11 英特尔公司 Context based management for secure augmented reality applications
WO2014048744A1 (en) * 2012-09-28 2014-04-03 St-Ericsson Sa Method and apparatus for maintaining secure time
CN106200891A (en) * 2015-05-08 2016-12-07 阿里巴巴集团控股有限公司 The display method of user interface, Apparatus and system
CN107924436A (en) * 2015-08-17 2018-04-17 高通股份有限公司 Control is accessed using the electronic device of biological identification technology
WO2018072713A1 (en) * 2016-10-19 2018-04-26 北京豆荚科技有限公司 Communication system and electronic device
CN107123147A (en) * 2017-03-31 2017-09-01 深圳市奇脉电子技术有限公司 Scaling method, device and the binocular camera system of binocular camera
CN107633536A (en) * 2017-08-09 2018-01-26 武汉科技大学 A kind of camera calibration method and system based on two-dimensional planar template
CN107958469A (en) * 2017-12-28 2018-04-24 北京安云世纪科技有限公司 A kind of scaling method of dual camera, device, system and mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Security Analysis of Applying TEE Technology to Biometric Recognition Scenarios on Smart Devices; Wei Fanxing et al.; Mobile Communications; 2017-11-15 (No. 21); full text *
Research on a Monocular Camera Calibration Algorithm; He Meilin et al.; Digital Communication World; 2018-05-01 (No. 05); full text *

Similar Documents

Publication Publication Date Title
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109737874B (en) Object size measuring method and device based on three-dimensional vision technology
US9986224B2 (en) System and methods for calibration of an array camera
JP6585006B2 (en) Imaging device and vehicle
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN106815869B (en) Optical center determining method and device of fisheye camera
WO2019232793A1 (en) Two-camera calibration method, electronic device and computer-readable storage medium
CN111028205B (en) Eye pupil positioning method and device based on binocular distance measurement
CN109598763B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107808398B (en) Camera parameter calculation device, calculation method, program, and recording medium
TW201340035A (en) Method for combining images
CN109584312B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN109598764A (en) Camera calibration method and device, electronic equipment, computer readable storage medium
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109559352B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
WO2023236508A1 (en) Image stitching method and system based on billion-pixel array camera
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN109697737B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109584311B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109658459B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN111353945B (en) Fisheye image correction method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant