WO2020010945A1 - Image processing method and apparatus, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
WO2020010945A1
Authority
WO
WIPO (PCT)
Prior art keywords
transfer function
hypotenuse
calibration
modulation transfer
pattern
Prior art date
Application number
PCT/CN2019/088882
Other languages
French (fr)
Chinese (zh)
Inventor
陈岩 (Chen Yan)
Original Assignee
Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Publication of WO2020010945A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present application relates to the field of imaging, and in particular, to an image processing method and device, an electronic device, and a computer-readable storage medium.
  • the embodiments of the present application provide an image processing method and device, an electronic device, and a computer-readable storage medium, which can detect images that meet requirements and improve the accuracy of subsequent calibrations.
  • An image processing method includes:
  • a calibration board including a carrier and a preset pattern provided on the carrier, wherein the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and a gap exists between the hypotenuse pattern and the calibration pattern.
  • An image processing device includes:
  • An image acquisition module for acquiring a calibration image
  • a detection module configured to detect the calibration image to obtain a hypotenuse region
  • a parameter acquisition module configured to acquire a first modulation transfer function of the hypotenuse region
  • a reading module configured to read a second modulation transfer function of a central region of the calibration image
  • a ratio obtaining module configured to obtain a ratio of the first modulation transfer function to the second modulation transfer function
  • a determining module configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
  • An electronic device includes a memory and a processor. The memory stores a computer program; when the computer program is executed by the processor, the processor is caused to perform the operations of the image processing method.
  • A computer-readable storage medium stores a computer program thereon; when the computer program is executed by a processor, the operations of the image processing method are implemented.
  • the image processing method and apparatus, electronic device, and computer-readable storage medium of the embodiments of the present application obtain a hypotenuse region by detecting the calibration image, obtain a first modulation transfer function of the hypotenuse region, and then obtain a second modulation transfer function of the central region of the calibration image.
  • When the ratio of the first modulation transfer function to the second modulation transfer function exceeds a threshold, the sharpness of the calibration image meets the preset condition, so a calibration image that meets the sharpness criterion is selected, and subsequent calibration based on it improves the calibration accuracy.
  • FIG. 1 is a schematic diagram of an application environment of dual camera calibration in an embodiment.
  • FIG. 2 is a schematic diagram of a chart of a conventional calibration board in an embodiment.
  • FIG. 3 is a schematic diagram of a captured calibration image that is entirely blurred in an embodiment.
  • FIG. 4 is a schematic diagram of a partially blurred calibration image taken in an embodiment.
  • FIG. 5 shows feature points obtained by detecting the chart in FIG. 2.
  • FIG. 6 shows feature points obtained by detecting the blurred image in FIG. 3.
  • FIG. 7 illustrates the pixel differences between the feature points in FIGS. 5 and 6.
  • FIG. 8 is a schematic diagram of a preset pattern of a calibration plate in an embodiment.
  • FIG. 9 is a flowchart of an image processing method according to an embodiment.
  • FIG. 10 is a schematic diagram of dividing a hypotenuse region into multiple sub-regions according to an embodiment.
  • FIG. 11 is a flowchart of an image processing method in another embodiment.
  • FIG. 12 is a structural block diagram of an image processing apparatus in an embodiment.
  • FIG. 13 is a structural block diagram of an image processing apparatus in another embodiment.
  • FIG. 14 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • FIG. 15 is a schematic diagram of an image processing circuit in one embodiment.
  • The terms "first", "second", and the like used in this application can be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element.
  • For example, the first calibration image may be referred to as the second calibration image, and the second calibration image may be referred to as the first calibration image. Both the first calibration image and the second calibration image are calibration images, but they are not the same calibration image.
  • FIG. 1 is a schematic diagram of an application environment of dual camera calibration in an embodiment.
  • the application environment includes a dual camera fixture 110 and a calibration plate 120.
  • the dual camera fixture 110 is used to hold an electronic device with a dual camera module, or a dual camera module alone.
  • the calibration plate 120 (chart) carries a chart pattern.
  • the calibration plate 120 can be rotated to maintain different postures.
  • the dual camera module on the fixture 110, or the electronic device with the dual camera module, photographs the chart pattern on the calibration plate 120 at different distances and different angles, usually at least 3 angles; as shown in FIG. 1, the optical axis of the dual camera module is perpendicular to the rotation axis of the calibration plate.
  • the calibration plate 120 rotates through three angles around the Y axis: one is 0 degrees, and the other two rotation angles are ±θ degrees, where θ is greater than 15 to ensure decoupling between postures.
  • the dual camera module photographs the chart at these different angles to obtain calibration images of different angles.
  • the hypotenuse region in the calibration image is obtained by detection.
  • a first modulation transfer function of the hypotenuse region is obtained, and then a second modulation transfer function of the central region of the calibration image is obtained.
  • When the ratio of the first modulation transfer function to the second modulation transfer function is within a preset range, it is determined that the calibration image meets the preset condition, and the internal and external parameters of each single camera are calibrated according to that calibration image.
  • The external parameters of the dual camera module are then obtained from the internal and external parameters of the single cameras. In this way, the calibration accuracy of the internal and external parameters of a single camera is improved, and the calibration accuracy of the external parameters of the dual camera module is improved as well.
  • FIG. 2 is a chart of a conventional calibration board in one embodiment.
  • the chart is a checkerboard pattern composed of alternating black and white squares. The side length of the checkerboard squares may be equal or different, and the actual physical size may be 5 to 30 cm. In other embodiments, the chart may be a circular chart. When the camera focuses poorly or the lens is damaged, the captured image will be blurred. As shown in FIG. 3, the captured calibration image is entirely blurred, and FIG. 4 shows a captured calibration image that is partially blurred.
  • FIG. 5 shows feature points obtained by detecting the chart in FIG. 2.
  • FIG. 6 shows feature points obtained by detecting the blurred image in FIG. 3.
  • FIG. 7 shows the pixel differences between the feature points in FIG. 5 and FIG. 6.
  • the white dot-shaped areas are lines connecting the feature points in FIG. 5 with the corresponding feature points in FIG. 6; each visible dot indicates a difference of one pixel.
  • FIG. 8 is a schematic diagram of a preset pattern of a calibration plate in an embodiment of the present application.
  • the preset pattern includes a calibration pattern 810 and a hypotenuse pattern 820.
  • the calibration pattern 810 is composed of black and white squares.
  • the hypotenuse pattern 820 is located on four sides of the calibration pattern 810.
  • the hypotenuse pattern 820 includes a left hypotenuse pattern, a right hypotenuse pattern, an upper hypotenuse pattern, and a lower hypotenuse pattern.
  • here, upper, lower, left, and right are defined relative to the center of the calibration pattern 810; that is, the hypotenuse pattern 820 is divided into four parts, namely upper, lower, left, and right, relative to the calibration pattern 810.
  • the closest distance of the gap between the hypotenuse pattern 820 and the calibration pattern 810 is between 0.1 and 1 times the distance between two adjacent feature points in the calibration pattern 810.
  • the inclined angle of the hypotenuse is controlled within 2 to 10 degrees.
  • FIG. 9 is a flowchart of an image processing method according to an embodiment. As shown in FIG. 9, in one embodiment, an image processing method includes operations 902 to 912.
  • Operation 902 Obtain a calibration image.
  • an image of a calibration board containing a preset pattern is captured by a camera to obtain the calibration image.
  • In operation 904, the calibration image is detected to obtain a hypotenuse region.
  • the hypotenuse region is obtained by detecting the calibration image through edge and contour detection.
  • Edge and contour detection can be implemented by filtering functions, including Laplacian (), Sobel (), and Scharr ().
  • a Canny edge detection algorithm can be used to implement edge detection to obtain a hypotenuse region or a connected domain algorithm to obtain a hypotenuse region.
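As a rough illustration of the edge detection step (not the patent's implementation), a Sobel gradient filter can be sketched in plain Python; `sobel_magnitude` and the sample image are illustrative names:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude of a 2-D grayscale image (list of lists)
    using 3x3 Sobel kernels; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A 6x6 image with a vertical black/white edge between columns 2 and 3.
image = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = sobel_magnitude(image)
```

The strongest responses appear in the columns adjacent to the intensity step; a Canny detector or a connected-domain pass, as the text suggests, would refine this raw gradient map into a clean hypotenuse region.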
  • Operation 906 Obtain a first modulation transfer function of the hypotenuse region.
  • MTF: Modulation Transfer Function.
  • the fidelity with which the lens reproduces the contrast of the subject on the image plane is represented by its spatial frequency characteristics, from which the MTF curve is drawn.
  • the horizontal axis of the graph represents the image height (the distance from the imaging center, in millimeters), and the vertical axis represents the contrast value; the maximum contrast value is 1.
  • the first modulation transfer function of the hypotenuse region can be obtained by the Slanted Edge Method (SEM) or from the Spatial Frequency Response (SFR).
  • SEM: Slanted Edge Method
  • SFR: Spatial Frequency Response
  • the slanted edge method for detecting the hypotenuse region includes: obtaining an edge spread function (ESF) of the slanted edge, deriving the corresponding line spread function (LSF), and finally obtaining the MTF by Fourier transform.
  • ESF: Edge Spread Function
  • LSF: Line Spread Function
  • the response function for a slanted edge can be represented by an impulse function:
  • the output O(x) of the system is equal to the convolution of the line spread function LSF and the response function S(x) of the system:
  • the MTF is normalized by its amplitude at zero frequency.
  • the MTF of a cascaded system can be derived from the convolution definition and Fourier transform theory:
  • MTF_opticalsystem = MTF_lens × MTF_camera × MTF_display    formula (6)
  • Obtaining the MTF from the SFR curve likewise includes: obtaining the Edge Spread Function (ESF) of the slanted edge, then deriving the corresponding Line Spread Function (LSF), and finally obtaining the MTF by Fourier transform.
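The ESF → LSF → Fourier-transform pipeline described above can be sketched in plain Python (an illustrative sketch, not the patent's code); the DFT is computed directly for clarity rather than with an FFT library:

```python
import math

def mtf_from_esf(esf):
    """Derive the MTF from a sampled edge spread function:
    differentiate to get the LSF, take |DFT|, normalize at zero frequency."""
    # Line spread function: finite difference of the ESF.
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    mtf = []
    for k in range(n):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mtf.append(math.hypot(re, im))
    # Normalize by the zero-frequency amplitude so MTF(0) = 1.
    return [m / mtf[0] for m in mtf]

ideal = mtf_from_esf([0, 0, 0, 1, 1, 1])                # perfect step edge
blurred = mtf_from_esf([0, 0.1, 0.3, 0.7, 0.9, 1.0])    # smoothed edge
```

For a perfect step the LSF is an impulse and the normalized MTF stays at 1 for all frequencies; for the smoothed edge the MTF falls off with frequency, which is exactly what the sharpness test below exploits.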
  • Operation 908 Read the second modulation transfer function of the central region of the calibration image.
  • the central region of the calibration image may be an area centered on the center point of the calibration image and occupying a preset ratio of the entire calibration image area.
  • the preset ratio can be set as required; for example, it can be 30% to 50%.
  • the second modulation transfer function of the central region of the calibration image can be obtained statistically from the actual module specification data.
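A centered region occupying a preset fraction of the image area could be cut out as follows (a sketch; `crop_center` is an illustrative name, and the side length is scaled by the square root of the area ratio):

```python
def crop_center(img, area_ratio):
    """Return the centered sub-image whose area is approximately
    area_ratio (e.g. 0.3 to 0.5) of the full image area."""
    h, w = len(img), len(img[0])
    scale = area_ratio ** 0.5          # side length scales with sqrt of area
    ch, cw = max(1, round(h * scale)), max(1, round(w * scale))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

full = [[x + 10 * y for x in range(10)] for y in range(10)]
center = crop_center(full, 0.49)   # 49% of the area -> 7x7 centered block
```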
  • Operation 910 Obtain the ratio of the first modulation transfer function to the second modulation transfer function.
  • ratio = MTF_border / MTF_center, where ratio is the ratio of the first modulation transfer function to the second modulation transfer function, MTF_border is the first modulation transfer function, and MTF_center is the second modulation transfer function.
  • the threshold can be set according to the specifications of the camera module.
  • the threshold can be a value within [0.3, 0.5].
  • when the ratio exceeds the threshold r, the preset condition is met.
  • the preset condition means that the sharpness reaches a preset standard.
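Operations 910 and 912 reduce to a simple ratio test; a minimal sketch, with illustrative names and the default threshold taken from the [0.3, 0.5] range the text suggests:

```python
def sharpness_ok(mtf_border, mtf_center, r=0.4):
    """Return True when the border-to-center MTF ratio exceeds threshold r
    (the text suggests r in [0.3, 0.5], set per camera module specification)."""
    ratio = mtf_border / mtf_center
    return ratio > r

# A border MTF of 0.25 against a center MTF of 0.5 gives ratio 0.5 > 0.4.
accepted = sharpness_ok(0.25, 0.5)
rejected = sharpness_ok(0.10, 0.5)
```

In the four-sided pattern described later, this test would be applied to each of the four hypotenuse regions, and the image is accepted only when all four ratios pass.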
  • the image processing method in the above embodiment obtains the hypotenuse region by detecting the calibration image, obtains the first modulation transfer function of the hypotenuse region, and then obtains the second modulation transfer function of the central region of the calibration image.
  • When the ratio of the first modulation transfer function to the second modulation transfer function exceeds the threshold, the sharpness of the calibration image meets the preset condition, so a calibration image that meets the sharpness condition is selected, and subsequent calibration based on it improves the calibration accuracy.
  • obtaining a calibration image includes: obtaining a calibration image by photographing a calibration board including a preset pattern, wherein the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and there is a gap between the hypotenuse pattern and the calibration pattern.
  • the closest distance between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
  • Since the hypotenuse pattern is located on the four sides of the calibration pattern, the captured calibration image is detected to obtain four hypotenuse regions. The first modulation transfer function of each of the four hypotenuse regions is obtained, and the ratio of each of the four first modulation transfer functions to the second modulation transfer function is calculated separately. When all four ratios exceed the threshold, it is determined that the sharpness of the calibration image meets the preset condition.
  • Two adjacent feature points in the calibration pattern refer to two adjacent feature points on the same row or column of the calibration pattern.
  • the hypotenuse region is obtained through detection by a connected domain algorithm.
  • Connected domain refers to an image area composed of pixels with the same pixel value and adjacent positions in the image.
  • the connected domain algorithm refers to finding and labeling each connected area in the image.
  • the connected domain algorithm can be the algorithm in the connected area labeling function bwlabel in Matlab. It traverses the image once, records the equivalent pairs of each line, and then relabels the original image through the equivalent pairs. You can also use the labeling algorithm used in the open source library cvBlob to label the entire image by locating the inner and outer contours of the connected area.
  • the specific operations of the connected-area labeling algorithm include: scan the calibration image line by line, group consecutive white pixels in each line into a sequence called a run, and record its start point, end point, and line number. For each run in every line except the first: if it does not overlap any run in the previous line, give it a new label; if it overlaps exactly one run in the previous line, assign that run's label to it; if it overlaps two or more runs in the previous line, assign the current run the smallest label among those connected runs, and write the labels of those runs in the previous line into the equivalence pairs, indicating that they belong to one category.
  • equivalent runs need to be given the same label. Starting from 1, give each equivalence class a label: iterate through the labels of the starting runs, find equivalent sequences, and give them new labels. Finally, fill the label of each run back into the calibration image.
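The run-based two-pass procedure above can be sketched in plain Python with a small union-find structure standing in for the equivalence pairs (an illustrative sketch in the spirit of Matlab's bwlabel, not the patent's code):

```python
def label_connected(binary):
    """Two-pass run-based connected-component labeling of a binary image
    (list of lists of 0/1), using 4-connectivity between lines."""
    parent = {}

    def find(a):                     # union-find root with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label = 1
    prev_runs = []                   # runs of the previous line: [start, end, label]
    all_runs = []                    # (row, start, end, label) for relabeling
    for y, row in enumerate(binary):
        runs, x = [], 0
        while x < len(row):          # pass 1: collect white runs in this line
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                runs.append([start, x - 1, 0])
            else:
                x += 1
        for run in runs:             # column overlap with the previous line
            overlaps = [p for p in prev_runs if p[0] <= run[1] and run[0] <= p[1]]
            if not overlaps:
                run[2] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:                    # take smallest label, record equivalences
                run[2] = min(find(p[2]) for p in overlaps)
                for p in overlaps:
                    parent[find(p[2])] = find(run[2])
            all_runs.append((y, run[0], run[1], run[2]))
        prev_runs = runs
    # Pass 2: write canonical labels back into an output image.
    roots = sorted({find(l) for l in parent})
    remap = {r: i + 1 for i, r in enumerate(roots)}
    out = [[0] * len(r) for r in binary]
    for y, s, e, l in all_runs:
        for x in range(s, e + 1):
            out[y][x] = remap[find(l)]
    return out, len(roots)

img = [[1, 1, 0, 0, 1],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [1, 0, 0, 1, 1]]
labels, count = label_connected(img)
```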
  • obtaining the first modulation transfer function of the hypotenuse region includes: obtaining the hypotenuse region and dividing it into a first number of sub-regions; obtaining the modulation transfer function of each of the first number of sub-regions; and obtaining the first modulation transfer function of the hypotenuse region according to the modulation transfer functions of the first number of sub-regions.
  • the processor may divide the hypotenuse region into a first number of sub-regions, and the first number may be set as required, such as 1, 2, 3, 5, 10, and the like.
  • the hypotenuse of the hypotenuse region can be divided into a first number of line segments of the same size, or the hypotenuse of the hypotenuse region can be divided into a first number of line segments of different sizes.
  • the hypotenuse region is the left hypotenuse region.
  • Vertex A, vertex B, and vertex C of the hypotenuse region are identified.
  • the edge between vertex A and vertex C is selected to calculate the first modulation transfer function of the hypotenuse region.
  • the edge between vertex A and vertex C is divided into N parts, the modulation transfer function of each part is found, and the average of the N modulation transfer functions is taken to obtain the first modulation transfer function of the hypotenuse region.
  • It is also possible to select the modulation transfer functions of a subset of the sub-regions from the first number of sub-regions and obtain the first modulation transfer function of the hypotenuse region as a weighted average.
  • Obtaining the first modulation transfer function of the hypotenuse region from the modulation transfer functions of the sub-regions makes the calculation more accurate.
  • obtaining the first modulation transfer function of the hypotenuse region according to the modulation transfer functions of the first number of sub-regions includes: obtaining the modulation transfer functions of a second number of sub-regions, the second number of sub-regions being selected from the first number of sub-regions; and averaging the modulation transfer functions of the second number of sub-regions to obtain the first modulation transfer function of the hypotenuse region.
  • the second number is less than the first number.
  • the second number can be set as required. Sort the first number of sub-regions according to the division order to obtain a sub-region sequence, select the second number of sub-regions at the middle of the sequence, calculate the modulation transfer function of each selected sub-region, and then average them to obtain the first modulation transfer function of the hypotenuse region.
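The middle-sub-region averaging just described might look like this in Python (names are illustrative, and the per-sub-region MTF values are assumed to have been computed already, e.g. by the slanted edge method):

```python
def hypotenuse_mtf(sub_mtfs, second_number):
    """Average the MTFs of `second_number` sub-regions taken from the middle
    of the ordered sub-region sequence (second_number < first_number)."""
    first_number = len(sub_mtfs)
    assert second_number < first_number
    start = (first_number - second_number) // 2
    middle = sub_mtfs[start:start + second_number]
    return sum(middle) / second_number

# Five sub-region MTFs along the edge; average the middle three.
mtf_border = hypotenuse_mtf([0.30, 0.42, 0.44, 0.40, 0.28], 3)
```

Taking the middle of the sequence avoids the sub-regions near the edge's endpoints, where the measured MTF is least reliable.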
  • FIG. 11 is a flowchart of an image processing method in another embodiment. As shown in FIG. 11, the image processing method includes:
  • Operation 1102 Obtain calibration images respectively captured by the first camera and the second camera in the dual camera module.
  • the calibration image is obtained by shooting the calibration plate through the first camera and the second camera in the dual camera module.
  • Operation 1104 Detect each calibration image to obtain a corresponding hypotenuse region.
  • the hypotenuse region is obtained by detecting each calibration image through edge and contour detection.
  • the hypotenuse region can be obtained by detecting the connected domain algorithm.
  • Operation 1106 Obtain a first modulation transfer function of the hypotenuse region.
  • Operation 1108 Read the second modulation transfer function of the central region of the calibration image.
  • Operation 1114 Obtain the internal and external parameters of the first camera and the internal and external parameters of the second camera according to the calibration image that meets the preset conditions.
  • the internal parameters of a single camera can include f_x, f_y, c_x, and c_y, where f_x represents the focal length in pixel units in the x-axis direction of the image coordinate system, f_y represents the focal length in pixel units in the y-axis direction of the image coordinate system, and (c_x, c_y) represents the coordinates of the principal point of the image plane, the principal point being the intersection of the optical axis and the image plane.
  • the image coordinate system is a coordinate system established on the basis of a two-dimensional image captured by a camera, and is used to specify the position of an object in the captured image.
  • the origin of the (x, y) image coordinate system is located at the intersection (c_x, c_y) of the camera's optical axis with the imaging plane.
  • its unit is a length unit, i.e., meters; (u, v) is the pixel coordinate system.
  • the origin of the pixel coordinate system is at the upper left corner of the image, and its unit is a count unit, i.e., pixels.
  • (x, y) is used to represent the perspective projection relationship of the object from the camera coordinate system to the image coordinate system, and (u, v) is used to represent the pixel coordinates.
  • the conversion relationship between (x, y) and (u, v) is shown in formula (1): u = x/d_x + u_0, v = y/d_y + v_0.
  • Perspective projection refers to a single-sided projection image that is closer to visual effects by projecting a shape onto a projection surface using the center projection method.
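The conversion between physical image-plane coordinates (x, y) and pixel coordinates (u, v) via the pixel pitch d_x, d_y and the principal point (u_0, v_0) is the standard pinhole-model relation; a sketch with illustrative numbers:

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Convert physical image-plane coordinates (meters) to pixel
    coordinates: u = x/dx + u0, v = y/dy + v0 (standard pinhole relation)."""
    return x / dx + u0, y / dy + v0

# 2-micron pixels, principal point at (960, 540): a point 1 mm to the
# right of the optical axis lands 500 pixels right of the principal point.
u, v = image_to_pixel(0.001, 0.0, 2e-6, 2e-6, 960.0, 540.0)
```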
  • the external parameters of a single camera include a rotation matrix and a translation matrix that transform the coordinates in the world coordinate system to the coordinates in the camera coordinate system.
  • the world coordinate system reaches the camera coordinate system through rigid body transformation, and the camera coordinate system reaches the image coordinate system through perspective projection transformation.
  • Rigid body transformation refers to the rotation and translation of a geometric object in three-dimensional space without deformation of the object.
  • the rigid body transformation is shown in formula (8): X_c = R·X + T.
  • X c represents the camera coordinate system
  • X represents the world coordinate system
  • R represents the rotation matrix from the world coordinate system to the camera coordinate system
  • T represents the translation matrix from the world coordinate system to the camera coordinate system.
  • the translation between the origin of the world coordinate system and the origin of the camera coordinate system is determined by its components along the three axis directions x, y, and z, and thus has three degrees of freedom; R is the composition of rotations around the X, Y, and Z axes, respectively.
  • t x represents the translation amount in the x-axis direction
  • t y represents the translation amount in the y-axis direction
  • t z represents the translation amount in the z-axis direction.
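The rigid body transformation X_c = R·X + T of formula (8) can be illustrated with a rotation about a single axis (a sketch for the special case of a Z-axis rotation; a full R composes rotations about all three axes):

```python
import math

def rigid_transform(point, rz, t):
    """Apply X_c = R * X + T, with R a rotation by rz radians about the
    Z axis and T = (t_x, t_y, t_z) a translation."""
    x, y, z = point
    c, s = math.cos(rz), math.sin(rz)
    xr = c * x - s * y       # rotation about Z
    yr = s * x + c * y
    return (xr + t[0], yr + t[1], z + t[2])

# Rotate (1, 0, 0) by 90 degrees about Z, then translate by (0, 0, 5).
p_cam = rigid_transform((1.0, 0.0, 0.0), math.pi / 2, (0.0, 0.0, 5.0))
```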
  • the world coordinate system is an absolute coordinate system in objective three-dimensional space and can be established at any position.
  • the world coordinate system can be established with the upper left corner point of the calibration plate as the origin, the calibration plate plane as the XY plane, and the Z axis perpendicular to the calibration plate plane upwards.
  • the camera coordinate system takes the optical center of the camera as the origin and the optical axis of the camera as the Z axis; its X and Y axes are respectively parallel to the X and Y axes of the image coordinate system.
  • the principal point of the image coordinate system is the intersection of the optical axis and the image plane.
  • the image coordinate system uses the principal point as the origin.
  • the pixel coordinate system means that the origin is defined in the upper left corner of the image plane.
  • Homogeneous coordinates represent the pixel coordinates (u, v, 1) of the image plane
  • Homogeneous coordinates represent the coordinate points of the world coordinate system (X, Y, Z, 1)
  • A represents the internal parameter matrix
  • R represents the rotation matrix converted from the world coordinate system to the camera coordinate system
  • T represents the world coordinate system converted to the camera coordinate system Translation matrix.
  • d_x represents the width of one pixel in the x-axis direction of the image coordinate system, and d_y represents the width of one pixel in the y-axis direction of the image coordinate system.
  • together, d_x and d_y describe the physical size of a pixel in the x and y directions.
  • u 0 and v 0 represent the coordinates of the principal point of the image plane, and the principal point is the intersection of the optical axis and the image plane.
  • Homography is defined in computer vision as a projective mapping from one plane to another.
  • H = A[r_1 r_2 t], where H is the homography matrix.
  • H is a 3 × 3 matrix with one element serving as the homogeneous scale factor; therefore, H has 8 unknowns to be solved.
  • B is a symmetric matrix, so the effective elements of B are 6, and the 6 elements constitute the vector b.
  • V_ij = [h_i1·h_j1, h_i1·h_j2 + h_i2·h_j1, h_i2·h_j2, h_i3·h_j1 + h_i1·h_j3, h_i3·h_j2 + h_i2·h_j3, h_i3·h_j3]^T
  • the external parameter matrix is calculated based on the internal parameter matrix to obtain the initial value of the external parameter matrix.
  • formula (16) is a geometric model obtained by constructing the world coordinate system on the plane Z = 0; X and Y are the world coordinates of the feature points on the calibration plate, and x, y, z are the physical coordinates of those feature points in the camera coordinate system.
  • R is the rotation matrix from the world coordinate system of the calibration plate to the camera coordinate system
  • T is the translation matrix from the world coordinate system of the calibration plate to the camera coordinate system
  • the physical coordinates [x, y, z] of the feature points on the calibration board in the camera coordinate system are normalized to obtain the target coordinate points (x ', y').
  • Distortion processing is performed on the camera coordinate system image points using the distortion model.
  • the initial values of the internal parameter matrix and the external parameter matrix are imported into the maximum likelihood formula to obtain the final internal parameter matrix and external parameter matrix.
  • the maximum likelihood estimate is obtained by finding the minimum of the reprojection error of the feature points over the internal and external parameters.
  • the dual camera module includes a first camera and a second camera.
  • the first camera and the second camera may both be color cameras; or one may be a black and white camera and the other a color camera; or both may be black and white cameras.
  • external parameters of the dual camera module are obtained according to the internal and external parameters of the first camera and the internal and external parameters of the second camera.
  • the dual camera calibration refers to determining the external parameter value of the dual camera module.
  • the external parameters of the dual camera module include a rotation matrix between the dual cameras and a translation matrix between the dual cameras.
  • the rotation matrix and translation matrix between the two cameras can be obtained by formula (20).
  • R ′ is the rotation matrix between the two cameras
  • T ′ is the translation matrix between the two cameras
  • R_r is the rotation matrix of the first camera relative to the calibration object obtained by calibration (that is, the rotation matrix that converts coordinates of the calibration object in the world coordinate system to coordinates in the camera coordinate system of the first camera)
  • T_r is the translation matrix of the first camera relative to the calibration object obtained by calibration (that is, the translation matrix that converts coordinates of the calibration object in the world coordinate system to coordinates in the camera coordinate system of the first camera)
  • R_l is the rotation matrix of the second camera relative to the calibration object (that is, the rotation matrix that converts coordinates of the calibration object in the world coordinate system to coordinates in the camera coordinate system of the second camera)
  • T_l is the translation matrix of the second camera relative to the calibration object (that is, the translation matrix that converts coordinates of the calibration object in the world coordinate system to coordinates in the camera coordinate system of the second camera).
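Formula (20) is not reproduced in this text; for two cameras whose extrinsics (R_r, T_r) and (R_l, T_l) each map world coordinates into the respective camera frame, the standard stereo relation (assumed here, not taken from the patent) is R' = R_r·R_l^T and T' = T_r − R'·T_l. A plain-Python sketch with 3×3 matrices as nested lists:

```python
def mat_mul(A, B):
    """Product of a 3x3 matrix A with a 3xN matrix B (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(len(B[0]))] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def stereo_extrinsics(Rr, Tr, Rl, Tl):
    """Relative pose between the cameras: R' = Rr * Rl^T, T' = Tr - R' * Tl
    (the standard relation, assumed since formula (20) is not shown)."""
    R_rel = mat_mul(Rr, transpose(Rl))
    RTl = mat_mul(R_rel, Tl)
    T_rel = [[Tr[i][0] - RTl[i][0]] for i in range(3)]
    return R_rel, T_rel

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Both cameras share the world orientation; the second sits 50 mm to the left.
R_rel, T_rel = stereo_extrinsics(I3, [[0.0], [0.0], [0.0]],
                                 I3, [[-0.05], [0.0], [0.0]])
```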
  • For the calibration images collected by the dual camera module, the first modulation transfer function of the hypotenuse region and the second modulation transfer function of the central region are calculated, and the ratio between the two is obtained. The ratio is compared with the threshold to determine whether the sharpness of the calibration image meets the preset condition; calibration images whose sharpness meets the preset condition are then used for single-camera and dual-camera module calibration, which improves the calibration accuracy.
  • the embodiment of the present application further provides a calibration board.
  • the calibration plate includes a carrier; a preset pattern is disposed on the carrier; the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and the hypotenuse pattern and the There are gaps between the calibration patterns.
  • the closest distance between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
  • Since the hypotenuse pattern is located on the four sides of the calibration pattern, the captured calibration image is detected to obtain four hypotenuse regions. The first modulation transfer function of each of the four hypotenuse regions is obtained, and the ratio of each of the four first modulation transfer functions to the second modulation transfer function is calculated separately. When all four ratios exceed the threshold, it is determined that the sharpness of the calibration image meets the preset condition.
  • Two adjacent feature points in the calibration pattern refer to two adjacent feature points on the same row or column of the calibration pattern.
  • the inclined angle of the hypotenuse in the hypotenuse pattern is controlled within 2 to 10 degrees.
  • FIG. 12 is a structural block diagram of an image processing apparatus in an embodiment. As shown in FIG. 12, the image processing apparatus includes an image acquisition module 1202, a detection module 1204, a parameter acquisition module 1206, a reading module 1208, a ratio acquisition module 1210, and a determination module 1212, where:
  • the image acquisition module 1202 is configured to acquire a calibration image.
  • the detection module 1204 is configured to detect the calibration image to obtain a hypotenuse region.
  • the parameter obtaining module 1206 is configured to obtain a first modulus transfer function of the hypotenuse region.
  • the reading module 1208 is configured to read a second modulus transfer function of a central area of the calibration image.
  • the ratio obtaining module 1210 is configured to obtain a ratio of the first modulus transfer function and the second modulus transfer function.
  • the determining module 1212 is configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
  • the image processing apparatus in the above embodiment obtains the hypotenuse region by detecting the calibration image, obtains the first modulus transfer function of the hypotenuse region, and then obtains the second modulus transfer function of the central region of the calibration image.
  • when the ratio of the first modulus transfer function to the second modulus transfer function exceeds the threshold value, the sharpness of the calibration image meets the preset condition; calibration images meeting the sharpness condition are thereby selected, and using them for subsequent calibration can improve the calibration accuracy.
  • the image acquisition module 1202 is further configured to obtain a calibration image by shooting a calibration plate that includes a preset pattern, where the preset pattern includes a calibration pattern and a hypotenuse pattern, and the hypotenuse pattern is located on the four sides of the calibration pattern.
  • the closest distance between the hypotenuse pattern and the calibration pattern is between 0.1 and 1 times the distance between two adjacent feature points in the calibration pattern.
  • the parameter obtaining module 1206 is further configured to obtain the hypotenuse region and divide the hypotenuse region into a first number of subregions; acquire the modulus transfer functions of the first number of subregions; and obtain the first modulus transfer function of the hypotenuse region according to the modulus transfer functions of the first number of subregions.
  • the parameter obtaining module 1206 is further configured to obtain the modulus transfer functions of a second number of subregions, the second number of subregions being selected from the first number of subregions, and to average the modulus transfer functions of the second number of subregions to obtain the first modulus transfer function of the hypotenuse region.
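A minimal sketch of the subregion averaging step. The source only states that a second number of subregions is selected from the first number and their MTFs averaged; the selection rule used here (keeping the largest values) and the function name are assumptions:

```python
def first_mtf_from_subregions(subregion_mtfs, second_number):
    """Average the MTFs of a selected second number of subregions
    (selection by largest value is an assumed rule) to obtain the
    first modulus transfer function of the hypotenuse region."""
    selected = sorted(subregion_mtfs, reverse=True)[:second_number]
    return sum(selected) / len(selected)
```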
  • the parameter obtaining module 1206 is further configured to obtain the first modulus transfer function of the hypotenuse region by using a sloped edge method or a spatial frequency domain response curve.
  • FIG. 13 is a structural block diagram of an image processing apparatus in another embodiment.
  • the image processing apparatus includes an image acquisition module 1202, a detection module 1204, a parameter acquisition module 1206, a reading module 1208, a ratio acquisition module 1210, and a determination module 1212, and further includes a calibration module 1214.
  • the image acquisition module 1202 is further configured to acquire calibration images respectively captured by the first camera and the second camera in the dual camera module.
  • the detection module 1204 is further configured to detect each calibration image to obtain a corresponding hypotenuse region.
  • the parameter acquisition module 1206 is further configured to acquire a first modulus transfer function of the hypotenuse region.
  • the reading module 1208 is further configured to read a second modulus transfer function of a central region of the calibration image.
  • the ratio obtaining module 1210 is further configured to obtain a ratio of the first modulus transfer function and the second modulus transfer function.
  • the determining module 1212 is further configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
  • the calibration module 1214 is configured to obtain the internal and external parameters of the first camera and the internal and external parameters of the second camera according to the calibration images that meet the preset condition, and to obtain the external parameters of the dual camera module according to the internal and external parameters of the first camera and the internal and external parameters of the second camera.
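The last step — deriving the dual-module extrinsics from each camera's own extrinsics — can be sketched with the standard relative-pose relation (a generic formula, not necessarily the patent's exact procedure):

```python
import numpy as np

def relative_extrinsics(R1, t1, R2, t2):
    """Given each camera's pose with respect to the calibration board
    (x_cam = R @ x_board + t), the extrinsics of the dual-camera module
    (pose of camera 2 relative to camera 1) follow the standard relation
    R_rel = R2 @ R1.T and t_rel = t2 - R_rel @ t1."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel
```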
  • An embodiment of the present application further provides an electronic device.
  • the electronic device includes a memory and a processor.
  • a computer program is stored in the memory.
  • when the computer program is executed by the processor, it causes the processor to perform the operations of the image processing method.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium.
  • the non-transitory computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the image processing method.
  • FIG. 14 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored in the memory, and the computer program can be executed by the processor to implement the image processing method applicable to the electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided by each of the following embodiments.
  • the internal memory provides a cached operating environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, or a personal digital assistant or a wearable device.
  • each module in the image processing apparatus provided in the embodiments of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module constituted by the computer program can be stored in the memory of the terminal or server.
  • when the computer program is executed by a processor, the operations of the method described in the embodiments of the present application are implemented.
  • a computer program product containing instructions that, when run on a computer, cause the computer to perform the image processing method.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and / or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 15 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 15, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes a first ISP processor 1530, a second ISP processor 1540, and a control logic 1550.
  • the first camera 1510 includes one or more first lenses 1512 and a first image sensor 1514.
  • the first image sensor 1514 may include a color filter array (such as a Bayer filter).
  • the first image sensor 1514 may obtain the light intensity and wavelength information captured by each imaging pixel of the first image sensor 1514, and provide a set of raw image data that can be processed by the first ISP processor 1530.
  • the second camera 1520 includes one or more second lenses 1522 and a second image sensor 1524.
  • the second image sensor 1524 may include a color filter array (such as a Bayer filter).
  • the second image sensor 1524 may obtain the light intensity and wavelength information captured by each imaging pixel of the second image sensor 1524, and provide a set of raw image data that can be processed by the second ISP processor 1540.
  • the first image collected by the first camera 1510 is transmitted to the first ISP processor 1530 for processing.
  • after processing the first image, the first ISP processor 1530 may send statistical data of the first image (such as image brightness, image contrast, image color, etc.) to the control logic 1550.
  • the control logic 1550 can determine the control parameters of the first camera 1510 according to the statistical data, so that the first camera 1510 can perform operations such as autofocus and automatic exposure according to the control parameters.
  • the first image may be stored in the image memory 1560 after being processed by the first ISP processor 1530, and the first ISP processor 1530 may also read the image stored in the image memory 1560 for processing.
  • the first image can be directly sent to the display 1570 for display after being processed by the first ISP processor 1530.
  • the display 1570 can also read the image in the image memory 1560 for display.
  • the first ISP processor 1530 processes image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 1530 may perform one or more image processing operations on the image data and collect statistical information about the image data.
  • the image processing operation may be performed with the same or different bit depth calculation accuracy.
  • the image memory 1560 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • the first ISP processor 1530 may perform one or more image processing operations, such as time-domain filtering.
  • the processed image data may be sent to the image memory 1560 for further processing before being displayed.
  • the first ISP processor 1530 receives processed data from the image memory 1560, and performs image data processing in the RGB and YCbCr color spaces on the processed data.
  • the image data processed by the first ISP processor 1530 may be output to the display 1570 for viewing by a user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the first ISP processor 1530 can also be sent to the image memory 1560, and the display 1570 can read image data from the image memory 1560.
  • the image memory 1560 may be configured to implement one or more frame buffers.
  • the statistical data determined by the first ISP processor 1530 may be sent to the control logic 1550.
  • the statistical data may include statistical information of the first image sensor 1514 such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and shading correction of the first lens 1512.
  • the control logic 1550 may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines may determine the control parameters of the first camera 1510 and the control parameters of the first ISP processor 1530 according to the received statistical data.
  • control parameters of the first camera 1510 may include gain, integration time of exposure control, image stabilization parameters, flash control parameters, control parameters of the first lens 1512 (for example, focal length for focusing or zooming), or a combination of these parameters.
  • the ISP control parameters may include a gain level and a color correction matrix for automatic white balance and color adjustment (eg, during RGB processing), and a first lens 1512 shading correction parameter.
  • the second image collected by the second camera 1520 is transmitted to the second ISP processor 1540 for processing.
  • statistical data of the second image (such as image brightness, image contrast, image color, etc.) may likewise be sent to the control logic 1550.
  • the control logic 1550 can determine the control parameters of the second camera 1520 according to statistical data, so that the second camera 1520 can perform operations such as autofocus and automatic exposure according to the control parameters .
  • the second image may be stored in the image memory 1560 after being processed by the second ISP processor 1540, and the second ISP processor 1540 may also read the image stored in the image memory 1560 for processing.
  • the second image may be directly sent to the display 1570 for display after being processed by the second ISP processor 1540, and the display 1570 may also read the image in the image memory 1560 for display.
  • the second camera 1520 and the second ISP processor 1540 may also implement the processing operations described above for the first camera 1510 and the first ISP processor 1530.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

Disclosed is an image processing method, comprising: acquiring a calibration image; detecting the calibration image to obtain a hypotenuse region; acquiring a first modulus transfer function of the hypotenuse region; reading a second modulus transfer function of a central region of the calibration image; acquiring the ratio of the first modulus transfer function to the second modulus transfer function; and when the ratio exceeds a threshold value, determining that the sharpness of the calibration image meets a preset condition.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium

Cross-reference to related applications

This application claims priority to a Chinese patent application filed with the Chinese Patent Office on July 11, 2018, with application number 201810758592.1 and entitled "Image processing method and apparatus, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field

The present application relates to the field of imaging, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background

With the development of electronic devices and imaging technology, more and more users use the cameras of electronic devices to collect images. The parameters of a camera need to be calibrated before it leaves the factory, and calibration images need to be collected during the calibration operation.
Summary of the invention

The embodiments of the present application provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can detect images that meet requirements and improve the accuracy of subsequent calibration.
An image processing method includes:
obtaining a calibration image;
detecting the calibration image to obtain a hypotenuse region;
obtaining a first modulus transfer function of the hypotenuse region;
reading a second modulus transfer function of a central region of the calibration image;
obtaining a ratio of the first modulus transfer function to the second modulus transfer function; and
when the ratio exceeds a threshold, determining that the sharpness of the calibration image meets a preset condition.
A calibration board, including:
a carrier; and
a preset pattern provided on the carrier;
the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and a gap exists between the hypotenuse pattern and the calibration pattern.
An image processing apparatus includes:
an image acquisition module, configured to acquire a calibration image;
a detection module, configured to detect the calibration image to obtain a hypotenuse region;
a parameter acquisition module, configured to acquire a first modulus transfer function of the hypotenuse region;
a reading module, configured to read a second modulus transfer function of a central region of the calibration image;
a ratio acquisition module, configured to acquire a ratio of the first modulus transfer function to the second modulus transfer function; and
a determination module, configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
An electronic device includes a memory and a processor. The memory stores a computer program. When the computer program is executed by the processor, it causes the processor to perform the operations of the image processing method.

A computer-readable storage medium stores a computer program thereon, and when the computer program is executed by a processor, the operations of the image processing method are implemented.
The image processing method and apparatus, electronic device, and computer-readable storage medium of the embodiments of the present application obtain a hypotenuse region by detecting the calibration image, obtain a first modulus transfer function of the hypotenuse region, and then obtain a second modulus transfer function of the central region of the calibration image. When the ratio of the first modulus transfer function to the second modulus transfer function exceeds a threshold value, the sharpness of the calibration image meets the preset condition; calibration images meeting the sharpness condition are thereby selected, and subsequent calibration with them can improve the calibration accuracy.
Brief description of the drawings

In order to explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative work.
FIG. 1 is a schematic diagram of an application environment of dual-camera calibration in an embodiment.
FIG. 2 is a schematic diagram of the chart of a conventional calibration board in an embodiment.
FIG. 3 is a schematic diagram of a captured calibration image that is entirely blurred.
FIG. 4 is a schematic diagram of a captured calibration image that is partially blurred.
FIG. 5 shows the feature points detected from the chart in FIG. 2.
FIG. 6 shows the feature points detected from the blurred image in FIG. 3.
FIG. 7 is a schematic diagram of the pixel differences between the feature points in FIG. 5 and FIG. 6.
FIG. 8 is a schematic diagram of the preset pattern of a calibration board in an embodiment.
FIG. 9 is a flowchart of an image processing method in an embodiment.
FIG. 10 is a schematic diagram of dividing a hypotenuse region into multiple subregions in an embodiment.
FIG. 11 is a flowchart of an image processing method in another embodiment.
FIG. 12 is a structural block diagram of an image processing apparatus in an embodiment.
FIG. 13 is a structural block diagram of an image processing apparatus in another embodiment.
FIG. 14 is a schematic diagram of the internal structure of an electronic device in an embodiment.
FIG. 15 is a schematic diagram of an image processing circuit in an embodiment.
Detailed description

In order to make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application, and are not used to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, the first calibration image may be referred to as the second calibration image, and similarly, the second calibration image may be referred to as the first calibration image. Both the first calibration image and the second calibration image are calibration images, but they are not the same calibration image.
FIG. 1 is a schematic diagram of an application environment of dual-camera calibration in an embodiment. As shown in FIG. 1, the application environment includes a dual-camera fixture 110 and a calibration board 120. The dual-camera fixture 110 is used to hold a dual-camera module, or an electronic device with a dual-camera module. The calibration board 120 carries a chart pattern and can be rotated to hold poses at different angles. The dual-camera module on the fixture 110, or an electronic device with a dual-camera module, shoots the chart pattern on the calibration board 120 at different distances and different angles, usually at no fewer than three angles. In FIG. 1, the optical axis of the dual-camera module is perpendicular to the rotation axis of the calibration board, and the calibration board 120 rotates around the Y axis through three angles: one angle is 0 degrees and the other two are ±θ degrees, with θ greater than 15 to ensure decoupling between poses. Calibration images at the different angles are obtained by shooting the calibration board with the dual-camera module. The hypotenuse region in each calibration image is obtained by detection, the first modulus transfer function of the hypotenuse region is computed, and the second modulus transfer function of the central region of the calibration image is obtained. When the ratio of the first modulus transfer function to the second modulus transfer function is within a preset range, the calibration image is determined to meet the preset condition; the internal and external parameters of each single camera are then calibrated from the qualifying calibration images, and the external parameters of the dual-camera module are obtained from the internal and external parameters of the single cameras. In this way, the calibration accuracy of the internal and external parameters of the single cameras is improved, and so is the calibration accuracy of the external parameters of the dual-camera module.
FIG. 2 is the chart of a conventional calibration board in an embodiment. As shown in FIG. 2, the chart is a checkerboard pattern composed of alternating black and white squares. The numbers of corner points along the length and width of the checkerboard may be equal or different, and the actual physical distance may be 5 to 30 centimeters. In other embodiments, the chart may also be a pattern of circles. Poor camera focus or a damaged lens causes blurred images: as shown in FIG. 3, the captured calibration image may be entirely blurred, while FIG. 4 shows a captured calibration image that is partially blurred.
FIG. 5 shows the feature points detected from the chart in FIG. 2, FIG. 6 shows the feature points detected from the blurred image in FIG. 3, and FIG. 7 shows the pixel differences between the feature points in FIG. 5 and FIG. 6. In FIG. 7, the white dotted regions are the connections between the feature points in FIG. 5 and the corresponding feature points in FIG. 6; the points that appear differ by only one pixel.
FIG. 8 is a schematic diagram of the preset pattern of the calibration board in an embodiment of the present application. As shown in FIG. 8, the preset pattern includes a calibration pattern 810 and a hypotenuse pattern 820. The calibration pattern 810 is composed of alternating black and white squares. The hypotenuse pattern 820 is located on the four sides of the calibration pattern 810 and includes a left hypotenuse pattern, a right hypotenuse pattern, an upper hypotenuse pattern, and a lower hypotenuse pattern; upper, lower, left, and right are defined with the calibration pattern 810 as the center. There is a gap between the hypotenuse pattern 820 and the calibration pattern 810. The closest distance of this gap is between 0.1 and 1 times the distance between two adjacent feature points in the calibration pattern 810, and the slant angle of the hypotenuse is controlled within 2 to 10 degrees.
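The two layout constraints above (gap between 0.1 and 1 times the feature spacing, slant angle within 2 to 10 degrees) can be checked with a trivial sketch (the function name is hypothetical):

```python
def pattern_geometry_ok(gap, feature_spacing, slant_deg):
    """Check the layout constraints stated above: the closest gap between
    the hypotenuse pattern and the calibration pattern must be 0.1 to 1
    times the spacing of adjacent feature points, and the hypotenuse
    slant angle must lie within 2 to 10 degrees."""
    gap_ok = 0.1 * feature_spacing <= gap <= 1.0 * feature_spacing
    slant_ok = 2.0 <= slant_deg <= 10.0
    return gap_ok and slant_ok
```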
FIG. 9 is a flowchart of an image processing method in an embodiment. As shown in FIG. 9, in one embodiment, an image processing method includes operations 902 to 912.
In operation 902, a calibration image is obtained.

The calibration image is obtained by shooting a calibration board containing the preset pattern with a camera.
In operation 904, the calibration image is detected to obtain a hypotenuse region.

The hypotenuse region is obtained by performing edge and contour detection on the calibration image. Edge and contour detection can be implemented by filtering functions such as Laplacian(), Sobel(), and Scharr(). In other embodiments, the hypotenuse region may be obtained by the Canny edge detection algorithm or by a connected-component algorithm.
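As an illustration of the gradient-filter idea behind Sobel()-style edge detection, here is a minimal numpy-only sketch (not the OpenCV implementation) that responds strongly along the kind of edge that bounds a hypotenuse region:

```python
import numpy as np

def sobel_edge_magnitude(img):
    """Minimal Sobel-style edge detector: convolve with horizontal and
    vertical 3x3 gradient kernels and return the gradient magnitude.
    Strong responses mark candidate edge pixels from which edge/contour
    regions can be located."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag
```

A vertical black-to-white step produces a strong response exactly at the transition column and zero response in flat regions.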
In operation 906, the first modulus transfer function of the hypotenuse region is obtained.
MTF (Modulation Transfer Function) is an important indicator for measuring lens performance. The fidelity with which a lens reproduces the contrast of the subject on the image plane, expressed as a spatial frequency characteristic, is plotted as the MTF curve. The horizontal axis of the curve represents image height (the distance from the imaging center, in millimeters), and the vertical axis represents the contrast value, whose maximum is 1.
The first modulus transfer function of the hypotenuse region can be obtained by the slanted edge method (SEM) or from a spatial frequency response (SFR) curve.
The slanted edge method for the hypotenuse region includes: obtaining the edge spread function (ESF) of the slanted edge, then differentiating it to obtain the corresponding line spread function (LSF), and finally obtaining the MTF by Fourier transform.
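The ESF → LSF → MTF chain can be sketched in a few lines of numpy. This is a simplified illustration with synthetic one-dimensional edges; a full slanted-edge (SFR) measurement as standardized in ISO 12233 also involves edge-angle estimation and supersampled binning, which are omitted here:

```python
import numpy as np

def mtf_from_esf(esf):
    """ESF -> LSF -> MTF chain: differentiate the edge spread function
    to get the line spread function, take the magnitude of its Fourier
    transform, and normalize by the zero-frequency amplitude."""
    lsf = np.diff(esf)               # LSF(x) = d ESF(x) / dx (discrete)
    mtf = np.abs(np.fft.rfft(lsf))   # |Fourier transform of the LSF|
    return mtf / mtf[0]              # normalize to zero frequency

# Synthetic ESFs: an ideal step edge and a blurred (ramp) edge.
x = np.arange(32)
sharp = (x >= 16).astype(float)                # ideal edge
blurred = np.clip((x - 13.5) / 5.0, 0.0, 1.0)  # edge spread over ~5 pixels
mtf_sharp = mtf_from_esf(sharp)
mtf_blur = mtf_from_esf(blurred)
# The blurred edge transfers noticeably less contrast at mid frequencies.
```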
The response function of the slanted edge can be represented by means of an impulse function:

S(x) = ∫_{−∞}^{x} δ(t − α) dt        formula (1)
When the edge response function is imaged by a perfect (aberration-free) optical system, the imaging quality of the system is not degraded. Therefore, when the edge function is imaged by a linear shift-invariant optical system, the output O(x) of the system equals the convolution of the line spread function LSF with the system's response function S(x):

O(x) = ∫_{−∞}^{+∞} LSF(t) · S(x − t) dt        formula (2)
当x-α<0时,阶跃函数S(x)=0,其他情况下S(x)=1,所以ESF(x)可以表示为:When x-α <0, the step function S (x) = 0, otherwise S (x) = 1, so ESF (x) can be expressed as:
Figure PCTCN2019088882-appb-000003
Figure PCTCN2019088882-appb-000003
因此,ESF(x)的导数可以写为:Therefore, the derivative of ESF (x) can be written as:
Figure PCTCN2019088882-appb-000004
Figure PCTCN2019088882-appb-000004
所以可以将MTF写作LSF的如下函数:So MTF can be written as the following function of LSF:
Figure PCTCN2019088882-appb-000005
Figure PCTCN2019088882-appb-000005
通常,MTF会对零频率幅值归一化,同时由卷积定义及傅里叶变换理论可以推导得出级联系统的MTF:In general, the MTF normalizes the amplitude of zero frequency. At the same time, the MTF of the cascade system can be derived from the convolution definition and Fourier transform theory:
MTF opticalsystem=MTF lens×MTF camera×MTF display        公式(6) MTF opticalsystem = MTF lens × MTF camera × MTF display formula (6)
采用STR曲线求取MTF包括:获取倾斜边缘的边缘扩散函数(Edge Spread Function,ESF),然后求导得到对应的线扩散函数(Line Spread Function,LSF),最后经过傅里叶变换得到MTF。Obtaining the MTF by using the STR curve includes: obtaining the Edge Spread Function (ESF) of the oblique edge, then deriving the corresponding Line Spread Function (LSF), and finally obtaining the MTF by Fourier transform.
MTF可为最大亮度与最小亮度的差与最大亮度与最小亮度的和的比值,即MTF=(最大亮度-最小亮度)/(最大亮度+最小亮度)。MTF can be the ratio of the difference between the maximum brightness and the minimum brightness and the sum of the maximum brightness and the minimum brightness, that is, MTF = (maximum brightness-minimum brightness) / (maximum brightness + minimum brightness).
操作908,读取标定图像的中心区域的第二模量传递函数。Operation 908: Read the second modulus transfer function of the central region of the calibration image.
标定图像的中心区域可为以标定图像的中心点为中心占整个标定图像面积达到预设比例的区域。预设比例可根据需要设定,该预设比例可为30%至50%。标定图像的中区域的第二模量传递函数可通过实际模组规格数据来统计获取。The center area of the calibration image may be an area that occupies the entire calibration image area to a preset ratio with the center point of the calibration image as the center. The preset ratio can be set as required, and the preset ratio can be 30% to 50%. The second modulus transfer function of the middle region of the calibration image can be statistically obtained through the actual module specification data.
操作910,获取第一模量传递函数与第二模量传递函数的比值。Operation 910: Obtain a ratio of the first modulus transfer function and the second modulus transfer function.
计算第一模量传递函数与第二模量传输函数的比值,计算公式为ratio=MTF border/MTF center,其中,ratio为第一模量传递函数与第二模量传输函数的比值,MTF border为第一模量传递函数,MTF center为第二模量传递函数。 Calculate the ratio of the first modulus transfer function to the second modulus transfer function. The formula is ratio = MTF border / MTF center , where ratio is the ratio of the first modulus transfer function to the second modulus transfer function, MTF border. Is the first modulus transfer function, and MTF center is the second modulus transfer function.
操作912,当比值超过阈值,则确定标定图像的清晰度满足预设条件。In operation 912, when the ratio exceeds the threshold, it is determined that the sharpness of the calibration image meets a preset condition.
阈值可根据摄像头模组的规格来设定,如阈值可为[0.3,0.5]内的值。The threshold can be set according to the specifications of the camera module. For example, the threshold can be a value within [0.3, 0.5].
当比值在[0,r]范围内,则认为标定图像的清晰度未达到预设条件。其中,r为阈值。预设条件是指清晰度达到预设标准。When the ratio is in the range of [0, r], it is considered that the sharpness of the calibration image has not reached a preset condition. Among them, r is a threshold. The preset condition means that the sharpness reaches a preset standard.
上述实施例中的图像处理方法,通过对标定图像进行检测得到斜边区域,求取斜边区域的第一模量传递函数,再获取标定图像的中心区域的第二模量传递函数,当第一模量传递函数与第二模量传递函数的比值超过阈值,则表示标定图像的清晰度满足预设条件,如此筛选出了清晰度符合条件的标定图像,后续进行标定,能够提高标定精度。The image processing method in the above embodiment obtains the hypotenuse region by detecting the calibration image, obtains the first modulus transfer function of the hypotenuse region, and then obtains the second modulus transfer function of the central region of the calibration image. The ratio of the first modulus transfer function to the second modulus transfer function exceeds the threshold value, which means that the sharpness of the calibration image meets a preset condition, so that a calibration image that meets the sharpness condition is selected, and subsequent calibration can improve the calibration accuracy.
在一个实施例中,获取标定图像包括:获取拍摄包含预设图案的标定板得到标定图像,其中,该预设图案包括标定图案和斜边图案,该斜边图案位于该标定图案的四个侧边,且该斜边图案与该标定图案之间存在间隙。该斜边图案与该标定图案之间间隙的最近距离为该标定图案中两个相邻特征点之间的距离的0.1至1倍。In one embodiment, obtaining a calibration image includes: obtaining a calibration image by photographing a calibration board including a preset pattern, wherein the preset pattern includes a calibration pattern and a hypotenuse pattern, and the hypotenuse pattern is located on four sides of the calibration pattern And there is a gap between the hypotenuse pattern and the calibration pattern. The closest distance between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
斜边图案位于标定图案的四个侧边,则拍摄得到的标定图像,对标定图像进行检测得到四个斜边区域,求取四个斜边区域各自的第一模量传递函数,再分别计算四个第一模量传递函数与第二模量传递函数的比值,当四个比值都超过阈值时,确定该标定图像的清晰度满足预设条件。The hypotenuse pattern is located on the four sides of the calibration pattern. Then, the obtained calibration image is taken. The calibration image is detected to obtain four hypotenuse regions. The first modulus transfer functions of each of the four hypotenuse regions are obtained, and then calculated separately. Ratios of the four first modulus transfer functions to the second modulus transfer functions. When all four ratios exceed a threshold, it is determined that the sharpness of the calibration image meets a preset condition.
标定图案中的两个相邻特征点是指在标定图案的同一行或同一列上相邻两个特征点。Two adjacent feature points in the calibration pattern refer to two adjacent feature points on the same row or column of the calibration pattern.
在一个实施例中,通过连通域算法检测得到斜边区域。连通域是指图像中具有相同像素值且位置相邻的像素点组成的图像区域。连通域算法是指将图像中的各个连通区域找出并标记。连通域算法可为通过matlab中连通区域标记函数bwlabel中的算法,一次遍历图像,并记下每一行的等价对,然后通过等价对对原来的图像进行重新标记。也可以通过开源库cvBlob中使用的标记算法,通过定位连通区域的内外轮廓来标记整个图像。In one embodiment, the hypotenuse region is obtained through detection by a connected domain algorithm. Connected domain refers to an image area composed of pixels with the same pixel value and adjacent positions in the image. The connected domain algorithm refers to finding and labeling each connected area in the image. The connected domain algorithm can be the algorithm in the connected area labeling function bwlabel in Matlab. It traverses the image once, records the equivalent pairs of each line, and then relabels the original image through the equivalent pairs. You can also use the labeling algorithm used in the open source library cvBlob to label the entire image by locating the inner and outer contours of the connected area.
连通区域标记函数算法的具体操作包括:逐行扫描标定图像,把每一行中连续的白色像素组成一个序列称为一个团,并记下它的起点和终点以及它所在的行号,对于除了第一行外的所有行里的团,如果它与前一行中的所有团都没有重合区域,则给它一个新的标号;如果它仅与上一行中一个团有重合区域,则将上一行的那个团的标号赋给它;如果它与上一行的2个以上的团有重叠区域,则给当前团赋一个相连团的最小标号,并将上一行的这几个团的标记写入等价对,说明它们属于一类。将等价对转换为等价序列,每一个序列需要给一相同的标号。从1开始,给每个等价序列一个标号。遍历开始团的标记,查找等价序列,给与它们新的标记。将每个团的标号填入标定图像中。The specific operations of the connected area labeling function algorithm include: scanning the calibration image line by line, grouping consecutive white pixels in each line into a sequence called a clique, and recording its start and end points and its line number. For all the cliques in a row except one line, if it does not overlap with all the cliques in the previous line, give it a new label; if it only has a coincident area with a clique in the previous line, the previous line's The group's label is assigned to it; if it has an overlapping area with more than 2 groups in the previous line, the current group is assigned a minimum label of the connected group, and the marks of the several groups in the previous line are written into the equivalent Yes, it shows that they belong to one category. To convert equivalent pairs into equivalent sequences, each sequence needs to be given the same label. Starting from 1, give each equivalent sequence a label. Iterate through the tags of the starting group, find equivalent sequences, and give them new tags. The label of each group is filled into the calibration image.
在一个实施例中,获取斜边区域的第一模量传递函数,包括:获取斜边区域,将该斜边区域分成第一数量的子区域;获取该第一数量的子区域的模量传递函数;根据该第一数量的子区域的模量传递函数得到该斜边区域的第一模量传递函数。In one embodiment, obtaining the first modulus transfer function of the hypotenuse region includes: obtaining the hypotenuse region, dividing the hypotenuse region into a first number of subregions; and obtaining the modulus transfer of the first number of subregions. Function; obtaining the first modulus transfer function of the hypotenuse region according to the modulus transfer function of the first number of sub-regions.
处理器可将斜边区域分成第一数量的子区域,第一数量可根据需要设定,如1、2、3、5、10等。可将斜边区域的斜边分成大小相同的第一数量的线段,也可以将斜边区域的斜边分成大小不同的第一数量的线段。The processor may divide the hypotenuse region into a first number of sub-regions, and the first number may be set as required, such as 1, 2, 3, 5, 10, and the like. The hypotenuse of the hypotenuse region can be divided into a first number of line segments of the same size, or the hypotenuse of the hypotenuse region can be divided into a first number of line segments of different sizes.
如图10所示,斜边区域为左斜边区域,识别出左斜边区域的顶点A、顶点B和顶点C,选取顶点A和顶点C的边来计算该左斜边区域的第一模量传递函数MTF,将顶点A和顶点C的边选择水平均分为N个部分,求取每个部分的模量传递函数,再求取N个部分的模量传递函数的平均值得到该斜边区域的第一模量传递函数。也可以从第一数量的子区域中选取部分的子区域的模量传递函数,求加权平均得到该斜边区域的第一模量传递函数。As shown in FIG. 10, the hypotenuse region is the left hypotenuse region. Vertex A, vertex B, and vertex C of the hypotenuse region are identified. The edges of vertex A and vertex C are selected to calculate the first model of the hypotenuse region. The volume transfer function MTF divides the edge selection level of vertex A and vertex C into N parts, finds the modulus transfer function of each part, and then calculates the average value of the modulus transfer function of N parts to obtain the slope. The first modulus transfer function of the edge region. It is also possible to select a part of the sub-region's modulus transfer function from the first number of sub-areas, and obtain the first modulus transfer function of the hypotenuse region by obtaining a weighted average.
通过将斜边区域分成多个子区域,再求取子区域的模量传递函数,根据子区域的模量传递函数求取斜边区域的第一模量传递函数,计算更加精确。By dividing the hypotenuse region into multiple subregions, and then obtaining the modulus transfer function of the subregion, the first modulus transfer function of the hypotenuse region is obtained according to the modulus transfer function of the subregion, and the calculation is more accurate.
在一个实施例中,根据该第一数量的子区域的模量传递函数得到该斜边区域的第一模量传递函数,包括:获取第二数量的子区域的模量传递函数,该第二数量的子区域是从该第一数量的子区域中选取的;将该第二数量的子区域的模量传递函数取平均得到该斜边区域的第一模量传递函数。In an embodiment, obtaining the first modulus transfer function of the hypotenuse region according to the modulus transfer function of the first number of sub-regions includes: obtaining the modulus transfer function of the second number of sub-regions, the second The number of sub-regions is selected from the first number of sub-regions; the modulus transfer function of the second number of sub-regions is averaged to obtain the first modulus transfer function of the hypotenuse region.
第二数量小于第一数量。第二数量可以根据需要设定。将第一数量的子区域按照划分顺序排序得到子区域序列,选取子区域序列中处于中间位置的第二数量的子区域,计算第二数量的子区域的模量传递函数,然后求取平均得到斜边区域的第一模量传递函数。The second number is less than the first number. The second quantity can be set as required. Sort the first number of subregions according to the division order to obtain the subregion sequence, select the second number of subregions in the middle position of the subregion sequence, calculate the modulus transfer function of the second number of subregions, and then obtain the average The first modulus transfer function of the hypotenuse region.
选取处于中间位置的第二数量的子区域,计算得到的结果更加准确。By selecting the second number of subregions in the middle position, the calculation result is more accurate.
图11为另一个实施例中的图像处理方法的流程图。如图11所示,该图像处理方法包括:FIG. 11 is a flowchart of an image processing method in another embodiment. As shown in FIG. 11, the image processing method includes:
操作1102,获取双摄像头模组中第一摄像头和第二摄像头分别拍摄的标定图像。Operation 1102: Obtain calibration images respectively captured by the first camera and the second camera in the dual camera module.
通过双摄像头模组中的第一摄像头和第二摄像头分别对标定板进行拍摄得到标定图像。The calibration image is obtained by shooting the calibration plate through the first camera and the second camera in the dual camera module.
操作1104,对各个标定图像进行检测得到对应的斜边区域。Operation 1104: Detect each calibration image to obtain a corresponding hypotenuse region.
通过边缘和轮廓检测对标定图像进行检测得到斜边区域。或者通过连通域算法检测得到斜边区域。The beveled area is obtained by detecting the calibration image through edge and contour detection. Or, the hypotenuse region can be obtained by detecting the connected domain algorithm.
操作1106,获取该斜边区域的第一模量传递函数。Operation 1106: Obtain a first modulus transfer function of the hypotenuse region.
操作1108,读取该标定图像的中心区域的第二模量传递函数。Operation 1108: Read the second modulus transfer function of the central region of the calibration image.
操作1110,获取该第一模量传递函数与第二模量传递函数的比值。In operation 1110, a ratio of the first modulus transfer function to the second modulus transfer function is obtained.
操作1112,当该比值超过阈值,则确定该标定图像的清晰度满足预设条件。In operation 1112, when the ratio exceeds the threshold, it is determined that the sharpness of the calibration image meets a preset condition.
操作1114,根据满足预设条件的标定图像获取双摄像头模组中第一摄像头的内参和外参以及第二摄像头的内参和外参。Operation 1114: Obtain the internal and external parameters of the first camera and the internal and external parameters of the second camera according to the calibration image that meets the preset conditions.
单摄像头的内参可包括f x、f y、c x、c y,其中,f x表示焦距在图像坐标系x轴方向上单位像元大小,f y表示焦距在图像坐标系y轴方向上单位像元大小,c x、c y表示图像平面的主点坐标,主点是光轴与图像平面的交点。f x=f/d x,f y=f/d y,其中,f为单摄像头的焦距,d x表示图像坐标系x轴方向上一个像素的宽度,d y表示图像坐标系y轴方向上一个像素的宽度。图像坐标系是以摄像头拍摄的二维图像为基准建立的坐标系,用于指定物体在拍摄图像中的位置。图像坐标系中的(x,y)坐标系的原点位于摄像头光轴与成像平面的焦点(c x,c y)上,单位为长度单位,即米,像素坐标系中的(u,v)坐标系的原点在图像的左上角,单位为数量单位,即个。(x,y)用于表征物体从摄像头坐标系向图像坐标系的透视投影关系,(u,v)用于表征像素坐标。(x,y)与(u,v)之间的转换关系如公式(1): The internal parameters of a single camera can include f x , f y , c x , and c y , where f x represents the unit size of the focal length in the image coordinate system x-axis direction, and f y represents the unit of the focal length in the image coordinate system y-axis direction. Pixel size, c x , c y represent the coordinates of the principal point of the image plane, and the principal point is the intersection of the optical axis and the image plane. f x = f / d x , f y = f / d y , where f is the focal length of a single camera, d x is the width of a pixel in the x-axis direction of the image coordinate system, and d y is the y-axis direction in the image coordinate system The width of one pixel. The image coordinate system is a coordinate system established on the basis of a two-dimensional image captured by a camera, and is used to specify the position of an object in the captured image. The origin of the (x, y) coordinate system in the image coordinate system is located on the focal point (c x , c y ) of the optical axis of the camera and the imaging plane. The unit is the length unit, that is, meters, (u, v) in the pixel coordinate system. The origin of the coordinate system is in the upper left corner of the image, and the unit is a quantity unit, that is, a unit. (x, y) is used to represent the perspective projection relationship of the object from the camera coordinate system to the image coordinate system, and (u, v) is used to represent the pixel coordinates. The conversion relationship between (x, y) and (u, v) is as shown in formula (1):
Figure PCTCN2019088882-appb-000006
Figure PCTCN2019088882-appb-000006
透视投影是指用中心投影法将形体投射到投影面上,从而获得的一种较为接近视觉效果的单面投影图。Perspective projection refers to a single-sided projection image that is closer to visual effects by projecting a shape onto a projection surface using the center projection method.
单摄像头的外参包括世界坐标系下的坐标转换到摄像头坐标系下的坐标的旋转矩阵和平移矩阵。世界坐标系通过刚体变换到达摄像头坐标系,摄像头坐标系通过透视投影变换到达图像坐标系。刚体变换是指三维空间中,当物体不发生形变时,对一个几何物体做旋转、平移的运动,即为刚体变换。刚体变换如公式(8)。The external parameters of a single camera include a rotation matrix and a translation matrix that transform the coordinates in the world coordinate system to the coordinates in the camera coordinate system. The world coordinate system reaches the camera coordinate system through rigid body transformation, and the camera coordinate system reaches the image coordinate system through perspective projection transformation. Rigid body transformation refers to the movement of rotation and translation of a geometric object when the object is not deformed in three-dimensional space, which is the rigid body transformation. The rigid body transformation is as shown in formula (8).
Figure PCTCN2019088882-appb-000007
Figure PCTCN2019088882-appb-000007
Figure PCTCN2019088882-appb-000008
Figure PCTCN2019088882-appb-000008
其中,X c代表摄像头坐标系,X代表世界坐标系,R代表世界坐标系到摄像头坐标系的旋转矩阵,T代表世界坐标系到摄像头坐标系的平移矩阵。世界坐标系原点和摄像头坐标系原点之间的距离受x、y、z三个轴方向上的分量共同控制,具有三个自由度,R为分别绕X、Y、Z轴旋转的效果之和。t x表示x轴方向的平移量,t y表示y轴方向的平移量,t z表示z轴方向的平移量。 Among them, X c represents the camera coordinate system, X represents the world coordinate system, R represents the rotation matrix from the world coordinate system to the camera coordinate system, and T represents the translation matrix from the world coordinate system to the camera coordinate system. The distance between the origin of the world coordinate system and the origin of the camera coordinate system is controlled by the components in the three axis directions x, y, and z. It has three degrees of freedom, and R is the sum of the effects of rotation around the X, Y, and Z axes, respectively . t x represents the translation amount in the x-axis direction, t y represents the translation amount in the y-axis direction, and t z represents the translation amount in the z-axis direction.
世界坐标系是客观三维空间的绝对坐标系,可以建立在任意位置。例如对于每张标定图像,世界坐标系可以建立在以标定板的左上角角点为原点,以标定板平面为XY平面,Z轴垂直标定板平面向上。摄像头坐标系是以摄像头光心为坐标系的原点,以摄像头的光轴作为Z轴,X轴、Y轴分别平行于图像坐标系的X轴Y轴。图像坐标系的主点是光轴与图像平面的交点。图像坐标系以主点为原点。像素坐标系是指原点定义在图像平面的左上角位置。The world coordinate system is an absolute coordinate system in objective three-dimensional space and can be established at any position. For example, for each calibration image, the world coordinate system can be established with the upper left corner point of the calibration plate as the origin, the calibration plate plane as the XY plane, and the Z axis perpendicular to the calibration plate plane upwards. The camera coordinate system is based on the optical center of the camera as the origin, the optical axis of the camera is used as the Z axis, and the X and Y axes are respectively parallel to the X and Y axes of the image coordinate system. The principal point of the image coordinate system is the intersection of the optical axis and the image plane. The image coordinate system uses the principal point as the origin. The pixel coordinate system means that the origin is defined in the upper left corner of the image plane.
通过单个摄像头拍摄不同角度的标定板得到标定图像,从标定图像中提取特征点,计算无畸变情况下,单个摄像头的5个内参和2个外参,应用最小二乘法计算得到畸变系数,再通过极大似然法进行优化,得到单个摄像头最终的内参和外参。Using a single camera to take calibration plates with different angles to obtain a calibration image, extract feature points from the calibration image, and calculate the distortion-free case for the five internal and two external parameters of a single camera. Apply the least square method to calculate the distortion coefficient. The maximum likelihood method is optimized to obtain the final internal and external parameters of a single camera.
首先建立摄像头模型,得到公式(9)。First, a camera model is established, and formula (9) is obtained.
Figure PCTCN2019088882-appb-000009
Figure PCTCN2019088882-appb-000009
其中,
Figure PCTCN2019088882-appb-000010
的齐次坐标表示图像平面的像素坐标(u,v,1),
Figure PCTCN2019088882-appb-000011
的齐次坐标表示世界坐标系的坐标点(X,Y,Z,1),A表示内参矩阵,R表示世界坐标系转换到摄像头坐标系的旋转矩阵,T表示世界坐标系转换到摄像头坐标系的平移矩阵。
among them,
Figure PCTCN2019088882-appb-000010
Homogeneous coordinates represent the pixel coordinates (u, v, 1) of the image plane,
Figure PCTCN2019088882-appb-000011
Homogeneous coordinates represent the coordinate points of the world coordinate system (X, Y, Z, 1), A represents the internal parameter matrix, R represents the rotation matrix converted from the world coordinate system to the camera coordinate system, and T represents the world coordinate system converted to the camera coordinate system Translation matrix.
Figure PCTCN2019088882-appb-000012
Figure PCTCN2019088882-appb-000012
其中,α=f/d x,β=f/d y,f为单摄像头的焦距,d x表示图像坐标系x轴方向上一个像素的宽度,d y表示图像坐标系y轴方向上一个像素的宽度。γ代表像素点在x,y方向上尺度的偏差。u 0、v 0表示图像平面的主点坐标,主点是光轴与图像平面的交点。 Among them, α = f / d x , β = f / d y , f is the focal length of a single camera, d x represents the width of one pixel in the x-axis direction of the image coordinate system, and d y represents one pixel in the y-axis direction of the image coordinate system The width. γ represents the deviation of the pixel point in the x, y direction. u 0 and v 0 represent the coordinates of the principal point of the image plane, and the principal point is the intersection of the optical axis and the image plane.
将世界坐标系构造在Z=0的平面上,再进行单应性计算,令Z=0则将上述转换为公式(11)。The world coordinate system is constructed on the plane of Z = 0, and then the homography calculation is performed. If Z = 0, the above is converted into formula (11).
Figure PCTCN2019088882-appb-000013
Figure PCTCN2019088882-appb-000013
单应性是指在计算机视觉中被定义为一个平面到另一个平面的投影映射。令H=A[r 1 r 2 t],H为单应性矩阵。H是一个3*3的矩阵,并且有一个元素作为齐次坐标,因此,H有8个未知量待解。将单应性矩阵写成三个列向量的形式,即H=[h 1 h 2 h 3],从而得到公式(12)。 Homogeneity is defined in computer vision as a projection mapping from one plane to another. Let H = A [r 1 r 2 t], where H is a homography matrix. H is a 3 * 3 matrix and has one element as a homogeneous coordinate. Therefore, H has 8 unknowns to be solved. The homography matrix is written in the form of three column vectors, that is, H = [h 1 h 2 h 3 ], thereby obtaining formula (12).
[h 1 h 2 h 3]=λA[r 1 r 2 t]             公式(12) [h 1 h 2 h 3 ] = λA [r 1 r 2 t] Formula (12)
对于公式(14),采用两个约束条件,第一,r 1,r 2正交,得r 1r 2=0,r 1,r 2分别绕x,y轴旋转。第二,旋转向量的模为1,即|r 1|=|r 2|=1。通过两个约束条件,将r 1,r 2代换为h 1,h 2 与A的组合进行表达。即r 1=h 1A -1,r 2=h 2A -1。根据两个约束条件,可以得到公式(15): For formula (14), two constraints are used. First, r 1 , r 2 are orthogonal, and r 1 r 2 = 0, and r 1 , r 2 are rotated around the x and y axes, respectively. Second, the modulus of the rotation vector is 1, that is, | r 1 | = | r 2 | = 1. Through two constraints, r 1 , r 2 is replaced by h 1 , h 2 and A. That is, r 1 = h 1 A -1 and r 2 = h 2 A -1 . According to two constraints, formula (15) can be obtained:
Figure PCTCN2019088882-appb-000014
Figure PCTCN2019088882-appb-000014
make
Figure PCTCN2019088882-appb-000015
Figure PCTCN2019088882-appb-000015
B为一个对称阵,故B的有效元素为6个,6个元素构成向量b。B is a symmetric matrix, so the effective elements of B are 6, and the 6 elements constitute the vector b.
b=[B 11,B 12,B 22,B 13,B 23,B 33] T b = [B 11 , B 12 , B 22 , B 13 , B 23 , B 33 ] T
Figure PCTCN2019088882-appb-000016
Figure PCTCN2019088882-appb-000016
可以计算得到V ij=[h i1h j1,h i1h j2+h i2h j1,h i2h j2,h i3h j1+h i1h j3,h i3h j2+h i2h j3,h i3h j3] T It can be calculated that V ij = [h i1 h j1 , h i1 h j2 + h i2 h j1 , h i2 h j2 , h i3 h j1 + h i1 h j3 , h i3 h j2 + h i2 h j3 , h i3 h j3 ] T
利用约束条件得到方程组:Use constraints to get the equations:
Figure PCTCN2019088882-appb-000017
Figure PCTCN2019088882-appb-000017
通过至少三幅图像,应用公式(16)估算出B,对B进行分解得到摄像头的内参矩阵A的初始值。Using at least three images, apply formula (16) to estimate B, and decompose B to obtain the initial value of the internal parameter matrix A of the camera.
基于内参矩阵计算外参矩阵,得到外参矩阵的初始值。The external parameter matrix is calculated based on the internal parameter matrix to obtain the initial value of the external parameter matrix.
Figure PCTCN2019088882-appb-000018
Figure PCTCN2019088882-appb-000018
其中,λ=1/||A -1h 1||=1/||A -1h 2||。 Where λ = 1 / || A -1 h 1 || = 1 / || A -1 h 2 ||.
摄像头完整几何模型采用公式(16)The complete geometric model of the camera uses the formula (16)
Figure PCTCN2019088882-appb-000019
Figure PCTCN2019088882-appb-000019
其中,公式(16)是将世界坐标系构造在Z为0平面上得到的几何模型,X,Y为平面标定板上特征点的世界坐标,x,y,z为标定板上特征点在摄像头坐标系的物理坐标。Among them, formula (16) is a geometric model obtained by constructing the world coordinate system on the plane Z is 0, X, Y are the world coordinates of the feature points on the calibration plate, and x, y, and z are the feature points on the calibration plate on the camera The physical coordinates of the coordinate system.
Figure PCTCN2019088882-appb-000020
Figure PCTCN2019088882-appb-000020
R为标定板的世界坐标系到摄像头坐标系的旋转矩阵,T为标定板的世界坐标系到摄像头坐标系的平移矩阵。R is the rotation matrix from the world coordinate system of the calibration plate to the camera coordinate system, and T is the translation matrix from the world coordinate system of the calibration plate to the camera coordinate system.
对标定板上特征点在摄像头坐标系的物理坐标[x,y,z]进行归一化处理,得到目标坐标点(x',y')。The physical coordinates [x, y, z] of the feature points on the calibration board in the camera coordinate system are normalized to obtain the target coordinate points (x ', y').
Figure PCTCN2019088882-appb-000021
Figure PCTCN2019088882-appb-000021
利用畸变模型对摄像头坐标系像点进行畸变变形处理。Distortion processing is performed on the camera coordinate system image points using the distortion model.
Figure PCTCN2019088882-appb-000022
Figure PCTCN2019088882-appb-000022
利用内参将物理坐标转换为图像坐标。Use internal parameters to convert physical coordinates to image coordinates.
Figure PCTCN2019088882-appb-000023
Figure PCTCN2019088882-appb-000023
将内参矩阵的初始值和外参矩阵的初始值导入到极大似然公式得到最终的内参矩阵和外参矩阵。极大似然公式为
Figure PCTCN2019088882-appb-000024
求取最小值。
The initial values of the internal parameter matrix and the external parameter matrix are imported into the maximum likelihood formula to obtain the final internal parameter matrix and external parameter matrix. The maximum likelihood formula is
Figure PCTCN2019088882-appb-000024
Find the minimum.
双摄像头模组包括第一摄像头和第二摄像头。第一摄像头和第二摄像头可均为彩色摄像头,或者一个为黑白摄像头,一个为彩色摄像头,或者两个黑白摄像头。The dual camera module includes a first camera and a second camera. The first camera and the second camera may both be color cameras, or one is a black and white camera, one is a color camera, or two black and white cameras.
操作1116,根据该第一摄像头的内参和外参以及第二摄像头的内参和外参获取双摄像头模组的外参。In operation 1116, external parameters of the dual camera module are obtained according to the internal and external parameters of the first camera and the internal and external parameters of the second camera.
双摄像头标定是指确定双摄像头模组的外参值。双摄像头模组的外参包括双摄像头间的旋转矩阵和双摄像头间的平移矩阵。双摄像头之间的旋转矩阵和平移矩阵可以有公式(20)求取。The dual camera calibration refers to determining the external parameter value of the dual camera module. The external parameters of the dual camera module include a rotation matrix between the dual cameras and a translation matrix between the dual cameras. The rotation matrix and translation matrix between the two cameras can be obtained by formula (20).
Figure PCTCN2019088882-appb-000025
Figure PCTCN2019088882-appb-000025
其中,R'为双摄像头间的旋转矩阵,T'为双摄像头间的平移矩阵,R r为第一摄像头经过标定得到的相对标定物的旋转矩阵(即标定物在世界坐标系的坐标转换到第一摄像头的摄像头坐标系的坐标的旋转矩阵),T r为第一摄像头经过标定得到的相对标定物的平移矩阵(即标定物在世界坐标系的坐标转换到第一摄像头的摄像头坐标系的坐标的平移矩阵)。R l为第二摄像头经过标定得到的相对标定物的旋转矩阵(即标定物在世界坐标系的坐标转换到第二摄像头的摄像头坐标系的坐标的旋转矩阵),T l为第二摄像头经过标定得到的相对标定物的平移矩阵(即标定物在世界坐标系的坐标转换到第二摄像头的摄像头坐 标系的坐标的平移矩阵)。 Among them, R ′ is the rotation matrix between the two cameras, T ′ is the translation matrix between the two cameras, and R r is the rotation matrix of the first calibration camera relative to the calibration object (that is, the coordinates of the calibration object in the world coordinate system are converted to The rotation matrix of the coordinates of the camera coordinate system of the first camera), T r is the translation matrix of the relative calibration object obtained by the calibration of the first camera (that is, the coordinates of the calibration object in the world coordinate system are converted to the camera coordinate system of the first camera Coordinate translation matrix). R l is the rotation matrix of the second camera relative to the calibration object (that is, the coordinates of the calibration object in the world coordinate system are converted to the coordinates of the camera camera coordinate system of the second camera), T l is the second camera calibration The obtained translation matrix of the relative calibration object (ie, the translation matrix of the coordinates of the calibration object in the world coordinate system and the coordinates of the camera coordinate system of the second camera).
本实施例中通过对双摄像头模组采集的标定图像计算标定图像中斜边区域的第一模量传递函数和中心区域的第二模量传递函数,计算两者的比值,根据比值与阈值的比较,确定标定图像的清晰度是否符合预设条件,符合预设条件,则采用清晰度符合预设条件的标定图像进行单摄像头和双摄像头模组标定,提高了标定的精度。In this embodiment, the first modulus transfer function of the hypotenuse region in the calibration image and the second modulus transfer function of the center region are calculated from the calibration images collected by the dual camera module, and the ratio between the two is calculated. According to the ratio and the threshold value, Compare to determine whether the sharpness of the calibration image meets the preset conditions and meet the preset conditions, then use the calibration image with sharpness that meets the preset conditions for single-camera and dual-camera module calibration to improve the calibration accuracy.
本申请实施例还提供了一种标定板。该标定板包括承载体;预设图案,设置在该承载体上;该预设图案包括标定图案和斜边图案,该斜边图案位于该标定图案的四个侧边,且该斜边图案与该标定图案之间存在间隙。该斜边图案与该标定图案之间间隙的最近距离为该标定图案中两个相邻特征点之间的距离的0.1至1倍。The embodiment of the present application further provides a calibration board. The calibration plate includes a carrier; a preset pattern is disposed on the carrier; the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and the hypotenuse pattern and the There are gaps between the calibration patterns. The closest distance between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
斜边图案位于标定图案的四个侧边,则拍摄得到的标定图像,对标定图像进行检测得到四个斜边区域,求取四个斜边区域各自的第一模量传递函数,再分别计算四个第一模量传递函数与第二模量传递函数的比值,当四个比值都超过阈值时,确定该标定图像的清晰度满足预设条件。标定图案中的两个相邻特征点是指在标定图案的同一行或同一列上相邻两个特征点。The hypotenuse pattern is located on the four sides of the calibration pattern. Then, the obtained calibration image is taken. The calibration image is detected to obtain four hypotenuse regions. The first modulus transfer functions of each of the four hypotenuse regions are obtained, and then calculated separately. Ratios of the four first modulus transfer functions to the second modulus transfer functions. When all four ratios exceed a threshold, it is determined that the sharpness of the calibration image meets a preset condition. Two adjacent feature points in the calibration pattern refer to two adjacent feature points on the same row or column of the calibration pattern.
在一个实施例中,斜边图案中的斜边的倾斜角度控制在2至10度内。In one embodiment, the inclined angle of the hypotenuse in the hypotenuse pattern is controlled within 2 to 10 degrees.
FIG. 12 is a structural block diagram of an image processing apparatus in an embodiment. As shown in FIG. 12, the image processing apparatus includes an image acquisition module 1202, a detection module 1204, a parameter acquisition module 1206, a reading module 1208, a ratio acquisition module 1210, and a determination module 1212, where:
The image acquisition module 1202 is configured to acquire a calibration image.
The detection module 1204 is configured to detect the calibration image to obtain a hypotenuse region.
The parameter acquisition module 1206 is configured to acquire a first modulus transfer function of the hypotenuse region.
The reading module 1208 is configured to read a second modulus transfer function of a central region of the calibration image.
The ratio acquisition module 1210 is configured to acquire a ratio of the first modulus transfer function to the second modulus transfer function.
The determination module 1212 is configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
The image processing apparatus in the above embodiment detects the calibration image to obtain a hypotenuse region, computes the first modulus transfer function of the hypotenuse region, and then obtains the second modulus transfer function of the central region of the calibration image. When the ratio of the first modulus transfer function to the second modulus transfer function exceeds the threshold, the sharpness of the calibration image meets the preset condition. Calibration images whose sharpness meets the condition are thus selected, and using them for subsequent calibration improves the calibration accuracy.
In one embodiment, the image acquisition module 1202 is further configured to obtain a calibration image by photographing a calibration board containing a preset pattern, where the preset pattern includes a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on the four sides of the calibration pattern, and the closest distance between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
在一个实施例中,参数获取模块1206还用于获取该斜边区域,将该斜边区域分成第一数量的子区域;获取该第一数量的子区域的模量传递函数;根据该第一数量的子区域的模量传递函数得到该斜边区域的第一模量传递函数。In one embodiment, the parameter obtaining module 1206 is further configured to obtain the hypotenuse region, and divide the hypotenuse region into a first number of subregions; acquire a modulus transfer function of the first number of subregions; and according to the first The modulus transfer function of the number of sub-regions obtains the first modulus transfer function of the hypotenuse region.
In one embodiment, the parameter acquisition module 1206 is further configured to acquire the modulus transfer functions of a second number of sub-regions selected from the first number of sub-regions, and to average the modulus transfer functions of the second number of sub-regions to obtain the first modulus transfer function of the hypotenuse region.
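A minimal sketch of this divide-select-average scheme follows. Two assumptions are made that the embodiment leaves open: a per-sub-region MTF measurement is modeled as a hypothetical `measure_mtf` callback, and the "second number" of sub-regions is chosen here as the highest-scoring ones (one plausible selection rule, not mandated by the text).

```python
def hypotenuse_mtf(subregions, measure_mtf, keep):
    """Estimate the first modulus transfer function of a hypotenuse region.

    subregions  -- the first number of sub-regions cut from the hypotenuse region
    measure_mtf -- callable returning the MTF of one sub-region (hypothetical)
    keep        -- the second number of sub-regions to select and average
    """
    if not 0 < keep <= len(subregions):
        raise ValueError("keep must select from the available sub-regions")
    # Measure every sub-region, then keep the best `keep` values and average them.
    mtfs = sorted((measure_mtf(s) for s in subregions), reverse=True)
    chosen = mtfs[:keep]
    return sum(chosen) / len(chosen)
```

Averaging over several sub-regions smooths out local noise along the edge, while discarding the weakest measurements guards against sub-regions that clipped the edge or fell on a defect.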
In one embodiment, the parameter acquisition module 1206 is further configured to acquire the first modulus transfer function of the hypotenuse region by the slanted edge method or from a spatial frequency response curve.
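As a rough illustration of the slanted edge idea (not the full ISO 12233 procedure, which also projects pixels along the fitted edge to supersample the profile), the MTF can be estimated by differentiating an edge spread function (ESF) into a line spread function (LSF) and taking the normalized magnitude of its Fourier transform. The edge profile below is synthetic and the normalization to the zero-frequency component is a simplifying assumption.

```python
import cmath

def mtf_from_edge_profile(esf, freqs):
    """Estimate MTF samples from a supersampled edge spread function.

    LSF = discrete derivative of the ESF; MTF(f) = |DFT(LSF)(f)| / |DFT(LSF)(0)|.
    """
    lsf = [b - a for a, b in zip(esf, esf[1:])]  # differentiate ESF -> LSF
    n = len(lsf)
    dc = sum(lsf)  # zero-frequency component, used for normalization
    out = []
    for f in freqs:
        val = sum(l * cmath.exp(-2j * cmath.pi * f * k / n)
                  for k, l in enumerate(lsf))
        out.append(abs(val) / abs(dc))
    return out

# Synthetic soft edge: flat, a linear ramp from 0 to 1, then flat again.
esf = [0.0] * 100 + [i / 56 for i in range(57)] + [1.0] * 100
mtf0, mtf1 = mtf_from_edge_profile(esf, [0, 1])
```

A sharper edge yields a narrower LSF and hence an MTF that stays closer to 1 at low frequencies, which is why the ratio of edge MTF to central MTF serves as a sharpness criterion.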
FIG. 13 is a structural block diagram of an image processing apparatus in another embodiment. As shown in FIG. 13, the image processing apparatus includes an image acquisition module 1202, a detection module 1204, a parameter acquisition module 1206, a reading module 1208, a ratio acquisition module 1210, and a determination module 1212, and further includes a calibration module 1214, where: the image acquisition module 1202 is further configured to acquire calibration images captured respectively by the first camera and the second camera of a dual-camera module; the detection module 1204 is further configured to detect each calibration image to obtain the corresponding hypotenuse region; the parameter acquisition module 1206 is further configured to acquire the first modulus transfer function of the hypotenuse region; the reading module 1208 is further configured to read the second modulus transfer function of the central region of the calibration image; the ratio acquisition module 1210 is further configured to acquire the ratio of the first modulus transfer function to the second modulus transfer function; and the determination module 1212 is further configured to determine that the sharpness of the calibration image meets the preset condition when the ratio exceeds the threshold.
The calibration module 1214 is configured to obtain the intrinsic and extrinsic parameters of the first camera and the intrinsic and extrinsic parameters of the second camera of the dual-camera module from the calibration images that meet the preset condition, and to obtain the extrinsic parameters of the dual-camera module from the intrinsic and extrinsic parameters of the first camera and of the second camera.
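The final composition step performed by a calibration module of this kind can be sketched in pure Python as follows, assuming each camera's extrinsics relative to the same calibration board (rotation R, translation t, with x_cam = R @ x_board + t) have already been obtained by a standard single-camera calibration; how those per-camera parameters are estimated is outside this sketch.

```python
def matmul(A, B):
    """3x3 matrix product for plain nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def apply(R, t, x):
    """x_cam = R @ x + t for a 3-vector x."""
    return [sum(R[i][k] * x[k] for k in range(3)) + t[i] for i in range(3)]

def relative_extrinsics(R1, t1, R2, t2):
    """Pose of camera 2 relative to camera 1, from board-relative extrinsics.

    Algebra: x_cam2 = R21 @ x_cam1 + t21 with
    R21 = R2 @ R1^T and t21 = t2 - R21 @ t1.
    """
    R21 = matmul(R2, transpose(R1))
    t21 = [t2[i] - sum(R21[i][k] * t1[k] for k in range(3)) for i in range(3)]
    return R21, t21

# Example: camera 1 rotated 90 degrees about z and offset; camera 2 at the board frame.
R1 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]; t1 = [1, 0, 0]
R2 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]; t2 = [0, 1, 0]
R21, t21 = relative_extrinsics(R1, t1, R2, t2)
```

The derived (R21, t21) pair is the dual-camera module's extrinsic parameter set: it maps any point expressed in the first camera's frame into the second camera's frame.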
An embodiment of the present application further provides an electronic device. The electronic device includes a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the operations of the image processing method.
An embodiment of the present application further provides a non-volatile computer-readable storage medium on which a computer program is stored, the computer program implementing the operations of the image processing method when executed by a processor.
FIG. 14 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 14, the electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory is used to store data, programs, and the like; at least one computer program is stored in the memory, and this computer program can be executed by the processor to implement the wireless network communication method applicable to electronic devices provided in the embodiments of the present application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the computer program can be executed by the processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached runtime environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The modules of the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or server. When the computer program is executed by a processor, the operations of the methods described in the embodiments of the present application are implemented.
A computer program product containing instructions, when run on a computer, causes the computer to perform the image processing method.
An embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 15 is a schematic diagram of an image processing circuit in an embodiment. As shown in FIG. 15, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in FIG. 15, the image processing circuit includes a first ISP processor 1530, a second ISP processor 1540, and a control logic 1550. The first camera 1510 includes one or more first lenses 1512 and a first image sensor 1514. The first image sensor 1514 may include a color filter array (such as a Bayer filter); the first image sensor 1514 may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the first ISP processor 1530. The second camera 1520 includes one or more second lenses 1522 and a second image sensor 1524. The second image sensor 1524 may include a color filter array (such as a Bayer filter); the second image sensor 1524 may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the second ISP processor 1540.
The first image collected by the first camera 1510 is transmitted to the first ISP processor 1530 for processing. After processing the first image, the first ISP processor 1530 may send statistical data of the first image (such as image brightness, image contrast, image color, and the like) to the control logic 1550, and the control logic 1550 may determine control parameters of the first camera 1510 from the statistical data, so that the first camera 1510 can perform operations such as auto-focus and auto-exposure according to the control parameters. After being processed by the first ISP processor 1530, the first image may be stored in the image memory 1560, and the first ISP processor 1530 may also read the image stored in the image memory 1560 for processing. In addition, after being processed by the first ISP processor 1530, the first image may be sent directly to the display 1570 for display, and the display 1570 may also read the image in the image memory 1560 for display.
The first ISP processor 1530 processes image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the first ISP processor 1530 may perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The image memory 1560 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the first image sensor 1514 interface, the first ISP processor 1530 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1560 for additional processing before being displayed. The first ISP processor 1530 receives the processed data from the image memory 1560 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1530 may be output to the display 1570 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 1530 may also be sent to the image memory 1560, and the display 1570 may read image data from the image memory 1560. In one embodiment, the image memory 1560 may be configured to implement one or more frame buffers.
The statistical data determined by the first ISP processor 1530 may be sent to the control logic 1550. For example, the statistical data may include first image sensor 1514 statistics such as auto-exposure, auto white balance, auto-focus, flicker detection, black level compensation, and first lens 1512 shading correction. The control logic 1550 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines may determine the control parameters of the first camera 1510 and the control parameters of the first ISP processor 1530 from the received statistical data. For example, the control parameters of the first camera 1510 may include gain, integration time for exposure control, image stabilization parameters, flash control parameters, first lens 1512 control parameters (for example, focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as first lens 1512 shading correction parameters.
Similarly, the second image collected by the second camera 1520 is transmitted to the second ISP processor 1540 for processing. After processing the second image, the second ISP processor 1540 may send statistical data of the second image (such as image brightness, image contrast, image color, and the like) to the control logic 1550, and the control logic 1550 may determine control parameters of the second camera 1520 from the statistical data, so that the second camera 1520 can perform operations such as auto-focus and auto-exposure according to the control parameters. After being processed by the second ISP processor 1540, the second image may be stored in the image memory 1560, and the second ISP processor 1540 may also read the image stored in the image memory 1560 for processing. In addition, after being processed by the second ISP processor 1540, the second image may be sent directly to the display 1570 for display, and the display 1570 may also read the image in the image memory 1560 for display. The second camera 1520 and the second ISP processor 1540 may also implement the processing operations described for the first camera 1510 and the first ISP processor 1530.
The operations of the image processing method are implemented using the image processing technology of FIG. 15 as follows.
Any reference to memory, storage, a database, or other media used in the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and although their descriptions are specific and detailed, they should not therefore be construed as limiting the patent scope of the present application. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (18)

  1. An image processing method, comprising:
    acquiring a calibration image;
    detecting the calibration image to obtain a hypotenuse region;
    acquiring a first modulus transfer function of the hypotenuse region;
    reading a second modulus transfer function of a central region of the calibration image;
    acquiring a ratio of the first modulus transfer function to the second modulus transfer function; and
    when the ratio exceeds a threshold, determining that the sharpness of the calibration image meets a preset condition.
  2. The method according to claim 1, wherein the acquiring a calibration image comprises:
    obtaining a calibration image by photographing a calibration board containing a preset pattern, wherein the preset pattern comprises a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and a gap exists between the hypotenuse pattern and the calibration pattern.
  3. The method according to claim 1, wherein the acquiring a first modulus transfer function of the hypotenuse region comprises:
    acquiring the hypotenuse region, and dividing the hypotenuse region into a first number of sub-regions;
    acquiring modulus transfer functions of the first number of sub-regions; and
    obtaining the first modulus transfer function of the hypotenuse region according to the modulus transfer functions of the first number of sub-regions.
  4. The method according to claim 3, wherein the obtaining the first modulus transfer function of the hypotenuse region according to the modulus transfer functions of the first number of sub-regions comprises:
    acquiring modulus transfer functions of a second number of sub-regions, the second number of sub-regions being selected from the first number of sub-regions; and
    averaging the modulus transfer functions of the second number of sub-regions to obtain the first modulus transfer function of the hypotenuse region.
  5. The method according to any one of claims 1 to 4, comprising:
    acquiring the first modulus transfer function of the hypotenuse region by a slanted edge method or from a spatial frequency response curve.
  6. The method according to claim 1, further comprising:
    acquiring calibration images captured respectively by a first camera and a second camera of a dual-camera module;
    detecting each calibration image to obtain a corresponding hypotenuse region;
    acquiring a first modulus transfer function of the hypotenuse region;
    reading a second modulus transfer function of a central region of the calibration image;
    acquiring a ratio of the first modulus transfer function to the second modulus transfer function;
    when the ratio exceeds a threshold, determining that the sharpness of the calibration image meets a preset condition;
    obtaining intrinsic and extrinsic parameters of the first camera and intrinsic and extrinsic parameters of the second camera of the dual-camera module according to calibration images that meet the preset condition; and
    obtaining extrinsic parameters of the dual-camera module according to the intrinsic and extrinsic parameters of the first camera and the intrinsic and extrinsic parameters of the second camera.
  7. The method according to claim 1, further comprising:
    when the hypotenuse pattern is located on four sides of the calibration image, detecting the calibration image to obtain four hypotenuse regions, computing a first modulus transfer function of each of the four hypotenuse regions, and then separately calculating ratios of the four first modulus transfer functions to the second modulus transfer function; and when all four ratios exceed the threshold, determining that the sharpness of the calibration image meets the preset condition.
  8. A calibration board, comprising:
    a carrier; and
    a preset pattern provided on the carrier;
    wherein the preset pattern comprises a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and a gap exists between the hypotenuse pattern and the calibration pattern.
  9. The calibration board according to claim 8, wherein the closest distance across the gap between the hypotenuse pattern and the calibration pattern is 0.1 to 1 times the distance between two adjacent feature points in the calibration pattern.
  10. An image processing apparatus, comprising:
    an image acquisition module, configured to acquire a calibration image;
    a detection module, configured to detect the calibration image to obtain a hypotenuse region;
    a parameter acquisition module, configured to acquire a first modulus transfer function of the hypotenuse region;
    a reading module, configured to read a second modulus transfer function of a central region of the calibration image;
    a ratio acquisition module, configured to acquire a ratio of the first modulus transfer function to the second modulus transfer function; and
    a determination module, configured to determine that the sharpness of the calibration image meets a preset condition when the ratio exceeds a threshold.
  11. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following operations:
    acquiring a calibration image;
    detecting the calibration image to obtain a hypotenuse region;
    acquiring a first modulus transfer function of the hypotenuse region;
    reading a second modulus transfer function of a central region of the calibration image;
    acquiring a ratio of the first modulus transfer function to the second modulus transfer function; and
    when the ratio exceeds a threshold, determining that the sharpness of the calibration image meets a preset condition.
  12. The electronic device according to claim 11, wherein the processor is further configured to perform:
    obtaining a calibration image by photographing a calibration board containing a preset pattern, wherein the preset pattern comprises a calibration pattern and a hypotenuse pattern, the hypotenuse pattern is located on four sides of the calibration pattern, and a gap exists between the hypotenuse pattern and the calibration pattern.
  13. The electronic device according to claim 11, wherein the processor is further configured to perform:
    acquiring the hypotenuse region, and dividing the hypotenuse region into a first number of sub-regions;
    acquiring modulus transfer functions of the first number of sub-regions; and
    obtaining the first modulus transfer function of the hypotenuse region according to the modulus transfer functions of the first number of sub-regions.
  14. The electronic device according to claim 13, wherein the processor is further configured to perform:
    acquiring modulus transfer functions of a second number of sub-regions, the second number of sub-regions being selected from the first number of sub-regions; and
    averaging the modulus transfer functions of the second number of sub-regions to obtain the first modulus transfer function of the hypotenuse region.
  15. The electronic device according to any one of claims 11 to 14, wherein the processor is further configured to perform:
    acquiring the first modulus transfer function of the hypotenuse region by a slanted edge method or from a spatial frequency response curve.
  16. The electronic device according to claim 11, wherein the processor is further configured to perform:
    acquiring calibration images captured respectively by a first camera and a second camera of a dual-camera module;
    detecting each calibration image to obtain a corresponding hypotenuse region;
    acquiring a first modulus transfer function of the hypotenuse region;
    reading a second modulus transfer function of a central region of the calibration image;
    acquiring a ratio of the first modulus transfer function to the second modulus transfer function;
    when the ratio exceeds a threshold, determining that the sharpness of the calibration image meets a preset condition;
    obtaining intrinsic and extrinsic parameters of the first camera and intrinsic and extrinsic parameters of the second camera of the dual-camera module according to calibration images that meet the preset condition; and
    obtaining extrinsic parameters of the dual-camera module according to the intrinsic and extrinsic parameters of the first camera and the intrinsic and extrinsic parameters of the second camera.
  17. The electronic device according to claim 11, wherein the processor is further configured to perform:
    when the hypotenuse pattern is located on four sides of the calibration image, detecting the calibration image to obtain four hypotenuse regions, computing a first modulus transfer function of each of the four hypotenuse regions, and then separately calculating ratios of the four first modulus transfer functions to the second modulus transfer function; and when all four ratios exceed the threshold, determining that the sharpness of the calibration image meets the preset condition.
  18. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the operations of the image processing method according to any one of claims 1 to 7 are implemented.
PCT/CN2019/088882 2018-07-11 2019-05-28 Image processing method and apparatus, electronic device and computer-readable storage medium WO2020010945A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810758592.1 2018-07-11
CN201810758592.1A CN110717942B (en) 2018-07-11 2018-07-11 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020010945A1

Family

ID=69143229



Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427752A (en) * 2020-06-09 2020-07-17 北京东方通科技股份有限公司 Regional anomaly monitoring method based on edge calculation
CN115482288A (en) * 2021-05-31 2022-12-16 影石创新科技股份有限公司 Lens parameter conversion method, device, computer equipment and storage medium
CN114782587B (en) * 2022-06-16 2022-09-02 深圳市国人光速科技有限公司 Jet printing image processing method and jet printing system for solving jet printing linear step pixel

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050280803A1 (en) * 2004-06-17 2005-12-22 The Boeing Company Method for calibration and certifying laser projection beam accuracy
CN101109620A (en) * 2007-09-05 2008-01-23 北京航空航天大学 Method for standardizing structural parameter of structure optical vision sensor
CN101586943A (en) * 2009-07-15 2009-11-25 北京航空航天大学 Method for calibrating structure light vision transducer based on one-dimensional target drone
CN105424731A (en) * 2015-11-04 2016-03-23 中国人民解放军第四军医大学 Resolution ratio performance measuring device of cone beam CT and calibration method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8456545B2 (en) * 2009-05-08 2013-06-04 Qualcomm Incorporated Systems, methods, and apparatus for generation of reinforcement pattern and systems, methods, and apparatus for artifact evaluation
JP2011082746A (en) * 2009-10-06 2011-04-21 Canon Inc Image processing apparatus
US8837837B2 (en) * 2011-12-30 2014-09-16 Mckesson Financial Holdings Methods, apparatuses, and computer program products for determining a modulation transfer function of an imaging system
JP5983373B2 (en) * 2012-12-07 2016-08-31 富士通株式会社 Image processing apparatus, information processing method, and program
US20150109613A1 (en) * 2013-10-18 2015-04-23 Point Grey Research Inc. Apparatus and methods for characterizing lenses
AU2015234328B2 (en) * 2015-09-30 2018-03-15 Canon Kabushiki Kaisha Calibration marker for 3D printer calibration
CN106101697B (en) * 2016-06-21 2017-12-05 深圳市辰卓科技有限公司 Approach for detecting image sharpness, device and test equipment
CN107948531A (en) * 2017-12-29 2018-04-20 努比亚技术有限公司 A kind of image processing method, terminal and computer-readable recording medium


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256727A (en) * 2020-02-13 2021-08-13 纳恩博(北京)科技有限公司 Mobile device and method and device for online parameter calibration and inspection of image sensing system
CN111612852A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for verifying camera parameters
CN112184723A (en) * 2020-09-16 2021-01-05 杭州三坛医疗科技有限公司 Image processing method and device, electronic device and storage medium
CN112184723B (en) * 2020-09-16 2024-03-26 杭州三坛医疗科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112435290A (en) * 2020-09-29 2021-03-02 南京林业大学 Leaf area image measuring method based on saturation segmentation
CN114445501A (en) * 2020-10-30 2022-05-06 北京小米移动软件有限公司 Multi-camera calibration method, multi-camera calibration device and storage medium
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel spacing detection method and device
CN112232279B (en) * 2020-11-04 2023-09-05 杭州海康威视数字技术股份有限公司 Personnel interval detection method and device
CN112581546A (en) * 2020-12-30 2021-03-30 深圳市杉川机器人有限公司 Camera calibration method and device, computer equipment and storage medium
CN113240752A (en) * 2021-05-21 2021-08-10 中科创达软件股份有限公司 Internal reference and external reference cooperative calibration method and device
CN113240752B (en) * 2021-05-21 2024-03-22 中科创达软件股份有限公司 Internal reference and external reference collaborative calibration method and device
CN113706632A (en) * 2021-08-31 2021-11-26 上海景吾智能科技有限公司 Calibration method and system based on three-dimensional visual calibration plate
CN113706632B (en) * 2021-08-31 2024-01-16 上海景吾智能科技有限公司 Calibration method and system based on three-dimensional vision calibration plate
CN114387347A (en) * 2021-10-26 2022-04-22 浙江智慧视频安防创新中心有限公司 Method and device for determining external parameter calibration, electronic equipment and medium
CN114387347B (en) * 2021-10-26 2023-09-19 浙江视觉智能创新中心有限公司 Method, device, electronic equipment and medium for determining external parameter calibration
CN114782549B (en) * 2022-04-22 2023-11-24 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN114782549A (en) * 2022-04-22 2022-07-22 南京新远见智能科技有限公司 Camera calibration method and system based on fixed point identification
CN117911542A (en) * 2024-03-19 2024-04-19 杭州灵西机器人智能科技有限公司 Calibration plate, calibration plate identification method, system, equipment and medium
CN117911542B (en) * 2024-03-19 2024-06-11 杭州灵西机器人智能科技有限公司 Calibration plate, calibration plate identification method, system, equipment and medium

Also Published As

Publication number Publication date
CN110717942B (en) 2022-06-10
CN110717942A (en) 2020-01-21

Similar Documents

Publication Publication Date Title
WO2020010945A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US11570423B2 (en) System and methods for calibration of an array camera
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
CN108230397B (en) Multi-view camera calibration and correction method and apparatus, device, program and medium
CN107749268B (en) Screen detection method and equipment
US10909719B2 (en) Image processing method and apparatus
CN106815869B (en) Optical center determining method and device of fisheye camera
US9781412B2 (en) Calibration methods for thick lens model
WO2019232793A1 (en) Two-camera calibration method, electronic device and computer-readable storage medium
JP2015203652A (en) Information processing unit and information processing method
US20210120194A1 (en) Temperature measurement processing method and apparatus, and thermal imaging device
WO2019041794A1 (en) Distortion correction method and apparatus for three-dimensional measurement, and terminal device and storage medium
US20130202206A1 (en) Exposure measuring method and apparatus based on composition for automatic image correction
CN114140521A (en) Method, device and system for identifying projection position and storage medium
CN107527323B (en) Calibration method and device for lens distortion
CN116934833A (en) Binocular vision-based underwater structure disease detection method, equipment and medium
CN115631245A (en) Correction method, terminal device and storage medium
CN116012322A (en) Camera dirt detection method, device, equipment and medium
Meißner Determination and improvement of spatial resolution obtained by optical remote sensing systems
JP6255819B2 (en) COMPUTER PROGRAM FOR MEASUREMENT, MEASUREMENT DEVICE AND MEASUREMENT METHOD
CN112866550B (en) Phase difference acquisition method and apparatus, electronic device, and computer-readable storage medium
CN110728714B (en) Image processing method and device, storage medium and electronic equipment
KR102242202B1 (en) Apparatus and Method for camera calibration
CN116485842A (en) Method, device and storage medium for automatic target recognition
CN115546138A (en) Polygonal area calibration method for distorted image and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19834660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19834660

Country of ref document: EP

Kind code of ref document: A1