CN107155017B - Image acquisition method and device

Image acquisition method and device

Info

Publication number
CN107155017B
CN107155017B (application CN201710524753.6A)
Authority
CN
China
Prior art keywords
distortion
image
position information
camera module
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710524753.6A
Other languages
Chinese (zh)
Other versions
CN107155017A (en)
Inventor
王旭
杨飞菲
姜亚龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Granfei Intelligent Technology Co ltd
Original Assignee
Shanghai Zhaoxin Integrated Circuit Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhaoxin Integrated Circuit Co Ltd filed Critical Shanghai Zhaoxin Integrated Circuit Co Ltd
Priority to CN201710524753.6A priority Critical patent/CN107155017B/en
Publication of CN107155017A publication Critical patent/CN107155017A/en
Application granted granted Critical
Publication of CN107155017B publication Critical patent/CN107155017B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix

Abstract

An image acquisition method and an image acquisition apparatus using the same. An embodiment of the invention provides a method, performed by a processing unit, for acquiring an image with a calibrated camera module. The camera module is controlled to acquire a captured image, wherein the captured image contains distortion. Reference position information of a plurality of pixels of the captured image is generated, wherein the reference position information includes the position information of the plurality of pixels after the distortion is removed. The output of the camera module is then corrected using a mapping table, wherein the mapping table comprises a plurality of storage cells recording the reference position information corresponding to the pixels.

Description

Image acquisition method and device
Technical Field
The present invention relates to image processing technologies, and in particular, to an offline camera calibration method and an apparatus using the same.
Background
In image capture, a lens brings advantages such as increased light intake and reduced exposure time, but it also introduces nonlinear image deformation. Nonlinear image distortion generally includes radial distortion and tangential distortion. There is therefore a need for an offline camera calibration method, and an apparatus using the same, to reduce distortion in the captured image.
Disclosure of Invention
An embodiment of the invention provides an image acquisition method, performed by a processing unit, for acquiring an image with a calibrated camera module. The camera module is controlled to acquire a captured image, wherein the captured image contains distortion. Reference position information of a plurality of pixels of the captured image is generated, wherein the reference position information includes the position information of the plurality of pixels after the distortion is removed. The output of the camera module is corrected using a mapping table, wherein the mapping table comprises a plurality of storage cells recording the reference position information corresponding to the pixels.
An embodiment of the present invention provides an image acquisition device, which may be a camera calibration device, comprising at least a camera module and a processing unit. The processing unit, coupled to the camera module, controls the camera module to acquire a captured image containing distortion; generates reference position information of a plurality of pixels of the captured image, the reference position information including the position information of the plurality of pixels after the distortion is removed; stores the reference position information into a mapping table; and corrects the output of the camera module using the mapping table. The mapping table comprises a plurality of storage cells, and the storage cells record the reference position information corresponding to the pixels.
Drawings
FIG. 1 is a block diagram of a computing device according to an embodiment of the present invention.
Fig. 2 is a flowchart of a camera calibration method according to an embodiment of the invention.
FIG. 3 is a schematic diagram of a calibration plate according to an embodiment of the invention.
FIG. 4 is a diagram of a captured image according to an embodiment of the present invention.
Fig. 5 is a schematic view of a corner point according to an embodiment of the present invention.
FIG. 6 is a flowchart of a method for determining parameters according to an embodiment of the present invention.
FIG. 7 is a flowchart of a method for determining maximum likelihood point coordinates according to an embodiment of the invention.
Fig. 8 is a schematic diagram of a corner point after radial distortion is removed according to an embodiment of the present invention.
FIG. 9 is a schematic diagram of a corner point after radial distortion and tangential distortion are removed according to an embodiment of the present invention.
[Description of reference numerals]
110: processing unit;
130: image buffer;
150: volatile memory;
160: non-volatile storage device;
170: camera module controller;
190: camera module;
S210–S290: method steps;
30: calibration plate;
40: captured image;
S611–S650: method steps;
S710–S790: method steps.
Detailed Description
The following description is of the best mode for carrying out the invention and is intended to illustrate the general spirit of the invention, not to limit it. The true scope of the invention is defined by the appended claims.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of further features, integers, steps, operations, elements, components, and/or groups thereof.
The terms "first," "second," "third," and the like in the claims are used to modify claim elements and do not indicate any priority, precedence, or order between elements, or the temporal order in which method steps are performed; they merely distinguish one element from another element having the same name.
FIG. 1 is a block diagram of a computing device according to an embodiment of the present invention. The system architecture can be implemented in desktop computers, notebook computers, tablet computers, mobile phones, digital cameras, digital video recorders, and the like, and includes at least a processing unit 110. The processing unit 110 may be implemented in numerous ways, such as dedicated hardware circuitry or general-purpose hardware (e.g., a single processor, multiple processors with parallel processing capability, a graphics processor, or another processor with computing capability), and provides the functions described hereinafter when executing hardware, firmware, or software instructions. The processing unit 110 may be integrated in an Image Signal Processor (ISP) and may control the camera module 190 through the camera module controller 170 to capture an image. The camera module 190 may include an image sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensor, for sensing an image formed from the intensities of red, green, and blue light, and readout electronics for collecting the sensed data from the image sensor. The volatile memory 150, such as a dynamic random access memory (DRAM), stores data required during execution, such as variables and data tables.
FIG. 2 is a flowchart of a camera calibration method according to an embodiment of the present invention. The method is performed by the processing unit 110 executing the relevant hardware, firmware, or software instructions. FIG. 3 is a schematic diagram of a calibration plate according to an embodiment of the invention. In order to correct the images taken by the camera module 190, the embodiment of the present invention provides a calibration plate 30 patterned like a checkerboard. In some embodiments, the length and/or width of the calibration plate is adjustable. Generally, the calibration plate is preferably large enough to cover the entire field of view of the photographing lens. Before the camera module 190 leaves the factory, the processing unit 110 drives the camera module controller 170 to control the camera module 190 to photograph the calibration plate 30, so that the camera module 190 obtains a captured image and stores it in the image buffer 130 (step S210). FIG. 4 is a diagram of a captured image according to an embodiment of the present invention. The image sensor senses the captured image 40 formed by light passing through the lens of the camera module 190. The captured image 40 is a calibration plate image containing distortion, including radial distortion and tangential distortion. The radial distortion is due to the shape of the lens in the camera module 190: light rays farther from the center of the lens bend more strongly as they pass through the lens, while rays closer to the center bend less. The tangential distortion is caused by assembly error in the camera module 190, mainly because the lens and the image sensor of the camera module 190 are not parallel.
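These two distortion components are commonly described by the Brown-Conrady lens model. The patent does not prescribe any particular formula, so the following Python sketch is purely illustrative: it applies assumed radial coefficients k1, k2 and tangential coefficients p1, p2 to normalized image coordinates.

```python
import numpy as np

def apply_distortion(pts, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates pts of shape (N, 2).
    Illustrative Brown-Conrady model; not the formulation of this patent."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2  # radial gain grows with distance from center
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # tangential terms model lens/sensor tilt
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

# Example: a point far from the center is displaced more than one near it.
pts = np.array([[0.05, 0.05], [0.8, 0.8]])
print(apply_distortion(pts, k1=-0.2, k2=0.05, p1=1e-3, p2=-5e-4))
```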
In order to correct the output of the camera, the processing unit 110 determines the corner points of the captured image 40 and a distortion center (step S230); determines the optimal parameters α̂ and β̂ corresponding to the camera module 190 (step S250); applies the optimal parameters α̂ and β̂ and the distortion center to obtain the maximum likelihood point coordinates P̂ of the corner points in the captured image 40 after the radial distortion and tangential distortion are removed (step S270); obtains, from the maximum likelihood point coordinates P̂ of the corner points, the maximum likelihood point coordinates of the pixels other than the corner points in the captured image 40 (step S280); and generates a mapping table from the above results and stores it in the non-volatile storage device 160 (step S290).
In step S230, a corner point is an extreme point, i.e., a point that is particularly prominent with respect to some property. A corner point may be the intersection of two lines (e.g., the intersection of any two lines in FIG. 3), or a point lying on different things along two adjacent principal directions. When the captured image is obtained by photographing a calibration plate, the corner points are the intersections of two edges (see the examples of FIGs. 4 and 5 for details). The corner points and the distortion center (x_e, y_e) of the captured image 40 can be determined by those skilled in the art with reference to known algorithms, for example, Richard Hartley and Sing Bing Kang, "Parameter-Free Radial Distortion Correction with Center of Distortion Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 8, August 2007, pages 1309-1321. FIG. 5 is a schematic diagram of the corner points according to an embodiment of the present invention.
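As a concrete illustration of the kind of known algorithm step S230 may rely on (the patent does not mandate any specific one), OpenCV's chessboard detector locates the inner corners of a calibration-plate image and refines them to sub-pixel accuracy; the pattern_size used here is an assumed board geometry:

```python
import cv2

def find_calibration_corners(image_path, pattern_size=(9, 6)):
    """Detect the inner corners of a checkerboard calibration image.
    pattern_size is (columns, rows) of inner corners and is an assumption here."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("calibration pattern not found")
    # Refine each corner to sub-pixel accuracy around its initial estimate.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # (N, 2) array of (x, y) corner coordinates
```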
In step S250, the processing unit 110 selects one of multiple sets of parameters and performs a calculation over all corner points in the captured image 40 using the selected set to generate an energy function. The processing unit 110 then uses the remaining sets of parameters to generate further energy functions. When the energy functions of all sets have been calculated, the set of parameters corresponding to the minimum energy function is taken as the optimal parameters, denoted α̂ and β̂. The detailed calculation proceeds as follows. FIG. 6 is a flowchart of a method for determining the parameters according to an embodiment of the present invention. The method includes an outer loop (steps S611 to S615) and an inner loop (steps S631 to S637). In each round of the outer loop, the processing unit 110 selects a set α_j = (c_j, p_j)^T and β_j = (β1_j, β2_j, β3_j)^T, 0 ≤ j ≤ m−1, from the m sets of parameters (step S611), where the m sets of parameters may be preset based on empirical values. The first parameter α is used to model the curved surface associated with the radial distortion, and the second parameter β is used to model the direction of the optical axis. Then the processing unit 110 repeatedly executes the inner loop, using the selected parameters α_j and β_j to calculate the coordinates P' of the n corner points after removal of the radial distortion (steps S631 to S637). The processing unit 110 may sample n corner points in FIG. 5, for example a column and/or a row of FIG. 5. After the inner loop has executed, the processing unit 110 uses the n radial-distortion-free coordinates P' to calculate the energy function corresponding to the parameters α_j and β_j, which indicates the degree to which the radial distortion in the captured image 40 has been removed (step S613). After all sets of the first parameter α and the second parameter β have been processed ("yes" in step S615), the outer loop exits, and the parameters corresponding to the minimum energy function are taken as the optimal parameters, which comprise the first optimal parameter α̂ = (ĉ, p̂)^T and the second optimal parameter β̂ = (β̂1, β̂2, β̂3)^T (step S650).
In each pass of the inner loop, the processing unit 110 selects the first (next) corner point P_i = (x_i, y_i), 0 ≤ i ≤ n−1, in FIG. 5 (step S631); substitutes the selected parameters α_j and β_j into the surface equation to calculate the depth value z_i of the corner point P_i (step S633); and uses the depth value z_i and the distance h between the camera module 190 and the calibration plate 30 to calculate the coordinate P'_i with the radial distortion removed (step S635). When all corner points have been processed ("yes" in step S637), the inner loop exits.
In step S633, the depth value z_i is calculated from the surface equation, subject to its accompanying constraint (both given as images in the original publication and not reproduced here), where z_i represents the depth value of the i-th corner point, x_i its x-coordinate, y_i its y-coordinate, c_j and p_j the components of the j-th first parameter α, and β1_j, β2_j and β3_j the components of the j-th second parameter β.
In step S635, the coordinate P'_i with the radial distortion removed can be calculated as

P'_i = (h / z_i) · (x_i, y_i),

where P'_i represents the coordinates of the i-th corner point after removal of the radial distortion, h represents the distance between the camera module 190 and the calibration plate 30, x_i represents the x-coordinate of the i-th corner point, y_i its y-coordinate, and z_i its depth value.
In step S613, the energy function E_j corresponding to the parameters α_j and β_j can be calculated as

E_j = Σ_k [ (h·x_k/z_k − x̃_k)² + (h·y_k/z_k − ỹ_k)² ],

where the median corner coordinates (x̃_k, ỹ_k) after removal of the radial distortion can be calculated as

x̃_k = (h·x_{k−1}/z_{k−1} + h·x_{k+1}/z_{k+1}) / 2,
ỹ_k = (h·y_{k−1}/z_{k−1} + h·y_{k+1}/z_{k+1}) / 2,

where h represents the distance between the camera module 190 and the calibration plate 30; x_k, y_k and z_k represent the x-coordinate, y-coordinate and depth value of the k-th corner point; x_{k−1}, y_{k−1} and z_{k−1} those of the (k−1)-th corner point; and x_{k+1}, y_{k+1} and z_{k+1} those of the (k+1)-th corner point. The (k−1)-th, k-th and (k+1)-th corner points may be any three collinear and adjacent corner points, such as the four cases shown in Table 1 (rendered as an image in the original and not reproduced here).
When the k-th corner point is an edge corner point, i.e., there is no (k−1)-th and/or (k+1)-th corner point on its side, the median corner coordinates (x̃_k, ỹ_k) are instead given by the boundary-case formulas (rendered as images in the original and not reproduced here).
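Steps S611–S650 can be read as an exhaustive search over the m parameter sets. The sketch below assumes the midpoint-based energy reconstructed above, represents a sampled row or column of collinear corners as an (n, 2) array, and stands in a toy surface for the patent's surface equation (published only as an image); all function names and the toy surface are assumptions:

```python
import numpy as np

def surface_depth(corner, alpha, beta):
    """Toy stand-in for the patent's surface equation, which the original
    publication gives only as an image: a radially curved surface (alpha)
    tilted along the optical-axis direction (beta). NOT the actual formula."""
    x, y = corner
    c, p = alpha
    b1, b2, b3 = beta
    return c + p * (x * x + y * y) + b1 * x + b2 * y + b3

def project(corners, depths, h):
    """Remove radial distortion by depth-correcting each corner:
    P' = (h / z) * (x, y)."""
    return corners * (h / depths)[:, None]

def energy(projected):
    """Sum of squared deviations of each inner corner from the midpoint of
    its two collinear neighbors (smaller = straighter, i.e. less radial
    distortion left)."""
    mid = (projected[:-2] + projected[2:]) / 2.0
    return float(np.sum((projected[1:-1] - mid) ** 2))

def best_parameters(corners, param_sets, h):
    """Outer loop over the m parameter sets; inner loop over the n corners."""
    best, best_e = None, np.inf
    for alpha, beta in param_sets:
        depths = np.array([surface_depth(c, alpha, beta) for c in corners])
        e = energy(project(corners, depths, h))
        if e < best_e:
            best, best_e = (alpha, beta), e
    return best
```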
in step S270, the processing unit 110 executes step S250 to determine the optimal parameters
Figure GDA0001514059140000065
And
Figure GDA0001514059140000066
introducing the surface equation to calculate the coordinates corresponding to the corner points in FIG. 5 after removing the radial distortion, and then applying the principle of equidistant space between adjacent corner points to calculate the coordinates corresponding to the corner points in FIG. 5 after removing the radial distortion and the tangential distortion to calculate the coordinates of the maximum likelihood points corresponding to the corner points in FIG. 5 after removing the radial distortion and the tangential distortion
Figure GDA0001514059140000067
The detailed calculation process is described as follows: FIG. 7 is a flowchart of a method for determining maximum likelihood point coordinates according to an embodiment of the invention. The processing unit 110 obtains optimal parameters including a first optimal parameter
Figure GDA0001514059140000068
And a second optimum parameter
Figure GDA0001514059140000069
Wherein,
Figure GDA00015140591400000610
and
Figure GDA00015140591400000611
(step S710), a loop is repeatedly executed to use the optimal parameters
Figure GDA00015140591400000612
And
Figure GDA00015140591400000613
calculating the best undistorted radial coordinates P 'corresponding to all corner points in FIG. 5'u,v(steps S731 to S737). In each pass of the loop, the processing unit 110 selects the first (next) corner point P in fig. 5u,v=(xu,v,yu,v) U is more than or equal to 0 and less than or equal to U-1, V is more than or equal to 0 and less than or equal to V-1, U represents the total number of rows (rows) of angular points, V represents the total number of columns (columns) of angular points (step S731), Pu,vRepresents the v-th column corner point of the u-th row, xu,vX coordinate value representing the v column corner of the u row, and yu,vAnd a y coordinate value representing the v column corner of the u row. Then, the processing unit 110 will optimize the parameters
Figure GDA00015140591400000614
And
Figure GDA00015140591400000615
angular point P calculated by substituting curved surface equationu,vBest depth value zu,v(step S733), and using the optimal depth value zu,vAnd calculating the optimal coordinate P 'with the radial distortion eliminated by the distance h between the camera module 190 and the calibration plate 30'u,v(step S735). When all corner points have been processed (yes route in step S737), the loop is exited. Fig. 8 is a schematic diagram of a corner point after radial distortion is removed according to an embodiment of the present invention.
In step S733, the best depth value z_{u,v} of the corner point P_{u,v} is calculated from the surface equation, subject to its accompanying constraint (both given as images in the original publication and not reproduced here), where z_{u,v} represents the depth value of the corner point in the u-th row and v-th column, x_{u,v} its x-coordinate, y_{u,v} its y-coordinate, and the optimal parameters are α̂ = (ĉ, p̂)^T and β̂ = (β̂1, β̂2, β̂3)^T.
in step S735, the best coordinates P 'after the distortion of the radial surface is removed'u,vThe following formula can be used for calculation:
Figure GDA0001514059140000076
wherein, P'u,vRepresents the optimal coordinates of the u-th row and v-th column corner points after radial distortion removal, h represents the distance between the camera module 190 and the verification board 30, and xu,vX-coordinate value, y, representing the corner point of the u-th row and v-th columnu,vY-coordinate value representing the v-th column corner of the u-th row, and zu,vRepresents the optimal depth value of the v-th column corner of the u-th row.
When the radial distortion has been removed from all detected corner points (i.e., after the corner points of FIG. 8 are obtained; "yes" in step S737), the processing unit 110 calculates the column means x̄'_v and row means ȳ'_u of the radial-distortion-free corner points (step S750); obtains the index values index_x1 and index_x2 of the two columns closest to the distortion center (x_e, y_e) and the index values index_y1 and index_y2 of the two rows closest to the distortion center (x_e, y_e), where the distortion center (x_e, y_e) may be the one calculated in step S230 (step S760); calculates, from the distortion center (x_e, y_e) and the information of the two adjacent rows and two adjacent columns, the base value x_base and step value x_step of the x-axis and the base value y_base and step value y_step of the y-axis (step S770); and generates the maximum likelihood point coordinates P̂ of all corner points in the captured image 40 after removal of the radial distortion and tangential distortion (step S790). FIG. 9 is a schematic diagram of the corner points after the radial distortion and tangential distortion are removed according to an embodiment of the present invention. Note that the corner-point pattern shown in FIG. 9 is tilted because the optical axis is not perpendicular to the calibration plate; when the optical axis is perpendicular or nearly perpendicular to the calibration plate, the tilt shown in FIG. 9 disappears or becomes negligible.
In step S750, the column means x̄'_v and row means ȳ'_u of the radial-distortion-free corner points can be calculated as

x̄'_v = (1/U) · Σ_{u=0}^{U−1} x'_{u,v},
ȳ'_u = (1/V) · Σ_{v=0}^{V−1} y'_{u,v},

where U represents the total number of rows of corner points, V represents the total number of columns of corner points, x'_{u,v} represents the x-coordinate of the radial-distortion-free corner point in the u-th row and v-th column, and y'_{u,v} represents its y-coordinate.
In step S760, the index values index _ x1, index _ x2, index _ y1, and index _ y2 may be obtained using the following equations:
Figure GDA0001514059140000081
Figure GDA0001514059140000082
Figure GDA0001514059140000083
Figure GDA0001514059140000084
wherein x iseX-coordinate value, y, representing center of distortioneY-coordinate values representing the distortion center, U representing the total number of rows of corner points, V representing the total number of columns of corner points,
Figure GDA0001514059140000085
average value of x-coordinate values representing angular points of the v-th row after distortion removal of radial plane, and
Figure GDA0001514059140000086
representing the average value of the y-coordinate values of the u-th line of corner points after the radial plane distortion is eliminated,
Figure GDA0001514059140000087
represents the average value of the x-coordinate values of the column corner points of the index _ x1 after the radial plane distortion is eliminated,
Figure GDA0001514059140000088
represents the average value of the x-coordinate values of the column corner points of the index _ x2 after the radial plane distortion is eliminated,
Figure GDA0001514059140000089
an average value of the removed radial plane distortion y-coordinate values representing the line corner point of the index _ y1, an
Figure GDA00015140591400000810
And the mean value of the y-coordinate values of the first index _ y2 line corner point after the radial plane distortion is eliminated.
In step S770, the base value x_base and step value x_step of the x-axis and the base value y_base and step value y_step of the y-axis are calculated, where the base value x_base / y_base is the mean coordinate of the column/row of corner points closer to the distortion center, and the step value x_step / y_step is the difference between the radial-distortion-free mean coordinates of the two indexed columns/rows near the distortion center. In one embodiment, they can be calculated as

x_base = x̄'_{index_x1},
x_step = x̄'_{index_x2} − x̄'_{index_x1},
y_base = ȳ'_{index_y1},
y_step = ȳ'_{index_y2} − ȳ'_{index_y1},

where x̄'_{index_x1} represents the mean radial-distortion-free x-coordinate of the index_x1-th column of corner points, x̄'_{index_x2} that of the index_x2-th column, ȳ'_{index_y1} the mean radial-distortion-free y-coordinate of the index_y1-th row of corner points, and ȳ'_{index_y2} that of the index_y2-th row.
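Under the reconstructed formulas above (and with all function and variable names assumed), steps S750–S770 reduce the projected corner grid to two base values and two step values; a minimal sketch:

```python
import numpy as np

def base_and_step(corners_xy, xe, ye):
    """corners_xy: (U, V, 2) radial-distortion-free corner coordinates.
    Returns the base/step values derived from the two columns and two rows
    whose mean coordinates lie nearest the distortion center (xe, ye)."""
    col_mean_x = corners_xy[:, :, 0].mean(axis=0)       # step S750: one mean per column
    row_mean_y = corners_xy[:, :, 1].mean(axis=1)       # step S750: one mean per row
    ix1, ix2 = np.argsort(np.abs(col_mean_x - xe))[:2]  # step S760: two nearest columns
    iy1, iy2 = np.argsort(np.abs(row_mean_y - ye))[:2]  # step S760: two nearest rows
    x_base, x_step = col_mean_x[ix1], col_mean_x[ix2] - col_mean_x[ix1]  # step S770
    y_base, y_step = row_mean_y[iy1], row_mean_y[iy2] - row_mean_y[iy1]
    return (x_base, x_step, y_base, y_step), (ix1, iy1)
```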
In step S790, the maximum likelihood point coordinates P̂ of all corner points in the captured image 40 after removal of the radial distortion and tangential distortion can be calculated as

x̂_{index_x1+r} = x_base + r · x_step,
ŷ_{index_y1+s} = y_base + s · y_step,

where the ranges of r and s (chosen so that the indices cover all corner-point columns and rows) are

−index_x1 ≤ r ≤ V−1−index_x1,
−index_y1 ≤ s ≤ U−1−index_y1,

and where x̂_{index_x1+r} represents the x-coordinate of the (index_x1+r)-th column of corner points after removal of the radial distortion and tangential distortion, ŷ_{index_y1+s} represents the y-coordinate of the (index_y1+s)-th row of corner points after removal of the radial distortion and tangential distortion, U represents the total number of rows of corner points, and V represents the total number of columns of corner points.
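Step S790 thus lays out an ideal, equally spaced grid from the four scalars; each corner's maximum likelihood position depends only on its row and column offset from the reference row/column. A sketch under the same assumptions:

```python
import numpy as np

def max_likelihood_grid(U, V, x_base, x_step, y_base, y_step, ix1, iy1):
    """Step S790: ideal corner grid after removing radial and tangential
    distortion. Column index_x1+r gets x = x_base + r*x_step and row
    index_y1+s gets y = y_base + s*y_step, so adjacent corners end up
    exactly equally spaced."""
    xs = x_base + (np.arange(V) - ix1) * x_step  # one x per column
    ys = y_base + (np.arange(U) - iy1) * y_step  # one y per row
    gx, gy = np.meshgrid(xs, ys)                 # (U, V) coordinate grids
    return np.stack([gx, gy], axis=-1)           # (U, V, 2) corner coordinates
```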
In step S280, given the maximum likelihood point coordinates P̂ of the corner points, those skilled in the art can calculate the maximum likelihood point coordinates of the pixels other than the corner points in FIG. 9 using known algorithms (e.g., interpolation).
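The patent leaves the step S280 algorithm open ("e.g., interpolation"); as one possible reading, bilinear interpolation between the four corners surrounding a pixel yields its reference position. A minimal sketch, with the cell-fraction parameters fu and fv assumed:

```python
import numpy as np

def interpolate_reference(grid, u, v, fu, fv):
    """grid: (U, V, 2) maximum likelihood corner coordinates.
    Returns the reference position of a pixel located a fraction
    (fu, fv) in [0, 1) into grid cell (u, v) - bilinear interpolation."""
    p00, p01 = grid[u, v], grid[u, v + 1]
    p10, p11 = grid[u + 1, v], grid[u + 1, v + 1]
    top = (1 - fv) * p00 + fv * p01      # interpolate along the row direction
    bottom = (1 - fv) * p10 + fv * p11
    return (1 - fu) * top + fu * bottom  # then along the column direction
```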
In step S290, the non-volatile storage device 160 may be a flash memory or another storage device whose contents, including the mapping table, survive power loss. The mapping table may include a plurality of storage cells, the number and positions of which correspond to the number and positions of the image sensors in the image sensor array. For example, when the image sensor array includes m×n image sensors, the mapping table includes m×n storage cells, where m and n are integers greater than 0 and may be equal or different. Each storage cell records the reference position information of one pixel of the captured image. Suppose the storage cell [i, j] records [k, l]. In detail, when the pixel [i, j] is a corner point determined in step S230, [k, l] may contain the maximum likelihood point coordinates, after removal of the radial distortion and tangential distortion, calculated for that corner point in step S270; when the pixel [i, j] is not a corner point determined in step S230, [k, l] may contain the maximum likelihood point coordinates calculated for that pixel in step S280. In some embodiments, the stored information of the storage cell [i, j] may indicate that the reference position of the pixel [i, j] of the captured image is [k, l], where i and k are any integers from 0 to m−1, and j and l are any integers from 0 to n−1. In some embodiments, the stored information of the storage cell [i, j] may indicate that the reference position of the pixel [i, j] of the captured image is [i+k, j+l], where i is any integer from 0 to m−1, k is any integer (positive, zero, or negative) such that i+k lies between 0 and m−1, j is any integer from 0 to n−1, and l is any integer such that j+l lies between 0 and n−1. In some embodiments, to reduce storage space, the mapping table may store only the maximum likelihood point coordinates, after removal of the radial distortion and tangential distortion, of the corner points determined in step S230. In some embodiments, the mapping table may store only the best radial-distortion-free coordinates of the corner points determined in step S735.
After the camera module 190 leaves the factory, the processing unit 110 may take an original image from the camera module 190 and generate an adjusted image according to the reference position information in the mapping table in the non-volatile storage device 160. In one example, the processing unit 110 takes the value of the pixel [k, l] in the original image as the value of the pixel [i, j] in the adjusted image. In another example, the processing unit 110 takes the value of the pixel [i+k, j+l] in the original image as the value of the pixel [i, j] in the adjusted image. In yet another example, the processing unit 110 applies a smoothing algorithm to the value of the pixel [k, l] and the values of its neighboring pixels in the original image and takes the result as the value of the pixel [i, j] in the adjusted image. In yet another example, the processing unit 110 applies a smoothing algorithm to the value of the pixel [i+k, j+l] and the values of its neighboring pixels in the original image and takes the result as the value of the pixel [i, j] in the adjusted image.
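Applying the mapping table after shipment is essentially a gather operation. The sketch below assumes the absolute-position convention (storage cell [i, j] holds [k, l]) and integer-valued cells; the offset convention and the smoothing variant are noted in the trailing comment:

```python
import numpy as np

def adjust_image(original, mapping):
    """original: (m, n) or (m, n, 3) captured image.
    mapping: (m, n, 2) table; mapping[i, j] = [k, l] means the adjusted
    pixel [i, j] takes its value from original pixel [k, l]."""
    k = np.clip(mapping[..., 0].astype(int), 0, original.shape[0] - 1)
    l = np.clip(mapping[..., 1].astype(int), 0, original.shape[1] - 1)
    return original[k, l]  # gather: adjusted[i, j] = original[k, l]

# Offset convention: store [k, l] as displacements instead and gather from
# original[i + k, j + l]; a smoothing algorithm may average the fetched pixel
# with its neighbors before writing it to the adjusted image.
```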
In one aspect of the present invention, the processing unit 110 drives the camera module controller 170 to control the camera module 190 to photograph the calibration plate 30, so that the camera module 190 obtains the captured image 40 containing distortion; uses an algorithm to remove the distortion from the captured image 40 and generate the reference position information corresponding to a plurality of pixels of the captured image 40; and stores the mapping table in the non-volatile storage device 160. In an alternative embodiment, the processing unit 110 may instead generate an adjustment model from the calibration results, where the adjustment model comprises mathematical formulas or algorithms and their parameters for reducing distortion in the original image. It should be noted, however, that when the distortion in the image formed on the image sensor array is difficult to model with such formulas and parameters, the resulting adjustment model cannot effectively remove the distortion. Unlike that approach, the mapping table of the embodiment of the present invention comprises a plurality of storage cells, each recording the reference position information of one pixel, which overcomes this drawback.
In another aspect of the invention, the processing unit 110 performs the correction in two stages: it determines the optimal parameters of the camera module 190 from the information of the corner points, and then uses the optimal parameters and the distortion center to obtain the distortion-free maximum likelihood point coordinates of the plurality of pixels in the captured image 40. Finally, the processing unit 110 stores a mapping table containing the distortion-free maximum likelihood point coordinates in the non-volatile storage device 160. In still another aspect of the invention, the processing unit 110 obtains the distortion-free maximum likelihood point coordinates as follows: it uses the optimal parameters and the distortion center to obtain the distortion-free maximum likelihood point coordinates of the corner points in the captured image, and then derives the maximum likelihood point coordinates of the pixels other than the corner points from those of the corner points. In some alternative calibration methods, the calibration plate must be photographed from different angles to generate a plurality of images, and an adjustment model comprising mathematical formulas or algorithms and their parameters is then generated from the corner-point and distortion-center information of those images. Unlike such methods, the two-stage correction of the embodiment of the present invention requires only a single photograph of the calibration plate 30 to obtain the distortion-free maximum likelihood point coordinates of all pixels in the captured image 40.
Although FIG. 1 includes the above-described elements, the use of additional elements to achieve better technical results is not excluded, without departing from the spirit of the invention. Further, although the process steps of FIGs. 2, 6 and 7 are performed in a specific order, those skilled in the art may modify the order of the steps without departing from the spirit of the invention to achieve the same result; the invention is therefore not limited to the order described.
While the present invention has been described with reference to the above embodiments, it should be noted that the description is not intended to limit the invention. Rather, this invention covers modifications and similar arrangements apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements as is readily apparent.

Claims (19)

1. An image acquisition method, performed by a processing unit, comprising:
providing multiple sets of parameters;
controlling a camera module to obtain a captured image, wherein the captured image contains distortion;
performing a calculation on corner points determined in the captured image based on optimal parameters to generate reference position information of a plurality of pixels of the captured image, wherein the reference position information comprises the position information of the plurality of pixels after the distortion is removed, the optimal parameters are the one of the multiple sets of parameters that minimizes an energy function, and the energy function is calculated based on the corner points determined in the captured image;
storing the reference position information into a mapping table, wherein the mapping table comprises a plurality of storage cells, and the storage cells record the reference position information corresponding to the pixels; and
correcting the output of the camera module using the mapping table,
wherein each of the multiple sets of parameters comprises a first parameter simulating a curved surface and a second parameter simulating a direction of an optical axis.
2. The image acquisition method as claimed in claim 1, wherein the captured image is obtained by the processing unit driving a camera module controller to control the camera module to photograph a calibration plate.
3. The image acquisition method as claimed in claim 1, wherein the distortion comprises radial distortion, and the position information comprises the best coordinates of the plurality of pixels after the radial distortion is removed.
4. The image acquisition method as claimed in claim 1, further comprising:
determining a plurality of corner points of the captured image based on a plurality of pixels of the captured image;
determining the optimal parameters corresponding to the camera module based on the information of the corner points; and
determining the reference position information using the optimal parameters.
5. The image acquisition method as claimed in claim 4, wherein the optimal parameters are determined by an energy function, the energy function being determined by the depth values of the corner points and the distance between the camera module and a calibration plate.
6. The image acquisition method as claimed in claim 1, further comprising:
generating a plurality of energy functions corresponding to the multiple sets of parameters; and
taking the set of parameters corresponding to the minimum energy function as the optimal parameters.
7. The image acquisition method as claimed in claim 1, wherein the distortion comprises tangential distortion, and the position information comprises the maximum likelihood point coordinates of the plurality of pixels after the tangential distortion is removed.
8. The image acquisition method as claimed in claim 7, further comprising:
determining a plurality of corner points of the captured image based on a plurality of pixels of the captured image, wherein the distances between any two adjacent corner points are equal;
determining the best coordinates of the plurality of corner points after the radial distortion is removed; and
determining the maximum likelihood point coordinates of the corner points after the tangential distortion is removed, based on the best coordinates and the distortion center, using the principle that adjacent corner points are equally spaced.
9. The image acquisition method as claimed in claim 8, further comprising:
calculating a plurality of column means and a plurality of row means of the plurality of corner points based on the best coordinates, and determining the index values of the two rows and two columns closest to the distortion center;
determining base values and step values based on the index values; and
generating, using the base values and the step values, the maximum likelihood point coordinates corresponding to the corner points after the tangential distortion is removed.
10. The image acquisition method as claimed in claim 1, further comprising:
determining a plurality of corner points of the captured image based on a plurality of pixels of the captured image;
obtaining a plurality of maximum likelihood point coordinates corresponding to the corner points in the captured image after the distortion is removed; and
obtaining the maximum likelihood point coordinates of the pixels other than the corner points in the captured image from the maximum likelihood point coordinates of the corner points.
11. The image acquisition method as claimed in claim 1,
wherein the mapping table comprises m×n of the storage cells, m and n are integers greater than 0, the storage cell [i, j] records [k, l] to indicate that the reference position of the pixel [i, j] is [k, l], i and k are any integers between 0 and m−1, and j and l are any integers between 0 and n−1.
12. The image acquisition method as claimed in claim 1, wherein the mapping table is generated before the camera module leaves the factory.
13. The image acquisition method as claimed in claim 1, further comprising:
obtaining an original image from the camera module after the camera module leaves the factory; and
generating an adjusted image according to the reference position information of the mapping table.
14. The image acquisition method as claimed in claim 13, wherein the adjusted image is determined offline.
15. The image acquisition method as claimed in claim 1, wherein the mapping table is stored in a non-volatile storage device.
16. An image acquisition apparatus, comprising:
a camera module; and
a processing unit, coupled to the camera module, for providing multiple sets of parameters; controlling the camera module to obtain a captured image, wherein the captured image contains distortion; performing a calculation on corner points determined in the captured image based on optimal parameters to generate reference position information of a plurality of pixels of the captured image, wherein the reference position information comprises the position information of the plurality of pixels after the distortion is removed, the optimal parameters are the one of the multiple sets of parameters that minimizes an energy function, and the energy function is calculated based on the corner points determined in the captured image; storing the reference position information into a mapping table, wherein the mapping table comprises a plurality of storage cells, and the storage cells record the reference position information corresponding to the pixels; and correcting the output of the camera module using the mapping table,
wherein each of the multiple sets of parameters comprises a first parameter simulating a curved surface and a second parameter simulating a direction of an optical axis.
17. The image acquisition apparatus as claimed in claim 16, wherein the processing unit determines a plurality of corner points of the captured image based on a plurality of pixels of the captured image; determines the optimal parameters corresponding to the camera module based on the information of the corner points; and determines the reference position information using the optimal parameters.
18. The image acquisition apparatus as claimed in claim 16, wherein the distortion comprises tangential distortion, and the position information comprises the maximum likelihood point coordinates of the plurality of pixels after the tangential distortion is removed.
19. The image acquisition apparatus as claimed in claim 18, wherein the processing unit determines a plurality of corner points of the captured image based on a plurality of pixels of the captured image, wherein the distances between any two adjacent corner points are equal; determines the best coordinates of the plurality of corner points after the radial distortion is removed; and determines the maximum likelihood point coordinates of the corner points after the tangential distortion is removed, based on the best coordinates and the distortion center, using the principle that adjacent corner points are equally spaced.
CN201710524753.6A 2017-06-30 2017-06-30 Image acquisition method and device Active CN107155017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710524753.6A CN107155017B (en) 2017-06-30 2017-06-30 Image acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710524753.6A CN107155017B (en) 2017-06-30 2017-06-30 Image acquisition method and device

Publications (2)

Publication Number Publication Date
CN107155017A CN107155017A (en) 2017-09-12
CN107155017B true CN107155017B (en) 2021-01-22

Family

ID=59796694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710524753.6A Active CN107155017B (en) 2017-06-30 2017-06-30 Image acquisition method and device

Country Status (1)

Country Link
CN (1) CN107155017B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471055A (en) * 2002-07-02 2004-01-28 ��ʿͨ��ʽ���� Image distortion correction method and apparatus
CN105046657A (en) * 2015-06-23 2015-11-11 浙江大学 Image stretching distortion adaptive correction method
CN106303283A (en) * 2016-08-15 2017-01-04 Tcl集团股份有限公司 A kind of panoramic image synthesis method based on fish-eye camera and system
CN106355621A (en) * 2016-09-23 2017-01-25 邹建成 Method for acquiring depth information on basis of array images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003253980A1 (en) * 2002-07-22 2004-02-09 Spitz, Inc. Foveated display system
EP1927876A1 (en) * 2006-08-10 2008-06-04 MEKRA Lang GmbH & Co. KG Wide-angle objective lens system and camera
DE112013002200T5 (en) * 2012-04-27 2015-01-08 Adobe Systems Incorporated Automatic adjustment of images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471055A (en) * 2002-07-02 2004-01-28 ��ʿͨ��ʽ���� Image distortion correction method and apparatus
CN105046657A (en) * 2015-06-23 2015-11-11 浙江大学 Image stretching distortion adaptive correction method
CN106303283A (en) * 2016-08-15 2017-01-04 Tcl集团股份有限公司 A kind of panoramic image synthesis method based on fish-eye camera and system
CN106355621A (en) * 2016-09-23 2017-01-25 邹建成 Method for acquiring depth information on basis of array images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Yanyan et al., "Correction of checkerboard images with fisheye distortion," Computer Engineering and Applications, Vol. 50, No. 12, Dec. 31, 2014, Sections 1-2 *

Also Published As

Publication number Publication date
CN107155017A (en) 2017-09-12

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211126

Address after: Room 201, No. 2557, Jinke Road, pilot Free Trade Zone, Pudong New Area, Shanghai 201203

Patentee after: Gryfield Intelligent Technology Co.,Ltd.

Address before: Room 301, 2537 Jinke Road, Zhangjiang hi tech park, Shanghai 201203

Patentee before: VIA ALLIANCE SEMICONDUCTOR Co.,Ltd.

TR01 Transfer of patent right
CP03 Change of name, title or address

Address after: 201203, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone

Patentee after: Granfei Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: Room 201, No. 2557, Jinke Road, pilot Free Trade Zone, Pudong New Area, Shanghai 201203

Patentee before: Gryfield Intelligent Technology Co.,Ltd.

Country or region before: China

CP03 Change of name, title or address