CN110400278B - Full-automatic correction method, device and equipment for image color and geometric distortion - Google Patents


Publication number: CN110400278B (application CN201910695090.3A)
Authority: CN (China)
Prior art keywords: image, color, gray, card, image information
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201910695090.3A
Original language: Chinese (zh)
Other versions: CN110400278A
Inventors: 王峰 (Wang Feng), 王宏武 (Wang Hongwu), 潘晴 (Pan Qing), 王晓洒 (Wang Xiaosa), 刘进辉 (Liu Jinhui), 潘观潮 (Pan Guanchao)
Current assignee: Guangdong University of Technology (listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Guangdong University of Technology
Application filed by: Guangdong University of Technology
Priority: CN201910695090.3A
Publications: CN110400278A (application), CN110400278B (grant)
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/80: Geometric correction
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fully automatic method for correcting image color and geometric distortion. The method comprises: acquiring image information containing a target object and an extended two-dimensional code; identifying the positioning marks of the extended two-dimensional code and computing the camera extrinsic parameters from them; applying a three-dimensional rotation and translation transform to the target-object image according to the extrinsic parameters; fitting the edges of the color card and gray card of the extended two-dimensional code to compute the camera intrinsic parameters; correcting the lens geometric distortion of the target-object image using the intrinsic parameters; and identifying the color blocks and gray blocks of the extended two-dimensional code so as to perform color and gray-scale correction, yielding a true image of the target object. The method reduces the probability of misjudgment in image recognition and improves the accuracy of recognition results. The invention also discloses a corresponding fully automatic correction apparatus, system, and storage medium, which have corresponding technical effects.

Description

Full-automatic correction method, device and equipment for image color and geometric distortion
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, and a computer-readable storage medium for fully automatically correcting image color and geometric distortion.
Background
Artificial intelligence is developing rapidly, and computer vision technology is widely applied. When images are captured with an image acquisition device, differences in illumination, camera shooting angles, and lens geometric distortion cause the acquired images to exhibit significant color, gray-scale, and geometric distortion. If these distortions are not corrected, the probability of misjudgment in image recognition increases, producing relatively large errors in the recognition results.
In summary, how to effectively mitigate the large recognition errors caused by color, gray-scale, and geometric distortion of an image is a problem urgently awaiting a solution by those skilled in the art.
Disclosure of Invention
The invention aims to provide a fully automatic correction method for image color and geometric distortion that greatly reduces the probability of misjudgment in image recognition and greatly improves the accuracy of recognition results. Another object of the invention is to provide a corresponding fully automatic correction apparatus, system, and computer-readable storage medium.
In order to solve the technical problems, the invention provides the following technical scheme:
a full-automatic correction method for image color and geometric distortion comprises the following steps:
acquiring image information containing a target object and an extended two-dimensional code; the expanded two-dimensional code comprises a color card formed by a plurality of different color areas, a gray card formed by a plurality of different gray areas and a two-dimensional code containing a positioning mark;
identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
performing mode fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain a corrected image of the lens geometric distortion;
identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray correction on the image after the geometric distortion correction of the lens according to the identification result to obtain a real image of the target object.
In a specific embodiment of the present invention, the positioning mark comprises three position identifiers, each formed of nested black-and-white squares, located at three corners of the extended two-dimensional code, and the identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain the camera external parameters includes:
respectively calculating the coordinates of the central points of three position identifiers in the positioning mark;
and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
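A minimal sketch of this step, under a simplifying assumption: three point correspondences determine an affine transform exactly, so the sketch below solves the affine approximation from the three finder-pattern centers (a full perspective/inverse-perspective matrix needs a fourth correspondence, e.g. the QR alignment pattern). All function names are illustrative, not from the patent.

```python
import numpy as np

def affine_from_three_points(src, dst):
    """Solve the 6-parameter affine transform that maps three source points
    (e.g. the detected centers of the three QR finder patterns) onto their
    known canonical positions. Exactly determined by 3 correspondences."""
    src = np.asarray(src, dtype=float)   # shape (3, 2)
    dst = np.asarray(dst, dtype=float)   # shape (3, 2)
    # Linear system A @ params = b for params = [a11 a12 tx a21 a22 ty].
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (x, y) in enumerate(src):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = dst[i]
    p = np.linalg.solve(A, b)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]],
                     [0.0, 0.0, 1.0]])

def apply_affine(M, pts):
    """Apply a 3x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (homog @ M.T)[:, :2]
```

Inverting the recovered matrix (or swapping `src` and `dst`) then performs the de-skewing correction of the captured image.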
In a specific embodiment of the present invention, performing pattern fitting on the edge of the color card and the edge of the grayscale card of the extended two-dimensional code in the image information and calculating the camera internal parameters by using a lens distortion model includes:
and performing mode fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by combining a lens distortion model and a steepest descent algorithm.
In a specific embodiment of the present invention, identifying color blocks of the color card and gray blocks of the gray card in the image information includes:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by utilizing a polynomial regression method.
In a specific embodiment of the present invention, identifying color blocks of the color card and gray blocks of the gray card in the image information includes:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by using a BP neural network method.
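The BP (back-propagation) neural network variant can be sketched as a tiny one-hidden-layer regressor trained to map observed colors toward reference colors. Layer sizes, activation, and learning rate below are arbitrary illustrative choices, not values from the patent.

```python
import numpy as np

class TinyBPNet:
    """Minimal one-hidden-layer back-propagation network for regressing
    reference colors from observed colors. Illustrative sketch only."""

    def __init__(self, n_in=3, n_hidden=16, n_out=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2

    def train_step(self, X, Y, lr=0.05):
        """One gradient-descent step on the mean squared error."""
        pred = self.forward(X)
        err = pred - Y
        grad_W2 = self.h.T @ err / len(X)
        grad_b2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1 - self.h ** 2)   # tanh derivative
        grad_W1 = X.T @ dh / len(X)
        grad_b1 = dh.mean(axis=0)
        self.W2 -= lr * grad_W2
        self.b2 -= lr * grad_b2
        self.W1 -= lr * grad_W1
        self.b1 -= lr * grad_b1
        return float(np.mean(err ** 2))
```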
In a specific embodiment of the present invention, identifying color blocks of the color card and gray blocks of the gray card in the image information includes:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by using a support vector machine algorithm.
A full automatic correction device for image color and geometric distortion, comprising:
the image information acquisition module is used for acquiring image information containing a target object and the extended two-dimensional code; the expanded two-dimensional code comprises a color card formed by a plurality of different color areas, a gray card formed by a plurality of different gray areas and a two-dimensional code containing a positioning mark;
the oblique distortion correction module is used for identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
the lens geometric distortion correction module is used for performing mode fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain a corrected image of the lens geometric distortion;
the real image obtaining module is used for identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image after the lens geometric distortion correction according to the identification result to obtain a real image of the target object, so as to diagnose a patient corresponding to the target object according to the real image.
A full-automatic correction system for image color and geometric distortion comprises an extended two-dimensional code, image acquisition equipment, an image preprocessing terminal and a server, wherein the extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark; wherein:
the image acquisition equipment is used for acquiring the image information of the extended two-dimensional code and the target object; sending the image information to the image preprocessing terminal;
the image preprocessing terminal is used for identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction; sending the image subjected to the rotation and translation correction to the server;
the server is used for performing mode fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain a corrected image of the lens geometric distortion; identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image after the lens geometric distortion correction according to the identification result to obtain a real image of the target object, and diagnosing a patient corresponding to the target object according to the real image.
In one embodiment of the invention, the positioning mark comprises three position identifiers, each formed of nested black-and-white squares, respectively positioned at three corners of the extended two-dimensional code, and
the image preprocessing terminal is specifically configured to calculate coordinates of center points of the three position identifiers in the positioning mark respectively; and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for fully automatically correcting image color and geometric distortion as set forth above.
The invention provides a full-automatic correction method for image color and geometric distortion, which comprises the following steps: acquiring image information containing a target object and an extended two-dimensional code; the expanded two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark; identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain external parameters of the camera; performing three-dimensional rotation and translation transformation on a target object image corresponding to a target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction; performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after the lens geometric distortion correction; identifying color blocks of a color card and gray blocks of a gray card in the image information to obtain an identification result; and performing color correction and gray correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object.
According to the technical scheme, the image information comprising the extended two-dimensional code and the target object is obtained by presetting a color card comprising a plurality of different color blocks, a gray card comprising a plurality of different gray blocks and the extended two-dimensional code comprising the two-dimensional code of the positioning mark, and the extended two-dimensional code is utilized to obtain the external reference of the camera so as to finish the rotation and translation correction of the image of the target object; calculating camera parameters by using a lens distortion model to finish lens geometric distortion correction of the target object image; and the color correction and the gray correction of the target object image are completed by identifying the color blocks of the color card and the gray blocks of the gray card in the image information. And finally, a real image is obtained, so that image recognition can be performed based on the real image, the misjudgment probability of the image recognition is greatly reduced, and the accuracy of the image recognition result is greatly improved.
Accordingly, embodiments of the present invention further provide a full-automatic correction apparatus, a system, and a computer-readable storage medium for image color and geometric distortion corresponding to the full-automatic correction method for image color and geometric distortion, which have the above technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an embodiment of a method for fully automatically correcting color and geometric distortion of an image according to the present invention;
fig. 2 is a schematic structural diagram of an extended two-dimensional code according to an embodiment of the present invention;
fig. 3 is a structural diagram of an image including a target object and an extended two-dimensional code obtained in an embodiment of the present invention;
FIG. 4 is a flow chart of another embodiment of a method for fully automatically correcting color and geometric distortion of an image according to the present invention;
FIG. 5 is a flow chart of another embodiment of a method for fully automatically correcting color and geometric distortion of an image according to the present invention;
FIG. 6 is a block diagram of a BP neural network according to an embodiment of the present invention;
FIG. 7 is a flow chart of another embodiment of a method for fully automatically correcting color and geometric distortion of an image according to the present invention;
FIG. 8 is a block diagram of an apparatus for automatically correcting color and geometric distortion of an image according to an embodiment of the present invention;
FIG. 9 is a block diagram of a fully automatic system for correcting color and geometric distortion of an image according to an embodiment of the present invention;
FIG. 10 is a diagram of a fully automatic system for correcting color and geometric distortion of an image according to an embodiment of the present invention.
The drawings are numbered as follows:
11-image, 21-camera, 22-camera, 23-user mobile terminal, 31-client, 4-server.
Detailed Description
In order that those skilled in the art may better understand the disclosure, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments herein without creative effort fall within the protection scope of the present invention.
Embodiment one:
referring to fig. 1, fig. 1 is a flowchart of an implementation of a method for fully automatically correcting color and geometric distortion of an image according to an embodiment of the present invention, where the method may include the following steps:
s101: and acquiring image information containing the target object and the extended two-dimensional code.
The expanded two-dimensional code comprises a color card formed by a plurality of different color areas, a gray card formed by a plurality of different gray areas and a two-dimensional code containing a positioning mark.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an extended two-dimensional code in the embodiment of the present invention. An extended two-dimensional code may be prepared in advance, comprising a color card formed of a plurality of different color areas, a gray card formed of a plurality of different gray areas, and a two-dimensional code containing positioning marks.

The two-dimensional code is a QR (Quick Response) code, which can be read from any orientation (360°). It carries three positioning marks, each formed of large nested black-and-white squares, placed at the upper-left, upper-right, and lower-left corners of the code to determine its size and position. An alignment mark formed of smaller nested black-and-white squares and timing marks formed of two alternating black-and-white lines make it convenient to determine the position and rotation angle of the code.

A color card and a gray card are added below the two-dimensional code. The color card consists of rectangles of different colors. The gray card is a custom multi-level gray card consisting of rectangles of different gray levels, arranged in the order 1st level, nth level, 2nd level, and so on; a gray card arranged this way supports better cluster computation later, separating the individual gray blocks. Black boundary lines divide the different colors and gray levels, so that computer vision can easily extract the colors, the gray levels, and the grid that separates them. The colors and gray levels are used for the system's color and gray-scale correction, and the grid is used for correcting the lens geometric distortion.
Unlike a continuous gray gradient, the gray card divides the gray range into a number of discrete levels, stepping gradually from white to black.
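The arrangement described above (1st level, then nth, then 2nd, ...) can be generated as follows. The exact interleaving is inferred from the description and is an assumption, not a layout the patent spells out in full.

```python
def interleaved_gray_levels(n):
    """Order gray levels 1..n as 1, n, 2, n-1, 3, ... so that adjacent
    blocks differ strongly, which eases later clustering/segmentation.
    The interleaving pattern is inferred from the patent's description."""
    lo, hi = 1, n
    order = []
    while lo <= hi:
        order.append(lo)
        if lo != hi:
            order.append(hi)
        lo += 1
        hi -= 1
    return order
```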
When a photo or a video of a target object needs to be acquired, an extended two-dimensional code provided by the embodiment of the invention can be placed beside the target object, keeping a suitable distance from it, so as to acquire image information containing both the target object and the extended two-dimensional code. Referring to fig. 3, fig. 3 is a structural diagram of an image containing a target object and an extended two-dimensional code obtained in an embodiment of the present invention. In fig. 3, image information of a face is acquired with an extended two-dimensional code placed beside the face, yielding image information containing both.
The color and gray cards may be, but are not limited to, Macbeth color cards.
S102: identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain external parameters of the camera; and carrying out three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction.
After the image information containing the target object and the extended two-dimensional code is obtained, the positioning mark of the extended two-dimensional code in the image information can be identified and calculated to obtain the Euler angle and the translation vector, and further obtain the camera external parameter. And carrying out three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction.
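The rotation-and-translation correction above can be sketched as follows, assuming a Z-Y-X Euler-angle convention (the patent does not fix one); applying the inverse of the estimated extrinsics undoes the camera's pose.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Compose a rotation matrix from Z-Y-X Euler angles (radians).
    The Z-Y-X order is one common convention, assumed here."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def apply_extrinsics(points, R, t):
    """Transform world points into camera coordinates."""
    return points @ R.T + t

def invert_extrinsics(points_cam, R, t):
    """Undo the rotation and translation (the correction step)."""
    return (points_cam - t) @ R
```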
S103: performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by using a lens distortion model; and correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain the image after the lens geometric distortion correction.
Pattern fitting is performed on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information; for example, the card edges can be identified with the RANSAC method. The camera internal parameters are then calculated using a lens distortion model, and the lens geometric distortion of the rotation-and-translation-corrected image is corrected with these internal parameters, yielding the image after lens geometric distortion correction. For a picture with lens distortion, the distorted picture can be restored to an undistorted one using only the calibrated distortion coefficients and the intrinsic parameter matrix. The process of calculating the camera internal parameters using the lens distortion model is as follows:
first, considering a lens distortion-free model, it is expressed by the following formula:
$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

where $(X, Y, Z)$ is a point in the world coordinate system and $(u, v)$ are the pixel coordinates of the corresponding point on the image plane; $f_x$ and $f_y$ are the calibrated horizontal and vertical focal lengths of the camera; $(u_0, v_0)$ is the calibrated image center point; $R$ and $T$ are the rotation matrix and translation vector of the camera extrinsic parameters; and $z_c$ is the coordinate of the object point $(X, Y, Z)$ along the $Z$ axis of the camera coordinate system.
Let P be:
$$ P = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} $$
the lens distortion-free model becomes:
$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$
$P$ is a matrix of 3 rows and 4 columns, which can be written element-wise as:

$$ P = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix} $$
Thus a point $(X, Y, Z)$ in the world coordinate system is linked by $P$ to a point $(u, v)$ in the pixel-plane coordinate system. The $P$ matrix can be solved from the world coordinates of 6 known points and the pixel points corresponding to them. In typical calibration work, far more than 6 known points are available on the target, so the number of equations greatly exceeds the number of unknowns, and the least-squares method can be used to reduce the influence of errors.
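The least-squares solution just described is the classical direct linear transform (DLT). A sketch under the assumption of noise-free, non-coplanar correspondences (and $p_{34} \neq 0$, so the scale can be fixed on that entry):

```python
import numpy as np

def solve_projection_dlt(world_pts, pixel_pts):
    """Direct linear transform: recover the 3x4 projection matrix P from
    n >= 6 world<->pixel correspondences, as the smallest-singular-vector
    (least-squares) solution of the homogeneous system."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)      # null-space vector, defined up to scale
    return P / P[2, 3]            # fix the arbitrary scale (assumes p34 != 0)

def project(P, world_pts):
    """Project world points through P and dehomogenize to pixels."""
    W = np.hstack([np.asarray(world_pts, dtype=float),
                   np.ones((len(world_pts), 1))])
    x = W @ P.T
    return x[:, :2] / x[:, 2:3]
```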
The target generally refers to a lens calibration plate or a calibration reference object, and in the embodiment of the invention, the target generally refers to an extended two-dimensional code.
After the $P$ matrix is solved, the intrinsic and extrinsic parameter matrices can be recovered from it. Suppose $n$ pairs of pixel points and corresponding world-coordinate points are given:

$$ m_i = (u_i, v_i), \quad M_i = (X_i, Y_i, Z_i), \quad i = 1, \dots, n. $$
The lens distortion-free model is as follows:
Figure GDA0003198235030000085
wherein phi (P, M)i) Is object space point Mi=(Xi,Yi,Zi) Image points projected by the camera.
The above considered the undistorted lens model; now consider the lens imaging model with distortion added. The transformation from world coordinates to camera coordinates is:

$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T $$
where $R$ and $T$ are the rotation matrix and translation vector of the camera extrinsic parameters, and $(x, y, z)$ are the coordinates of the three-dimensional world point transformed into the camera coordinate system. The radial and tangential distortions are expressed as follows:
$$ x' = \frac{x}{z}, \qquad y' = \frac{y}{z} $$

$$ \delta_{x1} = x'\,(1 + k_1 r^2 + k_2 r^4 + \cdots) $$

$$ \delta_{y1} = y'\,(1 + k_1 r^2 + k_2 r^4 + \cdots) $$

$$ \delta_{x2} = \left[ 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \right] (1 + p_3 r^2 + \cdots) $$

$$ \delta_{y2} = \left[ p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \right] (1 + p_3 r^2 + \cdots) $$

where $(\tilde{x}, \tilde{y})$ denotes the coordinates in the imaging coordinate system after the distortion is added; $\delta_x$ and $\delta_y$ are the distortion amounts in the $x$ and $y$ directions, each usually comprising two parts, a radial part and a tangential part; $r^2 = x'^2 + y'^2$; $k_1$ and $k_2$ are the radial distortion coefficients to be calibrated, with higher-order coefficients generally ignored; and $p_1$ and $p_2$ are the tangential distortion coefficients to be calibrated, with higher-order coefficients likewise ignored.
The total distortion is expressed as:

$$ \delta_x = \delta_{x1} + \delta_{x2}, \qquad \delta_y = \delta_{y1} + \delta_{y2}. $$
The coordinates of the image point

$$ \tilde{m} = (\tilde{u}, \tilde{v}) $$

can then be expressed as:

$$ \begin{bmatrix} \tilde{u} \\ \tilde{v} \\ 1 \end{bmatrix} = A \begin{bmatrix} \delta_x \\ \delta_y \\ 1 \end{bmatrix} $$
setting:

$$ A = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$
Again assume $n$ pairs of pixel points and corresponding world-coordinate points are given:

$$ m_i = (u_i, v_i), \quad M_i = (X_i, Y_i, Z_i), \quad i = 1, \dots, n. $$
The lens imaging model with distortion is then estimated by minimizing:

$$ \min_{R, T, A, k_1, k_2, p_1, p_2} \sum_{i=1}^{n} \left\| m_i - \phi(R, T, A, k_1, k_2, p_1, p_2, M_i) \right\|^2 $$

where $\phi(R, T, A, k_1, k_2, p_1, p_2, M_i)$ is the image point obtained by projecting the object-space point $M_i = (X_i, Y_i, Z_i)$ through the camera.
The above is a nonlinear optimization problem; a nonlinear optimization method can be used to find a solution meeting the requirements, from which the intrinsic coefficients of the formula above are extracted to obtain the camera internal parameters.
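The forward model $\phi$ with distortion can be coded directly; a nonlinear optimizer would then minimize the reprojection cost below over $(R, T, A, k_1, k_2, p_1, p_2)$. This sketch implements only the projection and the objective, with higher-order distortion terms dropped.

```python
import numpy as np

def project_with_distortion(R, T, A, k1, k2, p1, p2, M):
    """Forward lens model phi(...): world points M (n, 3) -> distorted
    pixel coordinates, using the radial + tangential model from the text
    with higher-order terms dropped."""
    Xc = M @ R.T + T                                    # world -> camera
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]     # normalized coords
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = A[0, 0] * xd + A[0, 2]
    v = A[1, 1] * yd + A[1, 2]
    return np.stack([u, v], axis=1)

def reprojection_cost(observed_pixels, predicted_pixels):
    """Calibration objective: sum of squared pixel residuals."""
    return float(np.sum((observed_pixels - predicted_pixels) ** 2))
```

With all distortion coefficients set to zero the model reduces to the pinhole projection, which gives a quick sanity check before running any optimizer.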
S104: identifying color blocks of a color card and gray blocks of a gray card in the image information to obtain an identification result; and performing color correction and gray correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object.
For example, a polynomial regression method, a BP neural network method, or a support vector machine algorithm can be used to identify the color card and the gray card of the extended two-dimensional code in the image information.
According to the technical scheme, a color card composed of a plurality of different color blocks, a gray card composed of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark are preset as the extended two-dimensional code, and image information containing the extended two-dimensional code and the target object is acquired. The extended two-dimensional code is used to obtain the camera external parameters so as to complete the rotation and translation correction of the target object image; the camera internal parameters are calculated with a lens distortion model to complete the lens geometric distortion correction of the target object image; and the color correction and gray correction of the target object image are completed by identifying the color blocks of the color card and the gray blocks of the gray card in the image information. A real image is finally obtained, so that image recognition can be performed on the real image, which greatly reduces the misjudgment probability of image recognition and greatly improves the accuracy of the image recognition result.
It should be noted that, based on the first embodiment, the embodiment of the present invention further provides a corresponding improvement scheme. In the following embodiments, steps that are the same as or correspond to those in the first embodiment may be referred to each other, and corresponding advantageous effects may also be referred to each other, which are not described in detail in the following modified embodiments.
Example two:
referring to fig. 4, fig. 4 is a flowchart of another implementation of a method for fully automatically correcting color and geometric distortion of an image according to an embodiment of the present invention, where the method may include the following steps:
S401: Acquiring image information containing the target object and the extended two-dimensional code.
The extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark.
S402: the coordinates of the center point of the three position identifiers in the positioning mark are respectively calculated.
Firstly, the captured image information is loaded and converted to grayscale, and its configuration is verified. The image is then scanned line by line along a Z-shaped path, advancing one pixel at a time within each line; filtering is performed, edge gradients are obtained, the gradient threshold is adapted, edges are determined, and the image information is converted into a light-and-dark width stream. After an edge is determined, the width is obtained by subtracting the previously saved edge position from the current one, and the current edge information is saved. The saved light-and-dark width stream is then processed: each width run is described as a custom line-segment structure containing information such as its two end points and its length, and the transverse segment structures that meet the conditions are stored in a transverse segment set of a container. The whole image is also scanned column by column along an N-shaped path in the same manner; edges are found, a longitudinal light-and-dark height stream is obtained, and the longitudinal segments that conform to the two-dimensional code are stored in a longitudinal segment set of the container. On this basis, during two-dimensional code analysis the coordinates of the center points of the three position identifiers in the positioning mark are calculated: the previously obtained transverse and longitudinal segment sets are screened with the 1:1:3:1:1 width ratio of the finder pattern, the segments are clustered, the transverse-longitudinal intersection points are solved, and the center-point coordinates of the three position identifiers are obtained.
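The 1:1:3:1:1 screening of width-stream segments can be sketched as a run-length ratio check (the function name and tolerance below are assumptions of this illustration):

```python
# Hedged sketch of the 1:1:3:1:1 screening step: given five consecutive
# run-lengths from a light/dark width stream, check whether they match the
# finder-pattern ratio of a QR position identifier within a tolerance.
def matches_finder_ratio(runs, tol=0.5):
    if len(runs) != 5 or any(r <= 0 for r in runs):
        return False
    unit = sum(runs) / 7.0          # the total width spans 7 modules
    expected = [1, 1, 3, 1, 1]
    return all(abs(r / unit - e) < tol for r, e in zip(runs, expected))

if __name__ == "__main__":
    print(matches_finder_ratio([4, 4, 12, 4, 4]))   # scaled 1:1:3:1:1 -> True
    print(matches_finder_ratio([4, 4, 4, 4, 4]))    # uniform runs -> False
```

Runs that pass the check at intersecting transverse and longitudinal positions mark candidate centers of the position identifiers.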
The position identifiers may be three position identifiers located at three corners of the extended two-dimensional code, where the three position identifiers are black and white, for example, the three position identifiers may be located at an upper left corner, a lower right corner, and a lower left corner of the extended two-dimensional code, respectively.
It should be noted that, in the embodiment of the present invention, the shapes of the three position identifiers are not limited, and for example, the three position identifiers may be set to be square or round.
S403: and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
After the coordinates of the center points of the three position identifiers in the positioning mark are calculated, the inverse perspective transformation matrix of the image information can be calculated by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information, so as to obtain the external parameters of the camera.
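As a minimal sketch of using the identifiers' geometric relationship (assuming, hypothetically, top-left and top-right centers that are horizontally aligned in the reference code), the in-plane rotation and translation can be read off directly; the full inverse perspective transform of the patent would additionally recover the perspective components:

```python
import math

# Hedged sketch: the patent computes a full inverse perspective transform; this
# minimal version recovers only the in-plane rotation and translation from the
# detected centers of two position identifiers, assuming those two centers are
# horizontally aligned in the reference (undistorted) code.
def estimate_rotation_translation(tl, tr):
    dx, dy = tr[0] - tl[0], tr[1] - tl[1]
    angle = math.atan2(dy, dx)      # rotation of the code in the image plane
    return angle, tl                # translation = detected top-left center

if __name__ == "__main__":
    angle, t = estimate_rotation_translation((10.0, 20.0), (10.0, 60.0))
    print(round(math.degrees(angle), 1), t)   # 90.0 (10.0, 20.0)
```

The recovered angle and offset correspond to the rotation and translation components of the camera external parameters used in step S404.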
S404: and carrying out three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction.
S405: performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by combining a lens distortion model and a steepest descent algorithm; and correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain the image after the lens geometric distortion correction.
After the lens distortion model is obtained, the camera internal parameters can be extracted from the model using a steepest descent algorithm. The gradient descent method is simple to implement, and when the objective function is convex its solution is the global solution. Its optimization idea is to use the negative gradient direction at the current position as the search direction; since this is the direction of fastest descent at the current position, the method is also called the "steepest descent method". However, the closer the method gets to the target value, the smaller the steps become and the slower the progress is; in general the solution is not guaranteed to be globally optimal, and gradient descent is not necessarily the fastest method. For this reason, the embodiment of the present invention refines the search with the Levenberg-Marquardt method, which provides a numerical solution for nonlinear (local) minimization. The algorithm adapts a damping parameter during execution so as to combine the advantages of the Gauss-Newton algorithm and the gradient descent method while mitigating the disadvantages of both.
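The damping behavior described above can be illustrated on a one-parameter radial model; everything below (function name, data, step rule) is a simplified sketch rather than the patent's implementation:

```python
# Hedged one-parameter illustration of Levenberg-Marquardt damping: fit the
# radial coefficient k in the model y = x*(1 + k*x^2) from exact synthetic
# data. (The patent applies the same idea to the full multi-parameter
# internal-parameter/distortion model.)
def lm_fit_k(xs, ys, k=0.0, lam=1e-3, iters=20):
    def cost(k):
        return sum((y - x * (1 + k * x * x)) ** 2 for x, y in zip(xs, ys))
    c = cost(k)
    for _ in range(iters):
        # Residuals r_i = y_i - x_i(1 + k x_i^2), Jacobian dr_i/dk = -x_i^3.
        jtj = sum(x ** 6 for x in xs)
        jtr = sum(-x ** 3 * (y - x * (1 + k * x * x)) for x, y in zip(xs, ys))
        step = -jtr / (jtj + lam)          # damped Gauss-Newton step
        c_new = cost(k + step)
        if c_new < c:                      # accept: behave like Gauss-Newton
            k, c, lam = k + step, c_new, lam / 10
        else:                              # reject: behave like gradient descent
            lam *= 10
    return k

if __name__ == "__main__":
    xs = [0.1 * i for i in range(1, 11)]
    ys = [x * (1 - 0.1 * x * x) for x in xs]
    print(round(lm_fit_k(xs, ys), 8))
```

Small damping makes the step approach a Gauss-Newton step; large damping shrinks it toward a short gradient-descent step, which is exactly the blend described in the paragraph above.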
S406: identifying color blocks of a color card and gray blocks of a gray card in the image information by using a polynomial regression method to obtain an identification result; and performing color correction and gray correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object.
The color blocks of the color card and the gray blocks of the gray card in the image information can be identified by a polynomial regression method. Let the color card have N color blocks, let the color tristimulus values of the i-th color block in the standard space be R_oi, G_oi, B_oi, and let the tristimulus values of the i-th color block acquired from the color card to be corrected under the natural illumination environment be R_i, G_i, B_i, where i = 1, 2, 3, …, N. Then:
R_oi = a11·v_1i + a12·v_2i + … + a1J·v_Ji
G_oi = a21·v_1i + a22·v_2i + … + a2J·v_Ji
B_oi = a31·v_1i + a32·v_2i + … + a3J·v_Ji
where v_ji (j = 1, …, J) is constructed from R_i, G_i, B_i. There are various polynomial forms, for example V = [R, G, B, 1], V = [R, G, B, RG, RB, GB, 1], V = [R, G, B, RGB, 1], etc.; the terms of V may be combined into different forms as desired.
The matrix form of the above formula is:
X = Aᵀ · V;
where X is the R, G, B tristimulus matrix of the corrected image with dimensions 3 × M; V is the matrix formed by the polynomial terms corresponding to the R, G, B values of all pixels of the original image, with dimensions J × M; and M is the total number of pixels of the original image. A is the conversion coefficient matrix with dimensions J × 3, which can be obtained by least-squares optimization and is the model parameter to be solved. By substituting A into the matrix equation, the R, G, B values of each pixel of the corrected image can be calculated, realizing online color correction. By designing a reasonable polynomial form, the polynomial regression algorithm identifies the color card and the gray card of the extended two-dimensional code in the image information and obtains a good identification result.
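A hedged sketch of this regression with the simplest basis V = [R, G, B, 1] (the function names and the synthetic color cast below are assumptions of this illustration):

```python
# Hedged sketch of the polynomial-regression correction with basis
# V = [R, G, B, 1]. The coefficient matrix is fitted by least squares
# (normal equations solved with Gaussian elimination); applying it maps
# captured colors back toward their standard-space values.
def solve(M, b):
    # Gaussian elimination with partial pivoting for a small square system.
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_color_matrix(captured, standard):
    # captured/standard: lists of (R, G, B) triples. Basis v = [R, G, B, 1].
    vs = [[r, g, b, 1.0] for r, g, b in captured]
    vvt = [[sum(v[i] * v[j] for v in vs) for j in range(4)] for i in range(4)]
    # One least-squares solve per output channel (R_o, G_o, B_o).
    return [solve(vvt, [sum(v[i] * s[ch] for v, s in zip(vs, standard))
                        for i in range(4)]) for ch in range(3)]

def correct(pixel, A):
    v = [pixel[0], pixel[1], pixel[2], 1.0]
    return tuple(sum(a[i] * v[i] for i in range(4)) for a in A)

if __name__ == "__main__":
    # Synthetic cast: captured = 0.8 * standard + 10 per channel.
    standard = [(20, 40, 60), (200, 30, 90), (120, 220, 10),
                (5, 5, 5), (250, 250, 250), (80, 160, 240)]
    captured = [tuple(0.8 * c + 10 for c in s) for s in standard]
    A = fit_color_matrix(captured, standard)
    print(tuple(round(c, 4) for c in correct((90.0, 90.0, 90.0), A)))
```

With the exact affine cast above, the fit recovers the inverse map, so a captured gray value of 90 is corrected back to 100 on every channel.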
Example three:
referring to fig. 5, fig. 5 is a flowchart of another implementation of a method for fully automatically correcting color and geometric distortion of an image according to an embodiment of the present invention, where the method may include the following steps:
S501: Acquiring image information containing the target object and the extended two-dimensional code.
The extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark.
S502: the coordinates of the center point of the three position identifiers in the positioning mark are respectively calculated.
S503: and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
S504: and carrying out three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction.
S505: performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by combining a lens distortion model and a steepest descent algorithm; and correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain the image after the lens geometric distortion correction.
S506: identifying color blocks of the color card and gray blocks of the gray card in the image information by using a BP neural network method to obtain an identification result; and performing color correction and gray correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object.
The color blocks of the color card and the gray blocks of the gray card in the image information can be identified by a back-propagation (BP) neural network method. Specifically, the processing end may locate the standard color blocks in the picture by template matching with the normalized squared-difference method. Template matching is a technique for finding the part of one image that best matches another, template, image: the template image block is slid over the input image and compared at each position. Let T(·) denote the template image, I(·) the image to be matched, and R(·) the matching result; pixels of the image to be matched are denoted (x, y) and pixels of the template image (x′, y′). The matching method is given by:
R(x, y) = Σ_{x′,y′} (T(x′, y′) − I(x + x′, y + y′))² / sqrt( Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)² )
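A hedged pure-Python sketch of squared-difference matching on tiny grayscale arrays (the data and function name are illustrative, not the patent's implementation):

```python
# Hedged sketch of normalized squared-difference template matching on small
# grayscale images stored as lists of lists; returns the (x, y) offset with
# the lowest (best) normalized score.
def match_template(image, template):
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best = None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            num = den_i = den_t = 0.0
            for ty in range(th):
                for tx in range(tw):
                    t, p = template[ty][tx], image[y + ty][x + tx]
                    num += (t - p) ** 2
                    den_t += t * t
                    den_i += p * p
            score = num / ((den_t * den_i) ** 0.5 or 1.0)
            if best is None or score < best[0]:
                best = (score, (x, y))
    return best[1]

if __name__ == "__main__":
    img = [[0, 0, 0, 0],
           [0, 9, 8, 0],
           [0, 7, 9, 0],
           [0, 0, 0, 0]]
    tpl = [[9, 8],
           [7, 9]]
    print(match_template(img, tpl))   # exact match at offset (1, 1)
```

An exact match drives the numerator to zero, so the normalized score is minimal at the true offset; the matched region can then be cropped and stored as described below.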
The matched color card regions in the picture are then cropped and stored. The color card carries a plurality of color blocks; the color values within each block are clustered, and the cluster-center color value represents the acquired color value Q1 of that block, while the corresponding color value in the standard color card serves as the output reference value Q2 of the BP neural network. For training the BP neural network, Q1 is the actual color input value and Q2 is the desired color output value.
The structure of the designed BP neural network is shown in FIG. 6: the input layer and the output layer each have 3 neurons, and there is one hidden layer with 7 neurons. The activation function of the BP neural network is a nonlinear function, and the data are normalized before being input into the network. The BP neural network is trained with the data of the plurality of color blocks to obtain the color correction model coefficients. The tristimulus values R, G, B of the entire image are then input into the obtained color correction model, color-corrected, and stored.
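The 3-7-3 network described above can be sketched as follows (sigmoid activations, no bias terms, and a single normalized training pair are assumptions of this illustration; a real system trains on many color-block pairs):

```python
import math
import random

# Hedged sketch of the 3-7-3 BP network: sigmoid activations, plain
# gradient descent on squared error, inputs/targets normalized to [0, 1].
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class BPNet:
    def __init__(self, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(7)]
        self.w2 = [[rnd.uniform(-1, 1) for _ in range(7)] for _ in range(3)]

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in self.w2]
        return h, o

    def train_step(self, x, t, lr=0.5):
        h, o = self.forward(x)
        # Output deltas for squared error with sigmoid outputs.
        do = [(oi - ti) * oi * (1 - oi) for oi, ti in zip(o, t)]
        dh = [hi * (1 - hi) * sum(do[k] * self.w2[k][j] for k in range(3))
              for j, hi in enumerate(h)]
        for k in range(3):
            for j in range(7):
                self.w2[k][j] -= lr * do[k] * h[j]
        for j in range(7):
            for i in range(3):
                self.w1[j][i] -= lr * dh[j] * x[i]
        return sum((oi - ti) ** 2 for oi, ti in zip(o, t))

if __name__ == "__main__":
    net = BPNet()
    x, t = [0.8, 0.4, 0.1], [0.7, 0.5, 0.2]   # normalized Q1 -> Q2 pair
    losses = [net.train_step(x, t) for _ in range(200)]
    print(losses[-1] < losses[0])   # True: training reduces the error
```

The learned mapping plays the role of the color correction model coefficients: corrected R, G, B values are obtained by a forward pass over each (normalized) pixel.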
Example four:
referring to fig. 7, fig. 7 is a flowchart of another implementation of a method for fully automatically correcting color and geometric distortion of an image according to an embodiment of the present invention, where the method may include the following steps:
S701: Acquiring image information containing the target object and the extended two-dimensional code.
The extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark.
S702: the coordinates of the center point of the three position identifiers in the positioning mark are respectively calculated.
S703: and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
S704: and carrying out three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction.
S705: performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by combining a lens distortion model and a steepest descent algorithm; and correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain the image after the lens geometric distortion correction.
S706: identifying color blocks of a color card and gray blocks of a gray card in the image information by using a support vector machine algorithm to obtain an identification result; and performing color correction and gray correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object.
The color blocks of the color card and the gray blocks of the gray card in the image information can also be identified with a support vector machine algorithm: the color blocks on the standard color card are used as the training sample set, and a nonlinear kernel function maps the training data into a high-dimensional feature space. First, the RGB values of the color blocks of a color card reference image shot in a standard environment and of a color card image shot in a non-standard environment are collected and converted from the RGB color space to the Lab color space. The Lab color model consists of three elements: lightness (L) and the two related color channels a and b, where L represents lightness, a represents the range from magenta to green, and b represents the range from yellow to blue. The training data in the Lab color space are split into the three components L, a and b to obtain three training subsets. With the color blocks of the color card image shot in the non-standard environment as source and those of the reference image shot in the standard environment as target, the three subsets are trained separately to obtain three support vector regression models and the support vector regression functions f_L_SVR, f_a_SVR, f_b_SVR. Finally, the image to be corrected is converted from the RGB color space to the Lab color space, and regression is performed with the trained functions. Let the color values of the i-th pixel of the image to be corrected be L_i, a_i, b_i, where i = 1, 2, 3, …, N and N is the total number of image pixels; the corrected color values L_SVR_i, a_SVR_i, b_SVR_i of the pixel are computed through the support vector regression functions f_L_SVR, f_a_SVR, f_b_SVR as:
L_SVR_i = f_L_SVR(L_i);
a_SVR_i = f_a_SVR(a_i);
b_SVR_i = f_b_SVR(b_i);
The image corrected by the support vector machine algorithm is finally obtained. The specific form and solving process of the support vector regression models and the regression functions f_L_SVR, f_a_SVR, f_b_SVR are as follows:
Firstly, a loss function must be defined. In the field of medical images, the ε-insensitive loss function is mainly adopted: compared with the least-squares error loss, the Laplace loss, and the Huber loss, the ε-insensitive loss ignores errors within a band of width ε around the true value, helps obtain fewer support vectors, and has good robustness.
The ε -insensitive loss function may be expressed as:
Lε(y)=max(0,|f(x)-y|-ε);
where f(x) is the regression prediction and y is the target value; errors smaller than ε incur no loss.
The kernel function adopted is the Gaussian radial basis kernel: for any vectors x_i, x_j ∈ X ⊆ Rⁿ, with δ the bandwidth controlling the local range of action of the Gaussian kernel,
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2δ²))
Solving the regression problem yields the Lagrange multipliers α, α*, obtained by maximizing the dual objective:
W(α, α*) = −(1/2) Σᵢ Σⱼ (αᵢ − αᵢ*)(αⱼ − αⱼ*) K(xᵢ, xⱼ) − ε Σᵢ (αᵢ + αᵢ*) + Σᵢ yᵢ (αᵢ − αᵢ*)
The constraints satisfied by the above formula are:
Σᵢ (αᵢ − αᵢ*) = 0,  0 ≤ αᵢ, αᵢ* ≤ C,  i = 1, …, n
and the Karush-Kuhn-Tucker (KKT) conditions are satisfied:
αᵢ (ε + ξᵢ − yᵢ + f(xᵢ)) = 0,  αᵢ* (ε + ξᵢ* + yᵢ − f(xᵢ)) = 0,  (C − αᵢ) ξᵢ = 0,  (C − αᵢ*) ξᵢ* = 0
Solving the regression parameter equation under these constraints yields the Lagrange multipliers αᵢ, αᵢ*.
The support vectors are those points whose Lagrange multipliers are greater than zero.
The regression equation is given in the form:
f(x) = Σᵢ (αᵢ − αᵢ*) K(xᵢ, x) + b
where:
K(xᵢ, xⱼ) is the kernel function;
b = −(1/2) Σᵢ (αᵢ − αᵢ*) (K(xᵢ, x_r) + K(xᵢ, x_s))
is the offset, and x_r and x_s are support vectors of the kernel function taken from the two sides of the ε tube.
Therefore the support vector regression functions f_L_SVR, f_a_SVR, f_b_SVR each take the form of the regression equation above.
The matched gray blocks in the picture are cropped and stored. The gray card carries a plurality of gray levels; these are clustered and their values are calculated, and the variation coefficient of each gray level of the captured picture is computed against the accurate values of the standard gray levels. The variation coefficients of the different gray levels can then be used to correct the corresponding gray levels of the picture, finally yielding an image free of color distortion and gray-scale distortion.
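A hedged sketch of this per-gray-level gain correction (the nearest-level lookup and function names are assumptions of this illustration):

```python
# Hedged sketch of the gray-level correction step: per-level gain
# coefficients are computed from the standard gray-card values and the
# measured ones, then applied to pixels by nearest measured level.
def gray_gains(standard, measured):
    return [s / m for s, m in zip(standard, measured)]

def correct_gray(pixel, measured, gains):
    # Pick the gain of the measured gray level closest to this pixel.
    idx = min(range(len(measured)), key=lambda i: abs(measured[i] - pixel))
    return pixel * gains[idx]

if __name__ == "__main__":
    standard = [32.0, 96.0, 160.0, 224.0]   # reference gray-card levels
    measured = [40.0, 110.0, 170.0, 230.0]  # same levels in the captured image
    gains = gray_gains(standard, measured)
    print(round(correct_gray(110.0, measured, gains), 2))   # 96.0
```

A smoother variant could interpolate the gains between adjacent gray levels instead of using the nearest one.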
Corresponding to the above method embodiments, the embodiments of the present invention further provide a full-automatic correction device for image color and geometric distortion, and the full-automatic correction device for image color and geometric distortion described below and the full-automatic correction method for image color and geometric distortion described above may be referred to in correspondence with each other.
Referring to fig. 8, fig. 8 is a block diagram of a full-automatic correction apparatus for image color and geometric distortion according to an embodiment of the present invention, where the apparatus may include:
an image information obtaining module 81, configured to obtain image information including a target object and an extended two-dimensional code; the expanded two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark;
the rotation and translation correction module 82 is used for identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain external parameters of the camera; performing three-dimensional rotation and translation transformation on a target object image corresponding to a target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
the lens geometric distortion correction module 83 is configured to perform mode fitting on the edge of the color card and the edge of the grayscale card of the extended two-dimensional code in the image information, and calculate camera parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after the lens geometric distortion correction;
a real image obtaining module 84, configured to identify color blocks of a color card and gray blocks of a gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object so as to diagnose a patient corresponding to the target object according to the real image.
According to the technical scheme, a color card composed of a plurality of different color blocks, a gray card composed of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark are preset as the extended two-dimensional code, and image information containing the extended two-dimensional code and the target object is acquired. The extended two-dimensional code is used to obtain the camera external parameters so as to complete the rotation and translation correction of the target object image; the camera internal parameters are calculated with a lens distortion model to complete the lens geometric distortion correction of the target object image; and the color correction and gray correction of the target object image are completed by identifying the color blocks of the color card and the gray blocks of the gray card in the image information. A real image is finally obtained, so that image recognition can be performed on the real image, which greatly reduces the misjudgment probability of image recognition and greatly improves the accuracy of the image recognition result.
In a specific embodiment of the present invention, the positioning mark includes three black-and-white-spaced position identifiers respectively located at three corners of the extended two-dimensional code; the rotation and translation correction module 82 includes a camera external parameter obtaining sub-module, and the camera external parameter obtaining sub-module includes:
a coordinate calculation unit for respectively calculating the coordinates of the center points of the three position identifiers in the positioning mark; and
And the camera external parameter obtaining unit is used for calculating an inverse perspective transformation matrix of the image information by utilizing the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the camera external parameters.
In one embodiment of the present invention, the lens geometric distortion correction module 83 includes a camera internal parameter calculation sub-module,
the camera internal parameter calculation sub-module is a module for calculating the camera internal parameters by performing mode fitting on the edge of a color card and the edge of a gray card of the extended two-dimensional code in the image information and combining a lens distortion model and a steepest descent algorithm.
In one embodiment of the present invention, the real image acquisition module 84 includes a color card and gray card identification sub-module,
the color card and gray card identification sub-module is a module for identifying color blocks of the color card and gray blocks of the gray card in the image information by utilizing a polynomial regression method.
In one embodiment of the present invention, the real image acquisition module 84 includes a color card and grayscale card identification sub-module,
the color card and gray card identification submodule is a module for identifying color blocks of a color card and gray blocks of a gray card in image information by using a BP neural network method.
In one embodiment of the present invention, the real image acquisition module 84 includes a color card and grayscale card identification sub-module,
the color card and gray card identification submodule is a module for identifying color blocks of the color card and gray blocks of the gray card in the image information by using a support vector machine algorithm.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a full-automatic correction system for image color and geometric distortion, and the full-automatic correction system for image color and geometric distortion described below and the full-automatic correction method for image color and geometric distortion described above may be referred to correspondingly.
Referring to fig. 9, fig. 9 is a structural block diagram of a full-automatic correction system for image color and geometric distortion in the embodiment of the present invention, where the system may include an extended two-dimensional code 1, an image acquisition device 2, an image preprocessing terminal 3, and a server 4, where the extended two-dimensional code 1 includes a color card formed by a plurality of different color blocks, a grayscale card formed by a plurality of different grayscale blocks, and a two-dimensional code including a positioning mark; wherein:
the image acquisition equipment 2 is used for acquiring the image information of the extended two-dimensional code 1 and the target object; and sends the image information to the image preprocessing terminal 3;
the image preprocessing terminal 3 is used for identifying and calculating the positioning mark of the extended two-dimensional code 1 in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on a target object image corresponding to a target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction; and sends the image after the rotation and translation correction to the server 4;
the server 4 is used for performing mode fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code 1 in the image information and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after the lens geometric distortion correction; identifying color blocks of a color card and gray blocks of a gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image subjected to the lens geometric distortion correction according to the recognition result to obtain a real image of the target object, and diagnosing a patient corresponding to the target object according to the real image.
According to the technical scheme, a color card composed of a plurality of different color blocks, a gray card composed of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark are preset as the extended two-dimensional code, and image information containing the extended two-dimensional code and the target object is acquired. The extended two-dimensional code is used to obtain the camera external parameters so as to complete the rotation and translation correction of the target object image; the camera internal parameters are calculated with a lens distortion model to complete the lens geometric distortion correction of the target object image; and the color correction and gray correction of the target object image are completed by identifying the color blocks of the color card and the gray blocks of the gray card in the image information. A real image is finally obtained, so that image recognition can be performed on the real image, which greatly reduces the misjudgment probability of image recognition and greatly improves the accuracy of the image recognition result.
In one embodiment of the invention, the positioning mark comprises three position identifiers respectively positioned at three corners of the extended two-dimensional code 1 and spaced in black and white,
the image preprocessing terminal 3 is specifically used for respectively calculating the coordinates of the central points of the three position identifiers in the positioning mark; and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
In an embodiment of the present invention, the server 4 is specifically configured to perform mode fitting on an edge of a color card and an edge of a grayscale card of the extended two-dimensional code 1 in the image information, and calculate the camera parameters by combining a lens distortion model and a steepest descent algorithm.
In an embodiment of the present invention, the server 4 is specifically configured to identify color blocks of a color card and gray blocks of a gray card in the image information by using a polynomial regression method.
In a specific embodiment of the present invention, the server 4 is specifically configured to identify color blocks of a color card and gray blocks of a gray card in the image information by using a BP neural network method.
In a specific embodiment of the present invention, the server 4 is specifically configured to identify the color blocks of the color card and the gray blocks of the gray card in the image information by using a support vector machine algorithm.
In one specific application, referring to fig. 10, fig. 10 is a schematic diagram of a fully automatic correction system for image color and geometric distortion according to an embodiment of the present invention. Consider a diagnostic scenario in which a patient is given a targeted diagnosis by observing the color of the patient's face. An image 11 containing the patient's face and the extended two-dimensional code may first be acquired by the camera 21, the video camera 22, or the user mobile terminal 23. If the image is collected by the camera 21 or the video camera 22, the collected image 11 may be sent to the client 31; the camera 21 and the video camera 22 may be connected to the client 31 by wire or wirelessly, and the client 31 performs the three-dimensional rotation and translation transformation on the face image. If the image is collected by the user mobile terminal 23, the face image may be subjected to the three-dimensional rotation and translation transformation directly on the user mobile terminal 23.
After the face image has undergone the three-dimensional rotation and translation transformation, the rotation- and translation-corrected image is sent to the server 4 through the client 31 or the user mobile terminal 23; the client 31 and the server 4 may be connected by wire or wirelessly. The server 4 then performs lens geometric distortion correction and color and gray scale correction on the face image, finally obtaining the real face image of the patient. The server 4 performs the targeted diagnosis on the basis of this real face image and returns the diagnosis result to the user mobile terminal 23 or the client 31, which greatly improves the recognition rate of the diagnosis system and greatly reduces its misjudgment rate.
The fully automatic correction system for image color and geometric distortion shown in fig. 10 can also serve as an anti-theft or check-in system based on face recognition; the acquisition and processing of the face image may follow the processes above. Applying three-dimensional rotation and translation transformation, lens geometric distortion correction, and color and gray level correction to the acquired face image information greatly improves the fidelity of the face image, and thus the recognition effect and recognition efficiency. When the present invention is implemented, the target object is not limited to a human face and may be any other image recognition object; the embodiment of the present invention does not limit this.
Corresponding to the above method embodiment, the present invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:
acquiring image information containing a target object and an extended two-dimensional code; the extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark;
identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after lens geometric distortion correction;
identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray correction on the image after the lens geometric distortion correction according to the identification result to obtain a real image of the target object.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments, which are not described herein again.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the system and the computer readable storage medium disclosed by the embodiments correspond to the method disclosed by the embodiments, so that the description is simple, and the relevant points can be referred to the method part for description.
The principle and the implementation of the present invention are explained in the present application by using specific examples, and the above description of the embodiments is only used to help understanding the technical solution and the core idea of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A full-automatic correction method for image color and geometric distortion is characterized by comprising the following steps:
acquiring image information containing a target object and an extended two-dimensional code; the extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark;
identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information, and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after lens geometric distortion correction;
identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray correction on the image after the geometric distortion correction of the lens according to the identification result to obtain a real image of the target object.
2. The method according to claim 1, wherein the positioning mark includes three position identifiers respectively located at three corners of the extended two-dimensional code, and the positioning mark of the extended two-dimensional code in the image information is identified and calculated to obtain a camera external parameter, including:
respectively calculating the coordinates of the central points of three position identifiers in the positioning mark;
and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
3. The method according to claim 1 or 2, wherein performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information and calculating camera internal parameters by using a lens distortion model comprises:
performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information, and calculating the camera internal parameters by combining a lens distortion model with a steepest descent algorithm.
4. The method according to claim 3, wherein identifying color blocks of the color card and gray blocks of the gray card in the image information comprises:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by utilizing a polynomial regression method.
5. The method according to claim 3, wherein identifying color blocks of the color card and gray blocks of the gray card in the image information comprises:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by using a BP neural network method.
6. The method according to claim 3, wherein identifying color blocks of the color card and gray blocks of the gray card in the image information comprises:
and identifying the color blocks of the color card and the gray blocks of the gray card in the image information by using a support vector machine algorithm.
7. A device for fully automatic correction of color and geometric distortion of an image, comprising: the image information acquisition module is used for acquiring image information containing a target object and the extended two-dimensional code; the extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks, and a two-dimensional code containing a positioning mark;
the rotation and translation correction module is used for identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction;
the lens geometric distortion correction module is used for performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after lens geometric distortion correction;
the real image obtaining module is used for identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image after the lens geometric distortion correction according to the identification result to obtain a real image of the target object so as to diagnose a patient corresponding to the target object according to the real image.
8. A full-automatic correction system for image color and geometric distortion is characterized by comprising an extended two-dimensional code, image acquisition equipment, an image preprocessing terminal and a server, wherein the extended two-dimensional code comprises a color card consisting of a plurality of different color blocks, a gray card consisting of a plurality of different gray blocks and a two-dimensional code containing a positioning mark; wherein:
the image acquisition equipment is used for acquiring the image information of the extended two-dimensional code and the target object; sending the image information to the image preprocessing terminal;
the image preprocessing terminal is used for identifying and calculating the positioning mark of the extended two-dimensional code in the image information to obtain camera external parameters; performing three-dimensional rotation and translation transformation on the target object image corresponding to the target object in the image information according to the camera external parameters to obtain an image subjected to rotation and translation correction; sending the image subjected to the rotation and translation correction to the server;
the server is used for performing pattern fitting on the edge of the color card and the edge of the gray card of the extended two-dimensional code in the image information and calculating camera internal parameters by using a lens distortion model; correcting the lens geometric distortion of the image after the rotation and translation correction by using the camera internal parameters to obtain an image after lens geometric distortion correction; identifying color blocks of the color card and gray blocks of the gray card in the image information to obtain an identification result; and performing color correction and gray scale correction on the image after the lens geometric distortion correction according to the identification result to obtain a real image of the target object, and diagnosing a patient corresponding to the target object according to the real image.
9. The system of claim 8, wherein the positioning mark includes three position identifiers with alternating black and white patterns, respectively located at three corners of the extended two-dimensional code,
the image preprocessing terminal is specifically configured to calculate coordinates of center points of the three position identifiers in the positioning mark respectively; and calculating an inverse perspective transformation matrix of the image information by using the geometric relationship of the three position identifiers and the positions of the position identifiers in the image information to obtain the external parameters of the camera.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method for fully automatic correction of color and geometric distortion of an image according to any one of claims 1 to 6.
CN201910695090.3A 2019-07-30 2019-07-30 Full-automatic correction method, device and equipment for image color and geometric distortion Active CN110400278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910695090.3A CN110400278B (en) 2019-07-30 2019-07-30 Full-automatic correction method, device and equipment for image color and geometric distortion


Publications (2)

Publication Number Publication Date
CN110400278A CN110400278A (en) 2019-11-01
CN110400278B true CN110400278B (en) 2021-10-01

Family

ID=68326582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910695090.3A Active CN110400278B (en) 2019-07-30 2019-07-30 Full-automatic correction method, device and equipment for image color and geometric distortion

Country Status (1)

Country Link
CN (1) CN110400278B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008564B (en) * 2019-11-01 2023-05-09 南京航空航天大学 Non-matching type face image recognition method and system
WO2021114184A1 (en) * 2019-12-12 2021-06-17 华为技术有限公司 Neural network model training method and image processing method, and apparatuses therefor
CN113132693A (en) * 2019-12-31 2021-07-16 长沙云知检信息科技有限公司 Color correction method
CN111353952B (en) * 2020-01-21 2023-05-05 佛山科学技术学院 Method for eliminating black boundary after image distortion correction
CN111915524B (en) * 2020-08-04 2024-03-29 深圳企业云科技股份有限公司 Full-automatic image perspective correction method based on cross ratio operation
CN112307786B (en) * 2020-10-13 2022-07-08 上海迅邦电子科技有限公司 Batch positioning and identifying method for multiple irregular two-dimensional codes
CN114782841B (en) * 2022-04-21 2023-12-15 广州中科云图智能科技有限公司 Correction method and device based on landing pattern
CN117615080A (en) * 2023-11-02 2024-02-27 江南大学 Printing color separation method and system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630444A (en) * 2009-08-06 2010-01-20 杭州六易科技有限公司 Automatic identification warning system of vehicle-mounted traffic annunciator
CN203396705U (en) * 2013-05-13 2014-01-15 深圳市宝凯仑科技有限公司 Rapid detection card
CN103985118A (en) * 2014-04-28 2014-08-13 无锡观智视觉科技有限公司 Parameter calibration method for cameras of vehicle-mounted all-round view system
CN104657728A (en) * 2015-03-19 2015-05-27 江苏物联网研究发展中心 Barcode recognition system based on computer vision
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN107689061A (en) * 2017-07-11 2018-02-13 西北工业大学 Rule schema shape code and localization method for indoor mobile robot positioning
CN107944324A (en) * 2017-11-16 2018-04-20 凌云光技术集团有限责任公司 A kind of Quick Response Code distortion correction method and device
CN108090542A (en) * 2018-01-10 2018-05-29 深圳市深大极光科技有限公司 A kind of two-dimensional code anti-counterfeiting label and preparation method thereof
CN108305291A (en) * 2018-01-08 2018-07-20 武汉大学 Utilize the monocular vision positioning and orientation method of the wall advertisement comprising positioning Quick Response Code
CN108549397A (en) * 2018-04-19 2018-09-18 武汉大学 The unmanned plane Autonomous landing method and system assisted based on Quick Response Code and inertial navigation
CN108765328A (en) * 2018-05-18 2018-11-06 凌美芯(北京)科技有限责任公司 A kind of high-precision multiple features plane template and its distort optimization and scaling method
CN109805891A (en) * 2019-01-08 2019-05-28 中南大学湘雅医院 Post-operative recovery state monitoring method, device, system, readable medium and colour atla
CN109829524A (en) * 2019-01-17 2019-05-31 柳州康云互联科技有限公司 A kind of compound characteristics of image code and preparation method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201113815A (en) * 2009-10-09 2011-04-16 Primax Electronics Ltd QR code processing method and apparatus thereof
CN104517109B (en) * 2013-09-29 2018-03-06 北大方正集团有限公司 A kind of bearing calibration of QR codes image and system
CN109461126B (en) * 2018-10-16 2020-06-30 重庆金山科技(集团)有限公司 Image distortion correction method and system


Also Published As

Publication number Publication date
CN110400278A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
US6985631B2 (en) Systems and methods for automatically detecting a corner in a digitally captured image
CN110223226B (en) Panoramic image splicing method and system
RU2421814C2 (en) Method to generate composite image
JP4868530B2 (en) Image recognition device
JP5387193B2 (en) Image processing system, image processing apparatus, and program
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN111862224A (en) Method and device for determining external parameters between camera and laser radar
CN110909750B (en) Image difference detection method and device, storage medium and terminal
CN113012234B (en) High-precision camera calibration method based on plane transformation
US9131193B2 (en) Image-processing device removing encircling lines for identifying sub-regions of image
JP6797046B2 (en) Image processing equipment and image processing program
CN115170525A (en) Image difference detection method and device
US20120038785A1 (en) Method for producing high resolution image
CN110751690B (en) Visual positioning method for milling machine tool bit
JPH08287258A (en) Color image recognition device
CN117095417A (en) Screen shot form image text recognition method, device, equipment and storage medium
CN112733773A (en) Object detection method and device, computer equipment and storage medium
KR20190027165A (en) Image Adjustment System and Method for Unifying Features of Multiple Images
CN110245674B (en) Template matching method, device, equipment and computer storage medium
CN110689586B (en) Tongue image identification method in traditional Chinese medicine intelligent tongue diagnosis and portable correction color card used for same
CN102843479A (en) File scanning method, file scanning device and portable electronic device
US11145037B1 (en) Book scanning using machine-trained model
JP4298283B2 (en) Pattern recognition apparatus, pattern recognition method, and program
CN114358131A (en) Digital photo frame intelligent photo optimization processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant