CN112991211B - Industrial camera dark angle correction method - Google Patents

Industrial camera dark angle correction method

Info

Publication number
CN112991211B
CN112991211B (application CN202110271960.1A)
Authority
CN
China
Prior art keywords
key
column
image
row
pixel
Prior art date
Legal status
Active
Application number
CN202110271960.1A
Other languages
Chinese (zh)
Other versions
CN112991211A (en)
Inventor
易天格
宋伟铭
周中亚
刘敏
高晓阳
Current Assignee
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Original Assignee
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Filing date
Publication date
Application filed by Beijing Daheng Image Vision Co ltd, China Daheng Group Inc Beijing Image Vision Technology Branch filed Critical Beijing Daheng Image Vision Co ltd
Priority to CN202110271960.1A priority Critical patent/CN112991211B/en
Publication of CN112991211A publication Critical patent/CN112991211A/en
Application granted granted Critical
Publication of CN112991211B publication Critical patent/CN112991211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The application discloses a dark angle (vignetting) correction method for an industrial camera, which comprises the following steps: step 1, sequentially determining key rows and key columns of a calibration image according to a set minimum grid width and the pixel gray values of the pixel points in the calibration image, and forming a grid from the key rows and key columns; step 2, determining a neighborhood range for each grid point in the grid according to a preset neighborhood half-axis length L and the numbers of rows and columns of the calibration image, and calculating the neighborhood mean of each related point group in the grid within that neighborhood range, wherein a related point group comprises four adjacent grid points; and step 3, calculating a dark angle correction coefficient for each pixel point in the image to be corrected according to the neighborhood means, and performing dark angle correction on the image to be corrected according to the correction coefficients. With this technical scheme, the application designs correction coefficients suited to the dark angle characteristics so as to correct vignetting distortion at the image edges, improve image quality, reduce the occupation of industrial camera hardware resources, and improve image correction efficiency.

Description

Industrial camera dark angle correction method
Technical Field
The application relates to the technical field of image processing, in particular to an industrial camera dark angle correction method.
Background
Automatic optical inspection is an effective inspection method for industrial automation. It uses machine vision built on industrial cameras as a standard inspection technology, and is widely applied in manufacturing fields such as printing and packaging quality control, PCB inspection and rapid prototyping. For example, such a system collects image information from the surface of printed matter through an industrial camera and lens, and controls the quality of the printed matter through image processing such as positioning, identification and classification.
As the application range of automatic optical inspection grows wider, the requirements on the quality of the acquired images also grow higher. In the collected images, owing to the structure and performance of the optical system, a captured uniform bright-field image exhibits a 'dark angle' (vignetting) phenomenon: the gray value gradually decreases from the center to the periphery. Vignetting is a form of non-uniform image brightness, appearing as a gradual darkening toward the four corners of the image; it arises because light at the lens edge forms a larger angle with the camera's optical axis, causing light loss.
The vignetting phenomenon becomes more pronounced as the target surface of the camera sensor grows larger. The image non-uniformity caused by vignetting distorts target features at the image edges, increases the difficulty of later image processing, increases false detections and missed detections of defects, and harms the robustness of the system.
Prior-art methods for correcting the vignetting phenomenon generally suffer from a large amount of operation data, a high resource occupancy rate and a narrow application range; they raise the production cost of an industrial camera while also showing poor anti-interference performance and degrading the quality of the images output by the camera.
The camera vignetting correction system described in patent CN108111777A averages a plurality of captured images into a mean image and then uses the reciprocal of each pixel's gray value in the mean image as that pixel's multiplication coefficient, which is applied to correct subsequent images. This correction method can correct the non-uniformity caused by vignetting to a certain extent, but it has obvious disadvantages: 1. It requires the amount of correction coefficient data to match the image resolution, so the data volume is large, and the coefficients are floating-point data, which increases the storage requirements of hardware such as an FPGA. 2. It requires the FPGA to perform floating-point multiplication, which heavily occupies computing resources and affects the realization of other camera functions. 3. The per-pixel multiplication coefficients do not account for noise in the spatial domain and are extremely sensitive to dead pixels or tiny disturbances such as dust spots on the sensor surface, which degrades the quality of subsequent images.
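For reference in the discussion below, this prior-art per-pixel gain approach can be summarised by a minimal sketch of the following kind. The function names, the normalisation by the brightest pixel and the clipping are illustrative assumptions and are not taken from CN108111777A; the only operations assumed from the text above are averaging several bright-field frames and multiplying subsequent images by per-pixel reciprocal coefficients.

import numpy as np

def build_per_pixel_gains(flat_frames):
    # Average several uniform bright-field frames into a mean image, then take
    # the (normalised) reciprocal of every pixel as its multiplication coefficient.
    mean_img = np.mean(np.stack(flat_frames, axis=0), axis=0)
    mean_img = np.maximum(mean_img, 1e-6)   # guard against division by zero
    return mean_img.max() / mean_img        # full-resolution floating-point gain map

def apply_per_pixel_gains(image, gains):
    # One floating-point multiplication per pixel, the cost criticised above.
    return np.clip(image.astype(np.float64) * gains, 0, 255).astype(np.uint8)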
Patent CN107172323A likewise discloses a camera vignetting correction coefficient calculation method, which characterizes the degree of vignetting by computing the gradient and divergence of a calibration image and obtains the correction coefficient of each pixel by solving a mathematical model. This method can correct image vignetting to a certain extent, but it has several defects: 1. The model requires the image coordinates of the model center to be specified; when there is an error between the center of the vignetting feature and the model center, the calculated correction coefficients do not match the actual situation, reducing the stability and reliability of the correction. 2. The model expands radially, so non-centrosymmetric vignetting features cannot be corrected reasonably, which limits the application range. 3. The analysis and computation of the model are complex, making it difficult to deploy in camera hardware; the correction can only be completed on a computer terminal, so offline deployment of the correction cannot be realized.
Disclosure of Invention
The application aims at: designing suitable correction coefficients for the vignetting characteristics so as to correct vignetting distortion at the image edges, improve image quality, reduce the occupation of industrial camera hardware resources, and improve image correction efficiency.
The technical scheme of the application is as follows: an industrial camera vignetting correction method is provided, the method comprising: step 1, determining key rows and key columns of a calibration image in sequence according to a set minimum grid width and the pixel gray values of the pixel points in the calibration image, and forming a grid from the key rows and key columns, which specifically comprises:
Step 11, according to the minimum grid width b_min and the number of columns H_cols of the calibration image, selecting all pixel points in the calibration image whose column coordinate c satisfies c ≤ b_min or c ≥ (H_cols − b_min); the number of selected pixel points is N_1 = 2 × b_min × H_rows, and the selected pixel points are recorded as a first reference point set; in the first reference point set, the row coordinate of each pixel point is denoted r_u, u = 1, 2, 3, ..., N_1, the column coordinate is denoted c_u, and the corresponding pixel gray value is p_u = H(r_u, c_u);
Step 12, calculating a first coordinate matrix A according to the row coordinates r_u of the pixel points in the first reference point set, and calculating a first pixel matrix V according to the pixel gray values p_u = H(r_u, c_u) of the pixel points;
Step 13, calculating a first parameter matrix according to the first coordinate matrix A and the first pixel matrix V, and selecting the first element of the first parameter matrix, recorded as the row interval parameter a_1;
Step 14, sequentially calculating the row interval and the number of row intervals N_2 of the calibration image according to the row interval parameter a_1 and the number of rows H_rows of the calibration image, and determining the key rows key_r_i according to the row interval and the number of row intervals N_2, where the key rows key_r_i are determined by the following rule: the first key row key_r_1: key_r_1 = 0; the remaining key rows key_r_i: key_r_i = (i − 1) × δ_r0, i = 2, 3, …, N_2 − 1, where δ_r0 is the default row interval; and the last key row key_r_{N_2};
Step 15, with i the current key row index, 1 ≤ i ≤ N_2 − 1, selecting the pixel points lying between the i-th and (i+1)-th key rows of the calibration image, i.e. taking all points whose row coordinates in the calibration image satisfy key_r_i ≤ r ≤ key_r_{i+1} as a second reference point set; the number of selected pixel points is M_i.1 = δ_r,i × H_cols in total, the column coordinates of the pixel points in the second reference point set are c_v, the row coordinates are r_v, v = 1, 2, 3, ..., M_i.1, and the corresponding pixel gray values are q_v = H(r_v, c_v);
Step 16, calculating a second coordinate matrix B_i according to the column coordinates c_v of the pixel points in the second reference point set, and calculating a second pixel matrix according to the pixel gray values q_v = H(r_v, c_v) of the pixel points;
Step 17, calculating a second parameter matrix according to the second coordinate matrix B_i and the second pixel matrix, and selecting the first element of the second parameter matrix, recorded as the column interval parameter b_i.1;
Step 18, sequentially calculating the column interval and the number of column intervals M_i.2 of the calibration image according to the column interval parameter b_i.1 and the number of columns H_cols of the calibration image, and determining the key columns key_c_(i,j), where the key columns key_c_(i,j) are determined by the following rule: the first key column key_c_(i,1): key_c_(i,1) = 0; the remaining key columns key_c_(i,j): key_c_(i,j) = (j − 1) × δ_c0.i, j = 2, 3, …, M_i.2 − 1, where δ_c0.i is the default column interval; and the last key column key_c_(i,M_i.2);
Step 2, determining a neighborhood range of each grid point in the grid according to a preset neighborhood half-axis length L and the number of lines and columns of the calibration image, and calculating a neighborhood mean value of a related point group in the grid in the neighborhood range, wherein the related point group comprises four adjacent grid points;
Step 3, calculating a vignetting correction coefficient for each pixel point in the image to be corrected according to the neighborhood means, and performing vignetting correction on the image to be corrected according to the correction coefficients, which specifically comprises:
Step 31, sequentially calculating the point group correction coefficients G_m,n,l corresponding to the related point groups in the grid according to the maximum value max_avg of the neighborhood means, where each point group correction coefficient G_m,n,l comprises four correlation point correction coefficients;
Step 32, respectively calculating the vertical gradient gy_m of the first-column grid in each row, and the row direction correction coefficient g_R,1 of each row of the first column of pixel points of the image to be corrected, according to the third correlation point correction coefficient G_m,n,3, the first correlation point correction coefficient G_m,n,1 and the row interval;
Step 33, calculating the first horizontal gradient gx_m,n of each grid according to the second correlation point correction coefficient G_m,n,2, the first correlation point correction coefficient G_m,n,1 and the column interval; calculating the gradient change rate kx_m,n of each grid according to the four correlation point correction coefficients and the column interval; and correcting the first horizontal gradient gx_m,n according to the gradient change rate kx_m,n to generate the second horizontal gradient of the image to be corrected;
Step 34, calculating the vignetting correction coefficient g_R,C corresponding to each pixel point in the image to be corrected according to the row direction correction coefficient g_R,1 and the second horizontal gradient, where T_cols is the number of columns of the image to be corrected.
In any of the above technical solutions, in step 14, calculating the row interval of the calibration image further includes: correcting the row interval according to the minimum grid width and the maximum grid width, and calculating the number of row intervals according to the corrected row interval.
In any of the above technical solutions, in step 18, calculating the column interval of the calibration image further includes: correcting the column interval according to the minimum grid width and the maximum grid width, and calculating the number of column intervals according to the corrected column interval.
In any of the above technical solutions, in step 2, the neighborhood range comprises at least an upper boundary, a lower boundary, a left boundary and a right boundary; the value of the neighborhood mean avg_m,n,l is determined by the pixel gray values H(r, c) of the pixel points of the calibration image within the neighborhood range, whose boundaries are:
d_up.(m,n,l) = max((Pt_r_m,n,l − L), 0)
d_down.(m,n,l) = min((Pt_r_m,n,l + L), H_rows)
d_left.(m,n,l) = max((Pt_c_m,n,l − L), 0)
d_right.(m,n,l) = min((Pt_c_m,n,l + L), H_cols)
where c is the column coordinate of a pixel point in the calibration image, r is the row coordinate of a pixel point in the calibration image, H(r, c) is the pixel gray value of the pixel point in the r-th row and c-th column of the calibration image, d_up.(m,n,l) is the upper boundary, d_down.(m,n,l) is the lower boundary, d_left.(m,n,l) is the left boundary, d_right.(m,n,l) is the right boundary, Pt_c_m,n,l is the column coordinate of the grid point Pt_m,n,l in the related point group, Pt_r_m,n,l is the row coordinate of the grid point Pt_m,n,l in the related point group, m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, L is the preset neighborhood half-axis length, H_rows is the number of rows of the calibration image, H_cols is the number of columns of the calibration image, key_c_m,n is the column coordinate of the grid point, n is the key column index, n = 1, 2, 3, …, M_m.2 − 1, M_m.2 is the number of column intervals corresponding to the m-th key row, and l is the index of the grid point within the related point group, l = 1, 2, 3, 4.
In any of the above technical solutions, further, the vertical gradient gy_m is calculated from the correlation point correction coefficients and the row interval, where m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, and δ_r is the row interval.
The row direction correction coefficient g_R,1 is calculated from the vertical gradient gy_m, where T_rows is the number of rows of the image to be corrected, R is the row coordinate in the image to be corrected, and G_1,1,1 is the first correlation point correction coefficient of the grid in the first row and first column.
In any of the above technical solutions, further, the first horizontal gradient gx_m,n is calculated from the second correlation point correction coefficient G_m,n,2, the first correlation point correction coefficient G_m,n,1 and the column interval δ_c.m.
The second horizontal gradient is obtained by correcting gx_m,n with the gradient change rate kx_m,n, where C is the column coordinate in the image to be corrected.
The beneficial effects of the application are as follows:
According to the technical scheme of the application, a correction coefficient calculation scheme is designed for different correction precision requirements: given parameters such as the maximum allowable correction deviation, the maximum/minimum grid width, the neighborhood half-axis length and the calibration image, and combined with the grid division rule, the coordinates of the key points of each grid and their gray values, gradients and gradient change rates are determined, and the vignetting correction coefficients are calculated by gradient superposition so as to correct the vignetting of images shot by the industrial camera. The method can design a suitable grid division scheme for different scenes (different vignetting characteristics), reduces the occupation of data storage and computing resources of the camera-end hardware, enables offline deployment of vignetting correction, and at the same time improves the stability of correction for different vignetting characteristics and the quality of the camera's output images.
Drawings
The advantages of the foregoing and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow chart diagram of an industrial camera vignetting correction method according to one embodiment of the application;
FIG. 2 is a schematic diagram relating to the maximum allowable correction deviation in an industrial camera vignetting correction method according to one embodiment of the application;
FIG. 3 is a schematic diagram of a first set of reference points according to one embodiment of the application;
FIG. 4 is a schematic diagram of a second set of reference points according to one embodiment of the application;
FIG. 5 is a schematic diagram of a grid composed of key rows and key columns according to one embodiment of the application;
FIG. 6 is a schematic diagram of the neighborhood range of a key point determined by the neighborhood half-axis length L according to one embodiment of the application;
FIG. 7 is a schematic view of the analysis of the vignetting correction results according to one embodiment of the present application.
Reference numerals: 1, maximum allowable correction deviation t_max; 2, actual image gray value distribution characteristic (local to a grid); 3, image gray value distribution characteristic obtained by interpolation (local to a grid); 4, grid interval; 5, calibration image H; 6, range of the minimum grid width b_min on both sides of the calibration image; 7, row interval formed by two adjacent key rows; 8, interpolation grid and its four key points; 9, upper boundary of the calibration image; 10, left boundary of the calibration image.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
As shown in fig. 1 and fig. 2, the present embodiment provides an industrial camera vignetting correction method, in which a calibration image and design parameters are used to calculate a vignetting correction coefficient corresponding to each pixel point in an image to be corrected, so as to perform vignetting correction on each pixel point in the image to be corrected, and improve quality of an output image of the industrial camera.
In this embodiment, the design parameters include the maximum allowable correction deviation t_max, the maximum allowable grid width b_max, the minimum grid width b_min and the neighborhood half-axis length L. The pixel gray value of the calibration image is denoted H(R, C), where R represents the row coordinate and C the column coordinate in the calibration image; similarly, the pixel gray value of the image to be corrected is denoted T(R, C), where R represents the row coordinate and C the column coordinate in the image to be corrected.
In this embodiment, the above process may be divided into an offline part and an online part, executed by the computer end and the industrial camera end respectively. In the offline part, the computer end calculates the grid information corresponding to the calibration image according to the calibration image and the design parameters, and saves the grid information as an offline file. When the industrial camera end needs to perform vignetting correction, it reads the relevant offline file, calculates the vignetting correction coefficient corresponding to each pixel point in the image to be corrected according to the grid information and the information of the image to be corrected, and performs the correction.
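A minimal sketch of this offline/online split is shown below; the file name, the .npz container and the assumption that the grid information fits into rectangular arrays are illustrative choices, not part of the patent.

import numpy as np

def save_grid_info_offline(path, key_rows, key_cols, point_group_coeffs):
    # Computer end: store the grid information derived from the calibration image.
    # key_cols and point_group_coeffs are assumed to be rectangular arrays here.
    np.savez(path, key_rows=key_rows, key_cols=key_cols, coeffs=point_group_coeffs)

def load_grid_info_online(path):
    # Camera end: read the offline file before computing per-pixel correction coefficients.
    data = np.load(path)
    return data["key_rows"], data["key_cols"], data["coeffs"]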
It should be noted that, in this embodiment, the specific division of the above-mentioned process is not limited, and the method in this embodiment may also be independently completed by the industrial camera end.
The industrial camera vignetting correction method in the present embodiment includes:
Step 1, sequentially determining the key rows key_r_i and key columns key_c_(i,j) of a calibration image according to a set minimum grid width b_min and the pixel gray values H(r, c) of the pixel points in the calibration image, and forming a grid from the key rows and key columns;
In this embodiment, before the vignetting correction is performed, a grid corresponding to the calibration image needs to be determined according to the calibration image and the design parameter, where the grid is composed of a key row and a key column in the calibration image, and an intersection point of the key row and the key column is denoted as a grid point.
In this embodiment, a method for dividing grids is provided by combining a grid dividing rule and a design parameter, and coordinates of each grid point are determined by calculating a corresponding coordinate matrix and a pixel matrix, so as to ensure that a proper dark angle correction coefficient can be designed for dark angle features, so as to correct dark angle distortion of an image edge to be corrected, and improve image quality.
In the process of dividing the grid, in this embodiment, the determining the key rows of the calibration image includes:
Step 11, according to the minimum grid width b_min and the number of columns H_cols of the calibration image, selecting all pixel points in the calibration image whose column coordinates satisfy c ≤ b_min or c ≥ (H_cols − b_min); the number of selected pixel points is N_1 = 2 × b_min × H_rows, as shown in fig. 3, and the selected pixel points are recorded as the first reference point set. In the first reference point set, the row coordinate of each pixel point is r_u, u = 1, 2, 3, ..., N_1, the column coordinate is c_u, and the corresponding pixel gray value is p_u = H(r_u, c_u).
Step 12, calculating a first coordinate matrix A according to the row coordinates r_u of the pixel points in the first reference point set, and calculating a first pixel matrix V according to the pixel gray values p_u = H(r_u, c_u) of the pixel points, where the first pixel matrix V is
V = [p_1 p_2 p_3 … p_N1].
Step 13, calculating a first parameter matrix according to the first coordinate matrix A and the first pixel matrix V, and selecting the first element of the first parameter matrix, recorded as the row interval parameter a_1 (a hedged code sketch of steps 11 to 13 follows).
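The exact construction of the first coordinate matrix A and of the first parameter matrix is given in the patent only as formulas that are not reproduced in this text, so the sketch below is a hedged illustration: it builds the first reference point set exactly as in step 11 and then, as an assumption, obtains the row interval parameter a_1 as the leading coefficient of a least-squares polynomial fit of gray value against row coordinate.

import numpy as np

def first_reference_fit(H, b_min, degree=2):
    """H: calibration image, b_min: minimum grid width.
    Returns the assumed row interval parameter a_1."""
    H_rows, H_cols = H.shape
    cols = np.arange(H_cols)
    # Border columns on both sides (0-based), so that exactly 2*b_min*H_rows points are selected.
    mask = (cols < b_min) | (cols >= H_cols - b_min)
    r_u, c_u = np.nonzero(np.broadcast_to(mask, H.shape))
    p_u = H[r_u, c_u].astype(np.float64)                 # p_u = H(r_u, c_u)
    # Assumed form of the first coordinate matrix A: a Vandermonde matrix in r_u.
    A = np.vander(r_u.astype(np.float64), degree + 1)
    params, *_ = np.linalg.lstsq(A, p_u, rcond=None)     # assumed first parameter matrix
    return params[0]                                     # a_1: its first element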
Step 14, sequentially calculating the row interval and the number of row intervals N_2 of the calibration image according to the row interval parameter a_1 and the number of rows H_rows of the calibration image, and determining the key rows key_r_i according to the row interval and the number of row intervals N_2.
Preferably, in order to ensure the accuracy of the selection of the key rows key_r_i, in this embodiment the calculated row interval of the calibration image is further corrected, and the number of row intervals N_2 is calculated from the corrected row interval. When the row interval is corrected, it is corrected according to the minimum grid width b_min and the maximum grid width b_max, and the number of row intervals is then calculated from the corrected row interval.
Specifically, the row interval calculated from the row interval parameter a_1 is recorded as the default row interval δ_r0, where t_max is the maximum allowable correction deviation and ⌊·⌋ denotes the round-down operation.
The corrected row interval is recorded as δ_r, the correction being made with the maximum grid width b_max and the minimum grid width b_min.
Then the number of row intervals N_2 is calculated from the corrected row interval δ_r and the number of rows H_rows of the calibration image, where ⌈·⌉ denotes the round-up operation.
It should be noted that the row interval need not be corrected; in that case the corrected row interval δ_r is simply replaced by the default row interval δ_r0, which is not repeated here.
When determining the key rows key_r_i, the corresponding rule is as follows (a hedged code sketch of this placement follows the list):
the first key row key_r_1: key_r_1 = 0;
the remaining key rows key_r_i (i = 2, 3, …, N_2 − 1): key_r_i = (i − 1) × δ_r;
the last key row key_r_{N_2}.
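Because the patent's formulas for the default row interval δ_r0, its correction and the position of the last key row are given only as images, the sketch below is a hedged reconstruction. It assumes δ_r0 follows from a_1 and the maximum allowable correction deviation t_max through the linear-interpolation error bound of a quadratic profile, that the correction clamps the interval to [b_min, b_max], and that the last key row is the last image row; only the rule key_r_1 = 0, key_r_i = (i − 1) × δ_r for the intermediate rows is taken directly from the text above.

import numpy as np

def key_row_positions(a_1, t_max, b_min, b_max, H_rows):
    # Assumed default row interval: the largest interval for which piecewise-linear
    # interpolation of a quadratic profile a_1*r^2 deviates by at most t_max.
    delta_r0 = int(np.floor(2.0 * np.sqrt(t_max / max(abs(a_1), 1e-12))))
    # Assumed correction: clamp the interval between the minimum and maximum grid widths.
    delta_r = int(np.clip(delta_r0, b_min, b_max))
    # Number of row intervals, rounded up so the grid covers the whole image.
    N_2 = int(np.ceil(H_rows / delta_r))
    key_r = [(i - 1) * delta_r for i in range(1, N_2)]   # key_r_1 = 0, key_r_i = (i-1)*delta_r
    key_r.append(H_rows - 1)                             # assumed position of the last key row
    return np.array(key_r), delta_r, N_2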
On the basis of the above embodiment, the process of determining the key columns key_c_(i,j) of the calibration image specifically includes:
Step 15, selecting the pixel points between two adjacent key rows of the calibration image as a second reference point set.
Specifically, with i the current key row index (1 ≤ i ≤ N_2 − 1), the pixel points lying between the i-th and (i+1)-th key rows of the calibration image, i.e. all points whose row coordinates satisfy key_r_i ≤ r ≤ key_r_{i+1}, are taken as the second reference point set, as shown in fig. 4. The number of selected pixel points is M_i.1 = δ_r,i × H_cols in total; the column coordinates of the pixel points in the second reference point set are c_v, the row coordinates are r_v, v = 1, 2, 3, ..., M_i.1, and the corresponding pixel gray values are q_v = H(r_v, c_v).
Step 16, calculating a second coordinate matrix B_i according to the column coordinates c_v of the pixel points in the second reference point set, and calculating a second pixel matrix U_i according to the pixel gray values q_v = H(r_v, c_v) of the pixel points.
Step 17, calculating a second parameter matrix according to the second coordinate matrix B_i and the second pixel matrix U_i, and selecting the first element of the second parameter matrix, recorded as the column interval parameter b_i.1.
Step 18, sequentially calculating the column interval and the number of column intervals M_i.2 of the calibration image according to the column interval parameter b_i.1 and the number of columns H_cols of the calibration image, and determining the key columns key_c_(i,j) according to the column interval and the number of column intervals M_i.2.
Preferably, in order to ensure the accuracy of the selection of the key columns key_c_(i,j), in this embodiment the calculated column interval of the calibration image is further corrected, and the number of column intervals M_i.2 is calculated from the corrected column interval. When the column interval is corrected, it is corrected according to the minimum grid width b_min and the maximum grid width b_max, and the number of column intervals is then calculated from the corrected column interval.
Specifically, the column interval corresponding to the i-th key row key_r_i is calculated from the column interval parameter b_i.1 and recorded as the default column interval δ_c0.i.
The corrected column interval is recorded as δ_c.i.
Then the number of column intervals M_i.2 corresponding to the i-th key row is calculated from the corrected column interval δ_c.i and the number of columns H_cols of the calibration image.
It should be noted that the column interval need not be corrected; in that case the corrected column interval δ_c.i is simply replaced by the default column interval δ_c0.i, which is not repeated here.
When determining the key columns key_c_(i,j), the corresponding rule is as follows:
the first key column key_c_(i,1): key_c_(i,1) = 0;
the remaining key columns key_c_(i,j) (j = 2, 3, …, M_i.2 − 1): key_c_(i,j) = (j − 1) × δ_c.i;
the last key column key_c_(i,M_i.2).
The grids formed in this embodiment are shown in fig. 5, and the above-mentioned determining process of the grids improves the rationality of grid division, and helps to ensure that a proper vignetting correction coefficient can be designed for vignetting characteristics, and improve image quality.
Step 2, determining a neighborhood range of each grid point in the grid according to a preset neighborhood half-axis length L, a line number H rows and a line number H cols of a calibration image, and calculating a neighborhood average value avg m,n,l of a related point group in the grid in the neighborhood range, wherein the related point group comprises four adjacent grid points;
In this embodiment, for convenience of notation, the key rows of the grid are indexed by m, m = 1, 2, 3, …, N_2 − 1, where N_2 is the number of row intervals, and the associated key columns are indexed by n, n = 1, 2, 3, …, M_m.2 − 1, where M_m.2 is the number of column intervals corresponding to the m-th key row.
In this embodiment, for each key row and its next adjacent key row, the grid points corresponding to each key column and the next adjacent key column of the current key row, i.e. four adjacent grid points, are recorded as a related point group Pt_m,n,l, where l is the index within the related point group, l = 1, 2, 3, 4: Pt_m,n,1 is recorded as the first correlation point, Pt_m,n,2 as the second correlation point, Pt_m,n,3 as the third correlation point, and Pt_m,n,4 as the fourth correlation point.
As shown in fig. 6, taking the correlation point Pt_m,n,l as the center, the four boundaries of its neighborhood can be determined from the preset neighborhood half-axis length L and the numbers of rows H_rows and columns H_cols of the calibration image; the upper boundary d_up.(m,n,l), lower boundary d_down.(m,n,l), left boundary d_left.(m,n,l) and right boundary d_right.(m,n,l) enclose the neighborhood range, the four boundaries being:
d_up.(m,n,l) = max((Pt_r_m,n,l − L), 0)
d_down.(m,n,l) = min((Pt_r_m,n,l + L), H_rows)
d_left.(m,n,l) = max((Pt_c_m,n,l − L), 0)
d_right.(m,n,l) = min((Pt_c_m,n,l + L), H_cols)
where Pt_c_m,n,l is the column coordinate of the correlation point Pt_m,n,l in the related point group and Pt_r_m,n,l is its row coordinate.
In this embodiment, the value of the neighborhood mean avg_m,n,l is determined by the pixel gray values H(r, c) of the pixel points of the calibration image within the neighborhood range, where c is the column coordinate of a pixel point in the calibration image, r is the row coordinate of a pixel point in the calibration image, H(r, c) is the pixel gray value of the pixel point in the r-th row and c-th column of the calibration image, key_r_m is the row coordinate of the grid point, m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, L is the preset neighborhood half-axis length, H_rows is the number of rows of the calibration image, H_cols is the number of columns of the calibration image, key_c_m,n is the column coordinate of the grid point, n is the key column index, n = 1, 2, 3, …, M_m.2 − 1, M_m.2 is the number of column intervals corresponding to the m-th key row, and l is the index of the correlation point within the related point group, l = 1, 2, 3, 4 (a code sketch of this computation follows).
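The closed-form expression for avg_m,n,l is not reproduced in this text, but the boundaries above fully determine the window, so the sketch below assumes the neighborhood mean is the plain arithmetic mean of the calibration-image gray values inside that window, which is the natural reading of 'neighborhood mean'.

import numpy as np

def neighborhood_mean(H, pt_r, pt_c, L):
    """Mean gray value of the calibration image H in the window of half-axis length L
    around the grid point (pt_r, pt_c), clipped at the image borders."""
    H_rows, H_cols = H.shape
    d_up = max(pt_r - L, 0)
    d_down = min(pt_r + L, H_rows)        # boundaries as defined above
    d_left = max(pt_c - L, 0)
    d_right = min(pt_c + L, H_cols)
    window = H[d_up:d_down + 1, d_left:d_right + 1].astype(np.float64)
    return float(window.mean())           # assumed arithmetic mean over the window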
Step 3, calculating the vignetting correction coefficient g_R,C corresponding to each pixel point in the image to be corrected according to the neighborhood means avg_m,n,l, and performing vignetting correction on the image to be corrected according to the correction coefficients.
In the calculation of the vignetting correction coefficient g_R,C, T_rows is the number of rows of the image to be corrected, T_cols is the number of columns of the image to be corrected, R is the row coordinate in the image to be corrected, C is the column coordinate in the image to be corrected, δ_c is the corrected column interval, δ_r is the corrected row interval, G_1,1,1 is the first correlation point correction coefficient of the grid in the first row and first column, the first horizontal gradient of the first row of grids and its gradient change rate also enter the calculation, their values are determined by the point group correction coefficients G_m,n,l, and the quotients of R by δ_r and of C by δ_c determine the row and column indices of the corresponding grid.
On the basis of the above embodiment, this embodiment shows a method for calculating the vignetting correction coefficient, which specifically includes:
Step 31, sequentially calculating the point group correction coefficient G_m,n,l corresponding to each related point group in the grid according to the maximum value max_avg of the neighborhood means, where the point group correction coefficient G_m,n,l comprises four correlation point correction coefficients and
max_avg = max(avg_m,n,l)
where m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, n is the key column index, n = 1, 2, 3, …, M_m.2 − 1, M_m.2 is the number of column intervals corresponding to the m-th key row, and l is the index within the related point group, l = 1, 2, 3, 4.
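The formula mapping a neighborhood mean to its correlation point correction coefficient is not reproduced in this text. A common choice for multiplicative vignetting gains, and the assumption used in the sketch below, is the ratio of the global maximum mean to the local mean, so that the brightest grid point keeps a coefficient of 1 and darker points are amplified.

import numpy as np

def point_group_coefficients(avg):
    """avg: array of neighborhood means avg[m, n, l] for all related point groups.
    Returns the assumed point group correction coefficients G[m, n, l]."""
    max_avg = np.max(avg)                    # max_avg = max(avg_m,n,l), as in the text
    return max_avg / np.maximum(avg, 1e-6)   # assumed gain: brightest point maps to 1.0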
Step 32, respectively calculating the vertical gradient gy_m of the first-column grid in each row, and the row direction correction coefficient g_R,1 of each row of the first column of pixel points of the image to be corrected, according to the third correlation point correction coefficient G_m,n,3, the first correlation point correction coefficient G_m,n,1 and the corrected row interval.
In the calculation of the vertical gradient gy_m, m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, and δ_r is the row interval.
In the calculation of the row direction correction coefficient g_R,1, T_rows is the number of rows of the image to be corrected, R is the row coordinate in the image to be corrected, and G_1,1,1 is the first correlation point correction coefficient of the grid in the first row and first column.
In this embodiment, the row direction correction coefficient g_R,1 is set as a piecewise function of the first correlation point correction coefficient G_1,1,1 and the vertical gradients gy_m of the first column of grids: the key row index m used in the vertical gradient gy_m is selected from the quotient of the row coordinate R of the image to be corrected and the corrected row interval δ_r, which yields the corresponding row direction correction coefficient g_R,1 (a hedged code sketch follows).
It should be noted that the corrected row interval δ_r may also be replaced by the calculated (default) row interval δ_r0, which is not repeated here.
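The piecewise formula for g_R,1 is described above only in words, so the following sketch is a hedged reading of it: within each row segment the coefficient starts from the first-column correction coefficient of that segment's key row and grows linearly with the vertical gradient gy_m, the segment index being selected from the quotient of R and δ_r. The indexing convention (l = 0..3 for the four correlation points) is an implementation choice.

import numpy as np

def row_direction_coefficients(G, key_r, delta_r, T_rows):
    """G: point group correction coefficients G[m, n, l]; key_r: key row coordinates;
    returns the assumed row direction correction coefficients g_{R,1} for R = 0..T_rows-1."""
    g_R1 = np.empty(T_rows, dtype=np.float64)
    for R in range(T_rows):
        m = min(R // delta_r, G.shape[0] - 1)            # assumed segment selection from R / delta_r
        gy_m = (G[m, 0, 2] - G[m, 0, 0]) / delta_r       # vertical gradient of the first-column grid
        g_R1[R] = G[m, 0, 0] + gy_m * (R - key_r[m])     # assumed piecewise-linear accumulation
    return g_R1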
Step 33, respectively calculating the first horizontal gradient gx_m,n and the gradient change rate kx_m,n of each grid according to the point group correction coefficients and the column interval, and correcting the first horizontal gradient gx_m,n according to the gradient change rate kx_m,n to generate the second horizontal gradient of the image to be corrected.
When the first horizontal gradient gx_m,n is calculated, the column interval δ_c.m may be either the default column interval δ_c0.i or the corrected column interval δ_c.i.
In this embodiment the corrected column interval is used as an example, i.e. δ_c.m = δ_c.i. In the calculation of the first horizontal gradient gx_m,n, G_m,n,2 is the second correlation point correction coefficient and G_m,n,1 is the first correlation point correction coefficient; in the calculation of the gradient change rate kx_m,n, G_m,n,3 is the third correlation point correction coefficient and G_m,n,4 is the fourth correlation point correction coefficient.
The calculation of the second horizontal gradient is similar to that of the row direction correction coefficient g_R,1, where C is the column coordinate in the image to be corrected and kx_m,n is the gradient change rate.
Step 34, calculating the vignetting correction coefficient g_R,C corresponding to each pixel point in the image to be corrected according to the row direction correction coefficient g_R,1 and the second horizontal gradient, where T_cols is the number of columns of the image to be corrected.
After the vignetting correction coefficient g_R,C has been calculated through the above process, the pixel value at each corresponding coordinate of the image to be corrected can be corrected, the corrected pixel value being:
T'(R, C) = T(R, C) × g_R,C
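The closed-form expression assembling g_R,C in step 34 is not reproduced in this text, so the sketch below assumes the gradient-superposition structure described above: the row direction coefficient g_R,1 plus a horizontal term built from the first horizontal gradient and its change rate. Only the final relation T'(R, C) = T(R, C) × g_R,C is taken directly from the text; the intermediate expressions, the scalar column interval and the array layouts are assumptions.

import numpy as np

def correct_image(T, g_R1, gx, kx, key_r, key_c, delta_r, delta_c):
    """T: image to be corrected; g_R1: row direction coefficients per row;
    gx, kx: first horizontal gradient and gradient change rate per grid cell;
    key_r, key_c: key row coordinates and per-row key column coordinates."""
    T_rows, T_cols = T.shape
    out = np.empty_like(T, dtype=np.float64)
    for R in range(T_rows):
        m = min(R // delta_r, gx.shape[0] - 1)
        for C in range(T_cols):
            n = min(C // delta_c, gx.shape[1] - 1)
            # Assumed second horizontal gradient: gx corrected by kx along the row direction.
            gx_tilde = gx[m, n] + kx[m, n] * (R - key_r[m])
            # Assumed superposition of the row coefficient and the horizontal gradient term.
            g_RC = g_R1[R] + gx_tilde * (C - key_c[m][n])
            out[R, C] = T[R, C] * g_RC        # T'(R, C) = T(R, C) × g_R,C, as stated above
    return np.clip(out, 0, 255).astype(T.dtype)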
In order to verify the effectiveness of the industrial camera vignetting correction method of this embodiment, a correction test was carried out with a calibration image resolution of 5120×5120, and the correction result was analyzed. As shown in fig. 7 (a), a gray distribution curve along the horizontal direction of the image is drawn from the averaged gray values; the left plot is the gray distribution of the original image, in which the gray value is high in the middle and low on both sides, while the right plot is the gray distribution of the corrected image, in which the gray values are essentially uniform, indicating better photometric consistency of the corrected image.
In this embodiment the gray histogram is also analyzed, the correction effect being reflected in the distribution characteristics of the image gray histogram. As shown in fig. 7 (b), the left plot is the histogram of the original image, with a wide distribution, while the right plot is the histogram of the corrected image, with a narrower statistical distribution, indicating that the gray distribution of the corrected image is more concentrated and more consistent.
Correcting an image shot by the industrial camera with the vignetting correction method of this embodiment, as shown in fig. 7 (c), visibly improves the image quality.
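The verification described above (horizontal gray profile and gray histogram before and after correction) can be reproduced with a short script such as the one below; the plotting library and the figure layout are incidental choices, not part of the patent.

import numpy as np
import matplotlib.pyplot as plt

def compare_correction(original, corrected):
    # Horizontal gray distribution: mean gray value of each column, as in fig. 7(a).
    fig, axes = plt.subplots(2, 2, figsize=(10, 6))
    axes[0, 0].plot(original.mean(axis=0)); axes[0, 0].set_title("original: column means")
    axes[0, 1].plot(corrected.mean(axis=0)); axes[0, 1].set_title("corrected: column means")
    # Gray histograms, as in fig. 7(b): the corrected image should be more concentrated.
    axes[1, 0].hist(original.ravel(), bins=256); axes[1, 0].set_title("original: histogram")
    axes[1, 1].hist(corrected.ravel(), bins=256); axes[1, 1].set_title("corrected: histogram")
    plt.tight_layout()
    plt.show()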
The technical scheme of the application is explained in detail above with reference to the accompanying drawings, and the application provides an industrial camera vignetting correction method, which comprises the following steps: step 1, sequentially determining key rows and key columns of a calibration image according to a set minimum grid width and pixel gray values of pixel points in the calibration image, and forming grids according to the key rows and the key columns; step 2, determining a neighborhood range of each grid point in the grid according to a preset neighborhood half-axis length L and the number of lines and columns of the calibration image, and calculating a neighborhood mean value of a related point group in the grid in the neighborhood range, wherein the related point group comprises four adjacent grid points; and step 3, calculating a dark angle correction coefficient corresponding to each pixel point in the image to be corrected according to the neighborhood mean value, and carrying out dark angle correction on the image to be corrected according to the dark angle correction coefficient. By the technical scheme, the application designs the proper correction coefficient aiming at the dark angle characteristic so as to correct the dark angle distortion of the image edge, improve the image quality, reduce the occupation of hardware resources of an industrial camera and improve the image correction efficiency.
The steps in the application can be sequentially adjusted, combined and deleted according to actual requirements.
The units in the device can be combined, divided and deleted according to actual requirements.
Although the application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and is not intended to limit the application of the application. The scope of the application is defined by the appended claims and may include various modifications, alterations and equivalents of the application without departing from the scope and spirit of the application.

Claims (6)

1. An industrial camera vignetting correction method, the method comprising:
Step 1, determining key rows and key columns of a calibration image in sequence according to a set minimum grid width and the pixel gray values of the pixel points in the calibration image, and forming a grid from the key rows and key columns, which specifically comprises:
Step 11, according to the minimum grid width b_min and the number of columns H_cols of the calibration image, selecting all pixel points in the calibration image whose column coordinate c satisfies c ≤ b_min or c ≥ (H_cols − b_min); the number of selected pixel points is N_1 = 2 × b_min × H_rows, and the selected pixel points are recorded as a first reference point set; in the first reference point set, the row coordinate of each pixel point is r_u, u = 1, 2, 3, ..., N_1, the column coordinate is c_u, and the corresponding pixel gray value is p_u = H(r_u, c_u);
Step 12, calculating a first coordinate matrix A according to the row coordinates r_u of the pixel points in the first reference point set, and calculating a first pixel matrix V according to the pixel gray values p_u = H(r_u, c_u) of the pixel points;
Step 13, calculating a first parameter matrix according to the first coordinate matrix A and the first pixel matrix V, and selecting the first element of the first parameter matrix, recorded as the row interval parameter a_1;
Step 14, sequentially calculating the row interval and the number of row intervals N_2 of the calibration image according to the row interval parameter a_1 and the number of rows H_rows of the calibration image, and determining the key rows key_r_i according to the row interval and the number of row intervals N_2, where the key rows key_r_i are determined by the following rule: the first key row key_r_1: key_r_1 = 0; the remaining key rows key_r_i: key_r_i = (i − 1) × δ_r0, i = 2, 3, …, N_2 − 1, where δ_r0 is the default row interval; and the last key row key_r_{N_2};
Step 15, with i the current key row index, 1 ≤ i ≤ N_2 − 1, selecting the pixel points lying between the i-th and (i+1)-th key rows of the calibration image, i.e. taking all points whose row coordinates in the calibration image satisfy key_r_i ≤ r ≤ key_r_{i+1} as a second reference point set; the number of selected pixel points is M_i.1 = δ_r,i × H_cols in total, the column coordinates of the pixel points in the second reference point set are c_v, the row coordinates are r_v, v = 1, 2, 3, ..., M_i.1, and the corresponding pixel gray values are q_v = H(r_v, c_v);
Step 16, calculating a second coordinate matrix B_i according to the column coordinates c_v of the pixel points in the second reference point set, and calculating a second pixel matrix according to the pixel gray values q_v = H(r_v, c_v) of the pixel points;
Step 17, calculating a second parameter matrix according to the second coordinate matrix B_i and the second pixel matrix, and selecting the first element of the second parameter matrix, recorded as the column interval parameter b_i.1;
Step 18, sequentially calculating the column interval and the number of column intervals M_i.2 of the calibration image according to the column interval parameter b_i.1 and the number of columns H_cols of the calibration image, and determining the key columns key_c_(i,j), where the key columns key_c_(i,j) are determined by the following rule: the first key column key_c_(i,1): key_c_(i,1) = 0; the remaining key columns key_c_(i,j): key_c_(i,j) = (j − 1) × δ_c0.i, j = 2, 3, …, M_i.2 − 1, where δ_c0.i is the default column interval; and the last key column key_c_(i,M_i.2);
Step 2, determining a neighborhood range for each grid point in the grid according to a preset neighborhood half-axis length L and the numbers of rows and columns of the calibration image, and calculating the neighborhood mean of each related point group in the grid within the neighborhood range, wherein a related point group comprises four adjacent grid points;
Step 3, calculating a vignetting correction coefficient corresponding to each pixel point in the image to be corrected according to the neighborhood means, and performing vignetting correction on the image to be corrected according to the vignetting correction coefficients, which specifically comprises:
Step 31, sequentially calculating the point group correction coefficient G_m,n,l corresponding to each related point group in the grid according to the maximum value max_avg of the neighborhood means, where the point group correction coefficient G_m,n,l comprises four correlation point correction coefficients;
Step 32, respectively calculating the vertical gradient gy_m of the first-column grid in each row, and the row direction correction coefficient g_R,1 of each row of the first column of pixel points of the image to be corrected, according to the third correlation point correction coefficient G_m,n,3, the first correlation point correction coefficient G_m,n,1 and the row interval;
Step 33, calculating the first horizontal gradient gx_m,n of each grid according to the second correlation point correction coefficient G_m,n,2, the first correlation point correction coefficient G_m,n,1 and the column interval; calculating the gradient change rate kx_m,n of each grid according to the four correlation point correction coefficients and the column interval; and correcting the first horizontal gradient gx_m,n according to the gradient change rate kx_m,n to generate the second horizontal gradient of the image to be corrected;
Step 34, calculating the vignetting correction coefficient g_R,C corresponding to each pixel point in the image to be corrected according to the row direction correction coefficient g_R,1 and the second horizontal gradient, where T_cols is the number of columns of the image to be corrected.
2. The industrial camera vignetting correction method of claim 1, wherein in step 14, after the row interval of the calibration image is calculated, the method further comprises:
correcting the row interval according to the minimum grid width and the maximum grid width, and calculating the number of row intervals according to the corrected row interval.
3. The industrial camera vignetting correction method of claim 1, wherein in step 18, after the column interval of the calibration image is calculated, the method further comprises:
correcting the column interval according to the minimum grid width and the maximum grid width, and calculating the number of column intervals according to the corrected column interval.
4. The industrial camera vignetting correction method of claim 1, wherein in step 2 the neighborhood range comprises at least an upper boundary, a lower boundary, a left boundary and a right boundary, and the value of the neighborhood mean avg_m,n,l is determined by the pixel gray values H(r, c) of the pixel points of the calibration image within the neighborhood range,
where c is the column coordinate of a pixel point in the calibration image, r is the row coordinate of a pixel point in the calibration image, H(r, c) is the pixel gray value of the pixel point in the r-th row and c-th column of the calibration image, d_up.(m,n,l) is the upper boundary, d_down.(m,n,l) is the lower boundary, d_left.(m,n,l) is the left boundary, and d_right.(m,n,l) is the right boundary.
5. The industrial camera vignetting correction method of claim 1, wherein in the calculation of the vertical gradient gy_m, m is the key row index, m = 1, 2, 3, …, N_2 − 1, N_2 is the number of row intervals, and δ_r is the row interval;
and in the calculation of the row direction correction coefficient g_R,1, T_rows is the number of rows of the image to be corrected, R is the row coordinate of the image to be corrected, and G_1,1,1 is the first correlation point correction coefficient of the grid in the first row and first column.
6. The industrial camera vignetting correction method of claim 5, wherein in the calculation of the first horizontal gradient gx_m,n, G_m,n,2 is the second correlation point correction coefficient, G_m,n,1 is the first correlation point correction coefficient, and δ_c.m is the column interval;
and in the calculation of the second horizontal gradient, C is the column coordinate of the image to be corrected and kx_m,n is the gradient change rate.
CN202110271960.1A 2021-03-12 Industrial camera dark angle correction method Active CN112991211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110271960.1A CN112991211B (en) 2021-03-12 Industrial camera dark angle correction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110271960.1A CN112991211B (en) 2021-03-12 Industrial camera dark angle correction method

Publications (2)

Publication Number Publication Date
CN112991211A CN112991211A (en) 2021-06-18
CN112991211B CN112991211B (en) 2024-07-05


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007318533A (en) * 2006-05-26 2007-12-06 Fujifilm Corp Digital camera
EP3195197A4 (en) * 2014-09-18 2018-08-08 Sciometrics LLC Mobility empowered biometric appliance a tool for real-time verification of identity through fingerprints


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant