CN117671036B - Correction parameter calibration method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117671036B
CN117671036B
Authority
CN
China
Prior art keywords
grid
gain value
value corresponding
constraint
reference gain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410130860.0A
Other languages
Chinese (zh)
Other versions
CN117671036A (en)
Inventor
龙彬
周涤非
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ouye Semiconductor Co ltd
Original Assignee
Shenzhen Ouye Semiconductor Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ouye Semiconductor Co ltd filed Critical Shenzhen Ouye Semiconductor Co ltd
Priority to CN202410130860.0A priority Critical patent/CN117671036B/en
Publication of CN117671036A publication Critical patent/CN117671036A/en
Application granted granted Critical
Publication of CN117671036B publication Critical patent/CN117671036B/en

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Processing (AREA)

Abstract

The application relates to a correction parameter calibration method, a correction parameter calibration device, computer equipment and a storage medium. The method comprises the following steps: dividing the calibration image into a plurality of grid areas, and determining grid vertexes and grid constraint points of each grid area; determining reference gain values corresponding to the grid vertexes and the grid constraint points respectively based on pixel values corresponding to the grid vertexes and the grid constraint points respectively, and forming a reference gain matrix; performing linear fitting on the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by utilizing a linear fitting result to obtain a target gain value corresponding to each grid vertex; a correction parameter is determined from the target gain value, the correction parameter being used for shading correction of the image. By adopting the method, the shading correction effect of the lens can be improved.

Description

Correction parameter calibration method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a correction parameter calibration method and apparatus, a computer device, and a storage medium.
Background
Due to the optical characteristics of the lens and the imaging system, incident light rays arriving at different angles are refracted inconsistently, so that the captured image exhibits non-uniform brightness, bright at the center and darker toward the edges, a phenomenon known as lens shading. To improve image quality, lens shading correction must be performed on the captured image using correction parameters. Calibration of those correction parameters is therefore important.
In the conventional art, correction parameters are generally calculated by bilinear interpolation approximation. However, when lens shading is severe, the shading law is highly nonlinear, and the correction parameters calculated by bilinear interpolation deviate significantly from the ideal correction parameters, so correction performed with them is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a correction parameter calibration method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the effect of lens shading correction.
In a first aspect, the present application provides a correction parameter calibration method. The method comprises the following steps: dividing a calibration image into a plurality of grid regions, and determining grid vertices and grid constraint points of each grid region; determining reference gain values respectively corresponding to the grid vertices and the grid constraint points based on pixel values respectively corresponding to the grid vertices and the grid constraint points, and forming a reference gain matrix; performing linear fitting on the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a target gain value corresponding to each grid vertex; and determining a correction parameter according to the target gain value, wherein the correction parameter is used for shading correction of the image.
In a second aspect, the present application further provides a correction parameter calibration device. The device comprises: a grid dividing module, used for dividing the calibration image into a plurality of grid regions and determining grid vertices and grid constraint points of each grid region; a reference gain determining module, used for determining reference gain values respectively corresponding to the grid vertices and the grid constraint points based on pixel values respectively corresponding to the grid vertices and the grid constraint points, and forming a reference gain matrix; an optimization module, used for performing linear fitting on the reference gain matrix and optimizing the reference gain value corresponding to each grid vertex using a linear fitting result to obtain a target gain value corresponding to each grid vertex; and a parameter determining module, used for determining a correction parameter according to the target gain value, wherein the correction parameter is used for shading correction of the image.
In some embodiments, the optimization module is further to: performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a first gain value corresponding to each grid vertex; performing linear fitting on each column in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a second gain value corresponding to each grid vertex; and determining a target gain value corresponding to each grid vertex based on the first gain value and the second gain value corresponding to each grid vertex.
In some embodiments, the mesh constraint points include a first constraint point that is a midpoint between any two adjacent mesh vertices; the optimization module is also used for: for each row in a reference gain matrix, determining any two adjacent grid vertexes of the row as a first segmentation point and a second segmentation point respectively; performing linear regression on the first segmentation point, the second segmentation point and a first constraint point between the first segmentation point and the second segmentation point to obtain a regression line; and optimizing the reference gain values respectively corresponding to the first segmentation point and the second segmentation point based on the regression line to obtain first gain values respectively corresponding to the first segmentation point and the second segmentation point.
In some embodiments, the grid constraint points include a second constraint point, the second constraint point being the region center point of a grid region; the optimization module is further used for: calculating a first gain value corresponding to each second constraint point based on the first gain value corresponding to each grid vertex, and calculating a second gain value corresponding to each second constraint point based on the second gain value corresponding to each grid vertex; counting the difference between the reference gain value corresponding to each second constraint point and the first gain value corresponding to that second constraint point to obtain a first gain error; counting the difference between the reference gain value corresponding to each second constraint point and the second gain value corresponding to that second constraint point to obtain a second gain error; and selecting, based on the first gain error and the second gain error, a target gain value for each grid vertex from the first gain value and the second gain value corresponding to that grid vertex.
In some embodiments, the meshing module is further to: performing grid division on the calibration image according to a preset division mode to obtain a plurality of grid areas; wherein, in the case that the preset division manner is non-uniform division, the area of the grid region, of the plurality of grid regions, is larger as the position in the calibration image is closer to the center of the calibration image.
In some embodiments, the meshing module is further to: determining a plurality of color channels corresponding to the calibration image according to the image format of the calibration image; dividing the calibration image according to the color channels to obtain sub-calibration images corresponding to each color channel; dividing each sub-calibration image into a plurality of grid areas, and determining grid vertexes and grid constraint points of each grid area;
the reference gain value comprises a reference gain value corresponding to each color channel, and the reference gain matrix comprises a reference gain matrix corresponding to each color channel; the reference gain determination module is further configured to: and determining reference gain values corresponding to grid vertexes and grid constraint points under the color channels based on pixel values corresponding to the grid vertexes and the grid constraint points in the sub-calibration images corresponding to the color channels for each color channel, and forming a reference gain matrix corresponding to the color channels.
In some embodiments, the reference gain determination module is further to: determining a target value from the pixel value corresponding to each grid vertex and the pixel value corresponding to each grid constraint point; determining a reference gain value corresponding to each grid vertex based on a ratio between the target value and a pixel value corresponding to the grid vertex; and determining a reference gain value corresponding to each grid constraint point based on the ratio between the target value and the pixel value corresponding to the grid constraint point.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps in the correction parameter calibration method when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the correction parameter calibration method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the correction parameter calibration method described above.
According to the above correction parameter calibration method, device, computer equipment, storage medium, and computer program product, the error caused by the correction method is introduced into the calibration stage for optimization. Grid constraint points are determined to obtain a reference gain matrix composed of the reference gain values corresponding to the grid vertices and the grid constraint points, and linear fitting of the reference gain matrix is used to optimize the reference gain values of the grid vertices, so that the grid constraint points constrain the optimization of the vertex gain values and more accurate correction parameters are obtained. Using the correction parameters obtained by this method to correct images improves the effect of lens shading correction without increasing hardware or computation cost.
Drawings
FIG. 1 is an application environment diagram of a correction parameter calibration method in one embodiment;
FIG. 2 is a flow chart of a correction parameter calibration method in one embodiment;
FIG. 3 is an example diagram of grid division in one embodiment;
FIG. 4 is a schematic diagram of the selection of grid vertices and grid constraint points in one embodiment;
FIG. 5 is a schematic diagram of the four steps of correction parameter calibration in one embodiment;
FIG. 6 is a flow chart of optimizing gain values by linear fitting in one embodiment;
FIG. 7 is a graph comparing the ideal correction parameters with the correction parameters obtained by bilinear interpolation in one embodiment;
FIG. 8 is a block diagram of a correction parameter calibration device in one embodiment;
FIG. 9 is an internal block diagram of a computer device in one embodiment;
FIG. 10 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The correction parameter calibration method provided in this application can be applied to the application environment shown in FIG. 1. The application environment comprises a terminal 102, a server 104, and an image acquisition device 106, where the terminal 102 communicates with the server 104 and the image acquisition device 106 through a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server.
The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, or tablet computer. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers. The optical processing module of the image acquisition device 106 mainly comprises a lens (Lens), an infrared cut filter (IR-cut Filter), an image sensor (Image Sensor), and a printed circuit board. The lens, the infrared cut filter, and the image sensor are the main sources of lens shading, so the image signal processor of the image acquisition device 106 includes a dedicated module for removing lens shading, generally referred to as a lens shading correction module. The image signal processor is a microprocessor and may, for example, be a chip.
Those skilled in the art will appreciate that the application environment shown in FIG. 1 is only one scenario related to the present application and does not constitute a limitation on its application environment.
In some embodiments, as shown in FIG. 2, a correction parameter calibration method is provided. The method may be executed by a terminal or a server, or jointly by both; the following description takes its application to the terminal 102 in FIG. 1 as an example. The method comprises the following steps:
Step 202, dividing the calibration image into a plurality of grid areas, and determining grid vertices and grid constraint points of each grid area.
The calibration image is used to determine the correction parameters of the image acquisition device and is acquired by the image acquisition device from an acquisition object. It can be appreciated that, to improve the accuracy of the correction parameters, the acquisition object should be smooth and texture-free, and the light source brightness distribution should be flat and uniform. The shape of a grid region is generally rectangular; the grid vertices of a grid region are the four vertices of the rectangle, and the grid constraint points are points selected from the boundary lines or the interior of the grid region, of which there may be several.
Specifically, the terminal acquires a calibration image collected by the image acquisition device, performs grid division on the calibration image according to a preset division mode to obtain a plurality of grid regions, determines the vertices of each grid region as grid vertices, and then selects grid constraint points from the boundary lines or the interior of each grid region. The preset division mode may be either uniform division or non-uniform division: uniform division splits the calibration image into grid regions of equal area, while non-uniform division splits it into grid regions of at least two different areas. For example, FIG. 3 gives an example of grid division, where (a) shows uniform division and (b) shows non-uniform division.
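The uniform division described above can be sketched as follows. The function name `divide_into_grid` and the choice of returning vertex coordinates are illustrative assumptions; the non-uniform case (larger regions toward the image center) would only change how the split coordinates are spaced:

```python
import numpy as np

def divide_into_grid(image, rows, cols):
    # Uniformly divide the calibration image into rows x cols grid
    # regions and return the pixel coordinates of the (rows+1) x
    # (cols+1) grid vertices. Non-uniform division would replace
    # np.linspace with unevenly spaced split coordinates.
    h, w = image.shape[:2]
    ys = np.linspace(0, h - 1, rows + 1).round().astype(int)
    xs = np.linspace(0, w - 1, cols + 1).round().astype(int)
    return [(int(y), int(x)) for y in ys for x in xs]

img = np.zeros((480, 640), dtype=np.uint16)  # stand-in calibration image
vertices = divide_into_grid(img, 4, 6)
```

A 4 x 6 division yields 5 x 7 = 35 grid vertices, with the corner vertices coinciding with the image corners.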
In some embodiments, it should be noted that lens shading correction is typically done separately on different color channels of the image, so meshing should also be done on different color channels. The terminal can divide the calibration image into sub-calibration images corresponding to the color channels respectively according to the color channels, and grid division is carried out on each sub-calibration image.
In some embodiments, the midpoint of a region boundary line of the grid region, or the region center point of the grid region, may be selected as a grid constraint point. For example, FIG. 4 shows a schematic diagram of grid vertices and grid constraint points, where solid black dots represent grid vertices and hollow dots represent grid constraint points. As an example, only the grid constraint points of the upper-left grid region are marked in FIG. 4; the constraint points of the other grid regions are selected in the same way.
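For a single rectangular grid region, the first constraint points (edge midpoints) and the second constraint point (region center) illustrated in FIG. 4 can be computed as in this sketch; the helper name and the integer-division rounding convention are assumptions:

```python
def region_constraint_points(x0, y0, x1, y1):
    # (x0, y0): top-left grid vertex; (x1, y1): bottom-right grid vertex.
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    # First constraint points: midpoints of the four boundary lines.
    first = [(cx, y0), (cx, y1), (x0, cy), (x1, cy)]
    # Second constraint point: center of the grid region.
    second = (cx, cy)
    return first, second
```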
In some embodiments, a light box or an integrating-sphere lamp box can be used as the calibration object: calibration images with uniform illumination can be collected by shooting the gray inner wall of the lamp box (free of obvious smudges or scratches) or by shooting the light source through ground glass. Where conditions are limited, any gray plane with uniformly distributed brightness, such as a white wall, may be used.
Step 204, determining reference gain values corresponding to the grid vertices and the grid constraint points respectively based on the pixel values corresponding to the grid vertices and the grid constraint points respectively, and forming a reference gain matrix.
The reference gain value corresponding to each grid vertex or each grid constraint point comprises reference gain values corresponding to a plurality of color channels, so that the reference gain matrix also comprises reference gain matrixes corresponding to the plurality of color channels respectively. The reference gain matrix may also be referred to as a reference gain LUT (Look Up Table).
Specifically, for the sub-calibration image corresponding to each color channel, the terminal may determine the maximum pixel value from the pixel values corresponding to each grid vertex and each grid constraint point of the sub-calibration image as the target value. The terminal determines a reference gain value corresponding to each grid vertex under the color channel based on the ratio between the target value and the pixel value corresponding to each grid vertex; and determining a reference gain value corresponding to each grid constraint point under the color channel based on the ratio between the target value and the pixel value corresponding to each grid constraint point, thereby forming a reference gain matrix corresponding to the color channel.
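The gain calibration above, which takes the maximum sampled pixel value as the target value, can be sketched for one color channel as follows (the function name is illustrative):

```python
import numpy as np

def reference_gains(sampled_pixels):
    # sampled_pixels: pixel values at the grid vertices and grid
    # constraint points of one color channel's sub-calibration image.
    # Each reference gain is target / pixel, where the target is the
    # maximum sampled value, so every gain is >= 1 and the brightest
    # sampled point receives gain 1.
    pv = np.asarray(sampled_pixels, dtype=float)
    return pv.max() / pv
```

Dimmer points (typically near the image edges) receive gains above 1, which is what later brightens the dark corners during correction.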
In some embodiments, as shown in FIG. 5, correction parameter calibration comprises four steps: 1. collect a calibration image; 2. divide the grid; 3. calibrate the reference gain values; 4. optimize the reference gain values by linear fitting. The first three steps follow the standard lens shading correction calibration procedure, except that the standard procedure calibrates gain values only at the grid vertices, which are then used directly for correction, without introducing grid constraint points. Because the present method also selects grid constraint points, they can be used in the fourth step to optimize the reference gain values by linear fitting.
And 206, performing linear fitting on the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using the linear fitting result to obtain the target gain value corresponding to each grid vertex.
The target gain value is an optimized gain value, and each grid vertex corresponds to one target gain value under each color channel. The linear fit may be piecewise linear regression or other parameter optimization methods.
Specifically, for the reference gain matrix corresponding to each color channel, the terminal may perform linear fitting on each row of the matrix and optimize the reference gain value of each grid vertex in that row using the linear fitting result, obtaining the target gain value of each grid vertex under that color channel. A row of the reference gain matrix contains both the reference gain values of grid vertices and those of grid constraint points.
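The per-row optimization can be sketched as follows. The names `fit_row` and `vertex_idx`, and the choice to average the two segment fits at a vertex shared by adjacent segments, are assumptions; the text only specifies piecewise linear regression with the row's grid vertices as segmentation points:

```python
import numpy as np

def fit_row(xs, gains, vertex_idx):
    # xs: pixel x-coordinates of every sampled point in one row of the
    # reference gain matrix (vertices and constraint points, in order);
    # gains: their reference gain values; vertex_idx: positions of the
    # grid vertices within the row (the segmentation points).
    fitted = [[] for _ in vertex_idx]
    for k in range(len(vertex_idx) - 1):
        lo, hi = vertex_idx[k], vertex_idx[k + 1]
        # Least-squares line over the two vertices and the constraint
        # points lying between them.
        slope, intercept = np.polyfit(xs[lo:hi + 1], gains[lo:hi + 1], 1)
        fitted[k].append(slope * xs[lo] + intercept)
        fitted[k + 1].append(slope * xs[hi] + intercept)
    # A vertex shared by two segments takes the mean of both fits.
    return np.array([np.mean(v) for v in fitted])
```

Column fitting is identical with y-coordinates in place of x-coordinates, yielding the second gain values.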
In some embodiments, as shown in FIG. 6, a flow chart of optimizing gain values by linear fitting: after determining the reference gain matrix and the pixel coordinates of each grid vertex and each grid constraint point, the terminal may traverse each row of the reference gain matrix and perform piecewise linear regression on the reference gain values of the row, with the grid vertices of the row as the segmentation points; the pixel coordinates of the row's grid vertices are then input into the corresponding piecewise regression result to calculate a first gain value for each grid vertex. Likewise, each column of the reference gain matrix is traversed, piecewise linear regression is performed on the reference gain values of the column with the column's grid vertices as segmentation points, and the pixel coordinates of the column's grid vertices are input into the corresponding regression result to calculate a second gain value for each grid vertex. The terminal may then traverse the center constraint point of each grid region, use linear interpolation over the first and second gain values of the grid vertices to compute the first and second gain values at each grid-center constraint point, and, based on these, select either the first or the second gain values as the target gain values of the grid vertices, thereby obtaining the correction parameters.
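The final selection step, choosing between the row-fitted and column-fitted vertex gains by comparing their errors at the grid-center constraint points, can be sketched as follows; the names are illustrative, and total absolute error is one plausible choice of error statistic:

```python
import numpy as np

def choose_target_gains(first_gains, second_gains,
                        ref_center, first_center, second_center):
    # ref_center: reference gains calibrated at the region-center
    # constraint points; first_center / second_center: center gains
    # interpolated from the first / second vertex gains.
    err1 = np.abs(np.asarray(ref_center) - np.asarray(first_center)).sum()
    err2 = np.abs(np.asarray(ref_center) - np.asarray(second_center)).sum()
    # The set whose interpolated center gains better match the
    # calibrated reference values wins.
    return first_gains if err1 <= err2 else second_gains
```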
Step 208, determining a correction parameter according to the target gain value, wherein the correction parameter is used for shading correction of the image.
The correction parameters are parameters used for shading correction of the image by the image acquisition equipment, and comprise target gain values corresponding to grid vertexes.
Specifically, the terminal may determine the target gain value corresponding to each mesh vertex as the correction parameter of the image capturing device. The image acquisition device may acquire correction parameters from the terminal and store them in the image signal processor, thereby performing shading correction on the acquired image using the correction parameters. The image acquisition device can determine the gain value corresponding to each pixel point in the image by adopting a bilinear interpolation method based on the target gain value corresponding to each grid vertex and the pixel coordinates of each grid vertex, so that the gain value corresponding to each pixel point is utilized to correct the pixel value of the pixel point, and a corrected image corresponding to the acquired image is obtained. The brightness of each pixel point in the corrected image is more uniform than that of the acquired image.
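The per-pixel expansion performed by the image signal processor is standard bilinear interpolation over one grid cell, as in this sketch (parameter names are illustrative):

```python
def bilinear_gain(x, y, x0, x1, y0, y1, g00, g01, g10, g11):
    # Gain at pixel (x, y) inside the cell with corners (x0, y0) and
    # (x1, y1); g00 is the target gain at (x0, y0), g01 at (x1, y0),
    # g10 at (x0, y1), g11 at (x1, y1).
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    top = g00 * (1 - tx) + g01 * tx
    bottom = g10 * (1 - tx) + g11 * tx
    return top * (1 - ty) + bottom * ty
```

The corrected pixel value is then the raw pixel value multiplied by this gain.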
In the above correction parameter calibration method, the error caused by the correction method is introduced into the calibration stage for optimization. Grid constraint points are determined to obtain a reference gain matrix composed of the reference gain values corresponding to the grid vertices and the grid constraint points, and the reference gain matrix is linearly fitted to optimize the reference gain values of the grid vertices, so that the grid constraint points constrain the optimization of the vertex gain values and more accurate correction parameters are obtained. Using the correction parameters obtained by this method to correct images improves the effect of lens shading correction without increasing hardware or computation cost.
In some embodiments, due to limitations of computing power and hardware cost, the gain values of all pixels cannot be stored in the image acquisition device; only the gain values of a subset of pixels can be stored. In the conventional lens shading correction method, the image acquisition device generally calculates the gain value of each pixel by bilinear interpolation from the reference gain values of the grid vertices, and then performs lens shading correction on the acquired image using these per-pixel gain values. However, the pixel values of the acquired image decay nonlinearly from the center to the edge: the closer to the edge, the faster the decay and the more pronounced the nonlinearity. Because the gain values are obtained by bilinear interpolation approximation, they deviate from the ideal correction parameters, and the deviation is especially obvious in grid regions at the image edges, where a linear approximation cannot accurately follow the actual variation of the correction parameters. As shown in FIG. 7, which compares the curve of the ideal correction parameters with that of the correction parameters obtained by bilinear interpolation, the ideal correction parameters vary nonlinearly: the bilinear result is accurate at the grid vertices (solid dots in the figure), but its error is larger at the grid centers (the dotted line lies above the solid line). This error ultimately acts on the image to be corrected, producing a poor correction result, especially on images with severe lens shading, and may even leave stripes of alternating brightness in the corrected image.
By adopting the above correction parameter calibration method, gain values at the grid constraint points are introduced, and after the reference gain matrix is obtained, the gain values at the grid vertices are optimized by linear fitting. This reduces the per-pixel gain calculation error caused by bilinear interpolation within each grid region and improves the effect of lens shading correction. Furthermore, the correction parameters actually used for shading correction still contain only the target gain values at the grid vertices, so hardware cost is not increased.
In some embodiments, performing linear fitting on the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using the linear fitting result to obtain the target gain value corresponding to each grid vertex, including: performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a first gain value corresponding to each grid vertex; performing linear fitting on each column in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a second gain value corresponding to each grid vertex; and determining a target gain value corresponding to each grid vertex based on the first gain value and the second gain value corresponding to each grid vertex.
The first gain value is an optimized gain value obtained by performing linear fitting on each row of the reference gain matrix, and the second gain value is an optimized gain value obtained by performing linear fitting on each column of the reference gain matrix. The target gain value is determined from the first gain value and the second gain value.
Specifically, for the reference gain matrix corresponding to each color channel, the terminal may perform linear fitting on each line of the reference gain values, and optimize the reference gain value corresponding to each grid vertex by using the linear fitting result to obtain a first gain value corresponding to each grid vertex; and respectively carrying out linear fitting on each column of the reference gain values, optimizing the reference gain value corresponding to each grid vertex by utilizing a linear fitting result to obtain a second gain value corresponding to each grid vertex, and selecting and obtaining a target gain value corresponding to each grid vertex from the first gain value corresponding to each grid vertex and the second gain value corresponding to each grid vertex. I.e. each color channel corresponds to a set of target gain values for the grid vertices.
In this embodiment, by performing simple linear fitting for each row and each column in the reference gain matrix, optimization of the gain values is achieved, and the final target gain value is determined based on the two sets of gain values, so that errors can be further reduced.
In some embodiments, the mesh constraint points include a first constraint point that is a midpoint between any two adjacent mesh vertices; performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a first gain value corresponding to each grid vertex, wherein the linear fitting method comprises the following steps: for each row in the reference gain matrix, determining any two adjacent grid vertexes of the row as a first segmentation point and a second segmentation point respectively; performing linear regression on the first segmentation point, the second segmentation point and a first constraint point between the first segmentation point and the second segmentation point to obtain a regression line; and optimizing the reference gain values respectively corresponding to the first segmentation point and the second segmentation point based on the regression line to obtain first gain values respectively corresponding to the first segmentation point and the second segmentation point.
The first constraint point is a midpoint between any two adjacent grid vertices, that is, a midpoint of a boundary line of the grid region, for example, as shown in fig. 4, hollow dots on four boundary lines of the grid region in the upper left corner are the first constraint points.
Specifically, the terminal may perform linear regression on each row in the reference gain matrix in a piecewise manner, where the segmentation points are the grid vertices of that row. Taking minimization of the total error of the fitted line at the first segmentation point, the second segmentation point and the first constraint point as the regression objective, the terminal performs linear regression on the first segmentation point, the second segmentation point and the first constraint point between them, and determines the line satisfying the regression objective as the regression line. The terminal may then substitute the coordinates of the first segmentation point and the second segmentation point into the regression line to obtain the first gain values corresponding to each of them. The second gain value corresponding to each grid vertex, obtained by performing linear fitting on each column in the reference gain matrix, may be computed by the same steps and is not described again here.
In this embodiment, the first gain value corresponding to each grid vertex is determined by piecewise linear regression. Although the error at an individual grid vertex may increase, the error of the gain values interpolated for the pixel points inside each grid region is reduced, so the overall error of the correction parameters decreases and the lens shading correction effect improves.
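The piecewise regression above can be sketched as follows (a minimal illustration; the function name `fit_row`, the use of `np.polyfit`, and the averaging of the two fitted values at a vertex shared by adjacent segments are assumptions not prescribed by the embodiment):

```python
import numpy as np

def fit_row(positions, vertex_gains, mid_gains):
    """Piecewise linear regression over one row of the reference gain
    matrix.  positions: x-coordinates of the row's grid vertices (the
    segmentation points); vertex_gains: their reference gains;
    mid_gains: reference gains at the first constraint point (midpoint)
    between each pair of adjacent vertices."""
    fitted = np.zeros(len(positions))
    counts = np.zeros(len(positions))
    for i in range(len(positions) - 1):
        x0, x1 = positions[i], positions[i + 1]
        xm = 0.5 * (x0 + x1)                       # first constraint point
        xs = np.array([x0, xm, x1])
        ys = np.array([vertex_gains[i], mid_gains[i], vertex_gains[i + 1]])
        k, b = np.polyfit(xs, ys, 1)               # least-squares regression line
        fitted[i] += k * x0 + b                    # substitute the segmentation
        fitted[i + 1] += k * x1 + b                #   points into the line
        counts[i] += 1
        counts[i + 1] += 1
    return fitted / counts    # average where two segments share a vertex

# Collinear demo data: the fit should reproduce the input gains.
demo = fit_row(np.array([0.0, 1.0, 2.0]),
               np.array([1.0, 2.0, 3.0]),
               np.array([1.5, 2.5]))
```

Running the same function on the transposed matrix gives the column-wise (second) gain values.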
In some embodiments, the grid constraint points include a second constraint point, the second constraint point being a region center point of the grid region; determining a target gain value corresponding to each grid vertex based on the first gain value and the second gain value corresponding to each grid vertex includes: calculating a first gain value corresponding to each second constraint point based on the first gain values corresponding to the grid vertices, and calculating a second gain value corresponding to each second constraint point based on the second gain values corresponding to the grid vertices; aggregating the differences between the reference gain value corresponding to each second constraint point and the first gain value corresponding to that second constraint point to obtain a first gain error; aggregating the differences between the reference gain value corresponding to each second constraint point and the second gain value corresponding to that second constraint point to obtain a second gain error; and selecting the target gain value corresponding to each grid vertex from the first gain value and the second gain value corresponding to each grid vertex based on the first gain error and the second gain error.
The second constraint point is a region center point of the grid region, for example, as shown in fig. 4, a hollow dot at the center of the grid region in the upper left corner is the second constraint point. The first gain error characterizes the total error between the first gain value and the reference gain value corresponding to each second constraint point, and the second gain error characterizes the total error between the second gain value and the reference gain value corresponding to each second constraint point.
Specifically, for each grid region, the terminal may calculate the first gain value corresponding to the region's second constraint point from the pixel coordinates and first gain values of the four grid vertices of the region and the pixel coordinates of the second constraint point. The terminal then sums the errors between the reference gain value and the first gain value over all second constraint points to obtain the first gain error. Similarly, the terminal may obtain the second gain error by the same steps. The terminal may then select, from the first gain values and the second gain values of the grid vertices, the set with the smaller gain error as the target gain values; for example, when the first gain error is smaller than the second gain error, the first gain value corresponding to each grid vertex is selected as the target gain value.
In this embodiment, the first gain value and the second gain value corresponding to each second constraint point are calculated, the first gain error between the reference gain values and the first gain values at the second constraint points and the second gain error between the reference gain values and the second gain values at the second constraint points are aggregated, and the set with the smaller error is selected as the final target gain values, which improves the accuracy of the correction parameters.
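A sketch of this selection step (assuming bilinear interpolation at the exact region center, which reduces to the mean of the four vertex gains; all names are illustrative):

```python
import numpy as np

def select_target_gains(ref_center, first_v, second_v):
    """Choose row-fitted or column-fitted vertex gains by comparing the
    total interpolation error at each region's center (the second
    constraint point).  ref_center is (H-1, W-1) reference gains at the
    region centers; first_v / second_v are (H, W) vertex gains."""
    def center_from_vertices(v):
        # Bilinear interpolation at the exact region center is the
        # mean of the region's four vertex gains.
        return 0.25 * (v[:-1, :-1] + v[:-1, 1:] + v[1:, :-1] + v[1:, 1:])

    err1 = np.abs(ref_center - center_from_vertices(first_v)).sum()
    err2 = np.abs(ref_center - center_from_vertices(second_v)).sum()
    return first_v if err1 <= err2 else second_v

# Demo: first_v matches the reference centers exactly, so it is chosen.
first_v = np.ones((2, 2))
second_v = np.full((2, 2), 2.0)
chosen = select_target_gains(np.array([[1.0]]), first_v, second_v)
```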
In some embodiments, dividing the calibration image into a plurality of grid areas includes: performing grid division on the calibration image according to a preset division mode to obtain a plurality of grid areas; wherein, when the preset division manner is non-uniform division, among the plurality of grid regions, those positioned closer to the center of the calibration image have larger areas.
The preset dividing mode comprises uniform dividing and non-uniform dividing.
Specifically, the terminal may determine the grid division manner according to the degree of lens shading in the calibration image. If the lens shading is light, the calibration image is grid-divided by uniform division, and the divided grid regions all have the same area, as shown in (a) of fig. 3. If the lens shading is heavy, in order to better approximate the brightness attenuation law, the calibration image is grid-divided by non-uniform division, typically sparse at the center and dense at the periphery: image brightness attenuates slowly at the center and rapidly toward the edges, so a grid region positioned closer to the center of the calibration image has a larger area, and a grid region positioned closer to the edge has a smaller area, as shown in (b) of fig. 3.
In this embodiment, the calibration image is grid-divided according to the preset division manner, so that the divided grid regions better fit the nonlinear attenuation trend of image brightness, and the gain value calculated for each pixel point from the correction parameters is closer to the ideal correction parameter, improving the lens shading correction effect.
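One possible way to generate such grid lines along one axis (cosine spacing is an illustrative choice — the embodiment only requires larger cells near the center, not any particular formula):

```python
import numpy as np

def grid_lines(length, n_cells, uniform=True):
    """Grid-line positions along one image axis.  Non-uniform mode uses
    cosine spacing: cells are larger near the center of the axis and
    smaller toward the two edges."""
    if uniform:
        return np.linspace(0.0, length, n_cells + 1)
    t = np.linspace(0.0, np.pi, n_cells + 1)
    return length * (1.0 - np.cos(t)) / 2.0

u = grid_lines(100.0, 4, uniform=True)    # equal 25-pixel cells
nu = grid_lines(100.0, 4, uniform=False)  # wide center cells, narrow edges
```

Applying this independently to the two image axes yields the uniform grid of fig. 3(a) or the center-sparse grid of fig. 3(b).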
In some embodiments, dividing the calibration image into a plurality of mesh regions and determining mesh vertices and mesh constraint points for each mesh region includes: determining a plurality of color channels corresponding to the calibration image according to the image format of the calibration image; dividing the calibration image according to the color channels to obtain sub-calibration images corresponding to each color channel; dividing each sub-calibration image into a plurality of grid areas, and determining grid vertexes and grid constraint points of each grid area;
the reference gain value comprises a reference gain value corresponding to each color channel, and the reference gain matrix comprises a reference gain matrix corresponding to each color channel; determining reference gain values corresponding to the grid vertices and the grid constraint points respectively based on pixel values corresponding to the grid vertices and the grid constraint points respectively, and forming a reference gain matrix, wherein the method comprises the following steps: for each color channel, determining a reference gain value corresponding to each grid vertex and each grid constraint point under the color channel based on pixel values corresponding to each grid vertex and each grid constraint point in the sub-calibration image corresponding to the color channel, and forming a reference gain matrix corresponding to the color channel.
The image format of the calibration image may be a RAW format, an RGB format, or another image format, wherein the RAW format includes four color channels R, Gr, Gb and B, and the RGB format includes three color channels R, G and B. For example, assuming the image format of the calibration image is RAW, the terminal may determine a reference gain matrix for each of the R, Gr, Gb and B color channels of the calibration image.
In this embodiment, since lens shading correction is usually performed separately for each color channel, determining a reference gain matrix for each color channel enables the reference gain values of the grid vertices to be optimized per channel.
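For a RAW calibration image, the channel split might look like this (a sketch assuming an RGGB Bayer layout with R at even row / even column; other CFA orders permute the four slices):

```python
import numpy as np

def split_bayer(raw):
    """Split an RGGB Bayer RAW frame into the R, Gr, Gb and B
    sub-calibration images by subsampling every other row/column."""
    return {
        "R":  raw[0::2, 0::2],   # even rows, even columns
        "Gr": raw[0::2, 1::2],   # even rows, odd columns
        "Gb": raw[1::2, 0::2],   # odd rows, even columns
        "B":  raw[1::2, 1::2],   # odd rows, odd columns
    }

planes = split_bayer(np.arange(16).reshape(4, 4))
```

Each of the four sub-images is then grid-divided and calibrated independently, giving one reference gain matrix per channel.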
In some embodiments, determining the reference gain value for each mesh vertex and each mesh constraint point based on the pixel value for each mesh vertex and each mesh constraint point, respectively, includes: determining a target value from pixel values corresponding to each grid vertex and pixel values corresponding to each grid constraint point respectively; determining a reference gain value corresponding to each grid vertex based on the ratio between the target value and the pixel value corresponding to the grid vertex; for each grid constraint point, determining a reference gain value corresponding to the grid constraint point based on the ratio between the target value and the pixel value corresponding to the grid constraint point.
Specifically, the terminal may determine a reference gain matrix corresponding to each color channel. For each color channel, the terminal may read, from the sub-calibration image corresponding to that channel, the pixel values corresponding to each grid vertex and each grid constraint point, and take the maximum of these pixel values as the target value. For each grid vertex, the terminal determines the ratio of the target value to the pixel value of that vertex as the reference gain value of the vertex under the channel; for each grid constraint point, the terminal determines the ratio of the target value to the pixel value of that constraint point as the reference gain value of the constraint point under the channel.
In this embodiment, the target value is determined from the pixel values of all grid vertices and grid constraint points, and the reference gain value of each grid vertex and each grid constraint point is computed from that target value, so that the grid constraint points provide a data basis for optimizing the reference gain values of the grid vertices.
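A minimal sketch of the reference gain computation for one channel (the target value is the maximum pixel value, as in this embodiment; function and variable names are illustrative):

```python
import numpy as np

def reference_gains(vertex_pixels, constraint_pixels):
    """Reference gain = target value / pixel value, where the target
    value is the maximum pixel value over all grid vertices and grid
    constraint points of one color channel."""
    vertex_pixels = np.asarray(vertex_pixels, dtype=float)
    constraint_pixels = np.asarray(constraint_pixels, dtype=float)
    target = max(vertex_pixels.max(), constraint_pixels.max())
    return target / vertex_pixels, target / constraint_pixels

# The brightest point gets gain 1.0; darker points get gain > 1.0.
vertex_g, constraint_g = reference_gains([50.0, 100.0], [25.0])
```

Multiplying each point by its gain lifts it to the target brightness, which is the compensation that lens shading correction applies.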
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, an embodiment of the application further provides a correction parameter calibration device for implementing the above correction parameter calibration method. The implementation of the solution provided by the device is similar to that described in the method above, so for the specific limitations in the one or more correction parameter calibration device embodiments provided below, reference may be made to the limitations of the correction parameter calibration method above, which are not repeated here.
In some embodiments, as shown in fig. 8, there is provided a correction parameter calibration apparatus, including: a meshing module 802, a reference gain determination module 804, an optimization module 806, and a parameter determination module 808, wherein:
the mesh division module 802 is configured to divide the calibration image into a plurality of mesh areas, and determine mesh vertices and mesh constraint points of each mesh area.
The reference gain determining module 804 is configured to determine, based on the pixel values corresponding to the grid vertices and the grid constraint points, reference gain values corresponding to the grid vertices and the grid constraint points, respectively, and form a reference gain matrix.
And an optimizing module 806, configured to perform linear fitting on the reference gain matrix, and optimize the reference gain value corresponding to each grid vertex by using the linear fitting result, so as to obtain the target gain value corresponding to each grid vertex.
A parameter determination module 808 is configured to determine a correction parameter according to the target gain value, where the correction parameter is used for shading correction of the image.
In some embodiments, the optimization module 806 is further configured to: performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a first gain value corresponding to each grid vertex; performing linear fitting on each column in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a second gain value corresponding to each grid vertex; and determining a target gain value corresponding to each grid vertex based on the first gain value and the second gain value corresponding to each grid vertex.
In some embodiments, the mesh constraint points include a first constraint point that is a midpoint between any two adjacent mesh vertices; the optimization module 806 is further configured to: for each row in the reference gain matrix, determining any two adjacent grid vertexes of the row as a first segmentation point and a second segmentation point respectively; performing linear regression on the first segmentation point, the second segmentation point and a first constraint point between the first segmentation point and the second segmentation point to obtain a regression line; and optimizing the reference gain values respectively corresponding to the first segmentation point and the second segmentation point based on the regression line to obtain first gain values respectively corresponding to the first segmentation point and the second segmentation point.
In some embodiments, the grid constraint points include a second constraint point, the second constraint point being a region center point of the grid region; the optimization module 806 is further configured to: calculating a first gain value corresponding to each second constraint point based on the first gain values corresponding to the grid vertices, and calculating a second gain value corresponding to each second constraint point based on the second gain values corresponding to the grid vertices; aggregating the differences between the reference gain value corresponding to each second constraint point and the first gain value corresponding to that second constraint point to obtain a first gain error; aggregating the differences between the reference gain value corresponding to each second constraint point and the second gain value corresponding to that second constraint point to obtain a second gain error; and selecting the target gain value corresponding to each grid vertex from the first gain value and the second gain value corresponding to each grid vertex based on the first gain error and the second gain error.
In some embodiments, the meshing module 802 is further configured to: performing grid division on the calibration image according to a preset division mode to obtain a plurality of grid areas; wherein, when the preset division manner is non-uniform division, among the plurality of grid regions, those positioned closer to the center of the calibration image have larger areas.
In some embodiments, the meshing module 802 is further configured to: determining a plurality of color channels corresponding to the calibration image according to the image format of the calibration image; dividing the calibration image according to the color channels to obtain sub-calibration images corresponding to each color channel; dividing each sub-calibration image into a plurality of grid areas, and determining grid vertexes and grid constraint points of each grid area;
the reference gain value comprises a reference gain value corresponding to each color channel, and the reference gain matrix comprises a reference gain matrix corresponding to each color channel; the reference gain determination module 804 is further configured to: for each color channel, determining a reference gain value corresponding to each grid vertex and each grid constraint point under the color channel based on pixel values corresponding to each grid vertex and each grid constraint point in the sub-calibration image corresponding to the color channel, and forming a reference gain matrix corresponding to the color channel.
In some embodiments, the reference gain determination module 804 is further configured to: determining a target value from pixel values corresponding to each grid vertex and pixel values corresponding to each grid constraint point respectively; determining a reference gain value corresponding to each grid vertex based on the ratio between the target value and the pixel value corresponding to the grid vertex; for each grid constraint point, determining a reference gain value corresponding to the grid constraint point based on the ratio between the target value and the pixel value corresponding to the grid constraint point.
The various modules in the above correction parameter calibration device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In some embodiments, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer equipment is used for storing related data related to the calibration method of the correction parameters. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a correction parameter calibration method.
In some embodiments, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a correction parameter calibration method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structures shown in fig. 9 and 10 are block diagrams of only some of the structures associated with the present application and are not intended to limit the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps in the correction parameter calibration method described above when the computer program is executed.
In some embodiments, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the correction parameter calibration method described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the correction parameter calibration method described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely represent several implementations of the present application, and their descriptions are specific and detailed, but are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A correction parameter calibration method, the method comprising:
dividing a calibration image into a plurality of grid areas, and determining grid vertexes and grid constraint points of each grid area; the grid constraint points comprise first constraint points and second constraint points, the first constraint points are midpoints between any two adjacent grid vertices, and the second constraint points are regional center points of the grid region;
Determining reference gain values respectively corresponding to the grid vertexes and the grid constraint points based on pixel values respectively corresponding to the grid vertexes and the grid constraint points, and forming a reference gain matrix;
performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a first gain value corresponding to each grid vertex; performing linear fitting on each column in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using a linear fitting result to obtain a second gain value corresponding to each grid vertex; calculating a first gain value corresponding to each second constraint point based on the first gain value corresponding to each grid vertex, and calculating a second gain value corresponding to each second constraint point based on the second gain value corresponding to each grid vertex; aggregating the differences between the reference gain value corresponding to each second constraint point and the first gain value corresponding to the second constraint point to obtain a first gain error; aggregating the differences between the reference gain value corresponding to each second constraint point and the second gain value corresponding to the second constraint point to obtain a second gain error; selecting a target gain value corresponding to each grid vertex from a first gain value and a second gain value corresponding to each grid vertex respectively based on the first gain error and the second gain error;
And determining a correction parameter according to the target gain value, wherein the correction parameter is used for shading correction of the image.
2. The method of claim 1, wherein dividing the calibration image into a plurality of mesh regions and determining mesh vertices and mesh constraints for each of the mesh regions comprises:
acquiring a calibration image acquired by image acquisition equipment;
performing grid division on the calibration image according to a preset division mode to obtain a plurality of grid areas in the calibration image;
the vertices of each mesh region are determined as mesh vertices and mesh constraint points are selected from the border lines of the mesh region or from within the mesh region.
3. The method according to claim 1, wherein the performing linear fitting for each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using the linear fitting result, to obtain the first gain value corresponding to each grid vertex, includes:
for each row in a reference gain matrix, determining any two adjacent grid vertexes of the row as a first segmentation point and a second segmentation point respectively;
performing linear regression on the first segmentation point, the second segmentation point and a first constraint point between the first segmentation point and the second segmentation point to obtain a regression line;
And optimizing the reference gain values respectively corresponding to the first segmentation point and the second segmentation point based on the regression line to obtain first gain values respectively corresponding to the first segmentation point and the second segmentation point.
4. The method of claim 1, wherein each grid vertex corresponds to a target gain value for each color channel.
5. The method of claim 1, wherein dividing the calibration image into a plurality of grid regions comprises:
performing grid division on the calibration image according to a preset division mode to obtain a plurality of grid regions;
wherein, when the preset division mode is non-uniform division, grid regions located closer to the center of the calibration image have larger areas.
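One way to realize a non-uniform division with larger cells toward the image centre is cosine spacing of the grid breakpoints; this particular spacing is an assumption, as the claim does not fix one.

```python
import numpy as np

def nonuniform_edges(size, n):
    """Sketch of non-uniform division: cosine-spaced breakpoints along one
    axis, so that grid cells near the centre are wider than cells near the
    border (where lens shading changes fastest)."""
    t = np.linspace(0, np.pi, n + 1)
    return (1 - np.cos(t)) / 2 * (size - 1)
```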
6. The method of claim 1, wherein dividing the calibration image into a plurality of grid regions and determining grid vertices and grid constraint points of each grid region comprises:
determining a plurality of color channels corresponding to the calibration image according to the image format of the calibration image;
dividing the calibration image according to the color channels to obtain a sub-calibration image corresponding to each color channel;
dividing each sub-calibration image into a plurality of grid regions, and determining grid vertices and grid constraint points of each grid region;
wherein the reference gain values comprise a reference gain value corresponding to each color channel, and the reference gain matrix comprises a reference gain matrix corresponding to each color channel; and determining the reference gain values corresponding to the grid vertices and the grid constraint points based on the pixel values corresponding to the grid vertices and the grid constraint points, and forming the reference gain matrix, comprises:
for each color channel, determining the reference gain values corresponding to the grid vertices and the grid constraint points under the color channel based on the pixel values corresponding to the grid vertices and the grid constraint points in the sub-calibration image corresponding to the color channel, and forming the reference gain matrix corresponding to the color channel.
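Claim 6's channel-wise split can be illustrated for a raw image; the RGGB Bayer layout below is an assumption, since the claim derives the channels from whatever image format the calibration image uses.

```python
import numpy as np

def split_bayer(raw):
    """Sketch assuming an RGGB Bayer raw image: one sub-calibration image
    per colour channel, each then gridded and calibrated independently."""
    return {
        "R":  raw[0::2, 0::2],
        "Gr": raw[0::2, 1::2],
        "Gb": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
```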
7. The method of claim 1, wherein determining the reference gain values corresponding to the grid vertices and the grid constraint points based on the pixel values corresponding to the grid vertices and the grid constraint points comprises:
determining a target value from the pixel values corresponding to the grid vertices and the grid constraint points;
determining a reference gain value corresponding to each grid vertex based on a ratio between the target value and a pixel value corresponding to the grid vertex;
and determining a reference gain value corresponding to each grid constraint point based on the ratio between the target value and the pixel value corresponding to the grid constraint point.
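A minimal sketch of claim 7's gain computation, assuming the brightest sampled point is chosen as the target value (the claim itself leaves the choice of target value open):

```python
import numpy as np

def reference_gains(pixel_values):
    """Sketch: take the maximum sampled pixel value as the target value and
    define each point's reference gain as target / pixel value, so dimmer
    (more shaded) points receive larger gains."""
    pv = np.asarray(pixel_values, dtype=float)
    target = pv.max()  # assumed choice of target value
    return target / pv
```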
8. A correction parameter calibration apparatus, the apparatus comprising:
the grid dividing module is used for dividing the calibration image into a plurality of grid areas and determining grid vertexes and grid constraint points of each grid area; the grid constraint points comprise first constraint points and second constraint points, the first constraint points are midpoints between any two adjacent grid vertices, and the second constraint points are regional center points of the grid region;
the reference gain determining module is used for determining reference gain values respectively corresponding to the grid vertexes and the grid constraint points based on pixel values respectively corresponding to the grid vertexes and the grid constraint points, and forming a reference gain matrix;
the optimization module is used for performing linear fitting on each row in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using the linear fitting result to obtain a first gain value corresponding to each grid vertex; performing linear fitting on each column in the reference gain matrix, and optimizing the reference gain value corresponding to each grid vertex by using the linear fitting result to obtain a second gain value corresponding to each grid vertex; calculating a first gain value corresponding to each second constraint point based on the first gain values corresponding to the grid vertices, and calculating a second gain value corresponding to each second constraint point based on the second gain values corresponding to the grid vertices; aggregating the differences between the reference gain value corresponding to each second constraint point and the first gain value corresponding to that second constraint point to obtain a first gain error; aggregating the differences between the reference gain value corresponding to each second constraint point and the second gain value corresponding to that second constraint point to obtain a second gain error; and selecting, based on the first gain error and the second gain error, a target gain value corresponding to each grid vertex from the first gain value and the second gain value corresponding to that grid vertex;
and the parameter determining module is used for determining a correction parameter according to the target gain value, wherein the correction parameter is used for shading correction of the image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410130860.0A 2024-01-31 2024-01-31 Correction parameter calibration method, device, computer equipment and storage medium Active CN117671036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410130860.0A CN117671036B (en) 2024-01-31 2024-01-31 Correction parameter calibration method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117671036A CN117671036A (en) 2024-03-08
CN117671036B true CN117671036B (en) 2024-04-09

Family

ID=90082852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410130860.0A Active CN117671036B (en) 2024-01-31 2024-01-31 Correction parameter calibration method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117671036B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111355941A (en) * 2020-04-01 2020-06-30 深圳市菲森科技有限公司 Image color real-time correction method, device and system
CN113592739A (en) * 2021-07-30 2021-11-02 浙江大华技术股份有限公司 Method and device for correcting lens shadow and storage medium
CN113747066A (en) * 2021-09-07 2021-12-03 汇顶科技(成都)有限责任公司 Image correction method, image correction device, electronic equipment and computer-readable storage medium
CN115534801A (en) * 2022-08-29 2022-12-30 深圳市欧冶半导体有限公司 Vehicle lamp self-adaptive dimming method and device, intelligent terminal and storage medium

Also Published As

Publication number Publication date
CN117671036A (en) 2024-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant