CN114078094A - Image edge brightness correction method, device and system - Google Patents


Publication number
CN114078094A
CN114078094A (application CN202010850604.0A)
Authority
CN
China
Prior art keywords
image
brightness
curve
standard
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010850604.0A
Other languages
Chinese (zh)
Inventor
毛之华
魏文燕
褚哲文
张露萍
何艳宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Sunny Opotech Co Ltd filed Critical Ningbo Sunny Opotech Co Ltd
Priority claimed from application CN202010850604.0A
Publication of CN114078094A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image edge brightness correction method, device and system. The correction method comprises the following steps: shooting a target bearing a specific pattern to obtain an image of that pattern; determining abnormal regions at the image edge by means of a standard brightness gradient curve of the image, and calculating a brightness deviation coefficient for each abnormal region; identifying the correctable regions among the abnormal regions based on a standard resolving-power variation curve of the image; and performing brightness correction on the correctable regions according to the brightness deviation coefficient. The brightness-corrected image is then reconstructed by a trained convolutional neural network model to obtain a large-field-of-view image with sharp edges, so that the corrected image retains the maximum field angle while exhibiting high edge definition, smoothness and a strong sense of realism.

Description

Image edge brightness correction method, device and system
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to a method, an apparatus and a system for correcting image edge brightness.
Background
As wide-angle selfies, macro shots and the like become "must-have" camera functions of portable electronic products such as mobile phones, more and more of these products adopt large field-of-view (FOV) camera modules. Compared with a conventional camera module, a large-FOV camera module covers a wider viewing range, so the user obtains a broader picture from the same shooting position; moreover, images captured under macro conditions achieve effects unavailable to a conventional module, such as close-range capture of fine detail of the subject. However, brightness fall-off inevitably occurs in the edge region of images produced by large-FOV camera modules.
In addition, the free-form-surface optics adopted to reduce the distortion of large-FOV camera modules cause the same edge-brightness fall-off, and shrink the region of the captured image that can actually be displayed.
Furthermore, as electronic products such as smartphones become ever thinner, the height available for the camera module shrinks accordingly, and under these constraints modules with large FOV, macro capability and similar features likewise suffer brightness reduction at the image edge.
As large-FOV camera modules and ultra-thin phones become unavoidable, a method for correcting the brightness of the edge region of captured images is urgently needed. Most known methods, domestic and foreign, perform brightness correction by constructing a functional relationship between light-source brightness and the image data displayed by the terminal, then adjusting the displayed image data. Such methods, however, generally cannot judge which image information is valid, so imaging distortion appears in the edge region of the image.
Therefore, there is a need for an image edge brightness correction method and apparatus that can solve the problem of brightness reduction of an image and ensure that the image is not distorted.
Disclosure of Invention
The present application provides an image edge brightness correction method, together with an apparatus and system applying it, that address at least some of the above-identified deficiencies of the prior art.
One aspect of the present application provides a method for correcting image edge brightness, where the method includes: shooting a target with a specific pattern to obtain an image of the specific pattern; determining an abnormal area of the image edge through a standard brightness gradient curve of the image, and calculating a brightness deviation coefficient of the abnormal area; determining a correctable region in the abnormal region based on a standard resolving power variation curve of the image; and performing brightness correction on the correctable area according to the brightness deviation coefficient.
According to an embodiment of the application, the method further comprises: training a convolutional neural network model for correcting the image edge brightness; and reconstructing the brightness-corrected image through the convolutional neural network model.
According to the embodiment of the application, the training of the convolutional neural network model for the image edge brightness correction comprises the following steps: building a convolutional neural network model with a feature extraction layer and a reconstruction layer; selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image; defining an edge region of the luminance anomaly image; extracting the feature data of the edge area in the feature extraction layer; optimizing the edge region in the reconstruction layer; comparing the optimized result image with the standard image to determine the difference; when the difference is larger than a preset value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and repeating the above steps until the difference is less than the predetermined value.
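The training loop above can be sketched in pure Python. A real implementation would build the feature-extraction and reconstruction layers in a deep-learning framework; here, purely as an illustrative assumption, a single learnable gain stands in for those layers and pixel values are normalized to [0, 1]. Only the loop structure matters: reconstruct, compare with the standard image, and adjust parameters until the difference falls below the preset value.

```python
# Minimal stand-in for the patent's training loop. "gain" replaces the
# feature-extraction/reconstruction parameters; the mean squared difference
# against the standard image is the "difference" compared to the preset value.

def train(abnormal_img, standard_img, lr=0.5, tol=1e-4, max_iters=10000):
    gain = 1.0                    # stand-in for the layer parameters
    loss = float("inf")
    for _ in range(max_iters):
        result = [gain * p for p in abnormal_img]          # "reconstruction" pass
        diffs = [r - s for r, s in zip(result, standard_img)]
        loss = sum(d * d for d in diffs) / len(diffs)      # compare with standard image
        if loss < tol:                                     # difference below preset value
            break
        # Gradient of the mean squared difference w.r.t. the gain parameter.
        grad = 2 * sum(d * p for d, p in zip(diffs, abnormal_img)) / len(diffs)
        gain -= lr * grad                                  # adjust parameters
    return gain, loss
```

For a dark edge image `[0.6, 0.72]` whose standard counterpart is `[1.0, 1.2]`, the loop converges to a gain near 5/3 within a handful of iterations.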
According to the embodiment of the application, the shooting of the target with the specific pattern and the acquisition of the image of the specific pattern comprises the following steps: arranging a plurality of first color blocks and second color blocks which are same in size and are alternately arranged on a square plane, and drawing a plurality of strips which extend from the center of the square plane to four sides and four corners and equally divide the square plane to form the specific pattern; and setting the center of the target on the optical axis of a camera module, and enabling the shooting range of the camera module to be equal to the size of the specific pattern so as to acquire the image of the specific pattern.
According to the embodiment of the present application, disposing a plurality of first color patches and second color patches having the same size and alternately arranged on the square plane further includes: disposing the first color patches and second color patches in the regions of the square plane outside the strips; setting the first color patch and the second color patch to different colors; and setting a marker point on at least one of the first color patch and the second color patch.
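As a rough illustration of such a target (the patch values, sizes and marker placement are assumptions, and the radial strips are omitted), a two-color checkerboard with a marker point at each patch center could be generated as:

```python
# Hypothetical sketch of the target pattern: alternating first/second color
# patches of equal size on a square plane, plus marker-point coordinates.

def checkerboard(n, c1=0, c2=255):
    """Return an n x n grid alternating between two patch gray values."""
    return [[c1 if (r + c) % 2 == 0 else c2 for c in range(n)] for r in range(n)]

def marker_points(n, patch_size):
    """Pixel coordinates of each patch center, where a marker point sits."""
    half = patch_size // 2
    return [(r * patch_size + half, c * patch_size + half)
            for r in range(n) for c in range(n)]
```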
According to an embodiment of the application, the method further comprises: generating a standard luminance gradient curve for the image, comprising: drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image; calculating the average value of the brightness gradient curves of the plurality of strips to obtain the average brightness gradient curve of the image; determining an edge field of view and a center field of view in the image; fitting a brightness gradient curve in the marginal field range according to the average brightness gradient curve in the central field range; and combining the average brightness gradient curve in the central field range with the fitted brightness gradient curve in the edge field range to generate the standard brightness gradient curve.
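A minimal sketch of the curve-generation steps above, under two stated assumptions: every strip's brightness is sampled at the same field positions, and the fit in the edge field is a quadratic least-squares model (the text only says "fitting"; the quadratic choice is ours).

```python
# Build a "standard" brightness gradient curve: average the per-strip curves,
# fit the central-field portion with a least-squares quadratic, and replace
# the edge-field portion with the extrapolated fit.

def average_curve(strip_curves):
    """Average several per-strip brightness curves sampled at the same positions."""
    n = len(strip_curves)
    return [sum(c[i] for c in strip_curves) / n for i in range(len(strip_curves[0]))]

def fit_quadratic(xs, ys):
    """Least-squares fit y = a*x^2 + b*x + c via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]                   # sums of x^0..x^4
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]   # sums of y*x^0..y*x^2
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    b = [t[2], t[1], t[0]]
    for i in range(3):                       # Gaussian elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        coeffs[i] = (b[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs                            # (a, b, c)

def standard_curve(xs, avg, center_cut):
    """Keep the measured average curve in the central field (x <= center_cut);
    replace the edge field with the extrapolated least-squares fit."""
    center = [(x, y) for x, y in zip(xs, avg) if x <= center_cut]
    a, b2, c = fit_quadratic([p[0] for p in center], [p[1] for p in center])
    return [y if x <= center_cut else a * x * x + b2 * x + c
            for x, y in zip(xs, avg)]
```

If the central field truly follows a quadratic fall-off, the extrapolated edge values reproduce it exactly.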
According to an embodiment of the present application, generating the standard luminance gradient curve further includes: comparing the brightness gradient curves of the eight strips at diagonal positions in the marginal field of view, and calculating a deviation value; and correcting the standard brightness gradient curve by using the deviation value.
According to an embodiment of the present application, fitting the luminance gradient curve in the peripheral field of view according to the average luminance gradient curve in the central field of view further includes: and fitting a brightness gradient curve in the marginal field range by adopting a least square method.
According to the embodiment of the application, determining the abnormal region of the image edge through the standard brightness gradient curve of the image comprises: setting a brightness threshold; taking the difference between the gray value of each marker point in the edge field of view and the corresponding value in the standard brightness gradient curve; and designating the first color patch or second color patch containing a marker point whose difference exceeds the brightness threshold as an abnormal region.
According to an embodiment of the present application, calculating the luminance deviation coefficient of the abnormal region includes: and setting the ratio of the gray value of each mark point in the abnormal area to the corresponding value in the standard brightness gradient curve as the brightness deviation coefficient.
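The two steps above reduce to a threshold test and a ratio. A hedged sketch (the data shapes and names are assumptions): a marker point is abnormal when its gray value deviates from the standard curve by more than the threshold, and its brightness deviation coefficient is the ratio of measured to standard value.

```python
# marks: {point_id: measured gray value}; standard: {point_id: value on the
# standard brightness gradient curve at that point}.

def find_abnormal(marks, standard, threshold):
    """Ids of marker points whose deviation from the standard curve exceeds the threshold."""
    return [p for p, g in marks.items() if abs(g - standard[p]) > threshold]

def deviation_coefficients(marks, standard, abnormal):
    """Brightness deviation coefficient: measured gray value / standard-curve value."""
    return {p: marks[p] / standard[p] for p in abnormal}
```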
According to an embodiment of the application, the method further comprises generating a standard resolving-power variation curve for the image, comprising: obtaining the resolving power at each marker point in the image to plot the resolving-power variation curve of the image; fitting a resolving-power variation curve within the edge field of view from the curve of the image within the central field of view; and merging the central-field curve with the fitted edge-field curve to generate the standard resolving-power variation curve.
According to an embodiment of the present application, fitting the resolving-power variation curve in the edge field of view from the curve of the image in the central field of view further comprises: fitting the edge-field resolving-power variation curve by the least-squares method.
According to the embodiment of the present application, obtaining the resolving power at each marker point in the image and plotting the resolving-power variation curve of the image further comprises: computing the SFR (spatial frequency response) value at each marker point in the image to plot the SFR variation curve of the image.
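SFR proper is measured with the slanted-edge method (ISO 12233), which is beyond a short sketch. As a deliberately simplified stand-in, and not the standard SFR algorithm, Michelson contrast across a black/white patch boundary gives a crude resolving-power proxy: a blurred edge yields lower contrast.

```python
# Crude resolving-power proxy (assumption, not ISO 12233 SFR): Michelson
# contrast of a gray-value profile sampled across a patch edge.

def michelson_contrast(profile):
    """(max - min) / (max + min) of gray values sampled across an edge."""
    hi, lo = max(profile), min(profile)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0
```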
According to an embodiment of the present application, determining a correctable region among the abnormal regions based on the standard resolving-power variation curve of the image comprises: setting a resolving-power threshold; comparing the resolving power at each marker point in the abnormal region with the corresponding value in the standard resolving-power variation curve; and designating abnormal regions whose marker points fall below the threshold as severe-information-loss regions, and those whose marker points are above the threshold as correctable regions.
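The triage step above can be sketched as follows. One assumption is made explicit: the threshold is expressed relative to the standard-curve value at each point, since the text compares against both a threshold and the standard curve.

```python
# Split abnormal marker points into "correctable" vs "severe information loss"
# by comparing measured resolving power against the standard curve.

def classify_abnormal(power, standard_power, abnormal, rel_threshold):
    """power / standard_power: {point: resolving power}; abnormal: point ids."""
    correctable, lost = [], []
    for p in abnormal:
        if power[p] >= rel_threshold * standard_power[p]:
            correctable.append(p)       # enough detail survives to correct
        else:
            lost.append(p)              # information too degraded to restore
    return correctable, lost
```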
According to an embodiment of the present application, performing brightness correction on the correctable region according to the brightness deviation coefficient further comprises: obtaining the brightness-corrected image by least-squares fitting using the brightness deviation coefficient.
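The text obtains the corrected image by least-squares fitting over the deviation coefficients; the sketch below shows only the final per-point step (an assumption about the mechanism): dividing each gray value by its deviation coefficient restores brightness, with clamping to the 8-bit range.

```python
# Apply the brightness deviation coefficient as an inverse gain.
# A coefficient < 1 means the point is darker than the standard curve.

def correct_region(pixels, coeff):
    """pixels: {point: gray value}; coeff: {point: brightness deviation coefficient}."""
    return {p: min(255, round(g / coeff[p])) for p, g in pixels.items()}
```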
In another aspect, the present application further provides an image edge brightness correction system, including: a memory storing computer readable instructions; and a processor, coupled to the memory, that executes the instructions to: determining an abnormal area of the image edge according to a standard brightness gradient curve of the image with a specific pattern of the target, and calculating a brightness deviation coefficient of the abnormal area; determining a correctable region in the abnormal region based on a standard resolving power variation curve of the image; and performing brightness correction on the correctable area according to the brightness deviation coefficient.
According to an embodiment of the application, the processor further executes the instructions to: train a convolutional neural network model for correcting the image edge brightness; and reconstruct the brightness-corrected image through the convolutional neural network model.
According to an embodiment of the application, the process of the processor executing the instructions to train the convolutional neural network model for image edge brightness correction comprises: building a convolutional neural network model with a feature extraction layer and a reconstruction layer; selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image; defining an edge region of the luminance anomaly image; extracting the feature data of the edge area in the feature extraction layer; optimizing the edge region in the reconstruction layer; comparing the optimized result image with the standard image to determine the difference; when the difference is larger than a preset value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and repeating the above steps until the difference is less than the predetermined value.
According to an embodiment of the present application, the specific pattern includes: a plurality of first color blocks and second color blocks which are same in size and are alternately arranged are arranged on the square plane; and a plurality of stripes extending from the center of the square plane to four sides and four corners, equally dividing the square plane, wherein the first color patch and the second color patch are disposed in the square plane in a region other than the stripes; the first color patch and the second color patch have different colors; and at least one of the first color block and the second color block is provided with a mark point.
According to an embodiment of the application, the processor executes the instructions to further: generating a standard luminance gradient curve for the image, comprising: drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image; calculating the average value of a plurality of strip brightness gradient curves to obtain the average brightness gradient curve of the image; determining an edge field of view and a center field of view in the image; fitting a brightness gradient curve in the marginal field range according to the average brightness gradient curve in the central field range; and combining the average brightness gradient curve in the central field range with the fitted brightness gradient curve in the edge field range to generate the standard brightness gradient curve.
According to an embodiment of the application, the processor executes the instructions to further: generating the standard brightness gradient curve, including: comparing the brightness gradient curves of the eight strips positioned at diagonal positions in the marginal field of view with each other, and calculating a deviation value; and correcting the standard brightness gradient curve by using the deviation value.
According to an embodiment of the application, the processor executes the instructions to fit the luminance gradient curve within the marginal field of view using a least squares method.
According to an embodiment of the application, the processor executes the instructions to determine the abnormal region of the image edge as follows: setting a brightness threshold; taking the difference between the gray value of each marker point in the edge field of view and the corresponding value in the standard brightness gradient curve; and designating the first color patch or second color patch containing a marker point whose difference exceeds the brightness threshold as an abnormal region.
According to the embodiment of the application, the processor executes the instructions to set the ratio between the gray value of each mark point in the abnormal area and the corresponding value in the standard brightness gradient curve as the brightness deviation coefficient.
According to an embodiment of the application, the processor further executes the instructions to generate a standard resolving-power variation curve for the image, comprising: obtaining the resolving power at each marker point in the image to plot the resolving-power variation curve of the image; fitting a resolving-power variation curve within the edge field of view from the curve of the image within the central field of view; and merging the central-field curve with the fitted edge-field curve to generate the standard resolving-power variation curve.
According to an embodiment of the application, the processor executes the instructions to fit the resolving-power variation curve within the edge field of view using the least-squares method.
According to the embodiment of the application, the processor executes the instructions to obtain the SFR (spatial frequency response) value at each marker point in the image so as to plot the SFR variation curve of the image.
According to an embodiment of the application, the processor executes the instructions to determine the correctable regions among the abnormal regions by: setting a resolving-power threshold; comparing the resolving power at each marker point in the abnormal region with the corresponding value in the standard resolving-power variation curve; and designating abnormal regions whose marker points fall below the threshold as severe-information-loss regions, and those whose marker points are above the threshold as correctable regions.
According to an embodiment of the application, the processor executes the instructions to obtain a brightness corrected image by least square fitting using the brightness deviation coefficient.
In another aspect, the present application further provides an apparatus for an image edge brightness correction method, where the apparatus includes: the module clamp is used for fixing the camera module to be tested; the target with a specific pattern, the center of the target is arranged on the optical axis of the camera module, and the shooting range of the camera module is equal to the size of the specific pattern so as to obtain an image with the specific pattern; the image data generator is used for acquiring the gray value and the image resolving power of the image and correcting the edge brightness of the image; and the reconstructor is provided with a convolution neural network model and is used for correcting the image edge brightness.
According to the embodiment of the application, the convolutional neural network model comprises: a feature extraction layer, which extracts features from the image to be processed, or from the result image of the previous edge-brightness correction pass, to obtain a plurality of feature maps; and a reconstruction layer, located after the feature extraction layer, which performs deconvolution on the extracted feature maps so as to reconstruct the image to be processed from them.
According to an embodiment of the present application, the apparatus further comprises a trainer for training the convolutional neural network model, wherein the trainer is configured to: selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image; defining an edge region of the luminance anomaly image; extracting the feature data of the edge area in the feature extraction layer; optimizing the edge region in the reconstruction layer; comparing the optimized result image with the standard image to determine the difference; when the difference is larger than a preset value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and repeating the above steps until the difference is less than the predetermined value.
According to an embodiment of the present application, the specific pattern includes: a plurality of first color blocks and second color blocks which are same in size and are alternately arranged are arranged on the square plane; and a plurality of stripes extending from the center of the square plane to four sides and four corners, equally dividing the square plane, wherein the first color patch and the second color patch are disposed in the square plane in a region other than the stripes; the first color patch and the second color patch have different colors; and at least one of the first color block and the second color block is provided with a mark point.
According to an embodiment of the present application, the image data generator is configured to: determining an abnormal area of the image edge through a standard brightness gradient curve of the image, and calculating a brightness deviation coefficient of the abnormal area; determining a correctable region in the abnormal region based on a standard resolving power variation curve of the image; and performing brightness correction on the correctable area according to the brightness deviation coefficient.
According to the embodiment of the application, the image data generator is configured to make a difference between the gray value of each mark point in the edge field and the corresponding value in the standard brightness gradient curve; and setting the first color block or the second color block where the mark point with the difference value exceeding a preset brightness threshold value is located as the abnormal area.
According to an embodiment of the present application, the image data generator is configured to set a ratio between a gray value of each marker point in the abnormal region and a corresponding value in the standard luminance gradient curve as the luminance deviation coefficient.
According to the embodiment of the application, the image data generator is configured to compare the resolving power at each marker point in the abnormal region with the corresponding value in the standard resolving-power variation curve; and to designate abnormal regions whose marker points fall below a preset resolving-power threshold as severe-information-loss regions, and those whose marker points are above the threshold as correctable regions.
According to an embodiment of the present application, the image data generator is configured to obtain the image with the corrected brightness by least square fitting using the brightness deviation coefficient.
According to an embodiment of the application, the image data generator is further configured to generate a standard luminance gradient curve of the image, including: drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image; calculating the average value of the brightness gradient curves of the plurality of strips to obtain the average brightness gradient curve of the image; determining an edge field of view and a center field of view in the image; fitting a brightness gradient curve in the marginal field range according to the average brightness gradient curve in the central field range; and combining the average brightness gradient curve in the central field range with the fitted brightness gradient curve in the edge field range to generate the standard brightness gradient curve.
According to an embodiment of the application, the image data generator is further configured to generate a standard luminance gradient curve of the image, including: comparing the brightness gradient curves of the eight strips at diagonal positions in the marginal field of view, and calculating a deviation value; and correcting the standard brightness gradient curve by using the deviation value.
According to an embodiment of the application, the image data generator is further configured to generate a standard resolving-power variation curve of the image, comprising: obtaining the resolving power at each marker point in the image to plot the resolving-power variation curve of the image; fitting a resolving-power variation curve within the edge field of view from the curve of the image within the central field of view; and merging the central-field curve with the fitted edge-field curve to generate the standard resolving-power variation curve.
According to at least one aspect of the image edge brightness correction method and apparatus provided by the present application, at least one of the following beneficial effects can be achieved:
1. The application provides a novel image edge brightness correction method: the edge region of an image shot by a large-FOV camera module is processed at the pixel level by a neural network model, so that the corrected image keeps the maximum FOV while gaining high edge definition, smoothness and a strong sense of realism.
2. The application provides a novel apparatus for image edge brightness correction: without introducing any third-party measuring device, a reconstruction system equipped with a convolutional neural network model effectively repairs the images shot by a large-FOV camera module.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a method for image edge brightness correction according to an embodiment of the present application;
FIG. 2 shows the specific pattern of the target in an image edge brightness correction apparatus according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an image edge brightness correction method according to another embodiment of the present application;
FIG. 4 is a flow diagram of a method of training a convolutional neural network model according to one embodiment of the present application; and
fig. 5 shows a schematic block diagram of a computer system suitable for implementing the terminal device or server of the present application.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that the detailed description is merely illustrative of exemplary embodiments of the present application and does not limit the scope of the present application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that in this specification, the expressions first, second, third, etc. are used only to distinguish one feature from another, and do not represent any limitation on the features. Thus, a first color patch discussed below may also be referred to as a second color patch without departing from the teachings of the present application. And vice versa.
In the drawings, the thickness, size and shape of the components have been slightly adjusted for convenience of explanation. The figures are purely diagrammatic and not drawn to scale. As used herein, the terms "approximately", "about" and the like are used as terms of approximation, not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art.
It will be further understood that terms such as "comprising," "including," "having," and/or "containing," when used in this specification, are open-ended rather than closed-ended, and specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. Furthermore, when a statement such as "at least one of" appears after a list of listed features, it modifies the entire list of features rather than just individual elements in the list. Furthermore, when describing embodiments of the present application, the use of "may" means "one or more embodiments of the present application." Also, the term "exemplary" is intended to refer to an example or illustration.
Unless otherwise defined, all terms (including engineering and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. Unless explicitly defined or contradicted by context, the specific steps included in the methods described in the present application are not necessarily limited to the described order, and may be executed in any order or in parallel. The present application will be described in detail below with reference to the embodiments and the attached drawings.
Fig. 1 is a flowchart of an image edge brightness correction method according to an embodiment of the present application.
Referring to fig. 1, an image edge brightness correction method 100 according to an embodiment of the present application includes: step S101, shooting a target with a specific pattern to acquire an image of the specific pattern; step S102, determining an abnormal area of the image edge through a standard brightness gradient curve of the image, and calculating a brightness deviation coefficient of the abnormal area; step S103, judging a correctable area in the abnormal area based on the standard resolving power change curve of the image; and step S104, performing brightness correction on the correctable area according to the brightness deviation coefficient.
The above steps will be described in detail below.
The image to be corrected in step S101 can be obtained by shooting a target bearing a specific pattern with the camera module to be corrected, which is fixed on a module fixture. The center of the target can be arranged on the optical axis of the camera module, and the shooting range of the camera module is equal to the size of the specific pattern, so that the specific pattern on the target fills the entire image to be corrected.
Fig. 2 is a specific pattern of a target in the image edge brightness correction apparatus according to an embodiment of the present application. The specific pattern includes: a plurality of first color blocks and second color blocks of the same size, alternately arranged on a rectangular (including square) plane; and a plurality of strips extending from the center of the plane to its four sides and four corners, equally dividing the plane. Further, the first color block and the second color block in the specific pattern may be disposed in the area of the plane other than the strips, and a mark point may be disposed in each of the first color block and the second color block. The first color block and the second color block may have different colors to facilitate testing the resolving power of the image to be corrected that bears the above-mentioned specific pattern.
As shown in fig. 2, in one embodiment, the specific pattern of the target has a substantially square shape, in which eight stripes are drawn radiating from the center in a "米"-like (asterisk) layout, and first and second color patches of different colors, such as white and black patches, are drawn in a checkerboard arrangement in the regions other than the stripes. A cross mark may also be drawn at the center of the target for positioning, which facilitates shooting by the camera module under test.
A marker point may be set in at least one of the first patch and the second patch, and for example, a Region of Interest (ROI) in the first patch or the second patch may be selected as the marker point of the patch.
In step S102, the gray values of the sampling points in the image to be corrected may be acquired by, for example, an image data generator in the image edge brightness correction apparatus, and then a standard brightness gradient curve of the image to be corrected may be determined. In one embodiment of the present application, this can be achieved by:
and drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image to be corrected. For example, the central position of the image to be corrected is set as the origin of coordinates, an XY coordinate system is established, the seat value of the sampling point on each strip is obtained, and then the brightness gradient curve of each strip is drawn by taking the gray value of the sampling point as the vertical coordinate and the coordinate value of the sampling point as the horizontal coordinate.
The brightness gradient curves of the plurality of strips are then averaged to obtain the average brightness gradient curve of the image to be corrected.
The 0-0.8 field-of-view range of the image to be corrected is set as the central field of view, and the 0.8-1 field-of-view range as the edge field of view; in this application, the 0.8-1 field of view is the edge region of the image to be corrected.
The brightness gradient curve in the edge field of view can then be fitted, for example by a least-squares fit, based on the average brightness gradient curve over the central field of view.
The average brightness gradient curve in the central field-of-view range and the fitted brightness gradient curve in the edge field-of-view range can then be merged to generate the standard brightness gradient curve of the image to be corrected.
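The curve construction described in the steps above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: field positions are assumed to be normalized to the 0-1 range, the 0.8 field-of-view split and the linear least-squares extrapolation follow the description, and all function names are invented for illustration.

```python
def average_curve(strip_curves):
    """Average the brightness gradient curves of all strips point-by-point."""
    n = len(strip_curves)
    return [sum(vals) / n for vals in zip(*strip_curves)]

def least_squares_line(xs, ys):
    """Ordinary least-squares fit of a line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def standard_brightness_curve(strip_curves, fields):
    """Keep the measured average curve over the 0-0.8 central field of view
    and extrapolate the 0.8-1 edge field of view by least squares."""
    avg = average_curve(strip_curves)
    center = [(f, v) for f, v in zip(fields, avg) if f <= 0.8]
    a, b = least_squares_line([f for f, _ in center],
                              [v for _, v in center])
    # Measured values inside the central field, fitted values at the edge.
    return [v if f <= 0.8 else a * f + b for f, v in zip(fields, avg)]
```

A strip whose measured edge brightness collapses would thus be replaced, in the standard curve, by the trend extrapolated from its well-behaved central portion.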
Further, to ensure that the standard brightness gradient curve is applicable to all edges, including the four corners of the image, the uniformity of the four corners of the image to be corrected can be verified. Specifically, the brightness gradient curves of the eight strips within the edge field of view that lie at diagonal positions in the image to be corrected are compared with each other, and their deviation values are calculated. The generated standard brightness gradient curve is then corrected with the deviation values.
In one embodiment of the present application, a brightness threshold may be set; for example, 10-15% of the corresponding value in the standard brightness gradient curve may be set as the brightness threshold of each mark point. The difference between the gray value (measured value) of each mark point at the image edge and the corresponding value of that mark point on the standard brightness gradient curve is then calculated. When the difference of a mark point exceeds the brightness threshold, the first color block or second color block where the mark point is located can be set as an abnormal region of the image to be corrected.
Meanwhile, the ratio of the gray value of each mark point in the abnormal area to the corresponding value of the mark point in the standard brightness gradient curve can be set as a brightness deviation coefficient.
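Under the assumptions above, the abnormal-region test and the brightness deviation coefficient can be sketched together. This is an illustrative sketch only: mark points are keyed by arbitrary names, the 12% default threshold is one value inside the 10-15% range given in the description, and the function name is invented.

```python
def find_abnormal_marks(measured, standard, threshold_ratio=0.12):
    """For each mark point, flag it as abnormal when the gray-value
    deviation from the standard brightness gradient curve exceeds the
    threshold (10-15% of the standard value).  For abnormal points,
    record the brightness deviation coefficient: measured / standard."""
    abnormal = {}
    for point, gray in measured.items():
        std = standard[point]
        if abs(gray - std) > threshold_ratio * std:
            abnormal[point] = gray / std  # brightness deviation coefficient
    return abnormal
```

The color block containing each flagged mark point would then be treated as the abnormal region in the subsequent steps.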
In step S103, the resolving power at each mark point in the image to be corrected may be acquired by, for example, the image data generator in the image edge brightness correction apparatus. Evaluating the resolving power of a camera module by its Spatial Frequency Response (SFR) is currently an industry-recognized practice; in an embodiment of the present application, the resolving-power variation curve of the image to be corrected may therefore be obtained by plotting the SFR value of each mark point in the image. The specific steps are as follows:
and solving the resolution of each mark point in the image to be corrected to draw an image resolution change curve. For example, the SFR value of each mark point in the image to be corrected can be obtained by the image data generator, and the SFR value of the mark point is taken as the ordinate, and the seating position of the mark point is taken as the abscissa to draw the SFR value change curve of the image to be corrected.
The resolving-power variation curve in the edge field-of-view range of the image to be corrected is then fitted from the image resolving-power variation curve in the central field-of-view range.
The image resolving-power variation curve in the central field-of-view range and the fitted curve in the edge field-of-view range are merged to generate the standard resolving-power variation curve of the image to be corrected.
The correctable region within the abnormal region can be determined from the standard resolving-power variation curve of the image to be corrected. Specifically, this can be achieved by the following steps:
the threshold value of the resolution for determining the abnormal area is set, for example, 70% of the corresponding value of the resolution of the marker point in the standard resolution variation curve may be set as the threshold value of the resolution.
The resolving power of each mark point in the abnormal region is compared with its corresponding value on the standard resolving-power variation curve.
The abnormal region where a mark point below the set resolving-power threshold is located is set as a severe-information-loss region; for example, when the resolving power of a mark point is lower than 70% of the corresponding value on the standard resolving-power variation curve, the abnormal region where that mark point is located is a severe-information-loss region. Conversely, the abnormal region where a mark point above the threshold is located is set as a correctable region.
The correctable region is then fitted by the least-squares method using the brightness deviation coefficient to obtain the brightness-corrected image.
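The classification and correction steps above can be sketched as follows. This is an illustrative sketch under stated assumptions: the 70% threshold comes from the description, while the correction shown here is a simplification (dividing each gray value by the region's deviation coefficient) rather than the patent's least-squares fit; all names are invented.

```python
def split_abnormal_regions(sfr, standard_sfr, abnormal, threshold=0.70):
    """Split abnormal regions into a correctable set and a
    severe-information-loss set by comparing each mark point's SFR
    against 70% of its value on the standard resolving-power curve."""
    correctable, severe = [], []
    for point in abnormal:
        if sfr[point] >= threshold * standard_sfr[point]:
            correctable.append(point)
        else:
            severe.append(point)
    return correctable, severe

def correct_brightness(pixels, coefficient):
    """Simplified correction of a correctable region: divide each gray
    value by the brightness deviation coefficient (measured/standard),
    restoring it toward the standard curve; clip to the 8-bit range."""
    return [min(255, round(p / coefficient)) for p in pixels]
```

A severe-information-loss region would be left uncorrected, since its resolving power indicates the underlying detail is no longer recoverable by a brightness gain.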
Further, in an embodiment of the present application, a method for optimizing a luminance-corrected image through a trained convolutional neural network model is also provided.
Fig. 3 is a schematic diagram of an image edge brightness correction method 200 according to another embodiment of the present application. FIG. 4 is a flow diagram of a method 300 of training a convolutional neural network model according to one embodiment of the present application.
As shown in fig. 3, the convolutional neural network model for image edge brightness correction may include at least one feature extraction layer 2021 and a reconstruction layer 2022 connected after the feature extraction layer 2021. The feature extraction layer 2021 is configured to perform a convolution operation on the input image to obtain a feature map of the input image. In this embodiment, the feature extraction layer 2021 may include a plurality of convolution kernels to obtain a plurality of feature maps based on the input image, with different convolution kernels being used to extract different features in the input image. For example, the feature extraction layer in this embodiment may include 32 convolution kernels, which may result in 32 feature maps for the input image, which may reflect different features of the input image. The reconstruction layer 2022 is configured to perform a deconvolution operation on the feature maps obtained in the previous feature extraction layer, so as to reconstruct a clear image according to the feature maps. In other embodiments, the feature extraction layer may also include other numbers of convolution kernels without departing from the teachings of the present invention.
As shown in fig. 3, the first image 201 may correspond, for example, to the brightness-corrected image described above. The first image 201 is first input to the convolutional neural network model 202. In the model, the first feature extraction layer 2021 performs feature extraction on the input first image 201 to obtain, for example, 32 feature maps; the second feature extraction layer 2021 then performs feature extraction on the basis of these 32 feature maps to obtain a further plurality of feature maps, for example 32, 64, or more. It should be noted that the number of convolution kernels in each feature extraction layer is not limited to 32 or 64 but may be any other number; in other words, the number of feature maps obtained by each feature extraction layer is not particularly limited. As described above, each subsequent feature extraction layer 2021 obtains new features on the basis of the feature maps output by the previous layer. After the last feature extraction layer 2021 completes its extraction, the obtained feature maps are input to the reconstruction layer 2022, which performs a deconvolution operation on them to reconstruct the first result image 203, which is sharper (its edge brightness corrected again) relative to the first image 201.
Next, the first result image 203 is input to the convolutional neural network model 202 together with the first image 201. The first feature extraction layer 2021 extracts, for example, 32 feature maps from each of the two images (64 feature maps in total), and the 64 feature maps are then added correspondingly to obtain 32 feature maps; in the addition, for example, the feature maps obtained from the two images by the same convolution kernel are added together. The 32 summed feature maps are input to the subsequent feature extraction layers, and the subsequent operations are the same as described above for the first image 201; that is, the second result image 205 is obtained through the feature extraction operations of the subsequent feature extraction layers 2021 and the reconstruction operation of the reconstruction layer 2022.
Finally, the second result image 205 and the original image 206 are input together into the convolutional neural network model 202 and processed in the same way as described above for the first result image 203. After the second result image 205 and the original image 206 pass through the feature extraction operations of the feature extraction layers 2021 and the reconstruction operation of the reconstruction layer 2022, the result image 207 corresponding to the original image 206 is obtained, that is, the reconstructed sharp image with edge brightness corrected. The sharp image 207 has the same size as the original image, but after reconstruction its edge brightness is higher than that of the original image without distorting the image, giving it better definition.
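The three-pass flow described above can be summarized in a short sketch. The model here is an arbitrary stub callable, not the patent's CNN of feature-extraction and reconstruction layers; the point is only the wiring between passes (result 203 fed back with image 201, result 205 fed in with the original 206), and all names are illustrative.

```python
def iterative_refine(model, first_image, original_image):
    """Three-pass refinement of Fig. 3: each pass feeds the previous
    result back through the same model, and the final pass pairs the
    second result with the original image.  `model` is any callable
    mapping one or more images to a reconstructed image."""
    result1 = model(first_image)                 # first result image 203
    result2 = model(result1, first_image)        # second result image 205
    return model(result2, original_image)        # final sharp image 207
```

In the patent's model, combining two inputs corresponds to adding the feature maps produced by the same convolution kernel for each image; the stub below simply averages pixels to show the data flow.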
In the method according to an embodiment of the present application, the image resulting from the previous luminance correction is input to the convolutional neural network model in the next luminance correction process, so that the features of each image can be fully utilized, and the luminance correction process is more effective. In addition, as shown in FIG. 4, such a process flow may also enable the convolutional neural network model to be fully trained in a training process that will be described below.
In the training process, first, in step S301, a luminance abnormal image (test image) for training and a standard image corresponding thereto are selected from a test image set.
In step S302, an edge region of the luminance-abnormal image may be defined first, for example, an image in a field of view range of 0-0.8 may be defined as a central field of view region, and an image in a field of view range of 0.8-1 may be defined as an edge field of view region (edge region).
In step S303, the test image may be subjected to optimization processing by the image edge brightness correction method as described above with reference to fig. 3.
In step S304, the result image obtained from the last pass is compared with the standard image to determine the difference between them. The difference can be measured by a loss function; here, the loss function is the mean square error.
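The mean-square-error loss used in step S304 is, concretely (images taken here as flat lists of pixel values; the function name is illustrative):

```python
def mse_loss(result, standard):
    """Mean square error between the network's result image and the
    standard image, averaged over all pixels."""
    n = len(result)
    return sum((r - s) ** 2 for r, s in zip(result, standard)) / n
```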
In step S305, it may be determined whether the difference is greater than a predetermined value, which may be set according to actual conditions. When the difference is greater than the predetermined value, it indicates that the result image obtained by the convolutional neural network model has not yet reached the requirement, and further training is required, in this case, step S306 is executed to adjust the parameters of the convolutional neural network model according to the difference. After the adjustment of the parameters is completed, the process returns to step S301 again for the next round of training. The above process is repeated until the difference between the result image and the standard image is less than or equal to a predetermined value. When the difference between the result image and the standard image is equal to or less than a predetermined value, it indicates that the result image obtained by the convolutional neural network model has reached the requirement, in which case, the training may be ended and the processed model weight may be saved.
The trained convolutional neural network model can accurately reconstruct the image after the edge brightness correction.
The present application further provides an apparatus for edge brightness correction, the apparatus comprising: a light source, a target having a particular pattern, a modular fixture, an image data generator, and a reconstructor. The module clamp is used for fixing the camera module to be tested; the center of the target with the specific pattern can be arranged on the optical axis of the camera module, and the shooting range of the camera module is equal to the size of the specific pattern so as to obtain an image with the specific pattern; the image data generator is used for acquiring the gray value and the resolving power of the image and the edge brightness of the corrected image; and the reconstructor comprises a trained convolutional neural network model for correcting the image edge brightness.
Further, in one embodiment, the edge brightness correction apparatus further includes: and the output device is used for outputting the image after the edge brightness correction.
In one embodiment, the convolutional neural network model comprises: the characteristic extraction layer is used for extracting characteristics of the image to be processed or the image obtained by the previous edge brightness correction processing so as to obtain a plurality of characteristic graphs; and the reconstruction layer is positioned behind the feature extraction layer and is used for performing deconvolution processing on the extracted feature maps so as to reconstruct an image to be processed according to the extracted feature maps.
In one embodiment, the edge brightness correction apparatus further includes a trainer for training the convolutional neural network model, wherein the trainer is configured to: selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image; defining an edge area of the image with abnormal brightness; extracting feature data of the edge area in a feature extraction layer; optimizing the edge area by taking the mean square error as a loss function in a reconstruction layer; comparing the optimized result image with the standard image to determine the difference; when the difference is larger than a preset value, adjusting the parameters of the feature extraction layer and the reconstruction layer according to the difference; repeating the above steps until the difference is less than the predetermined value; and saving the trained model weight.
In one embodiment, the specific pattern of the target may include: a plurality of first color blocks and second color blocks which are same in size and are alternately arranged are arranged on the square plane; and a plurality of stripes extending from the center of the square plane to four sides and four corners, equally dividing the square plane, wherein the first color patch and the second color patch may have different colors and be disposed in an area other than the stripes in the square plane, and a mark point may be disposed in at least one of the first color patch and the second color patch.
In one embodiment, the image data generator may be configured to: determining an abnormal area of the edge of the image to be corrected through a standard brightness gradient curve, and calculating a brightness deviation coefficient of the abnormal area; judging a correctable area in the abnormal area through a standard resolving power change curve; and performing brightness correction on the correctable area according to the brightness deviation coefficient to obtain a brightness corrected image.
Further, the image data generator may be configured to: making difference between the gray value of each mark point in the edge field and the corresponding value in the standard brightness gradient curve; and setting the first color block or the second color block where the mark point with the difference value exceeding the preset brightness threshold value is located as an abnormal area.
The image data generator may be configured to: and setting the ratio of the gray value of each mark point in the abnormal area to the corresponding value in the standard brightness gradient curve as a brightness deviation coefficient.
The image data generator may be configured to: compare the resolving power of each mark point in the abnormal region with the corresponding value on the standard resolving-power variation curve; set the abnormal region where a mark point below the preset resolving-power threshold is located as a severe-information-loss region; and set the abnormal region where a mark point above the threshold is located as a correctable region. Performing brightness correction on the correctable region according to the brightness deviation coefficient may include: obtaining the brightness-corrected image by least-squares fitting using the brightness deviation coefficient.
In one embodiment, the image data generator may be configured to: generating a standard brightness gradient curve of an image to be corrected, comprising: drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image; calculating the average value of the brightness gradient curves of the plurality of strips to obtain the average brightness gradient curve of the image; determining an edge field of view and a center field of view in the image; fitting a brightness gradient curve in an edge field range according to the average brightness gradient curve in the central field range; and combining the average brightness gradient curve in the central field range with the brightness gradient curve in the fitted edge field range to generate a standard brightness gradient curve.
Further, the image data generator may be configured to: comparing brightness gradient curves of eight strips positioned at diagonal positions in the marginal field of view, and calculating a deviation value; and correcting the standard brightness gradient curve by using the deviation value.
In one embodiment, the image data generator may be configured to generate the standard resolving-power variation curve of the image by: obtaining the resolving power of each mark point in the image and drawing the image resolving-power variation curve; fitting the resolving-power variation curve in the edge field-of-view range from the image's curve in the central field-of-view range; and merging the image resolving-power variation curve in the central field-of-view range with the fitted curve in the edge field-of-view range to generate the standard resolving-power variation curve.
Fig. 5 shows a schematic block diagram of a computer system 400 suitable for implementing the terminal device or server of the present application.
The application also provides a computer system, which may be a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to FIG. 5, there is shown a schematic block diagram of a computer system 400 suitable for implementing the terminal device or server of the present application. As shown in fig. 5, the computer system 400 includes one or more processors, such as one or more central processing units (CPUs) 401 and/or one or more graphics processors (GPUs) 413, a memory, and a communication section 412. The processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 402 or loaded from a storage section 408 into a random access memory (RAM) 403. The communication section 412 may include, but is not limited to, a network card.
The processor may communicate with the read-only memory 402 and/or the random access memory 403 to execute the executable instructions, connect with the communication part 412 through the bus 404, and communicate with other target devices through the communication part 412, so as to complete the operations corresponding to any one of the methods provided by the embodiments of the present application, for example: determining an abnormal area of the edge of the image to be corrected according to a standard brightness gradient curve of the image of the target plate with the specific pattern, and calculating a brightness deviation coefficient of the abnormal area; judging a correctable area in the abnormal area based on a standard resolving power change curve of the image to be corrected; and performing brightness correction on the correctable area according to the brightness deviation coefficient.
Further, the processor executes the instructions to further train a convolutional neural network model for image edge brightness correction; and reconstructing the image after brightness correction through a convolutional neural network model.
Wherein the processor executing the above instructions to train the convolutional neural network model for image edge brightness correction may comprise: building a convolutional neural network model with a feature extraction layer and a reconstruction layer; selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image; defining an edge area of the image with abnormal brightness; extracting feature data of the edge area in a feature extraction layer; optimizing the edge region in a reconstruction layer; comparing the optimized result image with the standard image to determine the difference; when the difference is larger than a preset value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; repeating the above steps until the difference is less than the predetermined value.
In one embodiment, the specific pattern of the image to be processed in the system comprises: a plurality of first color blocks and second color blocks which are same in size and are alternately arranged are arranged on the square plane; and a plurality of stripes extending from the center of the square plane to four sides and four corners, and equally dividing the square plane, wherein the first color block and the second color block are arranged in the square plane in the area outside the stripes; the first color patch and the second color patch have different colors; and at least one of the first color block and the second color block is provided with a mark point.
In one embodiment, the processor executes the above instructions to further generate a standard luminance gradient curve for the image, comprising: drawing a brightness gradient curve of each strip according to the gray value of the sampling point distributed on each strip in the image; calculating the average value of the brightness gradient curves of the plurality of strips to obtain the average brightness gradient curve of the image; determining an edge field of view and a center field of view in the image; fitting a brightness gradient curve in an edge field range according to the average brightness gradient curve in the central field range; and combining the average brightness gradient curve in the central field range with the brightness gradient curve in the fitted edge field range to generate a standard brightness gradient curve.
Further, the processor executing the above instructions to further generate a standard brightness gradient curve of the image further comprises: comparing the brightness gradient curves of the eight strips positioned at diagonal positions in the marginal field of view with each other, and calculating a deviation value; and correcting the standard brightness gradient curve by using the deviation value.
In one embodiment, the processor executes the instructions to fit a luminance gradient curve over the image edge field of view using a least squares method.
In one embodiment, the processor executes the above instructions to determine an abnormal region of an image edge according to the following steps: setting a brightness threshold value; making difference values between the gray values of all the mark points in the image edge view field and the corresponding values in the standard brightness gradient curve; and setting the first color block or the second color block where the mark point with the difference value exceeding the brightness threshold value is located as an abnormal area.
In one embodiment, the processor executes the above instructions to set a ratio between the gray value of each marker point in the abnormal region and the corresponding value in the standard luminance gradient curve as a luminance deviation coefficient.
In one embodiment, the processor executing the above instructions further generates the standard resolving-power variation curve of the image, including: obtaining the resolving power of each mark point in the image and drawing the image resolving-power variation curve, for example, drawing an SFR variation curve from the SFR value of each mark point in the image; fitting the resolving-power variation curve in the edge field-of-view range from the curve of the central field-of-view range, for example by the least-squares method; and merging the resolving-power variation curve of the image in the central field-of-view range with the fitted curve in the edge field-of-view range to generate the standard resolving-power variation curve.
In one embodiment, the processor executes the instructions to fit the resolving power variation curve within the edge field of view of the image by a least square method.
In one embodiment, the processor executes the above instructions to obtain the SFR value of each mark point in the image to plot an SFR variation curve of the image.
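The fit-and-merge procedure for the standard resolving power curve might look like the following. A straight-line least-squares extrapolation from the central field is an assumption: the patent only says "a least square method" without fixing the model order.

```python
# Hedged sketch: a straight-line least-squares extrapolation is assumed.

def lstsq_line(xs, ys):
    """Closed-form least-squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def standard_resolving_curve(center_pos, center_sfr, edge_pos):
    """Fit the central-field SFR samples, extrapolate into the edge field,
    and merge the two segments into one standard curve."""
    a, b = lstsq_line(center_pos, center_sfr)
    return list(center_sfr) + [a + b * x for x in edge_pos]
```

The merged list is the "standard" curve against which each abnormal mark point's measured SFR is later compared.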
In one embodiment, the processor executes the instructions to determine a correctable region of the abnormal region of the image by: setting a resolving power threshold value; comparing the resolving power of each mark point in the abnormal region with the corresponding value in the standard resolving power variation curve; and setting an abnormal region where a mark point below the resolving power threshold is located as a severe information loss region, and setting an abnormal region where a mark point above the resolving power threshold is located as a correctable region.
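The triage step above can be sketched as follows. One interpretive assumption is flagged loudly: the text compares each measured value both with the standard curve and with a threshold, so here the threshold is read as a fraction of the standard curve's value at the same field position; the actual thresholding rule may differ.

```python
# Assumption: the resolving power threshold is a fraction of the standard
# curve's value at the same field position (the patent leaves this open).

def classify_abnormal_regions(measured_sfr, standard_sfr, threshold_ratio):
    """Split abnormal-region mark points into correctable ones and
    severe-information-loss ones."""
    correctable, severe = [], []
    for idx, (m, s) in enumerate(zip(measured_sfr, standard_sfr)):
        if m >= threshold_ratio * s:
            correctable.append(idx)   # enough detail survives to re-brighten
        else:
            severe.append(idx)        # detail is lost; brightening won't help
    return correctable, severe
```

The design rationale matches the claim: brightening a region whose fine detail is already destroyed only amplifies noise, so only regions above the threshold are passed on to the brightness correction step.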
In one embodiment, the processor executes the instructions to obtain a brightness-corrected image by least square fitting using the brightness deviation coefficient.
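A minimal sketch of that correction step, assuming one-dimensional field positions and a straight-line fit through the sparse deviation coefficients (the patent states only that the coefficients are used "through least square fitting", so both assumptions are illustrative):

```python
# Minimal sketch: 1-D field positions and a linear gain model are assumed.

def lstsq_line(xs, ys):
    """Closed-form least-squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def correct_brightness(positions, gray_values, coeff_positions, coefficients):
    """Fit a smooth line through the sparse brightness deviation coefficients,
    then divide each pixel by its interpolated coefficient, clamped to 8 bits."""
    a, b = lstsq_line(coeff_positions, coefficients)
    return [min(255.0, max(0.0, g / (a + b * x)))
            for x, g in zip(positions, gray_values)]
```

Fitting a smooth model through the coefficients, rather than applying each raw per-block ratio, avoids visible block boundaries in the corrected image.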
In addition, the RAM 403 may also store various programs and data necessary for the operation of the device. The CPU 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. When the RAM 403 is present, the ROM 402 is an optional module. The RAM 403 stores executable instructions, or writes executable instructions into the ROM 402 at runtime, and the executable instructions cause the processor 401 to perform the operations corresponding to the communication method described above. An input/output (I/O) interface 405 is also connected to the bus 404. The communication unit 412 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards), each connected to the bus link.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read out therefrom is installed into the storage section 408 as needed.
It should be noted that the architecture shown in fig. 5 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 5 may be selected, deleted, added or replaced according to actual needs; in different functional component settings, separate settings or integrated settings may also be used, for example, the GPU and the CPU may be separately set or the GPU may be integrated on the CPU, the communication part may be separately set or integrated on the CPU or the GPU, and so on. These alternative embodiments are all within the scope of the present disclosure.
The above description is only an embodiment of the present application and an illustration of the technical principles applied. It will be appreciated by a person skilled in the art that the scope of protection covered by the present application is not limited to the embodiments with a specific combination of the features described above, but also covers other embodiments with any combination of the features described above or their equivalents without departing from the technical idea. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (41)

1. An image edge brightness correction method, comprising:
shooting a target with a specific pattern to obtain an image of the specific pattern;
determining an abnormal region of the image edge through a standard brightness gradient curve of the image, and calculating a brightness deviation coefficient of the abnormal region;
determining a correctable region in the abnormal region based on a standard resolving power variation curve of the image; and
performing brightness correction on the correctable region according to the brightness deviation coefficient.
2. The method of claim 1, further comprising:
training a convolutional neural network model for correcting the image edge brightness; and
reconstructing the brightness-corrected image through the convolutional neural network model.
3. The method of claim 2, wherein training the convolutional neural network model for image edge brightness correction comprises:
building a convolutional neural network model with a feature extraction layer and a reconstruction layer;
selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image;
defining an edge region of the brightness abnormal image;
extracting the feature data of the edge area in the feature extraction layer;
optimizing the edge region in the reconstruction layer;
comparing the optimized result image with the standard image to determine the difference;
when the difference is greater than a predetermined value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and
repeating the above steps until the difference is less than the predetermined value.
4. The method of claim 1, wherein capturing a target having a specific pattern, acquiring an image of the specific pattern comprises:
arranging, on a square plane, a plurality of first color blocks and second color blocks which are the same in size and alternately arranged, and drawing a plurality of strips which extend from the center of the square plane to its four sides and four corners and equally divide the square plane, to form the specific pattern; and
setting the center of the target on the optical axis of a camera module, and making the shooting range of the camera module equal to the size of the specific pattern, so as to acquire the image of the specific pattern.
5. The method of claim 4, wherein arranging the plurality of first color blocks and second color blocks of the same size in an alternating arrangement on the square plane further comprises:
arranging the first color block and the second color block in the area of the square plane outside the strips;
setting the first color block and the second color block to different colors; and
setting a mark point on at least one of the first color block and the second color block.
6. The method of claim 4, further comprising:
generating a standard brightness gradient curve of the image, comprising:
drawing a brightness gradient curve of each strip according to the gray values of the sampling points distributed on each strip in the image;
calculating the average value of the brightness gradient curves of the plurality of strips to obtain an average brightness gradient curve of the image;
determining an edge field of view and a central field of view in the image;
fitting a brightness gradient curve within the edge field of view according to the average brightness gradient curve within the central field of view; and
combining the average brightness gradient curve within the central field of view and the fitted brightness gradient curve within the edge field of view to generate the standard brightness gradient curve.
7. The method of claim 6, wherein generating the standard brightness gradient curve further comprises:
comparing the brightness gradient curves of the eight strips located at diagonal positions within the edge field of view with one another, and calculating a deviation value; and
correcting the standard brightness gradient curve by using the deviation value.
8. The method of claim 6, wherein fitting the brightness gradient curve within the edge field of view according to the average brightness gradient curve within the central field of view further comprises:
fitting the brightness gradient curve within the edge field of view by a least square method.
9. The method of claim 6, wherein determining the abnormal region of the image edge through the standard brightness gradient curve of the image comprises:
setting a brightness threshold value;
taking the difference between the gray value of each mark point in the edge field of view and the corresponding value in the standard brightness gradient curve; and
setting the first color block or the second color block where a mark point whose difference exceeds the brightness threshold value is located as the abnormal region.
10. The method of claim 9, wherein calculating the brightness deviation coefficient of the abnormal region comprises:
setting the ratio of the gray value of each mark point in the abnormal region to the corresponding value in the standard brightness gradient curve as the brightness deviation coefficient.
11. The method of claim 4, further comprising:
generating a standard resolving power variation curve of the image, comprising:
calculating the resolving power of each mark point in the image to plot a resolving power variation curve of the image;
fitting a resolving power variation curve within the edge field of view according to the resolving power variation curve of the image within the central field of view; and
merging the resolving power variation curve of the image within the central field of view and the fitted resolving power variation curve within the edge field of view to generate the standard resolving power variation curve.
12. The method of claim 11, wherein fitting the resolving power variation curve within the edge field of view according to the resolving power variation curve of the image within the central field of view further comprises:
fitting the resolving power variation curve within the edge field of view by a least square method.
13. The method of claim 11, wherein calculating the resolving power of each mark point in the image to plot the resolving power variation curve of the image further comprises:
calculating the SFR value of each mark point in the image to plot an SFR variation curve of the image.
14. The method of claim 11, wherein determining the correctable region in the abnormal region based on the standard resolving power variation curve of the image comprises:
setting a resolving power threshold value;
comparing the resolving power of each mark point in the abnormal region with the corresponding value in the standard resolving power variation curve; and
setting an abnormal region where a mark point below the resolving power threshold is located as a severe information loss region, and setting an abnormal region where a mark point above the resolving power threshold is located as the correctable region.
15. The method of claim 1, wherein performing the brightness correction on the correctable region according to the brightness deviation coefficient further comprises:
obtaining the brightness-corrected image by least square fitting using the brightness deviation coefficient.
16. An image edge brightness correction system, comprising:
a memory storing computer readable instructions; and
a processor coupled to the memory that executes the instructions to:
determining an abnormal region of the image edge according to a standard brightness gradient curve of an image of a target having a specific pattern, and calculating a brightness deviation coefficient of the abnormal region;
determining a correctable region in the abnormal region based on a standard resolving power variation curve of the image; and
performing brightness correction on the correctable region according to the brightness deviation coefficient.
17. The system of claim 16, wherein the processor executing the instructions further:
training a convolutional neural network model for correcting the image edge brightness; and
reconstructing the brightness-corrected image through the convolutional neural network model.
18. The system of claim 17, wherein the processor executing the instructions to train the convolutional neural network model for image edge brightness correction comprises:
building a convolutional neural network model with a feature extraction layer and a reconstruction layer;
selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image;
defining an edge region of the brightness abnormal image;
extracting the feature data of the edge area in the feature extraction layer;
optimizing the edge region in the reconstruction layer;
comparing the optimized result image with the standard image to determine the difference;
when the difference is greater than a predetermined value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and
repeating the above steps until the difference is less than the predetermined value.
19. The system of claim 16, wherein the specific pattern comprises:
a plurality of first color blocks and second color blocks of the same size arranged alternately on a square plane; and
a plurality of strips extending from the center of the square plane to its four sides and four corners and equally dividing the square plane,
wherein the first color block and the second color block are arranged in the area of the square plane outside the strips; the first color block and the second color block have different colors; and at least one of the first color block and the second color block is provided with a mark point.
20. The system of claim 16, wherein the processor executes the instructions to further:
generating a standard brightness gradient curve of the image, comprising:
drawing a brightness gradient curve of each strip according to the gray values of the sampling points distributed on each strip in the image;
calculating the average value of the brightness gradient curves of the plurality of strips to obtain an average brightness gradient curve of the image;
determining an edge field of view and a central field of view in the image;
fitting a brightness gradient curve within the edge field of view according to the average brightness gradient curve within the central field of view; and
combining the average brightness gradient curve within the central field of view and the fitted brightness gradient curve within the edge field of view to generate the standard brightness gradient curve.
21. The system of claim 20, wherein the processor executes the instructions to further:
generating the standard brightness gradient curve, including:
comparing the brightness gradient curves of the eight strips located at diagonal positions within the edge field of view with one another, and calculating a deviation value; and
correcting the standard brightness gradient curve by using the deviation value.
22. The system of claim 20, wherein the processor executes the instructions to fit the brightness gradient curve within the edge field of view by a least square method.
23. The system of claim 16, wherein the processor executes the instructions to determine an abnormal region of the image edge according to the following steps:
setting a brightness threshold value;
taking the difference between the gray value of each mark point in the edge field of view and the corresponding value in the standard brightness gradient curve; and
setting the first color block or the second color block where a mark point whose difference exceeds the brightness threshold value is located as the abnormal region.
24. The system of claim 16, wherein the processor executes the instructions to set the ratio between the gray value of each mark point in the abnormal region and the corresponding value in the standard brightness gradient curve as the brightness deviation coefficient.
25. The system of claim 16, wherein the processor executes the instructions to further:
generating a standard resolving power variation curve of the image, comprising:
calculating the resolving power of each mark point in the image to plot a resolving power variation curve of the image;
fitting a resolving power variation curve within the edge field of view according to the resolving power variation curve of the image within the central field of view; and
merging the resolving power variation curve of the image within the central field of view and the fitted resolving power variation curve within the edge field of view to generate the standard resolving power variation curve.
26. The system of claim 25, wherein the processor executes the instructions to fit the resolving power variation curve within the edge field of view by a least square method.
27. The system of claim 25, wherein the processor executes the instructions to obtain the SFR value of each mark point in the image to plot an SFR variation curve of the image.
28. The system of claim 16, wherein the processor executes the instructions to determine a correctable region in the abnormal region by:
setting a resolving power threshold value;
comparing the resolving power of each mark point in the abnormal region with the corresponding value in the standard resolving power variation curve; and
setting an abnormal region where a mark point below the resolving power threshold is located as a severe information loss region, and setting an abnormal region where a mark point above the resolving power threshold is located as the correctable region.
29. The system of claim 16, wherein the processor executes the instructions to obtain a brightness-corrected image by least square fitting using the brightness deviation coefficient.
30. An apparatus for image edge brightness correction, the apparatus comprising:
a module clamp for fixing a camera module to be tested;
a target with a specific pattern, the center of which is arranged on the optical axis of the camera module, the shooting range of the camera module being equal to the size of the specific pattern, so as to acquire an image of the specific pattern;
an image data generator for acquiring the gray values and the resolving power of the image and correcting the edge brightness of the image; and
a reconstructor provided with a convolutional neural network model for correcting the edge brightness of the image.
31. The apparatus of claim 30, wherein the convolutional neural network model comprises:
a feature extraction layer for performing feature extraction on an image to be processed or an image obtained by a previous edge brightness correction process, so as to obtain a plurality of feature maps; and
a reconstruction layer, located after the feature extraction layer, for performing deconvolution processing on the extracted feature maps, so as to reconstruct the image to be processed according to the extracted feature maps.
32. The apparatus of claim 31, further comprising a trainer for training the convolutional neural network model, wherein the trainer is configured to:
selecting a brightness abnormal image for training and a standard image corresponding to the brightness abnormal image;
defining an edge region of the brightness abnormal image;
extracting the feature data of the edge area in the feature extraction layer;
optimizing the edge region in the reconstruction layer;
comparing the optimized result image with the standard image to determine the difference;
when the difference is greater than a predetermined value, adjusting parameters of the feature extraction layer and the reconstruction layer according to the difference; and
repeating the above steps until the difference is less than the predetermined value.
33. The apparatus of claim 30, wherein the specific pattern comprises:
a plurality of first color blocks and second color blocks of the same size arranged alternately on a square plane; and
a plurality of strips extending from the center of the square plane to its four sides and four corners and equally dividing the square plane,
wherein the first color block and the second color block are arranged in the area of the square plane outside the strips; the first color block and the second color block have different colors; and at least one of the first color block and the second color block is provided with a mark point.
34. The apparatus of claim 30, wherein the image data generator is configured to:
determining an abnormal region of the image edge through the standard brightness gradient curve of the image, and calculating a brightness deviation coefficient of the abnormal region;
determining a correctable region in the abnormal region based on the standard resolving power variation curve of the image; and
performing brightness correction on the correctable region according to the brightness deviation coefficient.
35. The apparatus of claim 34, wherein the image data generator is configured to:
taking the difference between the gray value of each mark point in the edge field of view and the corresponding value in the standard brightness gradient curve, and setting the first color block or the second color block where a mark point whose difference exceeds a predetermined brightness threshold value is located as the abnormal region.
36. The apparatus of claim 34, wherein the image data generator is configured to:
setting the ratio of the gray value of each mark point in the abnormal region to the corresponding value in the standard brightness gradient curve as the brightness deviation coefficient.
37. The apparatus of claim 34, wherein the image data generator is configured to:
comparing the resolving power of each mark point in the abnormal region with the corresponding value in the standard resolving power variation curve; and setting an abnormal region where a mark point below a predetermined resolving power threshold is located as a severe information loss region, and setting an abnormal region where a mark point above the resolving power threshold is located as the correctable region.
38. The apparatus of claim 34, wherein the image data generator is configured to:
obtaining the brightness-corrected image by least square fitting using the brightness deviation coefficient.
39. The apparatus of claim 30, wherein the image data generator is further configured to generate a standard luminance gradient curve for the image, comprising:
drawing a brightness gradient curve of each strip according to the gray values of the sampling points distributed on each strip in the image;
calculating the average value of the brightness gradient curves of the plurality of strips to obtain an average brightness gradient curve of the image;
determining an edge field of view and a central field of view in the image;
fitting a brightness gradient curve within the edge field of view according to the average brightness gradient curve within the central field of view; and
combining the average brightness gradient curve within the central field of view and the fitted brightness gradient curve within the edge field of view to generate the standard brightness gradient curve.
40. The apparatus of claim 39, wherein the image data generator is further configured to generate a standard luminance gradient curve for the image, comprising:
comparing the brightness gradient curves of the eight strips located at diagonal positions within the edge field of view with one another, and calculating a deviation value; and
correcting the standard brightness gradient curve by using the deviation value.
41. The apparatus of claim 30, wherein the image data generator is further configured to generate the standard resolving power variation curve of the image, comprising:
calculating the resolving power of each mark point in the image to plot a resolving power variation curve of the image;
fitting a resolving power variation curve within the edge field of view according to the resolving power variation curve of the image within the central field of view; and
merging the resolving power variation curve of the image within the central field of view and the fitted resolving power variation curve within the edge field of view to generate the standard resolving power variation curve.
CN202010850604.0A 2020-08-21 2020-08-21 Image edge brightness correction method, device and system Pending CN114078094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010850604.0A CN114078094A (en) 2020-08-21 2020-08-21 Image edge brightness correction method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010850604.0A CN114078094A (en) 2020-08-21 2020-08-21 Image edge brightness correction method, device and system

Publications (1)

Publication Number Publication Date
CN114078094A true CN114078094A (en) 2022-02-22

Family

ID=80282570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010850604.0A Pending CN114078094A (en) 2020-08-21 2020-08-21 Image edge brightness correction method, device and system

Country Status (1)

Country Link
CN (1) CN114078094A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252776A (en) * 2023-09-26 2023-12-19 钛玛科(北京)工业科技有限公司 Image adjustment method, device and equipment suitable for multiple materials
CN117252776B (en) * 2023-09-26 2024-04-30 钛玛科(北京)工业科技有限公司 Image adjustment method, device and equipment suitable for multiple materials

Similar Documents

Publication Publication Date Title
Cao et al. Contrast enhancement of brightness-distorted images by improved adaptive gamma correction
CN109817170B (en) Pixel compensation method and device and terminal equipment
CN107408367B (en) Method, device and system for correcting unevenness of display screen
CN109036245B (en) Display processing method and device, integrated circuit and computer storage medium
CN101853504B (en) Image quality evaluating method based on visual character and structural similarity (SSIM)
CN111192552B (en) Multi-channel LED spherical screen geometric correction method
CN105678700B (en) Image interpolation method and system based on prediction gradient
CN108009997B (en) Method and device for adjusting image contrast
CN109068025B (en) Lens shadow correction method and system and electronic equipment
CN108022223B (en) Tone mapping method based on logarithm mapping function blocking processing fusion
US20230084728A1 (en) Systems and methods for object measurement
CN112669758B (en) Display screen correction method, device, system and computer readable storage medium
CN109686342B (en) Image processing method and device
CN110807731A (en) Method, apparatus, system and storage medium for compensating image dead pixel
US8036456B2 (en) Masking a visual defect
CN108074220A (en) A kind of processing method of image, device and television set
CN111951172A (en) Image optimization method, device, equipment and storage medium
CN108305232A (en) A kind of single frames high dynamic range images generation method
CN107113412B (en) Display device, display correction device, display correction system and display bearing calibration
CN102236790A (en) Image processing method and device
CN114078094A (en) Image edge brightness correction method, device and system
CN113409247B (en) Multi-exposure fusion image quality evaluation method
CN106971375B (en) Image amplification processing method and device
CN111599294B (en) Method and device for evaluating granular sensation of display screen
CN109978859B (en) Image display adaptation quality evaluation method based on visible distortion pooling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination