CN112669238A - Method for accurately restoring original image of digital image after color correction - Google Patents



Publication number
CN112669238A
Authority
CN
China
Prior art keywords
image
color
pixel
pixels
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011636596.6A
Other languages
Chinese (zh)
Other versions
CN112669238B (en)
Inventor
罗运辉
王庆
陈业红
徐倩倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202011636596.6A priority Critical patent/CN112669238B/en
Publication of CN112669238A publication Critical patent/CN112669238A/en
Application granted granted Critical
Publication of CN112669238B publication Critical patent/CN112669238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method by which a digital image can accurately recover its original after color correction. From the original image and the color-corrected image, a color thumbnail matrix is generated based on K-means clustering, a nonlinear least squares support vector machine model is trained, and an error compensation matrix is estimated for the saturated regions of the image; the resulting image restoration model is embedded into the image. When the original image needs to be restored, the model parameters are extracted from the image carrying the restoration model and the transformation is applied, accurately recovering the original image. Even when color correction or some other nonlinear processing drives the color component values of some pixels beyond the gray-level range, causing truncation and saturation, the method can still effectively and accurately restore the colors of the original image. The method favors good color traceability of digital images during transfer and reproduction, and can be effectively applied to the accurate readjustment of image color display and to re-editing after image color correction in printing and copying workflows.

Description

Method for accurately restoring original image of digital image after color correction
Technical Field
The invention relates to the technical field of printing graphics and text processing, and in particular to a method for accurately restoring the original image of a digital image after color correction.
Background
In the process of capturing digital images, for various reasons such as underexposure or capture under artificial light or unusual natural light, the whole image may exhibit a certain color cast. When such images need to be accurately displayed and reproduced, color correction is usually required so that they display true and correct colors.
Color correction generally employs the formula:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = C \begin{bmatrix} r \\ g \\ b \end{bmatrix}$$

where R, G, B are the red, green, and blue color component values of the corrected image pixel, and r, g, b are the red, green, and blue color component values of the same pixel in the original image; C is the color correction matrix,

$$C = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix}$$

where the c_ij (i, j = 1, 2, 3) are the color correction coefficients. The color correction matrix C therefore determines the color values of the color-corrected pixels.
Let the gray level of each color channel of the color image be L, so that each color component lies in the range [0, L-1]. By the color correction formula, the red, green, and blue component values of a color-corrected pixel are:

$$R = c_{11} r + c_{12} g + c_{13} b, \quad G = c_{21} r + c_{22} g + c_{23} b, \quad B = c_{31} r + c_{32} g + c_{33} b$$

When the original pixel color values r, g, b are transformed with these coefficients, some components of the corrected pixel color value R, G, B may fall outside the range [0, L-1]. These components are then truncated: values less than 0 are replaced by 0 and values greater than L-1 are replaced by L-1, so part of the image color information is lost. That is, after color correction the color values of some pixels may be saturated, resulting in color distortion.
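The correction-plus-truncation behaviour just described can be sketched in a few lines of NumPy; this is a minimal illustration, with all function and variable names ours rather than the patent's:

```python
import numpy as np

L = 256  # gray levels per channel

def color_correct(img, C):
    """Apply a 3x3 color-correction matrix C to an RGB image and
    truncate out-of-range components, as in the correction formula."""
    corrected = img.astype(np.float64) @ C.T   # per pixel: [R G B]^T = C [r g b]^T
    return np.clip(corrected, 0, L - 1)        # truncation causes saturation

# Example: a strong red boost pushes a bright pixel out of range.
C = np.array([[1.4, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
original = np.array([[[200, 100, 50], [10, 20, 30]]], dtype=np.float64)
corrected = color_correct(original, C)
# 1.4 * 200 = 280 exceeds L-1 = 255, so the red component saturates at 255
```

Because the clipped value 255 no longer determines the pre-correction value 280, the inverse matrix alone cannot recover such a pixel, which is exactly the problem the patent addresses.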
In general, the saturated color region of a color-corrected image, i.e., the set of pixels whose color component values lie at L-1 or at 0, occupies only a small proportion of the image; the color distortion introduced by correction is then slight and hard to perceive, and the inverse transformation of the color correction,

$$\begin{bmatrix} r \\ g \\ b \end{bmatrix} = C^{-1} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

restores the original image and its colors well. However, in some cases, for example when the original image must be restored accurately or when the proportion of saturated color area in the image is not negligible, an image restored directly by this inverse transformation differs noticeably from the actual original image.
Therefore, the invention provides a method for accurately restoring the original image of a color-corrected digital image, to solve these problems.
Disclosure of Invention
In view of the above technical problems, the present invention provides a method for accurately restoring a color-corrected digital image to its original, so that a digital image can still recover the original after color correction or other nonlinear processing. Even when some pixel color components exceed the gray-level range and are truncated and saturated, the method can still effectively and accurately restore the original colors, preserving the color information of the original image to the greatest extent. The method favors good traceability of digital image color information during transfer and reproduction, and can be effectively applied to the accurate readjustment of image color display and to re-editing after color correction in printing and copying workflows.
A method for accurately restoring an original image of a color corrected digital image comprises the following steps:
the first stage is as follows: and establishing an image recovery model according to the original image and the image after the color correction of the image.
S101: the color corrected image is separated into saturated and unsaturated regions, and the regions corresponding to the regions in the original image are found. The specific process comprises the following steps:
Denote the color-corrected image by Ic and the original image by Io. The saturated region in Ic is denoted Ic_s and the remaining, unsaturated part is denoted Ic_ns, i.e.

$$I_c = I_{c\_s} \cup I_{c\_ns}$$

The region of the original image Io corresponding to the saturated region of the color-corrected image is denoted Io_s, and the remainder Io_ns, i.e.

$$I_o = I_{o\_s} \cup I_{o\_ns}$$
Let the gray level of each RGB color channel of the image be L, i.e., each color component lies in the range [0, L-1]. Traverse every color component value of every pixel: if all three component values lie in the range [1, L-2], the pixel is considered to be in the unsaturated region; if any component value equals 0 or L-1, the pixel is considered to be in the saturated region. Record the row-column positions (u_i, v_i) of all pixels in the saturated region to form the position sequence

$$P_s = \{(u_i, v_i) \mid i = 1, 2, \ldots, n_s\}$$

where n_s is the number of pixels in the saturated region.
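The saturated/unsaturated split of step S101 can be sketched as follows; a minimal NumPy version, with all names illustrative:

```python
import numpy as np

def split_saturation(img, L=256):
    """A pixel is saturated if any of its color components equals
    0 or L-1 (i.e. was clipped); all remaining pixels form the
    unsaturated region. Returns the mask and the position sequence."""
    sat_mask = np.any((img == 0) | (img == L - 1), axis=-1)
    rows, cols = np.nonzero(sat_mask)
    # row-column positions of saturated pixels (the sequence P_s)
    positions = list(zip(rows.tolist(), cols.tolist()))
    return sat_mask, positions

# Toy 2x2 corrected image: top-left and bottom-left pixels are clipped.
Ic = np.array([[[255, 120,  30], [100, 100, 100]],
               [[  0,  50,  60], [  1, 254, 128]]])
mask, P_s = split_saturation(Ic)
```

Note that a component value of 1 or 254 still counts as unsaturated, consistent with the [1, 254] range given for 256 gray levels in the detailed description.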
S102: and extracting a color abbreviation matrix of the unsaturated region based on the K-means clustering to construct a modeling data set. The specific process comprises the following steps:
(a) Pixel color de-duplication: within the unsaturated region Ic_ns of Ic, keep only one pixel point for each distinct color and remove the pixels with repeated colors, obtaining a pixel thumbnail set Φ.
(b) Partition the pixels into blocks. Let the number of pixels in Φ be n_φ, and group the pixels N_φ at a time, with N_φ chosen between 25 and 100; all pixels are randomly divided into M blocks I_b1, I_b2, ..., I_bM. When n_φ is divisible by N_φ,

$$M = n_\phi / N_\phi$$

and when it is not,

$$M = \mathrm{floor}(n_\phi / N_\phi) + 1$$

in which case the first M-1 pixel blocks each contain N_φ pixels and the M-th block contains Mod(n_φ, N_φ) pixels, where floor(·) is the rounding-down function and Mod(·,·) is the remainder function.
(c) Perform K-means clustering on all pixel blocks one by one according to their color values. Let the cluster count of the j-th pixel block I_bj be k_j, j = 1, 2, ..., M. First set k_j = 1 and cluster; compute the Euclidean distance d from each pixel color to its cluster center and the maximum such distance d_max. If d_max is smaller than a threshold η, stop clustering; otherwise set k_j = k_j + 1, re-cluster, and again test whether the maximum distance d_max within every class is below η; if so, stop, and if not, continue with k_j = k_j + 1 and re-cluster, until the maximum distance from all pixels in each class to their cluster center is below the threshold η. The threshold η may be chosen between 5 and 8.
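The grow-until-tight clustering rule of step (c) can be sketched with plain-NumPy Lloyd iterations; the initialization scheme and all names here are our own assumptions, as the patent does not prescribe them:

```python
import numpy as np

def adaptive_kmeans(pixels, eta=6.0, max_iter=50, seed=0):
    """Grow the cluster count k until every pixel lies within Euclidean
    distance eta of its cluster centre, mirroring the k_j = k_j + 1
    loop of step (c)."""
    rng = np.random.default_rng(seed)
    pts = pixels.astype(float)
    n = len(pts)
    for k in range(1, n + 1):
        centers = pts[rng.choice(n, size=k, replace=False)].copy()
        for _ in range(max_iter):                 # standard Lloyd iterations
            d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for c in range(k):
                if np.any(labels == c):           # skip empty clusters
                    centers[c] = pts[labels == c].mean(axis=0)
        d_max = np.linalg.norm(pts - centers[labels], axis=1).max()
        if d_max < eta:                           # stopping rule: d_max < eta
            return k, labels, centers
    return n, np.arange(n), pts

# Two tight color groups: one cluster cannot satisfy eta, two can.
colors = np.array([[10, 10, 10], [12, 11, 10], [200, 0, 0], [202, 2, 1]])
k, labels, centers = adaptive_kmeans(colors, eta=6.0)
```

With η between 5 and 8, each resulting cluster is a ball of color values small enough that its two extreme points represent it faithfully.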
The value of k_j at which clustering stops is the final cluster count of pixel block I_bj. For each of the k_j clusters, find the pixel P_min,i closest to the cluster center and the pixel P_max,i farthest from the cluster center, thereby constructing the thumbnail point set of pixel block I_bj:

$$\Theta_j = \{P_{min,i},\; P_{max,i} \mid i = 1, 2, \ldots, k_j\}$$

Merging the M point sets obtained from all M pixel blocks I_b1, I_b2, ..., I_bM gives the pixel thumbnail point set

$$\Phi = \bigcup_{j=1}^{M} \Theta_j$$
(d) Record the number of pixels n_φ in the pixel thumbnail set Φ and update the block count as

$$M \leftarrow \mathrm{floor}(M/4) + 1$$

then randomly divide Φ into M blocks again. When n_φ is divisible by M, each pixel block contains n_φ / M pixels; when it is not, the first M-1 pixel blocks each contain floor(n_φ / M) pixels and the M-th block contains the remaining pixels, where floor(·) is the rounding-down function. After blocking, perform K-means clustering on all pixel blocks one by one according to color values as in step (c), obtaining a new pixel thumbnail set Φ. Repeat this loop until the number of blocks M reaches 1, giving the final pixel thumbnail set Φ.
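The shrinking block schedule of steps (b) and (d) can be sketched as follows; a pure-Python illustration, with names and the default group size ours:

```python
import math

def block_schedule(n_pixels, N_block=50):
    """Initial block count M from the group size N_block (25 to 100 in
    the patent), then M shrinks as M = floor(M/4) + 1 until M = 1."""
    if n_pixels % N_block == 0:
        M = n_pixels // N_block
    else:
        M = math.floor(n_pixels / N_block) + 1
    schedule = [M]
    while M > 1:
        M = math.floor(M / 4) + 1   # re-blocking rule of step (d)
        schedule.append(M)
    return schedule
```

Since floor(M/4) + 1 < M for every M greater than 1, the loop always terminates; for example 1000 pixels in groups of 50 give the block counts 20, 6, 2, 1 over successive rounds.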
(e) Construct the color thumbnail matrices. Let the number of pixels in the final pixel thumbnail set Φ be N, and let the positions of those pixel colors in Ic form the sequence P_N = {(u_i, v_i), i = 1, 2, ..., N}, where (u_i, v_i) is the row-column position of the i-th pixel in Ic. When several pixels in Ic share a color in Φ, one of them may be selected at random. The pixel color values of Ic and Io at the positions of P_N form the color thumbnail matrices Ic_th and Io_th:

$$I_{c\_th} = \begin{bmatrix} r_1^c & g_1^c & b_1^c \\ \vdots & \vdots & \vdots \\ r_N^c & g_N^c & b_N^c \end{bmatrix}, \quad I_{o\_th} = \begin{bmatrix} r_1^o & g_1^o & b_1^o \\ \vdots & \vdots & \vdots \\ r_N^o & g_N^o & b_N^o \end{bmatrix}$$

where r, g, b denote the red, green, and blue component values of each pixel.
S103: a nonlinear transformation model from the color-corrected image to the original image is established based on a Least Squares Support Vector Machine (LSSVM). The specific process comprises the following steps:
(a) Use the N rows of the color thumbnail matrices Ic_th and Io_th obtained in step S102 as input and output training samples respectively, giving N sample points {x_i, y_i}, i = 1, 2, ..., N, where the input x_i ∈ R^3 is the RGB color value of a pixel of Ic_th and the output y_i ∈ R^3 is the RGB color value of the corresponding pixel of Io_th.
(b) Train the LSSVM nonlinear mapping model to establish the color mapping from the unsaturated region of Ic to the corresponding region of Io. Introduce a nonlinear mapping φ(·) that projects the input data into a high-dimensional feature space, turning the low-dimensional nonlinear regression problem into a linear regression problem in that space. The regression estimate is expressed as

$$y(x) = w^T \varphi(x) + b \tag{1}$$
where w is the weight parameter vector of the regression hyperplane, w ∈ R^N, and b is the bias, b ∈ R^3. The following optimization problem is defined:

$$\min_{w,b,e} J(w, b, e) = \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^2, \quad \text{s.t.}\; y_i = w^T \varphi(x_i) + b + e_i, \; i = 1, \ldots, N \tag{2}$$

where J(w, b, e) is the optimization objective, whose first and second terms control the model complexity and the size of the errors respectively; γ is a penalty factor, γ > 0; e_i is the slack variable of the insensitive loss function, e = [e_1 e_2 ... e_N]^T; and s.t. denotes the constraints.
(c) To solve the equality-constrained quadratic programming problem of (b) in the dual space, define the Lagrange function

$$L(w, b, e, \alpha) = J(w, b, e) - \sum_{i=1}^{N} \alpha_i \left( w^T \varphi(x_i) + b + e_i - y_i \right) \tag{3}$$
where α_i is a Lagrange multiplier. The analytic solution of the optimization problem follows from the KKT optimality conditions:

$$\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{N} \alpha_i \varphi(x_i), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{N} \alpha_i = 0, \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow w^T \varphi(x_i) + b + e_i - y_i = 0 \tag{4}$$
Eliminating w and e_i from equations (4) yields the corresponding linear matrix equation:

$$\begin{bmatrix} 0 & I_l^T \\ I_l & \Omega + \gamma^{-1} I_N \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \tag{5}$$

where y = [y_1 y_2 ... y_N]^T, I_l = [1 1 ... 1]^T is an N-dimensional all-ones vector, I_N is the N×N identity matrix, α = [α_1 α_2 ... α_N]^T, and Ω is the kernel matrix, an N×N square matrix whose element in row k and column l is

$$\Omega_{kl} = K(x_k, x_l), \quad k, l = 1, 2, \ldots, N$$

K(·,·) is a kernel function, taken in the form of a Gaussian radial basis kernel:
$$K(x_k, x_l) = \exp\left( -\frac{\|x_k - x_l\|^2}{2\sigma^2} \right)$$

where σ is the kernel width, which reflects the radius of the closed boundary. Preferably, the penalty factor γ is chosen between 10 and 20 and the kernel width σ between 0.01 and 0.1.
(d) Solving equation (5) for α and b gives the LSSVM-based nonlinear transformation model:

$$y(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b \tag{6}$$
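The training and prediction steps above reduce to one linear solve plus kernel evaluations. Below is a compact NumPy sketch of a textbook LSSVM regression fit, not the patent's own code; the toy data and the γ, σ values (chosen for inputs normalized to [0, 1], rather than the ranges the patent prefers for raw color values) are ours:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian RBF kernel K(x_k, x_l) = exp(-||x_k - x_l||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma, sigma):
    """Solve the LSSVM linear system
        [ 0   1^T             ] [ b     ]   [ 0 ]
        [ 1   Omega + I/gamma ] [ alpha ] = [ y ]
    for alpha and b; the three RGB outputs share the kernel and are
    solved jointly as a multi-column right-hand side."""
    N = X.shape[0]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma
    rhs = np.concatenate([np.zeros((1,) + y.shape[1:]), y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                       # alpha, b

def lssvm_predict(Xnew, X, alpha, b, sigma):
    """Evaluate y(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(Xnew, X, sigma) @ alpha + b

# Toy fit: colors normalized to [0, 1], target is a mild global shift.
X = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9], [0.2, 0.9, 0.4]])
y = 0.9 * X + 0.05
alpha, b = lssvm_train(X, y, gamma=1000.0, sigma=0.5)
pred = lssvm_predict(X, X, alpha, b, sigma=0.5)
```

With a large penalty factor the fitted model nearly interpolates the training samples, since the per-sample error equals α_i / γ by the third KKT condition.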
s104: and estimating an error compensation matrix aiming at the saturation region by using the established LSSVM model.
(a) Using the RGB color components of the pixels of the color-corrected image Ic as inputs, compute with the LSSVM nonlinear transformation model obtained in S103 to obtain a set of predicted RGB output values. Taking these predicted values as pixel colors, with pixel positions kept in one-to-one correspondence with those of Ic, gives the predicted original image Iop.
(b) Compute the difference between the original image Io and the predicted original image Iop, D = Io - Iop; that is, each pixel value of D is the difference of the color components of the corresponding pixels of Io and Iop. In D, keep unchanged the color component values of the pixels at the positions listed in the sequence P_s obtained in step S101, and set the color component values of all other pixels to 0. The resulting D is the error compensation matrix.
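The construction of D can be sketched as follows; a minimal NumPy version, with the saturated positions given as a boolean mask and all names illustrative:

```python
import numpy as np

def error_compensation(Io, Iop, sat_mask):
    """D = Io - Iop, with the color differences kept only at the
    saturated pixel positions and set to 0 everywhere else.
    Differences are signed, hence the int64 dtype."""
    D = Io.astype(np.int64) - Iop.astype(np.int64)
    D[~sat_mask] = 0        # zero out every unsaturated pixel
    return D

# Toy 1x2 image: only the first pixel is saturated, so only its
# difference survives in the compensation matrix.
Io  = np.array([[[100, 100, 100], [50, 50, 50]]])
Iop = np.array([[[ 90, 105, 100], [49, 50, 50]]])
sat = np.array([[True, False]])
D = error_compensation(Io, Iop, sat)
```

Because D is nonzero only on the (usually small) saturated region, it is sparse, which is what makes the compact storage of step S105 possible.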
S105: and taking the LSSVM transformation model parameters and the error compensation matrix as an image recovery model, and embedding the image after color correction.
The color-corrected image Ic is stored in the JPG format, and the parameters α and b of the LSSVM model obtained in step S103, the values of γ and σ selected in step S103, and the value of the error compensation matrix D obtained in step S104 are sequentially stored in the annotation field of the JPG image. The values of α, b, γ, σ, D are stored in the first, second, third, fourth, fifth fields of the annotation field, respectively. Because D is a sparse matrix, the D can be stored in a sparse matrix storage mode.
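The sparse storage of D can be sketched as follows; the record layout (shape plus nonzero entries as row, column, channel, value tuples) is our assumption of how the fifth annotation field might be serialized, since the patent only states that nonzero values and their row and column positions are stored:

```python
import numpy as np

def sparse_encode(D):
    """Keep only the nonzero entries of the compensation matrix D,
    together with the dense shape needed to rebuild it."""
    rows, cols, chans = np.nonzero(D)
    values = D[rows, cols, chans]
    return D.shape, list(zip(rows.tolist(), cols.tolist(),
                             chans.tolist(), values.tolist()))

def sparse_decode(shape, records):
    """Rebuild the dense matrix D_r from the sparse records."""
    Dr = np.zeros(shape, dtype=np.int64)
    for r, c, ch, v in records:
        Dr[r, c, ch] = v
    return Dr

# Round trip: a 2x2x3 compensation matrix with two nonzero entries.
D = np.zeros((2, 2, 3), dtype=np.int64)
D[0, 1, 2] = -7
D[1, 0, 0] = 12
shape, records = sparse_encode(D)
Dr = sparse_decode(shape, records)
```

Only the two records need to travel in the annotation field instead of the full matrix, so the overhead grows with the saturated area rather than with the image size.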
The second stage: read the image with the embedded image restoration model, parse the stored parameters, and restore the image. The specific steps are as follows:
s201: and extracting model parameters from the image embedded with the recovery model, and restoring the image recovery model.
Read the JPG image file produced by step S105, in which the image restoration model parameters are stored, and denote the image content by Ia. Parse the contents of the first, second, third, fourth, and fifth fields of the annotation domain in turn and assign them respectively to the Lagrange multiplier vector α, the bias b, the penalty factor γ, the kernel width σ, and the error compensation matrix D. From the parameters α, b, γ, σ obtained in step S201, rebuild the LSSVM nonlinear transformation model of equation (6).
S202: and restoring the image after the color correction according to the nonlinear transformation image restoration model based on the LSSVM obtained in the step S201.
The specific process is as follows: traversing all pixels of the image Ia to be restored obtained in the S201, inputting RGB color components of the pixels serving as input quantities into the LSSVM nonlinear transformation model obtained in the S201 according to the sequence from left to right and from top to bottom, and predicting to obtain corresponding output values; and taking the predicted output value as the color component of the pixel, and keeping the pixel position in one-to-one correspondence with the pixel position in Ia to obtain the preliminarily recovered image Iao.
S203: using the error compensation matrix obtained in S201, error compensation is further performed on the restored image obtained in S202.
Restore the error compensation matrix D obtained in step S201 from its sparse storage form into a matrix D_r whose numbers of rows and columns match Ia. The final error-compensated restored image is I_r = I_ao + D_r, where I_ao is the preliminarily restored image obtained in step S202 and the addition is performed component-wise on the pixel color values.
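The whole second stage can be sketched in a few lines; the predictor is passed in as a stand-in for the LSSVM model rebuilt in S201, and the final clip to the valid range is a display safeguard we add (the patent simply adds D_r):

```python
import numpy as np

def restore(Ia, predict_fn, D_r, L=256):
    """Steps S202 and S203: run every pixel of the corrected image
    through the restored transformation model, then add the error
    compensation matrix: I_r = I_ao + D_r."""
    h, w, _ = Ia.shape
    Iao = predict_fn(Ia.reshape(-1, 3)).reshape(h, w, 3)  # preliminary image
    return np.clip(Iao + D_r, 0, L - 1)

# Stand-in model: a simple lambda in place of the LSSVM predictor.
predict_fn = lambda px: px * 0.5
Ia  = np.array([[[200.0, 100.0, 50.0]]])
D_r = np.array([[[5.0, 0.0, -5.0]]])
Ir = restore(Ia, predict_fn, D_r)
```

The compensation step only changes pixels at the recorded saturated positions, since D_r is zero everywhere else.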
The invention has the beneficial effects that:
the method provided by the invention can be used for extracting the color thumbnail matrix of the digital image unsaturated area based on K-means clustering and constructing the modeling data set. The color thumbnail matrix effectively covers the color range of the image unsaturated region by a small number of color samples and has good spatial uniformity, so that a data set constructed by the color samples can be effectively applied to LSSVM model training and the generalization capability of the obtained model is improved.
The LSSVM has good learning ability, needs few training samples, has strong nonlinear representation ability, and achieves a low generalization error rate when used for regression. It suits machine learning with small samples, handles high-dimensional problems, and avoids the drawbacks of neural network models, which require structure selection and easily fall into local minima. Unlike the standard Support Vector Machine (SVM), the LSSVM adds a sum-of-squared-errors term to the objective function and replaces the inequality constraints with equality constraints, so the solution process becomes a system of linear equations rather than a time-consuming constrained quadratic program, which greatly speeds up the solution. These characteristics give the LSSVM clear advantages of speed, reliability, and ease of implementation when used to establish the nonlinear mapping between the original image and the color-corrected image.
The method provided by the invention can embed the model and the parameters which can restore the original image into the image after color correction, and can keep the color information of the image from losing to the maximum extent in the process of transmission and copying. Applying a recovery model to ensure that the color of the original image can still be recovered after the digital image is subjected to color correction or certain nonlinear processing; because the recovery model includes color compensation for saturated regions of the image, even if the pixel color components exceed the range of gray values to cause truncation and saturation, the color can be effectively and accurately recovered.
The method of the invention is beneficial to the good color tracing of the digital image in the transferring and copying processes, and can be effectively applied to the accurate readjustment of the image color display and the re-editing after the image color correction in the printing and copying processes.
Drawings
Fig. 1 is a schematic diagram of a method for restoring an original image from a digital image according to the present invention.
Fig. 2 is a flow chart of the working process of restoring the original image of the digital image in the invention.
Fig. 3 is a flow chart of the working process of extracting the color thumbnail matrix of the unsaturated zone in the invention.
FIG. 4 is a flowchart of the operation process of the present invention for regenerating a thumbnail point set based on K-means clustering.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
Fig. 1 is a schematic diagram of a method for restoring a digital image to an original, and fig. 2 is a flowchart of a working process of restoring a digital image to an original according to the present invention. In fig. 2, steps S101 to S105 are mainly used to establish an image restoration model, which is a first stage of the method of the present invention; in steps S201 to S203, the model obtained in steps S101 to S105 is applied to restore the color-corrected image original, which is the second stage of the method. After the image restoration model and parameters are embedded in the color corrected image file in the first stage, the file can restore the original image by the method of the second stage without the original image for modeling in the first stage.
In order to further explain the embodiments, processes and effects of the present invention in detail, they are described with reference to examples.
(1) Referring to step S101 in fig. 2: the color corrected image is separated into saturated and unsaturated regions, and the regions corresponding to the regions in the original image are found. When the R, G, B color gray scale of the image is 256, if any color component value of the pixel is 0 or 255, the pixel is considered to be in a saturated region; a pixel is considered to be in a non-saturated region only if its R, G, B color components are each in the [1,254] range.
(2) Referring to step S102 in fig. 2: extract a color thumbnail matrix of the unsaturated region based on K-means clustering and construct the modeling data set. This step obtains the data set of pixel points used for the LSSVM modeling of step S103; the K-means clustering algorithm and the loop iteration reduce the amount of modeling sample data.
FIG. 3 is a flow chart of the process of extracting the color thumbnail matrix of the unsaturated region. After pixel color de-duplication, the pixels are divided into M regions, and a clustering operation on each region generates a thumbnail point set; then M is updated to floor(M/4) + 1, the clustering operation is performed again, the thumbnail point set is regenerated, and the iteration continues until M = 1.
FIG. 4 is a flowchart of the process of regenerating the thumbnail point set based on K-means clustering. When the M regions are clustered, only two points per cluster are kept in each region: the point closest to the cluster center and the point farthest from it. Over the M regions, the condensed point set after clustering therefore contains

$$\sum_{j=1}^{M} 2 k_j$$

pixel points, where k_j is the cluster count of the j-th region, j = 1, 2, ..., M. During clustering, the threshold η is preferably chosen between 5 and 8, and the distance function takes the Euclidean form: for two pixels with color values {r_1, g_1, b_1} and {r_2, g_2, b_2}, the distance between them is defined as

$$d = \sqrt{(r_1 - r_2)^2 + (g_1 - g_2)^2 + (b_1 - b_2)^2}$$
(3) Referring to step S103 in fig. 2: establish the LSSVM-based nonlinear transformation model from the color-corrected image to the original image. Use the N rows of the color thumbnail matrices Ic_th and Io_th obtained in step S102 as input and output training samples respectively, giving N sample points {x_i, y_i}, i = 1, 2, ..., N, where x_i ∈ R^3 is the RGB color value of a pixel of Ic_th and y_i ∈ R^3 is the RGB color value of the corresponding pixel of Io_th. The method of the invention yields the linear matrix equation

$$\begin{bmatrix} 0 & I_l^T \\ I_l & \Omega + \gamma^{-1} I_N \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}$$

where y = [y_1 y_2 ... y_N]^T, I_l = [1 1 ... 1]^T is an N-dimensional all-ones vector, I_N is the N×N identity matrix, α = [α_1 α_2 ... α_N]^T, and Ω is the N×N kernel matrix with elements Ω_kl = K(x_k, x_l), k, l = 1, 2, ..., N. K(·,·) is a kernel function in the Gaussian radial basis form:

$$K(x_k, x_l) = \exp\left( -\frac{\|x_k - x_l\|^2}{2\sigma^2} \right)$$

where σ is the kernel width, reflecting the radius of the closed boundary. Preferably, the penalty factor γ is taken between 10 and 20 and the kernel width σ between 0.01 and 0.1. Solving the matrix equation gives α and b, and hence the LSSVM-based nonlinear transformation model:

$$y(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b$$
(4) Referring to step S104 in fig. 2: estimate the error compensation matrix for the saturated region using the established LSSVM model. Input the pixel color values of the saturated region of the color-corrected image into the LSSVM-based nonlinear transformation model obtained in S103 to predict the corresponding colors of the saturated region of the original image; then compute the differences between the predicted outputs and the actual color values of the original image, from which the error compensation matrix D for the saturated region is obtained by the method provided by the invention.
(5) Referring to step S105 in fig. 2: and embedding the image after color correction by using the mapping transformation model parameters and the error compensation matrix as an image recovery model. The specific process comprises the following steps: storing the color-corrected image in a JPG format, and sequentially storing the parameters alpha and b of the LSSVM model obtained in the step S103, the values gamma and sigma selected in the step S103 and the value of the error compensation matrix D obtained in the step S104 into the first, second, third, fourth and fifth fields of the annotation field of the JPG image.
Because the error compensation matrix D only records the color compensation value of the corresponding position of the Ic saturated area, and the color component values of the pixels in the unsaturated area are all 0, that is, D is a sparse matrix, a sparse storage mode can be adopted, and only the values of all nonzero elements of the matrix D and the positions of the row number and the column number of the matrix D are stored in the fifth field of the annotation field.
(6) Referring to step S201 in fig. 2: extract the model parameters from the image with the embedded restoration model and rebuild the image restoration model. The specific process comprises the following steps: read the JPG image file produced by step S105, in which the image restoration model parameters are stored, and denote the image content by Ia. Parse the contents of the first, second, third, fourth, and fifth fields of the annotation domain in turn and assign them respectively to the Lagrange multiplier vector α, the bias b, the penalty factor γ, the kernel width σ, and the error compensation matrix D. From the parameters α, b, γ, σ obtained in S201, rebuild the LSSVM nonlinear transformation model.
(7) Referring to step S202 in fig. 2: and restoring the image after the color correction according to the nonlinear transformation image restoration model based on the LSSVM obtained in the step S201. The specific process comprises the following steps: traversing all pixels of the image Ia, inputting RGB color components of the pixels as input quantities into the LSSVM nonlinear transformation model obtained in S201 according to the sequence from left to right and from top to bottom, and predicting to obtain corresponding output values; and arranging all the output values according to the corresponding positions of the pixels of Ia to obtain an initially recovered image Iao.
(8) Referring to step S203 in fig. 2: use the error compensation matrix obtained in S201 to apply error compensation to the restored image obtained in S202. The specific process comprises the following steps: restore the error compensation matrix D obtained in S201 from its sparse storage form into a matrix D_r whose numbers of rows and columns match Ia, then compute the error-compensated restored image I_r = I_ao + D_r, where the addition is performed component-wise on the pixel color values.
In the present specification, steps S101 to S105 and S201 to S203 of fig. 2 can be implemented by writing programs according to the method of the invention. The program automatically computes with the original image and the color-corrected image, quickly and efficiently obtains the restoration model and its parameters, and embeds them into the color-corrected image. At restoration time, the program extracts the model parameters from the image carrying the embedded model and parameters and performs the transformation operations to restore the original image.
The color-corrected image to which the invention applies is one obtained from an original by a global color conversion operation, such as white balance or color temperature adjustment. The method provided by the invention is not applicable to images produced by local color editing, i.e., by applying different color correction parameters or different adjustment modes in different regions.
It should be noted that modifications can be made by one of ordinary skill in the art without departing from the principles of the present invention, and such modifications should be considered within the scope of the present invention. Components not specifically described in this embodiment can be implemented with the prior art.

Claims (9)

1. A method for accurately restoring the original image of a color-corrected digital image, characterized by comprising the following steps:
S101, separating the color-corrected image into saturated and unsaturated regions, and finding the corresponding regions in the original image;
S102, extracting a color thumbnail matrix of the unsaturated region based on K-means clustering, and constructing a modeling data set;
S103, establishing a nonlinear transformation model from the color-corrected image to the original image based on a least squares support vector machine (LSSVM);
S104, estimating an error compensation matrix for the saturated region using the established LSSVM model;
S105, embedding the LSSVM transformation model parameters and the error compensation matrix into the color-corrected image as an image recovery model;
S201, extracting the model parameters from the image in which the recovery model is embedded, and restoring the image recovery model;
S202, restoring the color-corrected image according to the LSSVM-based nonlinear transformation image restoration model obtained in S201;
S203, using the error compensation matrix obtained in S201 to further perform error compensation on the restored image obtained in S202.
2. The method for accurately restoring the original image of a color-corrected digital image according to claim 1, wherein step S101 is specifically as follows:
record the color-corrected image as Ic and the original image as Io; the saturated region in Ic is Ic_s and the remaining part is the unsaturated region, denoted Ic_ns, i.e. Ic = Ic_s ∪ Ic_ns; the region in the original image Io corresponding to the saturated region of the color-corrected image is Io_s and the rest is denoted Io_ns, i.e. Io = Io_s ∪ Io_ns;
let the gray level of each RGB color channel of the image be L, i.e. each color component ranges over [0, L−1]; traverse each color component value of each pixel of the image: if every color component value lies strictly inside (0, L−1), the pixel is considered to be in the unsaturated region; if any color component value is 0 or L−1, the pixel is considered to be in the saturated region; record the row and column positions (u_i, v_i) of all pixels in the saturated region to form the position sequence Ps = {(u_i, v_i), i = 1, 2, …, n_s}, where n_s is the number of pixels in the saturated region.
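The saturation split of step S101 can be sketched as follows. This is a hypothetical NumPy illustration assuming an 8-bit image (L = 256) held as an (H, W, 3) array; the array layout and function name are assumptions:

```python
import numpy as np

def split_saturation(ic, L=256):
    # A pixel is saturated if ANY of its R, G, B components equals 0 or L-1.
    sat_mask = np.any((ic == 0) | (ic == L - 1), axis=2)
    # Position sequence Ps: row/column indices (u_i, v_i) of saturated pixels,
    # in row-major (left-to-right, top-to-bottom) order.
    ps = np.argwhere(sat_mask)
    return sat_mask, ps

# Tiny 2x2 example: pixels (0,0) and (1,0) contain a 0 or a 255 component.
ic = np.array([[[0, 128, 40], [12, 13, 14]],
               [[255, 1, 1], [100, 200, 50]]], dtype=np.uint8)
mask, ps = split_saturation(ic)
```

The unsaturated region is then simply `ic[~mask]`, and `ps` plays the role of the position sequence Ps used later for the error compensation matrix.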
3. The method for accurately restoring the original image according to claim 1, wherein the specific process of S102 comprises:
(a) pixel color deduplication: in the unsaturated region Ic_ns of Ic, retain only one pixel for each distinct color and remove the pixels whose colors are repeated, obtaining a pixel thumbnail set Φ;
(b) pixel blocking: record the number of pixels in Φ as n_φ; taking N_φ pixels as one group, with N_φ chosen between 25 and 100, randomly divide all pixels into M blocks I_b1, I_b2, …, I_bM; when n_φ is divisible by N_φ, M = n_φ / N_φ; when n_φ is not divisible by N_φ, M = floor(n_φ / N_φ) + 1, the first M−1 pixel blocks each contain N_φ pixels and the Mth pixel block contains Mod(n_φ, N_φ) pixels, where floor(·) is the rounding-down function and Mod(·) is the remainder function;
(c) perform K-means clustering on all pixel blocks one by one according to color values; record the number of clusters of the jth pixel block I_bj as k_j, j = 1, 2, …, M; first let k_j take an initial value and cluster, calculate the Euclidean distance d from each pixel color to its cluster center, and find the maximum distance d_max; if d_max is smaller than the threshold η, stop clustering; otherwise let k_j = k_j + 1, re-cluster, and again judge whether the maximum distance d_max within every class is smaller than the threshold η; if so, stop clustering; if not, continue to let k_j = k_j + 1 and re-cluster, until the maximum distance d_max from all pixels in each class to their cluster center is smaller than the threshold η; the threshold η can be chosen between 5 and 8; the value of k_j when clustering stops is the number of clusters of pixel block I_bj; in each of the k_j clusters, find the pixel P_min,i with the smallest distance to the cluster center and the pixel P_max,i with the largest distance to the cluster center, thereby constructing the thumbnail point set of pixel block I_bj: Φ_j = {P_min,i, P_max,i, i = 1, 2, …, k_j}; combine the M point sets obtained from all M pixel blocks I_b1, I_b2, …, I_bM to obtain the pixel thumbnail point set Φ = Φ_1 ∪ Φ_2 ∪ … ∪ Φ_M;
(d) record the number of pixels in the pixel thumbnail set Φ as n_φ, recompute the number of blocks M from n_φ and N_φ as in step (b), and randomly divide Φ again; when n_φ is divisible by M, each pixel block contains n_φ / M pixels; when n_φ is not divisible by M, the first M−1 pixel blocks each contain floor(n_φ / M) pixels and the Mth pixel block contains Mod(n_φ, M) pixels, where floor(·) is the rounding-down function and Mod(·) is the remainder function; after blocking, perform K-means clustering on all pixel blocks one by one according to color values again by the method of step (c), obtaining a new pixel thumbnail set Φ; repeat this loop until the number of blocks M is 1, obtaining the final pixel thumbnail set Φ;
(e) construct the color thumbnail matrix: record the number of pixels in the final pixel thumbnail set Φ as N, and the sequence of positions of the color value of each pixel in Ic as P_N = {(u_i, v_i), i = 1, 2, …, N}, where (u_i, v_i) is the row and column position of the ith pixel in Ic; when the color of a pixel in Φ corresponds to several pixels in Ic, one of them is selected at random; record the pixel color values at the positions P_N in Ic and in Io as the color thumbnail matrices I_c_th and I_o_th, each an N×3 matrix whose ith row is (r_i, g_i, b_i), where r, g, b represent the red, green and blue color component values of the pixel.
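Steps (a) and (c) above can be sketched as follows for a single pixel block. This is an illustrative NumPy implementation with a deliberately minimal k-means; the function names, the iteration cap, and the random seeding are assumptions not taken from the patent:

```python
import numpy as np

def kmeans(colors, k, iters=20, seed=0):
    # Minimal k-means on an (n, 3) float array of RGB colors.
    rng = np.random.default_rng(seed)
    centres = colors[rng.choice(len(colors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(colors[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = colors[labels == j].mean(axis=0)
    return centres, labels

def thumbnail_points(block, eta=6.0):
    # Step (a): keep one pixel per distinct color.
    colors = np.unique(block.reshape(-1, 3), axis=0).astype(float)
    # Step (c): grow k_j until every pixel is within eta of its centre.
    k = 1
    while True:
        centres, labels = kmeans(colors, k)
        dist = np.linalg.norm(colors - centres[labels], axis=1)
        if dist.max() < eta or k == len(colors):
            break
        k += 1
    pts = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        dj = dist[members]
        pts.append(colors[members[dj.argmin()]])  # P_min: nearest to centre
        pts.append(colors[members[dj.argmax()]])  # P_max: farthest from centre
    return np.array(pts)
```

Repeating this block-by-block and re-blocking the union of the returned point sets, as in steps (b) and (d), shrinks Φ iteratively toward the final thumbnail set.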
4. The method for accurately restoring the original image according to claim 1, wherein the specific process of S103 comprises:
take the N rows of data of the color thumbnail matrices I_c_th and I_o_th obtained in step S102 as the input and output training samples respectively, obtaining N sample points {x_i, y_i}, i = 1, 2, …, N, where the input x_i ∈ R^3 represents the RGB color values of a pixel of I_c_th and the output y_i ∈ R^3 represents the RGB color values of the corresponding pixel of I_o_th; construct the linear matrix equation

[ 0     I_l^T          ] [ b ]     [ 0 ]
[ I_l   Ω + I_NN / γ   ] [ α ]  =  [ y ]

wherein y = [y_1 y_2 … y_N]^T, I_l = [1 1 … 1]^T is an N-dimensional all-ones vector, I_NN is the N×N identity matrix, α = [α_1 α_2 … α_N]^T is the Lagrange multiplier vector, b is the bias, γ is the penalty factor, and Ω is the kernel function matrix, an N×N square matrix whose element in the kth row and lth column is
Ω_kl = K(x_k, x_l),
where K(·,·) is the kernel function, taken as the Gaussian radial basis kernel
K(x_k, x_l) = exp(−‖x_k − x_l‖² / (2σ²)),
where σ is the kernel width; preferably, the penalty factor γ can be taken as 10–20 and the kernel width σ as 0.01–0.1; solve the matrix equation to obtain α and b, thereby obtaining the LSSVM-based nonlinear transformation model:
y(x) = Σ_{i=1}^{N} α_i K(x, x_i) + b.
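The linear system and prediction formula above can be sketched for a single output channel as follows (each of the three RGB outputs would be solved the same way). This is an illustrative NumPy sketch of the standard LSSVM regression, not the patent's implementation; the default γ and σ values are placeholders:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian radial basis kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
    d2 = ((X1[:, None] - X2[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=15.0, sigma=0.05):
    # Solve [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y].
    n = len(X)
    omega = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xq, X, alpha, b, sigma=0.05):
    # y(x) = sum_i alpha_i K(x, x_i) + b.
    return rbf_kernel(Xq, X, sigma) @ alpha + b
```

With a large penalty factor γ the model nearly interpolates the training colors, which is the behaviour the restoration step relies on in the unsaturated region.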
5. The method for accurately restoring the original image according to claim 1, wherein S104 is specifically:
taking the RGB color components of the pixels in the color-corrected image Ic as input quantities, calculate with the LSSVM nonlinear transformation model obtained in S103 to obtain a group of predicted output values of RGB color components; take the predicted output values as the color components of the pixels and keep the pixel positions in one-to-one correspondence with the pixel positions in Ic, obtaining the original-image prediction image Iop; calculate the difference between the original image Io and the predicted image Iop as D = Io − Iop, i.e. the value of each pixel in D is the difference between the color components of the corresponding pixels of Io and Iop; in D, keep the color component values of the pixels at the positions listed in the position sequence Ps obtained in step S101 unchanged, and set the color component values of the pixels at all other positions to 0; the matrix D thus obtained is used as the error compensation matrix.
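The construction of D in S104 can be sketched as follows. This is an illustrative NumPy sketch; the int16 working dtype is an assumption made so that signed differences between 8-bit images are represented exactly:

```python
import numpy as np

def error_compensation_matrix(io, iop, ps):
    # io, iop: (H, W, 3) original and predicted images.
    # ps: (n_s, 2) row/column positions of the saturated pixels from S101.
    # D = Io - Iop at saturated positions, zero everywhere else.
    d = np.zeros_like(io, dtype=np.int16)
    rows, cols = ps[:, 0], ps[:, 1]
    d[rows, cols] = io[rows, cols].astype(np.int16) - iop[rows, cols].astype(np.int16)
    return d
```

Because the LSSVM fits the unsaturated colors well, D is nonzero essentially only at the saturated positions, which is what makes the sparse storage of claim 6 effective.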
6. The method for accurately restoring the original image of a color-corrected digital image according to claim 1, wherein S105 is specifically:
store the color-corrected image Ic in JPG format, and sequentially store the parameters α and b of the LSSVM model obtained in step S103, the values of γ and σ selected in step S103, and the values of the error compensation matrix D obtained in step S104 into the annotation field of the JPG image; the values of α, b, γ, σ and D are saved in the first, second, third, fourth and fifth fields of the annotation field respectively; because D is a sparse matrix, it can be stored in a sparse-matrix storage mode.
7. The method for accurately restoring the original image of a color-corrected digital image according to claim 1, wherein S201 is specifically:
read the JPG image file in which the image recovery model parameters were stored by step S105, denoted Ia; at the same time read the content of the image annotation field, sequentially parse the first, second, third, fourth and fifth fields of the annotation field, and assign their contents respectively to the Lagrange multiplier vector α, the bias b, the penalty factor γ, the kernel width σ and the error compensation matrix D; from the parameters α, b, γ and σ thus obtained, restore the LSSVM nonlinear transformation model established in step S103.
8. The method for accurately restoring the original image according to claim 1, wherein S202 is specifically:
traverse all pixels of the image Ia to be restored obtained in S201 and, in order from left to right and from top to bottom, input the RGB color components of each pixel as input quantities into the LSSVM nonlinear transformation model obtained in S201 to predict the corresponding output values; take the predicted output values as the color components of the pixels and keep the pixel positions in one-to-one correspondence with the pixel positions in Ia, obtaining the preliminarily restored image Iao.
9. The method for accurately restoring the original image of a color-corrected digital image according to claim 1, wherein the specific process of S203 is as follows:
restore the error compensation matrix D obtained in step S201, which is held in sparse storage, to a matrix whose numbers of rows and columns match Ia, denoted Dr; finally obtain the error-compensated restored image Ir = Iao + Dr, where Iao is the preliminarily restored image obtained in step S202 and the addition means that the color component values of corresponding image pixels are added respectively.
CN202011636596.6A 2020-12-31 2020-12-31 Method for accurately restoring original image of digital image after color correction Active CN112669238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011636596.6A CN112669238B (en) 2020-12-31 2020-12-31 Method for accurately restoring original image of digital image after color correction

Publications (2)

Publication Number Publication Date
CN112669238A true CN112669238A (en) 2021-04-16
CN112669238B CN112669238B (en) 2022-04-29

Family

ID=75413578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011636596.6A Active CN112669238B (en) 2020-12-31 2020-12-31 Method for accurately restoring original image of digital image after color correction

Country Status (1)

Country Link
CN (1) CN112669238B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638786A (en) * 2022-02-21 2022-06-17 杭州印鸽科技有限公司 Picture printing effect simulation system and color matching method for different processes
CN116996786A (en) * 2023-09-21 2023-11-03 清华大学 RGB-IR image color recovery and correction method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102732660A (en) * 2012-06-27 2012-10-17 浙江大学 Burden surface temperature field detection method based on multi-source information fusion
CN104408728A (en) * 2014-12-03 2015-03-11 天津工业大学 Method for detecting forged images based on noise estimation
CN108377373A (en) * 2018-05-10 2018-08-07 杭州雄迈集成电路技术有限公司 A kind of color rendition device and method pixel-based
CN110186962A (en) * 2019-05-10 2019-08-30 天津大学 A kind of imperfect measurement data imaging method for capacitance chromatography imaging
CN110457781A (en) * 2019-07-24 2019-11-15 中南大学 Train towards passenger comfort crosses tunnel duration calculation method
US20200285997A1 (en) * 2019-03-04 2020-09-10 Iocurrents, Inc. Near real-time detection and classification of machine anomalies using machine learning and artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAOJIE LOU ET AL.: "Blind Image Quality Assessment Based Automatical Motion Blur Restoration Algorithm", ResearchGate *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant