CN107507250A - A complexion and tongue color image color correction method based on convolutional neural networks - Google Patents


Publication number
CN107507250A
CN107507250A
Authority
CN
China
Prior art keywords
layer
image
color
feature map
color correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710406983.2A
Other languages
Chinese (zh)
Other versions
CN107507250B (en)
Inventor
李晓光
卢运西
卓力
张菁
张辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201710406983.2A
Publication of CN107507250A
Application granted
Publication of CN107507250B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

A color correction method for complexion and tongue color images based on convolutional neural networks, in the field of digital image processing. The algorithm comprises an offline part and an online part. The offline part consists of collecting training data, building the color correction convolutional neural network framework, and training it; the online part performs image color correction and evaluates the correction result. A CNN imitates the human cognitive process, abstracting layer by layer from local features to global features; applied to color correction, it can achieve good color reproduction. Using a deep convolutional neural network, the invention corrects the colors of face (complexion) and tongue images collected in a stable lighting environment, reproducing the color information that the target object actually presents in the same lighting environment.

Description

A complexion and tongue color image color correction method based on convolutional neural networks
Technical field
The present invention relates to digital image processing methods, and in particular to a convolutional-neural-network-based color correction method for complexion and tongue color images.
Background technology
The acquisition and reproduction of true colors is of great value in fields such as medicine and art. The color information of an image is an important basis for certain kinds of specialized image analysis. The color presented by an object's surface is closely related to the light source characteristics, illumination conditions, acquisition device, display device, printing device, and other links in the imaging chain. Color correction is the key technology for color reproduction and for presenting colors consistently. At present, color correction is applied in numerous image processing fields such as medical imaging, mural imaging, and license photographs. Research on color correction techniques that can truly reflect the intrinsic color of the observed object is therefore of great importance.
Images captured by a camera exhibit color distortion compared with the actual scene. Many image color correction methods have therefore been proposed, for example polynomial regression, partial-least-squares (PLS) color correction, and neural-network-based methods. Polynomial regression requires few training samples and has low computational complexity, but its accuracy depends strongly on the training samples and on the choice of polynomial, and it extrapolates poorly. PLS color correction handles multicollinearity among the independent variables and works with relatively few samples, but its precision still falls short of practical medical requirements. Traditional neural-network color correction is limited by the number of layers, the choice of initialization parameters, and the training regime, so the network's generalization often suffers from over-fitting.
In recent years deep learning has been widely applied; the convolutional neural network (CNN) is a typical deep feed-forward network. A CNN imitates the human cognitive process, abstracting layer by layer from local features to global features. Applied to color correction, it can achieve good color reproduction.
Summary of the invention
The object of the present invention is to use a deep convolutional neural network to correct the colors of face (complexion) and tongue images collected in a stable lighting environment, reproducing the color information that the target object actually presents in the same lighting environment.
The present invention is realized by the following technical means:
A complexion and tongue color image color correction method based on convolutional neural networks; the overall flow is shown in Fig. 1. The algorithm comprises an offline part and an online part. The offline part consists of collecting training data, building the color correction convolutional neural network framework, and training it; the online part performs image color correction and evaluates the correction result.
The offline part proceeds as follows:
(1) Collecting training data
Image acquisition in the present invention uses a closed environment, an artificial light box, to avoid interference from external stray light, with artificial illumination to guarantee the quality and stability of tongue image acquisition. The relative positions of the light source and the imaging device are fixed, so that the acquisition environment is uniform and standardized. An artificial D65 light source simulates natural light inside the box, effectively ensuring stable lighting conditions.
Unlike conventional methods, this method uses the ColorChecker Digital SG as the color correction chart. The ColorChecker Digital SG has 140 color patches and a wider gamut than the common ColorChecker Classic. At the same time, the training samples include patches whose colors are close to skin and tongue colors, which helps improve correction accuracy. Inside the closed environment, the ColorChecker Digital SG standard chart is photographed; chart images are obtained by varying the shooting angle of the chart and adjusting its distance to the light source and to the camera, and these images are used to generate the training data for the CNN color correction model. The captured images are processed by cropping out each color patch, and each patch is set to a fixed size to serve as a training sample. An RGB image generated from the standard value of each chart patch serves as the label of the training data, with training samples and labels in one-to-one correspondence.
(2) Building and training the color correction convolutional neural network framework
The present invention uses a neural network to fit the relation between true colors and the colors captured by the camera. Since the content to be learned is relatively simple, the network is designed as a shallow deep network with 5 layers. As shown in Fig. 2, three kinds of layer structure are used: an input layer, nonlinear transformation layers, and an output layer. The input layer consists of one convolutional layer and a rectified linear unit (ReLU); the nonlinear transformation part consists of 3 layers, each composed of a convolutional layer and a ReLU activation with batch normalization between them; the output layer consists of a single convolutional layer.
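As a rough sketch of the five-layer design just described, the layer list below (a hypothetical tabulation, not code from the patent) records each convolution as (kernel width, kernel height, input channels, output channels) following the W × H × C × D filter convention used later in the text, and counts the trainable weights and biases (batch-norm parameters are ignored):

```python
# Hypothetical tabulation of the 5-layer network described in the text.
# Each entry: (kernel_w, kernel_h, in_channels, out_channels).
layers = [
    (3, 3, 3, 64),   # input layer: conv + ReLU
    (3, 3, 64, 64),  # nonlinear transformation layer 2: conv + BN + ReLU
    (3, 3, 64, 64),  # nonlinear transformation layer 3: conv + BN + ReLU
    (3, 3, 64, 64),  # nonlinear transformation layer 4: conv + BN + ReLU
    (3, 3, 64, 3),   # output layer: conv only, reconstructs 3 channels
]

def param_count(layers):
    """Sum of convolution weights plus one bias per filter."""
    return sum(kw * kh * cin * cout + cout for kw, kh, cin, cout in layers)

print(param_count(layers))  # total trainable conv parameters
```

Under these assumptions the network has on the order of 10^5 parameters, which is consistent with the text's claim that the mapping to be learned is relatively simple.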
During training, the present invention uses stochastic gradient descent with mini-batches to iteratively update the convolution kernel weights W and biases B: each step computes on a mini-batch, and stochastic gradient descent is used to seek the global optimum.
In CNN image processing, convolutional layers are connected by convolution filters. A filter is defined as W × H × C × D, where C is the number of channels of the filtered image, W and H are the width and height of the filter, and D is the number of filters.
The input layer of the network contains one convolutional layer and a ReLU activation. Its feature extraction formula is:
F_1(X_1) = max(0, W_1 * X_1 + B_1)   (1)
Here X_1 is the feature map entering the input layer, and W_1 and B_1 are the convolution filter and bias of the input layer. W_1 has size 3 × 3 × 3 × 64, representing 64 different convolution filters, each with a 3 × 3 × 3 kernel; F_1(X_1) is the feature map produced by the input layer. The input image is a 3 × 40 × 40 feature map, i.e. a 3-channel color image with width w and height h both equal to 40. The output width w_1 and height h_1 of a convolutional layer are given by formulas (2) and (3):
w_1 = (w − kernel + 2 × pad) / stride + 1   (2)
h_1 = (h − kernel + 2 × pad) / stride + 1   (3)
where kernel is the kernel size, stride is the step of the kernel (a stride of 1 extracts overlapping image blocks and gives better results), and pad is the number of zero-padding pixels at the edges. In the present invention kernel = 3, stride = 1, and pad = 1. The input image therefore produces a 64 × 40 × 40 feature map after the 64 input-layer 3 × 3 kernels; the feature map then passes through the rectified linear unit ReLU, expressed as max(0, X), which extracts useful features. The final output is still a 64 × 40 × 40 feature map.
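The output-size relation that formulas (2) and (3) describe can be checked numerically. The helper below is a minimal sketch of the standard convolution output arithmetic, using the kernel/stride/pad values stated in the text:

```python
def conv_out_size(size, kernel=3, stride=1, pad=1):
    """Spatial output size of a convolution: (size - kernel + 2*pad) // stride + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# With kernel=3, stride=1, pad=1 (the values chosen in the text), a
# 40-pixel-wide input stays 40 pixels wide, so every layer in the
# network preserves the 40 x 40 spatial size of its input.
print(conv_out_size(40))
```

This is why the feature maps stay 40 × 40 through all five layers: the 1-pixel zero padding exactly compensates for the 3 × 3 kernel at stride 1.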
In the nonlinear mapping of the nonlinear transformation part, a convolutional layer, batch normalization, and a ReLU make up the second, third, and fourth layers. Each stage is expressed as:
F_i(X_i) = max(0, W_i * F_{i-1}(X_{i-1}) + B_i),  i = 2, 3, 4   (4)
In formula (4), i denotes the i-th layer and X_i is the output of layer i−1, i.e. F_{i-1}(X_{i-1}). W_i and B_i are the convolution filter and bias of the nonlinear transformation stage; the input-layer filter W_1 has size 3 × 3 × 3 × 64, while layers 2, 3, and 4 have W_i of size 3 × 3 × 64 × 64, each kernel being 3 × 3 × 64. The 64 × 40 × 40 feature map output by the input layer is fed into the second convolutional layer, which produces a 64 × 40 × 40 feature map after its 64 3 × 3 kernels. This feature map then enters batch normalization, placed between the convolutional layer and the ReLU activation, which remedies the slow convergence and exploding gradients encountered during neural network training; at the same time, batch normalization speeds up training and improves model accuracy. Finally, the ReLU increases the nonlinearity of the features. After the second layer outputs its 64 × 40 × 40 feature map, it passes through the third and fourth layers, which have the same structure as the second, finally giving a 64 × 40 × 40 feature map.
In the output reconstruction of the output layer, the feature map is fed into an output layer containing only one convolutional layer. The reconstruction formula is:
F_5(X_5) = W_5 * F_4(X_4) + B_5   (5)
Here X_5 is the output of the fourth layer, and W_5 and B_5 are the convolution filter and bias of the feature reconstruction layer. W_5 has size 3 × 3 × 64 × 3: the reconstruction layer has 3 convolution filters, each with a 3 × 3 × 64 kernel, acting like a mean filter that averages the feature maps. F_4(X_4) is the feature map produced by the nonlinear transformation layers, i.e. X_5; after the 3 kernels of size 3 × 3, the output is a 3 × 40 × 40 feature map.
The collected dataset is trained on this network; a model is obtained for each training round after more than 50 iterations, and the final model is saved to a file.
The online part proceeds as follows:
(1) Image color correction
The trained model is used to correct color-distorted images, yielding corrected images. In the present invention, chart, face, and tongue images are photographed in the light box; the resulting photographs are distorted relative to the actual colors, and the CNN-based color correction method is applied to them. The pixels of the image to be corrected are first read and stored as an image matrix; the MAT file obtained from training is then read to load the color correction model. The image matrix is fed into the network model, color correction is applied to the R, G, and B channels, and the corrected image is output.
(2) Evaluating the color correction
To verify the validity of the color correction model, its effect must be evaluated. Evaluating color correction is a complicated problem, involving fields as different as colorimetry, physiology, and psychology. The usual evaluation methods are objective evaluation and subjective evaluation.
According to colorimetric theory, the criteria for color reproduction include reflectance spectrum matching, color appearance matching, and tristimulus matching; these are objective criteria. Colorimetric (tristimulus) matching requires the tristimulus values of the displayed object to equal those of the corresponding real object. If the tristimulus values of the captured image equal those of the real object, or the color difference lies within an allowed range, the quality of the color reproduction is acceptable. Colorimetric matching is the most common and most practically meaningful color reproduction standard; the present invention uses CIE1976 L*a*b* as the evaluation index.
Subjective evaluation has an observer judge the visual impression of a given stimulus directly: in the present invention, observers compare the real object with the corrected image and judge whether the method restores the true colors.
Brief description of the drawings:
Fig. 1 is the flowchart of the CNN-based color correction method;
Fig. 2 shows the architecture of the color correction CNN model;
Fig. 3 compares face and tongue images before and after correction.
Embodiments
In accordance with the foregoing description, the specific implementation flow of the invention is introduced below.
The offline part is divided into 2 steps:
Step 1: Collecting training data
Image acquisition is the basis of color correction work. When the acquisition device and lighting conditions change, guaranteeing that the captured images keep constant color characteristics is the key problem of image acquisition. It involves the design of the acquisition device, the choice of the light source, the choice of color space, and the construction of a mathematical model of the system's color characteristics. Standardizing the environment and method of color correction image acquisition is therefore an important foundation of color correction. In general a darkroom or light box is the best shooting environment; the self-developed light box used in the present invention avoids interference from external stray light and keeps the lighting environment stable.
Step 1.1: Standardized sample acquisition
Through extensive experiments, and to guarantee image quality by avoiding the influence of the external environment, the present invention imposes the following conditions on the color correction acquisition environment:
1) a closed acquisition environment, keeping stray light out of the shooting environment and strong light out of the lens;
2) a D65 experimental light source, simulating natural light;
3) a D65 stabilization time of 10 minutes: the light source is switched on and sample images are captured only after it has stabilized;
4) fixed light source and camera positions, with the chart placed 30–35 cm from the camera;
5) correctly set camera parameters: for the Canon EOS 1200D, automatic white balance, aperture F10, ISO 3200, and a shutter speed of 1/160 s.
Step 1.2: Sample acquisition
Samples are acquired under the conditions set above. The present invention uses a Canon EOS 1200D configured with these parameters to photograph the ColorChecker Digital SG chart inside the light box; by varying the chart position and camera angle, a large number of chart photographs are obtained. To increase the robustness of the correction, faces, tongues, and other subjects are also photographed under the same lighting, and the resulting face and tongue images can be used to verify the correction performance of the CNN model. The ColorChecker Digital SG has standard optical values, so after processing, the chart photographs correspond one-to-one with the standard images and can serve as training samples for the color correction network.
Step 1.3: Sample preprocessing
The ColorChecker Digital SG photographs taken by the camera cannot be used directly as CNN training samples; each color patch must be segmented out. The present invention crops each patch of the chart to 180 × 180 pixels. The LAB value of each patch is obtained from the official ColorChecker Digital SG optical data, and the invention converts these LAB values to RGB values under a D65 illuminant to generate the standard RGB patch images. Likewise, the standard patch images are set to 180 × 180 pixels, the same size as the patches cropped from the photographs, to ease later computation. The standard patch images serve as the training labels of the cropped patches, named in one-to-one correspondence with them, avoiding the large errors a mismatched pairing would cause.
To improve the generalization and robustness of the CNN color correction model, the 140 patches cropped from a chart image are also stitched back together in their ColorChecker Digital SG order to form a composite image, which is added to the training data alongside the single-patch images; the standard patches are stitched in the same order and added to the label set. The resulting diversity of training sample types helps improve the model's generalization.
To further increase sample diversity and enlarge the sample library, the training set is augmented by seven transforms: vertical flip, rotate left 90°, rotate right 90°, rotate 180°, rotate left 90° then flip, rotate right 90° then flip, and rotate 180° then flip. This extends the training data to 8 times its original size; a sliding window then cuts the training samples into 40 × 40 crops. Through this series of expansions, the training set reaches 96,000 samples. Color correction model training is essentially an input-to-output mapping: the network learns the mapping between a large number of inputs and outputs and finally produces the correction model. To verify the model's accuracy, a test set of 1,280 images is prepared in the same way. The training and test sets are then fed into the convolutional neural network for training.
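The seven transforms plus the original give eight variants per patch. A minimal sketch of this augmentation (operating on a 2-D list for brevity; the real data are 3-channel image patches):

```python
def rot90(img):
    """Rotate a 2-D list-of-lists 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_ud(img):
    """Flip the image upside down."""
    return img[::-1]

def augment(img):
    """Original plus the seven flip/rotation transforms -> 8 variants."""
    r90 = rot90(img)
    r180 = rot90(r90)
    r270 = rot90(r180)
    return [img, flip_ud(img), r90, r180, r270,
            flip_ud(r90), flip_ud(r180), flip_ud(r270)]

patch = [[1, 2], [3, 4]]
variants = augment(patch)
print(len(variants))  # the 8-fold expansion stated in the text
```

For a generic (asymmetric) patch the eight variants are exactly the eight symmetries of the square, so no duplicates are added.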
Step 2: Building and training the color correction convolutional neural network framework
The present invention uses no pooling layers and adds no fully connected layers; the whole network is divided into an input layer, nonlinear transformation layers, and an output layer. The input layer consists of one convolutional layer and one ReLU activation; the nonlinear transformation part consists of 3 layers, each composed of one convolutional layer and one ReLU activation with batch normalization between them; the output layer consists of one convolutional layer.
(1) The input layer of the network contains one convolutional layer and a ReLU activation. Its feature extraction formula is:
F_1(X_1) = max(0, W_1 * X_1 + B_1)   (6)
Here X_1 is the feature map entering the input layer, and W_1 and B_1 are the convolution filter and bias of the input layer. W_1 has size 3 × 3 × 3 × 64, representing 64 different convolution filters, each with a 3 × 3 × 3 kernel; F_1(X_1) is the feature map produced by the input layer.
In the nonlinear mapping of the nonlinear transformation part, a convolutional layer, batch normalization, and a ReLU make up the second, third, and fourth layers. Each stage is expressed as:
F_i(X_i) = max(0, W_i * F_{i-1}(X_{i-1}) + B_i),  i = 2, 3, 4   (7)
Here i denotes the i-th layer and X_i is the output of layer i−1. W_i and B_i are the convolution filter and bias of the nonlinear transformation stage; the input-layer filter W_1 has size 3 × 3 × 3 × 64, while layers 2, 3, and 4 have W_i of size 3 × 3 × 64 × 64, each kernel being 3 × 3 × 64.
In the output reconstruction of the output layer, the feature map is fed into an output layer containing only one convolutional layer. The reconstruction formula is:
F_5(X_5) = W_5 * F_4(X_4) + B_5   (8)
Here X_5 is the output of the fourth layer, and W_5 and B_5 are the convolution filter and bias of the feature reconstruction layer. W_5 has size 3 × 3 × 64 × 3: the reconstruction layer has 3 convolution filters, each with a 3 × 3 × 64 kernel, acting like a mean filter that averages the feature maps. F_4(X_4) is the feature map produced by the nonlinear transformation layers.
During model training, the input image is a 3 × 40 × 40 feature map. The first convolutional layer, with its 64 3 × 3 kernels, produces a 64 × 40 × 40 feature map; the second, third, and fourth convolutional layers each take a 64 × 40 × 40 feature map and, through 64 3 × 3 kernels, produce another 64 × 40 × 40 feature map; finally, the convolutional layer of the output stage, with its 3 3 × 3 kernels, produces a 3 × 40 × 40 feature map.
(2) Batch normalization is added in the nonlinear transformation layers, between the convolutional layer and the activation. Its main idea is to whiten the input data, a linear transformation giving the data zero mean and unit variance, which speeds up network convergence and reduces data redundancy and feature correlation. Batch normalization remedies the slow convergence and exploding gradients encountered during neural network training; at the same time, it accelerates training and improves model accuracy.
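The whitening step can be sketched for a single batch of activations. This toy illustration performs only the normalization; the learned scale and shift of full batch normalization are omitted:

```python
import math

def batch_norm(xs, eps=1e-5):
    """Shift a batch of scalar activations to zero mean and unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    # eps guards against division by zero for a constant batch
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print(out)
```

After the transform the batch has (near-)zero mean and unit variance, which is exactly the whitening property the text attributes to batch normalization.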
(3) For the activation in each layer, the present invention uses the rectified linear unit, given by formula (9). x denotes the result of the feature map after the convolution kernel: when x < 0, f(x) = 0; when x > 0, f(x) = x. In forward propagation this speeds up computation; in backpropagation, when x > 0 the gradient is 1, which alleviates the vanishing gradient problem. The resulting stochastic gradient descent therefore converges much faster than with sigmoid or tanh.
f(x) = max(0, x)   (9)
(4) For network training, the present invention uses stochastic gradient descent with mini-batches. When the sample size is large or the iteration count high, traditional gradient descent is slow; stochastic gradient descent overcomes these shortcomings. Traditional training brings in all samples at every step, whereas mini-batch stochastic gradient descent computes on a small batch at a time and uses random descent to help find the global optimum. The learning rate is the essential parameter of the method: it determines the speed of weight updates; set too large, it makes the cost function oscillate past the optimum, and set too small, it makes convergence too slow, so a small learning rate on the order of 0.001 is generally chosen to keep the system stable. The momentum parameter and weight decay factor improve the adaptivity of training; momentum is usually in [0.9, 1.0], and weight decay is usually 0.0005 ± 0.0002. Based on experimental observation, the present invention sets the learning rate to 10^-4, the momentum to 0.9, and the weight decay factor to 0.0005; after many experiments, the batch size is set to 128.
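A single parameter update with the hyper-parameters quoted above (learning rate 10^-4, momentum 0.9, weight decay 0.0005) can be sketched as follows. The scalar quadratic objective is a toy stand-in for the network loss, not the patent's training code:

```python
def sgd_momentum_step(w, grad, velocity,
                      lr=1e-4, momentum=0.9, weight_decay=0.0005):
    """One SGD update with momentum and weight decay for a scalar parameter."""
    v = momentum * velocity - lr * (grad + weight_decay * w)
    return w + v, v

# Toy objective f(w) = w^2 (gradient 2w), starting from w = 1.0:
w, v = 1.0, 0.0
for _ in range(20000):
    w, v = sgd_momentum_step(w, 2.0 * w, v)
print(w)  # close to the minimum at 0
```

Note how small the per-step change is at this learning rate: momentum accumulates the descent direction across steps, which is why it is paired with small learning rates in practice.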
With the design and tuning of the four key steps above complete, the CNN color correction model is built. The training samples collected in Step 1 are fed into the network for training; after 50 rounds of training the model is obtained and saved in MATLAB MAT format.
The online part is divided into 2 steps:
Step 1: Image color correction
Chart, face, and tongue photographs are taken in the light box, giving chart, face, and tongue images under its lighting environment. Comparison shows that the photographs exhibit varying degrees of color distortion relative to the real objects. The present invention applies the CNN-based color correction model to the distorted images. First, the RGB values of the image to be corrected are read pixel by pixel and stored as an image matrix. Then the color correction network, i.e. the CNN color correction model, is loaded from the MATLAB MAT file. The image matrix is fed into the network: the 3 kernels of the input layer read the three-channel R, G, B matrix, and the model corrects the R, G, and B components of the image. After passing through the network, the three-channel image matrix is output; finally the three channel matrices are merged into an RGB image, giving the color-corrected image.
Step 2: Evaluating the color correction
The model is applied to the chart, face, and tongue images to correct their colors; to verify the validity of the color correction model, the corrected images must be evaluated.
Evaluating color correction involves fields such as colorimetry and psychology and is a complex problem, generally divided into subjective and objective evaluation. Subjective evaluation has observers compare the corrected image with the photographed object: the observer weighs the color of the corrected image against the true colors to judge the quality of the reproduction. This method is direct and effective, and is the main method of color correction quality assessment.
Objective evaluation is carried out under fixed conditions using a set of color standards. If the corrected color is close to the original standard value, the correction is good. In general, if the color difference between the reproduced tristimulus values and the standard lies within the allowed range, the corrected image and the real object are considered to give the same visual impression. In CIELAB space, ΔE < 3 is generally taken as the standard for faithful color reproduction, and ΔE < 6 as the acceptable standard for color correction.
To verify the validity of the convolutional-neural-network color correction method, the present invention performs color correction on color chart, face, and tongue photographs; to assess the performance of the color correction model, both subjective and objective color evaluation results are presented.
Step 2.1: Subjective evaluation
As shown in Fig. 3, this experiment provides comparison images of the face and tongue before and after color correction. The photographs of people originally taken with the Canon camera appear brownish and dark. After color correction, the convolutional-neural-network method restores the true colors of the face and tongue well.
Step 2.2: Objective evaluation
For the objective color evaluation, the standard color chart image is cut into individual patch images using software developed by the inventors, the RGB value of each patch is read and converted to LAB values, and the CIE1976 L*a*b* color difference ΔE*ab is used as the evaluation index, computed as in formula (10):
ΔE*ab = √((ΔL*)² + (Δa*)² + (Δb*)²)   (10)
In formula (10), ΔL*, Δa*, and Δb* are the differences between the LAB values of a patch image and those of the corresponding standard patch. Using this method, the color difference of the test samples before and after correction was measured: the average color error ΔE*ab of the samples is 14.21 before correction and 3.70 after correction. The color correction method proposed by the present invention therefore achieves a good correction result.
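The RGB-to-LAB conversion used in the objective evaluation is not spelled out in the text; the following is a standard sRGB-to-CIELAB conversion under a D65 white point, which the inventors' software may or may not match exactly:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE L*a*b* assuming a D65 white point
    (a standard conversion; the patent's software may differ in detail)."""
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # Undo the sRGB transfer function.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB matrix), then normalize by the D65 white point.
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ lin / np.array([0.95047, 1.0, 1.08883])
    # CIELAB companding.
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return L, a, b
```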

Claims (2)

1. A facial-color and tongue-color image color correction method based on a convolutional neural network, comprising an offline part and an online part, characterized in that: the offline part consists of collecting training data, constructing the color correction convolutional neural network framework, and training it; the online part comprises image color correction;
The offline part comprises the following:
(1) Collecting training data
Images are collected in a light box in which an artificial light source simulates natural light, effectively guaranteeing the stability of the lighting conditions;
The captured images are processed: each color patch is cropped out and resized to a fixed size to serve as a training sample; the labels of the training data are RGB images generated from the standard value of each patch of the color chart, with a one-to-one correspondence between training samples and labels;
(2) Constructing and training the color correction convolutional neural network framework
The network is designed as a shallow deep neural network with 5 layers, consisting of an input layer, nonlinear transformation layers, and an output layer; the input layer is composed of one convolutional layer and a rectified linear unit (ReLU); the nonlinear transformation stage is composed of 3 layers, each made up of one convolutional layer and a ReLU activation function, with a batch normalization between the convolutional layer and the activation function; the output layer is composed of a single convolutional layer;
During training, a stochastic gradient descent algorithm with mini-batches is used to iteratively update the convolution kernel weights W and biases B; each iteration operates on a mini-batch of the data set, and the stochastic gradient descent algorithm is used to seek the globally optimal solution;
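Mini-batch stochastic gradient descent as described can be illustrated on a toy problem: learning a 3 × 3 color-mixing matrix W and bias B that reproduce a known linear color transform. The linear model below is an illustrative stand-in for the convolutional network, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known "ground truth" transform the toy model must learn (illustrative values).
true_W = np.array([[0.9, 0.1, 0.0],
                   [0.0, 1.1, 0.0],
                   [0.1, 0.0, 0.8]])
true_B = np.array([0.02, -0.01, 0.03])

X = rng.random((1000, 3))        # distorted "pixel" samples
Y = X @ true_W.T + true_B        # distortion-free targets (labels)

W = np.zeros((3, 3))
B = np.zeros(3)
lr, batch = 0.3, 32              # learning rate and mini-batch size
for epoch in range(300):
    order = rng.permutation(len(X))          # reshuffle each epoch
    for s in range(0, len(X), batch):
        i = order[s:s + batch]               # one mini-batch of indices
        err = X[i] @ W.T + B - Y[i]          # prediction error
        W -= lr * err.T @ X[i] / len(i)      # gradient step on W (MSE loss)
        B -= lr * err.mean(axis=0)           # gradient step on B

final_loss = np.mean((X @ W.T + B - Y) ** 2)
```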
In CNN image processing, the convolutional layers are connected through convolution filters; a convolution filter is defined as W × H × C × D, where C is the number of channels of the filtered image, W and H are the width and height of the filter support, and D is the number of convolution filter types;
The input layer of network contains a convolutional layer and ReLU activation primitives.Input layer feature extraction formula represents as follows:
F1(X1)=max(0, W1*X1+B1)   (1)
In the formula, X1 is the feature map entering the input layer; W1 and B1 are the convolution filters and bias of the input layer; W1 has size 3 × 3 × 3 × 64, i.e. 64 different convolution filters, each convolution kernel being of size 3 × 3 × 3; F1(X1) is the feature map produced by the input layer. The input image is a 3 × 40 × 40 feature map, that is, a 3-channel color image whose width w and height h are both 40. The width w1 and height h1 of the feature map output by a convolutional layer are computed as in formulas (2) and (3), where kernel is the convolution kernel size, stride is the step of the convolution kernel (a stride of 1 extracts overlapping image blocks, which gives better results), and pad is the number of zero-padded edge pixels. In the present invention kernel is set to 3, stride to 1, and pad to 1. Therefore, after the 64 convolution kernels of size 3 × 3 of the input layer, the input image produces a 64 × 40 × 40 feature map; the feature map then passes through the rectified linear unit ReLU, expressed as max(0, X), which extracts useful feature maps. The final output is still a 64 × 40 × 40 feature map;
w1 = (w + 2*pad - kernel)/stride + 1   (2)
h1 = (h + 2*pad - kernel)/stride + 1   (3)
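The output-size calculation can be checked numerically; with the stated settings (kernel = 3, stride = 1, pad = 1) the 40 × 40 spatial size is preserved at every layer:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial output size of a convolution:
    (size + 2*pad - kernel) // stride + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# With kernel=3, stride=1, pad=1 a 40-pixel side stays 40 pixels.
same_size = conv_out(40)
```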
In the nonlinear mapping of the nonlinear transformation stage, the convolutional layers, batch normalizations, and ReLU functions constitute the second, third, and fourth layers; each stage of the nonlinear transformation is expressed as:
Fi(Xi)=max(0, Wi*Fi-1(Xi-1)+Bi),  i=2, 3, 4   (4)
In the formula, i denotes the i-th layer; Xi is the output of the (i-1)-th layer, i.e. Fi-1(Xi-1); Wi and Bi are the convolution filters and biases of the nonlinear transformation stage, where the input-layer filter W1 has size 3 × 3 × 3 × 64 and the convolutional layers Wi of the 2nd, 3rd, and 4th layers have size 64 × 3 × 3 × 64, each kernel being of size 64 × 3 × 3. The 64 × 40 × 40 feature map output by the input layer is fed into the second convolutional layer and, after its 64 convolution kernels of size 3 × 3, again yields a 64 × 40 × 40 feature map; this feature map then enters batch normalization. The batch normalization, placed between the convolutional layer and the ReLU activation function, counters slow convergence and gradient explosion during neural network training; at the same time it accelerates network training and improves model accuracy. Finally, the feature map passes through the rectified linear unit, which increases the nonlinearity of the features. The 64 × 40 × 40 feature map output by the second layer then passes through the third and fourth layers, which have the same structure as the second layer, finally yielding a 64 × 40 × 40 feature map;
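The per-channel batch normalization placed between each convolutional layer and its ReLU can be sketched as follows (a minimal training-time version with no running averages; gamma and beta stand for the learned scale and shift):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalize an N x C x H x W batch of feature maps: normalize
    each channel over the batch and spatial axes, then scale and shift."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A batch of 8 feature maps shaped like the network's 64 x 40 x 40 activations.
x = np.random.default_rng(1).normal(5.0, 2.0, size=(8, 64, 40, 40))
y = batch_norm(x)
```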
In the output reconstruction of the output layer, the feature map is fed into an output layer that contains only one convolutional layer; the output reconstruction is expressed as:
F5(X5)=W5*F4(X4)+B5 (5)
In the formula, X5 is the output of the 4th layer; W5 and B5 are the convolution filters and bias of the feature reconstruction layer; W5 has size 3 × 3 × 64 × 3, i.e. the feature reconstruction layer has 3 convolution filters, each convolution kernel being of size 3 × 3 × 64, which acts like a mean filter and realizes the averaging of the feature maps; F4(X4) is the feature map produced by the nonlinear transformation stage, i.e. X5; after the 3 convolution kernels of size 3 × 3, the feature map output by the nonlinear transformation stage produces a 3 × 40 × 40 feature map;
The collected data set is trained with this network; each round of training yields a model after more than 50 iterations, and the final model is saved to a file;
The online part comprises the following:
Color correction is applied to color-distorted images using the trained model, yielding corrected images; the color chart, face, and tongue images are photographed in a darkroom, and the resulting photographs are distorted relative to the actual colors; the convolutional-neural-network color correction method is used to correct the distorted images; first the pixels of the image to be corrected are read and saved as an image matrix, then the MAT-format file obtained from training is read to obtain the color correction model; the image matrix is fed into the network model, color correction is performed on the R, G, and B channels of the image respectively, and the corrected image is output.
2. The method according to claim 1, characterized in that:
A ColorChecker Digital SG chart is used as the color chart for color correction; the ColorChecker Digital SG standard color chart is photographed under closed-environment conditions; chart images are obtained by varying the shooting angle of the chart, adjusting the distance between the chart and the light source, and adjusting the distance between the chart and the camera, and these images are used to generate the training data of the convolutional-neural-network color correction model.
CN201710406983.2A 2017-06-02 2017-06-02 Surface color and tongue color image color correction method based on convolutional neural network Active CN107507250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710406983.2A CN107507250B (en) 2017-06-02 2017-06-02 Surface color and tongue color image color correction method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN107507250A true CN107507250A (en) 2017-12-22
CN107507250B CN107507250B (en) 2020-08-21

Family

ID=60679349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710406983.2A Active CN107507250B (en) 2017-06-02 2017-06-02 Surface color and tongue color image color correction method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN107507250B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5828781A (en) * 1994-08-11 1998-10-27 Toyo Ink Manufacturing Co., Ltd. Color image reproducing system with image signal correction function
CN1622135A (en) * 2004-12-13 2005-06-01 中国科学院长春光学精密机械与物理研究所 Digital image color correction method
CN104410850A (en) * 2014-12-25 2015-03-11 武汉大学 Colorful digital image chrominance correction method and system
CN106295139A * 2016-07-29 2017-01-04 汤平 A tongue self-diagnosis health cloud service system based on a deep convolutional neural network


Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038832A (en) * 2017-12-25 2018-05-15 中国科学院深圳先进技术研究院 A kind of underwater picture Enhancement Method and system
CN108388905A (en) * 2018-03-21 2018-08-10 合肥工业大学 A kind of Illuminant estimation method based on convolutional neural networks and neighbourhood context
CN108388905B (en) * 2018-03-21 2019-07-19 合肥工业大学 A kind of Illuminant estimation method based on convolutional neural networks and neighbourhood context
CN108710831A (en) * 2018-04-24 2018-10-26 华南理工大学 A kind of small data set face recognition algorithms based on machine vision
CN108765502A (en) * 2018-04-25 2018-11-06 上海健康医学院 A kind of color looks acquisition methods under complex environment
CN108765502B (en) * 2018-04-25 2021-09-24 上海健康医学院 Color appearance obtaining method in complex environment
CN109102457A (en) * 2018-06-12 2018-12-28 杭州米绘科技有限公司 A kind of intelligent color change system and method based on convolutional neural networks
CN109102457B (en) * 2018-06-12 2023-01-17 杭州米绘科技有限公司 Intelligent color changing system and method based on convolutional neural network
CN109118549A (en) * 2018-07-20 2019-01-01 上海电力学院 A method of making object of reference with white printing paper and restores object color
CN109242792A (en) * 2018-08-23 2019-01-18 广东数相智能科技有限公司 A kind of white balance proofreading method based on white object
CN109273071A (en) * 2018-08-23 2019-01-25 广东数相智能科技有限公司 A method of establishing white object contrast model
CN109272441A (en) * 2018-09-14 2019-01-25 三星电子(中国)研发中心 The generation method of neural network and associated images
CN112771355A (en) * 2018-09-27 2021-05-07 德塔颜色公司 Correction of inter-instrument variation
CN111064860A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Image correction method, image correction device and electronic equipment
CN111062876A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111062876B (en) * 2018-10-17 2023-08-08 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN109615593A (en) * 2018-11-29 2019-04-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109711306A (en) * 2018-12-19 2019-05-03 新绎健康科技有限公司 A kind of method and apparatus obtaining facial characteristics based on depth convolutional neural networks
CN109636864A (en) * 2018-12-19 2019-04-16 新绎健康科技有限公司 A kind of tongue dividing method and system based on color correction Yu depth convolutional neural networks
CN109711306B (en) * 2018-12-19 2023-04-25 新绎健康科技有限公司 Method and equipment for obtaining facial features based on deep convolutional neural network
CN109859117A (en) * 2018-12-30 2019-06-07 南京航空航天大学 A kind of image color correction method directly correcting rgb value using neural network
CN109815860A (en) * 2019-01-10 2019-05-28 中国科学院苏州生物医学工程技术研究所 TCM tongue diagnosis image color correction method, electronic equipment, storage medium
CN111292251B (en) * 2019-03-14 2022-09-30 展讯通信(上海)有限公司 Image color cast correction method, device and computer storage medium
CN111292251A (en) * 2019-03-14 2020-06-16 展讯通信(上海)有限公司 Image color cast correction method, device and computer storage medium
CN110534071A (en) * 2019-07-19 2019-12-03 南京巨鲨显示科技有限公司 A kind of display color calibration system neural network based and method
CN110534071B (en) * 2019-07-19 2020-09-18 南京巨鲨显示科技有限公司 Display color calibration system and method based on neural network
CN110599554A (en) * 2019-09-16 2019-12-20 腾讯科技(深圳)有限公司 Method and device for identifying face skin color, storage medium and electronic device
CN112634143A (en) * 2019-09-24 2021-04-09 北京地平线机器人技术研发有限公司 Image color correction model training method and device and electronic equipment
WO2021092796A1 (en) * 2019-11-13 2021-05-20 深圳市大疆创新科技有限公司 Neural network model deployment method and apparatus, and device
WO2021114184A1 (en) * 2019-12-12 2021-06-17 华为技术有限公司 Neural network model training method and image processing method, and apparatuses therefor
CN113095109A (en) * 2019-12-23 2021-07-09 中移(成都)信息通信科技有限公司 Crop leaf surface recognition model training method, recognition method and device
CN113452969A (en) * 2020-03-26 2021-09-28 华为技术有限公司 Image processing method and device
CN111784780B (en) * 2020-06-16 2023-06-16 北京理工大学 Color calibration method of color camera based on deep learning
CN111784780A (en) * 2020-06-16 2020-10-16 北京理工大学 Color calibration method of color camera based on deep learning
CN112487945A (en) * 2020-11-26 2021-03-12 上海贝业斯健康科技有限公司 Pulse condition identification method based on double-path convolution neural network fusion
CN112508812A (en) * 2020-12-01 2021-03-16 厦门美图之家科技有限公司 Image color cast correction method, model training method, device and equipment
CN113542593B (en) * 2021-06-16 2023-04-07 深圳市景阳科技股份有限公司 Image processing method and device and terminal equipment
CN113542593A (en) * 2021-06-16 2021-10-22 深圳市景阳科技股份有限公司 Image processing method and device and terminal equipment
CN113378754A (en) * 2021-06-24 2021-09-10 中国计量大学 Construction site bare soil monitoring method
CN113378754B (en) * 2021-06-24 2023-06-20 中国计量大学 Bare soil monitoring method for construction site
CN114511567A (en) * 2022-04-20 2022-05-17 天中依脉(天津)智能科技有限公司 Tongue body and tongue coating image identification and separation method
CN116433508A (en) * 2023-03-16 2023-07-14 湖北大学 Gray image coloring correction method based on Swin-Unet
CN116433508B (en) * 2023-03-16 2023-10-27 湖北大学 Gray image coloring correction method based on Swin-Unet
CN117649661A (en) * 2024-01-30 2024-03-05 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method
CN117649661B (en) * 2024-01-30 2024-04-12 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method

Also Published As

Publication number Publication date
CN107507250B (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN107507250A (en) A kind of complexion tongue color image color correction method based on convolutional neural networks
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
Barnard et al. A comparison of computational color constancy algorithms. ii. experiments with image data
Rizzi et al. From retinex to automatic color equalization: issues in developing a new algorithm for unsupervised color equalization
CN105447884B (en) A kind of method for objectively evaluating image quality based on manifold characteristic similarity
CN109883548B (en) Optimization heuristic-based coding optimization method for spectral imaging system of neural network
CN107256541A (en) A kind of multi-spectral remote sensing image defogging method based on convolutional neural networks
CN110736542B (en) Spectral reconstruction method based on RGB value
CN112183637A (en) Single-light-source scene illumination re-rendering method and system based on neural network
CN110197517A (en) The SAR image painting methods that consistent sex resistance generates network are recycled based on multiple domain
CN114581356B (en) Image enhancement model generalization method based on style migration data augmentation
CN103324952A (en) Method for acne classification based on characteristic extraction
CN106709504A (en) Detail-preserving high fidelity tone mapping method
CN115499566A (en) End-to-end high quality achromatic imaging system based on depth calculation optical element
Zhang et al. A real-time semi-supervised deep tone mapping network
Zerman et al. Colornet-estimating colorfulness in natural images
CN113052783A (en) Face image fusion method based on face key points
Ma et al. Color discrimination enhancement for dichromats using self-organizing color transformation
Zhou et al. IACC: Cross-Illumination Awareness and Color Correction for Underwater Images Under Mixed Natural and Artificial Lighting
CN108735010A (en) A kind of intelligent English teaching system for English teaching
CN113256733A (en) Camera spectral sensitivity reconstruction method based on confidence voting convolutional neural network
CN111178229B (en) Deep learning-based vein imaging method and device
CN116597223A (en) Narrow-band laryngoscope image classification method based on multidimensional attention
CN112862906B (en) Color space conversion method based on neural network
Wei et al. Spectral reflectance estimation based on two-step k-nearest neighbors locally weighted linear regression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant