CN110211065B - Color correction method and device for food material image
- Publication number: CN110211065B
- Application number: CN201910432260.9A
- Authority: CN (China)
- Prior art keywords: food material, image, color correction, model, training
- Prior art date: 2019-05-23
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N3/045: Computing arrangements based on biological models; neural networks; architecture (e.g. interconnection topology); combinations of networks
- G06T5/90: Image enhancement or restoration; dynamic range modification of images or parts thereof
- G06T7/90: Image analysis; determination of colour characteristics
- G06T2207/10016: Indexing scheme for image analysis or image enhancement; image acquisition modality; video; image sequence
Abstract
The embodiments of the present application disclose a color correction method and device for food material images. The method comprises: acquiring a video stream of food material and determining key frames and non-key frames in the video stream; inputting each key frame into a preset color correction model, which performs color correction on the key frame; and inputting each non-key frame into a preset atmospheric scattering model, which reduces the color distortion in the non-key frame. This scheme solves the color cast and highlight problems of pictures shot by the camera, avoids high-temperature interference inside the oven cavity, and is robust to the different types of food material being photographed.
Description
Technical Field
The embodiments of the present application relate to image processing technology, and in particular to a color correction method and device for food material images.
Background
With the increasing integration of artificial intelligence into home appliances, ovens, as one kind of small appliance, are also becoming more intelligent and automated. The premise of such intelligence is a good data acquisition path and a closed loop for data processing. Shooting pictures and capturing video with a camera is the most direct way to obtain visual data and one of the common ways for an oven to acquire data. However, the oven-plus-camera scheme still faces many challenges at the data acquisition end. One of them is that when image data are acquired in the environment inside the oven cavity, reflection and scattering from the internal light source and from the different materials cause serious interference. The camera needs to acquire high-quality pictures at the source; this reduces the burden of post-processing and at the same time improves the robustness of the whole post-processing system.
The more common methods currently employed are the following:
1. Adaptive adjustment based on white balance. Since most of the light in an oven has a color cast, white balance can suppress the color-cast problem to some extent. In actual use, however, different food materials respond differently to white-balance adjustment, and for some foods it can even aggravate the color cast.
2. Avoiding color cast and highlight by physical means. The most common approach uses an optical filter and a camera mask. A filter can suppress the illumination components of a certain band and thereby correct color deviation, but it cannot adapt: the camera is typically calibrated against a white background, and imaging in an actual oven is then very poor. A mask suppresses highlights comparatively well and is a fairly direct solution.
3. Adding an RGB color sensor to work with the camera for correction. In theory this can completely avoid color cast: the sensor directly acquires the current three-primary-color components, and colors are then restored by some strategy, eliminating color distortion. In actual use, however, two problems appear. First, the accuracy of the correction depends entirely on the RGB color sensor, which places a heavy burden on sensor installation and calibration. Second, the whole closed-loop color-adjustment system is not stable, especially in the high-temperature environment inside the oven, where it can be severely disturbed, even causing abrupt changes in the color adjustment.
Disclosure of Invention
The embodiments of the present application provide a color correction method and device for food material images, which can solve the color cast and highlight problems of pictures shot by a camera.
To achieve the purpose of the embodiments of the present application, an embodiment of the present application provides a color correction method for food material images, which may include:
acquiring a video stream related to food materials, and determining key frames and non-key frames in the video stream;
inputting the key frame into a preset color correction model, and performing color correction on the key frame through the color correction model;
inputting the non-key frame into a preset atmospheric scattering model, and reducing the color distortion in the non-key frame through the atmospheric scattering model.
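Read together, the three steps form a simple per-frame dispatch, sketched below; the callable names are hypothetical, since the application does not prescribe an API:

```python
from typing import Callable, Iterable, List

def correct_video_stream(
    frames: Iterable,              # decoded video frames (e.g. numpy arrays)
    correct_key: Callable,         # CNN-based color correction for key frames
    correct_non_key: Callable,     # atmospheric-scattering restoration
    is_key_frame: Callable,        # judging strategy of the key frame module
) -> List:
    corrected = []
    prev = None
    for n, frame in enumerate(frames):
        if is_key_frame(n, prev, frame):
            out = correct_key(frame)      # key frame: color correction model
        else:
            out = correct_non_key(frame)  # non-key frame: scattering model
        corrected.append(out)
        prev = frame
    return corrected
```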
In an exemplary embodiment of the present application, the color correction model may be obtained by training a convolutional neural network model using food material sample pictures in the simulated oven cavity as training data and test data.
In an exemplary embodiment of the present application, training the convolutional neural network model with the food material sample pictures in the simulated oven cavity as training data and test data may include:
acquiring a food material picture set in the simulated oven cavity;
extracting the food material sample picture from the food material picture set, and dividing the food material sample picture into two parts; one part is used as the training data, and the other part is used as the test data;
training the convolutional neural network model through the training data and a preset deep learning algorithm, and verifying a training result by adopting the test data.
In an exemplary embodiment of the present application, the method may further include: when training the convolutional neural network model, the middle-layer convolutions in the convolutional network adopt deformable convolution (Deformable Convolution) as the basic layer structure.
In an exemplary embodiment of the present application, the food material picture set may include one or more of the following picture data:
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under a standard light source;
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under the oven light; and,
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected at different collection temperatures and while the collection temperature changes uniformly.
In an exemplary embodiment of the present application, the method may further include: separating the food material image from the background image in the extracted food material sample pictures, establishing a binary image-segmentation sample set of the food material images, and dividing the binary segmentation sample set into the training data and the test data.
In an exemplary embodiment of the present application, the method may further include: and in the process of inputting the non-key frame into a preset atmospheric scattering model and restoring the color distortion in the non-key frame through the atmospheric scattering model, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frame before the current non-key frame.
In an exemplary embodiment of the present application, the correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frame before the current non-key frame may include:
and carrying out weighted accumulation calculation on the parameters of the atmospheric scattering model adopted by the historical non-key frame and the parameters of the current atmospheric scattering model to obtain corrected atmospheric scattering model parameters.
In an exemplary embodiment of the present application, the determining key frames and non-key frames in the video stream may include:
inputting the video stream into a preset key frame judging module to judge key frames according to a judging strategy in the key frame judging module, and taking frames except the key frames in the video stream as the non-key frames.
In an exemplary embodiment of the present application, the judging policy may include:
where thr is a preset threshold with initial value 0, g is the gradient map of the image computed from the previous two frames, n denotes the n-th frame (n a positive integer), S is the saturation of the image, num is the total number of pixels in the image, and α is a balance parameter.
The embodiments of the present application also provide a color correction device for food material images, comprising a processor and a computer-readable storage medium storing instructions which, when executed by the processor, implement any one of the above color correction methods for food material images.
The beneficial effects of the embodiment of the application can include:
1. The color correction method of the food material image of the embodiments of the present application may comprise: acquiring a video stream of food material and determining key frames and non-key frames in the video stream; inputting each key frame into a preset color correction model, which performs color correction on the key frame; and inputting each non-key frame into a preset atmospheric scattering model, which reduces the color distortion in the non-key frame. This scheme solves the color cast and highlight problems of pictures shot by the camera, avoids high-temperature interference inside the oven cavity, and is robust to the different types of food material being photographed.
2. The color correction model of the embodiments of the present application may be obtained by training a convolutional neural network model with food material sample pictures in a simulated oven cavity as training data and test data. In this embodiment, the convolutional neural network model can be trained on pictures of various food materials, avoiding the influence of the food material type, adaptively correcting image colors according to the food material type in the scene, and improving the robustness of the color correction scheme provided by the embodiments of the present application to shooting different types of food material.
3. In the embodiment of the application, training the convolutional neural network model by using food sample pictures in the simulated oven cavity as training data and test data may include: acquiring a food material picture set in the simulated oven cavity; extracting the food material sample picture from the food material picture set, and dividing the food material sample picture into two parts; one part is used as the training data, and the other part is used as the test data; training the convolutional neural network model through the training data and a preset deep learning algorithm, and verifying a training result by adopting the test data. With this embodiment, the validity of the color correction model can be ensured, and the reliability of the embodiment of the present application can be ensured.
4. The method of the embodiments of the present application may further comprise: when training the convolutional neural network model, the middle-layer convolutions in the convolutional network adopt deformable convolution (Deformable Convolution) as the basic layer structure. This improves the convolutional neural network's extraction of the food material's color features and suppresses the influence of insignificant background (such as the oven tray) on color correction.
5. The food material picture set of the embodiments of the present application may comprise one or more of the following picture data: picture data of various food materials collected under a standard light source in a sample collection environment of the simulated oven cavity; picture data of various food materials collected under the oven light in the same environment; and picture data of various food materials collected at different collection temperatures and while the collection temperature changes uniformly. This embodiment guarantees the comprehensiveness of the sample pictures and thereby further guarantees the reliability of the color correction model.
6. The method of the embodiments of the present application may further comprise: in the process of inputting the non-key frame into a preset atmospheric scattering model and reducing its color distortion, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted for the historical non-key frames before the current non-key frame. This closed-loop adjustment increases the robustness of the scheme of the embodiments of the present application to noise.
Additional features and advantages of embodiments of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the application. The objectives and other advantages of embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical aspects of embodiments of the application, and are incorporated in and constitute a part of this specification, illustrate and explain the technical aspects of embodiments of the application, and not to limit the technical aspects of embodiments of the application.
FIG. 1 is a flowchart of a color correction method for food material images according to an embodiment of the present application;
fig. 2 is a schematic diagram of a color correction method of a food material image according to an embodiment of the application;
FIG. 3 is a flowchart of a method for training a convolutional neural network model using food sample pictures in a simulated oven cavity as training data and test data according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a method for training a convolutional neural network model using food sample pictures in a simulated oven cavity as training data and test data according to an embodiment of the present application;
fig. 5 is a schematic diagram of the basic structure of an FCN according to an embodiment of the present application;
FIG. 6 is a one-dimensional convolution schematic of an embodiment of the present application;
FIG. 7 is a schematic diagram of one-dimensional deconvolution of an embodiment of the present application;
FIG. 8 is a basic flow diagram of a deformable convolution of an embodiment of the present application;
fig. 9 is a block diagram of a color correction device for food material images according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in detail hereinafter with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be arbitrarily combined with each other.
The steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer-executable instructions. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
The embodiment of the application provides a color correction method of food material images, as shown in fig. 1 and 2, the method can comprise S101-S103:
s101, acquiring a video stream about food materials, and determining key frames and non-key frames in the video stream.
In the exemplary embodiments of the present application, the scheme may be applied to the different types of ovens in the current industry, such as steam ovens, built-in ovens, and countertop ovens; for food materials that often appear in recipes, such as meat, aquatic products, and vegetables, it can achieve a good color adjustment effect.
In an exemplary embodiment of the present application, the determining key frames and non-key frames in the video stream may include:
inputting the video stream into a preset key frame judging module to judge key frames according to a judging strategy in the key frame judging module, and taking frames except the key frames in the video stream as the non-key frames.
In an exemplary embodiment of the present application, after the video stream is input to the preprocessing system, the video stream may directly enter a key frame judgment module, and the key frame is judged by a judgment policy in the key frame judgment module.
In an exemplary embodiment of the present application, the judging policy may include:
where thr is a preset threshold with initial value 0, g is the gradient map of the image computed from the previous two frames, n denotes the n-th frame (n a positive integer), S is the saturation of the image, num is the total number of pixels in the image, and α is a balance parameter.
In an exemplary embodiment of the present application, the value of α may range from 0 to 1, and the parameter may be updated, according to experiments, using the following calculation formula:
where λ may be set to the frame rate: the greater λ is, the more slowly α tends to 1, while a greater α indicates higher confidence in the picture's saturation information; x is the sequence number of the frame.
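Because the patent reproduces the judgment formula and the α-update formula only as images, the following Python sketch is a hypothetical reconstruction built from the quantities described above (thr, g, S, num, α, λ); in particular, the form α(x) = 1 − e^(−x/λ) and the way the gradient and saturation terms are balanced are assumptions consistent with the description, not the patent's actual formulas:

```python
import numpy as np

def saturation(img: np.ndarray) -> np.ndarray:
    # HSV-style saturation per pixel for an H x W x 3 image
    img = img.astype(np.float32)
    mx, mn = img.max(axis=2), img.min(axis=2)
    return np.divide(mx - mn, mx, out=np.zeros_like(mx), where=mx > 0)

def alpha(x: int, lam: float) -> float:
    # Assumed update: alpha grows from 0 toward 1 with frame index x, and a
    # larger lam (e.g. the frame rate) slows that growth, as described above.
    return 1.0 - float(np.exp(-x / lam))

def is_key_frame(frame_n2, frame_n1, frame_n, n, thr, lam) -> bool:
    # Hypothetical decision combining the quantities named in the patent:
    # g (gradient map from the previous two frames), S (saturation of the
    # current frame), num (total pixel count), and the balance parameter alpha.
    gray = lambda f: f.astype(np.float32).mean(axis=2)
    gy, gx = np.gradient(gray(frame_n1) - gray(frame_n2))
    g = np.hypot(gx, gy)
    S = saturation(frame_n)
    num = g.size
    a = alpha(n, lam)
    score = (1.0 - a) * g.sum() / num + a * S.sum() / num
    return score > thr
```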
S102, inputting the key frame into a preset color correction model, and performing color correction on the key frame through the color correction model.
In an exemplary embodiment of the present application, after a frame is judged to be a key frame, it may be sent to the color correction model for color correction; the obtained correction result is saved and may be recorded as F_d, and the original frame picture may be recorded as F_s.
In an exemplary embodiment of the present application, the color correction model may be obtained by training a convolutional neural network model using food material sample pictures in the simulated oven cavity as training data and test data.
In the exemplary embodiments of the application, using a neural network for color correction of key frames is a scheme that, based on statistical regularities, "fits" the color differences produced by different food materials in the oven, finally generating a comparatively adaptive model that expresses the colors of the different food materials and performs color correction on the key frame picture.
In an exemplary embodiment of the present application, the convolutional neural network model may use a fully convolutional network (FCN) structure.
In an exemplary embodiment of the present application, as shown in fig. 3 and fig. 4, the training the convolutional neural network model with the food material sample pictures in the simulated oven cavity as training data and test data may include S201-S203:
s201, acquiring a food material picture set in the simulated oven cavity.
In an exemplary embodiment of the present application, the food material picture set may include one or more of the following picture data:
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under a standard light source;
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under the oven light; and,
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected at different collection temperatures and while the collection temperature changes uniformly.
S202, extracting the food material sample picture from the food material picture set, and dividing the food material sample picture into two parts; one part is used as the training data and the other part is used as the test data.
And S203, training the convolutional neural network model through the training data and a preset deep learning algorithm, and verifying a training result by adopting the test data.
In an exemplary embodiment of the present application, the FCN (fully convolutional network) may be trained using any existing deep learning tool.
In an exemplary embodiment of the present application, in order to increase the generalization ability of the model and improve the accuracy of color correction in actual operation, several additional techniques are used during the training of the convolutional neural network model.
In the exemplary embodiments of the present application, an FCN network structure is used. The FCN is a typical fully convolutional neural network whose core is the convolution layer, which extracts features of the target image under different receptive fields; the basic structure of the FCN is shown in fig. 5.
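For concreteness, a minimal encoder-decoder FCN of the kind described might be sketched in PyTorch as follows; the depth and channel counts are illustrative assumptions, not the patent's actual network:

```python
import torch.nn as nn

class MiniFCN(nn.Module):
    # Strided convolutions downsample; transposed convolutions upsample back
    # to the input resolution, producing a color-corrected image of the same
    # size as the input.
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))
```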
In an exemplary embodiment of the present application, the downsampling part employs convolutional layers, with the sampling interval controlled by the stride of the convolution window, progressively reducing the size of each layer's output feature map.
In an exemplary embodiment of the present application, the basic convolution structure is shown in fig. 6, which is a one-dimensional convolution schematic.
In the exemplary embodiments of the present application, taking one-dimensional convolution as an example, the input a in fig. 6 is a one-dimensional vector of length 5 and the convolution kernel b is a one-dimensional vector of length 3. With padding = 1 and stride = 2, the length of the output c is outSize = (inSize − kerSize + 2 × padding) / stride + 1 = (5 − 3 + 2 × 1) / 2 + 1 = 3. The size of the output feature map of a two-dimensional convolution is calculated in the same way, and the so-called downsampling can thus be controlled by adjusting the stride.
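This size calculation can be expressed as a small helper (the function name is illustrative):

```python
def conv_out_size(in_size: int, kernel: int, padding: int, stride: int) -> int:
    # outSize = (inSize - kerSize + 2 * padding) / stride + 1
    return (in_size - kernel + 2 * padding) // stride + 1

print(conv_out_size(5, 3, 1, 2))  # 3, matching the example above
```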
In the exemplary embodiments of the present application, upsampling uses deconvolution (transposed convolution). Deconvolution is also a convolution operation, but in effect it is used to enlarge the feature map, so it is commonly used in upsampling layers; a brief description follows, with a schematic shown in fig. 7.
In an exemplary embodiment of the present application, as shown in fig. 7, the input is a one-dimensional vector a of length 3 and the convolution kernel b is a one-dimensional vector of length 3; the size of the output c is calculated as outSize = (inSize − 1) × stride + kerSize = (3 − 1) × 1 + 3 = 5, from which it can be seen that deconvolution achieves an upsampling effect.
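The corresponding size calculation for deconvolution, as a one-line helper (the function name is illustrative):

```python
def deconv_out_size(in_size: int, kernel: int, stride: int) -> int:
    # outSize = (inSize - 1) * stride + kerSize
    return (in_size - 1) * stride + kernel

print(deconv_out_size(3, 3, 1))  # 5, matching the example above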
In an exemplary embodiment of the present application, the method may further include: when training the convolutional neural network model, the middle-layer convolutions in the convolutional network adopt deformable convolution (Deformable Convolution) as the basic layer structure.
In the exemplary embodiments of the present application, a deformable convolution may further be adopted as a link in the upsampling process. Deformable convolution builds on the conventional convolution operation: by modifying the input positions sampled by the convolution kernel in each windowing step, it achieves the effect of focusing on key features or object positions. In this scheme, deformable convolution is adopted to improve the FCN network's extraction of the food material's color features while suppressing the influence of insignificant background (for example, the oven tray) on color correction.
In an exemplary embodiment of the present application, a basic block diagram of a deformable convolution may be as shown in fig. 8.
In an exemplary embodiment of the present application, the size of each step's feature map is shown in parentheses in fig. 8. The position offsets are obtained by the convolution operation of the first step; the original map is then mapped onto a new feature map using bilinear interpolation, and the output is finally obtained by applying an ordinary convolution to the new feature map.
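A sketch of such a layer, using the deformable convolution available in torchvision (torchvision.ops.DeformConv2d) as a stand-in for the patent's layer: an ordinary convolution predicts two offset values per kernel sampling point, and the deformable convolution resamples the input with bilinear interpolation at the offset positions. Channel counts are illustrative:

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    # Offset branch: 2 * k * k channels, i.e. an (x, y) offset for each of
    # the k x k kernel sampling points, as in the flow of fig. 8.
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))
```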
In an exemplary embodiment of the present application, the method may further include: separating the food material image from the background image in the extracted food material sample pictures, establishing a binary image-segmentation sample set of the food material images, and dividing the binary segmentation sample set into the training data and the test data.
In an exemplary embodiment of the present application, in addition to the basic methods used above, additional measures may be employed to enhance the effect in practice.
In the exemplary embodiments of the present application, image acquisition is performed in a standard simulated environment, so the image backgrounds are strongly correlated. To exploit this property, the background can be filtered out and attention concentrated on the food material. To this end, in actual implementation, part of the samples can be used to separate the food material from the background (tray, inner wall of the oven cavity, etc.), and an additional binary image-segmentation sample set C is established.
In an exemplary embodiment of the present application, before training the FCN for color correction, the FCN may first be used to complete training on the image segmentation task of sample set C. For this operation, the mean IU (intersection over union) on the test data set may be required to reach 70% or above; IU is the standard intersection over union of the predicted mask A and the ground-truth mask B: IU = |A ∩ B| / |A ∪ B|.
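For reference, a minimal computation of IU over binary food/background masks might look like this (the function name is illustrative; the mean IU of a test set is the average of this value over its samples):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Intersection over union of two binary masks (food vs. background)
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0
```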
in the exemplary embodiment of the application, after the segmentation task is completed, the parameters obtained in the previous step are used as the reference to perform alternating training on the up-sampling layer and the down-sampling layer, because of the condition limitation of sample set collection, in order to fully utilize the effective information of the samples, multiple times of folding training, such as 5 times of folding training, can be adopted in training, and finally, an FCN network model with relatively high accuracy rate is obtained.
S103, inputting the non-key frames into a preset atmospheric scattering model, and reducing color distortion in the non-key frames through the atmospheric scattering model.
In the exemplary embodiments of the present application, the redundancy of data information in video streams is serious, especially in the oven's internal environment. Without loss of generality, it can be assumed that the color variation (mainly the effect of temperature on the color of the picture acquired by the camera) is smooth over a period of time, so correcting the pictures of non-key frames with a neural network is unnecessary. An atmospheric scattering model is adopted instead. Such models are generally used to simulate illumination intensity and are of great significance for image defogging and enhancement; since the presentation of color is also based on the intensity contrast of the three primary colors, the atmospheric model can accomplish the reduction of color distortion well.
In an exemplary embodiment of the present application, after obtaining the key frame correction result, the atmospheric scattering model may be used to model a color correction model of a normal frame (i.e., a non-key frame). The atmospheric scattering model relationship can be as follows:
I(x, λ) = t(x, λ)·R(x, λ) + A·(1 − t(x, λ))
where t(x, λ) is the transmission ratio, R(x, λ) is the imaging after interference removal, A(1 − t(x, λ)) is the atmospheric light intensity under natural illumination, and λ is the wavelength. Since the wavelengths of the three components R, G, B differ, the λ of each channel can be considered different; x is the distance from the light source to the imaging device and can be treated as a constant for each pixel. The parameters to be estimated in this scheme are therefore t(x, λ) and the atmospheric light intensity A at infinity. In general, A can be obtained in advance by calibration under the natural environment in the oven cavity; in this embodiment it is computed from the pixel value at the brightest point, because:
A = A(∞)
that is, under most conditions the point where the natural light is strongest can be regarded as lying at infinity.
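The inversion of the scattering relationship for a non-key frame can be sketched as follows, assuming I is an H x W x 3 image and t an H x W transmission map; the helper names and the t_min guard against near-zero transmission are assumptions, not part of the patent:

```python
import numpy as np

def estimate_A(I: np.ndarray) -> np.ndarray:
    # Take A from the pixel value at the brightest point, as described above
    y, x = np.unravel_index(np.argmax(I.mean(axis=2)), I.shape[:2])
    return I[y, x].astype(np.float32)

def restore_colors(I: np.ndarray, t: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    # Invert I = t*R + A*(1 - t) per channel: R = (I - A*(1 - t)) / t
    I = I.astype(np.float32)
    A = estimate_A(I)
    t = np.maximum(t.astype(np.float32), t_min)[..., None]  # H x W -> H x W x 1
    return (I - A * (1.0 - t)) / t
```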
In an exemplary embodiment of the present application, the method may further include: and in the process of inputting the non-key frame into a preset atmospheric scattering model and restoring the color distortion in the non-key frame through the atmospheric scattering model, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frame before the current non-key frame.
In the exemplary embodiments of the present application, this part of the embodiment is the closed-loop adjustment of non-key frames. Because a video stream is a data stream with back-and-forth correlation on the timeline, the color correction of a non-key frame does not use a single set of atmospheric-scattering-model parameters in isolation; instead, the historical data (observations) of the previous frames are combined to correct the parameters of the current model, which are then saved. Practical tests found that such closed-loop adjustment increases the robustness of the color correction algorithm of the embodiments of the present application to noise.
In an exemplary embodiment of the present application, the correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frame before the current non-key frame may include:
and carrying out weighted accumulation calculation on the parameters of the atmospheric scattering model adopted by the historical non-key frame and the parameters of the current atmospheric scattering model to obtain corrected atmospheric scattering model parameters.
In the exemplary embodiments of the application, the atmospheric scattering model parameters may be updated by a weighted accumulation of the historical values and the current values. In experiments, a weight of 0.8 may be used: the historical value is weighted by 0.8 and the current value by 0.2, and the current parameter values are then updated.
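As a minimal sketch, the weighted accumulation described above (0.8 on the history, 0.2 on the current estimate) can be written as follows; the function name and the array-based interface are assumptions for illustration:

```python
import numpy as np

def update_params(history: np.ndarray, current: np.ndarray, w: float = 0.8) -> np.ndarray:
    # Weighted accumulation: the result becomes the new "history"
    # carried forward to the next non-key frame.
    return w * history + (1.0 - w) * current
```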
The embodiments of the present application further provide a color correction device 1 for food material images, as shown in fig. 9, which may comprise a processor 11 and a computer-readable storage medium 12; the computer-readable storage medium 12 stores instructions which, when executed by the processor 11, implement the above color correction method for food material images.
The embodiment of the application at least comprises the following beneficial effects:
1. Color distortion in different food material scenes in the oven is adjusted based on a convolutional neural network, improving the reliability of the color correction algorithm of the embodiments of the present application.
2. Based on the continuity and timeliness requirements of video shot in the oven, for redundant video frames the convolutional neural network is replaced with an atmospheric scattering model as the color-distortion adjustment model, simplifying the color adjustment of non-key frames.
3. For the complex illumination environment in the oven cavity, historical parameters of the atmospheric scattering model are used as references, improving the stability of the color correction algorithm of the embodiments of the present application.
4. Balancing timeliness against algorithm accuracy, the scheme alternates local adjustment with global adjustment, reducing the amount of computation while enhancing real-time performance.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
Claims (9)
1. A method of color correction of an image of food material, the method comprising:
acquiring a video stream related to food materials, and determining key frames and non-key frames in the video stream;
inputting the key frame into a preset color correction model, and performing color correction on the key frame through the color correction model;
inputting the non-key frames into a preset atmospheric scattering model, reducing color distortion in the non-key frames through the atmospheric scattering model, and correcting parameters of the current atmospheric scattering model according to parameters of the atmospheric scattering model adopted for historical non-key frames before the current non-key frame, wherein the atmospheric scattering model is used for simulating illumination intensity so as to cope with the illumination environment in the oven cavity;
wherein the color correction model is obtained by training with food material sample pictures in the simulated oven cavity as data; and
obtaining a food material picture set in the simulated oven cavity and extracting the food material sample pictures from the food material picture set, wherein the food material picture set comprises one or more of the following picture data:
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under a standard light source;
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected under the oven light; and,
in a sample collection environment of the simulated oven cavity, picture data of various food materials collected at different collection temperatures and while the collection temperature changes uniformly.
2. The method for correcting color of food material image according to claim 1, wherein the color correction model is obtained by training a convolutional neural network model by using food material sample pictures in a simulated oven cavity as training data and test data.
3. The method for color correction of food material images according to claim 2, wherein training the convolutional neural network model with food material sample pictures in the simulated oven cavity as training data and test data comprises:
acquiring a food material picture set in the simulated oven cavity;
extracting the food material sample picture from the food material picture set, and dividing the food material sample picture into two parts; one part is used as the training data, and the other part is used as the test data;
training the convolutional neural network model through the training data and a preset deep learning algorithm, and verifying a training result by adopting the test data.
4. The method for color correction of food material images according to claim 2 or 3, wherein the method further comprises: when training the convolutional neural network model, the middle-layer convolutions in the convolutional network adopt deformable convolution (Deformable Convolution) as the basic layer structure.
5. The method for color correction of food material images according to claim 3, the method further comprising: separating the food material image from the background image in the extracted food material sample pictures, establishing a binary image-segmentation sample set of the food material images, and dividing the binary segmentation sample set into the training data and the test data.
6. The method for color correction of an image of a food material according to claim 1, wherein,
the correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frames before the current non-key frame comprises the following steps:
and carrying out weighted accumulation calculation on the parameters of the atmospheric scattering model adopted by the historical non-key frame and the parameters of the current atmospheric scattering model to obtain corrected atmospheric scattering model parameters.
7. The method of claim 1, wherein the determining key frames and non-key frames in the video stream comprises:
inputting the video stream into a preset key frame judging module to judge key frames according to a judging strategy in the key frame judging module, and taking frames except the key frames in the video stream as the non-key frames.
8. The method of claim 7, wherein the determining strategy comprises:
where thr is a preset threshold with initial value 0, g is the gradient map of the image computed from the previous two frames, n denotes the n-th frame (n a positive integer), S is the saturation of the image, num is the total number of pixels in the image, and α is a balance parameter.
9. A color correction device for a food material image, comprising a processor and a computer readable storage medium having instructions stored therein, wherein the instructions, when executed by the processor, implement the color correction method for a food material image according to any one of claims 1-8.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910432260.9A | 2019-05-23 | 2019-05-23 | Color correction method and device for food material image |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN110211065A | 2019-09-06 |
| CN110211065B | 2023-10-20 |
Family

ID=67788252

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910432260.9A | Color correction method and device for food material image | 2019-05-23 | 2019-05-23 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN110211065B |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114697483B | 2020-12-31 | 2023-10-10 | 复旦大学 (Fudan University) | Under-screen camera shooting device and method based on compressed sensing white balance algorithm |
| CN113516132B | 2021-03-25 | 2024-05-03 | 杭州博联智能科技股份有限公司 | Color calibration method, device, equipment and medium based on machine learning |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102223545A | 2011-06-17 | 2011-10-19 | 宁波大学 | Rapid multi-view video color correction method |
| CN106846260A | 2016-12-21 | 2017-06-13 | 常熟理工学院 | Video defogging method in a computer |
| CN108416741A | 2018-01-23 | 2018-08-17 | 浙江工商大学 | Rapid image defogging method based on luminance contrast enhancement and saturation compensation |
| CN109523485A | 2018-11-19 | 2019-03-26 | Oppo广东移动通信有限公司 | Image color correction method, device, storage medium and mobile terminal |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106022221B | 2016-05-09 | 2021-11-30 | 腾讯科技(深圳)有限公司 | Image processing method and system |
Non-Patent Citations (1)

| Title |
|---|
| 何人杰 (He Renjie). 图像去雾与去湍流方法研究 [Research on image dehazing and turbulence-removal methods]. 万方学位论文数据库 (Wanfang Dissertation Database), 2017, pp. 79-91. |
Also Published As

| Publication number | Publication date |
|---|---|
| CN110211065A | 2019-09-06 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |