CN110211065B - Color correction method and device for food material image - Google Patents


Info

Publication number
CN110211065B
CN110211065B (application CN201910432260.9A)
Authority
CN
China
Prior art keywords
food material
image
color correction
key frames
model
Prior art date
Legal status
Active
Application number
CN201910432260.9A
Other languages
Chinese (zh)
Other versions
CN110211065A
Inventor
朱泽春
乔中义
李宏峰
鲁斌
Current Assignee
Shandong Jiuchuang Home Appliance Co ltd
Original Assignee
Joyoung Co Ltd
Priority date
Filing date
Publication date
Application filed by Joyoung Co Ltd
Priority to CN201910432260.9A
Publication of CN110211065A
Application granted
Publication of CN110211065B


Classifications

    • G06N 3/045 - Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06T 5/90 - Image enhancement or restoration; dynamic range modification of images or parts thereof
    • G06T 7/90 - Image analysis; determination of colour characteristics
    • G06T 2207/10016 - Indexing scheme for image analysis or image enhancement; image acquisition modality; video; image sequence


Abstract

The embodiment of the present invention discloses a color correction method and device for food material images. The method includes: obtaining a video stream of food material, and determining key frames and non-key frames in the video stream; inputting the key frames into a preset color correction model, and performing color correction on the key frames through the color correction model; inputting the non-key frames into a preset atmospheric scattering model, and restoring the color distortion in the non-key frames through the atmospheric scattering model. The scheme of this embodiment solves the problem of color cast and highlights in pictures taken by the camera, avoids interference from the high temperature inside the oven cavity, and is robust to the different kinds of food material being photographed.

Description

Color correction method and device for food material images

Technical Field

Embodiments of the present invention relate to image processing technology, and in particular to a color correction method and device for food material images.

Background Art

As artificial intelligence becomes increasingly integrated into home appliances, the oven, one of the most common small appliances, is becoming more intelligent and automated. The premise of such intelligence is a good data acquisition channel and a closed data-processing loop. Capturing pictures and video with a camera is the most direct way to obtain visual data, and is also one of the channels ovens commonly use to acquire data. However, the oven-plus-camera approach still faces many challenges on the data acquisition side. One of them is that, when image data is collected inside the oven cavity, the internal light source and the reflection and scattering from different materials cause severe interference. Camera-based video acquisition requires high-quality pictures at the source; this also reduces the burden of later preprocessing and increases the robustness of the entire post-processing system.

The more commonly used methods at present include the following:

1. Adaptive adjustment based on white balance. Because the lights inside most ovens have a color cast, this scheme can suppress the color cast to a certain extent. In practice, however, the white-balance adjustment often produces very different results for different food materials, and in some cases it even aggravates the color cast.

2. Avoiding color cast and highlights by physical means. The most common approach is to use optical filters and camera hoods. A filter can suppress the light component of a certain band to some extent and thereby correct the color deviation, but a filter cannot adapt: a camera calibrated against a white background often images very poorly inside an actual oven. A hood suppresses highlights fairly strongly and is a relatively direct solution.

3. Adding an RGB (red, green, blue) color sensor to assist the camera. In theory this scheme can completely avoid the color cast problem, because the RGB sensor directly measures the current three primary-color components, so color can be restored and distortion eliminated by a suitable strategy. In practice, however, the following problems were found. First, the accuracy of the correction depends entirely on the RGB color sensor, which places a heavy burden on sensor installation and calibration. Second, the whole closed-loop color adjustment system is not stable: the high-temperature environment inside the oven in particular causes serious interference and can even lead to abrupt jumps in the color adjustment.

Summary of the Invention

Embodiments of the present invention provide a color correction method and device for food material images, which can solve the problem of color cast and highlights in pictures taken by a camera.

To achieve the purpose of the embodiments of the present invention, an embodiment of the present invention provides a color correction method for food material images. The method may include:

obtaining a video stream of food material, and determining key frames and non-key frames in the video stream;

inputting the key frames into a preset color correction model, and performing color correction on the key frames through the color correction model;

inputting the non-key frames into a preset atmospheric scattering model, and restoring the color distortion in the non-key frames through the atmospheric scattering model.

In an exemplary embodiment of the present invention, the color correction model may be obtained by training a convolutional neural network model using pictures of food samples taken in a simulated oven cavity as training data and test data.

In an exemplary embodiment of the present invention, training the convolutional neural network model using the pictures of food samples in the simulated oven cavity as training data and test data may include:

obtaining a set of pictures of food materials in the simulated oven cavity;

extracting the food sample pictures from the picture set and dividing them into two parts, one part serving as the training data and the other part as the test data;

training the convolutional neural network model with the training data and a preset deep learning algorithm, and verifying the training result with the test data.

In an exemplary embodiment of the present invention, the method may further include: when training the convolutional neural network model, using deformable convolution (Deformable Convolution) as the basic layer structure for the intermediate convolution layers of the network.

In an exemplary embodiment of the present invention, the food material picture set may include one or more of the following kinds of picture data:

picture data of various food materials collected in the sample collection environment of the simulated oven cavity under a standard light source;

picture data of various food materials collected in the sample collection environment of the simulated oven cavity under the oven light; and

picture data of various food materials collected in the sample collection environment of the simulated oven cavity at different collection temperatures and during uniform changes of the collection temperature.

In an exemplary embodiment of the present invention, the method may further include: separating the food material image from the background image in the extracted food sample pictures, building a binary image segmentation sample set for the food material images, and dividing the binary segmentation sample set into the training data and the test data.

In an exemplary embodiment of the present invention, the method may further include: when the non-key frames are input into the preset atmospheric scattering model and the color distortion in the non-key frames is restored through the atmospheric scattering model, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model used for the historical non-key frames preceding the current non-key frame.

In an exemplary embodiment of the present invention, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model used for the historical non-key frames preceding the current non-key frame may include:

performing a weighted accumulation of the parameters of the atmospheric scattering model used for the historical non-key frames and the parameters of the current atmospheric scattering model, to obtain the corrected atmospheric scattering model parameters.
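The atmospheric scattering model is commonly written as I(x) = J(x)*t(x) + A*(1 - t(x)), with J the undistorted scene, t the transmission, and A the ambient light; inverting it restores the color-distorted frame. The sketch below illustrates this inversion together with a weighted accumulation of historical and current parameters. It is only a minimal illustration: the patent text does not specify the weights, nor whether t and A are global scalars, so the exponential weighting, the scalar parameters, and the names `restore_frame`/`ScatteringParams` here are assumptions.

```python
import numpy as np

def restore_frame(frame, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1-t) to recover the undistorted frame J."""
    t = max(transmission, t_min)  # clamp t to avoid division blow-up
    return (frame - airlight) / t + airlight

class ScatteringParams:
    """Weighted accumulation of per-frame model parameters (hypothetical weights)."""
    def __init__(self, weight=0.8):
        self.weight = weight          # weight given to the historical estimate
        self.airlight = None
        self.transmission = None

    def update(self, airlight, transmission):
        """Blend the current frame's estimate with the historical one."""
        if self.airlight is None:     # first non-key frame: no history yet
            self.airlight, self.transmission = airlight, transmission
        else:
            w = self.weight
            self.airlight = w * self.airlight + (1 - w) * airlight
            self.transmission = w * self.transmission + (1 - w) * transmission
        return self.airlight, self.transmission
```

With transmission = 1 the restoration leaves the frame unchanged, which is a quick sanity check on the inversion.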

In an exemplary embodiment of the present invention, determining the key frames and non-key frames in the video stream may include:

inputting the video stream into a preset key frame judgment module, judging the key frames through a judgment strategy in the key frame judgment module, and taking the frames in the video stream other than the key frames as the non-key frames.

In an exemplary embodiment of the present invention, the judgment strategy may include:

where thr is a preset threshold whose initial value is 0 and which is computed from the first two frames, g is the gradient map of the image, n denotes the n-th frame (n is a positive integer), S is the saturation of the image, num is the total number of pixels in the picture, and α is a balance parameter.

An embodiment of the present invention further provides a color correction device for food material images, including a processor and a computer-readable storage medium storing instructions which, when executed by the processor, implement the color correction method for food material images described in any one of the above.

The beneficial effects of the embodiments of the present invention may include:

1. The color correction method for food material images of the embodiment of the present invention may include: obtaining a video stream of food material and determining key frames and non-key frames in the video stream; inputting the key frames into a preset color correction model and performing color correction on the key frames through the color correction model; inputting the non-key frames into a preset atmospheric scattering model and restoring the color distortion in the non-key frames through the atmospheric scattering model. The scheme of this embodiment solves the problem of color cast and highlights in pictures taken by the camera, avoids interference from the high temperature inside the oven cavity, and is robust to the different kinds of food material being photographed.

2. The color correction model of the embodiment of the present invention may be obtained by training a convolutional neural network model using pictures of food samples in a simulated oven cavity as training data and test data. Under this scheme, the model can be trained on pictures of many kinds of food material, so that it is not affected by the kind of food and adaptively corrects the image color according to the kinds of food in the scene, making the color correction scheme of the embodiment of the present invention robust to the different kinds of food material being photographed.

3. In the embodiment of the present invention, training the convolutional neural network model using the pictures of food samples in the simulated oven cavity as training data and test data may include: obtaining a set of pictures of food materials in the simulated oven cavity; extracting the food sample pictures from the set and dividing them into two parts, one part serving as the training data and the other part as the test data; and training the convolutional neural network model with the training data and a preset deep learning algorithm, and verifying the training result with the test data. This ensures the effectiveness of the color correction model and the reliability of the scheme of the embodiment of the present invention.

4. The method of the embodiment of the present invention may further include: when training the convolutional neural network model, using deformable convolution (Deformable Convolution) as the basic layer structure for the intermediate convolution layers. This improves the extraction of the color features of the food itself by the convolutional neural network, while suppressing the influence of irrelevant background (for example, the oven tray) on the color correction.

5. The food material picture set of the embodiment of the present invention may include one or more of the following kinds of picture data: picture data of various food materials collected in the sample collection environment of the simulated oven cavity under a standard light source; picture data of various food materials collected in the sample collection environment of the simulated oven cavity under the oven light; and picture data of various food materials collected in the sample collection environment of the simulated oven cavity at different collection temperatures and during uniform changes of the collection temperature. This guarantees the comprehensiveness of the sample pictures, and thereby further guarantees the reliability of the color correction model.

6. The method of the embodiment of the present invention may further include: when the non-key frames are input into the preset atmospheric scattering model and the color distortion in the non-key frames is restored through the atmospheric scattering model, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model used for the historical non-key frames preceding the current non-key frame. Through this closed-loop adjustment, the scheme of the embodiment of the present invention becomes more robust to noise.

Additional features and advantages of the embodiments of the invention will be set forth in the description which follows and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the embodiments of the invention may be realized and obtained by the structures particularly pointed out in the specification, the claims, and the appended drawings.

Brief Description of the Drawings

The drawings are provided for a further understanding of the technical solutions of the embodiments of the present invention and constitute a part of the specification. Together with the embodiments of the present application they serve to explain the technical solutions of the embodiments of the present invention, and do not limit them.

Figure 1 is a flow chart of a color correction method for food material images according to an embodiment of the present invention;

Figure 2 is a schematic diagram of the color correction method for food material images according to an embodiment of the present invention;

Figure 3 is a flow chart of a method for training a convolutional neural network model using pictures of food samples in a simulated oven cavity as training data and test data according to an embodiment of the present invention;

Figure 4 is a schematic diagram of the method for training a convolutional neural network model using pictures of food samples in a simulated oven cavity as training data and test data according to an embodiment of the present invention;

Figure 5 is a schematic diagram of the basic structure of an FCN according to an embodiment of the present invention;

Figure 6 is a schematic diagram of one-dimensional convolution according to an embodiment of the present invention;

Figure 7 is a schematic diagram of one-dimensional deconvolution according to an embodiment of the present invention;

Figure 8 is a basic flow chart of deformable convolution according to an embodiment of the present invention;

Figure 9 is a block diagram of a color correction device for food material images according to an embodiment of the present invention.

Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, as long as there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another arbitrarily.

The steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.

An embodiment of the present invention provides a color correction method for food material images. As shown in Figures 1 and 2, the method may include S101-S103:

S101. Obtain a video stream of food material, and determine key frames and non-key frames in the video stream.

In an exemplary embodiment of the present invention, the scheme of this embodiment is applicable to different kinds of appliances in the current industry, such as steam ovens, built-in ovens, and countertop ovens, and achieves a good color adjustment effect for the food materials that often appear in recipes, such as meat, seafood, and vegetables.

In an exemplary embodiment of the present invention, determining the key frames and non-key frames in the video stream may include:

inputting the video stream into a preset key frame judgment module, judging the key frames through a judgment strategy in the key frame judgment module, and taking the frames in the video stream other than the key frames as the non-key frames.

In an exemplary embodiment of the present invention, after the video stream is input into the pre-processing system, it can directly enter the key frame judgment module, and the key frames are judged through the judgment strategy in the key frame judgment module.

In an exemplary embodiment of the present invention, the judgment strategy may include:

where thr is a preset threshold whose initial value is 0 and which is computed from the first two frames, g is the gradient map of the image, n denotes the n-th frame (n is a positive integer), S is the saturation of the image, num is the total number of pixels in the picture, and α is a balance parameter.

In an exemplary embodiment of the present invention, α may take values between 0 and 1. According to experiments, this parameter may be updated using the following formula:

where λ may be set to the frame rate; the larger λ is, the more slowly α approaches 1. At the same time, a larger α indicates higher confidence in the picture saturation information. x is the sequence number of the frame.
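The two formulas referred to above are not reproduced in this text, so the sketch below only illustrates one plausible reading of the described quantities: a frame is treated as a key frame when a combination of the change in its gradient map g and the change in its saturation S, balanced by α and normalized by the pixel count num, exceeds the threshold thr; and α is updated so that it rises toward 1 at a rate slowed by a larger λ. Every concrete formula below (the score and the exponential update) is a hypothetical stand-in, not the patent's formula.

```python
import math
import numpy as np

def alpha(x, lam):
    """Assumed update rule: alpha rises toward 1 with frame index x; a larger
    frame-rate parameter lam slows the approach (matching the description)."""
    return 1.0 - math.exp(-x / lam)

def is_key_frame(g_prev, g_cur, s_prev, s_cur, thr, a):
    """Hypothetical judgment: per-pixel mean change of the gradient map g and
    of the saturation map S, balanced by a, compared against the threshold thr."""
    num = g_cur.size                                   # total pixel count
    grad_term = np.abs(g_cur - g_prev).sum() / num     # mean gradient change
    sat_term = np.abs(s_cur - s_prev).sum() / num      # mean saturation change
    score = (1.0 - a) * grad_term + a * sat_term
    return score > thr
```

As α grows, the decision relies more on the saturation term, which matches the statement that a larger α indicates higher confidence in the saturation information.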

S102. Input the key frames into a preset color correction model, and perform color correction on the key frames through the color correction model.

In an exemplary embodiment of the present invention, after a key frame is identified, it can be sent to the color correction model for color correction. The obtained correction result is saved and may be denoted F_d, and the original frame picture may be denoted F_s.

In an exemplary embodiment of the present invention, the color correction model may be obtained by training a convolutional neural network model using pictures of food samples taken in a simulated oven cavity as training data and test data.

In an exemplary embodiment of the present invention, a neural network is used to perform color correction on the key frames. This is a scheme that, based on statistical regularities, 'fits' the color differences produced by different food materials inside the oven, finally producing a better-adapted model to express the colors of different food materials and, in the end, correcting the color of the key frame pictures.

In an exemplary embodiment of the present invention, the convolutional neural network model may have a fully convolutional network (FCN) structure.

In an exemplary embodiment of the present invention, as shown in Figures 3 and 4, training the convolutional neural network model using the pictures of food samples in the simulated oven cavity as training data and test data may include S201-S203:

S201. Obtain a set of pictures of food materials in the simulated oven cavity.

In an exemplary embodiment of the present invention, the food material picture set may include one or more of the following kinds of picture data:

picture data of various food materials collected in the sample collection environment of the simulated oven cavity under a standard light source;

picture data of various food materials collected in the sample collection environment of the simulated oven cavity under the oven light; and

picture data of various food materials collected in the sample collection environment of the simulated oven cavity at different collection temperatures and during uniform changes of the collection temperature.

S202. Extract the food sample pictures from the food material picture set, and divide them into two parts; one part serves as the training data and the other part as the test data.

S203. Train the convolutional neural network model with the training data and a preset deep learning algorithm, and verify the training result with the test data.

In an exemplary embodiment of the present invention, any existing deep learning tool can be used to train the FCN (fully convolutional network).

In an exemplary embodiment of the present invention, in actual operation, some special methods are used during training of the convolutional neural network model in order to increase the generalization ability of the model and improve the accuracy of the color correction.

In an exemplary embodiment of the present invention, the FCN network structure is adopted. The FCN is a typical fully convolutional neural network structure whose core component is the convolution layer, which makes it possible to extract features of the target image under different receptive fields. The basic structure of the FCN is shown in Figure 5.

In an exemplary embodiment of the present invention, the downsampling part uses convolution layers; the sampling interval is controlled by the stride of the convolution window, gradually reducing the size of the feature map output by each layer.

In an exemplary embodiment of the present invention, the basic convolution structure is shown in Figure 6, a schematic diagram of one-dimensional convolution.

In an exemplary embodiment of the present invention, taking one-dimensional convolution as an example, the input a in Figure 6 is a one-dimensional vector of length 5 and the convolution kernel b is a one-dimensional vector of length 3. Given padding = 1 and stride = 2, the output of the convolution is c = a·b, where the symbol · denotes the convolution operation, and the length of the output c is outSize = (inSize - kerSize + 2*padding)/stride + 1 = (5 - 3 + 2*1)/2 + 1 = 3. Likewise, in two-dimensional convolution the size of the output feature map can be computed in the same way, and the so-called downsampling can be controlled by adjusting the stride.
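The output-size arithmetic above (with the stride written explicitly rather than the fixed value 2) can be checked with a short sketch; the names `conv_out_size` and `conv1d` are illustrative, and the worked Figure 6 example (length-5 input, length-3 kernel, padding = 1, stride = 2) indeed yields an output of length 3:

```python
import numpy as np

def conv_out_size(in_size, ker_size, padding, stride):
    """outSize = (inSize - kerSize + 2*padding) // stride + 1"""
    return (in_size - ker_size + 2 * padding) // stride + 1

def conv1d(a, b, padding, stride):
    """Plain 1-D strided cross-correlation with zero padding, matching the
    windowing described for the Figure 6 example."""
    a = np.pad(a, padding)             # zero-pad both ends
    k = len(b)
    return np.array([np.dot(a[i:i + k], b)
                     for i in range(0, len(a) - k + 1, stride)])
```

For a = [1, 2, 3, 4, 5] and an all-ones kernel, the three windows after padding are [0,1,2], [2,3,4], [4,5,0], giving [3, 9, 9].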

In an exemplary embodiment of the present invention, upsampling uses deconvolution (transposed convolution). Deconvolution is itself a convolution method, but in effect it is typically used to increase the size of a feature map and is therefore commonly used in upsampling layers. One deconvolution method is briefly described below; a schematic diagram is shown in Figure 7.

In an exemplary embodiment of the present invention, as shown in Figure 7, the input is a one-dimensional vector a of length 3 and the convolution kernel b is a one-dimensional vector of length 3. The dimension of the output c is calculated as outSize = (inSize − 1)*stride + kerSize = (3 − 1)*1 + 3 = 5. The result shows that deconvolution can achieve the effect of upsampling.
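One common realization of this transposed-convolution scheme scatters a scaled copy of the kernel into the output for each input element. A minimal sketch, assuming Python with NumPy (names illustrative):

```python
import numpy as np

def deconv1d_out_size(in_size, ker_size, stride):
    # outSize = (inSize - 1) * stride + kerSize
    return (in_size - 1) * stride + ker_size

def deconv1d(a, b, stride=1):
    """Transposed 1-D convolution: each input element adds a scaled copy
    of the kernel to the output, offset by stride intervals."""
    out = np.zeros(deconv1d_out_size(len(a), len(b), stride))
    for i, v in enumerate(a):
        out[i * stride:i * stride + len(b)] += v * b
    return out

a = np.array([1.0, 2.0, 3.0])   # inSize = 3
b = np.array([1.0, 1.0, 1.0])   # kerSize = 3
c = deconv1d(a, b)              # stride = 1
assert len(c) == (3 - 1) * 1 + 3 == 5
```

A length-3 input thus produces a length-5 output, matching the formula in the text.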

In an exemplary embodiment of the present invention, the method may further include: when training the convolutional neural network model, the intermediate convolution layers of the network adopt deformable convolution (Deformable Convolution) as the basic layer structure.

In an exemplary embodiment of the present invention, deformable convolution may also be used as one stage of the upsampling process. Deformable convolution builds on the conventional convolution operation by modifying, at each sliding-window step, the input positions corresponding to the convolution kernel, so that the network can focus on key features or object positions. Deformable convolution is adopted in this scheme to improve the FCN network's extraction of the color features of the food material itself while suppressing the influence of irrelevant background (for example, the oven tray) on color correction.

In an exemplary embodiment of the present invention, a basic block diagram of deformable convolution may be as shown in Figure 8.

In an exemplary embodiment of the present invention, the values in parentheses in Figure 8 are the sizes of the feature maps at each step. The position offsets are first obtained by the convolution operation of the first step; bilinear interpolation is then used to map the original feature map onto a new feature map according to those offsets; finally, an ordinary convolution operation is applied to the new feature map to obtain the output.
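The offset-then-sample mechanism can be illustrated with a one-dimensional analogue. This is an assumption made for illustration: the scheme in Figure 8 is two-dimensional, uses bilinear (not linear) interpolation, and predicts the offsets with a convolution rather than taking them as given.

```python
import numpy as np

def linear_sample(x, pos):
    """Sample the 1-D signal x at fractional positions via linear interpolation."""
    lo = np.clip(np.floor(pos).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    frac = pos - np.floor(pos)
    return (1 - frac) * x[lo] + frac * x[hi]

def deformable_conv1d(x, kernel, offsets):
    """For each output index, sample the input at (base + offset) positions,
    then take the dot product with the kernel."""
    out = []
    for i in range(len(x) - len(kernel) + 1):
        base = np.arange(i, i + len(kernel), dtype=float)
        samples = linear_sample(x, base + offsets[i])
        out.append(float(np.dot(samples, kernel)))
    return np.array(out)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 1.0, 1.0])
zero = np.zeros((3, 3))   # zero offsets reduce to ordinary convolution
assert deformable_conv1d(x, k, zero).tolist() == [3.0, 6.0, 9.0]
```

With non-zero offsets the same kernel reads from shifted, fractional positions, which is how the layer "focuses" sampling on the food material rather than the fixed grid.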

In an exemplary embodiment of the present invention, the method may further include: separating the food material image from the background image in the extracted food material sample pictures, establishing an image binary-segmentation sample set for the food material images, and dividing this binary-segmentation sample set into the training data and the test data.

In an exemplary embodiment of the present invention, during implementation, additional techniques beyond the basic methods used above can be adopted to enhance the effect.

In an exemplary embodiment of the present invention, the pictures are collected in a standard simulated environment, so the backgrounds of the pictures are highly correlated. To exploit this property, the background can be filtered out so that attention is concentrated on the food material itself. In practice, for part of the samples the food material can be separated from the background (tray, inner wall of the oven cavity, etc.), and an additional binary image-segmentation sample set C can be established.

In an exemplary embodiment of the present invention, before training the FCN network on the color-correction task, the FCN can first be trained on an image-segmentation task over sample set C. For this step, the mean IU (intersection over union) on the test data set can be required to exceed 70%. The IU can be calculated as follows:
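The IU formula itself is not reproduced in this text. The sketch below assumes the standard intersection-over-union definition used in binary semantic segmentation, which may differ in detail from the patent's exact formula:

```python
import numpy as np

def binary_iou(pred, target):
    """Intersection over union for a binary (food vs. background) mask."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, union = 4, so IoU = 0.5
assert binary_iou(pred, target) == 0.5
```

Under this reading, the "mean IU above 70%" criterion is an average of such per-image (or per-class) scores over the test set.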

In an exemplary embodiment of the present invention, after the segmentation task is completed, the parameters obtained in the previous step can be used as a starting point for alternately training the upsampling and downsampling layers. Because of the limited conditions under which the sample set was collected, training can be repeated multiple times, for example 5-fold training, in order to make full use of the effective information in the samples, finally yielding an FCN network model with relatively high accuracy.
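One way to read the "5-fold training" above is as a standard k-fold split that reuses every sample for both training and validation. A sketch under that assumption (NumPy assumed; names illustrative):

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Split sample indices into k folds; each round uses one fold for
    validation and the remaining k-1 folds for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(k_fold_indices(10, k=5))
assert len(splits) == 5
assert all(len(tr) + len(va) == 10 for tr, va in splits)
```

Each sample appears in exactly one validation fold, so a small data set collected under constrained oven conditions is fully exploited.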

S103. Input the non-key frames into a preset atmospheric scattering model, and restore the color distortion in the non-key frames through the atmospheric scattering model.

In an exemplary embodiment of the present invention, for video streams, especially in the internal environment of an oven, the redundancy of data information is considerable. Within a period of time it can be assumed, without loss of generality, that the color change (mainly the influence of temperature on the colors of the pictures captured by the camera) affects the color distortion smoothly, so it is unnecessary to apply the neural-network correction to non-key-frame pictures. An atmospheric scattering model is adopted here instead. Atmospheric scattering models are generally used to simulate illumination intensity and are important for image dehazing and enhancement; since the presentation of color is also based on the intensity contrast of the three primary colors, the atmospheric model can restore the color distortion well.

In an exemplary embodiment of the present invention, after the key-frame correction results are obtained, the atmospheric scattering model can be used to build the color-correction model for ordinary frames (i.e., non-key frames). The atmospheric scattering model can be expressed as:

I(x,λ) = t(x,λ)·R(x,λ) + A·(1 − t(x,λ))

where t(x,λ) is the transmittance, R(x,λ) is the image with interference removed, A(1 − t(x,λ)) is the atmospheric light intensity under natural illumination, and λ is the wavelength. Because the wavelengths of the R, G and B components differ, λ can be considered different for each channel; x is the distance from the light source to the imaging device and can be treated as a constant for each pixel. The parameters to be estimated in the scheme are therefore t(x,λ) and the atmospheric light intensity A at infinity. In general, A can be obtained in advance through a unified calibration under the natural environment inside the oven cavity; in this embodiment it is obtained by computing the pixel value at the brightest point, because:

A = A(∞)

That is, under most conditions the place where natural illumination is strongest can be regarded as being at infinity.
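Putting the relations together, a per-pixel restoration under the stated model can be sketched as follows. The transmittance t is assumed to be given (in practice it must be estimated), and A is taken at the brightest pixel as described; the function name is illustrative:

```python
import numpy as np

def restore_color(I, t, eps=1e-6):
    """Invert I(x,λ) = t·R + A·(1 - t) per channel to recover R.
    A is taken as the pixel value at the brightest point, as in the text."""
    I = I.astype(float)
    # brightest pixel (largest channel sum) supplies the atmospheric light A
    flat = I.reshape(-1, I.shape[-1])
    A = flat[flat.sum(axis=1).argmax()]
    t = np.clip(t, eps, 1.0)[..., None]   # avoid division by zero
    return (I - A * (1.0 - t)) / t

img = np.array([[[200.0, 180.0, 160.0], [50.0, 40.0, 30.0]]])  # 1x2 RGB frame
t = np.full(img.shape[:2], 0.8)            # assumed uniform transmittance
R = restore_color(img, t)
assert np.allclose(R[0, 0], [200.0, 180.0, 160.0])  # brightest pixel is its own A
```

A practical implementation would additionally clip R back into the valid pixel range; the sketch omits this for clarity.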

In an exemplary embodiment of the present invention, the method may further include: while the non-key frames are input into the preset atmospheric scattering model and the color distortion in the non-key frames is restored through the model, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering models used for historical non-key frames preceding the current non-key frame.

In an exemplary embodiment of the present invention, this part of the embodiment is closed-loop adjustment of non-key frames. A video stream is data with temporal correlation along the timeline; based on this, the color correction of non-key frames does not use a single set of atmospheric-scattering-model parameters in isolation, but instead integrates the historical data (or observations) of preceding frames to correct the parameters of the current model, which are then saved. Actual tests showed that such closed-loop adjustment increases the robustness of the color correction algorithm of this embodiment against noise.

In an exemplary embodiment of the present invention, correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering models used for historical non-key frames preceding the current non-key frame may include:

performing a weighted accumulation of the parameters of the atmospheric scattering models used for the historical non-key frames and the parameters of the current atmospheric scattering model to obtain the corrected atmospheric-scattering-model parameters.

In an exemplary embodiment of the present invention, the atmospheric-scattering-model parameters can be updated by a weighted accumulation of the historical and current values. In the experiments, a weight of 0.8 can be given to the historical value and a weight of 0.2 to the current value, after which the current parameter value is updated.
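The weighted accumulation can be sketched directly. The parameter names "t" and "A" here are illustrative placeholders, not the patent's actual parameter set:

```python
def update_params(history, current, w_hist=0.8):
    """Closed-loop update: weighted accumulation of historical and current
    atmospheric-scattering-model parameters (0.8 / 0.2 as in the text)."""
    return {k: w_hist * history[k] + (1.0 - w_hist) * current[k]
            for k in history}

history = {"t": 0.80, "A": 240.0}   # parameters carried over from prior frames
current = {"t": 0.70, "A": 250.0}   # parameters estimated on this frame
updated = update_params(history, current)
# t: 0.8*0.80 + 0.2*0.70 = 0.78 ; A: 0.8*240 + 0.2*250 = 242.0
assert abs(updated["t"] - 0.78) < 1e-9 and updated["A"] == 242.0
```

The updated dictionary is saved and becomes the "history" for the next non-key frame, which is what makes the adjustment a closed loop.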

An embodiment of the present invention also provides a color correction device 1 for food material images. As shown in Figure 9, it may include a processor 11 and a computer-readable storage medium 12 in which instructions are stored; when the instructions are executed by the processor 11, any one of the color correction methods for food material images described above is implemented.

Embodiments of the present invention include at least the following beneficial effects:

1. A convolutional neural network is used to adjust the color distortion of different food-material scenes inside the oven, increasing the reliability of the color correction algorithm of the embodiments of the present invention.

2. Based on the continuity and timeliness requirements of video shot in the oven, redundant video frames use the atmospheric scattering model instead of the convolutional neural network as the adjustment model for color distortion, simplifying the color-adjustment process for non-key frames.

3. To cope with the complex lighting environment inside the oven cavity, the parameters of the atmospheric scattering model use historical time-series data as a reference, increasing the stability of the color correction algorithm of the embodiments of the present invention.

4. To balance timeliness and algorithm accuracy, the scheme interleaves local adjustment and global adjustment, reducing the amount of computation while improving real-time performance.

Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and devices, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned above does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (9)

1. A method of color correction of an image of food material, the method comprising:
acquiring a video stream related to food materials, and determining key frames and non-key frames in the video stream;
inputting the key frame into a preset color correction model, and performing color correction on the key frame through the color correction model;
inputting the non-key frames into a preset atmospheric scattering model, restoring the color distortion in the non-key frames through the atmospheric scattering model, and correcting parameters of the current atmospheric scattering model according to parameters of an atmospheric scattering model adopted by historical non-key frames before the current non-key frame, wherein the atmospheric scattering model is used for simulating illumination intensity so as to cope with the illumination environment in the oven cavity;
the color correction model is obtained by training with food material sample pictures in the simulated oven cavity as data;
obtaining a food material picture set in the simulation oven cavity, extracting the food material sample picture from the food material picture set, wherein the food material picture set comprises one or more of the following picture data:
in a sample collection environment of the simulated oven cavity, collecting picture data of various food materials under a standard light source;
in a sample collection environment of the simulated oven cavity, collecting picture data of various food materials under the oven light; and,
in the sample collection environment of the simulated oven cavity, collecting picture data of various food materials at different collection temperatures and during a uniform change of the collection temperature.
2. The method for correcting color of food material image according to claim 1, wherein the color correction model is obtained by training a convolutional neural network model by using food material sample pictures in a simulated oven cavity as training data and test data.
3. The method for color correction of food material images according to claim 2, wherein training the convolutional neural network model with food material sample pictures in the simulated oven cavity as training data and test data comprises:
acquiring a food material picture set in the simulated oven cavity;
extracting the food material sample picture from the food material picture set, and dividing the food material sample picture into two parts; one part is used as the training data, and the other part is used as the test data;
training the convolutional neural network model through the training data and a preset deep learning algorithm, and verifying a training result by adopting the test data.
4. A method of colour correction of an image of food material according to claim 2 or 3, characterised in that the method further comprises: when training the convolutional neural network model, the intermediate convolution layers in the convolutional network adopt deformable convolution (Deformable Convolution) as the basic layer structure.
5. A method of color correction of an image of food material according to claim 3, the method further comprising: separating the food material image from the background image in the extracted food material sample image, establishing an image binary segmentation sample set related to the food material image, and dividing the image binary segmentation sample set into the training data and the test data.
6. The method for color correction of an image of a food material according to claim 1, wherein,
the correcting the parameters of the current atmospheric scattering model according to the parameters of the atmospheric scattering model adopted by the historical non-key frames before the current non-key frame comprises the following steps:
and carrying out weighted accumulation calculation on the parameters of the atmospheric scattering model adopted by the historical non-key frame and the parameters of the current atmospheric scattering model to obtain corrected atmospheric scattering model parameters.
7. The method of claim 1, wherein the determining key frames and non-key frames in the video stream comprises:
inputting the video stream into a preset key frame judging module to judge key frames according to a judging strategy in the key frame judging module, and taking frames except the key frames in the video stream as the non-key frames.
8. The method of claim 7, wherein the determining strategy comprises:
wherein thr is a preset threshold value, an initial value is 0, g is a gradient map of an image obtained by calculation of the first two frames, n represents an nth frame, n is a positive integer, S is the saturation of the image, num is the number of integral pixels of the image, and alpha is a balance parameter.
9. A color correction device for a food material image, comprising a processor and a computer readable storage medium having instructions stored therein, wherein the instructions, when executed by the processor, implement the color correction method for a food material image according to any one of claims 1-8.
CN201910432260.9A 2019-05-23 2019-05-23 Color correction method and device for food material image Active CN110211065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910432260.9A CN110211065B (en) 2019-05-23 2019-05-23 Color correction method and device for food material image


Publications (2)

Publication Number Publication Date
CN110211065A CN110211065A (en) 2019-09-06
CN110211065B true CN110211065B (en) 2023-10-20

Family

ID=67788252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910432260.9A Active CN110211065B (en) 2019-05-23 2019-05-23 Color correction method and device for food material image

Country Status (1)

Country Link
CN (1) CN110211065B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697483B (en) * 2020-12-31 2023-10-10 复旦大学 Under-screen camera shooting device and method based on compressed sensing white balance algorithm
CN113516132B (en) * 2021-03-25 2024-05-03 杭州博联智能科技股份有限公司 Color calibration method, device, equipment and medium based on machine learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102223545A (en) * 2011-06-17 2011-10-19 宁波大学 Rapid multi-view video color correction method
CN106846260A (en) * 2016-12-21 2017-06-13 常熟理工学院 Video defogging method in a kind of computer
CN108416741A (en) * 2018-01-23 2018-08-17 浙江工商大学 Fast Image Dehazing Method Based on Brightness Contrast Enhancement and Saturation Compensation
CN109523485A (en) * 2018-11-19 2019-03-26 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN106022221B (en) * 2016-05-09 2021-11-30 腾讯科技(深圳)有限公司 Image processing method and system


Non-Patent Citations (1)

Title
He Renjie. Research on Image Dehazing and Turbulence Removal Methods. Wanfang Dissertation Database, 2017, pp. 79-91. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250507

Address after: No. 999, Mei Li Road, Huaiyin District, Ji'nan, Shandong

Patentee after: Shandong Jiuchuang Home Appliance Co.,Ltd.

Country or region after: China

Address before: No. 999, Mei Li Road, Huaiyin District, Ji'nan, Shandong

Patentee before: JOYOUNG Co.,Ltd.

Country or region before: China
