CN108830154A - Food nutrient composition detection method and system based on a binocular camera - Google Patents
- Publication number: CN108830154A (application CN201810440996.6A)
- Authority
- CN
- China
- Prior art keywords
- food
- picture
- model
- network layer
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/60—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to nutrition control, e.g. diets
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
Abstract
The invention discloses a food nutrient composition detection method and system based on a binocular camera. A first training set is constructed and used to train a first artificial intelligence model that can identify the food name and food position in a picture. Pictures are then shot with a binocular camera, and the first artificial intelligence model identifies the food name and food position in each picture; the food positions and food names in the two pictures taken by the binocular camera in each shot form one training sample, yielding a second training set, which is used to train a second artificial intelligence model that can identify the food mass in a test sample. A test sample is acquired with the binocular camera and input to the first artificial intelligence model to obtain the food name and food position, which are then input to the second artificial intelligence model to obtain the food mass. Finally, the food mass is multiplied by the nutrient content per unit mass of the food to obtain the nutrient composition of the food. The invention can detect food nutrient composition accurately and quickly.
Description
Technical field
The present invention relates to food nutrient composition detection methods, and in particular to a food nutrient composition detection method and system based on a binocular camera.
Background art
In daily life, people pay more and more attention to the nutrient composition of food, especially special populations such as dieters, athletes and patients. However, the nutrient composition of food is difficult to judge by eye, so this invention proposes a method of calculating food nutrient composition with a dual camera.
Among existing technologies that calculate the nutrient composition of food from images, there are two common and relatively mature approaches:

The captured food picture is compared with pictures predefined by the system, and the nutrient composition of the food is estimated by computing the similarity of the pictures. This method ignores the size of the food, an important indicator, and since every user's shooting habits differ, it is difficult to calculate the nutrient composition of the food accurately.

A calibration card is placed beside the food when shooting. By comparing the length of the calibration card in the picture with its actual length, the volume of the food is calculated, and the nutrient composition is then derived. This method can calculate the nutrient composition more accurately, but it has a drawback: once the user forgets to carry the calibration card, the system fails completely, and the method struggles with errors caused by perspective in the photo.
To overcome the defects of the above methods, the invention discloses a method based on a binocular camera and deep learning algorithms that can quickly and accurately identify the type of a food and the nutrient composition of that food.
Summary of the invention
The first object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a food nutrient composition detection method based on a binocular camera that can detect food nutrient composition accurately and quickly.
The second object of the present invention is to provide a food nutrient composition detection system based on a binocular camera for the above method.
The first object of the present invention is achieved through the following technical solution: a food nutrient composition detection method based on a binocular camera, characterized by the following steps:

Step S1: obtain multiple pictures containing food and annotate each picture with the food name and food position; the pictures and their annotated food names and food positions constitute the first training set; then train a deep learning model with the first training set to obtain the first artificial intelligence model.

Step S2: take multiple pictures containing food with the binocular camera, and at the same time obtain the mass of the food in each picture; then input each picture separately into the first artificial intelligence model, which determines the food name and food position in each picture captured by the binocular camera.

Step S3: for each shot of the binocular camera in step S2, the food names and food positions determined by the first artificial intelligence model in the two pictures, together with the corresponding food mass, form one training sample; these samples constitute the second training set, which is then used to train a neural network model to obtain the second artificial intelligence model.

Step S4: when the nutrient composition of a food needs to be detected, first shoot the food with the binocular camera; the two pictures taken in one shot form one test sample. Input the two pictures of the test sample separately into the first artificial intelligence model, which identifies the food name and corresponding food position in each picture; then input the food names and corresponding food positions identified in the two pictures jointly into the second artificial intelligence model, which outputs the mass of the food.

Step S5: look up the nutrient content per unit mass of the food by the food name in the test sample, then multiply it by the food mass obtained in step S4 to obtain the nutrient composition of the food in the test sample.
Preferably, the method further includes construction of a first validation set: obtain multiple pictures containing food, annotate each picture with the food name and food position, and let the pictures and their annotated food names and food positions constitute the first validation set.
The training in step S1 obtains the first artificial intelligence model through the following detailed process:

Step S11: obtain an image data set with known labels and train the deep learning model on it to obtain image segmentation model F.

Step S12: with each picture in the first training set as the features, and the food name and food position corresponding to each picture as the labels, perform the following training on image segmentation model F (F' below denotes model F after its top-layer neurons have been trained alone on the first training set):

Step S122: randomly select a network layer from all the network layers of image segmentation model F', select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F1.

Step S123: randomly select a network layer from the third network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F2.

Randomly select a network layer from the fourth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F3.

Randomly select a network layer from the fifth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F4.
Step S13: validate the accuracy of image segmentation models F1, F2, F3 and F4 with the first validation set, and select the model with the highest accuracy as the first artificial intelligence model.

Alternatively, step S13 may validate the accuracy of image segmentation models F1, F2, F3 and F4 with the first training set, and select the model with the highest accuracy as the first artificial intelligence model.
Preferably, in step S11 the image data set with known labels is the COCO image data set, the Penn-Fudan image data set or a CV image data set.
Preferably, the deep learning model is a Mask R-CNN model.

Some of the training samples constituting the first training set are pictures with multiple foods, and some are pictures with a single food; when a training sample of the first training set is a picture with multiple foods, the foods in the picture may be of the same kind or of different kinds.

Some of the training samples constituting the second training set are two pictures with multiple foods together with the food names and food positions in them, and some are two pictures with a single food together with its food name and food position; when a training sample of the second training set consists of two pictures with multiple foods and the food names and positions in them, the foods in each picture may be of the same kind or of different kinds.
Preferably, in step S4, when the test sample contains multiple foods, the nutrient composition N_all of all the foods in the test sample is calculated as:

N_all = Σ_{i=1}^{n} M_i · N_i

where N_i is the nutrient vector contained per unit mass of the i-th food in the test sample, M_i is the mass of the i-th food in the test sample, and n is the total number of foods in the test sample.
Preferably, the food position includes the coordinates of each pixel belonging to the food portion of the picture; the coordinates of all pixels of the food portion form a location matrix, which serves as the food position information.
The second object of the present invention is achieved through the following technical solution: a food nutrient composition detection system based on a binocular camera for realizing the above food nutrient composition detection method, including:

a first training set acquisition module for obtaining multiple pictures annotated with food names and food positions, the pictures and their annotated food names and food positions constituting the first training set;

a first artificial intelligence model building module for training a deep learning model with the first training set to obtain the first artificial intelligence model;

a second training set acquisition module, in which, for each shot of the binocular camera, the food names and food positions determined in the two pictures by the first artificial intelligence model, together with the corresponding food mass, form one training sample, and the samples constitute the second training set;

a second artificial intelligence model building module for training a neural network model with the second training set to obtain the second artificial intelligence model;

a test sample acquisition module for shooting, with the binocular camera, the food whose nutrient composition is to be detected, the two pictures taken in one shot constituting a test sample;

a food name and position identification module for identifying, through the first artificial intelligence model, the food names and food positions in the two pictures shot each time by the binocular camera when constructing the second training set, and for identifying the food names and food positions in the two pictures of a test sample;

a food mass detection module for passing the food names and food positions in the two pictures of the test sample through the second artificial intelligence model to identify the mass of the food in the test sample; and

a food nutrient composition computing module for multiplying the mass of the food in the test sample by the nutrient content per unit mass of the food to obtain the nutrient composition of the food in the test sample.
Preferably, the food position includes the coordinates of each pixel belonging to the food portion of the picture; the coordinates of all pixels of the food portion form a location matrix, which serves as the food position information.
Preferably, when the test sample contains multiple foods, the food nutrient composition computing module calculates the nutrient composition N_all of all the foods in the test sample as:

N_all = Σ_{i=1}^{n} M_i · N_i

where N_i is the nutrient vector contained per unit mass of the i-th food in the test sample, M_i is the mass of the i-th food, and n is the total number of foods in the test sample.
Preferably, the system further includes a first validation set acquisition module for obtaining multiple pictures containing food, annotating each picture with the food name and food position, and forming the first validation set from the pictures and their annotated food names and food positions.

The first artificial intelligence model building module obtains the first artificial intelligence model as follows:

obtain an image data set with known labels and train the deep learning model on it to obtain image segmentation model F;

with each picture in the first training set as the features, and the food name and food position corresponding to each picture as the labels, perform the following training on image segmentation model F (F' below denotes model F after its top-layer neurons have been trained alone on the first training set):

randomly select a network layer from all the network layers of image segmentation model F', select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F1;

randomly select a network layer from the third network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F2;

randomly select a network layer from the fourth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F3;

randomly select a network layer from the fifth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F4;

validate the accuracy of image segmentation models F1, F2, F3 and F4 with the first validation set, and select the model with the highest accuracy as the first artificial intelligence model.
Compared with the prior art, the present invention has the following advantages and effects:

(1) In the food nutrient composition detection method of the present invention, multiple pictures with food are first acquired, and the pictures together with the food names and food positions in them are used to build the first training set, from which the first artificial intelligence model is trained to output, from an input picture, the food name and food position in the picture. Then multiple pictures containing food are taken with the binocular camera, and the first artificial intelligence model identifies the food name and food position in each picture; the food names and food positions determined by the first artificial intelligence model in the two pictures of each shot, together with the corresponding food mass, form one training sample, yielding the second training set, from which the second artificial intelligence model is trained to output the food mass from the food names and food positions in the two pictures of a shot. When the nutrient composition of a food needs to be detected, the binocular camera shoots the food, and the two pictures of one shot form a test sample; the two pictures are first input separately into the first artificial intelligence model to obtain the food names and food positions in them, which are then input jointly into the second artificial intelligence model to detect the mass of the corresponding food; finally the food mass is multiplied by the nutrient content per unit mass of the food to obtain its nutrient composition. By combining the first and second artificial intelligence models, the present invention applies the relationship between the positions of the food in the two pictures taken by the binocular camera and the mass of the food within artificial intelligence models, so it can detect food nutrient composition accurately and quickly and is a good aid for people in arranging their diet.

(2) In the food nutrient composition detection system of the present invention based on an intelligent terminal, when the cloud service platform trains the first artificial intelligence model, it can first train the deep learning model with a ready-made labeled image data set to obtain an image segmentation model, then train the top-layer neurons of the image segmentation model with the first training set in a transfer learning manner, and additionally train the neurons of randomly selected layers; finally, the trained model with the highest accuracy is selected as the first artificial intelligence model. This effectively reduces the dependence on the number of training samples in the first training set and accelerates the training of the first artificial intelligence model, so that good accuracy can be reached even when the first training set contains few samples.

(3) In the food nutrient composition detection method of the present invention, part of the training samples constituting the first training set can be pictures with multiple foods, and likewise part of the training samples constituting the second training set can be pairs of pictures with multiple foods; therefore, when the two pictures of a test sample shot by the binocular camera contain two or more foods, the method can detect the nutrient composition of each food in the test sample.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2a and Fig. 2b are the two pictures taken by the binocular camera for a test sample in the method of the present invention.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
This embodiment discloses a food nutrient composition detection method based on a binocular camera, whose steps are as follows:

Step S1: obtain multiple pictures containing food and annotate each picture with the food name and food position; the pictures and their annotated food names and food positions constitute the first training set, which is then used to train a deep learning model to obtain the first artificial intelligence model. Specifically, each picture in the first training set serves as a feature of the deep learning model, and the food name and food position in each picture serve as its label; the deep learning model is trained to obtain the first artificial intelligence model. The food position includes the coordinates of each pixel belonging to the food portion of the picture; the coordinates of all pixels of the food portion form a location matrix, which serves as the food position information.
In this embodiment, the deep learning network architecture of the above deep learning model is as follows:

First network layer: ResNet1 (res1), Batch Normalization1 (bn1);
Second network layer: ResNet2 (res2), Batch Normalization2 (bn2);
Third network layer: ResNet3 (res3), Batch Normalization3 (bn3);
Fourth network layer: ResNet4 (res4), Batch Normalization4 (bn4);
Fifth network layer: ResNet5 (res5), Batch Normalization5 (bn5);
Last network layer, i.e. the top layer: Mask R-CNN (mrcnn), Region Proposal Network (rpn) and Feature Pyramid Networks (fpn).
This embodiment also includes acquisition of the first validation set: obtain multiple pictures containing food, annotate each picture with the food name and food position, and let the pictures and their annotated food names and food positions constitute the first validation set.
In this embodiment, the detailed process of training the first artificial intelligence model is as follows:

Step S11: obtain an image data set with known labels and train the deep learning model on it to obtain image segmentation model F. In this embodiment the labeled image data set can be the COCO image data set, the Penn-Fudan image data set or a CV image data set.
Step S12: with each picture in the first training set as the features, and the food name and food position corresponding to each picture as the labels, perform transfer learning training on image segmentation model F:

Step S121: first train the neurons of the top layer of image segmentation model F alone with the first training set to obtain image segmentation model F'. The neurons of the top layer of image segmentation model F are the neurons in mrcnn, rpn and fpn, so the training range covers the neurons in mrcnn, rpn and fpn.
Step S122: randomly select a network layer from all the network layers of image segmentation model F', select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F1. The network layers of image segmentation model F' comprise the neurons in the first network layer (res1 and bn1), the second network layer (res2 and bn2), the third network layer (res3 and bn3), the fourth network layer (res4 and bn4), the fifth network layer (res5 and bn5), and the top, i.e. last, network layer (mrcnn, rpn and fpn).

Step S123: randomly select a network layer from the third network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F2. The layers from the third network layer upward comprise the neurons in the third network layer (res3 and bn3), the fourth network layer (res4 and bn4), the fifth network layer (res5 and bn5) and the top, i.e. last, layer (mrcnn, rpn and fpn); the training range therefore covers these neurons.
Randomly select a network layer from the fourth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F3. The layers from the fourth network layer upward comprise the neurons in the fourth network layer (res4 and bn4), the fifth network layer (res5 and bn5) and the top, i.e. last, layer (mrcnn, rpn and fpn); the training range therefore covers these neurons.
Randomly select a network layer from the fifth network layer of image segmentation model F' upward, select all neurons of the selected layer, and train those neurons with the first training set to obtain image segmentation model F4. The layers from the fifth network layer upward comprise the neurons in the fifth network layer (res5 and bn5) and the top, i.e. last, layer (mrcnn, rpn and fpn); the training range therefore covers these neurons.
Step S13: validate the accuracy of image segmentation models F1, F2, F3 and F4 with the first validation set, and select the model with the highest accuracy as the first artificial intelligence model.
Step S2: take multiple pictures containing food with the binocular camera, and weigh the food in each picture in advance with a scale to obtain its mass; then input each picture separately into the first artificial intelligence model, which determines the food name and food position in each picture captured by the binocular camera.
Step S3: for each shot of the binocular camera in step S2, the food names and food positions determined by the first artificial intelligence model in the two pictures, together with the corresponding food mass, form one training sample; these samples constitute the second training set, which is then used to train a neural network model to obtain the second artificial intelligence model. Specifically, the food names and food positions in the two pictures of each training sample of the second training set serve as the features of the neural network model, and the mass of the corresponding food in the two pictures serves as its label; the neural network model is trained to obtain the second artificial intelligence model.
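The text does not spell out how a (name, position) pair per view is encoded numerically for the mass-regression network. One plausible illustration, purely an assumption, is to reduce each view's position mask to its pixel area, since the two per-view areas from the stereo pair carry the size cue that mass estimation relies on:

```python
def mass_features(sample):
    # sample: ((name, mask_left), (name, mask_right)) for one stereo shot;
    # each mask is a binary grid in which 1 marks a food pixel
    (name_l, mask_l), (name_r, mask_r) = sample
    pixel_area = lambda m: sum(v for row in m for v in row)
    # the two per-view pixel areas act as a crude stereo size cue
    return [pixel_area(mask_l), pixel_area(mask_r)]

left = ("kiwi", [[0, 1], [1, 1]])    # 3 food pixels in the left view
right = ("kiwi", [[1, 1], [1, 1]])   # 4 food pixels in the right view
features = mass_features((left, right))
```

An actual implementation would feed richer features (the full location matrices and a name embedding) into the convolutional neural network mentioned below; this helper only illustrates the feature/label split described in step S3.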
Step S4: when the nutrient composition of a food needs to be detected, first shoot the food with the binocular camera; the two pictures taken in one shot form one test sample. Input the two pictures of the test sample separately into the first artificial intelligence model, which identifies the food name and corresponding food position in each picture; then input the food names and corresponding food positions identified in the two pictures jointly into the second artificial intelligence model, which outputs the mass of the food.
Step S5: look up the nutrient content per unit mass of the food by the food name in the test sample, then multiply it by the mass of the food obtained in step S4 to obtain the nutrient composition of the food in the test sample.
In the present embodiment, the above deep learning model is a Mask R-CNN model; other deep learning models can also be used. In the present embodiment, the learning rate of the Mask R-CNN model is set to 0.001 and the number of iterations is set to 1357000; after training is completed, foods of 50 types can be identified. When constructing the first training set in step S1, each picture read in is reduced (down-sampling) or enlarged (up-sampling, i.e. image interpolation) so that the training samples finally obtained in the training set are 128*128 pictures. In the present embodiment, a convolutional neural network model can be used as the neural network model.
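A minimal sketch of the picture-size normalisation described above (reduction by down-sampling, enlargement by interpolation). Nearest-neighbour index mapping is used here for brevity; a real pipeline would typically use a library resampler:

```python
import numpy as np

def resize_nearest(img, out_h=128, out_w=128):
    # Map each output pixel to its nearest source pixel. Shrinking a picture
    # down-samples it; enlarging one repeats pixels (a crude interpolation).
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]
```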
Some of the training samples constituting the first training set are pictures containing multiple foods, and the others are pictures containing a single food; when a training sample of the first training set is a picture with multiple foods, the foods in the picture may be of the same kind or of different kinds;
Some of the training samples constituting the second training set are two pictures containing multiple foods together with the food names and food positions therein, and the others are two pictures containing a single food together with its food name and position; when a training sample of the second training set is two pictures containing multiple foods together with the food names and food positions therein, the foods in each picture may be of the same kind or of different kinds.
In the above step S4, when the test sample contains multiple foods, the nutritional ingredients Nall of all the foods in the test sample are calculated, specifically:

Nall = Σ(i=1..n) Mi·Ni

where Ni is the vector of nutritional ingredients per unit mass of the i-th food in the test sample, formed by combining the amounts of each nutritional ingredient contained in a unit mass of that food. For example, if the i-th food is a Chinese gooseberry (kiwi fruit), whose nutritional ingredients per unit mass are shown in Table 1, then Ni = {1.23, 14.23, 0.45, 2.0, 10.98}. Mi is the mass of the i-th food in the test sample, and n is the total number of foods in the test sample.
Table 1

Nutritional ingredient of Chinese gooseberry | Content per unit mass
---|---
Protein | 1.23
Carbohydrate | 14.23
Fat | 0.45
Dietary fiber | 2.0
Sugar | 10.98
For example, the test sample of the present embodiment contains two kinds of fruit, a Chinese gooseberry and a pear, shown in Fig. 2a and Fig. 2b respectively, so the above n is 2. M1 is the mass of the 1st food (the Chinese gooseberry) in the test sample, and N1 is the vector of nutritional ingredients per unit mass of the Chinese gooseberry, so N1 = {1.23, 14.23, 0.45, 2.0, 10.98}; M2 is the mass of the 2nd food (the pear) in the test sample, and N2 is the vector of nutritional ingredients per unit mass of the pear, shown in Table 2, so N2 = {0.5, 10.65, 0.23, 3.6, 7.05};
The nutritional ingredients Nall of all the foods in the test sample are then calculated as:

Nall = M1·N1 + M2·N2

that is, the sum of each nutritional ingredient of the Chinese gooseberry and the pear in the test sample: the total protein, total carbohydrate, total fat, total dietary fiber and total sugar.
Table 2

Nutritional ingredient of pear | Content per unit mass
---|---
Protein | 0.5
Carbohydrate | 10.65
Fat | 0.23
Dietary fiber | 3.6
Sugar | 7.05
The present embodiment also discloses a binocular-camera-based food nutritional ingredient detection system for implementing the above food nutritional ingredient detection method, comprising:
a first training set acquisition module, for acquiring multiple pictures in which food names and food positions have been marked; the pictures, together with the food names and food positions marked in them, constitute the first training set. Some of the training samples constituting the first training set are pictures containing multiple foods, and the others are pictures containing a single food; when a training sample of the first training set is a picture with multiple foods, the foods in the picture may be of the same kind or of different kinds;
a first artificial intelligence model establishment module, for training the deep learning model on the first training set to obtain the first artificial intelligence model. Specifically, each training sample in the first training set serves as a feature of the deep learning model, and the food names and food positions in each training sample serve as its labels; the deep learning model is trained accordingly to obtain the first artificial intelligence model. The food name in each picture is identified manually, and the food position in each picture is marked manually. The food position comprises the coordinates of every pixel belonging to the food in the picture; these pixel coordinates form a location matrix, which serves as the food location information;
a first verification set acquisition module, for acquiring multiple pictures containing food, marking the food name and food position in each picture, and forming the first verification set from the pictures together with the food names and food positions marked in them;
a second training set acquisition module, for forming the second training set: for each shot of the binocular camera, the food names and food positions in the two pictures as judged by the first artificial intelligence model, together with the corresponding food masses, collectively constitute one training sample. Some of the training samples constituting the second training set are two pictures containing multiple foods together with the food names and food positions therein, and the others are two pictures containing a single food together with its food name and position; when a training sample of the second training set is two pictures containing multiple foods together with the food names and food positions therein, the foods in each picture may be of the same kind or of different kinds.
For example, suppose the two pictures taken in one shot of the binocular camera contain 3 foods, namely food a, food b and food c, where the positions of food a in the two pictures are position a1 and position a2, the positions of food b are position b1 and position b2, and the positions of food c are position c1 and position c2; then food a, position a1, position a2, food b, position b1, position b2, food c, position c1 and position c2 together constitute one training sample. Foods a, b and c may pairwise be of the same kind or of different kinds.
a second artificial intelligence model establishment module, for training the neural network model on the second training set to obtain the second artificial intelligence model. Specifically, for each training sample in the second training set, the food names and food positions in the two pictures serve as the features of the neural network model, and the masses of the corresponding foods serve as its labels; the neural network model is trained accordingly to obtain the second artificial intelligence model. For the example above of two pictures containing food a, food b and food c taken in one shot of the binocular camera, when the food names and food positions in the two pictures serve as a training sample, the second artificial intelligence model establishment module of the present embodiment takes food a, position a1, position a2, food b, position b1, position b2, food c, position c1 and position c2 in the two pictures as the features of the neural network model, and takes the masses of foods a, b and c in the two pictures as the labels of the neural network model for training, where foods a, b and c are weighed in advance on a scale.
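A hedged sketch of how the second model's training data is arranged: the encoded food names and positions from the two pictures form the feature vector, and the weighed mass is the label. The patent trains a neural network; a linear least-squares fit stands in here only to show the feature/label arrangement, and the encoding into a numeric feature vector is an assumption:

```python
import numpy as np

def fit_mass_model(features, masses):
    # features: (num_samples, num_features) array -- encoded food names and
    # positions from the two pictures of each binocular shot.
    # masses: (num_samples,) array -- the masses weighed in advance on a scale.
    weights, *_ = np.linalg.lstsq(features, masses, rcond=None)
    return weights

def predict_mass(weights, feature_row):
    return float(feature_row @ weights)
```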
a test sample acquisition module, for photographing, with the binocular camera, the food whose nutritional ingredients are to be detected; the two pictures taken in each shot of the binocular camera constitute a test sample;
a food name and location identification module, for identifying, through the first artificial intelligence model, the food names and food positions in the two pictures taken in each shot of the binocular camera when constructing the second training set, and for identifying, through the first artificial intelligence model, the food names and food positions in the two pictures of a test sample;
a food mass detection module, for passing the food names and food positions in the two pictures of the test sample through the second artificial intelligence model to identify the mass of the food in the test sample;
a food nutritional ingredient calculation module, for multiplying the mass of the food in the test sample by the nutritional ingredients per unit mass of the food to obtain the nutritional ingredients of the food in the test sample. When the test sample contains multiple foods, the food nutritional ingredient calculation module can calculate the nutritional ingredients Nall of all the foods in the test sample, specifically:

Nall = Σ(i=1..n) Mi·Ni

where Ni is the vector of nutritional ingredients per unit mass of the i-th food in the test sample, formed by combining the amounts of each nutritional ingredient contained in a unit mass of that food; Mi is the mass of the i-th food in the test sample; and n is the total number of foods in the test sample.
In the present embodiment, the detailed process by which the first artificial intelligence model establishment module obtains the first artificial intelligence model is as follows:

an image data set with known labels is acquired, and the deep learning model is trained on this image data set to obtain Image Segmentation Model F;

with each picture in the first training set as a feature and the corresponding food names and food positions as labels, Image Segmentation Model F is trained as follows:

a network layer is randomly selected from all network layers of Image Segmentation Model F, all neurons of the selected network layer are chosen, and these neurons are trained on the first training set to obtain Image Segmentation Model F1;

a network layer is randomly selected from the network layers above the third network layer of Image Segmentation Model F, all neurons of the selected network layer are chosen, and these neurons are trained on the first training set to obtain Image Segmentation Model F2;

a network layer is randomly selected from the network layers above the fourth network layer of Image Segmentation Model F, all neurons of the selected network layer are chosen, and these neurons are trained on the first training set to obtain Image Segmentation Model F3;

a network layer is randomly selected from the network layers above the fifth network layer of Image Segmentation Model F, all neurons of the selected network layer are chosen, and these neurons are trained on the first training set to obtain Image Segmentation Model F4;

the accuracies of Image Segmentation Models F1, F2, F3 and F4 are verified on the first verification set, and the model with the highest accuracy is selected as the first artificial intelligence model.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by it; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.
Claims (10)
1. A food nutritional ingredient detection method based on a binocular camera, characterized in that the steps are as follows:
Step S1: acquiring multiple pictures containing food, marking the food name and food position in each picture, forming a first training set from the pictures together with the food names and food positions marked in them, and then training a deep learning model on the first training set to obtain a first artificial intelligence model;
Step S2: taking multiple pictures containing food with the binocular camera while obtaining the mass of the food corresponding to each picture; then inputting each picture separately into the first artificial intelligence model, which judges the food name and food position in each picture taken by the binocular camera;
Step S3: forming a second training set in which, for each shot of the binocular camera in step S2, the food names and food positions in the two pictures as judged by the first artificial intelligence model, together with the corresponding food masses, collectively constitute one training sample; then training a neural network model on the second training set to obtain a second artificial intelligence model;
Step S4: when the nutritional ingredients of a food need to be detected, first photographing the food with the binocular camera, the two pictures taken in one shot constituting one test sample; inputting the two pictures corresponding to the test sample separately into the first artificial intelligence model, which identifies the food names and corresponding food positions in the two pictures of the test sample; then inputting the food names and corresponding food positions identified by the first artificial intelligence model together into the second artificial intelligence model, which obtains the mass of the food;
Step S5: obtaining the nutritional ingredients per unit mass of the food from the food name in the test sample, and then multiplying them by the mass of the food in the test sample obtained in step S4 to obtain the nutritional ingredients of the food in the test sample.
2. The food nutritional ingredient detection method based on a binocular camera according to claim 1, characterized by further comprising a first verification set construction process: acquiring multiple pictures containing food, marking the food name and food position in each picture, and forming the first verification set from the pictures together with the food names and food positions marked in them;
the detailed process by which the training in step S1 obtains the first artificial intelligence model is as follows:
Step S11: acquiring an image data set with known labels, and training the deep learning model on this image data set to obtain Image Segmentation Model F;
Step S12: with each picture in the first training set as a feature and the corresponding food names and food positions as labels, performing transfer learning training on Image Segmentation Model F:
Step S122: randomly selecting a network layer from all network layers of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F1;
Step S123: randomly selecting a network layer from the network layers above the third network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F2;
randomly selecting a network layer from the network layers above the fourth network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F3;
randomly selecting a network layer from the network layers above the fifth network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F4;
Step S13: verifying the accuracies of Image Segmentation Models F1, F2, F3 and F4 on the first verification set, and selecting the model with the highest accuracy as the first artificial intelligence model.
3. The food nutritional ingredient detection method based on a binocular camera according to claim 1, characterized in that the image data set with known labels in step S11 is a COCO image data set, a Penn-Fudan image data set or a CV image data set.
4. The food nutritional ingredient detection method based on a binocular camera according to claim 1, characterized in that the deep learning model is a Mask R-CNN model;
some of the training samples constituting the first training set are pictures containing multiple foods, and the others are pictures containing a single food; when a training sample of the first training set is a picture with multiple foods, the foods in the picture may be of the same kind or of different kinds;
some of the training samples constituting the second training set are two pictures containing multiple foods together with the food names and food positions therein, and the others are two pictures containing a single food together with its food name and position; when a training sample of the second training set is two pictures containing multiple foods together with the food names and food positions therein, the foods in each picture may be of the same kind or of different kinds.
5. The food nutritional ingredient detection method based on a binocular camera according to claim 1, characterized in that in step S4, when the test sample contains multiple foods, the nutritional ingredients Nall of all the foods in the test sample are calculated, specifically:

Nall = Σ(i=1..n) Mi·Ni

where Ni is the vector of nutritional ingredients per unit mass of the i-th food in the test sample, Mi is the mass of the i-th food in the test sample, and n is the total number of foods in the test sample.
6. The food nutritional ingredient detection method based on a binocular camera according to claim 1, characterized in that the food position comprises the coordinates of every pixel belonging to the food in the picture, these pixel coordinates forming a location matrix that serves as the food location information.
7. A food nutritional ingredient detection system based on a binocular camera for implementing the food nutritional ingredient detection method of claim 1, characterized by comprising:
a first training set acquisition module, for acquiring multiple pictures in which food names and food positions have been marked, the pictures together with the food names and food positions marked in them constituting the first training set;
a first artificial intelligence model establishment module, for training the deep learning model on the first training set to obtain the first artificial intelligence model;
a second training set acquisition module, for forming the second training set in which, for each shot of the binocular camera, the food names and food positions in the two pictures as judged by the first artificial intelligence model, together with the corresponding food masses, collectively constitute one training sample;
a second artificial intelligence model establishment module, for training the neural network model on the second training set to obtain the second artificial intelligence model;
a test sample acquisition module, for photographing, with the binocular camera, the food whose nutritional ingredients are to be detected, the two pictures taken in each shot of the binocular camera constituting a test sample;
a food name and location identification module, for identifying, through the first artificial intelligence model, the food names and food positions in the two pictures taken in each shot of the binocular camera when constructing the second training set, and for identifying, through the first artificial intelligence model, the food names and food positions in the two pictures of the test sample;
a food mass detection module, for passing the food names and food positions in the two pictures of the test sample through the second artificial intelligence model to identify the mass of the food in the test sample;
a food nutritional ingredient calculation module, for multiplying the mass of the food in the test sample by the nutritional ingredients per unit mass of the food to obtain the nutritional ingredients of the food in the test sample.
8. The food nutritional ingredient detection system based on a binocular camera according to claim 7, characterized in that the food position comprises the coordinates of every pixel belonging to the food in the picture, these pixel coordinates forming a location matrix that serves as the food location information.
9. The food nutritional ingredient detection system based on a binocular camera according to claim 7, characterized in that when the test sample contains multiple foods, the food nutritional ingredient calculation module is used to calculate the nutritional ingredients Nall of all the foods in the test sample, specifically:

Nall = Σ(i=1..n) Mi·Ni

where Ni is the vector of nutritional ingredients per unit mass of the i-th food in the test sample, Mi is the mass of the i-th food in the test sample, and n is the total number of foods in the test sample.
10. The food nutritional ingredient detection system based on a binocular camera according to claim 7, characterized by further comprising a first verification set acquisition module, for acquiring multiple pictures containing food, marking the food name and food position in each picture, and forming the first verification set from the pictures together with the food names and food positions marked in them;
the detailed process by which the first artificial intelligence model establishment module obtains the first artificial intelligence model is as follows:
acquiring an image data set with known labels, and training the deep learning model on this image data set to obtain Image Segmentation Model F;
with each picture in the first training set as a feature and the corresponding food names and food positions as labels, training Image Segmentation Model F as follows:
randomly selecting a network layer from all network layers of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F1;
randomly selecting a network layer from the network layers above the third network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F2;
randomly selecting a network layer from the network layers above the fourth network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F3;
randomly selecting a network layer from the network layers above the fifth network layer of Image Segmentation Model F, choosing all neurons of the selected network layer, and training these neurons on the first training set to obtain Image Segmentation Model F4;
verifying the accuracies of Image Segmentation Models F1, F2, F3 and F4 on the first verification set, and selecting the model with the highest accuracy as the first artificial intelligence model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810440996.6A CN108830154A (en) | 2018-05-10 | 2018-05-10 | A kind of food nourishment composition detection method and system based on binocular camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108830154A true CN108830154A (en) | 2018-11-16 |
Family
ID=64147640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810440996.6A Withdrawn CN108830154A (en) | 2018-05-10 | 2018-05-10 | A kind of food nourishment composition detection method and system based on binocular camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830154A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110674736A (en) * | 2019-09-23 | 2020-01-10 | 珠海格力电器股份有限公司 | Method, device, server and storage medium for identifying freshness of food materials |
CN111091053A (en) * | 2019-11-12 | 2020-05-01 | 珠海格力电器股份有限公司 | Data analysis method, device, equipment and readable medium |
CN111259184A (en) * | 2020-02-27 | 2020-06-09 | 厦门大学 | Image automatic labeling system and method for new retail |
CN111259184B (en) * | 2020-02-27 | 2022-03-08 | 厦门大学 | Image automatic labeling system and method for new retail |
WO2023159909A1 (en) * | 2022-02-25 | 2023-08-31 | 重庆邮电大学 | Nutritional management method and system using deep learning-based food image recognition model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108830154A (en) | A kind of food nourishment composition detection method and system based on binocular camera | |
CN106709525A (en) | Method for measuring food nutritional component by means of camera | |
CN106897681A (en) | A kind of remote sensing images comparative analysis method and system | |
CN110097090A (en) | A kind of image fine granularity recognition methods based on multi-scale feature fusion | |
CN109101891A (en) | A kind of rice pest detection system and its detection method merging artificial intelligence | |
CN113222991A (en) | Deep learning network-based field ear counting and wheat yield prediction | |
CN109045664B (en) | Diving scoring method, server and system based on deep learning | |
CN110610149B (en) | Information processing method and device and computer storage medium | |
CN109871833B (en) | Crop maturity monitoring method based on deep learning convolutional neural network | |
CN112989969A (en) | Crop pest and disease identification method and device | |
CN111476119B (en) | Insect behavior identification method and device based on space-time context | |
CN116229265A (en) | Method for automatically and nondestructively extracting phenotype of soybean plants | |
CN107958696A (en) | One kind is used to mark the special food chart system of students in middle and primary schools' meals and mask method | |
CN110874835A (en) | Crop leaf disease resistance identification method and system, electronic equipment and storage medium | |
Thorupunoori et al. | Camera Based Drunks Detection Mechanism Integrated with DL (Deep Learning) | |
CN109033117A (en) | A kind of food nourishment composition detection system based on intelligent terminal | |
CN106773051A (en) | Show the augmented reality devices and methods therefor of the virtual nutritional information of AR markers | |
CN110414369A (en) | A kind of training method and device of ox face | |
CN110097080A (en) | A kind of construction method and device of tag along sort | |
CN109919164A (en) | The recognition methods of user interface object and device | |
CN110363703B (en) | Goods shelf monitoring method based on depth camera | |
JP2022114418A (en) | Training device of artificial intelligence (ai), picking object estimation device, estimation system, and program | |
CN114120117A (en) | Method and system for displaying plant disease diagnosis information and readable storage medium | |
Anupriya et al. | Image Based Plant Disease Detection Model Using Convolution Neural Network | |
CN117078955B (en) | Health management method based on image recognition |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20181116 |