CN114359299B - Diet segmentation method and diet nutrition management method for chronic disease patients - Google Patents

Diet segmentation method and diet nutrition management method for chronic disease patients

Info

Publication number
CN114359299B
CN114359299B (application CN202210266348.XA)
Authority
CN
China
Prior art keywords
diet
food material
segmentation
picture
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210266348.XA
Other languages
Chinese (zh)
Other versions
CN114359299A (en)
Inventor
赵芃
黄毅
郑楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Love And Health Technology Co ltd
Andon Health Co Ltd
Original Assignee
Beijing Love And Health Technology Co ltd
Andon Health Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Love And Health Technology Co ltd, Andon Health Co Ltd filed Critical Beijing Love And Health Technology Co ltd
Priority to CN202210266348.XA priority Critical patent/CN114359299B/en
Publication of CN114359299A publication Critical patent/CN114359299A/en
Application granted granted Critical
Publication of CN114359299B publication Critical patent/CN114359299B/en

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a diet segmentation method and a diet nutrition management method for patients with chronic diseases, and relates to the technical field of dietary nutrition management. The method comprises the following steps: S1, acquiring a diet picture of the target patient; S2, inputting the diet picture into a diet target detection model for target recognition and outputting a plurality of diet target maps; S3, sequentially inputting the diet target maps into a food material segmentation model for food material segmentation and outputting a plurality of pixel-level food material segmentation maps; and S4, calculating the ratio of the nutrient components contained in all the food materials based on the food material segmentation maps. The invention supports the user in photographing several dishes at once rather than separately and simplifies user operation, thereby improving user experience and compliance.

Description

Diet segmentation method and diet nutrition management method for chronic disease patients
Technical Field
The invention relates to the technical field of dietary nutrition management, in particular to a diet segmentation method and a diet nutrition management method for patients suffering from chronic diseases.
Background
At present, as living standards improve, people pay more attention to their physical health. In terms of diet, the focus has gradually shifted from eating well to eating healthily, that is, to indexes such as the nutrient components and calories contained in food, and especially to diet control for patients with chronic diseases.
In existing online diet management, the user photographs a meal and uploads the picture to a system, which analyzes the nutrient components in the picture and offers healthy diet advice. In practice, however, if the user eats many dishes at one meal, each dish must be photographed individually as a separate diet picture before the system can perform diet segmentation, identify the proportion of each food material eaten, and give reasonable diet advice. Throughout this process the user must photograph and upload diet pictures many times, which is burdensome; a user who finds it troublesome may skip photos or miss dishes, making the final diet analysis and advice inaccurate and reducing user experience and compliance.
Accordingly, a diet segmentation method and a diet nutrition management method for chronic disease patients are needed to solve the above problems.
Disclosure of Invention
The invention aims to provide a diet segmentation method and a diet nutrition management method for patients with chronic diseases, which can identify and locate multiple dishes in a diet picture, support the user in photographing several dishes at once rather than separately, simplify user operation, and improve user experience and compliance.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention provides a diet segmentation method, which comprises the following steps:
S1, acquiring a diet picture of the target patient;
S2, inputting the diet picture into a diet target detection model for target recognition, and outputting a plurality of diet target maps;
S3, sequentially inputting the diet target maps into a food material segmentation model for food material segmentation, and outputting a plurality of pixel-level food material segmentation maps;
and S4, calculating the ratio of the nutrient components contained in all the food materials based on the food material segmentation maps.
Optionally, before step S2, the method further includes:
and S10, training a diet target detection model.
Optionally, step S10 specifically includes:
S101, acquiring diet pictures of target patients for training;
S102, establishing a diet target tag set: inputting the diet picture into image processing software, drawing a bounding box around each single food item and each tableware entity in the diet picture, and outputting the result as the diet target tag set;
S103, sequentially inputting the diet pictures and the corresponding diet target tag sets into the diet target detection model for training, and storing the trained diet target detection model.
Optionally, the output result of the diet target detection model is compared with the diet target label data obtained in step S102 using a composite loss function, the loss is calculated, and the diet target detection model is iteratively optimized according to the loss until the loss value falls into a stable region.
Optionally, after step S10 and before step S2, the method further includes:
and S20, training the food material segmentation model.
Optionally, step S20 specifically includes:
S201, establishing a food material segmentation tag set: mapping each food material category to its corresponding food material area in the diet picture and marking the pixels in that area with the category, so that each diet picture yields a pixel-level labeled template picture; these template pictures form the food material segmentation tag set;
S202, inputting the training data output by the diet target detection model as input data, and the food material segmentation tag set obtained in step S201 as training labels, into the food material segmentation model for training, treating each pixel of the diet picture as a separate classification operation, and storing the trained food material segmentation model.
Optionally, the output result of the food material segmentation model is compared with the food material segmentation label data obtained in step S201 using a cross entropy loss function, the loss is calculated, and the food material segmentation model is iteratively optimized according to the loss until the loss value falls into a stable region.
Optionally, before step S4, the method further includes:
and S40, performing perspective restoration on the food material segmentation maps.
Optionally, step S40 includes:
S401, actually shooting videos of each piece of tableware under the perspective effect from various angles and distances with a camera, and splitting the videos into picture frames to obtain picture data of each piece of tableware under different perspectives;
S402, analyzing the differing pixel sizes of the same tableware at different positions in pictures of the same scene, and calculating the perspective scaling factor $y^{(i)}$;
S403, optimizing the following formula by a gradient descent method:
$$J(W,b)=\frac{1}{2m}\sum_{i=1}^{m}\left(W^{T}x^{(i)}+b-y^{(i)}\right)^{2}$$
wherein i is the data iteration index; T denotes transposition, converting a column vector into a row vector; $x^{(i)}$ is the input feature vector <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> of the ith sample; m is the number of samples participating in one calculation of the loss value; and W, b are the parameters to be optimized;
after multiple iterations, obtaining and storing the parameters W, b that minimize the formula;
S404, using the formula $y = W^{T}x + b$ to estimate the scaling coefficient y of the tableware in the diet picture caused by the perspective effect;
wherein x is the <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> input feature vector;
and S405, dividing the area of each far food material region in the food material segmentation map by the scaling coefficient y to obtain the restored size of each food material relative to the nearest food material.
Optionally, step S4 includes:
S41, merging identical food materials across the dishes in all food material segmentation maps to obtain the pixel-amount ratio of the different food materials;
and S42, analyzing the nutrient content of each food material, and calculating the ratio of the nutrient components of all the food materials in the diet picture.
The present invention also provides a method for managing diet nutrition of a patient with chronic disease, which comprises the diet segmentation method as described above, and after step S4, further comprises:
and S5, providing personalized diet suggestions for the target patient.
Optionally, step S5 specifically includes: giving personalized diet suggestions by combining the physiological sign data and the illness information data of the target patient;
the personalized diet suggestions comprise diet nutrition proportion suggestions, special dish suggestions, food material type suggestions and cooking method suggestions.
Optionally, before step S4, the method further includes:
S30, constructing a nutrition knowledge base, a chronic disease diet management knowledge base and a nutrition advice phrase library.
The invention has the beneficial effects that:
the invention provides a diet segmentation method and a diet nutrition management method for patients with chronic diseases.
The invention can identify and position multiple dishes in the diet picture, supports the user to shoot multiple dishes at one time, avoids the user from separately shooting, simplifies the user operation, and improves the user experience and the compliance.
Drawings
FIG. 1 is a flow chart of the main steps of a diet segmentation method provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the detailed steps of a diet segmentation method provided by an embodiment of the present invention;
FIG. 3 is a before-and-after comparison diagram of the food material segmentation and labeling of a diet picture provided by an embodiment of the present invention;
FIG. 4 is a flow chart of the main steps of a method for dietary nutrition management of patients with chronic diseases provided by an embodiment of the present invention.
Detailed Description
In order to make the technical problems solved, technical solutions adopted and technical effects achieved by the present invention clearer, the technical solutions of the embodiments of the present invention will be described in further detail below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, unless expressly stated or limited otherwise, the terms "connected" and "fixed" are to be construed broadly, e.g., as meaning permanently connected, removably connected, or integral to one another; mechanically or electrically connected; connected directly or indirectly through intervening media; or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The embodiment of the invention discloses a diet segmentation method, which comprises the following steps: S1, acquiring a diet picture of the target patient; S2, inputting the diet picture into the diet target detection model for target recognition, and outputting a plurality of diet target maps; S3, sequentially inputting the diet target maps into the food material segmentation model for food material segmentation, and outputting a plurality of pixel-level food material segmentation maps; and S4, calculating the ratio of the nutrient components contained in all the food materials based on the food material segmentation maps.
The diet picture uploaded by the target patient is first recognized by the diet target detection model and split into a plurality of diet target maps; each diet target map then passes through the food material segmentation model and yields pixel-level food material segmentation maps. From these segmentation maps the system calculates the ratio of the nutrient components contained in all the food materials, and a healthy, reasonable diet suggestion can be provided to the target patient according to this ratio. The invention can identify and locate multiple dishes in a diet picture, supports the user in photographing several dishes at once rather than separately, and simplifies user operation, thereby improving user experience and compliance.
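To make this serial structure concrete, the following is a minimal sketch of how the two models could be chained. The patent publishes no code, so the choice of Python, the `predict` interfaces of `detector` and `segmenter`, and the bounding-box format are all assumptions made for illustration.

```python
from typing import Dict, List, Tuple
import numpy as np

def run_diet_pipeline(diet_picture: np.ndarray, detector, segmenter) -> Dict[int, float]:
    """Sketch of S2-S4: detect diet targets, segment food materials, compute pixel ratios.

    `detector` and `segmenter` stand in for the trained diet target detection
    model and food material segmentation model; their interfaces are assumed.
    """
    # S2: bounding boxes (x1, y1, x2, y2) for each diet target in the picture
    boxes: List[Tuple[int, int, int, int]] = detector.predict(diet_picture)
    diet_target_maps = [diet_picture[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

    # S3: a pixel-level class map for every diet target map
    segmentation_maps = [segmenter.predict(t) for t in diet_target_maps]

    # S4: accumulate pixel counts per food material class across all maps
    pixel_counts: Dict[int, int] = {}
    for seg in segmentation_maps:
        classes, counts = np.unique(seg, return_counts=True)
        for c, n in zip(classes, counts):
            pixel_counts[int(c)] = pixel_counts.get(int(c), 0) + int(n)

    total = sum(pixel_counts.values())
    return {c: n / total for c, n in pixel_counts.items()}  # pixel-amount ratios
```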
The steps of the diet segmentation method will be described in detail below with reference to fig. 2 and 3.
And S1, acquiring the diet picture of the target patient.
In this embodiment, the target patient is a chronic disease patient, i.e., someone suffering from a disease such as diabetes or hypertension. A chronic disease has a long, slow course and no specific cure. During the treatment of chronic diseases, out-of-hospital rehabilitation matters more than in-hospital treatment, and a scientific, reasonable diet is very important in out-of-hospital management. Of course, in other embodiments healthy diet advice can be provided for patients with other diseases; this embodiment is not limiting.
And S2, inputting the diet pictures into the diet target detection model for target recognition, and outputting a plurality of diet target pictures.
Through the steps, a plurality of diet targets in the diet picture can be identified and output to form a plurality of diet target graphs.
The diet targets to be recognized include single food items and tableware entities in the diet picture. Specifically, single food refers to independent, easily recognized food such as steamed bread, eggs, and apples; a tableware entity refers to tableware containing a complex mixed food of several food materials, such as stir-fried dishes, salads, and hot pot. Tableware here includes, but is not limited to, common household round dinner plates, compartment trays for work meals, lunch boxes, bowls, and pots. It should be noted that, during diet target recognition in this embodiment, the complex mixed food does not need to be subdivided; the corresponding diet target in the diet picture is simply enclosed in a bounding box in the image processing software, following standard target detection labeling practice, which is fast and efficient.
Further, before step S2, the method further includes: and S10, training a diet target detection model.
Step S10 specifically includes:
S101, acquiring diet pictures of target patients for training;
S102, establishing a diet target tag set: inputting the diet picture into image processing software, drawing a bounding box around each single food item and each tableware entity in the diet picture, and outputting the result as the diet target tag set;
S103, sequentially inputting the diet pictures and the corresponding diet target tag sets into the diet target detection model for training, and storing the trained diet target detection model.
A large number of diet pictures need to be acquired; they can be collected from the internet or provided by the corresponding target patients.
The image processing software used in this step is a technology known in the art, and may be provided by a third party, or may be open source software obtained publicly and freely, and is not described herein again.
The diet target detection model is a neural network model capable of deep learning. Diet pictures and their corresponding diet target tag sets are fed into the model in small batches for training. The trained model can identify all diet targets (single food items and tableware containing complex mixed food) in a diet picture, enclose each in a bounding box, crop them out of the original diet picture, and output a plurality of diet target maps. The invention thus supports the user in photographing several dishes at once, avoids separate shots, and simplifies user operation, thereby improving user experience and compliance.
Preferably, the output of the diet target detection model is compared with the diet target label data obtained in step S102 using a composite loss function; the loss is calculated and the model is iteratively optimized according to it until the loss value falls into a stable region, at which point training stops and the trained model is stored, improving the accuracy of the diet target detection model. In this embodiment, the composite loss function consists of a CIoU (Complete Intersection over Union) loss and a target-box classification loss; in other embodiments, the output of the diet target detection model may instead be iteratively optimized with a composite loss built from an IoU, GIoU, or DIoU loss plus a target-box classification loss, and the scheme is not limited to this embodiment.
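For illustration, the following is a minimal sketch of such a composite loss in Python (PyTorch is an assumption; the patent names no framework). It combines a CIoU box-regression term with a cross-entropy target-box classification term; the equal weighting of the two terms is also an assumption.

```python
import math
import torch
import torch.nn.functional as F

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for boxes in (x1, y1, x2, y2) format; returns 1 - CIoU."""
    # intersection area
    ix1, iy1 = torch.max(pred[..., 0], target[..., 0]), torch.max(pred[..., 1], target[..., 1])
    ix2, iy2 = torch.min(pred[..., 2], target[..., 2]), torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    # union area and IoU
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared distance between box centers
    rho2 = ((pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) ** 2
            + (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) ** 2) / 4
    # squared diagonal of the smallest enclosing box
    ex1, ey1 = torch.min(pred[..., 0], target[..., 0]), torch.min(pred[..., 1], target[..., 1])
    ex2, ey2 = torch.max(pred[..., 2], target[..., 2]), torch.max(pred[..., 3], target[..., 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    # aspect-ratio consistency term
    wp, hp = pred[..., 2] - pred[..., 0], (pred[..., 3] - pred[..., 1]).clamp(min=eps)
    wt, ht = target[..., 2] - target[..., 0], (target[..., 3] - target[..., 1]).clamp(min=eps)
    v = (4 / math.pi ** 2) * (torch.atan(wt / ht) - torch.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - (iou - rho2 / c2 - alpha * v)

def composite_loss(pred_boxes, true_boxes, pred_logits, true_classes):
    """Box regression (CIoU) plus target-box classification loss, equally weighted."""
    return ciou_loss(pred_boxes, true_boxes).mean() + F.cross_entropy(pred_logits, true_classes)
```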
Accordingly, after step S10 and before step S2, the method further includes: and S20, training the food material segmentation model.
Optionally, step S20 specifically includes:
S201, establishing a food material segmentation tag set: mapping each food material category to its corresponding food material area in the diet picture and marking the pixels in that area with the category, so that each diet picture yields a pixel-level labeled template picture; these template pictures form the food material segmentation tag set;
S202, inputting the training data output by the diet target detection model as input data, and the food material segmentation tag set obtained in step S201 as training labels, into the food material segmentation model for training, treating each pixel of the diet picture as a separate classification operation, and storing the trained food material segmentation model.
When the food material segmentation tag set is established, the classification granularity is adjusted to the characteristics of diabetes diet management. For example, lettuce, leaf lettuce, and pak choi are different vegetables, but for diabetes management they are all green vegetables beneficial to the patient's health, and diabetes dietary guidance treats them identically (patients are encouraged to eat more of them); they therefore need not be subdivided and can be merged into "green vegetables". As another example, fish cooked lightly is a low-fat, high-quality-protein white meat and can be recommended to diabetic patients without gout, but fried fish contains much more fat and is not recommended for diabetic patients; fish should therefore be split by cooking method into fried fish, lightly cooked fish, and so on. Other food materials are likewise classified according to diabetes dietary management standards.
Pixel-level labeling is adopted. Diet pictures are loaded into an image annotation system, which displays them in sequence. For each food material in a picture, the annotator selects the corresponding category and paints over that food material's area with the mouse; once every food material area has been painted, the annotator clicks submit, and the system generates a pixel-level template picture from the painted regions, in which each pixel records which category of food material occupies that position in the original diet picture.
For example, as shown in fig. 3, the diet picture (the left photograph in fig. 3) contains "green vegetables", "soymilk", "boiled eggs", etc. The annotator selects "green vegetables", "soymilk", and "boiled eggs" in turn and paints the corresponding food material areas. After painting is completed, the system automatically generates a pixel-level template picture in which the pixels of each region denote the corresponding food material category. The right picture in fig. 3 shows the result after the different food materials have been painted; removing the background photograph and keeping the overlaid mask yields the pixel-level labeled template picture.
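A small sketch of how painted annotation layers could be turned into such a pixel-level template picture is given below; the class IDs and the boolean-mask input format are assumptions, since the patent only specifies that each pixel is labeled with its food material category.

```python
import numpy as np

# Assumed category-to-ID mapping; 0 is reserved for background.
FOOD_CLASSES = {"green_vegetables": 1, "soymilk": 2, "boiled_egg": 3}

def build_template(painted_layers: dict, height: int, width: int) -> np.ndarray:
    """painted_layers maps a category name to the boolean mask painted by the annotator.

    Returns a (height, width) template where each pixel holds its class ID.
    Later layers overwrite earlier ones where paints overlap.
    """
    template = np.zeros((height, width), dtype=np.uint8)  # 0 = background
    for name, mask in painted_layers.items():
        template[mask] = FOOD_CLASSES[name]
    return template
```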
Unlike existing food recognition systems, which only identify dish names and infer the food material composition from them, the diet segmentation scheme of the present invention performs finer-grained food material classification and segmentation directly on the uploaded diet picture, yielding the nutrient proportion of each food material. The food material segmentation scheme is therefore more detailed and accurate, and can provide users with more precise diet advice.
It should be noted that the image processing software and the image annotation system used in this step are well-known technologies in the field, and related software used in this step may be provided by a third party, or may be open source software obtained for public and free, and are not described herein again.
In the food material segmentation model training, the diet target maps output by the diet target detection model and the corresponding segmentation template label maps are fed into the segmentation neural network in small batches. The trained food material segmentation model classifies every pixel of an input diet picture, i.e., it segments the pixel region occupied by each food material, realizing food material segmentation of the diet so that the target patient can know what each meal contains. Both the diet target detection model and the food material segmentation model use neural-network deep learning; their principles and applications belong to the prior art and are not repeated here.
In this embodiment, the diet target detection model and the food material segmentation model are sequentially connected in series to the system, and the diet picture of the target patient sequentially passes through the diet target detection model and the food material segmentation model, and finally a plurality of pixel-level food material segmentation maps are output, so as to complete diet segmentation.
Similarly, the output result of the food material segmentation model is compared with the food material segmentation label data obtained in step S201 using a cross entropy loss function; the loss is calculated and the food material segmentation model is iteratively optimized according to it until the loss value falls into a stable region, at which point training stops and the trained model is stored, reducing the error of the food material segmentation model.
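The following is a minimal sketch of one such training step with a per-pixel cross-entropy loss (again assuming PyTorch; the model, optimizer, and tensor shapes are illustrative).

```python
import torch.nn.functional as F

def train_step(model, optimizer, images, templates):
    """One optimization step for the food material segmentation model.

    `model(images)` is assumed to output per-pixel class logits of shape
    (batch, num_classes, H, W); `templates` are the pixel-level label maps
    of shape (batch, H, W) built from the segmentation tag set.
    """
    optimizer.zero_grad()
    logits = model(images)
    # cross-entropy applied independently to every pixel: segmentation is
    # treated as a per-pixel classification operation
    loss = F.cross_entropy(logits, templates)
    loss.backward()
    optimizer.step()
    return loss.item()
```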
In this embodiment, the diet target detection model and the food material segmentation model are connected in series on a GPU server, and the target patient is a diabetic. A diet picture uploaded by the patient first passes through the diet target detection model, which locates the diet targets (single food items and tableware entities containing mixed food) and crops them from the original picture with bounding boxes to obtain a plurality of diet target maps. Each diet target map is then fed into the food material segmentation model, which further segments every food material at fine granularity, so that what the diabetic eats at each meal is known and accurate diet advice can be given. In other embodiments, the target patient may be a patient with another chronic disease.
In this embodiment, the diet target detection model provides the ability to recognize multiple dishes: a diet picture containing several dishes is split into a plurality of diet target maps after passing through the model, so the user need not photograph each dish separately, which simplifies user operation and improves compliance. Feeding each diet target map into the food material segmentation model then reveals which food materials it contains, enabling targeted diet advice.
Before step S4, the method further includes:
and S40, performing perspective restoration on the food material segmentation maps.
Generally, when a target patient photographs a meal, it is hard for them to hold the phone level or to consciously shoot a top view of the dishes from directly above, so the diet targets in the picture show a near-large, far-small perspective effect, which introduces errors when estimating food quantities from the food material segmentation maps.
Specifically, step S40 includes:
S401, actually shooting videos of each piece of tableware under the perspective effect from various angles and distances with a camera, and splitting the videos into picture frames to obtain picture data of each piece of tableware under different perspectives;
S402, calculating the perspective scaling factor $y^{(i)}$ by analyzing the differing pixel sizes of the same tableware at different positions in pictures of the same scene;
S403, optimizing the following formula by a gradient descent method:
$$J(W,b)=\frac{1}{2m}\sum_{i=1}^{m}\left(W^{T}x^{(i)}+b-y^{(i)}\right)^{2}$$
wherein i is the data iteration index; T denotes transposition, converting a column vector into a row vector; $x^{(i)}$ is the input feature vector <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> of the ith sample; m is the number of samples participating in one calculation of the loss value; and W, b are the parameters to be optimized;
after multiple iterations, obtaining and storing the parameters W, b that minimize the formula;
S404, using the formula $y = W^{T}x + b$ to estimate the scaling coefficient y of the tableware in the diet picture caused by the perspective effect;
wherein x is the <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> input feature vector;
and S405, dividing the area of each far food material region in the food material segmentation map by the scaling coefficient y to obtain the restored size of each food material relative to the nearest food material.
Steps S401 to S403 train the restoration model, and the values of the parameters W and b are obtained through iterative training; steps S404 to S405 perform the perspective restoration work online. Substituting the trained parameters W and b into the formula $y = W^{T}x + b$ estimates the perspective scaling coefficient y of the corresponding tableware in the diet picture, so that the food materials in the picture can be perspective-restored and compared against a single reference point, which improves the accuracy of the diet analysis and provides a better factual basis for later diet advice.
Specifically, in this embodiment the camera is a mobile phone. The phone's level meter records its current inclination angle; the phone is fixed and video recording is started while tableware relevant to diet pictures (a standard dinner plate, a standard bowl, a lunch box, etc.) is moved from near to far. The video is then split into frames, giving picture data of the same tableware shot by the phone at different inclination angles and distances. Because one video contains a great many frames, a large amount of real picture data showing the same tableware at different sizes is obtained, which makes it convenient to derive the perspective scaling factor $y^{(i)}$ by comparison and calculation.
It should be noted that, although the phone's inclination angle can be read from its level meter during shooting, most mobile phones have no distance measurement function and cannot obtain distance data; moreover, the angle between the phone and the horizon is not equal to the angle between the camera's line of sight and the tableware, so with distance data missing, the geometric distortion caused by the angle cannot be computed directly by geometric methods.
Based on this, the present embodiment redefines the distance unit, using the pixel width of a tableware entity as the unit of measurement. When the diet picture contains only one tableware entity, no correction is performed. When it contains several, the tableware entity closest to the photographer (the lowest one in the picture) is defined as the reference point, and the distance of any other tableware entity from the reference point, expressed as a multiple of its own width, is taken as that tableware's distance. This definition depends on no physical measuring tool and uses no absolute values; only one reference point is defined and distances are relative quantities, which keeps computation simple while reducing measurement error.
Meanwhile, one further point must be considered: if the tableware sizes differ, two pieces placed on the same horizontal line yield different distance values when the tableware pixel width is used as the distance unit, with the small piece appearing farther than the large one; the tableware's own width must therefore be taken into account to improve data reliability. Because the relationships among these features are difficult to define by hand, the functional relationship is found automatically through iterative training and numerical optimization with the gradient descent formula (see step S403). The tableware feature vector is finally constructed as <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle, scaling factor due to perspective>, where the first three are input data and the scaling factor due to perspective is the prediction target.
After multiple iterations of the gradient descent method, the parameters W and b that minimize the gradient descent function are obtained, where W is a weight vector containing three elements $(w_1, w_2, w_3)$, the weight components corresponding to <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle>; the W and b resulting from the training described above are therefore essentially the vector $W = (w_1, w_2, w_3)^{T}$ and a scalar value b.
Further, the parameters W and b are substituted into the formula $y = W^{T}x + b$, where x is the <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> input feature vector, all known quantities: the shooting angle is read from the phone's level meter, and the tableware pixel width and the distance to the reference point are computed from the picture's pixels. In this embodiment, writing angle for the mobile phone shooting angle, width for the tableware pixel width, and L for the distance between the tableware and the reference point, a general calculation formula is obtained:
$$y = w_1 \cdot \mathrm{width} + w_2 \cdot L + w_3 \cdot \mathrm{angle} + b$$
Of course, to be more concrete, setting width = 200, angle = 30, and L = 1 and substituting into the formula gives
$$y = 200\,w_1 + w_2 + 30\,w_3 + b$$
thereby calculating the scaling factor y caused by perspective.
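The whole regression can be sketched in a few lines of Python; the learning rate, iteration count, and function names below are assumptions, and only the loss $J(W,b)$ and the prediction $y = W^{T}x + b$ come from the text.

```python
import numpy as np

def fit_perspective_model(X: np.ndarray, y: np.ndarray,
                          lr: float = 1e-4, iters: int = 10000):
    """Minimize J(W,b) = 1/(2m) * sum((W^T x_i + b - y_i)^2) by gradient descent.

    X has one row per sample: <tableware pixel width, distance from the
    reference point in tableware-widths, phone shooting angle>.
    """
    m, n = X.shape
    W, b = np.zeros(n), 0.0
    for _ in range(iters):
        err = X @ W + b - y            # residuals W^T x + b - y
        W -= lr * (X.T @ err) / m      # gradient dJ/dW
        b -= lr * err.mean()           # gradient dJ/db
    return W, b

def restore_area(pixel_area: float, x: np.ndarray, W: np.ndarray, b: float) -> float:
    """S404-S405: estimate the scaling factor and divide the far region's area by it."""
    y_hat = float(W @ x + b)
    return pixel_area / y_hat

# Example with the values used in the text: width = 200, L = 1, angle = 30
# restored = restore_area(area_pixels, np.array([200.0, 1.0, 30.0]), W, b)
```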
Finally, in step S4 the area of each far food material region in the segmentation map (i.e., the pixel amount occupied by that food material) is divided by the scaling factor y to obtain its perspective-restored size (with the food material region nearest the camera as the reference point). Of course, in other embodiments the area occupied by each piece of tableware or each single food item in the diet picture may be divided by the scaling coefficient y, perspective-restoring every food item, after which the pixel amount of each food material region inside the tableware, or of each single food item, can be read to obtain its restored size.
It can be understood that in other embodiments the tableware can be shot at different angles and distances with other devices such as a camera, and the values of width, angle, and L can be changed according to actual needs; they are not limited to this embodiment.
In addition, after the perspective restoration of step S40, the positions of all dishes in the diet picture are obtained, along with information such as each dish's food materials and cooking method; for example, scrambled eggs with tomato are detected in the diet picture, from which the food materials (tomatoes and eggs) and the cooking method (stir-frying) are further obtained.
In step S4, the ratio of the nutrient components contained in all the food materials is calculated based on the food material segmentation maps.
Further, step S4 includes:
S41, merging identical food materials across the dishes in all food material segmentation maps to obtain the pixel-amount ratio of the different food materials;
and S42, analyzing the nutrient content of each food material, and calculating the ratio of the nutrient components of all the food materials in the diet picture.
Specifically, identical food materials are merged in the perspective-restored segmentation maps (chiefly merging the same food material appearing in different dishes). For example, if the egg in scrambled eggs with tomato occupies 100 pixels and the egg in scrambled eggs with wood-ear fungus occupies 200 pixels, the two are merged into 300 egg pixels; other food materials are merged likewise, and the pixel amount of each merged food material is read to obtain the pixel-amount ratio of the food materials. The ratio of the nutrient components contained in each food material is then calculated, and whether the nutrient proportions eaten by the target patient meet the requirements is judged from this ratio, so that healthier diet advice can be offered to help the target patient recover.
It should be noted that the ratio of the nutrient components is used instead of absolute values because the apparent amount of food in a photograph is affected by the distance between the camera and the meal, and the mobile phone has no distance measurement function, so absolute values cannot be used; evaluating the nutrient ratio gives a more reliable result.
For ease of understanding, the present embodiment is illustrated by an example. Suppose a diabetic patient photographs three dishes, which perspective restoration has unified to the same reference size. Dish 1 is stir-fried cucumber with egg, in which the cucumber occupies 1000 pixels and the egg 800 pixels; dish 2 is scrambled egg with tomato, in which the tomato occupies 1500 pixels and the egg 700 pixels; dish 3 is fried chicken wings, occupying 2000 pixels. After identical food materials are merged, there are 1000 cucumber pixels, 1500 tomato pixels, 1500 egg pixels, and 2000 fried chicken wing pixels, so cucumber : tomato : egg : fried chicken wing = 1 : 1.5 : 1.5 : 2.
The nutrition information of each food material is then looked up in the nutrition knowledge base: every 100 g of cucumber contains 2.5 g of carbohydrate and 0.2 g of fat; every 100 g of tomato contains 4 g of carbohydrate and 0.2 g of fat; every 100 g of egg contains 1.3 g of carbohydrate and 11.1 g of fat; and every 100 g of fried chicken wing contains 12.8 g of carbohydrate and 23.6 g of fat. With cucumber : tomato : egg : fried chicken wing = 1 : 1.5 : 1.5 : 2, the carbohydrate and fat are calculated as follows:
carbohydrate = 2.5×1 + 4×1.5 + 1.3×1.5 + 12.8×2 = 36.05 (g)
fat = 0.2×1 + 0.2×1.5 + 11.1×1.5 + 23.6×2 = 64.35 (g)
The nutrient component ratio is therefore carbohydrate : fat = 1 : 1.79. To avoid redundancy, only the two nutrients carbohydrate and fat are listed here; this does not mean the system covers only these two, and the remaining nutrient components are calculated in the same way.
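This worked example can be reproduced with a short script; the numbers are exactly those quoted above.

```python
# Pixel-amount ratios after merging identical food materials (from the example above)
pixel_ratio = {"cucumber": 1.0, "tomato": 1.5, "egg": 1.5, "fried_chicken_wing": 2.0}

# Per 100 g: (carbohydrate g, fat g), as quoted from the nutrition knowledge base
nutrients = {
    "cucumber": (2.5, 0.2),
    "tomato": (4.0, 0.2),
    "egg": (1.3, 11.1),
    "fried_chicken_wing": (12.8, 23.6),
}

carb = sum(r * nutrients[f][0] for f, r in pixel_ratio.items())
fat = sum(r * nutrients[f][1] for f, r in pixel_ratio.items())
print(f"carbohydrate = {carb:.2f} g, fat = {fat:.2f} g, carb:fat = 1:{fat / carb:.3f}")
# carbohydrate = 36.05 g, fat = 64.35 g, carb:fat = 1:1.785 (the 1:1.79 above, rounded)
```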
Meanwhile, the embodiment also discloses a diet nutrition management method for chronic disease patients, which includes the diet segmentation method described above; based on the nutrient component ratios obtained by the segmentation method, diet nutrition management can be better performed for chronic disease patients to help them eat healthily.
As shown in fig. 4, the method further includes, before step S4: S30, constructing a nutrition knowledge base, a chronic disease diet management knowledge base and a nutrition advice phrase library.
Specifically, the nutrition knowledge base is constructed by reference to the China food composition tables issued by the nutrition and health institute of the Chinese Center for Disease Control and Prevention; it essentially covers the common food material categories and records the nutritional ingredients contained in each kind of food material.
The chronic disease diet management knowledge base contains the diet control standards of the various chronic diseases, e.g., those for diabetes and hypertension, drawn mainly from authoritative prevention and treatment guidelines and publications for each chronic disease and from the advice of authoritative experts and dieticians in relevant departments.
The nutrition advice phrase library includes warning phrases for the various behaviors that violate the diet control standards of chronic diseases (in this embodiment, specifically diabetes), drawn mainly from the advice of experts and dieticians in relevant departments. Specifically, a large number of medical workers can be recruited to comment on the out-of-hospital diets of diabetic patients, so that diet comments from professional medical workers are collected. For example, when a diabetic patient eats too much staple food, the system prompts: "You have eaten too much staple food this meal; diabetic patients are advised to eat smaller, more frequent meals and keep the staple food of each meal to about the size of one fist."
Further, the collected comments are preprocessed: dirty data (i.e., useless information) is deleted and high-quality natural-language phrases are screened out. Text classification is then applied to sort the screened phrases into several warning categories, including diet nutrition proportion suggestions, special dish suggestions, food material type suggestions, cooking method suggestions, and so on. For example, "fat/staple food intake is too high; reduce intake of high-fat food" falls under diet nutrition proportion suggestions; "do not eat fat meat, chicken skin, duck skin, or other high-fat foods" falls under special dish suggestions; "eat some green vegetables this meal" falls under food material type suggestions; and "do not eat deep-fried food" or "replace frying with boiling or steaming" falls under cooking method suggestions; further examples are omitted. To guarantee phrase quality and classification accuracy, this embodiment additionally has the classification results manually reviewed, and they are finally stored in the database as the nutrition advice phrase library.
By building the nutrition advice phrase library, the invention stores a large number of advice phrases that chronic disease patients can accept intuitively; compared with purely numerical prompts such as nutrient index readouts, these phrases are easier for target patients to understand and accept.
Next, after step S4, the method further includes:
and S5, providing personalized diet suggestions for the target patient.
In this embodiment, the calculated nutrient component ratios tell the target patient the proportions of the nutrients consumed in a meal, and a healthy, reasonable diet suggestion is proposed for the target patient according to these ratios and the dietary regulations relevant to the patient.
Step S5 specifically includes: giving personalized diet suggestions by combining the physiological sign data and the illness information data of the target patient;
the personalized diet suggestions comprise diet nutrition proportion suggestions, special dish suggestions, food material type suggestions and cooking method suggestions.
Specifically, personalized diet suggestions are made with reference to the chronic disease diet management knowledge base and the nutrition advice phrase library: the nutrient intake of the diabetic patient is compared against the chronic disease diet management knowledge base, and the personalized suggestion is composed using the nutrition advice phrase library.
Further, the personalized diet suggestions combine the physiological sign data and the illness information data of the target patient (in this embodiment, a diabetic patient). The physiological sign data include gender, age, height, weight, BMI, labor intensity, etc.; the illness information data include disease type, duration of illness, medication, etc.
The personalized diet suggestions comprise diet nutrition proportion suggestions, special dish suggestions, food material type suggestions, and cooking method suggestions. A diet nutrition proportion suggestion addresses an unreasonable diet proportion, e.g., too large or too small a share of staple food or vegetables, and advises adjusting that intake. A special dish suggestion is issued when the target patient eats food contraindicated for their chronic disease; e.g., if the diet picture uploaded by a diabetic patient contains fat meat or chicken skin, the patient is advised not to eat it. A food material type suggestion addresses an unreasonable variety of foods; e.g., when no green vegetables appear in the diet picture, a proper amount of green vegetables is suggested and the user is reminded to eat a balanced diet. A cooking method suggestion is issued when high-fat cooking such as deep-fried crispy pork or fried chicken wings appears in the diet picture, advising the target patient not to eat them so as to reduce fat intake. Finally, combined with the nutritional standards in the chronic disease diet management knowledge base, reasonable suggestions for the target patient's healthy diet are made from every aspect, which is more comprehensive and specific and aids the user's recovery.
Specifically, the calculation above yielded carbohydrate : fat = 1 : 1.79. Suppose the patient's information indicates type 2 diabetes with an above-standard BMI, above-standard glycated hemoglobin, and above-standard postprandial blood glucose at the previous meal: the patient is then an obese diabetic. According to the type 2 diabetes prevention and treatment guideline in the chronic disease diet management knowledge base, a diabetic patient's daily carbohydrate share should be 45-60% and fat intake share 25-30%; the system asks the patient to eat each meal in these proportions as far as possible, with intensified fat reduction for obese patients. The calculation above shows that the fat intake share is clearly too high, and the main food responsible is the fried chicken wings. Combined with the nutrition advice phrase library, the following suggestions should therefore be given:
for the diet nutrition proportion suggestion: "The fat intake of this meal is too high; reduce the intake of high-fat food";
for the special dish suggestion: this meal contains no food contraindicated for diabetes, so no suggestion is made; if, however, the photographed chicken wings had included chicken skin, the patient should be told not to eat the skin, because its fat content is too high;
for the food material type suggestion: the meal contains green vegetables and protein foods but no staple food, so the system should prompt the patient to eat some staple food to avoid hypoglycemia after medication, while also reminding the patient to keep the staple food amount moderate; if measurement is inconvenient, the volume of one fist is suitable;
for the cooking method suggestion: the meal uses frying, which greatly increases oil intake, so the patient is prompted not to eat the fried chicken wings.
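As a purely illustrative sketch, a rule check of this kind might look as follows. The 45-60% and 25-30% ranges come from the text, but the guideline expresses them as shares of daily energy intake, while this simplified check uses only the two nutrient masses discussed above; the thresholds, function name, and phrase strings are therefore assumptions.

```python
def diet_ratio_advice(carb_g: float, fat_g: float) -> list:
    """Simplified proportion check over the two nutrients from the worked example."""
    total = carb_g + fat_g  # simplification: only carbohydrate and fat are considered
    advice = []
    if fat_g / total > 0.30:
        advice.append("The fat intake of this meal is too high; reduce high-fat food.")
    if not 0.45 <= carb_g / total <= 0.60:
        advice.append("The carbohydrate proportion is outside the 45-60% range.")
    return advice

print(diet_ratio_advice(36.05, 64.35))
# -> both warnings fire, consistent with the fried-chicken-wing example above
```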
Of course, the above case is given only for a better understanding of the invention; in other embodiments the diet advice provided may differ for different types of patients or for different diet pictures, as the case may be.
In summary, the embodiment of the present invention provides a diet segmentation method and a diet nutrition management method for chronic disease patients, with the following advantages:
(1) multiple dishes in a diet picture can be identified and located, supporting the user in photographing several dishes at once and avoiding separate shots, which simplifies user operation and thereby improves user experience and compliance;
(2) perspective restoration of the food material segmentation maps mitigates, to a certain extent, the errors caused by the perspective effect, so that more accurate and healthy diet advice can be provided to patients.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (8)

1. A diet segmentation method, comprising the following steps:
S1, acquiring a diet picture of the target patient;
S10, training a diet target detection model, and iteratively optimizing the diet target detection model with a composite loss function to obtain an optimal diet target detection model;
S20, training a food material segmentation model;
S2, inputting the diet picture into the diet target detection model for target recognition, and outputting a plurality of diet target maps;
S3, sequentially inputting the diet target maps into the food material segmentation model for food material segmentation, and outputting a plurality of pixel-level food material segmentation maps;
S40, performing perspective restoration on each food material segmentation map;
S401, actually shooting videos of each piece of tableware under the perspective effect from various angles and distances with a camera, and splitting the videos into picture frames to obtain picture data of each piece of tableware under different perspectives;
S402, calculating a perspective zoom coefficient $y^{(i)}$ by analyzing the differing pixel sizes of the same tableware at different positions in pictures of the same scene;
S403, optimizing the following formula by a gradient descent method:
$$J(W,b)=\frac{1}{2m}\sum_{i=1}^{m}\left(W^{T}x^{(i)}+b-y^{(i)}\right)^{2}$$
wherein i is the data iteration index; T denotes transposition, converting a column vector into a row vector; $x^{(i)}$ is the input feature vector <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> of the ith sample; m is the number of samples participating in one calculation of the loss value; and W, b are the parameters to be optimized;
after multiple iterations, obtaining and storing the parameters W, b that minimize the formula;
S404, using the formula $y = W^{T}x + b$ to estimate the scaling coefficient y of the tableware in the diet picture caused by the perspective effect;
wherein x is the <tableware pixel width, distance between the tableware and the reference point, mobile phone shooting angle> input feature vector;
S405, dividing the area of each far food material region in the food material segmentation map by the scaling coefficient y to obtain the restored size of each food material relative to the nearest food material;
S4, calculating the ratio of the nutrient components contained in all the food materials based on the food material segmentation maps;
S41, merging identical food materials across the dishes in all food material segmentation maps to obtain the pixel-amount ratio of the different food materials;
and S42, analyzing the nutrient content of each food material, and calculating the ratio of the nutrient components of all the food materials in the diet picture.
2. The diet segmentation method according to claim 1, wherein the step S10 specifically includes:
S101, acquiring diet pictures of target patients for training;
S102, establishing a diet target tag set: inputting the diet picture into image processing software, drawing a bounding box around each single food item and each tableware entity in the diet picture, and outputting the result as the diet target tag set;
S103, sequentially inputting the diet pictures and the corresponding diet target tag sets into the diet target detection model for training, and storing the trained diet target detection model.
3. The diet segmentation method according to claim 2, wherein the output result of the diet target detection model is compared with the diet target label data obtained in step S102 using a composite loss function, the loss is calculated, and the diet target detection model is iteratively optimized according to the loss until the loss value falls into a stable region.
4. The diet segmentation method according to claim 1, wherein the step S20 specifically includes:
S201, establishing a food material segmentation tag set: mapping each food material category to its corresponding food material area in the diet picture and labeling every pixel inside that area with the category, so that each diet picture yields a pixel-level labeled template picture, the template pictures together forming the food material segmentation tag set;
S202, taking the training data output by the diet target detection model as input data and the food material segmentation tag set obtained in step S201 as training labels, training the food material segmentation model by treating every pixel point of a diet picture as one classification operation, and storing the trained food material segmentation model.
5. The diet segmentation method according to claim 4, characterized in that the output result of the food material segmentation model is compared with the food material segmentation label data obtained in step S201 by using a cross entropy loss function, the loss is calculated, and the food material segmentation model is iteratively optimized according to the loss until the loss value settles into a stable region (a sketch of the per-pixel loss appears after this claim).
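A sketch of the per-pixel cross-entropy comparison of claims 4-5, assuming PyTorch; the tensor shapes and the random stand-ins for the network output and the label template are illustrative only:

```python
import torch
import torch.nn as nn

# Stand-ins for one training step: a batch of 2 diet target crops, 4 food
# material classes (class 0 = background), 64x64 pixels. Any segmentation
# network emitting per-pixel class logits could produce `logits`.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)   # model output
labels = torch.randint(0, 4, (2, 64, 64))                # pixel-level tag map (S201)

# Every pixel is treated as one classification operation (S202) and compared
# with the label template through cross entropy (claim 5).
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()   # the gradient drives the iterative optimization of claim 5
```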
6. A method for dietary nutrition management of patients with chronic illnesses, comprising the diet segmentation method according to any one of claims 1 to 5, and further comprising, after step S4:
S5, providing personalized diet suggestions for the target patient.
7. The method for dietary nutrition management of patients with chronic illnesses according to claim 6, wherein step S5 specifically comprises: generating the personalized diet suggestions by combining the physiological sign data and the disease information data of the target patient;
the personalized diet suggestions comprise diet nutrition proportion suggestions, special dish suggestions, food material type suggestions and cooking mode suggestions.
8. The method for dietary nutrition management of patients with chronic illnesses according to claim 6, further comprising, prior to step S4:
S30, constructing a nutrition knowledge base, a chronic disease diet management knowledge base and a nutrition suggestion technology base.
CN202210266348.XA 2022-03-18 2022-03-18 Diet segmentation method and diet nutrition management method for chronic disease patients Active CN114359299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210266348.XA CN114359299B (en) 2022-03-18 2022-03-18 Diet segmentation method and diet nutrition management method for chronic disease patients

Publications (2)

Publication Number Publication Date
CN114359299A CN114359299A (en) 2022-04-15
CN114359299B (en) 2022-09-30

Family

ID=81095211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210266348.XA Active CN114359299B (en) 2022-03-18 2022-03-18 Diet segmentation method and diet nutrition management method for chronic disease patients

Country Status (1)

Country Link
CN (1) CN114359299B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171848A (en) * 2022-07-25 2022-10-11 重庆邮电大学 Intelligent diet recommendation system based on food image segmentation and uric acid index

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648152B * 2016-12-14 2019-04-05 吉林大学 Three-dimensional pen-based interaction interface zooming method based on rotation angle and distance
CN111652044A (en) * 2020-04-16 2020-09-11 复旦大学附属儿科医院 Dietary nutrition analysis method based on convolutional neural network target detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201524187A (en) * 2013-12-02 2015-06-16 Nat Taichung University Science & Technology System for calibrating three dimension perspective image and method thereof
CN108140328A * 2015-09-09 2018-06-08 菲特利公司 System and method for nutritional analysis using food image recognition
WO2019110542A1 (en) * 2017-12-04 2019-06-13 Koninklijke Philips N.V. Optimizing micro-nutrients and macro-nutrients of a diet based on conditions of the patient
CN108364675A * 2018-01-23 2018-08-03 明纳信息技术深圳有限公司 Method for identifying food weight and nutrient content based on image recognition
CN108846314A * 2018-05-08 2018-11-20 天津大学 Food material identification system and identification method based on deep learning
CN113724837A (en) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 Method and device for generating diet schedule of chronic patient and terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information; Robin Ruede et al.; arXiv:2011.01082v1; 2020-11-02; entire document *

Also Published As

Publication number Publication date
CN114359299A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
Lu et al. goFOOD™: an artificial intelligence system for dietary assessment
US20120179665A1 (en) Health monitoring system
US20160035248A1 (en) Providing Food-Portion Recommendations to Facilitate Dieting
US20130273509A1 (en) Method of Monitoring Nutritional Intake by Image Processing
CN104778374A (en) Automatic dietary estimation device based on image processing and recognizing method
Slimani et al. Methods to determine dietary intake
Pouladzadeh et al. You are what you eat: So measure what you eat!
Tay et al. Current developments in digital quantitative volume estimation for the optimisation of dietary assessment
CN106709525A (en) Method for measuring food nutritional component by means of camera
JP3143571U (en) Nutritional diagnosis system
US20190272774A1 (en) Fitness and Educational Game and Method of Playing the Same
TW201901598A (en) Dietary information suggestion system and its dietary information suggestion method
CN114359299B (en) Diet segmentation method and diet nutrition management method for chronic disease patients
CN114360690B (en) Method and system for managing diet nutrition of chronic disease patient
CN104765980A (en) Intelligent diet assessment method based on cloud computing
Chiang et al. Food calorie and nutrition analysis system based on Mask R-CNN
CN116453652A (en) Diabetes patient food intake control method, device, computer equipment and storage medium thereof
EP2787459A1 (en) Method of monitoring nutritional intake by image processing
CN113035317A (en) User portrait generation method and device, storage medium and electronic equipment
JP2022530263A (en) Food measurement methods, equipment and programs
KR102473282B1 (en) System and method for providing nutritional information based on image analysis using artificial intelligence
CN114388102A (en) Diet recommendation method and device and electronic equipment
JP2013134763A (en) Food material nutritive value calculation server, food material nutritive value calculation system, and food material nutritive value calculation program
JP4496624B2 (en) Meal management support system and meal management support method
US20230274812A1 (en) Methods and systems for calculating an edible score in a display interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant