CN114581265A - System and method for analyzing eating preference of diner - Google Patents


Publication number: CN114581265A (application CN202210220947.8A); granted as CN114581265B
Authority: CN (China)
Prior art keywords: food, eating, characteristic information, person, diner
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 蓝海洋, 王永杰
Original and current assignee: Beijing Nuwa Butian Technology Information Technology Co., Ltd.

Classifications

    • G06Q 50/12 — Services for hotels or restaurants (under G06Q 50/00, systems or methods specially adapted for specific business sectors, e.g. utilities or tourism; G06Q 50/10, services)
    • G06F 18/2411 — Pattern recognition; classification based on proximity to a decision surface, e.g. support vector machines
    • G06F 18/2413 — Pattern recognition; classification based on distances to training or reference patterns
    • G06F 18/24323 — Pattern recognition; tree-organised classifiers
    • G06K 17/0029 — Co-operative working between data-reading equipments; arrangements for transferring data to distant stations, specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G06N 3/045 — Neural networks; architectures comprising combinations of networks
    • G06N 3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention relates to the field of computer technology and discloses a system and method for analyzing the eating preferences of diners. Person information acquisition terminals and food information acquisition terminals are installed at both the meal-collection point and the tray-return point of a canteen, so that a background server can identify each diner before eating and the amount of food taken, identify the same diner after eating and the amount of food left over, and finally compute the eating preference of a single diner, or of many diners, for each food from the historical amounts taken and left over. Leftovers therefore no longer need to be counted manually from kitchen waste, which saves considerable time and labor, simplifies the selection of canteen dishes, yields preference scores for each food directly, effectively reduces the error rate, improves the accuracy of the selection results, and facilitates practical application and popularization.

Description

System and method for analyzing eating preference of diner
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a system and a method for analyzing eating preferences of diners.
Background
At present, many primary and secondary school students eat at school, and in most cases the meals are arranged centrally by the school. Although schools provide nutritionally balanced lunches that follow the catering requirements for nutritious meals, in practice leftovers are very common for a variety of reasons. This leads to unbalanced nutrient intake, which can adversely affect students' growth, development, and health, and the leftovers themselves represent a great waste of food and resources.
In traditional canteens of primary and secondary schools and universities, the variety of dishes is limited and their quality and taste vary. To guarantee dish quality and develop popular new dishes, an effective set of dish-evaluation measures is needed, for example selecting the most popular foods from indicators such as sales volume, ratings, and repeat-order rate; restaurant operators generally favor this kind of selection. A canteen, however, serves a unified menu in which every diner receives the same dishes, so the traditional evaluation method is to manually count the leftovers in the kitchen waste and infer the most popular foods from the statistics. Swill-recycling areas typically have a poor environment and a bad smell, and manual counting is time-consuming, labor-intensive, and error-prone, so the evaluation work is difficult to carry out and the accuracy of its results is hard to improve.
Disclosure of Invention
The invention provides a system and method for analyzing the eating preferences of diners, aiming to solve the problems that, in a canteen setting, eating-preference evaluation is difficult to carry out and the accuracy of the evaluation results is limited.
In a first aspect, the invention provides a diner eating-preference analysis system comprising a first person information acquisition terminal, a second person information acquisition terminal, a first food information acquisition terminal, a second food information acquisition terminal, and a background server. The first person information acquisition terminal and the first food information acquisition terminal are installed together at the meal-collection point, the second person information acquisition terminal and the second food information acquisition terminal are installed together at the tray-return point, and all four terminals are communicatively connected to the background server;
the first person information acquisition terminal is used for acquiring first person characteristic information of a pre-meal diner appearing at the meal-collection point and transmitting it to the background server;
the second person information acquisition terminal is used for acquiring second person characteristic information of a post-meal diner appearing at the tray-return point and transmitting it to the background server;
the first food information acquisition terminal is used for acquiring first food characteristic information of pre-meal food appearing at the meal-collection point and transmitting it to the background server;
the second food information acquisition terminal is used for acquiring second food characteristic information of post-meal food appearing at the tray-return point and transmitting it to the background server;
the background server is used for identifying the pre-meal diner from the first person characteristic information, determining the amount of pre-meal food taken by that diner from the first food characteristic information acquired synchronously with the first person characteristic information, identifying the post-meal diner from the second person characteristic information, determining the amount of post-meal food left by that diner from the second food characteristic information acquired synchronously with the second person characteristic information, and computing the eating preference of a single diner, or of many diners, for a given food from the historical amounts of that food taken and left over.
The invention thus provides a scheme for automatically analyzing diners' preferences for canteen food: a first person information acquisition terminal and a first food information acquisition terminal are installed at the meal-collection point, and a second person information acquisition terminal and a second food information acquisition terminal at the tray-return point, so that the background server can determine each pre-meal diner and the amount of food taken from the information acquired at the meal-collection point, determine each post-meal diner and the amount of food left over from the information acquired at the tray-return point, and finally compute each diner's eating preference for a given food from the historical amounts taken and left over. Leftover types in the kitchen waste therefore no longer need to be counted manually, which greatly saves time and labor, simplifies the selection of canteen dishes, directly yields preference scores for each food, effectively reduces the error rate, improves the accuracy of the selection results, and facilitates practical application and popularization.
In one possible design, the first or second person information acquisition terminal includes a camera that captures a face image of the pre-meal or post-meal diner and transmits it as the first or second person characteristic information to the background server, so that the background server can identify the pre-meal or post-meal diner by face recognition.
In one possible design, the first or second person information acquisition terminal includes a first RFID reader that reads first RFID information from the meal card held by the pre-meal or post-meal diner (the meal card has a built-in first RFID tag) and transmits it as the first or second person characteristic information to the background server, so that the background server identifies the diner corresponding to the first RFID information, from the one-to-one correspondence between diners and their pre-bound meal cards, as the pre-meal or post-meal diner.
In one possible design, the first or second person information acquisition terminal includes both a camera and a first RFID reader. The camera captures a face image of the pre-meal or post-meal diner and transmits it to the background server as one part of the first or second person characteristic information, and the first RFID reader reads first RFID information from the meal card held by that diner (the meal card has a built-in first RFID tag) and transmits it as the other part. The background server first identifies the diner by face recognition from the face image and, when face recognition fails, falls back to identifying the diner corresponding to the first RFID information from the one-to-one correspondence between diners and their pre-bound meal cards.
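The face-then-RFID fallback described above can be sketched as follows; the `card_registry` lookup table and the `recognize_face` helper are hypothetical stand-ins for the face-recognition service and the pre-bound meal-card registry, not part of the patent text:

```python
def identify_diner(face_image, rfid_info, card_registry, recognize_face):
    """Identify a diner, preferring face recognition and falling back
    to the RFID meal card when recognition fails.

    card_registry:   dict mapping RFID info -> diner id (pre-bound, one-to-one)
    recognize_face:  callable returning a diner id, or None on failure
    """
    diner_id = recognize_face(face_image)
    if diner_id is not None:
        return diner_id
    # Face recognition failed: fall back to the meal-card binding.
    return card_registry.get(rfid_info)

# Usage with stub components:
registry = {"card-001": "student-42"}
assert identify_diner(None, "card-001", registry, lambda img: None) == "student-42"
assert identify_diner(None, "card-001", registry, lambda img: "student-7") == "student-7"
```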
In one possible design, the first or second food information acquisition terminal includes a depth camera that captures food image data of the pre-meal or post-meal food and transmits it as the first or second food characteristic information to the background server, so that the background server determines, from the food characteristic information acquired synchronously with the first or second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner;
determining that amount from the food characteristic information acquired synchronously with the first or second person characteristic information comprises the following steps:
inputting the food image data acquired synchronously with the first or second person characteristic information into a pre-built food recognition model and outputting a recognition result for the pre-meal or post-meal food, wherein the food recognition model is an artificial-intelligence model built on a support vector machine, the K-nearest-neighbor method, stochastic gradient descent, multivariate linear regression, a multilayer perceptron, a decision tree, a back-propagation neural network, a convolutional neural network, or a radial basis function network;
estimating, from the food image data, the food volume corresponding to the recognition result by synthesizing a stereoscopic image of the food;
determining that food volume as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner.
In one possible design, there are multiple depth cameras positioned at different viewing angles.
In one possible design, the depth camera includes a binocular camera and/or a time-of-flight camera.
In one possible design, the depth camera employs a single time-of-flight camera in conjunction with a monocular optical camera to acquire stereoscopic image data of the food containing color information.
In one possible design, inputting the food image data acquired synchronously with the first or second person characteristic information into a pre-built food recognition model and outputting a recognition result for the pre-meal or post-meal food comprises:
acquiring the food catalog for the same day as the acquisition date of the first or second person characteristic information, the catalog listing the various foods served;
for each food in the catalog, inputting the food image data acquired synchronously with the first or second person characteristic information into the corresponding pre-built food recognition model and outputting a confidence that the pre-meal or post-meal food is that food;
determining the food with the maximum confidence as the recognition result for the pre-meal or post-meal food.
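As a sketch, the per-food confidence scoring and maximum-confidence selection above might look like the following, where `models` is a hypothetical mapping from each catalog food to a scoring function returning a confidence in [0, 1]:

```python
def recognize_food(image, catalog, models):
    """Score the image against one model per catalog food and return
    the food with the highest confidence, plus that confidence."""
    best_food, best_conf = None, -1.0
    for food in catalog:
        conf = models[food](image)  # confidence that the image shows `food`
        if conf > best_conf:
            best_food, best_conf = food, conf
    return best_food, best_conf

# Usage with stub per-food models:
catalog = ["rice", "tofu", "greens"]
models = {"rice": lambda img: 0.2, "tofu": lambda img: 0.9, "greens": lambda img: 0.4}
assert recognize_food(None, catalog, models) == ("tofu", 0.9)
```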
In one possible design, when the depth camera includes both a binocular camera and a time-of-flight camera, estimating from the food image data the food volume corresponding to the recognition result by synthesizing a stereoscopic image of the food comprises:
synthesizing a first stereoscopic image from the first image data, acquired by the binocular camera, within the food image data, and estimating from it a first volume corresponding to the recognition result;
synthesizing a second stereoscopic image from the second image data, acquired by the time-of-flight camera, within the food image data, and estimating from it a second volume corresponding to the recognition result;
calculating the food volume V corresponding to the recognition result according to the formula
V = η1*V1 + η2*V2
where V1 denotes the first volume, V2 denotes the second volume, and η1 and η2 are preset weight coefficients satisfying η1 + η2 = 1.
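A minimal sketch of this weighted fusion, with the constraint η1 + η2 = 1 checked explicitly (the default equal weights are an illustrative assumption):

```python
def fuse_volumes(v1, v2, eta1=0.5, eta2=0.5):
    """Fuse the binocular-camera and time-of-flight volume estimates
    with preset weights satisfying eta1 + eta2 == 1."""
    if abs(eta1 + eta2 - 1.0) > 1e-9:
        raise ValueError("weight coefficients must sum to 1")
    return eta1 * v1 + eta2 * v2

assert fuse_volumes(100.0, 120.0) == 110.0
assert fuse_volumes(100.0, 120.0, 0.75, 0.25) == 105.0
```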
In one possible design, the depth camera is configured to capture, by photographing the meal tray held by the pre-meal or post-meal diner, tray image data that includes food image data of at least one pre-meal or post-meal food, and to transmit the tray image data as the first or second food characteristic information to the background server, wherein the tray has a background surface of a specific color, so that the tray image data also includes tray background-color data;
determining, from the food characteristic information acquired synchronously with the first or second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner comprises the following steps:
identifying the tray background-color data in the tray image data acquired synchronously with the first or second person characteristic information;
using the identified tray background-color data as background, segmenting the tray image data to obtain food image data for each pre-meal or post-meal food;
for each pre-meal or post-meal food, determining from the corresponding food image data the amount taken by the pre-meal diner or the amount left by the post-meal diner.
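The color-keyed segmentation step could be sketched with NumPy as below; the tray color and tolerance values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def food_mask(tray_rgb, tray_color=(0, 0, 255), tol=30):
    """Return a boolean mask of pixels that are NOT the known tray
    background color, i.e. candidate food pixels.

    tray_rgb: H x W x 3 uint8 image; tray_color and tol are assumptions.
    """
    diff = np.abs(tray_rgb.astype(int) - np.array(tray_color))
    is_background = (diff <= tol).all(axis=-1)
    return ~is_background

# Usage: a 2x2 "tray" where one pixel holds food-colored content.
img = np.array([[[0, 0, 255], [0, 0, 250]],
                [[200, 180, 40], [0, 0, 255]]], dtype=np.uint8)
mask = food_mask(img)
assert mask.sum() == 1 and bool(mask[1, 0])
```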
In one possible design, the depth camera is configured to capture, by photographing the meal tray held by the pre-meal or post-meal diner, tray image data that includes food image data of at least one pre-meal or post-meal food, and to transmit the tray image data as the first or second food characteristic information to the background server;
determining, from the food characteristic information acquired synchronously with the first or second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner comprises the following steps:
performing food-pile cluster analysis, using a clustering algorithm, on the tray image data acquired synchronously with the first or second person characteristic information to identify at least one food pile;
for each of the at least one food pile, segmenting the tray image data along the pixel coordinates of that pile's boundary pixels to obtain the corresponding food-pile image data;
for each food pile, determining the corresponding food and food amount from its food-pile image data;
for each pre-meal or post-meal food, summing the food amounts of all piles belonging to that food and determining the total as the amount taken by the pre-meal diner or the amount left by the post-meal diner.
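The patent only requires some clustering algorithm for the pile-identification step; one simple stand-in (DBSCAN or k-means on pixel coordinates would serve equally well) is connected-component labeling of the food-pixel mask. A pure-Python sketch:

```python
def label_piles(mask):
    """Group food pixels (True cells in a 2-D boolean grid) into
    4-connected components; returns a list of pixel-coordinate sets,
    one per food pile."""
    h, w = len(mask), len(mask[0])
    seen, piles = set(), []
    for sr in range(h):
        for sc in range(w):
            if mask[sr][sc] and (sr, sc) not in seen:
                stack, pile = [(sr, sc)], set()
                seen.add((sr, sc))
                while stack:  # iterative flood fill
                    r, c = stack.pop()
                    pile.add((r, c))
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] \
                                and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                piles.append(pile)
    return piles

# Two separate piles on a 3x4 tray mask:
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
assert sorted(len(p) for p in label_piles(mask)) == [2, 3]
```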
In one possible design, the background server is further configured to obtain, by ingredient recognition based on a target-detection algorithm, a pre-meal count of each ingredient for the pre-meal diner from the food image data acquired synchronously with the first person characteristic information, and a post-meal count of each ingredient for the post-meal diner from the food image data acquired synchronously with the second person characteristic information; to calculate, for each ingredient, the amount of that ingredient eaten by a given diner from that diner's pre-meal and post-meal counts; to arrange all ingredients for that diner in descending order of amount eaten to obtain an ingredient sequence; and finally to determine the several top-ranked ingredients in the sequence as that diner's personally preferred ingredients.
In one possible design, the background server is further configured to obtain, by ingredient recognition based on a target-detection algorithm, a pre-meal count of each ingredient for the pre-meal diner from the food image data acquired synchronously with the first person characteristic information, and a post-meal count of each ingredient for the post-meal diner from the food image data acquired synchronously with the second person characteristic information; to calculate, for each ingredient, a given diner's single-meal consumption of that ingredient from the pre-meal and post-meal counts; to calculate, for a given diner and ingredient, the diner's single-meal intake of each nutrient in that ingredient from the corresponding single-meal consumption; and finally, for a given diner and nutrient, to sum all the corresponding single-meal intakes to obtain the total single-meal intake of that nutrient.
In one possible design, the first or second food information acquisition terminal includes a weigher and a second RFID reader. The weigher collects weight data of the bowl held by the pre-meal or post-meal diner (the bowl has a built-in second RFID tag) and transmits it to the background server as one part of the first or second food characteristic information; the second RFID reader collects the bowl's second RFID information and transmits it as the other part, so that the background server determines, from the food characteristic information acquired synchronously with the first or second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner according to the following steps:
determining the recognition result for the pre-meal or post-meal food from the second RFID information acquired synchronously with the first or second person characteristic information, based on the pre-bound correspondence between foods and bowls;
calculating the food weight corresponding to the recognition result from the weight data acquired synchronously with the first or second person characteristic information, using the known weight of the empty bowl;
determining that food weight as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner.
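The empty-bowl subtraction is simple arithmetic; `empty_weights`, a registry of tare weights keyed by the bowl's RFID tag, is a hypothetical name for illustration:

```python
def food_weight(gross_weight_g, bowl_rfid, empty_weights):
    """Net food weight = measured gross weight minus the known empty
    weight of the RFID-identified bowl (all in grams)."""
    net = gross_weight_g - empty_weights[bowl_rfid]
    return max(net, 0.0)  # guard against small scale noise

empty_weights = {"bowl-17": 180.0}
assert food_weight(430.0, "bowl-17", empty_weights) == 250.0
```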
In one possible design, the first or second food information acquisition terminal includes a weigher and a photographing camera. The weigher collects weight data of the bowl held by the pre-meal or post-meal diner and transmits it to the background server as one part of the first or second food characteristic information; the camera collects image data of the bowl and transmits it as the other part, so that the background server determines, from the food characteristic information acquired synchronously with the first or second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner according to the following steps:
inputting the bowl image data acquired synchronously with the first or second person characteristic information into a pre-built food recognition model and outputting a recognition result for the pre-meal or post-meal food, wherein the food recognition model is an artificial-intelligence model built on a support vector machine, the K-nearest-neighbor method, stochastic gradient descent, multivariate linear regression, a multilayer perceptron, a decision tree, a back-propagation neural network, a convolutional neural network, or a radial basis function network;
calculating the food weight corresponding to the recognition result from the weight data acquired synchronously with the first or second person characteristic information, using the known weight of the empty bowl;
determining that food weight as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner.
In one possible design, the first food information collecting terminal or the second food information collecting terminal includes a depth camera and a second RFID reader, where the depth camera is configured to collect bowl image data of a bowl held by the diner before eating or the diner after eating and having a second RFID tag built therein, and transmit the bowl image data to the backend server as a part of the first food characteristic information or the second food characteristic information, and the second RFID reader is configured to collect second RFID information of the bowl and transmit the second RFID information to the server as another part of the first food characteristic information or the second food characteristic information, so that the backend server can collect the backend food characteristic information according to the backend food characteristic information collected at the same time as the first person characteristic information or the second person characteristic information, determining the fetching amount of the before-eating food by the diner before eating or the residual amount of the after-eating food by the diner after eating according to the following steps:
determining an identification result of the food before eating or the food after eating, based on the correspondence between foods and bowls bound in advance, according to the second RFID information collected synchronously with the first person characteristic information or the second person characteristic information;
according to the bowl image data, estimating the food volume corresponding to the recognition result through the synthesis processing of a food stereo image;
determining the food volume as the taking amount of the food before eating by the diner before eating or the remaining amount of the food after eating by the diner after eating.
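The RFID-based identification step reduces to a lookup against the pre-bound bowl-to-food correspondence. A hypothetical sketch (tag IDs and foods are invented for illustration):

```python
# Sketch of RFID-based food identification: each bowl's second RFID tag is
# bound in advance to the food it holds, so reading the tag yields the food.
BOWL_TO_FOOD = {
    "rfid-9F31": "rice",
    "rfid-A442": "shredded potato",
}

def identify_food(second_rfid_info: str) -> str:
    """Determine the recognition result from the pre-bound bowl-food mapping."""
    return BOWL_TO_FOOD[second_rfid_info]

result = identify_food("rfid-9F31")  # "rice"
```

In this design the camera is relieved of the recognition task entirely and is used only for the stereo-image volume estimation; the RFID read supplies the food identity directly.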
In one possible design, the background server is further configured to calculate, for each food, the intake amount of a certain diner for that food according to the diner's taking amount and remaining amount of that food, then arrange all the foods in descending order of the diner's intake amount to obtain a food sequence, and finally determine the several top-ranked foods in the food sequence as the personal preference foods of that diner.
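The personal-preference ranking described above can be sketched as follows; the amounts are invented example data, with intake computed as taking amount minus remaining amount:

```python
# Sketch of the personal-preference ranking: intake = taking - remaining per
# food, foods sorted by descending intake, top-N kept as preference foods.
def preferred_foods(taken: dict, remaining: dict, top_n: int = 2) -> list:
    intake = {food: taken[food] - remaining.get(food, 0.0) for food in taken}
    ranked = sorted(intake, key=intake.get, reverse=True)
    return ranked[:top_n]

taken = {"rice": 300.0, "bun": 120.0, "shredded potato": 200.0}
remaining = {"rice": 150.0, "bun": 10.0, "shredded potato": 20.0}
# intakes: rice 150, bun 110, shredded potato 180
top = preferred_foods(taken, remaining)  # ["shredded potato", "rice"]
```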
In one possible design, the background server is further configured to calculate, for each food, the single-meal consumption of a certain diner for that food according to the diner's taking amount and remaining amount of that food, then calculate, for that diner and that food, the diner's single-meal intake of each nutrient in the food according to the corresponding single-meal consumption, and finally, for that diner and each nutrient, sum the corresponding single-meal intakes across foods to obtain the corresponding total single-meal intake.
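The nutrient accounting above amounts to multiplying each food's consumed mass by a per-gram nutrient table and summing per nutrient. A sketch under invented nutrient values (the per-gram figures below are illustrative, not nutritional data):

```python
# Sketch of single-meal nutrient accounting: consumption per food times a
# hypothetical per-gram nutrient table, summed over foods per nutrient.
NUTRIENTS_PER_G = {
    "rice": {"carbohydrate": 0.28, "protein": 0.03},
    "bun":  {"carbohydrate": 0.45, "protein": 0.08},
}

def total_intake(consumed_g: dict) -> dict:
    """Total single-meal intake of each nutrient, over all foods consumed."""
    totals = {}
    for food, grams in consumed_g.items():
        for nutrient, per_g in NUTRIENTS_PER_G[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + grams * per_g
    return totals

meal = total_intake({"rice": 100.0, "bun": 100.0})
```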
In one possible design, the background server is further configured to determine a pre-meal timestamp of the diner before eating based on the collection timestamp of the first person characteristic information and/or the first food characteristic information, determine a post-meal timestamp of the diner after eating based on the collection timestamp of the second person characteristic information and/or the second food characteristic information, and obtain the dining duration of the diner based on the pre-meal timestamp and the post-meal timestamp of the diner.
In a second aspect, the invention provides a method for analyzing eating preferences of diners, comprising the following steps:
collecting first person characteristic information of the diner before eating appearing at the meal taking place and first food characteristic information of the food before eating;
collecting second person characteristic information of the diner after eating appearing at the meal receiving place and second food characteristic information of the food after eating;
identifying the diner before eating according to the first person characteristic information, and determining the taking amount of the food before eating by the diner before eating according to the first food characteristic information collected synchronously with the first person characteristic information;
identifying the diner after eating according to the second person characteristic information, and determining the remaining amount of the food after eating by the diner after eating according to the second food characteristic information collected synchronously with the second person characteristic information;
for a certain food, obtaining through statistics the eating preference of a single diner or of multiple diners according to their historical taking amount and historical remaining amount of the certain food.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a diner eating preference analysis system provided by the invention.
Fig. 2 is a schematic flow chart of a first food identification and quantification method provided by the present invention.
Fig. 3 is a schematic flow chart of food identification based on food catalogs provided by the present invention.
Fig. 4 is a schematic flow chart of food quantification based on two different depth cameras according to the present invention.
Fig. 5 is a schematic flow chart of a first method for identifying and quantifying food in a dinner plate according to the present invention.
Fig. 6 is a schematic flow chart of a second method for identifying and quantifying food in a dinner plate according to the present invention.
Fig. 7 is a schematic flow chart of a second food identification and quantification method provided by the present invention.
Fig. 8 is a schematic flow chart of a third food identification and quantification method provided by the present invention.
Fig. 9 is a schematic flow chart of a fourth food identification and quantification method provided by the present invention.
Fig. 10 is a flow chart of the method for analyzing eating preferences of diners according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely representative of exemplary embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly, a second object may be referred to as a first object, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as may appear herein merely describes an association relationship between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" as may appear herein describes another association relationship, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B exist simultaneously. In addition, the character "/" as may appear herein generally indicates that the former and latter associated objects are in an "or" relationship.
As shown in fig. 1, the eating preference analysis system for diners provided in the first aspect of the embodiment includes, but is not limited to, a first person information acquisition terminal, a second person information acquisition terminal, a first food information acquisition terminal, a second food information acquisition terminal and a background server, where the first person information acquisition terminal and the first food information acquisition terminal are arranged together at a meal taking place, the second person information acquisition terminal and the second food information acquisition terminal are arranged together at a meal receiving place, and the four acquisition terminals are each in communication connection with the background server; the first person information acquisition terminal is used for acquiring first person characteristic information of the diner before eating appearing at the meal taking place and transmitting it to the background server; the second person information acquisition terminal is used for acquiring second person characteristic information of the diner after eating appearing at the meal receiving place and transmitting it to the background server; the first food information acquisition terminal is used for acquiring first food characteristic information of the food before eating appearing at the meal taking place and transmitting it to the background server; the second food information acquisition terminal is used for acquiring second food characteristic information of the food after eating appearing at the meal receiving place and transmitting it to the background server; the background server is used for
identifying the diner before eating according to the first person characteristic information, determining the taking amount of the food before eating by the diner before eating according to the first food characteristic information collected synchronously with the first person characteristic information, identifying the diner after eating according to the second person characteristic information, determining the remaining amount of the food after eating by the diner after eating according to the second food characteristic information collected synchronously with the second person characteristic information, and counting, for a certain food, the eating preference degree of a single diner or of multiple diners according to their historical taking amount and historical remaining amount of the certain food.
As shown in fig. 1, in the specific structure of the eating preference analysis system for diners, the meal taking place refers to a position in a dining hall where a diner gets the food before eating with utensils such as dinner plates and/or bowls, such as a "dish getting" position, a "meal getting" position, and/or a meal settlement position, where the food before eating can include, but is not limited to, foods such as rice, dishes, steamed stuffed buns, and/or steamed buns that are taken. The meal receiving place is the place in the dining hall for recovering the food that the diner has not finished eating, i.e., the place correspondingding to the meal taking place; because leftover food and scraps are collected there, the meal receiving place is sometimes called a leftover receiving place, where the food after eating refers to the residual food corresponding to the food before eating, such as leftover rice, dishes, steamed stuffed buns, and/or steamed buns.
The first person information acquisition terminal can specifically comprise a camera, the lens view field of the camera covers the meal taking place, and further the first person information acquisition terminal can be used for acquiring the face image of the diner before eating and transmitting the face image to the background server as the first person characteristic information. The second person information acquisition terminal can specifically comprise another camera, the lens view field of the camera covers the meal receiving place, and the second person information acquisition terminal can be further used for acquiring the face image of the eaten diner and transmitting the face image to the background server as the second person characteristic information.
The first food information acquisition terminal may specifically include a group of depth cameras whose lens fields of view cover the meal taking place, so that the first food information acquisition terminal may be used to acquire food image data of the food before eating and transmit the food image data to the background server as the first food characteristic information. The second food information acquisition terminal may specifically include another group of depth cameras whose lens fields of view cover the meal receiving place, so that the second food information acquisition terminal may be used to acquire food image data of the food after eating and transmit the food image data to the background server as the second food characteristic information. The depth camera is also called a 3D camera; as the name suggests, it can detect the depth of the shooting space, which is its biggest difference from an ordinary camera. That is, the distance between each point in the image and the camera is accurately known, so that, together with the (x, y) coordinates of the point in the 2D image, the three-dimensional space coordinates of each point in the image can be obtained; the real scene can finally be restored from these three-dimensional coordinates, enabling applications such as scene modeling. In order to facilitate the subsequent precise synthesis of food stereo images from the food image data, there are preferably multiple depth cameras arranged at different viewing angles around the position (i.e., the meal taking place or the meal receiving place). In further detail, the depth camera includes, but is not limited to, a binocular camera and/or a time-of-flight camera, and/or a single time-of-flight camera together with a monocular optical camera is used to acquire food stereo image data containing color information.
The background server, as the core device of the eating preference analysis system for diners, needs certain computing resources for data processing. After receiving a face image serving as the first person characteristic information or the second person characteristic information, the background server may identify the diner before eating or the diner after eating through face recognition processing according to the face image, where the specific algorithm used in the face recognition processing is an existing algorithm. After receiving food image data serving as the first food characteristic information or the second food characteristic information, the background server may determine, according to the food characteristic information collected synchronously with the first person characteristic information or the second person characteristic information, the taking amount of the food before eating by the diner before eating or the remaining amount of the food after eating by the diner after eating according to the following steps S11 to S13, as shown in fig. 2.
S11, inputting the food image data collected synchronously with the first person characteristic information or the second person characteristic information into a food identification model obtained by modeling in advance, and outputting an identification result of the food before eating or the food after eating, wherein the food identification model can be, but is not limited to, an artificial intelligence model obtained by modeling based on a support vector machine, the K-nearest-neighbor method, stochastic gradient descent, multivariate linear regression, a multilayer perceptron, a decision tree, a back-propagation neural network, a convolutional neural network, a radial basis function network, or the like.
In step S11, specifically, the food image data collected synchronously with the first person characteristic information is input into the food identification model to output the identification result of the food before eating; and the food image data collected synchronously with the second person characteristic information is input into the food identification model to output the identification result of the food after eating. The support vector machine, the K-nearest-neighbor method, stochastic gradient descent, multivariate linear regression, the multilayer perceptron, the decision tree, the back-propagation neural network, the radial basis function network and the like are all common schemes among existing artificial intelligence methods; that is, the food identification model can be obtained through a conventional calibration-and-validation modeling process (the specific process includes a calibration stage and a validation stage of the model: the model simulation results are first compared with measured data, and the model parameters are then adjusted according to the comparison so that the simulation results agree with the actual results).
And S12, according to the food image data, estimating the food volume corresponding to the recognition result through the synthesis processing of the food stereo image.
In step S12, since the food image data is 3D data acquired by a depth camera (e.g., a binocular camera or a time-of-flight camera), a composition process of a food stereo image can be performed according to the food image data in a conventional scene modeling manner to obtain a composition result, and then a food volume corresponding to the recognition result can be estimated according to the composition result.
S13, determining the volume of the food as the taking amount of the before-eating diner to the before-eating food or the residual amount of the after-eating diner to the after-eating food.
In step S13, if the food image data is collected contemporaneously with the first person characteristic information, meaning that the diner before eating and the food before eating appear at the meal taking place at the same time, a trusted person-to-food binding relationship can be established, and the food volume can be determined as the taking amount of the food before eating by the diner before eating. If the food image data is collected contemporaneously with the second person characteristic information, meaning that the diner after eating and the food after eating appear at the meal receiving place at the same time, a trusted person-to-food binding relationship can likewise be established, and the food volume can be determined as the remaining amount of the food after eating by the diner after eating. In addition, contemporaneous acquisition covers both simultaneous acquisition and the case where the time difference between the two acquisitions is not greater than a preset duration threshold (e.g., 1 second).
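The contemporaneous-acquisition rule can be sketched as a plain timestamp check. The 1-second threshold comes from the text; the function name and plain-seconds timestamps are illustrative assumptions:

```python
# Sketch of the contemporaneous-acquisition rule: a person record and a food
# record are bound when their capture timestamps differ by at most a threshold.
SYNC_THRESHOLD_S = 1.0  # preset duration threshold from the text (1 second)

def is_contemporaneous(person_ts: float, food_ts: float,
                       threshold: float = SYNC_THRESHOLD_S) -> bool:
    """True when the two acquisitions may be bound as person-to-food."""
    return abs(person_ts - food_ts) <= threshold

ok = is_contemporaneous(10.0, 10.8)       # within 1 s: bind
too_far = is_contemporaneous(10.0, 11.5)  # beyond 1 s: do not bind
```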
After determining each diner's taking amount and remaining amount for each food, the background server, for a certain food, obtains the eating preference of a single diner or of multiple diners from their historical taking amounts and historical remaining amounts, which specifically includes but is not limited to: according to the historical taking amount and the historical remaining amount of the single/multiple diners for the certain food (such as steamed stuffed buns or shredded potato), for example the taking amount and remaining amount of each meal in the last week/month (if the remaining amount of a diner for the certain food is not determined in a certain meal, it is defaulted to zero for that meal), the historical eating amount (i.e., the difference between the historical taking amount and the historical remaining amount) of the single/multiple diners for the certain food is calculated, and the ratio of the historical eating amount to the historical taking amount is then used as the eating preference of the single/multiple diners for the certain food: the larger the ratio, the stronger the preference. The multiple diners can be, but are not limited to, the diners of one class, one grade, or the whole school, so that the eating preference of the students of one class, one grade, or the whole school for a certain food can be obtained through statistics, which in turn facilitates the canteen's food selection work.
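The preference statistic described above can be sketched in a few lines; the meal history below is invented example data:

```python
# Sketch of the eating-preference statistic: over a history of meals,
# eaten = taken - remaining, and preference = total eaten / total taken.
def eating_preference(history: list) -> float:
    """history: (taken, remaining) pairs over the window; larger result
    means stronger preference. Empty history yields 0.0."""
    taken = sum(t for t, _ in history)
    eaten = sum(t - r for t, r in history)
    return eaten / taken if taken else 0.0

# Three meals of a food, mostly eaten each time: high preference.
pref = eating_preference([(100, 0), (120, 20), (80, 10)])  # 0.9
```

Aggregating the histories of all diners in a class, a grade, or the whole school before calling the function yields the group-level preference the text describes.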
Therefore, based on the above eating preference analysis system for diners, a technical scheme for automatically analyzing the preference degree of canteen diners for various foods is provided: the first person information acquisition terminal and the first food information acquisition terminal are arranged at the meal taking place, and the second person information acquisition terminal and the second food information acquisition terminal are arranged at the meal receiving place, so that the background server can determine the diner before eating and the taking amount of the food before eating according to the information acquired at the meal taking place, determine the diner after eating and the remaining amount of the food after eating according to the information acquired at the meal receiving place, and finally count the eating preference degree of the diner for a certain food according to the historical taking amount and the historical remaining amount of the diner for that food. There is thus no need to manually count the food types in the leftover garbage, which greatly saves time and labor and reduces the difficulty of food selection in the canteen; since the preference for each food is obtained directly, the error rate is effectively reduced and the accuracy of the selection result is improved, facilitating practical application and popularization.
This embodiment further provides, on the basis of the technical solution of the first aspect, a possible design for how to perform person identification: the first person information acquisition terminal or the second person information acquisition terminal includes a first RFID (Radio Frequency Identification) reader, where the first RFID reader is used for collecting first RFID information of the dining card held by the diner before eating or the diner after eating and transmitting the first RFID information to the background server as the first person characteristic information or the second person characteristic information, and the background server identifies the diner corresponding to the first RFID information as the diner before eating or the diner after eating according to the one-to-one correspondence between diners and dining cards bound in advance, where a first RFID tag is arranged in the dining card. Specifically, when the first RFID reader is arranged at the meal taking place, it can be used for collecting the first RFID information of the dining card held by the diner before eating; when the first RFID reader is arranged at the meal receiving place, it can be used for collecting the first RFID information of the dining card held by the diner after eating. Because the binding relationship between the person and the card is established in advance when the dining card is issued, the background server can identify the diner corresponding to the first RFID information as the diner before eating or the diner after eating according to this one-to-one correspondence; the RFID reader can thus replace a camera to acquire the person information and achieve the purpose of person identification.
In this embodiment, on the basis of the first aspect or the first technical solution, another possible design of how to perform person identification is further provided, that is, the first person information collecting terminal or the second person information collecting terminal includes a camera and a first RFID reader, where the camera is configured to collect a face image of the before-meal person or the after-meal person, and transmit the face image to the backend server as a part of the first person feature information or the second person feature information, the first RFID reader is configured to collect first RFID information of a meal card held by the before-meal person or the after-meal person, and transmit the first RFID information to the backend server as another part of the first person feature information or the second person feature information, the background server identifies the before-eating dinners or the after-eating dinners through face recognition processing according to the face images, and identifies the dinners corresponding to the first RFID information as the before-eating dinners or the after-eating dinners according to the one-to-one correspondence relationship between the pre-bound dinners and the dinning cards when the face recognition fails, wherein a first RFID tag is arranged in the dinning cards. Therefore, when face recognition fails, personnel recognition supplement can be carried out through the RFID technology, and personnel recognition results can be obtained.
In this embodiment, on the basis of the technical solution of the first aspect, a third possible design for how to accurately identify food is provided, that is, as shown in fig. 3, food image data collected in synchronization with the first person feature information or the second person feature information is input into a food identification model obtained by modeling in advance, and an identification result of the food before eating or the food after eating is output, where the design includes, but is not limited to, the following steps S111 to S113.
S111, obtaining a food catalog of the same day as the collection date of the first person characteristic information or the second person characteristic information, wherein the food catalog records various foods.
In the step S111, the food catalog may be specifically a menu catalog; the plurality of foods are foods which can be supplied by the canteen on the collection day, and diners can select one or more foods which are wanted to be eaten from the plurality of foods to eat.
S112, for each of the plurality of foods, inputting the food image data collected synchronously with the first person characteristic information or the second person characteristic information into the corresponding food identification model obtained by modeling in advance for that food, and outputting the confidence that the food before eating or the food after eating is that food.
In the step S112, for example, if the plurality of kinds of food include steamed stuffed bun, bread, rice, and shredded potato, the food image data may be input as an input item into a steamed stuffed bun recognition model, a bread recognition model, a rice recognition model, and a shredded potato recognition model, respectively, so as to obtain confidence levels that the food before eating or the food after eating is steamed stuffed bun, bread, rice, and shredded potato.
S113, determining the food which is in the plurality of foods and corresponds to the maximum confidence coefficient as the recognition result of the food before eating or the food after eating.
In the step S113, for example, if the confidences that the food before eating or the food after eating is steamed stuffed bun, bread, rice, and shredded potato are respectively 10%, 5%, 2%, and 80%, the shredded potato may be used as the recognition result of the food before eating or the food after eating.
Therefore, through the third possible design described in the foregoing steps S111 to S113, a limited number of food identification models to be used can be determined in cooperation with the food catalog of the collection day, so as to obtain the final food identification result quickly and accurately.
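Steps S111 to S113 can be sketched as an argmax over per-food model confidences. The per-food models are stubbed here as callables returning fixed confidences matching the worked example; a real system would load trained classifiers instead:

```python
# Sketch of steps S111-S113: run one recognition model per food in the
# day's food catalog and take the food with the maximum confidence.
def recognize(image, models: dict) -> str:
    """models maps food name -> model callable returning a confidence in [0, 1]."""
    confidences = {food: model(image) for food, model in models.items()}
    return max(confidences, key=confidences.get)

# Stub models reproducing the example confidences from the text.
models = {
    "steamed stuffed bun": lambda img: 0.10,
    "bread":               lambda img: 0.05,
    "rice":                lambda img: 0.02,
    "shredded potato":     lambda img: 0.80,
}
result = recognize(None, models)  # "shredded potato"
```

Restricting `models` to the foods in the day's catalog is exactly what keeps the number of model evaluations small, as step S111 intends.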
In this embodiment, on the basis of the technical solution of the first aspect, a fourth possible design for how to perform accurate estimation of the food volume is further provided, that is, as shown in fig. 4, when the depth camera includes a binocular camera and a time-of-flight camera, the food volume corresponding to the recognition result is estimated through the synthesis processing of the food stereo image according to the food image data, which includes, but is not limited to, the following steps S121 to S123.
S121, obtaining a first synthesis result through synthesis processing of a food stereo image according to first image data which is in the food image data and acquired by the binocular camera, and estimating a first volume corresponding to the identification result according to the first synthesis result.
And S122, according to second image data, in the food image data, acquired by the time-of-flight camera, obtaining a second synthesis result through the synthesis processing of a food stereo image, and estimating a second volume corresponding to the identification result according to the second synthesis result.
S123, calculating the food volume V corresponding to the recognition result according to the following formula:
V = η1 * V1 + η2 * V2
in the formula, V1 represents said first volume, V2 represents said second volume, and η1 and η2 respectively represent preset weight coefficients satisfying η1 + η2 = 1.
In the step S123, for example, η1 and η2 are 0.5 and 0.5, respectively.
Therefore, by the fourth possible design described in the foregoing steps S121 to S123, the acquisition results of the binocular camera and the time-of-flight camera can be integrated, and the accuracy of the food volume estimation result can be improved.
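The volume fusion of step S123 is a convex combination of the two estimates. A minimal sketch (the example volumes are invented; the constraint η1 + η2 = 1 is enforced by construction):

```python
# Sketch of step S123: V = eta1*V1 + eta2*V2 with eta1 + eta2 = 1, fusing
# the binocular-camera estimate V1 and the time-of-flight estimate V2.
def fused_volume(v1: float, v2: float, eta1: float = 0.5) -> float:
    eta2 = 1.0 - eta1  # weights must sum to 1
    return eta1 * v1 + eta2 * v2

v = fused_volume(200.0, 220.0)  # 210.0 with the equal weights from the text
```

In practice η1 could be tuned toward whichever camera proves more accurate for the canteen's lighting and food types; the text's 0.5/0.5 split treats both estimates as equally reliable.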
In this embodiment, on the basis of the technical solution of the first aspect, a fifth possible design for how to quantify food in a dinner plate is further provided: the depth camera is configured to photograph the dinner plate held by the diner before eating or the diner after eating, acquire dinner plate image data including food image data of at least one food before eating or at least one food after eating, and transmit the dinner plate image data to the background server as the first food characteristic information or the second food characteristic information, where the dinner plate has a plate background color of a specific color, so that the dinner plate image data further includes plate background color data; according to the food characteristic information collected synchronously with the first person characteristic information or the second person characteristic information, the taking amount of the food before eating by the diner before eating or the remaining amount of the food after eating by the diner after eating is determined, as shown in fig. 5, including but not limited to the following steps S21 to S23.
S21, dinner plate background color data are identified from dinner plate image data which are collected synchronously with the first person characteristic information or the second person characteristic information.
In step S21, since the dinner plate is a special dinner plate and has a dinner plate background color of a specific color, dinner plate background color data can be identified from the dinner plate image data based on the known specific color, for example, when the dinner plate background color is blue, the blue data in the dinner plate image data can be used as the dinner plate background color data.
And S22, carrying out image data segmentation processing on the dinner plate image data by taking the identified dinner plate bottom color data as background data to obtain food image data of each food before eating or each food after eating.
In step S22, since the tray bottom color image in the tray image surrounds the image of each food pile (i.e., each pre-eaten food or each post-eaten food), the food image data of each food pile is obtained by performing a conventional image data segmentation process using the tray bottom color data as background data.
S23, aiming at each food before eating or each food after eating, determining the fetching amount of the diner before eating or the residual amount of the diner after eating according to the corresponding food image data.
In the step S23, the specific determination process of the fetching amount or the remaining amount may refer to the aforementioned steps S11 to S13, which are not described herein again.
Therefore, by the fifth possible design described in the foregoing steps S21 to S23, for a dinner plate with a specific dinner plate background color, food image data of each food stack is cut out from the dinner plate image data based on the identified dinner plate background color data, and then the food taking amount of each pre-eating food or the remaining amount of each post-eating food in the dinner plate is obtained through quantification, so as to achieve the purpose of quantifying at least one food in the dinner plate.
In this embodiment, on the basis of the technical solution of the first aspect, another possible design for how to quantify the food in the dinner plate is further provided, that is, the depth camera is configured to take a picture of the dinner plate held by the person before eating or the person after eating, acquire dinner plate image data including food image data of at least one food before eating or at least one food after eating, and transmit the dinner plate image data to the background server as the first food characteristic information or the second food characteristic information; according to the characteristic information of the food collected in synchronization with the characteristic information of the first person or the characteristic information of the second person, the amount of the meal taken by the before-meal diner for the before-meal diner or the amount of the meal left by the after-meal diner for the after-meal diner is determined, as shown in fig. 6, including but not limited to the following steps S31 to S34.
S31, performing food pile clustering analysis on the dinner plate image data collected in synchronization with the first person characteristic information or the second person characteristic information based on a clustering algorithm, and identifying at least one food pile.
In step S31, clustering is a widely applied exploratory data analysis technique: objects are grouped so that similar objects fall into the same class and dissimilar objects fall into different classes. The dinner plate image data can therefore be classified by a conventional adaptation of an existing clustering algorithm to obtain a food aggregation analysis result, that is, to identify at least one food pile. Specifically, the clustering algorithm may be, but is not limited to, the K-means clustering algorithm, an iterative cluster analysis algorithm whose steps include pre-dividing the data into K groups, randomly selecting K objects as initial cluster centers, calculating the distance between each object and each cluster center, and assigning each object to the nearest cluster center.
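The K-means pass named above (one non-limiting option in the design) can be sketched with a hand-rolled implementation; the pixel coordinates and the choice of K = 2 below are hypothetical:

```python
# Illustrative sketch of step S31: grouping food pixels into K food piles
# with plain K-means. Points are (row, col) coordinates of non-background
# pixels; the data and initial centres are hypothetical.
def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # assignment step: each point goes to its nearest centre
        clusters = [[] for _ in centers]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # update step: recompute each centre as its cluster mean
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# two well-separated pixel groups standing in for two food piles
pixels = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
clusters = kmeans(pixels, centers=[(0.0, 0.0), (9.0, 9.0)])
print([len(c) for c in clusters])
```

A real deployment would initialize the centers randomly (or via k-means++) and choose K from the day's menu rather than fixing it by hand.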
And S32, aiming at each food pile in the at least one food pile, carrying out image data segmentation processing on the dinner plate image data according to the pixel coordinates of the corresponding food pile boundary pixels in the dinner plate image data to obtain corresponding food pile image data.
In step S32, for each food pile, all corresponding food pile boundary pixels form a closed boundary line, and then image data segmentation processing may be performed on the dinner plate image data based on the boundary line to obtain corresponding food pile image data.
And S33, aiming at each food pile, determining corresponding food and food quantity according to the corresponding food pile image data.
In step S33, the specific details for determining the food to which each pile belongs can be obtained by referring to the aforementioned step S11 (i.e., inputting the food pile image data into the food identification model and outputting the food identification result), and the specific details for determining the food amount can be obtained by referring to the aforementioned step S12 (i.e., estimating the food volume through synthesis processing of food stereo images according to the food pile image data), which will not be described herein again.
S34, for each pre-eating food or each post-eating food, summing the food amounts of all food piles in the at least one food pile that belong to that food, and determining the corresponding total food amount as the fetching amount of the before-eating diner or the remaining amount of the after-eating diner.
In step S34, for example, if there are two food heaps in the dinner plate for a certain food, the food amount of the two food heaps can be summed up to obtain the amount of food to be taken or left.
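The per-food summation of step S34 amounts to a simple aggregation; the pile list below is a hypothetical stand-in for the output of steps S31 to S33 (food name, estimated amount in grams):

```python
# Sketch of step S34: summing the quantified amounts of every food pile
# that belongs to the same food. Data is illustrative only.
def total_per_food(piles):
    totals = {}
    for food, amount in piles:
        totals[food] = totals.get(food, 0) + amount
    return totals

# e.g. the same dish split into two separate piles on the tray
pre_meal = total_per_food([("rice", 150), ("rice", 80), ("greens", 60)])
print(pre_meal["rice"])  # 230 g of rice taken in total
```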
Therefore, by the sixth possible design described in the foregoing steps S31 to S34, food image segmentation can be performed directly based on the analysis result of the clustering algorithm, so that the goal of quantifying at least one food in the dinner plate can be achieved without requiring a specially made dinner plate.
In this embodiment, on the basis of the technical solution of the first aspect, a seventh possible design for how to perform preference analysis on food materials is further provided. That is, the backend server is further configured to: obtain, through food material identification processing based on a target detection algorithm and according to the food image data collected in synchronization with the first person characteristic information, the pre-eating identification number of each food material among all food materials for a certain diner; obtain, through food material identification processing based on the target detection algorithm and according to the food image data collected in synchronization with the second person characteristic information, the post-eating identification number of each food material for the certain diner; further calculate, for each food material, the eating amount of the certain diner according to the corresponding pre-eating identification number and post-eating identification number; then, for the certain diner, arrange all food materials in descending order of eating amount to obtain a food material sequence; and finally determine the several food materials ranked first in the food material sequence as the personal preferred food materials of the certain diner. The target detection algorithm can adopt an existing algorithm, for example the YOLO (You Only Look Once) algorithm, an object identification and positioning algorithm based on a deep neural network whose most salient characteristic is its very fast operation speed, which allows it to be used in real-time systems.
Taking shredded potato stir-fried with meat as an example, the pre-eating identification numbers of the potato and the meat and the post-eating identification numbers of the potato and the meat can be respectively obtained through food material identification processing based on the target detection algorithm, and the eating amounts of the potato and the meat can be obtained from the differences between the pre-eating and post-eating identification numbers; finally, for a certain diner, if meat ranks first in the food material sequence, it is determined that the certain diner prefers to eat meat.
Therefore, by the seventh possible design, whether a diner has a food preference problem can be discovered in time, assistance can then be provided to correct the food preference phenomenon in time, and the health and normal growth of the diner can be ensured.
On the basis of the technical solution of the first aspect, this embodiment further provides an eighth possible design for how to quantitatively analyze nutrient intake. That is, the backend server is further configured to: obtain, through food material identification processing based on a target detection algorithm and according to the food image data collected in synchronization with the first person characteristic information, the pre-eating identification number of each food material among all food materials for a certain diner; obtain, through food material identification processing based on the target detection algorithm and according to the food image data collected in synchronization with the second person characteristic information, the post-eating identification number of each food material for the certain diner; further calculate, for each food material, the single-meal consumption of the certain diner according to the corresponding pre-eating identification number and post-eating identification number; then calculate, for the certain diner and a certain food material, the single-meal intake of each nutrient substance in the certain food material according to the corresponding single-meal consumption; and finally summarize, for the certain diner and a certain nutrient substance, all the corresponding single-meal intakes to obtain the corresponding total single-meal intake.
Also taking shredded potato stir-fried with meat as an example, the pre-eating identification numbers of the potato and the meat and the post-eating identification numbers of the potato and the meat can be respectively obtained through food material identification processing based on the target detection algorithm, the single-meal consumptions of the potato and the meat can be obtained from the differences between the pre-eating and post-eating identification numbers, and the total single-meal intake of protein can then be calculated according to the known correspondence between the potato and the meat and protein (one of the nutrient substances).
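The consumption-to-nutrient conversion can be sketched as below; the composition table, food names, and gram figures are illustrative placeholders, not real nutritional data:

```python
# Sketch of the eighth design: turning per-ingredient single-meal
# consumption (already converted to grams) into nutrient intake via a
# known composition table. All figures are hypothetical.
NUTRIENTS_PER_100G = {
    "potato": {"protein": 2.0, "carbohydrate": 17.0},
    "meat":   {"protein": 26.0, "carbohydrate": 0.0},
}

def single_meal_intake(consumed_grams):
    """Sum each nutrient across every ingredient actually eaten."""
    totals = {}
    for food, grams in consumed_grams.items():
        for nutrient, per100 in NUTRIENTS_PER_100G[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per100 * grams / 100.0
    return totals

# shredded potato stir-fried with meat: 120 g potato and 50 g meat eaten
intake = single_meal_intake({"potato": 120, "meat": 50})
print(round(intake["protein"], 1))  # 2.0*1.2 + 26.0*0.5 = 15.4 g protein
```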
Therefore, by the eighth possible design, whether a diner has a nutrient intake imbalance problem can be discovered in time, assistance can then be provided to correct the nutrient intake imbalance phenomenon in time, and the health and normal growth of the diner can be ensured.
In this embodiment, on the basis of the technical solution of the first aspect, another possible design for how to perform food identification and quantitative processing is further provided, that is, the first food information collecting terminal or the second food information collecting terminal includes a weigher and a second RFID reader, where the weigher is configured to collect weight data of a meal bowl held by a meal person before or after eating and having a second RFID tag embedded therein, and transmit the weight data to the background server as a part of content of the first food characteristic information or the second food characteristic information, the second RFID reader is configured to collect second RFID information of the meal bowl, and transmit the second RFID information to the background server as another part of content of the first food characteristic information or the second food characteristic information, so that the background server determines, according to the food characteristic information collected in the same period as the first person characteristic information or the second person characteristic information, the fetching amount of the before-eating diner for the before-eating food or the remaining amount of the after-eating diner for the after-eating food according to the following steps S41 to S43, as shown in fig. 7.
S41, determining the recognition result of the food before eating or the food after eating based on the corresponding relation between the food and the dining bowl bound in advance according to second RFID information synchronously acquired with the first person characteristic information or the second person characteristic information.
In step S41, the pre-binding relationship between the bowl and the food may be, but is not limited to, established as follows: when the meal bowl is loaded with food in advance, the second RFID information corresponding to the loaded food is written into the second RFID tag by an RFID writer, so that the food in the bowl can be determined again by RFID technology at the meal-taking place and the meal-receiving place.
And S42, calculating the weight of the food corresponding to the identification result based on the known weight data of the empty bowl according to the weight data synchronously acquired with the first person characteristic information or the second person characteristic information.
S43, determining the weight of the food as the taking amount of the before-eating diner to the before-eating food or the residual amount of the after-eating diner to the after-eating food.
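The weight computation of steps S42 to S43 is a tare subtraction; the empty-bowl weight and readings below are hypothetical:

```python
# Sketch of steps S42-S43: net food weight from the weigher reading and
# the known empty-bowl weight. The same subtraction yields the fetching
# amount (pre-meal) or the remaining amount (post-meal).
EMPTY_BOWL_G = 180.0  # assumed tare weight of the standard meal bowl

def net_food_weight(measured_g, empty_bowl_g=EMPTY_BOWL_G):
    # clamp at zero so sensor noise cannot produce a negative amount
    return max(measured_g - empty_bowl_g, 0.0)

taken = net_food_weight(430.0)       # pre-meal weighing
remaining = net_food_weight(255.0)   # post-meal weighing
print(taken, remaining, taken - remaining)  # 250.0 75.0 175.0 g eaten
```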
Therefore, the ninth possible design can be used for identifying and quantifying food in the bowl based on the RFID technology and the weighing means, and the accuracy and the practicability of a quantification result before and after the food is eaten are improved.
In this embodiment, on the basis of the technical solution of the first aspect, a tenth possible design for how to perform food identification and quantitative processing is further provided, that is, the first food information collecting terminal or the second food information collecting terminal includes a weigher and a photographing camera, where the weigher is configured to collect weight data of a meal bowl held by the diner before eating or the diner after eating and transmit the weight data to the backend server as a part of the content of the first food characteristic information or the second food characteristic information, and the photographing camera is configured to collect bowl image data of the bowl and transmit the bowl image data to the backend server as another part of the content of the first food characteristic information or the second food characteristic information, so that the backend server determines, according to the food characteristic information collected in the same period as the first person characteristic information or the second person characteristic information, the amount of food taken by the diner before eating or the amount of food left by the diner after eating according to the following steps S51 to S53, as shown in fig. 8.
And S51, inputting the bowl image data which is acquired synchronously with the first person characteristic information or the second person characteristic information into a food identification model obtained by modeling in advance, and outputting the identification result of the food before eating or the food after eating, wherein the food identification model can be but not limited to an artificial intelligence model obtained by modeling based on a support vector machine, a K nearest neighbor method, a random gradient descent method, a multivariate linear regression, a multilayer perceptron, a decision tree, a back propagation neural network, a convolutional neural network or a radial basis function network and the like.
In the step S51, for details of identification, refer to the step S11, which is not described herein again.
S52, calculating the weight of the food corresponding to the identification result according to the weight data which is collected simultaneously with the first person characteristic information or the second person characteristic information and based on the known weight data of the empty bowls.
S53, determining the weight of the food as the taking amount of the before-eating diner to the before-eating food or the residual amount of the after-eating diner to the after-eating food.
Therefore, by the tenth possible design, the food in the bowl can be identified and quantified based on image identification technology and weighing means, improving the accuracy and practicability of the quantitative results before and after the food is eaten.
In this embodiment, on the basis of the technical solution of the first aspect, another possible design for how to perform food identification and quantitative processing is further provided, that is, the first food information collecting terminal or the second food information collecting terminal includes a depth camera and a second RFID reader, where the depth camera is configured to collect bowl image data of a bowl held by the diner before eating or the diner after eating and having a second RFID tag built therein, and transmit the bowl image data to the backend server as a part of the content of the first food characteristic information or the second food characteristic information, and the second RFID reader is configured to collect second RFID information of the bowl and transmit the second RFID information to the backend server as another part of the content of the first food characteristic information or the second food characteristic information, so that the background server determines, according to the food characteristic information collected in the same period as the first person characteristic information or the second person characteristic information, the fetching amount of the before-eating diner for the before-eating food or the remaining amount of the after-eating diner for the after-eating food according to the following steps S61 to S63, as shown in fig. 9.
S61, determining the recognition result of the food before eating or the food after eating based on the corresponding relation between the food and the dining bowl bound in advance according to second RFID information synchronously acquired with the first person characteristic information or the second person characteristic information.
In the step S61, the specific identification details can be referred to in the foregoing step S41, which is not described herein again.
S62, according to the bowl image data, estimating the food volume corresponding to the identification result through the synthesis processing of the food three-dimensional image.
In the step S62, the detailed estimation details can be referred to the aforementioned step S12, which is not described herein again.
S63, determining the volume of the food as the taking amount of the before-eating diner to the before-eating food or the residual amount of the after-eating diner to the after-eating food.
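The volume estimation referred to in step S62 can be roughly sketched from a depth map: given a reference plane for the empty bowl, the food volume is the summed height of each food pixel times the area one pixel covers. The grid and calibration values below are hypothetical, and a real system would synthesize multiple views rather than use a single map:

```python
# Rough sketch of depth-map volume integration for step S62.
def food_volume_cm3(depth_map, plane_depth, pixel_area_cm2):
    """Integrate height above the empty-bowl reference plane."""
    volume = 0.0
    for row in depth_map:
        for depth in row:
            height_cm = plane_depth - depth  # nearer to camera = taller
            if height_cm > 0:
                volume += height_cm * pixel_area_cm2
    return volume

depth = [
    [10.0, 10.0, 10.0],
    [10.0,  8.0,  7.0],   # food rises 2 cm and 3 cm above the plane
    [10.0, 10.0, 10.0],
]
print(food_volume_cm3(depth, plane_depth=10.0, pixel_area_cm2=0.5))  # 2.5
```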
Therefore, by the eleventh possible design, the food in the bowl can be identified and quantified based on RFID technology and 3D modeling technology, improving the accuracy and practicability of the quantitative results before and after the food is eaten.
In this embodiment, on the basis of the technical solution of the first aspect, a twelfth possible design for how to perform preference analysis on food is further provided, that is, the backend server is further configured to calculate, for each food, the eating amount of a certain diner according to the fetching amount and the remaining amount of the certain diner for that food, then, for the certain diner, arrange all foods in descending order of eating amount to obtain a food sequence, and finally determine the several foods ranked first in the food sequence as the personal preferred foods of the certain diner. In this way, whether a diner has a food preference problem can likewise be discovered in time, assistance can then be provided to correct the food preference phenomenon in time, and the health and normal growth of the diner can be ensured.
In this embodiment, on the basis of the technical solution of the first aspect, a thirteenth possible design for quantitatively analyzing nutrient intake is further provided, that is, the background server is further configured to calculate, for each food, the single-meal consumption of a certain diner according to the fetching amount and the remaining amount of the certain diner for that food, then calculate, for the certain diner and a certain food, the single-meal intake of each nutrient substance in the certain food according to the corresponding single-meal consumption, and finally summarize, for the certain diner and a certain nutrient substance, all the corresponding single-meal intakes to obtain the corresponding total single-meal intake. In this way, whether a diner has a nutrient intake imbalance problem can likewise be discovered in time, assistance can then be provided to correct the nutrient intake imbalance phenomenon in time, and the health and normal growth of the diner can be ensured.
This embodiment provides, on the basis of the technical solution of the foregoing first aspect, a fourteenth possible design for how to give a meal time recommendation, that is, the background server is further configured to determine a pre-meal timestamp of the diner according to the collection timestamp of the first person characteristic information and/or the first food characteristic information, determine a post-meal timestamp of the diner according to the collection timestamp of the second person characteristic information and/or the second food characteristic information, and generate a meal occupation time suggestion for the diner according to the pre-meal timestamp and the post-meal timestamp. Specifically, generating the meal occupation time suggestion for the diner according to the pre-meal timestamp and the post-meal timestamp includes, but is not limited to: calculating the average dining occupation time according to the differences between the pre-meal timestamps and the post-meal timestamps of all diners, calculating the individual dining occupation time of a certain diner according to the difference between the pre-meal timestamp and the post-meal timestamp of the certain diner, and finally obtaining the dining occupation time suggestion for the certain diner according to the comparison result of the average dining occupation time and the individual dining occupation time, wherein the dining occupation time suggestion includes increasing the dining speed, maintaining the dining speed, and/or reducing the dining speed.
In detail, if the average dining occupation time is obviously less than the individual dining occupation time, the certain diner is advised to increase the dining speed; if the average dining occupation time is approximately equal to the individual dining occupation time, the certain diner is advised to maintain the dining speed; and if the average dining occupation time is obviously greater than the individual dining occupation time, the certain diner is advised to reduce the dining speed.
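This comparison logic can be sketched as below; the 20% tolerance defining "obviously" and the duration figures are hypothetical choices, not taken from the patent:

```python
# Sketch of the meal-time recommendation: compare one diner's meal
# duration (post-meal minus pre-meal timestamp, in seconds) against the
# average over all diners and emit a pacing suggestion.
def pacing_advice(personal_s, average_s, tolerance=0.2):
    if personal_s > average_s * (1 + tolerance):
        return "speed up"       # average obviously shorter than personal
    if personal_s < average_s * (1 - tolerance):
        return "slow down"      # average obviously longer than personal
    return "maintain pace"      # approximately equal

durations = [900, 1100, 1000, 1800]          # seconds, all diners
avg = sum(durations) / len(durations)        # 1200 s average
print(pacing_advice(1800, avg))  # speed up
print(pacing_advice(1150, avg))  # maintain pace
```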
Therefore, by the fourteenth possible design, dining occupation time suggestions can be generated for diners, scientifically guiding them to eat and avoiding bad dining habits such as wolfing down food.
As shown in fig. 10, the second aspect of the present embodiment further provides a method for analyzing eating preferences of diners, including but not limited to the following steps S1 to S5.
S1, collecting first person characteristic information of diners before eating and first food characteristic information of food before eating, wherein the first person characteristic information of diners before eating appears at a meal leading place.
S2, collecting second person characteristic information of the eaten diner appearing at the food receiving place and second food characteristic information of the eaten food.
S3, identifying the diner before eating according to the first person characteristic information, and determining the fetching amount of the diner before eating on the food before eating according to the first food characteristic information synchronously acquired with the first person characteristic information.
And S4, identifying the eating personnel according to the second personnel characteristic information, and determining the residual quantity of the eating personnel to the eating food according to the second food characteristic information synchronously acquired with the second personnel characteristic information.
S5, counting the eating preference of the single/multiple diners for certain food according to the historical acquisition amount and the historical residual amount of the single/multiple diners for the certain food.
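The statistic of step S5 can be sketched with a simple score; the specific formula (fraction of taken food actually eaten, averaged over historical meals) and the sample histories are assumptions for illustration, since the patent only requires that preference be derived from the historical fetching and remaining amounts:

```python
# Sketch of step S5: a per-food preference score from historical
# (taken, remaining) pairs. Data is hypothetical.
def preference_score(history):
    """history: list of (taken, remaining) pairs for one food."""
    eaten_ratios = [(t - r) / t for t, r in history if t > 0]
    return sum(eaten_ratios) / len(eaten_ratios) if eaten_ratios else 0.0

rice_history = [(200, 0), (180, 20), (220, 0)]     # nearly always finished
greens_history = [(100, 80), (90, 60), (120, 90)]  # mostly left over
print(preference_score(rice_history) > preference_score(greens_history))  # True
```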
For the working process, working details and technical effects of the foregoing method provided in the second aspect of this embodiment, reference may be made to the diner eating preference analysis system of the first aspect or any possible design thereof, which are not described herein again.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and that various other forms of products can be obtained by anyone in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the appended claims, which are intended to be interpreted according to the breadth to which the description is entitled.

Claims (15)

1. A diner eating preference analysis system is characterized by comprising a first person information acquisition terminal, a second person information acquisition terminal, a first food information acquisition terminal, a second food information acquisition terminal and a background server, wherein the first person information acquisition terminal and the first food information acquisition terminal are arranged at a dinning receiving place together;
the first person information acquisition terminal is used for acquiring first person characteristic information of diners before eating appearing at the meal taking place and transmitting the first person characteristic information to the background server;
the second person information acquisition terminal is used for acquiring second person characteristic information of eaten diners appearing at the meal receiving place and transmitting the second person characteristic information to the background server;
the first food information acquisition terminal is used for acquiring first food characteristic information of food before eating appearing at the meal receiving position and transmitting the first food characteristic information to the background server;
the second food information acquisition terminal is used for acquiring second food characteristic information of the eaten food appearing at the food receiving place and transmitting the second food characteristic information to the background server;
the background server is used for identifying the diner before eating according to the first person characteristic information, determining the fetching amount of the diner before eating for the food before eating according to the first food characteristic information synchronously acquired with the first person characteristic information, identifying the diner after eating according to the second person characteristic information, determining the residual amount of the diner after eating for the food after eating according to the second food characteristic information synchronously acquired with the second person characteristic information, and counting the eating preference degree of the diner/diner for a certain food according to the historical fetching amount and the historical residual amount of the diner/diner for the certain food.
2. The diner eating preference analysis system according to claim 1, wherein the first person information collecting terminal or the second person information collecting terminal comprises a camera, wherein the camera is used for collecting a face image of the before-meal diner or the after-meal diner, and transmitting the face image to the background server as the first person characteristic information or the second person characteristic information, so that the background server can identify the before-meal diner or the after-meal diner through face recognition processing according to the face image;
or the first person information acquisition terminal or the second person information acquisition terminal comprises a first RFID reader, wherein the first RFID reader is used for acquiring first RFID information of a meal card held by the before-meal diner or the after-meal diner, and transmitting the first RFID information as the first person characteristic information or the second person characteristic information to the background server, so that the background server identifies the diner corresponding to the first RFID information as the before-meal diner or the after-meal diner according to a one-to-one correspondence relationship between the pre-bound diner and the meal card, wherein a first RFID tag is arranged in the meal card;
or the first person information collecting terminal or the second person information collecting terminal comprises a camera and a first RFID reader, wherein the camera is used for collecting a face image of the meal person before eating or the meal person after eating, and transmitting the face image to the background server as a part of the content of the first person characteristic information or the second person characteristic information, the first RFID reader is used for collecting first RFID information of a meal card held by the meal person before eating or the meal person after eating, and transmitting the first RFID information to the background server as the other part of the content of the first person characteristic information or the second person characteristic information, so that the background server identifies the meal person before eating or the meal person after eating through recognition processing according to the face image, and when the face recognition fails, recognizing the diners corresponding to the first RFID information as the diners before eating or the diners after eating according to the one-to-one correspondence relationship between the pre-bound diners and the diner cards, wherein the diner cards are internally provided with first RFID tags.
3. The diner eating preference analysis system according to claim 1, wherein the first food information collecting terminal or the second food information collecting terminal comprises a depth camera, wherein the depth camera is configured to collect food image data of the food before eating or the food after eating, and transmit the food image data to the background server as the first food characteristic information or the second food characteristic information, so that the background server determines the amount of food before eating taken by the eating person or the amount of food after eating left by the eating person according to the food characteristic information collected at the same time as the first person characteristic information or the second person characteristic information;
according to the food characteristic information which is synchronously acquired with the first person characteristic information or the second person characteristic information, determining the fetching amount of the before-eating diner to the before-eating food or the residual amount of the after-eating diner to the after-eating food, wherein the method comprises the following steps:
inputting food image data which is acquired synchronously with the first person characteristic information or the second person characteristic information into a food identification model obtained by modeling in advance, and outputting and obtaining the identification result of the food before eating or the food after eating, wherein the food identification model adopts an artificial intelligence model obtained by modeling based on a support vector machine, a K nearest neighbor method, a random gradient descent method, a multivariate linear regression, a multilayer perceptron, a decision tree, a back propagation neural network, a convolutional neural network or a radial basis function network;
according to the food image data, estimating the food volume corresponding to the recognition result through the synthesis processing of the food stereo image;
determining the food volume as a quantity taken by the pre-prandial eatery for the pre-prandial food or a quantity remaining by the post-prandial eatery for the post-prandial food.
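One of the model families the claim lists, the K-nearest-neighbor method, is simple enough to sketch end to end. The toy "color histogram" features and food labels below are invented stand-ins for real food image data, and the hand-rolled classifier is only a minimal example of the listed technique.

```python
# Minimal sketch of one model family named in the claim (K-nearest-neighbor)
# applied to flattened food image features. Features and labels are made up.
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy "color" features: rice is pale, the tomato dish is red-heavy.
train_x = np.array([[0.9, 0.9, 0.8], [0.9, 0.8, 0.9],
                    [0.8, 0.2, 0.1], [0.9, 0.3, 0.2]])
train_y = ["rice", "rice", "tomato", "tomato"]
```

A production system would of course train on real image descriptors (or use one of the neural models the claim also lists); the voting structure stays the same.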
4. The diner eating preference analysis system of claim 3, wherein there are a plurality of the depth cameras, arranged at positions with different viewing angles;
and/or the depth camera comprises a binocular camera and/or a time-of-flight camera;
and/or the depth camera adopts a pairing of a single time-of-flight camera with a monocular optical camera, so as to acquire three-dimensional food image data containing color information.
5. The diner eating preference analysis system of claim 3, wherein inputting the food image data collected in synchronization with the first person characteristic information or the second person characteristic information into a pre-built food recognition model and outputting a recognition result of the pre-meal food or the post-meal food comprises:
acquiring the food catalog for the day on which the first person characteristic information or the second person characteristic information was collected, the food catalog recording a plurality of foods;
for each food among the plurality of foods, inputting the food image data collected in synchronization with the first person characteristic information or the second person characteristic information into the corresponding pre-built food recognition model, and outputting a confidence that the pre-meal food or the post-meal food is that food;
determining the food with the highest confidence among the plurality of foods as the recognition result of the pre-meal food or the post-meal food.
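The per-food scheme in this claim, one model per catalog entry and an argmax over their confidences, reduces to a few lines. The per-food "models" below are placeholder callables with fixed scores; in reality each would be a trained recognizer keyed by the day's catalog.

```python
# Sketch of claim 5's scheme: one pre-built recognition model per food in the
# day's catalog, each returning a confidence; the highest-confidence food
# becomes the recognition result. The stand-in models are hypothetical.

def recognize(image, models):
    """models: dict mapping food name -> callable(image) -> confidence."""
    confidences = {food: model(image) for food, model in models.items()}
    return max(confidences, key=confidences.get)

# Placeholder per-food scorers for an invented three-item catalog.
catalog_models = {
    "rice":    lambda img: 0.12,
    "noodles": lambda img: 0.81,
    "soup":    lambda img: 0.35,
}
```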
6. The diner eating preference analysis system of claim 3, wherein, when the depth camera comprises a binocular camera and a time-of-flight camera, estimating, from the food image data, the food volume corresponding to the recognition result through synthesis of a stereoscopic food image comprises:
obtaining a first synthesis result through stereoscopic food image synthesis from the first image data, in the food image data, acquired by the binocular camera, and estimating from the first synthesis result a first volume corresponding to the recognition result;
obtaining a second synthesis result through stereoscopic food image synthesis from the second image data, in the food image data, acquired by the time-of-flight camera, and estimating from the second synthesis result a second volume corresponding to the recognition result;
calculating the food volume V corresponding to the recognition result according to the following formula:
V = η1·V1 + η2·V2
where V1 represents the first volume, V2 represents the second volume, and η1 and η2 represent preset weight coefficients satisfying η1 + η2 = 1.
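The fusion formula above is a plain convex combination of the two volume estimates, which can be written out directly; the function name and example numbers are illustrative only.

```python
# Claim 6's fusion of the binocular estimate V1 and the time-of-flight
# estimate V2 with preset weights eta1 + eta2 = 1, written out directly.

def fuse_volumes(v1, v2, eta1, eta2):
    """Weighted combination V = eta1*V1 + eta2*V2; weights must sum to 1."""
    if abs(eta1 + eta2 - 1.0) > 1e-9:
        raise ValueError("weight coefficients must satisfy eta1 + eta2 = 1")
    return eta1 * v1 + eta2 * v2
```

Because the weights sum to 1, the fused volume always lies between the two sensor estimates, so a miscalibrated weight pair cannot push the result outside the measured range.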
7. The diner eating preference analysis system of claim 3, wherein the depth camera is configured to capture, by photographing a meal tray held by the pre-meal diner or the post-meal diner, meal tray image data comprising food image data of at least one pre-meal food or at least one post-meal food, and to transmit the meal tray image data to the background server as the first food characteristic information or the second food characteristic information, wherein the meal tray has a background of a specific color, so that the meal tray image data further comprises meal tray background color data;
wherein determining, according to the food characteristic information collected in synchronization with the first person characteristic information or the second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner comprises:
identifying the meal tray background color data from the meal tray image data collected in synchronization with the first person characteristic information or the second person characteristic information;
taking the identified meal tray background color data as background data and performing image segmentation on the meal tray image data to obtain food image data for each pre-meal food or each post-meal food;
for each pre-meal food or each post-meal food, determining from the corresponding food image data the amount taken by the pre-meal diner or the amount left by the post-meal diner.
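The segmentation step above keys on the tray's known background color: pixels close to that color are background, everything else is food. A minimal sketch with a tiny invented RGB array and an assumed color tolerance:

```python
# Sketch of claim 7's tray-background segmentation: pixels near the tray's
# known background color are background, the rest are food. The 4x4 "image",
# the tray color, and the tolerance are invented for illustration.
import numpy as np

def food_mask(image, bg_color, tol=30):
    """Boolean mask that is True where a pixel differs from the tray color."""
    diff = np.abs(image.astype(int) - np.array(bg_color, dtype=int))
    return diff.max(axis=-1) > tol

# Tiny RGB image: tray-blue background with a 2x2 patch of food-colored pixels.
tray_blue = (20, 60, 200)
img = np.full((4, 4, 3), tray_blue, dtype=np.uint8)
img[1:3, 1:3] = (180, 120, 40)  # the "food" region
```

The count of True pixels in the mask (or the mask fed into the depth pipeline) then stands in for the per-food amount estimate the claim describes.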
8. The diner eating preference analysis system of claim 3, wherein the depth camera is configured to capture, by photographing a meal tray held by the pre-meal diner or the post-meal diner, meal tray image data comprising food image data of at least one pre-meal food or at least one post-meal food, and to transmit the meal tray image data to the background server as the first food characteristic information or the second food characteristic information;
wherein determining, according to the food characteristic information collected in synchronization with the first person characteristic information or the second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner comprises:
performing food-pile cluster analysis, based on a clustering algorithm, on the meal tray image data collected in synchronization with the first person characteristic information or the second person characteristic information, so as to identify at least one food pile;
for each food pile among the at least one food pile, performing image segmentation on the meal tray image data according to the pixel coordinates, in the meal tray image data, of the boundary pixels of that food pile, to obtain the corresponding food pile image data;
for each food pile, determining the corresponding food and food amount from the corresponding food pile image data;
for each pre-meal food or each post-meal food, summing the food amounts of all food piles among the at least one food pile that belong to that food, and determining the resulting total food amount as the amount taken by the pre-meal diner or the amount left by the post-meal diner.
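The claim leaves the clustering algorithm open; a simple 4-connected flood fill over food-pixel coordinates is one concrete stand-in (a real system might use DBSCAN or similar). The toy pixel list below is invented.

```python
# Sketch of claim 8's food-pile step: group food-pixel coordinates into
# piles. A 4-connected flood fill stands in for the unspecified clustering
# algorithm; the toy coordinate list is hypothetical.
from collections import deque

def find_piles(food_pixels):
    """Group (row, col) food pixels into 4-connected piles (lists of pixels)."""
    remaining = set(food_pixels)
    piles = []
    while remaining:
        seed = remaining.pop()
        pile, queue = [seed], deque([seed])
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    pile.append(nb)
                    queue.append(nb)
        piles.append(pile)
    return piles

# Two separate piles on a toy tray grid.
pixels = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)]
```

Each returned pile's pixel list gives exactly the boundary/interior coordinates the claim's segmentation step needs.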
9. The diner eating preference analysis system of claim 3, wherein the background server is further configured to: obtain, through food material recognition based on a target detection algorithm applied to the food image data collected in synchronization with the first person characteristic information, a pre-meal detection count of each food material among all food materials for the pre-meal diner; obtain, through food material recognition based on the target detection algorithm applied to the food image data collected in synchronization with the second person characteristic information, a post-meal detection count of each food material for the post-meal diner; calculate the eating amount of each food material by a certain diner from that diner's pre-meal and post-meal detection counts of that food material; then, for the certain diner, arrange all food materials in descending order of eating amount to obtain a food material sequence; and finally determine the several food materials ranked first in the food material sequence as the personal preferred food materials of that diner.
10. The diner eating preference analysis system of claim 3, wherein the background server is further configured to: obtain, through food material recognition based on a target detection algorithm applied to the food image data collected in synchronization with the first person characteristic information, a pre-meal detection count of each food material among all food materials for the pre-meal diner; obtain, through food material recognition based on the target detection algorithm applied to the food image data collected in synchronization with the second person characteristic information, a post-meal detection count of each food material for the post-meal diner; calculate the single-meal consumption of each food material by a certain diner from that diner's pre-meal and post-meal detection counts of that food material; then, for the certain diner and a certain food material, calculate the single-meal intake of each nutrient in the food material according to the corresponding single-meal consumption; and finally, for the certain diner and a certain nutrient, sum all corresponding single-meal intakes to obtain the corresponding total single-meal intake.
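The counting arithmetic in claims 9 and 10 is straightforward: consumption is the pre-meal detection count minus the post-meal count, and nutrient intake scales consumption by a per-unit nutrient table. The detection counts and nutrient values below are invented examples, and the per-unit table is an assumption the claims do not specify.

```python
# Sketch of claims 9-10's arithmetic: per-material consumption is the
# pre-meal detection count minus the post-meal count; nutrient intake scales
# consumption by an (assumed) per-unit nutrient table.

def single_meal_consumption(pre_counts, post_counts):
    """Units of each food material eaten: pre-meal count minus post-meal count."""
    return {m: pre_counts[m] - post_counts.get(m, 0) for m in pre_counts}

def nutrient_intake(consumption, nutrient_table):
    """Total intake of each nutrient, summed over all materials eaten."""
    totals = {}
    for material, units in consumption.items():
        for nutrient, per_unit in nutrient_table.get(material, {}).items():
            totals[nutrient] = totals.get(nutrient, 0) + units * per_unit
    return totals

# Hypothetical detection counts and per-unit nutrient contents.
pre = {"meatball": 4, "broccoli": 6}
post = {"meatball": 1, "broccoli": 2}
table = {"meatball": {"protein_g": 5.0}, "broccoli": {"protein_g": 1.0}}
```

Ranking the consumption dict in descending order then yields claim 9's "food material sequence", with the top entries as the diner's preferred materials.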
11. The diner eating preference analysis system of claim 1, wherein the first food information collecting terminal or the second food information collecting terminal comprises a weigher and a second RFID reader, wherein the weigher is configured to collect weight data of a meal bowl that is held by the pre-meal diner or the post-meal diner and has a second RFID tag inside, and to transmit the weight data to the background server as one part of the first food characteristic information or the second food characteristic information, and the second RFID reader is configured to collect second RFID information of the meal bowl and transmit the second RFID information to the background server as the other part of the first food characteristic information or the second food characteristic information, so that the background server determines, according to the food characteristic information collected in synchronization with the first person characteristic information or the second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner according to the following steps:
determining a recognition result of the pre-meal food or the post-meal food, based on a pre-bound correspondence between foods and meal bowls, according to the second RFID information collected in synchronization with the first person characteristic information or the second person characteristic information;
calculating the weight of the food corresponding to the recognition result, based on the known weight of the empty bowl, from the weight data collected in synchronization with the first person characteristic information or the second person characteristic information;
determining the food weight as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner;
or the first food information collecting terminal or the second food information collecting terminal comprises a weigher and a photographing camera, wherein the weigher is configured to collect weight data of a meal bowl held by the pre-meal diner or the post-meal diner and transmit the weight data to the background server as one part of the first food characteristic information or the second food characteristic information, and the photographing camera is configured to collect meal bowl image data of the meal bowl and transmit the meal bowl image data to the background server as the other part of the first food characteristic information or the second food characteristic information, so that the background server determines, according to the food characteristic information collected in synchronization with the first person characteristic information or the second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner according to the following steps:
inputting the meal bowl image data collected in synchronization with the first person characteristic information or the second person characteristic information into a pre-built food recognition model, and outputting a recognition result of the pre-meal food or the post-meal food, wherein the food recognition model is an artificial intelligence model built on a support vector machine, the K-nearest-neighbor method, stochastic gradient descent, multivariate linear regression, a multilayer perceptron, a decision tree, a back-propagation neural network, a convolutional neural network, or a radial basis function network;
calculating the weight of the food corresponding to the recognition result, based on the known weight of the empty bowl, from the weight data collected in synchronization with the first person characteristic information or the second person characteristic information;
determining the food weight as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner;
or the first food information collecting terminal or the second food information collecting terminal comprises a depth camera and a second RFID reader, wherein the depth camera is configured to collect meal bowl image data of a meal bowl that is held by the pre-meal diner or the post-meal diner and has a second RFID tag inside, and to transmit the meal bowl image data to the background server as one part of the first food characteristic information or the second food characteristic information, and the second RFID reader is configured to collect second RFID information of the meal bowl and transmit the second RFID information to the background server as the other part of the first food characteristic information or the second food characteristic information, so that the background server determines, according to the food characteristic information collected in synchronization with the first person characteristic information or the second person characteristic information, the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner according to the following steps:
determining a recognition result of the pre-meal food or the post-meal food, based on a pre-bound correspondence between foods and meal bowls, according to the second RFID information collected in synchronization with the first person characteristic information or the second person characteristic information;
estimating, from the meal bowl image data, the food volume corresponding to the recognition result through synthesis of a stereoscopic food image;
determining the food volume as the amount of pre-meal food taken by the pre-meal diner or the amount of post-meal food left by the post-meal diner.
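The weigher-plus-RFID variant of claim 11 reduces to a lookup and a subtraction: the bowl's tag resolves the food via the pre-bound mapping, and the food weight is the measured weight minus the known empty-bowl weight. Tag IDs, food names, and weights below are invented examples.

```python
# Sketch of claim 11's weigher + RFID variant: resolve the food from the
# bowl's tag via the pre-bound relation, then subtract the known empty-bowl
# weight. All tag IDs, foods, and weights are hypothetical.

def food_amount(rfid_tag, measured_g, bowl_food_map, empty_bowl_g):
    """Return (food, weight in grams) for a weighed, tagged meal bowl."""
    food = bowl_food_map[rfid_tag]      # pre-bound bowl -> food relation
    weight = measured_g - empty_bowl_g  # subtract the known empty weight
    return food, max(weight, 0.0)       # guard against scale noise

bowl_map = {"BOWL-7": "braised pork", "BOWL-9": "fried rice"}
```

Applied at the pick-up point this yields the taken amount; applied at the return point, the left amount.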
12. The diner eating preference analysis system of claim 1, wherein the background server is further configured to calculate, for each food, the eating amount of a certain diner from that diner's taken amount and left amount, then, for the certain diner, arrange all foods in descending order of eating amount to obtain a food sequence, and finally determine the several foods ranked first in the food sequence as the personal preferred foods of that diner.
13. The diner eating preference analysis system of claim 1, wherein the background server is further configured to calculate, for each food, the single-meal consumption of a certain diner from that diner's taken amount and left amount, then, for the certain diner and a certain food, calculate the single-meal intake of each nutrient in the food according to the corresponding single-meal consumption, and finally, for the certain diner and a certain nutrient, sum all corresponding single-meal intakes to obtain the corresponding total single-meal intake.
14. The diner eating preference analysis system of claim 1, wherein the background server is further configured to determine a pre-meal timestamp of a diner from the collection timestamps of the first person characteristic information and/or the first food characteristic information, determine a post-meal timestamp of the diner from the collection timestamps of the second person characteristic information and/or the second food characteristic information, and generate a meal occupancy time recommendation for the diner according to the diner's pre-meal timestamp and post-meal timestamp.
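The timing step in claim 14 bounds the diner's time at table between the two collection timestamps; a naive duration calculation is enough to illustrate it. The ISO timestamp strings are invented, and deriving a recommendation from the duration is left abstract, as in the claim.

```python
# Sketch of claim 14's timing step: the pre-meal and post-meal collection
# timestamps bound the diner's time at table. The timestamps are invented.
from datetime import datetime

def meal_duration_minutes(pre_ts, post_ts):
    """Minutes between the pre-meal and post-meal collection timestamps."""
    delta = datetime.fromisoformat(post_ts) - datetime.fromisoformat(pre_ts)
    return delta.total_seconds() / 60.0

pre_time = "2022-03-08T12:01:00"
post_time = "2022-03-08T12:24:30"
```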
15. A method for analyzing the eating preference of a diner, characterized by comprising the following steps:
collecting first person characteristic information of a pre-meal diner appearing at the food pick-up location, and first food characteristic information of the pre-meal food;
collecting second person characteristic information of a post-meal diner appearing at the food return location, and second food characteristic information of the post-meal food;
identifying the pre-meal diner according to the first person characteristic information, and determining the amount of pre-meal food taken by the pre-meal diner according to the first food characteristic information collected in synchronization with the first person characteristic information;
identifying the post-meal diner according to the second person characteristic information, and determining the amount of post-meal food left by the post-meal diner according to the second food characteristic information collected in synchronization with the second person characteristic information;
for a certain food, obtaining the eating preference of a single diner or of multiple diners by statistics over the historical taken amounts and historical left amounts of that diner or those diners.
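The final statistical step can be sketched with one simple (assumed) preference statistic: the fraction of a food actually eaten across a diner's history, eaten being taken minus left. The claim does not fix the statistic; this ratio and the gram figures below are illustrative choices.

```python
# Sketch of the method's last step: over many meals, the fraction of a food
# actually eaten (taken minus left, relative to taken) serves as a simple
# preference score. The statistic and the histories are invented examples.

def preference_score(history):
    """history: list of (taken, left) pairs for one food and one diner."""
    taken = sum(t for t, _ in history)
    eaten = sum(t - l for t, l in history)
    return eaten / taken if taken else 0.0

curry_history = [(300, 20), (280, 10)]   # grams taken / grams left per meal
salad_history = [(200, 150), (180, 160)]
```

A higher score means less of the food comes back at the return point, so sorting foods by this score reproduces the preference ranking of claims 9 and 12 at the level of whole dishes.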
CN202210220947.8A 2022-03-08 2022-03-08 System and method for analyzing eating preference of diner Active CN114581265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210220947.8A CN114581265B (en) 2022-03-08 2022-03-08 System and method for analyzing eating preference of diner


Publications (2)

Publication Number Publication Date
CN114581265A true CN114581265A (en) 2022-06-03
CN114581265B CN114581265B (en) 2022-09-20

Family

ID=81773550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210220947.8A Active CN114581265B (en) 2022-03-08 2022-03-08 System and method for analyzing eating preference of diner

Country Status (1)

Country Link
CN (1) CN114581265B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090111099A1 (en) * 2007-10-27 2009-04-30 Yongsheng Ma Promoter Detection and Analysis
US20130095459A1 (en) * 2006-05-12 2013-04-18 Bao Tran Health monitoring system
CN109509535A (en) * 2018-10-08 2019-03-22 北京健康有益科技有限公司 The acquisition methods of food volume, the acquisition methods of fuel value of food, electronic equipment
CN110852299A (en) * 2019-11-19 2020-02-28 秒针信息技术有限公司 Method and device for determining eating habits of customers
CN112074248A (en) * 2018-04-27 2020-12-11 爱尔康公司 Three-dimensional visual camera and integrated robot technology platform
CN112381506A (en) * 2020-11-10 2021-02-19 广东电力信息科技有限公司 Intelligent canteen management system based on Internet of things and intelligent prediction recommendation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA YUELONG et al.: "A depth-camera-based method for generating point cloud maps for indoor robot navigation", Engineering of Surveying and Mapping (《测绘工程》) *

Also Published As

Publication number Publication date
CN114581265B (en) 2022-09-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant