CN113034312A - Catering information pushing method, device, equipment and storage medium

Catering information pushing method, device, equipment and storage medium

Info

Publication number
CN113034312A
CN113034312A
Authority
CN
China
Prior art keywords
user
information
meal
determining
meal taking
Prior art date
Legal status
Granted
Application number
CN201911345190.XA
Other languages
Chinese (zh)
Other versions
CN113034312B (en)
Inventor
张磊
冯劲苗
张晓颖
于晋瑄
冀兰菲
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Tianjin Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Tianjin Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Tianjin Co Ltd
Priority to CN201911345190.XA
Publication of CN113034312A
Application granted
Publication of CN113034312B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/12 - Hotels or restaurants
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition


Abstract

The embodiment of the invention discloses a catering information pushing method, device, equipment and storage medium, aiming to solve the prior-art problem that catering information cannot be pushed to a user in a personalized manner. The method comprises the following steps: when it is detected that a user enters a target area, determining the human body posture features of the user in the target area, the human body posture features comprising a trunk angle and a space angle; judging whether the user performs a meal taking action according to the human body posture features; if so, determining the meal taking behavior information of the user, the meal taking behavior information comprising at least one of a meal type and a first meal calorie corresponding to the meal taking behavior; and pushing catering information matched with the meal taking behavior information to the user according to the meal taking behavior information. With this technical scheme, personalized catering information can be pushed to the user in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.

Description

Catering information pushing method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of data services, and in particular to a catering information pushing method, device, equipment and storage medium.
Background
At present, face recognition consumption machines (face-recognition payment terminals) use advanced face recognition technology for identity verification, analyzing facial features as the basis of identity recognition to provide accurate personnel access permissions or consumption records. However, this technology is applied only to the identity verification and payment link. Meanwhile, people generally pay close attention to their diet and hope that their eating habits will promote their health, yet not everyone can spare the time to plan their own diet. Providing a catering information pushing method is therefore a problem to be solved at present.
Disclosure of Invention
The embodiment of the invention provides a catering information pushing method, device, equipment and storage medium, aiming to solve the prior-art problem that catering information cannot be pushed to a user in a personalized manner.
To solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a method for pushing restaurant information, including:
when it is detected that a user enters a target area, determining human body posture features of the user in the target area; the human body posture features comprise a trunk angle and a space angle;
judging whether the user has a meal taking action or not according to the human body posture characteristics;
if so, determining the meal taking behavior information of the user; the meal taking behavior information comprises at least one of a meal type and a first meal calorie corresponding to the meal taking behavior;
and pushing catering information matched with the meal taking behavior information for the user according to the meal taking behavior information.
In a second aspect, an embodiment of the present invention further provides a dining information pushing device, including:
the first determination module is used for determining human body posture characteristics of a user in a target area when the user is detected to enter the target area; the human posture characteristics comprise a trunk angle and a space angle;
the judging module is used for judging whether the user has a meal taking action according to the human body posture characteristics;
the second determining module is used for determining the meal taking behavior information of the user if it is determined that the user performs a meal taking action; the meal taking behavior information comprises at least one of a meal type and a first meal calorie corresponding to the meal taking behavior;
and the pushing module is used for pushing the catering information matched with the meal taking behavior information for the user according to the meal taking behavior information.
In a third aspect, an embodiment of the present invention further provides a dining information pushing device, including:
a memory storing computer program instructions;
a processor, wherein the computer program instructions, when executed by the processor, implement the dining information pushing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes instructions, and when the instructions are executed on a computer, the computer is caused to execute the catering information pushing method according to the first aspect.
In the embodiment of the invention, when it is detected that the user enters the target area, the trunk angle and the space angle of the user in the target area are determined; whether the user performs a meal taking action is then judged according to those angles, the meal taking behavior information of the user is determined when a meal taking action exists, and catering information matched with the meal taking behavior information is pushed to the user. The technical scheme can therefore combine the user's meal taking behavior information to push personalized catering information in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a dining information pushing method in an embodiment of the present invention.
Fig. 2 is a schematic range diagram of a target area of a camera according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of 3D skeleton coordinates in an embodiment of the invention.
Fig. 4 is a schematic flow chart of a dining information pushing method in another embodiment of the invention.
Fig. 5 is a schematic structural diagram of a dining information pushing device in an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a dining information pushing device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a dining information pushing method in an embodiment of the present invention. The method of fig. 1 may include:
s102, when the fact that the user enters the target area is detected, the human body posture feature of the user in the target area is determined.
Wherein, the human posture characteristics comprise a trunk angle and a space angle.
Optionally, the human body posture of the user may be detected by a camera, with the target area being the effective monitoring area. When a user walks within the effective monitoring area, the camera performs skeleton tracking on the human body by acquiring gait samples: it acquires the skeletal joint point information of each frame of image for each walker together with the corresponding depth image, and stores the acquired skeletal joint point information and its corresponding depth image to form a gait database. The data in the gait database can then be called to calculate the user's meal taking behavior information when catering information subsequently needs to be pushed to the user.
For example, when detecting the human body posture of the user, the camera is horizontally placed, and as shown in fig. 2, an isosceles triangle with a height of 3.8 meters and a base length of 4.13 meters is the target area.
In this embodiment, when the user is detected entering the target area, the user's skeletal joint point information and depth image may simply be collected and stored in the gait database, with the data in the gait database extracted for calculation at a preset calculation period; alternatively, the collected skeletal joint point information and depth image of the user may be calculated on directly in real time.
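For illustration only (this sketch is not part of the original disclosure), the gait database described above can be modeled as a simple per-frame store of skeletal joint information plus the matching depth image; the joint-name keys and the retrieval interface are assumptions of the sketch:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class GaitFrame:
    """One frame of gait data: skeletal joint points plus the matching depth image."""
    timestamp: float
    joints: Dict[str, Tuple[float, float, float]]  # joint name -> 3D coordinate
    depth_image: np.ndarray                        # per-pixel depth (grey) values


@dataclass
class GaitDatabase:
    """Simple in-memory gait database; a real system might persist to disk."""
    frames: List[GaitFrame] = field(default_factory=list)

    def add(self, frame: GaitFrame) -> None:
        self.frames.append(frame)

    def frames_since(self, t0: float) -> List[GaitFrame]:
        """Retrieve stored frames for a later, periodic batch calculation."""
        return [f for f in self.frames if f.timestamp >= t0]
```

This supports both usages in the paragraph above: frames can be appended as they are collected and either processed in real time or pulled in batches at the preset calculation period.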
And S104, judging whether the user has the meal taking behavior or not according to the human body posture characteristics.
And S106, if the user is determined to have the meal taking behavior, determining the meal taking behavior information of the user.
The meal taking behavior information comprises a meal type, a first meal calorie and the like corresponding to the meal taking behavior.
And S108, pushing catering information matched with the meal taking behavior information for the user according to the meal taking behavior information.
In the embodiment of the invention, when it is detected that the user enters the target area, the trunk angle and the space angle of the user in the target area are determined; whether the user performs a meal taking action is then judged according to those angles, the meal taking behavior information of the user is determined when a meal taking action exists, and catering information matched with the meal taking behavior information is pushed to the user. The technical scheme can therefore combine the user's meal taking behavior information to push personalized catering information in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.
In one embodiment, when the human posture features of the user in the target area are determined, the skeletal joint point information of the user can be determined according to the depth image of the user collected in advance, and the trunk angle and the space angle of the user can be determined according to the skeletal joint point information of the user.
Wherein the torso angle comprises a first angle between the arms and the torso, a second angle between the hips and the upper body, and the like. The spatial angle may be the frontal orientation of the user in space.
Optionally, human skeleton data can be captured by applying skeleton tracking, image identification, motion recognition and other technical means to the user through a 3D virtual technology device. The color data stream and the depth data stream are acquired simultaneously through the device's three cameras, so that grey-scale and color image data are used synchronously; the three-dimensional depth information of the human body is acquired to obtain the depth image of the human body, from which the skeletal joint point information is then determined.
After the depth image and the skeletal joint information of the user are determined, the torso angle and the frontal orientation in space of the user can be determined by the following methods:
optionally, the coordinates of the bone joint points of the user may be determined according to the collected information of the bone joint points of the user, and the coordinates of the bone joint points are connected to obtain a 3D skeleton diagram (as shown in fig. 3, the coordinates of the bone joint points are not shown in the diagram, and only a connected planar effect diagram is shown in the diagram). Determining vectors among all skeletal joint points in the 3D skeleton diagram, drawing vector lines in the 3D skeleton diagram, and determining a first angle between the arm and the trunk and a second angle between the hip and the upper body through included angles between the vector lines among all joints and a gravity vector line.
For example, when a user stands upright, the vector line between the joints of the arm and the vector line between the head and the hip joint center are perpendicular to the ground (i.e., parallel to the gravity vector line), and when the user moves (e.g., takes a meal or walks), the vector lines between the joints bend and form an angle with the gravity vector line. Therefore, a first angle between the arm and the trunk can be determined through the included angle between the vector line between the joints of the arm and the gravity vector line; the second angle between the hip and the upper body may be determined by the angle formed by the vector line of each joint between the head and the center of the hip joint and the gravity vector line.
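As a minimal sketch of this angle computation (not part of the original disclosure), assuming joints are given as 3D camera-space coordinates, that gravity points along the negative y axis, and that "wrist", "shoulder", "head" and "hip_center" joints exist in the tracking output (all assumptions of the sketch, not fixed by the patent):

```python
import numpy as np


def angle_to_gravity_deg(vec: np.ndarray) -> float:
    """Angle in degrees between a joint-to-joint vector and the gravity vector line."""
    gravity = np.array([0.0, -1.0, 0.0])  # assumed gravity direction in camera space
    cos = float(np.dot(vec, gravity) / (np.linalg.norm(vec) * np.linalg.norm(gravity)))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))


def torso_angles(joints: dict) -> tuple:
    """First angle (arm vs. trunk) and second angle (hip vs. upper body)."""
    arm_vec = np.array(joints["wrist"]) - np.array(joints["shoulder"])
    trunk_vec = np.array(joints["hip_center"]) - np.array(joints["head"])
    first_angle = angle_to_gravity_deg(arm_vec)     # 0 when the arm hangs straight down
    second_angle = angle_to_gravity_deg(trunk_vec)  # 0 when the upper body is upright
    return first_angle, second_angle
```

Both angles are 0 for an upright standing posture, matching the description: the vector lines are parallel to the gravity vector line when the user stands upright and form larger included angles as the user bends to take a meal.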
Optionally, a depth image of the user in the target area may be acquired with the camera. Let x be any pixel in the depth image and d1(x) be the depth value (grey value) at point x. Let D be the set of the eight plane directions (octant angles), D = { t · 45° | t = 0, 1, …, 7 }. Further, let Kα = (K1, K2) denote an offset vector starting at the origin and forming an angle α = t · 45° with the horizontal-right direction, whose components satisfy equation (1): when t = 2 · (2m + 1), m ∈ Z, then K1 = 0 and K2 takes a constant value; when t = 2 · (2m), m ∈ Z, then K2 = 0 and K1 takes a constant value; in all other cases both K1 and K2 take constant values. A vector pair θ = (K_U, K_V), U, V ∈ D, is then formed from any two of the offset vectors, giving 25 pairs in total. For each θ, the local gradient feature is calculated as f_θ(x) = d1(x + K_U) - d1(x + K_V). The value f_θ(x) reflects the gradient information around pixel x and thus characterizes pixel x. For the same object, the local gradient feature is invariant to spatial position: when the object translates freely in the scene, the feature values of points on its surface do not change. The feature therefore distinguishes objects with uneven surfaces well, which means the frontal orientation of the user in space can be determined accurately.
In this embodiment, the skeletal joint point information of the user can be determined from the depth image of the user collected in advance, and the trunk angle and space angle of the user determined from that skeletal joint point information, providing a data basis for subsequently judging whether the user performs a meal taking action. Through the dual judgment of the trunk angle and the space angle, the user's posture can be judged accurately together with the user's frontal orientation in space; that is, the influence of body turning on the determination of the user's posture features while walking is avoided.
In one embodiment, when whether the user has a meal taking behavior is judged according to the human posture characteristics, if a first angle between an arm and a trunk is larger than or equal to a first preset angle and a second angle between a hip and an upper body is larger than or equal to a second preset angle, it is determined that the user has the meal taking behavior.
Optionally, according to actual conditions (for example, the meal taking motions of most users), the first preset angle may be set to 45 degrees and the second preset angle to 25 degrees.
In this embodiment, a meal taking action can be determined from the first angle between the user's arm and trunk being greater than or equal to the first preset angle and the second angle between the hip and upper body being greater than or equal to the second preset angle, making the determination of the user's meal taking action simple, consistent with actual behavior, and resistant to misjudgment.
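A sketch of this decision rule, using the 45-degree and 25-degree preset angles given above (the two input angles would come from the torso-angle computation sketched earlier):

```python
FIRST_PRESET_ANGLE = 45.0   # degrees, arm-to-trunk threshold
SECOND_PRESET_ANGLE = 25.0  # degrees, hip-to-upper-body threshold


def has_meal_taking_action(first_angle: float, second_angle: float) -> bool:
    """Both angle conditions must hold for a meal taking action to be recognized."""
    return first_angle >= FIRST_PRESET_ANGLE and second_angle >= SECOND_PRESET_ANGLE
```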
After determining that the user has the meal taking action, determining the meal taking action information of the user, so as to push the meal information matched with the meal taking action information for the user. The meal taking behavior information comprises meal type, first meal calorie and the like corresponding to the meal taking behavior.
In one embodiment, the meal taking behavior information includes a meal type corresponding to the meal taking behavior. The type of the meal corresponding to the meal taking action of the user can be determined by at least one of the following modes:
the method comprises the steps of obtaining first image information of a food corresponding to a food taking behavior, and determining the type of the food corresponding to the food taking behavior according to the first image information.
Optionally, the type of the food corresponding to the image information of each food may be pre-stored. After the first image information of the food corresponding to the meal taking behavior is acquired, it is matched against the pre-stored food image information, and the food type corresponding to the matched image information is determined as the type of the food corresponding to the meal taking behavior.
Optionally, the acquired first image information of the food corresponding to the food taking behavior may be identified through an image identification technology, and the type of the food corresponding to the food taking behavior is determined according to the identification result.
Mode 2: the meal taking position corresponding to the meal taking behavior is determined according to the human body posture features, and the type of the food corresponding to the meal taking behavior is determined according to the preset correspondence between meal taking positions and food types.
Optionally, the types of the food items corresponding to the food taking positions can be stored in advance, and when the user takes food, the food taking positions corresponding to the food taking behaviors of the user are determined through the human body posture characteristics, so that the types of the food items are determined.
In one embodiment, the meal taking behavior information includes the meal type and the first meal calorie corresponding to the meal taking behavior. The first meal calorie corresponding to the meal taking behavior can be determined according to the preset correspondence between each meal type and its calorie content.
In this embodiment, information such as the meal type and the first meal calorie corresponding to the user's meal taking action can be determined, providing a data basis for pushing catering information and facilitating determination of the catering information to be pushed.
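The two determination modes and the calorie lookup can be sketched together as follows; the position map, the calorie table and the classify_image hook are hypothetical examples, not correspondences given in the patent:

```python
from typing import Callable, Optional, Tuple

# Hypothetical preset correspondences (placeholders, not values from the patent).
POSITION_TO_TYPE = {"counter_1": "stewed meat", "counter_2": "vegetables", "counter_3": "eggs"}
TYPE_TO_KCAL = {"stewed meat": 366, "vegetables": 100, "eggs": 100}


def meal_info(position: str,
              image: Optional[bytes] = None,
              classify_image: Optional[Callable[[bytes], str]] = None) -> Tuple:
    """Determine (meal type, first meal calorie) for one meal taking action.

    Mode 1 (image recognition) is preferred when an image and a classifier are
    available; otherwise mode 2 (the preset position-to-type correspondence) is used.
    """
    meal_type = classify_image(image) if image is not None and classify_image else None
    meal_type = meal_type or POSITION_TO_TYPE.get(position)
    first_meal_kcal = TYPE_TO_KCAL.get(meal_type) if meal_type else None
    return meal_type, first_meal_kcal
```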
In one embodiment, when pushing catering information matched with the meal taking behavior information for the user, the human body parameter information of the user can first be determined from the depth image of the user collected in advance; secondly, the reference calorie information the user needs to ingest every day is determined from the human body parameter information; then, the second meal calorie the user needs to ingest within a preset time period is determined from the first meal calorie corresponding to the user's meal taking behavior and the reference calorie information, and catering information for the preset time period is pushed to the user according to the second meal calorie.
Wherein the human body parameter information comprises height, weight, sex and the like. The actual width corresponding to the pixel width of each depth image frame can be calculated using the ratio between the depth image data and the actual data, and the accurate actual height of the human body then calculated from that actual width; similarly, the accurate actual body width can be calculated by the same method. The actual weight of the human body can be estimated from the actual width and actual height, and the user's sex can be accurately identified from the differing human body structures in the depth image. The preset time period can be a breakfast period, a lunch period, a dinner period and the like, and the time period(s) at which catering information is pushed to the user may be preset.
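One plausible reading of this width/height estimation is a pinhole-camera proportion between pixel extent and real size at a known depth. The focal-length default and the weight heuristic below are illustrative assumptions of this sketch, not values from the patent:

```python
import math


def real_size_m(pixel_extent: float, depth_m: float, focal_px: float = 525.0) -> float:
    """Pinhole proportion: real size = pixel extent x depth / focal length in pixels."""
    return pixel_extent * depth_m / focal_px


def estimate_weight_kg(height_m: float, width_m: float) -> float:
    """Crude elliptic-cylinder heuristic for body weight (purely illustrative)."""
    thickness_m = 0.6 * width_m                                  # assumed body depth
    volume_m3 = math.pi * (width_m / 2) * (thickness_m / 2) * height_m
    return 1000.0 * 0.45 * volume_m3                             # density x fill factor


# E.g. a 300-pixel-tall silhouette seen 3 m away: about 1.71 m tall.
height_m = real_size_m(300, 3.0)
weight_kg = estimate_weight_kg(height_m, real_size_m(80, 3.0))
```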
Optionally, according to the human body parameter information, the daily calorie requirement of the human body can be calculated by the following formulas (the Harris-Benedict equations):
Female: 655 + (9.6 × weight in kg) + (1.8 × height in cm) - (4.7 × age in years)
Male: 66 + (13.7 × weight in kg) + (5.0 × height in cm) - (6.8 × age in years)
so as to determine the reference calorie information the user needs to ingest every day.
In this embodiment, the human body parameter information of the user can be determined from the depth image collected in advance, and from it the reference calorie information the user needs to ingest each day. The second meal calorie the user needs to ingest within the preset time period is then determined from the first meal calorie and the reference calorie information, and catering information for the preset time period is pushed to the user according to the second meal calorie. The pushed catering information is thus personalized for the user rather than calculated through other channels (such as big data), and is therefore of more reference value.
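A sketch of the calorie bookkeeping in this embodiment: the daily-reference formulas are the ones given above, while deducting the first meal calorie and applying a per-period share is one plausible reading of how the second meal calorie is obtained, not a formula the patent spells out:

```python
def daily_reference_kcal(sex: str, weight_kg: float, height_cm: float, age_years: float) -> float:
    """Reference calories to ingest per day, per the formulas in the description."""
    if sex == "female":
        return 655 + 9.6 * weight_kg + 1.8 * height_cm - 4.7 * age_years
    return 66 + 13.7 * weight_kg + 5.0 * height_cm - 6.8 * age_years


def second_meal_kcal(reference_kcal: float, first_meal_kcal: float, period_share: float) -> float:
    """Calories still to ingest in the preset period (e.g. period_share=0.35 for lunch)."""
    remaining_today = max(reference_kcal - first_meal_kcal, 0.0)
    return remaining_today * period_share
```

For instance, daily_reference_kcal("female", 50, 160, 25) evaluates to 1305.5, matching the worked example later in the description.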
In one embodiment, the catering preference information of the user can be determined according to the meal taking behavior information, and then the catering information in the preset time period can be pushed for the user according to the catering preference information of the user and the determined second meal calorie required to be taken in by the user in the preset time period.
The catering preference information of the user can be determined according to the type of the food corresponding to the food taking behavior of the user.
For example, according to the type of the food corresponding to the food taking behavior of a certain user, the food taken by the user can be determined to include stewed meat, vegetables and eggs, so that the catering preference of the user can be determined to be stewed meat, vegetables and eggs, and then the catering information in the preset time period is pushed for the user according to the catering preference information of the user and the determined heat of the second food required to be taken in by the user in the preset time period.
In this embodiment, the user's catering preference information can be determined from the meal taking behavior information, and catering information for the preset time period can then be pushed to the user according to that preference information and the second meal calorie the user needs to ingest within the period. The catering information pushed to the user thus better matches the user's preferences, improving the user's dining experience.
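Preference-aware pushing can be sketched as follows; the menu records, tags and tolerance are hypothetical examples:

```python
def push_catering_info(candidates: list, preferences: list, budget_kcal: float,
                       tolerance: float = 0.15) -> list:
    """Pick candidate meals matching the user's preferences within the calorie budget."""
    matched = [m for m in candidates
               if any(p in m["tags"] for p in preferences)
               and m["kcal"] <= budget_kcal * (1 + tolerance)]
    return sorted(matched, key=lambda m: abs(m["kcal"] - budget_kcal))


menu = [
    {"name": "stewed spareribs with potato", "tags": ["stewed meat"], "kcal": 366},
    {"name": "tomato and egg soup", "tags": ["eggs", "vegetables"], "kcal": 118},
]
print(push_catering_info(menu, ["stewed meat", "eggs"], budget_kcal=500))
```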
Fig. 4 is a schematic flow chart of a dining information pushing method in another embodiment of the invention.
The method of FIG. 4 may include:
s401, collecting the depth image and the bone joint point information of the user in the target area.
Specifically, the three cameras of a 3D virtual technology device can simultaneously acquire the color data stream and the depth data stream, so that grey-scale and color image data are used synchronously; the three-dimensional depth information of the human body is acquired to obtain the depth image, from which the skeletal joint point information is then determined. The target area may be the effective monitoring area of the 3D virtual technology device.
S402, according to the collected depth image and the bone joint point information of the user, determining a first angle between an arm and a trunk of the user, a second angle between a hip and an upper body of the user and the front orientation of the user in the space.
Specifically, the skeletal joint point coordinates of the user can be determined from the acquired skeletal joint point information, and the coordinates connected to obtain a 3D skeleton map. Vectors between the skeletal joint points in the 3D skeleton map are determined and drawn as vector lines, and the first angle between the arm and the trunk and the second angle between the hip and the upper body are determined from the included angles between the inter-joint vector lines and the gravity vector line.
Generally, when a user stands vertically, vector lines between the joints of the arm and vector lines between the head and the center of the hip joint are perpendicular to the ground (i.e., parallel to the gravity vector line), and when the user moves (e.g., takes meals or walks), the vector lines between the joints bend and form an included angle with the gravity vector line. Therefore, a first angle between the arm and the trunk can be determined through the included angle between the vector line between the joints of the arm and the gravity vector line; the second angle between the hip and the upper body may be determined by the angle formed by the vector line of each joint between the head and the center of the hip joint and the gravity vector line.
The depth image of the user in the target area can be acquired with a camera. Let x be any pixel in the depth image and d1(x) be the depth value (grey value) at point x. Let D be the set of the eight plane directions (octant angles), D = { t · 45° | t = 0, 1, …, 7 }. Further, let Kα = (K1, K2) denote an offset vector starting at the origin and forming an angle α = t · 45° with the horizontal-right direction, whose components satisfy equation (1): when t = 2 · (2m + 1), m ∈ Z, then K1 = 0 and K2 takes a constant value; when t = 2 · (2m), m ∈ Z, then K2 = 0 and K1 takes a constant value; in all other cases both K1 and K2 take constant values. A vector pair θ = (K_U, K_V), U, V ∈ D, is then formed from any two of the offset vectors, giving 25 pairs in total. For each θ, the local gradient feature is calculated as f_θ(x) = d1(x + K_U) - d1(x + K_V). The value f_θ(x) reflects the gradient information around pixel x and thus characterizes pixel x. For the same object, the local gradient feature is invariant to spatial position: when the object translates freely in the scene, the feature values of points on its surface do not change. The feature therefore distinguishes objects with uneven surfaces well, which means the frontal orientation of the user in space can be determined accurately.
S403, whether a first angle between the arm and the trunk of the user is larger than or equal to a first preset angle and whether a second angle between the hip and the upper body of the user is larger than or equal to a second preset angle are judged. If yes, go to S404.
The first preset angle can be preset to be 45 degrees and the second preset angle can be preset to be 25 degrees according to actual conditions (such as meal taking actions of most users).
S404, determining that the user has the meal taking action.
S405, determining the type of the food and the heat of the first food corresponding to the food taking action of the user.
Specifically, the food types corresponding to the image information of various foods can be pre-stored; after the first image information of the food corresponding to the meal taking behavior is acquired, the first image information is matched against the pre-stored food image information, and the food type corresponding to the matched image information is determined as the food type corresponding to the meal taking behavior. The first meal calorie corresponding to the meal taking behavior can then be determined according to the preset correspondence between each meal type and its calorie content.
And S406, determining the human body parameter information of the user according to the depth image.
Wherein the human body parameter information comprises height, weight, sex and the like. The actual width corresponding to the pixel width of each depth image frame can be calculated using the ratio between the depth image data and the actual data, and the accurate actual height of the human body then calculated from that actual width; similarly, the accurate actual body width can be calculated by the same method. The actual weight of the human body can be estimated from the actual width and actual height, and the user's sex can be accurately identified from the differing human body structures in the depth image.
And S407, determining reference calorie information which is required to be taken by the user every day according to the human body parameter information.
Wherein, according to the human body parameter information, the daily calorie requirement of the human body can be calculated by the following formulas (the Harris-Benedict equations):
Female: 655 + (9.6 × weight in kg) + (1.8 × height in cm) - (4.7 × age in years)
Male: 66 + (13.7 × weight in kg) + (5.0 × height in cm) - (6.8 × age in years)
so as to determine the reference calorie information the user needs to ingest every day.
For example, for a 25-year-old woman, the height and weight of the user are estimated from the depth image corresponding to her skeletal joint point information; assuming a height of 160 cm and a weight of 50 kg, her daily reference calorie requirement is 1305.5 kcal (655 + 9.6 × 50 + 1.8 × 160 - 4.7 × 25).
S408, determining the calorie of a second meal which is required to be taken by the user within a preset time period according to the calorie of the first meal corresponding to the meal taking behavior of the user and the reference calorie information required to be taken by the user every day.
The preset time period can be a breakfast period, a lunch period, a dinner period and the like. The time period(s) at which catering information is pushed to the user may be preset.
And S409, pushing the catering information within a preset time period for the user according to the heat of the second food.
Optionally, the catering preference information of the user can be determined according to the type of the food corresponding to the food taking behavior, and then the catering information in the preset time period can be pushed for the user according to the catering preference information of the user and the heat quantity of the second food required to be taken in the preset time period of the user.
In addition, continuing with the above example: if the woman enters the target area during the breakfast period, a reasonable meal suggestion can be recommended to her according to the meal information recorded that day, combined with the proportions in which the three meals contribute to daily intake. For example, with a suggested calorie intake of 1504 kcal for the day, the following is recommended:
breakfast takes up about 25% -30% of the total calories per day. Recommending breakfast: 1 cup of yogurt, one piece of bread, 2 eggs. Heat quantity: 180+170+100 ═ 450 cards.
The intake of calories at lunch accounts for about 30% -40% of the daily calories. Recommending lunch: cooked rice (100g), stewed spareribs with potato and pine mushroom, and pickled cabbage (100 g). Heat quantity: 100+366+100 ═ 566 cards.
The intake of the evening meal accounts for about 30% -40% of the total calories of the day. Recommending dinner: 10 dumplings, one part of tomato egg soup. Heat quantity: 370+118 488 card.
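These three-meal proportions can be checked mechanically; a sketch under the stated 25%-30% / 30%-40% / 30%-40% shares, using the recommended day above:

```python
MEAL_SHARES = {  # fraction of the day's total calories per meal, from the description
    "breakfast": (0.25, 0.30),
    "lunch": (0.30, 0.40),
    "dinner": (0.30, 0.40),
}


def within_share(meal: str, meal_kcal: float, daily_kcal: float) -> bool:
    low, high = MEAL_SHARES[meal]
    return low * daily_kcal <= meal_kcal <= high * daily_kcal


# The recommended day above: 450 + 566 + 488 = 1504 kcal in total.
for meal, kcal in {"breakfast": 450, "lunch": 566, "dinner": 488}.items():
    print(meal, within_share(meal, kcal, 1504.0))
```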
In the embodiment of the invention, when it is detected that the user enters the target area, the trunk angle and the space angle of the user in the target area are determined; whether the user performs a meal taking action is then judged according to those angles, the meal taking behavior information of the user is determined when a meal taking action exists, and catering information matched with the meal taking behavior information is pushed to the user. The technical scheme can therefore combine the user's meal taking behavior information to push personalized catering information in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Fig. 5 is a schematic structural diagram of a dining information pushing device in an embodiment of the present invention. Referring to fig. 5, a dining information pushing apparatus may include:
a first determining module 510, configured to determine, when it is detected that a user enters a target area, a human body posture feature of the user in the target area; the human posture characteristics comprise a trunk angle and a space angle;
the judging module 520 is configured to judge whether a meal taking behavior of the user exists according to the human body posture characteristics;
a second determining module 530, configured to determine meal taking behavior information of the user if it is determined that the user has meal taking behavior; the meal taking behavior information comprises at least one of a meal type and a first meal calorie corresponding to the meal taking behavior;
and the pushing module 540 is configured to push the catering information matched with the meal taking behavior information for the user according to the meal taking behavior information.
In one embodiment, the first determining module 510 includes:
the first determining unit is used for determining the bone joint point information of the user according to the depth image of the user acquired in advance;
the second determining unit is used for determining the trunk angle and the space angle of the user according to the skeletal joint point information of the user; the torso angle includes at least one of a first angle between the arms and the torso, and a second angle between the hips and the upper body.
In one embodiment, the determining module 520 includes:
and the third determining unit is used for determining that the user has the meal taking action if the first angle is larger than or equal to a first preset angle and the second angle is larger than or equal to a second preset angle.
In one embodiment, the meal taking behavior information includes a meal type corresponding to the meal taking behavior; the second determining module 530 includes:
the fourth determining unit is used for acquiring first image information of the food corresponding to the meal taking action and determining the type of the food corresponding to the meal taking action according to the first image information; and/or,
the fifth determining unit is used for determining a meal taking position corresponding to the meal taking behavior according to the human body posture characteristics; and determining the type of the food corresponding to the food taking action according to the preset corresponding relation between the food taking position and the type of the food.
In one embodiment, the meal taking behavior information further comprises a first meal calorie; the second determining module 530 further includes:
and the sixth determining unit is used for determining the first meal heat corresponding to the meal taking action according to the preset corresponding relation between each meal type and the meal heat.
In one embodiment, the pushing module 540 includes:
the seventh determining unit is used for determining the human body parameter information of the user according to the depth image of the user acquired in advance; the human body parameter information comprises at least one of height, weight and gender;
the eighth determining unit is used for determining the reference calorie information which is required to be taken by the user every day according to the human body parameter information;
the ninth determining unit is used for determining the calorie of the second meal required to be taken in by the user within a preset time period according to the calorie of the first meal and the reference calorie information;
and the pushing unit is used for pushing the catering information within the preset time period for the user according to the heat of the second catering product.
In one embodiment, the pushing module 540 further comprises:
the tenth determining unit is used for determining the catering preference information of the user according to the meal taking behavior information;
according to the heat of the second meal, the catering information within the preset time period is pushed for the user, and the method comprises the following steps:
and pushing the catering information within a preset time period for the user according to the catering preference information and the heat of the second meal of the user.
The catering information pushing device provided by the embodiment of the invention can realize each process realized by the catering information pushing method in the method embodiment, and is not repeated here to avoid repetition.
In the embodiment of the invention, when it is detected that the user enters the target area, the trunk angle and the space angle of the user in the target area are determined; whether the user performs a meal taking action is then judged according to those angles, the meal taking behavior information of the user is determined when a meal taking action exists, and catering information matched with the meal taking behavior information is pushed to the user. The device can therefore combine the user's meal taking behavior information to push personalized catering information in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.
Fig. 6 is a schematic structural diagram of a dining information pushing device in an embodiment of the present invention.
The dining information pushing device 600 includes but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. It will be understood by those skilled in the art that the configuration of the catering information push device shown in fig. 6 does not constitute a limitation of the catering information push device, and the catering information push device may comprise more or less components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the invention, the catering information pushing device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The processor 610 is configured to determine human body posture features of the user in the target area when it is detected that the user enters the target area; the human posture characteristics comprise a trunk angle and a space angle; judging whether a user has a meal taking action or not according to the human body posture characteristics; if so, determining the meal taking behavior information of the user; the meal taking behavior information comprises at least one of a meal type and a first meal calorie corresponding to the meal taking behavior; and pushing catering information matched with the meal taking behavior information for the user according to the meal taking behavior information.
In the embodiment of the invention, when it is detected that the user enters the target area, the trunk angle and the space angle of the user in the target area are determined; whether the user performs a meal taking action is then judged according to those angles, the meal taking behavior information of the user is determined when a meal taking action exists, and catering information matched with the meal taking behavior information is pushed to the user. The equipment can therefore combine the user's meal taking behavior information to push personalized catering information in a targeted manner, so that the user learns of catering information matched with his or her own dining behavior without having to think or calculate; intelligent pushing of catering information is realized, and the user's dining experience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message sending/receiving process or a call: specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The catering information pushing device provides wireless broadband internet access for the user through the network module 602, such as helping the user to send and receive e-mails, browse webpages, access streaming media and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Furthermore, the audio output unit 603 may also provide audio output related to the specific function performed by the dining information pushing device 600 (e.g., a call signal receiving sound, a message receiving sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the case of the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 601 and then output.
The dining information pushing device 600 further comprises at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 6061 and/or backlight when the dining information pushing device 600 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for identifying the posture of the catering information pushing device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer and tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the dining information push apparatus. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are two independent components in fig. 6 to implement the input and output functions of the dining information pushing apparatus, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the dining information pushing apparatus, and this is not limited here.
The interface unit 608 is an interface for connecting an external device with the dining information pushing apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the dining information push apparatus 600 or may be used to transmit data between the dining information push apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the food and beverage information push device, connects various parts of the whole food and beverage information push device by using various interfaces and lines, and executes various functions and processes data of the food and beverage information push device by running or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby integrally monitoring the food and beverage information push device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The dining information pushing device 600 may further include a power supply 611 (such as a battery) for supplying power to each component, and preferably, the power supply 611 may be logically connected to the processor 610 through a power management system, so that functions of managing charging, discharging, and power consumption management are implemented through the power management system.
In addition, the dining information pushing device 600 includes some functional modules that are not shown, and are not described herein again.
Preferably, an embodiment of the present invention further provides a dining information pushing device, which includes a processor 610, a memory 609, and a computer program that is stored in the memory 609 and can be run on the processor 610, and when being executed by the processor 610, the computer program implements each process of the dining information pushing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program realizes each process of the catering information pushing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A catering information pushing method is characterized by comprising the following steps:
when it is detected that a user enters a target area, determining human body posture features of the user within the target area; the human body posture features comprise a torso angle and a spatial angle;
judging whether the user has a meal taking action according to the human body posture features;
if so, determining meal taking behavior information of the user; the meal taking behavior information comprises at least one of a meal type and first meal calories corresponding to the meal taking action;
and pushing, for the user according to the meal taking behavior information, catering information matched with the meal taking behavior information.
2. The method according to claim 1, wherein the determining human body posture features of the user within the target area comprises:
determining skeletal joint point information of the user according to a depth image of the user collected in advance;
and determining the torso angle and the spatial angle of the user according to the skeletal joint point information of the user; the torso angle comprises at least one of a first angle between an arm and the torso, and a second angle between the hip and the upper body.
3. The method according to claim 2, wherein the judging whether the user has a meal taking action according to the human body posture features comprises:
and if the first angle is greater than or equal to a first preset angle and the second angle is greater than or equal to a second preset angle, determining that the user has a meal taking action.
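Claims 2 and 3 together describe a purely geometric detector: skeletal joint coordinates from the depth image yield the two torso angles, and preset thresholds on those angles flag a meal taking action. A minimal Python sketch follows; the joint names, the vectors chosen for each angle, and the threshold values are illustrative assumptions, since the claims leave them to the implementation.

```python
import numpy as np

def angle_between(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle in degrees between two 3-D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def has_meal_taking_action(joints: dict,
                           first_preset: float = 60.0,
                           second_preset: float = 100.0) -> bool:
    """joints maps assumed joint names to 3-D coordinates taken from the
    skeletal joint point information of a depth image."""
    # First angle (claim 2): between the arm and the torso axis.
    arm = joints["wrist"] - joints["shoulder"]
    torso_axis = joints["neck"] - joints["hip_center"]
    first_angle = angle_between(arm, torso_axis)
    # Second angle (claim 2): between the hip/thigh and the upper body,
    # i.e. how far the user leans while reaching.
    thigh = joints["knee"] - joints["hip_center"]
    second_angle = angle_between(torso_axis, thigh)
    # Claim 3: flag a meal taking action only when both angles reach
    # their preset thresholds.
    return first_angle >= first_preset and second_angle >= second_preset
```

With a Kinect-style skeleton this check would run once per frame; the two preset angles would in practice be tuned on recorded meal-taking sequences.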
4. The method according to claim 3, wherein the meal taking behavior information comprises the meal type corresponding to the meal taking action;
the determining meal taking behavior information of the user comprises:
acquiring first image information of a meal corresponding to the meal taking action, and determining the meal type corresponding to the meal taking action according to the first image information; and/or,
determining a meal taking position corresponding to the meal taking action according to the human body posture features, and determining the meal type corresponding to the meal taking action according to a preset correspondence between meal taking positions and meal types.
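The second branch of claim 4 reduces to a nearest-slot lookup once the reach position is known. A sketch under assumed data structures (the slot coordinates and the preset slot-to-meal-type table are invented for illustration):

```python
from typing import Dict, Optional
import numpy as np

# Preset correspondence between meal taking positions (counter slots)
# and meal types; names and contents are illustrative assumptions.
MEAL_BY_SLOT = {"slot_1": "rice", "slot_2": "braised pork", "slot_3": "greens"}

def meal_type_from_position(reach_point: np.ndarray,
                            slot_positions: Dict[str, np.ndarray]
                            ) -> Optional[str]:
    """Return the meal type of the counter slot nearest to the 3-D point
    the user reached toward (derived from the posture features)."""
    nearest = min(slot_positions,
                  key=lambda s: float(np.linalg.norm(reach_point - slot_positions[s])))
    return MEAL_BY_SLOT.get(nearest)
```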
5. The method according to claim 4, wherein the meal taking behavior information further comprises the first meal calories;
the determining meal taking behavior information of the user further comprises:
and determining the first meal calories corresponding to the meal taking action according to a preset correspondence between each meal type and meal calories.
6. The method according to claim 5, wherein the pushing catering information matched with the meal taking behavior information for the user according to the meal taking behavior information comprises:
determining human body parameter information of the user according to the depth image of the user collected in advance; the human body parameter information comprises at least one of height, weight and gender;
determining reference calorie information that the user needs to ingest each day according to the human body parameter information;
determining second meal calories that the user needs to take within a preset time period according to the first meal calories and the reference calorie information;
and pushing the catering information within the preset time period for the user according to the second meal calories.
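Claims 5 and 6 amount to a table lookup plus one subtraction against a daily reference. The patent names no formula for the reference intake; the Mifflin-St Jeor estimate below is purely a stand-in, and the calorie table values are invented:

```python
# Preset correspondence between meal types and calories (kcal);
# values are made up for illustration (claim 5).
CALORIES_BY_MEAL = {"rice": 230, "braised pork": 520, "greens": 90}

def reference_daily_calories(height_cm: float, weight_kg: float,
                             gender: str, age: int = 30) -> float:
    # Mifflin-St Jeor resting estimate; the patent leaves the formula open.
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if gender == "male" else -161)

def second_meal_calories(meal_type: str, height_cm: float, weight_kg: float,
                         gender: str, period_share: float = 1.0) -> float:
    """Calories the user still needs within the preset time period;
    period_share scales the daily reference to that period (assumed)."""
    first = CALORIES_BY_MEAL[meal_type]                 # claim 5: lookup
    reference = reference_daily_calories(height_cm, weight_kg, gender)
    return max(reference * period_share - first, 0.0)  # claim 6: remainder
```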
7. The method according to claim 6, wherein the pushing catering information matched with the meal taking behavior information for the user according to the meal taking behavior information further comprises:
determining catering preference information of the user according to the meal taking behavior information;
and the pushing the catering information within the preset time period for the user according to the second meal calories comprises:
and pushing the catering information within the preset time period for the user according to the catering preference information of the user and the second meal calories.
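Claim 7 layers a preference signal on top of the calorie budget. One plausible reading, sketched with a frequency-based preference score (the patent does not fix how preference is derived from the meal taking history):

```python
from collections import Counter
from typing import List, Tuple

def rank_push_candidates(candidates: List[Tuple[str, float]],
                         meal_history: List[str],
                         budget: float) -> List[Tuple[str, float]]:
    """candidates: (meal_type, calories) pairs on offer; meal_history:
    meal types from the user's past meal taking behavior information."""
    preference = Counter(meal_history)
    # Keep dishes within the second-meal calorie budget (claim 6's output),
    # then rank by how often the user has taken that meal type before.
    affordable = [(m, c) for m, c in candidates if c <= budget]
    return sorted(affordable, key=lambda mc: preference[mc[0]], reverse=True)
```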
8. A catering information pushing apparatus, characterized by comprising:
a first determining module, configured to determine human body posture features of a user within a target area when it is detected that the user enters the target area; the human body posture features comprise a torso angle and a spatial angle;
a judging module, configured to judge whether the user has a meal taking action according to the human body posture features;
a second determining module, configured to determine meal taking behavior information of the user if the judgment result is positive; the meal taking behavior information comprises at least one of a meal type and first meal calories corresponding to the meal taking action;
and a pushing module, configured to push, for the user according to the meal taking behavior information, catering information matched with the meal taking behavior information.
9. A catering information pushing device, characterized by comprising:
a memory storing computer program instructions;
and a processor, wherein the computer program instructions, when executed by the processor, implement the catering information pushing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the catering information pushing method according to any one of claims 1 to 7.
CN201911345190.XA 2019-12-24 2019-12-24 Dining information pushing method, device, equipment and storage medium Active CN113034312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911345190.XA CN113034312B (en) 2019-12-24 2019-12-24 Dining information pushing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911345190.XA CN113034312B (en) 2019-12-24 2019-12-24 Dining information pushing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113034312A true CN113034312A (en) 2021-06-25
CN113034312B CN113034312B (en) 2023-07-04

Family

ID=76451504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911345190.XA Active CN113034312B (en) 2019-12-24 2019-12-24 Dining information pushing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113034312B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108690A (en) * 2017-12-19 2018-06-01 Method, apparatus, device and storage medium for monitoring diet
CN108549660A (en) * 2018-03-12 2018-09-18 Information pushing method and device
CN109346153A (en) * 2018-08-31 2019-02-15 Digitized dining system and method
CN109447672A (en) * 2018-08-30 2019-03-08 Application recommendation method, device, storage medium and computer equipment
CN110135957A (en) * 2019-05-20 2019-08-16 Dish recommendation method, device and storage medium for healthy diet in a smart restaurant


Also Published As

Publication number Publication date
CN113034312B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN109682011A Temperature control method, apparatus and terminal device
CN109409244B Output method of object placement scheme and mobile terminal
CN103955272B Terminal user posture detection system
CN109381165B Skin detection method and mobile terminal
CN108712603B Image processing method and mobile terminal
CN108683850B Shooting prompting method and mobile terminal
CN109005336B Image shooting method and terminal device
CN108804546B Clothing matching recommendation method and terminal
CN110533651B Image processing method and device
CN110765525B Method, device, electronic equipment and medium for generating scene picture
CN110536479A Object transmission method and electronic equipment
CN110807405A Method for detecting a hidden-camera device and electronic equipment
CN111488057A Page content processing method and electronic equipment
CN111222569A Method, device, electronic equipment and medium for identifying food
CN108881544A Photographing method and mobile terminal
CN109669611A Fitting method and terminal
CN109241832A Face liveness detection method and terminal device
CN109658198B Commodity recommendation method and mobile terminal
CN109545321A Policy recommendation method and device
CN111091519A Image processing method and device
CN110942022A Shooting data output method and electronic equipment
CN110622218A Image display method, device, storage medium and terminal
CN109740493A Target object recommendation method and mobile terminal
CN111405361B Video acquisition method, electronic equipment and computer readable storage medium
CN112818733B Information processing method, device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant