WO2022011509A1 - Method and apparatus for supervising eating behavior - Google Patents


Info

Publication number
WO2022011509A1
Authority
WO
WIPO (PCT)
Prior art keywords
food
target user
chewing
user
type
Prior art date
Application number
PCT/CN2020/101691
Other languages
English (en)
French (fr)
Inventor
刘畅
张立斌
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2020/101691
Priority to CN202080005309.3A
Publication of WO2022011509A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb

Definitions

  • This application relates to the field of artificial intelligence, and more particularly, to methods and apparatus for regulating eating behavior.
  • Diet is a basic condition for sustaining human life. To live a healthy, happy, energetic, and intelligent life, one must not merely eat one's fill but must also eat healthily.
  • the present application provides a method and related device for monitoring eating behavior, which can make people's eating habits healthier.
  • the present application provides a method of regulating eating behavior.
  • The method includes: acquiring a first chewing count of the target user for the current food in the current eating behavior; acquiring the food type of the current food; acquiring, by the smart device, the minimum number of chews required for eating food of that food type; and outputting prompt information when the first chewing count is less than the minimum number of chews.
  • In this way, when the chewing count does not reach the standard, prompt information is output to remind the target user to chew more, guiding the target user toward healthier eating habits.
  • Moreover, the minimum number of chews in this application is the minimum required for the type of food the target user is currently eating, which allows a more accurate, real-time determination of whether the target user's current chewing count reaches the standard; the target user can therefore be reminded more precisely and eat more healthily.
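The core check described above can be sketched as follows. This is a minimal illustration only; the food types and threshold values are invented for the example, not taken from the application:

```python
# Minimal sketch of the core supervision step: compare the chewing count
# for one mouthful against the minimum for its food type and decide
# whether prompt information should be output.
# The food types and thresholds below are assumed example values.

MIN_CHEWS = {"solid": 20, "semi-liquid": 10, "liquid": 1}

def should_prompt(chew_count, food_type):
    """Return True when the chewing count is below the type's minimum."""
    minimum = MIN_CHEWS.get(food_type)
    if minimum is None:
        return False  # unknown food type: no basis for a prompt
    return chew_count < minimum
```

A device would call this once per mouthful and route a `True` result to its output module.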
  • The method of the present application may be executed by local user equipment, by a server, by the local user equipment and the server in cooperation, or by one or more smart chips.
  • the user equipment may be any device with data processing capability, for example, the user equipment may be a wearable device, or may be a smart terminal capable of short-range communication with the wearable device.
  • The server may be a device that communicates with the user equipment over a long distance through Wi-Fi or a communication network.
  • In this application, a smart terminal capable of short-range communication with a wearable device is referred to as a paired device.
  • For example, smartphones, tablets, and computers that can communicate with a wearable device over a short distance via Bluetooth or a data cable can all be paired devices.
  • When the user equipment is a wearable device, some examples of the user equipment include virtual reality (VR) glasses, augmented reality (AR) glasses, or a headset.
  • When the user equipment is a smart terminal, some examples of the user equipment include a smartphone, a tablet computer, a notebook computer, or a desktop computer.
  • In this application, a target user is a user whose current eating behavior is supervised, and the currently supervised eating behavior is called the target eating behavior.
  • The food the target user eats in the target eating behavior is called the target food.
  • The food type to which the target food belongs is called the target type.
  • The number of times the target user chews the target food is called the target chewing count.
  • The minimum number of chews required for a human to chew food of the target type is called the chew-count threshold.
  • The chew-count threshold may be preset, for example, pre-configured in a smart device according to scientific knowledge of a healthy human diet, or it may be set manually by the user.
  • In some implementations, acquiring the minimum number of chews required for eating food of the food type includes: acquiring the minimum number of chews according to the food type and a pre-stored correspondence, where the correspondence maps food types to minimum chew counts.
  • the corresponding relationship between the food type and the minimum number of chews can be stored in advance, so that after obtaining the food type of the current food, the minimum number of chews can be directly obtained according to the corresponding relationship.
  • the pre-stored corresponding relationship may be configured on the smart device, may also be acquired from other devices, or may also be generated according to information input by the user.
  • the corresponding relationship may be generated according to the corresponding relationship directly input by the user, or may be generated according to the minimum number of chewing times input by the user, or may be generated according to information such as age, gender, or physical health status input by the user. In this way, a more accurate minimum number of chews can be obtained for the target user, thereby making the target user's diet healthier.
  • In some implementations, the method further includes acquiring the age of the target user, and acquiring the minimum number of chews required for eating food of the food type includes acquiring the minimum number of chews required for a user of that age to eat food of that food type.
  • the minimum number of chews is not only related to the type of food, but also to the age of the user.
  • Determining the minimum chewing count for the user's eating behavior with reference to both the food type and the user's age yields a more accurate chewing threshold, so more accurate prompt information can be output, which is more conducive to the user's dietary health.
  • In some implementations, obtaining the minimum number of chews required for a user of that age to eat food of the food type includes: determining the minimum number of chews according to the food type, the age, and a preset correspondence, where the correspondence maps food types and ages to minimum chew counts.
  • the correspondence between the food type, the user's age and the minimum number of chews can be stored in advance, so that after obtaining the food type of the current food and the user's age, the minimum number of chews can be directly obtained according to the correspondence.
  • the pre-stored corresponding relationship may be configured on the smart device, may also be acquired from other devices, or may also be generated according to information input by the user.
  • the corresponding relationship may be generated according to the corresponding relationship directly input by the user, or may be generated according to the minimum number of chewing times input by the user, or may be generated according to information such as gender or physical health status input by the user. In this way, a more accurate minimum number of chews can be obtained for the target user, thereby making the target user's diet healthier.
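An age-aware correspondence of the kind described above could be keyed on both the food type and an age band. In this sketch, all band boundaries and chew counts are assumptions made for illustration, not values from the application:

```python
# Illustrative correspondence keyed on (food type, age band).
# Age bands and chew counts are assumed example values.

AGE_BANDS = [(0, 12), (13, 59), (60, 150)]

MIN_CHEWS_BY_TYPE_AND_AGE = {
    ("solid", (0, 12)): 25,
    ("solid", (13, 59)): 20,
    ("solid", (60, 150)): 30,
    ("semi-liquid", (0, 12)): 12,
    ("semi-liquid", (13, 59)): 10,
    ("semi-liquid", (60, 150)): 15,
}

def min_chews_for(food_type, age):
    """Look up the minimum chewing count for a food type and user age."""
    for band in AGE_BANDS:
        if band[0] <= age <= band[1]:
            return MIN_CHEWS_BY_TYPE_AND_AGE.get((food_type, band))
    return None  # age outside all bands
```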
  • In some implementations, the method further includes: acquiring a plurality of dietary data of the target user, where the plurality of dietary data correspond to a plurality of eating behaviors of the target user, and each dietary datum includes the chewing count of the target food eaten by the target user in the corresponding eating behavior and the food type of that target food; analyzing the target user's eating behavior according to the plurality of dietary data to obtain an eating-habit analysis result; and outputting the eating-habit analysis result.
  • the target user's eating habits are analyzed according to the long-term recorded diet data, so that the user can adjust his own eating behavior according to the analysis result, thereby making the user's diet healthier.
  • the eating habits analysis results include one or more of the following:
  • the number of eating occasions of the target user; the total number of mouthfuls of each food type consumed by the target user; the average number of mouthfuls of each food type eaten by the target user per eating behavior; the total number of chews by the target user for each food type; and the number of mouthfuls of each food type for which the target user's chewing count was below the corresponding minimum.
  • In some implementations, each dietary datum further includes identification information of the target user, and acquiring the plurality of dietary data of the target user includes: acquiring the identification information of the target user, and acquiring, from a dietary database, the dietary data that include the identification information, to obtain the plurality of dietary data.
  • Obtaining the identification information of the target user may include identifying the target user to obtain the identification information. For example, the target user's characteristics are recognized from data collected by a sensor, and the identification information is acquired according to a pre-stored correspondence between user characteristics and user identification information.
  • the identification information of the target user may also be obtained according to the information input by the target user, for example, the identification information input by the user may be obtained by logging in to verify the legitimate use right of the target user.
  • In this way, when dietary data are recorded, the user identifier is recorded with them. This enables the first device to obtain the corresponding dietary data for each user and analyze each user's personal eating habits; thus, multiple people can share the same first device, making more people's diets healthier.
  • In some implementations, the first chewing count is the Nth chewing count of the target user in the current eating behavior that is smaller than the corresponding minimum number of chews, where N is a preset positive integer.
  • That is, the user is prompted only after N chewing counts have failed to meet the standard. In this way, multiple mouthfuls of the target user can be considered together before prompting, which makes the prompting more reasonable and, in turn, the user's diet healthier.
  • In some implementations, the current food and the foods corresponding to the N-1 sub-minimum chewing counts preceding the first chewing count are N mouthfuls that the target user eats consecutively in the current eating behavior.
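The consecutive-N rule above can be sketched as a small state machine. Resetting the streak after a prompt fires is an assumption made for this sketch:

```python
# Sketch of the consecutive-N prompting rule: a prompt fires only when
# N mouthfuls in a row are chewed fewer times than their minimums.

class ConsecutivePrompter:
    def __init__(self, n):
        self.n = n          # preset positive integer N
        self.streak = 0     # current run of sub-minimum mouthfuls

    def record(self, chew_count, minimum):
        """Record one mouthful; return True when a prompt should fire."""
        if chew_count < minimum:
            self.streak += 1
        else:
            self.streak = 0  # a compliant mouthful breaks the run
        if self.streak >= self.n:
            self.streak = 0  # assumed: reset after prompting
            return True
        return False
```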
  • In a second aspect, the present application provides an apparatus for monitoring eating behavior, the apparatus comprising modules for performing the method in the first aspect or any one of its implementations.
  • In a third aspect, the present application provides a device for monitoring eating behavior, the device comprising: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to perform the method in the first aspect or any one of its implementations.
  • the apparatus may further include a communication interface or a transceiver, so that the apparatus can transmit relevant information required for implementing the method in the first aspect or any one of the implementation manners to other apparatuses or devices.
  • the memory may be used to store relevant information required for implementing the method in the first aspect or any one of the implementation manners. Examples of such relevant information include sensor-collected data, food type based on sensor-collected data, number of chews, dietary data, dietary analysis results, user identity or user age, and the like.
  • In a fourth aspect, a computer-readable medium is provided, storing program code for execution by a device, the program code comprising instructions for performing the method in the first aspect or any one of its implementations.
  • In a fifth aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the method in the first aspect or any one of its implementations.
  • In a sixth aspect, a chip is provided, the chip including a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory to execute the method in the first aspect or any one of its implementations.
  • Optionally, the chip may further include a memory in which instructions are stored; the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to perform the method in the first aspect or any one of its implementations.
  • In a seventh aspect, a device is provided, comprising: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to perform the method in the first aspect or any one of its implementations.
  • In an eighth aspect, a system is provided, including a user equipment and a server, where the user equipment and the server are each configured to execute part of the steps of the method in the first aspect or any one of its implementations, so as to jointly implement that method.
  • In a ninth aspect, a system is provided, including a first user equipment and a second user equipment, where the first user equipment and the second user equipment are each configured to execute part of the steps of the method in the first aspect or any one of its implementations, so as to jointly implement that method.
  • An example of the second user equipment is a paired device of the first user equipment.
  • In a tenth aspect, a system is provided, including a first user equipment, a second user equipment, and a server, where the three are each configured to execute part of the steps of the method in the first aspect or any one of its implementations, so as to jointly implement that method.
  • An example of the second user equipment is a paired device of the first user equipment.
  • FIG. 1 is an exemplary structural diagram of a system architecture of an embodiment of the present application.
  • FIG. 2 is an exemplary flowchart of a method for monitoring eating behavior according to an embodiment of the present application.
  • FIG. 3 is an exemplary flowchart of a method for monitoring eating behavior according to still another embodiment of the present application.
  • FIG. 4 is an exemplary flowchart of a method for monitoring eating behavior according to another embodiment of the present application.
  • FIG. 5 is an exemplary structural diagram of a system architecture of another embodiment of the present application.
  • FIG. 6 is an exemplary structural diagram of a system architecture of another embodiment of the present application.
  • FIG. 7 is an exemplary structural diagram of a system architecture of still another embodiment of the present application.
  • FIG. 8 is an exemplary structural diagram of a device for monitoring eating behavior according to an embodiment of the present application.
  • FIG. 9 is an exemplary structural diagram of a device for monitoring eating behavior according to another embodiment of the present application.
  • FIG. 10 is an exemplary structural diagram of a computer program product according to an embodiment of the present application.
  • FIG. 1 is an exemplary structural diagram of a system architecture to which the method for monitoring eating behavior of the present application can be applied.
  • the wearable device 110 may be included in the system architecture shown in FIG. 1 .
  • the wearable device 110 may include a sensing module 111, a processing module 112, a storage module 113, and an output module 114.
  • the sensing module 111 may include at least two sensors such as myoelectric activity sensors, cameras, and/or microphones.
  • For example, one EMG activity sensor is placed on the skin surface over the masticatory muscle, in front of the target user's ears, to detect chewing actions; another EMG activity sensor is placed on the skin surface over the thyrohyoid muscle of the target user's neck to detect swallowing actions; the camera can capture images of the food the target user is currently eating; and the microphone can capture the sound of the target user chewing food.
  • the sensing module 111 may include a sound sensor and a camera, and the sound sensor may collect the sound of the target user during the eating process.
  • the camera can be mounted near the microphone location to facilitate taking images of the food.
  • After the sensing module 111 collects various data while the target user eats, it can transmit them to the processing module 112 in a wired or wireless manner. That is, the sensing module 111 and the processing module 112 may communicate in a wired or wireless manner.
  • various data collected by the sensing module 111 may be referred to as sensor data.
  • the processing module 112 may be any processor or chip with computing capabilities. After the processing module 112 receives the sensor data from the sensing module 111, it can perform corresponding processing according to the sensor data.
  • The output module 114 may be, for example, a speaker and/or a display device.
  • the output module 114 may output corresponding information under the control of the processing module 112 .
  • the storage module 113 may be any memory with storage capability.
  • FIG. 2 is an exemplary flowchart of a method for monitoring eating behavior according to an embodiment of the present application. As shown in FIG. 2, the method may include S210 to S270. The method may be performed by the wearable device 110 shown in FIG. 1 .
  • the sensing module 111 of the wearable device 110 may collect behavior data of the target user during the current eating behavior process, and detect the chewing action of the target user through the collected behavior data. It can be understood that the detection of the chewing action in this step may be the detection of each chewing action.
  • When the target user wears the wearable device 110 and eats, the masticatory muscles generate EMG signals as the target user chews. The EMG activity sensor placed on the skin surface over the target user's masticatory muscle detects these signals.
  • the sensing module 111 can transmit the detected EMG signal to the processing module 112 .
  • the processing module 112 detects the chewing action according to the EMG signal.
  • For example, each time the EMG activity sensor detects a mastication EMG signal, the processing module 112 may determine that the target user has performed one chew; that is, one chewing action is detected.
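A hypothetical sketch of this counting step, reduced to its simplest form: each rising crossing of an activation threshold in a rectified EMG amplitude stream is counted as one chew. Real EMG pipelines (filtering, envelope extraction, debouncing) are omitted, and the threshold is an assumed value:

```python
def count_chews(amplitudes, threshold=0.5):
    """Count rising threshold crossings in a rectified EMG stream."""
    chews = 0
    active = False
    for a in amplitudes:
        if not active and a >= threshold:
            chews += 1      # signal rose above threshold: one chew onset
            active = True
        elif active and a < threshold:
            active = False  # signal fell back below threshold
    return chews
```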
  • When the sensing module 111 of the wearable device 110 includes other sensors (for example, an image acquisition sensor such as a camera, or a sound sensor such as a microphone), chewing-action detection may also be performed based on those sensors.
  • the sensing module 111 of the wearable device 110 may collect behavior data of the target user during the current eating behavior process, and detect the swallowing action of the target user through the collected behavior data.
  • When the target user wears the wearable device 110 and eats, the thyrohyoid muscle generates an EMG signal as the target user swallows. The EMG activity sensor placed on the skin surface over the thyrohyoid muscle detects this signal.
  • the sensing module 111 can transmit the detected EMG signal to the processing module 112 .
  • the processing module 112 performs swallowing motion detection according to the EMG signal.
  • the processing module 112 may determine that the target user has performed a swallowing every time the EMG activity sensor placed on the skin surface of the thyrohyoid muscle detects a swallowing EMG signal.
  • When the sensing module 111 of the wearable device 110 includes other sensors (for example, an image acquisition sensor such as a camera, or a sound sensor such as a microphone), swallowing-action detection may also be performed based on those sensors.
  • the number of times of chewing between every two swallowing actions may be determined as the number of times of chewing the current food by the target user.
  • the number of times of chewing is referred to as the current number of times of chewing.
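The segmentation described above, i.e. taking the chews between two successive swallows as one mouthful's chewing count, can be sketched over an interleaved event stream. The string event encoding is illustrative:

```python
def chews_per_mouthful(events):
    """events: iterable of "chew" / "swallow" strings.
    Return the list of chewing counts, one entry per swallow."""
    counts = []
    current = 0
    for event in events:
        if event == "chew":
            current += 1
        elif event == "swallow":
            counts.append(current)  # one mouthful completed
            current = 0
    return counts
```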
  • the food type to which the food currently eaten by the target user belongs can be identified according to the data collected by the sensing module 111 of the wearable device 110 .
  • the food type is referred to as the current food type.
  • the camera can collect images of the food currently eaten by the target user.
  • After the processing module 112 receives the image captured by the camera, it can compare the food image with food images pre-stored in the storage module 113 to identify the current food type; or it can recognize the food image based on a Markov random field to identify the current food type; or it can recognize the food image with a model for recognizing food types to obtain the current food type.
  • An example of a model for recognizing food types is a trained convolutional neural network model.
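As a toy illustration of the comparison-with-pre-stored-images option above, a food image could be classified by nearest histogram distance to labelled reference histograms; a deployed system would more likely use a trained convolutional neural network. The grey-level histogram feature and all values here are assumptions for the sketch:

```python
def histogram(pixels, bins=4):
    """Coarse normalized grey-level histogram of 0-255 pixel values."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = sum(h) or 1
    return [c / total for c in h]

def classify(pixels, references):
    """references: {food_type: histogram}; return the nearest food type."""
    h = histogram(pixels)
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(h, ref))
    return min(references, key=lambda t: dist(references[t]))
```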
  • In addition, the sound collected by the microphone can be used to assist in determining the current food type.
  • For the specific implementation, reference may be made to the prior art; details are not repeated here.
  • The storage module 113 of the wearable device 110 may store a model of the minimum number of chews for food.
  • The model may include a correspondence between food types and minimum chew counts, and the correspondence may be stored in the storage module 113 in the form of a table or a functional relationship.
  • The independent variable of the functional relationship may include the food type, and the dependent variable is the minimum number of chews.
  • The processing module 112 may determine the minimum number of chews required for food of the current food type according to the model stored in the storage module 113.
  • For example, when the correspondence is stored as a table, the processing module 112 can look up the minimum number of chews required for the current food type in that table.
  • An example of a model for the minimum number of chews of food is shown in Table 1.
  • For the case where the food types include solid, semi-liquid, and liquid types, Table 1 lists example foods of each type and the minimum number of chews required for each type.
  • the semi-liquid type can also be understood as the semi-solid type.
  • The processing module 112 of the wearable device 110 compares the current chewing count with the corresponding minimum number of chews to determine whether the current count is less than that minimum.
  • the output module 114 of the wearable device 110 may output prompt information by means of sound, short message, vibration, or image, etc., to remind the target user that the number of chews does not meet the standard.
  • For example, the wearable device 110 may output prompt information as sound through a speaker; as another example, the wearable device 110 may output prompt information as an image or a text message through a display device.
  • As one example, outputting prompt information may include: once the processing module 112 of the wearable device 110 determines that the current chewing count is less than the corresponding minimum, invoking the output module 114 to output prompt information. That is, prompt information is output every time the wearable device 110 detects that the chewing count for one of the target user's mouthfuls is below the corresponding minimum.
  • As another example, outputting prompt information may include: when the processing module 112 of the wearable device 110 determines that the current chewing count is less than the corresponding minimum, counting whether the target user has chewed N consecutive mouthfuls fewer times than the corresponding minimums, and if so, invoking the output module 114 to output prompt information.
  • N is a preset positive integer.
  • As yet another example, outputting prompt information may include: after determining that the current chewing count is less than the corresponding minimum, counting whether the target user has accumulated M mouthfuls whose chewing counts are below the corresponding minimums, and if so, invoking the output module 114 to output prompt information.
  • M is a preset positive integer.
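The cumulative-M variant above differs from the consecutive-N rule in that the sub-minimum mouthfuls need not be adjacent. A sketch, where resetting the counter after a prompt is an assumption:

```python
# Sketch of the cumulative-M prompting rule: a prompt fires once M
# sub-minimum mouthfuls have accumulated, consecutive or not.

class CumulativePrompter:
    def __init__(self, m):
        self.m = m          # preset positive integer M
        self.below = 0      # accumulated sub-minimum mouthfuls

    def record(self, chew_count, minimum):
        """Record one mouthful; return True when a prompt should fire."""
        if chew_count < minimum:
            self.below += 1
        if self.below >= self.m:
            self.below = 0  # assumed: reset after prompting
            return True
        return False
```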
  • After the processing module 112 of the wearable device 110 obtains the target user's current chewing count and current food type, it can store them in the storage module 113 as the target user's dietary data.
  • The processing module 112 may also store the number of swallows and the results of comparing the chewing counts with the corresponding minimums as part of the target user's dietary data.
  • The processing module 112 of the wearable device 110 may, periodically (for example, per meal, per day, per week, or per month) or at the target user's request, analyze the target user's diet according to the recorded dietary data to obtain an analysis result of the target user's eating habits.
  • The eating-habit analysis result can include one or more of the following: the number of eating occasions of the user within a fixed period; the total number of mouthfuls of each food type consumed; the average number of mouthfuls of each food type per meal (total mouthfuls of that type / number of eating occasions); the total number of chews for each food type; the average number of chews per mouthful of each food type (total chews for that type / total mouthfuls of that type); the number of sub-standard mouthfuls of each food type; the proportion of sub-standard mouthfuls of each food type (sub-standard mouthfuls of that type / total mouthfuls of that type); and so on.
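Computing these per-type statistics from per-mouthful diet records can be sketched as follows; the record layout `(food_type, chew_count, minimum)` is an assumed representation, not one defined by the application:

```python
def analyze(records):
    """Aggregate mouthful records into per-food-type habit statistics."""
    stats = {}
    for food_type, chews, minimum in records:
        s = stats.setdefault(food_type,
                             {"mouthfuls": 0, "chews": 0, "below": 0})
        s["mouthfuls"] += 1
        s["chews"] += chews
        s["below"] += chews < minimum   # bool adds as 0 or 1
    for s in stats.values():
        s["avg_chews"] = s["chews"] / s["mouthfuls"]
        s["below_ratio"] = s["below"] / s["mouthfuls"]
    return stats
```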
  • The output module 114 of the wearable device 110 can present the eating-habit analysis result to the target user through voice, text message, image, and so on.
  • Optionally, S285, i.e., user identification, may be included to obtain the identification information of the target user.
  • For example, the wearable device 110 can use the sensing module 111 (camera or microphone) to collect the user's image or voice, and the processing module 112 can then identify the target user from that image or voice to obtain the target user's identification information.
  • Specifically, biometric identification methods such as face recognition, iris recognition, or voiceprint recognition can be used to identify the target user.
  • identification information of the target user may be added to the dietary data of the target user.
  • The identification information of the target user may be the target user's identifier (ID), name, nickname, or the like.
  • When the processing module 112 analyzes the target user's eating habits, i.e., when S270 is executed, it can obtain the target user's dietary data according to the target user's identification information, and then analyze those data to obtain the target user's eating-habit analysis result.
  • In this way, each user's dietary data are tagged with that user's identification information, so multiple users can share the same device for monitoring eating behavior. This reduces the monitoring cost and enables more people to have their eating behavior monitored, which can ultimately lead to healthier diets for more people.
  • the identification information of the target user may be manually input by the user. That is to say, instead of identifying the identity of the target user according to the sensor data to obtain the identification information of the target user, the target user manually inputs his own identification information.
  • Optionally, S290, i.e., user age identification, may also be included to obtain the age of the target user.
  • For example, the sensing module 111 of the wearable device 110 collects images and/or sounds of the target user, and the processing module 112 identifies the target user's age based on those images and/or sounds, thereby obtaining the age of the target user.
  • Alternatively, the age of the target user may be entered manually rather than identified from sensor data; or the age may be obtained, based on the target user's identity information, from the target user's related account information, for example, from the target user's sports account information.
  • both S285 and S290 may be included.
  • the model for the minimum number of times of chewing food may be established according to information input by the user.
  • the user may manually configure the minimum number of chews corresponding to each food type, or may manually configure the minimum number of chews corresponding to each food type at each age.
  • FIG. 5 is another exemplary structural diagram of a system architecture to which the method of monitoring eating behavior of the present application can be applied.
  • the system architecture may include a wearable device 510 and a cloud server 520 .
  • the wearable device 510 may include a sensing module 511 , a processing module 512 , a storage module 513 , an output module 514 and a communication module 515 .
  • for the general functions of the sensing module 511, the processing module 512, the storage module 513 and the output module 514, reference may be made to the sensing module 111, the processing module 112, the storage module 113 and the output module 114 in FIG. 1, which will not be repeated here.
  • the communication module 515 may be a communication interface or a transceiver.
  • the wearable device 510 can communicate with the cloud server 520 through the communication module 515 .
  • the cloud server 520 may include a processing module 521, a storage module 522 and a communication module 523.
  • the processing module 521 may be any processor or chip with computing capability.
  • the storage module 522 may be any memory with storage capability.
  • the communication module 523 may be a communication interface or a transceiver.
  • the cloud server 520 can communicate with the wearable device 510 through the communication module 523.
  • the wearable device 510 and the cloud server 520 cooperate to implement the method of monitoring eating behavior shown in any one of FIGS. 2 to 4; they can cooperate in many different ways to implement the method of monitoring eating behavior of the present application.
  • the model of the minimum chewing times of the food type may be stored in the storage module 522 of the cloud server.
  • the processing module 512 can control the communication module 515 to transmit the sensor data to the cloud server 520.
  • the processing module 521 of the cloud server 520 can detect the target user's current number of chews and the food type of the current food from the sensor data, and obtain the minimum number of chews for the current food type from the food minimum-chew-count model stored in the storage module 522.
  • for the way the processing module 521 of the cloud server 520 detects the current number of chews and the food type and obtains the minimum number of chews, reference may be made to the way the processing module 112 of the wearable device 110 does so as described for FIG. 2, which will not be repeated here.
  • the processing module 521 of the cloud server 520 may send prompt information to the wearable device 510 through the communication module 523 when it determines that the current number of chews is less than the corresponding minimum number of chews.
  • after the communication module 515 of the wearable device 510 receives the prompt information from the cloud server 520, it can output the prompt information through the output module 514.
  • the current food type and the current number of chews may be acquired by the wearable device 510, which then sends the current food type to the cloud server 520 through the communication module 515 and requests the cloud server 520 to obtain, based on the food-type minimum-chew-count model, the minimum number of chews corresponding to the current food type.
  • after the cloud server 520 receives the current food type from the wearable device 510 through the communication module 523, its processing module 521 can determine the corresponding minimum number of chews from the model stored in the storage module 522 and send it to the wearable device 510 through the communication module 523. After receiving the minimum number of chews through the communication module 515, the wearable device 510 may output prompt information based on it and the acquired current number of chews.
  • the wearable device 510 may instead request the food-type minimum-chew-count model itself from the cloud server 520 through the communication module 515, store it in the storage module 513, and then output prompt information based on the model and the acquired current number of chews.
  • after the wearable device 510 finishes monitoring the target user's eating behavior, the model may be deleted from the storage module 513 to save storage space of the wearable device 510.
  • the dietary data of the target user may be stored in the storage module 522 of the cloud server 520 .
  • the processing module 521 of the cloud server 520 periodically analyzes the user's eating habits based on the dietary data stored in the storage module 522, and sends the analysis result to the wearable device 510 through the communication module 523.
  • after the communication module 515 of the wearable device 510 receives the analysis result from the cloud server 520, the analysis result can be output through the output module 514.
  • the wearable device 510 may request the target user's diet data from the cloud server 520, and perform subsequent operations based on the diet data.
  • the processing module 521 of the cloud server 520 may also perform user identification according to the sensor data reported by the wearable device 510 to obtain identification information of the target user.
  • the dietary data stored in the storage module 522 also includes the identification information of the user.
  • as one example, the dietary data that the processing module 521 uses to analyze the target user's dietary habits are retrieved from the storage module 522 based on the target user's identification information; as another example, the wearable device 510 may obtain the target user's identification information and send it to the cloud server 520, so as to obtain the target user's dietary data from the cloud server 520 and perform subsequent operations on that data.
  • the food type chewing times model stored in the storage module 522 of the cloud server 520 may contain the correspondence between the user's age, the food type and the minimum chewing times.
  • the processing module 521 may also perform age identification of the user according to the sensor data reported by the wearable device 510 to obtain the age of the target user.
  • the processing module 521 obtains the minimum number of chews from the food-type chew-count model in the storage module 522 according to the age and the current food type.
  • the wearable device 510 may send the age of the target user and the current food type to the cloud server 520, so as to obtain from the cloud server 520 the minimum number of chews corresponding to that age and food type, and perform subsequent operations according to that minimum number of chews.
  • FIG. 6 is yet another exemplary structural diagram of a system architecture to which the method of monitoring eating behavior of the present application can be applied.
  • the system architecture may include a wearable device 610 and a paired device 620 .
  • Some examples of paired devices 620 are cell phones, tablets, or televisions, among others.
  • the wearable device 610 may include a sensing module 611 , a processing module 612 , a storage module 613 , an output module 614 and a communication module 615 ; the paired device 620 may include a processing module 621 , a storage module 622 and a communication module 623 .
  • for the general functions of the sensing module 611, the processing module 612, the storage module 613, the output module 614 and the communication module 615 of the wearable device 610, reference may be made to the sensing module 511, the processing module 512, the storage module 513, the output module 514 and the communication module 515 in FIG. 5, respectively; for the general functions of the processing module 621, the storage module 622 and the communication module 623 of the paired device 620, reference may be made to the processing module 521, the storage module 522 and the communication module 523 of the cloud server 520 in FIG. 5.
  • the wearable device 610 and the paired device 620 cooperate to implement the method of monitoring eating behavior shown in any one of FIGS. 2 to 4; they can cooperate in many different ways to implement the method of monitoring eating behavior of the present application.
  • for the way the system of this embodiment implements the method of monitoring eating behavior, reference may be made to the way the system shown in FIG. 5 implements it; the difference is that the cloud server 520 in FIG. 5 is replaced by the paired device 620.
  • in addition, when the paired device 620 determines that the current number of chews is less than the minimum number of chews, it can output the prompt information through its own output module; and after the paired device 620 obtains the eating-habit analysis result, it can output that result through its own output module.
  • FIG. 7 is a system architecture of another embodiment of the present application, which may include a wearable device 710, a pairing device 720, and a cloud server 730.
  • the wearable device 710 may include a sensing module 711, a processing module 712, a storage module 713, an output module 714 and a communication module 715;
  • the paired device 720 may include a processing module 721, a storage module 722 and a communication module 723;
  • the cloud server 730 may include a processing module 731, a storage module 732 and a communication module 733.
  • for the general functions of the sensing module 711, the processing module 712, the storage module 713, the output module 714 and the communication module 715, reference may be made to the sensing module 511, the processing module 512, the storage module 513, the output module 514 and the communication module 515 in FIG. 5, respectively, which will not be repeated here.
  • for the general functions of the processing module 721, the storage module 722 and the communication module 723, reference may be made to the processing module 621, the storage module 622 and the communication module 623 of the paired device 620 in FIG. 6; for the general functions of the processing module 731, the storage module 732 and the communication module 733, reference may be made to the processing module 521, the storage module 522 and the communication module 523 in FIG. 5, which will not be repeated here.
  • the method of monitoring eating behavior shown in any one of FIGS. 2-4 may be performed by the system shown in FIG. 7 .
  • part of the steps may be performed by the wearable device 710
  • another part of the steps may be performed by the pairing device 720
  • the remaining part of the steps may be performed by the cloud server 730 .
  • after the sensing module 711 of the wearable device 710 collects the data, the data may be transmitted to the paired device 720 through the communication module 715.
  • the processing module 721 of the paired device 720 performs operations similar to S210, S220 and S230 based on the data received by the communication module 723, and performs operations similar to S240, S250, S260 and S285 based on the food-type chew-count model stored in the storage module 722 or obtained from the cloud server.
  • the dietary data of the target user are sent to the cloud server 730 by the communication module 723 of the paired device 720.
  • after the communication module 733 of the cloud server 730 receives the dietary data of the target user from the paired device 720, they are stored in the storage module 732, and operations similar to S270 and S280 are performed by the processing module 731 of the cloud server 730.
  • alternatively, the wearable device 710 transmits the data collected by the sensors to the paired device 720, the paired device transmits the data onward to the cloud server 730, and the cloud server 730 executes the related operations of the method shown in FIG. 2, FIG. 3 or FIG. 4.
  • FIG. 8 is a schematic structural diagram of an apparatus for monitoring eating behavior according to an embodiment of the present application.
  • the apparatus 800 in this embodiment may include a processing module 810 and an output module 820 . It can be understood that, the apparatus in this embodiment may further include other modules, such as a storage module and/or a communication module.
  • the apparatus shown in FIG. 8 may be a user equipment, a pairing device or a server, or may be a chip that can be applied to the user equipment, the pairing device or the server.
  • the apparatus 800 may be used to perform all or part of the operations in the method shown in any one of FIGS. 2 to 4 , and one or more apparatuses 800 may be used to implement the method shown in any one of FIGS. 2 to 4 .
  • the processing module 810 can be used to perform all or part of the operations in S210, S220, S235, S230, S240, S250, S270, S285 and S290;
  • the output module 820 can be used to perform all or part of the operations in S260 and S280.
  • FIG. 9 is a schematic structural diagram of an apparatus 900 for monitoring eating behavior provided by an embodiment of the present application.
  • the apparatus 900 includes a processor 902 , a communication interface 903 and a memory 904 .
  • the apparatus shown in FIG. 9 may be a user equipment, a paired device or a server, or may be a chip that can be applied in the user equipment, the paired device or the server.
  • the apparatus 900 may be used to perform all or part of the operations in the method shown in any one of FIGS. 2 to 4, and one or more apparatuses 900 may be used to implement the method shown in any one of FIGS. 2 to 4.
  • the processor 902, the memory 904 and the communication interface 903 can communicate through a bus.
  • Executable code is stored in the memory 904, and the processor 902 reads the executable code in the memory 904 to execute the corresponding method.
  • the memory 904 may also include other software modules required for running processes such as an operating system.
  • the operating system can be LINUX TM , UNIX TM , WINDOWS TM and the like.
  • the executable code in the memory 904 is used to implement all or part of the operations in the methods shown in any one of FIGS. 2 to 4 , and the processor 902 reads the executable code in the memory 904 to execute FIGS. 2 to 4 All or part of any of the methods shown.
  • the processor 902 may be a central processing unit (central processing unit, CPU).
  • the memory 904 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD) or a solid state drive (SSD).
  • as shown in FIG. 10, an example computer program product 1000 is provided using a signal bearing medium 1001.
  • the signal bearing medium 1001 may include one or more program instructions 1002 which, when executed by one or more processors, may provide the functions or parts of the functions described above with respect to the methods shown in any of FIGS. 2 to 4.
  • one or more of the features of S210 to S290 may be undertaken by one or more instructions associated with the signal bearing medium 1001 .
  • the signal bearing medium 1001 may include a computer readable medium 1003, such as, but not limited to, a hard drive, a compact disc (CD), a digital video disc (DVD), a digital tape, memory, read-only memory (ROM) or random access memory (RAM), etc.
  • the signal bearing medium 1001 may include a computer recordable medium 1004 such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, and the like.
  • the signal bearing medium 1001 can include a communication medium 1005, such as, but not limited to, digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • signal bearing medium 1001 may be conveyed by a wireless form of communication medium 1005 (eg, a wireless communication medium that conforms to the IEEE 802.11 standard or other transmission protocol).
  • the one or more program instructions 1002 may be, for example, computer-executable instructions or logic-implemented instructions.
  • the aforementioned communication device may be configured to provide various operations, functions, or actions in response to program instructions 1002 communicated to the communication device via one or more of the computer readable medium 1003, the computer recordable medium 1004 and/or the communication medium 1005.
  • the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (e.g., machines, interfaces, functions, sequences, and groups of functions, etc.) can be used instead, and that some elements may be omitted altogether depending on the desired results. Additionally, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and position.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative. The division of the units is only a logical functional division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk.


Abstract

A method and related apparatus for monitoring eating behavior. In the method, after the number of chews for the food the user is currently eating is obtained, whether that chew count is adequate is judged against the minimum chew count corresponding to the food type of the current food, and the user is reminded when it falls short, so that the user's diet can be made healthier. Further, the minimum chew count is the minimum number of chews a person of the user's age needs when eating food of that food type, which makes the user's diet healthier still. Further, the user's eating habits can be analyzed periodically from the user's dietary data. In addition, the dietary data of different users can be distinguished by user identification information, so that multiple users can share the same monitoring device, reducing the monitoring cost.

Description

Method and apparatus for monitoring eating behavior

TECHNICAL FIELD

This application relates to the field of artificial intelligence, and more particularly, to a method and apparatus for monitoring eating behavior.
BACKGROUND

Diet is a basic condition for sustaining human life; to live healthily and happily, with energy and intelligence, it is not enough to eat one's fill — dietary health must also be considered.

At present, more and more people pay attention to dietary health, and products and technologies around dietary health are multiplying. Analyzing the nutrients, calories and so on of food from images, and combining the result with the user's exercise data for dietary health management and diet recommendation, is a hot topic; by contrast, monitoring and managing chewing during eating receives much less attention, even though it has an important influence on human health.

How to monitor eating behavior so that people's eating habits become healthier is therefore a technical problem that urgently needs to be solved.
SUMMARY

This application provides a method and related apparatus for monitoring eating behavior, which can make people's eating habits healthier.
In a first aspect, this application provides a method of monitoring eating behavior. The method includes: obtaining a first chew count with which a target user eats the current food in the current eating behavior; obtaining the food type of the current food; obtaining the minimum chew count required for eating food of that food type; and outputting prompt information when the first chew count is smaller than the minimum chew count.

In the method of this application, when it is determined that the target user's chew count for the current food is smaller than the minimum chew count, that is, when the current chew count is inadequate, prompt information is output to remind the target user to chew more, which can make the target user's eating habits healthier.

Moreover, the minimum chew count in this application is the minimum chew count required for food of the type the target user is currently eating. Compared with using the same minimum chew count for all foods, this determines more accurately whether the target user's current chew count is adequate, so the target user can be reminded more accurately in real time and can therefore eat more healthily.
The method of this application may be executed by a local user device, by a server, by a local user device and a server in cooperation, or by one or more smart chips.

In this application, the user device may be any device with data-processing capability; for example, it may be a wearable device, or a smart terminal that can communicate with a wearable device over a short range. The server may be a device that communicates with the user device over a long distance, for example via Wi-Fi or a communication network.

In this application, a smart terminal that can communicate with a wearable device over a short range is called a paired device. For example, a smartphone, tablet or computer that can communicate with the wearable device over a short range via Bluetooth, a data cable or the like can serve as a paired device.

When the user device is a wearable device, examples of the user device include virtual reality (VR) glasses, augmented reality (AR) glasses, or a headset. When the user device is a smart terminal, examples include a smartphone, tablet, laptop or desktop computer.

The target user is the user whose current eating behavior is being monitored, and the eating behavior currently being monitored is called the target eating behavior. The food eaten by the target user in the target eating behavior is called the target food, the food type of the target food is called the target type, the number of times the target user chews the target food is called the target chew count, and the minimum number of chews a person needs for food of the target type is called the chew-count threshold.

Typically, the chew-count threshold may be preset, for example configured in the smart device in advance according to scientific knowledge of healthy human eating, or it may be set manually by the user.
In some possible implementations, obtaining the minimum chew count required for eating food of the food type includes: obtaining the minimum chew count according to the food type and a pre-stored correspondence, where the correspondence contains the mapping between the food type and the minimum chew count.

That is, the correspondence between food types and minimum chew counts can be stored in advance, so that once the food type of the current food is obtained, the minimum chew count can be read directly from the correspondence.

The pre-stored correspondence may be configured on the smart device, obtained from another device, or generated from information input by the user.

For example, the correspondence may be input directly by the user, generated from minimum chew counts input by the user, or generated from information input by the user such as age, sex or state of health. In this way, a more precise minimum chew count can be obtained for the target user, making the target user's diet healthier.
In some possible implementations, the method further includes obtaining the age of the target user, and obtaining the minimum chew count required for eating food of the food type includes: obtaining the minimum chew count required for a user of that age to eat food of that food type.

That is, the minimum chew count depends not only on the food type but also on the user's age. When determining the minimum chew count for the user's eating behavior, referring to both the food type and the user's age yields a more accurate chew-count threshold, and therefore more accurate prompt information, which is better for the user's dietary health.

In one possible implementation, obtaining the minimum chew count required for a user of that age to eat food of the food type includes: determining the minimum chew count according to the food type, the age and a preset correspondence, where the correspondence contains the mapping among the food type, the minimum chew count and the age.

That is, the correspondence among food type, user age and minimum chew count can be stored in advance, so that once the food type of the current food and the user's age are obtained, the minimum chew count can be read directly from the correspondence.

The pre-stored correspondence may be configured on the smart device, obtained from another device, or generated from information input by the user.

For example, the correspondence may be input directly by the user, generated from minimum chew counts input by the user, or generated from information input by the user such as sex or state of health. In this way, a more precise minimum chew count can be obtained for the target user, making the target user's diet healthier.
In some possible implementations, the method further includes: obtaining multiple pieces of dietary data of the target user, the pieces corresponding one-to-one to multiple eating behaviors of the target user, each piece containing the chew count of the target food eaten in the corresponding eating behavior and the food type of that target food; analyzing the target user's eating behavior according to the multiple pieces of dietary data to obtain an eating-habit analysis result; and outputting the eating-habit analysis result.

In this implementation, the target user's eating habits are analyzed from dietary data recorded over a long period, so that the user can adjust his or her eating behavior according to the analysis result, making the user's diet healthier.

The eating-habit analysis result includes one or more of the following: the number of the target user's meals; the total number of bites of each food class taken in by the target user; the average number of bites of each food class per eating behavior; the total chew count of the target user for each food class; and the number of bites of each food class for which the target user's chew count was smaller than the minimum chew-count threshold for that class.
In some possible implementations, each piece of dietary data further contains the identification information of the target user, and obtaining the multiple pieces of dietary data of the target user includes: obtaining the identification information of the target user; and retrieving from a dietary database the dietary data containing that identification information, to obtain the multiple pieces of dietary data.

Obtaining the identification information of the target user may include identifying the target user to obtain the identification information; for example, recognizing the target user's features from data collected by sensors, and obtaining the identification information from a pre-stored correspondence between user features and user identification information.

Alternatively, the identification information may be obtained from information input by the target user, for example obtained at login after verifying the target user's legitimate right of use.

That is, the user identifier is recorded together with the first dietary data. This allows the first device to retrieve, for each user, that user's own dietary data and analyze that user's personal eating habits; several people can thus share the same first device, so that more people can eat more healthily.
In some possible implementations, the first chew count is the N-th chew count in the current eating behavior that is smaller than its corresponding minimum chew count, where N is a preset positive integer.

That is, the user is prompted only after the chew count has been inadequate N times in total. This takes several bites of the target user into account before prompting, making the prompt more reasonable and the user's diet healthier.

Optionally, the current food and the foods corresponding to the N-1 earlier sub-threshold chew counts are N consecutive bites of food taken by the target user in the current eating behavior.
In a second aspect, this application provides an apparatus for monitoring eating behavior, the apparatus comprising modules for executing the method of the first aspect or any implementation thereof.

In a third aspect, this application provides an apparatus for monitoring eating behavior, the apparatus comprising: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method of the first aspect or any implementation thereof.

Optionally, the apparatus may further include a communication interface or a transceiver, so that the apparatus can exchange with other apparatuses or devices the related information needed to implement the method of the first aspect or any implementation thereof. Optionally, the memory may be used to store such related information. Examples of this related information include data collected by sensors, and, derived from sensor data, the food type, chew count, dietary data, diet analysis results, user identity, user age, and so on.

In a fourth aspect, a computer-readable medium is provided, storing program code for execution by a device, the program code being for executing the method of the first aspect or any implementation thereof.

In a fifth aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the method of the first aspect or any implementation thereof.

In a sixth aspect, a chip is provided, the chip comprising a processor and a data interface, where the processor reads, through the data interface, instructions stored on a memory to execute the method of the first aspect or any implementation thereof.

Optionally, as one implementation, the chip may further include a memory storing instructions, and the processor is configured to execute the instructions stored on the memory; when the instructions are executed, the processor is configured to execute the method of the first aspect or any implementation thereof.

In a seventh aspect, a device is provided, comprising: a memory for storing a program; and a processor for executing the program stored in the memory; when the program stored in the memory is executed, the processor is configured to execute the method of the first aspect or any implementation thereof.

In an eighth aspect, a system is provided, containing a user device and a server, each configured to execute part of the steps of the method of the first aspect or any implementation thereof, so as to implement that method.

In a ninth aspect, a system is provided, containing a first user device and a second user device, each configured to execute part of the steps of the method of the first aspect or any implementation thereof, so as to implement that method. One example of the second user device is a paired device of the first user device.

In a tenth aspect, a system is provided, containing a first user device, a second user device and a server, each configured to execute part of the steps of the method of the first aspect or any implementation thereof, so as to implement that method. One example of the second user device is a paired device of the first user device.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an exemplary structural diagram of a system architecture according to an embodiment of this application;

FIG. 2 is an exemplary flowchart of a method of monitoring eating behavior according to an embodiment of this application;

FIG. 3 is an exemplary flowchart of a method of monitoring eating behavior according to yet another embodiment of this application;

FIG. 4 is an exemplary flowchart of a method of monitoring eating behavior according to another embodiment of this application;

FIG. 5 is an exemplary structural diagram of a system architecture according to yet another embodiment of this application;

FIG. 6 is an exemplary structural diagram of a system architecture according to another embodiment of this application;

FIG. 7 is an exemplary structural diagram of a system architecture according to a further embodiment of this application;

FIG. 8 is an exemplary structural diagram of an apparatus for monitoring eating behavior according to an embodiment of this application;

FIG. 9 is an exemplary structural diagram of an apparatus for monitoring eating behavior according to another embodiment of this application;

FIG. 10 is an exemplary structural diagram of a computer program product according to an embodiment of this application.
DETAILED DESCRIPTION

The technical solutions of this application are described below with reference to the drawings.

FIG. 1 is an exemplary structural diagram of a system architecture to which the method of monitoring eating behavior of this application can be applied. The system architecture shown in FIG. 1 may include a wearable device 110. The wearable device 110 may include a sensing module 111, a processing module 112, a storage module 113 and an output module 114.
Taking VR glasses or AR glasses as an example of the wearable device 110, the sensing module 111 may include at least two electromyographic (EMG) sensors, and sensors such as a camera and/or a microphone.

One EMG sensor is placed on the skin over the masseter muscle in front of the target user's ear to detect chewing actions; the other EMG sensor is placed on the skin over the thyrohyoid muscle of the target user's neck to detect swallowing actions. The camera can capture images of the food the target user is currently eating, and the microphone can capture the sound of the target user chewing food.

Taking a headset as another example of the wearable device 110, the sensing module 111 may include a sound sensor and a camera; the sound sensor can capture sound while the target user eats, and the camera can be mounted near the microphone so that it can photograph the food.
After the sensing module 111 collects the various data produced while the target user eats, it can transmit them to the processing module 112 by wire or wirelessly; that is, the sensing module 111 and the processing module 112 can communicate with each other by wire or wirelessly. In this embodiment, for convenience of description, the various data collected by the sensing module 111 are called sensor data.

The processing module 112 may be any processor or chip with computing capability. After receiving the sensor data from the sensing module 111, the processing module 112 can process them accordingly.

The output module 114 may be a speaker, a loudspeaker box and/or a display device, and can output corresponding information under the control of the processing module 112.

The storage module 113 may be any memory with storage capability.
FIG. 2 is an exemplary flowchart of a method of monitoring eating behavior according to an embodiment of this application. As shown in FIG. 2, the method may include S210 to S270, and may be executed by the wearable device 110 shown in FIG. 1.
S210: chewing-action detection.

In this embodiment, the sensing module 111 of the wearable device 110 can collect behavior data during the target user's current eating behavior, and the target user's chewing actions are detected from the collected behavior data. It can be understood that chewing-action detection here means the detection of every single chewing action.

Taking a sensing module 111 containing EMG sensors as an example: while the target user wears the wearable device 110 and eats, the masseter muscle produces an EMG signal each time the target user chews. The EMG sensor placed on the skin over the target user's masseter detects this signal.

The sensing module 111 can transmit the detected EMG signal to the processing module 112, which performs chewing-action detection based on it.

As one example, each time the EMG sensor detects a masseter EMG event, the processing module 112 can determine that the target user has chewed once, i.e. one chewing action is detected.

It can be understood that detecting chewing actions via EMG sensors is only one example; when the sensing module 111 of the wearable device 110 contains other sensors (for example, image sensors such as a camera, or sound sensors such as a microphone), chewing actions can also be detected based on those sensors. For methods of doing so, reference may be made to related methods in the prior art, which are not repeated here.
S220: swallowing-action detection.

In this embodiment, the sensing module 111 of the wearable device 110 can collect behavior data during the target user's current eating behavior, and the target user's swallowing actions are detected from the collected behavior data.

Taking a sensing module 111 containing EMG sensors as an example: while the target user wears the wearable device 110 and eats, the thyrohyoid muscle produces an EMG signal each time the target user swallows. The EMG sensor placed on the skin over the thyrohyoid detects this signal.

The sensing module 111 can transmit the detected EMG signal to the processing module 112, which performs swallowing-action detection based on it.

As one example, each time the EMG sensor placed on the skin over the thyrohyoid detects a swallowing EMG event, the processing module 112 can determine that the target user has swallowed once.

It can be understood that detecting swallowing actions via EMG sensors is only one example; when the sensing module 111 of the wearable device 110 contains other sensors (for example, image sensors such as a camera, or sound sensors such as a microphone), swallowing actions can also be detected based on those sensors, with reference to related methods in the prior art, which are not repeated here.
S230: chew-count detection.

As one example, each time the processing module 112 detects two swallowing actions, it can take the number of chews between the two swallows as the chew count with which the target user chewed the current food. For convenience of description, this chew count is called the current chew count in this embodiment.
S235: food-type recognition.

In this embodiment, the food type of the food the target user is currently eating can be recognized from the data collected by the sensing module 111 of the wearable device 110. For convenience of description, this food type is called the current food type.

Taking a sensing module 111 containing a camera as an example, the camera can capture images of the food the target user is currently eating.

After receiving the images captured by the camera, the processing module 112 can compare the food image with food images pre-stored in the storage module 113 to recognize the current food type; or the food image can be recognized based on a Markov random field; or the food image can be recognized with a model for food-type recognition, one example of which is a trained convolutional neural network model.

In some implementations, when the sensing module 111 of the wearable device 110 contains a microphone, the sound captured by the microphone can additionally help determine the current food type; for the specific implementation, reference may be made to the prior art, which is not repeated here.
S240: obtaining the minimum chew count required for food of the food type.

For example, a food minimum-chew-count model may be stored in the storage module 113 of the wearable device 110. As one example, the model may contain the correspondence between food types and minimum chew counts, stored in the storage module 113 as a table or as a functional relation. When the correspondence is stored as a functional relation, the independent variables may include the food type, and the dependent variable is the minimum chew count.

The processing module 112 can determine, from the food minimum-chew-count model stored in the storage module 113, the minimum chew count required for food of the current food type.

For example, when the model contains the correspondence between one or more food types and the minimum chew count for each food type, the processing module 112 can determine the minimum chew count for food of the current food type from that correspondence.

One example of the food minimum-chew-count model is shown in Table 1, which, for the case where the food types include solid, semi-liquid and liquid types, gives examples of foods of each type and the minimum number of chews a person needs for each type. The semi-liquid type can also be understood as a semi-solid type.
Table 1: food minimum-chew-count model
(The table is published in the application as an image: Figure PCTCN2020101691-appb-000001.)
S250: judging whether the chew count is smaller than the minimum chew count.

For example, the processing module 112 of the wearable device 110 compares the current chew count with the corresponding minimum chew count, and determines from their values whether the current chew count is smaller than the corresponding minimum chew count.
S260: outputting prompt information when the chew count is smaller than the minimum chew-count threshold.

For example, the output module 114 of the wearable device 110 can output prompt information by sound, short message, vibration, image or the like, to remind the target user that the chew count is inadequate.

As one example, the wearable device 110 can output the prompt information in the form of sound through a speaker; as another example, it can output the prompt information in the form of an image or short message through a display device.

In one implementation, outputting prompt information when the current chew count is smaller than the corresponding minimum chew count may include: as soon as the processing module 112 of the wearable device 110 determines that the current chew count is smaller than the corresponding minimum chew count, it calls the output module 114 to output the prompt information. That is, the wearable device 110 outputs prompt information every time it detects a bite of food whose current chew count is smaller than the corresponding minimum chew count.

In another implementation, it may include: when the processing module 112 determines that the current chew count is smaller than the corresponding minimum chew count, it also checks whether the chew counts of N consecutive bites of the target user have all been smaller than their corresponding minimum chew counts, and if so, calls the output module 114 to output the prompt information, where N is a preset positive integer.

In yet another implementation, it may include: after determining that the current chew count is smaller than the corresponding minimum chew count, the processing module 112 checks whether a cumulative total of M bites of the target user have had chew counts smaller than their corresponding minimum chew counts, and if so, calls the output module 114 to output the prompt information, where M is a preset positive integer.
S270: dietary-data management, obtaining an eating-habit analysis result.

For example, after the processing module 112 of the wearable device 110 obtains the target user's current chew count and current food type, it can store them in the storage module 113 as the target user's dietary data. Optionally, the processing module 112 can also store the swallow count, and the result of comparing the chew count with the corresponding minimum chew count, as part of the target user's dietary data.

In this way, the processing module 112 of the wearable device 110 can, periodically (for example, per meal, day, week or month) or at the target user's request, compile the target user's eating statistics from the target user's dietary data to obtain the target user's eating-habit analysis result.

The eating-habit analysis result may contain one or more of the following items of information: the user's number of meals in a fixed period; the total bites of each food class taken in; the average bites of each class per meal (total bites of the class / number of meals); the total chews for each class; the average chews per bite for each class (total chews of the class / total bites of the class); the number of sub-threshold bites for each class; the sub-threshold ratio for each class (sub-threshold bites of the class / total bites of the class); and so on.
One example of an eating-habit analysis result reads as follows:

"This week you ate solid food in 10 meals: 180 bites in total, 18 bites per meal on average, 324 chews in total, 18 chews per bite on average; 60 bites fell short of the minimum chew count, a shortfall ratio of 33%."
S280: outputting the eating-habit analysis result.

For example, the output module 114 of the wearable device 110 can present the eating-habit analysis result to the target user by sound, short message, image or the like.
In still other embodiments of this application, optionally, as shown in FIG. 3, S285 may also be included: user identification, obtaining the identification information of the target user.

For example, the wearable device 110 can collect user images or user voice through the sensing module 111 (camera or microphone), and the processing module 112 then identifies the target user's identity from the user images or voice, the identity being denoted by identification information. Specifically, biometric recognition methods such as face recognition, iris recognition or voiceprint recognition can be used to identify the target user.

After the processing module 112 identifies the target user, when recording, i.e. storing, the target user's dietary data in the storage module 113, it can add the target user's identification information to the dietary data. The identification information may be the target user's identity (ID), name, nickname or the like.

In this way, when the processing module 112 analyzes the target user's eating habits, that is, when S270 is executed, it can retrieve the target user's dietary data by the target user's identification information and then analyze that data to obtain the target user's eating-habit analysis result.

In this embodiment, marking users' dietary data with user identification information lets multiple users share the same eating-behavior monitoring device, which reduces the monitoring cost, lets more people have their eating behavior monitored, and ultimately lets more people eat more healthily.

Optionally, the identification information of the target user may be input manually. That is, instead of identifying the target user's identity from sensor data to obtain the identification information, the target user enters his or her own identification information.
In still other embodiments of this application, optionally, as shown in FIG. 4, S290 may also be included: user age recognition, obtaining the age of the target user.

For example, the sensing module 111 of the wearable device 110 collects images and/or voice of the target user, and the processing module 112 recognizes the target user's age based on those images and/or voice, thereby obtaining the age of the target user.

In this embodiment, one example of the food minimum-chew-count model is shown in Table 2.
Table 2: food minimum-chew-count model
(The table is published in the application as an image: Figure PCTCN2020101691-appb-000002.)
In this embodiment, different minimum chew counts are set for users of different ages for the same type of food, so the user's eating behavior is monitored more precisely, urging the user toward healthier eating habits and better safeguarding the user's dietary health.
Optionally, the target user's age may be input manually — for example, instead of being recognized from sensor data, the target user may enter his or her own age; alternatively, the age may be obtained, based on the target user's identity information, from the target user's related account information, for example from the target user's sports account information.
It can be understood that some embodiments of this application may include both S285 and S290.

In the embodiments of this application, optionally, the food minimum-chew-count model may be established from information input by the user. For example, the user may manually configure the minimum chew count corresponding to each food type, or manually configure the minimum chew count corresponding to each food type at each age.
FIG. 5 is another exemplary structural diagram of a system architecture to which the method of monitoring eating behavior of this application can be applied. As shown in FIG. 5, the system architecture may include a wearable device 510 and a cloud server 520.
The wearable device 510 may include a sensing module 511, a processing module 512, a storage module 513, an output module 514 and a communication module 515. For the general functions of the sensing module 511, the processing module 512, the storage module 513 and the output module 514, reference may be made to the sensing module 111, the processing module 112, the storage module 113 and the output module 114 in FIG. 1, respectively, which will not be repeated here. The communication module 515 may be a communication interface or a transceiver; through it, the wearable device 510 can communicate with the cloud server 520.

The cloud server 520 may include a processing module 521, a storage module 522 and a communication module 523. The processing module 521 may be any processor or chip with computing capability; the storage module 522 may be any memory with storage capability; the communication module 523 may be a communication interface or a transceiver, through which the cloud server 520 can communicate with the wearable device 510.
In the system architecture shown in FIG. 5, the wearable device 510 no longer executes the method of monitoring eating behavior alone; instead, the wearable device 510 and the cloud server 520 cooperate to implement the method of monitoring eating behavior shown in any one of FIGS. 2 to 4. They can cooperate in many different ways to implement the method of this application.
在一种可能的实现方式中,食物类型最小咀嚼次数模型可以存储在云服务器的存储模块522中。
食物类型最小咀嚼次数模型存储在云服务器的存储模块522中的情况下,作为一个示例,可穿戴设备510的传感模块511采集到数据之后,可以通过处理模块512控制通信模块514将传感器数据传输给云服务器520。云服务器520的通信模块523从可穿戴设备510接收传感器数据之后,云服务器520的处理模块521可以根据该传感器数据检测得到目标用户的当前咀嚼次数和当前食物的食物类型,并根据存储模块522中存储的食物最小咀嚼次数模型获取当前食物类型的最小咀嚼次数。其中,云服务器520的处理模块521检测当前咀嚼次数、食物类型和获取最小咀嚼次数的实现方式,可以参考图2中描述的由可穿戴设备110的处理模块111检测当前咀嚼次数、食物类型和获取最小咀嚼次数的实现方式,此处不再赘述。
云服务器520的处理模块521获取当前咀嚼次数和对应最小咀嚼次数之后,在确定当前咀嚼次数小于对应最小咀嚼次数的情况下,可以通过通信装置523向可穿戴设备510发送提示信息。可穿戴设备510的通信模块515从云服务器520接收提示信息之后,可以通过输出模块514输出提示信息。
食物类型最小咀嚼次数模型存储在云服务器的存储模块522中的情况下,作为另一个示例,可以由可穿戴设备510获取当前食物类型和当前咀嚼次数,然后,可穿戴设备510可以通过通信模块515向云服务器520发送当前食物类型,请求云服务器520基于食物类型最小咀嚼次数模型获取当前食物类型对应的最小咀嚼次数。云服务器520通过通信模块523从可穿戴设备510接收到当前食物类型之后,其处理模块521可以基于存储模块522中存储的食物类型最小咀嚼次数模型确定当前食物类型对应的最小咀嚼次数,向通过通信模块523向可穿戴设备510发送该最小咀嚼次数。可穿戴设备510通过通信模块515从云服务器520接收该最小咀嚼次数之后,可以基于该最小咀嚼次数和获取的当前咀嚼次数输出提示信息。
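The second cooperation mode, in which the wearable sends the detected food type and the server answers with the minimum chew count, can be sketched as follows. The network round trip is abstracted to a plain function call, and all names and threshold values are illustrative:

```python
# Illustrative server-side food-type minimum-chew-count model.
SERVER_MODEL = {"solid": 18, "liquid": 0}

def server_lookup(food_type):
    """Server side: resolve a food type to its minimum chew count."""
    return SERVER_MODEL[food_type]

def wearable_check(food_type, current_chews, lookup=server_lookup):
    """Wearable side: query the server for the threshold, then decide
    locally whether a prompt must be output."""
    required = lookup(food_type)  # a network round trip in practice
    if current_chews < required:
        return f"Please chew at least {required} times."
    return None  # no prompt needed
```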
With the model stored in the storage module 522 of the cloud server, as yet another example, after obtaining the current food type and the current chew count, the wearable device 510 may request the food-type minimum-chew-count model from the cloud server 520 through the communication module 515, store the model in the storage module 513, and then output prompt information based on that model and the obtained current chew count. Optionally, after the wearable device 510 has finished monitoring the target user's eating behavior, the model may be deleted from the storage module 513 to save storage space on the wearable device 510.
In this implementation, further, the target user's diet data may be stored in the storage module 522 of the cloud server 520. With the diet data stored there, as one example, the processing module 521 of the cloud server 520 periodically analyzes the user's eating habits based on the diet data stored in the storage module 522, and sends the analysis result to the wearable device 510 through the communication module 523. After the communication module 515 of the wearable device 510 receives the analysis result from the cloud server 520, the output module 514 may output it.
With the target user's diet data stored in the storage module 522 of the cloud server 520, as another example, the wearable device 510 may request the target user's diet data from the cloud server 520 and perform the subsequent operations based on that data.
In this implementation, optionally, the processing module 521 of the cloud server 520 may also perform user identification on the sensor data reported by the wearable device 510 to obtain the identification information of the target user. In this case, the diet data stored in the storage module 522 also contains the user's identification information. As one example, the diet data used by the processing module 521 to analyze the target user's eating habits is retrieved from the storage module 522 according to the target user's identification information. As another example, the wearable device 510 may obtain the target user's identification information and send it to the cloud server 520, so as to retrieve the target user's diet data from the cloud server 520 and perform the subsequent operations based on that data.
In this implementation, optionally, the food-type chew-count model stored in the storage module 522 of the cloud server 520 may contain a correspondence among user age, food type, and minimum chew count. In this case, as one example, the processing module 521 may also perform age recognition on the sensor data reported by the wearable device 510 to obtain the age of the target user; accordingly, the processing module 521 obtains the minimum chew count from the model in the storage module 522 according to that age and the current food type. As another example, the wearable device 510 may send the target user's age and the current food type to the cloud server 520, so as to obtain from the cloud server 520 the minimum chew count corresponding to that age and food type, and perform the subsequent operations based on it.
FIG. 6 is yet another exemplary structural diagram of a system architecture to which the eating-behavior monitoring method of the present application can be applied. As shown in FIG. 6, the system architecture may include a wearable device 610 and a paired device 620. Examples of the paired device 620 are a mobile phone, a tablet computer, or a television.
The wearable device 610 may include a sensing module 611, a processing module 612, a storage module 613, an output module 614, and a communication module 615; the paired device 620 may include a processing module 621, a storage module 622, and a communication module 623.
For the sensing module 611, the processing module 612, the storage module 613, the output module 614, and the communication module 615 of the wearable device 610, reference may be made to the sensing module 511, the processing module 512, the storage module 513, the output module 514, and the communication module 515 in FIG. 5, respectively; for the processing module 621, the storage module 622, and the communication module 623 of the paired device 620, reference may be made to the processing module 521, the storage module 522, and the communication module 523 of the cloud server 520 in FIG. 5, respectively. Details are not repeated here.
In the system architecture shown in FIG. 6, the method of monitoring eating behavior is no longer executed by the wearable device 610 alone; instead, the wearable device 610 and the paired device 620 cooperate to implement the method shown in any one of FIG. 2 to FIG. 4. The wearable device 610 and the paired device 620 may cooperate in a number of different ways to implement the eating-behavior monitoring method of the present application.
For the way the system of this embodiment implements the eating-behavior monitoring method, reference may be made to the way the system shown in FIG. 5 implements it, with the cloud server 520 in FIG. 5 replaced by the paired device 620.
A further difference is that, when the paired device 620 determines that the current chew count is smaller than the minimum chew count, it may output the prompt information through its own output module; likewise, after obtaining the eating habit analysis result, the paired device 620 may output that result through its own output module.
FIG. 7 shows a system architecture of another embodiment of the present application, which may include a wearable device 710, a paired device 720, and a cloud server 730.
The wearable device 710 may include a sensing module 711, a processing module 712, a storage module 713, an output module 714, and a communication module 715; the paired device 720 may include a processing module 721, a storage module 722, and a communication module 723; the cloud server 730 may include a processing module 731, a storage module 732, and a communication module 733.
For the general functions of the sensing module 711, the processing module 712, the storage module 713, the output module 714, and the communication module 715, reference may be made to the sensing module 511, the processing module 512, the storage module 513, the output module 514, and the communication module 515 in FIG. 5, respectively; details are not repeated here.
For the general functions of the processing module 721, the storage module 722, and the communication module 723, reference may be made to the processing module 621, the storage module 622, and the communication module 623 of the paired device 620 in FIG. 6; for the general functions of the processing module 731, the storage module 732, and the communication module 733, reference may be made to the processing module 521, the storage module 522, and the communication module 523 in FIG. 5. Details are not repeated here.
The eating-behavior monitoring method shown in any one of FIG. 2 to FIG. 4 may be performed by the system shown in FIG. 7, with some steps performed by the wearable device 710, some steps performed by the paired device 720, and the remaining steps performed by the cloud server 730.
For example, after the sensing module 711 of the wearable device 710 has collected data, under the control of the processing module 712, the collected data may be transmitted to the paired device 720 through the communication module 715. The processing module 721 of the paired device 720 performs operations similar to S210, S220, and S230 on the data received by the communication module 723, and performs operations similar to S240, S250, S260, and S285 based on the food-type chew-count model stored in the storage module 722 or obtained from the cloud server. The target user's diet data is sent by the communication module 723 of the paired device 720 to the cloud server 730. After the communication module 733 of the cloud server 730 receives the target user's diet data from the paired device 720, the data is stored in the storage module 732, and the processing module 731 of the cloud server 730 performs operations similar to S270 and S280.
As another example, after the wearable device 710 transmits the sensor data to the paired device 720, the paired device may forward the data to the cloud server 730, which then performs the relevant operations of the method shown in FIG. 2, FIG. 3, or FIG. 4.
FIG. 8 is a schematic structural diagram of an apparatus for monitoring eating behavior according to an embodiment of the present application. As shown in FIG. 8, the apparatus 800 of this embodiment may include a processing module 810 and an output module 820. It can be understood that the apparatus of this embodiment may further include other modules, such as a storage module and/or a communication module.
The apparatus shown in FIG. 8 may be a user device, a paired device, or a server, or may be a chip applicable to a user device, a paired device, or a server.
In one example, the apparatus 800 may be used to perform all or some of the operations of the method shown in any one of FIG. 2 to FIG. 4, and one or more apparatuses 800 may be used to implement that method. For example, the processing module 810 may be used to perform all or some of S210, S220, S230, S235, S240, S250, S270, S285, and S290; the output module 820 may be used to perform all or some of S260 and S280.
FIG. 9 is a schematic structural diagram of an apparatus 900 for monitoring eating behavior according to an embodiment of the present application. The apparatus 900 includes a processor 902, a communication interface 903, and a memory 904. The apparatus shown in FIG. 9 may be a user device, a paired device, or a server, or may be a chip applicable to a user device, a paired device, or a server.
In one example, the apparatus 900 may be used to perform all or some of the operations of the method shown in any one of FIG. 2 to FIG. 4, and one or more apparatuses 900 may be used to implement that method.
The processor 902, the memory 904, and the communication interface 903 may communicate with one another via a bus. The memory 904 stores executable code, which the processor 902 reads in order to perform the corresponding method. The memory 904 may also contain an operating system and other software modules required by running processes. The operating system may be LINUX™, UNIX™, WINDOWS™, or the like.
For example, the executable code in the memory 904 is used to implement all or some of the operations of the method shown in any one of FIG. 2 to FIG. 4, and the processor 902 reads that executable code to perform those operations.
The processor 902 may be a central processing unit (CPU). The memory 904 may include volatile memory, such as random access memory (RAM). The memory 904 may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state disk (SSD).
In some embodiments of the present application, the disclosed methods may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or on another non-transitory medium or article of manufacture. FIG. 10 schematically shows a conceptual partial view of an example computer program product arranged in accordance with at least some of the embodiments presented herein; the example computer program product includes a computer program for executing a computer process on a communication device. In one embodiment, the example computer program product 1000 is provided using a signal bearing medium 1001. The signal bearing medium 1001 may include one or more program instructions 1002 that, when run by one or more processors, may provide the functions, or some of the functions, described above for the method shown in any one of FIG. 2 to FIG. 4. For example, referring to the embodiment shown in FIG. 4, one or more features of S210 to S290 may be undertaken by one or more instructions associated with the signal bearing medium 1001.
In some examples, the signal bearing medium 1001 may comprise a computer-readable medium 1003, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM). In some implementations, the signal bearing medium 1001 may comprise a computer-recordable medium 1004, such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and so on. In some implementations, the signal bearing medium 1001 may comprise a communication medium 1005, such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber-optic cable, a waveguide, a wired communication link, a wireless communication link, and so on). Thus, for example, the signal bearing medium 1001 may be conveyed by a wireless form of the communication medium 1005 (e.g., a wireless communication medium conforming to the IEEE 802.11 standard or another transmission protocol). The one or more program instructions 1002 may be, for example, computer-executable instructions or logic-implementing instructions. In some examples, the aforementioned communication device may be configured to provide various operations, functions, or actions in response to the program instructions 1002 conveyed to it by one or more of the computer-readable medium 1003, the computer-recordable medium 1004, and/or the communication medium 1005. It should be understood that the arrangements described here are for illustrative purposes only. Those skilled in the art will therefore appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, groups of functions, and so on) can be used instead, and that some elements may be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in any suitable combination and location in conjunction with other components.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; in actual implementation there may be other ways of division, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing is merely a specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceived by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (26)

  1. A method for monitoring eating behavior, comprising:
    obtaining a first chew count with which a target user eats a bite of current food in a current eating activity;
    obtaining a food type of the current food;
    obtaining a minimum chew count required for eating food of the food type; and
    outputting prompt information when the first chew count is smaller than the minimum chew count.
  2. The method according to claim 1, wherein the obtaining a minimum chew count required for eating food of the food type comprises:
    obtaining the minimum chew count according to the food type and a pre-stored correspondence, wherein the correspondence comprises a correspondence between the food type and the minimum chew count.
  3. The method according to claim 1, wherein the method further comprises:
    obtaining an age of the target user;
    wherein the obtaining a minimum chew count required for eating food of the food type comprises:
    obtaining the minimum chew count required for a user of the age to eat food of the food type.
  4. The method according to claim 3, wherein the obtaining the minimum chew count required for a user of the age to eat food of the food type comprises:
    determining the minimum chew count according to the food type, the age, and a preset correspondence, wherein the correspondence comprises a correspondence among the food type, the minimum chew count, and the age.
  5. The method according to claim 2 or 4, wherein the correspondence is generated according to information entered by a user.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    obtaining a plurality of diet data records of the target user, the plurality of diet data records being in one-to-one correspondence with a plurality of eating activities of the target user, each of the plurality of diet data records comprising the chew counts with which the target user ate target food in the corresponding eating activity and the food type of the target food;
    analyzing the eating behavior of the target user according to the plurality of diet data records to obtain an eating habit analysis result; and
    outputting the eating habit analysis result.
  7. The method according to claim 6, wherein the eating habit analysis result comprises one or more of the following:
    the number of meals of the target user; the total number of bites of each food type ingested by the target user; the average number of bites of each food type eaten by the target user per eating activity; the total number of chews with which the target user ate each food type; and the number of bites of each food type for which the target user's chew count was smaller than the minimum chew count threshold corresponding to that food type.
  8. The method according to claim 6 or 7, wherein each of the diet data records further comprises identification information of the target user;
    wherein the obtaining a plurality of diet data records of the target user comprises:
    obtaining the identification information of the target user; and
    obtaining, from a diet database, the diet data records that contain the identification information, to obtain the plurality of diet data records.
  9. The method according to any one of claims 1 to 8, wherein the first chew count is the N-th chew count of the target user in the current eating activity that is smaller than its corresponding minimum chew count, N being a preset positive integer.
  10. The method according to claim 9, wherein the current food, together with the foods corresponding to the N-1 chew counts before the first chew count that are smaller than their corresponding minimum chew counts, constitutes N consecutive bites of food eaten by the target user in the current eating activity.
  11. An apparatus for monitoring eating behavior, comprising:
    a processing module, configured to obtain a first chew count with which a target user eats a bite of current food in a current eating activity;
    the processing module being further configured to obtain a food type of the current food;
    the processing module being further configured to obtain a minimum chew count required for eating food of the food type; and
    an output module, configured to output prompt information when the first chew count is smaller than the minimum chew count.
  12. The apparatus according to claim 11, wherein the processing module is specifically configured to:
    obtain the minimum chew count according to the food type and a pre-stored correspondence, wherein the correspondence comprises a correspondence between the food type and the minimum chew count.
  13. The apparatus according to claim 11, wherein the processing module is specifically configured to:
    obtain an age of the target user; and
    obtain the minimum chew count required for a user of the age to eat food of the food type.
  14. The apparatus according to claim 13, wherein the processing module is specifically configured to:
    determine the minimum chew count according to the food type, the age, and a preset correspondence, wherein the correspondence comprises a correspondence among the food type, the minimum chew count, and the age.
  15. The apparatus according to claim 12 or 14, wherein the correspondence is generated according to information entered by a user.
  16. The apparatus according to any one of claims 11 to 15, wherein the processing module is further configured to:
    obtain a plurality of diet data records of the target user, the plurality of diet data records being in one-to-one correspondence with a plurality of eating activities of the target user, each of the plurality of diet data records comprising the chew counts with which the target user ate target food in the corresponding eating activity and the food type of the target food; and
    analyze the eating behavior of the target user according to the plurality of diet data records to obtain an eating habit analysis result;
    and wherein the output module is further configured to output the eating habit analysis result.
  17. The apparatus according to claim 16, wherein the eating habit analysis result comprises one or more of the following:
    the number of meals of the target user; the total number of bites of each food type ingested by the target user; the average number of bites of each food type eaten by the target user per eating activity; the total number of chews with which the target user ate each food type; and the number of bites of each food type for which the target user's chew count was smaller than the minimum chew count threshold corresponding to that food type.
  18. The apparatus according to claim 16 or 17, wherein each of the diet data records further comprises identification information of the target user;
    wherein the processing module is specifically configured to:
    obtain the identification information of the target user; and
    obtain, from a diet database, the diet data records that contain the identification information, to obtain the plurality of diet data records.
  19. The apparatus according to any one of claims 11 to 18, wherein the first chew count is the N-th chew count of the target user in the current eating activity that is smaller than its corresponding minimum chew count, N being a preset positive integer.
  20. The apparatus according to claim 19, wherein the current food, together with the foods corresponding to the N-1 chew counts before the first chew count that are smaller than their corresponding minimum chew counts, constitutes N consecutive bites of food eaten by the target user in the current eating activity.
  21. An apparatus for monitoring eating behavior, comprising a processor, the processor being coupled to a memory;
    the memory being configured to store instructions; and
    the processor being configured to execute the instructions stored in the memory, so that the apparatus performs the method according to any one of claims 1 to 10.
  22. A computer-readable medium, comprising instructions that, when run on a processor, cause the processor to perform the method according to any one of claims 1 to 10.
  23. A computer program product that, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 10.
  24. A system for monitoring eating behavior, the system comprising a user device and a server, the server being configured to perform some of the steps of the method according to any one of claims 1 to 10, and the user device being configured to perform the other steps of the method.
  25. A system for monitoring eating behavior, the system comprising a first user device and a second user device, the first user device being configured to perform some of the steps of the method according to any one of claims 1 to 10, and the second user device being configured to perform the other steps of the method.
  26. A system for monitoring eating behavior, the system comprising a first user device, a second user device, and a server, the first user device being configured to perform a first part of the steps of the method according to any one of claims 1 to 10, the second user device being configured to perform a second part of the steps of the method, and the server being configured to perform a third part of the steps of the method.
PCT/CN2020/101691 2020-07-13 2020-07-13 Method and apparatus for monitoring eating behavior WO2022011509A1

Priority Applications (2)

PCT/CN2020/101691, filed 2020-07-13: Method and apparatus for monitoring eating behavior (WO2022011509A1)
CN202080005309.3A, filed 2020-07-13: Method and apparatus for monitoring eating behavior (CN114190074A)

Publication

WO2022011509A1, published 2022-01-20



Also published as: CN114190074A, 2022-03-15


Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document: 20945429; kind code: A1.
NENP: Non-entry into the national phase. Ref country code: DE.
122 (EP): PCT application non-entry in European phase. Ref document: 20945429; kind code: A1.