CN112418025A - Weight detection method and device based on deep learning - Google Patents

Info

Publication number
CN112418025A
CN112418025A (application CN202011249528.4A)
Authority
CN
China
Prior art keywords
target person
information
weight
face image
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011249528.4A
Other languages
Chinese (zh)
Inventor
傅峰峰
江志强
王培彬
Current Assignee
Guangzhou Fugang Life Intelligent Technology Co Ltd
Original Assignee
Guangzhou Fugang Wanjia Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Fugang Wanjia Intelligent Technology Co Ltd filed Critical Guangzhou Fugang Wanjia Intelligent Technology Co Ltd
Priority to CN202011249528.4A priority Critical patent/CN112418025A/en
Publication of CN112418025A publication Critical patent/CN112418025A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H 20/60 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to nutrition control, e.g. diets

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Nutrition Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a weight detection method and device based on deep learning, wherein the method comprises: determining information of a target person, the information comprising a face image of the target person and height information of the target person; and inputting the information of the target person into a determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person. In this way, once the user's face image and height are determined, they are automatically input into the predetermined weight analysis model for analysis, realizing intelligent detection of the user's weight. An accurate weight can be obtained quickly, which facilitates further operations based on the weight, such as recommending matched recipes, sports activities and the like. Moreover, the weight can be detected without a dedicated weight scale, which saves cost and allows the weight to be detected at any time and place, improving the convenience of weight detection.

Description

Weight detection method and device based on deep learning
Technical Field
The invention relates to the field of intelligent technology, and in particular to a weight detection method and device based on deep learning.
Background
With the development of society and the improvement of living standards, people pay ever more attention to their health. Body weight is one indicator of health: for example, a sudden change in a user's weight may signal an existing or imminent health problem. Weight measuring instruments have therefore become indispensable in daily life and production, with wide application in medical care, school physical examinations, household use and the like.
In practice, weight is generally detected in one of two ways: first, with a traditional mechanical weight scale; second, with an electronic weight scale. In both methods, a person stands on the instrument, which converts the applied pressure into a corresponding weight. Practice shows, however, that both methods share several drawbacks: the instruments are inconvenient to carry, serve a single function, require the person to stand on them, and become increasingly prone to malfunction, and thus to inaccurate readings, as they age. How to provide a scheme for intelligently detecting a user's weight is therefore an important problem.
Disclosure of Invention
The invention aims to provide a weight detection method and device based on deep learning, which can intelligently detect the weight of a user.
In order to solve the technical problem, a first aspect of the embodiments of the present invention discloses a weight detection method based on deep learning, including:
determining information of a target person, wherein the information of the target person comprises a face image of the target person and height information of the target person;
and inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person.
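Outside the claim language, the two steps above can be sketched as a toy program. The patent does not disclose the architecture of the weight analysis model; the linear map and all coefficients below are invented purely to illustrate the input/output contract.

```python
import numpy as np

def weight_analysis_model(face_features: np.ndarray, height_cm: float) -> float:
    """Stand-in for the trained weight analysis model. The coefficients
    are invented; only the (face features, height) -> weight shape of the
    call matters here."""
    face_score = float(face_features.mean())            # crude fullness proxy
    return 0.9 * (height_cm - 100.0) + 5.0 * face_score

def detect_weight(face_image: np.ndarray, height_cm: float) -> float:
    # Step 1: determine the information of the target person
    # (face image normalized to [0, 1], plus height information).
    face_features = face_image.astype(np.float32) / 255.0
    # Step 2: input the information into the weight analysis model and
    # take the first analysis result as the weight information.
    return weight_analysis_model(face_features, height_cm)

# A 170 cm target person with a mid-gray 64x64 face crop:
estimate = detect_weight(np.full((64, 64), 128, dtype=np.uint8), 170.0)
```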
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining information of the target person includes:
inputting the acquired image of the target person into the determined image segmentation model for analysis to obtain a second analysis result, and determining the human body image of the target person and the face image of the target person according to the second analysis result;
calculating a height pixel value of the human body image of the target person in the image of the target person based on the top end position of the human body image of the target person and the bottom end position of the image of the target person;
and acquiring the height matched with the height pixel value according to the determined actual height of the unit represented by the unit pixel, and taking the height as the height information of the target person.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the determining the information of the target person, and before the inputting the information of the target person into the determined weight analysis model for analysis, and obtaining a first analysis result, the method further includes:
judging whether the face image of the target person matches the determined standardized face image, and, when it is judged to match, triggering execution of the operation of inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result;
when it is judged not to match, correcting the face image of the target person based on the determined correction manner so that the corrected face image of the target person matches the standardized face image, and updating the information of the target person based on the corrected face image of the target person to obtain updated information of the target person;
wherein, the inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result comprises:
and inputting the updated information of the target person into the determined weight analysis model for analysis to obtain a first analysis result.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the correcting the face image of the target person based on the determined correction manner so that the corrected face image of the target person matches the standardized face image includes:
acquiring coordinates of key features in the face image of the target person according to the face image of the target person, wherein the coordinates of the key features in the face image of the target person are used for correcting the face image of the target person;
and performing transformation processing on the coordinates of the key features in the face image of the target person by taking the coordinates of the key features in the acquired standardized face image as a reference until the coordinate difference value between the coordinates of the key features in the face image of the target person after transformation and the coordinates of the key features in the standardized face image is less than or equal to the determined coordinate difference value threshold.
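A minimal sketch of this correction step, under the assumption that "transformation processing" is a least-squares 2-D affine fit of the detected key-feature coordinates onto the standardized reference coordinates; the keypoint positions and the threshold below are illustrative.

```python
import numpy as np

def align_keypoints(detected: np.ndarray, reference: np.ndarray, tol: float = 1.0):
    """Fit a 2-D affine transform mapping detected key-feature coordinates
    onto the standardized reference, then check the residual against the
    coordinate-difference threshold."""
    n = len(detected)
    X = np.hstack([detected, np.ones((n, 1))])        # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(X, reference, rcond=None)
    transformed = X @ params                          # corrected coordinates
    max_diff = np.abs(transformed - reference).max()
    return transformed, max_diff <= tol

# Illustrative key features (two eyes, nose tip, two mouth corners) of a
# standardized face, and the same features on a rotated, shifted face:
ref = np.array([[30., 30.], [70., 30.], [50., 50.], [35., 70.], [65., 70.]])
theta = np.deg2rad(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
detected = ref @ rot.T + np.array([3.0, -2.0])
aligned, matched = align_keypoints(detected, ref)
```

Because the simulated distortion is itself affine, the fit recovers it exactly and the residual falls below the threshold.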
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
acquiring identification information of an article worn by the target person, and determining height information of the article worn by the target person according to the identification information of the article worn by the target person, wherein the article worn by the target person comprises a shoe worn by the target person and/or a hat worn by the target person;
and after determining the information of the target person, the method further comprises:
and calculating height difference information between the height information of the target person and the height information of the article worn by the target person, and updating the height information of the target person into the height difference information.
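The worn-article correction above is plain subtraction; the article identifications and added heights in this sketch are invented for illustration.

```python
# Hypothetical mapping from a recognized worn-article identification to the
# height it adds, in millimetres (values invented for illustration).
ARTICLE_HEIGHT_MM = {"sneaker": 25, "high_heel": 80, "cap": 15}

def corrected_height_mm(measured_height_mm: int, worn_articles: list) -> int:
    """Subtract the heights of recognized worn articles (shoes and/or a
    hat) from the measured height, per the height-difference step."""
    extra = sum(ARTICLE_HEIGHT_MM.get(article, 0) for article in worn_articles)
    return measured_height_mm - extra

true_height = corrected_height_mm(1700, ["high_heel", "cap"])   # 1700 - 95
```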
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
acquiring sample data of a plurality of sample persons, wherein the sample data of each sample person comprises a face image of the sample person, height information of the sample person and weight information of the sample person;
training a predetermined analysis model based on sample data of all the sample personnel to obtain the trained analysis model, and determining the trained analysis model as the determined weight analysis model;
wherein the sample data of all the sample persons is standardized sample data.
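The training step can be sketched with a simple stand-in model. The patent leaves the analysis model unspecified, so ordinary least squares on synthetic, standardized sample data (one pooled face feature plus height per sample person) is used here; every number below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standardized synthetic sample data: a pooled face feature per sample
# person, a height in cm, and a weight following a made-up linear rule.
n = 200
face_feat = rng.normal(0.0, 1.0, n)
height_cm = rng.normal(170.0, 8.0, n)
weight_kg = 0.9 * (height_cm - 100.0) + 4.0 * face_feat + rng.normal(0.0, 1.0, n)

# "Train the predetermined analysis model" -- here ordinary least squares
# stands in for whatever deep model would be used in practice.
X = np.column_stack([face_feat, height_cm, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, weight_kg, rcond=None)

def trained_weight_model(face_feature: float, height: float) -> float:
    """The 'determined weight analysis model' after training."""
    return float(coef @ [face_feature, height, 1.0])

pred = trained_weight_model(0.0, 170.0)
```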
As an alternative implementation, in the first aspect of the embodiments of the present invention, the method is applied to an intelligent cabinet, the intelligent cabinet is used for cooking food materials, and the intelligent cabinet has a corresponding recipe database;
and after determining the information of the target person, the method further comprises:
analyzing the face image of the target person included in the information of the target person to obtain attribute information of the target person, wherein the attribute information of the target person includes at least one of the gender of the target person, the age of the target person and the complexion of the target person;
and after determining the first analysis result as the weight information of the target person, the method further comprises:
and inquiring a recipe matched with the target information from the recipe database according to the target information of the target person, and outputting the recipe matched with the target information to the target person, wherein the target information of the target person comprises attribute information of the target person, height information of the target person and weight information of the target person.
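A toy sketch of the recipe query. The patent only says recipes are matched against the target information (attributes, height, weight); the BMI banding and the recipe entries below are invented matching rules, not taken from the disclosure.

```python
def bmi(height_cm: float, weight_kg: float) -> float:
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

# Hypothetical recipe database keyed by a coarse BMI band.
RECIPE_DATABASE = {
    "under": ["braised pork rice", "egg fried rice"],
    "normal": ["steamed fish with seasonal greens"],
    "over": ["chicken salad", "clear vegetable soup"],
}

def query_recipes(gender: str, age: int, height_cm: float, weight_kg: float):
    """Return recipes matched to the target information; the matching rule
    (BMI bands) is an illustrative assumption."""
    b = bmi(height_cm, weight_kg)
    band = "under" if b < 18.5 else ("normal" if b < 24.0 else "over")
    return RECIPE_DATABASE[band]

recipes = query_recipes("female", 30, 170.0, 85.0)
```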
The second aspect of the embodiment of the invention discloses a weight detection device based on deep learning, which comprises:
the determining module is used for determining information of a target person, wherein the information of the target person comprises a face image of the target person and height information of the target person;
the analysis module is used for inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result;
and the acquisition module is used for determining the first analysis result as the weight information of the target person.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the manner of determining the information of the target person by the determining module is specifically:
inputting the acquired image of the target person into the determined image segmentation model for analysis to obtain a second analysis result, and determining the human body image of the target person and the face image of the target person according to the second analysis result;
calculating a height pixel value of the human body image of the target person in the image of the target person based on the top end position of the human body image of the target person and the bottom end position of the image of the target person;
and acquiring the height matched with the height pixel value according to the determined actual height of the unit represented by the unit pixel, and taking the height as the height information of the target person.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
the judging module is used for judging whether the face image of the target person is matched with the determined standardized face image or not after the information of the target person is determined by the determining module and before the information of the target person is input into the determined weight analysis model by the analyzing module for analysis to obtain a first analysis result, and triggering the analyzing module to execute the operation of inputting the information of the target person into the determined weight analysis model for analysis to obtain the first analysis result when the matching is judged;
the correction module is used for correcting the face image of the target person based on the determined correction manner when the judging module judges that the face image does not match, so that the corrected face image of the target person matches the standardized face image;
the first updating module is used for updating the information of the target person based on the corrected face image of the target person to obtain the updated information of the target person;
the analysis module inputs the information of the target person into the determined weight analysis model for analysis, and the mode of obtaining a first analysis result specifically comprises the following steps:
and inputting the updated information of the target person into the determined weight analysis model for analysis to obtain a first analysis result.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the manner in which the correction module corrects the face image of the target person based on the determined correction manner, so that the corrected face image of the target person matches the standardized face image, is specifically:
acquiring coordinates of key features in the face image of the target person according to the face image of the target person, wherein the coordinates of the key features in the face image of the target person are used for correcting the face image of the target person;
and performing transformation processing on the coordinates of the key features in the face image of the target person by taking the coordinates of the key features in the acquired standardized face image as a reference until the coordinate difference value between the coordinates of the key features in the face image of the target person after transformation and the coordinates of the key features in the standardized face image is less than or equal to the determined coordinate difference value threshold.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the obtaining module is further configured to obtain identification information of an article worn by the target person, and determine height information of the article worn by the target person according to the identification information of the article worn by the target person, where the article worn by the target person includes a shoe worn by the target person and/or a hat worn by the target person;
and, the apparatus further comprises:
the calculating module is used for calculating height difference information between the height information of the target person and the height information of the object worn by the target person after the determining module determines the information of the target person;
and the second updating module is used for updating the height information of the target person into the height difference information.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the obtaining module is further configured to obtain sample data of a plurality of sample persons, where the sample data of each sample person includes a face image of the sample person, height information of the sample person, and weight information of the sample person;
and, the apparatus further comprises:
the training module is used for training a predetermined analysis model based on the sample data of all the sample personnel to obtain the trained analysis model;
the determining module is further configured to determine that the trained analysis model is the determined weight analysis model;
wherein the sample data of all the sample persons is standardized sample data.
As an alternative implementation, in the second aspect of the embodiment of the present invention, the apparatus is applied to an intelligent cabinet, the intelligent cabinet is used for cooking food materials, and the intelligent cabinet has a corresponding recipe database;
the analysis module is further configured to, after the determination module determines the information of the target person, analyze a face image of the target person included in the information of the target person to obtain attribute information of the target person, where the attribute information of the target person includes at least one of a gender of the target person, an age of the target person, and a complexion of the target person;
and, the apparatus further comprises:
the query module is used for querying a recipe matched with the target information from the recipe database according to the target information of the target person after the acquisition module determines that the first analysis result is used as the weight information of the target person, wherein the target information of the target person comprises attribute information of the target person, height information of the target person and weight information of the target person;
and the output module is used for outputting the recipes matched with the target information to the target personnel.
The invention discloses another weight detection device based on deep learning in a third aspect, which comprises:
a memory storing executable program code;
a processor coupled to the memory;
the processor calls the executable program code stored in the memory to execute the weight detection method based on deep learning disclosed in the first aspect of the invention.
The invention further discloses a computer storage medium storing computer instructions which, when called, are used to execute the weight detection method based on deep learning disclosed in the first aspect of the invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a weight detection method and device based on deep learning, wherein the method comprises the steps of determining information of a target person, wherein the information of the target person comprises a face image of the target person and height information of the target person; and inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person. Therefore, after the face image and the height of the user are determined, the height and the face image of the user are automatically input into the predetermined weight analysis model for analysis, so that the intelligent detection of the weight of the user is realized, the accurate weight of the user can be quickly obtained, and other operations can be executed based on the weight of the user, for example: recommending recipes, sports items and the like matched with the user according to the weight of the user; and the height can be detected without a special weight meter, the cost is saved, the weight can be detected at any time and any place, and the convenience of weight detection is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a weight detection method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another weight detection method based on deep learning according to the embodiment of the present invention;
fig. 3 is a schematic structural diagram of a weight detection device based on deep learning according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another weight detection device based on deep learning according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another weight detection device based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, product, or apparatus that comprises a list of steps or elements is not limited to those listed but may alternatively include other steps or elements not listed or inherent to such process, method, product, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention discloses a weight detection method and device based on deep learning. After a user's face image and height are determined, they are automatically input into a predetermined weight analysis model for analysis, realizing intelligent detection of the user's weight. The user's accurate weight can be obtained quickly, which facilitates further operations based on it, such as recommending recipes, sports activities and the like matched to the user. Moreover, the weight can be detected without a dedicated weight scale, which saves cost and allows the weight to be detected at any time and place, improving the convenience of weight detection. Details are described below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a weight detection method based on deep learning according to an embodiment of the present invention. The weight detection method based on deep learning described in fig. 1 may be applied to an image analysis system/an image analysis server (including a local server or a cloud server)/an image analysis platform, where the image analysis system/the image analysis server/the image analysis platform may communicate with authorized terminal devices, where the terminal devices include terminal devices having a communication function, such as a smart phone and a computer. As shown in fig. 1, the weight detection method based on deep learning may include the following operations:
101. and determining the information of the target person, wherein the information of the target person comprises the face image of the target person and the height information of the target person.
In the embodiment of the invention, an image of the target person is captured by an image acquisition device deployed in the current scene, the target person being any user whose image needs to be captured. The target person stands at a preset image acquisition position, for example at a horizontal distance of 0.5 to 2 meters from the horizontal position of the image acquisition device. Further optionally, people of different heights may correspond to different image acquisition positions: the taller the person, the farther the corresponding position, i.e., the greater the horizontal distance from the image acquisition device. A height marker is provided at each image acquisition position, for example: the marker at image acquisition position A is 1.5 meters, and the marker at image acquisition position B is 1.7 meters. Setting different acquisition positions for users of different heights ensures that a complete human body image of the user can be captured, which improves the efficiency and accuracy of height detection, and in turn the efficiency and accuracy of weight detection.
In the embodiment of the present invention, as an optional implementation manner, determining information of a target person may include:
inputting the acquired image of the target person into the determined image segmentation model for analysis to obtain a second analysis result, and determining a human body image of the target person and a human face image of the target person according to the second analysis result;
calculating the height pixel value of the human body image of the target person in the image of the target person based on the top end position of the human body image of the target person and the bottom end position of the image of the target person;
and acquiring the actual height corresponding to the height pixel value as the height information of the target person based on the unit actual height represented by the determined unit pixel.
In the embodiment of the invention, the top position of the human body image of the target person is also called the top-of-head position. Optionally, calculating the height pixel value of the human body image of the target person in the image of the target person based on the top position of the human body image and the bottom position of the image is specifically: acquiring a first longitudinal pixel value at the middle of the top position of the human body image of the target person and a second longitudinal pixel value at the bottom position of the image of the target person, computing the difference between the first longitudinal pixel value and the second longitudinal pixel value, and determining that longitudinal pixel value difference as the height pixel value of the human body image of the target person in the image of the target person.
For example, if the actual unit height represented by a unit pixel is 10 mm, and the difference between the longitudinal pixel values at the top of the head and the bottom of Xiaohong's human body image is 150 pixels, then the actual height corresponding to Xiaohong's human body image is 10 × 150 = 1500 mm = 1.5 m; that is, Xiaohong's height is 1.5 m.
Therefore, the optional embodiment can improve the accuracy and efficiency of acquiring the human body image of the user by inputting the acquired image of the user into the image segmentation model for automatic analysis; and acquiring the height corresponding to the height pixel value between the top position of the human body image of the user and the bottom position of the image of the user according to the actual unit height represented by the unit pixel, so that the intelligent detection of the height of the user can be realized, the detection accuracy and efficiency of the height of the user are improved, and the acquisition efficiency and accuracy of the weight of the user are further improved.
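The pixel-to-height conversion described above can be sketched in Python as follows (a minimal sketch; the function name and the 10 mm-per-pixel calibration are illustrative assumptions, not values fixed by the patent):

```python
def height_from_pixels(top_y, bottom_y, mm_per_pixel):
    """Convert the longitudinal pixel span of the segmented human body image
    into an actual height in metres.

    top_y        -- row of the top-of-head position in the person's image
    bottom_y     -- row of the bottom end of the person's image
    mm_per_pixel -- actual unit height represented by one pixel (calibrated)
    """
    pixel_span = bottom_y - top_y               # height pixel value
    return pixel_span * mm_per_pixel / 1000.0   # mm -> m

# The worked example from the text: 150 pixels at 10 mm per pixel.
print(height_from_pixels(50, 200, 10.0))  # 1.5
```

The calibration constant `mm_per_pixel` would in practice depend on the fixed horizontal distance between the image acquisition position and the equipment.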
In the embodiment of the present invention, optionally, the information of the target person may also be received from a user terminal of the target person, which is not limited in the embodiment of the present invention.
In an embodiment of the present invention, the information of the target person may further include other information of the target person, where the other information includes at least one of the age, sex, exercise condition, diet condition, and toileting condition of the target person. The more information about the user is available, the higher the analysis accuracy, which further improves the accuracy of acquiring the user's weight information.
102. And inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person.
It can be seen that, by implementing the weight detection method based on deep learning described in fig. 1, after the face image and height of the user are determined, the height and face image can be automatically input into a predetermined weight analysis model for analysis, realizing intelligent detection of the user's weight and quickly obtaining an accurate weight. This facilitates other operations based on the user's weight, for example: recommending recipes, sports items and the like matched with the user according to the user's weight. Moreover, the weight can be detected without a special weight scale, which saves cost and allows the weight to be detected at any time and any place, improving the convenience of weight detection.
In an alternative embodiment, after performing step 101 and before performing step 102, the method may further include the steps of:
judging whether the face image of the target person is matched with the determined standardized face image, and triggering and executing the operation of inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result when the face image of the target person is matched with the determined standardized face image;
when it is judged that the face image of the target person does not match the standardized face image, correcting the face image of the target person based on the determined correction manner so that the corrected face image of the target person matches the standardized face image, and updating the information of the target person based on the corrected face image of the target person to obtain updated information of the target person;
wherein inputting the information of the target person into the determined weight analysis model for analysis to obtain the first analysis result includes:
and inputting the updated information of the target person into the determined weight analysis model for analysis to obtain a first analysis result.
In this optional embodiment, optionally, judging whether the face image of the target person matches the determined standardized face image may include: when the offset degree between the center line of the face image of the target person and the center line of the standardized face image is greater than or equal to a determined offset threshold (for example: 5 degrees), determining that the face image of the target person does not match the standardized face image. The face image of the target person and the standardized face image use the same coordinate system; the intersection point of the two center lines is obtained, and the included angle between the two center lines, measured at that intersection point, is taken as the offset degree between the center line of the face image of the target person and the center line of the standardized face image. In this way, the accuracy and efficiency of judging whether the face image of the user matches the standardized face image can be improved.
Therefore, in the optional embodiment, after the face image of the user is acquired, whether the face image of the user is matched with the standardized face image is further judged, if so, subsequent user weight acquisition operation is executed, if not, the face image of the user is corrected, so that the corrected face image of the user is matched with the standardized face image, and then, subsequent user weight acquisition operation is executed, so that the possibility of acquiring accurate user weight can be improved, and the intelligent function of the weight detection device is enriched.
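The center-line comparison described above can be sketched as follows (a minimal sketch assuming each center line is represented as a 2-D direction vector; the 5-degree threshold follows the example in the text, the function names are illustrative):

```python
import math

def centerline_offset_deg(face_line, std_line):
    """Included angle, in degrees, between the two centre lines,
    each represented as a 2-D direction vector."""
    (fx, fy), (sx, sy) = face_line, std_line
    dot = fx * sx + fy * sy
    norm = math.hypot(fx, fy) * math.hypot(sx, sy)
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def matches_standard(face_line, std_line, threshold_deg=5.0):
    # Matched when the offset degree is below the threshold.
    return centerline_offset_deg(face_line, std_line) < threshold_deg

upright = (0.0, 1.0)  # standardized centre line, pointing straight down the face
tilted = (math.sin(math.radians(10)), math.cos(math.radians(10)))
print(matches_standard(upright, upright))  # True
print(matches_standard(tilted, upright))   # False: offset is 10 degrees
```

A face tilted past the threshold would then be routed to the correction step rather than straight into the weight analysis model.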
In another alternative embodiment, modifying the facial image of the target person based on the determined modification manner so that the modified facial image of the target person matches the standardized facial image may include:
acquiring coordinates of key features in the face image of the target person according to the face image of the target person, wherein the coordinates of the key features in the face image of the target person are used for correcting the face image of the target person;
and performing transformation processing on the coordinates of the key features in the face image of the target person by taking the coordinates of the key features in the acquired standardized face image as a reference until the coordinate difference value between the coordinates of the key features in the face image of the target person after transformation and the coordinates of the key features in the standardized face image is less than or equal to the determined coordinate difference value threshold.
In this optional embodiment, optionally, the key features in the face image of the target person include a left eye feature, a right eye feature, a nose feature, a left mouth corner feature, and a right mouth corner feature of the target person. Further optionally, the key features may also include a left cheek feature and a right cheek feature of the target person. Still further optionally, the key features may also include a left ear feature and a right ear feature of the target person. The more key features the face image contains, the higher the correction accuracy, reliability and efficiency of the face image of the user, and thus the higher the accuracy and reliability of acquiring the user's weight.
In this optional embodiment, optionally, the coordinate difference between the coordinates of the key features in the transformed face image of the target person and the coordinates of the key features in the standardized face image being less than or equal to the determined coordinate difference threshold specifically means: the coordinate difference between each key feature in the transformed face image and the corresponding key feature in the standardized face image is less than or equal to the determined coordinate difference threshold. For example: the difference between the abscissa of the nose in the user's face image and the abscissa of the nose in the standardized face image is less than or equal to 1 mm; the judgment for key features such as the mouth and eyes is similar to that for the nose and is not repeated here. Further optionally, this condition may also be understood as the offset degree between the center line of the face image of the target person and the center line of the standardized face image being less than the offset threshold.
Therefore, in the optional embodiment, the standardized face image is used as a reference, the correction of the face image of the user can be realized by performing transformation processing on the coordinates of the key features of the face image of the user, the correction operation is performed through the key features of the face image, and the correction accuracy of the face image of the user can be improved while the correction efficiency is improved.
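The transformation of key-feature coordinates toward the standardized layout can be sketched with a least-squares affine fit (the five-point landmark coordinates and function names below are illustrative assumptions, not values from the patent):

```python
import numpy as np

# Hypothetical five-point standardized landmark layout (left eye, right eye,
# nose tip, left mouth corner, right mouth corner), in pixel coordinates.
STD = np.array([[30., 40.], [70., 40.], [50., 60.], [35., 80.], [65., 80.]])

def align_landmarks(pts):
    """Least-squares affine transform taking detected landmarks onto STD."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])    # rows of [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, STD, rcond=None)  # 3x2 affine matrix
    return A @ coef                                 # transformed landmarks

# A face detected 12 px right of and 7 px above the standard position:
shifted = STD + np.array([12.0, -7.0])
aligned = align_landmarks(shifted)
# After the transform, every key feature falls within the coordinate
# difference threshold of its standardized counterpart.
print(bool(np.abs(aligned - STD).max() <= 1.0))  # True
```

Because an affine transform can represent any translation, rotation, and scaling of the landmark set exactly, a single least-squares fit typically satisfies the coordinate-difference condition without iteration for rigidly displaced faces.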
In yet another alternative embodiment, the method may further comprise the steps of:
acquiring identification information of an article worn by a target person, and determining height information of the article according to the identification information of the article;
and, after determining the information of the target person, the method may further comprise the steps of:
and calculating height difference information between the height information of the target person and the height information of the article worn by the target person, and updating the height information of the target person into the height difference information.
In this alternative embodiment, the item worn by the target person may optionally include shoes worn by the target person and/or hats worn by the target person.
In this optional embodiment, optionally, the identification information of the article worn by the target person may be obtained in the following ways:
receiving information input by the target person as the identification information of the article worn by the target person, wherein the information input by the target person may be directly input through an input device (such as a display screen or a sound pick-up) corresponding to the weight detection device, or may be transmitted through a terminal device of the target person; or,
and identifying the article worn by the target person through the image acquisition equipment to obtain the identification information of the article worn by the target person.
By providing a plurality of types of obtaining modes of the identification information of the article worn by the user, the possibility of obtaining the identification information of the article worn by the user can be improved.
In this alternative embodiment, the identification information of the item may optionally include, but is not limited to, a brand identification and/or a shape identification of the item. Further optionally, height information corresponding to different articles is stored in advance.
Therefore, according to the optional embodiment, the height of the article is determined according to the identification information of the article worn by the user, the obtained height of the user can be corrected based on the height of the article, the actual height of the user can be obtained, the obtaining accuracy of the height information of the user is further improved, the obtaining accuracy of the weight of the user is further improved, and the intelligent function of the weight detection device is further enriched.
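Assuming the pre-stored article heights are kept in a simple lookup table keyed by the identification information, the height correction above can be sketched as (all identifiers and values below are illustrative, not from the patent):

```python
# Hypothetical pre-stored article heights in metres, keyed by an
# identification (e.g. brand/shape) recognised from the image or
# entered by the user.
ITEM_HEIGHTS = {"sneaker-flat": 0.02, "heel-high": 0.08, "cap-basic": 0.03}

def corrected_height(measured_height_m, worn_item_ids):
    """Subtract the height of every recognised worn article (shoes, hat)
    to update the target person's height information."""
    return measured_height_m - sum(ITEM_HEIGHTS.get(i, 0.0) for i in worn_item_ids)

# A 1.58 m measurement taken while wearing 8 cm heels:
print(round(corrected_height(1.58, ["heel-high"]), 2))  # 1.5
```

Unrecognised identifiers contribute zero here; a production system would more likely reject them or prompt for manual input.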
In yet another alternative embodiment, the method may further comprise the steps of:
acquiring sample data of a plurality of sample persons, wherein the sample data of each sample person comprises a face image of the sample person, height information of the sample person and weight information of the sample person;
training a predetermined analysis model based on sample data of all sample personnel to obtain a trained analysis model, and determining the trained analysis model as a determined weight analysis model;
wherein, the sample data of all the sample persons is standardized sample data.
In this optional embodiment, optionally, the sample data of each sample person includes one or more face images of the sample person. The image acquisition equipment sequentially acquires a plurality of images of each sample person and performs a face recognition operation on each image to obtain a plurality of face images of each sample person. Still further optionally, a transformation process (also referred to as a standardization operation) is performed on each face image of each sample person to obtain a standard face image. For the description of the transformation process performed on the face image of each sample person, refer to the detailed description of the face image of the target person above. By performing the standardization operation on the face images of the sample persons, inaccurate training of the analysis model caused by abnormal collected images (such as a face leaning to the right, a face leaning to the left, or a lowered head) can be reduced, which improves the training accuracy and reliability of the weight analysis model and thus the accuracy, reliability and efficiency of obtaining a subsequent user's weight.
Therefore, in the optional embodiment, the required weight analysis model is obtained by pre-training the analysis model based on the standardized sample data, and the subsequent direct use of the weight analysis model for analyzing the obtained user information is facilitated, so that the analysis efficiency and the accuracy of the user weight are improved, the obtaining efficiency and the accuracy of the user weight are further improved, and the intelligent detection of the user weight is realized.
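The training step can be illustrated with a deliberately simplified stand-in: ordinary least squares fitted to synthetic standardized samples (the patent's weight analysis model would be a deep network over face images; the scalar face feature, data ranges, and coefficients below are all fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized sample data: height (m) plus one scalar feature
# derived from the face image, with weight (kg) as the training label.
heights = rng.uniform(1.5, 1.9, 200)
face_feat = rng.uniform(0.12, 0.18, 200)          # e.g. relative face width
weights = 55 * heights + 180 * face_feat - 60 + rng.normal(0.0, 1.0, 200)

# "Training": least squares stands in for fitting the analysis model.
X = np.column_stack([heights, face_feat, np.ones_like(heights)])
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)

def predict_weight(height_m, face_feature):
    """First analysis result: estimated weight for new target-person info."""
    return coef[0] * height_m + coef[1] * face_feature + coef[2]

# For a 1.7 m person with face feature 0.15 the generating model gives
# 60.5 kg; the fitted model should land close to that.
print(round(predict_weight(1.7, 0.15), 1))
```

The point of the sketch is the workflow: fit once on standardized sample data, then reuse the fitted model directly on each new target person's information.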
In yet another alternative embodiment, the method may further comprise the steps of:
acquiring a plurality of sample images, wherein the heights of the human body images in the plurality of sample images are different, that is, the plurality of sample images come from users with different heights, and the plurality of sample images are acquired by image acquisition equipment corresponding to that used to acquire the image of the target person;
and training the determined segmentation model based on the plurality of sample images to obtain the trained segmentation model, and determining the trained segmentation model as the determined image segmentation model.
In this alternative embodiment, the segmentation model may optionally include a semantic segmentation model and/or an instance segmentation model. Further alternatively, the plurality of sample images may be images for different positional heights of the image acquisition device.
Therefore, the optional embodiment can conveniently and accurately segment the human body image of the user from the acquired image by directly using the image segmentation model in the follow-up process through pre-training the required image segmentation model, thereby being beneficial to improving the accuracy and efficiency of obtaining the height of the user.
In yet another alternative embodiment, after performing step 102, the method may further comprise the steps of:
acquiring human body parameters of the target personnel according to the data of the target personnel;
and establishing an incidence relation between the human body parameters of the target person and the target person, and storing the incidence relation and the human body parameters of the target person.
In this alternative embodiment, the data of the target person includes weight information of the target person and height information of the target person. Further optionally, the data of the target person further comprises an age of the target person.
In this alternative embodiment, the human body parameter of the target person includes at least one of a body type of the target person, a body mass index of the target person, a body fat ratio of the target person, and a basal metabolism of the target person.
In this alternative embodiment, optionally, the human body parameters of the target person are output to the target person.
Therefore, after the weight information of the user is acquired, the optional embodiment further acquires the body parameters of the user according to the weight, the height and other information of the user, stores the body parameters and the association relation of the user, can monitor the body condition of the user, is beneficial to helping the user manage the body, and further enriches the intelligent function of the weight detection device.
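The human body parameters derived from the target person's data can be sketched as follows (a minimal sketch; the BMI formula is standard, while the body-type cut-offs are common WHO-style thresholds that the patent itself does not fix):

```python
def body_mass_index(weight_kg, height_m):
    """BMI = weight / height^2, in kg/m^2."""
    return weight_kg / height_m ** 2

def body_type(bmi):
    # Common WHO-style cut-offs; illustrative, not specified by the patent.
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

# Xiaohong from the earlier example: 1.5 m tall, detected weight 60 kg.
bmi = body_mass_index(60.0, 1.5)
print(round(bmi, 2), body_type(bmi))  # 26.67 overweight
```

The resulting parameters would then be stored together with the association relation to the target person for later body-condition monitoring.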
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another weight detection method based on deep learning according to an embodiment of the present invention. The weight detection method based on deep learning described in fig. 2 may be applied to an image analysis system/image analysis server (including a local server or a cloud server)/image analysis platform corresponding to an intelligent cabinet, where the image analysis system/image analysis server/image analysis platform may communicate with authorized terminal devices, where the terminal devices include terminal devices having a communication function, such as a smart phone and a computer. As shown in fig. 2, the weight detection method based on deep learning may include the following operations:
201. and determining the information of the target person, wherein the information of the target person comprises the face image of the target person and the height information of the target person.
202. And analyzing the face image of the target person included in the information of the target person to obtain the attribute information of the target person.
In the embodiment of the invention, the intelligent cabinet is provided with corresponding image acquisition equipment, and the image acquisition equipment is used for acquiring the image of the target person. The intelligent cabinet is used for cooking food materials, and the corresponding recipe database exists in the intelligent cabinet. Wherein recipes for different physical situations are stored in the recipe database.
In the embodiment of the present invention, the attribute information of the target person includes at least one of a gender of the target person, an age of the target person, and a face color of the target person.
203. And inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person.
In the embodiment of the present invention, for other descriptions of step 201 and step 203, please refer to the related detailed descriptions of step 101 and step 102 in the first embodiment, which are not repeated in the embodiment of the present invention.
204. And inquiring the recipes matched with the target information from the recipe database according to the target information of the target person.
In the embodiment of the invention, the target information of the target person comprises attribute information of the target person, height information of the target person and weight information of the target person.
In this embodiment of the present invention, optionally, after the step 204 is executed, the method may further include: and outputting the recipes matched with the target information to the target person.
Therefore, the embodiment of the invention obtains the attribute information of the user, for example: gender, age and complexion, by analyzing the face image of the user, and can obtain a recipe that suits the user's physical condition according to the user's attribute information, height and weight, improving the experience of the user using the intelligent cabinet.
In an optional embodiment, after analyzing the facial image of the target person included in the information of the target person to obtain the attribute information of the target person, the method may further include the following steps:
acquiring target attribute information matched with the face image of the target person from the determined attribute information database according to the face image of the target person;
and judging whether the target attribute information is matched with the attribute information of the target person, and triggering to execute the operation of inquiring the recipe matched with the target information from the recipe database according to the target information of the target person when the target attribute information is judged to be matched with the attribute information of the target person.
Therefore, after the face image of the user is acquired, this optional embodiment further acquires the existing attribute information matched with the face image of the user from the attribute information database, and continues the subsequent operation of querying a matching recipe only when the current attribute information of the user matches the existing attribute information. This reduces recipe queries performed on mismatched attribute information, improves the accuracy of the recipe query operation, helps obtain an accurate recipe, further improves the experience of using the intelligent cabinet, increases the user stickiness of the intelligent cabinet, and facilitates its popularization.
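The recipe query against the target information can be sketched as a simple keyed lookup (the recipe entries, the BMI banding, and the function name are illustrative assumptions standing in for the cabinet's recipe database):

```python
# Hypothetical recipe database keyed by (gender, BMI band).
RECIPES = {
    ("female", "normal"):     ["steamed fish", "vegetable congee"],
    ("female", "overweight"): ["salad bowl", "clear soup"],
    ("male",   "normal"):     ["braised chicken", "rice set"],
}

def query_recipes(gender, weight_kg, height_m):
    """Query recipes matching the target information of the target person."""
    bmi = weight_kg / height_m ** 2
    band = "normal" if bmi < 25.0 else "overweight"
    return RECIPES.get((gender, band), [])

print(query_recipes("female", 60.0, 1.5))  # ['salad bowl', 'clear soup']
```

A real recipe database would key on richer target information (age, complexion, dietary condition) than this two-field example.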
In another optional embodiment, the method may further comprise the steps of:
when the target attribute information is judged not to be matched with the attribute information of the target person, acquiring a recipe corresponding to the target attribute information;
and determining distinguishing attribute information of the target attribute information and the attribute information of the target person, and determining a recipe required by the target person according to the recipe corresponding to the distinguishing attribute information and the target attribute information.
Therefore, when the optional embodiment judges that the current attribute information of the user is not matched with the existing attribute information of the user, the accuracy and the reliability of acquiring the recipes conforming to the user are further improved by comprehensively analyzing the current attribute information of the user and the existing attribute information, and the experience of the user in using the intelligent cabinet is further improved.
It can be seen that, by implementing the weight detection method based on deep learning described in fig. 2, after the face image and height of the user are determined, the height and face image can be automatically input into the predetermined weight analysis model for analysis, realizing intelligent detection of the user's weight and quickly obtaining an accurate weight. This facilitates other operations based on the user's weight, for example: recommending recipes, sports items and the like matched with the user according to the user's weight. Moreover, the weight can be detected without a special weight scale, which saves cost and allows the weight to be detected at any time and any place, improving the convenience of weight detection. In addition, a recipe that suits the user's physical condition can be obtained according to the user's attribute information, height and weight, improving the experience of using the intelligent cabinet, increasing the user stickiness of the intelligent cabinet, and facilitating its popularization.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a weight detection device based on deep learning according to an embodiment of the present invention. The weight detection apparatus based on deep learning depicted in fig. 3 can be applied to an image analysis system/image analysis server (including a local server or a cloud server)/image analysis platform, where the image analysis system/image analysis server/image analysis platform can communicate with authorized terminal devices, where the terminal devices include terminal devices having a communication function, such as a smart phone and a computer. As shown in fig. 3, the deep learning based weight detecting apparatus may include: a determination module 301, an analysis module 302, and an acquisition module 303, wherein:
a determining module 301, configured to determine information of a target person, where the information of the target person includes a face image of the target person and height information of the target person.
The analysis module 302 is configured to input information of the target person into the determined weight analysis model for analysis, so as to obtain a first analysis result.
And the obtaining module 303 is configured to determine the first analysis result as weight information of the target person.
It can be seen that, by implementing the weight detection device based on deep learning described in fig. 3, after the face image and height of the user are determined, the height and face image can be automatically input into the predetermined weight analysis model for analysis, realizing intelligent detection of the user's weight and quickly obtaining an accurate weight. This facilitates other operations based on the user's weight, for example: recommending recipes, sports items and the like matched with the user according to the user's weight. Moreover, the weight can be detected without a special weight scale, which saves cost and allows the weight to be detected at any time and any place, improving the convenience of weight detection.
In an alternative embodiment, the determining module 301 determines the information of the target person in a specific manner:
inputting the acquired image of the target person into the determined image segmentation model for analysis to obtain a second analysis result, and determining the second analysis result as a human body image of the target person and a face image of the target person;
calculating the height pixel value of the human body image of the target person in the image of the target person based on the top end position of the human body image of the target person and the bottom end position of the image of the target person;
and acquiring the height matched with the height pixel value according to the determined actual height of the unit represented by the unit pixel, and taking the height as the height information of the target person.
Therefore, the device described in the embodiment of fig. 3 can also automatically analyze the acquired image of the user by inputting the image into the image segmentation model, so that the acquisition accuracy and efficiency of the human body image of the user can be improved; and the height corresponding to the height pixel value between the top position of the human body image of the user and the bottom position of the image of the user is obtained from the actual height represented by the unit pixel, so that the intelligent detection of the height of the user can be realized, the detection accuracy and efficiency of the height of the user are improved, and the obtaining efficiency and accuracy of the weight of the user are further improved.
In another alternative embodiment, as shown in fig. 4, the apparatus further comprises: a determination module 304, a modification module 305, and a first update module 306, wherein:
a judging module 304, configured to, after the determining module 301 determines the information of the target person, and before the analyzing module 302 inputs the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, judge whether the face image of the target person matches the determined standardized face image, and when a match is judged, trigger the analyzing module 302 to perform the above-mentioned operation of inputting the information of the target person into the determined weight analysis model for analysis to obtain the first analysis result.
And a correcting module 305, configured to, when the judging module 304 judges that the face images do not match, correct the face image of the target person based on the determined correction manner, so that the corrected face image of the target person matches the standardized face image.
And a first updating module 306, configured to update the information of the target person based on the corrected face image of the target person, so as to obtain updated information of the target person.
The analysis module 302 inputs the information of the target person into the determined weight analysis model for analysis, and the manner of obtaining the first analysis result specifically includes:
and inputting the updated information of the target person into the determined weight analysis model for analysis to obtain a first analysis result.
It can be seen that, with the implementation of the apparatus described in fig. 4, after the face image of the user is acquired, it can be further determined whether the face image of the user matches the standardized face image, if so, a subsequent user weight acquisition operation is performed, if not, the face image of the user is corrected, so that the corrected face image of the user matches the standardized face image, and then the subsequent user weight acquisition operation is performed, which can improve the possibility of acquiring accurate user weight and enrich the intelligent function of the weight detection apparatus.
In yet another alternative embodiment, as shown in fig. 4, the manner in which the correcting module 305 corrects the face image of the target person based on the determined correction manner, so that the corrected face image of the target person matches the standardized face image, is specifically as follows:
acquiring coordinates of key features in the face image of the target person according to the face image of the target person, wherein the coordinates of the key features in the face image of the target person are used for correcting the face image of the target person;
and performing transformation processing on the coordinates of the key features in the face image of the target person by taking the coordinates of the key features in the acquired standardized face image as a reference until the coordinate difference value between the coordinates of the key features in the face image of the target person after transformation and the coordinates of the key features in the standardized face image is less than or equal to the determined coordinate difference value threshold.
It can be seen that, when the apparatus described in fig. 4 is implemented, correction of the face image of the user can be achieved by transforming the coordinates of the key features of the face image based on the standardized face image. Because the correction operation is performed on the key features of the face image, both the correction accuracy and the correction efficiency of the face image of the user can be improved.
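The coordinate-transformation step described above can be sketched as a least-squares similarity alignment of the detected key-feature coordinates onto the standardized template. This is an illustrative reconstruction, not the patent's actual implementation: the closed-form Umeyama-style estimate, the iteration cap, and the pixel threshold are all assumptions.

```python
import numpy as np

def align_landmarks(src, ref, threshold=1.0):
    """Align source key-feature coordinates to a standardized reference.

    Estimates a least-squares similarity transform (scale, rotation,
    translation) mapping the target person's key features onto the
    standardized face template, and repeats until the mean coordinate
    difference is at or below the threshold, mirroring the correction
    step described above.
    """
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    for _ in range(10):  # cap iterations defensively
        mu_s, mu_r = src.mean(0), ref.mean(0)
        cs, cr = src - mu_s, ref - mu_r
        cov = cr.T @ cs / len(src)           # cross-covariance ref<-src
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(2)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[1, 1] = -1                     # avoid a reflection
        R = U @ S @ Vt                       # optimal rotation
        scale = np.trace(np.diag(D) @ S) / cs.var(0).sum()
        src = (src - mu_s) @ (scale * R).T + mu_r
        if np.abs(src - ref).mean() <= threshold:
            break
    return src
```

A single closed-form estimate already minimizes the coordinate difference for landmark sets related by a similarity, so the loop normally exits after the first pass; the cap only guards against degenerate inputs.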
In yet another alternative embodiment, as shown in fig. 4, the apparatus further comprises: a calculation module 307 and a second update module 308, wherein:
the obtaining module 303 is further configured to obtain identification information of an article worn by the target person, and determine height information of the article according to the identification information of the article, where the article worn by the target person includes shoes worn by the target person and/or hats worn by the target person.
A calculating module 307, configured to calculate height difference information between the height information of the target person and the height information of the item after the determining module 301 determines the information of the target person.
And a second updating module 308 for updating the height information of the target person to the height difference information.
It can be seen that the device described in fig. 4 can also determine the height of an article according to the identification information of the article worn by the user, correct the obtained height of the user based on the height of the article, obtain the actual height of the user, and further improve the accuracy of obtaining height information of the user, so that the accuracy of obtaining the weight of the user is further improved, and the intelligent function of the weight detection device is further enriched.
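As a minimal sketch of this height-correction step, the identification information of a worn article can be mapped to a stored article height and subtracted from the measured height. The lookup table and identifier strings below are hypothetical; a real system would query a product database.

```python
# Hypothetical lookup table mapping article identifiers to the height
# they add, in centimetres (assumed values for illustration only).
ARTICLE_HEIGHTS_CM = {"shoe:sneaker": 3.0, "shoe:heel": 8.0, "hat:cap": 2.5}

def corrected_height(measured_cm, article_ids):
    """Subtract the height of worn articles (shoes and/or hats) from
    the measured height, as the second updating module does."""
    extra = sum(ARTICLE_HEIGHTS_CM.get(a, 0.0) for a in article_ids)
    return measured_cm - extra
```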
In yet another alternative embodiment, as shown in fig. 4, the apparatus further comprises: a training module 309, wherein:
the obtaining module 303 is further configured to obtain sample data of a plurality of sample persons, where the sample data of each sample person includes a face image of the sample person, height information of the sample person, and weight information of the sample person.
The training module 309 is configured to train a predetermined analysis model based on sample data of all sample personnel to obtain a trained analysis model.
The determining module 301 is further configured to determine the trained analysis model as the determined weight analysis model.
Wherein, the sample data of all the sample persons is standardized sample data.
Therefore, the device described in fig. 4 can also be implemented to pre-train an analysis model based on standardized sample data to obtain a required weight analysis model, which is beneficial to analyzing the obtained user information by directly using the weight analysis model subsequently, so that the analysis efficiency and accuracy of the user weight are improved, the acquisition efficiency and accuracy of the user weight are further improved, and the intelligent detection of the user weight is realized.
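The training step above can be sketched as follows. The patent does not specify the model architecture, so ordinary least squares stands in for the weight analysis model here, with standardized height (and, in practice, descriptors derived from the face image) as the input features; this is a self-contained illustration, not the actual deep model.

```python
import numpy as np

def train_weight_model(features, weights):
    """Fit a linear weight-analysis model by least squares.

    `features` is an (n_samples, n_features) array of standardized
    inputs, e.g. height plus face-image descriptors; `weights` holds
    the known body weight of each sample person.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
    return coef

def predict_weight(coef, feature_row):
    """Apply the trained model to one person's feature vector."""
    return float(np.append(feature_row, 1.0) @ coef)
```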
In yet another alternative embodiment, the device is applied to an intelligent cabinet, which is used for cooking food materials, and which has a corresponding recipe database. And, as shown in fig. 4, the apparatus further comprises: an inquiry module 310 and an output module 311, wherein:
the analysis module 302 is further configured to, after the determination module 301 determines the information of the target person, analyze a face image of the target person included in the information of the target person to obtain attribute information of the target person, where the attribute information of the target person includes at least one of a gender of the target person, an age of the target person, and a complexion of the target person.
And the query module 310 is configured to, after the obtaining module 303 determines that the first analysis result is the weight information of the target person, query, from the recipe database, a recipe matched with the target information according to the target information of the target person, where the target information of the target person includes attribute information of the target person, height information of the target person, and weight information of the target person.
Optionally, the apparatus may further include: an output module 311, configured to output the recipe matched with the target information to the target person.
It can be seen that the apparatus described in fig. 4 can also obtain the attribute information of the user, such as gender, age, and complexion, by analyzing the face image of the user, and then acquire a recipe suited to the user's health condition according to the user's attribute information together with the user's height and weight. This improves the user's experience with the intelligent cabinet, helps increase user retention of the intelligent cabinet, and facilitates its popularization.
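One plausible way the query module could match recipes to the target information is a BMI-range filter over the recipe database. The schema, field names, and records below are illustrative assumptions, not the patent's actual database.

```python
# Illustrative recipe records; "min_bmi"/"max_bmi" is an assumed schema.
RECIPES = [
    {"name": "steamed fish", "max_bmi": 40.0},
    {"name": "light salad", "min_bmi": 24.0},
    {"name": "braised pork", "max_bmi": 24.0},
]

def query_recipes(height_m, weight_kg):
    """Return the recipes whose BMI range matches the target person,
    one plausible filter the query module 310 could apply."""
    bmi = weight_kg / height_m ** 2
    return [r["name"] for r in RECIPES
            if r.get("min_bmi", 0.0) <= bmi <= r.get("max_bmi", float("inf"))]
```

Attribute information such as gender and age could further narrow the result set in the same way.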
Example four
Referring to fig. 5, fig. 5 is a schematic diagram illustrating another weight detection device based on deep learning according to an embodiment of the present invention. As shown in fig. 5, the deep learning based weight detecting apparatus may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to the memory 501;
further, an input interface 503 and an output interface 504 coupled to the processor 502 may be included;
the processor 502 calls the executable program code stored in the memory 501 to execute the steps of the weight detection method based on deep learning described in the first embodiment or the second embodiment.
Example five
The embodiment of the invention discloses a computer storage medium which stores a computer program for electronic data exchange, wherein the computer program enables a computer to execute the steps of the weight detection method based on deep learning described in the first embodiment or the second embodiment.
Example six
The embodiment of the invention discloses a computer program product, which comprises a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to make a computer execute the steps of the weight detection method based on deep learning described in the first embodiment or the second embodiment.
The above-described apparatus embodiments are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, wherein the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, a magnetic disk storage, a tape storage, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the weight detection method and device based on deep learning disclosed in the embodiments of the present invention are only preferred embodiments of the present invention and are only used to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A weight detection method based on deep learning is characterized by comprising the following steps:
determining information of a target person, wherein the information of the target person comprises a face image of the target person and height information of the target person;
and inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result, and determining the first analysis result as the weight information of the target person.
2. The weight detection method based on deep learning of claim 1, wherein the determining information of the target person comprises:
inputting the acquired image of the target person into the determined image segmentation model for analysis to obtain a second analysis result, and determining the second analysis result as a human body image of the target person and a face image of the target person;
calculating a height pixel value of the human body image of the target person in the image of the target person based on the top end position of the human body image of the target person and the bottom end position of the image of the target person;
and acquiring the height matched with the height pixel value according to the determined actual height of the unit represented by the unit pixel, and taking the height as the height information of the target person.
3. The weight detection method based on deep learning of claim 1 or 2, wherein after the information of the target person is determined and before the information of the target person is input into the determined weight analysis model for analysis to obtain a first analysis result, the method further comprises:
judging whether the face image of the target person is matched with the determined standardized face image, and triggering and executing the operation of inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result when the face image of the target person is matched with the determined standardized face image;
when the judgment result is that the face image of the target person does not match the standardized face image, correcting the face image of the target person based on the determined correction manner so that the corrected face image of the target person matches the standardized face image, and updating the information of the target person based on the corrected face image of the target person to obtain the updated information of the target person;
wherein, the inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result comprises:
and inputting the updated information of the target person into the determined weight analysis model for analysis to obtain a first analysis result.
4. The weight detection method based on deep learning of claim 3, wherein the modifying the facial image of the target person based on the determined modification manner so that the modified facial image of the target person matches the standardized facial image comprises:
acquiring coordinates of key features in the face image of the target person according to the face image of the target person, wherein the coordinates of the key features in the face image of the target person are used for correcting the face image of the target person;
and performing transformation processing on the coordinates of the key features in the face image of the target person by taking the coordinates of the key features in the acquired standardized face image as a reference until the coordinate difference value between the coordinates of the key features in the face image of the target person after transformation and the coordinates of the key features in the standardized face image is less than or equal to the determined coordinate difference value threshold.
5. The method for detecting body weight based on deep learning of claim 2, wherein the method further comprises:
acquiring identification information of an article worn by the target person, and determining height information of the article worn by the target person according to the identification information of the article worn by the target person, wherein the article worn by the target person comprises a shoe worn by the target person and/or a hat worn by the target person;
and after determining the information of the target person, the method further comprises:
and calculating height difference information between the height information of the target person and the height information of the article worn by the target person, and updating the height information of the target person into the height difference information.
6. The deep learning based weight detection method according to claim 1, 2, 4 or 5, further comprising:
acquiring sample data of a plurality of sample persons, wherein the sample data of each sample person comprises a face image of the sample person, height information of the sample person and weight information of the sample person;
training a predetermined analysis model based on sample data of all the sample personnel to obtain the trained analysis model, and determining the trained analysis model as the determined weight analysis model;
wherein the sample data of all the sample persons is standardized sample data.
7. The deep learning based weight detection method according to claim 1, 2, 4 or 5, wherein the method is applied to a smart cabinet, the smart cabinet is used for cooking food materials, and the smart cabinet has a corresponding recipe database;
and after determining the information of the target person, the method further comprises:
analyzing a face image of the target person included in the information of the target person to obtain attribute information of the target person, wherein the attribute information of the target person includes at least one of the gender of the target person, the age of the target person and the complexion of the target person;
and after determining the first analysis result as the weight information of the target person, the method further comprises:
and inquiring a recipe matched with the target information from the recipe database according to the target information of the target person, and outputting the recipe matched with the target information to the target person, wherein the target information of the target person comprises attribute information of the target person, height information of the target person and weight information of the target person.
8. A weight detection device based on deep learning, the device comprising:
the determining module is used for determining information of a target person, wherein the information of the target person comprises a face image of the target person and height information of the target person;
the analysis module is used for inputting the information of the target person into the determined weight analysis model for analysis to obtain a first analysis result;
and the acquisition module is used for determining the first analysis result as the weight information of the target person.
9. A weight detection device based on deep learning, the device comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the deep learning based weight detection method according to any one of claims 1 to 7.
10. A computer storage medium storing computer instructions which, when invoked, perform a method for weight detection based on deep learning according to any one of claims 1 to 7.
CN202011249528.4A 2020-11-10 2020-11-10 Weight detection method and device based on deep learning Pending CN112418025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011249528.4A CN112418025A (en) 2020-11-10 2020-11-10 Weight detection method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN112418025A true CN112418025A (en) 2021-02-26

Family

ID=74781752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011249528.4A Pending CN112418025A (en) 2020-11-10 2020-11-10 Weight detection method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN112418025A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103697820A (en) * 2013-12-17 2014-04-02 杭州华为数字技术有限公司 Method for measuring sizes based on terminal and terminal equipment
CN103902978A (en) * 2014-04-01 2014-07-02 浙江大学 Face detection and identification method
CN105286871A (en) * 2015-11-27 2016-02-03 西安交通大学 Video processing-based body height measurement method
JP2017041218A (en) * 2015-08-20 2017-02-23 仁一 石▲崎▼ System for estimating weight based on face image
CN107280118A (en) * 2016-03-30 2017-10-24 深圳市祈飞科技有限公司 A kind of Human Height information acquisition method and the fitting cabinet system using this method
CN108416253A (en) * 2018-01-17 2018-08-17 深圳天珑无线科技有限公司 Avoirdupois monitoring method, system and mobile terminal based on facial image
CN109033972A (en) * 2018-06-27 2018-12-18 上海数迹智能科技有限公司 A kind of object detection method, device, equipment and storage medium
CN109166614A (en) * 2018-08-14 2019-01-08 四川虹美智能科技有限公司 A kind of system and method for recommending personal health menu
CN110705421A (en) * 2019-09-25 2020-01-17 浙江鸿泉电子科技有限公司 Body type data processing method and device
CN110717391A (en) * 2019-09-05 2020-01-21 武汉亘星智能技术有限公司 Height measuring method, system, device and medium based on video image


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113405638A (en) * 2021-07-26 2021-09-17 成都睿畜电子科技有限公司 Mobile livestock weight measuring equipment and method based on visual identification technology
CN113591704A (en) * 2021-07-30 2021-11-02 四川大学 Body mass index estimation model training method and device and terminal equipment
CN113591704B (en) * 2021-07-30 2023-08-08 四川大学 Body mass index estimation model training method and device and terminal equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211029

Address after: 510663 501-2, Guangzheng science and Technology Industrial Park, No. 11, Nanyun fifth road, Science City, Huangpu District, Guangzhou, Guangdong Province

Applicant after: GUANGZHOU FUGANG LIFE INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 510000 501-1, Guangzheng science and Technology Industrial Park, No. 11, Nanyun 5th Road, Science City, Huangpu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU FUGANG WANJIA INTELLIGENT TECHNOLOGY Co.,Ltd.