CN111354463A - Human health measuring method, device, computer equipment and storage medium - Google Patents

Human health measuring method, device, computer equipment and storage medium

Info

Publication number
CN111354463A
CN111354463A (application number CN201811582450.0A; granted as CN111354463B)
Authority
CN
China
Prior art keywords
image
training
data
acquiring
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811582450.0A
Other languages
Chinese (zh)
Other versions
CN111354463B (en)
Inventor
沈忱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youpin International Science And Technology Shenzhen Co ltd
Original Assignee
Binke Puda Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Binke Puda Beijing Technology Co ltd
Priority to CN201811582450.0A
Publication of CN111354463A
Application granted
Publication of CN111354463B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Abstract

The invention discloses a human health measurement method and device, computer equipment and a storage medium, wherein the method comprises the following steps: detecting the legality of an original image to obtain a legal image; preprocessing the legal image to obtain a target image; recognizing the target image with a face detection algorithm to obtain a face image to be recognized; inputting the face image to be recognized into a constitution type recognition model to obtain the target constitution type corresponding to the face image to be recognized; acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from the original health recognition models based on the exercise frequency index and the target constitution type; and acquiring data to be recognized sent by a human body measurement device and recognizing the data to be recognized with the target health recognition model to obtain a health recognition result. By recognizing the human face image and taking the exercise frequency index into account, a relatively accurate target health recognition model is selected, which improves the accuracy of the health recognition result.

Description

Human health measuring method, device, computer equipment and storage medium
Technical Field
The invention relates to a human health measuring method, a human health measuring device, computer equipment and a storage medium.
Background
With the development of science and technology, the improvement of living standards, and the increasing importance people attach to health, body fat scales have entered thousands of households. A body fat scale can measure body components such as the user's weight, fat rate, muscle rate and body water, but an existing body fat scale only measures data and does not reflect the user's health condition well, so the user cannot intuitively and easily learn whether his or her body is healthy.
Disclosure of Invention
The embodiments of the invention provide a human health measurement method and device, computer equipment and a storage medium, so as to solve the problem that a user's physical health condition cannot be accurately determined from measured body data alone.
A human health measurement method, comprising:
acquiring an original image sent by a client, detecting the legality of the original image, and acquiring a legal image;
preprocessing the legal image to obtain a target image;
adopting a face detection algorithm to identify the target image, and acquiring a face image to be identified;
inputting the face image to be recognized into a constitution type recognition model, and acquiring a target constitution type corresponding to the face image to be recognized;
acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from original health recognition models based on the exercise frequency index and the target constitution type;
and acquiring data to be identified sent by the human body measuring equipment, and identifying the data to be identified by using the target health identification model to acquire a health identification result.
A human health measurement device comprising:
the image validity judging module is used for acquiring an original image sent by a client, detecting the validity of the original image and acquiring a valid image;
the image preprocessing module is used for preprocessing the legal image to obtain a target image;
the face detection module is used for identifying the target image by adopting a face detection algorithm to acquire a face image to be identified;
the target constitution type acquisition module is used for inputting the face image to be recognized into a constitution type recognition model and acquiring a target constitution type corresponding to the face image to be recognized;
the target health recognition model selection module is used for acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from the original health recognition models based on the exercise frequency index and the target constitution type;
and the health recognition result acquisition module is used for acquiring data to be recognized sent by the human body measuring equipment, recognizing the data to be recognized by using the target health recognition model and acquiring a health recognition result.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the above human health measurement method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the above-mentioned human health measurement method.
According to the human health measurement method and device, the computer equipment and the storage medium, the original image is acquired and subjected to legality checking, preprocessing and face detection to obtain the face image to be recognized, which provides an effective data source for subsequently obtaining the target constitution type from the face image to be recognized. After the face image to be recognized is obtained, it is input into the constitution type recognition model, which recognizes the skin color, age and gender of the face image and determines the target constitution type from them, improving the speed and accuracy with which the face image is recognized. After the target constitution type is determined, the target health recognition model is selected according to the exercise frequency index, so that a relatively accurate human health recognition result is obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a method for measuring human health according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for measuring human health according to an embodiment of the present invention;
FIG. 3 is another flow chart of a method for measuring human health according to an embodiment of the present invention;
FIG. 4 is another flow chart of a method for measuring human health in accordance with an embodiment of the present invention;
FIG. 5 is another flow chart of a method for measuring human health in accordance with an embodiment of the present invention;
FIG. 6 is a schematic view of a human health measuring device in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The human health measurement method provided by the embodiments of the invention is applied to a human health measurement APP. The APP can be installed on a user client, where the user client includes, but is not limited to, a personal computer, a notebook computer, a smartphone and a tablet computer. The APP consists of a client that the user downloads and installs and a server that processes the acquired data, and the client communicates with the server through a network.
In an embodiment, as shown in fig. 2, a human health measurement method is provided. The method is described by taking its application to the server in fig. 1 as an example, and includes the following steps:
s10: and acquiring an original image sent by the client, detecting the legality of the original image, and acquiring a legal image.
The original image refers to an image shot by a user through a mobile phone, a tablet computer, a camera or other terminal equipment. The legal image refers to an image which is detected from an original image and has no illegal contents.
Specifically, the server corresponding to the human health measurement APP acquires the original image through the terminal device. After the original image is acquired, an image security detection plug-in is called to detect whether the original image contains illegal content. The image security detection plug-in is a plug-in written by developers for detecting whether an image is legal. In this embodiment, the server uses the security detection plug-in to check the legality of the original image, so the check can be performed automatically without manual participation, which effectively improves the efficiency of the legality detection of the original image.
S20: and preprocessing the legal image to obtain a target image.
Specifically, because of illumination deviation, shadows, color differences and the like, an original image collected by a terminal device (such as a mobile phone camera) often suffers from uneven illumination and blurring. To make the image clearer, after the legality detection, the legal image needs to be preprocessed to obtain a target image. The target image is the legal image with the uneven illumination and blurring removed.
In this embodiment, the preprocessing of the legal image includes two steps of blur detection and illumination compensation.
The blur detection process is as follows: the legal image is converted to grayscale and convolved with a 3x3 Laplacian operator (which can be understood as a 3x3 matrix); the standard deviation of the convolved image is computed, for example with an std2()-style function, and squared to obtain the variance of the legal image; finally, the variance is compared with a preset threshold, i.e., a threshold set in advance for judging the variance of the legal image. If the variance of the legal image is greater than or equal to the preset threshold, the sharpness of the legal image is sufficient and the image does not need to be uploaded again. If the variance of the legal image is smaller than the preset threshold, the sharpness is insufficient, and the user is prompted to upload the image again until a sharper legal image is obtained, which improves the accuracy of the subsequent recognition of the target image.
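A minimal sketch of this blur check, written in Python with OpenCV and NumPy, is given below; the threshold of 100 and the helper name is_blurry are illustrative assumptions, since the patent does not fix the preset threshold.

```python
import cv2
import numpy as np

def is_blurry(image_bgr: np.ndarray, threshold: float = 100.0) -> bool:
    """Return True when the legal image should be re-uploaded as too blurry.

    The image is grayed, convolved with a 3x3 Laplacian operator, and the
    variance of the response (standard deviation squared) is compared with
    a preset threshold, following the blur-detection step described above.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    laplacian = cv2.Laplacian(gray, cv2.CV_64F)  # 3x3 aperture by default
    variance = laplacian.std() ** 2              # std2()-style standard deviation, squared
    return variance < threshold

# Usage sketch: prompt the user to re-upload when the check fails.
# image = cv2.imread("legal_image.jpg")
# if is_blurry(image):
#     print("Image too blurry, please upload again")
```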
The illumination compensation process is as follows: the average values avgR, avgG and avgB of the three color components (R, G, B) of the legal image are computed; the average gray value avgGray of the legal image is then computed and divided by avgR, avgG and avgB respectively to obtain the adjustment values aR, aG and aB of the three color components; the R, G and B values of every pixel are multiplied by the corresponding adjustment values aR, aG and aB to obtain the effective values cR, cG and cB; and finally the effective values are normalized. The normalization is performed as follows: the maximum pixel value in the legal image is selected and the normalization factor is computed as the maximum pixel value divided by 255; the effective values cR, cG and cB are then divided by the normalization factor to obtain the target image.
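A compact sketch of this gray-world style compensation follows, assuming the input is an 8-bit BGR image as returned by OpenCV; the function name compensate_illumination is an illustrative choice.

```python
import cv2
import numpy as np

def compensate_illumination(image_bgr: np.ndarray) -> np.ndarray:
    """Illumination compensation as described above (gray-world style)."""
    img = image_bgr.astype(np.float64)
    avg_b, avg_g, avg_r = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    avg_gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).mean()

    # Adjustment values aB, aG, aR = avgGray / per-channel average.
    adjust = np.array([avg_gray / avg_b, avg_gray / avg_g, avg_gray / avg_r])

    # Effective values cB, cG, cR = pixel value * adjustment value.
    effective = img * adjust

    # Normalization: divide by (maximum pixel value / 255) to pull values back into range.
    factor = effective.max() / 255.0
    target = effective / factor if factor > 0 else effective
    return np.clip(target, 0, 255).astype(np.uint8)
```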
S30: and identifying the target image by adopting a face detection algorithm to obtain a face image to be identified.
Specifically, after the target image is obtained, a face detection algorithm (for example, one provided by OpenCV) is used to detect the face in the target image and obtain the face image to be recognized. The face image to be recognized is the image, detected in the target image by the face detection algorithm, that contains only the face.
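As a concrete illustration, the face crop can be obtained with OpenCV's bundled Haar cascade detector; the patent only names "a face detection algorithm (such as opencv)", so the particular cascade file and detection parameters below are assumptions.

```python
import cv2

def crop_face(target_image_bgr):
    """Detect the largest face in the target image and return that crop, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the face image to be recognized.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return target_image_bgr[y:y + h, x:x + w]
```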
S40: and inputting the face image to be recognized into the constitution type recognition model, and acquiring the target constitution type corresponding to the face image to be recognized.
In this embodiment, a DCNN (Deep Convolutional Neural Network) is used as the constitution type recognition model.
The target constitution type is the constitution type determined from the skin color, age and gender of the face image to be recognized. The constitution types cover the combinations of race (white, black and yellow), age group (young, middle-aged and elderly) and gender (male and female), for example the white young male type, the white middle-aged female type, the black elderly male type and the yellow young female type.
S50: Acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from the original health recognition models based on the exercise frequency index and the target constitution type.
The health indexes of people who exercise frequently differ from those of people who do not. For example, athletes and other people who exercise regularly have well-developed hearts, so the amount of blood pumped per heartbeat is relatively larger, and their resting heart rate is lower than that of ordinary people. Studies have shown that the average adult heart rate is about 75 beats per minute, with a normal range of roughly 60 to 100 beats per minute, while the resting heart rate of athletes can be below 60 beats per minute.
In order to identify the physical health condition of people of different constitution types more accurately, the original health recognition models in this embodiment are divided by race (white, yellow and black), age group (young, middle-aged and elderly), gender (male and female) and exercise frequency (exercises frequently or does not exercise frequently).
Specifically, after the target constitution type is obtained, the user enters the exercise frequency index on the client corresponding to the human health measurement APP, and the client sends it to the server corresponding to the APP. The server then selects, from the original health recognition models, the target health recognition model corresponding to the exercise frequency index and the target constitution type, so that the subsequent recognition with the target health recognition model is more accurate.
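One straightforward way to realize this selection step is a lookup keyed by constitution type and exercise frequency; the key strings and placeholder values below are assumptions made only for illustration, and in practice each value would be one of the models trained in steps S601 to S604.

```python
# original_models maps (constitution_type, exercises_frequently) to a trained model.
original_models = {
    ("yellow_young_male", True): "forest_yellow_young_male_athletic",   # placeholder
    ("yellow_young_male", False): "forest_yellow_young_male_ordinary",  # placeholder
    # ... one entry per constitution type / exercise frequency combination (36 in total)
}

def select_target_model(constitution_type: str, exercises_frequently: bool):
    """Pick the target health recognition model for this user (step S50)."""
    return original_models[(constitution_type, exercises_frequently)]

print(select_target_model("yellow_young_male", True))
```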
S60: and acquiring data to be recognized sent by the human body measuring equipment, and recognizing the data to be recognized by using the target health recognition model to acquire a health recognition result.
The data to be recognized is the measured human body data sent by the human body measurement device; in this embodiment the human body measurement device is described by taking a body fat scale as an example. The anthropometric data obtained from the body fat scale include, but are not limited to, basal metabolic rate, visceral fat index, muscle storage capacity rating, fat rate, muscle rate, bone mass, protein and body water.
After the target health recognition model is determined, the server corresponding to the human health measurement APP sends a data acquisition instruction to the human body measurement device; after receiving the instruction, the device sends the data to be recognized to the server based on the instruction. After acquiring the data to be recognized sent by the human body measurement device, the server inputs it into the target health recognition model for recognition and obtains the corresponding health recognition result. The health recognition result is either healthy or unhealthy.
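A sketch of this final recognition step is shown below, assuming the target model is a fitted scikit-learn style classifier, that the eight scale attributes arrive in a fixed order, and that label 1 means healthy; all of these are assumptions, not details fixed by the patent.

```python
import numpy as np

# Assumed fixed order for the body-fat-scale attributes listed above.
FEATURES = ["basal_metabolic_rate", "visceral_fat_index", "muscle_storage_rating",
            "fat_rate", "muscle_rate", "bone_mass", "protein", "body_water"]

def recognize_health(target_model, measurement: dict) -> str:
    """Run the target health recognition model on one set of scale readings."""
    x = np.array([[measurement[name] for name in FEATURES]])
    label = target_model.predict(x)[0]   # e.g. a fitted RandomForestClassifier
    return "healthy" if label == 1 else "unhealthy"
```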
In steps S10 to S60, the original image is acquired and subjected to legality checking, preprocessing and face detection to obtain the face image to be recognized, which provides an effective data source for subsequently obtaining the target constitution type from the face image to be recognized. After the face image to be recognized is obtained, it is input into the constitution type recognition model, which recognizes the skin color, age and gender of the face image and determines the target constitution type from them, improving the speed and accuracy with which the face image is recognized. After the target constitution type is determined, the target health recognition model is selected according to the exercise frequency index, so that a relatively accurate human health recognition result is obtained.
In an embodiment, before the original image sent by the client is acquired and its legality is detected in step S10, the human health measurement method further includes training the constitution type recognition model in advance, so that step S40 can use the constitution type recognition model to recognize the face image to be recognized and obtain the corresponding target constitution type. As shown in fig. 3, the training specifically includes the following steps:
S401: Acquiring training samples and dividing the training samples into a training set and a test set, wherein the training samples comprise face images and each face image carries a corresponding image label.
Specifically, training samples are acquired; the training samples include face images of different ages, skin colors and genders, and each face image carries a corresponding image label. The image label in this embodiment includes race, gender and age, for example a yellow young female.
Further, after the training samples are acquired, they are divided into a training set and a test set. The training set is the data set formed by the training samples used to train the DCNN (Deep Convolutional Neural Network); the test set is the data set formed by the training samples used to test the accuracy of the trained DCNN.
S402: and inputting the training set into the DCNN for training, and acquiring the output of the DCNN to the face images in the training set.
After the face images are acquired, they are input into the DCNN for training. The DCNN in this embodiment contains hidden layers corresponding to three channels (branches), which are trained respectively on the race, gender and age corresponding to the face image.
Specifically, after the face image is input into the DCNN through the input layer, the hidden layer in the DCNN processes the face image to obtain an output of the hidden layer, and then the output of the hidden layer is input into the output layer to obtain the output of the DCNN on the face image.
S403: and constructing a loss function based on the output of the face image and the image label corresponding to the face image, updating the weight and the bias of the DCNN based on the loss function, and obtaining the updated DCNN.
After the DCNN's output for the face image is obtained, a loss function is constructed based on that output and the image label corresponding to the face image. The partial derivatives of the loss function are then computed to update the weights and biases of the DCNN; by continuously adjusting the weights and biases, the DCNN fits the recognition of the face images better and better as training continues.
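Below is a minimal sketch of such a three-branch convolutional network and its combined loss, written in PyTorch; the layer sizes, class counts (3 races, 2 genders, 3 age groups) and the plain sum of the three cross-entropy terms are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class ConstitutionDCNN(nn.Module):
    """Shared convolutional trunk with three output heads: race, gender, age group."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.race_head = nn.Linear(32, 3)    # white / black / yellow
        self.gender_head = nn.Linear(32, 2)  # male / female
        self.age_head = nn.Linear(32, 3)     # young / middle-aged / elderly

    def forward(self, x):
        h = self.trunk(x)
        return self.race_head(h), self.gender_head(h), self.age_head(h)

model = ConstitutionDCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on a random batch of 64x64 face crops.
images = torch.randn(8, 3, 64, 64)
race_y = torch.randint(0, 3, (8,))
gender_y = torch.randint(0, 2, (8,))
age_y = torch.randint(0, 3, (8,))

race_p, gender_p, age_p = model(images)
loss = criterion(race_p, race_y) + criterion(gender_p, gender_y) + criterion(age_p, age_y)
optimizer.zero_grad()
loss.backward()    # partial derivatives of the loss
optimizer.step()   # update the weights and biases
```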
S404: and detecting the updated DCNN by using a test set to obtain a physique type identification model.
After updating the weight and the bias in the DCNN, inputting the test set into the updated DCNN to obtain the identification probability corresponding to the test set, and when the identification probability reaches (i.e. is greater than or equal to) a preset probability value, considering the weight and the bias in the updated DCNN as parameters meeting the conditions, and calling the updated DCNN as a constitution type identification model. Wherein, the identification probability is the probability obtained by the updated DCNN identifying the test set.
Further, if the recognition probability of the updated DCNN on the test set does not reach (i.e., is smaller than) the preset probability value, the weight and the bias in the updated DCNN are considered to be not compliant with the conditions, and the training set is continuously obtained to train the updated DCNN until the test on the DCNN by using the test set reaches the preset conditions.
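The acceptance test of step S404 can be expressed as a simple accuracy check over the test set; the 0.9 threshold below stands in for the patent's unspecified preset probability value, and counting a sample as correct only when all three heads are right is an assumption.

```python
import torch

def passes_test(model, test_images, test_race, test_gender, test_age, threshold=0.9):
    """Return True when the updated DCNN reaches the preset recognition probability."""
    model.eval()
    with torch.no_grad():
        race_p, gender_p, age_p = model(test_images)
        correct = ((race_p.argmax(1) == test_race) &
                   (gender_p.argmax(1) == test_gender) &
                   (age_p.argmax(1) == test_age)).float().mean().item()
    return correct >= threshold  # otherwise keep training on the training set
```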
Step S401-step S404, the training samples are obtained and divided into a training set and a testing set for training and testing the DCNN, and the physique type recognition model is obtained, so that the accuracy of the obtained physique type recognition model is higher.
In an embodiment, before the legal image is acquired in step S10, the human health measurement method further includes training the original health recognition models in advance, so that step S60 can obtain the human body measurement result with a pre-trained original health recognition model. As shown in fig. 4, the training specifically includes the following steps:
S601: And acquiring training samples, dividing the training samples according to a data division rule, and acquiring effective training data corresponding to different data types, wherein the effective training data carries original labels.
In order to ensure consistent and accurate training, the training samples in this embodiment are the same as the training samples in step S401. The data division rule is the rule used to divide the original training data, and it covers race, age, gender and exercise frequency, where exercise frequency distinguishes people who exercise frequently from people who do not.
Specifically, after the original training data is obtained, it is divided based on the data division rule into effective training data of 36 data types: for each of the three races (yellow, black and white), the data is split by age group (young, middle-aged and elderly), gender (male and female) and exercise habit (ordinary and athletic), for example the young ordinary male type, the young ordinary female type, the middle-aged athletic male type and the elderly athletic female type. This division facilitates the subsequent steps. After the effective training data is obtained, a corresponding original label is added to each piece of effective training data; the original label is either healthy or unhealthy.
S602: and randomly extracting K sample sets from the effective training data of each data type, and generating a random forest corresponding to each data type based on the K sample sets.
Specifically, after effective training data of different data types are obtained, K sample sets are randomly extracted from the effective training data of each data type, and a random forest corresponding to each data type is generated based on the K sample sets.
Taking one data type as an example, part of the effective training data of that data type is randomly extracted and divided into K sample sets; each sample set corresponds to one decision tree, so the K sample sets generate K decision trees that form a random forest. The effective training data in each sample set contains M attributes. The attributes in this embodiment include, but are not limited to, basal metabolic rate, visceral fat index, muscle storage capacity rating, fat rate, muscle rate, bone mass, protein and body water.
Generating the corresponding random forest based on the K sample sets comprises the following steps: (1) for each sample set, m attributes are chosen from the M attributes (typically m is approximately the square root of M), and one of the m attributes is selected as the split point for constructing the first decision tree according to some strategy (including but not limited to information gain, information gain ratio and the Gini index); (2) step (1) is repeated on the remaining m-1 attributes until no further split can be made (if the attribute selected for a node is the attribute already used when its parent node was split, the node becomes a leaf node, which indicates that splitting does not need to continue), yielding one decision tree; (3) the generated decision trees together form the random forest.
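As a hedged sketch, the per-type forest can be built with scikit-learn's RandomForestClassifier, which already implements the bootstrap sample sets and the roughly sqrt(M)-sized attribute subsets described above; the 80/20 split between extracted and remaining data and the label convention (1 = healthy, 0 = unhealthy) are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_type_forest(features: np.ndarray, labels: np.ndarray, k_trees: int = 100):
    """Train the random forest for one data type from its effective training data.

    features: shape (n_samples, M) with the M scale attributes (fat rate, muscle rate, ...).
    labels:   original labels, 1 = healthy, 0 = unhealthy.
    Returns the fitted forest plus the remaining ("not extracted") portion of the data.
    """
    rng = np.random.default_rng(0)
    extracted = rng.random(len(features)) < 0.8  # portion used to build the K sample sets
    forest = RandomForestClassifier(
        n_estimators=k_trees,     # K decision trees
        max_features="sqrt",      # m ~ sqrt(M) attributes considered per split
        bootstrap=True)           # each tree sees its own sample set
    forest.fit(features[extracted], labels[extracted])
    return forest, features[~extracted], labels[~extracted]
```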
S603: inputting the valid measurement data which are not extracted from each data type into a random forest, and taking the output result of the random forest as a training label.
Specifically, after the K sample sets have been extracted from the effective measurement data of each data type, the remaining effective measurement data of that data type, i.e., the data that was not extracted, is input into the random forest. The random forest takes a vote over the output results of its decision trees and uses the result with the most votes as the output of the forest. The output result of the random forest is then used as the training label of the effective measurement data. Obtaining the training labels through the random forest improves the speed of data training. For example, if the random forest contains 100 decision trees, 80 of which classify a sample as healthy and 20 as unhealthy, the voted output of the random forest is healthy, and this result is the training label that the random forest assigns to the effective measurement data.
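Continuing the sketch above, labeling the remaining data and comparing the training labels with the original labels (steps S603 and S604) reduces to a prediction plus an agreement check; requiring an exact match of every label is a literal reading of "when the training labels are matched with the original labels" and is an assumption.

```python
def accept_as_original_model(forest, remaining_features, original_labels) -> bool:
    """Label the non-extracted data by the forest's majority vote and compare with the original labels."""
    training_labels = forest.predict(remaining_features)  # majority vote over the K trees
    return bool((training_labels == original_labels).all())

# Usage sketch, following train_type_forest above:
# forest, X_rest, y_rest = train_type_forest(features, labels)
# if accept_as_original_model(forest, X_rest, y_rest):
#     original_health_models[data_type] = forest   # hypothetical registry of models
```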
S604: and comparing the training labels with the original labels, and when the training labels are matched with the original labels, taking the random forest as an original health recognition model.
After the training labels are obtained, the training labels are compared with the original labels, when the training labels are matched with the original labels, the training of the random forest is successful, and the random forest can be used as an original health recognition model.
Step S601-step S604, the original training data is divided into effective training data of different data types through the data division rule, and therefore random forests of different data types can be generated conveniently according to the effective training data. The random forest is used as an original health recognition model, so that the attribute in the effective training data can be rapidly split, manual intervention is not needed, and the splitting efficiency is improved. Different original health recognition models are trained for different data types, so that the subsequently acquired human body measurement result is more accurate.
In an embodiment, before the step of acquiring the data to be identified sent by the human body measurement device in step S60, it is further required to check in advance whether the human body measurement device can perform normal communication with the health measurement APP, as shown in fig. 5, the human body health measurement method further includes:
S61: And detecting whether the access of the human body measuring equipment is successful, and if the access is successful, sending a data acquisition instruction.
Specifically, before the step of acquiring the data to be recognized sent by the human body measurement device, the health measurement APP needs to detect whether the device has been connected successfully; a successful connection indicates that the body fat scale can communicate normally with the health measurement APP. After the health measurement APP receives a health request, it detects whether the human body measurement device has been successfully connected to the smartphone, and if so, it sends a data acquisition instruction to the device. The data acquisition instruction is the instruction sent by the health measurement APP to the human body measurement device to acquire data.
S62: and acquiring the data to be identified sent by the human body measuring equipment based on the data acquisition instruction.
Specifically, after the body measurement device obtains the data acquisition instruction, the body measurement device sends the measured data to be identified to the server corresponding to the health measurement APP, so that the health measurement APP executes the subsequent steps according to the data to be identified.
In steps S61 and S62, detecting whether the measurement device is successfully connected to the smartphone on which the health measurement APP runs ensures that the human body measurement device and the health measurement APP can communicate normally and transmit data, providing a data transmission channel for acquiring the data to be recognized sent by the human body measurement device.
According to the human health measurement method provided by the invention, the original image is subjected to legality checking, preprocessing and face detection to obtain the face image to be recognized, which provides an effective data source for subsequently obtaining the target constitution type from the face image to be recognized. After the face image to be recognized is obtained, it is input into the constitution type recognition model, which recognizes the skin color, age and gender of the face image and determines the target constitution type from them, improving the speed and accuracy with which the face image is recognized. After the target constitution type is determined, the target health recognition model is selected according to the exercise frequency index, so that a relatively accurate human health recognition result is obtained.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, a human health measuring device is provided, which corresponds to the human health measuring method in the above embodiments one to one. As shown in fig. 6, the human health measuring apparatus includes an image validity judging module 10, an image preprocessing module 20, a face detecting module 30, a target body constitution type obtaining module 40, a target health recognition model selecting module 50 and a health recognition result obtaining module 60. The functional modules are explained in detail as follows:
the image validity judging module 10 is configured to acquire an original image sent by a client, detect validity of the original image, and acquire a valid image.
And the image preprocessing module 20 is configured to preprocess the legal image to obtain a target image.
And the face detection module 30 is configured to identify the target image by using a face detection algorithm, and acquire a face image to be identified.
And the target constitution type obtaining module 40 is used for inputting the face image to be recognized into the constitution type recognition model and obtaining the target constitution type corresponding to the face image to be recognized.
And the target health recognition model selecting module 50 is used for acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from the original health recognition models based on the exercise frequency index and the target constitution type.
And the health recognition result acquisition module 60 is configured to acquire the data to be recognized sent by the anthropometric device, and recognize the data to be recognized by using the target health recognition model to acquire a health recognition result.
Furthermore, the human health measuring device also comprises a first training sample processing unit, a model training unit, a model parameter updating unit and a constitution type identification model obtaining unit.
The first training sample processing unit is used for acquiring training samples, dividing the training samples into a training set and a testing set, wherein the training samples comprise face images, and each face image carries a corresponding image label.
And the model training unit is used for inputting the training set into the DCNN for training and acquiring the output of the DCNN on the face images in the training set.
And the model parameter updating unit is used for constructing a loss function based on the output of the face image and the image label corresponding to the face image, updating the weight and the bias of the DCNN based on the loss function, and acquiring the updated DCNN.
And the physique type identification model acquisition unit is used for detecting the updated DCNN by using the test set to acquire the physique type identification model.
Furthermore, the human health measuring device also comprises a second training sample processing unit, a random forest generating unit, a training label obtaining unit and an original health recognition model obtaining unit.
And the second training sample processing unit is used for acquiring training samples, dividing the training samples according to the data division rule and acquiring effective training data corresponding to different data types, wherein the effective training data carries the original label.
And the random forest generation unit is used for randomly extracting K sample sets from the effective training data of each data type and generating a random forest corresponding to each data type based on the K sample sets.
And the training label acquisition unit is used for inputting the effective measurement data which are not extracted in each data type into the random forest and taking the output result of the random forest as a training label.
And the original health recognition model obtaining unit is used for comparing the training label with the original label, and when the training label is matched with the original label, the random forest is used as the original health recognition model.
Further, the data partitioning rule in the second training sample processing unit includes race, age, sex, and exercise frequency.
The data types comprise, corresponding to each of the yellow, black and white races, the young ordinary male, young ordinary female, middle-aged ordinary male, middle-aged ordinary female, elderly ordinary male, elderly ordinary female, young athletic male, young athletic female, middle-aged athletic male, middle-aged athletic female, elderly athletic male and elderly athletic female data types.
Further, the human health measuring device also comprises a measuring equipment detecting unit and a data acquiring unit to be identified.
And the measuring equipment detection unit is used for detecting whether the human body measuring equipment is successfully accessed, and if the human body measuring equipment is successfully accessed, sending a data acquisition instruction.
And the data to be identified acquiring unit is used for acquiring the data to be identified sent by the human body measuring equipment based on the data acquiring instruction.
For specific limitations of the human health measuring device, reference may be made to the above limitations of the human health measuring method, which are not described herein again. The modules in the human health measuring device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the human health measurement method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of human health measurement.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method for measuring human health in the foregoing embodiments is implemented, for example, steps S10 to S60 shown in fig. 2, or steps shown in fig. 3 to 5, which are not repeated herein to avoid repetition. Alternatively, the processor implements the functions of the modules/units in the embodiment of the human health measuring apparatus when executing the computer program, for example, the functions of the image validity determining module 10, the image preprocessing module 20, the face detecting module 30, the target physique type obtaining module 40, the target health recognition model selecting module 50, and the health recognition result obtaining module 60 shown in fig. 6, and are not repeated here to avoid repetition.
In an embodiment, a computer-readable storage medium is provided, and a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the method for measuring human health in the foregoing embodiments, for example, steps S10 to S60 shown in fig. 2, or steps shown in fig. 3 to fig. 5, which are not repeated herein for avoiding repetition. Alternatively, the computer program, when executed by the processor, implements the functions of the modules/units in the embodiment of the human health measuring apparatus, such as the functions of the image validity determining module 10, the image preprocessing module 20, the face detecting module 30, the target physique type obtaining module 40, the target health recognition model selecting module 50, and the health recognition result obtaining module 60 shown in fig. 6, which are not repeated herein for avoiding repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of measuring human health, comprising:
acquiring an original image sent by a client, detecting the legality of the original image, and acquiring a legal image;
preprocessing the legal image to obtain a target image;
adopting a face detection algorithm to identify the target image, and acquiring a face image to be identified;
inputting the face image to be recognized into a constitution type recognition model, and acquiring a target constitution type corresponding to the face image to be recognized;
acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from original health recognition models based on the exercise frequency index and the target constitution type;
and acquiring data to be identified sent by the human body measuring equipment, and identifying the data to be identified by using the target health identification model to acquire a health identification result.
2. The human health measurement method of claim 1, wherein, before the acquiring of the original image sent by the client and the detecting of the legality of the original image, the human health measurement method further comprises:
acquiring training samples, and dividing the training samples into a training set and a test set, wherein the training samples comprise face images, and each face image carries a corresponding image label;
inputting the training set into a DCNN for training, and acquiring the output of the DCNN to the face images in the training set;
constructing a loss function based on the output of the face image and an image label corresponding to the face image, updating the weight and the bias of the DCNN based on the loss function, and obtaining the updated DCNN;
and detecting the updated DCNN by using the test set to obtain a constitution type identification model.
3. The human health measurement method of claim 1, wherein, before the acquiring of the original image sent by the client and the detecting of the legality of the original image, the human health measurement method further comprises:
acquiring the training samples, dividing the training samples according to a data division rule, and acquiring effective training data corresponding to different data types, wherein the effective training data carries original labels;
randomly extracting K sample sets from the effective training data of each data type, and generating a random forest corresponding to each data type based on the K sample sets;
inputting the effective measurement data which are not extracted from each data type into a random forest, and taking the output result of the random forest as a training label;
and comparing the training label with the original label, and when the training label is matched with the original label, taking the random forest as an original health recognition model.
4. The human health measurement method of claim 3, wherein the data division rule includes race, age, sex, and exercise frequency;
the data types comprise, corresponding to each of the yellow, black and white races, the young ordinary male, young ordinary female, middle-aged ordinary male, middle-aged ordinary female, elderly ordinary male, elderly ordinary female, young athletic male, young athletic female, middle-aged athletic male, middle-aged athletic female, elderly athletic male and elderly athletic female data types.
5. The method for measuring human health according to claim 1, wherein the acquiring data to be identified transmitted from the human body measuring device comprises:
detecting whether the access of the human body measuring equipment is successful, and if the access is successful, sending a data acquisition instruction;
and acquiring the data to be identified sent by the body measurement equipment based on the data acquisition instruction.
6. A human health measurement device, comprising:
the image validity judging module is used for acquiring an original image sent by a client, detecting the validity of the original image and acquiring a valid image;
the image preprocessing module is used for preprocessing the legal image to obtain a target image;
the face detection module is used for identifying the target image by adopting a face detection algorithm to acquire a face image to be identified;
the target constitution type acquisition module is used for inputting the face image to be recognized into a constitution type recognition model and acquiring a target constitution type corresponding to the face image to be recognized;
the target health recognition model selection module is used for acquiring an exercise frequency index, and selecting a target health recognition model corresponding to the target constitution type from the original health recognition models based on the exercise frequency index and the target constitution type;
and the health recognition result acquisition module is used for acquiring data to be recognized sent by the human body measuring equipment, recognizing the data to be recognized by using the target health recognition model and acquiring a health recognition result.
7. The human health measurement device of claim 6, further comprising:
the system comprises a first training sample processing unit, a second training sample processing unit and a third training sample processing unit, wherein the first training sample processing unit is used for acquiring a training sample, and dividing the training sample into a training set and a testing set, the training sample comprises face images, and each face image carries a corresponding image label;
the model training unit is used for inputting the training set into the DCNN for training and acquiring the output of the DCNN on the face images in the training set;
the model parameter updating unit is used for constructing a loss function based on the output of the face image and the image label corresponding to the face image, updating the weight and the bias of the DCNN based on the loss function and acquiring the updated DCNN;
and the physique type identification model acquisition unit is used for detecting the updated DCNN by using the test set to acquire the physique type identification model.
8. The human health measurement device of claim 6, further comprising:
the second training sample processing unit is used for acquiring the training samples, dividing the training samples according to a data division rule and acquiring effective training data corresponding to different data types, wherein the effective training data carries original labels;
the random forest generating unit is used for randomly extracting K sample sets from the effective training data of each data type and generating a random forest corresponding to each data type based on the K sample sets;
the training label acquisition unit is used for inputting the effective measurement data which are not extracted in each data type into a random forest and taking the output result of the random forest as a training label;
and the original health recognition model obtaining unit is used for comparing the training label with the original label, and when the training label is matched with the original label, the random forest is used as an original health recognition model.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of measuring human health as claimed in any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the method of measuring the health of a person according to any one of claims 1 to 5.
CN201811582450.0A (priority date 2018-12-24, filing date 2018-12-24): Human health measurement method, device, computer equipment and storage medium. Status: Active. Granted as CN111354463B.

Priority Applications (1)

Application Number: CN201811582450.0A (CN111354463B) · Priority Date: 2018-12-24 · Filing Date: 2018-12-24 · Title: Human health measurement method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN201811582450.0A (CN111354463B) · Priority Date: 2018-12-24 · Filing Date: 2018-12-24 · Title: Human health measurement method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111354463A 2020-06-30
CN111354463B 2023-11-14

Family ID=71193778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811582450.0A Active CN111354463B (en) 2018-12-24 2018-12-24 Human health measurement method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111354463B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090043613A1 (en) * 2006-06-29 2009-02-12 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Generating output data based on patient monitoring
US20080294018A1 (en) * 2007-05-22 2008-11-27 Kurtz Andrew F Privacy management for well-being monitoring
US20080294016A1 (en) * 2007-05-22 2008-11-27 Gobeyn Kevin M Establishing baseline data for physiological monitoring system
KR20140099362A (en) * 2013-02-01 2014-08-12 남궁용주 security system and method for electronic health record using biometric
US20150157243A1 (en) * 2013-12-11 2015-06-11 Korea Institute Of Oriental Medicine Health state determining method and apparatus using facial image
US20160180050A1 (en) * 2014-10-28 2016-06-23 Tapgenes, Inc. Methods for determining health risks
CN105808903A (en) * 2014-12-29 2016-07-27 中兴通讯股份有限公司 Health report generation method and apparatus
CN104657816A (en) * 2015-01-26 2015-05-27 合肥博谐电子科技有限公司 Intelligent human health evaluation and promotion service management system
US20170300655A1 (en) * 2016-04-19 2017-10-19 Vivametrica Ltd. Apparatus and methodologies for personal health analysis
US20180113986A1 (en) * 2016-10-20 2018-04-26 Jiping Zhu Method and system for quantitative classification of health conditions via a mobile health monitor and application thereof
CN108236454A (en) * 2016-12-26 2018-07-03 阿里巴巴集团控股有限公司 Health measurement collecting method and electronic equipment
CN106821352A (en) * 2017-03-14 2017-06-13 广州视源电子科技股份有限公司 A kind of health status reminding method and device
CN107863153A (en) * 2017-11-24 2018-03-30 中南大学 A kind of human health characteristic modeling measuring method and platform based on intelligent big data
CN107967941A (en) * 2017-11-24 2018-04-27 中南大学 A kind of unmanned plane health monitoring method and system based on intelligent vision reconstruct
CN113593693A (en) * 2021-05-25 2021-11-02 水欢怡 Remote health management platform

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HEIKKI AILISTO, et al.: "Soft biometrics-combining body weight and fat measurements with fingerprint biometrics", Pattern Recognition Letters, vol. 27, no. 5, pages 325-334 *
YAN DONG LI, et al.: "Influence of sex and body mass index on facial soft tissue thickness measurements of the northern Chinese adult population", Forensic Science International, vol. 222, no. 1, pages 1-396 *
段豪: "Research and Implementation of a Health Monitoring Application Based on Android", China Master's Theses Full-text Database, Information Science and Technology, no. 3, pages 138-1296 *
汪呈智: "Research and Design of Health Monitoring Technology Based on Multiple Sensors", China Master's Theses Full-text Database, Information Science and Technology, no. 1, pages 136-2246 *
阳俊: "Multi-Source Data Sensing and Analysis for Health Monitoring", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 5, pages 140-18 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516002A (en) * 2021-03-05 2021-10-19 武汉特斯联智能工程有限公司 Face recognition method and device based on face recognition model and applying smart community
CN113610120A (en) * 2021-07-21 2021-11-05 燕山大学 App image content safety detection method based on weak supervised learning
CN113610120B (en) * 2021-07-21 2023-09-29 燕山大学 App image content safety detection method based on weak supervision learning
CN116487050A (en) * 2023-06-21 2023-07-25 深圳市万佳安智能科技有限公司 Human health monitoring method, device and computer equipment
CN116487050B (en) * 2023-06-21 2023-12-22 深圳市万佳安智能科技有限公司 Human health monitoring method, device and computer equipment

Also Published As

Publication number Publication date
CN111354463B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN109472213B (en) Palm print recognition method and device, computer equipment and storage medium
CN111354463B (en) Human health measurement method, device, computer equipment and storage medium
WO2020220545A1 (en) Long short-term memory model-based disease prediction method and apparatus, and computer device
CN111967465A (en) Method, system, computer device and storage medium for evaluating tumor cell content
CN111862044A (en) Ultrasonic image processing method and device, computer equipment and storage medium
CN110796161A (en) Recognition model training method, recognition device, recognition equipment and recognition medium for eye ground characteristics
CN110781976B (en) Extension method of training image, training method and related device
EP3780000A1 (en) Beauty counseling information providing device and beauty counseling information providing method
CN108197592B (en) Information acquisition method and device
US20220164852A1 (en) Digital Imaging and Learning Systems and Methods for Analyzing Pixel Data of an Image of a Hair Region of a User's Head to Generate One or More User-Specific Recommendations
CN110751171A (en) Image data classification method and device, computer equipment and storage medium
CN111028218A (en) Method and device for training fundus image quality judgment model and computer equipment
CN111666890A (en) Spine deformation crowd identification method and device, computer equipment and storage medium
CN113705685A (en) Disease feature recognition model training method, disease feature recognition device and disease feature recognition equipment
CN108596094B (en) Character style detection system, method, terminal and medium
CN110600133A (en) Health monitoring method and equipment for old people in smart community and readable storage medium
CN110298684B (en) Vehicle type matching method and device, computer equipment and storage medium
CN110334575B (en) Fundus picture recognition method, device, equipment and storage medium
CN114881943B (en) Brain age prediction method, device, equipment and storage medium based on artificial intelligence
CN110688875B (en) Face quality evaluation network training method, face quality evaluation method and device
CN110866146A (en) Video recommendation method and device, computer equipment and storage medium
WO2022246398A1 (en) Artificial intelligence based systems and methods for analyzing user-specific skin or hair data to predict user-specific skin or hair conditions
CN110909566A (en) Health analysis method, mobile terminal and computer-readable storage medium
US11798268B2 (en) Method for improving reliability of artificial intelligence-based object recognition using collective intelligence-based mutual verification
CN114519729A (en) Image registration quality evaluation model training method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20231008
Address after: Room 206, Building 4, Software Industry Base, No. 19, 17, 18, Haitian 1st Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong, 518000
Applicant after: Youpin International Science and Technology (Shenzhen) Co.,Ltd.
Address before: No.312, block C, 28 xinjiekouwai street, Xicheng District, Beijing
Applicant before: BINKE PUDA (BEIJING) TECHNOLOGY Co.,Ltd.
GR01 Patent grant