CN112086193A - Face recognition health prediction system and method based on Internet of things - Google Patents
- Publication number
- CN112086193A (application number CN202010960417.8A)
- Authority
- CN
- China
- Prior art keywords
- face
- unit
- user
- module
- feature points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16H50/30 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for calculating health indices; for individual health risk assessment
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06V40/161 — Human faces: detection; localisation; normalisation
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/172 — Human faces: classification, e.g. identification
Abstract
The invention relates to the field of the Internet of Things, and in particular discloses a face recognition health prediction system and method based on the Internet of Things. The system comprises a central hub recognition module, a face pre-detection module, a diagnosis prediction module, an expert diagnosis module and a platform processing unit. The central hub recognition module divides and confirms the face of a user; the face pre-detection module divides feature points on the face; the diagnosis prediction module fits and preliminarily judges the data that change at each facial feature point; and the expert diagnosis module makes the final judgment on the feature-point fitting function and the photographed pictures. The system is scientific and reasonable: the central hub recognition module stores the user's face in a database together with the user's basic information to facilitate the next recognition, and the face pre-detection module can repair blurred areas in the user's photograph so that the platform processing unit judges more accurately.
Description
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a face recognition health prediction system and method based on the Internet of Things.
Background
As people's living standards gradually improve, health problems follow them like a shadow; neglect of bodily care leads to the deterioration of certain parts of the human body.
At present, to take responsibility for their employees' health, many companies arrange physical examinations to prevent problems from developing. However, the examination period is long, a single physical examination is costly, and the examination itself is time-consuming. In traditional Chinese medicine, doctors judge an illness through inspection, listening and smelling, inquiry, and palpation. In the study of inspection, doctors learn to observe a patient's surface features to form a basic judgment of the illness, and inspection is also the most basic diagnostic means in traditional Chinese medicine. Investigation shows that whether a person is healthy can be identified from the features of the face, because the five sense organs are closely connected with the internal organs: when an internal organ is damaged, the effect appears in the five sense organs and is displayed on the face, thereby attracting attention.
In the traditional face detection method, a person's mood at a given moment is judged from facial emotion, and from this it is inferred whether the person is fatigued. Such a method of judging fatigue is not scientific, because emotion is short-lived, whereas facial feature points can be monitored over a long period; judging from changes in facial feature points is therefore scientific and reasonable.
Therefore, a face recognition health prediction system and method based on the Internet of Things are needed to solve the above problems.
Disclosure of Invention
The invention aims to provide a face recognition health prediction system and method based on the Internet of things, and aims to solve the problems in the background technology.
In order to solve the above technical problems, the invention provides the following technical scheme: a face recognition health prediction system based on the Internet of Things comprises a central hub recognition module, a face pre-detection module, a diagnosis prediction module, an expert diagnosis module and a platform processing unit. The central hub recognition module divides and confirms the face of a user so that each part of the face can be detected separately, improving the practicability of detection. The face pre-detection module divides feature points on the face and judges whether the feature points are displaced or otherwise changed, so that the health state of the user can be judged scientifically and rigorously. The diagnosis prediction module fits and preliminarily judges the data that change at each facial feature point, making the judgment data more detailed and comprehensive and the judgment result more accurate. The expert diagnosis module makes the final judgment on the feature-point fitting function and the photographed pictures, and the platform processing unit feeds each analysis result back to the platform. The central hub recognition module is connected with the face pre-detection module, the face pre-detection module is connected with the diagnosis prediction module, and the expert diagnosis module is connected with the diagnosis prediction module.
The central hub recognition module comprises a basic verification unit, a part division unit, a part recognition unit and a photo comparison unit. The basic verification unit detects the user's various information so that the platform can grasp the user's basic information and know which facial regions the user wishes to detect. The part recognition unit recognises the user's facial parts so that the platform can grasp the user's facial form; the facial form is stored in a database so that it can be retrieved promptly when the user is recognised again and used as the basis for the next diagnosis. The part division unit divides the user's facial parts into regions, and the photo comparison unit compares a recently taken photo of the user with the currently taken photo. The output of the basic verification unit is connected with the input of the part division unit, and the output of the part division unit is connected with the inputs of the part recognition unit and the photo comparison unit.
The face pre-detection module comprises a feature point control unit, a feature point extraction unit and a feature point comparison unit. The feature point control unit enlarges and reduces each feature point on the face and sets coordinate points, so that the platform can observe the user's face carefully and clearly. The feature point extraction unit extracts facial feature points from the photographed picture using a convolutional neural network, so that the platform classifies the regions the user wishes to observe and the judgment result is more accurate. The feature point comparison unit compares the facial feature points in pictures taken in the two time periods and judges whether the feature points have changed, so that the change can serve as the basis for the preliminary judgment. The output of the feature point control unit is connected with the inputs of the feature point extraction unit and the health pre-judgment unit.
The diagnosis prediction module comprises a numerical fitting unit and a pre-judgment unit. The numerical fitting unit performs function fitting on the vector change values of the facial feature points, so that the observed feature-point changes are presented more vividly. The pre-judgment unit makes a preliminary judgment of the user's health from the fitted feature-point data so that the user can know his or her own health condition. The output of the numerical fitting unit is connected with the input of the pre-judgment unit.
The expert diagnosis module comprises a data analysis unit and a data transmission unit. In the data analysis unit, professional medical staff analyse the function-fitting results so that the user can know the changes of his or her facial feature points more accurately; the data transmission unit transmits the final analysis result to the platform so that the user learns his or her own condition. The output of the data analysis unit is connected with the input of the data transmission unit.
A face recognition health prediction method based on the Internet of Things comprises the following steps:
A1. Using the central hub recognition module, verify the user's gender, age and main detection range, and photograph the user's facial range several times. The facial range comprises the forehead, nose, lower jaw, mouth corners, face, cheeks, tongue and teeth. Fit the user's facial range with a three-dimensional deformation model so that each part can be recognised in high definition.
A2. Using the face pre-detection module, perform feature selection on the user's facial range with a convolutional neural network according to the comparison between a recent picture and the photographed picture, and verify whether the detected feature points have been displaced or deformed.
A3. Using the diagnosis prediction module, fit a function to the change data of the facial-range feature points with a multiple linear regression equation solved by the least squares method; the platform analyses the fitting result to obtain the preliminary judgment.
A4. Using the expert diagnosis module, medical staff analyse the feature-point displacement and the preliminary judgment reflected in the fitting function and judge the user's health condition again.
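Step A3 can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the displacement data and the health-change score are synthetic, and the function and variable names are hypothetical. It shows a multiple linear regression fitted by ordinary least squares, as the step describes.

```python
import numpy as np

def fit_linear_least_squares(X, y):
    """Fit y ~ X.w + b by ordinary least squares, i.e. by minimising
    the sum of squared residuals."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]  # weights, intercept

# Synthetic data for illustration: each row holds the (dx, dy)
# displacement of one facial feature point between two photos, and y is
# a hypothetical score derived from those displacements.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
true_w = np.array([1.5, -0.7])
y = X @ true_w + 0.3 + rng.normal(scale=0.01, size=50)

w, b = fit_linear_least_squares(X, y)
print(np.round(w, 2), round(float(b), 2))
```

With near-noiseless synthetic data the fit recovers the generating coefficients, which is the property the platform relies on when analysing the fitted function.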
In step A1, the three-dimensional deformation model is used to reconstruct the user's facial texture, and the user's three-dimensional face is projected onto the two-dimensional plane according to the formula:
s = cRS + T;
where S is a three-dimensional face shape point, s its two-dimensional projection, c a scale factor, R the perspective projection matrix, and T the offset.
The texture of the face is assigned to each feature point, and the skin colour is assigned directly to the three-dimensional facial feature points.
When a feature point is detected to be blurred and no feature point of the three-dimensional face is occluded, the skin colour corresponding to the blurred feature point is assigned directly.
When a feature point is detected to be blurred and feature points of the three-dimensional face are occluded, the blurred area is assigned the value of the feature point with the closest texture distance,
according to the formula:
where d_i (i = 1, 2, 3, 4) denotes the distance from the occluding point to the nearest feature point in the up, down, left and right directions of the two-dimensional space, λ_i is the occlusion point, C is the scale factor, and (R, G, B) is the skin colour.
In step A2, the steps of extracting the feature points with the convolutional neural network and judging the health condition are as follows:
B1. Extract each feature point in the uploaded picture and set the coordinate set Z = {(x1, y1), (x2, y2), …, (xm, ym)}. Place the feature points of each part in a specified rectangular frame for analysis; when the feature points of a facial part are not contained in the rectangular frame, the platform issues a voice prompt until all feature points fall within the specified rectangular frame.
B2. Distinguish and compare the feature points in two photos from different periods, and judge whether the feature points show local changes.
B3. Analyse the user's health condition according to the changes of the feature points.
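The containment check in step B1 reduces to testing every feature point against the rectangular frame. A minimal sketch, with hypothetical coordinates and a placeholder for the platform's voice prompt:

```python
def all_points_in_box(points, box):
    """Return True when every feature point (x, y) lies inside the
    specified rectangular frame box = (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return all(x_min <= x <= x_max and y_min <= y <= y_max
               for x, y in points)

# Hypothetical forehead feature points and rectangular frame.
Z = [(12, 5), (18, 7), (15, 9)]
box = (10, 4, 20, 10)
if all_points_in_box(Z, box):
    print("all feature points inside the frame")
else:
    print("voice prompt: adjust position")  # stand-in for the platform prompt
```

In the described system this check would loop with re-captured photos until it passes, at which point step B2's comparison can proceed.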
In step A3, facial feature points are extracted and checked with the convolutional neural network. In the rectangular region of the forehead, a newly appeared substance on the forehead is detected as a change, with the new coordinates (e, f). The rectangular region of the cheek is judged in real time; when the skin-colour degree of the cheek changes and the change lasts for 10 hours or more, MATLAB is used to fit the changed feature-point data from the two pictures, and the user's health state is judged for the first time from the changed data.
In step A1, the facial feature points can be detected according to voice prompts, and the user's facial feature points can be recognised according to voice broadcasts, so that the user's facial feature points can be divided into regions.
Compared with the prior art, the invention has the following beneficial effects:
1. The central hub recognition module performs basic recognition of the user's facial information, which is stored with big data so that it can be retrieved promptly at the next detection without searching for a picture in a gallery. The part division unit divides the user's facial parts so that the platform can recognise each part carefully and judge the user's health condition clearly.
2. The face pre-detection module extracts each of the user's feature points with a convolutional neural network, enlarges the facial feature points under control, and compares several pictures to judge how the user's facial feature points have changed, thereby making a preliminary judgment of the user's health. When part of a detected picture is unclear, the three-dimensional deformation model assigns facial textures to the feature points of the blurred area, so that the platform detects the user's facial colour feature points more accurately.
3. The diagnosis prediction module summarises the change data of the user's facial feature points, fits a function to each feature point by the least squares method, and finally judges the user's health condition from the specific change of each feature point in the function, improving the platform's ability to judge the user's health.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram of module components of a face recognition health prediction system and method based on the internet of things.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution:
a human face recognition health prediction system based on the Internet of things comprises a central hub recognition module, a face pre-detection module, a diagnosis prediction module, an expert diagnosis module and a platform processing unit, wherein the central hub recognition module is used for carrying out division confirmation on the face of a user, the face pre-detection module is used for dividing feature points on the face and judging whether the feature points are displaced or changed, the diagnosis prediction module is used for simulating and preliminarily judging data changed on each feature point on the face, the expert diagnosis module is used for carrying out final judgment on a feature point fitting function and a shot picture, the platform processing unit is used for feeding back each analysis result on a platform, the central hub recognition module is connected with the face pre-detection module, and the face pre-detection module is connected with the diagnosis prediction module, the expert diagnosis module is connected with the diagnosis prediction module.
The central hub recognition module comprises a basic verification unit, a part division unit, a part recognition unit and a photo comparison unit. The basic verification unit detects the user's various information; the part recognition unit recognises the user's facial parts; the part division unit divides the user's facial parts into regions; and the photo comparison unit compares a recently taken photo of the user with the currently taken photo. The output of the basic verification unit is connected with the input of the part division unit, and the output of the part division unit is connected with the inputs of the part recognition unit and the photo comparison unit.
With the basic verification unit, the user's gender, age, main detection range and recently provided photo serve as the basic conditions from which the person's facial feature points, and hence the user's health condition, are judged.
The face pre-detection module comprises a feature point control unit, a feature point extraction unit and a feature point comparison unit. The feature point control unit enlarges and reduces each feature point on the face and sets coordinate points; the feature point extraction unit extracts facial feature points from the photographed picture using a convolutional neural network; and the feature point comparison unit compares the facial feature points in pictures taken in two time periods and judges whether the feature points have changed. The output of the feature point control unit is connected with the inputs of the feature point extraction unit and the health pre-judgment unit.
Feature points on the user's face are judged with MATLAB in cooperation with a convolutional neural network. The picture comparison result and the changes of the facial feature points serve as the input layer of the convolutional neural network; "the user's physical condition is good", "the user's physical condition is slightly affected" and "the user's physical condition is poor" serve as the output layer; and whether the user's physical condition is normal is judged from the feature points in the uploaded picture.
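The data flow of such a three-class network can be sketched in a few lines of numpy. This is purely illustrative: the weights are random and untrained, the input is a stand-in for the feature-point change map, and the architecture (one convolution, ReLU, global average pooling, a linear layer) is an assumption, since the patent does not specify one.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d(img, kernel):
    """Valid 2D convolution (no padding) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify(change_map, kernels, W, b):
    """Conv -> ReLU -> global average pool -> linear layer -> class index.
    Classes: 0 = good, 1 = slightly affected, 2 = poor condition."""
    feats = np.array([np.maximum(conv2d(change_map, k), 0).mean()
                      for k in kernels])
    return int(np.argmax(feats @ W + b))

# Untrained, randomly initialised weights -- data-flow illustration only.
kernels = rng.normal(size=(4, 3, 3))
W = rng.normal(size=(4, 3))
b = np.zeros(3)
change_map = rng.normal(size=(16, 16))  # stand-in for a change map
label = classify(change_map, kernels, W, b)
print(label)
```

A deployed version would train the weights on labelled photo-comparison data; the sketch only shows how a change map is mapped onto the three output classes named in the text.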
For the three-dimensional deformation model, the matching process is a three-dimensional face modelling process for an input two-dimensional image: the three-dimensional face sample most similar to the input two-dimensional face image is obtained by adjusting the combination parameters of the model, and the blurred feature-point area in the picture is corrected by adjusting the texture in the three-dimensional image, so that the platform can clearly recognise that area.
The diagnosis prediction module comprises a numerical fitting unit and a pre-judgment unit. The numerical fitting unit performs function fitting on the vector change values of the facial feature points; the pre-judgment unit makes a preliminary judgment of the user's health from the fitted feature-point data; and the output of the numerical fitting unit is connected with the input of the pre-judgment unit.
The change data of the facial feature points are judged by the least squares method. The least squares method is a mathematical optimisation technique that finds the best function match for the data by minimising the sum of squared errors; it solves for unknown data simply and conveniently, and curve fitting by least squares highlights the important trends in the data.
The expert diagnosis module comprises a data analysis unit and a data transmission unit. In the data analysis unit, professional medical staff analyse the function-fitting results; the data transmission unit transmits the final analysis result to the platform; and the output of the data analysis unit is connected with the input of the data transmission unit.
The final result for the user's facial feature points is judged from the polynomial fitting function result and the photo comparison.
A face recognition health prediction method based on the Internet of Things comprises the following steps:
A1. Using the central hub recognition module, verify the user's gender, age and main detection range, and photograph the user's facial range several times. The facial range comprises the forehead, nose, lower jaw, mouth corners, face, cheeks, tongue and teeth. Fit the user's facial range with a three-dimensional deformation model so that each part can be recognised in high definition.
A2. Using the face pre-detection module, perform feature selection on the user's facial range with a convolutional neural network according to the comparison between a recent picture and the photographed picture, and verify whether the detected feature points have been displaced or deformed.
A3. Using the diagnosis prediction module, fit a function to the change data of the facial-range feature points with a multiple linear regression equation solved by the least squares method; the platform analyses the fitting result to obtain the preliminary judgment.
A4. Using the expert diagnosis module, medical staff analyse the feature-point displacement and the preliminary judgment reflected in the fitting function and judge the user's health condition again.
In step A1, the three-dimensional deformation model is used to reconstruct the user's facial texture, and the user's three-dimensional face is projected onto the two-dimensional plane according to the formula:
s = cRS + T;
where S is a three-dimensional face shape point, s its two-dimensional projection, c a scale factor, R the perspective projection matrix, and T the offset.
The texture of the face is assigned to each feature point, and the skin colour is assigned directly to the three-dimensional facial feature points.
When a feature point is detected to be blurred and no feature point of the three-dimensional face is occluded, the skin colour corresponding to the blurred feature point is assigned directly.
When a feature point is detected to be blurred and feature points of the three-dimensional face are occluded, the blurred area is assigned the value of the feature point with the closest texture distance,
according to the formula:
where d_i (i = 1, 2, 3, 4) denotes the distance from the occluding point to the nearest feature point in the up, down, left and right directions of the two-dimensional space, λ_i is the occlusion point, C is the scale factor, and (R, G, B) is the skin colour.
In step A2, the steps of extracting the feature points with the convolutional neural network and judging the health condition are as follows:
B1. Extract each feature point in the uploaded picture and set the coordinate set Z = {(x1, y1), (x2, y2), …, (xm, ym)}. Place the feature points of each part in a specified rectangular frame for analysis; when the feature points of a facial part are not contained in the rectangular frame, the platform issues a voice prompt until all feature points fall within the specified rectangular frame.
B2. Distinguish and compare the feature points in two photos from different periods, and judge whether the feature points show local changes.
B3. Analyse the user's health condition according to the changes of the feature points.
In step A3, facial feature points are extracted and checked with the convolutional neural network. In the rectangular region of the forehead, a newly appeared substance on the forehead is detected as a change, with the new coordinates (e, f). The rectangular region of the cheek is judged in real time; when the skin-colour degree of the cheek changes and the change lasts for 10 hours or more, MATLAB is used to fit the changed feature-point data from the two pictures, and the user's health state is judged for the first time from the changed data.
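The cheek rule above — re-fit only when a colour change has persisted for at least 10 hours — can be sketched as a simple gate in front of the fitting step. The threshold comes from the text; the data and names are illustrative.

```python
import numpy as np

def cheek_change_triggers_fit(change_hours, threshold_hours=10.0):
    """The cheek region is re-fitted only when a skin-colour change has
    persisted for at least the threshold duration (>= 10 h per the text)."""
    return change_hours >= threshold_hours

# Hypothetical per-feature-point colour-change magnitudes from two photos.
if cheek_change_triggers_fit(12.0):
    before = np.array([3.0, 4.0, 5.0])
    after = np.array([3.5, 4.1, 6.0])
    delta = after - before              # change data handed to the fitting step
    print("fit triggered, max change:", float(delta.max()))
```

Gating on persistence filters out transient colour changes (e.g. flushing), consistent with the document's argument that short-lived signals are unreliable.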
In step A1, each feature point of the face can be detected according to voice prompts.
Example 1: fitting facial feature points of a user by using MATLAB and a convolutional neural network in a combined manner, detecting a forehead feature point region, a cheek feature point region and a tongue feature point region of the face according to basic requirements of the user, and finding that the forehead feature point region in a shot picture is fuzzy when the user uploads the shot picture, so that the feature points around the forehead region are assigned to the fuzzy region;
wherein: di(1,2,3,4) indicating a distance, λ, of the shielding point closest to the feature point in the up-down, left-right directions of the two-dimensional spaceiIs the occlusion point, C is the scale factor, and (R, G, B) is the skin color;
after the feature points are successfully assigned and the forehead feature point region is rendered in high definition, the user's recently shot picture is compared with the currently shot picture. The forehead feature point region is found to be consistent with the earlier picture, and no problem is detected there. The cheek feature point region, however, differs greatly from the earlier picture: by zooming in on the cheek feature point region, it is detected that the cheek region is red and that the redness has persisted for a long time. The user then opens the mouth according to a voice prompt, the tongue is detected, and the tongue coating is found to be thick. The platform processing unit obtains the final result, as shown in the following table:
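The decision logic of Example 1 can be summarized as a small rule set combining the observed findings. This is a toy sketch only; the 10-hour threshold follows the description above, while the function name and return format are assumptions:

```python
def preliminary_assessment(cheek_red_hours, tongue_coating_thick, forehead_changed):
    """Combine Example 1's observations into a preliminary finding list.
    cheek_red_hours: duration of detected cheek redness in hours.
    tongue_coating_thick / forehead_changed: booleans from detection.
    """
    findings = []
    if forehead_changed:
        findings.append("forehead region changed")
    if cheek_red_hours >= 10:          # duration threshold from the description
        findings.append("persistent cheek redness")
    if tongue_coating_thick:
        findings.append("thick tongue coating")
    return findings or ["no abnormality detected"]
```

For Example 1 (redness lasting over 10 hours, thick tongue coating, unchanged forehead) this yields the two abnormal findings that the expert diagnosis module would then review.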
it is to be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A face recognition health prediction system based on the Internet of Things, characterized in that: the prediction system comprises a central recognition module, a face pre-detection module, a diagnosis prediction module, an expert diagnosis module and a platform processing unit; the central recognition module is used for performing division and confirmation on the face of the user; the face pre-detection module is used for dividing the feature points on the face and judging whether the feature points exhibit displacement and change; the diagnosis prediction module is used for fitting the changed data of each feature point on the face and making a preliminary judgment; the expert diagnosis module is used for making a final judgment on the feature point fitting function and the shot picture; the platform processing unit is used for feeding back each analysis result on the platform; the central recognition module is connected with the face pre-detection module, the face pre-detection module is connected with the diagnosis prediction module, and the expert diagnosis module is connected with the diagnosis prediction module.
2. The Internet of Things-based face recognition health prediction system of claim 1, wherein: the central recognition module comprises a basic verification unit, a part division unit, a part identification unit and a photo comparison unit; the basic verification unit is used for detecting the various information of a user; the part identification unit is used for identifying the facial parts of the user; the part division unit is used for dividing the facial parts of the user into regions; the photo comparison unit is used for comparing a recently shot photo of the user with the currently shot photo; the output end of the basic verification unit is connected with the input end of the part division unit, and the output end of the part division unit is connected with the input ends of the part identification unit and the photo comparison unit.
3. The Internet of Things-based face recognition health prediction system of claim 1, wherein: the face pre-detection module comprises a feature point control unit, a feature point extraction unit and a feature point comparison unit; the feature point control unit is used for enlarging and reducing each feature point on the face and setting coordinate points; the feature point extraction unit is used for extracting the facial feature points in a shot picture by using a convolutional neural network; the feature point comparison unit is used for comparing the facial feature points in the pictures shot in two time periods and judging whether the feature points have changed; the output end of the feature point control unit is connected with the input ends of the feature point extraction unit and the health pre-determination unit.
4. The Internet of Things-based face recognition health prediction system of claim 1, wherein: the diagnosis prediction module comprises a numerical value fitting unit and a pre-judging unit; the numerical value fitting unit is used for performing function fitting on the vector change values of the facial feature points; the pre-judging unit is used for making a preliminary judgment on the health condition of the user based on the function-fitted feature point condition; the output end of the numerical value fitting unit is connected with the input end of the pre-judging unit.
5. The internet of things-based face recognition health prediction system of claim 1, wherein: the expert diagnosis module comprises a data analysis unit and a data transmission unit, wherein the data analysis unit is used for analyzing the function fitting condition by professional medical staff, the data transmission unit is used for transmitting the final analysis result to the platform, and the output end of the data analysis unit is connected with the input end of the data transmission unit.
6. A face recognition health prediction method based on the Internet of Things, characterized in that the method comprises the following steps:
a1, verifying the gender, the age and the main detection range of a user with the central recognition module, and taking multiple pictures of the user's facial range, wherein the facial range comprises the forehead, nose, lower jaw, mouth corners, face, cheeks, tongue and teeth; fitting the facial range of the user with a three-dimensional deformation model so that each part can be recognized in high definition;
a2, utilizing the face pre-detection module to perform feature selection on the facial range of the user with a convolutional neural network, according to the comparison between a recent picture and the shot picture, and verifying whether the detected feature points have undergone displacement or deformation;
a3, utilizing the diagnosis prediction module to fit a function to the data changes of the facial range feature points by multiple linear regression with the least squares method, the platform analyzing the result according to the fitting function to obtain a preliminary judgment;
a4, utilizing the expert diagnosis module, medical staff analyze the feature point displacement and the preliminary judgment reflected in the fitting function and judge the health condition of the user again.
7. The Internet of Things-based face recognition health prediction method according to claim 6, characterized in that: in the step a1, the three-dimensional deformation model is used to reconstruct the facial texture of the user, and the three-dimensional face of the user is projected onto the two-dimensional plane according to the formula:
s = cRS + T;
wherein: s is the projected two-dimensional shape point, S is the three-dimensional face shape point, c is a scale factor, R is the perspective projection matrix, and T is the offset;
assigning the texture on the face to each feature point, the skin color being assigned directly to the three-dimensional face feature points;
when a feature point is detected to be blurred and no feature point of the three-dimensional face is occluded, the skin color corresponding to the blurred feature point is assigned directly;
when a feature point is detected to be blurred and feature points of the three-dimensional face are occluded, the blurred area is assigned a value from the feature point whose texture is closest in distance;
according to the formula:
wherein: di(1,2,3,4) indicating a distance, λ, of the shielding point closest to the feature point in the up-down, left-right directions of the two-dimensional spaceiFor occlusion points, C is the scale factor and (R, G, B) is the skin color.
8. The Internet of Things-based face recognition health prediction method according to claim 6, characterized in that: in the step a2, the steps of extracting the feature points with the convolutional neural network and determining the health condition are as follows:
b1, extracting each feature point in the uploaded picture and setting the coordinate set Z = {(x1, y1), (x2, y2), …, (xm, ym)}; placing the feature points of each part in a specified rectangular frame for analysis, and when feature points of a facial part are not contained in the rectangular frame, issuing a voice prompt on the platform until all feature points are contained in the specified rectangular frame;
b2, comparing the feature points in the two photos taken in different periods and judging whether the feature points show local changes;
b3, analyzing the health condition of the user according to the change of the characteristic points.
9. The Internet of Things-based face recognition health prediction method according to claim 6, characterized in that: in the step a3, facial feature points are extracted and checked by using a convolutional neural network. In the rectangular region of the forehead, new substances on the forehead are detected as changes, and the coordinates of a new change are (e, f). In the rectangular region of the cheek, the cheek region is judged in real time; when the skin color of the cheek changes and the change lasts for N hours or more, data fitting is performed with MATLAB on the changed feature point data in the two pictures, and the health state of the user is judged for the first time according to the fitted data.
10. The Internet of Things-based face recognition health prediction method according to claim 6, characterized in that: in the step a1, each feature point of the face can be detected according to a voice prompt.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010960417.8A CN112086193A (en) | 2020-09-14 | 2020-09-14 | Face recognition health prediction system and method based on Internet of things |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112086193A true CN112086193A (en) | 2020-12-15 |
Family
ID=73736755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010960417.8A Pending CN112086193A (en) | 2020-09-14 | 2020-09-14 | Face recognition health prediction system and method based on Internet of things |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112086193A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101404091A (en) * | 2008-11-07 | 2009-04-08 | 重庆邮电大学 | Three-dimensional human face reconstruction method and system based on two-step shape modeling |
US20150242812A1 (en) * | 2014-02-26 | 2015-08-27 | William Bert Nelson | Device and system for implementing advance directives |
CN105011903A (en) * | 2014-04-30 | 2015-11-04 | 上海华博信息服务有限公司 | Intelligent health diagnosis system |
CN109147935A (en) * | 2018-07-19 | 2019-01-04 | 山东和合信息科技有限公司 | The health data platform of identification technology is acquired based on characteristics of human body |
WO2019014521A1 (en) * | 2017-07-13 | 2019-01-17 | Peyman Gholam A | Dynamic image recognition system for security and telemedicine |
CN110459304A (en) * | 2019-07-19 | 2019-11-15 | 汕头大学 | A kind of health status diagnostic system based on face-image |
US20200066405A1 (en) * | 2010-10-13 | 2020-02-27 | Gholam A. Peyman | Telemedicine System With Dynamic Imaging |
CN111048209A (en) * | 2019-12-28 | 2020-04-21 | 安徽硕威智能科技有限公司 | Health assessment method and device based on living body face recognition and storage medium thereof |
CN111611979A (en) * | 2020-06-08 | 2020-09-01 | 绍兴文理学院 | Intelligent health monitoring system and method based on facial scanning |
Non-Patent Citations (3)
Title |
---|
R. L. Palmer et al.: "Cliniface: phenotypic visualisation and analysis using non-rigid registration of 3D facial images", ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pages 301 - 308 *
Chen Mengzhu et al.: "Study on automatic recognition of facial complexion in inspection diagnosis based on image processing", Chinese Journal of Information on Traditional Chinese Medicine, vol. 25, no. 12, 26 November 2018 (2018-11-26), pages 97 - 101 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117976173A (en) * | 2024-03-28 | 2024-05-03 | 深圳捷工医疗装备股份有限公司 | Signal transmission call management system |
CN117976173B (en) * | 2024-03-28 | 2024-05-28 | 深圳捷工医疗装备股份有限公司 | Signal transmission call management system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9639769B2 (en) | Liveness detection | |
KR101301821B1 (en) | Apparatus and method for detecting complexion, apparatus and method for determinig health using complexion, apparatus and method for generating health sort function | |
CN111344713A (en) | Camera and image calibration for object recognition | |
CN108888277B (en) | Psychological test method, psychological test system and terminal equipment | |
CN105378793A (en) | Systems, methods, and computer-readable media for identifying when a subject is likely to be affected by a medical condition | |
US20140185926A1 (en) | Demographic Analysis of Facial Landmarks | |
KR20100005072A (en) | Method and system for recommending a product based upon skin color estimated from an image | |
CN107392151A (en) | Face image various dimensions emotion judgement system and method based on neutral net | |
CN111588353A (en) | Body temperature measuring method | |
Liu et al. | Simple model for encoding natural images by retinal ganglion cells with nonlinear spatial integration | |
US20200065967A1 (en) | Computer system, method, and program for diagnosing subject | |
JP2007102482A (en) | Automatic counting apparatus, program, and method | |
CN111105881A (en) | Database system for 3D measurement of human phenotype | |
CN115497123A (en) | Method for acquiring state parameters of region of interest | |
CN115862819A (en) | Medical image management method based on image processing | |
Douglas et al. | A review of facial image analysis for delineation of the facial phenotype associated with fetal alcohol syndrome | |
TWI430776B (en) | Smart video skin test system and method of the same | |
CN112086193A (en) | Face recognition health prediction system and method based on Internet of things | |
Gaber et al. | Comprehensive assessment of facial paralysis based on facial animation units | |
CN111048202A (en) | Intelligent traditional Chinese medicine diagnosis system and method thereof | |
CN110175522A (en) | Work attendance method, system and Related product | |
CN110135357A (en) | A kind of happiness real-time detection method based on long-range remote sensing | |
Charih et al. | Audiogram digitization tool for audiological reports | |
KR102351169B1 (en) | Big data and AI-based color recognition measurement platform and method using the same | |
CN110859599B (en) | Automatic cognitive function screening system for cerebrovascular disease nerve injury patients |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||