CN116530981A - Facial recognition-based qi and blood state analysis system and method - Google Patents
- Publication number
- CN116530981A (Application CN202310489454.9A)
- Authority
- CN
- China
- Prior art keywords
- analysis
- blood
- module
- data
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/1032—Determining colour for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
- A61B5/1176—Recognition of faces
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
- A61B5/14542—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring blood gases
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
- A61B5/444—Evaluating skin marks, e.g. mole, nevi, tumour, scar
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4854—Diagnosis based on concepts of traditional oriental medicine
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Abstract
The invention provides a facial recognition-based qi and blood state analysis system and method. The system comprises: a video processing module, a feature extraction module, a data analysis module, a qi-blood model building module and a periodic analysis module. The method comprises the following steps: shooting an original video or photo with preset shooting equipment and determining facial feature points; transmitting the facial color feature points to a preset big data analysis platform for analysis and calculation to determine physical sign data, and passing the sign data to a preset artificial intelligence algorithm for analysis and processing to determine an analysis result; building a qi-blood model, based on a deep neural network learning algorithm, by periodically collecting analysis results and transmitting them to preset simulation equipment; and judging from the qi-blood model whether the physical sign data of the object to be detected indicate qi and blood deficiency, and giving qi and blood nursing suggestions when they do. The invention realizes intelligent processing of qi and blood analysis and improves the accuracy of qi and blood analysis results.
Description
Technical Field
The invention relates to the technical field of intelligent data analysis, and in particular to a system and method for analyzing qi and blood states based on facial recognition.
Background
With the continuous progress of medical science and the ongoing upgrade of medical equipment, diagnosis today relies mostly on blood tests, electrocardiography, X-rays and the like. Traditional Chinese medicine, by contrast, examines a patient through inspection, listening and smelling, inquiry and palpation. Qi and blood form an important concept in traditional Chinese medicine, describing the state of energy and blood circulation in the human body; inspection makes a preliminary judgment of qi and blood health from a person's facial color and facial features. Long-term deficiency of qi and blood, accumulating day after day, can cause further health problems. Knowing one's qi and blood state in daily life, and providing timely care when deficiency appears, is therefore a prerequisite for staying healthy. The prior art, however, lacks a device for analyzing qi and blood, which makes the evaluation of health data inaccurate and increases examination costs.
First prior art, application number CN202210264326.X, discloses a scalp health evaluation method, apparatus, storage medium and computer device. When evaluating a target person's scalp health, a pre-configured scalp health detection model yields a scalp health detection value, and a pre-configured scalp qi and blood detection model yields a scalp qi and blood detection value; a scalp health index is determined from the two values, and the scalp health condition is evaluated from that index. Because both scalp health and scalp qi and blood are considered, the evaluation is more comprehensive and the result more accurate. However, only the scalp is evaluated; overall health data cannot be assessed at all, so the data result is one-sided.
Second prior art, application number CN201310205895.8, discloses a USB fingertip pulse meter comprising: a housing, a USB plug, and a pulse-taking module arranged inside the housing; the pulse-taking module comprises a reflective sensor, a multifunctional microcontroller, and a drive-and-detection program memory; the multifunctional microcontroller comprises a controllable-gain amplifier, a filter, a reference power supply and a multifunctional microprocessor. Although it can detect pulse waveforms, cardiac state, qi and blood state and a cardiovascular composite figure of merit, and a computer extends its function to screening and diagnosing cardiovascular health, the qi and blood state is obtained through pulse diagnosis alone, so the qi and blood analysis is inaccurate, and no facial images are acquired or processed, which limits the accuracy of the qi and blood analysis data.
Third prior art, application number CN201910962353.2, discloses a health detecting device comprising: electrodes that collect electromyographic signals from acupoints at different positions of the human body; an amplifying module comprising a first preamplifier and a second amplifier that amplifies the electromyographic signal 800 to 1000 times in total; a filter module that filters the amplified signal first with an analog filter and then with a digital filter; a processing module that receives the filtered signal and analyzes the user's health condition from it; and a wireless communication module that sends the user's health condition to the cloud or a mobile terminal. Although the accuracy of the electromyographic signals is improved, and simultaneous detection at acupoints across the body reflects health comprehensively, the data source is single and the degree of intelligence is low.
All three prior arts share the same problems: the data source is single, and qi and blood data cannot be analyzed and processed accurately, which degrades the accuracy of the qi and blood analysis results.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a facial recognition-based qi-blood state analysis system, comprising:
the video processing module, responsible for shooting with an intelligent camera to generate a facial video, and for preprocessing the shot video to obtain effective facial image information;
the feature extraction module, responsible for extracting features from the obtained effective facial image information to obtain facial color feature points;
the data analysis module, responsible for transmitting the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and for passing the sign data to the artificial intelligence algorithm for analysis and processing to obtain an analysis result, the analysis result being used to judge whether the object to be detected has any region of poor complexion and, if so, which specific region;
the qi-blood model building module, responsible for building a qi-blood model, based on a deep neural network learning algorithm, by periodically collecting analysis results and transmitting them to simulation equipment;
the periodic analysis module, responsible for judging the qi and blood state from the established qi-blood model at preset time nodes, comparing it with historical analysis results, giving a conditioning scheme corresponding to the current qi and blood state, and sending the comparison result and the conditioning scheme to the intelligent terminal.
Optionally, the preprocessing of the video processing module comprises: denoising and cropping;
the facial color feature points of the feature extraction module comprise: the color distribution, color spots and luster of the facial regions.
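The patent does not disclose how these three feature points are computed; the sketch below is an illustrative assumption, using the channel means as a proxy for color distribution, per-channel variance as a proxy for color spots (mottle), and average luma as a proxy for luster.

```python
# Illustrative only -- proxies for the three facial color feature points named
# above, computed over one rectangular face region given as rows of (R, G, B)
# tuples. None of these formulas come from the patent itself.

def color_features(region):
    """Return (mean_rgb, variance_rgb, luster) for a list-of-rows RGB region."""
    pixels = [px for row in region for px in row]
    n = len(pixels)
    mean = tuple(sum(px[c] for px in pixels) / n for c in range(3))
    # Per-channel variance: a uniform patch (no mottle) has zero variance.
    var = tuple(sum((px[c] - mean[c]) ** 2 for px in pixels) / n for c in range(3))
    # Luster proxy: average luma, ITU-R BT.601 weights.
    luster = sum(0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2] for px in pixels) / n
    return mean, var, luster
```

A real system would compute such statistics per facial region (forehead, cheeks, chin, and so on) after face alignment, rather than over one fixed rectangle.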
Optionally, the video processing module includes:
the face information acquisition sub-module, responsible for detecting the facial information of the object to be detected through a camera to obtain captured facial information;
the acquisition condition setting sub-module, responsible for determining acquisition conditions, such as definition, of the object to be detected from the captured facial information, and issuing an instruction to start acquisition;
the face image processing sub-module, responsible for receiving the instruction to start acquisition, acquiring the facial information of the object to be detected, and performing noise reduction and cropping on the acquired facial video to obtain a noise-reduced, cropped face image;
the face image sending sub-module, responsible for sending the noise-reduced, cropped face image to the feature extraction module, from which feature point information on facial regions and the related facial color distribution is extracted.
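The two preprocessing steps named above, noise reduction and cropping, can be sketched minimally as follows. This is not the patent's implementation: a production system would use a computer vision library, whereas this pure-Python version only illustrates a 3x3 mean filter followed by a bounding-box crop on a grayscale frame.

```python
# Hedged sketch of the preprocessing pipeline: box-blur noise reduction,
# then cropping the face rectangle out of the denoised frame.

def denoise(img):
    """3x3 mean filter on a 2D grayscale image (list of lists), edges clamped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def crop(img, top, left, height, width):
    """Cut a face rectangle (e.g. from a face detector's bounding box)."""
    return [row[left:left + width] for row in img[top:top + height]]
```

In practice the crop rectangle would come from a face detector, and the denoising would be a stronger method such as non-local means.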
Optionally, the data analysis module includes:
the information sending sub-module, responsible for acquiring the facial color feature points and transmitting them to the big data analysis platform;
the information analysis sub-module, responsible for having the big data analysis platform analyze the incoming facial color feature points and compute the physical sign data; the sign data comprise at least region-specific complexion data, overall complexion data or corresponding organ data;
the result output sub-module, responsible for mining and screening the user's sign data through an artificial intelligence algorithm and determining the analysis result.
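The mining and screening step is not specified in the patent; as one simplified stand-in, each region's features could be screened against thresholds and flagged regions mapped to organs. The thresholds, the region-to-organ table, and the record format below are all placeholder assumptions for illustration.

```python
# Illustrative rule-based stand-in for the mining/screening step.
# REGION_TO_ORGAN is a hypothetical lookup, not disclosed by the patent.
REGION_TO_ORGAN = {"forehead": "heart", "nose": "spleen", "chin": "kidney"}

def screen_signs(records, luster_min=120.0, mottle_max=50.0):
    """records: {region: (luster, mottle)}; return flagged poor-complexion findings."""
    findings = []
    for region, (luster, mottle) in records.items():
        if luster < luster_min or mottle > mottle_max:
            findings.append({"region": region,
                             "organ": REGION_TO_ORGAN.get(region, "unknown"),
                             "poor_complexion": True})
    return findings
```

The output shape matches what the module description requires: which regions show poor complexion and which specific region (and corresponding organ) each finding concerns.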
Optionally, the qi-blood model building module includes:
the training set generation sub-module, responsible for periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional feature combination, and generating a training set;
the network building sub-module, responsible for building a deep neural network comprising a long short-term memory (LSTM) network and a convolutional neural network (CNN);
the parameter adjustment optimization sub-module, responsible for tuning and optimizing parameters by training the model and monitoring the deviation between predictions and labels;
the test set generation sub-module, responsible for producing a test set corresponding to the training set;
the model output sub-module, responsible for feeding the training set and the test set into the deep neural network, determining the evaluation result when the mean absolute error, root mean square error and mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
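The model output sub-module's acceptance criterion can be made concrete: the model is kept only when all three error metrics fall within preset thresholds. The threshold values below are illustrative, as the patent does not disclose them.

```python
# Sketch of the three-metric acceptance check: mean absolute error, root
# mean square error, and mean (signed) error must all meet their thresholds.
import math

def evaluate(preds, labels, mae_max=0.1, rmse_max=0.15, me_max=0.05):
    n = len(preds)
    errors = [p - t for p, t in zip(preds, labels)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    me = abs(sum(errors) / n)  # magnitude of the mean signed error (bias)
    ok = mae <= mae_max and rmse <= rmse_max and me <= me_max
    return ok, {"mae": mae, "rmse": rmse, "me": me}
```

Requiring all three simultaneously is stricter than any one alone: MAE and RMSE bound the typical and worst-case deviation, while the mean error bounds systematic bias.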
Optionally, the periodic analysis module includes:
the analysis node setting sub-module, responsible for setting the time nodes at which qi and blood state data are to be compared, and the time nodes of the historical analysis results;
the model service construction sub-module, responsible for wrapping the qi-blood model as a service, periodically analyzing and comparing against previous analysis results, producing an analysis result from the trend analysis, and then giving a corresponding maintenance strategy from the platform strategy database;
the result and strategy sending sub-module, responsible for sending the comparison result and the conditioning scheme to the intelligent terminal.
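The periodic comparison logic above can be sketched as follows. The score scale, the trend tolerance, and the strategy table are placeholder assumptions; the patent's platform strategy database is not disclosed.

```python
# Hedged sketch of the periodic analysis: compare the current qi-blood score
# with the stored history at a preset time node, derive a simple trend, and
# look up a conditioning suggestion in a placeholder strategy table.
STRATEGIES = {"declining": "rest, diet adjustment, follow-up at the next time node",
              "stable": "maintain the current routine",
              "improving": "continue the current conditioning scheme"}

def periodic_compare(history, current_score, tolerance=0.05):
    """history: list of past scores (oldest first). Returns (trend, advice)."""
    if not history:
        trend = "stable"  # no baseline yet to compare against
    else:
        delta = current_score - history[-1]
        trend = ("improving" if delta > tolerance
                 else "declining" if delta < -tolerance else "stable")
    return trend, STRATEGIES[trend]
```

A fuller implementation would fit the trend over several historical points rather than only the most recent one, and would send both the comparison result and the chosen scheme to the intelligent terminal.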
The invention provides a facial recognition-based qi and blood state analysis method, which comprises the following steps of:
shooting an original video or photo with preset shooting equipment and determining facial feature points; transmitting the facial color feature points to a preset big data analysis platform for analysis and calculation to determine physical sign data, and passing the sign data to a preset artificial intelligence algorithm for analysis and processing to determine an analysis result; the analysis result is used to judge whether the object to be detected shows the facial manifestations attributed to qi and blood deficiency in traditional Chinese medicine;
based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting analysis results and transmitting the analysis results to preset simulation equipment;
judging whether the physical sign data of the object to be detected indicate qi and blood deficiency according to the qi-blood model, and giving qi and blood nursing suggestions when they do.
Optionally, the process of determining the facial feature points includes the steps of:
acquiring original video data with a camera and processing it to determine the facial position image, detecting the facial information of the object to be detected;
determining acquisition conditions, such as facial definition, of the object to be detected from the facial information captured by the camera, and issuing an instruction to start acquisition;
performing noise reduction and cropping on the facial information of the object to be detected, and extracting feature point information on facial regions and the related facial color distribution from the resulting face image.
Optionally, the process of determining the analysis result includes the following steps:
acquiring the feature point information on facial regions and the related facial color distribution, and transmitting it to the big data analysis platform;
analyzing the transmitted feature point information on the big data analysis platform and computing the physical sign data; the sign data comprise at least region-specific complexion data, overall complexion data and corresponding organ data;
mining and screening the user's sign data through a preset artificial intelligence algorithm, and determining the analysis result.
Optionally, the establishing process of the qi-blood model comprises the following steps:
periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional feature combination, and generating a training set;
building a deep neural network comprising a long short-term memory network and a convolutional neural network;
tuning and optimizing parameters by training the model and monitoring the deviation between predictions and labels;
producing a test set corresponding to the training set;
feeding the training set and the test set into the deep neural network, determining the evaluation result when the mean absolute error, root mean square error and mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
In the invention, the video processing module shoots with an intelligent camera to generate a facial video, and preprocesses the shot video to obtain effective facial image information, the preprocessing comprising denoising and cropping. The feature extraction module extracts features from the effective facial image information to obtain facial color feature points, including the color distribution, color spots and luster of the facial regions. The data analysis module transmits the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and passes the sign data to the artificial intelligence algorithm for analysis and processing to obtain an analysis result, which is used to judge whether the object to be detected has any region of poor complexion and which specific region. The qi-blood model building module builds a qi-blood model, based on a deep neural network learning algorithm, by periodically collecting analysis results and transmitting them to simulation equipment. The periodic analysis module judges the qi and blood state from the established qi-blood model at preset time nodes, compares it with historical analysis results, gives a conditioning scheme corresponding to the current qi and blood state, and sends both the comparison result and the conditioning scheme to the intelligent terminal.
Through facial-recognition qi and blood analysis, the scheme can quickly look up a wearer's basic information, prompt closely matching traditional Chinese medicine indexes, and give a professional preliminary analysis of physiological data, so that users can learn about their own condition in advance and adopt a suitable nursing regimen. The intelligent terminal shares these resources, so that users and their families can conveniently follow the physical condition of the whole family: early awareness, early maintenance, early health. Meanwhile, the facial video is analyzed and processed, and the data are run through the qi-blood analysis model to obtain the current qi and blood analysis result, realizing intelligent processing of qi and blood analysis and improving the accuracy of the qi and blood analysis results.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of an analysis system for qi and blood status based on facial recognition in embodiment 1 of the present invention;
FIG. 2 is a block diagram of a video processing module in embodiment 2 of the present invention;
FIG. 3 is a block diagram of a data analysis module in embodiment 3 of the present invention;
FIG. 4 is a block diagram of a qi and blood model building module according to embodiment 4 of the present invention;
FIG. 5 is a block diagram of a periodic analysis module in accordance with embodiment 5 of the present invention;
FIG. 6 is a flowchart of a method for analyzing qi and blood status based on facial recognition according to embodiment 6 of the present invention;
FIG. 7 is a diagram showing a process of determining a facial feature point in example 7 of the present invention;
FIG. 8 is a process diagram of determining the analysis result in example 8 of the present invention;
FIG. 9 is a block diagram showing the process of establishing the qi-blood model in embodiment 9 of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the application. As used in the examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims. In the description of this application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Example 1: as shown in fig. 1, an embodiment of the present invention provides a facial recognition-based qi-blood state analysis system, including:
the video processing module is responsible for shooting through the intelligent camera to generate a facial video, and for preprocessing the shot video to obtain effective facial image information, the preprocessing comprising: denoising and cropping;
the feature extraction module is responsible for performing feature extraction on the obtained effective facial image information to obtain facial color feature points; the facial color feature points include: the color distribution, color spots, gloss conditions and the like of the parts of the face;
the data analysis module is responsible for transmitting the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and for transmitting the physical sign data to the artificial intelligence algorithm for analysis and processing to obtain an analysis result; the analysis result is used to judge whether the object to be detected has a part with poor complexion and, if so, which specific part;
the qi-blood model building module is responsible for building a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting the analysis results to simulation equipment;
the periodic analysis module is responsible for judging the qi and blood state according to the established qi-blood model at preset time nodes, comparing it with historical analysis results, giving a conditioning scheme corresponding to the current qi and blood state, and sending the comparison result and the conditioning scheme to the intelligent terminal;
the working principle and beneficial effects of the technical scheme are as follows: the video processing module of this embodiment shoots through the intelligent camera to generate a facial video, and preprocesses the shot video to obtain effective facial image information, the preprocessing comprising denoising and cropping; the feature extraction module performs feature extraction on the obtained effective facial image information to obtain facial color feature points, which include the color distribution, color spots, gloss conditions and the like of the parts of the face; the data analysis module transmits the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and transmits the physical sign data to the artificial intelligence algorithm to obtain an analysis result, the analysis result being used to judge whether the object to be detected has a part with poor complexion and, if so, which specific part; the qi-blood model building module builds a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting them to simulation equipment; the periodic analysis module judges the qi and blood state according to the established qi-blood model at preset time nodes, compares it with historical analysis results, gives a conditioning scheme corresponding to the current qi and blood state, and sends the comparison result and the conditioning scheme to the intelligent terminal; through this scheme, qi and blood analysis based on facial recognition allows the basic information of a wearer to be queried quickly, prompts closely matched traditional Chinese medicine indexes, and gives a professional preliminary physiological data analysis, so that the user can learn of his or her own condition in advance and adopt a suitable nursing approach; the intelligent terminal realizes resource sharing, so that users and their families can conveniently follow the physical condition of the whole family, realizing early knowledge, early maintenance and early health; meanwhile, the facial video is analyzed and processed, and the data are processed by the qi-blood analysis model to obtain the current qi and blood analysis result, realizing intelligent qi and blood analysis and improving the accuracy of the qi and blood analysis results.
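The module chain described above (preprocess, extract color features, analyze) can be sketched in miniature as follows; the fixed crop box, the temporal-mean denoising and the three-region split are illustrative assumptions, not the patent's actual algorithms.

```python
# Hypothetical miniature of the video-processing and feature-extraction modules.
import numpy as np

def preprocess(frames):
    """Denoise by averaging frames over time, then crop to an assumed face box."""
    face = np.mean(frames, axis=0)      # crude temporal denoising
    return face[40:200, 60:180]         # fixed crop: assumed face region

def extract_color_features(face):
    """Per-region mean colour as a stand-in for the 'facial color feature points'."""
    h = face.shape[0]
    regions = {"forehead": face[: h // 3],
               "cheeks": face[h // 3 : 2 * h // 3],
               "chin": face[2 * h // 3 :]}
    return {name: region.mean(axis=(0, 1)) for name, region in regions.items()}

frames = np.random.default_rng(0).uniform(0, 255, size=(10, 240, 240, 3))
features = extract_color_features(preprocess(frames))
print(sorted(features))                 # ['cheeks', 'chin', 'forehead']
```

The per-region means would then be handed to the data analysis module in place of raw pixels, which keeps the payload sent to the platform small.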
Example 2: as shown in fig. 2, on the basis of embodiment 1, a video processing module provided in an embodiment of the present invention includes:
the face information acquisition sub-module is responsible for detecting the face information of an object to be detected through a camera to obtain captured face information;
the acquisition condition setting sub-module is responsible for determining acquisition conditions, such as the definition of the object to be detected, according to the captured face information, and for sending out an instruction to start acquisition;
the face image processing sub-module is responsible for receiving the instruction to start acquisition, acquiring the face information of the object to be detected, and performing noise reduction and cropping on the acquired video to obtain a denoised and cropped face image;
the face image sending sub-module is responsible for sending the denoised and cropped face image to the feature extraction module, so that feature point information of the facial regions and the related distribution of facial color can be extracted from the face image;
the working principle and beneficial effects of the technical scheme are as follows: the facial information acquisition sub-module of this embodiment detects the facial information of the object to be detected through the camera to obtain captured facial information; the acquisition condition setting sub-module determines acquisition conditions, such as the definition of the object to be detected, according to the captured facial information, and sends out an instruction to start acquisition; the face image processing sub-module receives the instruction, acquires the face information of the object to be detected, and performs noise reduction and cropping on the acquired video to obtain a denoised and cropped face image; the face image sending sub-module sends this image to the feature extraction module, from which feature point information of the facial regions and the related distribution of facial color are extracted; through this scheme, the parameters of the face image acquisition equipment are set according to the captured facial information, meeting the personalized requirements of acquisition parameter setting in specific environments and improving the accuracy of the qi-blood analysis result; noise reduction and cropping of the face information improve the definition and integrity of the face image, remove background that interferes with qi and blood analysis, reduce the load on the system, and improve the efficiency with which the system processes face information.
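The noise-reduction-and-cropping step of this embodiment can be sketched as below; the 3x3 median filter and the fixed face box are illustrative choices, not the patent's algorithm, and frames are assumed to arrive as grayscale NumPy arrays.

```python
# Minimal sketch of the face image processing sub-module's denoise + crop step.
import numpy as np

def median_denoise(img):
    """3x3 median filter via shifted copies; border pixels are left untouched."""
    out = img.copy()
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out[1:-1, 1:-1] = np.median(np.stack(shifts), axis=0)[1:-1, 1:-1]
    return out

def crop_face(img, box=(20, 100, 30, 90)):  # (top, bottom, left, right), assumed
    top, bottom, left, right = box
    return img[top:bottom, left:right]

noisy = np.random.default_rng(1).integers(0, 256, size=(120, 120)).astype(float)
clean = crop_face(median_denoise(noisy))
print(clean.shape)  # (80, 60)
```

In a real deployment the box would come from a face detector rather than a constant, but the data flow (denoise first, then crop away interfering background) matches the description above.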
Example 3: as shown in fig. 3, on the basis of embodiment 1, the data analysis module provided in the embodiment of the present invention includes:
the information sending sub-module is responsible for acquiring the facial color feature points of the person and transmitting the facial color feature points to the big data analysis platform;
the information analysis sub-module is responsible for having the big data analysis platform analyze the incoming facial color feature points and calculate the physical sign data; the sign data at least comprise data on specific facial parts, complexion data or corresponding organ data;
the result output sub-module is responsible for mining and screening physical sign data of a user through an artificial intelligence algorithm and determining an analysis result;
the working principle and beneficial effects of the technical scheme are as follows: the information sending sub-module of this embodiment obtains the facial color feature points and transmits them to the big data analysis platform; the information analysis sub-module has the big data analysis platform analyze the incoming facial color feature points and calculate the physical sign data, which at least comprise data on specific facial parts, complexion data or corresponding organ data; the result output sub-module mines and screens the sign data of the user through the artificial intelligence algorithm and determines the analysis result; through this scheme, the facial color feature points are analyzed on the big data analysis platform to obtain the sign data reflecting the qi-blood analysis, realizing intelligent data analysis, and representative sign data are then obtained through mining and screening by the artificial intelligence algorithm, which improves the accuracy of the qi-blood analysis result so that it can truly reflect the health level of the object to be detected.
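A toy version of the mapping from color feature points to sign data is shown below; the pallor threshold and the region-to-organ table are invented for the sketch and do not come from the patent.

```python
# Illustrative conversion of facial-color feature points into sign data records.
import numpy as np

REGION_TO_ORGAN = {"forehead": "heart", "nose": "spleen", "cheeks": "lungs"}  # assumed table

def to_sign_data(color_features, pallor_threshold=200.0):
    """Flag regions whose mean brightness suggests poor complexion."""
    signs = []
    for region, rgb in color_features.items():
        brightness = float(np.mean(rgb))
        signs.append({"region": region,
                      "organ": REGION_TO_ORGAN.get(region, "unknown"),
                      "poor_complexion": brightness > pallor_threshold})
    return signs

features = {"forehead": (210.0, 205.0, 200.0), "cheeks": (150.0, 120.0, 110.0)}
print(to_sign_data(features))
```

The records carry both the specific facial part and the corresponding organ, matching the "sign data" fields listed in this embodiment.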
Example 4: as shown in fig. 4, on the basis of embodiment 1, the qi-blood model building module provided in the embodiment of the present invention includes:
the training set generation sub-module is responsible for periodically collecting and marking the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional combination, and generating a training set;
the network building sub-module is responsible for building a deep neural network comprising a long-short-term memory network and a convolutional neural network;
the parameter adjustment optimization sub-module is responsible for parameter adjustment and optimization by training the model and controlling the deviation between predictions and labels;
the test set generation sub-module is responsible for manufacturing a test set corresponding to the training set;
the model output sub-module is responsible for inputting the training set and the testing set into the deep neural network, determining an evaluation result when the mean absolute error, the root mean square error and the mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm;
the working principle and beneficial effects of the technical scheme are as follows: the training set generation sub-module of this embodiment periodically collects and marks the relevant facial regions and corresponding organ data, performs data cleaning and high-dimensional combination, and generates a training set; the network building sub-module builds a deep neural network comprising a long short-term memory network and a convolutional neural network; the parameter adjustment optimization sub-module performs parameter adjustment and optimization by training the model and controlling the deviation between predictions and labels; the test set generation sub-module makes a test set corresponding to the training set; the model output sub-module inputs the training set and the testing set into the deep neural network, determines an evaluation result when the mean absolute error, the root mean square error and the mean error simultaneously meet preset thresholds, and finally establishes the qi-blood model based on the deep neural network learning algorithm; through this scheme, establishing the qi-blood model improves the efficiency of qi and blood data processing and the level of intelligence and avoids the complexity of a plurality of functional modules; the deep neural network learning algorithm assists in establishing the qi-blood model, which can learn independently and adjust model parameters in time, ensuring the accuracy of the evaluation result.
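The acceptance criterion used by the model output sub-module (all three error metrics under preset thresholds at once) can be written directly; the threshold values here are assumptions for illustration.

```python
# Sketch of the stopping criterion: accept the model only when mean absolute
# error, root mean square error and mean (bias) error all meet their thresholds.
import numpy as np

def evaluation_passes(y_true, y_pred, thresholds=(0.5, 0.7, 0.2)):
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    mae = np.mean(np.abs(err))           # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))    # root mean square error
    me = abs(np.mean(err))               # magnitude of the mean error
    t_mae, t_rmse, t_me = thresholds
    return bool(mae <= t_mae and rmse <= t_rmse and me <= t_me)

print(evaluation_passes([1.0, 2.0, 3.0], [1.1, 2.1, 2.9]))  # True
print(evaluation_passes([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # False
```

Requiring all three simultaneously guards against a model that has low average error but large outliers (caught by RMSE) or a systematic bias (caught by the mean error).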
Example 5: as shown in fig. 5, on the basis of embodiment 1, the periodic analysis module provided in the embodiment of the present invention includes:
the analysis node setting sub-module is responsible for setting the time nodes at which the qi and blood state data are to be compared and setting the time nodes of the historical analysis results;
the model service construction sub-module is responsible for building a service around the qi-blood model, periodically analyzing and comparing previous analysis results, giving an analysis result according to trend analysis, and then giving a corresponding maintenance strategy in combination with the platform strategy database;
the result and strategy sending sub-module is responsible for sending the comparison result and the conditioning scheme to the intelligent terminal;
the working principle and beneficial effects of the technical scheme are as follows: the analysis node setting submodule of the embodiment sets a time node for comparing the qi and blood state data and sets a time node for historical analysis results; the model service construction submodule carries out service construction on the qi-blood model, analyzes and compares the analysis results at regular time, gives out the analysis results according to trend analysis, and then gives out corresponding maintenance strategies by combining a platform strategy database; the result and strategy sending sub-module sends the comparison result and the conditioning scheme to the intelligent terminal; according to the scheme, the analysis result of the current qi and blood state data is obtained through comparison with the historical analysis result, and reliable reference data is provided for adjustment of the nursing strategy; meanwhile, the analysis result and the conditioning scheme are sent to the intelligent terminal, so that on one hand, the object to be detected can conveniently know the analysis result in real time, and on the other hand, humanized service is provided, and the storage and the searching of the result are convenient.
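The periodic comparison against historical results can be sketched as follows; the score scale, the trend rule and the strategy table are placeholders standing in for the platform strategy database, not values from the patent.

```python
# Hedged sketch of the periodic analysis module's compare-and-recommend step.
STRATEGIES = {"improving": "maintain current routine",
              "declining": "adjust diet and rest; consult a practitioner",
              "stable": "no change needed"}  # assumed conditioning schemes

def compare_with_history(current_score, history, tolerance=0.05):
    """Classify the trend of the current qi-blood score against its history."""
    baseline = sum(history) / len(history)
    if current_score > baseline + tolerance:
        trend = "improving"
    elif current_score < baseline - tolerance:
        trend = "declining"
    else:
        trend = "stable"
    return {"trend": trend, "baseline": baseline, "scheme": STRATEGIES[trend]}

report = compare_with_history(0.62, history=[0.70, 0.72, 0.68])
print(report["trend"])  # declining
```

The returned record bundles the comparison result with the conditioning scheme, which is exactly the pair the result and strategy sending sub-module forwards to the intelligent terminal.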
Example 6: as shown in fig. 6, on the basis of embodiment 1 and embodiment 5, the method for analyzing a facial recognition-based qi-blood state according to the embodiment of the present invention includes the following steps:
s100: shooting and acquiring an original video or a photo through preset shooting equipment, and determining facial feature points; transmitting the complexion feature points to a preset big data analysis platform, analyzing and calculating to determine physical sign data, transmitting the physical sign data to a preset artificial intelligence algorithm for analysis and processing, and determining an analysis result; the analysis result is used to judge whether the object to be detected shows facial manifestations of qi and blood deficiency in traditional Chinese medicine;
s200: based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting analysis results and transmitting the analysis results to preset simulation equipment;
s300: judging, according to the qi-blood model, whether the physical sign data of the object to be detected indicate qi and blood deficiency, and giving a qi and blood nursing suggestion when they do;
the working principle and beneficial effects of the technical scheme are as follows: firstly, an original video or photo is shot and acquired through preset shooting equipment, and the facial feature points are determined; the complexion feature points are transmitted to the preset big data analysis platform, where analysis and calculation determine the physical sign data, and the physical sign data are transmitted to the preset artificial intelligence algorithm for analysis and processing to determine the analysis result; the analysis result is used to judge whether the object to be detected shows the facial manifestations of qi and blood deficiency in traditional Chinese medicine; then, based on the deep neural network learning algorithm, the qi-blood model is established by periodically collecting analysis results and transmitting them to the preset simulation equipment; finally, whether the physical sign data of the object to be detected indicate qi and blood deficiency is judged according to the qi-blood model, and a qi and blood nursing suggestion is given when they do; through this scheme, qi and blood analysis based on facial recognition allows the basic information of a wearer to be queried quickly, prompts closely matched traditional Chinese medicine indexes, and gives a professional preliminary physiological data analysis, so that the user can learn of his or her own condition in advance and adopt a suitable nursing approach; the intelligent terminal realizes resource sharing, so that users and their families can conveniently follow the physical condition of the whole family, realizing early knowledge, early maintenance and early health; meanwhile, the facial video is analyzed and processed, and the data are processed by the qi-blood analysis model to obtain the current qi and blood analysis result, realizing intelligent qi and blood analysis and improving the accuracy of the qi and blood analysis results.
Example 7: as shown in fig. 7, on the basis of embodiment 6, the process for determining a facial feature point provided in the embodiment of the present invention includes the following steps:
s101: acquiring and processing original video data by a camera, determining a face position image of a face, and detecting face information of an object to be detected;
s102: according to the facial information of the object to be detected captured by the camera, determining acquisition conditions such as the face definition of the object to be detected, and sending out an instruction for starting acquisition;
s103: performing noise reduction and cropping on the face information of the object to be detected, and extracting feature point information of the facial regions and the related distribution of facial color from the resulting face image;
the working principle and beneficial effects of the technical scheme are as follows: firstly, the original video data are acquired and processed through the camera, the facial position image of the face is determined, and the facial information of the object to be detected is detected; secondly, acquisition conditions, such as the face definition of the object to be detected, are determined according to the facial information captured by the camera, and an instruction to start acquisition is sent out; then the face information of the object to be detected is denoised and cropped, and feature point information of the facial regions and the related distribution of facial color are extracted from the resulting face image; through this scheme, the parameters of the face image acquisition equipment are set according to the captured facial information, meeting the personalized requirements of acquisition parameter setting in specific environments and improving the accuracy of the qi-blood analysis result; noise reduction and cropping of the face information improve the definition and integrity of the face image, remove background that interferes with qi and blood analysis, reduce the load on the system, and improve the efficiency with which the system processes face information.
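The definition (sharpness) check performed before issuing the start-acquisition instruction can be approximated with the common variance-of-Laplacian heuristic; both the metric and its threshold are assumptions here, not taken from the patent.

```python
# Sketch of an acquisition-condition check (s102): gate acquisition on sharpness.
import numpy as np

def sharpness(img):
    """Variance of a 4-neighbour Laplacian; higher values mean a sharper image."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

def ready_to_acquire(img, threshold=10.0):
    return sharpness(img) >= threshold

rng = np.random.default_rng(2)
sharp = rng.uniform(0, 255, size=(64, 64))   # high-frequency content
blurred = np.full((64, 64), 128.0)           # flat image, no detail at all
print(ready_to_acquire(sharp), ready_to_acquire(blurred))  # True False
```

Gating on a sharpness score like this is one concrete way to realize "determining acquisition conditions such as the face definition" before acquisition begins.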
Example 8: as shown in fig. 8, on the basis of embodiment 6, the process for determining the analysis result provided in the embodiment of the present invention includes the following steps:
s104: acquiring characteristic point information of the face position and the face color related distribution of the person, and transmitting the characteristic point information to a big data analysis platform;
s105: analyzing the transmitted characteristic point information based on a big data analysis platform, and analyzing and calculating to obtain sign data; wherein the sign data at least comprises specific part data of complexion, complexion data and corresponding organ data;
s106: excavating and screening physical sign data of a user through a preset artificial intelligence algorithm, and determining an analysis result;
the working principle and beneficial effects of the technical scheme are as follows: firstly, feature point information of the facial regions and the related distribution of facial color is acquired and transmitted to the big data analysis platform; the platform then analyzes the incoming feature point information and calculates the physical sign data, which at least comprise data on specific facial parts, complexion data and corresponding organ data; finally, the sign data of the user are mined and screened through the preset artificial intelligence algorithm, and the analysis result is determined; through this scheme, the facial color feature points are analyzed on the big data analysis platform to obtain the sign data reflecting the qi-blood analysis, realizing intelligent data analysis, and representative sign data are then obtained through mining and screening by the artificial intelligence algorithm, which improves the accuracy of the qi-blood analysis result so that it can truly reflect the health level of the object to be detected.
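The "mining and screening" step that keeps only representative sign data can be sketched as a ranking by deviation from a reference value; the scoring rule is an assumption standing in for the artificial intelligence algorithm described in s106.

```python
# Illustrative screening of sign records: retain the most anomalous regions.
def screen_signs(signs, reference=128.0, top_k=2):
    """Rank sign records by how far their brightness deviates from a reference."""
    ranked = sorted(signs, key=lambda s: abs(s["brightness"] - reference),
                    reverse=True)
    return ranked[:top_k]

signs = [{"region": "forehead", "brightness": 210.0},
         {"region": "nose", "brightness": 130.0},
         {"region": "cheeks", "brightness": 90.0}]
print([s["region"] for s in screen_signs(signs)])  # ['forehead', 'cheeks']
```

Keeping only the most deviant regions is one simple reading of "representative sign data"; a production system would use a learned model rather than a fixed reference.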
Example 9: as shown in fig. 9, on the basis of embodiment 6, the process for establishing an qi-blood model according to the embodiment of the present invention includes the following steps:
s201: periodically collecting and marking the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional combination, and generating a training set;
s202: constructing a deep neural network comprising a long-term memory network and a convolutional neural network;
s203: performing parameter adjustment and optimization by training the model and controlling the deviation between predictions and labels;
s204: preparing a test set corresponding to the training set;
s205: inputting the training set and the testing set into the deep neural network, determining an evaluation result when the mean absolute error, the root mean square error and the mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm;
the working principle and beneficial effects of the technical scheme are as follows: firstly, the relevant facial regions and corresponding organ data are periodically collected and marked, data cleaning and high-dimensional combination are performed, and a training set is generated; secondly, a deep neural network comprising a long short-term memory network and a convolutional neural network is built; parameter adjustment and optimization are performed by training the model and controlling the deviation between predictions and labels; then a test set corresponding to the training set is made; finally, the training set and the testing set are input into the deep neural network, and when the mean absolute error, the root mean square error and the mean error simultaneously meet preset thresholds, an evaluation result is determined and the qi-blood model based on the deep neural network learning algorithm is finally established; through this scheme, establishing the qi-blood model improves the efficiency of qi and blood data processing and the level of intelligence and avoids the complexity of a plurality of functional modules; the deep neural network learning algorithm assists in establishing the qi-blood model, which can learn independently and adjust model parameters in time, ensuring the accuracy of the evaluation result.
Example 10: based on embodiment 9, the expression of the qi-blood model provided in the embodiment of the invention is:
wherein BLEU represents the evaluation index of the qi and blood state, BP represents the evaluation index of the standard qi and blood state, ω_n represents the weight given to P_n, P_n represents the score of the current qi and blood state, lc represents the accuracy of the analysis result of the current qi and blood state, and lr represents the accuracy of the standard qi and blood state analysis result; exp() represents an exponential function with the natural constant e as its base; n represents an evaluation index of the candidate qi-blood state, and N represents the label of the evaluation index of the candidate qi-blood state;
the working principle and beneficial effects of the technical scheme are as follows: the embodiment establishes a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting the analysis results to simulation equipment; judging the qi and blood state according to the established qi and blood model and comparing the qi and blood state with the historical analysis result according to the preset time node, giving a conditioning scheme corresponding to the current qi and blood state, and simultaneously transmitting the comparison result and the conditioning scheme to the intelligent terminal; the efficiency of qi and blood data processing is improved through the qi and blood model, the intelligent level is also improved, and the complexity of adopting a plurality of functional modules is omitted.
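The expression itself is missing from the published text (it was an image in the original), but the symbol definitions in example 10 (BP, ω_n, P_n, lc, lr, exp) match the standard BLEU expression, BLEU = BP * exp(Σ_n ω_n log P_n) with BP = 1 when lc > lr and exp(1 - lr/lc) otherwise. The sketch below encodes that reading as an assumption, not as the patent's confirmed formula.

```python
# Assumed reconstruction of the example-10 expression in BLEU form.
import math

def bleu_style_score(p_scores, weights, lc, lr):
    """BP-weighted geometric mean of the per-index scores P_n."""
    bp = 1.0 if lc > lr else math.exp(1.0 - lr / lc)   # brevity-penalty analogue
    return bp * math.exp(sum(w * math.log(p) for w, p in zip(weights, p_scores)))

score = bleu_style_score([0.8, 0.6], weights=[0.5, 0.5], lc=0.9, lr=1.0)
print(round(score, 4))
```

Under this reading, the BP factor discounts the score whenever the accuracy of the current analysis (lc) falls short of the standard accuracy (lr), consistent with the symbol descriptions above.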
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A facial recognition-based qi-blood state analysis system, comprising:
the video processing module is responsible for shooting through the intelligent camera to generate a facial video, and for preprocessing the shot video to obtain effective facial image information;
the feature extraction module is in charge of carrying out feature extraction according to the obtained effective facial image information to obtain extracted facial color feature points;
the data analysis module is responsible for transmitting the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and for transmitting the physical sign data to the artificial intelligence algorithm for analysis and processing to obtain an analysis result; the analysis result is used to judge whether the object to be detected has a part with poor complexion and, if so, which specific part;
the qi-blood model building module is responsible for building a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting the analysis results to simulation equipment;
the periodic analysis module is in charge of judging the qi and blood state according to the established qi and blood model and comparing the qi and blood state with the historical analysis result according to the preset time node, giving a conditioning scheme corresponding to the current qi and blood state, and simultaneously sending the comparison result and the conditioning scheme to the intelligent terminal.
2. The facial recognition-based qi-blood state analysis system of claim 1, wherein the preprocessing performed by the video processing module comprises denoising and cropping;
and the facial color feature points of the feature extraction module comprise the color distribution, mottling, and luster of the face.
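The three facial color feature points named in claim 2 can be approximated with simple per-channel statistics. The sketch below is dependency-free and uses assumed proxies (mean color for distribution, channel spread for mottling, brightness for luster); the 4-pixel patch is synthetic illustrative data, not from the patent.

```python
from statistics import mean, pstdev

def complexion_features(pixels):
    """pixels: list of (r, g, b) tuples sampled from the face region."""
    r = [p[0] for p in pixels]
    g = [p[1] for p in pixels]
    b = [p[2] for p in pixels]
    return {
        "mean_rgb": (mean(r), mean(g), mean(b)),              # color distribution
        "mottling": mean([pstdev(r), pstdev(g), pstdev(b)]),  # color unevenness
        "luster": mean([(p[0] + p[1] + p[2]) / 3 for p in pixels]),  # brightness
    }

# synthetic skin-tone patch
patch = [(200, 150, 140), (210, 155, 145), (190, 148, 138), (200, 151, 141)]
feats = complexion_features(patch)
print(feats["mean_rgb"])
```

A real system would sample these statistics per facial region (forehead, cheeks, lips) rather than over one patch.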
3. The facial recognition-based qi-blood state analysis system of claim 1, wherein the video processing module comprises:
the face information acquisition sub-module, responsible for detecting the facial information of the object to be detected through a camera to obtain captured facial information;
the acquisition condition setting sub-module, responsible for determining the sharpness acquisition conditions for the object to be detected according to the captured facial information, and issuing an instruction to start acquisition;
the face image processing sub-module, responsible for receiving the instruction to start acquisition, acquiring the facial information of the object to be detected, and performing noise reduction and cropping on the acquired facial video to obtain a noise-reduced, cropped face image;
and the face image sending sub-module, responsible for sending the noise-reduced, cropped face image to the feature extraction module, which extracts feature point information on face parts and the related distribution of facial color from the face image.
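The "noise reduction and cropping" step of claim 3 can be sketched without any imaging library: a 3x3 median filter suppresses salt-and-pepper noise, then the frame is cropped to a face bounding box assumed to come from a detector. In practice a library such as OpenCV would be used; this toy 4x4 grayscale frame is illustrative only.

```python
from statistics import median

def median_denoise(img):
    """Apply a 3x3 median filter to interior pixels of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[yy][xx] for yy in (y - 1, y, y + 1)
                                  for xx in (x - 1, x, x + 1)]
            out[y][x] = median(window)
    return out

def crop(img, box):
    x0, y0, x1, y1 = box  # face bounding box, assumed given by a detector
    return [row[x0:x1] for row in img[y0:y1]]

# 4x4 frame with one noisy pixel (255)
frame = [[10, 10, 10, 10],
         [10, 255, 12, 10],
         [10, 11, 12, 10],
         [10, 10, 10, 10]]
clean = median_denoise(frame)
face = crop(clean, (1, 1, 3, 3))
print(face)
```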
4. The facial recognition-based qi-blood state analysis system of claim 1, wherein the data analysis module comprises:
the information sending sub-module, responsible for acquiring the facial color feature points and transmitting them to the big data analysis platform;
the information analysis sub-module, responsible for having the big data analysis platform analyze the transmitted facial color feature points and calculate sign data; wherein the sign data at least comprises data on specific parts of the complexion, complexion data, or corresponding organ data;
and the result output sub-module, responsible for mining and screening the user's sign data through an artificial intelligence algorithm and determining an analysis result.
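The "mining and screening" of sign data in claim 4 can be illustrated with a rule-based sketch: per-region sign scores are screened against a threshold to flag parts with poor complexion. The region names, the pallor score, and the threshold are assumptions made for this example, not the patent's algorithm.

```python
def screen_signs(sign_data, pallor_threshold=0.6):
    """sign_data: {face region: pallor score in 0..1, higher = paler}.
    Returns whether any region shows poor complexion and which regions."""
    flagged = {region: score for region, score in sign_data.items()
               if score >= pallor_threshold}
    return {
        "poor_complexion": bool(flagged),   # does a poor-complexion part exist?
        "regions": sorted(flagged),         # the specific parts
    }

analysis = screen_signs({"forehead": 0.3, "cheeks": 0.7, "lips": 0.8})
print(analysis)
```

A production system would derive these scores from the facial color feature points rather than take them as given.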
5. The facial recognition-based qi-blood state analysis system of claim 1, wherein the qi-blood model building module comprises:
the training set generation sub-module, responsible for periodically collecting and labeling relevant face positions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
the network building sub-module, responsible for building a deep neural network comprising a long short-term memory network and a convolutional neural network;
the parameter adjustment optimization sub-module, responsible for tuning and optimizing parameters by training the model and controlling the deviation between predictions and labels;
the test set generation sub-module, responsible for producing a test set corresponding to the training set;
and the model output sub-module, responsible for inputting the training set and the test set into the deep neural network, determining an evaluation result when the mean absolute error, root mean square error, and mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
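The acceptance check in claim 5 — the model passes only when mean absolute error, root mean square error, and mean error all meet preset thresholds at the same time — can be sketched as below. The threshold values and the sample predictions are illustrative assumptions, not figures from the patent.

```python
import math

def evaluate(pred, true, thresholds=(0.1, 0.15, 0.05)):
    """Compute MAE, RMSE and mean error; accept only if all three
    simultaneously fall within the preset thresholds."""
    errs = [p - t for p, t in zip(pred, true)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    me = abs(sum(errs) / len(errs))  # magnitude of the mean (signed) error
    mae_t, rmse_t, me_t = thresholds
    accepted = mae <= mae_t and rmse <= rmse_t and me <= me_t
    return {"mae": mae, "rmse": rmse, "me": me, "accepted": accepted}

report = evaluate(pred=[0.50, 0.62, 0.41], true=[0.52, 0.60, 0.40])
print(report["accepted"])
```

The same check would run on the test set produced by the test set generation sub-module before the qi-blood model is finalized.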
6. The facial recognition-based qi-blood state analysis system of claim 1, wherein the periodic analysis module comprises:
the analysis node setting sub-module, responsible for setting the time nodes at which the qi and blood state data are to be compared and the time nodes of the historical analysis results;
the model service construction sub-module, responsible for building a service around the qi-blood model, periodically analyzing and comparing previous analysis results, giving an analysis result based on trend analysis, and then giving a corresponding maintenance strategy by consulting the platform strategy database;
and the result and strategy sending sub-module, responsible for sending the comparison result and the conditioning scheme to the intelligent terminal.
7. A facial recognition-based qi and blood state analysis method, characterized by comprising the following steps:
shooting and acquiring an original video or photo with preset shooting equipment, and determining facial feature points; transmitting the complexion feature points to a preset big data analysis platform for analysis and calculation to determine sign data, then transmitting the sign data to a preset artificial intelligence algorithm for analysis and processing to determine an analysis result; the analysis result is used to judge whether the object to be detected shows facial manifestations caused by deficiency of qi and blood in traditional Chinese medicine;
establishing a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting them to preset simulation equipment;
and judging from the qi-blood model whether the sign data of the object to be detected indicate deficiency of qi and blood, and giving qi and blood nursing suggestions when deficiency is present.
8. The facial recognition-based qi and blood state analysis method of claim 7, wherein the process of determining facial features comprises the following steps:
acquiring and processing original video data with a camera, determining the face position image, and detecting the facial information of the object to be detected;
determining the face sharpness acquisition conditions for the object to be detected according to the facial information captured by the camera, and issuing an instruction to start acquisition;
and performing noise reduction and cropping on the facial information of the object to be detected, and extracting feature point information on face parts and the related distribution of facial color from the resulting face image.
9. The facial recognition-based qi and blood state analysis method of claim 7, wherein the process of determining the analysis result comprises the following steps:
acquiring feature point information on the person's face positions and the related distribution of facial color, and transmitting it to the big data analysis platform;
analyzing the transmitted feature point information on the big data analysis platform, and calculating sign data; wherein the sign data at least comprises data on specific parts of the complexion, complexion data, and corresponding organ data;
and mining and screening the user's sign data through a preset artificial intelligence algorithm to determine an analysis result.
10. The facial recognition-based qi and blood state analysis method of claim 7, wherein the process of establishing the qi-blood model comprises the following steps:
periodically collecting and labeling relevant face positions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
building a deep neural network comprising a long short-term memory network and a convolutional neural network;
tuning and optimizing parameters by training the model and controlling the deviation between predictions and labels;
preparing a test set corresponding to the training set;
and inputting the training set and the test set into the deep neural network, determining an evaluation result when the mean absolute error, root mean square error, and mean error simultaneously meet preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310489454.9A CN116530981A (en) | 2023-05-04 | 2023-05-04 | Facial recognition-based qi and blood state analysis system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116530981A true CN116530981A (en) | 2023-08-04 |
Family
ID=87442916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310489454.9A Pending CN116530981A (en) | 2023-05-04 | 2023-05-04 | Facial recognition-based qi and blood state analysis system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116530981A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117423041A (en) * | 2023-12-13 | 2024-01-19 | 成都中医药大学 | Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision |
CN117423041B (en) * | 2023-12-13 | 2024-03-08 | 成都中医药大学 | Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190295729A1 (en) | Universal non-invasive blood glucose estimation method based on time series analysis | |
RU2757048C1 (en) | Method and system for assessing the health of the human body based on the large-volume sleep data | |
CN101247759B (en) | Electrophysiological analysis system and method | |
CN108171278B (en) | Motion pattern recognition method and system based on motion training data | |
CN112001122B (en) | Non-contact physiological signal measurement method based on end-to-end generation countermeasure network | |
CN116440425B (en) | Intelligent adjusting method and system of LED photodynamic therapeutic instrument | |
CN109276242A (en) | The method and apparatus of electrocardiosignal type identification | |
US11406304B2 (en) | Systems and methods for physiological sign analysis | |
CN112788200B (en) | Method and device for determining frequency spectrum information, storage medium and electronic device | |
CN116530981A (en) | Facial recognition-based qi and blood state analysis system and method | |
CN111829661A (en) | Forehead temperature measurement method and system based on face analysis | |
CN108009519B (en) | Light irradiation information monitoring method and device | |
CN114129169B (en) | Bioelectric signal data identification method, system, medium, and device | |
CN114305418B (en) | Data acquisition system and method for intelligent assessment of depression state | |
CN113128585B (en) | Deep neural network based multi-size convolution kernel method for realizing electrocardiographic abnormality detection and classification | |
CN110638440A (en) | Self-service electrocardio detecting system | |
CN117598700A (en) | Intelligent blood oxygen saturation detection system and method | |
CN116186561B (en) | Running gesture recognition and correction method and system based on high-dimensional time sequence diagram network | |
CN109431499B (en) | Botanic person home care auxiliary system and auxiliary method | |
CN116434979A (en) | Physiological state cloud monitoring method, monitoring system and storage medium | |
CN110693508A (en) | Multi-channel cooperative psychophysiological active sensing method and service robot | |
CN116230198A (en) | Multidimensional Tibetan medicine AI intelligent auxiliary decision-making device and system | |
CN112842355A (en) | Electrocardiosignal heart beat detection and identification method based on deep learning target detection | |
CN111637610A (en) | Indoor environment health degree adjusting method and system based on machine vision | |
CN117158972B (en) | Attention transfer capability evaluation method, system, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||