CN116530981A - Facial recognition-based qi and blood state analysis system and method - Google Patents

Facial recognition-based qi and blood state analysis system and method

Info

Publication number
CN116530981A
Authority
CN
China
Prior art keywords
analysis
blood
module
data
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310489454.9A
Other languages
Chinese (zh)
Inventor
白伟民 (Bai Weimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xueyang Technology Co., Ltd.
Original Assignee
Beijing Xueyang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xueyang Technology Co., Ltd.
Priority to CN202310489454.9A
Publication of CN116530981A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059: Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/1032: Determining colour for diagnostic purposes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/117: Identification of persons
    • A61B 5/1171: Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B 5/1176: Recognition of faces
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145: Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/14542: Measuring characteristics of blood in vivo for measuring blood gases
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/444: Evaluating skin marks, e.g. mole, nevi, tumour, scar
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4854: Diagnosis based on concepts of traditional oriental medicine
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7271: Specific aspects of physiological measurement analysis
    • A61B 5/7275: Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/54: Extraction of image or video features relating to texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT for mining of medical data, e.g. analysing previous cases of other patients

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Human Computer Interaction (AREA)
  • Dermatology (AREA)
  • Optics & Photonics (AREA)
  • Alternative & Traditional Medicine (AREA)
  • Dentistry (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a facial recognition-based qi and blood state analysis system and method. The system comprises a video processing module, a feature extraction module, a data analysis module, a qi-blood model building module and a periodic analysis module. The method comprises the following steps: capturing an original video or photo with preset photographic equipment and determining facial feature points; transmitting the facial complexion feature points to a preset big data analysis platform, where analysis and calculation determine physical sign data; transmitting the sign data to a preset artificial intelligence algorithm for analysis and processing to determine an analysis result; based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting the analysis results and transmitting them to preset simulation equipment; and judging from the qi-blood model whether the subject's sign data indicate qi and blood deficiency, and issuing a qi and blood care suggestion when they do. The invention realizes intelligent processing of qi and blood analysis and improves the accuracy of qi and blood analysis results.

Description

Facial recognition-based qi and blood state analysis system and method
Technical Field
The invention relates to the technical field of intelligent data analysis, and in particular to a facial recognition-based qi and blood state analysis system and method.
Background
With the continuous progress of medical science and the continual updating of medical equipment, diagnosis today mostly relies on blood sampling, electrocardiography, X-ray examination and the like. Traditional Chinese medicine, however, examines a patient through the four diagnostic methods of inspection, listening and smelling, inquiry and palpation, and qi and blood are an important concept in traditional Chinese medicine, referring to the state of energy and blood circulation in the human body; inspection makes a preliminary judgment of qi and blood health from a person's facial color and facial features. Long-term qi and blood deficiency, accumulating day after day, can lead to further health problems. Knowing one's qi and blood state in daily life, and receiving timely care when qi and blood are deficient, is therefore a precondition for maintaining health. The current prior art, however, lacks a device for analyzing qi and blood, resulting in inaccurate evaluation of health data and increased examination costs.
Prior art 1, application number CN202210264326.X, discloses a scalp health condition evaluation method, apparatus, storage medium and computer device. When the scalp health of a target person is evaluated, a pre-configured scalp health detection model yields a scalp health detection value, a pre-configured scalp qi and blood detection model yields a scalp qi and blood detection value, a scalp health index is determined from the two values, and the scalp health condition is evaluated from that index. Because both scalp health and scalp qi and blood condition are considered, the evaluation is more comprehensive and its result more accurate; however, only the scalp is evaluated, overall health data cannot be assessed at all, and the data result is one-sided.
Prior art 2, application number CN201310205895.8, discloses a USB fingertip pulse meter comprising a housing, a USB plug and a pulse-taking module arranged inside the housing. The pulse-taking module comprises a reflective sensor, a multifunctional microcontroller, and a driving and detection program memory; the multifunctional microcontroller comprises a controllable-gain amplifier, a filter, a reference power supply and a multifunctional microprocessor. Although pulse waveforms, cardiac state, qi and blood state and a comprehensive cardiovascular figure of merit can be detected, and a computer extends the device into a cardiovascular health screening and diagnosis tool, the qi and blood state is obtained through pulse diagnosis alone, so the qi and blood analysis result is inaccurate; no facial image is acquired or processed, which limits the accuracy of the qi and blood analysis data.
Prior art 3, application number CN201910962353.2, discloses a health detection device comprising: electrodes for collecting electromyographic signals from acupoints at different positions of the human body; an amplifying module comprising a first preamplifier and a second amplifier, which amplifies the electromyographic signal 800 to 1000 times in total; a filtering module that filters the amplified signal with analog and then digital filtering; a processing module that receives the filtered signal and analyzes the user's health condition from it; and a wireless communication module that sends the health condition to the cloud or a mobile terminal. Although the accuracy of the electromyographic signals is improved, and simultaneously detecting acupoints at different body positions can reflect the user's health comprehensively, the data source is single and the degree of intelligence is low.
Prior arts 1, 2 and 3 share the problems that the data source is single and accurate analysis and processing of qi and blood data cannot be performed, which affects the accuracy of the qi and blood analysis results.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a facial recognition-based qi and blood state analysis system, comprising:
a video processing module, responsible for shooting with an intelligent camera to generate a facial video, and preprocessing the captured video to obtain effective face image information;
a feature extraction module, responsible for performing feature extraction on the effective face image information to obtain extracted facial color feature points;
a data analysis module, responsible for transmitting the facial color feature points to a big data analysis platform for analysis and calculation to determine physical sign data, and transmitting the sign data to an artificial intelligence algorithm for analysis and processing to obtain an analysis result; the analysis result is used to judge whether the subject has regions of poor complexion and, if so, which specific regions;
a qi-blood model building module, responsible for building a qi-blood model based on a deep neural network learning algorithm by periodically collecting the analysis results and transmitting them to simulation equipment;
a periodic analysis module, responsible for judging the qi and blood state from the established qi-blood model at preset time nodes, comparing it with historical analysis results, giving a conditioning scheme corresponding to the current qi and blood state, and sending the comparison result and the conditioning scheme to an intelligent terminal.
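The five modules above form a sequential pipeline from camera to care suggestion. A minimal sketch of that flow follows; all function names, the toy one-value-per-frame "video", and the 0.5 score cutoff are illustrative assumptions, since the patent does not specify an implementation:

```python
# Hypothetical sketch of the five-module pipeline. Each module is reduced
# to a pure function; a real system would wrap cameras, a big data
# analysis platform, and a trained qi-blood model.

def video_processing(raw_frames):
    """Keep only usable frames (stands in for denoising and cropping)."""
    return [f for f in raw_frames if f is not None]

def feature_extraction(face_images):
    """Reduce frames to a facial color feature (here: mean intensity)."""
    return {"mean_color": sum(face_images) / len(face_images)}

def data_analysis(features):
    """Turn feature points into a sign-data analysis result."""
    return {"qi_blood_score": features["mean_color"] / 255.0}

def periodic_analysis(result, history):
    """Compare the current result with history and suggest care."""
    score = result["qi_blood_score"]
    trend = score - (history[-1] if history else score)
    return {"trend": trend, "advice": "care" if score < 0.5 else "ok"}

# Toy grayscale "frames": one mean intensity value per frame.
frames = [120, 130, None, 140]
result = data_analysis(feature_extraction(video_processing(frames)))
report = periodic_analysis(result, history=[0.4])
```

The staging mirrors the module list: each module's output is the next module's only input, which is what lets the patent treat the modules as independently replaceable.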
Optionally, the preprocessing performed by the video processing module includes denoising and cropping;
the facial color feature points of the feature extraction module include: color distribution, mottling and luster of the facial regions.
Optionally, the video processing module includes:
a face information acquisition sub-module, responsible for detecting the face of the subject through the camera to obtain captured face information;
an acquisition condition setting sub-module, responsible for determining acquisition conditions such as image sharpness from the captured face information and issuing an instruction to start acquisition;
a face image processing sub-module, responsible for receiving the instruction to start acquisition, collecting the subject's face information, and denoising and cropping the collected video to obtain a denoised, cropped face image;
a face image sending sub-module, responsible for sending the denoised, cropped face image to the feature extraction module, which extracts feature point information of the facial regions and the related facial color distribution from the image.
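The denoising and cropping steps named above can be sketched in pure Python on a toy grayscale image; the 3x3 mean filter is an illustrative stand-in for the unspecified denoising method, and a real system would use an image library (e.g. OpenCV's non-local-means denoising) instead:

```python
# Sketch of the video processing sub-modules' denoise-and-crop step.
# Images are lists of rows of integer intensities.

def mean_filter(img):
    """3x3 mean filter; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) // 9
    return out

def crop(img, top, left, height, width):
    """Crop the detected face region out of the frame."""
    return [row[left:left + width] for row in img[top:top + height]]

img = [[10, 10, 10, 10],
       [10, 90, 90, 10],
       [10, 90, 90, 10],
       [10, 10, 10, 10]]
smooth = mean_filter(img)
face = crop(smooth, 1, 1, 2, 2)  # -> [[45, 45], [45, 45]]
```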
Optionally, the data analysis module includes:
an information sending sub-module, responsible for acquiring the facial color feature points and transmitting them to the big data analysis platform;
an information analysis sub-module, responsible for having the big data analysis platform analyze the transmitted facial color feature points and calculate physical sign data; the sign data at least include data on specific regions of the complexion, complexion data, and corresponding organ data;
a result output sub-module, responsible for mining and screening the user's sign data through an artificial intelligence algorithm and determining the analysis result.
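A minimal sketch of the facial color feature points handled by these sub-modules: per-region mean color, plus a simple brightness-variance proxy for luster. The region representation and the luster formula are illustrative assumptions, not taken from the patent:

```python
# Compute facial color feature points for one facial region.

def region_features(pixels):
    """pixels: list of (r, g, b) tuples sampled from one facial region."""
    n = len(pixels)
    mean = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    brightness = [sum(p) / 3 for p in pixels]
    mb = sum(brightness) / n
    luster = sum((b - mb) ** 2 for b in brightness) / n  # variance proxy
    return {"mean_rgb": mean, "luster": luster}

cheek = [(200, 150, 140), (210, 160, 150), (190, 140, 130)]
feats = region_features(cheek)
```

These per-region dictionaries are the kind of "sign data" a platform could then map to complexion quality per facial region.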
Optionally, the qi-blood model building module includes:
a training set generation sub-module, responsible for periodically collecting and labeling the relevant facial regions and corresponding organ data, cleaning the data and merging high-dimensional features, to generate a training set;
a network building sub-module, responsible for building a deep neural network comprising a long short-term memory network and a convolutional neural network;
a parameter tuning sub-module, responsible for tuning and optimizing parameters by training the model and comparing predictions against labels to control deviation;
a test set generation sub-module, responsible for producing a test set corresponding to the training set;
a model output sub-module, responsible for feeding the training set and test set to the deep neural network and, when the mean absolute error, root mean square error and mean error all meet preset thresholds, determining the evaluation result and establishing the final qi-blood model based on the deep neural network learning algorithm.
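The acceptance criterion above (model kept only when MAE, RMSE and mean error all meet preset thresholds) can be sketched directly; the threshold values here are illustrative, as the patent does not state them:

```python
# Sketch of the model output sub-module's acceptance check.
import math

def evaluate(predictions, labels, max_mae=0.1, max_rmse=0.15, max_me=0.05):
    """Return the three error metrics and whether all thresholds are met."""
    errors = [p - l for p, l in zip(predictions, labels)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    me = abs(sum(errors) / len(errors))  # mean (signed) error magnitude
    accepted = mae <= max_mae and rmse <= max_rmse and me <= max_me
    return {"mae": mae, "rmse": rmse, "mean_error": me, "accepted": accepted}

report = evaluate([0.52, 0.48, 0.61], [0.50, 0.50, 0.60])
```

Requiring all three metrics simultaneously guards against a model that is accurate on average (low mean error) but erratic per sample (high RMSE), or vice versa.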
Optionally, the periodic analysis module includes:
an analysis node setting sub-module, responsible for setting the time nodes at which the qi and blood state data are compared, and the time nodes of the historical analysis results;
a model service construction sub-module, responsible for wrapping the qi-blood model as a service, periodically analyzing and comparing against previous analysis results, producing an analysis result from the trend analysis, and then giving a corresponding maintenance strategy in combination with the platform strategy database;
a result and strategy sending sub-module, responsible for sending the comparison result and the conditioning scheme to the intelligent terminal.
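The periodic comparison step can be sketched as a trend check against the most recent historical score, mapped to a conditioning scheme; the scheme texts, the score scale, and the 0.02 dead band are illustrative placeholders:

```python
# Sketch of the periodic analysis module: compare the current qi-blood
# score with history and look up a conditioning scheme by trend.

SCHEMES = {
    "declining": "rest, diet adjustment, follow-up at the next time node",
    "stable": "maintain current routine",
    "improving": "continue current conditioning scheme",
}

def periodic_compare(current, history, eps=0.02):
    """Classify the trend of `current` vs the last historical score."""
    if not history:
        return "stable", SCHEMES["stable"]
    delta = current - history[-1]
    if delta < -eps:
        key = "declining"
    elif delta > eps:
        key = "improving"
    else:
        key = "stable"
    return key, SCHEMES[key]

key, scheme = periodic_compare(0.45, [0.55, 0.52])
```

The dead band `eps` keeps small measurement noise from flipping the reported trend between time nodes.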
The invention also provides a facial recognition-based qi and blood state analysis method, comprising the following steps:
capturing an original video or photo with preset photographic equipment and determining facial feature points; transmitting the facial complexion feature points to a preset big data analysis platform for analysis and calculation to determine physical sign data, and transmitting the sign data to a preset artificial intelligence algorithm for analysis and processing to determine an analysis result; the analysis result is used to judge whether the subject shows the facial manifestations that traditional Chinese medicine attributes to qi and blood deficiency;
based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting the analysis results and transmitting them to preset simulation equipment;
judging from the qi-blood model whether the subject's sign data indicate qi and blood deficiency, and giving a qi and blood care suggestion when they do.
Optionally, determining the facial feature points includes the following steps:
acquiring and processing the original video data from the camera, determining the face position image, and detecting the subject's face information;
determining acquisition conditions such as facial image sharpness from the face information captured by the camera, and issuing an instruction to start acquisition;
denoising and cropping the subject's face information, and extracting feature point information of the facial regions and the related facial color distribution from the resulting face image.
Optionally, determining the analysis result includes the following steps:
acquiring the feature point information of the facial regions and the related facial color distribution, and transmitting it to the big data analysis platform;
analyzing the transmitted feature point information on the big data analysis platform and calculating physical sign data; the sign data at least include data on specific regions of the complexion, complexion data, and corresponding organ data;
mining and screening the user's sign data with a preset artificial intelligence algorithm, and determining the analysis result.
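The mining-and-screening step above can be sketched as a rule-based screen over per-region sign data; the thresholds, the crude redness-as-pallor indicator, and the facial-region-to-organ mapping are illustrative assumptions loosely echoing traditional Chinese medicine face reading, not values from the patent:

```python
# Sketch of screening sign data to flag facial regions whose complexion
# may suggest qi-blood deficiency.

REGION_TO_ORGAN = {"cheek": "lung", "forehead": "heart", "chin": "kidney"}

def screen(sign_data, min_luster=30.0, min_redness=1.1):
    """Flag (region, organ) pairs that fail the luster or redness rule."""
    flagged = []
    for region, d in sign_data.items():
        r, g, b = d["mean_rgb"]
        redness = r / max(g, 1e-9)  # crude pallor indicator: red vs green
        if d["luster"] < min_luster or redness < min_redness:
            flagged.append((region, REGION_TO_ORGAN.get(region, "unknown")))
    return flagged

signs = {
    "cheek": {"mean_rgb": (150, 145, 140), "luster": 20.0},    # pale, dull
    "forehead": {"mean_rgb": (200, 150, 140), "luster": 60.0},  # normal
}
result = screen(signs)  # -> [("cheek", "lung")]
```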
Optionally, establishing the qi-blood model includes the following steps:
periodically collecting and labeling the relevant facial regions and corresponding organ data, cleaning the data and merging high-dimensional features, to generate a training set;
building a deep neural network comprising a long short-term memory network and a convolutional neural network;
tuning and optimizing parameters by training the model and comparing predictions against labels to control deviation;
producing a test set corresponding to the training set;
feeding the training set and test set to the deep neural network and, when the mean absolute error, root mean square error and mean error all meet preset thresholds, determining the evaluation result and finally establishing the qi-blood model based on the deep neural network learning algorithm.
In the invention, the video processing module shoots with an intelligent camera to generate a facial video and preprocesses the captured video to obtain effective face image information, the preprocessing including denoising and cropping; the feature extraction module performs feature extraction on the effective face image information to obtain extracted facial color feature points, including the color distribution, mottling and luster of the facial regions; the data analysis module transmits the facial color feature points to the big data analysis platform for analysis and calculation to determine physical sign data, and transmits the sign data to the artificial intelligence algorithm to obtain an analysis result used to judge whether the subject has regions of poor complexion and which specific regions; the qi-blood model building module builds a qi-blood model based on a deep neural network learning algorithm by periodically collecting the analysis results and transmitting them to simulation equipment; and the periodic analysis module judges the qi and blood state from the established qi-blood model at preset time nodes, compares it with historical analysis results, gives a conditioning scheme corresponding to the current qi and blood state, and sends the comparison result and the conditioning scheme to the intelligent terminal.
Through qi and blood analysis based on facial recognition, the scheme can quickly query the wearer's basic information, prompt closely matching traditional Chinese medicine indices, and give a professional preliminary analysis of physiological data, so that users can understand their own condition in advance and adopt an appropriate care regimen. The intelligent terminal enables resource sharing, so that users and their families can conveniently follow the physical condition of the whole family: early knowledge, early maintenance, early health. Meanwhile, the facial video is analyzed and processed, and the data are handled by the qi-blood analysis model to obtain the current qi and blood analysis result, realizing intelligent processing of qi and blood analysis and improving the accuracy of the qi and blood analysis results.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of an analysis system for qi and blood status based on facial recognition in embodiment 1 of the present invention;
FIG. 2 is a block diagram of a video processing module in embodiment 2 of the present invention;
FIG. 3 is a block diagram of a data analysis module in embodiment 3 of the present invention;
FIG. 4 is a block diagram of a qi and blood model building module according to embodiment 4 of the present invention;
FIG. 5 is a block diagram of a periodic analysis module in accordance with embodiment 5 of the present invention;
FIG. 6 is a flowchart of a method for analyzing qi and blood status based on facial recognition according to embodiment 6 of the present invention;
FIG. 7 is a process diagram of determining the facial feature points in embodiment 7 of the present invention;
FIG. 8 is a process diagram of determining the analysis result in embodiment 8 of the present invention;
FIG. 9 is a block diagram of the process of establishing the qi-blood model in embodiment 9 of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the application. As used in the examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims. In the description of this application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Example 1: as shown in fig. 1, an embodiment of the present invention provides a facial recognition-based qi-blood state analysis system, including:
the video processing module is responsible for shooting through the intelligent camera to generate a facial video and preprocessing the captured video to obtain effective face image information, the preprocessing comprising: denoising and cropping;
the feature extraction module is responsible for extracting features from the obtained effective facial image information to obtain facial color feature points; the facial color feature points include: the color distribution, spots, and gloss of each facial region;
the data analysis module is responsible for transmitting the facial color feature points to the big data analysis platform for analysis and calculation to determine sign data, then passing the sign data to an artificial intelligence algorithm for processing to obtain an analysis result; the analysis result is used to judge whether the object to be detected has any region of poor complexion and, if so, which specific region;
the qi-blood model building module is responsible for building a qi-blood model based on a deep neural network learning algorithm, by periodically collecting analysis results and transmitting them to simulation equipment;
the periodic analysis module is responsible for judging the qi-blood state from the established qi-blood model at preset time nodes, comparing it with historical analysis results, giving a conditioning scheme matching the current qi-blood state, and sending both the comparison result and the conditioning scheme to the intelligent terminal;
the working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the video processing module shoots through the intelligent camera to generate a facial video and preprocesses the captured video to obtain effective face image information, the preprocessing including denoising and cropping; the feature extraction module extracts features from the effective facial image information to obtain facial color feature points, including the color distribution, spots, and gloss of each facial region; the data analysis module transmits the facial color feature points to the big data analysis platform for analysis and calculation to determine sign data, then passes the sign data to an artificial intelligence algorithm to obtain an analysis result, which is used to judge whether the object to be detected has any region of poor complexion and, if so, which specific region; the qi-blood model building module builds a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting them to simulation equipment; the periodic analysis module judges the qi-blood state from the established qi-blood model at preset time nodes, compares it with historical analysis results, gives a conditioning scheme matching the current qi-blood state, and sends both the comparison result and the scheme to the intelligent terminal; through facial-recognition-based qi and blood analysis, the scheme can quickly query a user's basic information, prompt closely matching traditional Chinese medicine indexes, and give a professional preliminary analysis of physiological data, making it convenient for users to understand their own condition early and adopt an appropriate care regimen; the intelligent terminal enables resource sharing, so that users and their families can easily follow the physical condition of the whole family, achieving early awareness, early maintenance, and early health; meanwhile, the facial video is analyzed and the data are processed by the qi-blood analysis model to obtain the current qi-blood analysis result, realizing intelligent qi-blood analysis and improving the accuracy of the qi-blood analysis results.
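The five-module flow of this embodiment can be sketched end to end as follows; all function names, region labels, frame representations, and the 0.5 complexion threshold are illustrative assumptions, not part of the patent:

```python
# Minimal sketch of the video processing -> feature extraction -> data analysis chain.
# Frames are toy dicts of {region: redness value}; names and threshold are assumed.

def preprocess(frames):
    """Video processing module: drop empty frames (stand-in for denoise + crop)."""
    return [f for f in frames if f]

def extract_features(frames):
    """Feature extraction module: per-region mean 'redness' across frames."""
    regions = {}
    for frame in frames:
        for region, value in frame.items():
            regions.setdefault(region, []).append(value)
    return {r: sum(v) / len(v) for r, v in regions.items()}

def analyze(features, threshold=0.5):
    """Data analysis module: flag regions whose complexion falls below threshold."""
    poor = [r for r, v in features.items() if v < threshold]
    return {"poor_regions": poor, "features": features}

frames = [{"forehead": 0.62, "cheek": 0.41}, {"forehead": 0.58, "cheek": 0.39}, {}]
result = analyze(extract_features(preprocess(frames)))
print(result["poor_regions"])   # -> ['cheek']
```

The empty third frame stands in for an unusable capture removed during preprocessing; a real system would denoise and crop pixel data rather than filter dicts.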
Example 2: as shown in fig. 2, on the basis of embodiment 1, a video processing module provided in an embodiment of the present invention includes:
the face information acquisition sub-module is responsible for detecting the face information of an object to be detected through a camera to obtain captured face information;
the acquisition condition setting sub-module is responsible for determining acquisition conditions, such as image sharpness, for the object to be detected from the captured face information, and issuing an instruction to start acquisition;
the face image processing sub-module is responsible for receiving an instruction for starting acquisition, acquiring face information of an object to be detected, and performing noise reduction cutting on an acquired video of the face information to obtain a noise reduction cut face image;
the face image sending sub-module is responsible for sending the denoised and cropped face image to the feature extraction module, which extracts feature point information on facial regions and the related distribution of facial color from the face image;
the working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the facial information acquisition sub-module detects the facial information of the object to be detected through a camera to obtain captured face information; the acquisition condition setting sub-module determines acquisition conditions, such as image sharpness, from the captured face information and issues an instruction to start acquisition; the face image processing sub-module receives the instruction, acquires the facial information of the object to be detected, and denoises and crops the acquired video to obtain a denoised, cropped face image; the face image sending sub-module sends this image to the feature extraction module, which extracts feature point information on facial regions and the related distribution of facial color; by setting the parameters of the face image acquisition equipment from the captured face information, the scheme meets the personalized requirements of acquisition settings for specific environments and can improve the accuracy of the qi-blood analysis result; denoising and cropping the face information improve the sharpness and integrity of the face image, remove background that interferes with qi and blood analysis, reduce the load on the system, and improve the efficiency with which the system processes face information.
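The denoising and cropping steps can be illustrated with plain lists; a real system would use an image library such as OpenCV, so this is only a sketch of the logic:

```python
# Illustrative stand-ins for the noise-reduction and cropping steps of the
# face image processing sub-module; pixel grids are lists of rows of intensities.

def mean_denoise(image):
    """3x3 mean filter: each pixel becomes the average of its neighborhood."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def crop(image, top, left, height, width):
    """Cut out the face region, discarding interfering background."""
    return [row[left:left + width] for row in image[top:top + height]]

noisy = [[10, 10, 10, 10],
         [10, 90, 10, 10],   # single bright speck of noise
         [10, 10, 10, 10]]
clean = mean_denoise(noisy)          # the speck is smoothed toward its neighbors
face = crop(clean, 0, 1, 2, 2)       # keep a 2x2 window containing the face
```

The mean filter suppresses the isolated bright pixel, and the crop keeps only the region of interest, matching the stated goals of removing interference and reducing system load.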
Example 3: as shown in fig. 3, on the basis of embodiment 1, the data analysis module provided in the embodiment of the present invention includes:
the information sending sub-module is responsible for acquiring the facial color feature points and transmitting them to the big data analysis platform;
the information analysis sub-module is responsible for having the big data analysis platform analyze the incoming facial color feature points and compute sign data; wherein the sign data at least comprise region-specific complexion data, overall complexion data, or corresponding organ data;
the result output sub-module is responsible for mining and screening physical sign data of a user through an artificial intelligence algorithm and determining an analysis result;
the working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the information sending sub-module obtains the facial color feature points and transmits them to the big data analysis platform; the information analysis sub-module has the big data analysis platform analyze the incoming facial color feature points and compute sign data, which at least include region-specific complexion data, overall complexion data, or corresponding organ data; the result output sub-module mines and screens the user's sign data through an artificial intelligence algorithm and determines the analysis result; by analyzing the facial color feature points on the big data analysis platform, the scheme obtains sign data that reflect the qi-blood analysis and realizes intelligent data analysis; representative sign data are then obtained by mining and screening with the artificial intelligence algorithm, improving the accuracy of the qi-blood analysis result so that it can truly reflect the health level of the object to be detected.
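The information analysis sub-module's mapping from color feature points to sign data could be sketched as below; the region-to-organ table and the complexion rule are purely hypothetical placeholders, not the patent's actual correspondence:

```python
# Hypothetical sketch of turning facial color feature points into sign data.
# REGION_TO_ORGAN is a toy illustrative table, NOT a medical reference.

REGION_TO_ORGAN = {"forehead": "heart", "nose": "spleen", "cheeks": "lungs"}

def to_sign_data(color_points):
    """color_points: {region: {'gloss': 0..1, 'spots': count}} -> sign data records."""
    signs = []
    for region, feats in color_points.items():
        signs.append({
            "region": region,
            "organ": REGION_TO_ORGAN.get(region, "unknown"),
            # assumed rule: low gloss or many spots counts as poor complexion
            "poor_complexion": feats["gloss"] < 0.5 or feats["spots"] > 3,
        })
    return signs

signs = to_sign_data({"forehead": {"gloss": 0.8, "spots": 0},
                      "cheeks": {"gloss": 0.3, "spots": 5}})
```

The output records carry both the region-level complexion judgment and the associated organ field described in the sign data, ready for the mining and screening step.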
Example 4: as shown in fig. 4, on the basis of embodiment 1, the qi-blood model building module provided in the embodiment of the present invention includes:
the training set generation sub-module is responsible for periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
the network building sub-module is responsible for building a deep neural network comprising a long short-term memory network and a convolutional neural network;
the parameter adjustment optimization sub-module is responsible for tuning parameters by training the model and controlling the deviation between predictions and labels;
the test set generation sub-module is responsible for constructing a test set corresponding to the training set;
the model output sub-module is responsible for feeding the training and test sets into the deep neural network, determining the evaluation result when the mean absolute error, root mean square error, and mean error simultaneously satisfy preset thresholds, and taking the network as the final qi-blood model based on the deep neural network learning algorithm;
the working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the training set generation sub-module periodically collects and labels the relevant facial regions and corresponding organ data, performs data cleaning and high-dimensional merging, and generates a training set; the network building sub-module builds a deep neural network comprising a long short-term memory network and a convolutional neural network; the parameter adjustment optimization sub-module tunes parameters by training the model and controlling the deviation between predictions and labels; the test set generation sub-module constructs a test set corresponding to the training set; the model output sub-module feeds the training and test sets into the deep neural network and, when the mean absolute error, root mean square error, and mean error simultaneously satisfy preset thresholds, determines the evaluation result and takes the network as the final qi-blood model based on the deep neural network learning algorithm; building the qi-blood model improves the efficiency of qi-blood data processing and the level of intelligence, and avoids the complexity of adopting a plurality of functional modules; using a deep neural network learning algorithm to assist in building the qi-blood model allows the model to learn independently and adjust its parameters in time, ensuring the accuracy of the evaluation result.
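The model-output acceptance check described above can be sketched directly: the model passes only when the mean absolute error, root mean square error, and mean error all fall within preset thresholds. The threshold values here are illustrative assumptions:

```python
import math

# Sketch of the model-output sub-module's acceptance gate on the test set.
# Thresholds (max_mae, max_rmse, max_mean) are assumed values for illustration.

def evaluation_metrics(predictions, labels):
    errors = [p - t for p, t in zip(predictions, labels)]
    mae = sum(abs(e) for e in errors) / len(errors)          # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean square error
    mean_err = sum(errors) / len(errors)                     # mean (signed) error
    return mae, rmse, mean_err

def accept_model(predictions, labels, max_mae=0.1, max_rmse=0.15, max_mean=0.05):
    """Accept the qi-blood model only if all three metrics meet their thresholds."""
    mae, rmse, mean_err = evaluation_metrics(predictions, labels)
    return mae <= max_mae and rmse <= max_rmse and abs(mean_err) <= max_mean

ok = accept_model([0.52, 0.48, 0.61], [0.50, 0.50, 0.60])
```

Requiring all three metrics simultaneously, as the embodiment states, guards against a model that minimizes one error measure while drifting on another (e.g. low MAE but a systematic bias visible in the mean error).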
Example 5: as shown in fig. 5, on the basis of embodiment 1, the periodic analysis module provided in the embodiment of the present invention includes:
the analysis node setting submodule is responsible for setting time nodes required to compare the qi and blood state data and setting time nodes of historical analysis results;
the model service construction sub-module is responsible for carrying out service construction on the qi-blood model, analyzing and comparing the previous analysis results at fixed time, giving out the analysis results according to trend analysis, and then giving out corresponding maintenance strategies by combining a platform strategy database;
the result and strategy sending sub-module is responsible for sending the comparison result and the conditioning scheme to the intelligent terminal;
the working principle and beneficial effects of the technical scheme are as follows: the analysis node setting submodule of the embodiment sets a time node for comparing the qi and blood state data and sets a time node for historical analysis results; the model service construction submodule carries out service construction on the qi-blood model, analyzes and compares the analysis results at regular time, gives out the analysis results according to trend analysis, and then gives out corresponding maintenance strategies by combining a platform strategy database; the result and strategy sending sub-module sends the comparison result and the conditioning scheme to the intelligent terminal; according to the scheme, the analysis result of the current qi and blood state data is obtained through comparison with the historical analysis result, and reliable reference data is provided for adjustment of the nursing strategy; meanwhile, the analysis result and the conditioning scheme are sent to the intelligent terminal, so that on one hand, the object to be detected can conveniently know the analysis result in real time, and on the other hand, humanized service is provided, and the storage and the searching of the result are convenient.
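The periodic comparison against historical analysis results can be sketched as a simple trend check; the score scale, the 0.05 margin, and the conditioning-scheme table are all illustrative assumptions rather than the platform's actual strategy database:

```python
# Sketch of the periodic analysis module: compare the latest qi-blood score
# with the history at a preset time node and select a conditioning scheme.
# SCHEMES is a toy stand-in for the platform strategy database.

SCHEMES = {"improving": "maintain current routine",
           "declining": "adjust diet and rest; re-check sooner",
           "stable": "no change needed"}

def compare_with_history(history, current, margin=0.05):
    """history: earlier scores in time order; returns (trend, conditioning scheme)."""
    if not history:
        return "stable", SCHEMES["stable"]
    baseline = sum(history) / len(history)
    if current > baseline + margin:
        trend = "improving"
    elif current < baseline - margin:
        trend = "declining"
    else:
        trend = "stable"
    return trend, SCHEMES[trend]

trend, scheme = compare_with_history([0.60, 0.62, 0.61], 0.48)
```

The (trend, scheme) pair corresponds to the comparison result and conditioning scheme that the module sends on to the intelligent terminal.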
Example 6: as shown in fig. 6, on the basis of embodiment 1 and embodiment 5, the method for analyzing a facial recognition-based qi-blood state according to the embodiment of the present invention includes the following steps:
s100: shooting and acquiring an original video or photo through preset shooting equipment and determining facial feature points; transmitting the facial color feature points to a preset big data analysis platform for analysis and calculation to determine sign data, then passing the sign data to a preset artificial intelligence algorithm for processing and determining an analysis result; the analysis result is used to judge whether the object to be detected shows facial manifestations that traditional Chinese medicine associates with deficiency of qi and blood;
s200: based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting analysis results and transmitting the analysis results to preset simulation equipment;
s300: judging whether the physical sign data of the object to be detected has insufficient qi and blood according to the qi and blood model, and carrying out qi and blood nursing suggestion when the physical sign data has insufficient qi and blood;
the working principle and beneficial effects of the technical scheme are as follows: first, an original video or photo is captured with preset shooting equipment and facial feature points are determined; the facial color feature points are transmitted to a preset big data analysis platform for analysis and calculation to determine sign data, which are then passed to a preset artificial intelligence algorithm to determine an analysis result; the analysis result is used to judge whether the object to be detected shows facial manifestations that traditional Chinese medicine associates with deficiency of qi and blood; next, a qi-blood model is built based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting them to preset simulation equipment; finally, the qi-blood model is used to judge whether the sign data of the object to be detected indicate insufficient qi and blood, and qi-blood care suggestions are given when they do; through facial-recognition-based qi and blood analysis, the scheme can quickly query a user's basic information, prompt closely matching traditional Chinese medicine indexes, and give a professional preliminary analysis of physiological data, making it convenient for users to understand their own condition early and adopt an appropriate care regimen; the intelligent terminal enables resource sharing, so that users and their families can easily follow the physical condition of the whole family, achieving early awareness, early maintenance, and early health; meanwhile, the facial video is analyzed and the data are processed by the qi-blood analysis model to obtain the current qi-blood analysis result, realizing intelligent qi-blood analysis and improving the accuracy of the qi-blood analysis results.
Example 7: as shown in fig. 7, on the basis of embodiment 6, the process for determining a facial feature point provided in the embodiment of the present invention includes the following steps:
s101: acquiring and processing original video data by a camera, determining a face position image of a face, and detecting face information of an object to be detected;
s102: according to the facial information of the object to be detected captured by the camera, determining acquisition conditions such as facial image sharpness, and issuing an instruction to start acquisition;
s103: denoising and cropping the facial information of the object to be detected, and extracting feature point information on facial regions and the related distribution of facial color from the resulting face image;
the working principle and beneficial effects of the technical scheme are as follows: first, the original video data are acquired and processed by a camera, the facial position image of the face is determined, and the facial information of the object to be detected is detected; second, acquisition conditions such as facial image sharpness are determined from the facial information captured by the camera, and an instruction to start acquisition is issued; the facial information of the object to be detected is then denoised and cropped, and feature point information on facial regions and the related distribution of facial color is extracted from the resulting face image; by setting the parameters of the face image acquisition equipment from the captured face information, the scheme meets the personalized acquisition requirements of specific environments and can improve the accuracy of the qi-blood analysis result; denoising and cropping the face information improve the sharpness and integrity of the face image, remove background that interferes with qi and blood analysis, reduce the load on the system, and improve the efficiency with which the system processes face information.
Example 8: as shown in fig. 8, on the basis of embodiment 6, the process for determining the analysis result provided in the embodiment of the present invention includes the following steps:
s104: acquiring characteristic point information of the face position and the face color related distribution of the person, and transmitting the characteristic point information to a big data analysis platform;
s105: analyzing the transmitted characteristic point information based on a big data analysis platform, and analyzing and calculating to obtain sign data; wherein the sign data at least comprises specific part data of complexion, complexion data and corresponding organ data;
s106: mining and screening the user's sign data through a preset artificial intelligence algorithm, and determining the analysis result;
the working principle and beneficial effects of the technical scheme are as follows: first, the feature point information on facial regions and the related distribution of facial color is acquired and transmitted to the big data analysis platform; the platform then analyzes the incoming feature point information and computes sign data, which at least include region-specific complexion data, overall complexion data, and corresponding organ data; finally, the user's sign data are mined and screened through a preset artificial intelligence algorithm to determine the analysis result; by analyzing the facial color feature points on the big data analysis platform, the scheme obtains sign data that reflect the qi-blood analysis and realizes intelligent data analysis; representative sign data are then obtained by mining and screening with the artificial intelligence algorithm, improving the accuracy of the qi-blood analysis result so that it can truly reflect the health level of the object to be detected.
Example 9: as shown in fig. 9, on the basis of embodiment 6, the process for establishing an qi-blood model according to the embodiment of the present invention includes the following steps:
s201: periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
s202: building a deep neural network comprising a long short-term memory network and a convolutional neural network;
s203: tuning parameters by training the model and controlling the deviation between predictions and labels;
s204: preparing a test set corresponding to the training set;
s205: inputting the training set and the test set into the deep neural network, and when the mean absolute error, root mean square error, and mean error simultaneously satisfy preset thresholds, determining the evaluation result and taking the network as the final qi-blood model based on the deep neural network learning algorithm;
the working principle and beneficial effects of the technical scheme are as follows: first, the relevant facial regions and corresponding organ data are periodically collected and labeled, and data cleaning and high-dimensional merging are performed to generate a training set; second, a deep neural network comprising a long short-term memory network and a convolutional neural network is built; parameters are then tuned by training the model and controlling the deviation between predictions and labels; next, a test set corresponding to the training set is constructed; finally, the training set and test set are input into the deep neural network, and when the mean absolute error, root mean square error, and mean error simultaneously satisfy preset thresholds, the evaluation result is determined and the network is taken as the final qi-blood model based on the deep neural network learning algorithm; building the qi-blood model improves the efficiency of qi-blood data processing and the level of intelligence, and avoids the complexity of adopting a plurality of functional modules; using the deep neural network learning algorithm to assist in building the qi-blood model allows the model to learn independently and adjust its parameters in time, ensuring the accuracy of the evaluation result.
Example 10: based on embodiment 9, the expression of the qi-blood model provided in the embodiment of the invention is:

BLEU = BP · exp( Σ_{n=1}^{N} ω_n · log P_n ), where BP = 1 if lc > lr, and BP = exp(1 − lr/lc) otherwise;

wherein BLEU represents the evaluation index of the qi-blood state; BP represents the evaluation index of the standard qi-blood state; ω_n represents the weight given to P_n; P_n represents the score of the current qi-blood state; lc represents the accuracy of the current qi-blood state analysis result; lr represents the accuracy of the standard qi-blood state analysis result; exp() represents the exponential function with the natural constant e as its base; n is the label of a candidate qi-blood state evaluation index, and N is the number of candidate qi-blood state evaluation indexes;
the working principle and beneficial effects of the technical scheme are as follows: the embodiment establishes a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting the analysis results to simulation equipment; judging the qi and blood state according to the established qi and blood model and comparing the qi and blood state with the historical analysis result according to the preset time node, giving a conditioning scheme corresponding to the current qi and blood state, and simultaneously transmitting the comparison result and the conditioning scheme to the intelligent terminal; the efficiency of qi and blood data processing is improved through the qi and blood model, the intelligent level is also improved, and the complexity of adopting a plurality of functional modules is omitted.
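The BLEU-style expression of example 10 can be read off directly in code; the sample scores, weights, and accuracy values below are illustrative only:

```python
import math

# Plain-Python reading of the qi-blood evaluation expression: BP scales the
# exponential of the weighted log-scores P_n; lc and lr enter the penalty term.

def qi_blood_score(p_scores, weights, lc, lr):
    """BLEU-style score: BP * exp(sum_n w_n * log(p_n))."""
    bp = 1.0 if lc > lr else math.exp(1.0 - lr / lc)
    return bp * math.exp(sum(w * math.log(p) for w, p in zip(weights, p_scores)))

# Two equally weighted state scores with matched accuracies (so BP = 1):
score = qi_blood_score([0.8, 0.6], [0.5, 0.5], lc=1.0, lr=1.0)
```

With equal weights the score reduces to the geometric mean of the P_n values, and a current-state accuracy lc below the standard lr shrinks the score through the exponential penalty factor.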
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A facial recognition-based qi-blood state analysis system, comprising:
the video processing module is responsible for shooting through the intelligent camera to generate a facial video and preprocessing the captured video to obtain effective face image information;
the feature extraction module is in charge of carrying out feature extraction according to the obtained effective facial image information to obtain extracted facial color feature points;
the data analysis module is responsible for transmitting the facial color feature points to the big data analysis platform for analysis and calculation to determine sign data, then passing the sign data to an artificial intelligence algorithm for processing to obtain an analysis result; the analysis result is used to judge whether the object to be detected has any region of poor complexion and, if so, which specific region;
the qi-blood model building module is responsible for building a qi-blood model based on a deep neural network learning algorithm by periodically collecting analysis results and transmitting the analysis results to simulation equipment;
the periodic analysis module is in charge of judging the qi and blood state according to the established qi and blood model and comparing the qi and blood state with the historical analysis result according to the preset time node, giving a conditioning scheme corresponding to the current qi and blood state, and simultaneously sending the comparison result and the conditioning scheme to the intelligent terminal.
2. The facial recognition-based qi-blood state analysis system of claim 1, wherein the preprocessing of the video processing module comprises: denoising and cutting;
the facial color feature points of the feature extraction module comprise: color distribution, mottle and luster of the face.
3. A facial recognition based qi-blood state analysis system as in claim 1, wherein the video processing module comprises:
the face information acquisition sub-module is responsible for detecting the face information of an object to be detected through a camera to obtain captured face information;
the acquisition condition setting sub-module is responsible for determining sharpness-related acquisition conditions for the object to be detected from the captured face information and issuing an instruction to start acquisition;
the face image processing sub-module is responsible for receiving an instruction for starting acquisition, acquiring face information of an object to be detected, and performing noise reduction cutting on an acquired video of the face information to obtain a noise reduction cut face image;
and the face image sending sub-module is responsible for sending the denoised and cropped face image to the feature extraction module, which extracts feature point information on facial regions and the related distribution of facial color from the face image.
4. A facial recognition based qi-blood state analysis system as in claim 1, wherein the data analysis module comprises:
the information sending sub-module is responsible for acquiring the facial color feature points and transmitting them to the big data analysis platform;
the information analysis sub-module is responsible for having the big data analysis platform analyze the incoming facial color feature points and compute sign data; wherein the sign data at least comprise region-specific complexion data, overall complexion data, or corresponding organ data;
and the result output sub-module is responsible for mining and screening the sign data of the user through an artificial intelligence algorithm and determining an analysis result.
5. The facial recognition-based qi-blood state analysis system of claim 1, wherein the qi-blood model building module comprises:
the training set generation sub-module is responsible for periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
the network building sub-module is responsible for building a deep neural network comprising a long-short-term memory network and a convolutional neural network;
the parameter adjustment optimization sub-module is responsible for parameter adjustment optimization through training a model and through prediction and deviation of a label control;
the test set generation sub-module is responsible for manufacturing a test set corresponding to the training set;
the model output sub-module is responsible for inputting the training set and the test set into the deep neural network, determining an evaluation result when the mean absolute error, the root mean square error, and the mean error simultaneously satisfy preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
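The three acceptance metrics named in the claim can be computed directly. In this sketch the threshold values are hypothetical, since the patent only says they are "preset":

```python
import math

def evaluation_passes(predictions, labels, mae_max=5.0, rmse_max=6.0, me_max=5.0):
    """Accept the model only when MAE, RMSE and mean error all meet thresholds.
    The three threshold defaults are illustrative placeholders."""
    errors = [p - t for p, t in zip(predictions, labels)]
    mae = sum(abs(e) for e in errors) / len(errors)            # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors)) # root mean square error
    me = abs(sum(errors) / len(errors))                        # mean (signed) error
    return (mae <= mae_max and rmse <= rmse_max and me <= me_max), (mae, rmse, me)

ok, (mae, rmse, me) = evaluation_passes([10.0, 12.0, 9.0], [11.0, 10.0, 9.0])
```

Requiring all three simultaneously guards against different failure modes: MAE bounds typical error, RMSE penalizes outliers, and the mean error catches systematic bias.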
6. The facial recognition-based qi-blood state analysis system of claim 1, wherein the periodic analysis module comprises:
the analysis node setting sub-module is responsible for setting the time nodes at which the qi and blood state data are to be compared, and setting the time nodes of the historical analysis results;
the model service construction sub-module is responsible for deploying the qi-blood model as a service, periodically comparing the current result against previous analysis results, producing an analysis result from the trend analysis, and then issuing a corresponding maintenance strategy in combination with the platform strategy database;
and the result and strategy sending sub-module is responsible for sending the comparison result and the conditioning scheme to the intelligent terminal.
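A minimal sketch of the periodic comparison: scores recorded at the configured time nodes are compared, a simple difference decides the trend, and a lookup table stands in for the platform strategy database. The score scale, trend cutoff, and strategy texts are all hypothetical.

```python
# Hypothetical stand-in for the platform strategy database.
STRATEGY_DB = {
    "declining": "recommend qi-blood conditioning plan",
    "stable": "maintain current routine",
    "improving": "continue current plan",
}

def periodic_analysis(history, cutoff=2):
    """history: list of (time_node, qi_blood_score) pairs, oldest first.
    Returns the trend between the earliest and latest nodes plus a strategy."""
    scores = [score for _, score in history]
    delta = scores[-1] - scores[0]
    trend = "declining" if delta < -cutoff else "improving" if delta > cutoff else "stable"
    return trend, STRATEGY_DB[trend]

trend, strategy = periodic_analysis([("2023-05", 80), ("2023-06", 74)])
```

A real deployment would fit a trend over all nodes rather than comparing only the endpoints, and would push `(trend, strategy)` to the intelligent terminal as the claim describes.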
7. The method for analyzing the qi and blood state based on facial recognition is characterized by comprising the following steps of:
shooting and acquiring an original video or photo through preset shooting equipment, and determining facial feature points; transmitting the complexion feature points to a preset big data analysis platform, analyzing and calculating to determine sign data, transmitting the sign data to a preset artificial intelligence algorithm for analysis and processing, and determining an analysis result; the analysis result is used for judging whether the object to be detected shows the facial manifestations of qi and blood deficiency described in traditional Chinese medicine;
based on a deep neural network learning algorithm, establishing a qi-blood model by periodically collecting the analysis results, and transmitting the analysis results to preset simulation equipment;
judging whether the sign data of the object to be detected indicates deficiency of qi and blood according to the qi-blood model, and issuing qi and blood care suggestions when deficiency of qi and blood is present.
8. A facial recognition based qi and blood state analysis method as in claim 7, wherein the process of determining facial features comprises the steps of:
acquiring and processing the original video data with a camera, determining the face-position image, and detecting the facial information of the object to be detected;
determining the image-clarity acquisition conditions for the object to be detected according to the facial information captured by the camera, and issuing an instruction to start acquisition;
performing noise reduction and cropping on the facial information of the object to be detected; and extracting feature point information of the facial regions and the related distribution of facial color from the obtained face image.
9. The facial recognition-based qi-blood state analysis method as in claim 7, wherein the process of determining the analysis results comprises the steps of:
acquiring characteristic point information of the face position and the face color related distribution of the person, and transmitting the characteristic point information to a big data analysis platform;
analyzing the transmitted characteristic point information based on a big data analysis platform, and analyzing and calculating to obtain sign data; wherein the sign data at least comprises specific part data of complexion, complexion data and corresponding organ data;
and mining and screening the user's sign data through a preset artificial intelligence algorithm, and determining the analysis result.
10. The facial recognition-based qi and blood state analysis method of claim 7, wherein the process of creating the qi and blood model comprises the steps of:
periodically collecting and labeling the relevant facial regions and corresponding organ data, performing data cleaning and high-dimensional merging, and generating a training set;
constructing a deep neural network comprising a long short-term memory network and a convolutional neural network;
tuning and optimizing the parameters by training the model and controlling the deviation between its predictions and the labels;
preparing a test set corresponding to the training set;
and inputting the training set and the test set into the deep neural network, determining an evaluation result when the mean absolute error, the root mean square error, and the mean error simultaneously satisfy preset thresholds, and finally establishing the qi-blood model based on the deep neural network learning algorithm.
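The "parameter adjustment through the deviation between predictions and labels" step can be illustrated with a toy example. The real system trains an LSTM+CNN; here a one-parameter linear model stands in, tuned by gradient descent until the mean prediction-label deviation falls below a tolerance. All numbers are illustrative.

```python
# Illustrative deviation-controlled tuning loop: a single weight is adjusted
# until the mean |prediction - label| deviation drops below the tolerance.
# A linear model replaces the patent's LSTM+CNN for the sake of the sketch.

def tune(samples, lr=0.01, tol=0.1, max_steps=10_000):
    """samples: list of (x, label) pairs; fits label ~ w * x."""
    w = 0.0
    deviation = float("inf")
    for _ in range(max_steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
        # deviation control: stop once predictions track the labels closely
        deviation = sum(abs(w * x - y) for x, y in samples) / len(samples)
        if deviation < tol:
            break
    return w, deviation

w, dev = tune([(1.0, 3.0), (2.0, 6.0)])  # labels generated by w_true = 3
```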
CN202310489454.9A 2023-05-04 2023-05-04 Facial recognition-based qi and blood state analysis system and method Pending CN116530981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310489454.9A CN116530981A (en) 2023-05-04 2023-05-04 Facial recognition-based qi and blood state analysis system and method

Publications (1)

Publication Number Publication Date
CN116530981A true CN116530981A (en) 2023-08-04

Family

ID=87442916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310489454.9A Pending CN116530981A (en) 2023-05-04 2023-05-04 Facial recognition-based qi and blood state analysis system and method

Country Status (1)

Country Link
CN (1) CN116530981A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117423041A * 2023-12-13 2024-01-19 Chengdu University of Traditional Chinese Medicine Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision
CN117423041B * 2023-12-13 2024-03-08 Chengdu University of Traditional Chinese Medicine Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision

Similar Documents

Publication Publication Date Title
US20190295729A1 (en) Universal non-invasive blood glucose estimation method based on time series analysis
RU2757048C1 (en) Method and system for assessing the health of the human body based on the large-volume sleep data
CN101247759B (en) Electrophysiological analysis system and method
CN108171278B (en) Motion pattern recognition method and system based on motion training data
CN112001122B (en) Non-contact physiological signal measurement method based on end-to-end generation countermeasure network
CN116440425B (en) Intelligent adjusting method and system of LED photodynamic therapeutic instrument
CN109276242A (en) The method and apparatus of electrocardiosignal type identification
US11406304B2 (en) Systems and methods for physiological sign analysis
CN112788200B (en) Method and device for determining frequency spectrum information, storage medium and electronic device
CN116530981A (en) Facial recognition-based qi and blood state analysis system and method
CN111829661A (en) Forehead temperature measurement method and system based on face analysis
CN108009519B (en) Light irradiation information monitoring method and device
CN114129169B (en) Bioelectric signal data identification method, system, medium, and device
CN114305418B (en) Data acquisition system and method for intelligent assessment of depression state
CN113128585B (en) Deep neural network based multi-size convolution kernel method for realizing electrocardiographic abnormality detection and classification
CN110638440A (en) Self-service electrocardio detecting system
CN117598700A (en) Intelligent blood oxygen saturation detection system and method
CN116186561B (en) Running gesture recognition and correction method and system based on high-dimensional time sequence diagram network
CN109431499B (en) Botanic person home care auxiliary system and auxiliary method
CN116434979A (en) Physiological state cloud monitoring method, monitoring system and storage medium
CN110693508A (en) Multi-channel cooperative psychophysiological active sensing method and service robot
CN116230198A (en) Multidimensional Tibetan medicine AI intelligent auxiliary decision-making device and system
CN112842355A (en) Electrocardiosignal heart beat detection and identification method based on deep learning target detection
CN111637610A (en) Indoor environment health degree adjusting method and system based on machine vision
CN117158972B (en) Attention transfer capability evaluation method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination