CN114677759A - Facial paralysis rehabilitation condition evaluation system and method based on structured light technology

Facial paralysis rehabilitation condition evaluation system and method based on structured light technology

Info

Publication number
CN114677759A
CN114677759A (application CN202210287737.0A)
Authority
CN
China
Prior art keywords
camera
facial paralysis
facial
computer
fourier transform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210287737.0A
Other languages
Chinese (zh)
Inventor
陈作兵
戚斌杰
陈锦秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210287737.0A
Publication of CN114677759A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a facial paralysis rehabilitation status evaluation system and method based on structured light technology, in the technical field of facial paralysis rehabilitation assessment. The key points of the technical scheme are as follows: the system comprises a projector, a left camera, and a right camera, the left and right cameras being positioned on the left and right sides of the projector, respectively; the projector, left camera, and right camera are connected to a computer and a power supply device, and are used to capture, by Fourier-transform fringe projection, a three-dimensional point cloud video of the facial expressions of a facial paralysis patient. The method can evaluate the facial paralysis rehabilitation status or degree of rehabilitation. The result is more objective than the manual judgment of individual doctors relied upon in the prior art, and because the degree of rehabilitation is analyzed from three-dimensional point cloud video data of the facial movements, the evaluation is more accurate than methods that collect only two-dimensional images of the patient with a single camera.

Description

Facial paralysis rehabilitation condition evaluation system and method based on structured light technology
Technical Field
The invention relates to the technical field of facial paralysis rehabilitation evaluation, and in particular to a facial paralysis rehabilitation evaluation system and method based on structured light technology.
Background
Facial paralysis is paralysis of the facial muscles caused by facial nerve injury of various origins. With symptomatic medication and modern treatment, it can be partially or completely cured.
However, in current clinical practice, the evaluation of treatment efficacy in facial paralysis patients depends mainly on the doctor's experience: the degree of recovery is judged subjectively by observing the patient making actions such as raising the eyebrows, puffing the cheeks, pursing the lips, and wrinkling the nose.
In addition, the Facial Disability Index (FDI) scale and other facial nerve grading standards are subjective evaluation methods based on descriptions of doctor and patient experience, and cannot rapidly, accurately, and objectively assess the degree of recovery of facial paralysis patients. The present invention therefore provides a system and method for evaluating facial paralysis rehabilitation status based on structured light technology to solve the above problems.
Disclosure of Invention
The invention aims to solve the above problems by providing a facial paralysis rehabilitation status evaluation system and method based on structured light technology, which can evaluate the facial paralysis rehabilitation status or degree of rehabilitation.
The technical purpose of the invention is achieved by the following technical scheme: a facial paralysis rehabilitation status evaluation system based on structured light technology comprises a projector, a left camera, and a right camera, the left and right cameras being positioned on the left and right sides of the projector, respectively; the system further comprises a computer and a power supply device, and the projector, left camera, and right camera are all connected to the computer and the power supply device.
Further, the projector, left camera, and right camera are used to capture, by Fourier-transform fringe projection, a three-dimensional point cloud video of the facial expressions of the facial paralysis patient.
The invention also provides an evaluation method for the facial paralysis rehabilitation status evaluation system based on structured light technology, which comprises the following steps:
S1, the computer controls the projector to project sinusoidal structured-light fringes onto the surface of the patient's face;
S2, voice and text instructions are issued through the loudspeaker and display of the computer to prompt the facial paralysis patient to make the required facial movements;
S3, left and right images of the deformed fringes on the face surface are acquired by the left camera and the right camera and transmitted to the computer;
S4, the computer performs a Fourier transform on each of the left and right images acquired in step S3;
S5, the computer filters the Fourier-transformed left and right images and performs an inverse Fourier transform on each filtered spectrum (a sketch of steps S4 to S7 is given after this list);
S6, the truncated (wrapped) phase of each of the left and right images is calculated from the inverse-Fourier-transform results of step S5 and the inverse-Fourier-transform result of the reference fringes;
S7, phase unwrapping is performed on each truncated-phase result of step S6;
S8, the computer reconstructs a frame of the three-dimensional point cloud of the face surface from the unwrapped phases of the left and right images of step S7 and stores the point cloud frame;
S9, the computer judges whether the point-cloud collection time for the face has elapsed; if so, a pre-trained deep neural network model calculates and outputs the patient's degree of facial paralysis rehabilitation from the three-dimensional point cloud video data of the facial movement; if not, the computer collects the next three-dimensional face point cloud frame and the process returns to step S3.
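Steps S4 to S7 correspond to the standard Fourier-transform profilometry calculation: isolate the fundamental fringe component of each camera image in the frequency domain, return to the spatial domain, take the phase relative to the reference fringes, and unwrap it. The Python/NumPy sketch below illustrates this for a single image; the carrier frequency f0, the band-pass half-width, and the use of skimage.restoration.unwrap_phase are illustrative assumptions rather than details disclosed in the patent.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def fringe_field(img, f0, halfwidth):
    """Filtered complex fringe field of one grayscale fringe image (steps S4-S5).

    f0        : carrier frequency of the projected fringes in cycles/pixel (assumed known)
    halfwidth : half-width of the band-pass window around the carrier (illustrative value)
    """
    spec = np.fft.fftshift(np.fft.fft2(img))                         # S4: Fourier transform
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    keep = (np.abs(fx - f0) < halfwidth) & (np.abs(fy) < halfwidth)  # S5: keep the +1 fringe order
    return np.fft.ifft2(np.fft.ifftshift(spec * keep))               # S5: inverse Fourier transform

def unwrapped_phase_map(deformed, reference, f0, halfwidth=0.05):
    """Steps S6-S7 for one camera: wrapped phase relative to the reference, then unwrapping."""
    g_def = fringe_field(deformed, f0, halfwidth)
    g_ref = fringe_field(reference, f0, halfwidth)
    wrapped = np.angle(g_def * np.conj(g_ref))                       # S6: truncated (wrapped) phase
    return unwrap_phase(wrapped)                                     # S7: phase unwrapping
```

The phase-to-coordinate reconstruction of step S8 then depends on the calibration of the projector and the two cameras, which the patent does not detail.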
Further, the facial actions in step S2 include raising the eyebrows, closing the eyes, wrinkling the nose, showing the teeth, smiling, and pursing the lips.
Further, the point-cloud collection time in step S9 is a duration set according to the facial movement instruction.
Further, the inverse Fourier transform of the reference phase in step S6 is obtained as follows: according to the distance between the face and the projector, the computer generates a group of undeformed reference projection fringes on a reference plane, and sequentially performs Fourier transform, filtering, and inverse Fourier transform on the reference fringes to obtain the reference result.
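As a concrete illustration of this reference, the sketch below synthesizes an undeformed sinusoidal fringe pattern for a flat reference plane; the image size and fringe period are arbitrary illustrative values, and in practice the pattern would be matched to the projector settings and the face-to-projector distance.

```python
import numpy as np

def reference_fringes(height=480, width=640, period_px=16):
    """Synthesize an undeformed sinusoidal fringe pattern for the reference plane.

    period_px is the fringe period in pixels (illustrative); the carrier frequency
    f0 used when filtering is then 1 / period_px cycles per pixel.
    """
    x = np.arange(width)
    profile = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px)  # sinusoidal intensity along x
    return np.tile(profile, (height, 1))                     # constant along y: undeformed fringes

# The reference image is processed with the same fringe_field() routine sketched above
# (Fourier transform, filtering, inverse Fourier transform), and its phase is subtracted
# from the deformed-fringe phase in step S6.
```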
Further, the deep neural network model in step S9 is obtained by training on three-dimensional face point cloud video data from a large number of healthy people and from patients at different degrees of facial paralysis rehabilitation, where the degrees of rehabilitation are assigned by multiple experts according to an industry standard.
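The patent does not specify the network architecture, so the PyTorch sketch below shows only one plausible way to map a point-cloud video to a rehabilitation grade: a shared per-point encoder (in the spirit of PointNet), max pooling over points, a GRU over frames, and a classifier over expert-assigned grades. The layer sizes, the number of grades, and the pooling choice are all assumptions.

```python
import torch
import torch.nn as nn

class PointCloudVideoGrader(nn.Module):
    """Illustrative model: (batch, frames, points, 3) -> rehabilitation-grade logits."""

    def __init__(self, num_grades=5, feat_dim=128):           # num_grades: assumed grading scale
        super().__init__()
        self.point_encoder = nn.Sequential(                   # shared MLP applied to each point
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_grades)

    def forward(self, clouds):                                 # clouds: (B, T, N, 3)
        feats = self.point_encoder(clouds)                     # (B, T, N, feat_dim)
        frame_feats = feats.max(dim=2).values                  # pool over points -> (B, T, feat_dim)
        _, hidden = self.temporal(frame_feats)                 # summarize the movement over time
        return self.classifier(hidden[-1])                     # (B, num_grades)
```

A matching training loop over expert-graded recordings is sketched in the detailed description below.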
In conclusion, the invention has the following beneficial effects:
1. The method uses a projector, a left camera, and a right camera to obtain a three-dimensional point cloud video of the facial expressions of a facial paralysis patient by Fourier-transform fringe projection, and then evaluates the facial paralysis rehabilitation status or degree of rehabilitation from the three-dimensional point cloud video data with a deep-learning-based method;
2. Compared with the prior-art approach that relies solely on the manual judgment of individual doctors, the facial paralysis rehabilitation result given by the invention is more objective; and because the degree of rehabilitation is analyzed from three-dimensional point cloud video data of the facial movements, the evaluation is also more accurate than methods that acquire only two-dimensional images of the patient with a single camera.
Drawings
FIG. 1 is a block diagram of the hardware components of the facial paralysis rehabilitation evaluation system in the embodiment of the present invention;
FIG. 2 is a flow chart of the facial paralysis rehabilitation status evaluation method in the embodiment of the invention.
Detailed Description
In order that the technical solutions of the present invention may be better understood, they are described in further detail below with reference to the embodiments of the invention and the accompanying drawings. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail with reference to examples.
Embodiment:
As shown in FIG. 1, a facial paralysis rehabilitation status evaluation system based on structured light technology includes a projector, a left camera, and a right camera, the left and right cameras being located on the left and right sides of the projector, respectively; the projector, left camera, and right camera are used to obtain, by Fourier-transform fringe projection, a three-dimensional point cloud video of the facial expressions of a facial paralysis patient. The evaluation system further includes a computer and a power supply device; the projector, left camera, and right camera are all connected to the computer and the power supply device. The power supply device uses mains electricity and powers all electrical components of the evaluation system.
As shown in FIG. 2, the evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology includes the following steps:
S1, the computer controls the projector to project sinusoidal structured-light fringes onto the surface of the patient's face;
S2, voice and text instructions are issued through the loudspeaker and display of the computer to prompt the facial paralysis patient to make the required facial movements;
S3, left and right images of the deformed fringes on the face surface are acquired by the left camera and the right camera and transmitted to the computer;
S4, the computer performs a Fourier transform on each of the left and right images acquired in step S3;
S5, the computer filters the Fourier-transformed left and right images and performs an inverse Fourier transform on each filtered spectrum;
S6, the truncated (wrapped) phase of each of the left and right images is calculated from the inverse-Fourier-transform results of step S5 and the inverse-Fourier-transform result of the reference fringes;
S7, phase unwrapping is performed on each truncated-phase result of step S6;
S8, the computer reconstructs a frame of the three-dimensional point cloud of the face surface from the unwrapped phases of the left and right images of step S7 and stores the point cloud frame;
S9, the computer judges whether the point-cloud collection time for the face has elapsed; if so, a pre-trained deep neural network model calculates and outputs the patient's degree of facial paralysis rehabilitation from the three-dimensional point cloud video data of the facial movement; if not, the computer collects the next three-dimensional face point cloud frame and the process returns to step S3.
In the present embodiment, the facial actions in step S2 include raising the eyebrows, closing the eyes, wrinkling the nose, showing the teeth, smiling, pursing the lips, and the like.
In step S9, the point-cloud collection time is a duration set according to the facial movement instruction, so that the three-dimensional point cloud video of the complete facial movement can be captured.
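Steps S2 to S9 thus form a per-action capture loop followed by a single call to the trained model. The control-flow sketch below assumes hypothetical helper functions (prompt_patient, capture_stereo_frame, reconstruct_point_cloud) and illustrative per-action durations; none of these names or values appear in the patent.

```python
import time

# Hypothetical per-action capture durations in seconds; the patent only states that the
# duration is set per facial movement instruction, not what the values are.
ACTION_DURATIONS = {"raise eyebrows": 3.0, "close eyes": 3.0, "wrinkle nose": 3.0,
                    "show teeth": 3.0, "smile": 3.0, "purse lips": 3.0}

def evaluate_action(action, grade_model, left_cam, right_cam):
    """Capture one facial action as a 3-D point cloud video and grade it (steps S2-S9)."""
    prompt_patient(action)                                    # S2: voice/text instruction (assumed helper)
    frames = []
    t_end = time.time() + ACTION_DURATIONS[action]
    while time.time() < t_end:                                # S9: collection-time check
        left, right = capture_stereo_frame(left_cam, right_cam)    # S3 (assumed helper)
        frames.append(reconstruct_point_cloud(left, right))        # S4-S8 (assumed helper)
    return grade_model(frames)                                # S9: grade the stored point cloud video
```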
In step S6, the inverse Fourier transform of the reference phase is obtained as follows: according to the distance between the face and the projector, the computer generates a group of undeformed reference projection fringes on a reference plane, and sequentially performs Fourier transform, filtering, and inverse Fourier transform on the reference fringes to obtain the reference result.
The deep neural network model in step S9 is obtained by training on three-dimensional face point cloud video data from a large number of healthy people and from patients at different degrees of facial paralysis rehabilitation, where the degrees of rehabilitation are assigned by multiple experts according to an industry standard.
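The training procedure itself is not described beyond the use of expert-graded data. A minimal supervised training loop for a grader like the one sketched in the disclosure above, with assumed hyperparameters and an assumed data loader yielding (point-cloud video, expert grade) pairs, could look like this:

```python
import torch
import torch.nn as nn

def train_grader(model, loader, epochs=50, lr=1e-3, device="cpu"):
    """Illustrative training loop; all hyperparameters and the loader format are assumptions."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                           # expert-assigned grade as class label
    for _ in range(epochs):
        for clouds, grade in loader:                          # clouds: (B, T, N, 3); grade: (B,)
            clouds, grade = clouds.to(device), grade.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(clouds), grade)              # compare prediction with expert grade
            loss.backward()
            optimizer.step()
    return model
```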
In the embodiment of the invention, a projector, a left camera, and a right camera capture a three-dimensional point cloud video of the facial expressions of a facial paralysis patient by Fourier-transform fringe projection, and the facial paralysis rehabilitation status or degree of rehabilitation is evaluated from the point cloud video data with a deep learning method. Compared with the approach generally adopted in hospitals today, which relies solely on the manual judgment of individual doctors, this method is more objective. Because the degree of rehabilitation is analyzed from three-dimensional point cloud video data of the facial movements, it is also more accurate than collecting two-dimensional images of the patient with a single camera.
The present embodiment is only illustrative of the invention and does not limit it. After reading this specification, those skilled in the art may modify the embodiment as needed without inventive contribution, and all such modifications falling within the scope of the claims of the invention are protected by patent law.

Claims (7)

1. A facial paralysis rehabilitation status evaluation system based on structured light technology, characterized in that: the system comprises a projector, a left camera, and a right camera, the left and right cameras being positioned on the left and right sides of the projector, respectively; the system further comprises a computer and a power supply device, and the projector, left camera, and right camera are all connected to the computer and the power supply device.
2. The facial paralysis rehabilitation status evaluation system based on structured light technology according to claim 1, characterized in that: the projector, left camera, and right camera are used to capture, by Fourier-transform fringe projection, a three-dimensional point cloud video of the facial expressions of the facial paralysis patient.
3. An evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology according to any one of claims 1 to 2, characterized in that the method comprises the following steps:
S1, the computer controls the projector to project sinusoidal structured-light fringes onto the surface of the patient's face;
S2, voice and text instructions are issued through the loudspeaker and display of the computer to prompt the facial paralysis patient to make the required facial movements;
S3, left and right images of the deformed fringes on the face surface are acquired by the left camera and the right camera and transmitted to the computer;
S4, the computer performs a Fourier transform on each of the left and right images acquired in step S3;
S5, the computer filters the Fourier-transformed left and right images and performs an inverse Fourier transform on each filtered spectrum;
S6, the truncated (wrapped) phase of each of the left and right images is calculated from the inverse-Fourier-transform results of step S5 and the inverse-Fourier-transform result of the reference fringes;
S7, phase unwrapping is performed on each truncated-phase result of step S6;
S8, the computer reconstructs a frame of the three-dimensional point cloud of the face surface from the unwrapped phases of the left and right images of step S7 and stores the point cloud frame;
S9, the computer judges whether the point-cloud collection time for the face has elapsed; if so, a pre-trained deep neural network model calculates and outputs the patient's degree of facial paralysis rehabilitation from the three-dimensional point cloud video data of the facial movement; if not, the computer collects the next three-dimensional face point cloud frame and the process returns to step S3.
4. The evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology according to claim 3, characterized in that: the facial actions in step S2 include raising the eyebrows, closing the eyes, wrinkling the nose, showing the teeth, smiling, and pursing the lips.
5. The evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology according to claim 3, characterized in that: the point-cloud collection time in step S9 is a duration set according to the facial movement instruction.
6. The evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology according to claim 3, characterized in that: the inverse Fourier transform of the reference phase in step S6 is obtained as follows: according to the distance between the face and the projector, the computer generates a group of undeformed reference projection fringes on a reference plane, and sequentially performs Fourier transform, filtering, and inverse Fourier transform on the reference fringes to obtain the reference result.
7. The evaluation method of the facial paralysis rehabilitation status evaluation system based on structured light technology according to claim 3, characterized in that: the deep neural network model in step S9 is obtained by training on three-dimensional face point cloud video data from a large number of healthy people and from patients at different degrees of facial paralysis rehabilitation, where the degrees of rehabilitation are assigned by multiple experts according to industry-standard evaluation scores.
CN202210287737.0A 2022-03-23 2022-03-23 Facial paralysis rehabilitation condition evaluation system and method based on structured light technology Pending CN114677759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210287737.0A CN114677759A (en) 2022-03-23 2022-03-23 Facial paralysis rehabilitation condition evaluation system and method based on structured light technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210287737.0A CN114677759A (en) 2022-03-23 2022-03-23 Facial paralysis rehabilitation condition evaluation system and method based on structured light technology

Publications (1)

Publication Number Publication Date
CN114677759A 2022-06-28

Family

ID=82074342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210287737.0A Pending CN114677759A (en) 2022-03-23 2022-03-23 Facial paralysis rehabilitation condition evaluation system and method based on structured light technology

Country Status (1)

Country Link
CN (1) CN114677759A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination