CN116392086B - Method, terminal and storage medium for detecting stimulation - Google Patents

Method, terminal and storage medium for detecting stimulation

Info

Publication number
CN116392086B
CN116392086B CN202310661494.7A CN202310661494A
Authority
CN
China
Prior art keywords
user
nasolabial
eyelid
key point
mouth angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310661494.7A
Other languages
Chinese (zh)
Other versions
CN116392086A (en)
Inventor
董芳
李凯
刘俊飙
吴炎强
俞嘉杰
喻晓斌
蒋路茸
孙乐
陈恒亮
陈婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Multimode Medical Technology Co ltd
Original Assignee
Zhejiang Multimode Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Multimode Medical Technology Co ltd filed Critical Zhejiang Multimode Medical Technology Co ltd
Priority to CN202310661494.7A priority Critical patent/CN116392086B/en
Publication of CN116392086A publication Critical patent/CN116392086A/en
Application granted granted Critical
Publication of CN116392086B publication Critical patent/CN116392086B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4082Diagnosing or monitoring movement diseases, e.g. Parkinson, Huntington or Tourette
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention provides a method, a system, a terminal and a storage medium for detection and stimulation, wherein the method comprises the following steps: periodically detecting the state of a user within a detection range; when the state of the user is abnormal, applying flash and/or screen-flicker stimulation to the user; and stopping the stimulation once the state of the user returns to normal and/or the user is no longer found within the detection range. The invention can accurately detect whether the face or body state of different detected users is abnormal, improving the accuracy of state detection.

Description

Method, terminal and storage medium for detecting stimulation
Technical Field
The present invention relates to the field of detection technologies, and in particular, to a method, a terminal, and a storage medium for detecting stimulation.
Background
Parkinson's disease, also known as "paralysis agitans", is a common degenerative disease of the nervous system in the elderly, with characteristic motor symptoms including resting tremor, bradykinesia, myotonia and postural instability, as well as non-motor symptoms including constipation, olfactory disorder, sleep disorder, autonomic dysfunction, and mental and cognitive disorders. In daily life, Parkinson's disease can cause myotonia and other conditions that interfere with everyday activities. The method for detecting hypomimia (reduced facial expression) in Parkinson's patients disclosed in the reference document (publication No. CN111210415B) can detect Parkinson's patients with hypomimia, but the effect of facial detection alone is not ideal, and no other detection modes are provided. As technology has developed, facial anomaly detection has advanced rapidly, especially in the medical and big-data fields, which places higher demands on detection and stimulation methods; in the facial anomaly detection process, the facial image of the user is analyzed to determine whether it is abnormal.
In existing facial anomaly detection, key-point position comparison is generally used, but because the positions of facial key points differ between users, the accuracy of facial anomaly detection is low.
Moreover, after hypomimia is detected in a Parkinson's patient, no subsequent steps of early warning and stimulation are carried out.
Disclosure of Invention
The embodiment of the invention aims to provide a detection and stimulation method, system, terminal and storage medium, so as to solve the problem of low accuracy in existing facial anomaly detection.
The embodiment of the invention is realized in such a way that a method for detecting stimulation comprises the following steps:
periodically detecting the state of the user in a detection range;
when the state of the user is abnormal, flashing and/or picture flashing stimulation is carried out on the user;
stopping the stimulation until the state of the user is recovered to be normal and/or the user is not found in the detection range.
Preferably, the flash and/or screen-flicker stimulation of the user comprises:
controlling a flash stimulation device to perform gamma-band flash stimulation;
and/or performing screen-flicker stimulation of the user at a preset frequency on a display device.
Preferably, the state of the user being abnormal includes:
the face of the user being abnormal;
and/or the body state of the user being abnormal.
Preferably, the facial feature abnormality of the user includes:
extracting key points from a facial image of the user to obtain a key point set, and determining a nasolabial fold region according to the facial key points in the key point set;
determining a nasolabial fold feature line according to the nasolabial fold region, and determining the nasolabial fold depth according to the feature line;
obtaining mouth corner key points and eyelid key points from the key point set, and determining a mouth angle inclination and eyelid closing distances according to them respectively;
and if the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, judging that the face of the detected user is abnormal, and applying flash stimulation to the detected user.
Preferably, the applying flash stimulation to the detected user includes:
controlling a flash stimulation device to perform gamma-band flash stimulation;
or performing screen-flicker stimulation of the detected user at a preset frequency in the flicker stimulation device.
Preferably, the determining the nasolabial fold depth according to the nasolabial fold feature line comprises:
obtaining the pixel area corresponding to the nasolabial fold feature line to obtain the nasolabial fold area, and calculating the area ratio between the nasolabial fold area and the area of the nasolabial fold region;
and determining the nasolabial fold depth according to the area ratio.
Preferably, the determining the mouth angle inclination and the eyelid closing distance according to the mouth corner key points and the eyelid key points respectively includes:
acquiring position information of designated mouth corner points among the mouth corner key points, and calculating the slope between the designated points according to their position information to obtain the mouth angle inclination;
and acquiring position information of designated eyelid points among the eyelid key points, and determining the distances between the designated points according to their position information to obtain a left eyelid closing distance and a right eyelid closing distance.
Preferably, the method further comprises:
capturing video of the detected user to obtain a captured video, and determining the pupil gaze direction and pupil movement track of the detected user from the captured video;
Performing anomaly detection on the pupil gaze direction and the pupil movement track;
and if the abnormal detection is not qualified, sending a facial pupil abnormal prompt.
Preferably, the method further comprises:
determining facial expressions of the detected user according to the key point set, and determining an expression association feature set according to the facial expressions;
respectively determining the expression amplitude values of the feature areas in each expression associated feature group according to the facial image, and respectively calculating the difference value of the expression amplitude values between the feature areas in each expression associated feature group to obtain an amplitude value difference;
respectively inquiring standard deviation of each expression associated feature group under the facial expression, and respectively calculating variances between the corresponding amplitude differences and the standard deviation of each expression associated feature group to obtain expression feature values;
and if the expression characteristic value is larger than the characteristic threshold value, sending a facial expression abnormal prompt.
Preferably, after determining the mouth angle inclination and the eyelid closing distance according to the mouth corner key points and the eyelid key points, the method further comprises:
calculating the distance difference between the right eyelid closing distance and the left eyelid closing distance;
and if the distance difference is larger than a distance threshold, the nasolabial fold depth is larger than a depth threshold, and the mouth angle inclination is larger than an inclination threshold, judging that the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, and judging that the face of the detected user is abnormal.
Preferably, the abnormal body state of the user includes:
acquiring a plurality of human-body key points based on the detection information to obtain a key point set; the human-body key points include the head, neck, shoulders, elbows, wrists, waist, knees and ankles;
calculating a confidence coefficient for each key point according to the key point set;
and if the confidence coefficient of a key point is larger than 0.4, considering the posture of the user abnormal.
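The posture check described in this claim can be sketched as follows (a minimal illustration: the body-part list and the 0.4 confidence threshold come from the text, while the detector's output format is assumed):

```python
# Sketch of the body-state check. The keypoint names and the 0.4
# threshold follow the text; the (x, y, confidence) output format of
# the pose detector is an assumption.
BODY_PARTS = ["head", "neck", "shoulder", "elbow", "wrist", "waist", "knee", "ankle"]
CONFIDENCE_THRESHOLD = 0.4

def posture_abnormal(keypoints):
    """keypoints: dict mapping body part -> (x, y, confidence)."""
    return any(conf > CONFIDENCE_THRESHOLD
               for (_, _, conf) in keypoints.values())

sample = {part: (0.0, 0.0, 0.1) for part in BODY_PARTS}
sample["wrist"] = (0.0, 0.0, 0.7)  # one high-confidence (abnormal) keypoint
print(posture_abnormal(sample))  # True
```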
It is another object of an embodiment of the present invention to provide a detection stimulation system, comprising:
the detection system is used for identifying whether a user exists in the detection range and detecting the state of the user;
the abnormality identification system is used for judging whether the user is in an abnormal state according to the state of the user;
and the stimulation system is used for selecting whether to flash and/or flicker and stimulate the picture of the user according to the state of the user.
It is a further object of an embodiment of the present invention to provide a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, which processor implements the steps of the method as described above when executing the computer program.
It is a further object of embodiments of the present invention to provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
According to the embodiment of the invention, the nasolabial fold region can be effectively determined from the facial key points in the key point set, a nasolabial fold feature line can be automatically determined from that region, and the nasolabial fold depth of the detected user's face can be effectively determined from the feature line; by obtaining the mouth corner key points and the eyelid key points, the mouth angle inclination and the eyelid closing distances of the detected user's face can be determined respectively, and based on the nasolabial fold depth, mouth angle inclination and eyelid closing distance, whether the face of the detected user is abnormal can be effectively determined.
The stimulation is performed by flashing or screen flicker, and the modes can run synchronously or separately, so the stimulation can be delivered on multiple devices with good effect.
Drawings
FIG. 1 is a flow chart of a first embodiment of the present invention for detecting stimulus;
FIG. 2 is a flow chart of detecting facial feature anomalies of a user provided by a first embodiment of the present invention;
FIG. 3 is a schematic view of a nasolabial fold region provided by the first embodiment of the present invention;
fig. 4 is a schematic view of a face image according to a first embodiment of the present invention;
FIG. 5 is a flow chart of a method for detecting stimulus according to a second embodiment of the present invention;
fig. 6 is a schematic view of the degree of closure of the left and right eyelids provided by the second embodiment of the present invention;
FIG. 7 is a schematic view of the inclination of the mouth angle provided by the third embodiment of the present invention;
fig. 8 is a schematic structural view of a face abnormality detection system provided in a third embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention.
Description of the embodiments
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Examples
Referring to fig. 1-4, fig. 1 is a flowchart of a method for detection and stimulation according to a first embodiment of the present invention, and fig. 3 shows data from LFW (Labeled Faces in the Wild), a face database compiled by the Computer Vision Laboratory of the University of Massachusetts Amherst, USA. The detection and stimulation method can be applied to any terminal device or system and comprises the following steps:
Periodically detecting the state of the user in a detection range;
the period can be set to any value in the range of 1-10 s; periodic recognition makes it possible to rapidly detect whether the patient is in an abnormal state, so that the user can be stimulated in time.
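The periodic detect-stimulate-stop cycle described here can be sketched as follows (the callables stand in for the real detection and stimulation hardware; the `max_cycles` parameter is added only so the sketch can be exercised without sleeping):

```python
import time

def run_detection(detect_state, stimulate, stop, period_s=3.0, max_cycles=None):
    """Periodically poll the user's state and drive stimulation.

    detect_state() -> "normal", "abnormal", or "absent" (user not found);
    period_s may be any value in the 1-10 s range mentioned above.
    All three callables are placeholders for real devices.
    """
    cycles = 0
    stimulating = False
    while max_cycles is None or cycles < max_cycles:
        state = detect_state()
        if state == "abnormal" and not stimulating:
            stimulate()            # begin flash / screen-flicker stimulation
            stimulating = True
        elif state in ("normal", "absent") and stimulating:
            stop()                 # state recovered or user left the range
            stimulating = False
        cycles += 1
        if max_cycles is None:
            time.sleep(period_s)   # wait one detection period
    return stimulating
```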
When the state of the user is abnormal, flashing and/or picture flashing stimulation is carried out on the user;
different goals can be achieved as needed through flash stimulation and/or screen-flicker stimulation; for example, a patient with Parkinson's disease can be stimulated, which helps protect the patient's facial and bodily condition.
Stopping the stimulation once the state of the user returns to normal and/or the user is no longer found in the detection range.
During stimulation, the state of the user within the detection range is still detected periodically, and the stimulation is stopped once the user returns to the normal state or the user is no longer found in the detection range.
"The user is not found in the detection range" covers both the user leaving the detection range and the user being hidden by an obstruction so that the detection device cannot detect the user, for example when the user no longer appears in the camera footage.
The flash and/or screen-flicker stimulation of the user comprises:
controlling the flash stimulation device to perform gamma-band flash stimulation;
gamma-band flash stimulation of the user can stimulate the user's cerebral cortex and force the user's brain to become active.
And/or performing screen-flicker stimulation of the user at a preset frequency on the display device.
Combining screen flicker with flash stimulation can further raise the user's level of brain activity, and the two stimulation modes can also run independently, which makes them convenient to use on a variety of devices, for example on a mobile phone, a television or a display screen; the usage scenarios are thus more diverse.
When the state of the user is abnormal, this includes:
the face of the user being abnormal;
among the ways of detecting user abnormalities, facial detection is a common one.
And/or the physical state of the user is abnormal.
However, the physical characteristics of Parkinson's disease are also distinguishable from those of healthy individuals, so detection is performed in multiple ways.
The detected user's state being abnormal includes the detected user's facial features being abnormal;
it may also include the detected user's physical features being abnormal, which is judged by whether the user shows unnatural tremor, whether the tremor amplitude is too large, and whether the tremor can be consciously controlled.
And when the user is in a normal state or leaves the detection area, the flashing stimulation is stopped.
The detecting facial feature anomalies of the user includes:
Step S10, extracting key points from the facial image of the detected user to obtain a key point set, and determining the nasolabial fold region according to the facial key points in the key point set;
in this step, key points are extracted from the detected facial image using the Practical Facial Landmark Detector (PFLD) algorithm. Referring to fig. 2, the backbone network of the PFLD algorithm is a lightweight MobileNet whose basic unit is the depthwise separable convolution; on top of the depthwise separable convolution, expansion and projection layers are added to increase the number of channels and obtain more features, and an auxiliary network estimates 3D rotation information, namely the three Euler angles yaw, pitch and roll, so as to determine the head pose and make key-point localization more accurate.
The loss function is a key design element and is defined as follows:

$$\mathcal{L}=\frac{1}{M}\sum_{m=1}^{M}\sum_{n=1}^{N}\Big(\sum_{c}\omega_{n}^{c}\sum_{k=1}^{K}\big(1-\cos\theta_{n}^{k}\big)\Big)\,\lVert \mathbf{d}_{n}^{m}\rVert_{2}^{2}$$

wherein M represents the number of images in a training batch, N represents the preset number of feature points to be detected per face, K = 3 indexes the yaw, pitch and roll degrees of freedom, θ_n^k represents the angular deviation between the true and predicted values in those three directions (the larger the angle, the larger the loss), d_n^m is the deviation of the n-th landmark on the m-th image, and ω_n^c weights the type of facial pose, such as frontal, profile, head-up, head-down, expressive or occluded. The purpose of this design is to give a small weight to data with a relatively large sample size (e.g. frontal faces, where the Euler angles are small), so that they contribute little during gradient back-propagation, and a larger weight to data with smaller sample sizes (profile, head-down, head-up, extreme expressions), so that they contribute more. This design very neatly addresses the imbalance of training samples across conditions and thereby improves the accuracy of key-point extraction.
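Under the definitions above, the weighted loss can be sketched in Python (a simplified illustration: a single pose-class weight per image rather than a per-class sum, and plain lists instead of tensors):

```python
import math

def pfld_loss(errors, angle_devs, class_weights):
    """Weighted landmark loss in the spirit of the PFLD formulation above.

    errors[m][n]     : squared L2 deviation of landmark n on image m
    angle_devs[m]    : (yaw, pitch, roll) angular deviations, in radians
    class_weights[m] : weight for the pose/expression class of image m
    """
    M = len(errors)
    total = 0.0
    for m in range(M):
        # (1 - cos theta) term over the K = 3 degrees of freedom
        angle_term = sum(1.0 - math.cos(t) for t in angle_devs[m])
        total += class_weights[m] * angle_term * sum(errors[m])
    return total / M
```

Note that when the predicted head pose matches the ground truth exactly (all angular deviations zero), the loss vanishes, which is consistent with the weighting scheme described in the text.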
Step S20, determining a nasolabial fold characteristic line according to the nasolabial fold region, and determining the nasolabial fold depth according to the nasolabial fold characteristic line;
wherein a nasolabial fold feature line can be automatically determined in the nasolabial fold region, the nasolabial fold feature line being used to characterize the nasolabial fold line on the face of the detected user, and the nasolabial fold depth can be effectively determined based on the nasolabial fold feature line.
Optionally, in this step, the determining a nasolabial fold feature line according to the nasolabial fold region includes:
performing grayscale processing on the nasolabial fold region to obtain a nasolabial fold grayscale map, and performing binarization on the grayscale map to obtain a nasolabial fold binary map;
and obtaining the pixel points corresponding to a preset gray value in the binary map to obtain the nasolabial fold feature line.
The binarization of the nasolabial fold grayscale map uses an adaptive threshold algorithm, and the preset gray value can be set as required. The basic idea of the adaptive threshold algorithm is to traverse the image while computing a moving average, setting a pixel to black if it is significantly darker than that average and to white otherwise; the average is not a single global threshold, but a local threshold computed from the brightness distribution of different regions of the image. Let f_s(n) be the sum of the s pixels preceding point n:

$$f_s(n)=\sum_{i=0}^{s-1}p_{n-i}$$

The final image T(n) is 1 (black) or 0 (white) depending on whether pixel p_n is darker than t percent below the average of its previous s pixels:

$$T(n)=\begin{cases}1,& p_n \le \dfrac{f_s(n)}{s}\cdot\dfrac{100-t}{100}\\[4pt]0,& \text{otherwise}\end{cases}$$
As shown in fig. 3, in the grayscale and binarized images, the nasolabial fold on the right side of the test image is deep, so the white region obtained by binarization is relatively large, while the nasolabial fold on the left side is shallow, so an image with no white region is obtained.
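The moving-average thresholding above can be sketched on a single row of gray pixels (an illustrative simplification that approximates the windowed sum with an exponential moving average; the defaults for `s` and `t` are assumptions, not values from the patent):

```python
def adaptive_threshold(row, s=8, t=15):
    """Adaptive thresholding of one row of gray pixel values.

    Returns 1 (dark, e.g. inside a fold) or 0 (light) per pixel, marking
    a pixel dark when it falls more than t percent below the running
    average of roughly the previous s pixels.
    """
    out = []
    avg = sum(row[:s]) / min(s, len(row))   # seed the moving average
    for p in row:
        avg += (p - avg) / s                # exponential moving average
        out.append(1 if p < avg * (100 - t) / 100 else 0)
    return out
```

A uniform row produces no dark pixels, while a pronounced dark dip (like a deep fold crossing the scan line) is flagged, mirroring the fig. 3 behaviour where only the deep fold yields a white (fold) region after binarization.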
Further, in this step, the determining the nasolabial fold depth according to the nasolabial fold feature line includes:
obtaining the pixel area corresponding to the nasolabial fold feature line to obtain the nasolabial fold area, and calculating the area ratio between the nasolabial fold area and the area of the nasolabial fold region;
and determining the nasolabial fold depth according to the area ratio.
That is, the area ratio between the nasolabial fold area and the area of the nasolabial fold region is calculated to determine the nasolabial fold depth; specifically, the depth is obtained by matching the area ratio against a pre-stored depth lookup table, which stores the correspondence between different area ratios and the corresponding nasolabial fold depths.
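The lookup-table matching can be sketched as follows (the table contents are purely illustrative, since the patent does not disclose the stored ratios or depth values):

```python
# Hypothetical depth lookup table: (minimum area ratio, depth grade).
# The actual stored correspondences are not given in the text.
DEPTH_TABLE = [(0.00, "none"), (0.05, "shallow"), (0.15, "medium"), (0.30, "deep")]

def nasolabial_depth(fold_area, region_area):
    """Map the fold-area / region-area ratio to a depth grade."""
    ratio = fold_area / region_area
    depth = DEPTH_TABLE[0][1]
    for threshold, grade in DEPTH_TABLE:
        if ratio >= threshold:      # keep the highest grade reached
            depth = grade
    return depth
```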
Step S30, obtaining the mouth corner key points and the eyelid key points in the key point set, and determining the mouth angle inclination and the eyelid closing distances according to them respectively;
by obtaining the mouth corner key points and the eyelid key points, the mouth angle inclination and the eyelid closing distances of the detected user's face can be determined from them respectively.
Step S40, if the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, judging that the face of the detected user is abnormal, and applying flash stimulation to the detected user;
the facial abnormality conditions can be set as required, and based on the nasolabial fold depth, mouth angle inclination and eyelid closing distance, whether the face of the detected user is abnormal can be effectively judged. According to this embodiment, for different detected users, abnormality is detected from each user's own nasolabial fold depth, mouth angle inclination and eyelid closing distance, which prevents the low detection accuracy caused by differences in facial key-point positions between users and improves the accuracy of facial detection.
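Steps S30-S40 can be sketched as follows (the slope and distance formulas follow the text; the three threshold values are illustrative, since the patent does not give them):

```python
import math

def mouth_angle_inclination(left_corner, right_corner):
    """Absolute slope between the two designated mouth-corner points."""
    (x1, y1), (x2, y2) = left_corner, right_corner
    return abs((y2 - y1) / (x2 - x1))

def eyelid_closing_distance(upper, lower):
    """Euclidean distance between upper- and lower-eyelid points."""
    return math.dist(upper, lower)

def face_abnormal(fold_depth, inclination, left_close, right_close,
                  depth_th=2.0, incl_th=0.1, dist_th=1.5):
    """Combined facial-abnormality condition from the text: the eyelid
    distance difference, fold depth and mouth inclination must all
    exceed their thresholds (threshold values here are assumptions)."""
    return (abs(right_close - left_close) > dist_th
            and fold_depth > depth_th
            and inclination > incl_th)
```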
In this step, the applying flash stimulation to the detected user includes:
controlling the flash stimulation device to perform gamma-band flash stimulation;
or performing screen-flicker stimulation of the detected user at a preset frequency in the flicker stimulation device;
gamma-band (30-100 Hz) stimulation can induce gamma oscillations in the hippocampal CA1 region and the auditory cortex of the brain, thereby raising the activity level of the user's brain and improving the detected user's cognitive function.
In this embodiment, the flash stimulation device may be a camera with a built-in flash, which may be installed standalone (e.g. as a monitoring camera) or embedded in a device such as a screen or mobile phone (e.g. as a front camera).
The camera periodically recognizes the user's facial/body state, generally at a 3-5 s interval, and judges from the facial or body-state features whether the user is in an abnormal state such as Alzheimer's disease, Parkinson's disease or stroke; when an abnormal facial state is detected, the flash device is started to deliver brain-wave stimulation to the user and achieve a therapeutic effect.
The flash lamp keeps working until the camera recognizes that the user's face has returned to a normal state (judged by a threshold) or the user has left the camera's recognition range, at which point the flash lamp is turned off; optionally, a voice module can be added to the flash stimulation device for voice prompts.
Furthermore, the flash stimulation device can also be a signal conversion or transmission device, with one end connected to the input signal and the other end connected to a television or other display screen through an HDMI cable; the video processing module can also be embedded directly in a smart television.
After the input television program is processed by the processing module, a signal with a specific frequency preset in the flash stimulation device is superimposed on the original television program signal; the two signals do not interfere with each other on their different channels, so the television can play the program while stimulating the user's brain waves through screen flicker at the specific frequency.
Multiple signals may be superimposed, for example 42.5 Hz and 44.3 Hz simultaneously, and the signal frequencies need not be integers. Furthermore, the flash stimulation device can also be a picture display device that directly performs screen-flicker stimulation of the detected user at the preset frequency, so as to stimulate the detected user's brain waves.
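The superposition of several non-integer flicker frequencies can be sketched as a brightness modulation signal (the 42.5 Hz and 44.3 Hz components come from the text; the base brightness, modulation depth and 120 Hz refresh rate are illustrative assumptions):

```python
import math

def flicker_brightness(t, freqs=(42.5, 44.3), base=0.5, depth=0.25):
    """Instantaneous display brightness at time t (seconds), obtained by
    superimposing several sinusoidal flicker components on a base level."""
    mod = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
    return base + depth * mod

# Sample one second of the modulation at a 120 Hz display refresh rate:
frames = [flicker_brightness(i / 120.0) for i in range(120)]
```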
Alternatively, in this step, if an abnormality is detected in the face of the detected user, it may be determined that the detected user may have Parkinson's disease.
In this embodiment, the method further includes:
capturing video of the detected user to obtain a captured video, and determining the pupil gaze direction and pupil movement track of the detected user from the captured video;
performing anomaly detection on the pupil gaze direction and the pupil movement track;
if the abnormality detection is not qualified, sending a facial pupil abnormality prompt;
specifically, the pupil is located in each video frame of the captured video, the pupil gaze direction and pupil movement track of the detected user are determined from the located pupil positions, and whether the gaze direction and movement track are abnormal is judged; if so, the detected user's pupil movement is considered abnormal and a facial-pupil abnormality prompt is sent.
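The gaze-direction and trajectory check can be sketched as a simple plausibility test (the jump threshold and gaze region are assumptions; a real system would locate the pupil with an eye detector and apply clinically calibrated criteria):

```python
import math

def pupil_track_abnormal(track, max_jump=20.0, gaze_box=((0, 0), (640, 480))):
    """Flag a pupil-centre trajectory (list of (x, y) points, in pixels)
    as abnormal if the gaze point leaves the expected region or any
    frame-to-frame jump is implausibly large."""
    (xmin, ymin), (xmax, ymax) = gaze_box
    for (x, y) in track:
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            return True              # gaze outside the expected region
    jumps = (math.dist(a, b) for a, b in zip(track, track[1:]))
    return any(j > max_jump for j in jumps)
```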
Further, the method further comprises:
determining facial expressions of the detected user according to the key point set, and determining an expression association feature set according to the facial expressions;
respectively determining the expression amplitude values of the feature areas in each expression associated feature group according to the facial image, and respectively calculating the difference value of the expression amplitude values between the feature areas in each expression associated feature group to obtain an amplitude value difference;
respectively inquiring standard deviation of each expression associated feature group under the facial expression, and respectively calculating variances between the corresponding amplitude differences and the standard deviation of each expression associated feature group to obtain expression feature values;
if the expression characteristic value is larger than the characteristic threshold value, sending a facial expression abnormality prompt;
The Facial Action Coding System (FACS) divides the face into a number of Action Units (AUs) that are mutually independent yet interconnected. The motion characteristics of these facial action units and the main regions they control can reflect subtle changes in facial movement, so FACS can be used to detect the motion state of muscles such as the corrugator (frowning), frontalis and nasalis muscles. In fig. 4, the upper forehead, the cheeks on both sides, the mouth corners and the eye corners are the action unit detection regions used as modelling indices.
FACS maps a displayed emotion to a set of facial muscle movements. A subject's facial actions are described by the amplitude and combination of facial Action Units (AUs), and the system has been widely used in the study of micro-expressions. The amplitude of each AU is graded on a 5-level scale. The AUs are functionally relatively independent: each AU is defined as a small action unit representing part or all of the function of one muscle, or the combined action of several muscles.
Here, the existing FACS is improved: each symmetric AU is split into two different AUs, the amplitudes of the two AUs are re-quantized, and the result is used as a raw data set with which a deep neural network is trained to locate and quantize asymmetric AUs.
For a particular expression (or facial action), since each facial expression is followed by a neutral expression, the AU values during the neutral expression can be identified and averaged. Comparison then reveals which AUs peak during the expression, and the AUs associated with it can thus be determined. For example, AU01 (inner brow raiser), AU06 (cheek raiser) and AU12 (lip corner puller) show three distinct peaks in a video of a smiling face, so they are associated with smiles.
When the detected user exhibits a specific facial expression, attention is focused on the asymmetric amplitude difference of the symmetric AUs. Therefore, when a specific AU is activated, the variance between the corresponding amplitude difference and the standard deviation of each expression-associated feature group is calculated, yielding the expression feature values.
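A hedged sketch of this computation: each feature group pairs the left and right halves of a symmetric AU, the per-group "standard deviation" is treated as a queried reference value, and the "variance" is interpreted as the squared deviation from that reference — an assumption about the wording above, with illustrative numbers:

```python
def expression_feature_value(groups):
    """groups: list of (left_amplitude, right_amplitude, reference_std) tuples,
    one per expression-associated feature group."""
    total = 0.0
    for left_amp, right_amp, ref_std in groups:
        amp_diff = abs(left_amp - right_amp)  # asymmetry of the symmetric AU pair
        total += (amp_diff - ref_std) ** 2    # squared deviation from the reference
    return total / len(groups)

def expression_abnormal(groups, feature_threshold=1.0):
    """Condition for sending the facial expression abnormality prompt."""
    return expression_feature_value(groups) > feature_threshold
```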
In this embodiment, the nasolabial fold region can be effectively determined from the facial key points in the key point set, and the nasolabial fold feature line can be automatically determined from that region; the nasolabial fold depth of the detected user's face is then effectively determined from the feature line. By acquiring the mouth angle key points and eyelid key points, the mouth angle inclination and eyelid closing distance of the detected user's face are determined respectively, and whether the detected user's face is abnormal can be effectively determined from the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance.
Detecting that the physical state of the user is abnormal comprises the following steps:
acquiring a plurality of human body key points based on the detection information, and obtaining a key point set; the human body key points comprise the head, neck, shoulder, elbow, wrist, waist, knee and ankle parts;
First, key points are defined for different human postures. Key points of the head, neck, shoulders, elbows, wrists, waist, knees, ankles, etc. are detected from the image, giving a key point set C = {(X1, Y1), (X2, Y2), …, (Xn, Yn)} and posture evaluation indices (confidence scores) S = (S1, S2, …, Sn). Typical rigidity postures include head tilt, arm suspension, elbow bending, etc.
The posture evaluation indices are calculated separately for the different human body key points.
For the head-tilt case, the head-neck and head-to-shoulder key point coordinates on both sides are connected to obtain straight lines 1, 2 and 3, and the angle ranges and distances between lines 1 and 2 and between lines 2 and 3 are calculated respectively;
for the arm-suspension case, the wrist and elbow are connected to obtain the arm line, while the elbow-shoulder and elbow-waist key points are connected to obtain two further lines, and the angle range and distance between the arm line and each of the other two lines are calculated;
for the elbow-bending case, the wrist-elbow and elbow-shoulder key points are connected to obtain two straight lines, and the angle range and distance between the two lines are calculated.
Calculating the confidence coefficient of each key point according to the key point set;
if the confidence coefficient of the key point is larger than 0.4, the user is considered to be abnormal in posture.
Under normal conditions, the confidence of each key point is usually less than 0.4. The values of C and S are evaluated in a loop; if Si > 0.4, the human body can be considered to be in an abnormal state, and the angle θ and the distance d are calculated for each of the cases above. Each key point t has two-dimensional coordinates (Xt, Yt). For three key points A, B and C, with B the shared vertex of the two lines, the angle is calculated as:

θ = arccos((p² + q² − r²) / (2pq))

where p, q and r are the pairwise Euclidean distances:

p = |AB|, q = |BC|, r = |AC|

The distance ratio is calculated as:

d = r / (p + q)
the angular and distance ranges in each case are as follows:
(1) In the head-tilt case, the angle ranges between lines 1 and 2 and between lines 1 and 3 are [10°, 40°] ∪ [50°, 80°], and the distance-ratio range is [0, 0.4] ∪ [0.8, +∞).
(2) In the arm-suspension case, the angle range between lines 1 and 2 is [0°, 180°] and the distance-ratio range is [0.1, 0.45]; the angle range between lines 1 and 3 is [40°, 120°] and the distance-ratio range is [0.1, 0.45].
(3) In the elbow-bending case, the angle range between the two lines is [1°, 60°] and the distance-ratio range is [0.8, 0.95].
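The per-case geometry can be sketched as follows, assuming the angle between the two key point segments follows from the law of cosines on the pairwise distances p, q, r, that the reported angle is the bend measured from a straight limb, and that the distance ratio is r/(p+q), which approaches 1 for a straight limb — reconstructions consistent with the ranges above, not the patent's exact formulas:

```python
import math

def bend_angle_and_ratio(a, b, c):
    """a, b, c: 2-D key points with b the shared vertex (e.g. the elbow).
    Returns (bend angle in degrees from a straight limb, distance ratio)."""
    p = math.dist(a, b)  # e.g. wrist-elbow
    q = math.dist(b, c)  # e.g. elbow-shoulder
    r = math.dist(a, c)  # e.g. wrist-shoulder
    interior = math.degrees(math.acos((p * p + q * q - r * r) / (2 * p * q)))
    return 180.0 - interior, r / (p + q)

# Elbow bent 45 degrees from straight: wrist (0,0), elbow (1,0), shoulder beyond it.
shoulder = (1 + math.cos(math.radians(45)), math.sin(math.radians(45)))
bend, ratio = bend_angle_and_ratio((0.0, 0.0), (1.0, 0.0), shoulder)
elbow_abnormal = 1.0 <= bend <= 60.0 and 0.8 <= ratio <= 0.95  # case (3) ranges
```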
Other ways of detecting whether an abnormality occurs in the user's posture may also be employed:
locating a position to a user target based on the detection information;
setting initial parameters according to the user's current body state in the detection information, where the body positions of the current body state comprise the head, shoulder-arm, hand, waist and leg parts; matching the current frame in the detection information against the previous N−m frames; if the matching results for a body position show a difference below a threshold every X frames, the user is determined to be in a tremor state; and if the matching result for a body position exceeds the normal threshold range, the user is determined to be in an abnormal-posture state.
The position of the target user can be rapidly located by deep online real-time tracking. Images of different frames are matched in a loop by an improved Hungarian algorithm, and m can be set as required. During matching, the judgment can be made according to the number of body positions: for example, the user may be considered abnormal when one of the five body parts (head, shoulder-arm, hand, waist, leg) is abnormal, or the criterion may instead be set to two, three, four or five abnormal parts.
The matching can mark feature points, select the centre line of the feature points, and check by comparison whether the centre-line position exceeds a threshold. For example, feature points of a hand are selected in a picture along the edge of the hand so that they outline its shape as a whole, and the centre line of each finger is then found. When a Parkinson's patient's hand trembles, the hand shakes and the centre-line position changes slightly; if the range of change exceeds the normal threshold range within a specific time, the hand is judged to be in a trembling state.
For example, if the centre line of a finger in the first frame image is taken as the initial position, the centre line of the same finger in the second frame image is at a distance of 3 from it, and the normal threshold is 2, the threshold can be considered exceeded. The number of compared images may also be set to 2, 3, 4, 5 and so on; comparing several images gives a more accurate result.
The head, shoulder-arm, hand, waist, leg, etc. may each be marked with feature points and then compared to see whether the difference in position distance exceeds the threshold. A person who deliberately shakes or moves produces a larger amplitude, so the threshold can be set as a range, such as 2-4; a change smaller than 2 or larger than 4 is then determined not to exceed the tremor threshold.
Besides judging by the distance between centre lines, the distances between sequential connecting lines of the feature points may also be used: for the shoulders, waist, etc., the centre line is far from the edge and hard to select, so connecting-line judgment may be chosen instead.
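The centre-line comparison above can be sketched as follows, reducing a body part's centre line to the mean x-coordinate of its feature points per frame; treating shifts inside the 2-4 band as tremor (and shifts outside it as stillness or deliberate movement) follows the worked example, while the names and data are illustrative:

```python
def centerline_x(points):
    """Mean x-coordinate of a body part's feature points in one frame."""
    return sum(x for x, _ in points) / len(points)

def is_trembling(frame_a, frame_b, low=2.0, high=4.0):
    """A centre-line shift inside [low, high] suggests involuntary shaking;
    below `low` the part is still, above `high` the movement is deliberate."""
    shift = abs(centerline_x(frame_a) - centerline_x(frame_b))
    return low <= shift <= high

hand_t0 = [(10.0, 0.0), (12.0, 0.0)]  # centre line at x = 11
hand_t1 = [(13.0, 0.0), (15.0, 0.0)]  # shifted by 3: inside the tremor band
hand_t2 = [(30.0, 0.0), (32.0, 0.0)]  # shifted by 20: deliberate movement
```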
Examples
Referring to fig. 5, a flowchart of a method for detecting stimulus according to a second embodiment of the present invention is provided, and the method is used for further refining step S30 in the first embodiment, and includes the steps of:
step S31, acquiring position information of mouth angle designated points in the mouth angle key points, and calculating the slope between the mouth angle designated points according to the position information of the mouth angle designated points to obtain the mouth angle gradient;
The mouth angle designated points can be set as required; in this step they are P76, P85 and P82 among the mouth angle key points, and the mouth angle slopes are obtained by calculating the slopes between the three points P76, P85 and P82.
K_L is the inclination of the left mouth corner of the detected user's face and K_R is the inclination of the right mouth corner. Since a left-corner droop causes the left slope to be larger than the right, the difference between the two is taken as the mouth angle inclination metric.
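Step S31 can be sketched under the assumption that P85 is a mouth-centre point and P76/P82 are the left/right corners, comparing the magnitudes of the two segment slopes; the keypoint layout and coordinates are illustrative:

```python
def abs_slope(p1, p2):
    """Magnitude of the slope of the segment joining two keypoints."""
    (x1, y1), (x2, y2) = p1, p2
    return abs((y2 - y1) / (x2 - x1))

def mouth_angle_inclination(p76, p85, p82):
    """K_L - K_R: near zero for a symmetric mouth, positive when the left
    corner droops (left slope exceeds right, as described above)."""
    k_left = abs_slope(p76, p85)   # left corner to mouth centre
    k_right = abs_slope(p85, p82)  # mouth centre to right corner
    return k_left - k_right
```

For a symmetric mouth (0,0)-(2,1)-(4,0) the metric is 0; lowering the left corner to (0,-1) yields 0.5.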
Step S32, obtaining the position information of eyelid specified points in the eyelid key points, and determining the distance between the eyelid specified points according to the position information of the eyelid specified points to obtain a left eyelid closing distance and a right eyelid closing distance;
Here the eye regions P60-P67 and P68-P75 are selected, and the distance between the upper and lower eyelids is calculated to judge the degree of closure; the smaller the distance, the higher the degree of eyelid closure. Fig. 6 shows that the distance between the left and right eyelids is smaller for Parkinson's patients, while the eyelid distance of normal persons changes little.
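Step S32 can be sketched as the mean vertical gap between paired upper- and lower-eyelid keypoints; the pairing of the P60-P67 / P68-P75 points and the coordinates are illustrative assumptions:

```python
def eyelid_closing_distance(upper, lower):
    """upper, lower: equal-length lists of (x, y) eyelid keypoints.
    A smaller value means the eyelid is closer to fully closed."""
    gaps = [abs(yu - yl) for (_, yu), (_, yl) in zip(upper, lower)]
    return sum(gaps) / len(gaps)

open_eye = eyelid_closing_distance([(0, 4.0), (1, 5.0)], [(0, 0.0), (1, 0.0)])
narrow_eye = eyelid_closing_distance([(0, 1.0), (1, 1.0)], [(0, 0.0), (1, 0.0)])
```

Computing this separately for each eye gives the left and right closing distances whose difference is compared against the distance threshold.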
Optionally, in this embodiment, after determining the mouth angle inclination and the eyelid closing distance according to the mouth angle key point and the eyelid key point, the method further includes:
calculating a distance difference between the right eyelid closing distance and the left eyelid closing distance;
If the distance difference is larger than the distance threshold, the nasolabial fold depth is larger than the depth threshold, and the mouth angle inclination is larger than the inclination threshold, it is judged that the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, and the face of the detected user is judged to be abnormal.
Referring to fig. 7, the difference between the inclination of the mouth angle and the inclination threshold in the normal state is shown, and in this step, the distance threshold, the depth threshold and the inclination threshold may be set according to the requirement.
In this embodiment, by acquiring the position information of the mouth angle designated points, the slope between these points can be effectively calculated and the mouth angle inclination obtained; by acquiring the position information of the eyelid designated points among the eyelid key points, the distance between the eyelid designated points can be effectively determined, giving the left eyelid closing distance and the right eyelid closing distance.
Examples
Referring to fig. 8, a schematic structural diagram of a face abnormality detection system 100 according to a third embodiment of the present invention includes: a nasolabial fold determining module 10, a depth determining module 11, an inclination determining module 12, and a flash stimulation module 13, wherein:
the nasolabial fold determining module 10 is configured to extract key points from facial images of a detected user, obtain a set of key points, and determine a nasolabial fold region according to the facial key points in the set.
The depth determining module 11 is configured to determine a nasolabial fold feature line according to the nasolabial fold region, and determine the nasolabial fold depth according to the feature line.
Wherein the depth determining module 11 is further configured to: perform gray-scale processing on the nasolabial fold region to obtain a nasolabial fold gray map, and perform binarization processing on the gray map to obtain a nasolabial fold binary map;
and obtain the pixel points corresponding to a preset gray value in the nasolabial fold binary map to obtain the nasolabial fold feature line.
Optionally, the depth determining module 11 is further configured to: obtain the pixel area corresponding to the nasolabial fold feature line as the nasolabial fold area, and calculate the area ratio between the nasolabial fold area and the area of the nasolabial fold region;
and determine the nasolabial fold depth according to the area ratio.
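These two module steps can be sketched with numpy alone, binarizing an already-grayscale region and scoring depth as the ratio of feature-line pixels to region pixels; the threshold value and test image are illustrative:

```python
import numpy as np

def nasolabial_depth(gray_region, threshold=100):
    """gray_region: 2-D uint8 array covering the nasolabial fold region.
    Pixels darker than the threshold are taken as the fold feature line."""
    binary = (gray_region < threshold).astype(np.uint8)  # 1 = feature-line pixel
    line_area = int(binary.sum())                        # nasolabial fold area
    return line_area / gray_region.size                  # larger ratio = deeper fold

region = np.full((10, 10), 200, dtype=np.uint8)  # bright skin
region[4:6, :] = 30                              # a dark fold line across the region
depth = nasolabial_depth(region)                 # 20 dark pixels / 100 total = 0.2
```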
And the inclination determining module 12 is configured to obtain a corner key point and an eyelid key point in the key point set, and determine a corner inclination and an eyelid closing distance according to the corner key point and the eyelid key point, respectively.
Wherein the inclination determination module 12 is further configured to: acquiring position information of mouth angle designated points in the mouth angle key points, and calculating the slope between the mouth angle designated points according to the position information of the mouth angle designated points to obtain the mouth angle gradient;
and acquiring the position information of eyelid specified points in the eyelid key points, and determining the distance between the eyelid specified points according to the position information of the eyelid specified points to obtain a left eyelid closing distance and a right eyelid closing distance.
The flash stimulation module 13 is configured to judge that the face of the detected user is abnormal if the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, and to perform flash stimulation on the detected user.
The flash stimulation module 13 is further configured such that said flash stimulation of said detected user comprises:
controlling the flash stimulation equipment to perform flash stimulation of gamma frequency bands;
Or, performing picture flicker stimulation on the detection user according to the preset frequency in the flicker stimulation equipment.
Wherein the flash stimulation module 13 is further configured to: acquiring video of the detection user to obtain an acquisition video, and determining the pupil gaze direction and the pupil movement track of the detection user according to the acquisition video;
performing anomaly detection on the pupil gaze direction and the pupil movement track;
and if the abnormality detection fails, sending a facial pupil abnormality prompt.
Optionally, the flash stimulation module 13 is further configured to: determining facial expressions of the detected user according to the key point set, and determining an expression association feature set according to the facial expressions;
respectively determining the expression amplitude values of the feature areas in each expression associated feature group according to the facial image, and respectively calculating the difference value of the expression amplitude values between the feature areas in each expression associated feature group to obtain an amplitude value difference;
respectively inquiring standard deviation of each expression associated feature group under the facial expression, and respectively calculating variances between the corresponding amplitude differences and the standard deviation of each expression associated feature group to obtain expression feature values;
and if the expression characteristic value is larger than the characteristic threshold value, sending a facial expression abnormal prompt.
Further, the flash stimulation module 13 is further configured to: calculating a distance difference between the right eyelid closing distance and the left eyelid closing distance;
and if the distance difference is larger than a distance threshold, the depth of the nasolabial folds is larger than a depth threshold, and the inclination of the mouth angle is larger than an inclination threshold, judging that the depth of the nasolabial folds, the inclination of the mouth angle and the eyelid closing distance meet the facial abnormality conditions, and judging that the face of the detected user is abnormal.
According to the face detection method and device, the nasolabial fold region can be effectively determined from the facial key points in the key point set, and the nasolabial fold feature line can be automatically determined from that region; the nasolabial fold depth of the detected user's face is then effectively determined from the feature line. By acquiring the mouth angle key points and eyelid key points, the mouth angle inclination and eyelid closing distance of the detected user's face are determined respectively, and whether the face is abnormal can be effectively determined from the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance. The method and device can thus accurately detect facial abnormality for different detected users, improving the accuracy of face detection.
Examples
Fig. 9 is a block diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 9, the terminal device 2 of this embodiment includes: a processor 20, a memory 21 and a computer program 22 stored in the memory 21 and executable on the processor 20, for example a program implementing the method for detecting stimulation. The processor 20, when executing the computer program 22, implements the steps of the various embodiments of the method for detecting stimulation described above.
Illustratively, the computer program 22 may be partitioned into one or more modules that are stored in the memory 21 and executed by the processor 20 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 22 in the terminal device 2. The terminal device may include, but is not limited to, a processor 20, a memory 21.
The processor 20 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or a memory of the terminal device 2. The memory 21 may be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 21 may also be used for temporarily storing data that has been output or is to be output.
In addition, each functional module in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium, which may be non-volatile or volatile. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments through a computer program instructing the related hardware; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code, object code, executable file or some intermediate form. The computer readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content included in the computer readable storage medium may be appropriately adjusted according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions the computer readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (5)

1. A method of detecting stimulation, the method comprising:
periodically detecting the state of the user in a detection range;
when the state of the user is abnormal, flashing and/or picture flashing stimulation is carried out on the user;
stopping the stimulation until the state of the user is recovered to be normal and/or the user is not found in the detection range;
the method comprises the following steps when the state of the user is abnormal:
the face of the user is abnormal;
and/or the physical state of the user is abnormal;
determining that an abnormality occurs in the face of the user comprises:
extracting key points from a facial image of the user to obtain a key point set, and determining a nasolabial fold region according to the facial key points in the key point set;
determining a nasolabial fold feature line according to the nasolabial fold region, and determining the nasolabial fold depth according to the nasolabial fold feature line;
obtaining a mouth angle key point and an eyelid key point in the key point set, and respectively determining a mouth angle gradient and an eyelid closing distance according to the mouth angle key point and the eyelid key point;
if the nasolabial fold depth, the mouth angle gradient and the eyelid closing distance meet facial abnormality conditions, judging that the face of the user is abnormal;
the flashing and/or screen flashing stimulation of the user comprises:
controlling a flash stimulation device to perform flash stimulation of a gamma frequency band;
and/or performing picture flicker stimulation on the user according to a preset frequency in the display device;
determining that the physical state of the user is abnormal comprises the following steps:
acquiring a plurality of human body key points based on detection information, and obtaining a key point set; the human body key points comprise the head, neck, shoulder, elbow, wrist, waist, knee and ankle parts;
calculating the confidence coefficient of each key point according to the key point set;
if the confidence coefficient of the key point is larger than 0.4, the user is considered to be abnormal in posture.
2. The method of claim 1, wherein:
The determining a nasolabial fold feature line according to the nasolabial fold region includes:
performing gray-scale processing on the nasolabial fold region to obtain a nasolabial fold gray map, and performing binarization processing on the gray map to obtain a nasolabial fold binary map;
obtaining pixel points corresponding to preset gray values in the nasolabial fold binary map to obtain the nasolabial fold feature line;
the determining the depth of the nasolabial folds according to the nasolabial fold characteristic line comprises the following steps:
obtaining the pixel area corresponding to the nasolabial fold feature line as the nasolabial fold area, and calculating the area ratio between the nasolabial fold area and the area of the nasolabial fold region;
and determining the depth of the nasolabial folds according to the area ratio.
3. The method of claim 2, wherein:
the determining the mouth angle gradient and the eyelid closing distance according to the mouth angle key point and the eyelid key point respectively comprises the following steps:
acquiring position information of mouth angle designated points in the mouth angle key points, and calculating the slope between the mouth angle designated points according to the position information of the mouth angle designated points to obtain the mouth angle gradient;
acquiring position information of eyelid specified points in the eyelid key points, and determining the distance between the eyelid specified points according to the position information of the eyelid specified points to obtain a left eyelid closing distance and a right eyelid closing distance;
After determining the mouth angle gradient and the eyelid closing distance according to the mouth angle key point and the eyelid key point, the method further comprises:
calculating a distance difference between the right eyelid closing distance and the left eyelid closing distance;
and if the distance difference is larger than a distance threshold, the nasolabial fold depth is larger than a depth threshold, and the mouth angle inclination is larger than an inclination threshold, judging that the nasolabial fold depth, the mouth angle inclination and the eyelid closing distance meet the facial abnormality conditions, and judging that the face of the user is abnormal.
4. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 3 when the computer program is executed.
5. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 3.
CN202310661494.7A 2023-06-06 2023-06-06 Method, terminal and storage medium for detecting stimulation Active CN116392086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310661494.7A CN116392086B (en) 2023-06-06 2023-06-06 Method, terminal and storage medium for detecting stimulation

Publications (2)

Publication Number Publication Date
CN116392086A CN116392086A (en) 2023-07-07
CN116392086B true CN116392086B (en) 2023-08-25

Family

ID=87009016


Also Published As

Publication number Publication date
CN116392086A (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN105184246B (en) Living body detection method and living body detection system
CN108427503B (en) Human eye tracking method and human eye tracking device
JP3673834B2 (en) Gaze input communication method using eye movement
US11715231B2 (en) Head pose estimation from local eye region
CN111933275A (en) Depression evaluation system based on eye movement and facial expression
US20210100492A1 (en) Method for detecting and classifying a motor seizure
CN112232128B (en) Eye tracking based method for identifying care needs of old disabled people
JP2018007792A (en) Expression recognition diagnosis support device
Al-Rahayfeh et al. Enhanced frame rate for real-time eye tracking using circular hough transform
CN109784302A (en) A kind of human face in-vivo detection method and face recognition device
US20160302658A1 (en) Filtering eye blink artifact from infrared videonystagmography
Zhang et al. Discrimination of gaze directions using low-level eye image features
CN114202795A (en) Method for quickly positioning pupils of old people
CN111526286B (en) Method and system for controlling motor motion and terminal equipment
JP2004192551A (en) Eye opening/closing determining apparatus
CN105279764B (en) Eye image processing apparatus and method
CN116392086B (en) Method, terminal and storage medium for detecting stimulation
Rakshita Communication through real-time video oculography using face landmark detection
CN113297966A (en) Night learning method based on multiple stimuli
US11954905B2 (en) Landmark temporal smoothing
JP2000268161A (en) Real time expression detector
Attivissimo et al. Performance evaluation of image processing algorithms for eye blinking detection
CN112233769A (en) Recovery system after suffering from illness based on data acquisition
Perez et al. Real-time iris detection on faces with coronal axis rotation
Chaudhari et al. Face feature detection and normalization based on eyeball center and recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant