CN107741784A - Entertainment interaction method suitable for severely paralyzed patients - Google Patents

Entertainment interaction method suitable for severely paralyzed patients Download PDF

Info

Publication number
CN107741784A
CN107741784A
Authority
CN
China
Prior art keywords
patient
image
cursor
mouth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710929483.7A
Other languages
Chinese (zh)
Inventor
李金屏
安庆浩
蒋明敏
鲁守银
韩延彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201710929483.7A priority Critical patent/CN107741784A/en
Publication of CN107741784A publication Critical patent/CN107741784A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an entertainment interaction method suitable for severely paralyzed patients, comprising the following steps: Step 1, acquire a head image of the severely paralyzed patient and crop the face region image of the patient from the head image; Step 2, calculate the center point of the patient's face region image and control the movement of the cursor according to the offset direction of the center point; Step 3, crop the eye and mouth regions of the patient from the face region image, and recognize the patient's blinking and mouth closing actions from the images of the eye and mouth regions; Step 4, control the movement of the cursor according to the patient's nodding, head shaking, and combinations thereof; complete a right mouse button click when the patient closes the mouth twice, and complete a left mouse button click when the patient blinks twice, so that entertainment interaction is accomplished by controlling the cursor and the mouse operations. The present invention not only satisfies the patient's need to watch entertainment programs but also meets the real-time requirements of entertainment interaction.

Description

Entertainment interaction method suitable for severely paralyzed patients
Technical field
The present invention relates to an entertainment interaction method suitable for severely paralyzed patients, and belongs to the technical field of image processing.
Background technology
A paralyzed population exists widely in China. Severely paralyzed patients cannot move their limbs and have speech impairments, but their minds remain clear. There are many causes of paralysis, such as stroke sequelae, cerebral palsy in children, and traffic accidents.
Among the paralyzed population, paralysis caused by stroke accounts for a very high proportion. Stroke is a disease in which brain tissue is damaged because a cerebral blood vessel suddenly ruptures or because a blockage prevents blood from flowing into the brain; it includes ischemic and hemorrhagic types. Stroke is also an intractable disease that seriously endangers human health and life safety today, characterized by high incidence, high disability rate, and high mortality. In 2008 the world death toll was 57 million, of which deaths from chronic diseases accounted for 63%; in China, chronic diseases account for 85% of total deaths, and chronic diseases such as cardiovascular disease and cancer are the main causes of death. With the development of medicine and the improvement of intensive care and comprehensive rescue techniques, the mortality rate of stroke patients has decreased significantly, but the disability rate is instead rising. The mental health of patients is very helpful to treatment, and more attention should be paid to promoting the health of patients.
There have been many research achievements in head pose estimation. One proposed approach estimates head pose under varying illumination conditions, extracting features with histograms of oriented gradients and principal component analysis and then classifying with an SVM. Another approach extracts SIFT features from face images and matches the features between two images to estimate the head pose. Yet another first constructs a pose-similarity space and estimates the head pose using the similarity between poses. The above methods all estimate the pose in the horizontal direction of the head. In addition, research on head pose estimation has also used random forests, metric learning, manifold learning, deep learning, and so on.
For blink recognition, researchers at home and abroad have also carried out some studies. One approach uses an infrared-sensitive camera to acquire the driver's face image, tracks the eyes with a Kalman filter, and then calculates how long the driver's eyes remain closed to judge fatigue; however, this method places certain requirements on the imaging equipment. Another approach first trains open-eye and closed-eye templates and then performs correlation analysis with the current eye region to decide whether the eyes are open or closed. Besides this, blink recognition has also been applied to human-computer interaction for people with disabilities.
Severely paralyzed patients cannot move their limbs and have speech impairments, but their minds are clear, so in addition to their basic needs they also require spiritual comfort. Severely paralyzed patients have a large amount of free time during the day, and their mood is generally unstable, so dedicated medical staff are usually required for nursing. If patients could choose by themselves the entertainment programs they want to watch, this would not only be a form of mental support for them but would also save a great deal of manpower and material resources, thereby creating value for society.
Content of the invention
In view of the above deficiencies, the present invention provides an entertainment interaction method suitable for severely paralyzed patients, which completes entertainment interaction through the patient's head pose, blinking, and mouth opening and closing actions.
The technical solution adopted by the present invention to solve its technical problem is as follows: an entertainment interaction method suitable for severely paralyzed patients, characterized in that the movement of the cursor and the left and right mouse button operations are controlled through the patient's head pose, blinking, and mouth opening and closing actions, thereby completing entertainment interaction.
Further, the entertainment interaction method comprises the following steps:
Step 1: acquire a head image of the severely paralyzed patient, and crop the face region image of the patient from the head image;
Step 2: calculate the center point of the patient's face region image, and control the movement of the cursor according to the offset direction of the center point;
Step 3: crop the eye and mouth regions of the patient from the face region image, and recognize the patient's blinking and mouth closing actions from the images of the eye and mouth regions;
Step 4: control the movement of the cursor according to the patient's nodding, head shaking, and combinations thereof; complete a right mouse button click when the patient closes the mouth twice, and complete a left mouse button click when the patient blinks twice, thereby controlling the cursor and mouse operations to accomplish entertainment interaction.
Further, step 1 specifically comprises the following steps:
Step 11: install a camera and adjust its angle so that the camera is aimed at the patient's head, and then acquire the head image of the patient;
Step 12: crop the image of the patient's face region from the head image using the SeetaFace algorithm or the Adaboost algorithm in OpenCV.
Further, step 2 specifically comprises the following steps:
Step 21: calculate the center point of the patient's face region image from the height and width of the face region image; the center point of the face region image is the midpoint of the height and width of the face region image;
Step 22: mark four regions m1, m2, m3 and m4 in the head image of the patient, and obtain the offset direction of the center point according to the head pose;
Step 23: control the movement of the cursor in eight directions according to the offset direction of the face image center point; the position of the cursor on the current display is obtained by the function GetCursorPos(), and the new position of the cursor is calculated by the following formulas:
x1 = x + k(m1 + m3)
y1 = y + k(m2 + m4)
where x and y are the coordinates of the cursor on the current display, x1 and y1 are the calculated new position of the cursor on the display, and k is the amplitude of the cursor movement.
Further, step 3 specifically comprises the following steps:
Step 31: calculate the two eye center points and the two mouth-corner key points from the cropped face region image using the SeetaFace algorithm, and define the sizes of the eye and mouth region images from these center points, key points, and the size of the face region image;
Step 32: blink threshold training: select 30 open-eye images and 30 closed-eye images from the patient's image sequence, convert the eye region images to grayscale, and calculate the variance of the eye grayscale image according to the following formula:
σ = Σ_{x=1}^{N} Σ_{y=1}^{M} (A(x, y) - μ1)² / (M × N)
where M × N is the size of the image, A(x, y) is the pixel value in the image, μ1 is the mean of the eye image, and σ is the calculated variance;
Step 33: calculate the mean μ11 of the variances of the 30 open-eye images and the mean μ12 of the variances of the 30 closed-eye images, and then calculate the blink threshold T1 = (μ11 + μ12)/2;
Step 34: blink test: calculate the variance of the patient's eye region image; if it is greater than the threshold T1 the eyes are considered open, otherwise they are considered closed;
Step 35: mouth-closing threshold training: select 30 closed-mouth images and 30 open-mouth images from the patient's image sequence, convert the mouth region images to grayscale, and calculate the mean of the mouth grayscale image according to the following formula:
μ2 = Σ_{x=1}^{N} Σ_{y=1}^{M} A(x, y) / (M × N)
where μ2 is the calculated mean of the mouth image;
Step 36: calculate the mean μ21 of the 30 closed-mouth images and the mean μ22 of the 30 open-mouth images, and then calculate the mouth-closing threshold T2 = (μ21 + μ22)/2;
Step 37: mouth-closing test: calculate the mean of the patient's mouth region image; if it is greater than the threshold T2 the mouth is considered closed, otherwise it is considered open.
Further, step 4 specifically comprises the following steps:
Step 41: when the patient wants to watch an entertainment program, the patient keeps the mouth open for 3 seconds to trigger the start button and start the system;
Step 42: after the system starts, the patient controls the movement of the cursor by nodding, shaking the head, and combinations thereof, and moves the cursor to the entertainment program;
Step 43: the patient then closes the mouth twice to perform a right mouse button click and open the control menu of the entertainment program, where keeping the mouth closed for 1 second and then opening it counts as one mouth-closing action; the patient blinks twice to open the entertainment program, where keeping the eyes closed for 1 second and then opening them counts as one blink;
Step 44: when the patient no longer wants to watch the entertainment program, the patient keeps the mouth open for 3 seconds again to trigger the close button and shut down the system.
The beneficial effects of the invention are as follows:
The invention fully takes into account the characteristics of severely paralyzed patients, whose limbs cannot move and who have speech impairments but whose minds are clear. The movement of the cursor and the left and right mouse buttons are controlled by head pose, blinking, and mouth opening and closing actions, thereby completing entertainment interaction. When the patient wants to watch an entertainment program, the system is first started by keeping the mouth open for 3 seconds; after the system starts, the cursor can be moved by nodding, shaking the head, and combinations thereof, and moved to the entertainment program; then a right mouse button click is completed by closing the mouth twice, which opens the control menu of the entertainment program, where keeping the mouth closed for 1 second and then opening it counts as one mouth-closing action; the entertainment program is opened by blinking twice, where keeping the eyes closed for 1 second and then opening them counts as one blink; finally the system is shut down by keeping the mouth open for 3 seconds.
The present invention first acquires an image of the patient's head with a camera, then crops the face region image from the head image, calculates the center point of the face region, and controls the movement of the cursor according to the offset direction of the center point. For the control of the left and right mouse buttons, the images of the eye and mouth regions are first cropped from the face region image, the blinking and mouth closing actions are then recognized, a right mouse button click is completed by closing the mouth twice, and a left mouse button click is completed by blinking twice, so that entertainment interaction is accomplished by controlling the cursor and the mouse operations while meeting the real-time requirements of entertainment interaction.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2(a) and Fig. 2(b) are schematic diagrams of the device for a severely paralyzed patient; Fig. 2(a) is a front view of the device, and Fig. 2(b) is a schematic diagram of the device when the patient sits up or lies down;
Fig. 3(a) to Fig. 3(c) are schematic diagrams of face detection and head pose; Fig. 3(a) is the acquired head image, Fig. 3(b) is a schematic diagram of the m1, m2, m3 and m4 marks, and Fig. 3(c) is a schematic diagram of the calculation of m1;
Fig. 4(a) to Fig. 4(h) are schematic diagrams of the movement of the face region center point;
Fig. 5 is a flow chart of cursor movement controlled by head pose;
Fig. 6 shows some of the open-eye and closed-eye images used in blink threshold training;
Fig. 7 shows some of the closed-mouth and open-mouth images used in the mouth-closing test.
Embodiment
In order to clearly illustrate the technical features of this solution, the present invention is described in detail below through specific embodiments in combination with the accompanying drawings. The following disclosure provides many different embodiments or examples for realizing different structures of the present invention. To simplify the disclosure of the invention, the components and settings of specific examples are described below. In addition, the present invention may repeat reference numerals and/or letters in different examples. Such repetition is for the purpose of simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or settings discussed. It should be noted that the components illustrated in the drawings are not necessarily drawn to scale. Descriptions of well-known components and of processing techniques and processes are omitted to avoid unnecessarily limiting the present invention.
The present invention fully considers the characteristics of severely paralyzed patients and designs the whole device on this basis. The movement of the cursor and the left and right mouse button operations are controlled through the patient's head pose, blinking, and mouth opening and closing actions, thereby completing entertainment interaction.
The SeetaFace face recognition engine used in the present invention was developed by the face recognition research group led by Shan Shiguang at the Institute of Computing Technology, Chinese Academy of Sciences. Its code is implemented in C++ and does not depend on third-party libraries; the open-source SeetaFace project has made a great contribution to research on face detection, face alignment, face recognition, and related fields. The Adaboost algorithm is a face detection algorithm that comes with OpenCV, which greatly facilitates research on face recognition and related fields.
The entertainment interaction method suitable for severely paralyzed patients of the present invention controls the movement of the cursor and the left and right mouse button operations through the patient's head pose, blinking, and mouth opening and closing actions, thereby completing entertainment interaction. As shown in Fig. 1, the entertainment interaction method comprises the following steps:
Step 1: acquire a head image of the severely paralyzed patient, and crop the face region image of the patient from the head image;
Step 2: calculate the center point of the patient's face region image, and control the movement of the cursor according to the offset direction of the center point;
Step 3: crop the eye and mouth regions of the patient from the face region image, and recognize the patient's blinking and mouth closing actions from the images of the eye and mouth regions;
Step 4: control the movement of the cursor according to the patient's nodding, head shaking, and combinations thereof; complete a right mouse button click when the patient closes the mouth twice, and complete a left mouse button click when the patient blinks twice, thereby controlling the cursor and mouse operations to accomplish entertainment interaction.
Further, step 1 specifically comprises the following steps:
Step 11: install a camera and adjust its angle so that the camera is aimed at the patient's head, and then acquire the head image of the patient. As shown in Fig. 2, the pan-tilt camera B in Fig. 2(a) and Fig. 2(b) acquires the head image of the patient, and the display C presents the entertainment interaction interface to the patient.
Step 12: crop the image of the patient's face region from the head image using the SeetaFace algorithm or the Adaboost algorithm in OpenCV.
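As a concrete illustration of step 12, the sketch below crops the face region with the Haar cascade (Adaboost) face detector that ships with OpenCV. It is only a minimal sketch: the cascade file name, the camera index and the display loop are assumptions made for illustration, and the SeetaFace detector could be used in the same place, as the step allows.

// Minimal sketch of step 12: crop the face region (rectangle R1 in Fig. 3(a)) with the
// OpenCV Haar cascade (Adaboost) detector. File name and camera index are assumptions.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat cropFaceRegion(const cv::Mat& head, cv::CascadeClassifier& detector) {
    cv::Mat gray;
    cv::cvtColor(head, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> faces;
    detector.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(60, 60));
    if (faces.empty()) return cv::Mat();      // no face in this frame

    return head(faces[0]).clone();            // first (single) detected face
}

int main() {
    cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");
    cv::VideoCapture camera(0);               // pan-tilt camera aimed at the patient's head
    cv::Mat frame;
    while (camera.read(frame)) {
        cv::Mat face = cropFaceRegion(frame, detector);
        if (!face.empty()) cv::imshow("face region", face);
        if (cv::waitKey(30) == 27) break;     // Esc to quit
    }
    return 0;
}

Taking the first detected rectangle reflects the single-patient setting of the device in Fig. 2, where only one face is expected in view.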
Further, step 2 specifically comprises the following steps:
Step 21: calculate the center point of the patient's face region image from the height and width of the face region image; the center point of the face region image is the midpoint of the height and width of the face region image. In the obtained face region image, as shown in Fig. 3(a), the rectangle R1 is the face region; the midpoint of its height and width is taken as the center point of the face region image, which projects onto the head image at position R2 in Fig. 3(a).
Step 22: mark four regions m1, m2, m3 and m4 in the head image of the patient, and obtain the offset direction of the center point according to the head pose. Fig. 3(b) is a simplified version of Fig. 3(a): the dot in the middle of Fig. 3(b) is the center point of the face region, and the inner rectangle is the rectangle R1 of Fig. 3(a). The four regions m1, m2, m3 and m4 are marked in the head image of the patient, as shown in Fig. 3(b), with the marks placed at the center of the display window. Since the paralyzed patient cannot move the limbs, the camera must be aimed at the patient's head during the specific implementation. Fig. 3(c) shows position p1 of Fig. 3(b): when m1 is located at position 2, m1 = 0; when m1 is located at position 1, m1 = -1; when m1 is located at position 3, m1 = 1.
Step 23: control the movement of the cursor in eight directions according to the offset direction of the face image center point; the position of the cursor on the current display is obtained by the function GetCursorPos(), and the new position of the cursor is calculated by the following formulas:
x1 = x + k(m1 + m3)
y1 = y + k(m2 + m4)
where x and y are the coordinates of the cursor on the current display, x1 and y1 are the calculated new position of the cursor on the display, and k is the amplitude (i.e. the speed) of the cursor movement. Fig. 4 shows the movement of the face region center point; its correspondence with the cursor movement is listed in Table 1:
Table 1: Cursor movement table
The specific implementation of step 2 is shown in Fig. 5: the image of the patient's face region is obtained first, then the center point of the image is calculated, the offset direction of the center point is obtained from the head pose, and finally the position of the cursor on the current display is obtained by the function GetCursorPos(), the new position of the cursor is calculated by the above formulas, and the cursor is moved.
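The cursor update of step 23 can be sketched as follows, assuming the display PC runs Windows so that the Win32 functions GetCursorPos() and SetCursorPos() mentioned above are available; the offsets m1 to m4 are assumed to have already been derived from the head pose as in Fig. 3, each taking a value in {-1, 0, 1}, and the helper name moveCursorByHeadPose is illustrative.

// Minimal sketch of the cursor update in step 23 (Windows assumed).
// m1..m4 are the head-pose offsets of Fig. 3, each in {-1, 0, 1}; k is the step amplitude.
#include <windows.h>

void moveCursorByHeadPose(int m1, int m2, int m3, int m4, int k) {
    POINT p;
    GetCursorPos(&p);                 // (x, y): current cursor position on the display
    int x1 = p.x + k * (m1 + m3);     // x1 = x + k(m1 + m3)
    int y1 = p.y + k * (m2 + m4);     // y1 = y + k(m2 + m4)
    SetCursorPos(x1, y1);             // move the cursor to its new position
}

With k acting as the per-frame step size, a larger k trades positioning precision for faster cursor travel.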
Further, step 3 specifically comprises the following steps:
Step 31: calculate the two eye center points and the two mouth-corner key points from the cropped face region image using the SeetaFace algorithm, and define the sizes of the eye and mouth region images from these center points, key points, and the size of the face region image. In this application the size of the eye region image is defined as 45 × 25 and the size of the mouth region image as 125 × 50.
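A minimal sketch of the fixed-size cropping in step 31 is given below; the helper cropAround() and the clamping of the box to the image border are illustrative assumptions, while the 45 × 25 and 125 × 50 sizes are the ones stated above and the landmark points are assumed to come from the SeetaFace alignment.

// Minimal sketch of step 31: cut fixed-size eye and mouth crops around SeetaFace landmarks.
// cropAround() and the border clamping are illustrative; the sizes are the ones stated above.
#include <opencv2/opencv.hpp>

cv::Mat cropAround(const cv::Mat& face, cv::Point2f centre, int width, int height) {
    cv::Rect box(cvRound(centre.x) - width / 2, cvRound(centre.y) - height / 2, width, height);
    box &= cv::Rect(0, 0, face.cols, face.rows);   // keep the box inside the face image
    return face(box).clone();
}

// Usage (landmarks from the SeetaFace 5-point alignment are assumed):
//   cv::Mat eyeRegion   = cropAround(faceImage, eyeCentre,   45, 25);
//   cv::Mat mouthRegion = cropAround(faceImage, mouthCentre, 125, 50);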
Step 32: blink threshold training: select 30 open-eye images and 30 closed-eye images from the patient's image sequence, as shown in Fig. 6, convert the eye region images to grayscale, and calculate the variance of the eye grayscale image according to the following formula:
σ = Σ_{x=1}^{N} Σ_{y=1}^{M} (A(x, y) - μ1)² / (M × N)
where M × N is the size of the image, A(x, y) is the pixel value in the image, μ1 is the mean of the eye image, and σ is the calculated variance; the variance reflects the dispersion of the data.
Step 33: calculate the mean μ11 of the variances of the 30 open-eye images and the mean μ12 of the variances of the 30 closed-eye images, and then calculate the blink threshold T1 = (μ11 + μ12)/2;
Step 34: blink test: calculate the variance of the patient's eye region image; if it is greater than the threshold T1 the eyes are considered open, otherwise they are considered closed;
Step 35: mouth-closing threshold training: select 30 closed-mouth images and 30 open-mouth images from the patient's image sequence, as shown in Fig. 7, convert the mouth region images to grayscale, and calculate the mean of the mouth grayscale image according to the following formula:
μ2 = Σ_{x=1}^{N} Σ_{y=1}^{M} A(x, y) / (M × N)
where μ2 is the calculated mean of the mouth image;
Step 36: calculate the mean μ21 of the 30 closed-mouth images and the mean μ22 of the 30 open-mouth images, and then calculate the mouth-closing threshold T2 = (μ21 + μ22)/2;
Step 37: mouth-closing test: calculate the mean of the patient's mouth region image; if it is greater than the threshold T2 the mouth is considered closed, otherwise it is considered open.
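The threshold training and testing of steps 32 to 37 reduce to simple grayscale statistics. The sketch below assumes OpenCV and that the eye and mouth crops from step 31 are already available as grayscale images; cv::meanStdDev returns the population standard deviation, whose square equals the variance defined above, and the helper names are illustrative.

// Minimal sketch of the grayscale statistics behind steps 32-37 (OpenCV assumed).
#include <opencv2/opencv.hpp>
#include <vector>

// Mean of a grayscale region: mu = sum(A(x, y)) / (M * N)
double regionMean(const cv::Mat& grayRegion) {
    return cv::mean(grayRegion)[0];
}

// Variance of a grayscale region: sigma = sum((A(x, y) - mu)^2) / (M * N)
double regionVariance(const cv::Mat& grayRegion) {
    cv::Scalar mu, stddev;
    cv::meanStdDev(grayRegion, mu, stddev);
    return stddev[0] * stddev[0];
}

// Midpoint threshold between two class averages, e.g. T1 = (mu11 + mu12) / 2 in step 33.
double midpointThreshold(const std::vector<double>& classA, const std::vector<double>& classB) {
    auto average = [](const std::vector<double>& v) {
        double sum = 0.0;
        for (double value : v) sum += value;
        return sum / v.size();
    };
    return (average(classA) + average(classB)) / 2.0;
}

// Test rules of steps 34 and 37.
bool isEyeOpen(const cv::Mat& eyeGray, double T1)       { return regionVariance(eyeGray) > T1; }
bool isMouthClosed(const cv::Mat& mouthGray, double T2) { return regionMean(mouthGray) > T2; }

For example, T1 = midpointThreshold(openEyeVariances, closedEyeVariances) reproduces the rule T1 = (μ11 + μ12)/2 of step 33 when the two vectors hold the 30 training variances of each class.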
Further, step 4 specifically comprises the following steps:
Step 41: when the patient wants to watch an entertainment program, the patient keeps the mouth open for 3 seconds to trigger the start button and start the system;
Step 42: after the system starts, the patient controls the movement of the cursor by nodding, shaking the head, and combinations thereof, and moves the cursor to the entertainment program;
Step 43: the patient then closes the mouth twice to perform a right mouse button click and open the control menu of the entertainment program, where keeping the mouth closed for 1 second and then opening it counts as one mouth-closing action; the patient blinks twice to open the entertainment program, where keeping the eyes closed for 1 second and then opening them counts as one blink;
Step 44: when the patient no longer wants to watch the entertainment program, the patient keeps the mouth open for 3 seconds again to trigger the close button and shut down the system.
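The click rules of step 4 can be sketched as a small counter-based state machine, assuming the detectors above report one event per completed blink or mouth-closing action and that the Win32 call mouse_event() is used to synthesize the clicks; the structure and function names are illustrative, and the 3-second start/stop gesture and any timeout-based reset of the counters are not shown.

// Minimal sketch of the click rules in step 4 (Windows assumed; names illustrative).
#include <windows.h>

struct InteractionState {
    int blinkCount = 0;        // completed blinks (eyes closed ~1 s, then opened)
    int mouthCloseCount = 0;   // completed mouth-closing actions (~1 s closed, then opened)
};

// Two blinks -> left mouse button click (opens the selected entertainment program).
void onBlinkCompleted(InteractionState& s) {
    if (++s.blinkCount == 2) {
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        s.blinkCount = 0;
    }
}

// Two mouth-closing actions -> right mouse button click (opens the program's control menu).
void onMouthCloseCompleted(InteractionState& s) {
    if (++s.mouthCloseCount == 2) {
        mouse_event(MOUSEEVENTF_RIGHTDOWN, 0, 0, 0, 0);
        mouse_event(MOUSEEVENTF_RIGHTUP, 0, 0, 0, 0);
        s.mouthCloseCount = 0;
    }
}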
In addition, the application of the present invention is not limited to the processes, mechanisms, manufactures, material compositions, means, methods and steps of the specific embodiments described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, mechanisms, manufactures, material compositions, means, methods or steps that currently exist or will be developed later and that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be applied according to the present invention. Therefore, the appended claims are intended to include such processes, mechanisms, manufactures, material compositions, means, methods or steps within their scope of protection.

Claims (6)

1. An entertainment interaction method suitable for severely paralyzed patients, characterized in that the movement of the cursor and the left and right mouse button operations are controlled through the patient's head pose, blinking, and mouth opening and closing actions, thereby completing entertainment interaction.
2. The entertainment interaction method suitable for severely paralyzed patients according to claim 1, characterized in that the entertainment interaction method comprises the following steps:
Step 1: acquire a head image of the severely paralyzed patient, and crop the face region image of the patient from the head image;
Step 2: calculate the center point of the patient's face region image, and control the movement of the cursor according to the offset direction of the center point;
Step 3: crop the eye and mouth regions of the patient from the face region image, and recognize the patient's blinking and mouth closing actions from the images of the eye and mouth regions;
Step 4: control the movement of the cursor according to the patient's nodding, head shaking, and combinations thereof; complete a right mouse button click when the patient closes the mouth twice, and complete a left mouse button click when the patient blinks twice, thereby controlling the cursor and mouse operations to accomplish entertainment interaction.
3. The entertainment interaction method suitable for severely paralyzed patients according to claim 2, characterized in that step 1 specifically comprises the following steps:
Step 11: install a camera and adjust its angle so that the camera is aimed at the patient's head, and then acquire the head image of the patient;
Step 12: crop the image of the patient's face region from the head image using the SeetaFace algorithm or the Adaboost algorithm in OpenCV.
4. The entertainment interaction method suitable for severely paralyzed patients according to claim 2, characterized in that step 2 specifically comprises the following steps:
Step 21: calculate the center point of the patient's face region image from the height and width of the face region image; the center point of the face region image is the midpoint of the height and width of the face region image;
Step 22: mark four regions m1, m2, m3 and m4 in the head image of the patient, and obtain the offset direction of the center point according to the head pose;
Step 23: control the movement of the cursor in eight directions according to the offset direction of the face image center point; the position of the cursor on the current display is obtained by the function GetCursorPos(), and the new position of the cursor is calculated by the following formulas:
x1 = x + k(m1 + m3)
y1 = y + k(m2 + m4)
where x and y are the coordinates of the cursor on the current display, x1 and y1 are the calculated new position of the cursor on the display, and k is the amplitude of the cursor movement.
5. The entertainment interaction method suitable for severely paralyzed patients according to claim 2, characterized in that step 3 specifically comprises the following steps:
Step 31: calculate the two eye center points and the two mouth-corner key points from the cropped face region image using the SeetaFace algorithm, and define the sizes of the eye and mouth region images from these center points, key points, and the size of the face region image;
Step 32: blink threshold training: select 30 open-eye images and 30 closed-eye images from the patient's image sequence, convert the eye region images to grayscale, and calculate the variance of the eye grayscale image according to the following formula:
σ = Σ_{x=1}^{N} Σ_{y=1}^{M} (A(x, y) - μ1)² / (M × N)
where M × N is the size of the image, A(x, y) is the pixel value in the image, μ1 is the mean of the eye image, and σ is the calculated variance;
Step 33: calculate the mean μ11 of the variances of the 30 open-eye images and the mean μ12 of the variances of the 30 closed-eye images, and then calculate the blink threshold T1 = (μ11 + μ12)/2;
Step 34: blink test: calculate the variance of the patient's eye region image; if it is greater than the threshold T1 the eyes are considered open, otherwise they are considered closed;
Step 35: mouth-closing threshold training: select 30 closed-mouth images and 30 open-mouth images from the patient's image sequence, convert the mouth region images to grayscale, and calculate the mean of the mouth grayscale image according to the following formula:
μ2 = Σ_{x=1}^{N} Σ_{y=1}^{M} A(x, y) / (M × N)
where μ2 is the calculated mean of the mouth image;
Step 36: calculate the mean μ21 of the 30 closed-mouth images and the mean μ22 of the 30 open-mouth images, and then calculate the mouth-closing threshold T2 = (μ21 + μ22)/2;
Step 37: mouth-closing test: calculate the mean of the patient's mouth region image; if it is greater than the threshold T2 the mouth is considered closed, otherwise it is considered open.
6. The entertainment interaction method suitable for severely paralyzed patients according to claim 2, characterized in that step 4 specifically comprises the following steps:
Step 41: when the patient wants to watch an entertainment program, the patient keeps the mouth open for 3 seconds to trigger the start button and start the system;
Step 42: after the system starts, the patient controls the movement of the cursor by nodding, shaking the head, and combinations thereof, and moves the cursor to the entertainment program;
Step 43: the patient then closes the mouth twice to perform a right mouse button click and open the control menu of the entertainment program, where keeping the mouth closed for 1 second and then opening it counts as one mouth-closing action; the patient blinks twice to open the entertainment program, where keeping the eyes closed for 1 second and then opening them counts as one blink;
Step 44: when the patient no longer wants to watch the entertainment program, the patient keeps the mouth open for 3 seconds again to trigger the close button and shut down the system.
CN201710929483.7A 2017-10-09 2017-10-09 Entertainment interaction method suitable for severely paralyzed patients Pending CN107741784A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710929483.7A CN107741784A (en) 2017-10-09 2017-10-09 A kind of amusement exchange method suitable for leaden paralysis patient

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710929483.7A CN107741784A (en) 2017-10-09 2017-10-09 A kind of amusement exchange method suitable for leaden paralysis patient

Publications (1)

Publication Number Publication Date
CN107741784A true CN107741784A (en) 2018-02-27

Family

ID=61236725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710929483.7A Pending CN107741784A (en) 2017-10-09 2017-10-09 A kind of amusement exchange method suitable for leaden paralysis patient

Country Status (1)

Country Link
CN (1) CN107741784A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731418A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust accurate eye positioning in complicated background image
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
CN101697199A (en) * 2009-08-11 2010-04-21 北京盈科成章科技有限公司 Detection method of head-face gesture and disabled assisting system using same to manipulate computer
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN106611447A (en) * 2016-12-30 2017-05-03 首都师范大学 Work attendance method and apparatus

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509890A (en) * 2018-03-27 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for extracting information
CN108509890B (en) * 2018-03-27 2022-08-16 百度在线网络技术(北京)有限公司 Method and device for extracting information
CN108596056A (en) * 2018-04-10 2018-09-28 武汉斑马快跑科技有限公司 A kind of taxi operation behavior act recognition methods and system
CN111259802A (en) * 2020-01-16 2020-06-09 东北大学 Head posture estimation-based auxiliary aphasia paralytic patient demand expression method
CN111898407A (en) * 2020-06-06 2020-11-06 东南大学 Human-computer interaction operating system based on human face action recognition
CN111898407B (en) * 2020-06-06 2022-03-29 东南大学 Human-computer interaction operating system based on human face action recognition
CN113504831A (en) * 2021-07-23 2021-10-15 电光火石(北京)科技有限公司 IOT (input/output) equipment control method based on facial image feature recognition, IOT and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180227