CN109976528A - Method and terminal device for dynamically adjusting a gaze area based on head movement - Google Patents
Method and terminal device for dynamically adjusting a gaze area based on head movement
- Publication number
- CN109976528A (application number CN201910258196.7A)
- Authority
- CN
- China
- Prior art keywords
- area
- head
- user
- instruction
- terminal device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a method and a terminal device for dynamically adjusting a gaze area based on head movement. After gaze information is determined through eye-movement recognition of a user, the gaze area is dynamically adjusted in combination with head movement, so that the resulting gaze area better matches the user's expectation and the accuracy of gaze-area determination is improved. The method comprises: acquiring gaze information; determining a corresponding first area according to the gaze information; acquiring first head feature data; and adjusting the first area according to the first head feature data to obtain a second area.
Description
This application claims priority to Chinese patent application No. 201910222440.4, filed with the Patent Office of the People's Republic of China on March 22, 2019 and entitled "Method and terminal device for dynamically adjusting a gaze area based on head movement", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of human-computer interaction, and in particular to a method and a terminal device for dynamically adjusting a gaze area based on head movement.
Background technique
At present, human-computer interaction is applied ever more widely, and the ways in which users interact with devices keep multiplying. In particular, a user's operation can be recognized from the user's eye feature data, causing the device to perform a corresponding action.

Eye-tracking technology is applied in human-computer interaction scenarios: the device is controlled by following the movement of the user's eyeballs. For example, in interaction with a terminal device, the direction and position of the user's gaze point can be determined through eye tracking, letting the user control the terminal device, for example by clicking or sliding.

However, environmental influences and differences between users degrade the accuracy of eye tracking, making misrecognition likely, so that operations cannot be carried out precisely and operation errors easily occur. How to recognize the user's actual intended operation more accurately has therefore become an urgent problem to be solved.
Summary of the invention
The application provides a method and a terminal device for dynamically adjusting a gaze area based on head movement. After gaze information is determined through eye-movement recognition of the user, the gaze area is dynamically adjusted in combination with head movement, so that the resulting gaze area better matches the user's expectation and the accuracy of gaze-area determination is improved.
In view of this, a first aspect of the application provides a method for dynamically adjusting a gaze area based on head movement, comprising:

acquiring gaze information;

determining a corresponding first area according to the gaze information;

acquiring first head feature data;

adjusting the first area according to the first head feature data to obtain a second area.
Optionally, in a possible embodiment, the method may further comprise:

acquiring an instruction corresponding to the second area, and executing the instruction.
Optionally, in a possible embodiment, acquiring the instruction corresponding to the second area and executing the instruction may comprise:

acquiring control data;

acquiring, according to the control data, the instruction corresponding to the second area, and executing the instruction.
Optionally, in a possible embodiment, the control data may comprise any one of facial feature data, second head feature data, voice data, or a control instruction.
Optionally, in a possible embodiment, adjusting the first area according to the first head feature data to obtain the second area may comprise:

determining a third area within a preset range of the first area;

adjusting the first area within the bounds of the third area according to the first head feature data to obtain the second area.
Optionally, in a possible embodiment, determining the third area within the preset range of the first area may comprise:

acquiring a precision corresponding to a gaze point included in the gaze information;

determining, as the third area, a region extending N times the precision beyond the first area, where N is greater than 1.
Optionally, in a possible embodiment:

the facial feature data may comprise at least one of eye-movement behavior data or an eye motion state;

the second head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
Optionally, in a possible embodiment, the first head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
A second aspect of the application provides a terminal device, comprising:

an eye-movement recognition module, configured to acquire gaze information;

a processing module, configured to determine a corresponding first area according to the gaze information;

a head-movement recognition module, configured to acquire first head feature data;

the processing module being further configured to adjust the first area according to the first head feature data to obtain a second area.
Optionally, in a possible embodiment, the processing module is further configured to acquire an instruction corresponding to the second area and execute the instruction.
Optionally, in a possible embodiment, the processing module is specifically configured to:

acquire control data;

acquire, according to the control data, the instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible embodiment, the control data comprise any one of facial feature data, second head feature data, voice data, or a control instruction.
Optionally, in a possible embodiment, the processing module is specifically configured to:

determine a third area within a preset range of the first area;

adjust the first area within the bounds of the third area according to the first head feature data to obtain the second area.
Optionally, in a possible embodiment, the processing module is specifically configured to:

acquire a precision corresponding to a gaze point included in the gaze information;

determine, as the third area, a region extending N times the precision beyond the first area, where N is greater than 1.
Optionally, in a possible embodiment:

the facial feature data may comprise at least one of eye-movement behavior data or an eye motion state;

the second head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
Optionally, in a possible embodiment, the first head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
A third aspect of the application provides a terminal device, comprising:

a processor, a memory, a bus, and an input/output interface, the processor, the memory, and the input/output interface being connected by the bus;

the memory being configured to store program code;

the processor performing the steps of the method provided by the first aspect of the application when invoking the program code in the memory.
In a fourth aspect, the application provides a computer-readable storage medium. It should be noted that the technical solution of the application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises the computer software instructions used by the above device, including the program designed to execute any of the embodiments of the first aspect. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In a fifth aspect, the application provides a computer program product comprising computer software instructions which, when loaded by a processor, implement the flow of any of the methods for dynamically adjusting a gaze area based on head movement according to the first aspect.
In the embodiments of the application, after the first area is determined from the user's gaze information, the user's first head feature data continue to be acquired, and the first area is adjusted according to the first head feature data, yielding a second area that is closer to what the user expects. The embodiments of the application thus combine the user's eyes with the head to determine a more accurate second area that better matches the region the user intends. Even when environmental influences or differences between users make eye recognition inaccurate, the first area can still be adjusted using the user's head features, so that the resulting second area is more accurate and the user experience is improved.
Brief description of the drawings
Fig. 1 is a flow diagram of a method for dynamically adjusting a gaze area based on head movement provided by the application;

Fig. 2 is another flow diagram of the method for dynamically adjusting a gaze area based on head movement provided by the application;

Fig. 3 is a schematic diagram of areas in the method for dynamically adjusting a gaze area based on head movement provided by the application;

Fig. 4a is a schematic diagram of the first area and the third area in an embodiment of the application;

Fig. 4b is a schematic diagram of the first area, the second area, and the third area in an embodiment of the application;

Fig. 5 is a schematic diagram of acquiring an instruction in an embodiment of the application;

Fig. 6 is a schematic diagram of executing an instruction in an embodiment of the application;

Fig. 7 is a schematic diagram of an embodiment of a terminal device provided by the application;

Fig. 8 is a schematic diagram of another embodiment of a terminal device provided by the application.
Detailed description of the embodiments
The application provides a method and a terminal device for dynamically adjusting a gaze area based on head movement. After gaze information is determined through eye-movement recognition of the user, the gaze area is dynamically adjusted in combination with head movement, so that the resulting gaze area better matches the user's expectation and the accuracy of gaze-area determination is improved.
First, the method for dynamically adjusting a gaze area based on head movement provided by the application can be applied to a terminal device that has a module for acquiring image data, for example a camera or a sensor. The terminal device can be any electronic device equipped with a camera, a sensor, or the like, for example a mobile phone, a laptop, or a display.

The flow of the method provided by the application is described first below. Referring to Fig. 1, a flow diagram of the method for dynamically adjusting a gaze area based on head movement may comprise:
101. Acquire gaze information.

First, gaze information can be acquired, for example through a camera or a sensor of the terminal device. The gaze information may include the user's gaze point, gaze duration, gaze-point coordinates, a gaze vector, or the like.

Specifically, the user's gaze point can be recognized through eye tracking. The terminal device can directly collect an eye image of the user through a camera, a sensor, or the like, and then analyze the eye image to obtain the user's gaze information. In addition, if the terminal device has an infrared unit, it can emit at least two groups of infrared light toward the user's eyes, or emit at least one group of infrared light toward at least one eyeball of the user; the user's eyes produce infrared light spots under the infrared illumination. An eye image of the user is then collected and analyzed to obtain eye feature data and, in turn, the gaze information, which may include the user's gaze point, gaze direction, gaze-point coordinates, and the like.
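The gaze information described in step 101 (gaze point, gaze duration, coordinates, gaze vector) can be modeled as a simple record for downstream processing. The field names and values below are illustrative only, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeInfo:
    """Gaze information recovered from eye images (illustrative fields)."""
    gaze_point: Tuple[float, float]          # on-screen gaze-point coordinates
    gaze_duration_ms: float                  # how long this point has been fixated
    gaze_vector: Tuple[float, float, float]  # 3D gaze direction from the eye model

# A hypothetical sample produced by an eye-tracking pipeline:
info = GazeInfo(gaze_point=(512.0, 384.0),
                gaze_duration_ms=230.0,
                gaze_vector=(0.1, -0.2, -0.97))
```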
102. Determine a corresponding first area according to the gaze information.

After the user's gaze information is acquired, a corresponding first area can be determined from it.

Specifically, the gaze information may include the gaze point of one or both of the user's eyes, the gaze direction, the gaze-point coordinates, and so on, from which the corresponding first area on the terminal can be determined. The first area can be understood as the region that the terminal device, based on the gaze information, recognizes as being watched by the user. The terminal device can collect the user's eye features and, by eye-tracking the user's eyes, determine the user's gaze point, gaze direction, and gaze-point coordinates, and thereby the first area. The size of the first area may be adapted to the region of the gaze point in the gaze information, or, once the center of the gaze point is determined, it may simply be a region of preset size centered on that point; this can be adapted to the actual application scenario and is not limited here.

Further, the terminal device may include a display device, such as a light-emitting diode (LED) screen, a capacitive panel, or a touch screen, collectively referred to as the screen in this application. The user can gaze at any point on the screen of the terminal, and the terminal device recognizes, from the user's gaze information, the first area the user is watching.

For example, when a sliding operation is to be carried out on the screen of the terminal device, the user gazes at the sliding region; the terminal device acquires the user's eye image data, computes the gaze-point position with a machine-vision algorithm and an eye-tracking data model, determines the position the user is watching, and obtains the operation corresponding to that region.

In an optional embodiment, after recognizing from the user's gaze information the first area the user is watching, the terminal device can highlight the first area on the screen, or display it by means of a focus indicator, or the like; this can be adapted to the actual application scenario and is not limited here.
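One minimal way to realize step 102 — turning a gaze point into a first area of preset size — is a circular region centered on the gaze point. The radius and helper names below are assumptions for illustration, not part of the patent:

```python
def first_area(gaze_point, default_radius=40.0):
    """Return a circular candidate region (center + radius, in pixels)
    centered on the gaze point. The patent leaves the sizing open; a
    fixed default radius is one simple choice."""
    return {"center": gaze_point, "radius": default_radius}

def contains(area, point):
    """Hit test: is the point inside the circular area?"""
    cx, cy = area["center"]
    px, py = point
    return (px - cx) ** 2 + (py - cy) ** 2 <= area["radius"] ** 2

region = first_area((300.0, 200.0))
```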
103. Acquire first head feature data.

After the first area is determined from the user's gaze information, first head feature data of the user continue to be acquired.

The first head feature data can be collected through a camera, a sensor, or the like of the terminal device. For example, the terminal device can capture head images of the user at intervals of m milliseconds, obtaining L frames of head images with L a positive integer, and then perform image recognition on the L frames to obtain the user's first head feature data.

Optionally, the first head feature data may include one or more of a motion state of the user's head, a motion state of a preset part of the head, the number of movements of the preset part, and so on. The preset part can be a tissue region of the user's face; for example, the terminal device can judge the moving direction of the user's head from the moving direction of part of the user's facial tissue.
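The idea in step 103 of inferring head movement from the displacement of part of the facial tissue across L captured frames can be sketched as below. The landmark tracking itself is assumed to happen elsewhere; the direction labels and the threshold are illustrative choices:

```python
def head_motion_direction(landmark_positions):
    """Estimate coarse head movement from one tracked facial landmark.

    landmark_positions: list of (x, y) positions of the landmark, one per
    captured frame (L frames, m ms apart). Returns the (dx, dy)
    displacement over the whole window."""
    (x0, y0), (x1, y1) = landmark_positions[0], landmark_positions[-1]
    return (x1 - x0, y1 - y0)

def classify(delta, threshold=5.0):
    """Map a displacement to a coarse direction label (illustrative)."""
    dx, dy = delta
    if abs(dx) < threshold and abs(dy) < threshold:
        return "still"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```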
104. Adjust the first area according to the first head feature data to obtain a second area.

After acquiring the user's first head feature data, the terminal device can adjust the first area according to them, obtaining a second area. The second area is the operation area selected by the user and lies closer to the region the user expects.

Specifically, after the first area is determined, the user's head features can further be combined to adjust the first area into a second area that better matches the operation area the user intends, improving the accuracy of the intended operation.

For example, when both of the user's hands are occupied, after the terminal device determines a point on its screen from the user's gaze information, the user can further adjust the point by moving the head, for example by raising the head, lowering it, turning left, turning right, or rotating in combined directions, bringing the point closer to the one the user intends to control.

In addition, when the first area is being adjusted according to the first head feature data, it can be adjusted in real time by the user's head movements; the terminal device can show the amplitude of the adjustment on the screen, and the user can tune the amplitude of the head movement according to this visual feedback, thereby adjusting the first area more precisely to obtain the second area.
105. Acquire an instruction corresponding to the second area, and execute the instruction.

After the second area is determined, the terminal device acquires the instruction corresponding to it and executes it.

In general, the user can operate the terminal device with facial features. Common control modes include clicking, sliding, and so on. A slide can be defined by the user's gaze point falling into a specific region, with that region fixing the sliding direction, or the sliding direction can be judged from the change in direction from one gaze-point position to the next. A click can be triggered when the gaze duration reaches the time threshold of a click operation, or realized by a blink, or by a dedicated key on the electronic device such as a protruding side key or a capacitive touch panel, or by a voice operation, or by a facial-feature operation such as pouting, opening the mouth, or nodding.

For example, if there is a back control area in the lower-right corner of the terminal device, the terminal device places the focus according to the user's gaze information; after the user steers the focus into the back control area through head control, the terminal device can acquire the back instruction corresponding to the back control area and execute it, returning from the current interface to the previous one.

It should be understood that step 105 is optional in the embodiments of the application.
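The back-control example above amounts to a hit test of the second area against on-screen controls, each bound to an instruction. The layout coordinates and instruction names below are hypothetical, chosen only to illustrate the lookup:

```python
def instruction_for_area(area_center, controls):
    """Look up the instruction bound to whichever control the center of the
    second area falls in.

    controls: list of (name, (x0, y0, x1, y1) bounding box, instruction)."""
    x, y = area_center
    for name, (x0, y0, x1, y1), instruction in controls:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return instruction
    return None

# Hypothetical screen layout: a back control in the lower-right corner.
controls = [("back", (900, 1800, 1080, 1920), "GO_BACK"),
            ("home", (0, 1800, 180, 1920), "GO_HOME")]
```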
In the embodiments of the application, after the first area is determined from the user's gaze information, the user's first head feature data continue to be acquired and are used to adjust the first area, yielding a second area closer to what the user expects; the instruction corresponding to the second area is then acquired and executed. The embodiments thus combine the user's eyes with the head to determine a more accurate second area that better matches the region the user intends. Even when environmental influences or differences between users make eye recognition inaccurate, the first area can still be adjusted using the user's head features, compensating for the accuracy of eye tracking, so that the resulting second area is more accurate and the user experience is improved.
The method for dynamically adjusting a gaze area based on head movement provided by the application is further described below. Referring to Fig. 2, another flow diagram of the method in an embodiment of the application may comprise:

201. Acquire gaze information.

202. Determine a corresponding first area according to the gaze information.

It should be understood that steps 201 and 202 in this embodiment are similar to steps 101 and 102 in Fig. 1 above and are not repeated here.
203. Determine a third area within a preset range of the first area.

After the first area is determined, a third area within a preset range of the first area is determined. The third area includes the first area and is generally larger than it.

Optionally, in a possible embodiment, after the first area is determined, the precision corresponding to the gaze point is determined, and with the gaze point as the center dot, a radius of N times the precision (N greater than 1) defines the third area; that is, the third area can comprise the first area plus a region extending N times the precision beyond it. For example, if the precision is 0.5 degrees, corresponding to a distance resolution of about 3 mm on the terminal device, the third area can be determined with a radius of 3*3 = 9 mm.

Illustratively, as shown in Fig. 3, the first area 301 lies within the third area 302; with the center point of the first area 301 as the center dot, the third area is determined with a radius N times that of the first area 301, so the extent of the first area 301 is smaller than that of the third area 302.
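The numeric example above (0.5 degrees of precision corresponding to about 3 mm, and a third-area radius of 3*3 = 9 mm) can be reproduced with a small angular-to-linear conversion. The viewing distance is an assumption chosen to match the 3 mm figure:

```python
import math

def precision_to_mm(precision_deg, viewing_distance_mm=350.0):
    """Convert angular gaze precision to a distance on the screen.

    At an assumed viewing distance of ~35 cm, 0.5 degrees subtends
    roughly 3 mm, matching the example in the text."""
    return viewing_distance_mm * math.tan(math.radians(precision_deg))

def third_region_radius(precision_deg, n=3, viewing_distance_mm=350.0):
    """Third-area radius is N times the precision distance, with N > 1."""
    return n * precision_to_mm(precision_deg, viewing_distance_mm)
```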
Further, when the user's gaze information is determined through eye tracking, the parameters involved may include a precision figure comprising an accuracy value and a precision value. The accuracy value is the deviation of the computed gaze information from the actual gaze; the precision value is the dispersion of that deviation. In general, accuracy can be understood as the average error between the position actually gazed at and the gaze position collected by the terminal device, while precision can be understood as the degree of dispersion when the terminal device continuously records the same gaze point; for example, the error can be measured by the mean square deviation of consecutive samples. Specifically, before the user's gaze information is determined through eye tracking, calibration can be performed to obtain calibration parameters. In practice, calibration is an important step when using eye-tracking technology, and the calibration parameters obtained generally differ with each user's eye features or with the environment. Therefore, before the user's gaze information is acquired through eye tracking, calibration can be carried out to obtain the calibration parameters, and the accuracy value and precision value are obtained from the calibration parameters and a preset eye-tracking algorithm. Of course, the terminal device may obtain the accuracy value and precision value directly from the calibration parameters and the preset eye-tracking algorithm, or it may send the calibration parameters to a server or another network device, which computes the accuracy value, precision value, and so on with the preset eye-tracking algorithm and returns them to the terminal device; this can be adapted to the actual application scenario and is not limited here.
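The accuracy value (average error against the actual gaze position) and precision value (dispersion over repeated samples of the same gaze point, measured here via a mean square deviation) described above can be computed, for example, as:

```python
import math

def accuracy_value(true_point, samples):
    """Mean Euclidean error between the actually fixated point and the
    measured gaze samples collected by the device."""
    tx, ty = true_point
    errors = [math.hypot(x - tx, y - ty) for x, y in samples]
    return sum(errors) / len(errors)

def precision_value(samples):
    """Dispersion of repeated measurements of one fixation point:
    root-mean-square deviation of the samples from their own mean."""
    mx = sum(x for x, _ in samples) / len(samples)
    my = sum(y for _, y in samples) / len(samples)
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in samples) / len(samples))
```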
204. Acquire first head feature data.

Step 204 in this embodiment is similar to step 103 in Fig. 1 above and is not repeated here.

It should be noted that the embodiments of the application do not limit the execution order of steps 203 and 204: step 203 can be executed first, or step 204 can be executed first, as suited to the actual application scenario, without limitation here.
205. Adjust the first area within the bounds of the third area according to the first head feature data to obtain the second area.

After the first head feature data are acquired, the first area can be adjusted according to them without exceeding the bounds of the third area, obtaining the second area.

The first head feature data may include one or more of a motion state of the user's head, a motion state of a preset part of the head, the number of movements of the preset part, and so on.

In general, feedback on the user's head-movement operation can be shown on the screen of the terminal device: the screen interface can highlight or display an area marker of preset shape, for example a cursor or a focus, to mark the current gaze-point region. From the on-screen display, the user can judge the progress of the adjustment of the first area and then tune the amplitude of the head movement.

For example, if the terminal device has determined the user's gaze point from the gaze information, determined a first area on the screen, and determined the third area, and the recognized first area does not match the region the user intends, the user can move the head to adjust the position of the first area within the third area, obtaining the second area. Specific head movements may include lowering the head, raising the head, turning left, turning right, or rotating in combined directions, so that the resulting second area better matches the region the user intends.

Illustratively, as shown in Figs. 4a and 4b: in Fig. 4a, the first area 401 and the third area 402 are determined; in Fig. 4b, the first area is adjusted by the user's head movement to obtain the second area 403.
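The constraint in step 205 — head movement shifts the first area but never past the bounds of the third area — can be sketched as a clamped offset. The mapping of head motion to an on-screen displacement, and the projection back onto the boundary, are illustrative choices:

```python
import math

def adjust_within(first_center, head_delta, third_center, third_radius):
    """Shift the center of the first area by the head-motion offset,
    clamped so it stays inside the circular third area."""
    cx, cy = first_center
    nx, ny = cx + head_delta[0], cy + head_delta[1]
    dx, dy = nx - third_center[0], ny - third_center[1]
    dist = math.hypot(dx, dy)
    if dist > third_radius:  # project back onto the third-area boundary
        scale = third_radius / dist
        nx = third_center[0] + dx * scale
        ny = third_center[1] + dy * scale
    return (nx, ny)
```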
206. Acquire control data.

After the second area is determined, control data can further be acquired.

The control data may include any one of facial feature data, second head feature data, voice data, a control instruction, or gesture control data.

The control data can be obtained in various ways. Facial feature data may include eye feature data of the user, such as pupil position, pupil shape, iris position, iris shape, eyelid position, canthus position, or light-spot (also called Purkinje image) position, and may also include the user's eye motion state, for example blinking of one or both eyes or the number of blinks, as suited to the application scenario. They may also include one or more of the gaze point of one or both eyes, gaze duration, gaze-point coordinates, a gaze vector, and so on. Facial feature data may further include facial expressions of the user, for example smiling, pouting, or staring. Second head feature data may include one or more of a motion state of the user's head, a motion state of a preset part of the head, the number of movements of the preset part, and so on, for example nodding, turning left, turning right, or lowering the head. A control instruction can be a user operation to which the terminal device responds; for example, a key operation of the user on the terminal device, which may include operating a physical key of the terminal device, operating a virtual key on the touch screen, or operating another device connected to the terminal device, such as any one or more keys of a keyboard or a handle. Voice data can be collected by the voice-acquisition module of the terminal device from the user's speech and may include the control speech with which the user operates on the second area. Gesture control data can be obtained from the user's gesture control of the terminal device and can be collected through a camera, a sensor, a touch screen, or the like of the terminal device.

Illustratively, the control data can specifically be, for example: blinking; facial expressions (particular expressions such as smiling or staring); head poses such as nodding, shaking the head, or head oscillation; lip-reading recognition data; mouth-shape recognition data such as pouting or opening the mouth; keys, including physical keys (such as the home key, power side key, volume keys, function keys, or capacitive touch keys), on-screen touch keys (such as the Android on-screen back key), and virtual keys; voice control data; gesture control data; gaze-duration data; and so on.
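The many kinds of control data enumerated above can be grouped into a small taxonomy for dispatch. The category and event names below are illustrative, not from the patent:

```python
# Illustrative taxonomy of the control-data kinds listed above.
CONTROL_KINDS = {
    "facial": {"blink", "smile", "pout", "stare", "open_mouth"},
    "head": {"nod", "shake", "turn_left", "turn_right", "bow"},
    "voice": {"voice_confirm", "voice_cancel"},
    "key": {"home_key", "side_key", "volume_key", "virtual_key"},
    "gesture": {"tap_gesture", "swipe_gesture"},
}

def control_kind(event):
    """Classify a raw control event into one of the control-data categories."""
    for kind, events in CONTROL_KINDS.items():
        if event in events:
            return kind
    return None
```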
207, according to control data, instruction corresponding with second area is obtained, and execute instruction.
After obtaining control data, terminal device obtains instruction corresponding with second area according to the control data, and
Execute the instruction.
Specifically, after the control data is obtained, the operation on the second area can be determined according to the control data.
Illustratively, after determining the second area, the terminal device highlights it, and the user can then perform a further operation, for example, nodding or gazing for longer than a threshold duration; the terminal device obtains the instruction corresponding to the second area according to this further operation. For example, if it is determined that the second area corresponds to a confirm operation on the terminal device, the terminal device can obtain a confirm instruction, execute it, and display the next interface.
Optionally, if the second area corresponds to multiple instructions, the corresponding instruction can further be determined according to the control data. For example, the selection can be based on the user's gaze duration: if the gaze duration falls within a first interval, a corresponding first instruction is obtained and executed; if it falls within a second interval, a corresponding second instruction is obtained and executed; and so on. This can be adjusted according to the application scenario.
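The interval-based selection described above can be sketched as follows; the interval boundaries and instruction names are illustrative assumptions, not values stated in the patent:

```python
# Hypothetical sketch: select among multiple instructions bound to the
# second area by which interval the user's gaze duration falls into.
GAZE_INTERVALS = [
    ((0.5, 1.5), "preview"),              # first interval  -> first instruction
    ((1.5, 3.0), "open"),                 # second interval -> second instruction
    ((3.0, float("inf")), "context_menu"),
]

def select_instruction(gaze_duration_s):
    """Return the instruction whose interval contains the gaze duration,
    or None if the gaze was too short to count as a deliberate action."""
    for (lo, hi), instruction in GAZE_INTERVALS:
        if lo <= gaze_duration_s < hi:
            return instruction
    return None

print(select_instruction(2.0))
```

Adjusting the interval table is exactly the per-scenario tuning the paragraph above refers to.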
For example, as shown in FIG. 5, when a new message prompt appears on a mobile phone, the user can determine the second area 501 through eye and head control. After the second area 501 is determined, the terminal device can obtain control data corresponding to actions such as the user nodding, blinking, or gazing for a long time. For example, if the user nods, the terminal device can obtain an open instruction and, through this open instruction, open content related to the new message; as shown in FIG. 6, the content of the new message can be obtained through the open instruction and displayed on the screen of the terminal device.
Therefore, in this embodiment of the application, the first area is first determined from the user's gaze information, and the third area is determined according to the precision with which the terminal device identifies the gaze information. The first head feature data of the user is then obtained, and the first area is adjusted within the range of the third area using the first head feature data, thereby obtaining a second area closer to the region the user intends. The constraint of the third area prevents the adjustment amplitude from becoming excessive, which would make the adjustment of the first area inaccurate and cause the second area to miss the user's intended region. The user's control action can then be further obtained as control data, the instruction corresponding to the second area obtained according to the control data, and that instruction executed. Thus, this embodiment determines a more accurate second area by combining the user's eyes with the head, making the second area better match the user's intended region. Even when eye recognition is inaccurate due to environmental influences or differences between users, the first area can still be adjusted in combination with the user's head features, making the resulting second area more accurate and improving the user experience. Constraining the second area through the third area prevents an excessively large adjustment of the first area that would cause the second area to deviate from the intended region. Moreover, the user's control data can be further obtained and the instruction corresponding to the second area obtained through it, which further clarifies the user's intention, obtains the control instruction corresponding to the second area more accurately, avoids misoperation, and improves the user experience.
For example, in the field of human-computer interaction with mobile phones, eyeball tracking technology estimates the direction and coordinates of the user's gaze point to realize the user's control of the phone (clicking, sliding, and so on). However, in most application scenarios, environmental influences or physical differences between users cause the gaze-point precision of eyeball tracking to decline, so the operation cannot be precise. In that case, head-movement tracking technology is used to correct the operation, and the optimal operation position is obtained through visual feedback.
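The head-movement correction described above can be sketched as follows. The geometry here is a simplified assumption (a 2-D gaze point nudged by a head-movement offset, clamped to the third area around the first area), and all names are illustrative, not the patent's actual algorithm:

```python
# Hypothetical sketch: refine an eye-tracking estimate with a head-movement
# offset, constrained to the third area (N times the precision around the
# first area), so an overlarge head motion cannot push the result away
# from the user's intended region.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def adjust_gaze(first_area_center, head_offset, precision, n=2.0):
    """first_area_center: (x, y) from eye tracking.
    head_offset: (dx, dy) accumulated from head movement.
    precision: eye-tracker precision radius; the third area extends
    n * precision around the first area and bounds the adjustment."""
    limit = n * precision
    x0, y0 = first_area_center
    dx, dy = head_offset
    # Second-area center: the first area nudged by head movement, saturated
    # at the third-area boundary.
    return (x0 + clamp(dx, -limit, limit),
            y0 + clamp(dy, -limit, limit))
```

Saturating the offset at the third-area boundary mirrors the point above that the third area keeps the adjustment amplitude from becoming excessive.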
The foregoing describes the method provided by the present application in detail; the apparatus provided by the present application is described below.
Referring to FIG. 7, a schematic diagram of an embodiment of a terminal device provided by the present application may include:
an eye movement identification module 701, configured to obtain gaze information;
a processing module 703, configured to determine a corresponding first area according to the gaze information; and
a head movement identification module 702, configured to obtain first head feature data;
wherein the processing module 703 is further configured to adjust the first area according to the first head feature data to obtain a second area, the second area being the region where the gaze point is located.
Optionally, in a possible embodiment, the processing module 703 is further configured to obtain an instruction corresponding to the second area and execute the instruction.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
obtain control data; and
obtain, according to the control data, the instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible embodiment, the control data includes any one of facial feature data, second head feature data, voice data, or a control instruction.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
determine a third area within a preset range of the first area; and
adjust the first area within the range of the third area according to the first head feature data to obtain the second area.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
obtain a precision corresponding to the gaze point included in the gaze information; and
determine a region extending N times the precision beyond the first area as the third area, where N is greater than 1.
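A minimal sketch of this third-area computation, under the assumptions (not stated in the patent) that areas are axis-aligned rectangles and the precision is a radius in pixels:

```python
# Hypothetical sketch: the third area as the first area expanded outward
# by N times the gaze-point precision, with N > 1.
def third_area(first_area, precision, n=2.0):
    """first_area: (left, top, right, bottom) rectangle.
    Returns the third area: the first area grown by n * precision
    on every side."""
    if n <= 1:
        raise ValueError("N must be greater than 1")
    margin = n * precision
    left, top, right, bottom = first_area
    return (left - margin, top - margin, right + margin, bottom + margin)

print(third_area((10, 10, 20, 20), precision=5, n=2.0))
```

A larger N tolerates bigger head-driven corrections; N close to 1 keeps the second area tightly bound to the eye-tracking estimate.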
Optionally, in a possible embodiment,
the facial feature data includes at least one of a gaze point, a gaze duration, or an eye motion state; and
the second head feature data includes at least one of a motion state of the head or a motion state of a preset position on the head.
Optionally, in a possible embodiment, the first head feature data includes at least one of a motion state of the head or a motion state of a preset position on the head.
Referring to FIG. 8, another schematic diagram of an embodiment of a terminal device in the embodiments of the present application includes:
a central processing unit (CPU) 801, a storage medium 802, a power supply 803, a memory 804, and an input/output interface 805. It should be understood that in this embodiment of the application there may be one CPU or multiple CPUs, and one input/output interface or multiple input/output interfaces; this is not limited here. The power supply 803 can provide working power for the terminal device; the memory 804 and the storage medium 802 can be transient storage or persistent storage, and instructions are stored in the storage medium; the CPU can execute the specific steps in the embodiments of FIG. 1 to FIG. 6 according to the instructions in the memory.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary. For example, the division into units is only a logical function division; there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of FIG. 1 to FIG. 6 of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
Claims (15)
1. A method for adjusting a gaze region based on head movement, comprising:
obtaining gaze information;
determining a corresponding first area according to the gaze information;
obtaining first head feature data; and
adjusting the first area according to the first head feature data to obtain a second area.
2. The method according to claim 1, wherein the method further comprises:
obtaining an instruction corresponding to the second area, and executing the instruction.
3. The method according to claim 2, wherein the obtaining an instruction corresponding to the second area and executing the instruction comprises:
obtaining control data; and
obtaining, according to the control data, the instruction corresponding to the second area, and executing the instruction.
4. The method according to claim 3, wherein the control data comprises:
any one of facial feature data, second head feature data, voice data, or a control instruction.
5. The method according to any one of claims 1-4, wherein the adjusting the first area according to the first head feature data to obtain a second area comprises:
determining a third area within a preset range of the first area; and
adjusting the first area within the range of the third area according to the first head feature data to obtain the second area.
6. The method according to claim 5, wherein the determining a third area within a preset range of the first area comprises:
obtaining a precision corresponding to a gaze point comprised in the gaze information; and
determining a region extending N times the precision beyond the first area as the third area, wherein N is greater than 1.
7. A terminal device, comprising:
an eye movement identification module, configured to obtain gaze information;
a processing module, configured to determine a corresponding first area according to the gaze information; and
a head movement identification module, configured to obtain first head feature data;
wherein the processing module is further configured to adjust the first area according to the first head feature data to obtain a second area, the second area being the region where a gaze point is located.
8. The terminal device according to claim 7, wherein the processing module is further configured to obtain an instruction corresponding to the second area and execute the instruction.
9. The terminal device according to claim 8, wherein the processing module is specifically configured to:
obtain control data; and
obtain, according to the control data, the instruction corresponding to the second area, and execute the instruction.
10. The terminal device according to claim 9, wherein the control data comprises:
any one of facial feature data, second head feature data, voice data, or a control instruction.
11. The terminal device according to any one of claims 8-10, wherein the processing module is specifically configured to:
determine a third area within a preset range of the first area; and
adjust the first area within the range of the third area according to the first head feature data to obtain the second area.
12. The terminal device according to claim 11, wherein the processing module is specifically configured to:
obtain a precision corresponding to a gaze point comprised in the gaze information; and
determine a region extending N times the precision beyond the first area as the third area, wherein N is greater than 1.
13. A terminal device, comprising:
a memory, configured to store a program; and
a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to perform the steps of the method according to any one of claims 1-6.
14. A computer-readable storage medium, comprising instructions which, when run on a computer, cause the computer to execute the method according to any one of claims 1-6.
15. A computer program product comprising instructions, wherein when the computer program product runs on an electronic device, the electronic device is caused to execute the method according to any one of claims 1-6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910222440 | 2019-03-22 | ||
CN2019102224404 | 2019-03-22 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109976528A true CN109976528A (en) | 2019-07-05 |
CN109976528B CN109976528B (en) | 2023-01-24 |
Family
ID=67082228
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910258196.7A Active CN109976528B (en) | 2019-03-22 | 2019-04-01 | Method for adjusting watching area based on head movement and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109976528B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110941344A (en) * | 2019-12-09 | 2020-03-31 | Oppo广东移动通信有限公司 | Method for obtaining gazing point data and related device |
CN111638780A (en) * | 2020-04-30 | 2020-09-08 | 长城汽车股份有限公司 | Vehicle display control method and vehicle host |
CN112738388A (en) * | 2019-10-28 | 2021-04-30 | 七鑫易维(深圳)科技有限公司 | Photographing processing method and system, electronic device and storage medium |
CN113642364A (en) * | 2020-05-11 | 2021-11-12 | 华为技术有限公司 | Face image processing method, device and equipment and computer readable storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130169531A1 (en) * | 2011-12-29 | 2013-07-04 | Grinbath, Llc | System and Method of Determining Pupil Center Position |
CN105900041A (en) * | 2014-01-07 | 2016-08-24 | 微软技术许可有限责任公司 | Target positioning with gaze tracking |
US9529428B1 (en) * | 2014-03-28 | 2016-12-27 | Amazon Technologies, Inc. | Using head movement to adjust focus on content of a display |
US20170287446A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Real-time user adaptive foveated rendering |
US20170345400A1 (en) * | 2016-05-27 | 2017-11-30 | Beijing Pico Technology Co., Ltd. | Method of vision correction in a virtual reality environment |
CN107656613A (en) * | 2017-09-08 | 2018-02-02 | 国网山东省电力公司电力科学研究院 | A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye |
US20180348861A1 (en) * | 2017-05-31 | 2018-12-06 | Magic Leap, Inc. | Eye tracking calibration techniques |
CN109343700A (en) * | 2018-08-31 | 2019-02-15 | 深圳市沃特沃德股份有限公司 | Eye movement controls calibration data acquisition methods and device |
CN109410285A (en) * | 2018-11-06 | 2019-03-01 | 北京七鑫易维信息技术有限公司 | A kind of calibration method, device, terminal device and storage medium |
- 2019-04-01: CN application CN201910258196.7A filed, granted as CN109976528B (status: Active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130169531A1 (en) * | 2011-12-29 | 2013-07-04 | Grinbath, Llc | System and Method of Determining Pupil Center Position |
CN105900041A (en) * | 2014-01-07 | 2016-08-24 | 微软技术许可有限责任公司 | Target positioning with gaze tracking |
US9529428B1 (en) * | 2014-03-28 | 2016-12-27 | Amazon Technologies, Inc. | Using head movement to adjust focus on content of a display |
US20170287446A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Real-time user adaptive foveated rendering |
US20170345400A1 (en) * | 2016-05-27 | 2017-11-30 | Beijing Pico Technology Co., Ltd. | Method of vision correction in a virtual reality environment |
US20180348861A1 (en) * | 2017-05-31 | 2018-12-06 | Magic Leap, Inc. | Eye tracking calibration techniques |
CN107656613A (en) * | 2017-09-08 | 2018-02-02 | 国网山东省电力公司电力科学研究院 | A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye |
CN109343700A (en) * | 2018-08-31 | 2019-02-15 | 深圳市沃特沃德股份有限公司 | Eye movement controls calibration data acquisition methods and device |
CN109410285A (en) * | 2018-11-06 | 2019-03-01 | 北京七鑫易维信息技术有限公司 | A kind of calibration method, device, terminal device and storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112738388A (en) * | 2019-10-28 | 2021-04-30 | 七鑫易维(深圳)科技有限公司 | Photographing processing method and system, electronic device and storage medium |
CN112738388B (en) * | 2019-10-28 | 2022-10-18 | 七鑫易维(深圳)科技有限公司 | Photographing processing method and system, electronic device and storage medium |
CN110941344A (en) * | 2019-12-09 | 2020-03-31 | Oppo广东移动通信有限公司 | Method for obtaining gazing point data and related device |
CN110941344B (en) * | 2019-12-09 | 2022-03-15 | Oppo广东移动通信有限公司 | Method for obtaining gazing point data and related device |
CN111638780A (en) * | 2020-04-30 | 2020-09-08 | 长城汽车股份有限公司 | Vehicle display control method and vehicle host |
CN113642364A (en) * | 2020-05-11 | 2021-11-12 | 华为技术有限公司 | Face image processing method, device and equipment and computer readable storage medium |
CN113642364B (en) * | 2020-05-11 | 2024-04-12 | 华为技术有限公司 | Face image processing method, device, equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109976528B (en) | 2023-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109976528A (en) | A kind of method and terminal device based on the dynamic adjustment watching area of head | |
Zhu et al. | Nonlinear eye gaze mapping function estimation via support vector regression | |
US9039419B2 (en) | Method and system for controlling skill acquisition interfaces | |
US9733703B2 (en) | System and method for on-axis eye gaze tracking | |
CN108681399B (en) | Equipment control method, device, control equipment and storage medium | |
Harezlak et al. | Towards accurate eye tracker calibration–methods and procedures | |
US8146020B2 (en) | Enhanced detection of circular engagement gesture | |
KR20210153151A (en) | Head mounted display system configured to exchange biometric information | |
Rozado et al. | Fast human-computer interaction by combining gaze pointing and face gestures | |
Bigdelou et al. | Simultaneous categorical and spatio-temporal 3d gestures using kinect | |
JP2024109844A (en) | SYSTEM AND METHOD FOR OPERATING A HEAD MOUNTED DISPLAY SYSTEM BASED ON USER IDENTIF | |
Essig et al. | ADAMAAS: towards smart glasses for mobile and personalized action assistance | |
CN110647790A (en) | Method and device for determining gazing information | |
CN113495613B (en) | Eyeball tracking calibration method and device | |
Lander et al. | hEYEbrid: A hybrid approach for mobile calibration-free gaze estimation | |
Edughele et al. | Eye-tracking assistive technologies for individuals with amyotrophic lateral sclerosis | |
CN109960412A (en) | A kind of method and terminal device based on touch-control adjustment watching area | |
Brousseau et al. | Smarteye: An accurate infrared eye tracking system for smartphones | |
Heck et al. | Webcam eye tracking for desktop and Mobile devices: A systematic review | |
Lei et al. | An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices | |
Moreno-Arjonilla et al. | Eye-tracking on virtual reality: a survey | |
CN109144262A (en) | A kind of man-machine interaction method based on eye movement, device, equipment and storage medium | |
Czuszynski et al. | Septic safe interactions with smart glasses in health care | |
CN109917923A (en) | Method and terminal device based on free movement adjustment watching area | |
Villanueva et al. | A geometric approach to remote eye tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||