CN109917923A - Method and terminal device for adjusting a gaze area based on free movement - Google Patents
Method and terminal device for adjusting a gaze area based on free movement
- Publication number: CN109917923A (application CN201910257529.4A)
- Authority: CN (China)
- Prior art keywords: area, terminal device, information, instruction, user
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present application provides a method and a terminal device for adjusting a gaze area based on free movement. After gaze information is determined through eye-movement recognition of a user, the gaze area is adjusted in combination with an operation of rotating the terminal device, so that the resulting gaze area better matches the user's expectation and the accuracy of determining the gaze area is improved. The method comprises: a terminal device acquires gaze information; the terminal device determines a corresponding first area according to the gaze information; the terminal device acquires rotation information; and the terminal device adjusts the first area according to the rotation information to obtain a second area.
Description
This application claims priority to the Chinese patent application No. 201910222431.5, entitled "Method and terminal device for adjusting a gaze area based on free movement", filed with the Patent Office of the People's Republic of China on March 22, 2019, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of human-computer interaction, and in particular to a method and a terminal device for adjusting a gaze area based on free movement.
Background art
At present, human-computer interaction is applied ever more widely, and the modes of interaction between users and devices are becoming ever more numerous. In particular, a user's operation can be recognized from the user's eye-feature data, causing the device to perform a corresponding action.
Eye-tracking technology is applied in human-computer interaction scenarios: the device is controlled through the movement of the user's eyeballs. For example, in human-computer interaction with a terminal device, the direction and position of the user's gaze point can be determined by eye tracking, so that the user can control the terminal device, for example by clicking, sliding and the like.
However, owing to environmental influences, differences between users and the like, the accuracy of eye tracking declines, making recognition errors likely and preventing operations from being precise. How to determine the user's actual operation area more accurately has therefore become an urgent problem to be solved.
Summary of the invention
The present application provides a method and a terminal device for adjusting a gaze area based on free movement. After gaze information is determined through eye-movement recognition of a user, the gaze area is adjusted in combination with an operation of rotating the terminal device, so that the resulting gaze area better matches the user's expectation and the accuracy of determining the gaze area is improved.
In view of this, a first aspect of the present application provides a method for adjusting a gaze area based on free movement, comprising:
acquiring gaze information;
determining a corresponding first area according to the gaze information;
acquiring rotation information;
adjusting the first area according to the rotation information to obtain a second area.
Optionally, in a possible embodiment, the method may further comprise:
acquiring an instruction corresponding to the second area, and executing the instruction.
Optionally, in a possible embodiment, acquiring the instruction corresponding to the second area and executing the instruction may comprise:
acquiring control data;
acquiring, according to the control data, the instruction corresponding to the second area, and executing the instruction.
Optionally, in a possible embodiment, the control data may comprise:
any one of facial feature data, head feature data, voice data or a control instruction.
Optionally, in a possible embodiment, adjusting the first area according to the rotation information to obtain the second area may comprise:
determining a third area within a preset range of the first area;
adjusting the first area within the range of the third area according to the rotation information to obtain the second area.
Optionally, in a possible embodiment, determining the third area within the preset range of the first area may comprise:
acquiring a precision corresponding to a gaze point comprised in the gaze information;
determining, as the third area, an area extending N times the precision beyond the first area, where N is greater than 1.
Optionally, in a possible embodiment, the terminal device acquiring the rotation information comprises:
the terminal device acquiring the rotation information by means of a sensor.
Optionally, in a possible embodiment, the sensor comprises an angular-velocity sensor;
the rotation information comprises a rotational angular velocity detected by the angular-velocity sensor.
Optionally, in a possible embodiment,
the facial feature data may comprise at least one of eye-movement behavior data or an eye motion state;
the head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
A second aspect of the present application provides a terminal device, comprising:
an eye-movement recognition module, configured to acquire gaze information;
a processing module, configured to determine a corresponding first area according to the gaze information;
a detection module, configured to acquire rotation information;
the processing module being further configured to adjust the first area according to the rotation information to obtain a second area.
Optionally, in a possible embodiment,
the processing module is further configured to acquire an instruction corresponding to the second area and execute the instruction.
Optionally, in a possible embodiment, the processing module is specifically configured to:
acquire control data;
acquire, according to the control data, the instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible embodiment, the control data comprise:
any one of facial feature data, head feature data, voice data or a control instruction.
Optionally, in a possible embodiment, the processing module is specifically configured to:
determine a third area within a preset range of the first area;
adjust the first area within the range of the third area according to the rotation information to obtain the second area.
Optionally, in a possible embodiment, the processing module is specifically configured to:
acquire a precision corresponding to a gaze point comprised in the gaze information;
determine, as the third area, an area extending N times the precision beyond the first area, where N is greater than 1.
Optionally, in a possible embodiment,
the detection module is specifically configured to acquire the rotation information by means of a sensor.
Optionally, in a possible embodiment, the sensor comprises an angular-velocity sensor;
the rotation information comprises a rotational angular velocity detected by the angular-velocity sensor.
Optionally, in a possible embodiment,
the facial feature data may comprise at least one of eye-movement behavior data or an eye motion state;
the head feature data comprise at least one of a motion state of the head or a motion state of a preset part of the head.
A third aspect of the present application provides a terminal device, comprising:
a processor, a memory, a bus and an input/output interface, the processor, the memory and the input/output interface being connected via the bus;
the memory being configured to store program code;
the processor, when invoking the program code in the memory, performing the steps of the method provided in the first aspect of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium. It should be noted that the technical solution of the present application, in essence, or the part contributing over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and is used for storing the computer software instructions used by the above device, comprising a program designed for executing any of the embodiments of the first aspect.
The storage medium comprises: a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc or any other medium capable of storing program code.
In a fifth aspect, the present application provides a computer program product comprising computer software instructions which, when loaded by a processor, implement the flow of the method for adjusting a gaze area based on free movement according to any one of the above first aspect.
In the embodiments of the present application, the terminal device first determines the first area according to the user's gaze information, then acquires the rotation information of the terminal device itself and adjusts the first area according to that rotation information, thereby obtaining a second area closer to the user's expectation. The embodiments of the present application thus determine a more accurate second area through the combination of the user's eyes and rotation of the terminal device, so that the second area better matches the area expected by the user. Even if eye recognition is inaccurate owing to environmental influences, differences between users and the like, the first area can still be adjusted in combination with the rotation information of the terminal device, compensating for the accuracy of eye tracking, so that the resulting second area is more accurate and the user experience is improved.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a method for adjusting a gaze area based on free movement provided by the present application;
Fig. 2 is another schematic flowchart of the method for adjusting a gaze area based on free movement provided by the present application;
Fig. 3 is a schematic diagram of areas in the method for adjusting a gaze area based on free movement provided by the present application;
Fig. 4a is a schematic diagram of the first area and the third area in an embodiment of the present application;
Fig. 4b is a schematic diagram of the first area, the second area and the third area in an embodiment of the present application;
Fig. 5 is a schematic diagram of acquiring an instruction in an embodiment of the present application;
Fig. 6 is a schematic diagram of executing an instruction in an embodiment of the present application;
Fig. 7 is a schematic diagram of an embodiment of a terminal device provided by the present application;
Fig. 8 is a schematic diagram of another embodiment of a terminal device provided by the present application.
Specific embodiment
The present application provides a method and a terminal device for adjusting a gaze area based on free movement. After gaze information is determined through eye-movement recognition of a user, the gaze area is adjusted in combination with an operation of rotating the terminal device, so that the resulting gaze area better matches the user's expectation and the accuracy of determining the gaze area is improved.
First of all, the method for adjusting a gaze area based on free movement provided by the present application may be applied to a terminal device having a module for acquiring image data, for example a camera, a sensor or the like. The terminal device may be any electronic device provided with a camera, a sensor or the like, for example a mobile phone, a laptop computer, a display or the like.
The flow of the method for adjusting a gaze area based on free movement provided by the present application is illustrated first below. Referring to Fig. 1, a schematic flowchart of the method for adjusting a gaze area based on free movement provided by the present application may comprise:
101. Acquire gaze information.
First, gaze information may be acquired. The gaze information may be acquired through the camera, a sensor or the like of the terminal device, and may comprise the user's gaze point, gaze duration, gaze-point coordinates, gaze vector or the like.
Specifically, the user's gaze point can be recognized by eye tracking. The terminal device may directly acquire an eye image of the user through a camera, a sensor or the like, and then recognize the eye image to obtain the user's gaze information. In addition, if the terminal device is provided with an infrared device, it may also emit at least two groups of infrared light toward the user's eyes, or emit at least one group of infrared light toward at least one of the user's eyeballs; the user's eyes produce infrared light spots under the illumination of the infrared light. An eye image of the user is then acquired and recognized to obtain eye-feature data and, in turn, the gaze information, which may comprise the user's gaze point, gaze direction, the coordinates of the gaze point and the like.
102. Determine a corresponding first area according to the gaze information.
After the user's gaze information is acquired, a corresponding first area may be determined according to the gaze information.
Specifically, the gaze information may comprise the gaze point of one or both of the user's eyes, the gaze direction, the coordinates of the gaze point and the like, according to which the corresponding first area on the terminal can be determined. The first area can be understood as the area that the terminal device recognizes, from the gaze information, as the area the user is gazing at. The terminal device may acquire the user's eye features and, by performing eye tracking on the user's eyes, determine the user's gaze point, gaze direction, gaze-point coordinates and the like, and thereby determine the first area. The extent of the first area may be determined directly from the area of the gaze point in the gaze information, or, after the center of the gaze point is determined, may be taken directly as an area of a preset size centered on that point; this may be adjusted according to the actual application scenario, and is not limited here.
Further, the terminal device may comprise a display device, including a light-emitting diode (LED) screen, a capacitive panel, a touch screen or the like, referred to collectively as a screen in the present application. The user may gaze at any point on the screen of the terminal, and the terminal device recognizes, according to the user's gaze information, the first area the user is gazing at.
For example, when a sliding operation needs to be performed on the screen of the terminal device, the user gazes at the sliding area; the terminal device acquires the user's eye-image data, calculates the gaze-point position through a machine-vision algorithm and a data model of eye tracking, thereby determines the position the user is gazing at, and obtains the operation corresponding to that area.
In an optional embodiment, after recognizing the first area the user is gazing at according to the user's gaze information, the terminal device may highlight the first area on the screen, or display the first area by means of a focus or the like; this may be adjusted according to the actual application scenario, and is not limited here.
103. Acquire rotation information.
After the terminal device determines the first area according to the user's gaze information, in order to improve the accuracy of the operation expected by the user, the terminal device may continue to acquire rotation information. The rotation information may comprise one or more of the rotational angular velocity, rotation direction, rotation amplitude, rotation angle or the like of the terminal device.
In general, the user may rotate the terminal device so that the terminal device acquires the rotation information. The specific rotation operation may comprise various manners of rotation, such as rotation in the horizontal plane, flipping in the vertical direction, or a combination of horizontal rotation and vertical flipping of the terminal; this may be adjusted according to the actual application scenario and is not limited in the present application.
The rotation information may be acquired through a sensor, or may be determined by acquiring images through the camera of the terminal device and analyzing them, and may be adjusted according to the actual application scenario. Specifically, the terminal device may comprise a sensor that detects the movement or rotation of the terminal device itself; for example, the sensor may be an angular-velocity sensor (hereinafter referred to as a gyroscope). An angular-velocity sensor generally differs from an accelerometer: an accelerometer can usually only detect axial linear movement, whereas the physical quantities an angular-velocity sensor can measure may include the rotational angular velocity during deflection and tilting. The embodiments of the present application can therefore use an angular-velocity sensor to acquire the rotation information of the terminal device under the user's operation. More specifically, when the rotation information is acquired through a sensor, the terminal device may, according to the rotational angular velocities during deflection and tilting collected by the angular-velocity sensor, analyze one or more of the rotation direction, angle, amplitude or the like of the terminal device to obtain the rotation information of the terminal device. In addition, the terminal device may also analyze its rotation information from images acquired by the camera. For example, when the terminal device is a mobile phone and the user rotates the phone, the front or rear camera of the terminal device continuously acquires images; the collected consecutive images are analyzed to determine information such as the direction, angle and amplitude of the rotation of the terminal device, so as to obtain the rotation information.
It should be noted that, in addition to acquiring the rotation information through the aforementioned sensor and camera of the terminal device, the rotation information may also be acquired in other ways, for example in combination with other sensors, including a gravity sensor, an infrared sensor and the like, or through an external sensing apparatus; the present application is merely illustrative and is not limited thereto.
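As an illustration of how the rotation information of step 103 might be derived from an angular-velocity sensor, the sketch below integrates gyroscope samples over time into a rotation angle and direction. The sampling interval, units and sign convention are assumptions; the application leaves the analysis method open.

```python
def integrate_rotation(angular_velocity_deg_s, dt_s):
    """Estimate the total rotation angle (degrees) from gyroscope samples
    (degrees per second) taken at a fixed interval dt_s (seconds).
    Sign convention (assumed): negative means a rotation to the left."""
    angle = 0.0
    for w in angular_velocity_deg_s:
        angle += w * dt_s  # simple rectangular integration
    direction = "left" if angle < 0 else "right"
    return angle, direction
```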
104. Adjust the first area according to the rotation information to obtain a second area.
After acquiring the rotation information of the terminal device itself, the terminal device may adjust the first area according to the rotation information to determine a second area. The second area is the operating area selected by the user, an area that better matches the operation the user expects.
Illustratively, in a dark environment, for example when the current light intensity is below a threshold, the collected gaze information of the user may be inaccurate. Therefore, after the terminal device determines a certain point on the screen of the terminal according to the user's gaze information, the user may also rotate the terminal device; after the terminal device collects its own rotation information through the camera or a sensor, it adjusts the point on the screen according to the collected rotation information, bringing it closer to the point the user intends to control. For example, when the first area needs to be adjusted to the left, the terminal device can be turned to the left to adjust the first area leftward and obtain the expected second area.
In addition, when the first area is adjusted by rotating the terminal device, it can be adjusted in real time according to the rotation information; the terminal device can display the adjusted area on the screen, and the user can adjust the amplitude of rotating the terminal device according to the visual feedback on the screen of the terminal device, thereby adjusting the first area more accurately to obtain the second area.
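One possible realization of the real-time adjustment of step 104 is to shift the center of the first area in proportion to the rotation angles about the device's vertical and horizontal axes. The gain (shift per degree) and the linear mapping are hypothetical; the application does not prescribe a particular mapping.

```python
def adjust_region_center(center, yaw_deg, pitch_deg, gain=3.0):
    """Shift the region center by gain units per degree of rotation:
    yaw (turning the device left/right) moves it horizontally,
    pitch (tilting forward/back) moves it vertically.
    All names, the gain value and the mapping are illustrative assumptions."""
    x, y = center
    return (x + gain * yaw_deg, y + gain * pitch_deg)
```

Under this convention, turning the device to the left (negative yaw) moves the area to the left, matching the example in the description.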
105. Acquire an instruction corresponding to the second area, and execute the instruction.
After the second area is determined, the terminal device acquires an instruction corresponding to the second area and executes the instruction.
In general, the user may manipulate the terminal device by means of facial features, among other things. Common control modes include clicking, sliding and the like. For sliding, the sliding direction can be defined by the specific region in which the user's gaze point falls, or can be judged from the direction of change from one gaze-point position to the next. A clicking operation can be triggered when the gaze duration reaches the time threshold of the clicking operation, or realized by a blinking operation, or by a special key on the electronic device, such as a protruding side key or a capacitive touch panel, or by a voice operation, or by a facial-feature operation such as puckering the lips, opening the mouth, nodding and the like.
For example, if there is a back control area in the lower-right corner of the terminal device, the terminal device moves the focus according to the user's gaze information; after the focus enters the back control area, the user continues to control the focus by rotating the terminal device. The terminal device can then acquire the back instruction corresponding to the back control area and execute it, returning from the current interface to the previous interface.
It should be understood that step 105 in the embodiments of the present application is an optional step.
In the embodiments of the present application, the first area is first determined according to the user's gaze information; the rotation information of the terminal device is then acquired, the first area is adjusted according to the rotation information to obtain a second area closer to the user's expectation, and the instruction corresponding to the second area is acquired and executed. The embodiments of the present application thus determine a more accurate second area through the combination of the user's eyes and rotation of the terminal device, so that the second area better matches the area expected by the user. Even if eye recognition is inaccurate owing to environmental influences, differences between users and the like, the first area can still be adjusted in combination with the rotation information of the terminal device, compensating for the accuracy of eye tracking, so that the resulting second area is more accurate and the user experience is improved.
The method for adjusting a gaze area based on free movement provided by the present application is further illustrated below. Referring to Fig. 2, another schematic flowchart of the method for adjusting a gaze area based on free movement in an embodiment of the present application may comprise:
201. Acquire gaze information.
202. Determine a corresponding first area according to the gaze information.
It should be understood that steps 201 and 202 in this embodiment are similar to steps 101 and 102 in Fig. 1 above, and are not repeated here.
203. Determine a third area within a preset range of the first area.
After the first area is determined, a third area within a preset range of the first area is determined; the third area includes the first area and is generally larger than the first area.
Optionally, in a possible embodiment, after the first area is determined, the precision corresponding to the gaze point is determined, and the third area is determined with the gaze point as the center dot and N times the precision as the radius, where N is greater than 1; that is, the third area may include the first area and the area extending N times the precision beyond the first area. For example, if the precision is 0.5 degrees, the distance resolution at the terminal device is about 3 mm, so the third area can be determined with a radius of 3*3 = 9 mm.
Illustratively, as shown in Fig. 3, the first area 301 lies within the third area 302: the third area is determined with the center point of the first area 301 as the center dot and N times the radius of the first area 301 as the radius, and the extent of the first area 301 is smaller than that of the third area 302.
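The numeric example above (a precision of 0.5 degrees corresponding to a distance resolution of about 3 mm, and a third-area radius of 3 × 3 = 9 mm) can be reproduced with a small-angle conversion. The viewing distance of 350 mm below is an assumption chosen to match the stated figures; it is not given in the application.

```python
import math

def precision_to_mm(precision_deg, viewing_distance_mm=350.0):
    """Convert an angular precision into an on-screen distance at an
    assumed viewing distance (350 mm is a typical phone-holding distance)."""
    return viewing_distance_mm * math.tan(math.radians(precision_deg))

def third_area_radius(precision_mm, n=3):
    """The third area extends N times the precision (N > 1) around the gaze point."""
    if n <= 1:
        raise ValueError("N must be greater than 1")
    return n * precision_mm
```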
Further, when the user's gaze information is determined by eye tracking, the parameters involved may include the precision, which may comprise an accuracy value and a precision value. The accuracy value is the deviation of the calculated gaze information from the actual gaze information, while the precision value is the dispersion of the gaze deviation. In general, the accuracy can be understood as the average error between the position actually gazed at and the gaze position collected by the terminal device, and the precision can be understood as the degree of dispersion when the terminal device continuously records the same gaze point; for example, the error value can be measured by the mean square deviation of consecutive samples. Specifically, before the user's gaze information is determined by eye tracking, calibration can be performed to obtain calibration parameters. In practical applications, the calibration process is an important part of using eye-tracking technology, and the calibration parameters obtained generally differ with each user's eye features or with the environment. Therefore, before the user's gaze information is acquired by eye tracking, calibration can be performed to obtain the calibration parameters, and the accuracy value and the precision value are obtained according to the calibration parameters and a preset eye-tracking algorithm. Of course, the terminal device may obtain the accuracy value and the precision value directly according to the calibration parameters and the preset eye-tracking algorithm, or the terminal device may send the calibration parameters to a server or another network device, which obtains the accuracy value, the precision value and the like according to the preset eye-tracking algorithm and then sends them to the terminal device; this may be adjusted according to the actual application scenario, and is not limited here.
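The distinction drawn above between the accuracy value (the average offset of the collected gaze positions from the position actually gazed at) and the precision value (the dispersion of consecutive samples around their own mean, measured for example by the mean square deviation) can be sketched as follows. This is a minimal illustration, not the preset eye-tracking algorithm of the application.

```python
import math

def accuracy_and_precision(samples, true_point):
    """samples: collected gaze positions (x, y) for one fixated point;
    true_point: the position actually gazed at.
    Returns (accuracy value, precision value) as described above."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    # Accuracy: offset of the sample mean from the true point.
    accuracy = math.hypot(mean_x - true_point[0], mean_y - true_point[1])
    # Precision: root-mean-square deviation of samples from their own mean.
    precision = math.sqrt(
        sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in samples) / n
    )
    return accuracy, precision
```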
204. Acquire rotation information.
Step 204 in this embodiment is similar to step 103 in Fig. 1 above, and is not repeated here.
It should be noted that the embodiments of the present application do not limit the execution order of step 203 and step 204: step 203 may be executed first, or step 204 may be executed first, which may be adjusted according to the actual application scenario and is not limited here.
205. Adjust the first area within the range of the third area according to the rotation information to obtain a second area.
After the rotation information is acquired, the first area may be adjusted according to the rotation information, without exceeding the range of the third area, to obtain the second area.
In general, the screen of the terminal device can display feedback on the user's rotation operation: the screen interface may highlight, or mark with an area identifier of a preset shape, for example a cursor, a focus or the like, the area where the current gaze point lies. The user can determine the adjustment progress of the first area from what the screen of the terminal device displays, and then adjust the rotation amplitude of the terminal device to determine a second area that better matches the user's expectation.
For example, if the terminal device determines the user's gaze point according to the user's gaze information, determines the first area in the screen and then determines the third area, and the recognized first area does not match the area expected by the user, the user can rotate the terminal device to adjust the position of the first area within the third area and determine the second area.
Illustratively, as shown in Figs. 4a and 4b: in Fig. 4a, the first area 401 and the third area 402 are determined; in Fig. 4b, the user rotates the terminal device to adjust the first area and obtain the second area 403.
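Step 205 requires that the adjustment never leave the third area. Treating the areas as circles, an assumption consistent with Figs. 3, 4a and 4b, this amounts to clamping the adjusted center back onto the third area's boundary; the sketch below is illustrative only.

```python
import math

def clamp_to_third_area(candidate, third_center, third_radius):
    """Return the candidate center unchanged if it lies within the third
    area; otherwise pull it back onto the third area's boundary."""
    cx, cy = third_center
    x, y = candidate
    dist = math.hypot(x - cx, y - cy)
    if dist <= third_radius:
        return (x, y)
    scale = third_radius / dist
    return (cx + (x - cx) * scale, cy + (y - cy) * scale)
```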
206. Acquire control data.
After the second area is determined, control data may further be acquired.
The control data may comprise any one of facial feature data, head feature data, voice data, a control instruction or gesture control data.
The control data can be acquired in various ways. The facial feature data may comprise the user's eye-feature data, such as pupil position, pupil shape, iris position, iris shape, eyelid position, eye-corner position, light-spot (also called Purkinje image) position and the like, or may also comprise the user's eye motion state, for example blinking of one or both eyes, number of blinks and the like, which can be adjusted according to the application scenario. They may also comprise one or more of the gaze point of one or both of the user's eyes, the gaze duration, the gaze-point coordinates, the gaze vector and the like. The facial feature data may also comprise the user's facial features, for example a smiling expression, puckered lips, staring and the like. The head feature data may comprise one or more of the motion state of the user's head, the motion state of a preset part of the head, or the number of movements of a preset part of the head, for example nodding, turning left, turning right, lowering the head and the like. A control instruction may be a user operation to which the terminal device responds; for example, the specific operation may be the user operating a key of the terminal device, and the user's key operation on the terminal device may comprise any one or more of an operation of a physical key of the terminal device, an operation of a virtual key on the touch screen, or an operation of keys of other devices connected to the terminal device, for example a keyboard, a handle and the like. Voice control data can be obtained by the voice acquisition module of the terminal device collecting the user's voice, and the voice data may comprise the control voice with which the user operates the second area. Gesture control data may be obtained from the user's gesture control of the terminal device, and can be collected through the camera, a sensor, the touch screen or the like of the terminal device.
Illustratively, the control data may be, for example: a blink of the user; a facial expression (such as a particular expression like smiling or staring); a head pose, such as nodding, shaking the head, or head swinging; lip-reading recognition data or mouth-shape recognition data, such as pouting or opening the mouth; a key press, including a physical key (such as the home key, the power side key, a volume key, a function key, or a capacitive touch key), an on-screen touch key (such as the on-screen back key under Android), or a virtual key; voice control data; gesture control data; gaze duration data; and the like.
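The mapping from a recognized control-data event to an instruction for the selected second area can be sketched as a simple lookup table. This is an illustrative sketch only; the event names and instruction identifiers below are hypothetical and not taken from the patent.

```python
from typing import Optional

# Hypothetical mapping from recognized control-data events to instructions
# for the currently selected second area (names are illustrative).
CONTROL_EVENT_TO_INSTRUCTION = {
    "nod": "confirm",
    "double_blink": "open",
    "shake_head": "cancel",
    "voice_open": "open",
}

def resolve_instruction(event: str) -> Optional[str]:
    """Return the instruction for a control event, or None if unmapped."""
    return CONTROL_EVENT_TO_INSTRUCTION.get(event)
```

A dispatcher like this would typically fall through to no-op when an event is unmapped, so that stray head movements do not trigger instructions.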
207. Obtain, according to the control data, an instruction corresponding to the second area, and execute the instruction.
After obtaining the control data, the terminal device obtains the instruction corresponding to the second area according to the control data, and executes the instruction.
Specifically, after the control data is obtained, the operation on the second area may be determined according to the control data. Illustratively, after determining the second area, the terminal device highlights the second area, and the user may then perform a further operation, for example, nodding, or gazing for longer than a threshold; the terminal device obtains the instruction corresponding to the second area according to the user's further operation. For example, if the second area corresponds to a confirm operation of the terminal device, the terminal device may obtain a confirm instruction, execute the confirm instruction, and display the next interface.
Optionally, if the second area corresponds to multiple instructions, the corresponding instruction may further be determined according to the control data. For example, if the second area corresponds to multiple instructions, the instruction may be selected according to the user's gaze duration: if the gaze duration falls in a first interval, a corresponding first instruction is obtained and executed; if the gaze duration falls in a second interval, a corresponding second instruction is obtained and executed; and so on, which may be adjusted according to the application scenario.
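The interval-based selection described above can be sketched as follows; the interval boundaries and instruction names are assumptions for illustration, not values specified by the patent.

```python
def select_instruction_by_gaze(duration_s: float) -> str:
    """Pick an instruction from the gaze duration, assuming the second
    area maps a short gaze to a 'preview' instruction and a long gaze
    to an 'open' instruction (intervals are illustrative)."""
    if 0.5 <= duration_s < 1.5:   # first interval -> first instruction
        return "preview"
    elif duration_s >= 1.5:       # second interval -> second instruction
        return "open"
    return "none"                 # gaze too short: no instruction issued
```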
For example, as shown in Fig. 5, when a new message prompt appears on the mobile phone, the user may determine the second area 501 through eye movement and rotation of the phone. After the second area 501 is determined, the terminal device may obtain the control data corresponding to the user's actions, such as nodding, blinking, or prolonged gazing. For example, if the user nods, the terminal device may obtain an open instruction and, through the open instruction, open the content related to the new message; as shown in Fig. 6, the content of the new message may be obtained through the open instruction and displayed on the screen of the terminal device.
Therefore, in this embodiment of the application, the first area is first determined from the user's gaze information, and the third region is determined according to the precision with which the terminal device recognizes the gaze information. The terminal device then obtains its rotation information, which is acquired, under the user's manipulation, by equipment such as a sensor or a camera. Using this rotation information, the terminal device adjusts the first area within the range of the third region, thereby obtaining a second area closer to what the user expects. The constraint of the third region prevents the adjustment amplitude from becoming too large, which would make the adjustment of the first area inaccurate and leave the second area unable to match the region the user expects. The user's control action may then be further obtained as control data, the instruction corresponding to the second area obtained according to the control data, and that instruction executed. In this way, by combining the user's eye movement with the rotation of the terminal device, this embodiment determines a more accurate second area that better matches the region the user expects. Even if eye recognition becomes inaccurate due to the environment, differences between users, or similar influences, the first area can still be adjusted through the user's rotation of the terminal device, making the resulting second area more accurate and improving the user experience. Moreover, constraining the second area through the third region prevents the adjustment of the first area from having an excessive amplitude that would cause the second area to deviate from the expected region. Furthermore, the user's control data can be obtained and the instruction corresponding to the second area obtained through the control data, so that the user's intention is determined more precisely, the control instruction corresponding to the second area is obtained more accurately, misoperation is avoided, and the user experience is improved.
For example, in the field of human-computer interaction on mobile phones, eye tracking technology estimates the direction and position of the user's fixation point to enable the user's control of the phone (clicking, sliding, and the like). However, in most application scenarios, environmental influences or physical differences between users reduce the fixation-point precision of eye tracking, so the operation cannot be precise. In this case, the user's rotation operation of the terminal device is used for correction, adjusted in real time through visual feedback, to obtain the optimal operation region.
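Under stated assumptions, the correction loop described above — estimate a first area from gaze, bound it by a third region extending N times the tracker precision beyond the first area, then shift it by the device rotation — could be sketched like this. The gain mapping rotation angle to on-screen displacement is an assumption for illustration, not a value from the patent.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float  # center x, in pixels
    y: float  # center y, in pixels

def adjust_region(first: Region, precision_px: float, n: float,
                  rot_x_deg: float, rot_y_deg: float,
                  gain_px_per_deg: float = 20.0) -> Region:
    """Shift the first area by the device rotation, clamping the shift to
    a third region extending n * precision_px beyond the first area (n > 1).
    The gain (pixels per degree) is an illustrative assumption."""
    half = n * precision_px            # half-extent of the third region
    dx = rot_y_deg * gain_px_per_deg   # yaw moves the region horizontally
    dy = rot_x_deg * gain_px_per_deg   # pitch moves it vertically
    # Clamp so the adjusted center stays inside the third region.
    dx = max(-half, min(half, dx))
    dy = max(-half, min(half, dy))
    return Region(first.x + dx, first.y + dy)
```

A small rotation nudges the area toward the user's intended target, while the clamp guarantees the second area cannot drift outside the third region even under a large rotation.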
The method provided by the application has been described in detail above; the apparatus provided by the application is described below.
Referring to Fig. 7, a schematic diagram of an embodiment of a terminal device provided by the application may include:
an eye movement identification module 701, configured to obtain gaze information;
a processing module 703, configured to determine a corresponding first area according to the gaze information;
a detection module 702, configured to obtain rotation information;
the processing module 703 is further configured to adjust the first area according to the rotation information to obtain a second area, where the second area can be understood as the region where the user's actual fixation point is located.
Optionally, in a possible embodiment,
the processing module 703 is further configured to obtain an instruction corresponding to the second area and execute the instruction.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
obtain control data;
obtain, according to the control data, the instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible embodiment, the control data includes:
any one of facial feature data, head feature data, voice data, or a control instruction.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
determine a third region within a preset range of the first area;
adjust the first area within the range of the third region according to the rotation information to obtain the second area.
Optionally, in a possible embodiment, the processing module 703 is specifically configured to:
obtain the precision corresponding to the fixation point included in the gaze information;
determine a region N times the precision beyond the first area as the third region, where N is greater than 1.
Optionally, in a possible embodiment,
the detection module 702 is specifically configured to obtain the rotation information through a sensor.
Optionally, in a possible embodiment, the sensor includes an angular velocity sensor;
the rotation information includes a rotational angular velocity detected by the angular velocity sensor.
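When the rotation information is a rotational angular velocity, the rotation angle can be estimated by numerically integrating the angular velocity samples. The following is a minimal sketch assuming regularly sampled gyroscope readings; it is an illustration, not the patent's actual implementation.

```python
def integrate_rotation(angular_velocities_dps, dt_s):
    """Integrate angular-velocity samples (degrees per second) taken at a
    fixed interval dt_s into a cumulative rotation angle in degrees,
    using simple rectangular (Euler) integration."""
    angle_deg = 0.0
    for omega in angular_velocities_dps:
        angle_deg += omega * dt_s
    return angle_deg
```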
Optionally, in a possible embodiment,
the facial feature data includes at least one of a fixation point, a gaze duration, or an eye motion state;
the head feature data includes at least one of a motion state of the head or a motion state of a preset part of the head.
Referring to Fig. 8, a schematic diagram of another embodiment of the terminal device in this embodiment of the application includes:
a central processing unit (central processing units, CPU) 801, a storage medium 802, a power supply 803, a memory 804, and an input/output interface 805. It should be understood that in this embodiment of the application there may be one CPU or multiple CPUs, and one input/output interface or multiple input/output interfaces, which is not limited herein. The power supply 803 may provide a stable working power supply for the device; the memory 804 and the storage medium 802 may be transient storage or persistent storage; instructions are stored in the storage medium, and the CPU may execute the specific steps in the embodiments of Figs. 1-6 according to the instructions in the memory. In addition, apart from the components shown in Fig. 8, the terminal device may also include other components, for example, a sensor, a camera, and the like; this embodiment of the application is only an exemplary illustration and is not limiting.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division into units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of Figs. 1-6 of the application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The above embodiments are merely intended to illustrate the technical solutions of the application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements for some of the technical features; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the application.
Claims (19)
1. A method for adjusting a gaze area based on free movement, characterized by comprising:
obtaining, by a terminal device, gaze information;
determining, by the terminal device, a corresponding first area according to the gaze information;
obtaining, by the terminal device, rotation information;
adjusting, by the terminal device, the first area according to the rotation information to obtain a second area.
2. The method according to claim 1, characterized in that the method further comprises:
obtaining, by the terminal device, an instruction corresponding to the second area, and executing the instruction.
3. The method according to claim 2, characterized in that the obtaining an instruction corresponding to the second area and executing the instruction comprises:
obtaining, by the terminal device, control data;
obtaining, by the terminal device, the instruction corresponding to the second area according to the control data, and executing the instruction.
4. The method according to claim 3, wherein the control data comprises:
any one of facial feature data, head feature data, voice data, or a control instruction.
5. The method according to any one of claims 1-4, characterized in that the adjusting, by the terminal device, the first area according to the rotation information to obtain a second area comprises:
determining, by the terminal device, a third region within a preset range of the first area;
adjusting, by the terminal device, the first area within the range of the third region according to the rotation information to obtain the second area.
6. The method according to claim 5, characterized in that the determining a third region within a preset range of the first area comprises:
obtaining a precision corresponding to a fixation point included in the gaze information;
determining a region N times the precision beyond the first area as the third region, where N is greater than 1.
7. The method according to any one of claims 1-6, characterized in that the obtaining, by the terminal device, rotation information comprises:
obtaining, by the terminal device, the rotation information through a sensor.
8. The method according to claim 7, characterized in that the sensor comprises an angular velocity sensor;
the rotation information comprises a rotational angular velocity detected by the angular velocity sensor.
9. A terminal device, characterized by comprising:
an eye movement identification module, configured to obtain gaze information;
a processing module, configured to determine a corresponding first area according to the gaze information;
a detection module, configured to obtain rotation information;
wherein the processing module is further configured to adjust the first area according to the rotation information to obtain a second area.
10. The terminal device according to claim 9, characterized in that
the processing module is further configured to obtain an instruction corresponding to the second area and execute the instruction.
11. The terminal device according to claim 10, characterized in that the processing module is specifically configured to:
obtain control data;
obtain, according to the control data, the instruction corresponding to the second area, and execute the instruction.
12. The terminal device according to claim 11, wherein the control data comprises:
any one of facial feature data, head feature data, voice data, or a control instruction.
13. The terminal device according to any one of claims 9-12, characterized in that the processing module is specifically configured to:
determine a third region within a preset range of the first area;
adjust the first area within the range of the third region according to the rotation information to obtain the second area.
14. The terminal device according to claim 13, characterized in that the processing module is specifically configured to:
obtain a precision corresponding to a fixation point included in the gaze information;
determine a region N times the precision beyond the first area as the third region, where N is greater than 1.
15. The terminal device according to any one of claims 9-14, characterized in that
the detection module is specifically configured to obtain the rotation information through a sensor.
16. The terminal device according to claim 15, characterized in that the sensor comprises an angular velocity sensor;
the rotation information comprises a rotational angular velocity detected by the angular velocity sensor.
17. A terminal device, comprising:
a memory, configured to store a program;
a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to execute the steps according to any one of claims 1-8.
18. A computer-readable storage medium, comprising instructions that, when run on a computer, cause the computer to execute the method according to any one of claims 1-8.
19. A computer program product comprising instructions, characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to execute the method according to any one of claims 1-8.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2019102224315 | 2019-03-22 | |
CN201910222431 | 2019-03-22 | |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109917923A true CN109917923A (en) | 2019-06-21 |
CN109917923B CN109917923B (en) | 2022-04-12 |
Family
ID=66968041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910257529.4A Active CN109917923B (en) | 2019-03-22 | 2019-04-01 | Method for adjusting gazing area based on free motion and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109917923B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110327061A (en) * | 2019-08-12 | 2019-10-15 | 北京七鑫易维信息技术有限公司 | It is a kind of based on the personality determining device of eye movement tracer technique, method and apparatus |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104238751A (en) * | 2014-09-17 | 2014-12-24 | 联想(北京)有限公司 | Display method and electronic equipment |
US20150015483A1 (en) * | 2012-03-06 | 2015-01-15 | Samsung Electronics Co., Ltd. | Method of controlling at least one function of device by using eye action and device for performing the method |
CN105892647A (en) * | 2016-03-23 | 2016-08-24 | 京东方科技集团股份有限公司 | Display screen adjusting method and device as well as display device |
CN106155316A (en) * | 2016-06-28 | 2016-11-23 | 广东欧珀移动通信有限公司 | Control method, control device and electronic installation |
CN106921890A (en) * | 2015-12-24 | 2017-07-04 | 上海贝尔股份有限公司 | A kind of method and apparatus of the Video Rendering in the equipment for promotion |
CN107407977A (en) * | 2015-03-05 | 2017-11-28 | 索尼公司 | Message processing device, control method and program |
CN108334191A (en) * | 2017-12-29 | 2018-07-27 | 北京七鑫易维信息技术有限公司 | Based on the method and apparatus of the determination blinkpunkt of eye movement analysis equipment |
CN108968907A (en) * | 2018-07-05 | 2018-12-11 | 四川大学 | The bearing calibration of eye movement data and device |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |