CN109976528B - Method for adjusting a gaze area based on head movement and terminal device - Google Patents

Method for adjusting a gaze area based on head movement and terminal device

Info

Publication number
CN109976528B
Authority
CN
China
Prior art keywords
area
region
head
user
acquiring
Prior art date
Legal status
Active
Application number
CN201910258196.7A
Other languages
Chinese (zh)
Other versions
CN109976528A (en)
Inventor
孔祥晖
秦林婵
黄通兵
Current Assignee
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd
Publication of CN109976528A
Application granted
Publication of CN109976528B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/013 Eye tracking input arrangements

Abstract

The application provides a method for adjusting a gaze area based on head movement, and a terminal device. After gaze information is determined by recognizing the user's eye movement, the gaze area is further adjusted according to the user's head movement, so that the resulting gaze area better matches the user's intention and the accuracy of determining the gaze area is improved. The method comprises the following steps: acquiring gaze information; determining a corresponding first area according to the gaze information; acquiring first head feature data; and adjusting the first area according to the first head feature data to obtain a second area.

Description

Method for adjusting a gaze area based on head movement and terminal device
This application claims priority to Chinese patent application No. 201910222440.4, entitled "Method for adjusting a gaze area based on head movement and terminal device", filed on March 22, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to the field of human-computer interaction, and in particular to a method for adjusting a gaze area based on head movement, and a terminal device.
Background
As human-computer interaction becomes more widely used, more and more ways of interacting with devices are available to users. In particular, a user's operation can be recognized from the user's eye feature data, causing the device to execute the corresponding action.
Eye tracking technology is applied to human-computer interaction scenarios, in which a device is controlled by the movement of the user's eyes. For example, when interacting with a terminal device, the direction and position of the user's gaze point can be determined by eye tracking so that the user can control the terminal device, for example by clicking or sliding.
However, environmental influences, differences between users, and similar factors reduce the accuracy of eye tracking, making recognition errors and incorrect operations likely. How to identify the user's intended operation more accurately has therefore become an urgent problem to be solved.
Disclosure of Invention
The application provides a method for adjusting a gaze area based on head movement, and a terminal device, which adjust the gaze area according to head movement after gaze information has been determined by recognizing the user's eye movement, so that the resulting gaze area better matches the user's intention and the accuracy of determining the gaze area is improved.
In view of the above, a first aspect of the present application provides a method for adjusting a gaze area based on head movement, including:
acquiring gazing information;
determining a corresponding first area according to the gazing information;
acquiring first head characteristic data;
and adjusting the first area according to the first head characteristic data to obtain a second area.
Optionally, in a possible implementation, the method may further include:
and acquiring an instruction corresponding to the second area, and executing the instruction.
Optionally, in a possible implementation, the obtaining an instruction corresponding to the second area and executing the instruction may include:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data may include:
any one of facial feature data, second head feature data, voice data, or control instructions.
Optionally, in a possible implementation, the adjusting the first region according to the first head feature data to obtain a second region may include:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the first head characteristic data to obtain the second area.
Optionally, in a possible implementation, the determining a third area within the preset range of the first area may include:
acquiring the precision of the gazing point corresponding to the gazing information;
determining, as the third region, a region extending N times the precision beyond the first region, where N is greater than 1.
Alternatively, in one possible implementation,
the facial feature data may include: at least one of eye movement behavior data or eye movement status;
the second head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
Alternatively, in one possible implementation,
the first head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
A second aspect of the present application provides a terminal device, including:
the eye movement identification module is used for acquiring gazing information;
the processing module is used for determining a corresponding first area according to the gazing information;
the head movement identification module is used for acquiring first head characteristic data;
the processing module is further configured to adjust the first area according to the first head feature data to obtain a second area.
Alternatively, in one possible implementation,
the processing module is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data includes:
any one of facial feature data, second head feature data, voice data, or control instructions.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the first head characteristic data to obtain the second area.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
acquiring the precision of the gazing point corresponding to the gazing information;
determining, as the third region, a region extending N times the precision beyond the first region, where N is greater than 1.
Alternatively, in one possible implementation,
the facial feature data may include: at least one of eye movement behavior data or eye movement status;
the second head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
Alternatively, in one possible implementation,
the first head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
A third aspect of the present application provides a terminal device, comprising:
the system comprises a processor, a memory, a bus and an input/output interface, wherein the processor, the memory and the input/output interface are connected through the bus;
the memory for storing program code;
the processor, when invoking the program code in the memory, performs the steps of the method as provided by the first aspect of the application.
In a fourth aspect, the present application provides a computer-readable storage medium. The part of the technical solution of the present application that contributes to the prior art, or all or part of the technical solution, may be embodied as a software product stored in a storage medium that stores computer software instructions for the above apparatus, including a program for executing the method designed in any one of the embodiments of the first aspect.
The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In a fifth aspect, the present application provides a computer program product comprising computer software instructions that can be loaded by a processor to carry out the method for adjusting a gaze region based on head movement of any implementation of the first aspect above.
In the embodiments of the application, after the first area is determined from the user's gaze information, first head feature data of the user are further acquired and the first area is adjusted according to those data, yielding a second area that is closer to what the user intends. By combining information from the user's eyes and head, the embodiments determine a more accurate second area that better matches the area the user expects. Even if eye recognition is inaccurate because of the environment, differences between users, or similar factors, the first area can still be adjusted using the user's head features, so the resulting second area is more accurate and the user experience is improved.
Drawings
Fig. 1 is a schematic flow chart of the method for adjusting a gaze area based on head movement provided by the present application;
Fig. 2 is another schematic flow chart of the method for adjusting a gaze area based on head movement provided by the present application;
Fig. 3 is a schematic diagram of regions in the method for adjusting a gaze area based on head movement provided by the present application;
Fig. 4a is a schematic diagram of the first region and the third region in an embodiment of the present application;
Fig. 4b is a schematic diagram of the first region, the second region, and the third region in an embodiment of the present application;
Fig. 5 is a schematic diagram of acquiring an instruction in an embodiment of the present application;
Fig. 6 is a schematic diagram of executing an instruction in an embodiment of the present application;
Fig. 7 is a schematic diagram of an embodiment of the terminal device provided by the present application;
Fig. 8 is a schematic diagram of another embodiment of the terminal device provided by the present application.
Detailed Description
The application provides a method for adjusting a gaze area based on head movement, and a terminal device, which adjust the gaze area according to head movement after gaze information has been determined by recognizing the user's eye movement, so that the resulting gaze area better matches the user's intention and the accuracy of determining the gaze area is improved.
The method for adjusting a gaze area based on head movement provided by the present application can be applied to a terminal device that has a module for collecting image data, such as a camera or a sensor. The terminal device may be any electronic device equipped with a camera, a sensor, or the like, for example a mobile phone, a notebook computer, or a display.
The flow of the method for adjusting a gaze area based on head movement provided by the present application is described below. Referring to fig. 1, the method may include the following steps:
101. gaze information is obtained.
Gaze information may be acquired by a camera, a sensor, or another component of the terminal device, and may include the user's gaze point, gaze duration, gaze point coordinates, gaze vector, and the like.
Specifically, the user's gaze point may be identified through eye tracking. The terminal device can capture images of the user's eyes with a camera or a sensor and analyze those images to obtain the user's gaze information. In addition, if the terminal device has an infrared unit, it can emit at least two groups of infrared rays toward the user's eyes, or at least one group of infrared rays toward at least one eyeball, so that the infrared illumination produces glints on the user's eyes. Eye images are then collected and analyzed to obtain eye feature data, from which the gaze information is derived; the gaze information may include the user's gaze point, gaze direction, the coordinates of the gaze point, and the like.
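Purely as an illustration, the following Python sketch shows one common way to map eye features to a gaze point: a pupil-to-glint vector combined with an affine mapping fitted at calibration time. The function name, the affine model, and the coefficient shape are assumptions for illustration, not the specific algorithm of this application.

```python
import numpy as np

def estimate_gaze_point(pupil_center, glint_center, mapping_coeffs):
    """Map the pupil-glint vector extracted from an eye image to screen coordinates.

    pupil_center, glint_center: (x, y) pixel positions found in the eye image.
    mapping_coeffs: assumed 2x3 affine coefficients fitted during calibration.
    """
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    v = np.array([dx, dy, 1.0])                  # homogeneous pupil-glint vector
    A = np.asarray(mapping_coeffs, dtype=float)  # calibration-time affine map
    gaze_x, gaze_y = A @ v
    return gaze_x, gaze_y
```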
102. And determining a corresponding first area according to the gazing information.
After obtaining the gaze information of the user, the corresponding first region may be determined from the gaze information.
Specifically, the gaze information may include the gaze point, gaze direction, and gaze point coordinates of one or both of the user's eyes, and a corresponding first area can be determined on the terminal from this information. The first area can be understood as the area that the terminal device, based on the gaze information, recognizes as being gazed at by the user. The terminal device can collect the user's eye features and, by tracking the user's eyes, determine the gaze point, gaze direction, and gaze point coordinates, and thereby determine the first area. The size of the first area may be adapted to the area covered by the gaze point in the gaze information, or it may be an area of preset size centered on the gaze point once its center has been determined; this can be adjusted to the actual application scenario and is not limited here.
Further, the terminal device may include a display device such as a light-emitting diode (LED) screen, a capacitive screen, or a resistive screen, referred to in this application simply as a screen. The user can gaze at any point on the screen, and the terminal device identifies the first area being gazed at from the user's gaze information.
For example, when a sliding operation is to be performed on the screen, the user gazes at the sliding area; the terminal device acquires the user's eye image data, calculates the gaze point position using a machine vision algorithm and an eye tracking data model, determines the user's gaze position, and obtains the operation corresponding to that area.
In an optional embodiment, after identifying the first area being gazed at, the terminal device may highlight the first area on the screen, display it as focused, and so on; this can be adjusted to the actual application scenario and is not limited here.
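A minimal sketch of determining a first area of preset size centered on the gaze point, as described in step 102. The region size, screen dimensions, and function name are illustrative assumptions.

```python
def first_region(gaze_x, gaze_y, width=120, height=80, screen_w=1080, screen_h=1920):
    """Return a preset-size rectangle centered on the gaze point, clipped to the screen.

    All sizes are in pixels and are assumed example values.
    """
    x0 = min(max(gaze_x - width / 2, 0), screen_w - width)
    y0 = min(max(gaze_y - height / 2, 0), screen_h - height)
    return (x0, y0, width, height)   # (left, top, width, height)
```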
103. First head feature data is acquired.
After the first area is determined from the user's gaze information, the user's first head feature data are acquired.
The first head feature data may be obtained by a camera, a sensor, or the like of the terminal device. For example, the terminal device may capture a head image of the user every m milliseconds to obtain L frames of head images, where L is a positive integer, and then perform image recognition on those L frames to obtain the user's first head feature data.
Optionally, the first head feature data may include one or more of the motion state of the user's head, the motion state of a preset part of the head, the number of movements of that preset part, and so on. The preset part may be a portion of the user's face; for example, the terminal device may determine the direction in which the user's head is moving from the direction in which that facial portion moves.
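The following sketch illustrates one possible way to derive a head motion direction from a tracked facial landmark (here a nose-tip position per frame). The landmark choice, the 2-pixel stillness threshold, and the function name are assumptions, not the recognition algorithm prescribed by the application.

```python
import numpy as np

def head_motion_from_landmarks(nose_tips):
    """Estimate head motion direction and magnitude from the nose-tip track
    across L frames (one (x, y) landmark per frame, sampled every m milliseconds)."""
    track = np.asarray(nose_tips, dtype=float)   # shape (L, 2)
    displacement = track[-1] - track[0]          # net movement over the window
    magnitude = float(np.linalg.norm(displacement))
    dx, dy = displacement
    if magnitude < 2.0:                          # small movements treated as "still" (assumed threshold)
        direction = "still"
    elif abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
    else:
        direction = "down" if dy > 0 else "up"
    return direction, magnitude
```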
104. And adjusting the first area according to the first head characteristic data to obtain a second area.
After obtaining the user's first head feature data, the terminal device may adjust the first area according to those data to obtain a second area, which is the operation area selected by the user and is closer to the area the user intends.
Specifically, after the first region is determined, the first region may be further adjusted in combination with the user's head features to obtain a second region, so that the second region better matches the operation region the user intends and the accuracy of the intended operation is improved.
For example, when it is inconvenient for the user to operate with both hands, after the terminal device has determined a point on the screen from the user's gaze information, the user can move that point closer to the intended target by head movement, for example by raising or lowering the head, turning left or right, or a combination of these movements.
In addition, when the first area is adjusted according to the first head feature data, it can be adjusted in real time as the user moves the head. The terminal device can show the amount of adjustment on the screen, and the user can tune the amplitude of the head movement according to this visual feedback, so that the first area is adjusted more precisely to obtain the second area.
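A sketch of how head feature data might drive the real-time adjustment of the first area described above. The gain factor and the direction-to-offset mapping are illustrative assumptions.

```python
def adjust_region(region, direction, magnitude, gain=1.5):
    """Shift the first region according to the detected head motion.

    region: (left, top, width, height); gain converts landmark pixels to
    screen pixels (assumed value, tuned via the on-screen visual feedback).
    """
    x0, y0, w, h = region
    step = magnitude * gain
    offsets = {"left": (-step, 0), "right": (step, 0),
               "up": (0, -step), "down": (0, step), "still": (0, 0)}
    dx, dy = offsets[direction]
    return (x0 + dx, y0 + dy, w, h)
```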
105. And acquiring an instruction corresponding to the second area and executing the instruction.
After determining the second area, the terminal device acquires an instruction corresponding to the second area and executes the instruction.
Generally, a user can control the terminal device using facial features; common control methods include clicking and sliding. A slide may be performed within a specific area determined by the position of the user's gaze point, with the sliding direction defined within that area, or the sliding direction may be given by the change from one gaze point to the next. A click may be triggered when the gaze duration reaches the time threshold for a click operation, by a blink, by a dedicated key on the device such as a raised side key or a touch on a capacitive screen, by voice, or by a facial action such as pursing the lips, opening the mouth, or nodding.
For example, if there is a back control area in the lower right corner of the screen, the terminal device first brings the focus near the back control area according to the user's gaze information and then moves the focus into it under head control. The terminal device can then obtain the back instruction corresponding to that area, execute it, and return from the current interface to the previous one.
It should be understood that step 105 in the embodiments of the present application is an optional step.
In the embodiments of the application, after the first area is determined from the user's gaze information, first head feature data of the user are further acquired, the first area is adjusted according to those data to obtain a second area closer to the user's intention, and the instruction corresponding to the second area is obtained and executed. By combining information from the user's eyes and head, the embodiments determine a more accurate second area that better matches the area the user expects. Even if eye recognition is inaccurate because of the environment, differences between users, or similar factors, the first area can still be adjusted using the user's head features, compensating for the eye tracking accuracy, so the resulting second area is more accurate and the user experience is improved.
Referring to fig. 2, another flow chart of the method for adjusting a gaze region based on head movement in the embodiment of the present application may include:
201. gaze information is obtained.
202. And determining a corresponding first area according to the gazing information.
It should be understood that steps 201 and 202 in the embodiment of the present application are similar to steps 101 and 102 in fig. 1, and are not described here again.
203. A third region within a preset range of the first region is determined.
After the first region is determined, a third region within a preset range of the first region is determined; the third region contains the first region and is generally larger than it.
Optionally, in a possible embodiment, after the first region is determined, the precision corresponding to the gaze point is determined, and the third region is defined as a circle centered on the gaze point with a radius of N times that precision, where N is greater than 1; that is, the third region includes the first region plus a band extending N times the precision beyond it. For example, if the precision is 0.5 degrees, which corresponds to a distance of about 3 mm on the terminal device, the third region may be determined with a radius of 3 × 3 = 9 mm (for N = 3).
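The example above can be reproduced with a small helper that converts the angular precision to an on-screen distance and multiplies it by N; the viewing distance is an assumed value chosen so that 0.5 degrees corresponds to roughly 3 mm.

```python
import math

def third_region_radius_mm(precision_deg, viewing_distance_mm, n=3):
    """Radius of the third region in millimetres: N times the gaze-point
    precision, converted from visual angle to on-screen distance
    (viewing distance is an assumed value)."""
    precision_mm = 2 * viewing_distance_mm * math.tan(math.radians(precision_deg) / 2)
    return n * precision_mm

# 0.5 degrees at an assumed viewing distance of ~350 mm is roughly 3 mm on screen,
# so with N = 3 the third-region radius is about 9 mm.
print(round(third_region_radius_mm(0.5, 350), 1))   # -> 9.2
```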
For example, as shown in fig. 3, the first region 301 lies within the third region 302; the third region is centered on the center point of the first region 301 with N times its radius, so the first region 301 is smaller than the third region 302.
Further, when the user's gaze information is determined through eye tracking, the parameters involved may include precision, which comprises an accuracy value and a precision value. The accuracy value is the deviation of the computed gaze information from the actual gaze information, and the precision value is the dispersion of the gaze deviations. In general, accuracy can be understood as the average error between the actual gaze position and the gaze position measured by the terminal device, and precision as the degree of dispersion when the terminal device repeatedly records the same gaze point; the error may be measured, for example, by the mean square error of consecutive samples. Specifically, calibration may be performed before the gaze information is determined through eye tracking, yielding calibration parameters. In practice, calibration is an important part of using eye tracking: because each user's eye features and environment differ, the calibration parameters are not always the same. Therefore, before gaze information is obtained by eye tracking, calibration may be performed to obtain calibration parameters, and the accuracy value and the precision value are derived from the calibration parameters and a preset eye tracking algorithm. The terminal device may compute these values directly from the calibration parameters and the algorithm, or it may send the calibration parameters to a server or other network device, which computes the values and returns them to the terminal device; this can be adjusted to the actual application scenario and is not limited here.
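As an illustration of the accuracy value and precision value discussed here, the sketch below computes both from gaze samples recorded while the user fixates a known calibration target; the sample format and the RMS-based precision measure are assumptions.

```python
import numpy as np

def accuracy_and_precision(samples, target):
    """Accuracy: mean offset of recorded gaze samples from the known calibration target.
    Precision: dispersion (RMS deviation from the sample mean) while fixating that target."""
    pts = np.asarray(samples, dtype=float)   # shape (k, 2): gaze samples for one target
    tgt = np.asarray(target, dtype=float)
    mean_gaze = pts.mean(axis=0)
    accuracy = float(np.linalg.norm(mean_gaze - tgt))
    precision = float(np.sqrt(((pts - mean_gaze) ** 2).sum(axis=1).mean()))
    return accuracy, precision
```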
204. First head feature data is acquired.
Step 204 in the embodiment of the present application is similar to step 103 in fig. 1, and is not described herein again.
It should be noted that, in the embodiment of the present application, the execution sequence of step 203 and step 204 is not limited, step 203 may be executed first, step 204 may also be executed first, and the specific implementation may be adjusted according to an actual application scenario, and is not limited herein.
205. And adjusting the first area within the range of the third area according to the first head characteristic data to obtain a second area.
After the first head feature data is acquired, the first region may be adjusted according to the first head feature data, and the second region is obtained without exceeding the range of the third region.
The first head feature data may include one or more of a motion state of the head of the user or a motion state of a preset portion in the head, or a number of motions of a preset portion in the head, or the like.
Generally, feedback on the user's head movement can be shown on the screen of the terminal device: the interface can be highlighted, or a region marker of a preset shape such as a cursor or focus can be displayed, to indicate where the current gaze point lies. The user can judge the progress of the adjustment of the first region from the screen and adapt the amplitude of the head movement accordingly.
For example, the terminal device determines the user's gaze point from the gaze information, determines the first region on the screen, and determines the third region; if the identified first region does not match the region the user intends, the user can shift the position of the first region within the third region by head movement to obtain the second region. The head movement may include lowering or raising the head, turning left or right, or combinations of these, so that the resulting second region better matches the user's intended region.
For example, as shown in fig. 4a, a first region 401 and a third region 402 are determined. As shown in fig. 4b, the first region is then adjusted by the user's head movement, resulting in a second region 403.
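A sketch of constraining the head-driven adjustment so that the second region stays within the third region, assuming the third region is a circle of known radius around the original gaze point; the clamping strategy is one possible choice, not mandated by the application.

```python
import math

def clamp_to_third_region(region, center, radius):
    """Keep the adjusted region's center inside the third region
    (a circle of the given radius around the original gaze point)."""
    x0, y0, w, h = region
    cx, cy = x0 + w / 2, y0 + h / 2
    dx, dy = cx - center[0], cy - center[1]
    dist = math.hypot(dx, dy)
    if dist > radius:                              # pull the center back onto the boundary
        scale = radius / dist
        cx, cy = center[0] + dx * scale, center[1] + dy * scale
    return (cx - w / 2, cy - h / 2, w, h)
```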
206. Control data is acquired.
After the second area is determined, the acquisition of control data may also continue.
The control data may include any of facial feature data, second head feature data, voice data, control instructions, or gesture control data.
The control data can be obtained in various ways. The facial feature data may include the user's eye feature data, such as pupil position, pupil shape, iris position, iris shape, eyelid position, eye-corner position, and light spot (also referred to as Purkinje spot) position, or the user's eye movement state, for example blinking with one or both eyes and the number of blinks, which can be adapted to the application scenario. It may also include one or more of the gaze point, gaze duration, gaze point coordinates, or gaze vector of one or both of the user's eyes. The facial feature data may further include facial expressions, such as smiling, pursing the lips, or glaring. The second head feature data may include one or more of the motion state of the user's head, the motion state of a preset part of the head, or the number of movements of that part, for example nodding, turning left, turning right, or lowering the head. A control instruction may be any operation of the terminal device in response to the user, for example a key press: this may be a physical key of the terminal device, a virtual key on a touch screen, or a key on another device connected to the terminal device, such as a keyboard or a gamepad. Voice control data can be collected by the voice acquisition module of the terminal device and may include the control speech with which the user operates the second area. Gesture control data result from the user controlling the terminal device with gestures and can be collected by a camera, a sensor, a touch screen, or the like.
For example, the control data may come from a blink, a facial expression (such as a smile or a glare), a head gesture such as nodding, shaking, or swinging the head, lip-reading data, mouth-shape data such as pursed lips or an open mouth, or a key press. Keys include physical keys (such as the home key, power side key, volume keys, function keys, or capacitive touch keys), on-screen touch keys (such as the back key under Android), and virtual keys. The control data may also be voice control data, gesture control data, gaze duration data, and the like.
207. And acquiring an instruction corresponding to the second area according to the control data, and executing the instruction.
After acquiring the control data, the terminal device acquires an instruction corresponding to the second area according to the control data, and executes the instruction.
Specifically, after the control data are acquired, the operation to be performed on the second area can be determined from them. For example, after determining the second area the terminal device highlights it; the user may then perform a further action, such as nodding or letting the gaze duration exceed a threshold, and the terminal device obtains the instruction corresponding to the second area according to that action. For instance, if the second area corresponds to a confirm operation, the terminal device obtains the confirm instruction, executes it, and displays the next interface.
Optionally, if the second area corresponds to several instructions, the applicable instruction can be further determined from the control data. For example, if the user's gaze duration falls in a first interval, the corresponding first instruction is obtained and executed; if it falls in a second interval, the corresponding second instruction is obtained and executed; and so on, as appropriate for the application scenario.
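The interval-based selection described above could look like the following sketch; the dwell thresholds, dictionary keys, and instruction names are purely illustrative assumptions.

```python
def pick_instruction(region_instructions, gaze_duration_ms):
    """Select the instruction bound to the second region from the dwell time.

    region_instructions: mapping of assumed action names to instructions.
    """
    if gaze_duration_ms < 500:
        return None                               # too short: no action triggered
    if gaze_duration_ms < 1500:
        return region_instructions.get("tap")     # first interval -> first instruction
    return region_instructions.get("long_press")  # second interval -> second instruction

# Example usage with hypothetical instruction names:
instr = pick_instruction({"tap": "open_message", "long_press": "show_menu"}, 800)
```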
For example, as shown in fig. 5, when a new-message prompt appears on a mobile phone, the user can select the second area 501 through eye and head control. After the second area 501 is determined, the terminal device can acquire control data corresponding to the user's nodding, blinking, prolonged gazing, or similar actions. If the user nods, for instance, the terminal device obtains an open instruction and opens the content related to the new message; as shown in fig. 6, that content is then displayed on the screen of the terminal device.
Thus, in the embodiments of the application, after the first region is determined from the user's gaze information, the third region is determined according to the precision with which the terminal device recognizes the gaze information; the user's first head feature data are then acquired and the first region is adjusted within the third region according to those data, yielding a second region closer to the user's intention. The constraint imposed by the third region prevents an overly large adjustment from making the second region deviate from the region the user expects. The user's control action can then be captured as control data, and the instruction corresponding to the second region is obtained from the control data and executed. By combining the user's eyes and head, a more accurate second region is determined that better matches the intended region; even if eye recognition is inaccurate because of the environment or differences between users, the first region can still be adjusted using head features, making the resulting second region more accurate and improving the user experience. Moreover, by further acquiring the user's control data and deriving the instruction for the second region from them, the user's intention can be determined more precisely, the correct control instruction can be obtained more reliably, misoperation is avoided, and the user experience improves. For example, in human-computer interaction on a mobile phone, the direction and coordinates of the user's gaze point are estimated by eye tracking so that the user can control the phone (by clicking, sliding, and so on). In many application scenarios, however, environmental influences or individual differences reduce the gaze-point precision of eye tracking and the operation cannot be accurate; in that case the operation is corrected by head tracking, and the optimal operation position is reached with the help of visual feedback.
The method provided by the present application is described in detail above, and the apparatus provided by the present application is described below. Referring to fig. 7, a schematic diagram of an embodiment of a terminal device provided in the present application may include:
an eye movement recognition module 701, configured to obtain gaze information;
a processing module 703, configured to determine a corresponding first area according to the gazing information;
a head movement identification module 702 for obtaining first head feature data;
the processing module 703 is further configured to adjust the first region according to the first head feature data to obtain a second region in which the gaze point is located.
Alternatively, in one possible implementation,
the processing module 703 is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible implementation, the processing module 703 is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data includes:
any one of facial feature data, second head feature data, voice data, or control instructions.
Optionally, in a possible implementation manner, the processing module 703 is specifically configured to:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the first head characteristic data to obtain the second area.
Optionally, in a possible implementation manner, the processing module 703 is specifically configured to:
acquiring the precision of the gazing point corresponding to the gazing information;
determining, as the third region, a region extending N times the precision beyond the first region, where N is greater than 1.
Alternatively, in one possible implementation,
the facial feature data includes: at least one of a gaze point, a gaze duration, or an eye movement state;
the second head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
Alternatively, in one possible implementation,
the first head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
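Purely as a composition sketch, the three modules described above might be wired together as follows; the module interfaces (method names and return values) are assumptions for illustration, not the apparatus of the application.

```python
class TerminalDevice:
    """Illustrative composition of the eye movement recognition module,
    head movement recognition module, and processing module."""

    def __init__(self, eye_module, head_module, processing_module):
        self.eye_module = eye_module          # assumed to provide get_gaze_info()
        self.head_module = head_module        # assumed to provide get_head_feature_data()
        self.processing = processing_module   # assumed to provide determine_first_region() / adjust_region()

    def select_region(self):
        gaze_info = self.eye_module.get_gaze_info()                  # acquire gaze information
        first = self.processing.determine_first_region(gaze_info)    # determine the first region
        head_data = self.head_module.get_head_feature_data()         # acquire first head feature data
        return self.processing.adjust_region(first, head_data)       # adjust to obtain the second region
```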
Referring to fig. 8, another embodiment of the terminal device in the embodiment of the present application is shown, which includes:
a central processing unit (CPU) 801, a storage medium 802, a power supply 803, a memory 804, and an input/output interface 805. It should be understood that in this embodiment there may be one or more CPUs and one or more input/output interfaces, which is not limited here. The power supply 803 provides operating power for the terminal device, and the memory 804 and the storage medium 802 may be transitory or persistent storage holding instructions that, when executed by the CPU, perform the steps of the embodiments of figs. 1 to 6 described above.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied as a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of figs. 1 to 6 of the present application. The aforementioned storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for adjusting a gaze region based on head movement, comprising:
acquiring gazing information;
determining a corresponding first area according to the gazing information;
acquiring first head characteristic data;
adjusting the first region according to the first head feature data to obtain a second region, including:
determining a third area within a preset range of the first area, specifically including: acquiring the precision of the gazing point corresponding to the gazing information; and determining, as the third region, a region extending N times the precision beyond the first region, N being greater than 1; wherein the precision comprises an accuracy value and a precision value, the accuracy value and the precision value being obtained according to calibration parameters and a preset eye tracking algorithm, and the calibration parameters being obtained by calibration before the gazing information is determined;
and adjusting the first area within the range of the third area according to the first head characteristic data to obtain the second area.
2. The method of claim 1, further comprising:
and acquiring an instruction corresponding to the second area, and executing the instruction.
3. The method of claim 2, wherein the fetching and executing the instruction corresponding to the second region comprises:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
4. The method of claim 3, wherein the control data comprises:
any one of facial feature data, second head feature data, voice data, or control instructions.
5. A terminal device, comprising:
the eye movement identification module is used for acquiring gazing information;
the processing module is used for determining a corresponding first area according to the gazing information;
the head movement identification module is used for acquiring first head characteristic data;
the processing module is further configured to adjust the first area according to the first head feature data to obtain a second area in which the gazing point is located;
the processing module is specifically configured to: determine a third area within a preset range of the first area; and adjust the first area within the range of the third area according to the first head feature data to obtain the second area;
the processing module is specifically configured to: acquire the precision of the gazing point corresponding to the gazing information; and determine, as the third region, a region extending N times the precision beyond the first region, N being greater than 1; wherein the precision comprises an accuracy value and a precision value, the accuracy value and the precision value being obtained according to calibration parameters and a preset eye tracking algorithm, and the calibration parameters being obtained by calibration before the gazing information is determined.
6. The terminal device of claim 5,
the processing module is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
7. The terminal device of claim 6, wherein the processing module is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
8. The terminal device of claim 7, wherein the control data comprises:
any one of facial feature data, second head feature data, voice data, or control instructions.
9. A terminal device, comprising:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being configured to perform the method of any of claims 1-4 when the program is executed.
10. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1-4.
CN201910258196.7A 2019-03-22 2019-04-01 Method for adjusting watching area based on head movement and terminal equipment Active CN109976528B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910222440 2019-03-22
CN2019102224404 2019-03-22

Publications (2)

Publication Number Publication Date
CN109976528A CN109976528A (en) 2019-07-05
CN109976528B true CN109976528B (en) 2023-01-24

Family

ID=67082228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910258196.7A Active CN109976528B (en) 2019-03-22 2019-04-01 Method for adjusting watching area based on head movement and terminal equipment

Country Status (1)

Country Link
CN (1) CN109976528B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738388B (en) * 2019-10-28 2022-10-18 七鑫易维(深圳)科技有限公司 Photographing processing method and system, electronic device and storage medium
CN110941344B (en) * 2019-12-09 2022-03-15 Oppo广东移动通信有限公司 Method for obtaining gazing point data and related device
CN111638780A (en) * 2020-04-30 2020-09-08 长城汽车股份有限公司 Vehicle display control method and vehicle host
CN113642364B (en) * 2020-05-11 2024-04-12 华为技术有限公司 Face image processing method, device, equipment and computer readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9529428B1 (en) * 2014-03-28 2016-12-27 Amazon Technologies, Inc. Using head movement to adjust focus on content of a display

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8860660B2 (en) * 2011-12-29 2014-10-14 Grinbath, Llc System and method of determining pupil center position
US9244539B2 (en) * 2014-01-07 2016-01-26 Microsoft Technology Licensing, Llc Target positioning with gaze tracking
US10192528B2 (en) * 2016-03-31 2019-01-29 Sony Interactive Entertainment Inc. Real-time user adaptive foveated rendering
CN105975083B (en) * 2016-05-27 2019-01-18 北京小鸟看看科技有限公司 A kind of vision correction methods under reality environment
CA3065131A1 (en) * 2017-05-31 2018-12-06 Magic Leap, Inc. Eye tracking calibration techniques
CN107656613B (en) * 2017-09-08 2020-12-18 国网智能科技股份有限公司 Human-computer interaction system based on eye movement tracking and working method thereof
CN109343700B (en) * 2018-08-31 2020-10-27 深圳市沃特沃德股份有限公司 Eye movement control calibration data acquisition method and device
CN109410285B (en) * 2018-11-06 2021-06-08 北京七鑫易维信息技术有限公司 Calibration method, calibration device, terminal equipment and storage medium


Also Published As

Publication number Publication date
CN109976528A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
CN109976528B (en) Method for adjusting watching area based on head movement and terminal equipment
CN110460837B (en) Electronic device with foveal display and gaze prediction
US11917126B2 (en) Systems and methods for eye tracking in virtual reality and augmented reality applications
EP3123283B1 (en) Eye gaze tracking based upon adaptive homography mapping
CN102830797B (en) A kind of man-machine interaction method based on sight line judgement and system
Mardanbegi et al. Eye-based head gestures
CN113646732A (en) System and method for obtaining control schemes based on neuromuscular data
CN108681399B (en) Equipment control method, device, control equipment and storage medium
WO2012137801A1 (en) Input device, input method, and computer program
CN107066085B (en) Method and device for controlling terminal based on eyeball tracking
US20170150898A1 (en) Methods and apparatuses for electrooculogram detection, and corresponding portable devices
US20180267604A1 (en) Computer pointer device
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
WO2021185110A1 (en) Method and device for eye tracking calibration
CN109101110A (en) A kind of method for executing operating instructions, device, user terminal and storage medium
US20220236801A1 (en) Method, computer program and head-mounted device for triggering an action, method and computer program for a computing device and computing device
Brousseau et al. Smarteye: An accurate infrared eye tracking system for smartphones
Lander et al. hEYEbrid: A hybrid approach for mobile calibration-free gaze estimation
US10444831B2 (en) User-input apparatus, method and program for user-input
CN109960412B (en) Method for adjusting gazing area based on touch control and terminal equipment
CN109917923B (en) Method for adjusting gazing area based on free motion and terminal equipment
CN111966852B (en) Face-based virtual face-lifting method and device
CN112839162B (en) Method, device, terminal and storage medium for adjusting eye display position
US20240122469A1 (en) Virtual reality techniques for characterizing visual capabilities
US20220236795A1 (en) Systems and methods for signaling the onset of a user's intent to interact

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant