CN109917923B - Method for adjusting gazing area based on free motion and terminal equipment


Info

Publication number
CN109917923B
Authority
CN
China
Prior art keywords
area
terminal device
gazing
region
information
Legal status
Active
Application number
CN201910257529.4A
Other languages
Chinese (zh)
Other versions
CN109917923A (en)
Inventor
孔祥晖
黄通兵
Current Assignee
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date
2019-03-22
Filing date
2019-04-01
Publication date
2022-04-12
Application filed by Beijing 7Invensun Technology Co Ltd
Publication of CN109917923A
Application granted
Publication of CN109917923B
Status: Active

Abstract

The application provides a method for adjusting a gazing area based on free motion, and a terminal device. After gazing information is determined based on eye movement identification of a user, the gazing area is adjusted in combination with an operation of rotating the terminal device, so that the obtained gazing area better matches the user's expectation and the accuracy of determining the gazing area is improved. The method comprises the following steps: the terminal device acquires gazing information; the terminal device determines a corresponding first area according to the gazing information; the terminal device acquires rotation information; and the terminal device adjusts the first area according to the rotation information to obtain a second area.

Description

Method for adjusting gazing area based on free motion and terminal equipment
This application claims priority to Chinese patent application No. 201910222431.5, entitled "Method for adjusting gaze area based on free motion and terminal device", filed on March 22, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to the field of human-computer interaction, in particular to a method for adjusting a gazing area based on free motion and terminal equipment.
Background
At present, as human-computer interaction becomes more widely used, more and more interaction modes are available between users and devices. In particular, a user's operation can be identified from the user's eye feature data, and the device then executes the corresponding action.
Eye tracking technology is applied to human-computer interaction scenarios, where a device is controlled through the user's eye movements. For example, in human-computer interaction on a terminal device, the direction and position of the user's gaze point can be determined through eye tracking, so that the user can control the terminal device, for example by clicking or sliding.
However, environmental influences, differences between users, and the like reduce the accuracy of eye tracking, so that recognition errors and misoperations are likely to occur and the operation cannot be carried out accurately. How to determine the user's actual operation area more accurately has therefore become an urgent problem to be solved.
Disclosure of Invention
The application provides a method for adjusting a gazing area based on free motion, and a terminal device. After gazing information is determined based on eye movement identification of a user, the gazing area is adjusted in combination with an operation of rotating the terminal device, so that the obtained gazing area better matches the user's expectation and the accuracy of determining the gazing area is improved.
In view of the above, a first aspect of the present application provides a method for adjusting a gaze area based on free motion, including:
acquiring gazing information;
determining a corresponding first area according to the gazing information;
acquiring rotation information;
and adjusting the first area according to the rotation information to obtain a second area.
Optionally, in a possible implementation, the method may further include:
and acquiring an instruction corresponding to the second area and executing the instruction.
Optionally, in a possible implementation, the obtaining an instruction corresponding to the second area and executing the instruction may include:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data may include:
any one of facial feature data, head feature data, voice data, or control instructions.
Optionally, in a possible implementation manner, the adjusting the first area according to the rotation information to obtain the second area may include:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the rotation information to obtain the second area.
Optionally, in a possible implementation, the determining a third area within the preset range of the first area may include:
acquiring the precision of the gazing point corresponding to the gazing information;
determining a region extending N times the precision beyond the first region as the third region, where N is greater than 1.
Optionally, in a possible implementation, the acquiring, by the terminal device, the rotation information includes:
the terminal equipment acquires the rotation information through a sensor.
Optionally, in one possible embodiment, the sensor comprises an angular velocity sensor;
the rotation information includes a rotational angular velocity detected by the angular velocity sensor.
Alternatively, in one possible implementation,
the facial feature data may include: at least one of eye movement behavior data or eye movement status;
the head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
A second aspect of the present application provides a terminal device, including:
the eye movement identification module is used for acquiring gazing information;
the processing module is used for determining a corresponding first area according to the gazing information;
the detection module is used for acquiring rotation information;
the processing module is further configured to adjust the first area according to the rotation information to obtain a second area.
Alternatively, in one possible implementation,
the processing module is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data includes:
any one of facial feature data, head feature data, voice data, or control instructions.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the rotation information to obtain the second area.
Optionally, in a possible implementation manner, the processing module is specifically configured to:
acquiring the precision of the gazing point corresponding to the gazing information;
determining a region extending N times the precision beyond the first region as the third region, where N is greater than 1.
Alternatively, in one possible implementation,
the detection module is specifically configured to acquire the rotation information through a sensor.
Optionally, in one possible embodiment, the sensor comprises an angular velocity sensor;
the rotation information includes a rotational angular velocity detected by the angular velocity sensor.
Alternatively, in one possible implementation,
the facial feature data may include: at least one of eye movement behavior data or eye movement status;
the head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
A third aspect of the present application provides a terminal device, comprising:
the system comprises a processor, a memory, a bus and an input/output interface, wherein the processor, the memory and the input/output interface are connected through the bus;
the memory for storing program code;
the processor, when invoking the program code in the memory, performs the steps of the method provided by the first aspect of the application.
In a fourth aspect, the present application provides a computer-readable storage medium. It should be noted that the part of the technical solution of the present application that substantially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium. The storage medium stores computer software instructions for the above-mentioned apparatus, including a program designed to execute any one of the embodiments of the first aspect.
The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In a fifth aspect, the present application provides a computer program product comprising computer software instructions that are loadable by a processor to carry out the procedure in the method for adjusting a gaze region based on free motion of any of the above first aspects.
In the embodiments of the application, after the terminal device determines the first area from the user's gazing information, it further acquires its own rotation information and adjusts the first area according to that rotation information, thereby obtaining a second area closer to the user's expectation. A more accurate second area is thus determined by combining the user's eyes with rotation of the terminal device, so that the second area better matches the area the user intends. Even if eye recognition is inaccurate because of the environment, user differences, and so on, the first area can still be adjusted using the rotation information of the terminal device, compensating for the eye-tracking accuracy, so that the resulting second area is more accurate and the user experience is improved.
Drawings
Fig. 1 is a schematic flow chart of a method for adjusting a gaze region based on free motion provided by the present application;
fig. 2 is another schematic flow chart of a method for adjusting a gaze region based on free motion provided by the present application;
fig. 3 is a schematic area diagram of a method for adjusting a gaze area based on free motion according to the present application;
FIG. 4a is a schematic diagram of a first region and a third region in an embodiment of the present application;
FIG. 4b is a schematic diagram of a first region, a second region and a third region in an embodiment of the present application;
FIG. 5 is a diagram illustrating an embodiment of a fetch instruction;
FIG. 6 is a diagram illustrating execution of instructions according to an embodiment of the present application;
fig. 7 is a schematic diagram of an embodiment of a terminal device provided in the present application;
fig. 8 is a schematic diagram of another embodiment of the terminal device provided in the present application.
Detailed Description
The application provides a method for adjusting a gazing area based on free motion, and a terminal device. After gazing information is determined based on eye movement identification of a user, the gazing area is adjusted in combination with an operation of rotating the terminal device, so that the obtained gazing area better matches the user's expectation and the accuracy of determining the gazing area is improved.
The method for adjusting a gazing area based on free motion provided by the present application may be applied to a terminal device that has a module for collecting image data, such as a camera or a sensor. The terminal device may be any of various electronic devices equipped with a camera, a sensor, or the like, for example a mobile phone, a notebook computer, or a display.
The flow of the method for adjusting a gazing area based on free motion provided by the present application is described below. Referring to fig. 1, a flow diagram of the method may include:
101. gaze information is obtained.
First, gazing information may be acquired, for example by a camera or a sensor of the terminal device. The gazing information may include the user's gazing point, gazing duration, gazing point coordinates, gaze vector, and the like.
Specifically, the user's gazing point may be identified through eye tracking. The terminal device may directly acquire eye images of the user through a camera, a sensor, or the like, and then analyze the eye images to obtain the user's gazing information. In addition, if the terminal device has an infrared device, at least two groups of infrared light may be emitted toward the user's eyes, or at least one group of infrared light may be emitted toward at least one eyeball of the user, so that infrared light spots are produced on the user's eyes. Eye images of the user are then collected and analyzed to obtain eye feature data, from which the gazing information is derived; the gazing information may include the user's gazing point, gazing direction, the coordinates of the gazing point, and the like.
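For illustration only, the following Python sketch shows one conventional way of representing gazing information and mapping a pupil-center/infrared-glint offset to screen coordinates with polynomial coefficients obtained from calibration; the data structure, function names, and coefficient values are assumptions and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class GazeInfo:
    """Gazing information as described above (field names are illustrative)."""
    point_x: float                        # gaze-point x-coordinate on the screen, in pixels
    point_y: float                        # gaze-point y-coordinate on the screen, in pixels
    duration_ms: float                    # how long the user has fixated this point
    direction: tuple = (0.0, 0.0, -1.0)   # unit gaze vector, if available

def gaze_from_pupil_glint(pupil, glint, coeffs_x, coeffs_y):
    """Map a pupil-center / infrared-glint offset to screen coordinates with a
    second-order polynomial whose coefficients come from calibration.
    This is one common eye-tracking formulation, not necessarily the one
    used by the patent."""
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    features = [1.0, dx, dy, dx * dy, dx * dx, dy * dy]
    x = sum(c * f for c, f in zip(coeffs_x, features))
    y = sum(c * f for c, f in zip(coeffs_y, features))
    return GazeInfo(point_x=x, point_y=y, duration_ms=0.0)

# purely illustrative coefficients standing in for real calibration output
info = gaze_from_pupil_glint((412, 305), (400, 300),
                             [300, 25, 0, 0, 0, 0], [800, 0, 40, 0, 0, 0])
print(info.point_x, info.point_y)   # -> 600.0 1000.0
```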
102. And determining a corresponding first area according to the gazing information.
After obtaining the gaze information of the user, the corresponding first region may be determined from the gaze information.
Specifically, the gazing information may include the gazing point, gazing direction, and gazing-point coordinates of one or both of the user's eyes, and a corresponding first area may be determined on the terminal according to this information. The first area can be understood as the area that the terminal device recognizes, from the gazing information, as being gazed at by the user. The terminal device may collect the user's eye features and determine the user's gazing point, gazing direction, gazing-point coordinates, and so on by performing eye tracking on the user's eyes, thereby determining the first area. The range of the first area may be determined directly from the area of the gazing point in the gazing information, or it may be an area of preset size centered on the center of the gazing point once that center has been determined; this can be adjusted according to the actual application scenario and is not limited here.
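As a minimal sketch of the second option, the snippet below centers a preset-size rectangle on the recognized gazing point and clips it to the screen; the region size and screen dimensions are assumed example values.

```python
def first_region_from_gaze(point_x, point_y, screen_w, screen_h,
                           region_w=120, region_h=120):
    """Return a preset-size rectangle (left, top, width, height) centered on the
    recognized gazing point and clipped to the screen.  Sizes are example values."""
    left = min(max(point_x - region_w / 2, 0), screen_w - region_w)
    top = min(max(point_y - region_h / 2, 0), screen_h - region_h)
    return (left, top, region_w, region_h)

# a 120 x 120 px first area around a gaze point at (980, 1600) on a 1080 x 2340 screen
print(first_region_from_gaze(980, 1600, 1080, 2340))   # -> (920.0, 1540.0, 120, 120)
```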
Further, the terminal device may include a display device, such as a light-emitting diode (LED) screen, a capacitive screen, or a resistive screen, referred to simply as a screen in this application. The user may gaze at any point on the screen of the terminal, and the terminal device identifies the first area gazed at by the user according to the user's gazing information.
For example, when a sliding operation needs to be performed on the screen of the terminal device, the user gazes at the sliding area; the terminal device acquires the user's eye image data, calculates the gazing-point position through a machine vision algorithm and an eye-tracking data model, determines the user's gazing position, and acquires the operation corresponding to that area.
In an alternative embodiment, after identifying the first area gazed at by the user according to the user's gazing information, the terminal device may highlight the first area on the screen, display it in a focused manner, and so on; this may be adjusted according to the actual application scenario and is not limited here.
103. And acquiring rotation information.
After the terminal device determines the first area according to the user's gazing information, it may continue to acquire rotation information in order to improve the accuracy of the operation the user intends. The rotation information may include one or more of the rotational angular velocity, rotation direction, rotation amplitude, rotation angle, and the like of the terminal device.
Generally, the user rotates the terminal device so that the terminal device can acquire rotation information. The rotation operation may include various modes, such as rotating the terminal in the horizontal plane, flipping it in the vertical direction, or a combination of the two; this may be adjusted according to the actual application scenario and is not limited by the present application.
The rotation information may be acquired through a sensor, or it may be obtained by analyzing images captured by a camera of the terminal device, and this can be adjusted according to the actual application scenario. Specifically, the terminal device may include a sensor that detects movement or rotation of the terminal device itself, for example an angular velocity sensor (also referred to below as a gyroscope). An angular velocity sensor generally differs from an accelerometer in that an accelerometer can usually detect only linear motion along an axis, whereas an angular velocity sensor can measure the rotational angular velocity when the terminal device is deflected or tilted; the angular velocity sensor may therefore be used in embodiments of the present application to acquire the rotation information of the terminal device as the user manipulates it. More specifically, when the rotation information is acquired through the sensor, the terminal device may derive one or more of its rotation direction, angle, or amplitude from the rotational angular velocity measured during deflection or tilting, thereby obtaining the rotation information. In addition, the terminal device may analyze its own rotation from images collected by a camera. For example, if the terminal device is a mobile phone, then while the user rotates the phone its front or rear camera continuously captures images; by analyzing this sequence of images, the direction, angle, amplitude, and other characteristics of the rotation can be determined to obtain the rotation information.
It should be noted that, in addition to obtaining the rotation information through the sensor and the camera of the terminal device as described above, the rotation information may also be obtained in other ways, for example in combination with other sensors such as a gravity sensor or an infrared sensor, or through an external sensing device; the above is only an exemplary illustration and is not limiting.
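The following sketch illustrates how angular-velocity samples from a gyroscope could be integrated into per-axis rotation angles; the sample layout and axis conventions are assumptions, not the patent's specification.

```python
def accumulate_rotation(samples):
    """Integrate angular-velocity samples into total rotation angles per axis.
    Each sample is (timestamp_s, omega_x, omega_y) with angular velocity in
    rad/s; a real device would read these from its gyroscope, and the layout
    here is an assumption."""
    angle_x = angle_y = 0.0
    prev_t = None
    for t, wx, wy in samples:
        if prev_t is not None:
            dt = t - prev_t
            angle_x += wx * dt   # tilt about the x axis (vertical flip)
            angle_y += wy * dt   # turn about the y axis (horizontal rotation)
        prev_t = t
    return angle_x, angle_y

# two samples 20 ms apart while the user turns the device slightly
print(accumulate_rotation([(0.00, 0.0, 0.0), (0.02, 0.0, -0.35)]))  # roughly (0.0, -0.007)
```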
104. And adjusting the first area according to the rotation information to obtain a second area.
After acquiring its own rotation information, the terminal device may adjust the first area according to that rotation information to determine the second area. The second area is the operation area selected by the user and is closer to the operation area the user intends.
For example, when the terminal device is in a dark environment, for instance when the current illumination intensity is below a threshold, the collected gazing information may be inaccurate. After the terminal device determines a point on its screen according to the user's gazing information, the user may therefore rotate the terminal device; once the terminal device has collected its rotation information through a camera or a sensor, it adjusts that point on the screen according to the collected rotation information so that it approaches the point the user intends to control. For example, when the user needs to move the first area to the left, the terminal device can be turned to the left, shifting the first area to the left and producing the desired second area.
In addition, when the first area is adjusted by rotating the terminal device, the adjustment can follow the rotation information in real time: the terminal device displays the area being adjusted on the screen, and the user can modulate how far the terminal device is rotated according to this visual feedback, so that the first area is adjusted more accurately and the second area is obtained.
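A minimal sketch of such a real-time adjustment, assuming a simple proportional mapping from accumulated rotation angle to an on-screen offset; the gain and sign conventions are illustrative, not specified by the patent.

```python
def adjust_first_region(first_region, angle_x, angle_y, gain_px_per_rad=300.0):
    """Shift the first area by an amount proportional to how far the terminal
    device has been rotated.  The gain (pixels per radian) and sign conventions
    are assumed tuning choices; the user refines the result visually on screen."""
    left, top, w, h = first_region
    left += gain_px_per_rad * angle_y   # horizontal turn moves the area sideways
    top += gain_px_per_rad * angle_x    # vertical tilt moves the area up or down
    return (left, top, w, h)

# turning the device slightly to the left nudges the area to the left
print(adjust_first_region((920, 1540, 120, 120), 0.0, -0.05))  # -> (905.0, 1540.0, 120, 120)
```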
105. And acquiring an instruction corresponding to the second area and executing the instruction.
After determining the second area, the terminal device acquires an instruction corresponding to the second area and executes the instruction.
Generally, the user may control the terminal device through facial features. Typical controls include clicking and sliding. A slide may be performed within a specific area according to the location of the user's gazing point, with the sliding direction defined within that area, or the sliding direction may be determined by the direction of change from one gazing point to the next. A click may be triggered when the gazing duration reaches the time threshold for a click operation, or by a blink, or by a dedicated key on the electronic device such as a raised side key or a capacitive touch key, or by a voice operation, or by a facial-feature operation such as pursing the lips, opening the mouth, or nodding.
For example, if a back-control area is located in the lower right corner of the terminal device, the focus is first brought close to the back-control area according to the user's gazing information, and rotating the terminal device then moves the focus into the back-control area. The terminal device may obtain the back instruction corresponding to that area, execute it, and return from the current interface to the previous interface.
It should be understood that step 105 in the embodiments of the present application is an optional step.
In this embodiment of the application, after the first area is determined according to the user's gazing information, the rotation information of the terminal device is further acquired, the first area is adjusted according to the rotation information to obtain a second area closer to the user's expectation, and the instruction corresponding to the second area is acquired and executed. A more accurate second area is thus determined by combining the user's eyes with rotation of the terminal device, so that the second area better matches the area the user intends. Even if eye recognition is inaccurate because of the environment, user differences, and so on, the first area can still be adjusted using the rotation information of the terminal device, compensating for the eye-tracking accuracy, so that the resulting second area is more accurate and the user experience is improved.
To describe the method for adjusting a gazing area based on free motion provided in the present application further, refer to fig. 2, another flow chart of the method in the embodiment of the present application, which may include:
201. gaze information is obtained.
202. And determining a corresponding first area according to the gazing information.
It should be understood that steps 201 and 202 in the embodiment of the present application are similar to steps 101 and 102 in fig. 1, and are not described here again.
203. A third region within a preset range of the first region is determined.
After determining the first region, a third region within a preset range of the first region is determined, the third region including the first region, and the third region being generally larger than the first region.
Optionally, in a possible embodiment, after the first area is determined, the precision corresponding to the gazing point is determined, and the third area is determined with the gazing point as its center and with a radius N times the precision, where N is greater than 1; that is, the third area may include the first area together with a region extending N times the precision beyond it. For example, if the precision is 0.5 degrees, the corresponding distance on the terminal device is about 3 mm, so with N = 3 the third area may be determined with a radius of 3 × 3 = 9 mm.
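The arithmetic of this example can be sketched as follows, assuming a typical handheld viewing distance of roughly 340 mm to convert the angular precision into an on-screen distance; the viewing distance is an assumption, not a value given in the patent.

```python
import math

def third_region_radius(precision_deg, n, viewing_distance_mm=340.0):
    """Convert gaze-point precision (degrees) into a distance on the screen and
    scale it by N to get the third-area radius.  The viewing distance is an
    assumed typical value for a handheld device, not a figure from the patent."""
    precision_mm = viewing_distance_mm * math.tan(math.radians(precision_deg))
    return n * precision_mm

# 0.5 degrees at ~340 mm is roughly 3 mm, so N = 3 gives a radius of about 9 mm
print(round(third_region_radius(0.5, 3), 1))   # -> 8.9
```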
For example, as shown in fig. 3, the first area 301 lies within the third area 302; the third area is determined with the center point of the first area 301 as its center and with N times the radius of the first area 301, and the extent of the first area 301 is smaller than that of the third area 302.
Further, when the user's gazing information is determined through eye tracking, the parameters involved may include accuracy, which may comprise an accuracy value and a precision value. The accuracy value is the deviation of the calculated gazing information from the actual gazing information, and the precision value is the dispersion of those gaze deviations. In general, accuracy may be understood as the average error between the actual gazing position of the gazing point and the gazing position collected by the terminal device, while precision may be understood as the degree of scatter when the terminal device records the same gazing point repeatedly, for example measured by the mean square error of successive samples. Specifically, before the user's gazing information is determined through eye tracking, calibration may be performed to obtain calibration parameters. In practice, calibration is an important step when using eye-tracking technology, and the calibration parameters differ with each user's eye features and with the environment. Therefore, before the user obtains gazing information through eye tracking, calibration may be carried out to obtain calibration parameters, and the accuracy value and precision value are derived from the calibration parameters and a preset eye-tracking algorithm. The terminal device may compute the accuracy and precision values directly from the calibration parameters and the preset eye-tracking algorithm, or it may send the calibration parameters to a server or other network device, which computes the values with the preset eye-tracking algorithm and returns them to the terminal device; this can be adjusted according to the actual application scenario and is not limited here.
204. And acquiring rotation information.
Step 204 in the embodiment of the present application is similar to step 103 in fig. 1, and is not described herein again.
It should be noted that, in this embodiment of the application, the execution order of step 203 and step 204 is not limited: either step may be executed first, and the order may be adjusted according to the actual application scenario; it is not limited here.
205. And adjusting the first area within the range of the third area according to the rotation information to obtain a second area.
After the rotation information is acquired, the first area may be adjusted according to the rotation information without exceeding the range of the third area, so as to obtain the second area.
Generally, feedback on the user's rotation of the terminal device may be shown on its screen: the interface may highlight the area where the current gazing point lies, or mark it with an identifier of preset shape such as a cursor or a focus. The user can judge the progress of the adjustment of the first area from the screen display and then adjust how far the terminal device is rotated, so as to determine a second area that better matches the user's expectation.
For example, suppose the terminal device determines the user's gazing point according to the gazing information, determines the first area on the screen, and then determines the third area. If the identified first area does not match the area the user intends, the user may rotate the terminal device to adjust the position of the first area within the third area and thereby determine the second area.
For example, as shown in fig. 4a, a first area 401 and a third area 402 are determined. As shown in fig. 4b, the user rotates the terminal device to adjust the first area, resulting in a second area 403.
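A minimal sketch of the third-area constraint, assuming a circular third area: if the rotation-driven offset would push the adjusted center outside the circle, it is pulled back onto the boundary. The geometry and values are illustrative only.

```python
import math

def clamp_to_third_region(center, third_center, third_radius):
    """Keep the adjusted area center inside the third area: if the rotation-driven
    offset would push it outside a circle of radius third_radius around
    third_center, pull it back onto the boundary."""
    dx = center[0] - third_center[0]
    dy = center[1] - third_center[1]
    dist = math.hypot(dx, dy)
    if dist <= third_radius:
        return center
    scale = third_radius / dist
    return (third_center[0] + dx * scale, third_center[1] + dy * scale)

# an offset that overshoots the third area is clipped back to its edge
print(clamp_to_third_region((1040, 1600), (980, 1600), 40))  # -> (1020.0, 1600.0)
```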
206. Control data is acquired.
After the second area is determined, the acquisition of control data may also continue.
The control data may include any of facial feature data, head feature data, voice data, control instructions, or gesture control data.
The control data may be obtained in various ways. The facial feature data may include eye feature data of the user, such as pupil position, pupil shape, iris position, iris shape, eyelid position, eye-corner position, and light-spot (also called Purkinje image) position, or it may include the user's eye movement state, for example single-eye or double-eye blinks and the number of blinks, which can be adjusted according to the application scenario. It may also include one or more of the gazing point, gazing duration, gazing-point coordinates, or gaze vector of one or both of the user's eyes. The facial feature data may further include facial expressions of the user, such as smiling, pursing the lips, or glaring. The head feature data may include one or more of the motion state of the user's head, the motion state of a preset part of the head, or the number of motions of a preset part of the head, for example nodding, turning left, turning right, or lowering the head. A control instruction may be the terminal device's response to a user operation; for example, the operation may be the user pressing a key of the terminal device, which may include any one or more of operating a physical key, operating a virtual key on a touch screen, or operating a key of another device connected to the terminal device, such as a keyboard or a game controller. Voice control data may be obtained by a voice-acquisition module of the terminal device capturing the user's speech, and may include control speech for operating on the second area. Gesture control data may be generated by the user performing gesture control on the terminal device and may be collected through a camera, a sensor, a touch screen, or the like of the terminal device.
Illustratively, the control data may correspond to a user selecting a control by blinking; by a facial expression (for example a specific expression such as a smile or a glare); by a head gesture such as nodding, shaking, or tilting the head; by lip-reading or mouth-shape recognition data, such as pursing or opening the lips; by keys, including physical keys (such as a home key, power side key, volume key, function key, or capacitive touch key), screen touch keys (such as the on-screen back key on Android), and virtual keys; by voice control data; by gesture control data; by gazing-duration data; and so on.
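For illustration, the dispatch from control data to the instruction bound to the second area could be sketched as a simple lookup table; the area identifiers, event names, and instruction names below are hypothetical.

```python
def instruction_for(second_area_id, control_event, bindings):
    """Look up the instruction bound to a second area for a given control event
    (nod, blink, voice command, key press, ...).  Returns None if no binding."""
    return bindings.get((second_area_id, control_event))

# area identifiers, event names, and instruction names are hypothetical
bindings = {
    ("back_control", "nod"): "GO_BACK",
    ("back_control", "blink"): "GO_BACK",
    ("new_message", "nod"): "OPEN_MESSAGE",
    ("new_message", "voice:open"): "OPEN_MESSAGE",
}
print(instruction_for("new_message", "nod", bindings))   # -> OPEN_MESSAGE
```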
207. And acquiring an instruction corresponding to the second area according to the control data, and executing the instruction.
After acquiring the control data, the terminal device acquires an instruction corresponding to the second area according to the control data, and executes the instruction.
Specifically, after the control data is acquired, the operation on the second area may be determined from it. For example, after determining the second area, the terminal device highlights it; the user may then perform a further operation, such as nodding or gazing for longer than a threshold, and the terminal device obtains the instruction corresponding to the second area according to that operation. If, for instance, the second area corresponds to a confirm operation of the terminal device, the terminal device may obtain the confirm instruction, execute it, and display the next interface.
Optionally, if the second area corresponds to several instructions, the specific instruction may be further determined from the control data. For example, the first instruction may be obtained and executed if the user's gazing duration falls in a first interval, the second instruction if it falls in a second interval, and so on; this can be adjusted according to the application scenario, as sketched below.
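A sketch of such an interval mapping, with assumed thresholds:

```python
def instruction_by_gaze_duration(duration_ms):
    """When a second area carries several instructions, pick one according to
    the interval the gazing duration falls into.  Thresholds are assumptions."""
    if duration_ms < 500:
        return None                  # too short: treat as no operation
    if duration_ms < 1500:
        return "FIRST_INSTRUCTION"   # e.g. open / confirm
    return "SECOND_INSTRUCTION"      # e.g. show a context menu

print(instruction_by_gaze_duration(800))    # -> FIRST_INSTRUCTION
print(instruction_by_gaze_duration(2000))   # -> SECOND_INSTRUCTION
```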
For example, as shown in fig. 5, when a new-message prompt appears on the mobile phone, the user may determine the second area 501 with the eyes together with rotation of the phone. After the second area 501 is determined, the terminal device may acquire control data corresponding to user actions such as nodding, blinking, or prolonged gazing. If the user nods, for instance, the terminal device may obtain an open instruction and open the content of the new message; as shown in fig. 6, the message content is retrieved through the open instruction and displayed on the screen of the terminal device.
Thus, in this embodiment of the application, after the first area is determined according to the user's gazing information, the third area is determined according to the precision with which the terminal device recognizes the gazing information, and the rotation information of the terminal device is further acquired; the rotation information is captured by the terminal device through a sensor, a camera, or a similar device while the user manipulates it. The terminal device adjusts the first area within the range of the third area using the rotation information, obtaining a second area closer to the user's expectation. The constraint of the third area prevents an over-large adjustment from making the first area drift so far that the resulting second area no longer matches the area the user intends. The user's control action can then be acquired as control data, and the instruction corresponding to the second area is obtained from the control data and executed. This embodiment therefore determines a more accurate second area by combining the user's eyes with rotation of the terminal device, so that the second area better matches the intended area. Even if eye recognition is inaccurate because of the environment, user differences, and so on, the first area can be adjusted through the user's rotation of the terminal device, making the resulting second area more accurate and improving the user experience. Constraining the adjustment with the third area prevents the second area from deviating from the intended area when the adjustment amplitude is too large. Furthermore, by acquiring the user's control data and obtaining the instruction corresponding to the second area from it, the user's intention can be determined more reliably, the control instruction corresponding to the second area can be obtained more accurately, misoperation is avoided, and the user experience is improved. For example, in human-computer interaction on mobile phones, the direction and position of the user's gazing point are estimated through eye tracking to let the user control the phone (clicking, sliding, and so on). In many application scenarios, however, environmental influences or individual differences between users reduce the precision of the eye-tracked gazing point, so the operation cannot be performed accurately; the user then corrects it by rotating the terminal device, adjusting in real time through visual feedback to obtain the optimal operation area.
The method provided by the present application is described in detail above, and the apparatus provided by the present application is described below. Referring to fig. 7, a schematic diagram of an embodiment of a terminal device provided in the present application may include:
an eye movement recognition module 701, configured to obtain gaze information;
a processing module 703, configured to determine a corresponding first area according to the gazing information;
a detection module 702, configured to obtain rotation information;
the processing module 703 is further configured to adjust the first area according to the rotation information to obtain a second area, where the second area can be understood as an area where the actual point of regard of the user is located.
Alternatively, in one possible implementation,
the processing module 703 is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
Optionally, in a possible implementation manner, the processing module 703 is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
Optionally, in a possible implementation, the control data includes:
any one of facial feature data, head feature data, voice data, or control instructions.
Optionally, in a possible implementation manner, the processing module 703 is specifically configured to:
determining a third area within a preset range of the first area;
and adjusting the first area within the range of the third area according to the rotation information to obtain the second area.
Optionally, in a possible implementation manner, the processing module 703 is specifically configured to:
acquiring the precision of the gazing point corresponding to the gazing information;
determining a region extending N times the precision beyond the first region as the third region, where N is greater than 1.
Alternatively, in one possible implementation,
the detecting module 702 is specifically configured to obtain the rotation information through a sensor.
Optionally, in one possible embodiment, the sensor comprises an angular velocity sensor;
the rotation information includes a rotational angular velocity detected by the angular velocity sensor.
Alternatively, in one possible implementation,
the facial feature data includes: at least one of a point of regard, a duration of regard, or an eye movement state;
the head feature data includes: at least one of a motion state of the head or a motion state of a preset portion in the head.
Referring to fig. 8, another embodiment of the terminal device in the embodiment of the present application is shown, which includes:
a central processing unit (CPU) 801, a storage medium 802, a power supply 803, a memory 804, and an input/output interface 805. It should be understood that in this embodiment there may be one or more CPUs and one or more input/output interfaces, which is not limited here. The power supply 803 may provide operating power for the terminal device, and the memory 804 and the storage medium 802 may be transitory or persistent storage holding instructions that, when executed by the CPU, perform the steps of the embodiments of figs. 1-6 described above. In addition, the terminal device may include components other than those shown in fig. 8, for example a sensor, a camera, and the like; this embodiment is merely an example and is not limiting.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that substantially contributes to the prior art, or all or part of the technical solution, may be embodied in a software product stored in a storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of figs. 1 to 6 of the present application. The aforementioned storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. A method of adjusting a gaze region based on free motion, comprising:
the terminal equipment acquires gazing information;
the terminal equipment determines a corresponding first area according to the gazing information;
the terminal equipment acquires rotation information;
the terminal equipment adjusts the first area according to the rotation information to obtain a second area, and the method comprises the following steps:
the terminal equipment determines a third area within a preset range of the first area; the terminal equipment adjusts the first area within the range of the third area according to the rotation information to obtain the second area; wherein the determining a third region within a preset range of the first region comprises: acquiring the precision of the gazing point corresponding to the gazing information; determining a region of N times the precision outside the first region as the third region, the N being greater than 1.
2. The method of claim 1, further comprising:
and the terminal equipment acquires the instruction corresponding to the second area and executes the instruction.
3. The method of claim 2, wherein the fetching and executing the instruction corresponding to the second region comprises:
the terminal equipment acquires control data;
and the terminal equipment acquires the instruction corresponding to the second area according to the control data and executes the instruction.
4. The method of claim 3, the control data, comprising:
any one of facial feature data, head feature data, voice data, or control instructions.
5. The method according to any one of claims 1 to 4, wherein the terminal device acquires the rotation information, and comprises:
and the terminal equipment acquires the rotation information through a sensor.
6. The method of claim 5, wherein the sensor comprises an angular velocity sensor;
the rotation information includes a rotation angular velocity detected by the angular velocity sensor.
7. A terminal device, comprising:
the eye movement identification module is used for acquiring gazing information;
the processing module is used for determining a corresponding first area according to the gazing information;
the detection module is used for acquiring rotation information;
the processing module is further configured to adjust the first area according to the rotation information to obtain a second area;
the processing module is specifically configured to: determining a third area within a preset range of the first area; adjusting the first area within the range of the third area according to the rotation information to obtain the second area;
the processing module is specifically configured to: acquiring the precision of the gazing point corresponding to the gazing information; determining a region of N times the precision outside the first region as the third region, the N being greater than 1.
8. The terminal device of claim 7,
the processing module is further configured to obtain an instruction corresponding to the second area, and execute the instruction.
9. The terminal device of claim 8, wherein the processing module is specifically configured to:
acquiring control data;
and acquiring the instruction corresponding to the second area according to the control data, and executing the instruction.
10. The terminal device of claim 9, the control data comprising:
any one of facial feature data, head feature data, voice data, or control instructions.
11. The terminal device according to any of claims 7-10,
the detection module is specifically configured to acquire the rotation information through a sensor.
12. The terminal device of claim 11, wherein the sensor comprises an angular velocity sensor;
the rotation information includes a rotation angular velocity detected by the angular velocity sensor.
13. A terminal device, comprising:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being configured to perform the steps of any of claims 1-6 when the program is executed.
14. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1-6.
CN201910257529.4A 2019-03-22 2019-04-01 Method for adjusting gazing area based on free motion and terminal equipment Active CN109917923B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910222431 2019-03-22
CN2019102224315 2019-03-22

Publications (2)

Publication Number Publication Date
CN109917923A CN109917923A (en) 2019-06-21
CN109917923B (en) 2022-04-12

Family

ID=66968041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910257529.4A Active CN109917923B (en) 2019-03-22 2019-04-01 Method for adjusting gazing area based on free motion and terminal equipment

Country Status (1)

Country Link
CN (1) CN109917923B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061B (en) * 2019-08-12 2022-03-08 北京七鑫易维信息技术有限公司 Character determining device, method and equipment based on eye movement tracking technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104238751A (en) * 2014-09-17 2014-12-24 联想(北京)有限公司 Display method and electronic equipment
CN105892647A (en) * 2016-03-23 2016-08-24 京东方科技集团股份有限公司 Display screen adjusting method and device as well as display device
CN106155316A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN106921890A (en) * 2015-12-24 2017-07-04 上海贝尔股份有限公司 A kind of method and apparatus of the Video Rendering in the equipment for promotion
CN107407977A (en) * 2015-03-05 2017-11-28 索尼公司 Message processing device, control method and program
CN108334191A (en) * 2017-12-29 2018-07-27 北京七鑫易维信息技术有限公司 Based on the method and apparatus of the determination blinkpunkt of eye movement analysis equipment
CN108968907A (en) * 2018-07-05 2018-12-11 四川大学 The bearing calibration of eye movement data and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101919009B1 (en) * 2012-03-06 2018-11-16 삼성전자주식회사 Method for controlling using eye action and device thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104238751A (en) * 2014-09-17 2014-12-24 联想(北京)有限公司 Display method and electronic equipment
CN107407977A (en) * 2015-03-05 2017-11-28 索尼公司 Message processing device, control method and program
CN106921890A (en) * 2015-12-24 2017-07-04 上海贝尔股份有限公司 A kind of method and apparatus of the Video Rendering in the equipment for promotion
CN105892647A (en) * 2016-03-23 2016-08-24 京东方科技集团股份有限公司 Display screen adjusting method and device as well as display device
CN106155316A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN108334191A (en) * 2017-12-29 2018-07-27 北京七鑫易维信息技术有限公司 Based on the method and apparatus of the determination blinkpunkt of eye movement analysis equipment
CN108968907A (en) * 2018-07-05 2018-12-11 四川大学 The bearing calibration of eye movement data and device

Also Published As

Publication number Publication date
CN109917923A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN110308789B (en) Method and system for mixed reality interaction with peripheral devices
US11650659B2 (en) User input processing with eye tracking
US11917126B2 (en) Systems and methods for eye tracking in virtual reality and augmented reality applications
EP3123283B1 (en) Eye gaze tracking based upon adaptive homography mapping
EP3095025B1 (en) Eye gaze detection with multiple light sources and sensors
EP3005030B1 (en) Calibrating eye tracking system by touch input
US9377859B2 (en) Enhanced detection of circular engagement gesture
CN109976528B (en) Method for adjusting watching area based on head movement and terminal equipment
CN113646732A (en) System and method for obtaining control schemes based on neuromuscular data
US10488918B2 (en) Analysis of user interface interactions within a virtual reality environment
WO2012137801A1 (en) Input device, input method, and computer program
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
US20180267604A1 (en) Computer pointer device
Brousseau et al. Smarteye: An accurate infrared eye tracking system for smartphones
CN112162627A (en) Eyeball tracking method combined with head movement detection and related device
CN109917923B (en) Method for adjusting gazing area based on free motion and terminal equipment
US10444831B2 (en) User-input apparatus, method and program for user-input
CN109960412B (en) Method for adjusting gazing area based on touch control and terminal equipment
US20240122469A1 (en) Virtual reality techniques for characterizing visual capabilities
US11797081B2 (en) Methods, devices and media for input/output space mapping in head-based human-computer interactions
US20220187910A1 (en) Information processing apparatus
AU2022293326A1 (en) Virtual reality techniques for characterizing visual capabilities
CN115756173A (en) Eye tracking method, system, storage medium and computing equipment
CN115857709A (en) Mouse device with several vector detecting modules
Inoue et al. Development of pointing system that uses gaze point detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant