WO2020063077A1 - Eye tracking information processing method and device applied to a terminal - Google Patents

Eye tracking information processing method and device applied to a terminal

Info

Publication number
WO2020063077A1
Authority
WO
WIPO (PCT)
Prior art keywords
algorithm
target
function module
combination
module
Prior art date
Application number
PCT/CN2019/097659
Other languages
English (en)
French (fr)
Inventor
孔祥晖
秦林婵
黄通兵
Original Assignee
北京七鑫易维信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京七鑫易维信息技术有限公司
Publication of WO2020063077A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/94 Hardware or software architectures specially adapted for image or video understanding

Definitions

  • the present application relates to the technical field of eye tracking, and in particular, to a method and device for processing eye tracking information applied to a terminal.
  • Eye tracking technology, as an innovative interaction method, is becoming increasingly well known to the public and has been widely applied in people's work and study.
  • At present, eye tracking technology can be applied to mobile devices, for example, in mobile phones.
  • However, the usage scenarios of mobile devices are usually complex, mobility is high, and the environment around a mobile device changes frequently; for example, a user may carry a mobile device from indoors to outdoors.
  • To obtain accurate eye tracking results, the eye tracking algorithm therefore needs to cover multiple application scenarios.
  • However, eye tracking technology that bundles algorithms for different scenarios makes the architecture of the eye tracking system more complicated, and updates to the eye tracking algorithm become slower.
  • Moreover, such technology occupies more resources and consumes more of the device's resources, sacrificing other performance of the mobile device.
  • the embodiments of the present application provide a method and device for processing eye tracking information applied to a terminal.
  • a method for processing eye tracking information applied to a terminal is provided, including: acquiring scene information; determining, from preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; determining a target working mode according to the function module combination and the target algorithm combination; and completing the switch to the target working mode.
  • completing the working mode switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, where the function modules and target algorithms used in the current working mode are at least partly different from those used in the target working mode.
  • the method for processing eye tracking information applied to a terminal further includes: determining parameter information according to the scene information, where the parameter information includes at least one of the following: head box range, frame rate, accuracy, and precision; the head box range represents the range of head movement of the target object, the frame rate represents the number of eye images collected per unit time, the accuracy represents the deviation of the target object's gaze point position from its actual position, and the precision represents the degree of dispersion of the gaze point positions; and determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the combination.
  • the method for processing eye tracking information applied to a terminal further includes: when the parameter information includes at least the head box range, determining that the function module combination includes a first function module, where the first function module is configured to extract eye features; determining whether the eye tracking device is in a moving state; when the eye tracking device is in a moving state, obtaining the moving speed of the target object within the head box range; and determining, from multiple algorithms according to the moving speed, the target algorithm combination corresponding to the first function module.
  • the eye tracking information processing method applied to a terminal further includes: when the moving speed is greater than a preset speed, determining that the target algorithm combination corresponding to the first function module includes a first algorithm, where the first algorithm is configured to acquire the eye features of the target object from the full-frame image; when the moving speed is less than or equal to the preset speed, determining that the target algorithm combination corresponding to the first function module includes a second algorithm, where the second algorithm is configured to acquire the eye features of the target object from the eye image.
  • the method for processing eye tracking information applied to a terminal further includes: when the parameter information includes at least the accuracy, determining that the function module combination includes a second function module, where the second function module is configured to locate the pupil; obtaining the distance between the eye tracking device and the target object's eyes; comparing the distance with a preset distance to obtain a comparison result; determining the accuracy according to the comparison result; and determining the target algorithm combination corresponding to the second function module according to the accuracy.
  • the method for processing eye tracking information applied to a terminal further includes: when the accuracy is greater than a preset accuracy, determining that the target algorithm combination corresponding to the second function module includes a third algorithm and a fourth algorithm, where the third algorithm is configured to coarsely locate the pupil and the fourth algorithm is configured to finely locate the pupil; when the accuracy is less than or equal to the preset accuracy, determining that the target algorithm combination corresponding to the second function module includes the third algorithm.
  • the method for processing eye tracking information applied to a terminal further includes: when the parameter information includes at least the frame rate, determining that the function module combination includes a third function module, where the third function module is configured to determine the frequency of collecting eye images; obtaining temperature information of the eye tracking device; determining the system loss of the eye tracking device according to the temperature information; determining the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state of the eye tracking device includes at least a foreground running state and a background running state; and determining the target algorithm combination corresponding to the third function module according to the frame rate.
  • the method for processing eye tracking information applied to a terminal further includes: when the frame rate is greater than a preset frame rate, determining that the target algorithm combination corresponding to the third function module includes a fifth algorithm, where the fifth algorithm is configured to reduce the number of eye images collected by the eye tracking device per unit time; when the frame rate is less than or equal to the preset frame rate, determining that the target algorithm combination corresponding to the third function module includes a sixth algorithm, where the sixth algorithm is configured to increase the number of eye images collected by the eye tracking device per unit time.
  • a method for processing eye tracking information applied to a terminal includes: acquiring environmental information; determining, from preset function modules, a function module combination corresponding to the environmental information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; determining a target working mode according to the function module combination and the target algorithm combination; and completing the switch to the target working mode.
  • a device for processing eye tracking information applied to a terminal includes: an acquisition module configured to acquire scene information; a selection module configured to determine, according to preset function modules, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; a determination module configured to determine the target working mode according to the function module combination and the target algorithm combination; and a switching module configured to complete the switch to the target working mode.
  • a storage medium includes a stored program, where the program, when run, executes a method for processing eye tracking information applied to a terminal.
  • a processor is further provided; the processor is configured to run a program which, when run, executes an eye tracking information processing method applied to a terminal.
  • a method of automatically adapting eye tracking technology according to different scenarios is adopted: after the scene information is obtained, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the combination are determined from the preset function modules, the target working mode is determined according to the function module combination and the target algorithm combination, and the switch to the target working mode is then completed.
  • after the scene information has been determined, the target algorithm best suited to that scene information is further determined; that is, different scene information corresponds to different target algorithms, so the eye tracking device processes the information it collects with the target algorithm best suited to the current scene, while algorithms that do not fit the scene information of the current scene are not run, thereby reducing the resource consumption of the eye tracking system.
  • FIG. 1 is a flowchart of a method for processing eye tracking information applied to a terminal according to an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an eye tracking information processing device applied to a terminal according to an embodiment of the present application; and
  • FIG. 3 is a flowchart of a method for processing eye tracking information applied to a terminal according to an embodiment of the present application.
  • an embodiment of a method for processing eye tracking information applied to a terminal is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
  • FIG. 1 is a flowchart of a method for processing eye tracking information applied to a terminal according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S102: Acquire scene information.
  • the eye tracking device is a mobile device with an eye tracking function, where the mobile device may be, but is not limited to, a mobile phone, a tablet, or smart glasses.
  • the mobile device has a collector that can collect scene information of the scene where the eye tracking device is located.
  • the scene information of the eye tracking device includes the device information of the eye tracking device and the environmental information of the environment in which it is located, where the device information includes, but is not limited to, the temperature and operating speed of the eye tracking device;
  • the environmental information of the environment where the eye tracking device is located includes, but is not limited to, the movement state of the eye tracking device (for example, while the user is walking, the relative displacement between the user's eyes and the mobile device changes, and the eye tracking device is then determined to be in a moving state), the moving speed, the distance between the eye tracking device and the target object's eyes, the brightness of the environment in which the eye tracking device is located (for example, when the user moves from indoors to outdoors, the brightness of the environment in which the mobile device is located changes), the application information of the eye tracking device (for example, the frequency with which eye tracking is used while the user browses a webpage), and the like. A sketch of one possible grouping of this information follows.
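  • The following is a minimal, hypothetical sketch of how the scene information enumerated above might be grouped in code; all type and field names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

# Hypothetical grouping of the scene information described above;
# field names are illustrative, not taken from the patent.
@dataclass
class DeviceInfo:
    temperature_c: float        # temperature of the eye tracking device
    operating_speed: float      # operating speed, e.g. current load

@dataclass
class EnvironmentInfo:
    is_moving: bool             # relative eye/device displacement changing
    moving_speed: float         # speed of the target within the head box
    eye_distance_mm: float      # distance between device and target's eyes
    ambient_brightness: float   # illumination of the surroundings
    app_usage_rate: float       # how often the app invokes eye tracking

@dataclass
class SceneInfo:
    device: DeviceInfo
    environment: EnvironmentInfo
```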
  • In step S104, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the combination are determined from the preset function modules, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
  • the preset function modules in the eye tracking device include multiple function modules, and arranging and combining at least one or more of them yields a function module combination; for example, a function module for extracting eye features can be combined with a function module for locating the pupil to obtain a function module combination.
  • the same function module may correspond to multiple algorithms; for example, the algorithms corresponding to the pupil-locating function module include an algorithm for fine pupil positioning and an algorithm for coarse pupil positioning, and combining multiple algorithms yields a target algorithm combination.
  • the target algorithm is the algorithm commonly used in the scene where the terminal's eye tracking device is located; using the target algorithm in that scene yields more accurate processing results for the information collected by the eye tracking device. A sketch of this module-to-algorithm mapping appears below.
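  • As a rough illustration of the module-to-algorithm mapping, the sketch below pairs each preset function module with candidate algorithms; module names, algorithm names, and the selection rule are hypothetical placeholders.

```python
# Illustrative mapping from preset function modules to candidate algorithms.
PRESET_MODULES = {
    "eye_feature_extraction": ["full_frame_search", "eye_region_tracking"],
    "pupil_localization":     ["coarse_positioning", "fine_positioning"],
    "image_acquisition":      ["camera_capture", "infrared_capture"],
}

def select_combination(scene_tags):
    """Pick the function modules relevant to a scene and one candidate
    algorithm per module. A real implementation would choose algorithms
    from the parameter information (head box range, frame rate,
    accuracy, precision) rather than taking the first entry."""
    module_combo = [m for m in PRESET_MODULES if m in scene_tags]
    algorithm_combo = {m: PRESET_MODULES[m][0] for m in module_combo}
    return module_combo, algorithm_combo
```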
  • while the user is walking, the relative displacement between the user's eyes and the eye tracking device changes greatly; in this scenario, the eye tracking device requires a larger head box range and a higher frame rate so that gaze tracking is broader and more frequent.
  • when the user moves from indoors to outdoors, the brightness of the environment changes and the outdoor environment interferes more with eye tracking results, so the eye tracking device switches from the algorithm that processes image information collected by the camera to the algorithm that processes image information collected by infrared.
  • when browsing a webpage, the user controls the scrolling and jumps of the page with eye tracking, but in this scenario eye tracking is used less frequently and webpage browsing requires more computing power; to avoid preempting CPU or GPU resources, the eye tracking device reduces the execution frequency of the eye tracking algorithm.
  • step S104 can thus optimize the eye tracking algorithm for different scenarios, adaptively selecting the eye tracking algorithm or adjusting its main parameters, thereby reducing the resource consumption of the eye tracking system.
  • Step S106: Determine the target working mode according to the function module combination and the target algorithm combination.
  • the target working mode is a mode that, based on the function module combination, processes the images collected by the eye tracking device with the target algorithms in the target algorithm combination. Because different scenarios may correspond to different function module combinations and target algorithm combinations, the corresponding target working modes may also differ across scenarios. For example, when the user browses a webpage, eye tracking is used to control the scrolling and jumps of the page; in this scenario, the target working mode of the eye tracking device is to run the eye tracking algorithm at a lower execution frequency.
  • Step S108: Complete the switch to the target working mode, where completing the switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, the function modules and target algorithms used in the current working mode being at least partly different from those used in the target working mode.
  • In one optional scheme, when the user is indoors, the eye tracking device processes the user's eye images in a first working mode.
  • When the user moves from indoors to outdoors, the brightness of the environment where the eye tracking device is located changes; if the first working mode were still used, the resulting gaze point information might be inaccurate.
  • Because the outdoor environment also interferes more with eye tracking results, the working mode of the eye tracking device is switched from the first working mode to a second working mode suited to outdoor use, to obtain more accurate gaze point information.
  • Based on the scheme defined by steps S102 to S108, eye tracking technology is automatically adapted to different scenarios: after the scene information is obtained, the function module combination corresponding to the scene information and the target algorithm combination corresponding to its function modules are determined from the preset function modules, the target working mode is determined from the function module combination and the target algorithm combination, and the switch to the target working mode is completed, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
  • After the scene information has been determined, the target algorithm best suited to that scene information is further determined; that is, different scene information corresponds to different target algorithms, so the eye tracking device processes the collected information with the algorithm best suited to the current scene while unsuited algorithms are not run, reducing the resource consumption of the eye tracking system. The whole S102 to S108 flow is sketched below.
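  • A minimal sketch of the S102 to S108 flow, assuming the select_combination() helper sketched earlier and a hypothetical device object carrying its current mode:

```python
# Sketch only: WorkingMode and device.mode are illustrative assumptions.
class WorkingMode:
    def __init__(self, modules, algorithms):
        self.modules = modules        # function module combination
        self.algorithms = algorithms  # target algorithm combination

def process_scene(device, scene_tags):
    # S104: determine module combination and per-module target algorithms
    modules, algorithms = select_combination(scene_tags)
    # S106: the target working mode is the pair of combinations
    target_mode = WorkingMode(modules, algorithms)
    # S108: switch only if the target mode differs at least in part
    current = device.mode
    if (modules, algorithms) != (current.modules, current.algorithms):
        device.mode = target_mode
```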
  • determining the function module combination corresponding to the scene information according to the preset function modules, and the target algorithm combination corresponding to the function modules in the combination, may include the following steps:
  • Step S1020: Determine parameter information according to the scene information;
  • Step S1022: Determine, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the combination.
  • the parameter information includes at least one of the following: head box range, frame rate, accuracy, and precision.
  • the head box range represents the range within which the target object's head moves, including the forward-backward range and the left-right range;
  • the frame rate represents the number of eye images collected per unit time; for example, a frame rate of 30 Hz means that 30 frames are collected per second;
  • the accuracy represents the deviation between the gaze point position of the target object and the actual position of the target object;
  • the precision represents the degree of dispersion of the gaze point positions; for example, the root mean square of continuous samples can be taken as the precision, as in the sketch below.
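  • As a concrete reading of "root mean square of continuous samples", the sketch below computes precision as the RMS dispersion of successive gaze samples around their mean; this is one common formulation, assumed here for illustration.

```python
import math

def gaze_precision_rms(samples):
    """RMS dispersion of gaze samples (a list of (x, y) points) around
    their mean; smaller values mean tighter, more precise fixations."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in samples) / n)
```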
  • determining the function module combination from the preset function modules and the target algorithm combination corresponding to the function modules in the combination according to the parameter information may include:
  • Step S20: If the parameter information includes at least the head box range, determine that the function module combination includes a first function module, where the first function module is configured to extract eye features;
  • Step S22: Determine whether the eye tracking device is in a moving state;
  • Step S24: When the eye tracking device is in a moving state, obtain the moving speed of the target object within the head box range;
  • Step S26: Determine the target algorithm combination corresponding to the first function module according to the moving speed.
  • when the moving speed is greater than a preset speed, the target algorithm combination corresponding to the first function module is determined to include a first algorithm, where the first algorithm is configured to acquire the eye features of the target object from the full-frame image; when the moving speed is less than or equal to the preset speed, the target algorithm combination corresponding to the first function module is determined to include a second algorithm, where the second algorithm is configured to acquire the eye features of the target object from the eye image.
  • the head box range of the eye tracking device is determined by the camera hardware and the lens, and for a given eye tracking device the head box range is a fixed value.
  • Usually, once the gaze point information of the target object has been obtained, the eye tracking algorithm assumes that the positions of the user and the eye tracking device are relatively fixed; at this point it only needs to capture eye features within the eye region of the image collected by the eye tracking device, that is, the tracking algorithm is used to obtain the eye features of the target object.
  • However, when the user moves a large distance within the head box range and exceeds the default region of the eye tracking algorithm, the algorithm needs to search the full-frame image for the eye region again (that is, the first algorithm is used to obtain the target object's eye features), which makes the overall algorithm time-consuming and requires constant switching between the two search algorithms.
  • Conversely, if the first algorithm is used by default to search the full image for the eye region, resources are wasted because the first algorithm takes longer and consumes more memory.
  • the eye tracking device includes units such as a gravity sensor, an acceleration sensor, and a gyroscope.
  • the eye tracking device acquires data such as acceleration values and rotation angles collected by these units and uses them to determine whether the device is in a moving state; for example, when the acceleration value is greater than a preset acceleration value and/or the rotation angle is greater than a preset angle, the eye tracking device is determined to be in a moving state. When the eye tracking device is in a moving state, it further detects the moving speed of the target object within the head box range.
  • when the moving speed is greater than the preset speed, the eye tracking device uses the first algorithm to obtain the eye features of the target object; when the moving speed is less than or equal to the preset speed, it uses the second algorithm. This decision is sketched below.
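  • The motion test and the speed-based choice between the first and second algorithms might look like the sketch below; all thresholds and names are illustrative assumptions, since the patent leaves the preset values unspecified.

```python
ACCEL_THRESHOLD = 1.5   # preset acceleration value (m/s^2), illustrative
ANGLE_THRESHOLD = 10.0  # preset rotation angle (degrees), illustrative
SPEED_THRESHOLD = 0.3   # preset moving speed in the head box (m/s), illustrative

def is_moving(accel, rotation_angle):
    # Moving if acceleration and/or rotation exceed their preset values
    return accel > ACCEL_THRESHOLD or rotation_angle > ANGLE_THRESHOLD

def pick_feature_algorithm(accel, rotation_angle, head_speed):
    if is_moving(accel, rotation_angle) and head_speed > SPEED_THRESHOLD:
        return "first_algorithm"   # search the full-frame image
    return "second_algorithm"      # track within the eye image
```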
  • determining the function module combination from the preset function modules and the target algorithm combination corresponding to the function modules in the combination according to the parameter information may include:
  • Step S30: If the parameter information includes at least the accuracy, determine that the function module combination includes a second function module, where the second function module is configured to locate the pupil;
  • Step S32: Obtain the distance between the eye tracking device and the target object's eyes;
  • Step S34: Compare the distance with a preset distance to obtain a comparison result;
  • Step S36: Determine the accuracy according to the comparison result;
  • Step S38: Determine the target algorithm combination corresponding to the second function module according to the accuracy.
  • when the accuracy is greater than a preset accuracy, the target algorithm combination corresponding to the second function module is determined to include a third algorithm and a fourth algorithm; when the accuracy is less than or equal to the preset accuracy, the target algorithm combination corresponding to the second function module is determined to include the third algorithm.
  • the third algorithm is configured to coarsely locate the pupil, and the fourth algorithm is configured to finely locate the pupil.
  • the accuracy of the gaze point is one of the key indicators of eye tracking technology.
  • Different eye tracking devices can use different eye tracking algorithms to meet their accuracy requirements.
  • after obtaining the eye features of the target object, the eye tracking device may perform pupil positioning twice, once for coarse pupil positioning and once for fine pupil positioning. The two positioning passes can run serially, or coarse positioning alone can be selected; the difference between the two options lies in the accuracy of the gaze point.
  • the two approaches also consume different system resources and differ in overall computation time; the pupil can therefore be located with the algorithm corresponding to the current scene to avoid wasting system resources.
  • the accuracy of the eye tracking algorithm may be selected by manually writing the setting into the eye tracking device.
  • the eye tracking device can also switch the gaze point accuracy by judging environmental influences.
  • a distance sensor is installed on the eye tracking device, and the distance sensor can detect the distance between the eye tracking device and the target object's eyes.
  • the eye tracking device determines the accuracy of the gaze point according to how the distance between the eye tracking device and the target object's eyes compares with the preset distance.
  • when the accuracy is greater than the preset accuracy, the eye tracking device locates the pupil using the combination of the algorithm for coarse pupil positioning and the algorithm for fine pupil positioning; when the accuracy is less than or equal to the preset accuracy, the eye tracking device locates the pupil using the coarse positioning algorithm alone.
  • the accuracy of the gaze point is measured by the deviation angle over a certain distance.
  • When the deviation angle is constant, the closer the eye tracking device is to the target object's eyes, the smaller the deviation distance. Therefore, when the distance between the eye tracking device and the target object's eyes is less than a certain threshold, the eye tracking device can increase the deviation angle while the deviation distance still does not exceed the distance threshold. The relation is made concrete below.
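  • To make the angle/distance relation concrete: the deviation distance d implied by a deviation angle theta at viewing distance D is d = D * tan(theta). A small worked example, with illustrative numbers:

```python
import math

def deviation_mm(view_distance_mm, deviation_angle_deg):
    """On-screen deviation implied by an angular error at a distance:
    d = D * tan(theta)."""
    return view_distance_mm * math.tan(math.radians(deviation_angle_deg))

# A 1.0 degree error spans about 10.5 mm at 600 mm but only about
# 5.2 mm at 300 mm, so a larger angle can be tolerated at close range.
print(deviation_mm(600, 1.0), deviation_mm(300, 1.0))
```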
  • determining the function module combination from the preset function modules and the target algorithm combination corresponding to the function modules in the combination according to the parameter information may include:
  • Step S40: If the parameter information includes at least the frame rate, determine that the function module combination includes a third function module, where the third function module is configured to determine the frequency of collecting eye images;
  • Step S42: Obtain temperature information of the eye tracking device;
  • Step S44: Determine the system loss of the eye tracking device according to the temperature information;
  • Step S46: Determine the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state includes at least a foreground running state and a background running state;
  • Step S48: Determine the target algorithm combination corresponding to the third function module according to the frame rate.
  • when the frame rate is greater than a preset frame rate, the target algorithm combination corresponding to the third function module is determined to include a fifth algorithm, where the fifth algorithm is configured to reduce the number of eye images collected by the eye tracking device per unit time; when the frame rate is less than or equal to the preset frame rate, the target algorithm combination corresponding to the third function module is determined to include a sixth algorithm, where the sixth algorithm is configured to increase the number of eye images collected by the eye tracking device per unit time.
  • the frame rate can be appropriately reduced to cut the amount of data to be processed and the system's energy consumption, saving system resources; wasting system resources can therefore be avoided by selecting the frame rate corresponding to the current scene.
  • a temperature sensor is installed on the eye tracking device to detect the overall temperature of the device. After determining the whole-device temperature, the eye tracking device judges the system loss from the temperature change. Further, the eye tracking device determines the frame rate according to the system loss and its running state (foreground or background), and then decides whether to perform eye tracking with the frame-reducing fifth algorithm or the frame-increasing sixth algorithm. When the frame rate is greater than the preset frame rate, the eye tracking device uses the fifth algorithm for eye tracking; when the frame rate is less than or equal to the preset frame rate, it uses the sixth algorithm. One possible form of this decision is sketched below.
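  • One possible form of this frame-rate decision is sketched below; the loss model, thresholds, and names are assumptions for illustration only.

```python
PRESET_FRAME_RATE = 30  # Hz, illustrative preset

def estimate_system_loss(temp_history_c):
    # Crude proxy: a faster temperature rise implies higher system loss
    return max(0.0, temp_history_c[-1] - temp_history_c[0])

def choose_frame_algorithm(temp_history_c, in_foreground, current_fps):
    loss = estimate_system_loss(temp_history_c)
    target_fps = current_fps
    # A hot device or a background app can tolerate a lower frame rate
    if loss > 5.0 or not in_foreground:
        target_fps = min(current_fps, PRESET_FRAME_RATE)
    if target_fps > PRESET_FRAME_RATE:
        return "fifth_algorithm"   # reduce images collected per unit time
    return "sixth_algorithm"       # increase images collected per unit time
```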
  • In summary, the solution provided by this application automatically adapts the eye tracking algorithm to different scenarios.
  • the algorithms of the corresponding function modules can be selected on the basis of the computing power, accuracy, and environmental information of the eye tracking device, forming an adaptive system.
  • each function module is triggered and works according to the different scene modes.
  • maintaining each function module separately and splicing functions together forms an adaptive solution, which improves the system's flexibility whether it is upgraded remotely or maintained independently.
  • FIG. 3 is a flowchart of a method for processing eye tracking information applied to a terminal according to an embodiment of the present application. As shown in FIG. 3, the method includes the following steps:
  • Step S302: Acquire environmental information.
  • the eye tracking device is a mobile device with an eye tracking function, where the mobile device may be, but is not limited to, a mobile phone, a tablet, or smart glasses.
  • the mobile device has a collector that can collect environmental information of the environment in which the eye tracking device is located, such as temperature, humidity, light intensity, and the like.
  • In step S304, the function module combination corresponding to the environmental information and the target algorithm combination corresponding to the function modules in the combination are determined from the preset function modules, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
  • the preset function modules in the eye tracking device include multiple function modules, and arranging and combining any one or more of them yields a function module combination; for example, a function module for extracting eye features can be combined with a function module for locating the pupil to obtain a function module combination.
  • the same function module may correspond to multiple algorithms; for example, the algorithms corresponding to the pupil-locating function module include an algorithm for fine pupil positioning and an algorithm for coarse pupil positioning, and combining multiple algorithms yields a target algorithm combination.
  • the target algorithm is the algorithm commonly used in the environment in which the terminal's eye tracking device is located; using the target algorithm in that environment yields more accurate processing results for the information collected by the eye tracking device.
  • the eye tracking device can start the acquisition function module and switch from the first acquisition algorithm in that module, which captures image information with the camera, to the second acquisition algorithm, which captures image information with infrared; that is, the infrared-based second acquisition algorithm is used as a target algorithm.
  • Step S306: Determine the target working mode according to the function module combination and the target algorithm combination.
  • the target working mode is a mode that, based on the function module combination, processes the images collected by the eye tracking device with the target algorithms in the target algorithm combination. Because different environments may correspond to different function module combinations and target algorithm combinations, the corresponding target working modes may also differ across environments.
  • Step S308: Complete the switch to the target working mode.
  • completing the working mode switch includes controlling the eye tracking device to switch from the current working mode to the target working mode, where the function modules and target algorithms used in the current working mode are at least partly different from those used in the target working mode.
  • In one optional scheme, when the user is indoors, the eye tracking device processes the user's eye images in a first working mode.
  • When the user moves from indoors to outdoors, the brightness of the environment where the eye tracking device is located changes; if the first working mode were still used, the resulting gaze point information might be inaccurate.
  • Because the outdoor environment also interferes more with eye tracking results, the working mode of the eye tracking device is switched from the first working mode to a second working mode suited to outdoor use, to obtain more accurate gaze point information.
  • In this embodiment, eye tracking technology is automatically adapted according to different environments.
  • After the environmental information of the eye tracking device is obtained, the function module combination corresponding to the environmental information and the target algorithm combination corresponding to the function modules in the combination are selected from the preset function modules in the eye tracking device, and the target working mode is determined according to the function module combination and the target algorithm combination.
  • the eye tracking device is then controlled to switch from the current working mode to the target working mode, where the function modules and target algorithms used in the current working mode are at least partly different from those used in the target working mode.
  • After the environmental information of the environment in which the eye tracking device is located has been determined, the target algorithm best suited to that environmental information is further determined; that is, different environmental information corresponds to different target algorithms, so the eye tracking device processes the information it collects with the target algorithm best suited to the current environment, while algorithms that do not fit the current environment are not run, thereby reducing the resource consumption of the eye tracking system.
  • selecting the function module combination corresponding to the environmental information, and the target algorithm combination corresponding to the function modules in the combination, may include the following steps:
  • Step S2020: Determine parameter information according to the environmental information;
  • Step S2022: Determine, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the combination.
  • the parameter information includes at least one of the following: head box range, frame rate, accuracy, and precision.
  • the head box range represents the range within which the target object's head moves, including the forward-backward range and the left-right range;
  • the frame rate represents the number of eye images collected per unit time; for example, a frame rate of 30 Hz means that 30 frames are collected per second;
  • the accuracy represents the deviation between the gaze point position of the target object and the actual position of the target object;
  • the precision represents the degree of dispersion of the gaze point positions; for example, the root mean square of continuous samples can be taken as the precision.
  • the function module combination and the target algorithm combination corresponding to the function modules in the combination are determined according to the parameter information, and the information collected by the eye tracking device is processed on the basis of the target algorithm corresponding to each function module in the combination; this process is the same as the content provided in Embodiment 1 and is not repeated here.
  • FIG. 2 is a schematic structural diagram of an eye tracking information processing device applied to a terminal according to an embodiment of the present application. As shown in FIG. 2, the device includes an acquisition module 201, a selection module 203, a determination module 205, and a switching module 207.
  • the acquisition module 201 is configured to acquire scene information; the selection module 203 is configured to select, according to the preset function modules, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
  • the determination module 205 is configured to determine the target working mode according to the function module combination and the target algorithm combination; the switching module 207 is configured to complete the switch to the target working mode, where completing the switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, the function modules and target algorithms used in the current working mode being at least partly different from those used in the target working mode.
  • the acquisition module 201, the selection module 203, the determination module 205, and the switching module 207 correspond to steps S102 to S108 in Embodiment 1.
  • the examples and application scenarios implemented by the four modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the selection module includes a first determination module and a second determination module.
  • the first determination module is configured to determine parameter information according to the scene information, where the parameter information includes at least one of the following: head box range, frame rate, accuracy, and precision; the head box range represents the range of the target object's head movement, the frame rate represents the number of eye images collected per unit time, the accuracy represents the deviation between the gaze point position of the target object and the actual position of the target object, and the precision represents the degree of dispersion of the gaze point positions; the second determination module is configured to determine, according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the combination.
  • the first determination module and the second determination module correspond to steps S1020 to S1022 in Embodiment 1.
  • the examples and application scenarios implemented by the two modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the second determination module includes a third determination module, a judgment module, a first acquisition module, and a fourth determination module.
  • the third determination module is configured to determine that the function module combination includes the first function module when the parameter information includes at least the head box range, where the first function module is configured to extract eye features; the judgment module is configured to judge whether the eye tracking device is in a moving state; the first acquisition module is configured to acquire the moving speed of the target object within the head box range when the eye tracking device is in a moving state; the fourth determination module is configured to determine, according to the moving speed, the target algorithm combination corresponding to the first function module.
  • the third determination module, the judgment module, the first acquisition module, and the fourth determination module correspond to steps S20 to S26 in Embodiment 1; the examples and application scenarios implemented by the four modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the fourth determination module includes a fifth determination module and a sixth determination module.
  • the fifth determination module is configured to determine that the target algorithm combination corresponding to the first function module includes a first algorithm when the moving speed is greater than a preset speed, where the first algorithm is configured to obtain the eye features of the target object from the full-frame image; the sixth determination module is configured to determine that the target algorithm combination corresponding to the first function module includes a second algorithm when the moving speed is less than or equal to the preset speed, where the second algorithm is configured to obtain the eye features of the target object from the eye image.
  • the second determination module includes a seventh determination module, a second acquisition module, a comparison module, an eighth determination module, and a ninth determination module.
  • the seventh determination module is configured to determine that the function module combination includes the second function module when the parameter information includes at least the accuracy, where the second function module is configured to locate the pupil; the second acquisition module is configured to obtain the distance between the eye tracking device and the target object's eyes; the comparison module is configured to compare the distance with the preset distance to obtain a comparison result; the eighth determination module is configured to determine the accuracy according to the comparison result; and the ninth determination module is configured to determine the target algorithm combination corresponding to the second function module according to the accuracy.
  • the seventh determination module, the second acquisition module, the comparison module, the eighth determination module, and the ninth determination module correspond to steps S30 to S38 in Embodiment 1; the examples and application scenarios implemented by the five modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the ninth determination module includes a tenth determination module and an eleventh determination module.
  • the tenth determination module is configured to determine that the target algorithm combination corresponding to the second function module includes a third algorithm and a fourth algorithm when the accuracy is greater than a preset accuracy, where the third algorithm is configured to coarsely locate the pupil and the fourth algorithm is configured to finely locate the pupil; the eleventh determination module is configured to determine that the target algorithm combination corresponding to the second function module includes the third algorithm when the accuracy is less than or equal to the preset accuracy.
  • the second determination module includes a twelfth determination module, a third acquisition module, a thirteenth determination module, a fourteenth determination module, and a fifteenth determination module.
  • the twelfth determination module is configured to determine that the function module combination includes the third function module when the parameter information includes at least the frame rate, where the third function module is configured to determine the frequency of collecting eye images; the third acquisition module is configured to obtain temperature information of the eye tracking device; the thirteenth determination module is configured to determine the system loss of the eye tracking device according to the temperature information; the fourteenth determination module is configured to determine the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state of the eye tracking device includes at least a foreground running state and a background running state; the fifteenth determination module is configured to determine the target algorithm combination corresponding to the third function module according to the frame rate.
  • the twelfth determination module, the third acquisition module, the thirteenth determination module, the fourteenth determination module, and the fifteenth determination module correspond to steps S40 to S48 in Embodiment 1; the examples and application scenarios implemented by the five modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the fifteenth determination module includes a sixteenth determination module and a seventeenth determination module.
  • the sixteenth determination module is configured to determine that the target algorithm combination corresponding to the third function module includes a fifth algorithm when the frame rate is greater than a preset frame rate, where the fifth algorithm is configured to reduce the number of eye images collected by the eye tracking device per unit time; the seventeenth determination module is configured to determine that the target algorithm combination corresponding to the third function module includes a sixth algorithm when the frame rate is less than or equal to the preset frame rate, where the sixth algorithm is configured to increase the number of eye images collected by the eye tracking device per unit time.
  • a storage medium is also provided; the storage medium includes a stored program, where the program, when run, executes the eye tracking information processing method applied to a terminal provided in Embodiment 1.
  • a processor is further provided; the processor is configured to run a program, where the program, when run, executes the eye tracking information processing method applied to a terminal provided in Embodiment 1.
  • It should be understood that the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division of the units may be a logical function division, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include media that can store program code, such as USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, or optical discs.
  • the solution provided by the embodiments of the present application can be applied to eye tracking technology; it solves the technical problem of the large resource consumption of existing eye tracking systems and reduces the resource consumption of the eye tracking system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Eye Examination Apparatus (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present application discloses an eye tracking information processing method and device applied to a terminal. The method includes: acquiring scene information; determining, from preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; determining a target working mode according to the function module combination and the target algorithm combination; and completing the switch to the target working mode.

Description

Eye tracking information processing method and device applied to a terminal
Technical Field
The present application relates to the technical field of eye tracking, and in particular to an eye tracking information processing method and device applied to a terminal.
Background
As an innovative interaction method, eye tracking technology is becoming increasingly well known to the public and has been widely applied in people's work and study.
At present, eye tracking technology can be applied to mobile devices, for example, in mobile phones. However, the usage scenarios of mobile devices are usually complex, mobility is high, and the environment around a mobile device changes frequently; for example, a user may carry a mobile device from indoors to outdoors. To obtain accurate eye tracking results, the eye tracking algorithm needs to include algorithms for multiple application scenarios.
However, eye tracking technology that includes algorithms for different scenarios makes the architecture of the eye tracking system more complicated, and updates to the eye tracking algorithm become slower. Moreover, such technology occupies more resources and consumes more of the device's resources, sacrificing other performance of the mobile device.
Summary
The embodiments of the present application provide an eye tracking information processing method and device applied to a terminal.
According to one aspect of the embodiments of the present application, an eye tracking information processing method applied to a terminal is provided, including: acquiring scene information; determining, from preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; determining a target working mode according to the function module combination and the target algorithm combination; and completing the switch to the target working mode.
Further, completing the working mode switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, where the function modules and target algorithms used in the current working mode differ at least in part from those used in the target working mode.
Further, the eye tracking information processing method applied to a terminal also includes: determining parameter information according to the scene information, where the parameter information includes at least one of the following: head box range, frame rate, accuracy, and precision; the head box range represents the range within which the target object's head moves, the frame rate represents the number of eye images collected per unit time, the accuracy represents the deviation between the gaze point position of the target object and the actual position of the target object, and the precision represents the degree of dispersion of gaze point positions; and determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the combination.
Further, the method also includes: when the parameter information includes at least the head box range, determining that the function module combination includes a first function module, where the first function module is configured to extract eye features; determining whether the eye tracking device is in a moving state; when the eye tracking device is in a moving state, obtaining the moving speed of the target object within the head box range; and determining, from multiple algorithms according to the moving speed, the target algorithm combination corresponding to the first function module.
Further, the method also includes: when the moving speed is greater than a preset speed, determining that the target algorithm combination corresponding to the first function module includes a first algorithm, where the first algorithm is configured to acquire the eye features of the target object from the full-frame image; when the moving speed is less than or equal to the preset speed, determining that the target algorithm combination corresponding to the first function module includes a second algorithm, where the second algorithm is configured to acquire the eye features of the target object from the eye image.
Further, the method also includes: when the parameter information includes at least the accuracy, determining that the function module combination includes a second function module, where the second function module is configured to locate the pupil; obtaining the distance between the eye tracking device and the target object's eyes; comparing the distance with a preset distance to obtain a comparison result; determining the accuracy according to the comparison result; and determining the target algorithm combination corresponding to the second function module according to the accuracy.
Further, the method also includes: when the accuracy is greater than a preset accuracy, determining that the target algorithm combination corresponding to the second function module includes a third algorithm and a fourth algorithm, where the third algorithm is configured to coarsely locate the pupil and the fourth algorithm is configured to finely locate the pupil; when the accuracy is less than or equal to the preset accuracy, determining that the target algorithm combination corresponding to the second function module includes the third algorithm.
Further, the method also includes: when the parameter information includes at least the frame rate, determining that the function module combination includes a third function module, where the third function module is configured to determine the frequency of collecting eye images; obtaining temperature information of the eye tracking device; determining the system loss of the eye tracking device according to the temperature information; determining the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state of the eye tracking device includes at least a foreground running state and a background running state; and determining the target algorithm combination corresponding to the third function module according to the frame rate.
Further, the method also includes: when the frame rate is greater than a preset frame rate, determining that the target algorithm combination corresponding to the third function module includes a fifth algorithm, where the fifth algorithm is configured to reduce the number of eye images collected by the eye tracking device per unit time; when the frame rate is less than or equal to the preset frame rate, determining that the target algorithm combination corresponding to the third function module includes a sixth algorithm, where the sixth algorithm is configured to increase the number of eye images collected by the eye tracking device per unit time.
According to another aspect of the embodiments of the present application, an eye tracking information processing method applied to a terminal is also provided, including: acquiring environmental information; determining, from preset function modules, a function module combination corresponding to the environmental information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; determining a target working mode according to the function module combination and the target algorithm combination; and completing the switch to the target working mode.
According to another aspect of the embodiments of the present application, an eye tracking information processing device applied to a terminal is also provided, including: an acquisition module configured to acquire scene information; a selection module configured to determine, from preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; a determination module configured to determine a target working mode according to the function module combination and the target algorithm combination; and a switching module configured to complete the switch to the target working mode.
According to another aspect of the embodiments of the present application, a storage medium is also provided; the storage medium includes a stored program, where the program executes the eye tracking information processing method applied to a terminal.
According to another aspect of the embodiments of the present application, a processor is also provided; the processor is configured to run a program, where the program, when run, executes the eye tracking information processing method applied to a terminal.
In the embodiments of the present application, eye tracking technology is automatically adapted according to different scenarios: after the scene information is obtained, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the combination are determined from the preset function modules, the target working mode is determined according to the function module combination and the target algorithm combination, and the switch to the target working mode is then completed.
In the above process, after the scene information of the scene where the eye tracking device is located has been determined, the target algorithm best suited to that scene information is further determined; that is, different scene information corresponds to different target algorithms, so the eye tracking device processes the information it collects with the target algorithm best suited to the current scene, while algorithms that do not fit the current scene are not run, thereby reducing the resource consumption of the eye tracking system.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and constitute a part of the present application; the illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
FIG. 1 is a flowchart of an eye tracking information processing method applied to a terminal according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an eye tracking information processing device applied to a terminal according to an embodiment of the present application; and
FIG. 3 is a flowchart of an eye tracking information processing method applied to a terminal according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the scope of protection of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product, or device.
Embodiment 1
According to the embodiments of the present application, an embodiment of an eye tracking information processing method applied to a terminal is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one here.
FIG. 1 is a flowchart of the eye tracking information processing method applied to a terminal according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S102: Acquire scene information.
It should be noted that the eye tracking device is a mobile device with an eye tracking function, where the mobile device may be, but is not limited to, a mobile phone, a tablet, or smart glasses. In this embodiment, the mobile device has a collector that can collect scene information of the scene where the eye tracking device is located.
It should be noted that the scene information of the eye tracking device includes the device information of the eye tracking device and the environmental information of the environment where the eye tracking device is located. The device information includes, but is not limited to, the temperature and operating speed of the eye tracking device; the environmental information includes, but is not limited to, the movement state of the eye tracking device (for example, while the user is walking, the relative displacement between the user's eyes and the mobile device changes, and the eye tracking device is then determined to be in a moving state), the moving speed, the distance between the eye tracking device and the target object's eyes, the brightness of the environment where the eye tracking device is located (for example, when the user moves from indoors to outdoors, the brightness of the environment where the mobile device is located changes), the application information of the eye tracking device (for example, the frequency with which eye tracking is used while the user browses a webpage), and the like.
Step S104: determine, according to preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the function module combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
It should be noted that the preset function modules in the eye tracking device comprise multiple function modules, and a function module combination can be obtained by arranging and combining at least one of these function modules; for example, a function module for extracting eye features can be combined with a function module for locating the pupil to obtain one function module combination. In addition, a single function module may correspond to multiple algorithms; for example, the algorithms corresponding to the pupil-locating function module include an algorithm for finely locating the pupil and an algorithm for coarsely locating the pupil, and combining multiple algorithms yields a target algorithm combination. A target algorithm is an algorithm commonly used in the scene in which the terminal's eye tracking device is located; using the target algorithms in that scene to process the information captured by the eye tracking device yields more accurate results.
Optionally, while the user is walking, the relative displacement between the user's eyes and the eye tracking device varies considerably. In this scene the eye tracking device needs a larger headbox range and a faster frame rate, so that gaze point tracking is broader and more frequent.
Optionally, when the user moves from indoors to outdoors, the illumination brightness of the environment of the eye tracking device changes; the outdoor environment disturbs the eye tracking result more strongly, and the gaze point accuracy easily drifts. In this scene the eye tracking device switches from the algorithm that captures image information with a camera to the algorithm that captures image information with infrared.
Optionally, while browsing a web page the user employs eye tracking to scroll and jump within the page, but in this scene eye tracking is used infrequently, and web browsing itself needs substantial computing power. To avoid seizing CPU or GPU resources, the eye tracking device lowers the execution frequency of the eye tracking algorithms.
It should further be noted that, in a given scene, running only the target algorithms corresponding to the function modules in the function module combination that suits the current scene, while putting the other algorithms to sleep, achieves environment-adaptive eye tracking.
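A minimal sketch of this activate-or-sleep behaviour follows; the wake()/sleep() hooks and the algorithm names are assumptions of this illustration, not an API defined by the present application.

```python
class Algorithm:
    """Toy stand-in for a single eye tracking algorithm."""

    def __init__(self, name):
        self.name = name
        self.active = False

    def wake(self):
        self.active = True   # participates in processing for this scene

    def sleep(self):
        self.active = False  # dormant: consumes no CPU/GPU time


def apply_target_algorithms(algorithms, target_names):
    """Activate only the algorithms matched to the current scene and
    put every other algorithm to sleep."""
    for algo in algorithms:
        if algo.name in target_names:
            algo.wake()
        else:
            algo.sleep()


pool = [Algorithm(n) for n in ("full_frame_search", "eye_region_tracking",
                               "coarse_pupil", "fine_pupil")]
apply_target_algorithms(pool, {"eye_region_tracking", "coarse_pupil"})
```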
In addition, step S104 can optimize the eye tracking algorithms for different scenes, adaptively selecting the eye tracking algorithms or adjusting their main parameters, which achieves the goal of reducing the resource consumption of the eye tracking system.
Step S106: determine a target working mode according to the function module combination and the target algorithm combination.
It should be noted that the target working mode is the mode in which, based on the function module combination, the images captured by the eye tracking device are processed using the target algorithms in the target algorithm combination. Since different scenes may correspond to different function module combinations and target algorithm combinations, the corresponding target working modes may also differ between scenes. For example, while browsing a web page the user employs eye tracking to scroll and jump within the page, but in this scene the target working mode of the eye tracking device is to run the eye tracking algorithms at a lower execution frequency.
Step S108: complete the switch to the target working mode, where completing the working mode switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, the function modules and target algorithms used by the current working mode differing at least in part from those used by the target working mode.
In an optional scheme, while the user is indoors, the eye tracking device processes the user's eye images in a first working mode. When the user moves from indoors to outdoors, the illumination brightness of the environment changes; if the first working mode were still used to process the user's eye images, the resulting gaze point information might be inaccurate. In addition, since the outdoor environment disturbs the eye tracking result more strongly and the gaze point accuracy easily drifts, in this scene the working mode of the eye tracking device is switched from the first working mode to a second working mode suited to the outdoors, so as to obtain more accurate gaze point information for the user.
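For illustration, the pairing determined in step S106 and the switch in step S108 can be sketched as follows; the WorkingMode type and the module and algorithm names are assumptions of this illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class WorkingMode:
    """A working mode pairs a function module combination with the
    target algorithm combination used by those modules (step S106)."""
    modules: frozenset
    algorithms: frozenset


def switch_working_mode(current, target):
    """Step S108: switch only when the target mode differs at least in
    part from the current one; otherwise keep the current mode."""
    return target if target != current else current


indoor = WorkingMode(frozenset({"capture"}), frozenset({"camera_capture"}))
outdoor = WorkingMode(frozenset({"capture"}), frozenset({"infrared_capture"}))
mode = switch_working_mode(indoor, outdoor)  # moving outdoors triggers the switch
```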
From the scheme defined by steps S102 to S108 above, it can be seen that eye tracking technology is adapted automatically to different scenes: after the scene information is obtained, the function module combination corresponding to the scene information and the target algorithm combination corresponding to its function modules are determined according to the preset function modules, the target working mode is determined according to the function module combination and the target algorithm combination, and finally the switch to the target working mode is completed, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
It is easy to notice that, after the scene information of the scene in which the eye tracking device is located has been determined, the target algorithms best suited to that scene information are further determined; that is, different scene information corresponds to different target algorithms. The eye tracking device therefore processes the information it captures using the target algorithms best suited to the scene information of the current scene, while algorithms that do not suit the current scene do no processing, which reduces the resource consumption of the eye tracking system.
In an optional scheme, determining, according to the preset function modules, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the function module combination may include the following steps:
Step S1020: determine parameter information according to the scene information;
Step S1022: determine, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination.
It should be noted that the parameter information includes at least one of the following: a headbox range, a frame rate, an accuracy, and a precision. The headbox range characterizes the range within which the head of the target object moves, including the forward-backward range and the left-right range; the frame rate characterizes the number of eye images captured per unit time, for example a frame rate of 30 Hz means 30 frames are captured per second; the accuracy characterizes the deviation between the gaze point position of the target object and the actual position of the target object; the precision characterizes the dispersion of the gaze point positions, for example the root mean square of consecutive samples may be taken as the precision.
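As a hedged illustration of the last two parameters, the snippet below computes the accuracy as the mean deviation from the actual target position and the precision as the root mean square over consecutive samples, following the example just given; the sample data and units are hypothetical.

```python
import math


def accuracy(gaze_points, true_point):
    """Mean deviation between the estimated gaze points and the actual
    target position (same angular units as the inputs)."""
    return sum(math.dist(p, true_point) for p in gaze_points) / len(gaze_points)


def precision_rms(gaze_points):
    """Dispersion of the gaze samples, taken as the root mean square of
    the distances between consecutive samples."""
    diffs = [math.dist(a, b) for a, b in zip(gaze_points, gaze_points[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


samples = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (0.0, 0.0)]
print(accuracy(samples, (0.0, 0.0)), precision_rms(samples))
```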
In an optional scheme, determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination may include:
Step S20: when the parameter information includes at least the headbox range, determine that the function module combination includes a first function module, where the first function module is configured to extract eye features;
Step S22: determine whether the eye tracking device is in a moving state;
Step S24: when the eye tracking device is in a moving state, acquire the movement speed of the target object within the headbox range;
Step S26: determine the target algorithm combination corresponding to the first function module according to the movement speed.
Optionally, when the movement speed is greater than a preset speed, the target algorithm combination corresponding to the first function module is determined to include a first algorithm, where the first algorithm is configured to acquire the eye features of the target object from the full-frame image; when the movement speed is less than or equal to the preset speed, the target algorithm combination corresponding to the first function module is determined to include a second algorithm, where the second algorithm is configured to acquire the eye features of the target object from the eye image.
It should be noted that the headbox range of an eye tracking device is determined by the hardware camera and lens, and for a given eye tracking device the headbox range is a fixed value. Normally, once the gaze point information of the target object has been obtained, the eye tracking algorithm assumes that the positions of the user and the eye tracking device are relatively fixed; it then only needs to capture eye features within the eye region of the images captured by the device, i.e. it acquires the target object's eye features with a tracking algorithm. However, when the user moves a large distance within the headbox range and leaves the default region of the eye tracking algorithm, the algorithm must search the full-frame image for the eye region again (i.e. acquire the target object's eye features with the first algorithm), which makes the overall algorithm time-consuming and requires constant switching between the two search algorithms. Conversely, if the first algorithm were used by default to search the full image for the eye region, resources would be wasted, since the first algorithm takes longer and consumes more memory.
Specifically, the eye tracking device contains units such as a gravity sensor, an accelerometer, and a gyroscope. The eye tracking device obtains data such as acceleration values and rotation angles collected by these units and uses them to determine whether the device is in a moving state; for example, when the acceleration value is greater than a preset acceleration value and/or the rotation angle is greater than a preset angle, the device is determined to be in a moving state. When the eye tracking device is in a moving state, it further detects the movement speed of the target object within the headbox range. When the movement speed is greater than the preset speed, the eye tracking device acquires the eye features of the target object with the first algorithm; when the movement speed is less than or equal to the preset speed, it acquires them with the second algorithm.
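For illustration only, the motion test and the speed-based choice between the two search algorithms might look as follows; every preset threshold here is invented, since the application leaves the preset values unspecified.

```python
PRESET_ACCELERATION = 0.5  # m/s^2; illustrative preset
PRESET_ANGLE = 5.0         # degrees; illustrative preset
PRESET_SPEED = 0.3         # m/s;    illustrative preset


def is_moving(acceleration, rotation_angle):
    """Moving state if the acceleration and/or the rotation angle read
    from the inertial sensors exceed their presets (step S22)."""
    return acceleration > PRESET_ACCELERATION or rotation_angle > PRESET_ANGLE


def select_feature_algorithm(acceleration, rotation_angle, movement_speed):
    """Steps S24-S26: fast movement inside the headbox forces the first
    algorithm (full-frame search); otherwise the cheaper second
    algorithm (tracking within the eye region) suffices."""
    if is_moving(acceleration, rotation_angle) and movement_speed > PRESET_SPEED:
        return "first_algorithm_full_frame_search"
    return "second_algorithm_eye_region_tracking"


print(select_feature_algorithm(acceleration=0.8, rotation_angle=2.0,
                               movement_speed=0.6))
```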
In an optional scheme, determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination may include:
Step S30: when the parameter information includes at least the accuracy, determine that the function module combination includes a second function module, where the second function module is configured to locate the pupil;
Step S32: acquire the distance between the eye tracking device and the eyes of the target object;
Step S34: compare the distance with a preset distance to obtain a comparison result;
Step S36: determine the accuracy according to the comparison result;
Step S38: determine the target algorithm combination corresponding to the second function module according to the accuracy.
Optionally, when the accuracy is greater than a preset accuracy, the target algorithm combination corresponding to the second function module is determined to include a third algorithm and a fourth algorithm; when the accuracy is less than or equal to the preset accuracy, the target algorithm combination corresponding to the second function module is determined to include the third algorithm. The third algorithm is configured to coarsely locate the pupil, and the fourth algorithm is configured to finely locate the pupil.
It should be noted that gaze point accuracy is one of the key indicators of eye tracking technology, and different eye tracking devices may adopt different eye tracking algorithms to meet their accuracy requirements. Optionally, after obtaining the eye features of the target object, the eye tracking device may locate the pupil twice, once coarsely and once finely. The two localizations may run in series, or only the coarse localization may be selected. The two approaches differ in gaze point accuracy, and they also differ in how much system resource they consume and in their overall computation time. The pupil can therefore be located with the algorithms that correspond to the current scene, so as to avoid wasting system resources.
Optionally, the accuracy of the eye tracking algorithm may be selected by the eye tracking device itself by way of manual write-in on the device.
Optionally, the eye tracking device may also switch the gaze point accuracy by assessing environmental influence. Specifically, the eye tracking device is fitted with a distance sensor that can detect the distance between the eye tracking device and the eyes of the target object. The eye tracking device determines the gaze point accuracy from how this distance compares with the preset distance. When the accuracy is greater than the preset accuracy, the eye tracking device locates the pupil using the third algorithm for coarse localization combined with the fourth algorithm for fine localization; when the accuracy is less than or equal to the preset accuracy, it locates the pupil using the third algorithm alone.
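A minimal sketch of this distance-driven choice between coarse-only and coarse-plus-fine localization follows. The inverse-distance mapping from the comparison result to the required accuracy is an assumption of this illustration, since the application only states that the accuracy is determined from the comparison result.

```python
PRESET_DISTANCE_MM = 400.0   # illustrative preset
PRESET_ACCURACY_DEG = 1.0    # illustrative preset


def required_accuracy(eye_distance_mm):
    """Derive the required accuracy from the comparison between the
    measured eye distance and the preset distance (steps S32-S36)."""
    return PRESET_ACCURACY_DEG * PRESET_DISTANCE_MM / eye_distance_mm


def pupil_pipeline(eye_distance_mm):
    """Step S38: accuracy above the preset runs coarse then fine
    localization (third + fourth algorithms); otherwise coarse only."""
    if required_accuracy(eye_distance_mm) > PRESET_ACCURACY_DEG:
        return ["third_algorithm_coarse", "fourth_algorithm_fine"]
    return ["third_algorithm_coarse"]


print(pupil_pipeline(eye_distance_mm=250.0))  # close device -> coarse + fine
```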
It should be noted that gaze point accuracy is measured as a deviation angle at a given distance; for a fixed deviation angle, the closer the eye tracking device is to the eyes of the target object, the smaller the deviation distance. Therefore, when the distance between the eye tracking device and the eyes of the target object is smaller than a certain threshold, the eye tracking device can enlarge the deviation angle without the deviation distance exceeding the distance threshold.
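The geometry behind this note is that an angular error theta at viewing distance d produces an on-screen deviation of d * tan(theta), so halving the distance doubles the angle budget for the same deviation distance; a short check:

```python
import math


def deviation_mm(viewing_distance_mm, deviation_angle_deg):
    """On-screen deviation produced by an angular error at a given
    viewing distance: d * tan(theta)."""
    return viewing_distance_mm * math.tan(math.radians(deviation_angle_deg))


print(deviation_mm(600, 1.0))  # ~10.5 mm at 60 cm with a 1 degree error
print(deviation_mm(300, 2.0))  # ~10.5 mm at 30 cm even with twice the angle
```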
In an optional scheme, determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination may include:
Step S40: when the parameter information includes at least the frame rate, determine that the function module combination includes a third function module, where the third function module is configured to determine the frequency at which eye images are captured;
Step S42: acquire temperature information of the eye tracking device;
Step S44: determine the system loss of the eye tracking device according to the temperature information;
Step S46: determine the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state of the eye tracking device includes at least a foreground running state and a background running state;
Step S48: determine the target algorithm combination corresponding to the third function module according to the frame rate. Optionally, when the frame rate is greater than a preset frame rate, the target algorithm combination corresponding to the third function module is determined to include a fifth algorithm, where the fifth algorithm is configured to reduce the number of eye images captured by the eye tracking device per unit time; when the frame rate is less than or equal to the preset frame rate, the target algorithm combination corresponding to the third function module is determined to include a sixth algorithm, where the sixth algorithm is configured to increase the number of eye images captured by the eye tracking device per unit time.
It should be noted that, when fast or plentiful gaze point data are not needed, the frame rate can be lowered appropriately to reduce the amount of data to be processed and the system's energy consumption, saving system resources. A frame rate corresponding to the current scene can therefore be selected to avoid wasting system resources.
Specifically, the eye tracking device is fitted with a temperature sensor to detect the temperature of the whole device. After determining the whole-device temperature, the eye tracking device judges the system loss from the change in temperature. The eye tracking device then determines the frame rate from the system loss and its running state (foreground or background), and from the frame rate decides whether to track eyes with the fifth algorithm, which lowers the frame rate, or with the sixth algorithm, which raises it. When the frame rate is greater than the preset frame rate, the eye tracking device tracks with the fifth algorithm; when the frame rate is less than or equal to the preset frame rate, it tracks with the sixth algorithm.
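Steps S42 to S48 can be sketched as follows; the nominal temperature, the loss model, and the concrete frame-rate numbers are all assumptions of this illustration.

```python
PRESET_FRAME_RATE = 30.0   # Hz; illustrative preset
NOMINAL_TEMP_C = 35.0      # illustrative nominal device temperature


def system_loss(temperature_c):
    """Treat the rise above the nominal temperature as the system loss
    (step S44); the mapping is an assumption of this sketch."""
    return max(0.0, temperature_c - NOMINAL_TEMP_C)


def frame_rate(temperature_c, foreground):
    """Step S46: a cool device running eye tracking in the foreground is
    given a higher frame rate than a hot device or a background app."""
    rate = 60.0 - 2.0 * system_loss(temperature_c)
    return rate if foreground else rate / 2.0


def select_frame_rate_algorithm(temperature_c, foreground):
    """Step S48: the fifth algorithm lowers the number of eye images
    captured per unit time, the sixth algorithm raises it."""
    if frame_rate(temperature_c, foreground) > PRESET_FRAME_RATE:
        return "fifth_algorithm_reduce_capture"
    return "sixth_algorithm_increase_capture"


print(select_frame_rate_algorithm(temperature_c=42.0, foreground=True))
```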
It should be noted that, as the above shows, the solution provided by the present application automatically adapts the eye tracking algorithms to different scenes and can select the algorithms of the corresponding function modules according to the computing power, precision, environment information, and so on of the eye tracking device, forming an adaptive system. In addition, since the present application divides eye tracking technology into a number of function modules, each function module is triggered and works according to a different scene mode. Each function module has its own maintenance mechanism, and adaptive solutions are formed by splicing functions together, which improves the flexibility of the system for both remote upgrades and independent maintenance.
Embodiment 2
According to an embodiment of the present application, another embodiment of a method for processing eye tracking information applied to a terminal is provided. Fig. 3 is a flowchart of the method for processing eye tracking information applied to a terminal according to an embodiment of the present application. As shown in Fig. 3, the method includes the following steps.
Step S302: acquire environment information.
It should be noted that the eye tracking device is a mobile device with an eye tracking function, where the mobile device may be, but is not limited to, a mobile phone, a tablet, smart glasses, or the like. In this embodiment, the mobile device has a collector that can capture the environment information of the environment in which the eye tracking device is located, for example temperature, humidity, and light intensity.
Step S304: determine, according to preset function modules, a function module combination corresponding to the environment information and a target algorithm combination corresponding to the function modules in the function module combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm.
It should be noted that the preset function modules in the eye tracking device comprise multiple function modules, and a function module combination can be obtained by arranging and combining any one or more of them; for example, a function module for extracting eye features can be combined with a function module for locating the pupil to obtain one function module combination. In addition, a single function module may correspond to multiple algorithms; for example, the algorithms corresponding to the pupil-locating function module include an algorithm for finely locating the pupil and an algorithm for coarsely locating the pupil, and combining multiple algorithms yields a target algorithm combination. A target algorithm is an algorithm commonly used in the environment in which the terminal's eye tracking device is located; using the target algorithms in that environment to process the information captured by the eye tracking device yields more accurate results.
Optionally, when the user moves from indoors to outdoors, the illumination brightness of the environment of the eye tracking device changes; the outdoor environment disturbs the eye tracking result more strongly, and the gaze point accuracy easily drifts. In this scene the eye tracking device can therefore start the capture function module and switch the first capture algorithm, which captures image information with a camera, to the second capture algorithm, which captures image information with infrared; that is, the second, infrared-based capture algorithm is taken as the target algorithm.
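A minimal sketch of this illumination-triggered swap follows; the lux threshold and the algorithm names are assumptions of this illustration.

```python
OUTDOOR_LUX = 10000.0  # illustrative threshold between indoor and outdoor light


def select_capture_algorithm(illumination_lux):
    """Bright outdoor light disturbs the gaze estimate, so the capture
    function module swaps the camera-based first capture algorithm for
    the infrared-based second capture algorithm."""
    if illumination_lux >= OUTDOOR_LUX:
        return "second_capture_algorithm_infrared"
    return "first_capture_algorithm_camera"


print(select_capture_algorithm(25000.0))  # outdoors -> infrared capture
```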
Step S306: determine a target working mode according to the function module combination and the target algorithm combination.
It should be noted that the target working mode is the mode in which, based on the function module combination, the images captured by the eye tracking device are processed using the target algorithms in the target algorithm combination. Since different environments may correspond to different function module combinations and target algorithm combinations, the corresponding target working modes may also differ between environments.
Step S308: complete the switch to the target working mode, where completing the working mode switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, the function modules and target algorithms used by the current working mode differing at least in part from those used by the target working mode.
In an optional scheme, while the user is indoors, the eye tracking device processes the user's eye images in a first working mode. When the user moves from indoors to outdoors, the illumination brightness of the environment changes; if the first working mode were still used to process the user's eye images, the resulting gaze point information might be inaccurate. In addition, since the outdoor environment disturbs the eye tracking result more strongly and the gaze point accuracy easily drifts, in this scene the working mode of the eye tracking device is switched from the first working mode to a second working mode suited to the outdoors, so as to obtain more accurate gaze point information for the user.
From the scheme defined by steps S302 to S308 above, it can be seen that eye tracking technology is adapted automatically to different environments: after the environment information of the eye tracking device is obtained, the function module combination corresponding to the environment information and the target algorithm combination corresponding to its function modules are selected from the preset function modules in the device, the target working mode is determined according to the function module combination and the target algorithm combination, and finally the eye tracking device is controlled to switch from the current working mode to the target working mode, where the function modules and target algorithms used by the current working mode differ at least in part from those used by the target working mode.
It is easy to notice that, after the environment information of the environment in which the eye tracking device is located has been determined, the target algorithms best suited to that environment information are further determined; that is, different environment information corresponds to different target algorithms. The eye tracking device therefore processes the information it captures using the target algorithms best suited to the environment information of the current environment, while algorithms that do not suit the current environment do no processing, which reduces the resource consumption of the eye tracking system.
In an optional scheme, selecting, from the preset function modules, the function module combination corresponding to the environment information and the target algorithm combination corresponding to the function modules in the function module combination may include the following steps:
Step S2020: determine parameter information according to the environment information;
Step S2022: determine, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination.
It should be noted that the parameter information includes at least one of the following: a headbox range, a frame rate, an accuracy, and a precision. The headbox range characterizes the range within which the head of the target object moves, including the forward-backward range and the left-right range; the frame rate characterizes the number of eye images captured per unit time, for example a frame rate of 30 Hz means 30 frames are captured per second; the accuracy characterizes the deviation between the gaze point position of the target object and the actual position of the target object; the precision characterizes the dispersion of the gaze point positions, for example the root mean square of consecutive samples may be taken as the precision.
It should be noted that the process of determining, according to the parameter information, the function module combination and the target algorithm combination corresponding to its function modules, and of processing the information captured by the eye tracking device with the target algorithms of the function modules in the combination, is the same as that provided in Embodiment 1 and is not repeated here.
Embodiment 3
According to an embodiment of the present application, an embodiment of a device for processing eye tracking information applied to a terminal is further provided. It should be noted that this device can execute the method for processing eye tracking information applied to a terminal provided in Embodiment 1. Fig. 2 is a schematic structural diagram of the device for processing eye tracking information applied to a terminal according to an embodiment of the present application. As shown in Fig. 2, the device includes: an acquisition module 201, a selection module 203, a determination module 205, and a switching module 207.
The acquisition module 201 is configured to acquire scene information; the selection module 203 is configured to select, from the preset function modules, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the function module combination, where the function module combination includes at least one function module and the target algorithm combination includes at least one target algorithm; the determination module 205 is configured to determine the target working mode according to the function module combination and the target algorithm combination; and the switching module 207 is configured to complete the switch to the target working mode, where completing the working mode switch includes: controlling the eye tracking device to switch from the current working mode to the target working mode, the function modules and target algorithms used by the current working mode differing at least in part from those used by the target working mode.
It should be noted that the acquisition module 201, selection module 203, determination module 205, and switching module 207 correspond to steps S102 to S108 in Embodiment 1; the examples and application scenes implemented by the four modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above.
In an optional scheme, the determination module includes a first determination module and a second determination module. The first determination module is configured to determine parameter information according to the scene information, where the parameter information includes at least one of the following: a headbox range, a frame rate, an accuracy, and a precision, the headbox range characterizing the range within which the head of the target object moves, the frame rate characterizing the number of eye images captured per unit time, the accuracy characterizing the deviation between the gaze point position of the target object and the actual position of the target object, and the precision characterizing the dispersion of the gaze point positions; the second determination module is configured to determine, according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination.
It should be noted that the first determination module and the second determination module correspond to steps S1020 to S1022 in Embodiment 1; the examples and application scenes implemented by the two modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above.
In an optional scheme, the second determination module includes a third determination module, a judgment module, a first acquisition module, and a fourth determination module. The third determination module is configured to determine, when the parameter information includes at least the headbox range, that the function module combination includes a first function module, where the first function module is configured to extract eye features; the judgment module is configured to determine whether the eye tracking device is in a moving state; the first acquisition module is configured to acquire, when the eye tracking device is in a moving state, the movement speed of the target object within the headbox range; and the fourth determination module is configured to determine the target algorithm combination corresponding to the first function module according to the movement speed.
It should be noted that the third determination module, judgment module, first acquisition module, and fourth determination module correspond to steps S20 to S26 in Embodiment 1; the examples and application scenes implemented by the four modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above.
In an optional scheme, the fourth determination module includes a fifth determination module and a sixth determination module. The fifth determination module is configured to determine, when the movement speed is greater than the preset speed, that the target algorithm combination corresponding to the first function module includes the first algorithm, where the first algorithm is configured to acquire the eye features of the target object from the full-frame image; the sixth determination module is configured to determine, when the movement speed is less than or equal to the preset speed, that the target algorithm combination corresponding to the first function module includes the second algorithm, where the second algorithm is configured to acquire the eye features of the target object from the eye image.
In an optional scheme, the second determination module includes a seventh determination module, a second acquisition module, a comparison module, an eighth determination module, and a ninth determination module. The seventh determination module is configured to determine, when the parameter information includes at least the accuracy, that the function module combination includes a second function module, where the second function module is configured to locate the pupil; the second acquisition module is configured to acquire the distance between the eye tracking device and the eyes of the target object; the comparison module is configured to compare the distance with the preset distance to obtain a comparison result; the eighth determination module is configured to determine the accuracy according to the comparison result; and the ninth determination module is configured to determine the target algorithm combination corresponding to the second function module according to the accuracy.
It should be noted that the seventh determination module, second acquisition module, comparison module, eighth determination module, and ninth determination module correspond to steps S30 to S38 in Embodiment 1; the examples and application scenes implemented by the five modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above.
In an optional scheme, the ninth determination module includes a tenth determination module and an eleventh determination module. The tenth determination module is configured to determine, when the accuracy is greater than the preset accuracy, that the target algorithm combination corresponding to the second function module includes the third algorithm and the fourth algorithm, where the third algorithm is configured to coarsely locate the pupil and the fourth algorithm is configured to finely locate the pupil; the eleventh determination module is configured to determine, when the accuracy is less than or equal to the preset accuracy, that the target algorithm combination corresponding to the second function module includes the third algorithm.
In an optional scheme, the second determination module includes a twelfth determination module, a third acquisition module, a thirteenth determination module, a fourteenth determination module, and a fifteenth determination module. The twelfth determination module is configured to determine, when the parameter information includes at least the frame rate, that the function module combination includes a third function module, where the third function module is configured to determine the frequency at which eye images are captured; the third acquisition module is configured to acquire the temperature information of the eye tracking device; the thirteenth determination module is configured to determine the system loss of the eye tracking device according to the temperature information; the fourteenth determination module is configured to determine the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, where the running state of the eye tracking device includes at least a foreground running state and a background running state; and the fifteenth determination module is configured to determine the target algorithm combination corresponding to the third function module according to the frame rate.
It should be noted that the twelfth determination module, third acquisition module, thirteenth determination module, fourteenth determination module, and fifteenth determination module correspond to steps S40 to S48 in Embodiment 1; the examples and application scenes implemented by the five modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above.
In an optional scheme, the fifteenth determination module includes a sixteenth determination module and a seventeenth determination module. The sixteenth determination module is configured to determine, when the frame rate is greater than the preset frame rate, that the target algorithm combination corresponding to the third function module includes the fifth algorithm, where the fifth algorithm is configured to reduce the number of eye images captured by the eye tracking device per unit time; the seventeenth determination module is configured to determine, when the frame rate is less than or equal to the preset frame rate, that the target algorithm combination corresponding to the third function module includes the sixth algorithm, where the sixth algorithm is configured to increase the number of eye images captured by the eye tracking device per unit time.
Embodiment 4
According to another aspect of the embodiments of the present application, a storage medium is further provided, the storage medium including a stored program, where the program executes the method for processing eye tracking information applied to a terminal provided in Embodiment 1.
Embodiment 5
According to another aspect of the embodiments of the present application, a processor is further provided, the processor being configured to run a program, where the program, when running, executes the method for processing eye tracking information applied to a terminal provided in Embodiment 1.
The serial numbers of the above embodiments of the present application are for description only and do not indicate that one embodiment is better than another.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units may be a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above are only preferred implementations of the present application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the scope of protection of the present application.
Industrial Applicability
The solution provided by the embodiments of the present application can be applied to eye tracking technology. By automatically adapting eye tracking technology to different scenes, it solves the technical problem of the high resource consumption of existing eye tracking systems and reduces the resource consumption of the eye tracking system.

Claims (13)

  1. A method for processing eye tracking information applied to a terminal, comprising:
    acquiring scene information;
    determining, according to preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the function module combination, wherein the function module combination comprises at least one function module and the target algorithm combination comprises at least one target algorithm;
    determining a target working mode according to the function module combination and the target algorithm combination; and
    completing the switch to the target working mode.
  2. The method according to claim 1, wherein completing the working mode switch comprises: controlling an eye tracking device to switch from a current working mode to the target working mode, the function modules and target algorithms used by the current working mode differing at least in part from the function modules and target algorithms used by the target working mode.
  3. The method according to claim 1, wherein determining, according to the preset function modules, the function module combination corresponding to the scene information and the target algorithm combination corresponding to the function modules in the function module combination comprises:
    determining parameter information according to the scene information, wherein the parameter information comprises at least one of the following: a headbox range, a frame rate, an accuracy, and a precision, the headbox range characterizing the range within which the head of a target object moves, the frame rate characterizing the number of eye images captured per unit time, the accuracy characterizing the deviation between the gaze point position of the target object and the actual position of the target object, and the precision characterizing the dispersion of the gaze point positions; and
    determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination.
  4. The method according to claim 3, wherein determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination comprises:
    when the parameter information comprises at least the headbox range, determining that the function module combination comprises a first function module, wherein the first function module is configured to extract eye features;
    determining whether an eye tracking device is in a moving state;
    when the eye tracking device is in the moving state, acquiring the movement speed of the target object within the headbox range; and
    determining the target algorithm combination corresponding to the first function module according to the movement speed.
  5. The method according to claim 4, wherein determining the target algorithm combination corresponding to the first function module according to the movement speed comprises:
    when the movement speed is greater than a preset speed, determining that the target algorithm combination corresponding to the first function module comprises a first algorithm, wherein the first algorithm is configured to acquire the eye features of the target object from the full-frame image; and
    when the movement speed is less than or equal to the preset speed, determining that the target algorithm combination corresponding to the first function module comprises a second algorithm, wherein the second algorithm is configured to acquire the eye features of the target object from the eye image.
  6. The method according to claim 3, wherein determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination comprises:
    when the parameter information comprises at least the accuracy, determining that the function module combination comprises a second function module, wherein the second function module is configured to locate the pupil;
    acquiring the distance between an eye tracking device and the eyes of the target object;
    comparing the distance with a preset distance to obtain a comparison result;
    determining the accuracy according to the comparison result; and
    determining the target algorithm combination corresponding to the second function module according to the accuracy.
  7. The method according to claim 6, wherein determining the target algorithm combination corresponding to the second function module according to the accuracy comprises:
    when the accuracy is greater than a preset accuracy, determining that the target algorithm combination corresponding to the second function module comprises a third algorithm and a fourth algorithm, wherein the third algorithm is configured to coarsely locate the pupil and the fourth algorithm is configured to finely locate the pupil; and
    when the accuracy is less than or equal to the preset accuracy, determining that the target algorithm combination corresponding to the second function module comprises the third algorithm.
  8. The method according to claim 3, wherein determining, from the preset function modules according to the parameter information, the function module combination and the target algorithm combination corresponding to the function modules in the function module combination comprises:
    when the parameter information comprises at least the frame rate, determining that the function module combination comprises a third function module, wherein the third function module is configured to determine the frequency at which eye images are captured;
    acquiring temperature information of an eye tracking device;
    determining the system loss of the eye tracking device according to the temperature information;
    determining the frame rate of the eye tracking device according to the system loss and the running state of the eye tracking device, wherein the running state of the eye tracking device comprises at least a foreground running state and a background running state; and
    determining the target algorithm combination corresponding to the third function module according to the frame rate.
  9. The method according to claim 8, wherein determining the target algorithm combination corresponding to the third function module according to the frame rate comprises:
    when the frame rate is greater than a preset frame rate, determining that the target algorithm combination corresponding to the third function module comprises a fifth algorithm, wherein the fifth algorithm is configured to reduce the number of eye images captured by the eye tracking device per unit time; and
    when the frame rate is less than or equal to the preset frame rate, determining that the target algorithm combination corresponding to the third function module comprises a sixth algorithm, wherein the sixth algorithm is configured to increase the number of eye images captured by the eye tracking device per unit time.
  10. A method for processing eye tracking information applied to a terminal, comprising:
    acquiring environment information;
    determining, according to preset function modules, a function module combination corresponding to the environment information and a target algorithm combination corresponding to the function modules in the function module combination, wherein the function module combination comprises at least one function module and the target algorithm combination comprises at least one target algorithm;
    determining a target working mode according to the function module combination and the target algorithm combination; and
    completing the switch to the target working mode.
  11. A device for processing eye tracking information applied to a terminal, comprising:
    an acquisition module configured to acquire scene information;
    a selection module configured to determine, according to preset function modules, a function module combination corresponding to the scene information and a target algorithm combination corresponding to the function modules in the function module combination, wherein the function module combination comprises at least one function module and the target algorithm combination comprises at least one target algorithm;
    a determination module configured to determine a target working mode according to the function module combination and the target algorithm combination; and
    a switching module configured to complete the switch to the target working mode.
  12. A storage medium, the storage medium comprising a stored program, wherein the program executes the method for processing eye tracking information applied to a terminal according to any one of claims 1 to 10.
  13. A processor, the processor being configured to run a program, wherein the program, when running, executes the method for processing eye tracking information applied to a terminal according to any one of claims 1 to 10.
PCT/CN2019/097659 2018-09-30 2019-07-25 Method and device for processing eye tracking information applied to a terminal WO2020063077A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811163395.1A CN109460706B (zh) 2018-09-30 2018-09-30 Method and device for processing eye tracking information applied to a terminal
CN201811163395.1 2018-09-30

Publications (1)

Publication Number Publication Date
WO2020063077A1 true WO2020063077A1 (zh) 2020-04-02

Family

ID=65607283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097659 WO2020063077A1 (zh) 2018-09-30 2019-07-25 Method and device for processing eye tracking information applied to a terminal

Country Status (2)

Country Link
CN (1) CN109460706B (zh)
WO (1) WO2020063077A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460706B (zh) * 2018-09-30 2021-03-23 北京七鑫易维信息技术有限公司 Method and device for processing eye tracking information applied to a terminal
CN109960412B (zh) * 2019-03-22 2022-06-07 北京七鑫易维信息技术有限公司 Method for adjusting the gaze area based on touch control, and terminal device
CN110221696B (zh) * 2019-06-11 2021-06-08 Oppo广东移动通信有限公司 Eyeball tracking method and related product
CN110225252B (zh) * 2019-06-11 2021-07-23 Oppo广东移动通信有限公司 Photographing control method and related product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104360732A (zh) * 2014-10-16 2015-02-18 南京大学 Compensation method and device for improving the accuracy of a gaze tracking system
CN105183538A (zh) * 2014-06-03 2015-12-23 联想(北京)有限公司 Information processing method and electronic device
CN106125919A (zh) * 2016-06-20 2016-11-16 联想(北京)有限公司 State control method and electronic device
CN106708251A (zh) * 2015-08-12 2017-05-24 天津电眼科技有限公司 Smart glasses control method based on eyeball tracking technology
US20170188823A1 (en) * 2015-09-04 2017-07-06 University Of Massachusetts Eye tracker system and methods for detecting eye parameters
CN109460706A (zh) * 2018-09-30 2019-03-12 北京七鑫易维信息技术有限公司 Method and device for processing eye tracking information applied to a terminal

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391888B2 (en) * 2003-05-30 2008-06-24 Microsoft Corporation Head pose assessment methods and systems
US7629877B2 (en) * 2006-12-19 2009-12-08 Matvey Lvovskiy Multifunctional collimator indicator
US9829708B1 (en) * 2014-08-19 2017-11-28 Boston Incubator Center, LLC Method and apparatus of wearable eye pointing system
US10437327B2 (en) * 2015-05-08 2019-10-08 Apple Inc. Eye tracking device and method for operating an eye tracking device
US10152121B2 (en) * 2016-01-06 2018-12-11 Facebook Technologies, Llc Eye tracking through illumination by head-mounted displays
CN106406543A (zh) * 2016-11-23 2017-02-15 长春中国光学科学技术馆 Human-eye-controlled VR scene change device
CN106873778B (zh) * 2017-01-23 2020-04-28 深圳超多维科技有限公司 Application running control method and device, and virtual reality device
CN107390863B (zh) * 2017-06-16 2020-07-07 北京七鑫易维信息技术有限公司 Device control method and device, electronic device, and storage medium
CN107992378B (zh) * 2017-10-30 2019-07-26 维沃移动通信有限公司 File processing method and mobile terminal


Also Published As

Publication number Publication date
CN109460706A (zh) 2019-03-12
CN109460706B (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2020063077A1 (zh) Method and device for processing eye tracking information applied to a terminal
US20220342475A1 (en) Terminal control method and terminal
CN110058694B (zh) Method for training a gaze tracking model, and gaze tracking method and device
CN110163806B (zh) Image processing method and device, and storage medium
CN113362775B (zh) Display screen control method and device, electronic device, and storage medium
KR102121592B1 (ko) Eyesight protection method and device
CN108322663B (zh) Photographing method, device, terminal, and storage medium
CN103885589A (zh) Eye movement tracking method and device
WO2016192189A1 (zh) Method and device for reducing the power consumption of a terminal device
CN104754218A (zh) Intelligent photographing method and terminal
CN112650405B (zh) Interaction method for an electronic device, and electronic device
CN106339086A (zh) Method and device for adjusting screen fonts, and electronic device
CN113709385B (zh) Video processing method and device, computer device, and storage medium
KR20240024277A (ko) Gaze classification
WO2017206383A1 (zh) Terminal control method, control device, and terminal
KR102163996B1 (ko) Method and device for improving the performance of a contactless recognition function of a user device
CN105700277B (zh) Projection brightness adjustment method and device
CN107995417B (zh) Photographing method and mobile terminal
CN108196700B (zh) Display processing method, mobile terminal, and computer-readable storage medium
CN117455989A (zh) Indoor scene SLAM tracking method and device, head-mounted device, and medium
CN108647647B (zh) Air conditioner control method, control device, and air conditioner
CN105635582A (zh) Photographing control method and photographing control terminal based on eye feature recognition
WO2022011534A1 (zh) Shooting control method and device, smart device, and computer-readable storage medium
CN114510183A (zh) Animation effect duration management method and electronic device
CN114339032A (zh) Control method, intelligent terminal, and storage medium

Legal Events

Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19864299; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/07/2021))
122 Ep: pct application non-entry in european phase (Ref document number: 19864299; Country of ref document: EP; Kind code of ref document: A1)