CN112748797B - Eyeball tracking method and related equipment

Eyeball tracking method and related equipment

Info

Publication number
CN112748797B
CN112748797B
Authority
CN
China
Prior art keywords
eye image
resolution
eye
eyeball tracking
image
Prior art date
Legal status
Active
Application number
CN201911050893.XA
Other languages
Chinese (zh)
Other versions
CN112748797A
Inventor
王文东
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911050893.XA
Publication of CN112748797A
Application granted
Publication of CN112748797B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction

Abstract

The application discloses an eyeball tracking method and related equipment, applied to an electronic device. The method includes: acquiring a first eye image; when the resolution of the first eye image is smaller than a first preset resolution threshold, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, where the resolution of the second eye image is greater than that of the first eye image; and running an eyeball tracking service according to the second eye image to complete a preset function. By implementing the embodiments of the application, a low-resolution eye image is converted into a high-resolution eye image through the SRGAN super-resolution technique and is then used for the eyeball tracking application, thereby improving the accuracy and precision of eyeball tracking.

Description

Eyeball tracking method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an eyeball tracking method and related devices.
Background
Eyeball tracking is a machine vision technique in which a device captures an image of a user's eye, analyzes the eye image with an algorithm, and finally obtains the position at which the user is gazing.
At present, however, eyeball tracking is mainly applied in the medical industry and in AR and VR glasses, and is rarely applied on mobile terminals. One important reason is that the number of eye pixels captured by the camera of a mobile terminal decreases as the distance between the user and the terminal increases, which reduces the eye tracking recognition accuracy.
Disclosure of Invention
The embodiment of the application provides an eyeball tracking method and related equipment, wherein a low-resolution eye image is subjected to super-resolution reconstruction processing through an SRGAN super-resolution technology to obtain a high-resolution eye image, so that the eyeball tracking accuracy and precision are improved.
In a first aspect, an embodiment of the present application provides an eyeball tracking method applied to an electronic device, where the method includes:
acquiring a first eye image;
when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image;
and running an eyeball tracking service according to the second eye image to complete a preset function.
In a second aspect, an embodiment of the present application provides an eyeball tracking apparatus applied to an electronic device, the apparatus including:
an acquisition unit configured to acquire a first eye image;
the processing unit is used for inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image when the resolution of the first eye image is smaller than a first preset resolution threshold value, wherein the resolution of the second eye image is larger than that of the first eye image;
and the operation unit is used for operating the eyeball tracking service according to the second eye image so as to complete a preset function.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement part or all of the steps described in the method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, a first eye image is acquired first; when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image; and then running an eyeball tracking service according to the second eye image to complete a preset function. Therefore, the low-resolution eye image is converted into the high-resolution eye image for the eyeball tracking application, so that the eyeball tracking accuracy and precision are improved.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of hardware of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a software architecture diagram of an eyeball tracking method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an eyeball tracking method provided in an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating another eyeball tracking method according to an embodiment of the present application;
FIG. 5 is a diagram illustrating human-computer interaction of an eye tracking method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an eyeball tracking device according to an embodiment of the application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described in this specification can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Electronic devices may include a variety of handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smart bands, pedometers), computing devices or other processing devices communicatively connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so forth, having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Referring to fig. 1, fig. 1 is a schematic structural diagram of the hardware of an electronic device according to an embodiment of the present disclosure. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a Random Access Memory (RAM), a camera, a sensor, and an infrared lamp (IR), among others. The memory, the signal processor, the display screen, the speaker, the microphone, the RAM, the camera, the sensor, and the IR are connected to the processor, and the transceiver is connected to the signal processor.
The display screen may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, an Active-Matrix Organic Light-Emitting Diode (AMOLED) display, or the like.
The camera may be a common camera or an infrared camera, and is not limited herein. The camera may be a front camera or a rear camera, and is not limited herein.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, fingerprint sensors, pressure sensors, etc. Among them, the light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a light sensitive element and an analog to digital converter. The photosensitive element is used for converting collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The processor is the control center of the electronic device; it connects all parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes its data by running or executing the software programs and/or modules stored in the memory and calling the data stored in the memory, thereby monitoring the electronic device as a whole.
The processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes the various functional applications and data processing of the electronic device by running the software programs and/or modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area can store an operating system, a software program required by at least one function, and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The IR is used for illuminating the eye to generate a bright spot (glint) on the eye, and the camera is used for shooting the eye to obtain an image comprising the bright spot and the pupil.
Referring to fig. 2, fig. 2 is a software architecture diagram of an eyeball tracking method according to an embodiment of the present application.
The software architecture diagram includes the following four layers:
the first layer is an eye tracking application, including applications such as e-books, browsers, starters, systems, unlocking, mobile payments, point of interest tracking, and the like. The OEyeTracker SDK is an SDK interface provided for the application, is responsible for providing the fixation point acquisition and input api for the common application, and is in the form of a jar/aar package.
The second layer is the eye tracking service (OEyeTrackerService), which includes an eye tracking authorization module (OEyeTrackerAuthority), an eye tracking strategy module (OEyeTrackerStrategy), an eye tracking core algorithm module (OEyeTrackerAlgo), an eye tracking parameter module (OEyeTrackerParams), an eye resolution enhancement algorithm module (OEyeSRAlgo), and the like. The eye tracking service (OEyeTrackerService) of the second layer is connected to the applications of the first layer through the SDK interface. The second layer further comprises a camera NDK interface (CameraNDKInterface) and a camera service (CameraService), which are connected to each other; meanwhile, the camera NDK interface (CameraNDKInterface) is connected to the eye tracking service (OEyeTrackerService).
The eye tracking core algorithm module (OEyeTrackerAlgo) comprises two parts: one part is a calibration algorithm, and the other part is a gaze point estimation algorithm.
The eye tracking strategy module (OEyeTrackerStrategy) handles post-processing of the algorithm output, such as filtering, gaze point jumping, gaze point switching monitoring, gaze point input, and the like.
The eye tracking authorization module (OEyeTrackerAuthority) converts authentication into an input action, calls back each module, and is responsible for deciding whether an authentication requester is allowed.
The eye tracking parameter module (OEyeTrackerParams) is a parameter configuration module, responsible for parsing the configuration and hot-updating the configuration.
The eye resolution enhancement algorithm module (OEyeSRAlgo) is an algorithm module for improving the resolution of the eye region in an eye tracking frame; it can enhance a low-resolution eye image so that the position and direction of the pupil can be identified.
The third layer comprises the Google HAL Interface, the Qualcomm HAL Interface, CamX, Chi-cdk, and the like, wherein the Google HAL Interface is connected to the CameraService of the second layer, the Qualcomm HAL Interface is connected to the Google HAL Interface, and CamX is connected to the Qualcomm HAL Interface and Chi-cdk respectively.
The fourth layer includes an RGB sensor, a Digital Signal Processor (DSP), an infrared sensor (IR sensor), a Laser, a Light Emitting Diode (LED), and the like, with the IR sensor connected to CamX of the third layer. The connection between OEyeTrackerService and the OEyeTracker SDK, the connection between CameraService and CameraNDKInterface, and the connection between the Google HAL Interface and CameraService all go through the Binder architecture.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an eye tracking method according to an embodiment of the present disclosure. As shown in fig. 3, the eye tracking method is applied to an electronic device, including the electronic device shown in fig. 1; it is also applicable to the software architecture as shown in fig. 2. Wherein, the eyeball tracking method comprises the following steps:
s301, the electronic equipment acquires a first eye image.
Wherein the electronic device comprises a camera, and the acquiring the first eye image comprises: when detecting that an eyeball tracking application program requests to start, or when detecting that the eyeball tracking application program requests to enable a preset function, or when detecting that the eyeball tracking application program requests to acquire eyeball gaze position information, starting the eyeball tracking service; and acquiring the first eye image through the camera according to the eyeball tracking service.
As can be appreciated, the eye tracking applications include e-books, browsers, launchers, system applications, unlocking, mobile payment, and the like. Taking an e-book as an example: when the e-book, whose pages are turned by eye tracking, requests to start; or when the e-book requests to enable the eye-tracking page-turning function; or when the eye-tracking page-turning function is already enabled and the e-book requests to acquire eyeball gaze position information, the eyeball tracking service on the electronic device is started, and the service acquires the first eye image by turning on the camera for shooting.
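The trigger-and-capture logic described above can be illustrated with the following minimal sketch; the class, method, and request-type names are hypothetical and not part of the disclosed implementation:

```python
class EyeTrackingService:
    """Started on any of the three triggers named above; it then drives the
    camera to capture the first eye image."""

    TRIGGERS = {"app_start", "enable_preset_function", "get_gaze_position"}

    def __init__(self, camera):
        self.camera = camera
        self.running = False

    def on_app_request(self, request_type):
        if request_type in self.TRIGGERS:
            self.running = True
            return self.camera.capture()   # the first eye image
        return None
```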
S302, when the resolution of the first eye image is smaller than a first preset resolution threshold, the electronic equipment inputs the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image.
The image super-resolution reconstruction model consists of a first model and a second model, the first model and the second model are obtained by adopting eye image training in different resolution stages, and the first eye image is input into a pre-trained image super-resolution reconstruction model to obtain a second eye image, and the method comprises the following steps: judging whether the resolution of the first eye image is larger than a second preset resolution threshold value or not; if the resolution of the first eye image is larger than a second preset resolution threshold, inputting the first eye image into a first model to obtain a second eye image; and if the resolution of the first eye image is not greater than a second preset resolution threshold, inputting the first eye image into a second model to obtain a second eye image.
For example, the resolution of the first eye image acquired by the electronic device may be 176 × 220, 120 × 160, 128 × 128, 128 × 144, 128 × 160, 162 × 216, 208 × 208, 208 × 320, 220 × 220, 240 × 320, 240 × 400, 320 × 240, 352 × 416, 640 × 480, 800 × 600, 1024 × 768, or 1600 × 1200; the first preset resolution threshold may be set to 800 × 600, and the second preset resolution threshold may be set to 208 × 208. When the resolution of the acquired first eye image is greater than 800 × 600, the first eye image can be used directly for eye tracking; when the resolution of the acquired first eye image is not greater than 800 × 600, the resolution needs to be raised. Further, when the resolution of the acquired first eye image is greater than 208 × 208, the first eye image is input into the first model to obtain a high-resolution second eye image; when the resolution of the acquired first eye image is not greater than 208 × 208, the first eye image is input into the second model to obtain a high-resolution second eye image. The first model is trained using eye images with a resolution greater than 208 × 208, and the second model is trained using eye images with a resolution not greater than 208 × 208.
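The routing between the two models in this example can be sketched as follows; comparing resolutions by total pixel count is an assumption, and first_model and second_model stand in for the trained SRGAN generators:

```python
# Thresholds from the example above; pixel-count comparison is an assumption.
FIRST_THRESHOLD = 800 * 600
SECOND_THRESHOLD = 208 * 208

def reconstruct(first_eye_image, width, height, first_model, second_model):
    pixels = width * height
    if pixels > FIRST_THRESHOLD:
        return first_eye_image               # already usable for eye tracking
    if pixels > SECOND_THRESHOLD:
        return first_model(first_eye_image)  # model trained on images > 208x208
    return second_model(first_eye_image)     # model trained on images <= 208x208
```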
Wherein the first model comprises a first feature extraction network, a first generation network and a first discriminant network, the first generation network comprises a first residual network layer and a first upsampling layer, and the inputting the first eye image into the first model to obtain the second eye image comprises: inputting the first eye image into the first feature extraction network to obtain a first feature layer; inputting the first feature layer into the first residual error network layer to obtain a second feature layer; inputting the second characteristic layer into the first up-sampling layer for up-sampling, and performing characteristic extraction on the up-sampled second characteristic layer to obtain a third eye image; inputting the third eye image into the first feature extraction network to obtain a third feature layer of the third eye image; inputting the first characteristic layer and the third characteristic layer into the first discrimination network for comparison; when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is more than half, outputting the third eye image as the second eye image; and when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is not more than half, returning to execute the step of inputting the first eye image into the first characteristic extraction network to obtain a first characteristic layer.
It is understood that the second model includes a second feature extraction network, a second generation network, and a second discriminant network, the second generation network including a second residual network layer and a second upsampling layer, and that inputting the first eye image into the second model to obtain the second eye image includes: inputting the first eye image into the second feature extraction network to obtain a first feature layer; inputting the first feature layer into the second residual network layer to obtain a second feature layer; inputting the second feature layer into the second upsampling layer for upsampling, and performing feature extraction on the upsampled second feature layer to obtain a third eye image; inputting the third eye image into the second feature extraction network to obtain a third feature layer of the third eye image; inputting the first feature layer and the third feature layer into the second discriminant network for comparison; when the comparison result shows that the probability that the first feature layer and the third feature layer are the same feature layer is more than half, outputting the third eye image as the second eye image; and when the comparison result shows that the probability that the first feature layer and the third feature layer are the same feature layer is not more than half, returning to the step of inputting the first eye image into the second feature extraction network to obtain a first feature layer.
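The generate-and-verify flow described for both models can be sketched as follows; the callables for the feature extraction network, residual network layer, upsampling layer, image reconstruction step, and discriminant network are placeholders, and the bounded retry loop is only an illustration of the "return and re-execute" step:

```python
def super_resolve(first_eye_image, feature_extractor, residual_net,
                  upsample_layer, reconstruct_image, discriminator,
                  max_passes=5):
    # One generate-and-verify pass per iteration, mirroring the text:
    # extract -> residual -> upsample -> rebuild image -> re-extract -> compare.
    for _ in range(max_passes):
        first_features = feature_extractor(first_eye_image)    # first feature layer
        second_features = residual_net(first_features)         # second feature layer
        upsampled = upsample_layer(second_features)            # upsampling
        third_eye_image = reconstruct_image(upsampled)         # candidate image
        third_features = feature_extractor(third_eye_image)    # third feature layer
        same_probability = discriminator(first_features, third_features)
        if same_probability > 0.5:
            return third_eye_image   # accepted as the second eye image
        # Otherwise the text returns to the extraction step; in practice the
        # generator would be adjusted between passes (omitted here).
    return third_eye_image
```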
Before the first eye image is input into the pre-trained image super-resolution reconstruction model to obtain the second eye image, the method further comprises: acquiring a plurality of high-resolution eye images; preprocessing the plurality of high-resolution eye images to obtain a plurality of corresponding low-resolution eye images; dividing the low-resolution eye images whose resolution is greater than the second preset resolution threshold into a first data set, and dividing the low-resolution eye images whose resolution is not greater than the second preset resolution threshold into a second data set; adding each high-resolution eye image to the data set where its corresponding low-resolution eye image is located, to obtain a first training set and a second training set; training an SRGAN network on the first training set and on the second training set respectively to obtain the first model and the second model; and combining the first model and the second model to obtain the image super-resolution reconstruction model.
For example, assume the first preset resolution threshold is set to 800 × 600, i.e., a resolution greater than 800 × 600 counts as high resolution, and the second preset resolution threshold is set to 208 × 208. A plurality of high-resolution eye images with a resolution greater than 800 × 600 may be acquired and converted into low-resolution eye images by resolution-modification software. The same high-resolution eye image A can be converted into an eye image A1 with a resolution greater than 208 × 208 but less than 800 × 600, and a low-resolution eye image A2 with a resolution not greater than 208 × 208. A1 and A are combined into a training set for training the first model using the SRGAN network; A2 and A are combined into a training set for training the second model using the SRGAN network. In the same way, the plurality of high-resolution eye images are converted into low-resolution eye images at different resolution stages and then grouped with their high-resolution counterparts to form the training sets for training the first model and the second model.
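The construction of the two training sets in this example can be sketched as follows, assuming Pillow is used for downscaling; the target sizes 640 × 480 and 160 × 120 are illustrative choices within the stated resolution ranges:

```python
from PIL import Image  # assumed dependency for downscaling

def build_training_sets(high_res_images):
    """Build the two training sets described above from high-resolution eye
    images (> 800x600). Each entry is a (low_res, high_res) pair."""
    first_training_set, second_training_set = [], []
    for img in high_res_images:                  # img: PIL.Image.Image
        a1 = img.resize((640, 480))              # between 208x208 and 800x600
        first_training_set.append((a1, img))
        a2 = img.resize((160, 120))              # not greater than 208x208
        second_training_set.append((a2, img))
    return first_training_set, second_training_set

# Each set would then be used to train its own SRGAN generator, and the two
# generators combined into the image super-resolution reconstruction model.
```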
And S303, the electronic equipment operates an eyeball tracking service according to the second eye image to complete a preset function.
Wherein said running an eyeball tracking service according to the second eye image to complete a preset function comprises: running the eyeball tracking service according to the second eye image to obtain eyeball tracking data of the second eye image; sending the eyeball tracking data of the second eye image to the eyeball tracking application program; and executing the eyeball tracking application program according to the eyeball tracking data of the second eye image so as to complete the preset function.
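This hand-off can be sketched as follows; the service, application, and method names are hypothetical and only illustrate the three steps above:

```python
def complete_preset_function(second_eye_image, tracking_service, eye_tracking_app):
    # Run the eyeball tracking service on the reconstructed image to obtain
    # the eyeball tracking data (e.g. an estimated gaze point).
    gaze_data = tracking_service.estimate_gaze(second_eye_image)
    # Send the eyeball tracking data to the eyeball tracking application.
    eye_tracking_app.on_gaze_data(gaze_data)
    # The application executes with that data to complete its preset
    # function, e.g. turning an e-book page when the gaze reaches the edge.
    eye_tracking_app.execute(gaze_data)
```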
It can be seen that, the eyeball tracking method provided by the embodiment of the application firstly acquires a first eye image; when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image; and then running an eyeball tracking service according to the second eye image to complete a preset function. Therefore, the low-resolution eye image is converted into the high-resolution eye image for the eyeball tracking application, so that the eyeball tracking accuracy and precision are improved.
In one possible embodiment, the electronic device comprises an infrared lamp, and before sending the eye tracking data of the second eye image to the eye tracking application, the method further comprises: turning on the infrared lamp to irradiate the eyes of a shooting target according to the eyeball tracking service, wherein the infrared lamp is used for generating bright spots on the eyes of the shooting target; shooting eyes of the shooting target including the bright spots through the camera to obtain N fourth eye images; obtaining N groups of calibration coordinates according to the N fourth eye images, wherein the N groups of calibration coordinate data correspond to the N fourth eye images one by one, and each group of calibration coordinate data comprises a pupil coordinate and a bright spot coordinate; obtaining N calibration vectors according to the N groups of calibration coordinate data, wherein the N calibration vectors correspond to the N groups of calibration coordinate data one by one, and each calibration vector is determined by corresponding pupil coordinates and bright spot coordinates; calibrating the eye tracking data of the second eye image according to the N calibration vectors.
Wherein calibrating the eye tracking data for the second eye image according to the N calibration vectors comprises: comparing the N calibration vectors with the M reference vectors, calculating the similarity value of each calibration vector and each reference vector, and accumulating all the similarity values; and when the cumulative value of the similarity values reaches a preset cumulative threshold value, calibrating the eyeball tracking data of the second eye image according to the N calibration vectors.
Wherein the obtaining of the M reference vectors comprises: when the initial calibration of eyeball tracking is performed, turning on the infrared lamp according to the eyeball tracking service to irradiate the eyes of a calibration user to generate bright spots, and shooting the eyes of the calibration user including the bright spots through the camera to obtain M fifth eye images; obtaining M groups of reference coordinate data according to the M fifth eye images, wherein the M groups of reference coordinate data correspond to the M fifth eye images one by one, and each group of reference coordinate data comprises a reference pupil coordinate and a reference bright spot coordinate; and obtaining the M reference vectors according to the M groups of reference coordinate data.
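The pupil-glint calibration vectors and the similarity accumulation described in the preceding paragraphs can be sketched as follows; the text does not name a similarity measure, so cosine similarity is used here only as an assumed example:

```python
import math

def pupil_glint_vector(pupil_xy, glint_xy):
    # Each calibration or reference vector is determined by a pupil
    # coordinate and a bright spot (glint) coordinate.
    return (glint_xy[0] - pupil_xy[0], glint_xy[1] - pupil_xy[1])

def cosine_similarity(v1, v2):
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return dot / norm if norm else 0.0

def should_calibrate(calibration_vectors, reference_vectors, accumulation_threshold):
    # Accumulate the similarity value of every (calibration, reference) pair;
    # the gaze data is calibrated only once the accumulated value reaches the
    # preset accumulation threshold, as described above.
    total = sum(cosine_similarity(c, r)
                for c in calibration_vectors
                for r in reference_vectors)
    return total >= accumulation_threshold
```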
As can be seen, in this example, before sending the eyeball tracking data of the second eye image to the eyeball tracking application program, multiple fourth eye images are obtained, a calibration vector is obtained according to the fourth eye images, and the eyeball tracking data of the second eye image is calibrated by using the calibration vector, so that the accuracy and precision of eyeball tracking are further improved.
In one possible embodiment, the electronic device includes a sensor, the method further comprising: determining, with the sensor, a distance between the electronic device and an eye of a photographic target when acquiring the first eye image; determining whether the resolution of the first eye image is less than a first preset resolution threshold according to the distance.
It can be seen that, in this example, the distance between the electronic device and the eyes of the shooting target is detected by the sensor, so that whether the resolution of the first eye image acquired by the camera is smaller than a first preset resolution threshold or not is known, and whether the resolution of the first eye image acquired by the camera needs to be increased or not can be quickly determined.
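This distance-based check can be sketched as follows; the rule that the eye-image resolution falls below the first preset resolution threshold beyond a fixed distance, and the 40 cm cut-off, are illustrative assumptions:

```python
def needs_super_resolution(distance_cm, max_direct_distance_cm=40.0):
    """Assumed rule: beyond a calibrated distance the eye region captured by
    the camera is expected to fall below the first preset resolution
    threshold, so the super-resolution reconstruction model is applied first."""
    return distance_cm > max_direct_distance_cm

# Example usage (sensor is a placeholder distance sensor object):
# if needs_super_resolution(sensor.read_distance_cm()):
#     second_eye_image = reconstruct(first_eye_image, ...)
```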
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another eyeball tracking method according to an embodiment of the present disclosure. As shown in fig. 4, the eyeball tracking method is applied to an electronic device, including the electronic device shown in fig. 1; it is also applicable to the software architecture shown in fig. 2. The electronic device comprises an eyeball tracking application, an eyeball tracking service, a camera service, and a camera, and the eyeball tracking method comprises the following steps:
s401, when the eyeball tracking application is started, the electronic equipment automatically starts the eyeball tracking service, and the eyeball tracking application requests the eyeball tracking service to acquire eyeball watching position information.
S402, the eyeball tracking service receives the request for obtaining the eyeball fixation position information and sends the request for obtaining the eye image to the camera.
And S403, after receiving the request for acquiring the eye image from the eyeball tracking service, the camera sends a request for acquiring the eye image to the camera head.
S404, the camera collects eye images and sends collected eye image data to the camera.
S405, after receiving the eye image data sent by the camera, the camera sends the eye image data to the eyeball tracking service.
S406, the eyeball tracking service comprises an eyeball tracking core algorithm module and an eyeball resolution improvement algorithm module; after receiving the eye image data, the eyeball tracking service transmits the acquired image data to the eyeball tracking core algorithm module for processing, and the eyeball tracking core algorithm module judges, according to the distance between the eyes and the electronic device and the positions of the eyes in the image, whether the eyeball resolution improvement algorithm module is needed for processing.
S407, when the eye image resolution is low, the eye image data are transmitted to the eyeball resolution improvement algorithm module for processing, and the processed eye image data are then transmitted to the eyeball tracking application.
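Steps S406 and S407 can be sketched as follows; the module interfaces and the exact low-resolution criterion are assumptions, since the text only states that the decision depends on the eye-to-device distance and the eye positions in the image:

```python
def process_frame(eye_image, eye_to_device_distance, eye_region,
                  core_algo, sr_algo, app):
    # S406: the core algorithm module decides whether resolution enhancement
    # is needed, based on the distance and the eye positions in the image.
    if core_algo.is_low_resolution(eye_to_device_distance, eye_region):
        # S407: enhance the low-resolution eye image before passing it on.
        eye_image = sr_algo.enhance(eye_image)
    app.on_eye_image(eye_image)
```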
It can be seen that, in the eyeball tracking method provided in the embodiment of the present application, the eyeball tracking application first requests to acquire the eyeball gaze position, and the electronic device automatically starts the eyeball tracking service, which requests the camera service to acquire an eye image through the camera; when the eyeball tracking core algorithm module of the eyeball tracking service determines that the eye resolution is low, the eyeball resolution improvement algorithm module is used to raise the resolution, and the resolution-enhanced eye image is sent to the eyeball tracking application. Therefore, the low-resolution eye image is converted into a high-resolution eye image for the eyeball tracking application, thereby improving the eyeball tracking accuracy and precision.
Referring to fig. 5, fig. 5 is a diagram illustrating the human-computer interaction of an eyeball tracking method according to an embodiment of the present application, consistent with the embodiment shown in fig. 4. The hardware in fig. 5 is shown in fig. 1, and the software in fig. 5 is shown in fig. 2. The user first starts the eyeball tracking application, and the electronic device automatically starts the eyeball tracking service while starting the eyeball tracking application; the eyeball tracking service starts the eyeball tracking core algorithm module, which starts the Camera Service and the Camera HAL; the camera is then invoked through the corresponding hardware driver (IR & Camera Driver) to shoot the eyes of the user. The camera successively uploads the captured eye images to the eyeball tracking core algorithm module for processing; when the eye image resolution is low, the eyeball resolution improvement algorithm module is started, the eye image is sent to the eyeball resolution improvement algorithm module for processing, the low-resolution eye image is converted into a high-resolution eye image, and the result is uploaded to the eyeball tracking application; when the eye image resolution is high, the eye image is uploaded directly to the eyeball tracking application.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, which is identical to the embodiments shown in fig. 3 and fig. 4. As shown in fig. 6, the electronic device 600 includes an application processor 610, a memory 620, a communication interface 630, and one or more programs 621, wherein the one or more programs 621 are stored in the memory 620 and configured to be executed by the application processor 610, and the one or more programs 621 include instructions for performing any of the steps of the above method embodiments.
In one possible example, the instructions in the program 621 are to perform the following operations: acquiring a first eye image; when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image; and running an eyeball tracking service according to the second eye image to complete a preset function.
It can be seen that, in the electronic device provided in the embodiment of the present application, first, a first eye image can be acquired; when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image; and then running an eyeball tracking service according to the second eye image to complete a preset function. Therefore, the low-resolution eye image is converted into the high-resolution eye image for the eyeball tracking application, so that the eyeball tracking accuracy and precision are improved.
In one possible example, where the electronic device includes a camera, the program 621 further includes instructions for, in acquiring the first eye image: when detecting that an eyeball tracking application program requests to start, or when detecting that the eyeball tracking application program requests to enable a preset function, or when detecting that the eyeball tracking application program requests to acquire eyeball gaze position information, starting the eyeball tracking service; and acquiring the first eye image through the camera according to the eyeball tracking service.
In one possible example, in running an eye tracking service to perform a preset function based on the second eye image, the program 621 further includes instructions for: running the eye tracking service according to the second eye image to obtain eye tracking data of the second eye image; sending eye tracking data for the second eye image to the eye tracking application; and executing the eyeball tracking application program according to the eyeball tracking data of the second eye image so as to complete the preset function.
In one possible example, the electronic device includes an infrared light, and in transmitting the eye tracking data for the second eye image to the eye tracking application, the program 621 further includes instructions for: turning on the infrared lamp to irradiate the eyes of a shooting target according to the eyeball tracking service, wherein the infrared lamp is used for generating bright spots on the eyes of the shooting target; shooting eyes of the shooting target including the bright spots through the camera to obtain N fourth eye images; obtaining N groups of calibration coordinates according to the N fourth eye images, wherein the N groups of calibration coordinate data correspond to the N fourth eye images one by one, and each group of calibration coordinate data comprises a pupil coordinate and a bright spot coordinate; obtaining N calibration vectors according to the N groups of calibration coordinate data, wherein the N calibration vectors correspond to the N groups of calibration coordinate data one by one, and each calibration vector is determined by corresponding pupil coordinates and bright spot coordinates; calibrating the eye tracking data of the second eye image according to the N calibration vectors.
In one possible example, the image super-resolution reconstruction model is composed of a first model and a second model, the first model and the second model are trained using eye images of different resolution stages, and the program 621 further includes instructions for: judging whether the resolution of the first eye image is larger than a second preset resolution threshold value or not; if the resolution of the first eye image is larger than a second preset resolution threshold, inputting the first eye image into a first model to obtain a second eye image; if the resolution of the first eye image is not larger than a second preset resolution threshold, inputting the first eye image into a second model to obtain a second eye image;
wherein the first model comprises a first feature extraction network, a first generation network and a first discriminant network, the first generation network comprises a first residual network layer and a first upsampling layer, and the inputting the first eye image into the first model to obtain the second eye image comprises: inputting the first eye image into the first feature extraction network to obtain a first feature layer; inputting the first feature layer into the first residual error network layer to obtain a second feature layer; inputting the second characteristic layer into the first up-sampling layer for up-sampling, and performing characteristic extraction on the up-sampled second characteristic layer to obtain a third eye image; inputting the third eye image into the first feature extraction network to obtain a third feature layer of the third eye image; inputting the first characteristic layer and the third characteristic layer into the first discrimination network for comparison; when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is more than half, outputting the third eye image as the second eye image; and when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is not more than half, returning to execute the step of inputting the first eye image into the first characteristic extraction network to obtain a first characteristic layer.
In one possible example, in inputting the first eye image into a pre-trained super-resolution image reconstruction model to obtain a second eye image, the program 621 further comprises instructions for: acquiring a plurality of eye images with high resolution; preprocessing the plurality of eye images with high resolution to obtain a plurality of corresponding eye images with low resolution; dividing the low-resolution eye images of the plurality of low-resolution eye images, the resolution of which is greater than the second preset resolution threshold, into a first data set, and dividing the low-resolution eye images of the plurality of low-resolution eye images, the resolution of which is not greater than the second preset resolution threshold, into a second data set; and dividing the plurality of eye images with high resolution into data sets where the corresponding eye images with low resolution are located to obtain a first training set and a second training set. Training by adopting an SRGAN network according to the first training set and the second training set respectively to obtain the first model and the second model; and combining the first model and the second model to obtain the image super-resolution reconstruction model.
In one possible example, the electronic device includes a sensor, and the program 621 further includes instructions for: determining, with the sensor, a distance between the electronic device and an eye of a photographic target when acquiring the first eye image; determining whether the resolution of the first eye image is less than a first preset resolution threshold according to the distance.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the foregoing method embodiment, and a description thereof is omitted here.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the steps described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 7 is a block diagram illustrating functional units of an eye tracking device according to an embodiment of the present disclosure. The eye tracking apparatus 700 is applied to an electronic device supporting eye tracking control, and the apparatus comprises:
an acquisition unit 701 configured to acquire a first eye image;
a processing unit 702, configured to, when a resolution of the first eye image is smaller than a first preset resolution threshold, input the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, where a resolution of the second eye image is greater than a resolution of the first eye image;
an operation unit 703 is configured to operate an eyeball tracking service according to the second eye image to complete a preset function.
It can be seen that, the eyeball tracking device provided by the embodiment of the application can acquire a first eye image firstly; when the resolution of the first eye image is smaller than a first preset resolution threshold value, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is larger than that of the first eye image; and then running an eyeball tracking service according to the second eye image to complete a preset function. Therefore, the low-resolution eye image is converted into the high-resolution eye image for the eyeball tracking application, thereby improving the eyeball tracking accuracy and precision.
In one possible example, the electronic device includes a camera, and in acquiring the first eye image, the acquiring unit 701 is specifically configured to: when detecting that an eyeball tracking application program requests to start, or when detecting that the eyeball tracking application program requests to enable a preset function, or when detecting that the eyeball tracking application program requests to acquire eyeball gaze position information, start the eyeball tracking service; and acquire the first eye image through the camera according to the eyeball tracking service.
In one possible example, in terms of operating the eye tracking service to complete the preset function according to the second eye image, the operating unit 703 is specifically configured to: running the eye tracking service according to the second eye image to obtain eye tracking data of the second eye image; sending eye tracking data for the second eye image to the eye tracking application; and executing the eyeball tracking application program according to the eyeball tracking data of the second eye image so as to complete the preset function.
In one possible example, the electronic device comprises an infrared lamp, and in respect of transmitting the eye tracking data of the second eye image to the eye tracking application, the electronic device further comprises a calibration unit for: turning on the infrared lamp to irradiate the eyes of a shooting target according to the eyeball tracking service, wherein the infrared lamp is used for generating bright spots on the eyes of the shooting target; shooting eyes of the shooting target including the bright spots through the camera to obtain N fourth eye images; obtaining N groups of calibration coordinates according to the N fourth eye images, wherein the N groups of calibration coordinate data correspond to the N fourth eye images one by one, and each group of calibration coordinate data comprises a pupil coordinate and a bright spot coordinate; obtaining N calibration vectors according to the N groups of calibration coordinate data, wherein the N calibration vectors correspond to the N groups of calibration coordinate data one by one, and each calibration vector is determined by corresponding pupil coordinates and bright spot coordinates; calibrating the eye tracking data of the second eye image according to the N calibration vectors.
In a possible example, the image super-resolution reconstruction model is composed of a first model and a second model, the first model and the second model are obtained by training eye images in different resolution stages, and in terms of inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, the processing unit 702 is specifically configured to: judging whether the resolution of the first eye image is larger than a second preset resolution threshold value or not; if the resolution of the first eye image is larger than a second preset resolution threshold, inputting the first eye image into a first model to obtain a second eye image; if the resolution of the first eye image is not larger than a second preset resolution threshold, inputting the first eye image into a second model to obtain a second eye image;
wherein the first model comprises a first feature extraction network, a first generation network and a first discriminant network, the first generation network comprises a first residual network layer and a first upsampling layer, and the inputting the first eye image into the first model to obtain the second eye image comprises: inputting the first eye image into the first feature extraction network to obtain a first feature layer; inputting the first feature layer into the first residual error network layer to obtain a second feature layer; inputting the second characteristic layer into the first up-sampling layer for up-sampling, and performing characteristic extraction on the up-sampled second characteristic layer to obtain a third eye image; inputting the third eye image into the first feature extraction network to obtain a third feature layer of the third eye image; inputting the first characteristic layer and the third characteristic layer into the first discrimination network for comparison; when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is more than half, outputting the third eye image as the second eye image; and when the comparison result shows that the probability that the first characteristic layer and the third characteristic layer are the same characteristic layer is not more than half, returning to execute the step of inputting the first eye image into the first characteristic extraction network to obtain a first characteristic layer.
In one possible example, in terms of inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, the electronic device further comprises a training unit for: acquiring a plurality of eye images with high resolution; preprocessing the plurality of eye images with high resolution to obtain a plurality of corresponding eye images with low resolution; dividing the low-resolution eye images of the plurality of low-resolution eye images, the resolution of which is greater than the second preset resolution threshold, into a first data set, and dividing the low-resolution eye images of the plurality of low-resolution eye images, the resolution of which is not greater than the second preset resolution threshold, into a second data set; and dividing the plurality of eye images with high resolution into data sets where the corresponding eye images with low resolution are located to obtain a first training set and a second training set. Training by adopting an SRGAN network according to the first training set and the second training set respectively to obtain the first model and the second model; and combining the first model and the second model to obtain the image super-resolution reconstruction model.
In one possible example, the electronic device includes a sensor, and the obtaining unit 701 is further configured to: determine, by means of the sensor, a distance between the electronic device and an eye of a shooting target when the first eye image is acquired; and determine, according to the distance, whether the resolution of the first eye image is less than a first preset resolution threshold.
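One reading of this step is that the sensed distance predicts how many pixels the eye region will occupy in the captured image. The inverse-square model and the constants in the sketch below are purely illustrative assumptions, not values taken from the patent.

```python
def first_eye_image_below_threshold(distance_mm, first_resolution_threshold,
                                    reference_distance_mm=300.0,
                                    reference_pixel_count=128 * 128):
    """Estimate whether the eye region captured at this distance will fall below
    the first preset resolution threshold (illustrative inverse-square model)."""
    scale = reference_distance_mm / max(distance_mm, 1.0)
    estimated_pixel_count = reference_pixel_count * scale * scale
    return estimated_pixel_count < first_resolution_threshold
```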
It can be understood that, since the method embodiments and the apparatus embodiments are different presentations of the same technical concept, the content of the method embodiment portion of the present application applies equally to the apparatus embodiment portion and is not repeated here.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any of the methods described in the above method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any of the methods described in the above method embodiments; the computer program product may be a software installation package, and the computer includes an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only a division of logical functions, and other divisions may be adopted in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (8)

1. An eyeball tracking method applied to an electronic device, the method comprising:
acquiring a first eye image;
when the resolution of the first eye image is less than a first preset resolution threshold, inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, wherein the resolution of the second eye image is greater than that of the first eye image;
running an eyeball tracking service according to the second eye image to complete a preset function, comprising: running the eyeball tracking service according to the second eye image to obtain eyeball tracking data of the second eye image, sending the eyeball tracking data of the second eye image to an eyeball tracking application program, and executing the eyeball tracking application program according to the eyeball tracking data of the second eye image to complete the preset function;
the electronic device includes an infrared light, and before sending the eye tracking data for the second eye image to the eye tracking application, the method further comprises:
turning on the infrared lamp to irradiate the eyes of a shooting target according to the eyeball tracking service, wherein the infrared lamp is used for generating bright spots on the eyes of the shooting target;
photographing, through a camera, the eyes of the shooting target including the bright spots to obtain N fourth eye images;
obtaining N groups of calibration coordinate data according to the N fourth eye images, wherein the N groups of calibration coordinate data correspond to the N fourth eye images one to one, and each group of calibration coordinate data comprises a pupil coordinate and a bright spot coordinate;
obtaining N calibration vectors according to the N groups of calibration coordinate data, wherein the N calibration vectors correspond to the N groups of calibration coordinate data one to one, and each calibration vector is determined by the corresponding pupil coordinate and bright spot coordinate;
calibrating the eyeball tracking data of the second eye image according to the N calibration vectors.
2. The method of claim 1, wherein the electronic device comprises a camera, and wherein the acquiring the first eye image comprises:
when detecting that the eyeball tracking application program requests to start, or when detecting that the eyeball tracking application program requests to start a preset function, or when detecting that the eyeball tracking application program requests to acquire eyeball gaze position information, starting the eyeball tracking service;
acquiring the first eye image through the camera according to the eyeball tracking service.
3. The method of claim 1, wherein the image super-resolution reconstruction model is composed of a first model and a second model, the first model and the second model being obtained by training on eye images at different resolution stages, and the inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image comprises:
judging whether the resolution of the first eye image is greater than a second preset resolution threshold;
if the resolution of the first eye image is greater than the second preset resolution threshold, inputting the first eye image into the first model to obtain the second eye image;
if the resolution of the first eye image is not greater than the second preset resolution threshold, inputting the first eye image into the second model to obtain the second eye image;
wherein the first model comprises a first feature extraction network, a first generation network and a first discrimination network, the first generation network comprises a first residual network layer and a first upsampling layer, and the inputting the first eye image into the first model to obtain the second eye image comprises:
inputting the first eye image into the first feature extraction network to obtain a first feature layer;
inputting the first feature layer into the first residual network layer to obtain a second feature layer;
inputting the second feature layer into the first upsampling layer for upsampling, and performing feature extraction on the upsampled second feature layer to obtain a third eye image;
inputting the third eye image into the first feature extraction network to obtain a third feature layer of the third eye image;
inputting the first feature layer and the third feature layer into the first discrimination network for comparison;
when the comparison result shows that the probability that the first feature layer and the third feature layer are the same feature layer is greater than 50%, outputting the third eye image as the second eye image;
and when the comparison result shows that the probability that the first feature layer and the third feature layer are the same feature layer is not greater than 50%, returning to the step of inputting the first eye image into the first feature extraction network to obtain a first feature layer.
4. The method of claim 3, wherein before inputting the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image, the method further comprises:
acquiring a plurality of high-resolution eye images;
preprocessing the plurality of high-resolution eye images to obtain a corresponding plurality of low-resolution eye images;
dividing the low-resolution eye images whose resolution is greater than the second preset resolution threshold into a first data set, and dividing the low-resolution eye images whose resolution is not greater than the second preset resolution threshold into a second data set;
dividing each high-resolution eye image into the data set where its corresponding low-resolution eye image is located, to obtain a first training set and a second training set;
training an SRGAN network on the first training set and the second training set respectively to obtain the first model and the second model;
and combining the first model and the second model to obtain the image super-resolution reconstruction model.
5. The method of any of claims 1-4, wherein the electronic device includes a sensor, the method further comprising:
determining, by the sensor, a distance between the electronic device and an eye of a shooting target when the first eye image is acquired;
determining, according to the distance, whether the resolution of the first eye image is less than the first preset resolution threshold.
6. An eyeball tracking apparatus, applied to an electronic device, the apparatus comprising:
an acquisition unit configured to acquire a first eye image;
a processing unit, configured to input the first eye image into a pre-trained image super-resolution reconstruction model to obtain a second eye image when the resolution of the first eye image is less than a first preset resolution threshold, wherein the resolution of the second eye image is greater than that of the first eye image;
an operation unit, configured to run an eyeball tracking service according to the second eye image to complete a preset function;
wherein the operation unit is specifically configured to run the eyeball tracking service according to the second eye image to obtain eyeball tracking data of the second eye image, send the eyeball tracking data of the second eye image to an eyeball tracking application program, and execute the eyeball tracking application program according to the eyeball tracking data of the second eye image to complete the preset function;
wherein the electronic device includes an infrared lamp, and the electronic device further includes a calibration unit configured to, before the eyeball tracking data of the second eye image is sent to the eyeball tracking application program: turn on the infrared lamp according to the eyeball tracking service to illuminate the eyes of a shooting target, wherein the infrared lamp is used for generating bright spots on the eyes of the shooting target; photograph, through a camera, the eyes of the shooting target including the bright spots to obtain N fourth eye images; obtain N groups of calibration coordinate data according to the N fourth eye images, wherein the N groups of calibration coordinate data correspond to the N fourth eye images one to one, and each group of calibration coordinate data comprises a pupil coordinate and a bright spot coordinate; obtain N calibration vectors according to the N groups of calibration coordinate data, wherein the N calibration vectors correspond to the N groups of calibration coordinate data one to one, and each calibration vector is determined by the corresponding pupil coordinate and bright spot coordinate; and calibrate the eyeball tracking data of the second eye image according to the N calibration vectors (a minimal sketch of this calibration appears after the claims).
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any of claims 1-5.
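The glint-based calibration recited in claims 1 and 6 can be sketched roughly as below. How the N calibration vectors are actually applied to the tracking data is not spelled out in the claims, so the mean-offset correction here, like all names in the sketch, is an assumption.

```python
def calibration_vectors(calibration_coordinate_data):
    """calibration_coordinate_data: N pairs of ((pupil_x, pupil_y), (spot_x, spot_y)),
    one per fourth eye image. Each calibration vector points from bright spot to pupil."""
    return [(px - sx, py - sy)
            for (px, py), (sx, sy) in calibration_coordinate_data]

def calibrate_tracking_point(tracking_point, calibration_coordinate_data):
    """Shift a gaze estimate from the second eye image by the mean calibration vector
    (an assumed correction; the claims only state that the data are calibrated)."""
    vectors = calibration_vectors(calibration_coordinate_data)
    n = len(vectors)
    mean_dx = sum(v[0] for v in vectors) / n
    mean_dy = sum(v[1] for v in vectors) / n
    x, y = tracking_point
    return (x - mean_dx, y - mean_dy)
```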
CN201911050893.XA 2019-10-31 2019-10-31 Eyeball tracking method and related equipment Active CN112748797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911050893.XA CN112748797B (en) 2019-10-31 2019-10-31 Eyeball tracking method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911050893.XA CN112748797B (en) 2019-10-31 2019-10-31 Eyeball tracking method and related equipment

Publications (2)

Publication Number Publication Date
CN112748797A CN112748797A (en) 2021-05-04
CN112748797B (en) 2022-08-09

Family

ID=75641253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911050893.XA Active CN112748797B (en) 2019-10-31 2019-10-31 Eyeball tracking method and related equipment

Country Status (1)

Country Link
CN (1) CN112748797B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973391B (en) * 2022-06-30 2023-03-21 北京万里红科技有限公司 Eyeball tracking method, device and equipment applied to metacarpal space

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964111A (en) * 2010-09-27 2011-02-02 山东大学 Method for improving sight tracking accuracy based on super-resolution

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722875B (en) * 2012-05-29 2014-08-13 杭州电子科技大学 Visual-attention-based variable quality ultra-resolution image reconstruction method
TW201901529A (en) * 2017-05-22 2019-01-01 宏達國際電子股份有限公司 Eye tracking method, electronic device and non-transitory computer readable recording medium
CN107944379B (en) * 2017-11-20 2020-05-15 中国科学院自动化研究所 Eye white image super-resolution reconstruction and image enhancement method based on deep learning
US10558895B2 (en) * 2018-03-30 2020-02-11 Tobii Ab Deep learning for three dimensional (3D) gaze prediction
CN110345815A (en) * 2019-07-16 2019-10-18 吉林大学 Crawler vehicle firearm sighting method based on gaze tracking

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964111A (en) * 2010-09-27 2011-02-02 山东大学 Method for improving sight tracking accuracy based on super-resolution

Also Published As

Publication number Publication date
CN112748797A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN111510630B (en) Image processing method, device and storage medium
CN107909113B (en) Traffic accident image processing method, device and storage medium
KR20180109109A (en) Method of recognition based on IRIS recognition and Electronic device supporting the same
US20190080188A1 (en) Facial recognition method and related product
EP3584740B1 (en) Method for detecting biological feature data, biological feature recognition apparatus and electronic terminal
CN109190509B (en) Identity recognition method, device and computer readable storage medium
CN111225157B (en) Focus tracking method and related equipment
CN107231470B (en) Image processing method, mobile terminal and computer readable storage medium
CN110568930B (en) Method for calibrating fixation point and related equipment
CN108156378B (en) Photographing method, mobile terminal and computer-readable storage medium
KR20180099026A (en) Photographing method using external electronic device and electronic device supporting the same
CN108763998B (en) Bar code identification method and terminal equipment
CN111552389A (en) Method and device for eliminating fixation point jitter and storage medium
CN111880640B (en) Screen control method and device, electronic equipment and storage medium
CN111292504A (en) Method and system for carrying out safety alarm through image identification
CN110881105B (en) Shooting method and electronic equipment
CN112748797B (en) Eyeball tracking method and related equipment
CN113395438B (en) Image correction method and related device for eyeball tracking technology
KR20200144196A (en) Electronic device and method for providing function using corneal image thereof
CN112748798B (en) Eyeball tracking calibration method and related equipment
CN110933314B (en) Focus-following shooting method and related product
CN110930372A (en) Image processing method, electronic equipment and computer readable storage medium
CN108960097B (en) Method and device for obtaining face depth information
CN110941344B (en) Method for obtaining gazing point data and related device
CN115601316A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant