CN111552389B - Gaze point shake eliminating method, gaze point shake eliminating device and storage medium - Google Patents

Gaze point shake eliminating method, gaze point shake eliminating device and storage medium

Info

Publication number
CN111552389B
CN111552389B
Authority
CN
China
Prior art keywords: distance, gaze point, coordinate position, determining, low
Prior art date
Legal status: Active
Application number
CN202010393282.1A
Other languages
Chinese (zh)
Other versions
CN111552389A (en)
Inventor
韩世广 (Han Shiguang)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010393282.1A
Publication of CN111552389A
Application granted
Publication of CN111552389B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Abstract

The application discloses a gaze point jitter elimination method, device and storage medium, wherein the method comprises the following steps: acquiring a first coordinate position of a current gaze point and a second coordinate position of the gaze point immediately preceding the current gaze point; determining a first distance between the first coordinate position and the second coordinate position; performing low-pass filtering on the first distance to obtain a second distance; determining, according to the second distance, a first adjustment coefficient for adjusting the low-pass filtering strength; and determining the target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient and the second coordinate position. By adopting the embodiments of the application, the gaze point position can be determined accurately.

Description

Gaze point shake eliminating method, gaze point shake eliminating device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and apparatus for removing gaze point shake, and a storage medium.
Background
With the wide popularization of electronic devices (such as mobile phones and tablet computers), electronic devices support ever more applications and functions and are developing toward diversification and personalization, making them indispensable in users' daily lives.
Eye tracking is also becoming a standard technology in electronic devices, but it suffers from a technical problem: the output gaze point can jitter, or briefly fly out of the gaze area (for example, during a blink), resulting in an inaccurate output gaze point.
Disclosure of Invention
The embodiment of the application provides a gaze point jitter elimination method, a gaze point jitter elimination device and a storage medium, which can accurately determine the gaze point position.
In a first aspect, an embodiment of the present application provides a gaze point shake removing method, including:
acquiring a first coordinate position of a current gaze point and a second coordinate position of a last gaze point of the current gaze point;
determining a first distance between the first coordinate location and the second coordinate location;
performing low-pass filtering on the first distance to obtain a second distance;
determining a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance;
and determining the target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjusting coefficient and the second coordinate position.
In a second aspect, an embodiment of the present application provides a gaze point shake removing apparatus, the apparatus including: an acquisition unit, a first determination unit, a filtering unit, a second determination unit and a third determination unit, wherein,
The acquisition unit is used for acquiring a first coordinate position of a current gaze point and a second coordinate position of a last gaze point of the current gaze point;
the first determining unit is used for determining a first distance between the first coordinate position and the second coordinate position;
the filtering unit is used for carrying out low-pass filtering on the first distance to obtain a second distance;
the second determining unit is used for determining a first adjusting coefficient for adjusting the low-pass filtering strength according to the second distance;
the third determining unit is configured to determine, according to the first coordinate position, the first adjustment coefficient, and the second coordinate position, a target coordinate position of the current gaze point corresponding to the first coordinate position.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing part or all of the steps described in the method of the first aspect of embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium, where the computer readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The implementation of the embodiment of the application has the following beneficial effects:
It can be seen that, in the gaze point shake eliminating method, apparatus and storage medium described in the embodiments of the present application, a first coordinate position of the current gaze point and a second coordinate position of the previous gaze point are obtained, a first distance between the first coordinate position and the second coordinate position is determined, low-pass filtering is performed on the first distance to obtain a second distance, a first adjustment coefficient for adjusting the low-pass filtering strength is determined according to the second distance, and the target coordinate position of the current gaze point corresponding to the first coordinate position is determined according to the first coordinate position, the first adjustment coefficient and the second coordinate position. Since de-jitter processing can be performed on the position of the current gaze point based on the position of the previous gaze point, the current gaze point can be determined accurately, which improves user experience.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 2 is a software architecture diagram of a gaze point shake eliminating method according to an embodiment of the present application;
fig. 3A is a schematic flow chart of a gaze point shake eliminating method according to an embodiment of the present application;
fig. 3B is a schematic illustration of an application scenario provided in an embodiment of the present application;
fig. 3C is a schematic illustration of another application scenario provided in an embodiment of the present application;
fig. 3D is a schematic illustration of another application scenario provided in an embodiment of the present application;
fig. 4 is an interaction diagram of a gaze point shake eliminating method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of another hardware structure of an electronic device according to an embodiment of the present application;
fig. 6A is a schematic structural diagram of a gaze point shake eliminating device according to an embodiment of the present application;
fig. 6B is a schematic structural diagram of another gaze point shake eliminating device provided in an embodiment of the present application;
fig. 6C is a schematic structural diagram of another gaze point shake eliminating device provided in an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below in detail with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
The following will describe in detail.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the following, some terms in the present application are explained for easy understanding by those skilled in the art.
The electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smart watches, smart glasses, smart bracelets, pedometers, etc.), smart cameras (e.g., smart single-lens cameras, high-speed cameras), computing devices, or other processing devices communicatively coupled to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, etc. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
As shown in fig. 1, fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application. The electronic device may include a processor, memory, signal processor, transceiver, display screen, speaker, microphone, random access memory (RAM), camera, sensor, infrared light source (IR), and so forth. The memory, signal processor, display screen, speaker, microphone, RAM, camera, sensor and IR are connected to the processor, and the transceiver is connected to the signal processor.
The display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) panel, or the like.
The camera may be a normal camera or an infrared camera, which is not limited herein. The camera may be a front camera or a rear camera, which is not limited herein.
Wherein the sensor comprises at least one of: a light sensor, gyroscope, infrared proximity sensor, fingerprint sensor, pressure sensor, etc. The light sensor, also called ambient light sensor, is used to detect the ambient light level. The light sensor may comprise a photosensitive element and an analog-to-digital converter. The photosensitive element converts the collected optical signal into an electrical signal, and the analog-to-digital converter converts the electrical signal into a digital signal. Optionally, the light sensor may further include a signal amplifier, which amplifies the electrical signal converted by the photosensitive element before outputting it to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor and a silicon photocell.
The processor is a control center of the electronic device, and is connected with various parts of the whole electronic device by various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, so that the electronic device is monitored as a whole.
The processor may include one or more processing cores. The processor connects various parts of the overall electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing instructions, programs, code sets or instruction sets stored in memory and invoking data stored in memory. The processor may include one or more processing units, such as: a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The controller can be the neural center and command center of the electronic device; it generates operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communications. The digital signal processor processes digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and so on. Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc. The NPU is a neural-network (NN) computing processor that rapidly processes input information by drawing on the structure of biological neural networks, for example the transmission mode between human brain neurons, and can also continuously self-learn. Applications such as intelligent cognition of the electronic device can be realized through the NPU, for example image recognition, face recognition, speech recognition and text understanding.
A memory may be provided in the processor for storing instructions and data. In some embodiments, the memory in the processor is a cache memory. The memory may hold instructions or data that the processor has just used or recycled. If the processor needs to reuse the instruction or data, it can be called directly from the memory. Repeated access is avoided, waiting time of the processor is reduced, and system efficiency is improved.
The processor may include one or more interfaces, such as an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). The processor may contain multiple sets of I2C interfaces; touch sensors, chargers, flashes, cameras, etc. may each be coupled through different I2C interfaces. For example, the processor can be coupled with the touch sensor through an I2C interface, so that the processor and the touch sensor communicate through the I2C interface, realizing the touch function of the electronic device.
The I2S interface may be used for audio communication. The processor may include multiple sets of I2S interfaces coupled to the audio module through the I2S interfaces to enable communication between the processor and the audio module. The audio module can transmit audio signals to the wireless communication module through the I2S interface, so that the function of answering a call through the Bluetooth headset is realized.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. The audio module and the wireless communication module can be coupled through the PCM interface, and particularly can transmit audio signals to the wireless communication module through the PCM interface, so that the function of answering a call through the Bluetooth headset is realized. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. UART interfaces are commonly used to connect processors to wireless communication modules. For example: the processor communicates with a Bluetooth module in the wireless communication module through a UART interface to realize a Bluetooth function. The audio module can transmit audio signals to the wireless communication module through the UART interface, so that the function of playing music through the Bluetooth headset is realized.
The MIPI interface may be used to connect a processor to peripheral devices such as a display screen, camera, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, the processor and the camera communicate through the CSI interface to implement a photographing function of the electronic device. The processor and the display screen are communicated through the DSI interface, so that the display function of the electronic equipment is realized.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor with a camera, display screen, wireless communication module, audio module, sensor module, or the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface is an interface conforming to the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface can be used for connecting a charger to charge the electronic equipment and can also be used for transmitting data between the electronic equipment and the peripheral equipment. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It will be appreciated that the processor may be mapped to a System on a Chip (SOC) in an actual product, and the processing unit and/or the interface may not be integrated into the processor, and the corresponding functions may be implemented by a single communication Chip or electronic component. The above-mentioned interface connection relationship between the modules is only schematically illustrated, and does not constitute a unique limitation on the structure of the electronic device.
The memory may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory includes a non-transitory computer-readable storage medium. The memory may be used to store instructions, programs, code sets or instruction sets. The memory may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, which may be an Android system (including systems developed based on Android), an iOS system developed by Apple Inc. (including systems developed based on iOS), or another system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like. The stored data area may also store data created by the electronic device in use (e.g., phonebook, audio and video data, chat records), etc.
The processor may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes the software programs and/or modules stored in the memory so as to execute various functional applications of the electronic device and data processing. The memory may mainly include a memory program area and a memory data area, wherein the memory program area may store an operating system, a software program required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The IR light source is used to illuminate the human eye, producing a bright spot (glint) on the eye; the camera is used to photograph the eye, obtaining an image containing the glint and the pupil.
As shown in fig. 2, fig. 2 is a software architecture diagram of a gaze point shake eliminating method according to an embodiment of the present application. The architecture comprises four layers. The first layer is the application layer, which may include applications such as an electronic book, a browser, a launcher, the system, unlocking, mobile payment and interest point tracking. The second layer includes an eye tracking service (OEyeTrackerService), which specifically includes eye tracking authorization (OEyeTrackerAuthorization), an eye tracking strategy (OEyeTrackerStrategy), an eye tracking algorithm (OEyeTrackerAlgorithm) and eye tracking parameters (OEyeTrackerParameters); OEyeTrackerService connects to the first-layer applications through an eye tracking SDK (OEyeTrackerSDK) interface. The second layer also includes a camera NDK interface (CameraNDKInterface) and a camera service (CameraService); CameraNDKInterface is connected to OEyeTrackerService, and CameraService is connected to CameraNDKInterface. The third layer is the hardware abstraction layer, which may include a Google HAL interface (Google HAL Interface), a Qualcomm HAL interface (Qualcomm HAL Interface), an electronic anti-shake module, CamX and Chi-cdk; Qualcomm HAL Interface may be connected to the electronic anti-shake module, Google HAL Interface is connected to the second-layer CameraService, Qualcomm HAL Interface is connected to Google HAL Interface, and CamX is connected to Qualcomm HAL Interface and Chi-cdk respectively. The fourth layer is the underlying driver layer, which includes an RGB sensor (RGB sensor), a digital signal processor (DSP), an infrared sensor (IR sensor), a laser (Laser) and a light-emitting diode (LED); the IR sensor is connected to the third-layer CamX. The connections between OEyeTrackerService and OEyeTrackerSDK, between CameraService and CameraNDKInterface, and between Google HAL Interface and CameraService all pass through the Binder architecture.
The OEyeTrackerSDK is responsible for providing ordinary applications with an API for acquiring the gaze point and for gaze point input, delivered as a jar/aar package. OEyeTrackerService is responsible for managing the gaze point algorithm, gaze point post-processing, input processing, authentication and parameter settings. OEyeTrackerAlgorithm is the core eye tracking algorithm, and includes the gaze point determination algorithm of this application. OEyeTrackerStrategy covers algorithm post-processing such as filtering, gaze point hopping, gaze point listening and gaze point input. The OEyeTrackerAuthorization callback module is responsible for authenticating whether a requester is allowed. OEyeTrackerParameters is responsible for parsing the configuration and hot-updating the configuration. The electronic anti-shake module implements the electronic anti-shake function; its principle is that a CCD is first fixed on a support that can move up, down, left and right, the direction and amplitude of the camera shake are sensed by a gyroscope, and the sensor transmits these data to the processor for screening and amplification, so as to calculate the amount of CCD movement that counteracts the shake.
The eye gaze point is the position, on the plane of the electronic device, at which the user's eyes are gazing. The eye tracking software development kit (SDK) interface is an SDK interface provided by the electronic device for the eye tracking application, responsible for providing an application programming interface (API) for acquiring the gaze point and for input. The eye tracking service may also invoke a camera application through a camera native development kit (NDK) interface; the camera application may invoke the camera, capture face images through the camera, and implement eye tracking from the face images.
As shown in fig. 3A, fig. 3A is a flow chart of a gaze point shake eliminating method according to an embodiment of the present application, which is applied to an electronic device as shown in fig. 1 or fig. 2, and the method includes:
301. and acquiring a first coordinate position of the current gaze point and a second coordinate position of a last gaze point of the current gaze point.
The first coordinate position may be an ordinate position or an abscissa position, or the first coordinate position may include both an abscissa position and an ordinate position. Likewise, the second coordinate position may be an ordinate position or an abscissa position, or may include both an abscissa position and an ordinate position.
In a specific implementation, the first coordinate position of the current gaze point may be calculated by an eye tracking algorithm, and similarly, the second coordinate position of the previous gaze point of the current gaze point may also be obtained by an eye tracking algorithm. The first coordinate position of the current gaze point may be a coordinate position that has not been subjected to the low pass filtering process. The second coordinate position may be a coordinate position subjected to the low-pass filtering process or a coordinate position not subjected to the low-pass filtering process. In this embodiment of the present application, gaze point detection may be performed by an eye tracking algorithm at preset time intervals, where the preset time intervals may be set by a user or default by the system, and the electronic device may determine, by using the eye tracking algorithm, the second coordinate position of the last gaze point of the current gaze point first, and then determine, by using the eye tracking algorithm, the first coordinate position of the current gaze point after the preset time intervals.
In a specific implementation, the causes of gaze point shake may include the following cases: (1) slight shake of the mobile phone or the head; (2) eye attention not being continuously focused, leading to pupil trembling; (3) errors that current eye tracking algorithms easily produce when processing infrared images, causing the output gaze point to jitter; (4) the gaze point flying out of the gaze area for a short time, that is, the output gaze point deviating excessively from the region the user is gazing at (if the above-mentioned jitter is severe, the output lands too far from the true gaze point and appears to fly out of the gaze region); (5) blinking; (6) glasses, eyelids and the like occluding the pupil; (7) the head being too far away, the image suddenly darkening, etc. As to a specific application scenario: for example, when the eye tracking gaze point output is used as click or slide input events on a mobile phone, gaze point shake causes incorrect input positions.
302. A first distance between the first coordinate location and the second coordinate location is determined.
The electronic device may determine the first distance between the first coordinate position and the second coordinate position, for example, by a Euclidean distance calculation, or by a Hamming distance calculation.
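As a minimal illustration of this distance computation (a sketch assuming two-dimensional screen coordinates; the function name is hypothetical):

```python
import math

def first_distance(current: tuple[float, float], previous: tuple[float, float]) -> float:
    # Euclidean distance between the current gaze point and the previous gaze point
    return math.hypot(current[0] - previous[0], current[1] - previous[1])
```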
In one possible example, the determining the first distance between the first coordinate position and the second coordinate position in step 302 may include the following steps:
21. obtaining a low-pass filtering result corresponding to the second coordinate position;
22. acquiring a fixation point position detection frequency;
23. and determining the first distance according to the first coordinate position, the fixation point position detection frequency and the low-pass filtering result.
In a specific implementation, the electronic device may perform low-pass filtering on the second coordinate position through a low-pass filter to obtain a low-pass filtering result, where the low-pass filter may be at least one of the following: a One Euro filter, a wavelet filter, a Butterworth low-pass filter, a Gaussian low-pass filter, etc., without limitation. Further, the gaze point position detection frequency may also be obtained, and the first distance may then be determined according to the first coordinate position, the gaze point position detection frequency and the low-pass filtering result, specifically with reference to the following formula:
dx = (x - x'_low-pass-filter) * rate

where this formula computes the difference between the current coordinate and the previous coordinate; rate refers to the reciprocal of the gaze point position detection frequency, for example, when the gaze point frequency is 30 fps, rate = 1/30; x is the current noisy input position of the gaze point to be filtered, which can be the abscissa, the ordinate, or both; and x'_low-pass-filter is the low-pass filtering result corresponding to the second coordinate position, where the low-pass filtering of the second coordinate position can be implemented based on the One Euro filter.
303. And carrying out low-pass filtering on the first distance to obtain a second distance.
The electronic device may also perform low-pass filtering on the first distance using a low-pass filter to obtain the second distance, where the low-pass filter may be at least one of the following: a One Euro filter, a wavelet filter, a Butterworth low-pass filter, a Gaussian low-pass filter, etc., without limitation.
In one possible example, the step 303 of low-pass filtering the first distance to obtain a second distance may include the following steps:
31. acquiring a preset adjustment factor and a low-pass filtering distance of the distance between the second coordinate position and the position of the previous gaze point;
32. determining a second adjusting coefficient for adjusting the low-pass filtering intensity according to the gaze point position detecting frequency and the preset adjusting factor;
33. and carrying out low-pass filtering on the first distance according to the second adjusting coefficient and the low-pass filtering distance to obtain the second distance.
The preset adjustment factor can be set by a user or default by the system. In a specific implementation, the electronic device may acquire a preset adjustment factor, and determine a second adjustment coefficient for adjusting the low-pass filtering strength according to the gaze point position detection frequency and the preset adjustment factor, which may be specifically implemented by adopting the following formula:
alpha_dcutoff = 1 / {1 + [1 / (2 * π * dcutoff)] / (1 / rate)}

where alpha_dcutoff represents the second adjustment coefficient, dcutoff represents the preset adjustment factor, and rate is the reciprocal of the gaze point position detection frequency; the second adjustment coefficient may take values in the range [0, 1].
Further, the electronic device may perform low-pass filtering on the first distance according to the second adjustment coefficient and a low-pass filtering distance between the second coordinate position and a previous gaze point position to obtain a second distance, and specifically may refer to the following formula:
dx_low-pass-filter = alpha_dcutoff * dx + (1 - alpha_dcutoff) * dx'_low-pass-filter

where this formula low-pass filters the distance between the current coordinate and the previous coordinate so as to reduce the change in distance; alpha_dcutoff represents the second adjustment coefficient, dx_low-pass-filter is the second distance, and dx'_low-pass-filter is the low-pass filtered distance between the second coordinate position and its previous gaze point position.
304. And determining a first adjusting coefficient for adjusting the low-pass filtering strength according to the second distance.
The mapping relationship between the distance and the adjustment coefficient may be stored in the electronic device in advance, and further, a first adjustment coefficient for adjusting the low-pass filtering strength corresponding to the second distance may be determined according to the mapping relationship.
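Where such a pre-stored mapping is used, a minimal table-driven sketch might look as follows (the thresholds and coefficients are hypothetical placeholders for the pre-stored mapping relation):

```python
def coefficient_from_distance(second_distance: float) -> float:
    # hypothetical pre-stored mapping: a larger filtered distance maps to a
    # larger adjustment coefficient (less smoothing, faster following)
    table = [(0.5, 0.1), (2.0, 0.4), (8.0, 0.7)]
    for threshold, alpha in table:
        if second_distance < threshold:
            return alpha
    return 0.9
```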
In a possible example, the step 304 of determining the first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance may include the following steps:
341. Acquiring a preset minimum critical value and a target scaling factor required for adjusting the low-pass filtering strength;
342. determining a target critical value according to the preset minimum critical value, the target scaling factor and the second distance;
343. and determining the first adjusting coefficient according to the fixation point position detection frequency and the target critical value.
The preset minimum critical value and the target scaling factor can be set by the user or default by the system. In a specific implementation, the electronic device may obtain a preset minimum critical value and a target scaling factor required for adjusting the low-pass filtering strength, and further determine the target critical value according to the preset minimum critical value and the target scaling factor, which may be implemented specifically by referring to the following formula:
cutoff = mincutoff + beta * |dx_low-pass-filter|

where cutoff is the target critical value, mincutoff refers to the preset minimum critical value, and beta is the target scaling factor; the cutoff calculated by this formula is the critical value needed for computing the first adjustment coefficient alpha, and beta is used to scale the second distance dx_low-pass-filter.
Further, the electronic device may determine the first adjustment coefficient according to the gaze point position detection frequency and the target critical value, and may specifically be implemented by referring to the following formula:
alpha = 1 / {1 + [1 / (2 * π * cutoff)] / (1 / rate)}

where alpha is the first adjustment coefficient, cutoff is the target critical value and can be configured to adjust the size of alpha, and rate refers to the reciprocal of the gaze point position detection frequency; in this embodiment, alpha takes values in the interval [0, 1] and is used to adjust the strength of the low-pass filtering.
305. And determining the target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjusting coefficient and the second coordinate position.
The electronic device may determine a target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient, and the second coordinate position, where the target coordinate position is a shake corrected position, and may be specifically implemented with reference to the following formula:
x_low-pass-filter = alpha * x + (1 - alpha) * x'_low-pass-filter

where x_low-pass-filter is the target coordinate position, alpha is the first adjustment coefficient, x is the first coordinate position, and x'_low-pass-filter is the low-pass filtering result corresponding to the second coordinate position. Thus, according to the embodiments of the application, de-jitter processing can be applied to jitter in the x-axis direction, jitter in the y-axis direction, or gaze point jitter in both directions, improving the accuracy of gaze point position determination.
In the embodiments of the application, taking the One Euro filter as an example, the filtering idea of the One Euro filter is borrowed to eliminate gaze point jitter: jitter is effectively eliminated while delay remains low (that is, following performance is high). Because the One Euro filter is accurate and responds quickly, it is particularly suitable for interactive systems that need to reduce jitter and lag at the same time; the gaze point position can therefore be output accurately, which benefits high-precision touch operation via eye tracking.
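Tying steps 301 to 305 together, the following sketch implements the formulas above for a single axis, in the manner of a One Euro filter. It is a minimal illustration, not the patent's reference implementation: the class and parameter names are hypothetical, rate is the reciprocal of the gaze point detection frequency as defined above, and mincutoff, beta and dcutoff stand for the preset minimum critical value, the target scaling factor and the preset adjustment factor.

```python
import math

def smoothing_alpha(cutoff: float, rate: float) -> float:
    # alpha = 1 / {1 + [1 / (2*pi*cutoff)] / (1/rate)}, as in the formulas above
    return 1.0 / (1.0 + (1.0 / (2.0 * math.pi * cutoff)) / (1.0 / rate))

class GazeDejitter:
    """Single-axis gaze point de-jitter following the formulas above (a sketch)."""

    def __init__(self, rate: float, mincutoff: float = 1.0,
                 beta: float = 0.007, dcutoff: float = 1.0):
        self.rate = rate            # reciprocal of the detection frequency, e.g. 1/30 at 30 fps
        self.mincutoff = mincutoff  # preset minimum critical value
        self.beta = beta            # target scaling factor
        self.dcutoff = dcutoff      # preset adjustment factor
        self.x_filt = None          # x'_low-pass-filter: filtered previous position
        self.dx_filt = 0.0          # dx'_low-pass-filter: filtered previous distance

    def update(self, x: float) -> float:
        if self.x_filt is None:     # first sample: nothing to filter against yet
            self.x_filt = x
            return x
        # Step 302: first distance between the current and previous positions
        dx = (x - self.x_filt) * self.rate
        # Step 303: low-pass filter the first distance to obtain the second distance
        a_d = smoothing_alpha(self.dcutoff, self.rate)
        self.dx_filt = a_d * dx + (1.0 - a_d) * self.dx_filt
        # Step 304: distance-adaptive critical value and first adjustment coefficient
        cutoff = self.mincutoff + self.beta * abs(self.dx_filt)
        alpha = smoothing_alpha(cutoff, self.rate)
        # Step 305: target coordinate position of the current gaze point
        self.x_filt = alpha * x + (1.0 - alpha) * self.x_filt
        return self.x_filt
```

A second instance of the filter would handle the y axis, matching the note above that jitter in the x direction, the y direction, or both can be de-jittered; the default values of mincutoff, beta and dcutoff here are illustrative only.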
In one possible example, between the above steps 302 to 303, the following steps may be further included:
a1, judging whether the first distance is in a first preset range or not;
a2, executing the step of performing low-pass filtering on the first distance to obtain a second distance when the first distance is in the first preset range.
The first preset range may be set by the user or default by the system. In a specific implementation, the electronic device may determine whether the first distance is in a first preset range, and execute step 303 when the first distance is in the first preset range, and may not execute the subsequent steps when the first distance is not in the first preset range.
In one possible example, before the step 301, the following steps may be further included:
b1, determining the jitter offset of the electronic equipment;
and B2, executing the step of acquiring the first coordinate position of the current gaze point and the second coordinate position of the last gaze point of the current gaze point when the jitter offset is smaller than a preset threshold.
The second preset range may be set by the user or by system default; the jitter offset being smaller than the preset threshold corresponds to the offset lying in this range. In a specific implementation, the electronic device may determine its shake offset through the gyroscope, where the shake offset describes the degree to which the electronic device is shaking. When the shake offset is in the second preset range, step 301 may be executed, so that the corresponding electronic anti-shake is applied when the shake is small. When the shake offset is not in the second preset range, the shake eliminating operation on the gaze point may be skipped.
Further, in one possible example, the determining the jitter offset of the electronic device in the step B1 may include the following steps:
b11, obtaining a jitter variation curve of the electronic equipment in a preset time period, wherein the horizontal axis of the jitter variation curve is time, and the vertical axis of the jitter variation curve is amplitude;
B12, sampling the jitter variation curve to obtain a plurality of amplitude values;
b13, determining an average amplitude value according to the plurality of amplitude values;
b14, determining a first offset corresponding to the average amplitude according to a mapping relation between a preset amplitude and the offset;
b15, carrying out mean square error operation according to the plurality of amplitude values to obtain a target mean square error;
b16, determining a target adjustment coefficient corresponding to the target mean square error according to a mapping relation between the preset mean square error and the adjustment coefficient;
and B17, adjusting the first offset according to the target adjustment coefficient to obtain the jitter offset of the electronic equipment.
The preset time period may be preset or default, for example, the preset time period may be a period of time after receiving the shooting instruction. The mapping relation between the preset amplitude and the offset and the mapping relation between the preset mean square error and the adjustment coefficient can be stored in the electronic equipment in advance.
In a specific implementation, the jitter variation curve can be collected through the gyroscope, where the horizontal axis of the curve is time and the vertical axis is amplitude, and the amplitude represents the jitter amplitude. The electronic device can sample the jitter variation curve to obtain a plurality of amplitude values; the sampling mode can be sampling at preset time intervals, or random sampling, and the preset time interval can be preset or set by default.
Furthermore, the electronic device may determine an average amplitude from the plurality of amplitude values and determine the first offset corresponding to the average amplitude according to the preset mapping relation between amplitude and offset. In addition, the electronic device may perform a mean square error operation on the plurality of amplitude values to obtain the target mean square error. Since the mean square error reflects, to a certain extent, how stable the jitter is, the electronic device may determine the target adjustment coefficient corresponding to the target mean square error according to the preset mapping relation between mean square error and adjustment coefficient.
In this embodiment of the present application, the value range of the adjustment coefficient may be between-0.15 and 0.15, and of course, the value range may also be set by the user or updated by the system, further, the electronic device may adjust the first offset according to the target adjustment coefficient, so as to obtain the jitter offset, and the specific calculation manner of the jitter offset may refer to the following formula:
jitter offset = (1 + target adjustment coefficient) * first offset
In this way, the offset can be preliminarily determined from the amplitude and then adjusted according to the jitter stability (the mean square error), so that the degree of jitter offset is determined accurately and the jitter condition of the electronic device can be accurately detected.
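The following sketch illustrates steps B13 to B17 under stated assumptions: the two lookup functions stand in for the preset amplitude-to-offset and mean-square-error-to-coefficient mappings, whose concrete contents the text leaves to the implementation, and all names are hypothetical.

```python
import statistics

def offset_from_amplitude(avg_amplitude: float) -> float:
    # placeholder for the preset mapping between amplitude and offset
    return 0.5 * avg_amplitude

def coeff_from_mse(mse: float) -> float:
    # placeholder for the preset mapping between mean square error and adjustment
    # coefficient, clamped to the [-0.15, 0.15] range mentioned above
    return max(-0.15, min(0.15, 0.05 * mse))

def jitter_offset(amplitudes: list[float]) -> float:
    """Jitter offset from amplitude samples of the gyroscope's jitter variation curve."""
    avg = statistics.fmean(amplitudes)                           # B13: average amplitude
    first_offset = offset_from_amplitude(avg)                    # B14: first offset
    mse = statistics.fmean((a - avg) ** 2 for a in amplitudes)   # B15: target mean square error
    coeff = coeff_from_mse(mse)                                  # B16: target adjustment coefficient
    return (1.0 + coeff) * first_offset                          # B17: jitter offset
```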
In one possible example, the step 301 of obtaining the first coordinate position of the current gaze point may include the following steps:
11. acquiring a first image through a first camera;
12. a first coordinate position of a current gaze point of a target object in said first image is determined.
The electronic device may include a first camera, where the first camera may be a rear camera, a side camera or a front camera; it may also be a single camera, a dual camera or multiple cameras. A single camera may be an infrared camera or a visible light camera (a normal-view-angle camera or a wide-angle camera); a dual camera may be a normal-view-angle camera plus a wide-angle camera, or an infrared camera plus a visible light camera. As shown in fig. 3B, the electronic device may capture a first image through the first camera and display the first image on the display screen.
In a specific implementation, when the first camera is a front camera, as shown in fig. 3C, the target object may be taking a selfie; in this case not only can the first image be obtained through the first camera, but the eye gaze point of the target object in the first image can also be obtained.
In a specific implementation, when the first camera is not a front camera, the electronic device may further include a second camera; the second camera may be a front camera and may be used to implement the eye tracking function. As shown in fig. 3D, the first camera is used to shoot the photographed subject to obtain a first image, which is displayed on the display screen; the target object gazes at the first image, so the eye gaze point corresponding to the target object, that is, the first coordinate position of the current gaze point, can be determined in the first image.
In a possible example, before the step 301 of obtaining the first coordinate position of the current gaze point and the second coordinate position of the last gaze point of the current gaze point, the method may further include the following steps:
c1, determining N fixation points on a screen, wherein N is an integer greater than 1;
c2, determining an eye tracking positioning corresponding precision value corresponding to each of the N gaze points to obtain N precision values;
c3, determining interpolation parameters corresponding to each precision value in the N precision values to obtain N interpolation parameters;
c4, carrying out interpolation operation on each pixel point of the screen according to the N interpolation parameters to obtain an eyeball tracking positioning accuracy distribution diagram corresponding to the screen;
Then, the preset adjustment factor in step 31 may be obtained as follows:
determining the preset adjustment factor corresponding to the first coordinate position according to a preset mapping relation between coordinate positions and adjustment factors.
Wherein the interpolation parameter may be at least one of: an interpolation algorithm, interpolation control parameters corresponding to the interpolation algorithm, interpolation region parameters, and the like, which are not limited herein. The interpolation control parameters corresponding to the interpolation algorithm may be understood as control parameters of that algorithm, that is, adjustment parameters for adjusting the degree of interpolation; the interpolation region parameters may be understood as specifying the region within which interpolation is performed, and may include at least one of the following: the shape of the region, the position of the region, the area of the region, and the like, which are not limited herein.
In a specific implementation, each gaze point corresponds to an actual focus position of the eyeball (pupil), and a certain deviation exists between the actual focus position and the predicted focus position calculated by the eye tracking algorithm; this deviation determines the accuracy value corresponding to eye tracking positioning. Therefore, in this embodiment of the present application, the electronic device may determine the accuracy value corresponding to eye tracking positioning for each of the N gaze points, obtaining N accuracy values.
In addition, in a specific implementation, because the position corresponding to each of the N accuracy values is not fixed, and the sizes of the accuracy values differ, the interpolation parameters differ as well. Therefore the interpolation parameter corresponding to each of the N accuracy values can be determined, obtaining N interpolation parameters; each of the N interpolation parameters can be responsible for the interpolation operation in an independent region, and together the N interpolation parameters can realize the interpolation operation over the whole screen, obtaining the eye tracking positioning accuracy distribution map corresponding to the screen.
Further, in this embodiment of the present application, a preset mapping relation between coordinate positions and adjustment factors may be stored in the electronic device in advance, and the preset adjustment factor corresponding to the first coordinate position may then be determined according to this mapping relation.
In a possible example, the determining the precision value corresponding to the eye tracking positioning corresponding to each of the N gaze points in the step C2 may include the following steps:
C21, determining a third coordinate position of pupil fixation corresponding to a fixation point i, wherein the fixation point i is any one of the N fixation points;
c22, determining a fourth coordinate position corresponding to the gaze point i and determined by a prestored eye tracking algorithm;
and C23, determining an accuracy value corresponding to eye tracking and positioning corresponding to the gaze point i according to the third coordinate position and the fourth coordinate position.
The electronic device may store an eye tracking algorithm in advance, where the eye tracking algorithm is used to implement eye positioning. Taking the gaze point i as an example, where the gaze point i is any one of the N gaze points, the electronic device may determine the third coordinate position of pupil gaze corresponding to the gaze point i, that is, the actual gaze position, and may determine the fourth coordinate position (the predicted gaze position) corresponding to the gaze point i through the pre-stored eye tracking algorithm. It may then determine the accuracy value corresponding to eye tracking positioning for the gaze point i according to the third and fourth coordinate positions: for example, it may calculate the target Euclidean distance between the third coordinate position and the fourth coordinate position, and determine the accuracy value corresponding to the target Euclidean distance according to a preset mapping relation between Euclidean distance and accuracy value. In this way, an accuracy value relating the actual gaze position to the gaze position predicted by the eye tracking algorithm can be determined.
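A minimal sketch of steps C21 to C23, assuming the accuracy value is derived from the target Euclidean distance between the actual (third) and predicted (fourth) coordinate positions; the distance-to-accuracy mapping below is a hypothetical placeholder for the preset mapping relation:

```python
import math

def accuracy_value(actual: tuple[float, float], predicted: tuple[float, float]) -> float:
    # target Euclidean distance between the actual and predicted gaze positions
    d = math.hypot(actual[0] - predicted[0], actual[1] - predicted[1])
    # placeholder mapping: a smaller deviation yields a higher accuracy value
    return 1.0 / (1.0 + d)
```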
In a possible example, the step C3, determining the interpolation parameter corresponding to each of the N precision values to obtain N interpolation parameters may include the following steps:
c31, acquiring a target screen state parameter between an eyeball corresponding to an accuracy value j and the screen, wherein the accuracy value j is any one of the N accuracy values;
and C32, determining an interpolation parameter j corresponding to the target screen state parameter according to a mapping relation between the preset screen state parameter and the interpolation parameter.
In this embodiment of the present application, the screen state parameter may be at least one of the following: the screen size, the screen state, the distance between the gaze point and the user's pupil, the angle between the gaze point and the user's pupil, etc., without limitation. The screen state can be a landscape state or a portrait state.
In a specific implementation, taking the accuracy value j as an example, the accuracy value j is any one of the N accuracy values. The electronic device can acquire the target screen state parameter between the eyeball corresponding to the accuracy value j and the screen, and can also pre-store the mapping relation between preset screen state parameters and interpolation parameters; the interpolation parameter j corresponding to the target screen state parameter can then be determined according to this mapping relation, and in the same way the interpolation parameter corresponding to each accuracy value can be determined.
In a possible example, the step C4 of performing interpolation operation on each pixel point of the screen according to the N interpolation parameters to obtain an eye tracking positioning accuracy distribution map corresponding to the screen may include the following steps:
C41, determining an interpolation area corresponding to each interpolation parameter in the N interpolation parameters to obtain N areas to be interpolated, wherein the N areas to be interpolated cover each pixel point of the screen;
and C42, performing interpolation operation on the N areas to be interpolated according to the N interpolation parameters and the N accuracy values to obtain the eye tracking positioning accuracy distribution map corresponding to the screen.
The electronic device may determine the to-be-interpolated area corresponding to each interpolation parameter in the N interpolation parameters, so as to obtain N to-be-interpolated areas. Each to-be-interpolated area may be pre-planned and may correspond to one gaze point; alternatively, the area within a certain range of each of the N gaze points may be used as a to-be-interpolated area. An interpolation operation may then be performed on the N to-be-interpolated areas according to the N interpolation parameters and the N accuracy values to obtain the eye tracking positioning accuracy distribution map corresponding to the screen. That is, each to-be-interpolated area may take the accuracy value of its corresponding gaze point as a reference and perform the interpolation operation with its own interpolation parameter, so that the accuracy distribution map covering the whole screen can be generated quickly.
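The paragraph above leaves the exact interpolation rule open. The sketch below is one plausible reading, assuming nearest-gaze-point regions and an exponential falloff scaled by each region's interpolation parameter; both choices are assumptions for illustration, not the patent's prescribed rule.

import numpy as np

def accuracy_distribution_map(screen_hw, gaze_points, accuracy_values, interp_params):
    # screen_hw: (height, width) in pixels; gaze_points: N (x, y) positions;
    # accuracy_values, interp_params: the N accuracy values and interpolation
    # parameters determined in steps C2 and C3.
    h, w = screen_hw
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys], axis=-1).astype(float)           # (h, w, 2)
    pts = np.asarray(gaze_points, dtype=float)                   # (N, 2)
    d = np.linalg.norm(pixels[:, :, None, :] - pts[None, None], axis=-1)
    region = d.argmin(axis=-1)        # each pixel joins its nearest gaze point
    nearest_d = d.min(axis=-1)
    acc = np.asarray(accuracy_values)[region]
    k = np.asarray(interp_params)[region]
    # Assumed rule: accuracy decays away from the reference gaze point at a
    # rate controlled by the region's interpolation parameter.
    return acc * np.exp(-nearest_d / (k * max(h, w)))            # (h, w) map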
It can be seen that, in the gaze point shake eliminating method described in the embodiment of the present application, the first coordinate position of the current gaze point and the second coordinate position of the last gaze point are obtained; the first distance between the first coordinate position and the second coordinate position is determined; low-pass filtering is performed on the first distance to obtain the second distance; the first adjustment coefficient for adjusting the low-pass filtering strength is determined according to the second distance; and the target coordinate position of the current gaze point is determined according to the first coordinate position, the first adjustment coefficient and the second coordinate position. Since shake eliminating processing can be performed on the position of the current gaze point according to the position of the last gaze point, the current gaze point can be accurately determined, which improves user experience.
In accordance with the embodiment shown in fig. 3A, please refer to fig. 4. Fig. 4 is a schematic flow chart of a gaze point shake eliminating method according to an embodiment of the present application. As shown in the figure, the method is applied to the electronic device shown in fig. 1 or fig. 2 and includes:
401. Acquiring a first coordinate position of the current gaze point and a second coordinate position of the last gaze point of the current gaze point.
402. Determining a first distance between the first coordinate position and the second coordinate position.
403. Judging whether the first distance is in a first preset range.
404. When the first distance is in the first preset range, performing low-pass filtering on the first distance to obtain a second distance.
405. Determining a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance.
406. Determining a shake correction position corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient and the second coordinate position, so as to obtain the target coordinate position of the current gaze point.
The specific description of the steps 401 to 406 may refer to the corresponding steps of the gaze point shake eliminating method as described in fig. 3A, and will not be repeated herein.
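Read together, steps 401 to 406 amount to an adaptive low-pass filter that closely resembles the One Euro filter: the low-pass filtered inter-frame distance (the second distance) raises the cutoff when the eye genuinely moves and lowers it when the gaze point merely jitters. The sketch below follows that reading for one coordinate axis; the constants (min_cutoff standing in for the preset minimum critical value, beta for the target scaling factor, d_cutoff for the preset adjustment factor, max_jump for the first preset range) are assumed tuning values, not figures from the patent.

import math

def smoothing_factor(rate_hz, cutoff_hz):
    # Exponential-smoothing coefficient for a given cutoff frequency.
    tau = 1.0 / (2.0 * math.pi * cutoff_hz)
    return 1.0 / (1.0 + tau * rate_hz)

class GazeDejitterFilter:
    def __init__(self, rate_hz=60.0, min_cutoff=1.0, beta=0.05,
                 d_cutoff=1.0, max_jump=200.0):
        self.rate = rate_hz           # gaze point position detection frequency
        self.min_cutoff = min_cutoff  # preset minimum critical value (assumed)
        self.beta = beta              # target scaling factor (assumed)
        self.d_cutoff = d_cutoff      # preset adjustment factor (assumed)
        self.max_jump = max_jump      # first preset range bound (assumed)
        self.prev = None              # low-pass result at the second position
        self.prev_d = 0.0             # low-pass filtering distance

    def update(self, x):
        if self.prev is None:
            self.prev = x
            return x
        # Step 402: first distance, expressed per second via the detection rate.
        d = (x - self.prev) * self.rate
        # Step 403: outside the first preset range, treat the move as a real
        # saccade and pass it through unfiltered (a design assumption).
        if abs(x - self.prev) > self.max_jump:
            self.prev, self.prev_d = x, 0.0
            return x
        # Step 404: low-pass the first distance; a_d plays the role of the
        # second adjustment coefficient.
        a_d = smoothing_factor(self.rate, self.d_cutoff)
        d_hat = a_d * d + (1.0 - a_d) * self.prev_d
        self.prev_d = d_hat
        # Step 405: target critical value grows with the second distance,
        # yielding the first adjustment coefficient.
        a = smoothing_factor(self.rate, self.min_cutoff + self.beta * abs(d_hat))
        # Step 406: blend the first and second coordinate positions.
        self.prev = a * x + (1.0 - a) * self.prev
        return self.prev

For a two-dimensional gaze point, one filter instance per axis suffices; feeding each raw coordinate through update() yields the shake-corrected target coordinate position.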
It can be seen that, in the gaze point shake eliminating method described in the embodiment of the present application, the first coordinate position of the current gaze point and the second coordinate position of the last gaze point are obtained; the first distance between the two positions is determined; whether the first distance is in the first preset range is judged; when it is, low-pass filtering is performed on the first distance to obtain the second distance; the first adjustment coefficient for adjusting the low-pass filtering strength is determined according to the second distance; and the target coordinate position of the current gaze point is determined according to the first coordinate position, the first adjustment coefficient and the second coordinate position. On one hand, whether the gaze point shakes can be identified; on the other hand, since shake eliminating processing can be performed on the position of the current gaze point according to the position of the last gaze point, the current gaze point can be accurately determined, and user experience is improved.
In accordance with the above embodiments, referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in the embodiment of the present application, the programs include instructions for performing the following steps:
acquiring a first coordinate position of a current gaze point and a second coordinate position of a last gaze point of the current gaze point;
determining a first distance between the first coordinate location and the second coordinate location;
performing low-pass filtering on the first distance to obtain a second distance;
determining a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance;
and determining the target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjusting coefficient and the second coordinate position.
It can be seen that, in the electronic device described in the embodiment of the present application, the first coordinate position of the current gaze point and the second coordinate position of the last gaze point are obtained; the first distance between the two positions is determined; low-pass filtering is performed on the first distance to obtain the second distance; the first adjustment coefficient for adjusting the low-pass filtering strength is determined according to the second distance; and the target coordinate position of the current gaze point is determined according to the first coordinate position, the first adjustment coefficient and the second coordinate position. Since shake eliminating processing can be performed on the position of the current gaze point according to the position of the last gaze point, the current gaze point can be accurately determined, which improves user experience.
In one possible example, in said determining a first distance between said first coordinate position and said second coordinate position, the above-mentioned program comprises instructions for performing the steps of:
obtaining a low-pass filtering result corresponding to the second coordinate position;
acquiring a gaze point position detection frequency;
and determining the first distance according to the first coordinate position, the gaze point position detection frequency and the low-pass filtering result.
In one possible example, in said low-pass filtering said first distance to obtain a second distance, the program comprises instructions for:
acquiring a preset adjustment factor and a low-pass filtering distance, the low-pass filtering distance being a low-pass filtering result of the distance between the second coordinate position and the position of the gaze point preceding it;
determining a second adjustment coefficient for adjusting the low-pass filtering strength according to the gaze point position detection frequency and the preset adjustment factor;
and performing low-pass filtering on the first distance according to the second adjustment coefficient and the low-pass filtering distance to obtain the second distance.
In one possible example, in said determining a first adjustment factor for adjusting the low-pass filter strength in dependence of said second distance, the program comprises instructions for:
acquiring a preset minimum critical value and a target scaling factor required for adjusting the low-pass filtering strength;
determining a target critical value according to the preset minimum critical value, the target scaling factor and the second distance;
and determining the first adjustment coefficient according to the gaze point position detection frequency and the target critical value.
In one possible example, the above-described program further includes instructions for performing the steps of:
judging whether the first distance is in a first preset range or not;
and when the first distance is in the first preset range, executing the step of carrying out low-pass filtering on the first distance to obtain a second distance.
In one possible example, the above-described program further includes instructions for performing the steps of:
determining a jitter offset of the electronic device;
and executing the step of acquiring the first coordinate position of the current gaze point and the second coordinate position of the last gaze point of the current gaze point when the jitter offset is in a second preset range.
In one possible example, in said determining the jitter offset of the electronic device, the program comprises instructions for:
acquiring a jitter variation curve of the electronic equipment in a preset time period, wherein the horizontal axis of the jitter variation curve is time and the vertical axis is amplitude;
sampling the jitter variation curve to obtain a plurality of amplitude values;
determining an average amplitude value according to the plurality of amplitude values;
determining a first offset corresponding to the average amplitude according to a mapping relation between a preset amplitude and an offset;
performing mean square error operation according to the plurality of amplitude values to obtain a target mean square error;
according to a mapping relation between a preset mean square error and an adjustment coefficient, determining a target adjustment coefficient corresponding to the target mean square error;
and adjusting the first offset according to the target adjustment coefficient to obtain the jitter offset of the electronic equipment.
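The offset computation reduces to a few array operations. In the sketch below the two preset mappings are passed in as callables, since the patent states only that such mappings exist; the example lambdas in the trailing comment are placeholders.

import numpy as np

def jitter_offset(amplitudes, amp_to_offset, mse_to_coeff):
    # amplitudes: samples of the jitter variation curve over the preset period.
    # amp_to_offset / mse_to_coeff: assumed stand-ins for the preset
    # amplitude-to-offset and mean-square-error-to-coefficient mappings.
    amps = np.asarray(amplitudes, dtype=float)
    mean_amp = amps.mean()                        # average amplitude
    first_offset = amp_to_offset(mean_amp)        # first offset
    target_mse = np.mean((amps - mean_amp) ** 2)  # target mean square error
    coeff = mse_to_coeff(target_mse)              # target adjustment coefficient
    return coeff * first_offset                   # jitter offset of the device

# Example call with placeholder mappings:
# jitter_offset(samples, lambda a: 0.1 * a, lambda m: 1.0 / (1.0 + 0.01 * m))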
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the functional units of the electronic device according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 6A is a functional unit block diagram of the gaze point shake eliminating apparatus 600 related to the embodiment of the present application. The gaze point shake eliminating apparatus 600, applied to an electronic device, may include: an acquisition unit 601, a first determination unit 602, a filtering unit 603, a second determination unit 604, and a third determination unit 605, wherein,
the acquiring unit 601 is configured to acquire a first coordinate position of a current gaze point and a second coordinate position of a previous gaze point of the current gaze point;
the first determining unit 602 is configured to determine a first distance between the first coordinate position and the second coordinate position;
The filtering unit 603 is configured to perform low-pass filtering on the first distance to obtain a second distance;
the second determining unit 604 is configured to determine a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance;
the third determining unit 605 is configured to determine a target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient, and the second coordinate position.
It can be seen that the gaze point shake eliminating device described in the embodiment of the present application obtains the first coordinate position of the current gaze point and the second coordinate position of the last gaze point, determines the first distance between the two positions, performs low-pass filtering on the first distance to obtain the second distance, determines the first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance, and determines the target coordinate position of the current gaze point according to the first coordinate position, the first adjustment coefficient and the second coordinate position. Since shake eliminating processing can be performed on the position of the current gaze point according to the position of the last gaze point, the current gaze point can be accurately determined, which improves user experience.
In one possible example, in terms of the determining the first distance between the first coordinate position and the second coordinate position, the first determining unit 602 is specifically configured to:
obtaining a low-pass filtering result corresponding to the second coordinate position;
acquiring a gaze point position detection frequency;
and determining the first distance according to the first coordinate position, the gaze point position detection frequency and the low-pass filtering result.
In one possible example, in the aspect of low-pass filtering the first distance to obtain a second distance, the filtering unit 603 is specifically configured to:
acquiring a preset adjustment factor and a low-pass filtering distance, the low-pass filtering distance being a low-pass filtering result of the distance between the second coordinate position and the position of the gaze point preceding it;
determining a second adjustment coefficient for adjusting the low-pass filtering strength according to the gaze point position detection frequency and the preset adjustment factor;
and performing low-pass filtering on the first distance according to the second adjustment coefficient and the low-pass filtering distance to obtain the second distance.
In one possible example, in the aspect of determining the first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance, the second determining unit 604 is specifically configured to:
acquiring a preset minimum critical value and a target scaling factor required for adjusting the low-pass filtering strength;
determining a target critical value according to the preset minimum critical value, the target scaling factor and the second distance;
and determining the first adjustment coefficient according to the gaze point position detection frequency and the target critical value.
In one possible example, as shown in fig. 6B, fig. 6B is a further modified structure of the gaze point shake eliminating apparatus shown in fig. 6A. Compared to fig. 6A, the apparatus may further include a judging unit 606, which is specifically as follows:
the determining unit 606 is configured to determine whether the first distance is in a first preset range;
and when the first distance is within the first preset range, the filtering unit 603 performs the step of performing low-pass filtering on the first distance to obtain a second distance.
In one possible example, as shown in fig. 6C, fig. 6C is a further modified structure of the gaze point shake eliminating apparatus shown in fig. 6A. Compared to fig. 6A, the apparatus may further include a fourth determining unit 607, which is specifically as follows:
the fourth determining unit 607 is configured to determine a jitter offset of the electronic device;
and when the jitter offset is in the second preset range, the acquiring unit 601 performs the step of acquiring the first coordinate position of the current gaze point and the second coordinate position of the last gaze point of the current gaze point.
Further, in one possible example, in the determining the jitter offset of the electronic device, the fourth determining unit 607 is specifically configured to:
acquiring a jitter variation curve of the electronic equipment in a preset time period, wherein the horizontal axis of the jitter variation curve is time and the vertical axis is amplitude;
sampling the jitter variation curve to obtain a plurality of amplitude values;
determining an average amplitude value according to the plurality of amplitude values;
determining a first offset corresponding to the average amplitude according to a mapping relation between a preset amplitude and an offset;
performing mean square error operation according to the plurality of amplitude values to obtain a target mean square error;
according to a mapping relation between a preset mean square error and an adjustment coefficient, determining a target adjustment coefficient corresponding to the target mean square error;
and adjusting the first offset according to the target adjustment coefficient to obtain the jitter offset of the electronic equipment.
The embodiments of the present application also provide a computer storage medium storing a computer program for electronic data exchange, the computer program causing a computer to execute part or all of the steps of any method described in the above method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the above-mentioned method of the various embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. A gaze point shake eliminating method, characterized in that the method comprises:
acquiring a first coordinate position of a current gaze point and a second coordinate position of a last gaze point of the current gaze point;
determining a first distance between the first coordinate location and the second coordinate location;
performing low-pass filtering on the first distance to obtain a second distance;
determining a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance;
determining a target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient and the second coordinate position;
wherein said determining a first distance between said first coordinate location and said second coordinate location comprises:
obtaining a low-pass filtering result corresponding to the second coordinate position;
acquiring a gaze point position detection frequency;
determining the first distance according to the first coordinate position, the gaze point position detection frequency and the low-pass filtering result;
the low-pass filtering the first distance to obtain a second distance includes:
acquiring a preset adjustment factor and a low-pass filtering distance, the low-pass filtering distance being a low-pass filtering result of the distance between the second coordinate position and the position of the gaze point preceding it;
determining a second adjustment coefficient for adjusting the low-pass filtering strength according to the gaze point position detection frequency and the preset adjustment factor;
and performing low-pass filtering on the first distance according to the second adjustment coefficient and the low-pass filtering distance to obtain the second distance.
2. The method of claim 1, wherein the determining a first adjustment coefficient for adjusting the low-pass filtering strength according to the second distance comprises:
acquiring a preset minimum critical value and a target scaling factor required for adjusting the low-pass filtering strength;
determining a target critical value according to the preset minimum critical value, the target scaling factor and the second distance;
and determining the first adjustment coefficient according to the gaze point position detection frequency and the target critical value.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
judging whether the first distance is in a first preset range or not;
and when the first distance is in the first preset range, executing the step of carrying out low-pass filtering on the first distance to obtain a second distance.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a jitter offset of the electronic device;
and executing the step of acquiring the first coordinate position of the current gaze point and the second coordinate position of the last gaze point of the current gaze point when the jitter offset is in a second preset range.
5. The method of claim 4, wherein the determining the jitter offset of the electronic device comprises:
acquiring a jitter variation curve of the electronic equipment in a preset time period, wherein the horizontal axis of the jitter variation curve is time and the vertical axis is amplitude;
sampling the jitter variation curve to obtain a plurality of amplitude values;
determining an average amplitude value according to the plurality of amplitude values;
determining a first offset corresponding to the average amplitude according to a mapping relation between a preset amplitude and an offset;
performing mean square error operation according to the plurality of amplitude values to obtain a target mean square error;
according to a mapping relation between a preset mean square error and an adjustment coefficient, determining a target adjustment coefficient corresponding to the target mean square error;
and adjusting the first offset according to the target adjustment coefficient to obtain the jitter offset of the electronic equipment.
6. A gaze point shake eliminating apparatus, the apparatus comprising: an acquisition unit, a first determination unit, a filtering unit, a second determination unit and a third determination unit, wherein,
the acquisition unit is used for acquiring a first coordinate position of a current gaze point and a second coordinate position of a last gaze point of the current gaze point;
The first determining unit is used for determining a first distance between the first coordinate position and the second coordinate position;
the filtering unit is used for carrying out low-pass filtering on the first distance to obtain a second distance;
the second determining unit is used for determining a first adjusting coefficient for adjusting the low-pass filtering strength according to the second distance;
the third determining unit is configured to determine a target coordinate position of the current gaze point corresponding to the first coordinate position according to the first coordinate position, the first adjustment coefficient, and the second coordinate position;
wherein said determining a first distance between said first coordinate location and said second coordinate location comprises:
obtaining a low-pass filtering result corresponding to the second coordinate position;
acquiring a gaze point position detection frequency;
determining the first distance according to the first coordinate position, the gaze point position detection frequency and the low-pass filtering result;
the low-pass filtering the first distance to obtain a second distance includes:
acquiring a preset adjustment factor and a low-pass filtering distance, the low-pass filtering distance being a low-pass filtering result of the distance between the second coordinate position and the position of the gaze point preceding it;
determining a second adjustment coefficient for adjusting the low-pass filtering strength according to the gaze point position detection frequency and the preset adjustment factor;
and performing low-pass filtering on the first distance according to the second adjustment coefficient and the low-pass filtering distance to obtain the second distance.
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
8. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any one of claims 1 to 5.