CN112989878A - Pupil detection method and related product - Google Patents
Pupil detection method and related product
- Publication number
- CN112989878A (application number CN201911283540.4A)
- Authority
- CN
- China
- Prior art keywords
- eye image
- template
- pupil
- region
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Eye Examination Apparatus (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application disclose a pupil detection method and a related product. The method includes: acquiring a first eye image; if the first eye image includes an occluded region, determining, in a preset human eye image template library, a target human eye image template that is successfully matched with the first eye image; repairing the first eye image according to the target human eye image template to obtain a second eye image; and performing pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil is occluded in the captured first eye image, the occluded first eye image is repaired by means of the human eye image template, which solves the problem of the pupil being occluded and improves the efficiency and accuracy of pupil detection.
Description
Technical Field
The application relates to the technical field of pupil detection, in particular to a pupil detection method and a related product.
Background
With the widespread use of electronic devices (such as mobile phones and tablet computers), electronic devices support ever more applications and ever more powerful functions. They are developing in increasingly diverse and personalized directions and have become indispensable electronic products in users' daily lives.
An electronic device can implement pupil detection, and pupil detection can be applied in practical interaction scenarios. For example, the electronic device may perform pupil detection to enable eyeball tracking, determine the point on the screen that the user is gazing at, and then carry out the next operation according to that gaze point. An existing eyeball tracking system identifies the pupil position from a single captured frame. However, when the eye image is actually captured, the pupil may be occluded at certain angles, for example when the user's eyelid droops, the user looks down, or the user wears glasses. The pupil center position then cannot be identified, which affects the efficiency and accuracy of the eyeball tracking system.
Disclosure of Invention
The embodiment of the application provides a pupil detection method and a related product, which can solve the problem that pupils are shielded and improve the efficiency and accuracy of pupil detection.
In a first aspect, an embodiment of the present application provides a pupil detection method, including the following steps:
acquiring a first eye image;
if the first eye image comprises a shielded area, determining a target eye image template which is successfully matched with the first eye image in a preset eye image template library, wherein the shielded area at least comprises a partial pupil area;
repairing the first eye image according to the target eye image template to obtain a second eye image;
and carrying out pupil detection based on the second eye image to obtain the pupil center position.
In a second aspect, an embodiment of the present application provides a pupil detection apparatus, including:
an acquisition unit configured to acquire a first eye image;
a determining unit, configured to determine a target human eye image template successfully matched with the first eye image in a preset human eye image template library if the first eye image includes an occluded region, where the occluded region at least includes a partial pupil region;
the restoration unit is used for restoring the first eye image according to the target eye image template to obtain a second eye image;
and the detection unit is used for carrying out pupil detection based on the second eye image to obtain the pupil center position.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the pupil detection method and the related product provided in the embodiments of the present application acquire a first eye image; if the first eye image includes an occluded region, determine, in a preset human eye image template library, a target human eye image template that is successfully matched with the first eye image; repair the first eye image according to the target human eye image template to obtain a second eye image; and perform pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil is occluded in the captured first eye image, the occluded first eye image is repaired by means of the human eye image template, which solves the problem of the pupil being occluded and improves the efficiency and accuracy of pupil detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic illustration showing an eyeball tracking application running in an electronic device in an eyeball tracking scene according to an embodiment of the application;
fig. 1C is a schematic flowchart of a pupil detection method according to an embodiment of the present disclosure;
fig. 1D is a schematic view of a scene where image shooting is performed when pupil detection is performed by a camera according to an embodiment of the present disclosure;
fig. 1E is a schematic diagram illustrating a first eye image with a completely blocked pupil area according to an embodiment of the present disclosure;
fig. 1F is a schematic diagram illustrating a first eye image with a partially blocked pupil area according to an embodiment of the present disclosure;
fig. 1G is a schematic illustration showing that each template pixel point in the template pupil region replaces a region pixel point corresponding to the template pixel point in the occluded region according to the present application;
fig. 1H is a schematic illustration showing a plurality of first region pixel points being repaired according to a plurality of first template pixel points according to an embodiment of the present application;
fig. 1I is another schematic illustration showing a method for repairing a plurality of first region pixel points according to a plurality of first template pixel points according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another pupil detection method provided in the embodiment of the present application;
fig. 3A is a schematic flowchart illustrating a process of determining a gaze point in an eyeball tracking scene according to an embodiment of the present disclosure;
fig. 3B is a schematic flowchart of pupil detection in an eyeball tracking scene according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a pupil detection device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device related to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices (smart watches, smart bracelets, wireless headsets, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application. The electronic device 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110, where:
the electronic device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may include memory, such as hard drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and so on, and embodiments of the present application are not limited thereto. Processing circuitry in storage and processing circuitry 110 may be used to control the operation of electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, to name a few.
The electronic device 100 may include input-output circuitry 150. The input-output circuit 150 may be used to enable the electronic device 100 to input and output data, i.e., to allow the electronic device 100 to receive data from an external device and to output data from the electronic device 100 to an external device. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include an ultrasonic fingerprint identification module, and may also include an ambient light sensor, an optical or capacitive proximity sensor, a touch sensor (for example, an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors. The ultrasonic fingerprint identification module may be integrated below the screen, or may be arranged on the side or back of the electronic device, which is not limited here; this ultrasonic fingerprint identification module can be used to collect fingerprint images.
The sensor 170 may include an infrared (IR) camera or an RGB camera. When an IR camera captures an image, the pupil reflects infrared light, so an IR camera captures the pupil more accurately than an RGB camera. An RGB camera requires more subsequent processing for pupil detection; its computational accuracy can be higher and its generality better than an IR camera, but the amount of computation is larger.
Input-output circuit 150 may also include one or more display screens, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, organic light emitting diode display, electronic ink display, plasma display, display using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The electronic device 100 may also include an audio component 140. The audio component 140 may be used to provide audio input and output functionality for the electronic device 100. The audio components 140 in the electronic device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the electronic device 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control the operation of electronic device 100, and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from electronic device 100.
In an eyeball tracking scenario, an electronic device may run an eyeball tracking application. Referring to fig. 1B, fig. 1B is a schematic diagram of an eyeball tracking application running in an electronic device in an eyeball tracking scenario according to an embodiment of the present application. The memory in the storage and processing circuit 110 may store a plurality of application programs, such as an electronic book, a browser, a payment application, and system applications, and one or more microprocessors in the storage and processing circuit 110 may run the installed application programs to implement various functions that require eyeball tracking, such as an unlocking function or a gaze point tracking function. The eyeball tracking service is responsible for managing the gaze point algorithm, gaze point post-processing, input processing, authentication, and parameter setting. The eyeball tracking core algorithm may include a calibration algorithm and a gaze point estimation algorithm, and the eyeball tracking strategy relates to gaze point post-processing, which mainly includes filtering gaze point jumps and converting gaze points into monitored gaze point input. Eyeball tracking authentication is used for callbacks of all modules and is responsible for judging whether an authentication requester is allowed; the parameter setting module is used for parsing the configuration and updating it in real time; and the pupil repairing module is used for creating the human eye image template library and repairing the collected first eye image. The eyeball tracking service can also call a camera application through the camera's Native Development Kit (NDK) interface; the camera application can invoke the camera and collect the first eye image through it.
Referring to fig. 1C, fig. 1C is a schematic flowchart of a pupil detection method according to an embodiment of the present disclosure, applied to the electronic device shown in fig. 1A, as shown in fig. 1C, the pupil detection method includes:
101. a first eye image is acquired.
The first eye image is an image containing human eyes of the user.
In the embodiment of the present application, the first eye image may be captured by a camera provided on the electronic device. The collected first eye image can be used for eyeball tracking: in an eyeball tracking scenario, the electronic device collects the first eye image through the camera and performs pupil detection on it to obtain the pupil center position, then determines the point on the screen that the user is gazing at according to the pupil center position, and controls the operation of the electronic device according to that gaze point. Referring to fig. 1D, fig. 1D is a schematic view of a scene in which the first eye image is captured by a camera according to an embodiment of the present disclosure. In a specific shooting scene, when the user's eyelid droops, the user looks down, or the user wears glasses, the pupil may be occluded in the first eye image collected at certain angles. If the pupil region in the captured first eye image is occluded, the pupil center position cannot be obtained directly when pupil detection is performed on the first eye image, so the first eye image can be repaired.
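For illustration only, the following Python sketch shows one way such a first eye image might be captured with an OpenCV-accessible camera. The camera index, the grayscale conversion, and the function name are assumptions made for this sketch; the application itself only states that the image is collected by an IR or RGB camera on the electronic device.

```python
import cv2

def acquire_first_eye_image(camera_index=0):
    """Capture one frame from the device camera as the first eye image.

    camera_index and the grayscale conversion are illustrative assumptions;
    the application only requires that a camera collects the eye image.
    """
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture an eye image")
    # Grayscale is a common working format for pupil detection pipelines.
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```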
102. And if the first eye image comprises a blocked area, determining a target eye image template which is successfully matched with the first eye image in a preset eye image template library, wherein the blocked area at least comprises a part of pupil area.
The occluded region includes at least a partial pupil region, which specifically covers the following cases: the pupil region in the first eye image is completely occluded while the other eye regions are not completely occluded, or one part of the pupil region in the first eye image is occluded while the other part of the pupil region is not occluded and the other eye regions are not completely occluded. Referring to fig. 1E and fig. 1F, fig. 1E is a schematic diagram of a first eye image in which the pupil region is completely occluded according to an embodiment of the present disclosure, and fig. 1F is a schematic diagram of a first eye image in which the pupil region is partially occluded according to an embodiment of the present disclosure.
In a specific implementation, the electronic device may preset a human eye image template library. The template library may include a plurality of human eye image templates collected at different angles and different distances, and the human eye region of each template is unoccluded (a minimal data-structure sketch is given below). Therefore, if the first eye image is a partially occluded eye image, a target human eye image template that is successfully matched with the first eye image can be determined in the preset human eye image template library.
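As a minimal sketch of how such a template library could be organized, assuming each template stores its unoccluded eye image together with the shooting angle and distance at which it was collected (the field names are illustrative, not taken from the application):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EyeTemplate:
    image: np.ndarray         # unoccluded human eye image
    shooting_angle: float     # angle at which the template was collected
    shooting_distance: float  # distance at which the template was collected
    features: np.ndarray      # precomputed eye feature set for matching

# The preset human eye image template library is simply a collection of
# templates collected at different angles and distances.
template_library: list[EyeTemplate] = []
```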
Optionally, in the step 102, determining a target human eye image template successfully matched with the first eye image in a preset human eye image template library may include the following steps:
21. matching the first eye image with a plurality of eye image templates in the preset eye image template library to obtain a plurality of matching values;
22. if a target matching value larger than a preset matching value exists in the plurality of matching values, determining that the human eye image template corresponding to the target matching value is successfully matched with the first eye image, and determining the human eye image template corresponding to the target matching value as a target human eye image template.
To match the first eye image against the plurality of human eye image templates in the preset template library, a first shooting angle and a first shooting distance of the camera that captured the first eye image may first be obtained. The first shooting angle is then matched against the shooting angle corresponding to each of the human eye image templates to obtain a first matching score, and the first shooting distance is matched against the shooting distance corresponding to each template to obtain a second matching score. The sum of the first matching score and the second matching score gives the matching value corresponding to that template. In this way a plurality of matching values between the templates and the first eye image can be determined, with each template corresponding to one matching value. Finally, the human eye image template whose matching value is greater than the preset matching value can be determined as the target human eye image template. Thus, the target human eye image template whose shooting angle is closest to the first shooting angle and whose shooting distance is closest to the first shooting distance can be determined, as sketched below.
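A minimal sketch of this pose-based matching, assuming the first and second matching scores are computed as inverse absolute differences (the application only states that the two scores are summed, so the scoring functions and threshold value are assumptions):

```python
def match_by_pose(first_angle, first_distance, templates, preset_match_value):
    """Sum an angle score and a distance score for each template and keep
    the template whose matching value exceeds the preset value.

    The inverse-difference scoring is an illustrative assumption; the
    application only specifies that two scores are computed and summed.
    """
    best_template, best_value = None, preset_match_value
    for tpl in templates:
        first_score = 1.0 / (1.0 + abs(first_angle - tpl.shooting_angle))
        second_score = 1.0 / (1.0 + abs(first_distance - tpl.shooting_distance))
        match_value = first_score + second_score
        if match_value > best_value:
            best_template, best_value = tpl, match_value
    return best_template  # None if no template exceeds the preset value
```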
Optionally, when matching the first eye image against the plurality of human eye image templates in the preset template library, feature extraction may instead be performed on the first eye image to obtain a first eye feature set. The first eye feature set is then matched against the eye feature set corresponding to each of the human eye image templates to obtain a matching value for that template, so that a plurality of matching values between the templates and the first eye image can be determined, with each template corresponding to one matching value. Finally, the template corresponding to a target matching value greater than the preset matching value among the plurality of matching values can be determined as the target human eye image template, i.e. the template whose eye features are closest to the first eye feature set (see the sketch after this paragraph).
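A corresponding sketch of the feature-based alternative, assuming the feature sets are fixed-length vectors and using cosine similarity as the matching value (the application does not fix a similarity measure, so this choice is an assumption):

```python
import numpy as np

def match_by_features(first_eye_features, templates, preset_match_value):
    """Compare the first eye feature set against each template's feature set
    and return the template with a matching value above the preset value."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    best_template, best_value = None, preset_match_value
    for tpl in templates:
        value = cosine(first_eye_features, tpl.features)
        if value > best_value:
            best_template, best_value = tpl, value
    return best_template
```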
Optionally, in this embodiment of the present application, the following steps may also be included:
1021. performing feature extraction on the first eye image to obtain a plurality of eye image features, wherein the eye image features comprise a plurality of eye contour features;
1022. predicting a pupil reference region according to the eye contour features;
1023. and matching partial eye image features of the eye image features in the pupil reference region with preset pupil image features to obtain a matching result, and determining that the first eye image comprises a shielded region if the matching result does not meet a preset condition.
In this embodiment of the application, after the first eye image is obtained, feature extraction may be performed on it to obtain a plurality of eye image features. Since different regions of the first eye image present different eye image features, a pupil reference region can be predicted from the plurality of eye contour features, and the part of the eye image features that falls within the pupil reference region can then be matched against preset pupil image features, which may be stored in the electronic device in advance, to obtain a matching result. Specifically, assume that the pupil reference region contains m eye image features, where m is an integer greater than 1. Each of the m eye image features can be matched against the preset pupil image features to obtain m feature matching values, and the m feature matching values are taken as the matching result. If n of the m feature matching values are less than or equal to a preset matching value, where n is a positive integer less than m, and the ratio of n to m is greater than a preset ratio, it is determined that the first eye image includes an occluded region, i.e. that the first eye image is a partially occluded eye image. A sketch of this counting rule follows.
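A sketch of the n-of-m counting rule described above, assuming the region features and the preset pupil feature are vectors and using cosine similarity; the default thresholds are illustrative, not values given by the application:

```python
import numpy as np

def is_pupil_region_occluded(region_features, preset_pupil_feature,
                             preset_match_value=0.6, preset_ratio=0.5):
    """Return True if, among the m features in the pupil reference region,
    the n features whose match value is <= the preset matching value make
    up more than the preset ratio of m."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    m = len(region_features)
    n = sum(1 for f in region_features
            if cosine(f, preset_pupil_feature) <= preset_match_value)
    return m > 0 and n / m > preset_ratio
```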
103. And repairing the first eye image according to the target eye image template to obtain a second eye image.
Considering that the pupil region is occluded in the first eye image and that the target human eye image template is the template closest to the first eye image, the first eye image can be repaired according to the target human eye image template to obtain a second eye image containing a complete pupil region.
Optionally, in the step 103, the repairing the first eye image according to the target eye image template to obtain a second eye image may include the following steps:
31. determining the occluded region;
32. determining a template pupil area corresponding to the shielded area in the target human eye image template;
33. and repairing the shielded region according to the template pupil region to obtain the second eye image.
The first eye image may be subjected to image analysis, and the region corresponding to the set of pixel points that is inconsistent with preset eye pixel points (for example, preset pupil-region pixel points) in the first eye image is determined as the occluded region. Alternatively, the region corresponding to the set of pixel points in the first eye image that is inconsistent with the pixel information of the target human eye image template may be determined as the occluded region, as in the sketch below.
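A minimal sketch of the second option, assuming the first eye image and the target template are already spatially aligned and comparing them pixel by pixel (the difference threshold is an illustrative value):

```python
import numpy as np

def estimate_occluded_mask(first_eye_image, template_image, diff_threshold=40):
    """Mark as occluded the pixels of the first eye image that are
    inconsistent with the corresponding pixels of the target template."""
    diff = np.abs(first_eye_image.astype(np.int16) - template_image.astype(np.int16))
    return diff > diff_threshold
```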
To determine the template pupil region in the target human eye image template, pupil feature extraction may be performed on the target human eye image template to obtain pupil feature points, and a region of a preset size containing the pupil feature points is determined as the template pupil region according to those feature points.
The occluded region is then repaired according to the template pupil region to obtain the second eye image: the pixel points of the template pupil region replace the corresponding pixel points in the first eye image, yielding the second eye image.
Optionally, in the step 33, the repairing the blocked region according to the template pupil region to obtain the second eye image may include the following steps:
3301. and replacing the area pixel point corresponding to the template pixel point in the shielded area by each template pixel point of the template pupil area to obtain the second eye image.
Referring to fig. 1G, fig. 1G is a schematic diagram in which each template pixel point in the template pupil region replaces the region pixel point corresponding to that template pixel point in the occluded region. Whether the occluded region covers the whole pupil region or only part of it, each template pixel point of the template pupil region can replace the corresponding region pixel point in the occluded region, so that the template pupil region of the target human eye image template is directly used as the repaired pupil region of the first eye image to obtain the second eye image (a minimal sketch follows).
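A minimal sketch of this full replacement, assuming the first eye image and the target template are aligned and that boolean masks mark the occluded region and the template pupil region (the application does not spell out the registration step):

```python
def repair_by_full_replacement(first_eye_image, template_image,
                               occluded_mask, template_pupil_mask):
    """Replace each occluded pixel with the corresponding template pupil
    pixel to obtain the second eye image."""
    second_eye_image = first_eye_image.copy()
    replace = occluded_mask & template_pupil_mask
    second_eye_image[replace] = template_image[replace]
    return second_eye_image
```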
Optionally, if the occluded region includes an occluded partial pupil region, repairing the occluded region according to the template pupil region in step 33 to obtain the second eye image may include the following steps:
3302. determining a plurality of first area pixel points of which the noise is greater than a preset threshold value in the shielded area;
3303. determining a plurality of first template pixel points which correspond to the plurality of first area pixel points in the template pupil area one to one;
3304. and repairing the first area pixel points according to the first template pixel points to obtain the second eye image.
Referring to fig. 1H, fig. 1H is a schematic diagram of repairing a plurality of first region pixel points according to a plurality of first template pixel points. If the user's pupil region is partially occluded, then in order to repair the occluded partial pupil region in the first eye image, a plurality of first region pixel points whose noise in the occluded region is greater than a preset threshold may be determined and taken as the pixel points to be repaired. A plurality of first template pixel points in the template pupil region that correspond one to one to the first region pixel points are then determined, and finally the first region pixel points are repaired according to the first template pixel points to obtain the second eye image. A sketch of the noise-based selection is given below.
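The application does not define the noise measure; the sketch below uses the absolute deviation from a local box-filter mean as an illustrative stand-in for selecting the first region pixel points to repair:

```python
import cv2
import numpy as np

def select_noisy_pixels(first_eye_image, occluded_mask, noise_threshold=20.0):
    """Mark the pixels inside the occluded region whose noise exceeds a
    preset threshold; these are the first region pixel points to repair."""
    local_mean = cv2.blur(first_eye_image.astype(np.float32), (5, 5))
    noise = np.abs(first_eye_image.astype(np.float32) - local_mean)
    return occluded_mask & (noise > noise_threshold)
```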
In one possible embodiment, a second set of region pixel points is repaired according to a second set of template pixel points; specifically, each template pixel point in the second set of template pixel points can replace the corresponding pixel point in the second set of region pixel points to obtain the second eye image.
Optionally, in the step 3304, the repairing the plurality of first area pixel points according to the plurality of first template pixel points to obtain the second eye image may include the following steps:
3341. determining a plurality of pupil region pixel points which are not shielded in the first eye image;
3342. determining a weighting coefficient according to the plurality of pupil area pixel points;
3343. determining a plurality of target template pixel points according to the weighting coefficients and the plurality of first template pixel points, wherein the plurality of target template pixel points are in one-to-one correspondence with the plurality of first area pixel points;
3344. and replacing each target template pixel point in the plurality of target template pixel points with a first region pixel point corresponding to the target template pixel point in the shielded region to obtain the second eye image.
Considering that, when the user's pupil region is only partially occluded, part of the pupil region remains unoccluded, a plurality of unoccluded pupil-region pixel points in the first eye image can be determined. In the embodiment of the present application, referring to fig. 1I, fig. 1I is another schematic diagram of repairing a plurality of first region pixel points according to a plurality of first template pixel points: the gray value of each of the unoccluded pupil-region pixel points is determined to obtain a plurality of pixel values, and the average of these pixel values is computed to obtain a mean pixel value. A mapping between mean values and weighting coefficients is stored in the electronic device in advance, so the weighting coefficient corresponding to the mean pixel value is determined from this mapping. A weighted calculation is then performed with the weighting coefficient and the pixel value of each first template pixel point to obtain a plurality of target template pixel points, and finally each target template pixel point replaces the first region pixel point corresponding to it in the occluded region, yielding the second eye image in which the occluded partial pupil region has been restored. In this way, the occluded part of the pupil region is repaired with reference to the unoccluded part, and the first eye image can be repaired more accurately to obtain the second eye image. A sketch of this weighted repair follows.
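A sketch of this weighted repair. The application only says that a stored mapping turns the mean gray value of the unoccluded pupil pixels into a weighting coefficient; scaling the template pupil so its brightness matches the visible pupil is an assumption made here to stand in for that mapping:

```python
import numpy as np

def repair_by_weighted_template(first_eye_image, template_image,
                                repair_mask, unoccluded_pupil_mask):
    """Repair the occluded part of the pupil using both the template pupil
    pixels and the brightness of the unoccluded part of the pupil."""
    second_eye_image = first_eye_image.astype(np.float32).copy()
    # Mean gray value of the unoccluded pupil pixels in the first eye image.
    visible_mean = first_eye_image[unoccluded_pupil_mask].mean()
    template_mean = template_image[repair_mask].mean() + 1e-9
    weight = visible_mean / template_mean            # assumed mapping to a coefficient
    # Weighted calculation with each first template pixel point.
    target_pixels = weight * template_image[repair_mask].astype(np.float32)
    second_eye_image[repair_mask] = target_pixels    # replace the occluded pixels
    return np.clip(second_eye_image, 0, 255).astype(np.uint8)
```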
104. And carrying out pupil detection based on the second eye image to obtain the pupil center position.
After the first eye image is repaired to obtain a second eye image containing complete pupil information, pupil detection can be performed on the second eye image to obtain the pupil center position. In a specific embodiment, pupil feature extraction may be performed on the second eye image to obtain target pupil features, and the pupil center position in the second eye image is then determined according to the target pupil feature points (a common, illustrative approach is sketched below).
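The application does not prescribe a particular pupil-center estimator; thresholding the dark pupil region of the repaired image and taking the centroid of the largest blob, as sketched below, is one common, illustrative approach:

```python
import cv2

def detect_pupil_center(second_eye_image, dark_threshold=60):
    """Estimate the pupil center position from the repaired eye image."""
    _, binary = cv2.threshold(second_eye_image, dark_threshold, 255,
                              cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    moments = cv2.moments(pupil)
    if moments["m00"] == 0:
        return None
    # Centroid of the largest dark blob, taken as the pupil center.
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```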
It can be seen that, in the embodiment of the present application, a first eye image is obtained; if the first eye image includes an occluded region, a target human eye image template that is successfully matched with the first eye image is determined in a preset human eye image template library; the first eye image is repaired according to the target human eye image template to obtain a second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil is occluded in the captured first eye image, the occluded first eye image is repaired by means of the human eye image template, which solves the problem of the pupil being occluded and improves the efficiency and accuracy of pupil detection.
Referring to fig. 2, fig. 2 is a schematic flow chart of a pupil detection method according to an embodiment of the present disclosure, where the method includes:
201. a first eye image is acquired.
202. And if the first eye image comprises a blocked area, determining a target eye image template which is successfully matched with the first eye image in a preset eye image template library, wherein the blocked area at least comprises a part of pupil area.
203. Determining the occluded region.
204. And determining a template pupil area corresponding to the shielded area in the target human eye image template.
205. And repairing the shielded region according to the template pupil region to obtain the second eye image.
206. And carrying out pupil detection based on the second eye image to obtain the pupil center position.
The specific implementation process of the steps 201-206 can refer to the corresponding description in the steps 101-104, and will not be described herein again.
It can be seen that, in the embodiment of the present application, a first eye image is obtained; if the first eye image includes an occluded region, a target human eye image template that is successfully matched with the first eye image is determined in a preset human eye image template library; the occluded region is determined; the template pupil region corresponding to the occluded region is determined in the target human eye image template; the occluded region is repaired according to the template pupil region to obtain a second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil is occluded in the collected first eye image, the occluded first eye image is repaired by means of the human eye image template, which solves the problem of the pupil being occluded and improves the efficiency and accuracy of pupil detection.
Referring to fig. 3A-3B, fig. 3A is a schematic flow chart illustrating a process of determining a gaze point in an eyeball tracking scene according to an embodiment of the present disclosure, and fig. 3B is a schematic flow chart illustrating a process of performing pupil detection in an eyeball tracking scene according to an embodiment of the present disclosure, where the method includes:
As shown in fig. 3A, while the electronic device runs the eyeball tracking application, the application may request the eyeball tracking gaze position from the eyeball tracking service, and the eyeball tracking service may invoke the camera application. Specifically, the eyeball tracking service requests the camera application to obtain a first eye image, so the camera application turns on a camera of the electronic device, collects the first eye image through the camera, and passes the collected first eye image back to the camera application, which in turn passes it to the eyeball tracking service. The eyeball tracking service then calls the eyeball tracking core algorithm stored in the memory: if the pupil is occluded, the first eye image is repaired to obtain a second eye image with complete pupil information, eyeball tracking is performed on the second eye image to obtain the eyeball gaze point position, and finally the eyeball tracking service passes the gaze point position to the eyeball tracking application.
As shown in fig. 3B, after a first eye image is acquired by the camera, feature extraction may be performed on it to obtain a plurality of eye image features, including a plurality of eye contour features. A pupil reference region is predicted from the eye contour features, and the part of the eye image features within the pupil reference region is matched against preset pupil image features to obtain a matching result. If the matching result does not satisfy a preset condition, it is determined that the first eye image includes an occluded region, and a target human eye image template that is successfully matched with the first eye image is determined in the preset human eye image template library. The occluded region in the first eye image is determined, and the template pupil region corresponding to the occluded region is determined in the target template. If the occluded region covers the whole pupil region, each template pixel point of the template pupil region replaces the corresponding region pixel point in the occluded region to obtain the second eye image. If the occluded region covers only part of the pupil region, a plurality of first region pixel points whose noise in the occluded region is greater than a preset threshold are determined; a plurality of first template pixel points in the template pupil region corresponding one to one to the first region pixel points are determined; and the first region pixel points are repaired according to the first template pixel points to obtain the second eye image. Pupil detection is then performed based on the second eye image to obtain the pupil center position, and finally the eyeball gaze point position is determined from the pupil center position, so that the gaze point can be obtained in the eyeball tracking scenario. An end-to-end sketch tying the illustrative helpers above together is given below.
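For orientation only, the following sketch strings together the illustrative helpers introduced earlier in this description (match_by_pose, estimate_occluded_mask, repair_by_full_replacement, detect_pupil_center); all of those names are assumptions of this rewrite, not functions defined by the application, and the occlusion decision is passed in as a boolean:

```python
def pupil_detection_pipeline(first_eye_image, occluded, first_angle,
                             first_distance, templates):
    """End-to-end sketch of the flow in fig. 3B using the helpers above."""
    if not occluded:
        # No repair needed; detect the pupil center directly.
        return detect_pupil_center(first_eye_image)

    # Find the template closest to the first eye image.
    template = match_by_pose(first_angle, first_distance, templates,
                             preset_match_value=1.0)
    if template is None:
        return None  # no template matched well enough

    # Repair the occluded region with the template pupil region.
    occluded_mask = estimate_occluded_mask(first_eye_image, template.image)
    second_eye_image = repair_by_full_replacement(
        first_eye_image, template.image, occluded_mask,
        template_pupil_mask=occluded_mask)

    # Detect the pupil center on the repaired (second) eye image.
    return detect_pupil_center(second_eye_image)
```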
It can be seen that, in the embodiment of the present application, a first eye image is obtained; if the first eye image includes an occluded region, a target human eye image template that is successfully matched with the first eye image is determined in a preset human eye image template library; if the occluded region covers the whole pupil region, each template pixel point of the template pupil region replaces the corresponding region pixel point in the occluded region to obtain a second eye image; if the occluded region covers only part of the pupil region, a plurality of first region pixel points whose noise in the occluded region is greater than a preset threshold are determined, a plurality of first template pixel points in the template pupil region corresponding one to one to the first region pixel points are determined, and the first region pixel points are repaired according to the first template pixel points to obtain the second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil is occluded in the collected first eye image, the occluded first eye image is repaired by means of the human eye image template, which solves the problem of the pupil being occluded and improves the efficiency and accuracy of pupil detection.
Therefore, with this scheme, the position of the pupil of a human eye can be detected under different conditions, so that the efficiency of pupil detection can be improved and the limitations of pupil detection in the prior art can be overcome.
The following describes an apparatus for implementing the above pupil detection method:
in accordance with the above, please refer to fig. 4, where fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, the electronic device includes: a processor 410, a communication interface 430, and a memory 420; and one or more programs 421, the one or more programs 421 stored in the memory 420 and configured to be executed by the processor, the programs 421 including instructions for:
acquiring a first eye image;
if the first eye image comprises a shielded area, determining a target eye image template which is successfully matched with the first eye image in a preset eye image template library, wherein the shielded area at least comprises a partial pupil area;
repairing the first eye image according to the target eye image template to obtain a second eye image;
and carrying out pupil detection based on the second eye image to obtain the pupil center position.
In one possible example, in determining a target eye image template in the preset eye image template library that is successfully matched with the first eye image, the program 421 includes instructions for:
matching the first eye image with a plurality of eye image templates in the preset eye image template library to obtain a plurality of matching values;
if a target matching value larger than a preset matching value exists in the plurality of matching values, determining that the human eye image template corresponding to the target matching value is successfully matched with the first eye image, and determining the human eye image template corresponding to the target matching value as a target human eye image template.
In one possible example, in terms of the repairing the first eye image according to the target human eye image template to obtain a second eye image, the program 421 includes instructions for performing the following steps:
determining the occluded region;
determining a template pupil area corresponding to the shielded area in the target human eye image template;
and repairing the shielded region according to the template pupil region to obtain the second eye image.
In one possible example, in terms of the repairing the occluded region according to the template pupil region, resulting in the second eye image, the program 421 includes instructions for:
and replacing the area pixel point corresponding to the template pixel point in the shielded area by each template pixel point of the template pupil area to obtain the second eye image.
In one possible example, if the occluded region includes an occluded partial pupil region, the program 421 includes instructions for performing the following steps in terms of repairing the occluded region according to the template pupil region to obtain the second eye image:
determining a plurality of first area pixel points of which the noise is greater than a preset threshold value in the shielded area;
determining a plurality of first template pixel points which correspond to the plurality of first area pixel points in the template pupil area one to one;
and repairing the first area pixel points according to the first template pixel points to obtain the second eye image.
In one possible example, in the repairing the plurality of first region pixel points according to the plurality of first template pixel points to obtain the second eye image, the program 421 includes instructions for:
determining a plurality of pupil region pixel points which are not shielded in the first eye image;
determining a weighting coefficient according to the plurality of pupil area pixel points;
determining a plurality of target template pixel points according to the weighting coefficients and the plurality of first template pixel points, wherein the plurality of target template pixel points are in one-to-one correspondence with the plurality of first area pixel points;
and replacing each target template pixel point in the plurality of target template pixel points with a first region pixel point corresponding to the target template pixel point in the shielded region to obtain the second eye image.
In one possible example, the program 421 further includes instructions for performing the steps of:
performing feature extraction on the first eye image to obtain a plurality of eye image features, wherein the eye image features comprise a plurality of eye contour features;
predicting a pupil reference region according to the eye contour features;
and matching partial eye image features of the eye image features in the pupil reference region with preset pupil image features to obtain a matching result, and determining that the first eye image comprises a shielded region if the matching result does not meet a preset condition.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a pupil detection apparatus 500 provided in this embodiment, where the pupil detection apparatus 500 is applied to an electronic device, the electronic device includes an acceleration sensor and a camera, the apparatus 500 includes an obtaining unit 501, a determining unit 502, a repairing unit 503, and a detecting unit 504, where,
the acquiring unit 501 is configured to acquire a first eye image;
the determining unit 502 is configured to determine a target human eye image template successfully matched with the first eye image in a preset human eye image template library if the first eye image includes an occluded region, where the occluded region at least includes a partial pupil region;
the repairing unit 503 is configured to repair the first eye image according to the target eye image template to obtain a second eye image;
the detecting unit 504 is configured to perform pupil detection based on the second eye image to obtain a pupil center position.
Optionally, in terms of determining a target human eye image template successfully matched with the first eye image in the preset human eye image template library, the determining unit 502 is specifically configured to:
matching the first eye image with a plurality of eye image templates in the preset eye image template library to obtain a plurality of matching values;
if a target matching value larger than a preset matching value exists in the plurality of matching values, determining that the human eye image template corresponding to the target matching value is successfully matched with the first eye image, and determining the human eye image template corresponding to the target matching value as a target human eye image template.
Optionally, in terms of obtaining a second eye image by repairing the first eye image according to the target eye image template, the repairing unit 503 is specifically configured to:
determining the occluded region;
determining a template pupil area corresponding to the shielded area in the target human eye image template;
and repairing the shielded region according to the template pupil region to obtain the second eye image.
Optionally, in terms of the repairing the blocked region according to the template pupil region to obtain the second eye image, the repairing unit 503 is specifically configured to:
and replacing, with each template pixel point of the template pupil area, the area pixel point corresponding to that template pixel point in the shielded area, to obtain the second eye image.
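A minimal sketch of this direct replacement, assuming the target human eye image template has already been registered to the captured image's pixel grid and that the shielded area and the template pupil area are given as boolean NumPy masks (alignment and mask extraction are outside this snippet):

```python
def repair_by_replacement(eye_img, template_img, occluded_mask, template_pupil_mask):
    """Each template pixel point of the template pupil area overwrites the
    corresponding area pixel point of the shielded area."""
    repaired = eye_img.copy()
    # Template pupil pixels that fall inside the shielded area.
    replace = occluded_mask & template_pupil_mask
    repaired[replace] = template_img[replace]
    return repaired
```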
Optionally, if the blocked region includes a blocked partial pupil region, in terms of repairing the blocked region according to the template pupil region to obtain the second eye image, the repairing unit 503 is specifically configured to:
determining a plurality of first area pixel points of which the noise is greater than a preset threshold value in the shielded area;
determining a plurality of first template pixel points which correspond to the plurality of first area pixel points in the template pupil area one to one;
and repairing the first area pixel points according to the first template pixel points to obtain the second eye image.
Optionally, in terms of repairing the plurality of first area pixel points according to the plurality of first template pixel points to obtain the second eye image, the repairing unit 503 is specifically configured to:
determining a plurality of pupil region pixel points which are not shielded in the first eye image;
determining a weighting coefficient according to the plurality of pupil area pixel points;
determining a plurality of target template pixel points according to the weighting coefficients and the plurality of first template pixel points, wherein the plurality of target template pixel points are in one-to-one correspondence with the plurality of first area pixel points;
and replace, with each target template pixel point of the plurality of target template pixel points, the first region pixel point corresponding to that target template pixel point in the shielded region, so as to obtain the second eye image.
Optionally, the detecting unit 504 is further configured to: perform feature extraction on the first eye image to obtain a plurality of eye image features, where the plurality of eye image features include a plurality of eye contour features; predict a pupil reference region according to the eye contour features; match the eye image features, of the plurality of eye image features, that are located in the pupil reference region with preset pupil image features to obtain a matching result; and determine that the first eye image comprises a shielded region if the matching result does not meet a preset condition.
It can be seen that the pupil detection apparatus described in the embodiment of the present application acquires a first eye image; if the first eye image includes a shielded region, determines a target human eye image template that is successfully matched with the first eye image in a preset human eye image template library; repairs the first eye image according to the target human eye image template to obtain a second eye image; and performs pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil is shielded in the acquired first eye image, the shielded first eye image can be repaired by means of the human eye image template, which solves the problem of the pupil being shielded and improves the efficiency and accuracy of pupil detection.
It can be understood that the functions of each program module of the pupil detection apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, where the computer includes the electronic device.
Embodiments of the present application also provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any of the methods described in the above method embodiments. The computer program product may be a software installation package, and the computer includes the electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or a combination of acts; however, those skilled in the art will recognize that the present application is not limited by the order of the acts described, as some steps may be performed in other orders or concurrently according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division of the units is only a division of logical functions, and other divisions may be adopted in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, where the program may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only provided to help understand the method and the core concept of the present application. Meanwhile, a person skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (10)
1. A pupil detection method, the method comprising:
acquiring a first eye image;
if the first eye image comprises a shielded area, determining a target eye image template which is successfully matched with the first eye image in a preset eye image template library, wherein the shielded area at least comprises a partial pupil area;
repairing the first eye image according to the target eye image template to obtain a second eye image;
and carrying out pupil detection based on the second eye image to obtain the pupil center position.
2. The method of claim 1, wherein determining a target eye image template in a preset eye image template library that is successfully matched with the first eye image comprises:
matching the first eye image with a plurality of eye image templates in the preset eye image template library to obtain a plurality of matching values;
if a target matching value larger than a preset matching value exists in the plurality of matching values, determining that the human eye image template corresponding to the target matching value is successfully matched with the first eye image, and determining the human eye image template corresponding to the target matching value as a target human eye image template.
3. The method according to claim 1 or 2, wherein the repairing the first eye image according to the target human eye image template to obtain a second eye image comprises:
determining the occluded region;
determining a template pupil area corresponding to the shielded area in the target human eye image template;
and repairing the shielded region according to the template pupil region to obtain the second eye image.
4. The method of claim 3, wherein said repairing the occluded region from the template pupil region to obtain the second eye image comprises:
and replacing the area pixel point corresponding to the template pixel point in the shielded area by each template pixel point of the template pupil area to obtain the second eye image.
5. The method according to claim 3, wherein if the occluded region includes an occluded partial pupil region, the repairing the occluded region according to the template pupil region to obtain the second eye image comprises:
determining a plurality of first area pixel points of which the noise is greater than a preset threshold value in the shielded area;
determining a plurality of first template pixel points which correspond to the plurality of first area pixel points in the template pupil area one to one;
and repairing the first area pixel points according to the first template pixel points to obtain the second eye image.
6. The method of claim 5, wherein the repairing the plurality of first area pixel points according to the plurality of first template pixel points to obtain the second eye image comprises:
determining a plurality of pupil region pixel points which are not shielded in the first eye image;
determining a weighting coefficient according to the plurality of pupil area pixel points;
determining a plurality of target template pixel points according to the weighting coefficients and the plurality of first template pixel points, wherein the plurality of target template pixel points are in one-to-one correspondence with the plurality of first area pixel points;
and replacing, with each target template pixel point of the plurality of target template pixel points, the first region pixel point corresponding to that target template pixel point in the shielded region, so as to obtain the second eye image.
7. The method according to any one of claims 1-6, further comprising:
performing feature extraction on the first eye image to obtain a plurality of eye image features, wherein the eye image features comprise a plurality of eye contour features;
predicting a pupil reference region according to the eye contour features;
and matching the eye image features, of the plurality of eye image features, that are located in the pupil reference region with preset pupil image features to obtain a matching result, and determining that the first eye image comprises a shielded region if the matching result does not meet a preset condition.
8. A pupil detection device, characterized in that the device comprises:
an acquisition unit configured to acquire a first eye image;
a determining unit, configured to determine a target human eye image template successfully matched with the first eye image in a preset human eye image template library if the first eye image includes an occluded region, where the occluded region at least includes a partial pupil region;
the restoration unit is used for restoring the first eye image according to the target eye image template to obtain a second eye image;
and the detection unit is used for carrying out pupil detection based on the second eye image to obtain the pupil center position.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store one or more programs, the one or more programs are configured to be executed by the processor, and the programs comprise instructions for performing the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911283540.4A CN112989878A (en) | 2019-12-13 | 2019-12-13 | Pupil detection method and related product |
PCT/CN2020/130329 WO2021115097A1 (en) | 2019-12-13 | 2020-11-20 | Pupil detection method and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911283540.4A CN112989878A (en) | 2019-12-13 | 2019-12-13 | Pupil detection method and related product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112989878A true CN112989878A (en) | 2021-06-18 |
Family
ID=76329486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911283540.4A Pending CN112989878A (en) | 2019-12-13 | 2019-12-13 | Pupil detection method and related product |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112989878A (en) |
WO (1) | WO2021115097A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114244884A (en) * | 2021-12-21 | 2022-03-25 | 北京蔚领时代科技有限公司 | Eyeball tracking-based video coding method applied to cloud game |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030042B (en) * | 2023-02-24 | 2023-06-16 | 智慧眼科技股份有限公司 | Diagnostic device, method, equipment and storage medium for doctor's diagnosis |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100950138B1 (en) * | 2009-08-17 | 2010-03-30 | 퍼스텍주식회사 | A method for detecting the pupils in a face image |
CN103136512A (en) * | 2013-02-04 | 2013-06-05 | 重庆市科学技术研究院 | Pupil positioning method and system |
WO2015148198A1 (en) * | 2014-03-28 | 2015-10-01 | Intel Corporation | Computational array camera with dynamic illumination for eye tracking |
CN105335720A (en) * | 2015-10-28 | 2016-02-17 | 广东欧珀移动通信有限公司 | Iris information acquisition method and acquisition system |
CN108319953A (en) * | 2017-07-27 | 2018-07-24 | 腾讯科技(深圳)有限公司 | Occlusion detection method and device, electronic equipment and the storage medium of target object |
WO2019080061A1 (en) * | 2017-10-26 | 2019-05-02 | 深圳市柔宇科技有限公司 | Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor |
US20190347824A1 (en) * | 2018-05-14 | 2019-11-14 | Beijing Boe Optoelectronics Technology Co., Ltd. | Method and apparatus for positioning pupil, storage medium, electronic device |
CN110472521A (en) * | 2019-07-25 | 2019-11-19 | 中山市奥珀金属制品有限公司 | A kind of Pupil diameter calibration method and system |
JP2023054581A (en) * | 2021-10-04 | 2023-04-14 | 京セラドキュメントソリューションズ株式会社 | Display device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006048328A (en) * | 2004-08-04 | 2006-02-16 | Konica Minolta Holdings Inc | Apparatus and method for detecting face |
CN110472546B (en) * | 2019-08-07 | 2024-01-12 | 南京大学 | Infant non-contact eye movement feature extraction device and method |
- 2019-12-13: CN CN201911283540.4A patent/CN112989878A/en active Pending
- 2020-11-20: WO PCT/CN2020/130329 patent/WO2021115097A1/en active Application Filing
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100950138B1 (en) * | 2009-08-17 | 2010-03-30 | 퍼스텍주식회사 | A method for detecting the pupils in a face image |
CN103136512A (en) * | 2013-02-04 | 2013-06-05 | 重庆市科学技术研究院 | Pupil positioning method and system |
WO2015148198A1 (en) * | 2014-03-28 | 2015-10-01 | Intel Corporation | Computational array camera with dynamic illumination for eye tracking |
CN105335720A (en) * | 2015-10-28 | 2016-02-17 | 广东欧珀移动通信有限公司 | Iris information acquisition method and acquisition system |
CN108319953A (en) * | 2017-07-27 | 2018-07-24 | 腾讯科技(深圳)有限公司 | Occlusion detection method and device, electronic equipment and the storage medium of target object |
WO2019019828A1 (en) * | 2017-07-27 | 2019-01-31 | 腾讯科技(深圳)有限公司 | Target object occlusion detection method and apparatus, electronic device and storage medium |
WO2019080061A1 (en) * | 2017-10-26 | 2019-05-02 | 深圳市柔宇科技有限公司 | Camera device-based occlusion detection and repair device, and occlusion detection and repair method therefor |
US20190347824A1 (en) * | 2018-05-14 | 2019-11-14 | Beijing Boe Optoelectronics Technology Co., Ltd. | Method and apparatus for positioning pupil, storage medium, electronic device |
CN110472521A (en) * | 2019-07-25 | 2019-11-19 | 中山市奥珀金属制品有限公司 | A kind of Pupil diameter calibration method and system |
JP2023054581A (en) * | 2021-10-04 | 2023-04-14 | 京セラドキュメントソリューションズ株式会社 | Display device |
Non-Patent Citations (1)
Title |
---|
GAO YUAN; LIANG ZHENGYOU: "Research on pupil localization based on fusing the star-ray method and the ellipse fitting method", Electronics World, no. 07, 8 April 2017 (2017-04-08) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114244884A (en) * | 2021-12-21 | 2022-03-25 | 北京蔚领时代科技有限公司 | Eyeball tracking-based video coding method applied to cloud game |
CN114244884B (en) * | 2021-12-21 | 2024-01-30 | 北京蔚领时代科技有限公司 | Video coding method applied to cloud game and based on eye tracking |
Also Published As
Publication number | Publication date |
---|---|
WO2021115097A1 (en) | 2021-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109241859B (en) | Fingerprint identification method and related product | |
CN110020622B (en) | Fingerprint identification method and related product | |
CN109614865B (en) | Fingerprint identification method and related product | |
CN109558804B (en) | Fingerprint acquisition method and related product | |
CN110245607B (en) | Eyeball tracking method and related product | |
CN110933312B (en) | Photographing control method and related product | |
CN111399658B (en) | Calibration method and device for eyeball fixation point, electronic equipment and storage medium | |
CN108833779B (en) | Shooting control method and related product | |
CN111338725A (en) | Interface layout method and related product | |
CN110188666B (en) | Vein collection method and related products | |
CN111445413A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110475020B (en) | Equipment control method and related product | |
CN108848317A (en) | Camera control method and Related product | |
CN110162264B (en) | Application processing method and related product | |
CN113360005B (en) | Color cast adjusting method and related product | |
CN110796673B (en) | Image segmentation method and related product | |
CN110796147B (en) | Image segmentation method and related product | |
CN110198421B (en) | Video processing method and related product | |
CN110221696B (en) | Eyeball tracking method and related product | |
CN114077465A (en) | UI (user interface) rendering method and device, electronic equipment and storage medium | |
CN110363702B (en) | Image processing method and related product | |
CN113282317B (en) | Optical fingerprint parameter upgrading method and related product | |
WO2021115097A1 (en) | Pupil detection method and related product | |
CN111427644B (en) | Target behavior identification method and electronic equipment | |
CN110633685B (en) | Human eye detection method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||