WO2021115097A1 - Pupil detection method and related products - Google Patents

Pupil detection method and related products

Info

Publication number
WO2021115097A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye image
template
pupil
area
human eye
Prior art date
Application number
PCT/CN2020/130329
Other languages
English (en)
French (fr)
Inventor
Wang Wendong (王文东)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021115097A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Definitions

  • This application relates to the technical field of pupil detection, in particular to a pupil detection method and related products.
  • Electronic devices can implement pupil detection and apply it in actual interactive scenarios.
  • For example, pupil detection can be performed by the electronic device to achieve eye tracking, determine the point on the screen that the user is paying attention to, and proceed to the next operation according to that point of attention.
  • However, current eye-tracking systems recognize the pupil position from a single captured frame. In the actual process of collecting eye images, when the user's eyelids droop, the user looks down, or the user wears glasses, the pupil may be blocked in the eye image collected at certain angles, making it impossible to identify the pupil center and degrading the efficiency and accuracy of the eye-tracking system.
  • the embodiments of the present application provide a pupil detection method and related products, which can solve the problem of the pupil being blocked and improve the efficiency and accuracy of pupil detection.
  • an embodiment of the present application provides a pupil detection method.
  • the method includes the following steps: acquiring a first eye image; if the first eye image includes an occluded area, determining, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, wherein the occluded area includes at least part of the pupil area;
  • an embodiment of the present application provides a pupil detection device, the device including:
  • a determining unit, configured to determine, if the first eye image includes an occluded area, a target human eye image template that successfully matches the first eye image in a preset human eye image template library, wherein the occluded area includes at least part of the pupil area;
  • a restoration unit, configured to repair the first eye image according to the target human eye image template to obtain a second eye image; and
  • a detection unit, configured to perform pupil detection based on the second eye image to obtain the pupil center position.
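As a hedged sketch (not the patent's implementation), the claimed steps and the device units above can be wired together as follows; every callable argument is a hypothetical placeholder for the concrete procedures described later in the text:

```python
# Hypothetical end-to-end flow; every callable argument is a placeholder
# standing in for the concrete procedures described later.
def detect_pupil(first_eye_image, template_library,
                 is_occluded, match_template, repair, locate_pupil):
    if is_occluded(first_eye_image):
        # Occluded: find the best-matching template and repair the image.
        template = match_template(first_eye_image, template_library)
        second_eye_image = repair(first_eye_image, template)
    else:
        # Unoccluded images can be used as-is.
        second_eye_image = first_eye_image
    return locate_pupil(second_eye_image)
```

The concrete occlusion test, template matching, and repair strategies are the subject of the sections that follow.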
  • an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor;
  • the above programs include instructions for executing the steps in the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute some or all of the steps described in the first aspect.
  • the embodiments of the present application provide a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of an eye-tracking application running in an electronic device in an eye-tracking scenario provided by an embodiment of the present application;
  • FIG. 1C is a schematic flowchart of a pupil detection method provided by an embodiment of the present application.
  • FIG. 1D is a schematic diagram of an image shooting scene when pupil detection is performed through a camera provided by an embodiment of the present application.
  • FIG. 1E is a schematic diagram of a first eye image in which the pupil area is completely occluded according to an embodiment of the present application
  • FIG. 1F is a schematic diagram of a first eye image in which the pupil area is partially occluded according to an embodiment of the present application
  • FIG. 1G is a schematic diagram of using each template pixel point of the template pupil area to replace the area pixel point corresponding to the template pixel point in the occluded area according to an embodiment of the present application;
  • FIG. 1H is a schematic diagram of a demonstration of repairing a plurality of pixels in a first area according to a plurality of pixels in a first template according to an embodiment of the present application;
  • FIG. 1I is another schematic diagram of repairing a plurality of pixels in a first region according to a plurality of pixels in a first template according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of another pupil detection method provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of a process for determining a gaze point in an eye tracking scenario provided by an embodiment of the present application
  • FIG. 3B is a schematic flowchart of pupil detection in an eye tracking scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of a pupil detection device provided by an embodiment of the present application.
  • the electronic devices involved in the embodiments of this application may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices (smart watches, smart bracelets, wireless headsets, augmented reality/virtual reality devices, smart glasses), Computing equipment or other processing equipment connected to a wireless modem, as well as various forms of user equipment (UE), mobile station (MS), terminal equipment (terminal device), and so on.
  • FIG. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
  • the electronic device 100 includes a storage and processing circuit 110, and a sensor 170 connected to the storage and processing circuit 110, wherein:
  • the electronic device 100 may include a control circuit, and the control circuit may include a storage and processing circuit 110.
  • the storage and processing circuit 110 may include memory, such as hard disk drive memory, non-volatile memory (such as flash memory or other electronically programmable read-only memory used to form a solid-state drive), and volatile memory (such as static or dynamic random-access memory), which is not limited in the embodiments of the present application.
  • the processing circuit in the storage and processing circuit 110 may be used to control the operation of the electronic device 100.
  • the processing circuit can be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, etc.
  • the storage and processing circuit 110 can be used to run software in the electronic device 100, such as Internet browsing applications, Voice over Internet Protocol (VOIP) telephone call applications, email applications, media playback applications, operating system functions, and so on. This software can be used to perform control operations such as camera-based image capture, ambient light measurement based on ambient light sensors, proximity measurement based on proximity sensors, information display based on status indicators such as light-emitting diode status indicators, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, which are not limited in the embodiments of the present application.
  • the electronic device 100 may include an input-output circuit 150.
  • the input-output circuit 150 can be used to enable the electronic device 100 to input and output data, that is, to allow the electronic device 100 to receive data from external devices and to output data to external devices.
  • the input-output circuit 150 may further include a sensor 170.
  • the sensor 170 may include an ultrasonic fingerprint recognition module, an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (for example, a light-based and/or capacitive touch sensor, which may be part of the display screen or used independently as a touch sensor structure), an acceleration sensor, and other sensors.
  • the ultrasonic fingerprint recognition module can be integrated under the screen, or installed on the side or the back of the electronic device, which is not limited here.
  • the ultrasonic fingerprint identification module can be used to collect fingerprint images.
  • the sensor 170 may include an infrared (IR) camera or an RGB camera.
  • When the IR camera shoots, the pupil reflects infrared light, so the IR camera captures pupil images more accurately than the RGB camera. The RGB camera requires more subsequent pupil detection computation; its precision, accuracy, and versatility are higher than those of the IR camera, but its computational load is large.
  • the input-output circuit 150 may also include one or more display screens, such as the display screen 130.
  • the display screen 130 may include one or a combination of a liquid crystal display screen, an organic light emitting diode display screen, an electronic ink display screen, a plasma display screen, and a display screen using other display technologies.
  • the display screen 130 may include a touch sensor array (ie, the display screen 130 may be a touch display screen).
  • the touch sensor can be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (such as indium tin oxide (ITO) electrodes), or a touch sensor formed using other touch technologies, such as acoustic touch, pressure-sensitive touch, resistive touch, optical touch, etc., which are not limited in the embodiments of the present application.
  • the electronic device 100 may also include an audio component 140.
  • the audio component 140 may be used to provide audio input and output functions for the electronic device 100.
  • the audio component 140 in the electronic device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sounds.
  • the communication circuit 120 may be used to provide the electronic device 100 with the ability to communicate with external devices.
  • the communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals.
  • the wireless communication circuit in the communication circuit 120 may include a radio frequency transceiver circuit, a power amplifier circuit, a low noise amplifier, a switch, a filter, and an antenna.
  • the wireless communication circuit in the communication circuit 120 may include a circuit for supporting Near Field Communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals.
  • the communication circuit 120 may include a near field communication antenna and a near field communication transceiver.
  • the communication circuit 120 may also include a cellular phone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so on.
  • the electronic device 100 may further include a battery, a power management circuit, and other input-output units 160.
  • the input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators.
  • the user can input commands through the input-output circuit 150 to control the operation of the electronic device 100, and can use the output data of the input-output circuit 150 to realize receiving status information and other outputs from the electronic device 100.
  • FIG. 1B is a schematic diagram of running an eye-tracking application in an electronic device in an eye-tracking scenario provided by an embodiment of the application, where the memory in the storage and processing circuit 110 can store multiple application programs, such as e-books, browsers, payment applications, and system applications.
  • One or more microprocessors in the storage and processing circuit 110 can also be used to run installed application programs.
  • For example, an unlocking function can be realized, and the function of tracking the gaze point of the user's eyeball can be realized; the latter requires eye tracking.
  • The eyeball gaze point is the position, on the plane where the electronic device is located, at which the user's eyeball is gazing.
  • The eye-tracking software development kit (SDK) interface is provided by the electronic device for eye-tracking applications and is responsible for providing the eye-tracking application an application programming interface (API) for obtaining gaze points and for input.
  • The eye-tracking service is responsible for managing the gaze-point algorithm, gaze-point post-processing, input processing, authentication, and parameter settings for eye tracking.
  • the core algorithm may include a calibration algorithm and an estimated gaze point algorithm.
  • the eye tracking strategy is related to the post-processing of the gaze point algorithm.
  • the eye tracking service can also call the camera application through the camera's Native Development Kit (NDK) interface.
  • the camera application will call the camera and collect the first eye image through the camera.
  • FIG. 1C is a schematic flowchart of a pupil detection method provided by an embodiment of the present application, which is applied to the electronic device shown in FIG. 1A.
  • the pupil detection method provided by the present application includes:
  • the first eye image is an image containing the user's eye parts.
  • the first eye image may be captured by a camera set on the electronic device.
  • the captured first eye image can be used for eye tracking.
  • the electronic device can collect the first eye image through the camera for pupil detection during eye tracking, so as to detect the pupil center position in the first eye image;
  • the gaze point at which the user pays attention to the screen of the electronic device is further determined according to the pupil center position, and the operation of the electronic device is controlled according to the gaze point.
  • FIG. 1D is a schematic diagram of a scene in which the first eye image is captured by a camera according to an embodiment of the application.
  • In practice, the pupil may be blocked in the first eye image collected at some angles. If the pupil area in the first eye image is blocked, the pupil center position cannot be obtained directly by performing pupil detection on the first eye image, so the first eye image needs to be repaired.
  • the first eye image includes an occluded region;
  • the occluded area includes at least a part of the pupil area, which may specifically include the following situations: the pupil area in the first eye image is completely occluded but the other eye areas are not completely occluded; or a part of the pupil area in the first eye image is blocked, another part of the pupil area is not blocked, and the other eye areas are not completely blocked.
  • FIG. 1E is a schematic diagram of a first eye image in which the pupil area is completely occluded according to an embodiment of the application.
  • FIG. 1F is a partial pupil area provided by an embodiment of the application. Demonstration diagram of the occluded first eye image.
  • the electronic device can preset a human eye image template library.
  • The human eye image template library can include multiple eye image templates collected at different angles and different distances, with the eye area of each template unoccluded. Therefore, if the first eye image is a partially occluded eye image, a target human eye image template that successfully matches the first eye image can be determined in the preset human eye image template library.
  • determining the target human eye image template successfully matched with the first eye image in the preset human eye image template library may include the following steps:
  • The human eye image template corresponding to the target matching value is successfully matched with the first eye image, and that template is determined as the target human eye image template.
  • When matching the first eye image with the multiple human eye image templates in the preset library, the first shooting angle and the first shooting distance of the first eye image taken by the camera can be acquired first.
  • The first shooting angle is matched with the shooting angle corresponding to each of the multiple eye image templates to obtain a first matching score.
  • The first shooting distance is matched with the shooting distance corresponding to each of the multiple eye image templates to obtain a second matching score; the sum of the first and second matching scores is then the matching value corresponding to that human eye image template.
  • Matching each eye image template with the first eye image thus yields multiple matching values, one per template.
  • Among the multiple matching values, the target human eye image template corresponding to a target matching value greater than the preset matching value can then be determined. In this way, the target human eye image template whose shooting angle is closest to the first shooting angle and whose shooting distance is closest to the first shooting distance can be found.
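The angle-and-distance matching described above could be sketched as follows; the score formula (reciprocal of the parameter gap) and the `min_score` threshold are illustrative assumptions, since the patent does not fix a formula:

```python
# Illustrative matching by capture parameters. Each template carries the
# shooting angle (degrees) and distance (cm) at which it was collected;
# the score formula and min_score threshold are assumptions.
def angle_distance_match(angle, distance, templates, min_score=1.5):
    best, best_value = None, min_score
    for tpl in templates:
        angle_score = 1.0 / (1.0 + abs(angle - tpl["angle"]))           # first matching score
        distance_score = 1.0 / (1.0 + abs(distance - tpl["distance"]))  # second matching score
        value = angle_score + distance_score  # matching value = sum of both scores
        if value > best_value:
            best, best_value = tpl, value
    return best  # None if no template exceeds the preset matching value
```

A template whose angle and distance are both close to the capture parameters scores near 2.0; mismatched templates fall below the threshold and are rejected.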
  • Alternatively, when matching the first eye image with the multiple human eye image templates in the preset library, feature extraction can be performed on the first eye image to obtain a first feature set.
  • The first feature set is then matched with the eye feature set corresponding to each of the multiple eye image templates to obtain the matching value corresponding to that template, so that matching the multiple templates with the first eye image yields multiple matching values, one per template.
  • Finally, the target human eye image template corresponding to a target matching value greater than the preset matching value can be determined among the multiple matching values; in this way, the target template whose human eye features are closest to the first feature set is obtained.
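The feature-based variant might look like the following sketch, assuming cosine similarity as the (unspecified) feature-matching measure and a hypothetical `min_value` threshold:

```python
import math

# Sketch of the feature-based matching path. The similarity measure
# (cosine) and the min_value threshold are assumptions.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def match_by_features(eye_features, templates, min_value=0.9):
    # templates: list of (template_id, feature_vector) pairs.
    tid, value = max(((t, cosine(eye_features, f)) for t, f in templates),
                     key=lambda pair: pair[1])
    return tid if value > min_value else None
```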
  • The pupil reference area can be predicted based on the multiple eye contour features, and the eye image features, among the multiple eye image features, that fall in the pupil reference area are matched with preset pupil image features.
  • The electronic device can store the preset pupil image features in advance, so that the eye image features of the predicted pupil reference area can be matched with the preset pupil image features to obtain a matching result.
  • Suppose the eye image features of the predicted pupil reference area include m eye image features, m being an integer greater than 1. Each of the m eye image features can be matched with the preset pupil image features to obtain m feature matching values, which are taken as the matching result. If n of the m feature matching values are less than or equal to the preset matching value, where n is a positive integer less than m, and the ratio of n to m is greater than the preset ratio, it is determined that the first eye image includes an occluded area; furthermore, it can be determined that the first eye image is a partially occluded eye image.
  • Since the target human eye image template is the human eye image template closest to the first eye image, the first eye image can be repaired according to the target human eye image template, so that a second eye image including a complete pupil area can be obtained.
  • repairing the first eye image according to the target human eye image template to obtain a second eye image may include the following steps:
  • Image analysis can be performed on the first eye image to determine, as the occluded area, the area corresponding to the set of pixels in the first eye image that is inconsistent with the preset eye pixels.
  • The preset eye pixels can be preset pupil-area pixels; alternatively, the area corresponding to the set of pixels in the first eye image that is inconsistent with the pixel information of the target human eye image template may be determined as the occluded area.
  • the template pupil area in the target human eye image template is determined, and pupil feature extraction can be performed on the target human eye image template to obtain pupil feature points.
  • An area of a preset size around the pupil feature points is determined as the template pupil area.
  • The occluded area is repaired according to the template pupil area to obtain the second eye image: the pixels of the template pupil area can replace the corresponding pixels in the first eye image to obtain the second eye image.
  • repairing the occluded area according to the template pupil area to obtain the second eye image may include the following steps:
  • Figure 1G is a schematic diagram of using each template pixel of the template pupil area to replace the area pixel corresponding to that template pixel in the occluded area.
  • When the occluded area includes the entire occluded pupil area, or when the occluded area includes a partially occluded pupil area, each template pixel in the template pupil area can be used to replace the area pixel corresponding to it in the occluded area.
  • Thus, the template pupil area in the target human eye image template can be directly used as the entire repaired pupil area in the first eye image to obtain the second eye image.
  • Alternatively, repairing the occluded area according to the template pupil area to obtain the second eye image may include the following steps:
  • FIG. 1H is a schematic diagram of repairing multiple first-area pixels according to multiple first-template pixels, provided by an embodiment of the application. To repair the partially occluded pupil area in the first eye image, the multiple first-area pixels whose noise in the occluded area is greater than a preset threshold are determined and taken as the pixels to be repaired; then the multiple first-template pixels in the template pupil area that correspond one-to-one to the multiple first-area pixels are determined; finally, the multiple first-area pixels are repaired according to the multiple first-template pixels to obtain the second eye image.
  • Similarly, the second-area pixel set is repaired according to the second-template pixel set:
  • each template pixel in the second-template pixel set can replace the corresponding pixel in the second-area pixel set to obtain the second eye image.
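A minimal sketch of this noise-driven partial repair, for a single pixel row, follows; the "noise" measure (absolute deviation from the row median) is an assumption, as the patent does not define one:

```python
# Sketch of the partial-repair path for one pixel row. "Noise" is
# approximated as absolute deviation from the row median (an assumption).
def partial_repair(row, template_row, noise_threshold=60):
    median = sorted(row)[len(row) // 2]
    # Pixels deviating strongly from the median are treated as the
    # first-area pixels to repair; template pixels replace them one-to-one.
    return [t if abs(p - median) > noise_threshold else p
            for p, t in zip(row, template_row)]
```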
  • repairing the plurality of pixels of the first region according to the plurality of pixels of the first template to obtain the second eye image may include the following steps:
  • replacing, with each target template pixel of the multiple target template pixels, the first-area pixel corresponding to that target template pixel in the occluded area, to obtain the second eye image.
  • FIG. 1I is another schematic diagram of repairing multiple first-area pixels according to multiple first-template pixels, provided by an embodiment of this application. The gray value of each pupil-area pixel among the multiple unoccluded pupil-area pixels is determined to obtain multiple pixel values, and the average of these pixel values is computed to obtain the pixel average value.
  • The electronic device can store in advance a mapping relationship between average values and weighting coefficients, so the weighting coefficient corresponding to the pixel average value can be determined from the mapping relationship. Then, the pixel value of each first-template pixel of the multiple first-template pixels is weighted in turn by the weighting coefficient to obtain multiple target template pixels. Finally, each target template pixel of the multiple target template pixels replaces the first-area pixel corresponding to it in the occluded area to obtain the second eye image.
  • In this way, the occluded part of the pupil area can be repaired according to the unoccluded part of the pupil area, so that the first eye image is repaired more accurately to obtain the second eye image.
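The weighted variant can be sketched as follows; the mapping from pixel average to weighting coefficient is invented for illustration, since the patent only says such a mapping is stored in advance:

```python
# Sketch of the weighted repair. The mapping from the average gray value
# of the visible pupil pixels to a weighting coefficient is invented for
# illustration; the patent only states that such a mapping is prestored.
def weighted_repair(visible_pupil_values, template_values,
                    mapping=((0, 64, 0.8), (64, 128, 1.0), (128, 256, 1.2))):
    mean = sum(visible_pupil_values) / len(visible_pupil_values)
    coeff = next(c for lo, hi, c in mapping if lo <= mean < hi)
    # Weight each first-template pixel and clamp to the 8-bit range; the
    # results are the target template pixels that replace the occluded
    # first-area pixels.
    return [min(255, round(t * coeff)) for t in template_values]
```

The idea is that the visible pupil pixels set the brightness scale, so the template patch is darkened or brightened to blend with the unoccluded part before substitution.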
  • pupil detection may be performed according to the second eye image to obtain the center position of the pupil.
  • pupil feature extraction may be performed on the second eye image to obtain target pupil features, and then the pupil center position in the second eye image is determined according to the target pupil features.
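One common way to realize the pupil-center step (an assumption, not the patent's specified extractor) is to threshold the repaired image for dark pixels and take their centroid:

```python
# An assumed realization of pupil detection on the repaired image:
# threshold for dark pixels and return their centroid as the center.
def pupil_center(image, dark_threshold=50):
    points = [(x, y)
              for y, row in enumerate(image)
              for x, value in enumerate(row)
              if value <= dark_threshold]
    if not points:
        return None  # no pupil-dark pixels found
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)
```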
  • In the embodiment of the application, the first eye image is acquired; if the first eye image includes an occluded area, the target human eye image template that successfully matches the first eye image in the preset human eye image template library is determined; the first eye image is repaired according to the target human eye image template to obtain the second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. Thus, when the pupil in the collected first eye image is occluded, the occluded first eye image is repaired through the human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
  • FIG. 2 is a schematic flowchart of a pupil detection method provided by an embodiment of the application, and the method includes:
  • the first eye image includes an occluded region;
  • In the embodiment of the application, the first eye image is acquired; if the first eye image includes an occluded area, the target human eye image template that successfully matches the first eye image in the preset human eye image template library is determined; the occluded area is determined; the template pupil area corresponding to the occluded area in the target human eye image template is determined; the occluded area is repaired according to the template pupil area to obtain the second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position.
  • The human eye image template is thus used to repair the occluded first eye image, solving the problem of pupil occlusion and improving the efficiency and accuracy of pupil detection.
  • FIG. 3A is a schematic diagram of a process for determining a gaze point in an eye-tracking scene provided by an embodiment of the application
  • FIG. 3B is a schematic flowchart of pupil detection in an eye-tracking scenario provided by an embodiment of the application. The method includes:
  • When the electronic device runs an eye-tracking application, the application can request the eye-gaze position from the eye-tracking service, and the eye-tracking service can call the camera application.
  • The eye-tracking service can request the first eye image from the camera application; the camera application then turns on the camera of the electronic device and captures the first eye image through the camera, and the camera transmits the captured first eye image to the camera application.
  • The camera application transmits the first eye image to the eye-tracking service; the eye-tracking service can then call the eye-tracking core algorithm stored in the memory and, if the pupil is occluded, repair the first eye image.
  • The eye-tracking service can transmit the eye-gaze position to the eye-tracking application.
  • Feature extraction can be performed on the first eye image to obtain multiple eye image features, including multiple eye contour features; a pupil reference area is predicted from the multiple eye contour features; and the eye image features falling within the pupil reference area are matched against preset pupil image features to obtain a matching result.
  • If the matching result does not meet a preset condition, it is determined that the first eye image includes an occluded area; a target human eye image template that successfully matches the first eye image is determined in the preset human eye image template library; the occluded area in the first eye image is determined; and the template pupil area in the target human eye image template corresponding to the occluded area is determined. If the occluded area includes the entire pupil area, each template pixel of the template pupil area replaces the corresponding region pixel in the occluded area to obtain the second eye image.
  • If the occluded area includes a partially occluded pupil area, multiple first-region pixels in the occluded area whose noise exceeds a preset threshold are determined; multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels are determined; the multiple first-region pixels are repaired according to the multiple first-template pixels to obtain the second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. Finally, the eye-gaze position is determined according to the pupil center position, so that the gaze position can be determined in an eye-tracking scenario.
  • An embodiment of this application acquires a first eye image; if the first eye image includes an occluded area, determines a target human eye image template in a preset human eye image template library that successfully matches the first eye image; and, if the occluded area includes the entire pupil area, replaces each region pixel in the occluded area with the corresponding template pixel of the template pupil area to obtain the second eye image.
  • If the occluded area includes a partially occluded pupil area, multiple first-region pixels in the occluded area whose noise exceeds a preset threshold are determined; multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels are determined; the multiple first-region pixels are repaired according to the multiple first-template pixels to obtain a second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, solving the problem of pupil occlusion and improving the efficiency and accuracy of pupil detection.
  • The following describes an apparatus for implementing the above pupil detection method, specifically as follows:
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The electronic device includes a processor 410, a communication interface 430, and a memory 420, as well as one or more programs 421.
  • The one or more programs 421 are stored in the memory 420 and configured to be executed by the processor; the programs 421 include instructions for performing the following steps:
  • The first eye image includes an occluded area.
  • The program 421 includes instructions for performing the following steps:
  • If a target matching value greater than a preset matching value exists among the multiple matching values, it is determined that the human eye image template corresponding to the target matching value successfully matches the first eye image, and the human eye image template corresponding to the target matching value is determined as the target human eye image template.
  • the program 421 includes instructions for executing the following steps:
  • the program 421 includes instructions for executing the following steps:
  • Each template pixel in the template pupil area replaces the region pixel in the occluded area corresponding to that template pixel, to obtain the second eye image.
  • the program 421 includes instructions for executing the following steps:
  • The program 421 includes instructions for performing the following steps:
  • Each target template pixel among the multiple target template pixels replaces the first-region pixel in the occluded area corresponding to that target template pixel, to obtain the second eye image.
  • the program 421 further includes instructions for executing the following steps:
  • During implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in software form.
  • The steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software units in the processor.
  • The software unit may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • The storage medium is located in the memory; the processor executes the instructions in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described again here.
  • For the specific implementation steps and other implementation steps in the embodiments of this application, refer to the steps of the method embodiment shown in FIG. 1C. To avoid repetition, detailed descriptions are omitted here.
  • FIG. 5 is a schematic structural diagram of a pupil detection apparatus provided by this embodiment.
  • The pupil detection apparatus 500 is applied to an electronic device.
  • The electronic device includes an acceleration sensor and a camera.
  • The apparatus 500 includes an acquiring unit 501, a determining unit 502, a repairing unit 503, and a detecting unit 504, wherein:
  • the acquiring unit 501 is configured to acquire a first eye image
  • The determining unit 502 is configured to, if the first eye image includes an occluded area, determine a target human eye image template in a preset human eye image template library that successfully matches the first eye image, where the occluded area includes at least part of the pupil area;
  • the restoration unit 503 is configured to restore the first eye image according to the target human eye image template to obtain a second eye image
  • the detection unit 504 is configured to perform pupil detection based on the second eye image to obtain the center position of the pupil.
  • the determining unit 502 is specifically configured to:
  • If a target matching value greater than a preset matching value exists among the multiple matching values, it is determined that the human eye image template corresponding to the target matching value successfully matches the first eye image, and the human eye image template corresponding to the target matching value is determined as the target human eye image template.
  • the determining unit 502 is specifically configured to:
  • the sum of the first matching score and the second matching score is determined to obtain a matching value corresponding to the human eye image template.
  • the determining unit 502 is specifically configured to:
  • the first feature set is matched with a human eye feature set corresponding to each human eye image template in the plurality of human eye image templates to obtain a matching value corresponding to the human eye image template.
  • the repairing unit 503 is specifically configured to:
  • the repairing unit 503 is specifically configured to:
  • Each template pixel in the template pupil area replaces the region pixel in the occluded area corresponding to that template pixel, to obtain the second eye image.
  • The repairing unit 503 is specifically configured to:
  • The repairing unit 503 is specifically configured to:
  • Each target template pixel among the multiple target template pixels replaces the first-region pixel in the occluded area corresponding to that target template pixel, to obtain the second eye image.
  • The detecting unit 504 is further configured to perform feature extraction on the first eye image to obtain multiple eye image features, including multiple eye contour features; predict a pupil reference area according to the multiple eye contour features; and match the eye image features falling within the pupil reference area against preset pupil image features to obtain a matching result; if the matching result does not meet a preset condition, determine that the first eye image includes an occluded area.
  • The pupil detection apparatus described in this embodiment of the application acquires a first eye image; if the first eye image includes an occluded area, determines a target human eye image template in a preset human eye image template library that successfully matches the first eye image; repairs the first eye image according to the target human eye image template to obtain a second eye image; and performs pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil of the captured first eye image is occluded, the human eye image template is used to repair the occluded first eye image, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
  • each program module of the pupil detection device of this embodiment can be implemented according to the method in the above method embodiment, and the specific implementation process can be referred to the related description of the above method embodiment, which will not be repeated here.
  • An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any method recorded in the above method embodiments.
  • The above computer includes an electronic device.
  • the embodiments of the present application also provide a computer program product.
  • The above computer program product includes a non-transitory computer-readable storage medium storing a computer program.
  • The above computer program is operable to cause a computer to execute some or all of the steps of any method described in the above method embodiments.
  • The computer program product may be a software installation package, and the above computer includes an electronic device.
  • the disclosed device may be implemented in other ways.
  • The apparatus embodiments described above are only illustrative. For example, the division of the above units is only a logical function division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable memory.
  • The technical solution of this application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, and the computer software product is stored in a memory.
  • a number of instructions are included to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the foregoing methods of the various embodiments of the present application.
  • the aforementioned memory includes: U disk, Read-Only Memory (ROM, Read-Only Memory), Random Access Memory (RAM, Random Access Memory), mobile hard disk, magnetic disk or optical disk and other media that can store program codes.
  • The program can be stored in a computer-readable memory, and the memory may include a flash disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, or the like.

Abstract

Embodiments of this application disclose a pupil detection method and related products. The method includes the following steps: acquiring a first eye image; if the first eye image includes an occluded area, determining, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image; repairing the first eye image according to the target human eye image template to obtain a second eye image; and performing pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.

Description

Pupil detection method and related products
TECHNICAL FIELD
This application relates to the technical field of pupil detection, and in particular to a pupil detection method and related products.
BACKGROUND
With the widespread adoption of electronic devices (such as mobile phones and tablet computers), the applications that electronic devices can support keep increasing and their functions keep growing more powerful; electronic devices are developing in diversified and personalized directions and have become indispensable electronic products in users' lives.
Electronic devices can implement pupil detection technology and apply it to actual interaction scenarios of the electronic device. For example, pupil detection can be performed through the electronic device to implement eye tracking and determine the point on the screen of the electronic device that the user is looking at, so that the next operation can be performed according to that point of attention. Current eye-tracking systems identify the pupil position by capturing one frame of image, but in the actual process of capturing eye images, when the user's eyelid droops, the user looks downward, or the user wears glasses, the pupil may be occluded in eye images captured at some angles, making it impossible to identify the pupil center position and affecting the efficiency and accuracy of the eye-tracking system.
SUMMARY
Embodiments of this application provide a pupil detection method and related products, which can solve the problem of pupil occlusion and improve the efficiency and accuracy of pupil detection.
In a first aspect, an embodiment of this application provides a pupil detection method, the method including the following steps:
acquiring a first eye image;
if the first eye image includes an occluded area, determining, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area;
repairing the first eye image according to the target human eye image template to obtain a second eye image;
performing pupil detection based on the second eye image to obtain a pupil center position.
In a second aspect, an embodiment of this application provides a pupil detection apparatus, the apparatus including:
an acquiring unit configured to acquire a first eye image;
a determining unit configured to, if the first eye image includes an occluded area, determine, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area;
a repairing unit configured to repair the first eye image according to the target human eye image template to obtain a second eye image;
a detecting unit configured to perform pupil detection based on the second eye image to obtain a pupil center position.
In a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiments of this application.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in the first aspect of the embodiments of this application.
In a fifth aspect, an embodiment of this application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of this application. The computer program product may be a software installation package.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application or the prior art more clearly, the following briefly introduces the drawings required for describing the embodiments or the prior art. Obviously, the drawings in the following description are merely some embodiments of this application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of this application;
FIG. 1B is a schematic demonstration of an eye-tracking application running on an electronic device in an eye-tracking scenario provided by an embodiment of this application;
FIG. 1C is a schematic flowchart of a pupil detection method provided by an embodiment of this application;
FIG. 1D is a schematic diagram of a scenario in which an image is captured for pupil detection through a camera, provided by an embodiment of this application;
FIG. 1E is a schematic demonstration of a first eye image whose pupil area is fully occluded, provided by an embodiment of this application;
FIG. 1F is a schematic demonstration of a first eye image whose pupil area is partially occluded, provided by an embodiment of this application;
FIG. 1G is a schematic demonstration, provided by an embodiment of this application, of replacing each region pixel in the occluded area with the corresponding template pixel of the template pupil area;
FIG. 1H is a schematic demonstration, provided by an embodiment of this application, of repairing multiple first-region pixels according to multiple first-template pixels;
FIG. 1I is another schematic demonstration, provided by an embodiment of this application, of repairing multiple first-region pixels according to multiple first-template pixels;
FIG. 2 is a schematic flowchart of another pupil detection method provided by an embodiment of this application;
FIG. 3A is a schematic flowchart of determining a gaze point in an eye-tracking scenario provided by an embodiment of this application;
FIG. 3B is a schematic flowchart of pupil detection in an eye-tracking scenario provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of a pupil detection apparatus provided by an embodiment of this application.
DETAILED DESCRIPTION OF EMBODIMENTS
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
The terms "first", "second", and the like in the specification, claims, and drawings of this application are used to distinguish different objects rather than to describe a specific order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The electronic devices involved in the embodiments of this application may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices (smart watches, smart bands, wireless earphones, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
The embodiments of this application are described in detail below.
Referring to FIG. 1A, FIG. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of this application. The electronic device 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110, wherein:
The electronic device 100 may include a control circuit, which may include the storage and processing circuit 110. The storage and processing circuit 110 may include memory, such as hard-drive memory, non-volatile memory (such as flash memory or other electronically programmable read-only memory used to form a solid-state drive), and volatile memory (such as static or dynamic random access memory), which is not limited in the embodiments of this application. The processing circuitry in the storage and processing circuit 110 may be used to control the operation of the electronic device 100, and may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and so on.
The storage and processing circuit 110 may be used to run software in the electronic device 100, such as Internet browsing applications, Voice over Internet Protocol (VoIP) phone call applications, email applications, media playing applications, and operating system functions. This software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity measurement based on a proximity sensor, information display based on status indicators such as LED status lights, touch event detection based on a touch sensor, functions associated with displaying information on multiple (for example, layered) displays, operations associated with performing wireless communication, operations associated with collecting and producing audio signals, control operations associated with collecting and processing button-press event data, and other functions in the electronic device 100, which are not limited in the embodiments of this application.
The electronic device 100 may include an input-output circuit 150. The input-output circuit 150 enables the electronic device 100 to input and output data, that is, it allows the electronic device 100 to receive data from an external device and also to output data from the electronic device 100 to an external device. The input-output circuit 150 may further include the sensor 170. The sensor 170 may include an ultrasonic fingerprint recognition module, an ambient light sensor, a light- and capacitance-based proximity sensor, a touch sensor (for example, an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display or used independently as a touch sensor structure), an acceleration sensor, and other sensors. The ultrasonic fingerprint recognition module may be integrated under the screen, or arranged on the side or back of the electronic device, which is not limited here; the module may be used to collect fingerprint images.
The sensor 170 may include an infrared (IR) camera or an RGB camera. When the IR camera shoots, the pupil reflects infrared light, so the IR camera captures pupil images more accurately than an RGB camera; an RGB camera requires more subsequent pupil-detection computation and offers higher computational precision and better generality than the IR camera, but carries a larger computation load.
The input-output circuit 150 may also include one or more displays, such as the display 130. The display 130 may include one or a combination of a liquid crystal display, an organic light-emitting diode display, an electronic ink display, a plasma display, or displays using other display technologies. The display 130 may include a touch sensor array (that is, the display 130 may be a touch display). The touch sensor may be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (such as indium tin oxide (ITO) electrodes), or a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure-sensitive touch, resistive touch, or optical touch, which is not limited in the embodiments of this application.
The electronic device 100 may also include an audio component 140. The audio component 140 may be used to provide audio input and output functions for the electronic device 100, and may include a speaker, a microphone, a buzzer, a tone generator, and other components for producing and detecting sound.
The communication circuit 120 may be used to provide the electronic device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuit in the communication circuit 120 may include a radio-frequency transceiver circuit, a power amplifier circuit, a low-noise amplifier, switches, filters, and antennas. For example, the wireless communication circuit in the communication circuit 120 may include circuits supporting Near Field Communication (NFC) by transmitting and receiving near-field-coupled electromagnetic signals, for example a near-field communication antenna and a near-field communication transceiver. The communication circuit 120 may also include a cellular phone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so on.
The electronic device 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output units 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light-emitting diodes, and other status indicators.
A user can input commands through the input-output circuit 150 to control the operation of the electronic device 100, and can use the output data of the input-output circuit 150 to receive status information and other outputs from the electronic device 100.
In an eye-tracking scenario, the electronic device can run an eye-tracking application. Referring to FIG. 1B, FIG. 1B is a schematic demonstration of an eye-tracking application running on an electronic device in an eye-tracking scenario provided by an embodiment of this application. The memory in the storage and processing circuit 110 can store multiple applications, such as e-books, browsers, payment applications, and system applications, and one or more microprocessors in the storage and processing circuit 110 can run the installed applications to implement various functions, for example an unlocking function, or tracking the user's eye gaze point, which requires eye tracking. The eye gaze point is the position on the plane of the electronic device at which the user's eyes gaze. The eye-tracking software development kit (SDK) interface is the SDK interface the electronic device provides for eye-tracking applications, responsible for providing them with application programming interface (API) interfaces for obtaining the gaze point and input. The eye-tracking service is responsible for managing the gaze-point algorithm, gaze-point post-processing, input processing, authentication, and parameter setting. The eye-tracking core algorithm may include a calibration algorithm and a gaze-point estimation algorithm; the eye-tracking strategy relates to gaze-point post-processing, mainly including filtering gaze-point jitter and applications that convert gaze points into monitored gaze-point input. Eye-tracking authentication is used to call back each module and decide whether a requester is allowed; the parameter-setting module parses the configuration and updates it in real time; and the pupil repair module is used to create the human eye image template library and to repair the captured first eye image. The eye-tracking service can also call the camera application through the camera Native Development Kit (NDK) interface; the camera application calls the camera and captures the first eye image through it.
Referring to FIG. 1C, FIG. 1C is a schematic flowchart of a pupil detection method provided by an embodiment of this application, applied to the electronic device shown in FIG. 1A. As shown in FIG. 1C, the pupil detection method provided by this application includes:
101. Acquire a first eye image.
The first eye image is an image containing the user's eye region.
In this embodiment of the application, the first eye image may be captured by a camera provided on the electronic device. The captured first eye image can be used for eye tracking: in an eye-tracking scenario, the electronic device can, during eye tracking, capture the first eye image through the camera and perform pupil detection on it to obtain the pupil center position in the first eye image, then determine the gaze point at which the user is looking on the screen according to the pupil center position, and control the operation of the electronic device according to that gaze point. Referring to FIG. 1D, FIG. 1D is a schematic diagram of a scenario in which a camera captures the first eye image. In an actual shooting scenario, when the user's eyelid droops, the user looks downward, or the user wears glasses, the pupil may be occluded in first eye images captured at some angles. If the pupil area in the captured first eye image is occluded, the pupil center position cannot be obtained directly by performing pupil detection on the first eye image; the first eye image can therefore be repaired.
102. If the first eye image includes an occluded area, determine, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area.
The occluded area including at least part of the pupil area specifically covers the following cases: the pupil area in the first eye image is fully occluded while the other eye regions are not fully occluded; or one part of the pupil area in the first eye image is occluded while the other part is not, and the other eye regions are not fully occluded. Referring to FIG. 1E, FIG. 1E is a schematic demonstration of a first eye image whose pupil area is fully occluded; referring to FIG. 1F, FIG. 1F is a schematic demonstration of a first eye image whose pupil area is partially occluded.
In a specific implementation, the electronic device can preset a human eye image template library, which may include multiple human eye image templates captured at different angles and different distances, with the eye region of each template unoccluded. Thus, if the first eye image is a partially occluded eye image, a target human eye image template that successfully matches the first eye image can be determined in the preset human eye image template library.
Optionally, in step 102 above, determining a target human eye image template in the preset human eye image template library that successfully matches the first eye image may include the following steps:
21. Match the first eye image against multiple human eye image templates in the preset human eye image template library to obtain multiple matching values;
22. If a target matching value greater than a preset matching value exists among the multiple matching values, determine that the human eye image template corresponding to the target matching value successfully matches the first eye image, and determine the human eye image template corresponding to the target matching value as the target human eye image template.
To match the first eye image against the multiple human eye image templates in the preset library, the first shooting angle and first shooting distance at which the camera captured the first eye image may first be acquired; the first shooting angle is then matched against the shooting angle corresponding to each of the multiple templates to obtain a first matching score, and the first shooting distance is matched against the shooting distance corresponding to each template to obtain a second matching score; the sum of the first matching score and the second matching score is then determined to obtain the matching value corresponding to that template. In this way, matching values between the multiple templates and the first eye image can be determined, one matching value per template, and finally the target human eye image template corresponding to a target matching value greater than the preset matching value can be determined. Thus, the target template whose shooting angle is closest to the first shooting angle and whose shooting distance is closest to the first shooting distance can be determined.
Optionally, to match the first eye image against the multiple human eye image templates in the preset library, feature extraction may instead be performed on the first eye image to obtain a first human eye feature set, which is then matched against the human eye feature set corresponding to each template to obtain the matching value corresponding to that template. Matching values between the multiple templates and the first eye image can thus be determined, one per template, and the target template corresponding to a target matching value greater than the preset matching value can finally be determined. In this way, the target template whose eye features are closest to the first human eye feature set can be determined.
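Steps 21 and 22 can be sketched as follows. The patent only states that an angle matching score and a distance matching score are computed per template and summed; the linear scoring functions, the normalization constants, and the preset threshold of 1.5 below are all assumptions for illustration.

```python
# Hypothetical matching-value computation; scoring scheme is an assumption.

def match_value(angle, distance, tpl):
    """Sum of an angle score and a distance score, each in [0, 1]."""
    angle_score = max(0.0, 1.0 - abs(angle - tpl["angle"]) / 90.0)
    dist_score = max(0.0, 1.0 - abs(distance - tpl["distance"]) / 50.0)
    return angle_score + dist_score

def target_template(angle, distance, library, preset=1.5):
    """Return the best template if its matching value exceeds the preset."""
    best = max(library, key=lambda t: match_value(angle, distance, t))
    if match_value(angle, distance, best) > preset:
        return best          # matched successfully
    return None              # no template exceeds the preset matching value

library = [{"id": "a", "angle": 0, "distance": 30},
           {"id": "b", "angle": 30, "distance": 25}]
tpl = target_template(angle=28, distance=26, library=library)
```

Template "b" wins here because both its shooting angle and shooting distance are closest to those of the captured image, which is exactly the selection behavior the step describes.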
Optionally, this embodiment of the application may further include the following steps:
1021. Perform feature extraction on the first eye image to obtain multiple eye image features, the multiple eye image features including multiple eye contour features;
1022. Predict a pupil reference area according to the multiple eye contour features;
1023. Match the eye image features falling within the pupil reference area against preset pupil image features to obtain a matching result; if the matching result does not meet a preset condition, determine that the first eye image includes an occluded area.
In this embodiment of the application, after the first eye image is acquired, feature extraction can first be performed on it to obtain multiple eye image features. Because different regions of the first eye image present different eye image features, a pupil reference area can be predicted from the multiple eye contour features; the eye image features within the pupil reference area are then matched against preset pupil image features, which can be stored in advance on the electronic device, to obtain a matching result. Specifically, suppose the eye image features in the predicted pupil reference area include m eye image features, m being an integer greater than 1. Each of the m eye image features can be matched against the preset pupil image features to obtain m feature matching values, which serve as the matching result. If n of the m feature matching values are less than or equal to a preset matching value, and the ratio of n to m is greater than a preset ratio, it is determined that the first eye image includes an occluded area, where n is a positive integer less than m; it can then be determined that the first eye image is a partially occluded eye image.
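The ratio test in step 1023 might look like the following sketch. The similarity measure, the per-feature match threshold, and the preset ratio are placeholders, since the patent leaves all of these as preset values and does not specify a feature descriptor.

```python
# Illustrative occlusion test: count features that match the preset pupil
# feature poorly; the region is occluded if too large a fraction fails.

def feature_match(f, preset):
    # Toy similarity in [0, 1]; a real system would compare descriptors.
    return 1.0 - min(1.0, abs(f - preset))

def is_occluded(region_features, preset_feature,
                match_threshold=0.6, ratio_threshold=0.5):
    scores = [feature_match(f, preset_feature) for f in region_features]
    n_bad = sum(1 for s in scores if s <= match_threshold)  # n of m
    return n_bad / len(scores) > ratio_threshold            # n/m > ratio

# Three of four features in the pupil reference area differ strongly from
# the preset pupil feature, so the area is judged occluded.
occluded = is_occluded([0.9, 0.1, 0.2, 0.15], preset_feature=0.9)
```

The n/m comparison mirrors the text: with m = 4 features and n = 3 poor matches, the ratio 0.75 exceeds the preset ratio of 0.5.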
103. Repair the first eye image according to the target human eye image template to obtain a second eye image.
Considering that the pupil area in the first eye image may be occluded, and since the target human eye image template is the template closest to the first eye image, the first eye image can be repaired according to the target human eye image template, thereby obtaining a second eye image that includes a complete pupil area.
Optionally, in step 103 above, repairing the first eye image according to the target human eye image template to obtain the second eye image may include the following steps:
31. Determine the occluded area;
32. Determine the template pupil area in the target human eye image template corresponding to the occluded area;
33. Repair the occluded area according to the template pupil area to obtain the second eye image.
Image analysis can be performed on the first eye image to determine, as the occluded area, the region corresponding to the set of pixels inconsistent with preset eye pixels, where the preset eye pixels may be preset pupil-area pixels; alternatively, the region corresponding to the set of pixels in the first eye image whose pixel information is inconsistent with that of the target human eye image template can be determined as the occluded area.
To determine the template pupil area in the target human eye image template, pupil feature extraction can be performed on the target template to obtain pupil feature points, and a region of preset size containing the pupil feature points is determined as the template pupil area.
To repair the occluded area according to the template pupil area and obtain the second eye image, the pixels of the template pupil area can replace the corresponding pixels in the first eye image.
Optionally, in step 33 above, repairing the occluded area according to the template pupil area to obtain the second eye image may include the following step:
3301. Use each template pixel of the template pupil area to replace the region pixel in the occluded area corresponding to that template pixel, to obtain the second eye image.
Referring to FIG. 1G, FIG. 1G is a schematic demonstration of replacing each region pixel in the occluded area with the corresponding template pixel of the template pupil area. Whether the occluded area includes a fully occluded pupil area or a partially occluded pupil area, each template pixel of the template pupil area can replace the region pixel in the occluded area corresponding to it; the template pupil area of the target human eye image template can thus directly serve as the entire repaired pupil area of the first eye image, yielding the second eye image.
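A minimal sketch of step 3301 follows; the sparse pixel-dictionary representation and the assumption that template and image coordinates are already aligned are illustrative choices, not details given by the patent.

```python
# Direct per-pixel replacement: every template pixel of the template pupil
# area overwrites the corresponding region pixel of the occluded area.

def replace_region(image, occluded_coords, template):
    second = dict(image)                  # copy the first eye image
    for pos in occluded_coords:
        second[pos] = template[pos]       # template pixel -> region pixel
    return second

image = {(0, 0): 180, (0, 1): 255, (1, 0): 255, (1, 1): 190}
template = {(0, 0): 175, (0, 1): 20, (1, 0): 25, (1, 1): 185}
second = replace_region(image, occluded_coords=[(0, 1), (1, 0)],
                        template=template)
```

Only the occluded coordinates change; unoccluded pixels of the first eye image are carried into the second eye image untouched.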
Optionally, if the occluded area includes a partially occluded pupil area, in step 33 above, repairing the occluded area according to the template pupil area to obtain the second eye image may include the following steps:
3302. Determine multiple first-region pixels in the occluded area whose noise is greater than a preset threshold;
3303. Determine multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels;
3304. Repair the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image.
Referring to FIG. 1H, FIG. 1H is a schematic demonstration of repairing multiple first-region pixels according to multiple first-template pixels. If the user's pupil area is partially occluded, in order to repair the occluded part of the pupil area in the first eye image, the multiple first-region pixels in the occluded area whose noise exceeds a preset threshold can be determined and taken as the pixels to be repaired; the multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels are then determined; finally, the multiple first-region pixels can be repaired according to the multiple first-template pixels to obtain the second eye image.
In one possible embodiment, repairing a second set of region pixels according to a second set of template pixels may specifically mean replacing each pixel in the second set of region pixels with the corresponding pixel from the second set of template pixels to obtain the second eye image.
Optionally, in step 3304 above, repairing the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image may include the following steps:
3341. Determine multiple unoccluded pupil-area pixels in the first eye image;
3342. Determine a weighting coefficient according to the multiple pupil-area pixels;
3343. Determine multiple target template pixels according to the weighting coefficient and the multiple first-template pixels, the multiple target template pixels corresponding one-to-one to the multiple first-region pixels;
3344. Replace, with each target template pixel among the multiple target template pixels, the first-region pixel in the occluded area corresponding to that target template pixel, to obtain the second eye image.
Considering that when the user's pupil area is partially occluded there is still an unoccluded part of the pupil area, multiple unoccluded pupil-area pixels in the first eye image can be determined. In this embodiment of the application, referring to FIG. 1I, FIG. 1I is another schematic demonstration of repairing multiple first-region pixels according to multiple first-template pixels: the grayscale value of each unoccluded pupil-area pixel can be determined to obtain multiple pixel values, and the average of these pixel values is computed to obtain a pixel average. The electronic device can store in advance a mapping between averages and weighting coefficients, so that the weighting coefficient corresponding to the pixel average can be determined from this mapping. The weighting coefficient is then applied in turn to the pixel value of each of the multiple first-template pixels in a weighted calculation to obtain multiple target template pixels. Finally, each target template pixel replaces the first-region pixel in the occluded area corresponding to it, yielding the second eye image; the occluded part of the pupil area is thus repaired. In this way, the occluded part of the pupil area is repaired according to the unoccluded part, so the first eye image is repaired more accurately to obtain the second eye image.
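Steps 3341 to 3344 can be sketched as below. The mapping from the mean grayscale of the unoccluded pupil pixels to a weighting coefficient is modeled as a lookup table of value bands; the bands and coefficients are assumptions, since the patent only says such a mapping is preset on the device.

```python
# Weighted repair sketch; COEFF_MAP bands are placeholder assumptions.

COEFF_MAP = [(0, 50, 0.8), (50, 120, 1.0), (120, 256, 1.2)]

def weighting_coefficient(unoccluded_pixels):
    """Map the mean grayscale of unoccluded pupil pixels to a coefficient."""
    mean = sum(unoccluded_pixels) / len(unoccluded_pixels)
    for lo, hi, coeff in COEFF_MAP:
        if lo <= mean < hi:
            return coeff
    return 1.0

def target_template_pixels(first_template_pixels, coeff):
    """Weight each first-template pixel; results replace the first-region
    pixels one-to-one in the occluded area."""
    return [round(coeff * t) for t in first_template_pixels]

coeff = weighting_coefficient([40, 44, 36])          # mean 40 -> band 0-50
repaired = target_template_pixels([50, 60], coeff)   # -> [40, 48]
```

Scaling the template pixels by a coefficient derived from the visible pupil pixels is what lets the repaired part match the brightness of the unoccluded part, which is the stated purpose of this branch.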
104. Perform pupil detection based on the second eye image to obtain the pupil center position.
After the first eye image has been repaired into a second eye image containing pupil information, pupil detection can be performed on the second eye image to obtain the pupil center position. In a specific embodiment, pupil feature extraction can be performed on the second eye image to obtain target pupil features, and the pupil center position in the second eye image is then determined according to the target pupil feature points.
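One common way to realize step 104 is a dark-weighted centroid over the repaired image. The patent itself only says to extract pupil features and determine the center from them, so this particular estimator is an assumption.

```python
# Dark-weighted centroid as a stand-in pupil-center estimator.

def pupil_center(image):
    """image: 2-D list of grayscale values; darker pixels weigh more."""
    total = wr = wc = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            w = max(0, 255 - v) ** 2      # emphasize dark pupil pixels
            total += w
            wr += w * r
            wc += w * c
    return (wr / total, wc / total)

img = [[250, 250, 250],
       [250,   5, 250],
       [250, 250, 250]]
center = pupil_center(img)   # dominated by the dark pixel at (1, 1)
```

Because the weights are squared darkness values, the single near-black pixel dominates the bright surround and the estimate lands essentially at its coordinates.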
It can be seen that in this embodiment of the application, a first eye image is acquired; if the first eye image includes an occluded area, a target human eye image template that successfully matches the first eye image is determined in a preset human eye image template library; the first eye image is repaired according to the target human eye image template to obtain a second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a pupil detection method provided by an embodiment of this application; the method includes:
201. Acquire a first eye image.
202. If the first eye image includes an occluded area, determine, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area.
203. Determine the occluded area.
204. Determine the template pupil area in the target human eye image template corresponding to the occluded area.
205. Repair the occluded area according to the template pupil area to obtain the second eye image.
206. Perform pupil detection based on the second eye image to obtain the pupil center position.
For the specific implementation of steps 201-206 above, refer to the corresponding descriptions of steps 101-104, which are not repeated here.
It can be seen that in this embodiment of the application, a first eye image is acquired; if the first eye image includes an occluded area, a target human eye image template that successfully matches the first eye image is determined in a preset human eye image template library; the occluded area is determined; the template pupil area in the target human eye image template corresponding to the occluded area is determined; the occluded area is repaired according to the template pupil area to obtain a second eye image; and pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
It should be noted that, for the specific implementation steps and other implementation steps in this embodiment of the application, refer to the steps of the method embodiment shown in FIG. 1C; to avoid repetition, they are not described in detail here.
Referring to FIG. 3A and FIG. 3B, FIG. 3A is a schematic flowchart of determining a gaze point in an eye-tracking scenario provided by an embodiment of this application, and FIG. 3B is a schematic flowchart of pupil detection in an eye-tracking scenario provided by an embodiment of this application; the method includes:
Consistent with FIG. 1B above, as shown in FIG. 3A, while the electronic device runs an eye-tracking application, the application can request the eye-gaze position from the eye-tracking service, and the eye-tracking service can call the camera application. Specifically, the eye-tracking service can request the first eye image from the camera application, whereupon the camera application turns on the camera of the electronic device and captures the first eye image through the camera; the camera transmits the captured first eye image to the camera application, which in turn transmits it to the eye-tracking service. The eye-tracking service can then call the eye-tracking core algorithm stored in the memory; if the pupil is occluded, the first eye image can be repaired to obtain a second eye image with complete pupil information, eye tracking can then be performed on the second eye image to obtain the eye-gaze position, and finally the eye-tracking service can transmit the eye-gaze position to the eye-tracking application.
As shown in FIG. 3B, after the first eye image is acquired through the camera, feature extraction can be performed on it to obtain multiple eye image features, including multiple eye contour features; a pupil reference area is predicted from the multiple eye contour features; the eye image features falling within the pupil reference area are matched against preset pupil image features to obtain a matching result; and if the matching result does not meet a preset condition, it is determined that the first eye image includes an occluded area. A target human eye image template that successfully matches the first eye image is then determined in the preset human eye image template library, the occluded area in the first eye image is determined, and the template pupil area in the target human eye image template corresponding to the occluded area is determined. If the occluded area includes the entire pupil area, each template pixel of the template pupil area replaces the corresponding region pixel in the occluded area to obtain the second eye image; if the occluded area includes a partially occluded pupil area, multiple first-region pixels in the occluded area whose noise exceeds a preset threshold are determined, multiple first-template pixels in the template pupil area corresponding one-to-one to them are determined, and the multiple first-region pixels are repaired according to the multiple first-template pixels to obtain the second eye image. Pupil detection is performed based on the second eye image to obtain the pupil center position, and finally the eye-gaze position is determined according to the pupil center position, so that the gaze position can be determined in an eye-tracking scenario.
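The final step of the FIG. 3B flow, determining the gaze position from the pupil center, is not spelled out in the patent; a common approach, sketched here purely as an assumption, is a calibrated linear mapping from pupil coordinates to screen coordinates, with the calibration coefficients obtained beforehand by the calibration algorithm mentioned for FIG. 1B.

```python
# Hypothetical gaze mapping; the calibration coefficients are assumptions.

def gaze_point(pupil_center, calib):
    """Map a pupil center (x, y) to screen coordinates via a linear model."""
    x, y = pupil_center
    gx = calib["ax"] * x + calib["bx"] * y + calib["cx"]
    gy = calib["ay"] * x + calib["by"] * y + calib["cy"]
    return gx, gy

calib = {"ax": 10.0, "bx": 0.0, "cx": 100.0,
         "ay": 0.0, "by": 12.0, "cy": 50.0}
gaze = gaze_point((32.0, 24.0), calib)   # -> (420.0, 338.0)
```

Per-user calibration is what makes such a mapping usable in practice, since eye geometry and camera placement differ between users and devices.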
It can be seen that in this embodiment of the application, a first eye image is acquired; if the first eye image includes an occluded area, a target human eye image template that successfully matches the first eye image is determined in a preset human eye image template library. If the occluded area includes the entire pupil area, each template pixel of the template pupil area replaces the corresponding region pixel in the occluded area to obtain the second eye image; if the occluded area includes a partially occluded pupil area, multiple first-region pixels in the occluded area whose noise exceeds a preset threshold are determined, multiple first-template pixels in the template pupil area corresponding one-to-one to them are determined, and the multiple first-region pixels are repaired according to the multiple first-template pixels to obtain the second eye image. Pupil detection is performed based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
It can be seen that with the above solution, the pupil position can be detected in different situations, which improves the efficiency of pupil detection and overcomes the limitations of pupil detection in the prior art.
It should be noted that, for the specific implementation steps and other implementation steps in this embodiment of the application, refer to the steps of the method embodiment shown in FIG. 1C; to avoid repetition, they are not described in detail here.
The following describes an apparatus for implementing the above pupil detection method, specifically as follows:
Consistent with the above, referring to FIG. 4, FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of this application. The electronic device includes a processor 410, a communication interface 430, and a memory 420, as well as one or more programs 421; the one or more programs 421 are stored in the memory 420 and configured to be executed by the processor, and the programs 421 include instructions for performing the following steps:
acquiring a first eye image;
if the first eye image includes an occluded area, determining, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area;
repairing the first eye image according to the target human eye image template to obtain a second eye image;
performing pupil detection based on the second eye image to obtain a pupil center position.
In one possible example, in determining a target human eye image template in the preset human eye image template library that successfully matches the first eye image, the programs 421 include instructions for performing the following steps:
matching the first eye image against multiple human eye image templates in the preset human eye image template library to obtain multiple matching values;
if a target matching value greater than a preset matching value exists among the multiple matching values, determining that the human eye image template corresponding to the target matching value successfully matches the first eye image, and determining the human eye image template corresponding to the target matching value as the target human eye image template.
In one possible example, in repairing the first eye image according to the target human eye image template to obtain the second eye image, the programs 421 include instructions for performing the following steps:
determining the occluded area;
determining the template pupil area in the target human eye image template corresponding to the occluded area;
repairing the occluded area according to the template pupil area to obtain the second eye image.
In one possible example, in repairing the occluded area according to the template pupil area to obtain the second eye image, the programs 421 include instructions for performing the following step:
using each template pixel of the template pupil area to replace the region pixel in the occluded area corresponding to that template pixel, to obtain the second eye image.
In one possible example, if the occluded area includes a partially occluded pupil area, in repairing the occluded area according to the template pupil area to obtain the second eye image, the programs 421 include instructions for performing the following steps:
determining multiple first-region pixels in the occluded area whose noise is greater than a preset threshold;
determining multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels;
repairing the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image.
In one possible example, in repairing the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image, the programs 421 include instructions for performing the following steps:
determining multiple unoccluded pupil-area pixels in the first eye image;
determining a weighting coefficient according to the multiple pupil-area pixels;
determining multiple target template pixels according to the weighting coefficient and the multiple first-template pixels, the multiple target template pixels corresponding one-to-one to the multiple first-region pixels;
replacing, with each target template pixel among the multiple target template pixels, the first-region pixel in the occluded area corresponding to that target template pixel, to obtain the second eye image.
In one possible example, the programs 421 further include instructions for performing the following steps:
performing feature extraction on the first eye image to obtain multiple eye image features, the multiple eye image features including multiple eye contour features;
predicting a pupil reference area according to the multiple eye contour features;
matching the eye image features falling within the pupil reference area against preset pupil image features to obtain a matching result, and if the matching result does not meet a preset condition, determining that the first eye image includes an occluded area.
It should be noted that, during implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in software form. The steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software units in the processor. The software unit may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory; the processor executes the instructions in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described again here. For the specific implementation steps and other implementation steps in the embodiments of this application, refer to the steps of the method embodiment shown in FIG. 1C; to avoid repetition, they are not described in detail here.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a pupil detection apparatus provided by this embodiment. The pupil detection apparatus 500 is applied to an electronic device that includes an acceleration sensor and a camera; the apparatus 500 includes an acquiring unit 501, a determining unit 502, a repairing unit 503, and a detecting unit 504, wherein:
the acquiring unit 501 is configured to acquire a first eye image;
the determining unit 502 is configured to, if the first eye image includes an occluded area, determine, in a preset human eye image template library, a target human eye image template that successfully matches the first eye image, where the occluded area includes at least part of the pupil area;
the repairing unit 503 is configured to repair the first eye image according to the target human eye image template to obtain a second eye image;
the detecting unit 504 is configured to perform pupil detection based on the second eye image to obtain a pupil center position.
Optionally, in determining a target human eye image template in the preset human eye image template library that successfully matches the first eye image, the determining unit 502 is specifically configured to:
match the first eye image against multiple human eye image templates in the preset human eye image template library to obtain multiple matching values;
if a target matching value greater than a preset matching value exists among the multiple matching values, determine that the human eye image template corresponding to the target matching value successfully matches the first eye image, and determine the human eye image template corresponding to the target matching value as the target human eye image template.
Optionally, in matching the first eye image against the multiple human eye image templates in the preset human eye image template library, the determining unit 502 is specifically configured to:
acquire the first shooting angle and first shooting distance at which the camera captured the first eye image;
match the first shooting angle against the shooting angle corresponding to each of the multiple human eye image templates to obtain a first matching score, and match the first shooting distance against the shooting distance corresponding to each of the multiple human eye image templates to obtain a second matching score;
determine the sum of the first matching score and the second matching score to obtain the matching value corresponding to that human eye image template.
Optionally, in matching the first eye image against the multiple human eye image templates in the preset human eye image template library, the determining unit 502 is specifically configured to:
perform feature extraction on the first eye image to obtain a first human eye feature set;
match the first feature set against the human eye feature set corresponding to each of the multiple human eye image templates to obtain the matching value corresponding to that human eye image template.
Optionally, in repairing the first eye image according to the target human eye image template to obtain the second eye image, the repairing unit 503 is specifically configured to:
determine the occluded area;
determine the template pupil area in the target human eye image template corresponding to the occluded area;
repair the occluded area according to the template pupil area to obtain the second eye image.
Optionally, in repairing the occluded area according to the template pupil area to obtain the second eye image, the repairing unit 503 is specifically configured to:
use each template pixel of the template pupil area to replace the region pixel in the occluded area corresponding to that template pixel, to obtain the second eye image.
Optionally, if the occluded area includes a partially occluded pupil area, in repairing the occluded area according to the template pupil area to obtain the second eye image, the repairing unit 503 is specifically configured to:
determine multiple first-region pixels in the occluded area whose noise is greater than a preset threshold;
determine multiple first-template pixels in the template pupil area corresponding one-to-one to the multiple first-region pixels;
repair the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image.
Optionally, in repairing the multiple first-region pixels according to the multiple first-template pixels to obtain the second eye image, the repairing unit 503 is specifically configured to:
determine multiple unoccluded pupil-area pixels in the first eye image;
determine a weighting coefficient according to the multiple pupil-area pixels;
determine multiple target template pixels according to the weighting coefficient and the multiple first-template pixels, the multiple target template pixels corresponding one-to-one to the multiple first-region pixels;
replace, with each target template pixel among the multiple target template pixels, the first-region pixel in the occluded area corresponding to that target template pixel, to obtain the second eye image.
Optionally, the detecting unit 504 is further configured to perform feature extraction on the first eye image to obtain multiple eye image features, the multiple eye image features including multiple eye contour features; predict a pupil reference area according to the multiple eye contour features; and match the eye image features falling within the pupil reference area against preset pupil image features to obtain a matching result; if the matching result does not meet a preset condition, determine that the first eye image includes an occluded area.
It can be seen that the pupil detection apparatus described in this embodiment of the application acquires a first eye image; if the first eye image includes an occluded area, determines a target human eye image template in a preset human eye image template library that successfully matches the first eye image; repairs the first eye image according to the target human eye image template to obtain a second eye image; and performs pupil detection based on the second eye image to obtain the pupil center position. In this way, when the pupil in the captured first eye image is occluded, the occluded first eye image is repaired using a human eye image template, which solves the problem of pupil occlusion and improves the efficiency and accuracy of pupil detection.
It can be understood that the functions of the program modules of the pupil detection apparatus of this embodiment can be implemented according to the methods in the above method embodiments; for the specific implementation process, refer to the related descriptions of the above method embodiments, which are not repeated here.
An embodiment of this application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any method recorded in the above method embodiments; the above computer includes an electronic device.
An embodiment of this application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any method recorded in the above method embodiments. The computer program product may be a software installation package, and the above computer includes an electronic device.
It should be noted that, for brevity, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that this application is not limited by the described order of actions, because according to this application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the above units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of this application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing related hardware; the program can be stored in a computer-readable memory, and the memory may include a flash disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, or the like.
The embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementation of this application, and the descriptions of the above embodiments are only used to help understand the method of this application and its core idea; meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of this application. In summary, the content of this specification should not be construed as a limitation on this application.

Claims (20)

  1. 一种瞳孔检测方法,其特征在于,所述方法包括:
    获取第一眼部图像;
    若所述第一眼部图像包括被遮挡区域,则确定预设的人眼图像模板库中与所述第一眼部图像匹配成功的目标人眼图像模板,其中,所述被遮挡区域至少包括部分瞳孔区域;
    根据所述目标人眼图像模板对所述第一眼部图像进行修复,得到第二眼部图像;
    基于所述第二眼部图像进行瞳孔检测,得到瞳孔中心位置。
  2. The method according to claim 1, characterized in that the determining, from a preset human eye image template library, a target human eye image template that successfully matches the first eye image comprises:
    matching the first eye image against a plurality of human eye image templates in the preset human eye image template library to obtain a plurality of match values; and
    if a target match value greater than a preset match value exists among the plurality of match values, determining that the human eye image template corresponding to the target match value successfully matches the first eye image, and determining the human eye image template corresponding to the target match value as the target human eye image template.
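Claim 2's selection step — compute a match value per template and accept one that exceeds a preset match value — can be sketched as follows. The `match_fn` callback and the choice of taking the highest-scoring template are illustrative assumptions; claims 3 and 4 describe two concrete ways of producing the match value.

```python
# Sketch of claim 2: pick the template whose match value exceeds the
# preset threshold (here: the best-scoring one), or None if none does.

def select_target_template(eye_image, templates, match_fn, preset_value):
    scores = [match_fn(eye_image, t) for t in templates]
    best_i = max(range(len(scores)), key=scores.__getitem__)
    if scores[best_i] > preset_value:
        return templates[best_i]   # match succeeded: target template
    return None                    # no template matched successfully
```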
  3. The method according to claim 2, characterized in that the matching the first eye image against a plurality of human eye image templates in the preset human eye image template library comprises:
    acquiring a first shooting angle and a first shooting distance at which a camera captured the first eye image;
    matching the first shooting angle against the shooting angle corresponding to each of the plurality of human eye image templates to obtain a first match score, and matching the first shooting distance against the shooting distance corresponding to each of the plurality of human eye image templates to obtain a second match score; and
    determining the sum of the first match score and the second match score to obtain the match value corresponding to that human eye image template.
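A sketch of claim 3's scoring: each template is scored by how close its recorded shooting angle and shooting distance are to those of the captured image, and the two scores are summed. The linear falloff and the normalizing constants are assumptions; the claim only requires a first and a second match score whose sum is the match value.

```python
# Sketch of claim 3: angle score + distance score = match value.

def angle_distance_match(angle, dist, tpl_angle, tpl_dist,
                         max_angle_diff=90.0, max_dist_diff=50.0):
    # First match score: closeness of shooting angles (1.0 = identical).
    s1 = max(0.0, 1.0 - abs(angle - tpl_angle) / max_angle_diff)
    # Second match score: closeness of shooting distances.
    s2 = max(0.0, 1.0 - abs(dist - tpl_dist) / max_dist_diff)
    return s1 + s2   # match value for this template
```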
  4. The method according to claim 2, characterized in that the matching the first eye image against a plurality of human eye image templates in the preset human eye image template library comprises:
    performing feature extraction on the first eye image to obtain a first human eye feature set; and
    matching the first human eye feature set against the human eye feature set corresponding to each of the plurality of human eye image templates to obtain the match value corresponding to that human eye image template.
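Claim 4's variant compares feature sets directly. In this sketch a feature set is a numeric vector and the match value is their cosine similarity — an assumed measure, since the claim does not fix a particular similarity function.

```python
# Sketch of claim 4: match value between the extracted feature set and a
# template's stored feature set, using cosine similarity (an assumption).

def feature_match_value(features, template_features):
    dot = sum(a * b for a, b in zip(features, template_features))
    na = sum(a * a for a in features) ** 0.5
    nb = sum(b * b for b in template_features) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```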
  5. The method according to claim 1 or 2, characterized in that the repairing the first eye image according to the target human eye image template to obtain a second eye image comprises:
    determining the occluded region;
    determining a template pupil region in the target human eye image template that corresponds to the occluded region; and
    repairing the occluded region according to the template pupil region to obtain the second eye image.
  6. The method according to claim 5, characterized in that the repairing the occluded region according to the template pupil region to obtain the second eye image comprises:
    replacing, with each template pixel of the template pupil region, the region pixel in the occluded region corresponding to that template pixel, to obtain the second eye image.
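Claim 6's per-pixel replacement can be sketched directly. Representing the occluded region as a set of (x, y) coordinates is an assumption; the claim does not fix how the region is represented.

```python
# Sketch of claim 6: replace every pixel inside the occluded region with
# the template pixel at the same coordinates, leaving other pixels intact.

def repair_from_template(img, template, occluded_coords):
    out = [row[:] for row in img]       # copy; unoccluded pixels are kept
    for x, y in occluded_coords:
        out[y][x] = template[y][x]      # per-pixel template replacement
    return out
```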
  7. The method according to claim 5, characterized in that, if the occluded region includes a partially occluded pupil region, the repairing the occluded region according to the template pupil region to obtain the second eye image comprises:
    determining a plurality of first region pixels in the occluded region whose noise is greater than a preset threshold;
    determining a plurality of first template pixels in the template pupil region in one-to-one correspondence with the plurality of first region pixels; and
    repairing the plurality of first region pixels according to the plurality of first template pixels to obtain the second eye image.
  8. The method according to claim 7, characterized in that the repairing the plurality of first region pixels according to the plurality of first template pixels to obtain the second eye image comprises:
    determining a plurality of unoccluded pupil region pixels in the first eye image;
    determining a weighting coefficient according to the plurality of pupil region pixels;
    determining a plurality of target template pixels according to the weighting coefficient and the plurality of first template pixels, the plurality of target template pixels being in one-to-one correspondence with the plurality of first region pixels; and
    replacing, with each target template pixel of the plurality of target template pixels, the first region pixel in the occluded region corresponding to that target template pixel, to obtain the second eye image.
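Claims 7-8 can be sketched together: when part of the pupil is still visible, a weighting coefficient is derived from the unoccluded pupil pixels and applied to the template pixels before substitution, so the filled-in region matches the visible pupil's brightness. Deriving the coefficient as the ratio of mean grey values is an assumption; the claims only require a coefficient determined from the unoccluded pupil region pixels.

```python
# Sketch of claims 7-8: weight the first template pixels by a coefficient
# derived from the visible pupil, then substitute them into the image.

def weighted_repair(img, template, occluded_coords, visible_pupil_coords):
    vis = [img[y][x] for x, y in visible_pupil_coords]        # unoccluded pupil pixels
    tpl_vis = [template[y][x] for x, y in visible_pupil_coords]
    # Weighting coefficient (assumed): ratio of mean visible brightness.
    coeff = (sum(vis) / len(vis)) / (sum(tpl_vis) / len(tpl_vis))
    out = [row[:] for row in img]
    for x, y in occluded_coords:
        out[y][x] = template[y][x] * coeff   # target template pixel
    return out
```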
  9. The method according to any one of claims 1-8, characterized in that the method further comprises:
    performing feature extraction on the first eye image to obtain a plurality of eye image features, the plurality of eye image features including a plurality of eye contour features;
    predicting a pupil reference region according to the plurality of eye contour features; and
    matching the part of the plurality of eye image features located in the pupil reference region against preset pupil image features to obtain a matching result, and if the matching result does not satisfy a preset condition, determining that the first eye image includes an occluded region.
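Claim 9's occlusion test can be sketched as: predict where the pupil should be from eye-contour features, then check whether the features found inside that reference region look like a pupil. The bounding-box-based reference region and the count-based "preset condition" are both assumptions made for this sketch.

```python
# Sketch of claim 9: too few pupil-like features inside the predicted
# pupil reference region => the image includes an occluded region.

def includes_occluded_region(features, contour_points, is_pupil_like,
                             min_matches=3):
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    # Pupil reference region (assumed): centre third of the eye-contour
    # bounding box.
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    w, h = (max(xs) - min(xs)) / 3, (max(ys) - min(ys)) / 3
    in_region = [f for f in features
                 if abs(f[0] - cx) <= w / 2 and abs(f[1] - cy) <= h / 2]
    matches = sum(1 for f in in_region if is_pupil_like(f))
    return matches < min_matches   # preset condition not satisfied
```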
  10. A pupil detection apparatus, characterized in that the apparatus comprises:
    an acquiring unit, configured to acquire a first eye image;
    a determining unit, configured to, if the first eye image includes an occluded region, determine, from a preset human eye image template library, a target human eye image template that successfully matches the first eye image, wherein the occluded region includes at least part of a pupil region;
    a repairing unit, configured to repair the first eye image according to the target human eye image template to obtain a second eye image; and
    a detecting unit, configured to perform pupil detection based on the second eye image to obtain a pupil center position.
  11. The apparatus according to claim 10, characterized in that, in terms of determining, from the preset human eye image template library, the target human eye image template that successfully matches the first eye image, the determining unit is specifically configured to:
    match the first eye image against a plurality of human eye image templates in the preset human eye image template library to obtain a plurality of match values; and
    if a target match value greater than a preset match value exists among the plurality of match values, determine that the human eye image template corresponding to the target match value successfully matches the first eye image, and determine the human eye image template corresponding to the target match value as the target human eye image template.
  12. The apparatus according to claim 11, characterized in that, in terms of matching the first eye image against the plurality of human eye image templates in the preset human eye image template library, the determining unit is specifically configured to:
    acquire a first shooting angle and a first shooting distance at which a camera captured the first eye image;
    match the first shooting angle against the shooting angle corresponding to each of the plurality of human eye image templates to obtain a first match score, and match the first shooting distance against the shooting distance corresponding to each of the plurality of human eye image templates to obtain a second match score; and
    determine the sum of the first match score and the second match score to obtain the match value corresponding to that human eye image template.
  13. The apparatus according to claim 11, characterized in that, in terms of matching the first eye image against the plurality of human eye image templates in the preset human eye image template library, the determining unit is specifically configured to:
    perform feature extraction on the first eye image to obtain a first human eye feature set; and
    match the first human eye feature set against the human eye feature set corresponding to each of the plurality of human eye image templates to obtain the match value corresponding to that human eye image template.
  14. The apparatus according to claim 10 or 11, characterized in that, in terms of repairing the first eye image according to the target human eye image template to obtain the second eye image, the repairing unit is specifically configured to:
    determine the occluded region;
    determine a template pupil region in the target human eye image template that corresponds to the occluded region; and
    repair the occluded region according to the template pupil region to obtain the second eye image.
  15. The apparatus according to claim 14, characterized in that, in terms of repairing the occluded region according to the template pupil region to obtain the second eye image, the repairing unit is specifically configured to:
    replace, with each template pixel of the template pupil region, the region pixel in the occluded region corresponding to that template pixel, to obtain the second eye image.
  16. The apparatus according to claim 14, characterized in that, if the occluded region includes a partially occluded pupil region, in terms of repairing the occluded region according to the template pupil region to obtain the second eye image, the repairing unit is specifically configured to:
    determine a plurality of first region pixels in the occluded region whose noise is greater than a preset threshold;
    determine a plurality of first template pixels in the template pupil region in one-to-one correspondence with the plurality of first region pixels; and
    repair the plurality of first region pixels according to the plurality of first template pixels to obtain the second eye image.
  17. The apparatus according to claim 16, characterized in that, in terms of repairing the plurality of first region pixels according to the plurality of first template pixels to obtain the second eye image, the repairing unit is specifically configured to:
    determine a plurality of unoccluded pupil region pixels in the first eye image;
    determine a weighting coefficient according to the plurality of pupil region pixels;
    determine a plurality of target template pixels according to the weighting coefficient and the plurality of first template pixels, the plurality of target template pixels being in one-to-one correspondence with the plurality of first region pixels; and
    replace, with each target template pixel of the plurality of target template pixels, the first region pixel in the occluded region corresponding to that target template pixel, to obtain the second eye image.
  18. The apparatus according to any one of claims 10-17, characterized in that the detecting unit is further configured to:
    perform feature extraction on the first eye image to obtain a plurality of eye image features, the plurality of eye image features including a plurality of eye contour features;
    predict a pupil reference region according to the plurality of eye contour features; and
    match the part of the plurality of eye image features located in the pupil reference region against preset pupil image features to obtain a matching result, and if the matching result does not satisfy a preset condition, determine that the first eye image includes an occluded region.
  19. An electronic device, characterized by comprising a processor and a memory, wherein the memory is configured to store one or more programs that are configured to be executed by the processor, and the programs include instructions for performing the steps in the method according to any one of claims 1-9.
  20. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-9.
PCT/CN2020/130329 2019-12-13 2020-11-20 Pupil detection method and related products WO2021115097A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911283540.4A CN112989878A (zh) 2019-12-13 2019-12-13 Pupil detection method and related products
CN201911283540.4 2019-12-13

Publications (1)

Publication Number Publication Date
WO2021115097A1 true WO2021115097A1 (zh) 2021-06-17

Family

ID=76329486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130329 WO2021115097A1 (zh) 2019-12-13 2020-11-20 瞳孔检测方法及相关产品

Country Status (2)

Country Link
CN (1) CN112989878A (zh)
WO (1) WO2021115097A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030042A (zh) * 2023-02-24 2023-04-28 智慧眼科技股份有限公司 Diagnostic apparatus, method, device, and storage medium for doctor eye diagnosis

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114244884B (zh) * 2021-12-21 2024-01-30 北京蔚领时代科技有限公司 Eye-tracking-based video coding method applied to cloud gaming

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006048328A (ja) * 2004-08-04 2006-02-16 Konica Minolta Holdings Inc Face detection device and face detection method
CN105335720A (zh) * 2015-10-28 2016-02-17 广东欧珀移动通信有限公司 Iris information collection method and collection system
CN110472546A (zh) * 2019-08-07 2019-11-19 南京大学 Non-contact eye-movement feature extraction device and method for infants

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100950138B1 (ko) * 2009-08-17 2010-03-30 퍼스텍주식회사 Method for detecting pupils in a face image
CN103136512A (zh) * 2013-02-04 2013-06-05 重庆市科学技术研究院 Pupil positioning method and system
US9361519B2 (en) * 2014-03-28 2016-06-07 Intel Corporation Computational array camera with dynamic illumination for eye tracking
CN108319953B (zh) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and apparatus for a target object, electronic device, and storage medium
WO2019080061A1 (zh) * 2017-10-26 2019-05-02 深圳市柔宇科技有限公司 Camera-based occlusion detection and repair device, and occlusion detection and repair method therefor
CN108648201A (zh) * 2018-05-14 2018-10-12 京东方科技集团股份有限公司 Pupil positioning method and apparatus, storage medium, and electronic device
CN110472521B (zh) * 2019-07-25 2022-12-20 张杰辉 Pupil positioning calibration method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE, XIAOBING: "Research on Image Occlusion Removal Algorithm Based on Feature Matching", DIGITIZATION USER, no. 25, 24 June 2018 (2018-06-24), pages 241, XP009528617, ISSN: 1009-0843 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030042A (zh) * 2023-02-24 2023-04-28 智慧眼科技股份有限公司 Diagnostic apparatus, method, device, and storage medium for doctor eye diagnosis
CN116030042B (zh) * 2023-02-24 2023-06-16 智慧眼科技股份有限公司 Diagnostic apparatus, method, device, and storage medium for doctor eye diagnosis

Also Published As

Publication number Publication date
CN112989878A (zh) 2021-06-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20899264; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20899264; Country of ref document: EP; Kind code of ref document: A1)