WO2024021250A1 - Identity information collection method, apparatus, electronic device and storage medium - Google Patents


Info

Publication number
WO2024021250A1
WO2024021250A1 (PCT/CN2022/118799)
Authority
WO
WIPO (PCT)
Prior art keywords
image, pupil, under-screen, wearer
Prior art date
Application number
PCT/CN2022/118799
Other languages
English (en)
French (fr)
Inventor
韦燕华
Original Assignee
上海闻泰电子科技有限公司
Application filed by 上海闻泰电子科技有限公司 filed Critical 上海闻泰电子科技有限公司
Publication of WO2024021250A1 publication Critical patent/WO2024021250A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/197 Matching; Classification
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/013 Eye tracking input arrangements

Definitions

  • the present disclosure relates to identity information collection methods, devices, electronic devices and storage media.
  • virtual reality (VR)
  • mixed reality (MR)
  • user identity information needs to be collected.
  • the user's identity information may be used to authenticate users who log in to VR games, or to create a virtual character that looks similar to the user in a VR scene.
  • some head-mounted displays can extract user identity information from the user's face, but there is still a problem of insufficient accuracy.
  • an identity information collection method device, electronic device and storage medium are provided.
  • An identity information collection method includes:
  • the first iris image of the wearer of the head-mounted display is captured by the under-screen camera of the head-mounted display;
  • the head-mounted display includes a display screen, the under-screen camera and a visible light source, and the visible light source is configured to emit visible light;
  • the under-screen camera is arranged on the back of the display screen, and the visible light source is arranged around the display screen;
  • based on the image position of the wearer's pupil in the first iris image, it is determined whether the relative position of the wearer's pupil with respect to the under-screen camera matches a reference position;
  • the reference position is the position, relative to the under-screen camera, of the intersection point of the center line of the under-screen camera and each visible light ray;
  • if the relative position matches the reference position, the under-screen camera is controlled to capture a second iris image, and the wearer's identity information is extracted from the second iris image.
  • the head-mounted display further includes a driving device; the driving device is provided on the back of the display screen and connected to the under-screen camera, and the driving device is configured to control the movement of the under-screen camera; the method also includes: if the relative position does not match the reference position, identifying the deviation direction of the relative position from the reference position; when the deviation direction includes a first direction parallel to the display screen, determining the deviation distance of the relative position from the reference position in the first direction; and driving the under-screen camera, through the driving device, to move the deviation distance in the first direction.
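The correction step in this claim can be sketched as follows. The drive interface is entirely hypothetical; the only assumption taken from the text is that the screen-parallel ("first direction") component of the deviation is measured and the camera is moved by that distance.

```python
import math

def screen_parallel_deviation(relative_pos, reference_pos):
    """Component of (relative - reference) lying in the display-screen plane,
    assuming the screen is parallel to the camera's x-y plane."""
    dx = relative_pos[0] - reference_pos[0]
    dy = relative_pos[1] - reference_pos[1]
    return (dx, dy)

def drive_command(parallel_dev):
    """Turn the in-plane deviation into a (unit_direction, distance) pair for a
    hypothetical camera-drive motor; returns None when no move is needed."""
    dx, dy = parallel_dev
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return None
    return ((dx / dist, dy / dist), dist)
```

In the claim's terms, the motor would then move the under-screen camera along `unit_direction` by `distance`.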
  • the method further includes: recalibrating the center-line position of the under-screen camera; and re-determining, based on the calibrated center-line position, the position of the intersection point of the center line of the under-screen camera and each visible light ray relative to the display screen, so as to update the reference position.
  • identifying the deviation direction of the relative position from the reference position includes: obtaining the internal parameters of the under-screen camera; converting, according to the internal parameters, the image position of the wearer's pupil in the first iris image from the image coordinate system to the camera coordinate system, to obtain the position of the pupil in the camera coordinate system as the relative position of the pupil relative to the under-screen camera; and calculating the deviation direction of the relative position from the reference position according to the coordinates of the relative position and the reference position in the camera coordinate system.
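The image-to-camera conversion above can be sketched with a pinhole model. Everything here is illustrative: the intrinsic values are made-up examples, and the pupil's depth along the optical axis is assumed known (e.g. from the headset geometry), since an image position alone does not fix a 3-D position.

```python
import math

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image position (u, v) into the camera coordinate system,
    using pinhole intrinsics (focal lengths fx, fy and principal point cx, cy,
    all in pixels). `depth` is the assumed distance along the optical axis."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

def deviation_direction(relative_pos, reference_pos):
    """Unit vector pointing from the reference position toward the pupil."""
    d = [a - b for a, b in zip(relative_pos, reference_pos)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d) if n > 0 else tuple(d)
```

For example, with fx = fy = 500, principal point (320, 240) and an assumed depth of 40 mm, the pixel (420, 240) back-projects to a point 8 mm to the side of the optical axis.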
  • identifying the deviation direction of the relative position from the reference position includes: obtaining the image position of the wearer's pupil in a standard image, the standard image being the iris image captured by the under-screen camera when the pupil is at the reference position; and determining the deviation direction of the relative position from the reference position based on the image positions of the pupil in the first iris image and the standard image respectively; or, determining the deviation direction of the relative position from the reference position according to the areas of the image regions corresponding to the pupil in the first iris image and the standard image.
  • the method further includes: if the relative position does not match the reference position, identifying the deviation direction of the relative position from the reference position; and when the deviation direction includes a second direction perpendicular to the display screen, outputting movement prompt information through the head-mounted display; the movement prompt information is used to prompt the wearer to move in the second direction.
  • the head-mounted display further includes at least two eye cameras; the at least two eye cameras are arranged around the display screen and respectively correspond to different shooting angles; the method further includes: photographing the wearer's eye area through the at least two eye cameras to obtain at least two frames of eye images; and extracting the identity information of the wearer from the second iris image includes: fusing the second iris image and the at least two frames of eye images through an image fusion model to obtain the face fusion image output by the image fusion model; and extracting the identity information of the wearer from the face fusion image.
  • the method further includes: calculating, for each of the at least two frames of eye images, a registration matrix between the second iris image and that eye image; and converting each frame of eye image to a reference perspective according to its corresponding registration matrix, the reference perspective being the shooting perspective from which the under-screen camera captures the second iris image; and fusing the second iris image and the at least two frames of eye images through the image fusion model includes: fusing, through the image fusion model, the second iris image and each frame of eye image converted to the reference perspective, to obtain the face fusion image output by the image fusion model.
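The disclosure does not say how the registration matrices are computed; a common choice for this kind of planar registration is a homography estimated from point correspondences. The sketch below uses the direct linear transform (DLT) on four assumed correspondences between an eye image and the iris image; a real system would first detect and match features to obtain those correspondences.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate a 3x3 registration matrix H with dst ~ H @ src (homogeneous
    coordinates), from >= 4 point correspondences, via the direct linear
    transform: stack two equations per correspondence and take the null
    space of the resulting matrix."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    a = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(a)
    h = vt[-1].reshape(3, 3)     # null-space vector = flattened H, up to scale
    return h / h[2, 2]           # normalize so H[2, 2] == 1

def warp_point(h, x, y):
    """Map a point from the eye image into the reference (iris-image) perspective."""
    p = h @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping every pixel of an eye image with `warp_point` (or an equivalent image-warping routine) realizes the "converting each frame to the reference perspective" step before fusion.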
  • the head-mounted display further includes: an eye-tracking sensor, the eye-tracking sensor is disposed on one side of the display screen;
  • capturing the first iris image of the wearer of the head-mounted display through the under-screen camera includes: detecting the wearer's line-of-sight direction through the eye-tracking sensor; and when the detected line-of-sight direction indicates that the wearer is looking at the display screen, capturing the first iris image of the wearer through the under-screen camera.
  • the method further includes: controlling the display screen to stop emitting light when the under-screen camera captures an iris image.
  • determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil relative to the under-screen camera matches the reference position includes: obtaining the internal parameters of the under-screen camera; converting, according to the internal parameters, the image position of the wearer's pupil in the first iris image from the image coordinate system to the camera coordinate system, to obtain the position of the pupil in the camera coordinate system as the relative position of the pupil relative to the under-screen camera; calculating the distance between the relative position and the reference position according to their coordinates in the camera coordinate system; if the distance exceeds a distance threshold, determining that the relative position does not match the reference position; and if the distance does not exceed the distance threshold, determining that the relative position matches the reference position.
  • determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil relative to the under-screen camera matches the reference position includes: obtaining the image position of the wearer's pupil in a standard image, the standard image being an iris image captured by the under-screen camera when the pupil is at the reference position; if the image position of the pupil in the first iris image is the same as the image position of the pupil in the standard image, determining that the relative position matches the reference position; and if the image position of the pupil in the first iris image is different from the image position of the pupil in the standard image, determining that the relative position does not match the reference position.
  • the method further includes: if the image position of the pupil in the first iris image is the same as the image position of the pupil in the standard image, further determining whether the areas of the image regions corresponding to the pupil in the first iris image and the standard image are the same; if the areas are the same, determining that the relative position matches the reference position; and if the areas are not the same, determining that the relative position does not match the reference position.
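The position-plus-area check in these claims can be sketched over binary pupil masks. The tolerance values below are assumptions (the claims speak of exact equality, but any real segmentation is noisy enough to need a tolerance).

```python
def pupil_area(mask):
    """Number of pupil pixels in a binary mask (nested lists of 0/1)."""
    return sum(sum(row) for row in mask)

def pupil_centroid(mask):
    """Centroid (x, y) of the pupil pixels, or None if the mask is empty."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def matches_reference(mask, ref_mask, pos_tol=2.0, area_tol=0.1):
    """Match when the pupil centroid lies within pos_tol pixels of the standard
    image's centroid AND the areas differ by less than area_tol (fractional)."""
    c, c_ref = pupil_centroid(mask), pupil_centroid(ref_mask)
    if c is None or c_ref is None:
        return False
    if abs(c[0] - c_ref[0]) > pos_tol or abs(c[1] - c_ref[1]) > pos_tol:
        return False
    a, a_ref = pupil_area(mask), pupil_area(ref_mask)
    return abs(a - a_ref) <= area_tol * a_ref
```

A sideways shift of the pupil fails the position test; the same centroid with a larger pupil region (the wearer closer to or farther from the camera) fails the area test, mirroring the two branches of the claim.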
  • An identity information collection device includes:
  • a photography module configured to capture the first iris image of the wearer of the head-mounted display through an under-screen camera of the head-mounted display;
  • the head-mounted display includes a display screen, the under-screen camera and a visible light source, and the visible light source is configured to emit visible light;
  • the under-screen camera is arranged on the back of the display screen, and the visible light source is arranged around the display screen;
  • a determination module configured to determine, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil relative to the under-screen camera matches a reference position;
  • the reference position is the position of the intersection point of the center line of the under-screen camera and each visible light ray relative to the display screen;
  • the photographing module is further configured to control the under-screen camera to capture a second iris image after the determination module determines that the relative position matches the reference position;
  • An extraction module configured to extract the identity information of the wearer from the second iris image.
  • An electronic device includes a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors execute the steps of any of the above identity information collection methods.
  • One or more non-volatile storage media storing computer-readable instructions; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of any of the above identity information collection methods.
  • Figure 1A is an application scenario diagram of the identity information collection method provided by one or more embodiments of the present disclosure.
  • Figure 1B is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure.
  • Figure 2A is an arrangement example diagram of a visible light source provided by one or more embodiments of the present disclosure.
  • Figure 2B is an arrangement example diagram of another visible light source provided by one or more embodiments of the present disclosure.
  • Figure 3 is a schematic flowchart of an identity information collection method provided by one or more embodiments of the present disclosure.
  • Figure 4 is an example diagram of reference positions provided by one or more embodiments of the present disclosure.
  • Figure 5 is a schematic flowchart of another identity information collection method provided by one or more embodiments of the present disclosure.
  • Figure 6 is a schematic structural diagram of another head-mounted display provided by one or more embodiments of the present disclosure.
  • Figure 7 is a schematic flowchart of another identity information collection method provided by one or more embodiments of the present disclosure.
  • Figure 8 is a schematic structural diagram of an identity information collection device provided by one or more embodiments of the present disclosure.
  • Figure 9 is a schematic structural diagram of an electronic device provided by one or more embodiments of the present disclosure.
  • first, second, etc. in the description and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
  • first camera and the second camera are used to distinguish different cameras, rather than to describe a specific order of the cameras.
  • words such as “exemplary” or “such as” are used to represent examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “such as” in the present disclosure should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present relevant concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise stated, “plurality” means two or more.
  • Embodiments of the present disclosure disclose an identity information collection method, device, electronic device and storage medium, which can collect more accurate iris images and improve the accuracy of identity information extracted from iris images. Each is explained in detail below.
  • Figure 1A is a schematic diagram of an application scenario of an identity information collection method disclosed in an embodiment.
  • a first operating environment is given, which may include a head-mounted display 101 , a terminal device 102 and a server 103 .
  • the user may wear the head mounted display 101 so that the head mounted display 101 acquires data.
  • the head-mounted display 101 does not have data processing capabilities; after acquiring the data, it can exchange data with the terminal device 102 through short-range communication technology.
  • the terminal device 102 may include electronic devices such as smart TVs, three-dimensional visual display devices, large-scale projection systems, multimedia playback devices, mobile phones, tablet computers, game consoles, and PCs (Personal Computers).
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data.
  • the server 103 is configured to provide background services for the terminal device 102, so that the terminal device 102 can process the data transmitted by the head-mounted display 101, thereby completing the identity information collection method provided by the present disclosure.
  • the server 103 can also generate corresponding control instructions according to the data processing results.
  • the control instructions can be sent to the terminal device 102 and/or the head-mounted display 101 respectively, to control the terminal device 102 and/or the head-mounted display 101.
  • server 103 may be a backend server.
  • the server 103 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
  • the server 103 provides background services for multiple terminals 102 at the same time.
  • a second operating environment is given, which may include a head-mounted display 101 and a terminal device 102 .
  • the head-mounted display 101 may include various types of devices as stated above.
  • the head-mounted display 101 does not have data processing capabilities; after acquiring the data, it can exchange data with the terminal device 102 through short-range communication technology.
  • the terminal device 102 may include various types of electronic devices stated above.
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data to complete the identity information collection method provided by the present disclosure.
  • the terminal device 102 can also generate corresponding control instructions according to the data processing results, and the control instructions can be sent to the head-mounted display 101 to control the head-mounted display 101.
  • a third operating environment is given, which only includes the head-mounted display 101 .
  • the head-mounted display 101 not only has data acquisition capabilities, but also has data processing capabilities, that is, it can call the program code through the processor in the head-mounted display 101 to realize the functions of the identity information collection method provided by the present disclosure.
  • the program code can be stored in a computer storage medium. It can be seen that the head-mounted display at least includes a processor and a storage medium.
  • FIG. 1B is a schematic structural diagram of a head-mounted display disclosed in an embodiment.
  • the head-mounted display may also include components such as a fixing strap not shown in FIG. 1B , and the fixing strap may fix the head-mounted display on the wearer's head.
  • the head-mounted display 101 may include two display screens 20 , respectively corresponding to the left eye and the right eye of the human body.
  • An under-screen camera 70 may be provided on the back of each display screen 20 .
  • the under-screen camera 70 is hidden on the back of the display screen 20 and can capture the scene in front of the display screen through the display screen 20 .
  • Head mounted display 101 may also include one or more visible light sources 40.
  • the visible light source 40 can be any element capable of emitting visible light, such as a single-color LED lamp. Each visible light source 40 may be disposed around the display screen 20 . And the visible light emitted by each visible light source can be at a certain angle with the center line of the under-screen camera 70 .
  • By arranging the visible light source 40, the illumination available to the under-screen camera 70 when capturing the iris image can be increased, and local over-brightness in the captured image can be avoided. Moreover, the brighter the light, the smaller the pupil and the larger the exposed iris, which helps increase the iris detail in the captured iris image so that richer identity information can be extracted. In addition, compared with an infrared light source, the visible light source 40 costs less, occupies less device volume, and causes less harm to the eyes.
  • FIG. 2A is an arrangement diagram of a visible light source disclosed in an embodiment.
  • a plurality of visible light sources 40 may be arranged around the periphery of the display screen 20 , and the plurality of visible light sources 40 may be arranged in an annular shape to form an annular belt.
  • visible light rays from various angles can be uniformly illuminated on the iris of the human eye, which is beneficial to obtaining high-quality iris images and reducing the occurrence of local bright spots.
  • FIG. 2B is an arrangement example diagram of another visible light source disclosed in an embodiment.
  • two visible light sources 40 may be arranged around the periphery of the display screen 20 , respectively located directly above and directly below the display screen 20 .
  • The following content describes the identity information collection method disclosed in the embodiments of the present disclosure. It should be noted that the following takes one of the display screens included in the head-mounted display as an example to describe how the head-mounted display controls the under-screen camera, visible light source and other components corresponding to that display screen, and how it processes the data collected by each component.
  • the control and data processing methods for the remaining display screens and related components are similar, and the details will not be described again.
  • Figure 3 is a schematic flowchart of an identity information collection method disclosed in one embodiment.
  • the identity information collection method can be applied to any electronic device with data processing capabilities, including but not limited to any of the aforementioned devices.
  • When the method is executed by the head-mounted display, it can be executed by a component with computing capabilities, such as the head-mounted display's central processing unit (CPU) or microcontroller unit (MCU).
  • the method may include the following steps:
  • when the head-mounted display is worn by the wearer, the display screen can face the wearer's eyes, the under-screen camera can perform a shooting operation, and the captured image can be used as the wearer's first iris image.
  • the iris image includes the iris and may also include eyelashes, eyelids and other parts, which can be determined based on the field of view of the under-screen camera and the distance between the wearer's eyes and the under-screen camera.
  • the display screen can be controlled to stop emitting light, to increase the probability that external light passes through the display screen and reaches the under-screen camera, thereby enhancing the effective light transmittance of the display screen and increasing the clarity of the captured iris image.
  • Step 320: based on the image position of the wearer's pupil in the first iris image, determine whether the relative position of the wearer's pupil relative to the under-screen camera matches the reference position; if yes, execute step 330; if not, end the process.
  • the electronic device may position the pupil in the first iris image to determine the image position of the pupil.
  • the electronic device can identify the image position of the pupil through methods such as feature matching or deep learning, which is not specifically limited here.
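As a stand-in for the feature-matching or deep-learning detectors mentioned above, here is the crudest possible pupil locator: the pupil is usually the darkest region of an iris image, so the centroid of sufficiently dark pixels approximates its image position. The threshold value is an assumption, not something the disclosure specifies.

```python
def locate_pupil(gray, threshold=40):
    """Return the approximate (x, y) image position of the pupil as the
    centroid of pixels darker than `threshold` (gray: nested lists of
    0-255 intensities), or None when no pixel is dark enough."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            if value < threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

A production system would refine this with ellipse fitting or a learned segmenter, since eyelashes and shadows can also fall below the threshold.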
  • the reference position is the position of the intersection point of the center line of the under-screen camera and the visible light rays of each visible light source relative to the under-screen camera.
  • FIG. 4 is an example diagram of a reference position disclosed in an embodiment.
  • the under-screen camera 410 is disposed behind the display screen 420, that is, the back of the display screen.
  • a lens 430 can also be disposed in front of the display screen 420.
  • the lens 430 can transmit light while isolating the wearer from the display screen 420.
  • the visible light source 440 may be disposed above the display screen 420, and there may be an included angle A between the visible light emitted by the visible light source 440 and the center line of the under-screen camera 410.
  • the center line of the under-screen camera 410 and the visible light emitted by the visible light source 440 can intersect at point D, and the position indicated by point D can be the aforementioned reference position.
  • the electronic device can obtain the internal parameters of the under-screen camera.
  • the internal parameters are the conversion parameters for converting between the camera coordinate system and the image coordinate system.
  • the head-mounted display can convert the image position of the pupil from the image coordinate system to the camera coordinate system according to the internal parameters of the under-screen camera to obtain the position of the pupil in the camera coordinate system.
  • this position is used as the relative position of the pupil relative to the under-screen camera.
  • the reference position can also be represented by coordinates in the camera coordinate system.
  • the head-mounted display can determine whether the relative position matches the reference position based on the coordinates of the reference position and the relative position in the camera coordinate system. For example, the head-mounted display can calculate the distance between the reference position and the relative position from their respective coordinates: if the distance exceeds the distance threshold, it can be confirmed that the relative position does not match the reference position; if the distance does not exceed the distance threshold, it can be confirmed that the relative position matches the reference position.
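The match test in this paragraph reduces to one Euclidean distance and one comparison. The default threshold below is an arbitrary illustration; the disclosure does not specify a value.

```python
import math

def position_matches(relative_pos, reference_pos, threshold=2.0):
    """True when the pupil's relative position lies within `threshold`
    (same unit as the coordinates, e.g. mm) of the reference position."""
    return math.dist(relative_pos, reference_pos) <= threshold
```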
  • the electronic device can also obtain the image position of the pupil in a standard image.
  • the standard image can be an iris image captured by an under-screen camera when the pupil is at the reference position.
  • the electronic device can determine whether the aforementioned relative position and the reference position match based on the image positions of the pupil in the first iris image and the standard image respectively. For example, if the image position of the pupil in the first iris image is the same as the image position of the pupil in the standard image, it can be confirmed that the aforementioned relative position matches the reference position; if the image positions are not the same, it can be confirmed that the aforementioned relative position does not match the reference position.
  • the head-mounted display can further determine whether the areas of the image regions corresponding to the pupil in the first iris image and the standard image are the same; if the area of the image region corresponding to the pupil in the first iris image is the same as the area of the corresponding image region in the standard image, it can be confirmed that the aforementioned relative position matches the reference position; otherwise, it can be confirmed that the aforementioned relative position does not match the reference position.
  • When it is determined in step 320 that the relative position of the pupil relative to the under-screen camera matches the reference position, the wearer's pupil may be in the best shooting position, and a clear iris image with few local bright spots can be captured. Therefore, the electronic device can control the under-screen camera to perform a shooting operation to obtain the second iris image.
  • the electronic device can extract the color, texture and other characteristics of the iris from the second iris image as the wearer's identity information.
  • In the above embodiments, the first iris image of the wearer can be captured through the under-screen camera of the head-mounted display, and, based on the image position of the pupil in the first iris image, it can be determined whether the current position of the pupil relative to the under-screen camera is a reasonable shooting position (the reference position). If so, the under-screen camera is used to capture a second iris image for identity information extraction. Since the second iris image is taken at a reasonable shooting position, its image quality is higher, and the identity information extracted from it is more accurate and can more precisely characterize the wearer's identity.
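The whole flow condenses to a few lines when every device-specific step is injected as a callable. All names here are hypothetical scaffolding, not the disclosure's API; only the structure (capture, locate, convert, match, re-capture, extract) follows the steps above.

```python
def collect_identity_info(capture, locate, to_camera_coords,
                          matches_reference, extract):
    """Sketch of the Figure-3 flow. Returns extracted identity info, or None
    when the pupil is not at the reference position (Figure 3 ends the flow
    there; the Figure-5 variant would correct the deviation instead)."""
    first_iris = capture()                    # first iris image
    image_pos = locate(first_iris)            # pupil position in the image
    relative_pos = to_camera_coords(image_pos)
    if not matches_reference(relative_pos):   # step 320
        return None
    second_iris = capture()                   # step 330: second iris image
    return extract(second_iris)               # identity information
```

With stubs for the hardware-dependent callables, the control flow can be exercised without a headset.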
  • the identity information obtained based on the identity information collection method disclosed in the embodiments of the present disclosure can be used to verify the identity of the wearer, log in to the account, construct the wearer's avatar, etc., and is not specifically limited.
  • the head-mounted display may further include a driving device, which is connected to the under-screen camera and is disposed on the back of the display screen.
  • the driving device can be used to control the movement of the under-screen camera.
  • the driving device can include a connecting rod and a motor.
  • the connecting rod connects the motor and the under-screen camera. When the motor rotates, it drives the connecting rod. The movement of the connecting rod drives the under-screen camera to move.
  • Figure 5 is a schematic flowchart of an identity information collection method disclosed in one embodiment. As shown in Figure 5, the method may include the following steps:
  • In step 520, based on the image position of the wearer's pupil in the first iris image, determine whether the relative position of the wearer's pupil relative to the under-screen camera matches the reference position; if yes, perform steps 530 to 540; if not, perform step 550.
  • For the implementation of steps 510 to 540, reference may be made to the foregoing embodiments; details are not repeated here.
  • the deviation direction may include a first direction parallel to the display screen and/or a second direction perpendicular to the display screen. That is, the relative position may deviate from the reference position in directions parallel or perpendicular to the display screen alone; or, the relative position may deviate from the reference position in directions parallel and perpendicular to the display screen simultaneously.
  • the wearer's pupils can be located at the reference position.
  • the wearer's posture of wearing the head-mounted display remains unchanged, but the direction of sight changes, it may cause the relative position of the pupil to deviate from the reference position in a direction parallel to the display screen.
  • the deviation direction may be any first direction parallel to the display screen, such as up, down, left, or right.
  • a change in the wearer's wearing posture may cause the relative position of the pupil to deviate from the reference position in a direction perpendicular to the display screen.
  • the deviation direction may be a second direction close to the display screen or away from the display screen.
  • if the head-mounted display determines in the aforementioned step 520 whether the two positions match based on the coordinates of the relative position and the reference position in the camera coordinate system, then in step 550 it can further determine the deviation direction of the relative position from the reference position according to the coordinates corresponding to the two positions.
  • if the head-mounted display determines in the aforementioned step 520 whether the relative position and the reference position match based on the image positions of the pupil in the first iris image and the standard image, then in step 550 the deviation direction of the relative position from the reference position can be further determined based on the image positions of the pupil in the two images, or based on the areas of the image regions corresponding to the pupil in the two images.
  • For example, if the image position of the pupil in the first iris image is located above its image position in the standard image, it can be determined that the deviation direction includes the first direction, and the first direction may be parallel to the display screen and pointing upward.
  • For example, if the area of the image region corresponding to the pupil in the first iris image is smaller than the area of the corresponding image region in the standard image, it can be determined that the deviation direction includes a second direction, and the second direction is a direction away from the display screen. If the area of the image region corresponding to the pupil in the first iris image is larger than the area of the corresponding image region in the standard image, the second direction can be determined to be a direction close to the display screen.
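The direction logic above can be sketched as follows. The in-plane (first) direction is read from the pupil's image-position offset, and the perpendicular (second) direction from the pupil's image-region area ratio (a smaller area suggests the pupil is farther from the screen). Thresholds, names, and the image-axis convention are illustrative assumptions.

```python
# Sketch of inferring the deviation direction from the standard image, as
# described above. Returns a list of direction labels; empty means no
# significant deviation was detected. All tolerances are assumed values.

def deviation_directions(first_pos, first_area, std_pos, std_area,
                         pos_tol_px=2.0, area_tol=0.05):
    directions = []
    dx = first_pos[0] - std_pos[0]
    dy = first_pos[1] - std_pos[1]
    # First direction: parallel to the display screen (up/down/left/right).
    if abs(dx) > pos_tol_px:
        directions.append("right" if dx > 0 else "left")
    if abs(dy) > pos_tol_px:
        directions.append("down" if dy > 0 else "up")  # image y grows downward
    # Second direction: perpendicular to the display screen.
    ratio = first_area / std_area
    if ratio < 1.0 - area_tol:
        directions.append("away_from_screen")   # pupil image region is smaller
    elif ratio > 1.0 + area_tol:
        directions.append("toward_screen")      # pupil image region is larger
    return directions
```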
  • when the deviation direction includes a first direction parallel to the display screen, determine the deviation distance of the relative position from the reference position in the first direction, and drive the under-screen camera through the driving device to move the deviation distance in the first direction.
  • the electronic device may further calculate a deviation distance of the relative position from the reference position in a direction parallel to the display screen.
  • the electronic device can calculate the distance component of the above two positions in the first direction based on the coordinates of the relative position and the reference position in the camera coordinate system, thereby obtaining the deviation distance of the relative position in the first direction.
  • the electronic device can also calculate the distance between the image positions of the pupil in the first iris image and in the standard image, and convert this image-position distance into a distance in the camera coordinate system according to the intrinsic parameters of the under-screen camera, to obtain the deviation distance of the relative position in the first direction.
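The pixel-to-metric conversion described above can be sketched with the standard pinhole model, x = (u − cx)·Z/fx and y = (v − cy)·Z/fy. The intrinsic values and the eye-to-camera depth Z used in the example are illustrative assumptions, not the patent's calibration.

```python
# Sketch of converting the pupil's pixel offset into a metric deviation
# distance using the under-screen camera's intrinsic parameters (fx, fy are
# the focal lengths in pixels; cx, cy the principal point). Assumed values.

def pixel_offset_to_metric(pupil_px, ref_px, fx, fy, cx, cy, depth_m):
    """Back-project two pixel positions at depth `depth_m` and return their
    (dx, dy) offset in camera coordinates, in metres."""
    x1 = (pupil_px[0] - cx) * depth_m / fx
    y1 = (pupil_px[1] - cy) * depth_m / fy
    x2 = (ref_px[0] - cx) * depth_m / fx
    y2 = (ref_px[1] - cy) * depth_m / fy
    return (x1 - x2, y1 - y2)

def deviation_distance(pupil_px, ref_px, fx, fy, cx, cy, depth_m):
    """Euclidean deviation distance in the plane parallel to the display."""
    dx, dy = pixel_offset_to_metric(pupil_px, ref_px, fx, fy, cx, cy, depth_m)
    return (dx * dx + dy * dy) ** 0.5
```

For example, with fx = fy = 800 px, principal point (320, 320), and an assumed eye-to-camera depth of 4 cm, an 80-pixel horizontal offset corresponds to a 4 mm deviation that the driving device would need to compensate.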
  • the electronic device may control the driving device to drive the under-screen camera to move the offset distance in the first direction.
  • the intersection of the center line of the under-screen camera with each visible light ray can be made to coincide with the pupil position, so that, without the wearer of the head-mounted display taking any action, the position of the under-screen camera is adjusted and the reference position changes, putting the pupil in a more reasonable shooting position to capture high-quality iris images.
  • the electronic device can recalibrate the center line position of the under-screen camera, and re-determine, according to the calibrated center line position, the position of the intersection of the center line with each visible light ray relative to the display screen, so as to update the reference position.
  • the updated reference position can be configured to match the next determined relative position of the iris, so that it can be more accurately determined whether the relative position of the pupil is in a reasonable shooting position.
  • when the deviation direction includes a second direction perpendicular to the display screen, movement prompt information prompting the wearer to move in the second direction is output through the head-mounted display.
  • the position of the under-screen camera in the direction perpendicular to the display screen is relatively fixed. Therefore, when the pupil deviates from the reference position in the direction perpendicular to the display screen, the electronic device can remind the wearer to move through the movement prompt information and, with the reference position unchanged, adjust the relative position of the pupil relative to the under-screen camera, so that the pupil is at a more reasonable shooting position and high-quality iris images can be obtained.
  • the head-mounted display can output the movement prompt information through one or more methods such as text, voice, or video; this is not specifically limited.
  • the movement prompt information can be a simulation animation, which can be used to show the movement trajectory of the wearer moving in the second direction.
  • the head-mounted display can output the simulation animation on the display screen, so that the wearer can intuitively see the second direction to move, which is beneficial to lowering the threshold for use.
  • the electronic device may perform step 560 when the deviation direction includes the first direction, and perform step 570 instead when the deviation direction includes the second direction.
  • the head-mounted display may also perform step 570 when the deviation direction includes the second direction, and perform step 560 instead when the deviation direction includes the first direction; this is not specifically limited.
  • the electronic device can directly return to step 530 to capture a second iris image for extracting identity information.
  • the electronic device can return to step 510 to re-determine the relative position of the iris relative to the under-screen camera, and re-determine whether the second iris image used for extracting identity information can be captured.
  • the electronic device can match the relative position of the wearer's pupil with respect to the under-screen camera against the reference position to improve the image quality of the iris image used for identity information extraction. Moreover, when the electronic device determines that the relative position of the pupil does not match the reference position, it can adjust the reference position by driving the under-screen camera to move through the driving device, or adjust the relative position of the pupil by outputting movement prompt information, so that the shooting quality of iris images can be improved through the movement of the under-screen camera or the movement of the wearer.
  • FIG. 6 is a schematic structural diagram of another head-mounted display disclosed in an embodiment.
  • the head-mounted display shown in FIG. 6 may be obtained by optimizing the head-mounted display shown in FIG. 1 .
  • the head-mounted display may also include:
  • the display screen 20 may output a digitally rendered virtual picture or a mixed picture blending virtual and real content.
  • the head-mounted display 101 may include multiple eye cameras, for example, seven eye cameras, namely the eye camera devices 31, 32, 33, 34, 35, 36 and 37, which are respectively provided above, below, to the left of, and to the right of each display screen 20.
  • the same eye camera 32 can be shared at the middle position of the two display screens 20 .
  • the head-mounted display 101 may also be provided with an eye tracking sensor 50 .
  • the eye-tracking sensor 50 may be disposed on one side of the display screen 20, for example, on the left or right side of the display screen 20.
  • Eye tracking sensors can be used to detect, for example, the direction of eye gaze.
  • the eye-tracking sensor may be an electrode-type eye-tracking sensor, whose electrodes detect muscle movements around the eyes to obtain the direction of the eye's gaze.
  • the head-mounted display 101 may also include a driving device 60 , which is connected to the under-screen camera and is disposed on the back of the display screen 20 .
  • FIG. 7 is a schematic flowchart of an identity information collection method disclosed in one embodiment. As shown in Figure 7, the method may include the following steps:
  • when it is detected that the wearer's sight direction is looking at the display screen, the wearer's pupil is more likely to be at the reference position. Capturing the first iris image at this moment and using it to verify the relative position of the pupil makes the relative position more likely to match the reference position, which helps reduce adjustments and capture high-quality iris images more quickly.
  • In step 740, based on the image position of the wearer's pupil in the first iris image, determine whether the relative position of the wearer's pupil relative to the under-screen camera matches the reference position; if yes, perform step 760; if not, perform step 750.
  • For the implementation of step 750 by the head-mounted display, reference may be made to the foregoing embodiments; details are not repeated here.
  • the image obtained after each camera performs a shooting operation can be used as a frame of eye image. It should be noted that since each eye camera is set at a different position on the head-mounted display, the eye images captured by different eye cameras may include part of the same eye parts and part of different eye parts.
  • the eye image captured by the eye camera 33 shown in Figure 6 may include the eyeball and the lower eyelid of the eye but not the upper eyelid; the eye image captured by the eye camera 31 shown in Figure 6 may include the eyeball and the upper eyelid of the eye but not the lower eyelid.
  • the eye image captured by the eye camera 32 may include the corners of the left eye and the right eye, and the bridge of the nose between the two eyes.
  • the electronic device may also perform step 770 to capture two or more frames of eye images when detecting that the wearer's line of sight is directed toward the display screen.
  • the image fusion model may be a trained neural network.
  • the image fusion model can be a Transformer structure, including an encoder and a decoder.
  • the encoder can be composed of a parallel two-layer network.
  • the first layer of the network can include a 3×3 convolutional layer for extracting depth features from the second iris image and each frame of eye image.
  • the second layer of the network can include 3 convolutional layers, and the output of each convolutional layer is cascaded to the input of the next layer.
  • the structure of the encoder can preserve the depth features and salient features in the second iris image and eye images of each frame as much as possible.
  • the decoder includes four cascaded convolutional layers.
  • the decoder can fuse the depth features and salient features extracted by the encoder from the second iris image and each frame of eye image, and restore the fused features into image pixels to obtain the face fusion image.
  • the image fusion model can also be a Pulse Coupled Neural Network (PCNN).
  • PCNN has characteristics that traditional neural networks do not have, such as pulse emission synchronization, variable thresholds, and wave formation and propagation, and can better fuse the second iris image with the multi-frame eye images.
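The fusion itself is performed by a trained model (the Transformer encoder/decoder or PCNN described above). As a stand-in to illustrate only the basic idea of combining aligned frames into one image, the following toy sketch fuses equally sized single-channel images by keeping, per pixel, the maximum value across frames. This is explicitly not the patent's model.

```python
# Toy stand-in for multi-frame fusion: pixel-wise maximum across aligned
# single-channel images (each image is a list of rows of numbers). A trained
# image fusion model would instead combine learned depth/salient features.

def fuse_frames(frames):
    """Fuse equally sized 2-D images by taking the per-pixel maximum."""
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            fused[i][j] = max(frame[i][j] for frame in frames)
    return fused
```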
  • the second iris image and each frame of the eye image obtained by original shooting can be directly input into the image fusion model.
  • the electronic device may also first calculate a registration matrix corresponding to each of the at least two frames of eye images with respect to the second iris image, taking the shooting perspective of the under-screen camera when capturing the second iris image as the reference perspective. Then, according to the registration matrix corresponding to each frame of eye image, each frame of eye image is converted to the reference perspective; the second iris image and the eye images converted to the reference perspective are then input to the image fusion model, which fuses them to obtain a face fusion image output by the image fusion model.
  • each frame of the eye image may include at least a part of the iris.
  • the electronic device may perform feature-point matching between each frame of eye image and the second iris image, and calculate a registration matrix between each frame of eye image and the second iris image based on the feature-point matching results.
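Once a 3×3 registration (homography) matrix H has been estimated from the matched feature points, converting an eye-image point into the reference perspective is a matrix multiply followed by a perspective divide. The matrix values in the test are illustrative assumptions; in practice H would be estimated robustly (e.g. with RANSAC) from the matches.

```python
# Sketch of applying a 3x3 registration matrix H to warp a 2-D point from an
# eye image into the under-screen camera's reference perspective.

def warp_point(H, pt):
    """Apply homography H (3x3 nested list) to a point (u, v); returns (x, y)."""
    u, v = pt
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # perspective divide into the reference view
```

Warping every pixel of an eye image with its registration matrix yields the frame "converted to the reference perspective" that is then fed to the fusion model.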
  • the face fusion image fuses the depth features and salient features of the second iris image and eye images of each frame.
  • the identity information extracted from the face fusion image can more accurately characterize the identity of the head-mounted display wearer; the extracted identity information is more accurate.
  • the electronic device can fuse the iris image captured by the under-screen camera and the eye images captured by the eye cameras with different shooting angles.
  • the fused face fusion image includes both the iris information provided by the iris image and the eye information, such as eyelashes, eyelids, eye periphery, and nose bridge, provided by eye images from different viewing angles. Therefore, the identity information extracted from the face fusion image can more accurately characterize the identity of the wearer.
  • FIG. 8 is a schematic structural diagram of an identity information collection device disclosed in an embodiment.
  • the identity information collection device can be applied to any of the aforementioned electronic devices.
  • the identity information collection device 800 may include: a photographing module 810 , a judgment module 820 and an extraction module 830 .
  • the photographing module 810 is configured to capture the first iris image of the wearer of the head-mounted display through the under-screen camera of the head-mounted display;
  • the determination module 820 is configured to determine, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil relative to the screen matches the reference position; the reference position is the position of the intersection of the center line of the under-screen camera with each visible light ray relative to the display screen;
  • the photographing module 810 is further configured to control the under-screen camera to capture the second iris image after the determination module 820 determines that the relative position matches the reference position;
  • the extraction module 830 is configured to extract the wearer's identity information from the second iris image.
  • the head-mounted display also includes: a driving device; the driving device is provided on the back of the display screen and connected to the under-screen camera, and the driving device is configured to control the movement of the under-screen camera; the identity information collection device 800 may further include: an identification module and a driving module.
  • the identification module may be configured to identify the deviation direction of the relative position from the reference position when the judgment module 820 determines that the relative position does not match the reference position.
  • the driving module may be configured to determine the deviation distance of the relative position from the reference position in the first direction when the deviation direction includes a first direction parallel to the display screen, and to drive, through the driving device, the under-screen camera to move the deviation distance in the first direction.
  • the identity information collection device 800 may include: a calibration module.
  • the calibration module can be configured to recalibrate the centerline position of the under-screen camera; and, based on the calibrated centerline position, re-determine the position of the intersection of the centerline of the under-screen camera and each visible light ray relative to the display screen to compare the reference The location is updated.
  • the identity information collection device 800 may include: an identification module and an output module.
  • the identification module may be configured to identify the deviation direction of the relative position from the reference position when the judgment module 820 determines that the relative position does not match the reference position.
  • the output module may be configured to output movement prompt information when the deviation direction includes a second direction perpendicular to the display screen; the movement prompt information may be configured to prompt the wearer to move in the second direction.
  • the head-mounted display further includes: at least two eye cameras; at least two eye cameras are arranged around the display screen, and the at least two eye cameras correspond to different shooting angles;
  • the photographing module 810 is also configured to capture the wearer's eye regions through the at least two eye cameras to obtain at least two frames of eye images;
  • the extraction module 830 may include: a fusion unit and an extraction unit.
  • the fusion unit may be configured to fuse the second iris image and at least two frames of eye images through the image fusion model to obtain a face fusion image output by the image fusion model;
  • the extraction unit may be configured to extract the identity information of the wearer from the face fusion image.
  • the extraction module 830 may also include: a conversion unit.
  • the conversion unit may be configured to calculate a registration matrix corresponding to each frame of the at least two frames of eye images with respect to the second iris image, and, according to the registration matrix corresponding to each frame of eye image, convert each frame of eye image to a reference perspective; the reference perspective is the shooting perspective when the under-screen camera captures the second iris image.
  • the fusion unit may also be configured to fuse the second iris image and the eye images of each frame converted to the reference angle through the image fusion model to obtain a face fusion image output by the image fusion model.
  • the first iris image of the wearer can be captured by the under-screen camera, and whether the current pupil is at a reasonable shooting position (the reference position) relative to the under-screen camera can be determined based on the image position of the pupil in the first iris image. If so, the head-mounted display then uses the under-screen camera to capture a second iris image for extracting identity information. Since the second iris image is taken at a reasonable shooting position, its image quality is higher, and the identity information extracted from it is more accurate and can more accurately characterize the identity of the wearer.
  • Each module in the above-mentioned identity information collection device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • an electronic device is provided.
  • the electronic device may be a terminal device, and its internal structure diagram may be as shown in FIG. 9 .
  • the electronic device includes a processor, memory, communication interface, database, display screen and input device connected through a system bus.
  • the processor of the electronic device provides computing and control capabilities.
  • the memory of the electronic device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions.
  • The internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium.
  • the communication interface of the electronic device is configured to communicate with an external terminal in a wired or wireless manner.
  • the wireless mode can be implemented through Wi-Fi, a carrier network, near-field communication (NFC), or other technologies.
  • when the computer-readable instructions are executed by the processor, the identity information collection method provided in the above embodiments is implemented.
  • the display screen of the electronic device may be a liquid crystal display or an electronic ink display.
  • the input device of the electronic device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, trackpad, or mouse.
  • FIG. 9 is only a block diagram of a partial structure related to the disclosed solution and does not constitute a limitation on the electronic devices to which the disclosed solution is applied. A specific electronic device may include more or fewer parts than shown, combine certain parts, or have a different arrangement of parts.
  • the identity information collection device provided by the present disclosure can be implemented in the form of computer-readable instructions, which can be run on the electronic device shown in FIG. 9.
  • the memory of the electronic device can store various program modules that make up the electronic device.
  • the computer-readable instructions composed of each program module cause the processor to execute the steps in the identity information collection method of each embodiment of the present disclosure described in this specification.
  • an electronic device includes a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the processor, the one or more processors execute the steps of the identity information collection method described in the above method embodiments.
  • the electronic device provided in this embodiment can implement the identity information collection method provided in the above method embodiment.
  • the implementation principle and technical effect are similar and will not be described again here.
  • One or more non-volatile storage media storing computer-readable instructions are provided; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of any of the above identity information collection methods.
  • the computer-readable instructions stored on the computer-readable storage medium provided by this embodiment can implement the identity information collection method provided by the above-mentioned method embodiments.
  • the implementation principles and technical effects are similar and will not be described again here.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
  • the identity information collection method provided by the present disclosure can capture iris images with higher image quality.
  • the identity information extracted from the iris images with higher image quality is more accurate and can more accurately characterize the identity of the wearer.


Abstract

Embodiments of the present disclosure provide an identity information collection method and apparatus, an electronic device, and a storage medium. The method includes: capturing a first iris image of the wearer through an under-screen camera of a head-mounted display, where the head-mounted display may further include a display screen and visible light sources for emitting visible light rays, the light sources being arranged around the display screen; determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil with respect to the under-screen camera matches a reference position, the reference position being the position, relative to the under-screen camera, of the intersections of the center line of the under-screen camera with the visible light rays; and, if they match, controlling the under-screen camera to capture a second iris image and extracting the wearer's identity information from the second iris image. Implementing the embodiments of the present disclosure makes it possible to collect higher-quality iris images and improve the accuracy of the identity information extracted from them.

Description

Identity information collection method and apparatus, electronic device, and storage medium
Cross-reference to related applications
This disclosure claims priority to Chinese patent application No. 202210900800.3, filed with the China National Intellectual Property Administration on July 28, 2022 and entitled "Identity information collection method, apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to an identity information collection method and apparatus, an electronic device, and a storage medium.
Background
With the development of virtual reality (VR) and mixed reality (MR) technologies, head-mounted displays are widely used in VR and MR application scenarios. After putting on a head-mounted display, a user can view various content through its display screen and receive visual information input, achieving an immersive user experience.
In some VR or MR application scenarios, the user's identity information needs to be collected. For example, the user's identity information may be used when authenticating a user logging into a VR game, or when creating a virtual character resembling the user's appearance in a VR scene. At present, some head-mounted displays can extract user identity information from the user's face, but the accuracy remains insufficient.
Summary
(1) Technical problem to be solved
In the prior art, some head-mounted displays can extract user identity information from the user's face, but the accuracy remains insufficient.
(2) Technical solution
According to various embodiments of the present disclosure, an identity information collection method and apparatus, an electronic device, and a storage medium are provided.
An identity information collection method, the method including:
capturing a first iris image of a wearer of a head-mounted display through an under-screen camera of the head-mounted display; the head-mounted display includes a display screen, the under-screen camera, and visible light sources, the visible light sources being configured to emit visible light rays; the under-screen camera is disposed on the back of the display screen, and the visible light sources are arranged around the display screen;
determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil with respect to the under-screen camera matches a reference position; the reference position is the position, relative to the under-screen camera, of the intersections of the center line of the under-screen camera with the visible light rays;
if the relative position matches the reference position, controlling the under-screen camera to capture a second iris image; and
extracting the wearer's identity information from the second iris image.
As an optional implementation of the embodiments of the present disclosure, the head-mounted display further includes a driving device; the driving device is disposed on the back of the display screen, is connected to the under-screen camera, and is used to control the movement of the under-screen camera; the method further includes: if the relative position does not match the reference position, identifying the deviation direction in which the relative position deviates from the reference position; when the deviation direction includes a first direction parallel to the display screen, determining the deviation distance by which the relative position deviates from the reference position in the first direction; and driving, through the driving device, the under-screen camera to move the deviation distance in the first direction.
As an optional implementation of the embodiments of the present disclosure, the method further includes: recalibrating the center line position of the under-screen camera; and re-determining, based on the calibrated center line position, the position of the intersections of the center line of the under-screen camera with the visible light rays relative to the display screen, so as to update the reference position.
As an optional implementation of the embodiments of the present disclosure, the identifying the deviation direction in which the relative position deviates from the reference position includes: obtaining intrinsic parameters of the under-screen camera; converting, according to the intrinsic parameters, the image position of the wearer's pupil in the first iris image from the image coordinate system to the camera coordinate system, to obtain the position of the pupil in the camera coordinate system as the relative position of the pupil with respect to the under-screen camera; and calculating the deviation direction of the relative position from the reference position based on the respective coordinates of the relative position and the reference position in the camera coordinate system.
As an optional implementation of the embodiments of the present disclosure, the identifying the deviation direction in which the relative position deviates from the reference position includes: obtaining the image position of the wearer's pupil in a standard image, the standard image being an iris image captured by the under-screen camera when the pupil is at the reference position; and determining the deviation direction based on the image positions of the pupil in the first iris image and the standard image respectively; or determining the deviation direction based on the areas of the image regions corresponding to the pupil in the first iris image and the standard image respectively.
As an optional implementation of the embodiments of the present disclosure, the method further includes: if the relative position does not match the reference position, identifying the deviation direction in which the relative position deviates from the reference position; and, when the deviation direction includes a second direction perpendicular to the display screen, outputting movement prompt information through the head-mounted display, the movement prompt information being used to prompt the wearer to move in the second direction.
As an optional implementation of the embodiments of the present disclosure, the head-mounted display further includes at least two eye cameras; the at least two eye cameras are arranged around the display screen and correspond to different shooting angles; the method further includes: capturing the wearer's eye regions through the at least two eye cameras respectively to obtain at least two frames of eye images; and the extracting the wearer's identity information from the second iris image includes: fusing the second iris image and the at least two frames of eye images through an image fusion model to obtain a face fusion image output by the image fusion model; and extracting the wearer's identity information from the face fusion image.
As an optional implementation of the embodiments of the present disclosure, the method further includes: calculating a registration matrix corresponding to each frame of the at least two frames of eye images with respect to the second iris image; and converting each frame of eye image to a reference perspective according to its corresponding registration matrix, the reference perspective being the shooting perspective of the under-screen camera when capturing the second iris image; and the fusing the second iris image and the at least two frames of eye images through the image fusion model includes: fusing, through the image fusion model, the second iris image and the frames of eye images converted to the reference perspective, to obtain the face fusion image output by the image fusion model.
As an optional implementation of the embodiments of the present disclosure, the head-mounted display further includes an eye-tracking sensor disposed on one side of the display screen; the capturing the first iris image of the wearer of the head-mounted display through the under-screen camera includes: detecting the wearer's gaze direction through the eye-tracking sensor; and capturing the first iris image of the wearer through the under-screen camera when it is detected that the wearer's gaze direction is toward the display screen.
As an optional implementation of the embodiments of the present disclosure, the method further includes: controlling the display screen to stop emitting light while the under-screen camera captures an iris image.
As an optional implementation of the embodiments of the present disclosure, the determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil with respect to the under-screen camera matches the reference position includes: obtaining intrinsic parameters of the under-screen camera; converting, according to the intrinsic parameters, the image position of the wearer's pupil in the first iris image from the image coordinate system to the camera coordinate system, to obtain the position of the pupil in the camera coordinate system as the relative position of the pupil with respect to the under-screen camera; calculating the distance between the relative position and the reference position based on their respective coordinates in the camera coordinate system; determining that the relative position does not match the reference position if the distance exceeds a distance threshold; and determining that the relative position matches the reference position if the distance does not exceed the distance threshold.
As an optional implementation of the embodiments of the present disclosure, the determining, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil with respect to the under-screen camera matches the reference position includes: obtaining the image position of the wearer's pupil in a standard image, the standard image being an iris image captured by the under-screen camera when the pupil is at the reference position; determining that the relative position matches the reference position if the image position of the pupil in the first iris image is the same as the image position of the pupil in the standard image; and determining that the relative position does not match the reference position if the two image positions are not the same.
As an optional implementation of the embodiments of the present disclosure, the method further includes: if the image position of the pupil in the first iris image is the same as the image position of the pupil in the standard image, determining whether the areas of the image regions corresponding to the pupil in the first iris image and the standard image are the same; determining that the relative position matches the reference position if the areas are the same; and determining that the relative position does not match the reference position if the areas are not the same.
An identity information collection apparatus, the apparatus including:
a photographing module configured to capture a first iris image of a wearer of a head-mounted display through an under-screen camera of the head-mounted display; the head-mounted display includes a display screen, the under-screen camera, and visible light sources for emitting visible light rays; the under-screen camera is disposed on the back of the display screen, and the visible light sources are arranged around the display screen;
a determination module configured to determine, based on the image position of the wearer's pupil in the first iris image, whether the relative position of the wearer's pupil with respect to the screen matches a reference position; the reference position is the position of the intersections of the center line of the under-screen camera with the visible light rays relative to the display screen;
the photographing module being further configured to control the under-screen camera to capture a second iris image after the determination module determines that the relative position matches the reference position; and
an extraction module configured to extract the wearer's identity information from the second iris image.
An electronic device includes a memory and one or more processors, the memory being configured to store computer-readable instructions; when executed by the processor, the computer-readable instructions cause the one or more processors to perform the steps of any of the identity information collection methods described above.
One or more non-volatile storage media storing computer-readable instructions; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of any of the identity information collection methods described above.
Other features and advantages of the present disclosure will be set forth in the following description and will in part become apparent from the description or be understood by practicing the present disclosure. The objectives and other advantages of the present disclosure are realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings; the details of one or more embodiments of the present disclosure are set forth in the drawings and the description below.
To make the above objectives, features, and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below with reference to the accompanying drawings.
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用来解释本公开的原理。
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A为本公开一个或多个实施例提供的身份信息采集方法的应用场景图;
图1B为本公开一个或多个实施例提供的头戴式显示器的结构示意图;
图2A为本公开一个或多个实施例提供的一种可见光光源的排布示例图;
图2B为本公开一个或多个实施例提供的另一种可见光光源的排布示例图;
图3为本公开一个或多个实施例提供的一种身份信息采集方法的方法流程示意图;
图4为本公开一个或多个实施例提供的参考位置的示例图;
图5为本公开一个或多个实施例提供的另一种身份信息采集方法的方法流程示意图;
图6为本公开一个或多个实施例提供的另一种头戴式显示器的结构示意图;
图7为本公开一个或多个实施例提供的另一种身份信息采集方法的方法流程示意图;
图8为本公开一个或多个实施例提供的身份信息采集装置的结构示意图;
图9为本公开一个或多个实施例提供的电子设备的结构示意图。
具体实施方式
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
本公开的说明书和权利要求书中的术语“第一”和“第二”等是用来区别不同的对象,而不是用来描述对象的特定顺序。例如,第一摄像头和第二摄像头是为了区别不同的摄像头,而不是为了描述摄像头的特定顺序。
在本公开实施例中,用“示例性的”或者“例如”等词来表示作例子、例证或说明。本公开实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。此外,在本公开实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。
本公开实施例公开了一种身份信息采集方法、装置、电子设备以及存储介质,能够采集到更加准确的虹膜图像,提高从虹膜图像中提取出的身份信息的准确性。以下分别进行详细说明。
请参阅图1A,图1A是一个实施例公开的一种身份信息采集方法的应用场景示意图。如图1A所示,给出第一种运行环境,该运行环境可以包括头戴式显示器101、终端设备102和服务器103。
用户可以佩戴头戴式显示器101以使头戴式显示器101获取数据。这里,头戴式显示器101不具备数据处理能力,其在获取到数据后,可以与终端102之间通过近距离通信技术传输数据。
终端设备102可以包括智能电视、三维视觉显示设备、大型投影系统、多媒体播放设备、手机、平板电脑、游戏主机、PC(Personal Computer,个人计算机)等电子设备。终端设备102可以接收头戴式显示器101传输的数据,对数据进行处理。
服务器103将配置为为终端102提供后台服务,以供终端102对接收到的头戴式显示器101传输的数据进行处理,从而完成本公开所提供的身份信息采集方法。可选的,服务器103还可根据数据处理结果生成相对应的控制指令,控制指令可分别发送至终端102和/或头戴式显示器101,以对终端102和/或头戴式显示器101进行控制。例如,服务器103可以是后台服务器。服务器103可以是一台服务器,也可以是由多台服务器组成的服务器集群,或者是一个云计算服务中心。可选地,服务器103同时为多个终端102提供后台服务。
再给出第二种运行环境,该运行环境可以包括头戴式显示器101和终端设备102。
这里,头戴式显示器101可以包括上述所陈述的各种类型的设备,头戴式显示器101不具备数据处理能力,其在获取到数据后,可以与终端102之间通过近距离通信技术传输数据。
终端设备102可以包括上述所陈述的各种类型的电子设备。终端设备102可以接收头戴式显示器101传输的数据,对数据进行处理,以完成本公开所提供的身份信息采集方法。可选的,终端102还可根据数据处理结果生成相对应的控制指令,控制指令可分别发送至头戴式显示器101,以对头戴式显示器101进行控制。
再给出第三种运行环境,该运行环境仅包括头戴式显示器101。这里,头戴式显示器101不仅具有数据获取能力,还具有数据处理能力,即能够通过头戴式显示器101中的处理器调用程序代码来实现本公开提供的身份信息采集方法所实现的功能,当然程序代码可以保存在计算机存储介质中,可见,该头戴式显示器至少包括处理器和存储介质。
请参阅图1B,图1B是一个实施例公开的一种头戴式显示器的结构示意图。需要说明的是,头戴式显示器还可包括图1B中未示出的固定带等部件,固定带可将头戴式显示器固定在佩戴者头部。
如图1B所示,该头戴式显示器101可包括两个显示屏幕20,分别与人体的左眼和右眼对应。每个显示屏幕20的背面可设置有一个屏下摄像头70。屏下摄像头70隐藏在显示屏幕20的背面,可以透过显示屏幕20拍摄到显示屏幕前方的景象。
头戴式显示器101还可包括一个或多个可见光光源40。可见光光源40可以是任意一种能够发射可见光的元件,如单色LED灯粒。各个可见光光源40可围绕显示屏幕20设置。并且每个可见光光源发射的可见光线可与屏下摄像头70的中心线呈一定角度。
通过设置可见光光源40,可以提高屏下摄像头70拍摄虹膜图像时的光照度,避免拍摄的图像局部过亮。并且,光线越亮,瞳孔越小,虹膜越大,有利于增加拍摄到的虹膜图像中的虹膜细节,以便于提取出更丰富的身份信息。此外,相对于红外光而言,可见光光源40的成本较低、占用的设备体积较小、对眼睛的伤害也较小。
请参阅图2A,图2A是一个实施例公开的一种可见光光源的排布示例图。如图2A所示,显示屏幕20的周边可围绕设置有多个可见光光源40,多个可见光光源40可呈环状排布,形成一个环带。当多个可见光光源40成环状分布时,可以使得各个角度的可见光光线均匀地照射在人眼的虹膜上,有利于得到高质量的虹膜图像,有利于减少局部亮斑的产生。
请参阅图2B,图2B是一个实施例公开的另一种可见光光源的排布示例图。如图2B所示,显示屏幕20的周边可围绕设置有两个可见光光源40,分别设置在显示屏幕20的正上方和正下方。
基于前述的头戴式显示器,以下内容对本公开实施例公开的身份信息采集方法进行说明。需要说明的是,以下内容以头戴显示器包括的其中一个显示屏幕为例,描述头戴式显示器对与该显示屏幕对应的屏下摄像头、可见光光源等部件进行控制,以及处理各个部件采集到的数据的方法。当头戴式显示器包括两个或以上的显示屏幕时,其余显示屏幕及相关部件的控制和数据处理方法可参考以下内容,具体不再赘述。
请参阅图3,图3是一个实施例公开的一种身份信息采集方法的方法流程示意图,该身份信息采集方法可应用于具有数据处理能力的任意一种电子设备,包括但不限于前述的任意一种头戴式显示器、与头戴式显示器通信连接的终端设备,或者为与头戴式显示器通信连接的终端设备提供后台服务的后台服务器。其中,该方法由头戴式显示器执行时可由头戴式显示器的中央处理器(Central Processing Unit,CPU)、微控制器(Micro Control Unit,MCU)等具有计算能力的器件执行。如图3所示,该方法可包括以下步骤:
310、通过屏下摄像头拍摄头戴式显示器佩戴者的第一虹膜图像。
在本公开实施例中,当头戴式显示器被佩戴者佩戴时,显示屏幕可正对佩戴者的眼睛,屏下摄像头可执行拍摄操作,拍摄得到的图像可以作为佩戴者的第一虹膜图像。需要说明的是,虹膜图像中包括虹膜,也可能包括睫毛、眼睑等部位,可根据屏下摄像头的视野范围,以及佩戴者的眼睛与屏下摄像头之间的距离确定。
可选的,在屏下摄像头拍摄虹膜图像时,可控制显示屏幕停止发光,以增加外界光线透过显示屏幕到达屏下摄像头的概率,增强显示屏幕的透光性能,从而增加拍摄到的虹膜图像的清晰度。
320、根据佩戴者的瞳孔在第一虹膜图像中的图像位置,判断佩戴者的瞳孔相对于屏下摄像头的相对位置与参考位置是否匹配;若是,则执行步骤330;若否,则结束本流程。
在本公开实施例中,电子设备可对第一虹膜图像中的瞳孔进行定位,以确定瞳孔的图像位置。其中,电子设备可通过特征匹配或者深度学习等方法识别瞳孔的图像位置,具体不做限定。
参考位置是屏下摄像头的中心线与各个可见光光源的可见光线的交点相对于屏下摄像头的位置。当人眼的瞳孔位于参考位置时,可见光线在人眼上的反光点落在瞳孔位置,有利于减少虹膜区域出现的亮斑。并且,当人眼的瞳孔位于参考位置时,人眼处于屏下摄像头的焦距范围内,有利于拍摄到清晰的虹膜图像。
示例性的,请参阅图4,图4是一个实施例公开的一种参考位置的示例图。如图4所示,屏下摄像头410设置在显示屏幕420的后方,即显示屏幕的背面,显示屏幕420的前方还可设置有透镜430,透镜430可透光,并将佩戴者与显示屏幕420进行隔离。可见光光源440可设置在显示屏幕420的上方,可见光光源440发射的可见光线与屏下摄像头410的中心线之间可存在一个夹角A。如图4所示,屏下摄像头410的中心线与可见光光源440发射的可见光线可相交于点D,点D所示出的位置可为前述的参考位置。
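作为一个示意性的几何换算(假设可见光线与屏下摄像头中心线共面,且光源到中心线的垂直偏移 r 与夹角 A 已知,这些参数均为示例性假设,并非本公开限定的实现方式),交点 D 沿中心线方向到光源所在平面的距离可按 r/tan(A) 估算:

```python
import math

def reference_point_distance(offset_r, angle_a_deg):
    """估算屏下摄像头中心线与某条可见光线的交点(点D)
    沿中心线方向到光源所在平面的距离。

    offset_r:     可见光光源到摄像头中心线的垂直距离(示例性假设值)
    angle_a_deg:  可见光线与中心线的夹角A,单位为度
    """
    # 光线向中心线汇聚,夹角为A时,轴向距离 = r / tan(A)
    return offset_r / math.tan(math.radians(angle_a_deg))

# 示例:光源偏移20mm、夹角45°时,交点距光源平面20mm
d = reference_point_distance(20.0, 45.0)
```

由该换算可见,夹角 A 越小,交点 D 离光源平面越远,参考位置也随之后移。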
作为一种可选的实施方式,电子设备可获取到屏下摄像头的内参,内参是相机坐标系与图像坐标系进行转换的转换参数。
因此,在识别到瞳孔在第一虹膜图像中的图像位置之后,头戴式显示器可根据屏下摄像头的内参,将瞳孔的图像位置从图像坐标系转换到相机坐标系,得到瞳孔在相机坐标系下的位置作为瞳孔相对于屏下摄像头的相对位置。
参考位置也可通过相机坐标系下的坐标进行表示,头戴式显示器可根据参考位置和相对位置在相机坐标系下的坐标判断相对位置与参考位置是否匹配。例如,头戴式显示器可根据各自的坐标计算参考位置和相对位置之间的距离,若两个位置之间的距离超过距离阈值,则可确认相对位置与参考位置不匹配;若两个位置之间的距离未超过距离阈值,则可确认相对位置与参考位置匹配。
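上述由内参做坐标系转换并按距离阈值判断匹配的过程,可用如下针孔相机模型的简化示意表示(其中焦距 fx、fy,主点 cx、cy 以及瞳孔深度 depth 均为示例性假设值,实际内参需由标定得到):

```python
import math

def pixel_to_camera(u, v, fx, fy, cx, cy, depth):
    """根据内参把瞳孔的图像坐标 (u, v) 反投影到相机坐标系。
    depth 为瞳孔到屏下摄像头的估计距离(示例性假设值)。"""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def position_matches(relative_pos, reference_pos, threshold):
    """计算相对位置与参考位置在相机坐标系下的欧氏距离,
    未超过距离阈值则判定为匹配。"""
    dist = math.dist(relative_pos, reference_pos)
    return dist <= threshold

# 示例:瞳孔像素位置 (330, 250),参考位置取光轴上深度40mm处
pupil_cam = pixel_to_camera(330.0, 250.0, fx=600.0, fy=600.0,
                            cx=320.0, cy=240.0, depth=40.0)
matched = position_matches(pupil_cam, (0.0, 0.0, 40.0), threshold=2.0)
```

实际实现中 depth 可由佩戴结构(如透镜到眼睛的固定距离)近似给出,阈值则按拍摄清晰度要求选取。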
作为另一种可选的实施方式,电子设备也可以获取瞳孔在标准图像中的图像位置,标准图像可以是瞳孔处于参考位置时屏下摄像头拍摄到的虹膜图像。
电子设备在识别到瞳孔在第一虹膜图像中的图像位置之后,可根据瞳孔分别在第一虹膜图像和标准图像中的图像位置判断前述的相对位置和参考位置是否匹配。例如,若瞳孔在第一虹膜图像中的图像位置与瞳孔在标准图像中的图像位置相同,则可确认前述的相对位置与参考位置匹配;若瞳孔在第一虹膜图像中的图像位置与瞳孔在标准图像中的图像位置不相同,则可确认前述的相对位置与参考位置不匹配。
可选的,若瞳孔在第一虹膜图像和标准图像中的图像位置相同,在确认相对位置与参考位置匹配之前,头戴式显示器还可进一步判断瞳孔在第一虹膜图像和标准图像中分别对应的图像区域的面积是否相同;若瞳孔在第一虹膜图像中对应的图像区域面积与标准图像中对应的图像区域面积相同,则可以确认前述的相对位置与参考位置匹配;否则,则可确认前述的相对位置与参考位置不匹配。
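上述基于标准图像的匹配判断可示意如下(pos_tol、area_tol 为便于工程实现引入的假设容差,原文为严格相同,即容差为 0):

```python
def pupil_matches_standard(pos_first, pos_standard,
                           area_first, area_standard,
                           pos_tol=0, area_tol=0):
    """先比较瞳孔在第一虹膜图像与标准图像中的图像位置,
    位置相同时再比较两图中瞳孔对应图像区域的面积,
    两者均相同才判定相对位置与参考位置匹配。"""
    same_pos = (abs(pos_first[0] - pos_standard[0]) <= pos_tol
                and abs(pos_first[1] - pos_standard[1]) <= pos_tol)
    if not same_pos:
        return False
    return abs(area_first - area_standard) <= area_tol
```

位置比较在前、面积比较在后,对应原文“位置相同再进一步判断面积”的两级判断顺序。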
330、控制屏下摄像头拍摄第二虹膜图像。
当前述步骤320中判断出瞳孔相对于屏下摄像头的相对位置与参考位置相匹配时,说明佩戴者的瞳孔当前可能处于最佳的拍摄位置,可以拍摄到清晰且局部亮斑较小的虹膜图像。因此,电子设备可控制屏下摄像头执行拍摄操作,以得到第二虹膜图像。
340、从第二虹膜图像中提取佩戴者的身份信息。
在本公开实施例中,电子设备可从第二虹膜图像中提取虹膜的颜色、纹理等特征作为佩戴者的身份信息。
可见,在前述实施例中,可先通过头戴式显示器的屏下摄像头拍摄佩戴者的第一虹膜图像,并根据第一虹膜图像中瞳孔的图像位置判断当前瞳孔相对于屏下摄像头的相对位置是否处于较为合理的拍摄位置(参考位置)。若是,则再通过屏下摄像头拍摄用于进行身份信息提取的第二虹膜图像。由于第二虹膜图像是在合理的拍摄位置上拍摄得到的,因此第二虹膜图像的图像质量较高,从第二虹膜图像中提取出的身份信息准确性更高,可以更准确地表征佩戴者的身份。
基于本公开实施例公开的身份信息采集方法得到的身份信息,可用于对佩戴者进行身份校验、账号登录、佩戴者虚拟形象的构建等,具体不做限定。
在一些实施例中,头戴式显示器还可包括驱动装置,驱动装置与屏下摄像头连接,设置在显示屏幕的背面。驱动装置可用于控制屏下摄像头移动,例如驱动装置可包括连接杆和马达,连接杆连接马达和屏下摄像头,马达转动时带动连接杆,连接杆的移动带动屏下摄像头移动。
基于包括驱动装置的头戴式显示器,请参阅图5,图5是一个实施例公开的一种身份信息采集方法的方法流程示意图。如图5所示,该方法可以包括以下步骤:
510、通过屏下摄像头拍摄头戴式显示器佩戴者的第一虹膜图像。
520、根据佩戴者的瞳孔在第一虹膜图像中的图像位置,判断佩戴者的瞳孔相对于屏下摄像头的相对位置与参考位置是否匹配;若是,则执行步骤530-步骤540;若否,则执行步骤550。
530、控制屏下摄像头拍摄第二虹膜图像。
540、从第二虹膜图像中提取佩戴者的身份信息。
其中,步骤510-步骤540的实施方式可参见前述实施例,在此不再赘述。
550、识别瞳孔相对于屏下摄像头的相对位置偏离参考位置的偏离方向。
在本公开实施例中,偏离方向可以包括平行于显示屏幕的第一方向和/或垂直于显示屏幕的第二方向。即,相对位置可能单独在平行或垂直于显示屏幕的方向上偏离参考位置;或者,相对位置也可能同时在平行和垂直于显示屏幕的方向上偏离参考位置。
示例性的,假设佩戴者以合理的姿势佩戴头戴式显示器,且佩戴者的视线方向注视显示屏幕时,佩戴者的瞳孔可位于参考位置。
若佩戴者佩戴头戴式显示器的姿势不变,但视线方向发生了改变,则可能会导致瞳孔的相对位置在平行于显示屏幕的方向上偏离参考位置,偏离方向可以是平行于显示屏幕的上、下、左、右等任意一种第一方向。
若佩戴者仍注视显示屏幕,但佩戴头戴式显示器的姿势发生了改变,例如把头戴式显示器往远离或靠近身体的方向拉扯,则可能会导致瞳孔的相对位置在垂直于显示屏幕的方向上偏离参考位置,偏离方向可以是靠近显示屏幕或远离显示屏幕的第二方向。
作为一种可选的实施方式,若头戴式显示器在前述的步骤520中可通过相对位置和参考位置在相机坐标系中的坐标判断上述两个位置是否匹配,则在步骤550中可进一步根据上述两个位置分别对应的坐标判断相对位置偏离参考位置的偏离方向。
作为另一种可选的实施方式,若头戴式显示器在前述的步骤520中通过瞳孔在第一虹膜图像和标准图像中的图像位置判断相对位置和参考位置是否匹配,则在步骤550中可进一步根据瞳孔在上述两个图像中的图像位置,或者根据瞳孔在上述两个图像中分别对应的图像区域面积判断相对位置偏离参考位置的偏离方向。
其中,若瞳孔在第一虹膜图像和标准图像中的图像位置不相同,则可以确定偏离方向包括第一方向。例如,若瞳孔在第一虹膜图像中的图像位置位于其在标准图像中的图像位置的上方,则可以确定偏离方向包括第一方向,且第一方向可以是平行于显示屏幕的上方。
其中,若瞳孔在第一虹膜图像和标准图像中分别对应的图像区域的面积不同,则可以确定偏离方向包括第二方向。例如,若瞳孔在第一虹膜图像中对应的图像区域面积小于标准图像中对应的图像区域面积,则可以确定偏离方向包括第二方向,且第二方向为远离显示屏幕的方向。若瞳孔在第一虹膜图像中对应的图像区域面积大于标准图像中对应的图像区域面积,则可以确定第二方向为靠近显示屏幕的方向。
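上述根据图像位置差与图像区域面积差识别偏离方向的逻辑,可示意如下(方向命名与坐标系约定为示例性假设,其中约定图像坐标系 y 轴向下):

```python
def deviation_direction(pos_first, pos_standard, area_first, area_standard):
    """根据瞳孔在第一虹膜图像与标准图像中的图像位置差,
    以及对应图像区域的面积差,识别相对位置偏离参考位置的方向。
    返回 (第一方向, 第二方向),None 表示该维度上无偏离。
    图像坐标系 y 轴向下,因此 dy < 0 对应“上”。"""
    dx = pos_first[0] - pos_standard[0]
    dy = pos_first[1] - pos_standard[1]
    first = None
    if (dx, dy) != (0, 0):
        vert = '上' if dy < 0 else '下' if dy > 0 else ''
        horiz = '左' if dx < 0 else '右' if dx > 0 else ''
        first = vert + horiz
    # 瞳孔成像面积变小说明瞳孔更远,变大说明更近
    second = None
    if area_first < area_standard:
        second = '远离显示屏幕'
    elif area_first > area_standard:
        second = '靠近显示屏幕'
    return first, second
```

位置差给出平行于显示屏幕的第一方向,面积差给出垂直于显示屏幕的第二方向,两者可同时存在。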
560、当偏离方向包括平行于显示屏幕的第一方向时,确定相对位置在第一方向上偏离参考位置的偏离距离,并通过驱动装置驱动屏下摄像头按照第一方向移动上述的偏离距离。
在本公开实施例中,电子设备可进一步计算相对位置在平行于显示屏幕的方向上偏离参考位置的偏离距离。
可选的,电子设备可根据相对位置和参考位置在相机坐标系下的坐标计算上述两个位置在第一方向上的距离分量,从而得到相对位置在第一方向上的偏离距离。
可选的,电子设备也可计算瞳孔在第一虹膜图像和标准图像中的图像位置之间的距离,并根据屏下摄像头的内参将瞳孔的图像位置距离转换成相机坐标系下的距离,得到相对位置在第一方向上的偏离距离。
在计算得到偏离距离之后,电子设备可控制驱动装置驱动屏下摄像头在第一方向上移动上述的偏离距离。在屏下摄像头移动完毕之后,屏下摄像头的中心线与各个可见光线的交点位置可与瞳孔位置重合,从而可以在头戴式显示器的佩戴者不做任何动作的情况下,通过调整屏下摄像头的位置改变参考位置,使得瞳孔处于较为合理的拍摄位置上,以拍摄得到高质量的虹膜图像。
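像素级的位置差可借助内参换算为相机坐标系下平行于显示屏幕的偏离距离,示意如下(焦距与瞳孔深度均为示例性假设值,实际内参需标定得到):

```python
def pixel_offset_to_distance(du, dv, fx, fy, depth):
    """把瞳孔在两幅图像之间的像素位置差 (du, dv) 按内参
    换算成相机坐标系下平行于显示屏幕方向的偏离距离。
    depth 为瞳孔到屏下摄像头的估计距离(示例性假设值)。"""
    dx = du * depth / fx
    dy = dv * depth / fy
    return (dx * dx + dy * dy) ** 0.5

# 示例:像素差 (30, 40)、焦距 600、深度 60 时,偏离距离为 5
offset = pixel_offset_to_distance(30, 40, 600.0, 600.0, 60.0)
```

换算得到的距离即驱动装置需要带动屏下摄像头沿第一方向移动的量。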
在一些实施例中,电子设备在通过驱动装置驱动屏下摄像头在第一方向上移动上述的偏离距离之后,可以重新校准所述屏下摄像头的中心线位置,并根据校准后的中心线位置重新确定屏下摄像头的中心线与各个可见光线的交点相对于显示屏幕的位置,以对参考位置进行更新。更新后的参考位置将配置为与下一次确定出的瞳孔相对位置进行匹配判断,可以更准确地判断瞳孔的相对位置是否处于合理的拍摄位置上。
570、当偏离方向包括垂直于显示屏幕的第二方向时,通过头戴式显示器输出提示佩戴者按照第二方向移动的移动提示信息。
在本公开实施例中,屏下摄像头在垂直于显示屏幕的方向上的设置位置较为固定。因此,当瞳孔在垂直于显示屏幕的方向上偏离参考位置时,电子设备可通过移动提示信息提醒佩戴者移动,在参考位置不变的情况下调整瞳孔相对于屏下摄像头的相对位置,使得瞳孔处于较为合理的拍摄位置上,以拍摄得到高质量的虹膜图像。
其中,头戴式显示器可通过文字、语音、视频等一种或多种方式输出移动提示信息,具体不做限定。可选的,移动提示信息可以是模拟动画,该模拟动画可用于示出佩戴者按照第二方向移动的移动轨迹。头戴式显示器可以在显示屏幕中输出该模拟动画,使得佩戴者可以直观地看到需要移动的第二方向,有利于降低使用门槛。
需要说明的是,在一些实施例中,电子设备可以在偏离方向包括第一方向时执行步骤560,并且在偏离方向包括第二方向时执行与步骤570不同的步骤。或者,头戴式显示器也可以在偏离方向包括第二方向时执行步骤570,在偏离方向包括第一方向时执行与步骤560不同的步骤,具体不做限定。
此外,在一些实施例中,电子设备在步骤560或步骤570执行完毕之后,可以直接返回执行步骤530,以拍摄用于提取身份信息的第二虹膜图像。或者,在另一些实施例中,电子设备在步骤560或步骤570执行完毕时,可以返回执行步骤510,以重新确定虹膜相对于屏下摄像头的相对位置,并重新判断是否可以拍摄用于提取身份信息的第二虹膜图像。
可见,在前述实施例中,电子设备可对佩戴者的瞳孔相对于屏下摄像头的相对位置与参考位置进行匹配,以提高用于进行身份信息提取的虹膜图像的图像质量。并且,电子设备还可以在判断出瞳孔的相对位置与参考位置不匹配时,通过驱动装置驱动屏下摄像头移动的方式对参考位置进行调整;或者,通过输出移动提示信息对瞳孔的相对位置进行调整,从而可以通过屏下摄像头的移动或佩戴者的移动提高虹膜图像的拍摄质量。
请参阅图6,图6是一个实施例公开的另一种头戴式显示器的结构示意图。图6所示的头戴式显示器可以是对图1B所示的头戴式显示器进行优化得到的。如图6所示,该头戴式显示器还可包括:
至少两个眼部摄像头,每个显示屏幕20周边都可围绕设置有一个或多个眼部摄像头。在头戴式显示器101被佩戴时,显示屏幕20中可输出数字渲染而成的虚拟画面或者输出虚拟与现实混合的混合画面。
示例性的,如图6所示,头戴式显示器101可包括多个眼部摄像头,例如可包括7个眼部摄像头,分别为眼部摄像头31、眼部摄像头32、眼部摄像头33、眼部摄像头34、眼部摄像头35、眼部摄像头36和眼部摄像头37,分别在每个显示屏幕20的上、下、左、右四个方向上设置。并且,两个显示屏幕20的中间位置可共用同一个眼部摄像头32。
可选的,头戴式显示器101还可设置有眼动追踪传感器50。眼动追踪传感器50可设置在显示屏幕20的一侧,例如设置在显示屏幕20的左侧或者右侧。眼动追踪传感器可用于检测眼睛的视线方向等。示例性的,眼动追踪传感器可以是电极式的眼动传感器,通过传感器中的电极检测眼睛周边的肌肉动作,从而得到眼睛的视线方向。
可选的,头戴式显示器101还可包括驱动装置60,驱动装置60与屏下摄像头连接,设置在显示屏幕20的背面。
基于前述的头戴式显示器,请参阅图7,图7是一个实施例公开的一种身份信息采集方法的方法流程示意图。如图7所示,该方法可以包括以下步骤:
710、通过眼动追踪传感器检测佩戴者的视线方向。
720、在检测到佩戴者的视线方向为注视显示屏幕时,通过屏下摄像头拍摄头戴式显示器佩戴者的第一虹膜图像。
在本公开实施例中,当检测到佩戴者的视线方向为注视显示屏幕时,佩戴者的瞳孔处于参考位置的可能性较大,此时拍摄第一虹膜图像并利用第一虹膜图像对瞳孔的相对位置进行校验,瞳孔的相对位置与参考位置匹配的可能性较高,有利于减少调整,更快速地拍摄得到高质量的虹膜图像。
730、通过屏下摄像头拍摄头戴式显示器佩戴者的第一虹膜图像。
740、根据佩戴者的瞳孔在第一虹膜图像中的图像位置,判断佩戴者的瞳孔相对于屏下摄像头的相对位置与参考位置是否匹配;若是,则执行步骤760;若否,执行步骤750。
750、移动屏下摄像头的位置和/或输出移动提示信息,并执行步骤760。
其中,头戴式显示器执行步骤750的实施方式可参见前述的实施例,在此不再赘述。
760、控制屏下摄像头拍摄第二虹膜图像。
770、通过至少两个眼部摄像头分别拍摄佩戴者的眼部区域,得到至少两帧眼部图像。
在本公开实施例中,每个摄像头执行拍摄操作后得到的图像可以作为一帧眼部图像。需要说明的是,由于各个眼部摄像头在头戴式显示器上的设置位置不同,不同的眼部摄像头拍摄到的眼部图像可能包括部分相同的眼睛部位,以及部分不同的眼睛部位。
示例性的,请结合图6,图6所示的眼部摄像头33拍摄得到的眼部图像可以包括眼球以及眼睛的下眼睑,但不包括上眼睑;图6所示的眼部摄像头31拍摄得到的眼部图像可以包括眼球以及眼睛的上眼睑,不包括下眼睑。眼部摄像头32拍摄得到的眼部图像可以包括左眼和右眼的眼角部分,以及两眼之间的鼻梁。
需要说明的是,步骤770与前述的步骤720-步骤760在逻辑上没有必然的先后顺序。在一些可能的实施例中,电子设备也可以在执行步骤710之后,在检测到佩戴者的视线方向为注视显示屏幕时,执行步骤770以拍摄得到两帧或以上的眼部图像。
780、通过图像融合模型对第二虹膜图像以及至少两帧眼部图像进行融合,得到图像融合模型输出的人脸融合图像。
在本公开实施例中,图像融合模型可以是经过训练的神经网络。
示例性的,图像融合模型可以是Transformer结构,包括编码器和解码器。
编码器可由并列的两层网络组成,第一层网络可包括一个3×3的卷积层,用于提取第二虹膜图像和各帧眼部图像中的深度特征。第二层网络可包括3个卷积层,每个卷积层的输出级联下一层的输入。编码器的结构可以尽可能地保留第二虹膜图像和各帧眼部图像中的深度特征和显著特征。
解码器包括四个级联的卷积层,解码器可将编码器提取出的第二虹膜图像和各帧眼部图像中的深度特征和显著特征进行融合,并将融合后的特征还原成图像的像素点,从而得到人脸融合图像。
示例性的,图像融合模型也可以是脉冲耦合神经网络(Pulse Coupled Neural Network,PCNN),PCNN具有传统神经网络所没有的特性,如脉冲发放同步、变阈值、波的形成与传播等特性,可以更好地将第一虹膜图像与多帧眼部图像进行融合。
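图像融合模型需经训练得到,此处不展开 Transformer 或 PCNN 的实现;下面仅用像素级加权平均作为占位,示意“第二虹膜图像与多帧眼部图像融合为一帧”的数据流(这是一个简化的替代方案,并非上述学习型融合模型本身):

```python
def weighted_fuse(frames, weights=None):
    """把第二虹膜图像与各帧眼部图像(已对齐、同尺寸的灰度二维列表)
    按权重逐像素融合成一帧。仅作为学习型融合模型的简化占位。"""
    n = len(frames)
    if weights is None:
        weights = [1.0 / n] * n   # 默认等权平均
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for frame, wt in zip(frames, weights):
        for i in range(h):
            for j in range(w):
                fused[i][j] += wt * frame[i][j]
    return fused

iris = [[100, 100], [100, 100]]   # 示例性的 2×2 虹膜图像块
eye = [[50, 150], [50, 150]]      # 示例性的 2×2 眼部图像块
fused = weighted_fuse([iris, eye])
```

实际的融合模型会在特征层面(而非像素层面)融合各帧的深度特征与显著特征,再解码为人脸融合图像。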
作为一种可选的实施方式,电子设备在执行前述的步骤780时,可以直接将原始拍摄得到的第二虹膜图像和各帧眼部图像输入至图像融合模型。
作为另一种可选的实施方式,电子设备在执行步骤780时,也可以先计算第二虹膜图像与至少两帧眼部图像中每帧眼部图像分别对应的配准矩阵,并以屏下摄像头拍摄第二虹膜图像时的拍摄视角作为参考视角。然后,根据每帧眼部图像分别对应的配准矩阵,将每帧眼部图像转换至参考视角,再将第二虹膜图像和转换至参考视角下的各帧眼部图像输入至图像融合模型,通过图像融合模型对二者进行融合,得到图像融合模型输出的人脸融合图像。
需要说明的是,每帧眼部图像可至少包括一部分的虹膜,在计算每帧眼部图像与第二虹膜图像之间的配准矩阵时,电子设备可对每帧眼部图像与第二虹膜图像进行特征点匹配,并根据特征点匹配结果计算每帧眼部图像与第二虹膜图像之间的配准矩阵。
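配准矩阵的计算可示意如下:这里用 3 对特征点精确求解一个仿射配准矩阵,代替一般的最小二乘或单应估计,仅演示由特征点匹配结果得到配准矩阵并把眼部图像中的点转换至参考视角的流程(点坐标均为示例性假设):

```python
def affine_registration(src_pts, dst_pts):
    """由 3 对特征点匹配结果 (src -> dst) 求解仿射配准矩阵
    [[a, b, tx], [c, d, ty]],即 x' = a*x + b*y + tx, y' = c*x + d*y + ty。
    实际中通常用更多匹配点做最小二乘求解。"""
    (x1, y1), (x2, y2), (x3, y3) = src_pts

    def solve(vals):
        # 用克拉默法则求解 a*xi + b*yi + t = vi 的三元线性方程组
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        v1, v2, v3 = vals
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        t = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, t

    a, b, tx = solve([p[0] for p in dst_pts])
    c, d, ty = solve([p[1] for p in dst_pts])
    return [[a, b, tx], [c, d, ty]]

def warp_point(m, pt):
    """用配准矩阵把一帧眼部图像中的点转换到参考视角。"""
    x, y = pt
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# 示例:三对匹配点对应一个纯平移 (+2, +3) 的配准关系
m = affine_registration([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                        [(2.0, 3.0), (3.0, 3.0), (2.0, 4.0)])
pt = warp_point(m, (5.0, 7.0))  # 平移 (+2, +3) 后为 (7.0, 10.0)
```

对整帧眼部图像逐像素应用 warp_point(或等价的图像重采样),即可把该帧转换至参考视角后再送入融合。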
790、从人脸融合图像中提取出佩戴者的身份信息。
人脸融合图像融合了第二虹膜图像和各帧眼部图像的深度特征和显著特征,从人脸融合图像中提取出的身份信息,可以更准确地表征头戴式显示器佩戴者的身份,提取出的身份信息更加准确。
可见,在前述实施例中,电子设备可将屏下摄像头拍摄得到的虹膜图像,以及不同拍摄视角的眼部摄像头分别拍摄得到的眼部图像进行融合,融合得到的人脸融合图像中既包括虹膜图像提供的虹膜信息,也包括不同视角的眼部图像提供的睫毛、眼睑、眼周、鼻梁等眼部信息。因此,从人脸融合图像中提取出的身份信息可以更准确地表征佩戴者的身份。
应该理解的是,虽然图3、图5、图7的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图3、图5、图7中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
请参阅图8,图8是一个实施例公开的一种身份信息采集装置的结构示意图。该身份信息采集装置可应用于前述的任意一种电子设备。如图8所示,身份信息采集装置800可包括:拍摄模块810、判断模块820和提取模块830。
拍摄模块810,将配置为通过头戴式显示器的屏下摄像头拍摄头戴式显示器佩戴者的第一虹膜图像;
判断模块820,将配置为根据佩戴者的瞳孔在第一虹膜图像中的图像位置,判断佩戴者的瞳孔相对于屏下摄像头的相对位置与参考位置是否匹配;参考位置是屏下摄像头的中心线与各个可见光线的交点相对于显示屏幕的位置;
拍摄模块810,还将配置为在判断模块820判断出相对位置与参考位置匹配后,控制屏下摄像头拍摄第二虹膜图像;
提取模块830,将配置为从第二虹膜图像中提取佩戴者的身份信息。
在一个实施例中,头戴式显示器还包括:驱动装置;驱动装置设置在显示屏幕背面,与屏下摄像头连接,驱动装置将配置为控制屏下摄像头移动;身份信息采集装置800可包括:识别模块和驱动模块。
识别模块,可将配置为在判断模块820判断出相对位置与参考位置不匹配时,识别相对位置偏离参考位置的偏离方向。
驱动模块,可将配置为当偏离方向包括平行于显示屏幕的第一方向时,确定相对位置在第一方向上偏离参考位置的偏离距离;并通过驱动装置驱动屏下摄像头按照第一方向移动偏离距离。
在一个实施例中,身份信息采集装置800可包括:校准模块。
校准模块,可将配置为重新校准屏下摄像头的中心线位置;以及,根据校准后的中心线位置重新确定屏下摄像头的中心线与各个可见光线的交点相对于显示屏幕的位置,以对参考位置进行更新。
在一个实施例中,身份信息采集装置800可包括:识别模块和输出模块。
识别模块,可将配置为在判断模块820判断出相对位置与参考位置不匹配时,识别相对位置偏离参考位置的偏离方向。
输出模块,可将配置为在偏离方向包括垂直于显示屏幕的第二方向时,输出移动提示信息;移动提示信息将配置为提示佩戴者按照第二方向移动。
在一个实施例中,头戴式显示器还包括:至少两个眼部摄像头;至少两个眼部摄像头围绕显示屏幕设置,且至少两个眼部摄像头分别对应的拍摄视角不同;
拍摄模块810,还将配置为通过至少两个眼部摄像头分别拍摄佩戴者的眼部区域,得到至少两帧眼部图像;
提取模块830,可包括:融合单元和提取单元。
融合单元,可将配置为通过图像融合模型对第二虹膜图像以及至少两帧眼部图像进行融合,得到图像融合模型输出的人脸融合图像;
提取单元,可将配置为从人脸融合图像中提取出佩戴者的身份信息。
在一个实施例中,提取模块830,还可包括:转换单元。
转换单元,可将配置为计算第二虹膜图像与至少两帧眼部图像中每帧眼部图像分别对应的配准矩阵;以及,根据每帧眼部图像分别对应的配准矩阵,将每帧眼部图像转换至参考视角;参考视角是屏下摄像头拍摄第二虹膜图像时的拍摄视角。
融合单元,还可将配置为通过图像融合模型对第二虹膜图像以及转换至参考视角下的各帧眼部图像进行融合,得到图像融合模型输出的人脸融合图像。
可见,实施前述实施例公开的身份信息采集装置,可以通过屏下摄像头拍摄佩戴者的第一虹膜图像,并根据第一虹膜图像中瞳孔的图像位置判断当前瞳孔相对于屏下摄像头的相对位置是否处于较为合理的拍摄位置(参考位置)。若是,则头戴式显示器再通过屏下摄像头拍摄用于进行身份信息提取的第二虹膜图像。由于第二虹膜图像是在合理的拍摄位置上拍摄得到的,因此第二虹膜图像的图像质量较高,从第二虹膜图像中提取出的身份信息准确性更高,可以更准确地表征佩戴者的身份。
关于身份信息采集装置的具体限定可以参见上文中对于身份信息采集方法的限定,在此不再赘述。上述身份信息采集装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种电子设备,该电子设备可以是终端设备,其内部结构图可以如图9所示。该电子设备包括通过系统总线连接的处理器、存储器、通信接口、数据库、显示屏和输入装置。其中,该电子设备的处理器配置成提供计算和控制能力的模块。该电子设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该电子设备的通信接口配置成与外部的终端进行有线或无线方式通信的模块,无线方式可通过WIFI、运营商网络、近场通信(NFC)或其他技术实现。该计算机可读指令被处理器执行时以实现上述实施例提供的身份信息采集方法。该电子设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该电子设备的输入装置可以是显示屏上覆盖的触摸层,也可以是电子设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图9中示出的结构,仅仅是与本公开方案相关的部分结构的框图,并不构成对本公开方案所应用于其上的电子设备的限定,具体的电子设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本公开提供的身份信息采集装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图9所示的电子设备上运行。电子设备的存储器中可存储组成该电子设备的各个程序模块。各个程序模块构成的计算机可读指令使得处理器执行本说明书中描述的本公开各个实施例的身份信息采集方法中的步骤。
在一个实施例中,提供了一种电子设备,包括存储器和一个或多个处理器,将所述存储器配置成存储计算机可读指令的模块;所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行上述方法实施例所述的身份信息采集方法的步骤。
本实施例提供的电子设备,可以实现上述方法实施例提供的身份信息采集方法,其实现原理与技术效果类似,此处不再赘述。
一个或多个存储有计算机可读指令的非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述任一项所述的身份信息采集方法的步骤。
本实施例提供的计算机可读存储介质上存储的计算机可读指令,可以实现上述方法实施例提供的身份信息采集方法,其实现原理与技术效果类似,此处不再赘述。
本领域普通技术人员可以理解实现上述方法实施例中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成的,计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本公开所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,比如静态随机存取存储器(Static Random Access Memory,SRAM)和动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上实施例仅表达了本公开的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本公开构思的前提下,还可以做出若干变形和改进,这些都属于本公开的保护范围。因此,本公开专利的保护范围应以所附权利要求为准。
工业实用性
本公开提供的身份信息采集方法,可拍摄到图像质量较高的虹膜图像,从图像质量较高的虹膜图像中提取出的身份信息准确性更高,可以更准确地表征佩戴者的身份。

Claims (16)

  1. 一种身份信息采集方法,其特征在于,所述方法包括:
    通过头戴式显示器的屏下摄像头拍摄所述头戴式显示器佩戴者的第一虹膜图像;所述头戴式显示器包括显示屏幕、所述屏下摄像头和可见光光源,所述可见光光源用于发射可见光线;所述屏下摄像头设置在所述显示屏幕背面,所述可见光光源围绕所述显示屏幕设置;
    根据所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置,判断所述佩戴者的瞳孔相对于所述屏下摄像头的相对位置与参考位置是否匹配;所述参考位置是所述屏下摄像头的中心线与各个可见光线的交点相对于所述屏下摄像头的位置;
    若所述相对位置与所述参考位置匹配,则控制所述屏下摄像头拍摄第二虹膜图像;
    从所述第二虹膜图像中提取所述佩戴者的身份信息。
  2. 根据权利要求1所述的方法,其特征在于,所述头戴式显示器还包括:驱动装置;所述驱动装置设置在所述显示屏幕背面,与所述屏下摄像头连接,所述驱动装置用于控制所述屏下摄像头移动;所述方法还包括:
    若所述相对位置与所述参考位置不匹配,则识别所述相对位置偏离所述参考位置的偏离方向;
    当所述偏离方向包括平行于所述显示屏幕的第一方向时,确定所述相对位置在所述第一方向上偏离所述参考位置的偏离距离;
    通过所述驱动装置驱动所述屏下摄像头按照所述第一方向移动所述偏离距离。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    重新校准所述屏下摄像头的中心线位置;
    根据校准后的所述中心线位置重新确定所述屏下摄像头的中心线与各个可见光线的交点相对于所述显示屏幕的位置,以对所述参考位置进行更新。
  4. 根据权利要求2所述的方法,其特征在于,所述识别所述相对位置偏离所述参考位置的偏离方向,包括:
    获取所述屏下摄像头的内参;
    根据所述内参将所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置从图像坐标系转换到相机坐标系,得到所述瞳孔在所述相机坐标系下的位置作为所述瞳孔相对于所述屏下摄像头的相对位置;
    根据所述相对位置和所述参考位置分别在所述相机坐标系下的坐标计算所述相对位置偏离所述参考位置的偏离方向。
  5. 根据权利要求2所述的方法,其特征在于,所述识别所述相对位置偏离所述参考位置的偏离方向,包括:
    获取所述佩戴者的瞳孔在标准图像中的图像位置,所述标准图像是所述瞳孔处于所述参考位置时所述屏下摄像头拍摄到的虹膜图像;
    根据所述瞳孔分别在所述第一虹膜图像和所述标准图像中的图像位置确定所述相对位置偏离所述参考位置的偏离方向;或者,
    根据所述瞳孔在所述第一虹膜图像和所述标准图像中分别对应的图像区域的面积确定所述相对位置偏离所述参考位置的偏离方向。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述相对位置与所述参考位置不匹配,则识别所述相对位置偏离所述参考位置的偏离方向;
    当所述偏离方向包括垂直于所述显示屏幕的第二方向时,通过所述头戴式显示器输出移动提示信息;所述移动提示信息用于提示所述佩戴者按照所述第二方向移动。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述头戴式显示器还包括:至少两个眼部摄像头;所述至少两个眼部摄像头围绕所述显示屏幕设置,且所述至少两个眼部摄像头分别对应的拍摄视角不同;所述方法还包括:
    通过所述至少两个眼部摄像头分别拍摄所述佩戴者的眼部区域,得到至少两帧眼部图像;
    以及,所述从所述第二虹膜图像中提取所述佩戴者的身份信息,包括:
    通过图像融合模型对所述第二虹膜图像以及所述至少两帧眼部图像进行融合,得到所述图像融合模型输出的人脸融合图像;
    从所述人脸融合图像中提取出所述佩戴者的身份信息。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    计算所述第二虹膜图像与所述至少两帧眼部图像中每帧所述眼部图像分别对应的配准矩阵;
    根据每帧所述眼部图像分别对应的配准矩阵,将每帧所述眼部图像转换至参考视角;所述参考视角是所述屏下摄像头拍摄所述第二虹膜图像时的拍摄视角;
    以及,所述通过图像融合模型对所述第二虹膜图像以及所述至少两帧眼部图像进行融合,得到所述图像融合模型输出的人脸融合图像,包括:
    通过图像融合模型对所述第二虹膜图像以及转换至所述参考视角下的各帧所述眼部图像进行融合,得到所述图像融合模型输出的人脸融合图像。
  9. 根据权利要求1-6任一项所述的方法,其特征在于,所述头戴式显示器还包括:眼动追踪传感器,所述眼动追踪传感器设置在所述显示屏幕的一侧;所述通过所述屏下摄像头拍摄所述头戴式显示器佩戴者的第一虹膜图像,包括:
    通过所述眼动追踪传感器检测所述佩戴者的视线方向;
    在检测到所述佩戴者的视线方向为注视所述显示屏幕时,通过所述屏下摄像头拍摄所述头戴式显示器佩戴者的第一虹膜图像。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在所述屏下摄像头拍摄虹膜图像时,控制所述显示屏幕停止发光。
  11. 根据权利要求1所述的方法,其特征在于,所述根据所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置,判断所述佩戴者的瞳孔相对于所述屏下摄像头的相对位置与参考位置是否匹配,包括:
    获取所述屏下摄像头的内参;
    根据所述内参将所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置从图像坐标系转换到相机坐标系,得到所述瞳孔在所述相机坐标系下的位置作为所述瞳孔相对于所述屏下摄像头的相对位置;
    根据所述相对位置和所述参考位置分别在所述相机坐标系下的坐标计算所述相对位置和所述参考位置之间的距离;
    若所述距离超过距离阈值,则确定所述相对位置与所述参考位置不匹配;
    若所述距离未超过所述距离阈值,则确定所述相对位置与所述参考位置匹配。
  12. 根据权利要求1所述的方法,其特征在于,所述根据所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置,判断所述佩戴者的瞳孔相对于所述屏下摄像头的相对位置与参考位置是否匹配,包括:
    获取所述佩戴者的瞳孔在标准图像中的图像位置,所述标准图像是所述瞳孔处于所述参考位置时所述屏下摄像头拍摄到的虹膜图像;
    若所述瞳孔在所述第一虹膜图像中的图像位置与所述瞳孔在所述标准图像中的图像位置相同,则确定所述相对位置与所述参考位置匹配;
    若所述瞳孔在所述第一虹膜图像中的图像位置与所述瞳孔在所述标准图像中的图像位置不相同,则确定所述相对位置与所述参考位置不匹配。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    若所述瞳孔在所述第一虹膜图像中的图像位置与所述瞳孔在所述标准图像中的图像位置相同,则判断所述瞳孔在所述第一虹膜图像和所述标准图像中分别对应的图像区域的面积是否相同;
    若所述瞳孔在所述第一虹膜图像中对应的图像区域面积与所述瞳孔在所述标准图像中对应的图像区域面积相同,则确定所述相对位置与所述参考位置匹配;
    若所述瞳孔在所述第一虹膜图像中对应的图像区域面积与所述瞳孔在所述标准图像中对应的图像区域面积不相同,则确定所述相对位置与所述参考位置不匹配。
  14. 一种身份信息采集装置,其特征在于,所述装置包括:
    拍摄模块,将所述拍摄模块配置为通过头戴式显示器的屏下摄像头拍摄所述头戴式显示器佩戴者的第一虹膜图像;所述头戴式显示器包括显示屏幕、所述屏下摄像头和可见光光源,所述可见光光源用于发射可见光线;所述屏下摄像头设置在所述显示屏幕背面,所述可见光光源围绕所述显示屏幕设置;
    判断模块,将所述判断模块配置为根据所述佩戴者的瞳孔在所述第一虹膜图像中的图像位置,判断所述佩戴者的瞳孔相对于所述屏下摄像头的相对位置与参考位置是否匹配;所述参考位置是所述屏下摄像头的中心线与各个可见光线的交点相对于所述显示屏幕的位置;
    所述拍摄模块,还将所述拍摄模块配置为在所述判断模块判断出所述相对位置与所述参考位置匹配后,控制所述屏下摄像头拍摄第二虹膜图像;
    提取模块,将所述提取模块配置为从所述第二虹膜图像中提取所述佩戴者的身份信息。
  15. 一种电子设备,包括:存储器和一个或多个处理器,所述存储器中存储有计算机可读指令;所述计算机可读指令被所述一个或多个处理器执行时,使得所述一个或多个处理器执行权利要求1-13任一项所述的身份信息采集方法的步骤。
  16. 一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行权利要求1-13任一项所述的身份信息采集方法的步骤。
PCT/CN2022/118799 2022-07-28 2022-09-14 身份信息采集方法、装置、电子设备和存储介质 WO2024021250A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210900800.3 2022-07-28
CN202210900800.3A CN115205957A (zh) 2022-07-28 2022-07-28 身份信息采集方法、装置、电子设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2024021250A1 true WO2024021250A1 (zh) 2024-02-01

Family

ID=83584966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118799 WO2024021250A1 (zh) 2022-07-28 2022-09-14 身份信息采集方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN115205957A (zh)
WO (1) WO2024021250A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206193792U (zh) * 2016-08-25 2017-05-24 北京中科虹霸科技有限公司 支持头戴模式人机交互与身份认证的虚拟现实设备
CN106934265A (zh) * 2017-03-13 2017-07-07 深圳市金立通信设备有限公司 一种穿戴式电子设备及身份认证系统
CN110287687A (zh) * 2019-06-26 2019-09-27 Oppo广东移动通信有限公司 注册方法、注册装置、头戴设备和存储介质
US20190370450A1 (en) * 2018-06-05 2019-12-05 North Inc. Method and system for authenticating a user on a wearable heads-up display
CN112149473A (zh) * 2019-06-28 2020-12-29 北京眼神科技有限公司 虹膜图像采集方法


Also Published As

Publication number Publication date
CN115205957A (zh) 2022-10-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22952691

Country of ref document: EP

Kind code of ref document: A1