WO2024021251A1 - Identity verification method and apparatus, electronic device, and storage medium - Google Patents

Identity verification method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024021251A1
WO2024021251A1 (PCT/CN2022/118800; CN2022118800W)
Authority
WO
WIPO (PCT)
Prior art keywords
wearer
eye
head
mounted display
identity
Prior art date
Application number
PCT/CN2022/118800
Other languages
English (en)
French (fr)
Inventor
韦燕华
Original Assignee
上海闻泰电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海闻泰电子科技有限公司
Publication of WO2024021251A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G02B27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0138 - Head-up displays characterised by optical features comprising image capture systems, e.g. camera

Definitions

  • the present disclosure relates to identity verification methods, devices, electronic devices and storage media.
  • VR (virtual reality)
  • mixed reality (Mixed Reality)
  • head-mounted display may be used illegally.
  • head-mounted displays are at risk of theft. Once the head-mounted display is lost, anyone can use it, which causes property damage to the original owner and also poses a certain risk of privacy leakage.
  • the head-mounted display is at risk of being stolen, and the security of its use is low.
  • an identity verification method, device, electronic device and storage medium are provided.
  • An identity verification method includes:
  • the iris image of the wearer of the head-mounted display is captured by the under-screen camera of the head-mounted display;
  • the head-mounted display includes an under-screen camera and at least two eye cameras;
  • the head-mounted display also includes a display screen, the under-screen camera is arranged on the back of the display screen, the at least two eye cameras are arranged around the display screen, and the corresponding shooting angles of the at least two eye cameras are different;
  • the at least two eye cameras respectively capture the eye areas of the wearer of the head-mounted display to obtain at least two frames of eye images
  • the identity of the wearer is verified according to the user identity characteristics, and whether to respond to the user operation input by the wearer is determined based on the verification result.
  • the head-mounted display further includes a driving device; the driving device is provided on the back of the display screen and is connected to the under-screen camera for controlling the movement of the under-screen camera; and capturing the iris image of the wearer of the head-mounted display through the under-screen camera includes: detecting the wearer's pupil movement data; driving, through the driving device, the under-screen camera to move according to the pupil movement data so that the shooting angle of the under-screen camera faces the wearer's pupil; and, after the under-screen camera has moved, capturing an iris image of the wearer through the under-screen camera.
  • the head-mounted display further includes an eye-tracking sensor, and the eye-tracking sensor is disposed on one side of the display screen;
  • the pupil movement data includes: a pupil movement distance and a pupil movement direction;
  • detecting the wearer's pupil movement data includes: determining the current pupil position of the wearer of the head-mounted display from the eye movement data currently collected by the eye-tracking sensor; acquiring the wearer's last pupil position, determined from the last eye movement data collected by the eye-tracking sensor; and comparing the current pupil position with the last pupil position to determine the wearer's pupil movement distance and pupil movement direction.
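  As an illustration of the comparison step above, the two pupil positions can be treated as 2D coordinates in the sensor's frame; the function name and coordinate convention are assumptions for this sketch, not part of the disclosure:

```python
import math

def pupil_movement(last_pos, current_pos):
    """Compare the current pupil position with the last pupil position
    to obtain the pupil movement distance and the movement direction
    (angle in degrees, measured from the +x axis)."""
    dx = current_pos[0] - last_pos[0]
    dy = current_pos[1] - last_pos[1]
    distance = math.hypot(dx, dy)          # Euclidean movement distance
    direction = math.degrees(math.atan2(dy, dx))
    return distance, direction
```

  For example, a pupil moving from (0, 0) to (3, 4) yields a distance of 5 in the same units as the input coordinates.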
  • the method further includes: fusing the at least two frames of eye images to obtain a fused image; and extracting the user identity feature of the wearer based on the iris image and the at least two frames of eye images includes: extracting the iris feature from the iris image and the eye feature from the fused image; and fusing the iris feature and the eye feature to obtain the user identity feature.
  • fusing the at least two frames of eye images to obtain the fused image includes: determining the iris image corresponding to a reference perspective as the reference image, where the iris image corresponding to the reference perspective is captured by the under-screen camera while the wearer is looking at the display screen; converting, based on the reference image, the at least two frames of eye images to the reference perspective to obtain at least two frames of eye images corresponding to the reference perspective; and fusing the at least two frames of eye images corresponding to the reference perspective to obtain the fused image.
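  A minimal NumPy sketch of the final fusion step, assuming the frames have already been converted to the reference perspective (the perspective conversion itself would rely on per-camera calibration, which is outside this sketch); per-pixel weighted averaging is one possible fusion choice, not mandated by the disclosure:

```python
import numpy as np

def fuse_eye_images(aligned_images, weights=None):
    """Fuse eye images that have already been converted to the
    reference perspective, by per-pixel weighted averaging."""
    stack = np.stack([img.astype(np.float64) for img in aligned_images])
    if weights is None:
        # default: equal weight for every aligned frame
        weights = np.full(len(aligned_images), 1.0 / len(aligned_images))
    # weighted sum over the frame axis -> one fused image
    return np.tensordot(weights, stack, axes=1)
```

  With two aligned frames and default weights this reduces to a plain per-pixel average.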
  • the head-mounted display further includes a plurality of infrared light sources arranged in a ring shape around the display screen; and the method further includes: emitting light through the plurality of infrared light sources while the under-screen camera captures the iris image.
  • the method further includes: if the identity verification result is that the identity of the wearer is illegal, sending a vibration command to the handle; the handle is communicatively connected to the head-mounted display, and the vibration command instructs the handle to vibrate at a preset frequency.
  • the method further includes: controlling the display screen to stop emitting light.
  • the head-mounted display further includes an eye-tracking sensor disposed on one side of the display screen; the method further includes: determining, according to the eye movement data collected by the eye-tracking sensor, whether the wearer of the head-mounted display is looking at the display screen; capturing the iris image of the wearer through the under-screen camera of the head-mounted display includes: if it is determined that the wearer is looking at the display screen, capturing an iris image of the wearer through the under-screen camera of the head-mounted display; and capturing the eye areas of the wearer separately through the at least two eye cameras to obtain at least two frames of eye images includes: if it is determined that the wearer is looking at the display screen, capturing the eye areas of the wearer separately through the at least two eye cameras to obtain at least two frames of eye images.
  • verifying the identity of the wearer based on the user identity feature includes: obtaining the legal identity feature corresponding to the legal user of the head-mounted display, and matching the user identity feature with the legal identity feature; if the user identity feature matches the legal identity feature, it is determined that the wearer's identity is legal; if the user identity feature does not match the legal identity feature, it is determined that the wearer's identity is illegal.
  • verifying the identity of the wearer according to the user identity feature includes: classifying the user identity feature using a trained classification model, where the classification model is trained using the legal identity features corresponding to legal users; judging, through the classification model, whether the user identity feature and the legal identity features belong to the same category; if so, determining that the identity of the wearer is legal; if not, determining that the identity of the wearer is illegal.
  • the user operation input by the wearer includes: a power-on operation of turning on the head-mounted display; a triggering operation of opening an application built into the head-mounted display; a login operation of logging into a game application in the head-mounted display; or a login operation of an entertainment application such as audio and video playback software.
  • An identity verification device including:
  • a first photography module configured to capture an iris image of the wearer of the head-mounted display through an under-screen camera of the head-mounted display;
  • the head-mounted display includes an under-screen camera and at least two eye cameras;
  • the head-mounted display also includes a display screen; the under-screen camera is arranged on the back of the display screen; the at least two eye cameras are arranged around the display screen, and the shooting angles corresponding to the at least two eye cameras are different;
  • a second photographing module configured to photograph the eyes of the wearer of the head-mounted display through the at least two eye cameras to obtain at least two frames of eye images
  • An extraction module configured to extract the user identity feature of the wearer based on the iris image and the at least two frames of eye images
  • a verification module configured to verify the identity of the wearer based on the user identity characteristics, and determine whether to respond to the user operation input by the wearer based on the verification result.
  • a computer device includes a memory and one or more processors, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors perform the steps of any of the above identity verification methods.
  • one or more non-volatile storage media storing computer-readable instructions; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of any of the above identity verification methods.
  • Figure 1A is an application scenario diagram of the identity verification method provided by one or more embodiments of the present disclosure.
  • FIG. 1B is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure
  • Figure 2 is an example diagram of the arrangement of infrared light sources provided by one or more embodiments of the present disclosure
  • Figure 3 is a flow chart of steps of an identity verification method provided by one or more embodiments of the present disclosure.
  • Figure 4 is a step flow chart of another identity verification method provided by one or more embodiments of the present disclosure.
  • Figure 5 is a step flow chart of another identity verification method provided by one or more embodiments of the present disclosure.
  • Figure 6 is a structural block diagram of an identity verification device in one or more embodiments of the present disclosure.
  • Figure 7 is an internal structure diagram of a computer device in one or more embodiments of the present disclosure.
  • first, second, etc. in the description and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
  • first camera and the second camera are used to distinguish different cameras, rather than to describe a specific order of the cameras.
  • words such as "exemplary" or "for example" mean serving as an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" in the present disclosure is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present relevant concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise stated, "plurality" means two or more.
  • Embodiments of the present disclosure disclose an identity verification method, device, electronic device and storage medium, which can verify the identity of the wearer of the head-mounted display and improve the use safety of the head-mounted display. Each is explained in detail below.
  • Figure 1A is a schematic diagram of an application scenario of an identity information collection method disclosed in an embodiment.
  • a first operating environment is given, which may include a head-mounted display 101 , a terminal device 102 and a server 103 .
  • the user may wear the head mounted display 101 so that the head mounted display 101 acquires data.
  • the head-mounted display 101 does not have data processing capabilities. After acquiring the data, it can transmit the data with the terminal 102 through short-range communication technology.
  • the terminal device 102 may include electronic devices such as smart TVs, three-dimensional visual display devices, large-scale projection systems, multimedia playback devices, mobile phones, tablet computers, game consoles, and PCs (Personal Computers).
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data.
  • the server 103 is used to provide background services for the terminal 102, so that the terminal 102 processes the received data transmitted by the head-mounted display 101, thereby completing the identity information collection method provided by the present disclosure.
  • the server 103 can also generate corresponding control instructions according to the data processing results.
  • the control instructions can be sent to the terminal 102 and/or the head-mounted display 101 respectively to control the terminal 102 and/or the head-mounted display 101.
  • server 103 may be a backend server.
  • the server 103 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
  • the server 103 provides background services for multiple terminals 102 at the same time.
  • a second operating environment is given, which may include a head-mounted display 101 and a terminal device 102 .
  • the head-mounted display 101 may include various types of devices as stated above.
  • the head-mounted display 101 does not have data processing capabilities. After acquiring the data, it can transmit data with the terminal 102 through short-range communication technology.
  • the terminal device 102 may include various types of electronic devices stated above.
  • the terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data to complete the identity information collection method provided by the present disclosure.
  • the terminal 102 can also generate corresponding control instructions according to the data processing results, and the control instructions can be sent to the head-mounted display 101 to control the head-mounted display 101.
  • a third operating environment is given, which only includes the head-mounted display 101 .
  • the head-mounted display 101 not only has data acquisition capabilities, but also has data processing capabilities, that is, it can call the program code through the processor in the head-mounted display 101 to realize the functions of the identity information collection method provided by the present disclosure.
  • the program code can be stored in a computer storage medium. It can be seen that the head-mounted display at least includes a processor and a storage medium.
  • FIG. 1B is a schematic structural diagram of a head-mounted display disclosed in an embodiment of the present disclosure.
  • the head-mounted display may also include components such as a fixing strap not shown in FIG. 1B , and the fixing strap may fix the head-mounted display on the wearer's head.
  • the head-mounted display 10 may include two display screens 20 , respectively corresponding to the left eye and the right eye of the human body.
  • An under-screen camera 70 may be provided on the back of each display screen 20 .
  • the under-screen camera 70 is hidden on the back of the display screen 20 and can capture the scene in front of the display screen through the display screen 20 .
  • the head-mounted display 10 may further include at least two eye cameras, and one or more eye cameras may be arranged around each display screen 20 .
  • the display screen 20 may output a digitally rendered virtual picture or a mixed picture that is a mixture of virtuality and reality.
  • the head-mounted display 10 may include multiple eye cameras, for example seven eye cameras, namely eye cameras 31, 32, 33, 34, 35, 36 and 37, which are arranged above, below, to the left and to the right of each display screen 20. Moreover, the eye camera 32 located at the middle position between the two display screens 20 can be shared by both.
  • the head-mounted display 10 can also be provided with multiple infrared light sources 40 .
  • the infrared light source 40 can be any element capable of emitting infrared light, such as infrared LED lamp particles, but is not limited thereto. Multiple infrared light sources 40 may be arranged in a ring shape around the display screen 20 .
  • FIG. 2 is an example diagram of the arrangement of an infrared light source disclosed in an embodiment.
  • multiple infrared light sources 40 can be arranged in a ring to form an annular belt.
  • each display screen 20 may be surrounded by an annular zone composed of infrared light sources 40 .
  • the infrared light source 40 can be used to emit infrared light to supplement the shooting of the camera under the screen. Using infrared light as a supplementary light source can avoid vertigo caused by light exposure.
  • the multiple infrared light sources 40 form a ring shape, which can improve the uniformity of the fill light, so that the infrared light can be illuminated evenly from different angles to avoid local highlights.
  • the infrared light source 40 can be used to emit infrared rays with a wavelength of 850 nanometers.
  • the head-mounted display 10 may also be provided with an eye tracking sensor 50 .
  • the eye movement sensor 50 may be disposed on one side of the display screen 20 , for example, on the left or right side of the display screen 20 .
  • Eye tracking sensors can be used to collect eye movement data.
  • the eye-tracking sensor may be an electrode-type eye-tracking sensor, and the electrodes in the sensor detect muscle movements around the eyes to obtain eye-tracking data.
  • the head-mounted display 10 may also include a driving device 60 , which is connected to the under-screen camera and is disposed on the back of the display screen 20 .
  • the driving device 60 can be used to control the movement of the under-screen camera.
  • the driving device 60 can include a connecting rod and a motor.
  • the connecting rod connects the motor and the under-screen camera. When the motor rotates, it drives the connecting rod. The movement of the connecting rod drives the under-screen camera to move.
  • the following content describes the identity verification method disclosed in the embodiment of the present disclosure. It should be noted that the following content takes one of the display screens included in the head-mounted display as an example to describe the operation of the head-mounted display on the under-screen camera, eye camera, eye tracking sensor, driving device and other components corresponding to the display screen. Control, and methods for processing data collected by each component. When the head-mounted display includes two or more display screens, the control and data processing methods of the remaining display screens and related components can be referred to the following content, and the details will not be described again.
  • FIG. 3 is a schematic flowchart of an identity verification method disclosed in an embodiment of the present disclosure.
  • the identity verification method can be applied to any electronic device with data processing capabilities, including but not limited to any of the aforementioned head-mounted displays, a terminal device communicating with the head-mounted display, or a background server providing background services for the terminal device communicating with the head-mounted display.
  • when the method is executed by a head-mounted display, it can be executed by a component with computing capabilities in the head-mounted display, such as a central processing unit (CPU) or a microcontroller (Micro Control Unit, MCU).
  • the method may include the following steps:
  • when the head-mounted display is worn by the wearer, the display screen faces the wearer's eyes; the under-screen camera can then perform a shooting operation, and the captured image can be used as the wearer's iris image.
  • the iris image includes the iris and may also include eyelashes, eyelids and other parts, which can be determined based on the field of view of the under-screen camera and the distance between the wearer's eyes and the under-screen camera.
  • the display screen can be controlled to stop emitting light to increase the probability that external light passes through the display screen and reaches the under-screen camera, thereby enhancing the light transmittance of the display screen and increasing the clarity of the captured iris image.
  • the image obtained after each camera performs a shooting operation can be used as a frame of eye image. It should be noted that since each eye camera is set at a different position on the head-mounted display, the eye images captured by different eye cameras may include part of the same eye parts and part of different eye parts.
  • the eye image captured by the eye camera 33 shown in Figure 2 may include the eyeball and the lower eyelid of the eye, but not the upper eyelid; the eye image captured by the eye camera 31 shown in Figure 2 may include the eyeball and the upper eyelid of the eye, but not the lower eyelid.
  • the eye image captured by the eye camera 32 may include the corners of the left eye and the right eye, and the bridge of the nose between the two eyes.
  • steps 310 and 320 have no fixed execution order, and steps 310 and 320 may also be executed simultaneously; this is not specifically limited.
  • the electronic device can determine whether the wearer of the head-mounted display is looking at the display screen based on the eye movement data collected by the eye-tracking sensor. If it is determined that the wearer is looking at the display screen, the aforementioned steps 310 and 320 can be performed.
  • the user identity feature may be any one or more identity features extracted from the iris image and the eye image.
  • user identity features may include iris features extracted from iris images; or user identity features may also include eye features related to eye periphery, eyebrows, eyelashes, etc. extracted from eye images; or user identity Features may include iris features extracted from iris images, and eye features extracted from eye images related to eye periphery, eyebrows, eyelashes, etc.
  • the electronic device may extract identity features for each of the at least two frames of eye images.
  • the electronic device may first fuse at least two frames of eye images, and extract identity features from the fused image obtained after the fusion, without any specific limitation.
  • after the electronic device extracts the iris features from the iris image and the eye features from the eye images, it can fuse the iris features with the eye features by concatenation (Concat) or element-wise addition (Add) to form the wearer's user identity features.
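  The two fusion strategies named above, Concat and Add, differ only in how the two vectors combine; a sketch with illustrative (made-up) feature vectors:

```python
import numpy as np

iris_feat = np.array([0.2, 0.8, 0.1])  # illustrative iris feature vector
eye_feat = np.array([0.5, 0.3, 0.9])   # illustrative eye feature vector

# Concat: place the vectors side by side; the fused dimension grows
fused_concat = np.concatenate([iris_feat, eye_feat])

# Add: element-wise merge; requires both vectors to have equal length
fused_add = iris_feat + eye_feat
```

  Concat preserves both feature spaces at the cost of a larger vector, while Add keeps the dimension fixed but mixes the two sources.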
  • the electronic device can obtain the legal identity feature corresponding to the legal user of the head-mounted display and match the user identity feature extracted from the images with the legal identity feature; if the user identity feature matches the legal identity feature, it can be determined that the identity of the wearer of the head-mounted display is legal; if the user identity feature does not match the legal identity feature, it can be determined that the identity of the wearer of the head-mounted display is illegal.
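  One way to realize this matching step is a similarity threshold on the feature vectors; cosine similarity and the threshold value here are illustrative assumptions, since the disclosure does not fix a particular metric:

```python
import numpy as np

def match_identity(user_feature, legal_feature, threshold=0.9):
    """Match the extracted user identity feature against the enrolled
    legal identity feature; returns True when the wearer's identity
    is taken as legal."""
    # cosine similarity between the two feature vectors
    sim = float(np.dot(user_feature, legal_feature) /
                (np.linalg.norm(user_feature) * np.linalg.norm(legal_feature)))
    return sim >= threshold
```

  Identical features match with similarity 1.0; dissimilar (e.g. orthogonal) features fall below the threshold and are rejected.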
  • the electronic device can also train a classification model with the legal identity features corresponding to legal users, then use the trained classification model to classify the user identity feature and determine whether the user identity feature and the legal identity features belong to the same category; if so, it can be determined that the identity of the head-mounted display wearer is legal; if not, it can be determined that the identity of the head-mounted display wearer is illegal.
  • the classification model may include, but is not limited to, a support vector machine (SVM) or a deep neural network.
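  A sketch of the SVM variant using scikit-learn's SVC; the synthetic training features, feature dimension and linear kernel are all assumptions for illustration (real inputs would be the fused iris/eye features, and the disclosure does not prescribe a kernel or library):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic training data: features of the legal user (label 1)
# versus features of other users (label 0).
rng = np.random.default_rng(0)
legal_feats = rng.normal(loc=1.0, scale=0.1, size=(20, 4))
other_feats = rng.normal(loc=-1.0, scale=0.1, size=(20, 4))
X = np.vstack([legal_feats, other_feats])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="linear").fit(X, y)

# Classify a new wearer's feature: landing in the same category as
# the legal user means the identity is judged legal.
wearer_feature = np.array([[0.95, 1.05, 1.0, 0.9]])
identity_legal = bool(clf.predict(wearer_feature)[0] == 1)
```

  In practice the model would be trained once at enrollment and queried at each verification.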
  • if the verification result is that the identity of the wearer of the head-mounted display is legal, the head-mounted display can respond to the user operation input by the wearer; if the identity is illegal, the head-mounted display can refuse to respond to user operations input by the wearer.
  • the user operations input by the wearer may include a power-on operation of turning on the head-mounted display, a triggering operation of opening an application built into the head-mounted display, a login operation of logging into a game application in the head-mounted display, and a login operation of an entertainment application such as audio and video playback software; this is not specifically limited.
  • the electronic device can capture the iris image through the under-screen camera of the head-mounted display, and capture the eye image of the wearer through two or more eye cameras arranged around the screen.
  • the shooting angles corresponding to the two or more eye cameras are different, so the image content that can represent the user's identity also differs across the eye images they capture; this makes the image information in the eye images richer, which is conducive to improving the accuracy of identity verification. The user identity features extracted from the iris image and the at least two frames of eye images can characterize the wearer's identity and can thus be used to determine whether the wearer is legitimate, thereby verifying the user's identity and improving the use security of the head-mounted display.
  • the electronic device can fuse the iris features in the iris image with the eye features in the eye images corresponding to multiple viewing angles to verify the wearer's identity. Iris features and eye features complement each other, making it more difficult for illegal users to counterfeit, which can greatly improve the security of the head-mounted display.
  • Figure 4 is a schematic flowchart of another identity verification method disclosed in one embodiment.
  • the identity verification method is applied to electronic devices.
  • a head-mounted display includes the under-screen camera included in the above-mentioned embodiment, at least two eye cameras, and a driving device.
  • the method may include the following steps:
  • the pupil movement data may include a pupil movement distance and a pupil movement direction.
  • the electronic device may include the aforementioned eye tracking sensor.
  • the eye tracking sensor may be used to collect eye movement data.
  • the eye movement data may include the pupil position, pupil gaze duration, saccade count, and pupil dilation data.
  • the electronic device can process the eye movement data to obtain the wearer's pupil movement data.
  • the electronic device obtains the wearer's last pupil position determined based on the last eye movement data collected by the eye tracking sensor, and determines the wearer's current pupil position based on the eye movement data currently collected by the eye tracking sensor;
  • the electronic device can compare the current pupil position with the last pupil position to determine the wearer's pupil movement distance and pupil movement direction.
  • the electronic device can recognize the iris image last captured by the under-screen camera to obtain the last pupil position; the electronic device can also control the under-screen camera to first capture a frame of pre-recognition iris image, recognize the pre-recognition iris image to obtain the current pupil position, and compare the current pupil position with the last pupil position to determine the wearer's pupil movement distance and pupil movement direction.
• the electronic device can also judge the pupil movement distance included in the pupil movement data; if the pupil movement distance is greater than the distance threshold, the electronic device can continue to perform the following steps 420 to 430 to move the under-screen camera; if the pupil movement distance is less than or equal to the distance threshold, the electronic device can directly control the under-screen camera to capture the iris image without moving the under-screen camera.
  • the head-mounted display can control the driving device to move the pupil movement distance in the above-mentioned pupil movement direction, thereby driving the movement of the under-screen camera.
  • the wearer's eyes may not always be looking at the display screen.
  • the wearer's eyes may move or rotate, causing the iris in the iris image captured by the under-screen camera to deform.
  • the electronic device can drive the under-screen camera to move through the driving device. After the under-screen camera is moved, the under-screen camera can capture the iris relatively completely, and the iris can be located in the center of the iris image, so that iris features can be extracted from the iris image, which is beneficial to improving the accuracy of iris feature extraction.
• the electronic device can detect the pupil movement data before capturing the iris image, and control the driving device to drive the under-screen camera to move according to the pupil movement data, so that the iris image captured by the under-screen camera captures the wearer's pupil more completely, improving the accuracy of pupil feature extraction.
• if the head-mounted display includes an eye-tracking sensor, the eye movement data detected by the eye-tracking sensor can be used to determine the pupil movement data; if the head-mounted display does not include an eye-tracking sensor, the iris image can be used to determine the pupil movement data.
  • Figure 5 is a schematic flowchart of another identity verification method disclosed in one embodiment.
  • the identity verification method is applied to any of the aforementioned electronic devices. As shown in Figure 5, the method may include the following steps:
• the electronic device can perform feature point matching on each frame of eye image, determine the pixels at corresponding positions between any two frames of eye images based on the feature point matching results, and sum and average the pixels at the corresponding positions. That is to say, for the overlapping pixels in each frame, the electronic device can sum and average their pixel values; for the non-overlapping pixels, the head-mounted display can directly set the pixel value of each non-overlapping pixel as the pixel value of the pixel at the corresponding position in the fused image, thereby obtaining a fused image stitched from multiple frames of eye images.
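The overlap-averaging fusion described above can be sketched as follows. For simplicity the frames are assumed to be already aligned to a common size, each with a coverage mask marking which pixels that camera actually observed; the disclosure instead derives corresponding pixels from feature point matching, so the masks here are a stand-in assumption:

```python
import numpy as np

def fuse_eye_images(frames, masks):
    """Fuse aligned, same-size eye images: pixels covered by several
    frames are averaged; pixels covered by only one frame are copied
    directly into the fused image."""
    frames = np.asarray(frames, dtype=np.float64)
    masks = np.asarray(masks, dtype=np.float64)
    weight = masks.sum(axis=0)               # frames covering each pixel
    total = (frames * masks).sum(axis=0)
    fused = np.zeros_like(weight)
    covered = weight > 0
    fused[covered] = total[covered] / weight[covered]
    return fused

a = np.array([[10.0, 0.0], [30.0, 0.0]])
b = np.array([[20.0, 40.0], [0.0, 0.0]])
ma = np.array([[True, False], [True, False]])
mb = np.array([[True, True], [False, False]])
fused = fuse_eye_images([a, b], [ma, mb])
# overlapping pixel (0, 0) -> (10 + 20) / 2 = 15; single-frame pixels copied
```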
  • the electronic device may also determine the iris image corresponding to the reference viewing angle as the reference image.
  • the iris image corresponding to the reference viewing angle is captured by the under-screen camera when the wearer is looking at the display screen.
  • the electronic device can use the reference image as a reference to convert at least two frames of eye images to the reference perspective to obtain at least two frames of eye images corresponding to the reference perspective.
• Each frame of eye image may include at least part of the iris, so the head-mounted display can perform feature point matching between each frame of eye image and the reference image, and compute a registration matrix between each frame of eye image and the reference image based on the feature point matching results. The electronic device can multiply the registration matrix corresponding to each frame of eye image by the pixel values of the pixels included in that frame, so as to convert each frame of eye image to the reference viewing angle.
  • the electronic device can fuse at least two frames of eye images corresponding to the reference viewing angle to obtain a fused image.
• fusing the at least two frames of eye images corresponding to the reference viewing angle may include summing and averaging the pixel values of the overlapping pixels in each eye image, and directly setting the pixel values of the non-overlapping pixels as the pixel values of the pixels at the corresponding positions in the fused image.
  • Converting the perspective of multiple frames of eye images before fusion can improve the accuracy of image fusion, preserve the image details in each frame of eye images to the greatest extent, and help improve the accuracy of extracting eye features.
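The viewing-angle conversion can be illustrated with a minimal sketch. Here the registration matrix is treated as a 3×3 homography applied to pixel coordinates via inverse mapping with nearest-neighbour sampling, which is one common way to apply such a matrix (an interpretation; the matrix values and image below are illustrative only, and a real system would estimate the matrix from matched feature points):

```python
import numpy as np

def warp_to_reference(image, H):
    """Map `image` into the reference view through a 3x3 registration
    matrix H (reference coords -> source coords), using inverse mapping
    with nearest-neighbour sampling."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            src = H @ np.array([x, y, 1.0])
            sx, sy = src[0] / src[2], src[1] / src[2]
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y, x] = image[iy, ix]
    return out

img = np.arange(16.0).reshape(4, 4)
shift = np.array([[1.0, 0.0, 1.0],   # reference (x, y) samples source (x+1, y)
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
warped = warp_to_reference(img, shift)
```

After each frame is warped into the reference view this way, the overlap-averaging fusion can be applied directly.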
  • the electronic device extracting iris features from the iris image may include the following steps:
• the head-mounted display can also perform a non-linear transformation on the boundary grayscale values in the grayscale iris image to enhance the boundary grayscale values.
• the head-mounted display can recognize the iris in the grayscale iris image to locate the image position of the iris in the grayscale iris image, and construct the iris region centered on the image position of the iris, using a preset radius, or a preset length and preset width.
• the head-mounted display can use the Hough transform method, calculus methods, or the like to compute the grayscale values in the iris region in order to remove noise.
• the implementation of locating the outer edge of the iris may include: obtaining the gradient value corresponding to each grayscale value in the iris region; it should be noted that the calculation of the gradient values corresponding to the grayscale values may be performed in step S1 when the iris image is converted into a grayscale image.
• the center of the pupil circle is used as the starting point of the search for the outer edge of the iris, and the pupil radius is used as the starting radius of the search. The circumferential difference algorithm is used to compute the integral of the gradient values along the arc formed by the starting point and the starting radius; the length of the starting radius is gradually increased, and the gradient integral of the arc formed after increasing the starting radius is further computed; the outer edge of the iris is determined based on the trend of change of the gradient integral.
• the electronic device can accurately identify the outer edge of the iris so as to locate an accurate iris region from the iris image, and then extract the iris features from the iris region, which can reduce the possibility of image information corresponding to other parts of the eyeball being mistakenly extracted as iris features and improve the extraction accuracy of the iris features.
  • eye features are extracted from the fused image.
• the electronic device can reduce the number of repetitions of extracting eye features from the eye images and reduce redundant information in the eye features, which helps reduce the amount of computation and speeds up the response of user identity verification.
  • the head-mounted display can also perform one or more of the following operations:
• Operation 1: The electronic device can send a vibration command to the handle connected to the head-mounted display, so that the handle vibrates at a preset frequency after receiving the vibration command.
• a head-mounted display is used in conjunction with a handle (controller) to form a complete VR or MR interactive device.
  • the vibration command can be used to control the vibration of the handle, so that the illegal user can neither use the head-mounted display nor the handle normally.
  • the vibration of the handle will also cause discomfort to the illegal user.
• Operation 2: The electronic device can also control the head-mounted display to play a beeping sound through the speaker, causing auditory discomfort to the illegal user so that the illegal user stops using the head-mounted display as soon as possible.
• Operation 3: The electronic device can trigger the head-mounted display to scan surrounding Bluetooth devices or Ultra Wide Band (UWB) devices, and send the device identification of the head-mounted display to any connectable Bluetooth device or UWB device.
• when any target device receives the device identification of the head-mounted display through a Bluetooth or UWB connection with the head-mounted display, it can send the device identification of the head-mounted display and the positioning information of the target device to the server.
• the server can query the terminal device bound to the head-mounted display based on the device identification of the head-mounted display, and send the positioning information of the target device to the terminal device, so that the user of the terminal device can determine the position of the head-mounted display through the positioning information of the target device and find the lost head-mounted display based on the positioning information.
• in summary, illegal users can be warned through operations such as handle vibration or buzzer output; alternatively, the possible positioning information of the head-mounted display can be sent to the bound terminal device to facilitate finding the lost head-mounted display.
• although the steps in the flowcharts of Figures 3 to 5 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in Figures 3 to 5 may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with the sub-steps or at least part of the stages of other steps.
  • the embodiment of the present disclosure also provides an identity verification device.
• This device embodiment corresponds to the foregoing method embodiments. For ease of reading, this device embodiment does not repeat the details of the foregoing method embodiments one by one, but it should be clear that the device in this embodiment can correspondingly implement all the contents of the foregoing method embodiments.
  • FIG. 6 is a structural block diagram of an identity verification device provided by an embodiment of the present disclosure.
• the identity verification device 600 provided by this embodiment includes: a first photography module 610, a second photography module 620, an extraction module 630, and a verification module 640.
  • the first photography module 610 is configured to capture the iris image of the wearer of the head-mounted display through the under-screen camera of the head-mounted display;
  • the second photography module 620 is configured to capture the eyes of the wearer of the head-mounted display through at least two eye cameras to obtain at least two frames of eye images;
  • the extraction module 630 is configured to extract the user identity feature of the wearer based on the iris image and at least two frames of eye images;
  • the verification module 640 is configured to verify the wearer's identity based on the user's identity characteristics, and determine whether to respond to the user operation input by the wearer based on the verification results.
  • the first shooting module 610 may also be configured to control the display screen to stop emitting light when the under-screen camera captures an iris image.
  • the head-mounted display further includes a driving device; the driving device is provided on the back of the display screen and connected to the under-screen camera, and the driving device is configured to control the movement of the under-screen camera;
  • the first photographing module 610 may include: a detection unit, a driving unit and a photographing unit.
  • a detection unit configured to detect pupil movement data of the wearer
  • the driving unit is configured to drive the under-screen camera to move according to the pupil movement data through the driving device, so that the shooting angle of the under-screen camera faces the wearer's pupil;
  • the shooting unit is configured to capture the wearer's iris image through the under-screen camera after the under-screen camera moves.
• the detection unit is configured to, after detecting the wearer's pupil movement data, determine whether the pupil movement distance included in the pupil movement data is greater than the distance threshold; if so, trigger the driving unit to perform the operation of driving, through the driving device, the under-screen camera to move according to the pupil movement data; if not, trigger the shooting unit to perform the operation of capturing the wearer's iris image through the under-screen camera.
  • the head-mounted display further includes an eye-tracking sensor, and the eye-tracking sensor is provided on one side of the display screen;
  • the pupil movement data includes: pupil movement distance and pupil movement direction;
• a detection unit configured to determine the current pupil position of the wearer of the head-mounted display from the eye movement data currently collected by the eye-tracking sensor; obtain the wearer's previous pupil position determined from the eye movement data last collected by the eye-tracking sensor; and compare the current pupil position with the previous pupil position to determine the wearer's pupil movement distance and pupil movement direction.
  • the identity verification device may also include: a fusion module;
• the fusion module is configured to fuse the at least two frames of eye images to obtain the fused image, after the second shooting module 620 captures the at least two frames of eye images and before the extraction module extracts the wearer's user identity features based on the iris image and the at least two frames of eye images;
  • the extraction module 630 may also be configured to extract iris features from the iris image and extract eye features from the fused image; and fuse the iris features and eye features to obtain user identity features.
  • the fusion module may include: a determination unit, a conversion unit, and a fusion unit.
  • the determination unit may be configured to determine the iris image corresponding to the reference viewing angle as the reference image; the iris image corresponding to the reference viewing angle is captured by the under-screen camera when the wearer is looking at the display screen;
  • the conversion unit may be configured to convert at least two frames of eye images to the reference perspective based on the reference image to obtain at least two frames of eye images corresponding to the reference perspective;
  • the fusion unit may be configured to fuse at least two frames of eye images corresponding to the reference perspective to obtain a fused image.
  • the head-mounted display further includes multiple infrared light sources; the infrared light sources are arranged in a ring shape around the display screen.
  • the second photography module 620 may also be configured to emit light through multiple infrared light sources when the under-screen camera captures an iris image.
  • the identity verification device may further include: a communication module.
• the communication module can be configured to send a vibration command to the handle when the identity verification result produced by the verification module 640 is that the identity of the wearer is illegal; the handle is communicatively connected to the head-mounted display, and the vibration command is used to instruct the handle to vibrate at a preset frequency.
  • the identity verification device provided in this embodiment can execute the identity verification method provided in the above method embodiment. Its implementation principles and technical effects are similar and will not be described again here.
  • Each module in the above-mentioned identity verification device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • an electronic device is provided.
  • the electronic device may be a terminal device, and its internal structure diagram may be as shown in FIG. 7 .
  • the electronic device includes a processor, memory, communication interface, database, display screen and input device connected through a system bus.
• the processor of the electronic device is configured to provide computing and control capabilities.
  • the memory of the electronic device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions.
  • This internal memory provides an environment for the execution of an operating system and computer-readable instructions in a non-volatile storage medium.
• the communication interface of the electronic device is configured to communicate with an external terminal in a wired or wireless manner.
• the wireless mode can be implemented through WIFI, an operator network, near field communication (NFC), or other technologies.
• when executed by the processor, the computer-readable instructions implement the identity verification method provided in the above embodiments.
  • the display screen of the electronic device may be a liquid crystal display or an electronic ink display.
• the input device of the electronic device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, trackpad, or mouse.
• Figure 7 is only a block diagram of a partial structure related to the disclosed solution, and does not constitute a limitation on the computer equipment to which the disclosed solution is applied. Specific computer equipment may include more or fewer parts than shown, combine certain parts, or have a different arrangement of parts.
  • the identity verification device provided by the present disclosure can be implemented in the form of computer-readable instructions, and the computer-readable instructions can be run on the electronic device as shown in Figure 7.
  • Each program module that makes up the electronic device can be stored in the memory of the electronic device.
  • the computer-readable instructions composed of each program module cause the processor to execute the steps in the identity verification method of various embodiments of the present disclosure described in this specification.
• in one embodiment, an electronic device is provided, including a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the processor, they cause the one or more processors to execute the steps of the identity verification method described in the above method embodiments.
  • the electronic device provided in this embodiment can implement the identity verification method provided in the above method embodiment. Its implementation principle and technical effect are similar, and will not be described again here.
• One or more non-volatile storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of any of the identity verification methods described above.
  • the computer-readable instructions stored on the computer-readable storage medium provided by this embodiment can implement the identity verification method provided by the above method embodiment.
  • the implementation principles and technical effects are similar and will not be described again here.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory.
• By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
  • the identity verification method provided by the present disclosure can verify the identity of the wearer of the head-mounted display and improve the use safety of the head-mounted display.
  • eye images captured by eye cameras with different viewing angles can include different image contents, making the eye images richer in image information, thereby improving the accuracy of user identity features extracted based on eye images.
• iris features and eye features can complement each other, making it more difficult for illegal users to forge identity information, which can greatly improve the security of the head-mounted display.

Abstract

An identity verification method, apparatus, electronic device, and storage medium. The method includes: capturing an iris image of the wearer of a head-mounted display through an under-screen camera of the head-mounted display, where the head-mounted display further includes a display screen and at least two eye cameras, the under-screen camera is arranged on the back of the display screen, and the at least two eye cameras are arranged around the display screen with different shooting angles; capturing the wearer's eyes through the at least two eye cameras respectively to obtain at least two frames of eye images; extracting the wearer's user identity features based on the iris image and the at least two frames of eye images; and verifying the wearer's identity based on the user identity features and determining whether to respond to user operations input by the wearer based on the verification result. The method can thus verify the identity of the wearer of the head-mounted display and improve the security of using the head-mounted display.

Description

Identity Verification Method, Apparatus, Electronic Device, and Storage Medium
Cross-Reference to Related Applications
This disclosure claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on July 28, 2022, with application number 202210900796.0 and titled "Identity Verification Method, Apparatus, Electronic Device, and Storage Medium", the entire contents of which are incorporated into this disclosure by reference.
Technical Field
The present disclosure relates to an identity verification method, apparatus, electronic device, and storage medium.
Background
With the development of Virtual Reality (VR) and Mixed Reality (MR) technologies, head-mounted displays are widely used in VR and MR application scenarios. After wearing a head-mounted display, a user can watch various content through the display screen of the head-mounted display and receive information input visually, achieving an immersive user experience.
However, it has been found in practice that in some application scenarios there is a risk of the head-mounted display being used illegally. For example, a head-mounted display is at risk of being stolen. Once a head-mounted display is lost, anyone can use it, which causes property loss to the original owner and also carries a certain risk of privacy leakage.
Summary
(1) Technical Problem to Be Solved
In the prior art, head-mounted displays are at risk of being stolen, and the security of using head-mounted displays is low.
(2) Technical Solution
According to various embodiments disclosed in the present disclosure, an identity verification method, apparatus, electronic device, and storage medium are provided.
An identity verification method, the method including:
capturing an iris image of the wearer of a head-mounted display through an under-screen camera of the head-mounted display; the head-mounted display includes the under-screen camera and at least two eye cameras; the head-mounted display further includes a display screen, the under-screen camera is arranged on the back of the display screen, the at least two eye cameras are arranged around the display screen, and the at least two eye cameras have different shooting angles;
capturing the eye regions of the wearer of the head-mounted display through the at least two eye cameras respectively to obtain at least two frames of eye images;
extracting the wearer's user identity features based on the iris image and the at least two frames of eye images; and
verifying the wearer's identity based on the user identity features, and determining whether to respond to user operations input by the wearer based on the verification result.
As an optional implementation of an embodiment of the present disclosure, the head-mounted display further includes a driving device; the driving device is arranged on the back of the display screen, connected to the under-screen camera, and configured to control the movement of the under-screen camera; and capturing the iris image of the wearer of the head-mounted display through the under-screen camera includes: detecting the wearer's pupil movement data; driving, through the driving device, the under-screen camera to move according to the pupil movement data so that the shooting angle of the under-screen camera directly faces the wearer's pupil; and, after the under-screen camera has moved, capturing the wearer's iris image through the under-screen camera.
As an optional implementation of an embodiment of the present disclosure, the head-mounted display further includes an eye-tracking sensor arranged on one side of the display screen; the pupil movement data includes a pupil movement distance and a pupil movement direction; and detecting the wearer's pupil movement data includes: determining the current pupil position of the wearer of the head-mounted display from the eye movement data currently collected by the eye-tracking sensor; obtaining the wearer's previous pupil position determined from the eye movement data last collected by the eye-tracking sensor; and comparing the current pupil position with the previous pupil position to determine the wearer's pupil movement distance and pupil movement direction.
As an optional implementation of an embodiment of the present disclosure, the method further includes: fusing the at least two frames of eye images to obtain a fused image; and extracting the wearer's user identity features based on the iris image and the at least two frames of eye images includes: extracting iris features from the iris image and extracting eye features from the fused image; and fusing the iris features and the eye features to obtain the user identity features.
As an optional implementation of an embodiment of the present disclosure, fusing the at least two frames of eye images to obtain a fused image includes: determining the iris image corresponding to a reference viewing angle as a reference image, where the iris image corresponding to the reference viewing angle is captured by the under-screen camera while the wearer is gazing at the display screen; converting, with the reference image as a benchmark, the at least two frames of eye images to the reference viewing angle to obtain at least two frames of eye images corresponding to the reference viewing angle; and fusing the at least two frames of eye images corresponding to the reference viewing angle to obtain the fused image.
As an optional implementation of an embodiment of the present disclosure, the head-mounted display further includes multiple infrared light sources arranged in a ring around the display screen; and the method further includes: emitting light through the multiple infrared light sources while the under-screen camera captures the iris image.
As an optional implementation of an embodiment of the present disclosure, the method further includes: if the identity verification result is that the wearer's identity is illegal, sending a vibration command to a handle; the handle is communicatively connected to the head-mounted display, and the vibration command is used to instruct the handle to vibrate at a preset frequency.
As an optional implementation of an embodiment of the present disclosure, the method further includes: controlling the display screen to stop emitting light while the under-screen camera captures the iris image.
As an optional implementation of an embodiment of the present disclosure, the head-mounted display further includes an eye-tracking sensor arranged on one side of the display screen; the method further includes: determining, based on the eye movement data collected by the eye-tracking sensor, whether the wearer of the head-mounted display is gazing at the display screen; capturing the iris image of the wearer through the under-screen camera of the head-mounted display includes: if it is determined that the wearer is gazing at the display screen, capturing the iris image of the wearer through the under-screen camera of the head-mounted display; and capturing the eye regions of the wearer through the at least two eye cameras respectively to obtain at least two frames of eye images includes: if it is determined that the wearer is gazing at the display screen, capturing the eye regions of the wearer through the at least two eye cameras respectively to obtain the at least two frames of eye images.
As an optional implementation of an embodiment of the present disclosure, verifying the wearer's identity based on the user identity features includes: obtaining the legal identity features corresponding to a legal user of the head-mounted display, and matching the user identity features with the legal identity features; if the user identity features match the legal identity features, determining that the wearer's identity is legal; if the user identity features do not match the legal identity features, the wearer's identity is illegal.
As an optional implementation of an embodiment of the present disclosure, verifying the wearer's identity based on the user identity features includes: classifying the user identity features using a trained classification model, where the classification model is trained using the legal identity features corresponding to a legal user; determining, through the classification model, whether the user identity features and the legal identity features belong to the same class; if so, determining that the wearer's identity is legal; if not, determining that the wearer's identity is illegal.
As an optional implementation of an embodiment of the present disclosure, the user operations input by the wearer include: a power-on operation to start the head-mounted display; a trigger operation to open an application built into the head-mounted display; a login operation for a game application in the head-mounted display; or a login operation for an entertainment application such as audio/video playback software.
An identity verification apparatus, including:
a first photography module configured to capture an iris image of the wearer of a head-mounted display through an under-screen camera of the head-mounted display; the head-mounted display includes the under-screen camera and at least two eye cameras; the head-mounted display further includes a display screen, the under-screen camera is arranged on the back of the display screen, the at least two eye cameras are arranged around the display screen, and the at least two eye cameras have different shooting angles;
a second photography module configured to capture the eyes of the wearer of the head-mounted display through the at least two eye cameras respectively to obtain at least two frames of eye images;
an extraction module configured to extract the wearer's user identity features based on the iris image and the at least two frames of eye images; and
a verification module configured to verify the wearer's identity based on the user identity features, and to determine whether to respond to user operations input by the wearer based on the verification result.
A computer device, including a memory and one or more processors, the memory being configured to store computer-readable instructions; when executed by the processor, the computer-readable instructions cause the one or more processors to execute the steps of any of the identity verification methods described above.
One or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to execute the steps of any of the identity verification methods described above.
Other features and advantages of the present disclosure will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the present disclosure. The objects and other advantages of the present disclosure are realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings; the details of one or more embodiments of the present disclosure are set forth in the drawings and the description below.
To make the above objects, features, and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1A is an application scenario diagram of an identity verification method provided by one or more embodiments of the present disclosure;
Figure 1B is a schematic structural diagram of a head-mounted display provided by one or more embodiments of the present disclosure;
Figure 2 is an example arrangement diagram of infrared light sources provided by one or more embodiments of the present disclosure;
Figure 3 is a step flowchart of an identity verification method provided by one or more embodiments of the present disclosure;
Figure 4 is a step flowchart of another identity verification method provided by one or more embodiments of the present disclosure;
Figure 5 is a step flowchart of another identity verification method provided by one or more embodiments of the present disclosure;
Figure 6 is a structural block diagram of an identity verification apparatus in one or more embodiments of the present disclosure;
Figure 7 is an internal structure diagram of a computer device in one or more embodiments of the present disclosure.
Detailed Description
To understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure are further described below. It should be noted that, as long as there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in other ways different from those described here; obviously, the embodiments in the specification are only a part, not all, of the embodiments of the present disclosure.
The terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of the objects. For example, a first camera and a second camera are used to distinguish different cameras, not to describe a specific order of the cameras.
In the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as preferred or more advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise stated, "multiple" means two or more.
The embodiments of the present disclosure disclose an identity verification method, apparatus, electronic device, and storage medium that can verify the identity of the wearer of a head-mounted display and improve the security of using the head-mounted display. Detailed descriptions are given below.
Please refer to Figure 1A, which is a schematic diagram of an application scenario of an identity information collection method disclosed in one embodiment. As shown in Figure 1A, a first operating environment is given, which may include a head-mounted display 101, a terminal device 102, and a server 103.
A user may wear the head-mounted display 101 so that the head-mounted display 101 acquires data. Here, the head-mounted display 101 does not have data processing capability; after acquiring data, it can transmit the data to the terminal device 102 through a short-range communication technology.
The terminal device 102 may include electronic devices such as a smart TV, a three-dimensional visual display device, a large projection system, a multimedia playback device, a mobile phone, a tablet computer, a game console, and a PC (Personal Computer). The terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data.
The server 103 is configured to provide background services for the terminal device 102, so that the terminal device 102 processes the received data transmitted by the head-mounted display 101, thereby completing the identity information collection method provided by the present disclosure. Optionally, the server 103 may also generate corresponding control instructions according to the data processing results, and the control instructions may be sent to the terminal device 102 and/or the head-mounted display 101 respectively, so as to control the terminal device 102 and/or the head-mounted display 101. For example, the server 103 may be a background server. The server 103 may be one server, a server cluster composed of multiple servers, or a cloud computing service center. Optionally, the server 103 provides background services for multiple terminal devices 102 at the same time.
A second operating environment is also given, which may include the head-mounted display 101 and the terminal device 102.
Here, the head-mounted display 101 may include the various types of devices described above. The head-mounted display 101 does not have data processing capability; after acquiring data, it can transmit the data to the terminal device 102 through a short-range communication technology.
The terminal device 102 may include the various types of electronic devices described above. The terminal device 102 can receive the data transmitted by the head-mounted display 101 and process the data to complete the identity information collection method provided by the present disclosure. Optionally, the terminal device 102 may also generate corresponding control instructions according to the data processing results, and the control instructions may be sent to the head-mounted display 101 to control the head-mounted display 101.
A third operating environment is also given, which includes only the head-mounted display 101. Here, the head-mounted display 101 has not only data acquisition capability but also data processing capability; that is, the processor in the head-mounted display 101 can call program code to implement the functions of the identity information collection method provided by the present disclosure. Of course, the program code may be stored in a computer storage medium; thus, the head-mounted display includes at least a processor and a storage medium.
Please refer to Figure 1B, which is a schematic structural diagram of a head-mounted display disclosed in an embodiment of the present disclosure. It should be noted that the head-mounted display may also include components not shown in Figure 1B, such as a fixing strap, which can fix the head-mounted display on the wearer's head.
As shown in Figure 1B, the head-mounted display 10 may include two display screens 20, corresponding to the left eye and the right eye of the human body respectively. An under-screen camera 70 may be arranged on the back of each display screen 20. The under-screen camera 70 is hidden behind the display screen 20 and can capture, through the display screen 20, the scene in front of the display screen.
The head-mounted display 10 may further include at least two eye cameras, and one or more eye cameras may be arranged around the periphery of each display screen 20. When the head-mounted display 10 is worn, the display screen 20 may output a digitally rendered virtual picture or a mixed picture blending virtuality and reality.
Exemplarily, as shown in Figure 1B, the head-mounted display 10 may include multiple eye cameras, for example seven eye cameras, namely eye camera device 31, eye camera device 32, eye camera device 33, eye camera device 34, eye camera device 35, eye camera device 36, and eye camera device 37, arranged in the four directions of up, down, left, and right of each display screen 20. In addition, the middle position between the two display screens 20 may share the same eye camera 32.
Optionally, the head-mounted display 10 may also be provided with multiple infrared light sources 40. The infrared light source 40 may be any element capable of emitting infrared light, such as an infrared LED bead, but is not limited thereto. The multiple infrared light sources 40 may be arranged in a ring around the display screen 20.
Exemplarily, please refer to Figure 2, which is an example arrangement diagram of infrared light sources disclosed in one embodiment. As shown in Figure 2, the multiple infrared light sources 40 may be arranged in a ring to form an annular band. As shown in Figure 1B, the periphery of each display screen 20 may correspond to one annular band composed of infrared light sources 40.
The infrared light sources 40 may be used to emit infrared light to provide supplementary lighting for the under-screen camera. Using infrared light as the supplementary light source can prevent the illumination from dazzling the human eye. The multiple infrared light sources 40 form a ring, which can improve the uniformity of the supplementary lighting so that the infrared light can be evenly irradiated from different angles, avoiding local highlights. Exemplarily, the infrared light sources 40 may be used to emit infrared light with a wavelength of 850 nanometers.
Optionally, the head-mounted display 10 may also be provided with an eye-tracking sensor 50. The eye-tracking sensor 50 may be arranged on one side of the display screen 20, for example on the left or right side of the display screen 20. The eye-tracking sensor may be used to collect eye movement data, such as pupil position and gaze point. Exemplarily, the eye-tracking sensor may be an electrode-type eye movement sensor, which detects muscle movements around the eye through the electrodes in the sensor to obtain eye movement data.
Optionally, the head-mounted display 10 may further include a driving device 60, which is connected to the under-screen camera and arranged on the back of the display screen 20. The driving device 60 may be used to control the movement of the under-screen camera. For example, the driving device 60 may include a connecting rod and a motor; the connecting rod connects the motor and the under-screen camera; when the motor rotates, it drives the connecting rod, and the movement of the connecting rod drives the movement of the under-screen camera.
Based on the aforementioned head-mounted display, the identity verification method disclosed in the embodiments of the present disclosure is described below. It should be noted that the following description takes one of the display screens included in the head-mounted display as an example to describe how the head-mounted display controls the under-screen camera, eye cameras, eye-tracking sensor, driving device, and other components corresponding to that display screen, and how the data collected by each component is processed. When the head-mounted display includes two or more display screens, the control and data processing methods for the remaining display screens and related components can refer to the following content and are not repeated.
Please refer to Figure 3, which is a schematic flowchart of an identity verification method disclosed in an embodiment of the present disclosure. The identity verification method can be applied to any electronic device with data processing capability, including but not limited to any of the aforementioned head-mounted displays, a terminal device communicatively connected to the head-mounted display, or a background server providing background services for a terminal device communicatively connected to the head-mounted display. When the method is executed by the head-mounted display, it can be executed by a component with computing capability such as the central processing unit (CPU) or microcontroller unit (MCU) of the head-mounted display. As shown in Figure 3, the method may include the following steps:
310. Capture an iris image of the wearer of the head-mounted display through the under-screen camera.
In the embodiments of the present disclosure, when the head-mounted display is worn by the wearer, the display screen may directly face the wearer's eyes, the under-screen camera may perform a shooting operation, and the captured image may serve as the wearer's iris image. It should be noted that the iris image includes the iris and may also include parts such as eyelashes and eyelids, depending on the field of view of the under-screen camera and the distance between the wearer's eyes and the under-screen camera.
Optionally, while the under-screen camera captures the iris image, the display screen may be controlled to stop emitting light, so as to increase the probability of external light passing through the display screen to the under-screen camera and enhance the light transmittance of the display screen, thereby increasing the clarity of the captured iris image.
320. Capture the eyes of the wearer of the head-mounted display through the at least two eye cameras respectively to obtain at least two frames of eye images.
In the embodiments of the present disclosure, the image obtained after each camera performs a shooting operation can serve as one frame of eye image. It should be noted that, since the eye cameras are arranged at different positions on the head-mounted display, the eye images captured by different eye cameras may include some of the same eye parts as well as some different eye parts.
Exemplarily, with reference to Figure 2, the eye image captured by the eye camera 33 shown in Figure 2 may include the eyeball and the lower eyelid of the eye but not the upper eyelid; the eye image captured by the eye camera 31 shown in Figure 2 may include the eyeball and the upper eyelid but not the lower eyelid. The eye image captured by the eye camera 32 may include the corners of the left and right eyes, as well as the bridge of the nose between the two eyes.
It should be noted that the aforementioned steps 310 and 320 have no necessary logical order; steps 310 and 320 may also be executed simultaneously, which is not specifically limited.
In some embodiments, if the head-mounted display includes the aforementioned eye-tracking sensor, the electronic device may determine, based on the eye movement data collected by the eye-tracking sensor, whether the wearer of the head-mounted display is gazing at the display screen. If it is determined that the wearer is gazing at the display screen, the aforementioned steps 310 and 320 may be executed.
330. Extract the wearer's user identity features based on the iris image and the at least two frames of eye images.
In the embodiments of the present disclosure, the user identity features may be any one or more identity features extracted from the iris image and the eye images. For example, the user identity features may include iris features extracted from the iris image; or eye features related to the eye contour, eyebrows, eyelashes, etc., extracted from the eye images; or both the iris features extracted from the iris image and the eye features related to the eye contour, eyebrows, eyelashes, etc., extracted from the eye images.
As an optional implementation, the electronic device may extract identity features from each of the at least two frames of eye images. Alternatively, the electronic device may first fuse the at least two frames of eye images and extract identity features from the resulting fused image, which is not specifically limited.
In the embodiments of the present disclosure, after the electronic device extracts the iris features from the iris image and the eye features from the eye images, the electronic device may fuse the iris features and the eye features by concatenation (Concat) or addition (Add) to form the wearer's user identity features.
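The Concat/Add fusion mentioned above can be sketched as follows, assuming the iris and eye features are plain one-dimensional vectors (an assumption; the disclosure does not specify the feature representation):

```python
import numpy as np

def fuse_features(iris_feat, eye_feat, mode="concat"):
    """Fuse an iris feature vector and an eye feature vector into one
    user identity feature, either by concatenation ("concat") or, when
    the two vectors share a dimension, element-wise addition ("add")."""
    iris_feat = np.asarray(iris_feat, dtype=np.float64)
    eye_feat = np.asarray(eye_feat, dtype=np.float64)
    if mode == "concat":
        return np.concatenate([iris_feat, eye_feat])
    if mode == "add":
        return iris_feat + eye_feat
    raise ValueError("unknown fusion mode: " + mode)

identity = fuse_features([0.1, 0.2], [0.3, 0.4])            # length-4 vector
summed = fuse_features([0.1, 0.2], [0.3, 0.4], mode="add")  # length-2 vector
```

Concatenation keeps both feature sets intact at the cost of a larger vector; addition keeps the dimension fixed but requires equal-length features.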
340. Verify the wearer's identity based on the user identity features, and determine whether to respond to user operations input by the wearer based on the verification result.
As an optional implementation, the electronic device may obtain the legal identity features corresponding to a legal user of the head-mounted display, and match the user identity features extracted from the images with the legal identity features; if the user identity features match the legal identity features, it can be determined that the identity of the wearer of the head-mounted display is legal; if they do not match, it can be determined that the identity of the wearer of the head-mounted display is illegal.
As another optional implementation, the electronic device may also train a classification model with the legal identity features corresponding to the legal user, then classify the user identity features using the trained classification model, and determine through the classification model whether the user identity features and the legal identity features belong to the same class; if so, the identity of the wearer of the head-mounted display can be determined to be legal; if not, it can be determined to be illegal. The classification model may include, but is not limited to, a Support Vector Machine (SVM) or a deep neural network.
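As an illustration of the feature-matching branch above, here is a minimal sketch using cosine similarity with a fixed threshold. Both the metric and the threshold value are assumptions, since the disclosure leaves the matching rule open (and alternatively allows a trained classifier such as an SVM):

```python
import numpy as np

def verify_identity(user_feat, legal_feat, threshold=0.9):
    """Match the extracted user identity feature against the stored
    legal identity feature; returns True (identity legal) when cosine
    similarity reaches the threshold."""
    u = np.asarray(user_feat, dtype=np.float64)
    v = np.asarray(legal_feat, dtype=np.float64)
    sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return sim >= threshold  # True -> respond to user operations

legal = [0.2, 0.8, 0.5]
same_user = verify_identity([0.21, 0.79, 0.52], legal)   # near-identical feature
other_user = verify_identity([0.9, -0.1, 0.05], legal)   # dissimilar feature
```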
In the embodiments of the present disclosure, if the verification result is that the identity of the wearer of the head-mounted display is legal, the head-mounted display may respond to user operations input by the wearer; if the verification result is that the identity of the wearer is illegal, the head-mounted display may refuse to respond to user operations input by the wearer.
Exemplarily, the user operations input by the wearer may include: a power-on operation to start the head-mounted display, a trigger operation to open an application built into the head-mounted display, or a login operation for a game application or an entertainment application such as audio/video playback software in the head-mounted display, which is not specifically limited.
It can be seen that, in the foregoing embodiments, the electronic device can capture the iris image through the under-screen camera of the head-mounted display, and capture the wearer's eye images through the two or more eye cameras arranged around the screen. The two or more eye cameras have different shooting angles, and the eye images they capture contain different image content that can characterize the user's identity, making the image information in the eye images richer, which is conducive to improving the accuracy of user identity verification. The user identity features extracted from the iris image and the at least two frames of eye images can characterize the wearer's identity, and can therefore be used to determine whether the wearer is legitimate so as to verify the user's identity, which can improve the security of using the head-mounted display.
Moreover, when verifying the identity of the wearer of the head-mounted display, the electronic device can fuse the iris features in the iris image and the eye features in the eye images corresponding to multiple viewing angles to verify the wearer's identity. The iris features and eye features complement each other, which increases the difficulty of forgery for illegal users and can greatly improve the security of the head-mounted display.
Please refer to Figure 4, which is a schematic flowchart of another identity verification method disclosed in one embodiment; the identity verification method is applied to an electronic device. In the embodiments of the present disclosure, the head-mounted display includes the under-screen camera, at least two eye cameras, and the driving device described in the above embodiments. As shown in Figure 4, the method may include the following steps:
410. Detect the wearer's pupil movement data.
In the embodiments of the present disclosure, the pupil movement data may include a pupil movement distance and a pupil movement direction.
As an optional implementation, the electronic device may include the aforementioned eye-tracking sensor, which may be used to collect eye movement data; the eye movement data may include pupil position, gaze duration, the number of eye saccades, pupil dilation, and other data. The electronic device may process the eye movement data to obtain the wearer's pupil movement data. Specifically, the electronic device obtains the wearer's previous pupil position determined from the eye movement data last collected by the eye-tracking sensor, and determines the wearer's current pupil position from the eye movement data currently collected by the eye-tracking sensor; the electronic device may compare the current pupil position with the previous pupil position to determine the wearer's pupil movement distance and pupil movement direction.
As another optional implementation, the electronic device may recognize the iris image last captured by the under-screen camera to obtain the previous pupil position; the electronic device may also control the under-screen camera to first capture a frame of pre-recognition iris image, recognize the pre-recognition iris image to obtain the current pupil position, and compare the current pupil position with the previous pupil position to determine the wearer's pupil movement distance and pupil movement direction.
Optionally, after obtaining the pupil movement data in step 410, the electronic device may also judge the pupil movement distance included in the pupil movement data; if the pupil movement distance is greater than a distance threshold, the electronic device may continue to execute the following steps 420 to 430 to move the under-screen camera; if the pupil movement distance is less than or equal to the distance threshold, the electronic device may directly control the under-screen camera to capture the iris image without moving the under-screen camera.
420. Drive, through the driving device, the under-screen camera to move according to the pupil movement data so that the shooting angle of the under-screen camera directly faces the wearer's pupil.
430. After the under-screen camera has moved, capture the wearer's iris image through the under-screen camera.
In the aforementioned step 420, the head-mounted display may control the driving device to move by the pupil movement distance in the aforementioned pupil movement direction, thereby driving the movement of the under-screen camera.
When the head-mounted display is worn, the wearer's eyes may not always be gazing at the display screen; the wearer's eyes may move or rotate, causing the iris in the iris image captured by the under-screen camera to deform, which is unfavorable for iris feature extraction. Therefore, in the embodiments of the present disclosure, the electronic device can drive the under-screen camera to move through the driving device. After the under-screen camera has moved, it can capture the iris relatively completely, and the iris can be located in the center of the iris image, making it easier to extract iris features from the iris image, which is conducive to improving the accuracy of iris feature extraction.
440. Capture the eyes of the wearer of the head-mounted display through the at least two eye cameras respectively, to obtain at least two eye images.

450. Extract the wearer's user identity features according to the iris image and the at least two eye images.

460. Verify the wearer's identity according to the user identity features, and determine, according to the verification result, whether to respond to a user operation input by the wearer.

For implementations of steps 440-460, refer to the foregoing embodiments; the details are not repeated here.

It can be seen that, in the foregoing embodiments, before capturing the iris image, the electronic device may detect the pupil movement data and, according to it, control the driving device to move the under-display camera, so that the iris image captured by the camera contains the wearer's pupil relatively completely, improving the accuracy of pupil feature extraction. In addition, if the head-mounted display includes an eye-tracking sensor, the pupil movement data may be determined from the eye-movement data detected by that sensor; if it does not, the pupil movement data may be determined from the iris images.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of another identity verification method disclosed in an embodiment; the method is applied to any of the aforementioned electronic devices. As shown in FIG. 5, the method may include the following steps:

510. Capture an iris image of the wearer of the head-mounted display through the under-display camera.

520. Capture the eyes of the wearer of the head-mounted display through the at least two eye cameras respectively, to obtain at least two eye images.

530. Fuse the at least two eye images to obtain a fused image.

In an optional implementation, the electronic device may perform feature-point matching on the eye images, determine the corresponding pixels between any two eye images from the matching results, and average the pixel values at corresponding positions. That is, for pixels that overlap across the images, the electronic device may average their pixel values; for pixels that do not overlap, the head-mounted display may directly set their pixel values as the values of the corresponding pixels in the fused image, thereby obtaining a fused image stitched together from the multiple eye images.

In another optional implementation, the electronic device may determine the iris image corresponding to a reference viewing angle as a reference image, where the iris image corresponding to the reference viewing angle is captured by the under-display camera while the wearer gazes at the display screen.

Then, using the reference image as a baseline, the electronic device may transform the at least two eye images into the reference viewing angle, obtaining at least two eye images corresponding to that angle. Since each eye image may contain at least part of the iris, the head-mounted display may perform feature-point matching between each eye image and the reference image, and compute a registration matrix between each eye image and the reference image from the matching result. The electronic device may apply the registration matrix corresponding to each eye image to the pixels of that image, so as to transform each eye image into the reference viewing angle.

After the viewing-angle transformation, the electronic device may fuse the at least two eye images corresponding to the reference viewing angle to obtain the fused image. Fusing them may include averaging the pixel values of overlapping pixels across the images and directly setting the values of non-overlapping pixels as the values of the corresponding pixels in the fused image.
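The fusion rule above (average where images overlap, copy through where they do not) can be sketched as follows. Representing each registered eye image as a sparse mapping from pixel position to gray value is an illustrative simplification; real images would be dense arrays already warped into the reference viewing angle.

```python
def fuse_eye_images(images):
    # Each image is a dict mapping (row, col) -> gray value. A pixel present
    # in several aligned images is averaged; a pixel present in only one
    # image is copied through to the fused image unchanged.
    sums = {}
    counts = {}
    for img in images:
        for pos, value in img.items():
            sums[pos] = sums.get(pos, 0) + value
            counts[pos] = counts.get(pos, 0) + 1
    return {pos: sums[pos] / counts[pos] for pos in sums}
```

With dense arrays the same rule becomes an element-wise mean weighted by per-pixel coverage masks; the sparse form just keeps the sketch short.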
Transforming the multiple eye images into a common viewing angle before fusing them improves the accuracy of image fusion and preserves the image detail of each eye image to the greatest extent, which helps improve the accuracy of eye feature extraction.
540. Extract iris features from the iris image.

In the embodiments of the present disclosure, extracting iris features from the iris image by the electronic device may include the following steps:

S1. Convert the iris image into a grayscale image to obtain a grayscale iris image. Optionally, after obtaining the grayscale iris image and before performing step S2 below, the head-mounted display may apply a nonlinear transformation to the boundary gray values in the grayscale iris image to enhance them.

S2. Locate the iris in the grayscale iris image and identify the iris region within it. The head-mounted display may recognize the iris in the grayscale iris image to locate the iris's image position, and construct the iris region centered on that position, with a preset radius or with a preset length and width.

S3. Remove eyelashes, eyelids, and light spots from the iris region, and denoise the region. The head-mounted display may compute over the gray values in the iris region using methods such as the Hough transform or integro-differential methods to remove noise.

S4. Coarsely locate the pupil within the iris region using a radial symmetry transform, obtaining the pupil's center position and radius.

S5. Determine the pupil boundary from the pupil's center and radius, and in the region around the pupil boundary, finely locate the outer edge of the iris using a circumferential difference algorithm. Locating the outer edge of the iris may be implemented as follows: obtain the gradient value corresponding to each gray value in the iris region (it should be noted that this gradient computation may be performed while converting the iris image to grayscale in step S1). After obtaining the gradient values, take the pupil center as the starting point of the outer-edge search and the pupil radius as the starting radius, and use the circumferential difference algorithm to compute the integral of the gradient values along the circle formed by the starting point and starting radius; gradually increase the radius and compute the gradient integral of each enlarged circle in turn; and determine the outer edge of the iris from the trend of the gradient integral.

S6. Determine the outer edge of the pupil from the pupil's center position and radius, and correct the iris region according to the outer edges of the pupil and the iris, obtaining a corrected iris region.

S7. Extract the iris features from the corrected iris region.

That is, in the embodiments of the present disclosure, the electronic device can accurately identify the outer edge of the iris so as to locate an accurate iris region in the iris image, and then extract iris features from that region. This reduces the chance that image information from other parts of the eyeball is mistakenly extracted as iris features, improving the precision of iris feature extraction.
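The outer-edge search of step S5 resembles a Daugman-style integro-differential operator: integrate the gradient along circles of growing radius and keep the radius where the integral jumps most. The sketch below is a hedged reconstruction under that reading; the sampling density and the sparse gradient map are assumptions introduced for illustration, not the patent's implementation.

```python
import math

def circle_gradient_sum(gradient, cx, cy, r, samples=64):
    # Sum of gradient magnitudes sampled along a circle of radius r
    # centered at (cx, cy); `gradient` maps integer (x, y) -> magnitude.
    total = 0.0
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        x = int(round(cx + r * math.cos(angle)))
        y = int(round(cy + r * math.sin(angle)))
        total += gradient.get((x, y), 0.0)
    return total

def locate_iris_edge(gradient, cx, cy, r_start, r_max):
    # Grow the radius outward from the pupil and keep the radius where the
    # circular gradient integral increases the most: that jump marks the
    # iris outer boundary.
    best_r, best_jump = r_start, float("-inf")
    prev = circle_gradient_sum(gradient, cx, cy, r_start)
    for r in range(r_start + 1, r_max + 1):
        curr = circle_gradient_sum(gradient, cx, cy, r)
        jump = curr - prev
        if jump > best_jump:
            best_jump, best_r = jump, r
        prev = curr
    return best_r
```

A production implementation would smooth the integrals before differencing and work on a dense gradient image; the sparse dict keeps the sketch self-contained.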
550. Extract eye features from the fused image.

In the embodiments of the present disclosure, because the eye features are extracted from the fused image, the electronic device can reduce the number of repeated eye-feature extractions from individual eye images and reduce redundant information in the eye features, which lowers the computational load and speeds up the response of user identity verification.

560. Fuse the iris features and the eye features to obtain the user identity features.

570. Verify the wearer's identity according to the user identity features, and determine, according to the verification result, whether to respond to a user operation input by the wearer.

For implementations of steps 560-570, refer to the foregoing embodiments; the details are not repeated here.
In addition, in some possible embodiments, when the verification result indicates that the identity of the wearer of the head-mounted display is illegitimate, the head-mounted display may also perform one or more of the following operations:

Operation 1: The electronic device may send a vibration instruction to a handle communicatively connected to the head-mounted display, so that the handle vibrates at a preset frequency after receiving the instruction. Generally, a head-mounted display is used together with a handle, forming a complete VR or MR interaction kit. When the head-mounted display determines that the wearer is an unauthorized user, it can make the handle vibrate via the vibration instruction, so that the unauthorized user can use neither the head-mounted display nor the handle normally. Moreover, when the unauthorized user holds the handle, its vibration causes additional discomfort.

Operation 2: The electronic device may also control the head-mounted display to play a buzzing sound through its speaker, causing auditory discomfort so that the unauthorized user stops using the head-mounted display as soon as possible.

Operation 3: The electronic device may trigger the head-mounted display to scan for nearby Bluetooth or ultra-wideband (UWB) devices and send the head-mounted display's device identifier to connectable Bluetooth or UWB devices. After any target device receives the identifier over its Bluetooth or UWB connection with the head-mounted display, it may send the head-mounted display's device identifier together with the target device's positioning information to a server. The server may look up the terminal device bound to the head-mounted display according to the device identifier and forward the target device's positioning information to that terminal device, so that the terminal device's user can determine the head-mounted display's location from the positioning information and thereby find the lost head-mounted display.
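Operation 3 describes a small relay protocol: HMD → nearby device → server → bound terminal. A minimal sketch follows; the message fields, the dictionary-based binding table, and all names are hypothetical, since the patent specifies no wire format.

```python
def build_beacon(hmd_device_id):
    # Payload the head-mounted display broadcasts to nearby
    # Bluetooth/UWB devices when its wearer fails verification.
    return {"type": "hmd_locator", "device_id": hmd_device_id}

def relay_to_server(beacon, target_location, binding_table):
    # A nearby target device forwards the HMD identifier plus its own
    # positioning information. The server looks up the terminal bound to
    # that HMD and returns the notification it would push to the owner.
    terminal = binding_table.get(beacon["device_id"])
    if terminal is None:
        return None  # no terminal bound to this HMD
    return {"terminal": terminal,
            "hmd": beacon["device_id"],
            "location": target_location}
```

In a real deployment the binding table would live in the server's database and the notification would go out over a push channel; the dict lookup stands in for both.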
It can be seen that, in the foregoing embodiments, the unauthorized user can be warned through operations such as handle vibration or buzzer output; alternatively, the probable positioning information of the head-mounted display can be sent to the bound terminal device to help find the head-mounted display.
It should be understood that although the steps in the flowcharts of FIGS. 3-5 are shown sequentially in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 3-5 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; nor is their execution order necessarily sequential, as they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, as an implementation of the foregoing methods, an embodiment of the present disclosure further provides an identity verification apparatus. This apparatus embodiment corresponds to the foregoing method embodiments; for ease of reading, it does not repeat the details of the method embodiments one by one, but it should be clear that the apparatus of this embodiment can correspondingly implement all the content of the foregoing method embodiments.

FIG. 6 is a structural block diagram of the identity verification apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the identity verification apparatus 600 of this embodiment includes: a first capture module 610, a second capture module 620, an extraction module 630, and a verification module 640.

The first capture module 610 is configured to capture an iris image of the wearer of the head-mounted display through the under-display camera of the head-mounted display;

the second capture module 620 is configured to capture the eyes of the wearer of the head-mounted display through the at least two eye cameras respectively, to obtain at least two eye images;

the extraction module 630 is configured to extract the wearer's user identity features according to the iris image and the at least two eye images;

the verification module 640 is configured to verify the wearer's identity according to the user identity features, and to determine, according to the verification result, whether to respond to a user operation input by the wearer.
As an optional implementation of the embodiments of the present disclosure, the first capture module 610 may further be configured to control the display screen to stop emitting light while the under-display camera captures the iris image.

As an optional implementation, the head-mounted display further includes a driving device; the driving device is disposed on the back of the display screen, is connected to the under-display camera, and is configured to control the movement of the under-display camera.

The first capture module 610 may include: a detection unit, a driving unit, and a capture unit.

The detection unit is configured to detect the wearer's pupil movement data;

the driving unit is configured to drive the under-display camera through the driving device to move according to the pupil movement data, so that the shooting angle of the under-display camera directly faces the wearer's pupil;

the capture unit is configured to capture the wearer's iris image through the under-display camera after the camera has moved.

As an optional implementation, the detection unit is configured to, after detecting the wearer's pupil movement data, judge whether the pupil movement distance included in the data exceeds a distance threshold; if so, it triggers the driving unit to perform the operation of driving the under-display camera through the driving device to move according to the pupil movement data; if not, it triggers the capture unit to perform the operation of capturing the wearer's iris image through the under-display camera.

As an optional implementation, the head-mounted display further includes an eye-tracking sensor disposed on one side of the display screen; the pupil movement data includes a pupil movement distance and a pupil movement direction.

The detection unit is configured to determine the wearer's current pupil position from the eye-movement data currently collected by the eye-tracking sensor; to obtain the wearer's previous pupil position, determined from the eye-movement data last collected by the sensor; and to compare the current pupil position with the previous one to determine the wearer's pupil movement distance and direction.
As an optional implementation, the identity verification apparatus may further include a fusion module.

The fusion module is configured to fuse the at least two eye images into a fused image after the second capture module 620 captures them and before the extraction module extracts the wearer's user identity features from the iris image and the at least two eye images.

The extraction module 630 may further be configured to extract iris features from the iris image, to extract eye features from the fused image, and to fuse the iris features and the eye features into the user identity features.

As an optional implementation, the fusion module may include: a determination unit, a transformation unit, and a fusion unit.

The determination unit may be configured to determine the iris image corresponding to a reference viewing angle as a reference image, where that iris image is captured by the under-display camera while the wearer gazes at the display screen;

the transformation unit may be configured to transform, using the reference image as a baseline, the at least two eye images into the reference viewing angle, obtaining at least two eye images corresponding to that angle;

the fusion unit may be configured to fuse the at least two eye images corresponding to the reference viewing angle into the fused image.

As an optional implementation, the head-mounted display further includes a plurality of infrared light sources arranged in a ring around the display screen.

The second capture module 620 may further be configured to emit light through the plurality of infrared light sources while the under-display camera captures the iris image.

As an optional implementation, the identity verification apparatus may further include a communication module.

The communication module may be configured to send a vibration instruction to a handle when the verification result produced by the verification module 640 indicates that the wearer's identity is illegitimate; the handle is communicatively connected to the head-mounted display, and the vibration instruction instructs the handle to vibrate at a preset frequency.
The identity verification apparatus of this embodiment can perform the identity verification method of the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here. The modules of the identity verification apparatus may be implemented wholly or partly in software, hardware, or a combination of both. Each module may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In one embodiment, an electronic device is provided; the electronic device may be a terminal device, and its internal structure may be as shown in FIG. 7. The electronic device includes a processor, a memory, a communication interface, a database, a display screen, and an input apparatus connected through a system bus. The processor of the electronic device provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions. The communication interface communicates with external terminals in a wired or wireless manner; wireless communication may be implemented through Wi-Fi, a carrier network, near-field communication (NFC), or other technologies. When executed by the processor, the computer-readable instructions implement the identity verification method of the foregoing embodiments. The display screen may be a liquid-crystal display or an electronic-ink display; the input apparatus may be a touch layer covering the display screen, a key, trackball, or touchpad on the device housing, or an external keyboard, touchpad, mouse, or the like.

Those skilled in the art will understand that the structure shown in FIG. 7 is merely a block diagram of the part of the structure relevant to the solution of the present disclosure and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, the identity verification apparatus provided by the present disclosure may be implemented in the form of computer-readable instructions that can run on an electronic device as shown in FIG. 7. The memory of the electronic device may store the program modules constituting the apparatus, and the computer-readable instructions composed of these program modules cause the processor to execute the steps of the identity verification methods of the embodiments of the present disclosure described in this specification.

In one embodiment, an electronic device is provided, including a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the processors, the one or more processors perform the steps of the identity verification method of the foregoing method embodiments.

The electronic device of this embodiment can implement the identity verification method of the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.

One or more non-volatile storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of any of the identity verification methods described above.

The computer-readable instructions stored on the computer-readable storage medium of this embodiment can implement the identity verification method of the foregoing method embodiments; their implementation principles and technical effects are similar and are not repeated here.

Those of ordinary skill in the art will understand that all or part of the procedures of the foregoing method embodiments may be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed may include the procedures of the foregoing method embodiments. Any reference to memory, databases, or other media used in the embodiments of the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope described in this specification.

The above embodiments express only several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present disclosure, all of which fall within the protection scope of the present disclosure. Therefore, the protection scope of this disclosure patent shall be subject to the appended claims.

Industrial Applicability

The identity verification method provided by the present disclosure can verify the identity of the wearer of a head-mounted display, improving the security of its use. In addition, the eye images captured by eye cameras at different viewing angles may contain different image content, making the image information in the eye images richer, which improves the accuracy of the user identity features extracted from them. Moreover, when verifying the identity of the wearer of the head-mounted display, the iris features and the eye features complement each other, making forgery by unauthorized users harder and greatly improving the security of the head-mounted display.

Claims (15)

  1. An identity verification method, characterized in that the method comprises:
    capturing an iris image of a wearer of a head-mounted display through an under-display camera of the head-mounted display; the head-mounted display comprises the under-display camera and at least two eye cameras; the head-mounted display further comprises a display screen, the under-display camera is disposed on the back of the display screen, the at least two eye cameras are disposed around the display screen, and the at least two eye cameras have different shooting angles;
    capturing eye regions of the wearer of the head-mounted display respectively through the at least two eye cameras, to obtain at least two eye images;
    extracting user identity features of the wearer according to the iris image and the at least two eye images;
    verifying the identity of the wearer according to the user identity features, and determining, according to a verification result, whether to respond to a user operation input by the wearer.
  2. The method according to claim 1, characterized in that the head-mounted display further comprises a driving device; the driving device is disposed on the back of the display screen, is connected to the under-display camera, and is used to control movement of the under-display camera; and capturing the iris image of the wearer of the head-mounted display through the under-display camera comprises:
    detecting pupil movement data of the wearer;
    driving, through the driving device, the under-display camera to move according to the pupil movement data, so that a shooting angle of the under-display camera directly faces the wearer's pupil;
    after the under-display camera has moved, capturing the iris image of the wearer through the under-display camera.
  3. The method according to claim 2, characterized in that the head-mounted display further comprises an eye-tracking sensor disposed on one side of the display screen; the pupil movement data comprises a pupil movement distance and a pupil movement direction; and detecting the pupil movement data of the wearer comprises:
    determining a current pupil position of the wearer of the head-mounted display from eye-movement data currently collected by the eye-tracking sensor;
    obtaining a previous pupil position of the wearer determined from eye-movement data last collected by the eye-tracking sensor;
    comparing the current pupil position with the previous pupil position to determine the pupil movement distance and the pupil movement direction of the wearer.
  4. The method according to claim 1, characterized in that the method further comprises:
    fusing the at least two eye images to obtain a fused image;
    and extracting the user identity features of the wearer according to the iris image and the at least two eye images comprises:
    extracting iris features from the iris image, and extracting eye features from the fused image;
    fusing the iris features and the eye features to obtain the user identity features.
  5. The method according to claim 4, characterized in that fusing the at least two eye images to obtain the fused image comprises:
    determining an iris image corresponding to a reference viewing angle as a reference image; the iris image corresponding to the reference viewing angle is captured by the under-display camera while the wearer gazes at the display screen;
    transforming, using the reference image as a baseline, the at least two eye images into the reference viewing angle, to obtain at least two eye images corresponding to the reference viewing angle;
    fusing the at least two eye images corresponding to the reference viewing angle to obtain the fused image.
  6. The method according to any one of claims 1-5, characterized in that the head-mounted display further comprises a plurality of infrared light sources arranged in a ring around the display screen; and the method further comprises:
    emitting light through the plurality of infrared light sources while the under-display camera captures the iris image.
  7. The method according to any one of claims 1-5, characterized in that the method further comprises:
    if the identity verification result indicates that the wearer's identity is illegitimate, sending a vibration instruction to a handle; the handle is communicatively connected to the head-mounted display, and the vibration instruction is used to instruct the handle to vibrate at a preset frequency.
  8. The method according to claim 1, characterized in that the method further comprises:
    controlling the display screen to stop emitting light while the under-display camera captures the iris image.
  9. The method according to claim 1, characterized in that the head-mounted display further comprises an eye-tracking sensor disposed on one side of the display screen; and the method further comprises:
    judging, according to eye-movement data collected by the eye-tracking sensor, whether the wearer of the head-mounted display gazes at the display screen;
    and capturing the iris image of the wearer of the head-mounted display through the under-display camera of the head-mounted display comprises:
    if it is judged that the wearer gazes at the display screen, capturing the iris image of the wearer of the head-mounted display through the under-display camera of the head-mounted display;
    and capturing the eye regions of the wearer of the head-mounted display respectively through the at least two eye cameras, to obtain the at least two eye images, comprises:
    if it is judged that the wearer gazes at the display screen, capturing the eye regions of the wearer of the head-mounted display respectively through the at least two eye cameras, to obtain the at least two eye images.
  10. The method according to claim 1, characterized in that verifying the identity of the wearer according to the user identity features comprises:
    obtaining legitimate identity features corresponding to a legitimate user of the head-mounted display, and matching the user identity features against the legitimate identity features;
    if the user identity features match the legitimate identity features, determining that the wearer's identity is legitimate;
    if the user identity features do not match the legitimate identity features, determining that the wearer's identity is illegitimate.
  11. The method according to claim 1, characterized in that verifying the identity of the wearer according to the user identity features comprises:
    classifying the user identity features using a trained classification model; the classification model is obtained by training with legitimate identity features corresponding to a legitimate user;
    judging, through the classification model, whether the user identity features and the legitimate identity features belong to the same class;
    if so, determining that the wearer's identity is legitimate; if not, determining that the wearer's identity is illegitimate.
  12. The method according to claim 1, characterized in that the user operation input by the wearer comprises: a power-on operation that starts the head-mounted display; or
    a trigger operation that opens an application built into the head-mounted display; or
    a login operation for a game application on the head-mounted display; or
    a login operation for an entertainment application such as media playback software.
  13. An identity verification apparatus, comprising:
    a first capture module, configured to capture an iris image of a wearer of a head-mounted display through an under-display camera of the head-mounted display; the head-mounted display comprises the under-display camera and at least two eye cameras; the head-mounted display further comprises a display screen, the under-display camera is disposed on the back of the display screen, the at least two eye cameras are disposed around the display screen, and the at least two eye cameras have different shooting angles;
    a second capture module, configured to capture the eyes of the wearer of the head-mounted display respectively through the at least two eye cameras, to obtain at least two eye images;
    an extraction module, configured to extract user identity features of the wearer according to the iris image and the at least two eye images;
    a verification module, configured to verify the identity of the wearer according to the user identity features, and to determine, according to a verification result, whether to respond to a user operation input by the wearer.
  14. A computer device, comprising: a memory and one or more processors, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors perform the steps of the identity verification method according to any one of claims 1-12.
  15. One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the identity verification method according to any one of claims 1-12.
PCT/CN2022/118800 2022-07-28 2022-09-14 Identity verification method and apparatus, electronic device, and storage medium WO2024021251A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210900796.0A 2022-07-28 2022-07-28 Identity verification method and apparatus, electronic device, and storage medium
CN202210900796.0 2022-07-28

Publications (1)

Publication Number Publication Date
WO2024021251A1 true WO2024021251A1 (zh) 2024-02-01

Family

ID=83770443

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118800 WO2024021251A1 (zh) 2022-07-28 2022-09-14 身份校验方法、装置、电子设备以及存储介质

Country Status (2)

Country Link
CN (1) CN115270093A (zh)
WO (1) WO2024021251A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205485072U * 2016-03-04 2016-08-17 北京加你科技有限公司 A head-mounted display device
CN106056092A * 2016-06-08 2016-10-26 华南理工大学 Iris- and pupil-based gaze estimation method for head-mounted devices
CN107392192A * 2017-09-19 2017-11-24 信利光电股份有限公司 Identity recognition method and apparatus, and multi-camera module
CN108960937A * 2018-08-10 2018-12-07 陈涛 Eye-tracking-based advertisement push method for AR smart glasses applications
CN109190509A * 2018-08-13 2019-01-11 阿里巴巴集团控股有限公司 Identity recognition method and apparatus, and computer-readable storage medium
CN111091103A * 2019-12-23 2020-05-01 深圳职业技术学院 A new face recognition method based on deep reinforcement learning
US20210181514A1 * 2018-07-19 2021-06-17 Magic Leap, Inc. Content interaction driven by eye metrics
CN216352422U * 2021-09-22 2022-04-19 北京鹰瞳科技发展股份有限公司 Multimodal image acquisition apparatus


Also Published As

Publication number Publication date
CN115270093A (zh) 2022-11-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22952692

Country of ref document: EP

Kind code of ref document: A1