WO2018192313A1 - Method and device for iris recognition - Google Patents

Method and device for iris recognition (一种虹膜识别的方法和装置)

Info

Publication number
WO2018192313A1
WO2018192313A1 (PCT/CN2018/078092; CN2018078092W)
Authority
WO
WIPO (PCT)
Prior art keywords
iris
information
unit
user
instruction
Prior art date
Application number
PCT/CN2018/078092
Other languages
English (en)
French (fr)
Inventor
黄建东
Original Assignee
上海耕岩智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海耕岩智能科技有限公司
Publication of WO2018192313A1 publication Critical patent/WO2018192313A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/197 - Matching; Classification

Definitions

  • the present invention relates to the field of electronic device control, and in particular, to a method and device for iris recognition.
  • touch display panels have been widely used in devices that require human-computer interaction interfaces, such as operating screens of industrial computers, tablet computers, touch screens of smart phones, and the like.
  • In wearable VR/AR (virtual reality or augmented reality) devices, identity recognition methods that match the user's sensory experience are not yet as mature as the biometric technologies of mobile devices, such as the well-established fingerprint recognition.
  • Identity recognition usually combines the user's biometric information with operation instructions so that such operations can be carried out through biometric identification, and iris recognition is an important technique of this kind.
  • the iris is the texture of the muscle fiber tissue of the colored part of the human eyeball.
  • Iris recognition captures the iris feature information of the eyeball, recognizes it to infer the user's identity and needs, and responds accordingly, so that the device can be controlled by identifying iris feature information.
  • In the prior art, an infrared camera outside the display screen of a mobile device is generally used to capture eyeball features. Because such a camera is usually placed independently at the edge of the device (for example, at the top of a mobile phone), in wearable electronic products its position deviates from the optical axis of eyeball imaging.
  • As a result, the existing device structure cannot accurately capture the user's iris information, and recognition efficiency is low and recognition accuracy is poor.
  • The technical problem to be solved by the present invention is to provide a technical solution for iris recognition, addressing the facts that the camera outside the display screen of a wearable or mobile device is offset from the optical axis of eyeball imaging, that the iris information of the user's eyeball therefore cannot be captured accurately, timely, or completely, that recognition efficiency is low, and that the user's sensory experience is poor.
  • The technical solution adopted by the present invention is a method for iris recognition. The method is applied to an iris recognition device that includes a display unit and a sensing unit; the display unit is provided with an iris recognition area, and the sensing unit is located below the iris recognition area. The sensing unit includes an infrared sensing layer configured to emit infrared light when receiving a first trigger signal and, when receiving a second trigger signal, to enter an infrared light detection state and sense the infrared light signal reflected by the user's iris so as to capture the user's iris information. The method includes the following steps:
  • presetting operation configuration information, where the operation configuration information includes a correspondence relationship between iris information and operation instructions;
  • capturing the iris information of the user on the iris recognition area; determining whether the captured iris information matches the preset iris information in the operation configuration information, and if so, executing the operation instruction corresponding to the iris information, otherwise not executing the operation instruction.
  • step of “presetting the operation configuration information” includes:
  • the operation instruction identifier list includes identifiers corresponding to one or more operation instructions, and each operation instruction identifier corresponds to an operation instruction;
  • receiving the user's selection instruction for an operation instruction identifier, establishing a correspondence between the operation instruction corresponding to the selected operation instruction identifier and the captured iris information, and saving it to the operation configuration information.
  • Further, the operation instruction is a screen switching instruction, and the step of "determining whether the captured iris information matches the preset iris information in the operation configuration information, and if so, executing the operation instruction corresponding to the iris information, otherwise not executing the operation instruction" includes: determining whether the captured iris information matches the iris information corresponding to the screen switching instruction, and if so, switching the screen, otherwise not switching the screen.
  • Further, the operation instruction is a payment instruction, and the same determining step includes: determining whether the captured iris information matches the iris information corresponding to the payment instruction, and if so, executing the payment instruction so that the payment succeeds, otherwise not executing the payment instruction so that the payment fails.
  • Further, the operation instruction is a user identity information registration instruction, and the same determining step includes: determining whether the captured iris information matches the iris information corresponding to the user identity information registration instruction, and if so, executing the registration instruction so that registration succeeds, otherwise not executing it so that registration fails.
  • the sensing unit comprises a TFT image sensing array film
  • the infrared sensing layer comprises an array formed by infrared photodiodes.
  • the step of “determining whether the captured iris information matches the preset iris information in the operation configuration information” specifically includes:
  • a feature value is calculated from the captured iris information and compared with the feature value of the iris information preset in the operation configuration information; when the error is less than a preset value, a match is determined, otherwise a mismatch is determined.
  • Further, the method includes the step of: issuing prompt information when it is determined that no preset iris information in the operation configuration information matches the captured iris information.
  • the prompt information includes one or more of voice prompt information, image prompt information, light prompt information, and video prompt information.
  • the display unit comprises an AMOLED display or an LCD liquid crystal display.
  • a backlight unit is further disposed under the sensing unit, and the sensing unit is disposed between the backlight unit and the LCD liquid crystal display.
  • the iris recognition area includes a plurality of iris recognition sub-areas, and a sensing unit is disposed under each iris recognition sub-area.
  • the device further includes a sensing unit control circuit, the method further comprising:
  • receiving the user's activation instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns on the sensing unit below that iris recognition sub-area; and receiving the user's closing instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns off the sensing unit below that iris recognition sub-area.
  • Further, the inventors provide an apparatus for iris recognition. The apparatus includes a display unit and a sensing unit; the display unit is provided with an iris recognition area, and the sensing unit is located below the iris recognition area. The sensing unit includes an infrared sensing layer configured to emit infrared light when receiving a first trigger signal and, when receiving a second trigger signal, to enter an infrared light detection state and sense the infrared light signal reflected by the user's iris so as to capture the user's iris information. The apparatus further includes an operation information setting unit, a determining unit, and a processing unit.
  • the operation information setting unit is configured to preset operation configuration information, where the operation configuration information includes a correspondence relationship between the iris information and the operation instruction;
  • the sensing unit is configured to capture iris information of the user on the iris recognition area, and the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if yes, the processing unit is configured to execute The operation command corresponding to the iris information, otherwise the processing unit does not execute the operation instruction.
  • the device includes an operation instruction receiving unit, and the “operation information setting unit is configured to preset operation configuration information” includes:
  • the operation instruction receiving unit is configured to receive a user setting command, and the display unit is configured to display an iris recognition area;
  • the sensing unit is configured to capture and save the iris information of the user
  • the display unit is configured to display an operation instruction identifier list;
  • the operation instruction identifier list includes an identifier corresponding to one or more operation instructions, and each operation instruction identifier corresponds to an operation instruction;
  • the operation instruction receiving unit is configured to receive a selection instruction of the operation instruction identifier by the user, and the processing unit is configured to establish a correspondence between the operation instruction corresponding to the selected operation instruction identifier and the captured iris information, and save the operation configuration information.
  • Further, the operation instruction is a screen switching instruction, and "the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if so, the processing unit is configured to execute the operation instruction corresponding to the iris information, otherwise the processing unit does not execute the operation instruction" includes:
  • the determining unit is configured to determine whether the iris information of the captured user matches the iris information corresponding to the screen switching instruction, and if yes, the processing unit is configured to switch the screen, otherwise the processing unit does not switch the screen.
  • Further, the operation instruction is a payment instruction, and "the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if so, the processing unit is configured to execute the operation instruction corresponding to the iris information, otherwise the processing unit does not execute the operation instruction" includes:
  • the determining unit is configured to determine whether the captured iris information matches the iris information corresponding to the payment instruction, and if yes, the processing unit executes the payment instruction, and the payment is successful, otherwise the processing unit does not execute the payment instruction, and the payment fails.
  • Further, the operation instruction is a user identity information registration instruction, and "the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if so, the processing unit is configured to execute the operation instruction corresponding to the iris information, otherwise the processing unit does not execute the operation instruction" includes:
  • the determining unit is configured to determine whether the captured iris information matches the iris information corresponding to the user identity information registration instruction, and if so, the processing unit executes the user identity registration instruction and the registration succeeds, otherwise the processing unit does not execute the user identity registration instruction and the registration fails.
  • the sensing unit comprises a TFT image sensing array film
  • the infrared sensing layer comprises an array formed by infrared photodiodes.
  • the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and specifically includes:
  • the determining unit is configured to calculate the feature value according to the captured iris information, and compare with the feature value of the iris information preset in the operation configuration information; when the error is less than the preset value, it is determined to be matched, otherwise the determination is not matched.
  • processing unit is further configured to issue the prompt information when the determining unit determines that the preset iris information does not match the captured iris information in the operation configuration information.
  • the prompt information includes one or more of voice prompt information, image prompt information, light prompt information, and video prompt information.
  • the display unit comprises an AMOLED display or an LCD liquid crystal display.
  • a backlight unit is further disposed under the sensing unit, and the sensing unit is disposed between the backlight unit and the LCD liquid crystal display.
  • the iris recognition area includes a plurality of iris recognition sub-areas, and a sensing unit is disposed under each iris recognition sub-area.
  • Further, the apparatus includes a sensing unit control circuit and an operation instruction receiving unit. The operation instruction receiving unit is configured to receive the user's activation instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns on the sensing unit below that iris recognition sub-area; the operation instruction receiving unit is also configured to receive the user's closing instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns off the sensing unit below that iris recognition sub-area.
  • The beneficial effects of the above technical solution are as follows: by providing a sensing unit under the iris recognition area of the display unit, the projection of the user's iris imaged through the optical device falls on the iris recognition area, and the center of the sensing unit can be placed at the optical axis of eyeball imaging or at a paraxial position. Compared with a structure in which a camera is placed independently at an edge position outside the display screen, the present invention can capture the user's iris feature information in time and compare it with the preset iris information so as to execute the operation instruction corresponding to the iris information, which effectively improves the accuracy of iris recognition and enhances the user experience.
  • In addition, the sensing unit can emit infrared light toward the iris of the user's eyeball and receive the infrared light signal reflected by the iris to capture the iris feature information, so the device can capture the user's iris feature information without an additional infrared light source (such as an infrared LED device).
  • Furthermore, the sensing unit is disposed under the display unit, which effectively reduces the overall thickness of the device compared with a structure in which a camera protrudes independently outside the display screen area, making the wearable device or mobile device thinner and better suited to flexible wearable or mobile devices and to market demands.
  • FIG. 1 is a flow chart of a method for iris recognition according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for iris recognition according to another embodiment of the present invention;
  • FIG. 3 is a schematic diagram of an apparatus for iris recognition according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of an apparatus for iris recognition according to another embodiment of the present invention;
  • FIG. 5 is a schematic diagram of an application scenario of an iris recognition device according to another embodiment of the present invention;
  • Figure 6 is a schematic view of a conventional sensing unit
  • FIG. 7 is a schematic diagram of a sensing unit according to an embodiment of the present invention.
  • An operation information setting unit
  • An operation instruction receiving unit
  • FIG. 1 is a flowchart of a method for identifying iris according to an embodiment of the present invention.
  • The method is applied to an iris recognition device, which is an electronic device having a display screen or a touch display screen, such as a smart mobile device (a mobile phone, a tablet computer, or a personal digital assistant), a personal computer, an industrial equipment computer, or the like.
  • The device can also be combined with an optical imaging device disposed between the display unit and the user's eyes (i.e., above the display screen). As shown in FIG. 5, the user's eyeball is first imaged in the optical imaging device, and the resulting projection falls within the iris recognition area on the display unit, so that the iris information of the user's eye can be captured by the sensing unit below the iris recognition area.
  • the effect of simulating a VR/AR device can be achieved by the cooperation between the optical imaging device and the display unit.
  • the device includes a display unit and a sensing unit.
  • The method is applied to an iris recognition device that includes a display unit and a sensing unit; the display unit is provided with an iris recognition area, and the sensing unit is located below the iris recognition area. The sensing unit includes an infrared sensing layer configured to emit infrared light when receiving the first trigger signal and, when receiving the second trigger signal, to enter an infrared light detection state and sense the infrared light signal reflected by the user's iris so as to capture the user's iris information.
  • the display unit includes an AMOLED display or an LCD liquid crystal display; in other embodiments, the display unit may also be other electronic components having a display function.
  • the method includes the following steps:
  • the process proceeds to step S101 to preset operation configuration information, where the operation configuration information includes a correspondence relationship between the iris information and the operation instruction.
  • the operation instruction includes one or more of a text operation instruction, an image operation instruction, a video operation instruction, and an application operation instruction.
  • the text operation instruction includes a selected text instruction, a delete text instruction, a copy text instruction, and the like;
  • the image operation instruction includes a selected image instruction, a copy image instruction, a cut image instruction, an image deletion instruction, a switching image screen, and the like;
  • the video operation instructions include intercepting, pausing, saving, deleting, fast forwarding, rewinding, zooming, and volume adjustment of the video;
  • the application operation instructions include starting, deleting, selecting, and moving a software application (such as a mobile phone app).
  • The iris information in the operation configuration information is iris information that the user has entered and stored in advance, and each piece of iris information can be matched with a plurality of operation instructions, so that after the captured user iris information passes authentication, the user can perform multiple operations on the device.
  • The operation configuration information may be stored in a storage unit of the device, such as the memory of a mobile phone or the hard disk of a computer, or may be stored in a storage unit of a server. When the operation configuration information needs to be acquired, the device only needs to establish a communication connection with the server and obtain the pre-stored operation configuration information from it; the communication connection may be wired or wireless. A minimal sketch of such a structure follows.
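The correspondence between enrolled iris information and operation instructions can be pictured as a small lookup structure. The sketch below is only an illustration under assumed names (IrisEntry, OperationConfig, bind); it is not the configuration format defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class IrisEntry:
    """One enrolled iris template and the operation instructions bound to it."""
    template: list[float]                                    # encoded iris feature vector
    instructions: list[str] = field(default_factory=list)   # e.g. ["switch_screen", "pay"]

@dataclass
class OperationConfig:
    """Operation configuration information: user -> iris template -> operation instructions."""
    entries: dict[str, IrisEntry] = field(default_factory=dict)

    def bind(self, user_id: str, template: list[float], instruction: str) -> None:
        """Associate one more operation instruction with a user's enrolled iris template."""
        entry = self.entries.setdefault(user_id, IrisEntry(template))
        if instruction not in entry.instructions:
            entry.instructions.append(instruction)
```

Such a structure could equally be serialized into the device's local storage or fetched from a server over a wired or wireless connection, as the passage above describes.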
  • the coverage of the sensing unit is adapted to the size of the display unit.
  • In this embodiment, the sensing unit is rectangular and is located at the center of the display unit, so that it does not deviate from the optical axis of eyeball imaging. This ensures that as long as the user's eyes face the display unit, the sensing unit can accurately and quickly acquire the user's iris information regardless of how the user's eyes move.
  • step S103 determines whether the captured iris information matches the preset iris information in the operation configuration information. If yes, the process proceeds to step S104 to execute an operation command corresponding to the iris information, and otherwise proceeds to step S105 to not execute the operation command.
  • The iris information comparison can be realized by an iris feature recognition algorithm stored in the storage unit of the device. When the infrared sensing layer of the sensing unit acquires iris information, the processor of the device calls the iris feature recognition algorithm from the storage unit and compares the acquired iris information with the iris information in the preset operation configuration information to determine whether the two match.
  • The iris feature recognition algorithm includes steps such as preprocessing of the iris feature information, feature extraction, feature matching, and iris recognition; these can be implemented by various well-established existing algorithms that have already been applied in many fields, so the details are not repeated here. The control flow is outlined in the sketch below.
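The pipeline named above (preprocess, extract features, match) can be outlined as follows. The helper bodies are placeholders, not the algorithm used by the patent; only the overall flow is meant to be illustrative.

```python
import numpy as np

def preprocess(raw_image: np.ndarray) -> np.ndarray:
    """Placeholder: locate the iris, normalize illumination, unwrap to a canonical form."""
    return raw_image.astype(np.float32) / 255.0

def extract_features(iris_image: np.ndarray) -> np.ndarray:
    """Placeholder: encode the normalized iris texture as a fixed-length feature vector."""
    return iris_image.flatten()[:256]

def match(features: np.ndarray, template: np.ndarray, threshold: float = 0.32) -> bool:
    """Declare a match when the error between feature vectors is below a preset value."""
    error = float(np.mean(np.abs(features - template)))
    return error < threshold

def recognize(raw_image: np.ndarray, template: np.ndarray) -> bool:
    """Full capture-to-decision flow for one stored template."""
    return match(extract_features(preprocess(raw_image)), template)
```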
  • FIG. 2 is a flowchart of a method for identifying iris according to another embodiment of the present invention.
  • the step of “presetting the operation configuration information” includes:
  • The process proceeds to step S201 to receive a user setting command and display the iris recognition area.
  • the setting instruction can be triggered by the user clicking a button in the setting column on the screen, and after receiving the setting instruction, the device will display the iris recognition area, so that the user can input the iris information.
  • displaying the iris recognition area may include: increasing the brightness of the iris recognition area or displaying a prompt input box on the iris recognition area.
  • Before receiving the user setting instruction, the method may further include receiving user account information, where the account information includes a user ID and a password.
  • The user needs to input the correct user ID and password by means of voice control, eye control, or key password control, and the setting command can be triggered only after the user account is logged in; this improves the security of setting the operation configuration information.
  • The process proceeds to step S202 to capture and save the iris information of the user.
  • the collected iris information is preset iris information, which can be stored in the storage unit.
  • In some embodiments, the step of "capturing the iris information of the user and saving it" includes: determining whether the iris information entered in the user setting process has already been stored in the storage unit; when the determination is yes, prompting the user that the iris information has already been entered; when the determination is no, saving the iris information to the storage unit. This effectively avoids repeated entry of iris information.
  • The process proceeds to step S203 to display an operation instruction identifier list, receive the user's selection instruction for an operation instruction identifier, establish a correspondence between the operation instruction corresponding to the selected operation instruction identifier and the captured iris information, and save it to the operation configuration information.
  • the operation instruction identifier list includes identifiers corresponding to one or more operation instructions, and each operation instruction identifier corresponds to an operation instruction.
  • the operation instruction identifier can be displayed in the form of text or a picture, and the selection instruction can be triggered by the user clicking a check, double clicking, or the like.
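Taken together, steps S201-S203 form an enrollment flow: show the recognition area, capture and store the iris, then bind the user's chosen instruction to it. A minimal sketch is given below; the device and config methods are assumed hooks for illustration, not interfaces defined by the patent.

```python
def enroll(device, config, duplicate_check: bool = True) -> None:
    """Enrollment flow corresponding to steps S201-S203."""
    # S201: a user setting command has been received; highlight the iris recognition area.
    device.show_iris_recognition_area()

    # S202: capture the iris information and save it, avoiding repeated entry.
    template = device.capture_iris()
    if duplicate_check and config.already_enrolled(template):
        device.prompt("Iris information has already been entered")
        return
    config.save_template(template)

    # S203: display the operation instruction identifier list and bind the selection.
    chosen = device.select_instruction(["switch_screen", "pay", "login"])
    config.bind_instruction(template, chosen)
```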
  • the sensing unit includes a TFT image sensing array film, and the infrared sensing layer includes an array formed by infrared photodiodes.
  • existing liquid crystal display (LCD) panels or organic light emitting diode (OLED) display panels are all driven by a TFT structure to scan a single pixel to realize the display function of the pixel array on the panel.
  • The main structure that implements the TFT switching function is a metal oxide semiconductor field effect transistor (MOSFET), and well-known semiconductor layer materials mainly include amorphous silicon, polycrystalline silicon, indium gallium zinc oxide (IGZO), and organic compounds mixed with carbon nanomaterials.
  • Since the structure of a light sensing diode can also be prepared from such semiconductor materials, and its production equipment is compatible with that of TFT arrays, TFT photodetecting diodes have in recent years been produced by TFT array preparation methods.
  • the TFT image sensing array film described in this embodiment is the above-mentioned TFT photodetecting diode (such as the photo sensing diode region in FIG. 6 ).
  • the production process of the TFT image sensing array film is different from that of the display panel TFT in that the pixel opening area of the display panel is changed to the light sensing area in the production process.
  • the TFT can be prepared by using a thin glass substrate or a high temperature resistant plastic material as described in US Pat. No. 6,943,070 B2.
  • The sensing unit shown in FIG. 6 is susceptible to reflection or refraction of visible light from ambient light or from the display pixels, which causes optical interference and seriously degrades the signal-to-noise ratio (SNR) of the TFT image sensing array film embedded under the display panel. In order to improve the signal-to-noise ratio, the sensing unit of the present invention is further improved on the basis of the sensing unit shown in FIG. 6, so that the TFT image sensing array film can detect and recognize the infrared signal reflected back by the iris of the user's eye.
  • the infrared sensing layer is an array formed by infrared photosensitive diodes.
  • The TFT image sensing array film of FIG. 6 is improved; specifically, the photodiode layer of the TFT image sensing array film of FIG. 6 is replaced by infrared photodiodes, where the infrared photodiode comprises a microcrystalline silicon photodiode or an amorphous silicon photodiode.
  • Embodiment 1: the amorphous silicon p-type/i-type/n-type photodiode structure (the photodiode layer in FIG. 6) is changed to a microcrystalline silicon p-type/i-type/n-type photodiode structure.
  • The degree of microcrystallization of the photodiode is controlled mainly in a chemical vapor deposition process by mixing gaseous silane (SiH4) with a suitable hydrogen concentration, so that hydrogen atoms bond the dangling bonds of the amorphous silicon and a coating with the microcrystalline silicon p-type/i-type/n-type photodiode structure is formed. By adjusting the hydrogen concentration of the chemical vapor deposition, the operating wavelength range of the microcrystalline photodiode can be extended to the wavelength range of 600 nm to 1000 nm.
  • In order to further improve the quantum efficiency of photoelectric conversion, the microcrystalline silicon photodiode can also be formed by serially connecting p-type/i-type/n-type structures in a double junction or more. In this case the first-junction p-type/i-type/n-type material of the photodiode is still an amorphous structure, and the p-type/i-type/n-type material above the second junction may be a microcrystalline structure or a polycrystalline structure.
  • Embodiment 2: the amorphous silicon p-type/i-type/n-type photodiode structure (the photodiode layer in FIG. 6) is changed to a p-type/i-type/n-type photodiode structure of an amorphous silicon compound doped to extend the photosensitive wavelength range; an example of a preferred compound is amorphous silicon germanium. The intrinsic layer (i-type) of the photodiode is formed by mixing germane (GeH4) with silane (SiH4) during chemical vapor deposition, so that the photosensitive range of the amorphous silicon germanium p-type/i-type/n-type photodiode reaches the light wavelength range of 600 nm to 1000 nm.
  • the amorphous silicon photodiode in order to improve the quantum efficiency of photoelectric conversion, can also be formed by serially connecting a p-type/i-type/n-type structure of a double junction or more.
  • the first junction p-type/i-type/n-type material of the photodiode is still an amorphous silicon structure, and the p-type/i-type/n-type material above the second junction layer may be a microcrystalline structure, a polycrystalline structure or a doped Compound materials that extend the range of photosensitive wavelengths.
  • Since the infrared sensing layer is an array formed by infrared photodiodes, a bias voltage may be applied through TFT scanning to control whether the TFT image sensing array film emits infrared light or detects it.
  • In this embodiment, the first trigger signal can be triggered by applying a forward bias across the p-type/i-type/n-type infrared photodiode, and the second trigger signal can be triggered by applying a zero bias or a negative bias across the p-type/i-type/n-type infrared photodiode. For example, assuming the array formed by the infrared photodiodes has 10 columns (numbered 1 to 10), a forward bias can be applied to the odd-numbered columns of pixels so that they emit infrared light signals, while a zero bias or a negative bias is applied to the even-numbered columns so that they are in the infrared light detection state, capturing the infrared light reflected back by the iris of the user's eye and converting it into an infrared image for output. The column-interleaved bias assignment is sketched below.
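For the column-interleaved variant just described (odd columns emitting, even columns detecting within the same scan), the bias assignment could look like the following sketch. The column numbering and the bias constants are illustrative assumptions only.

```python
FORWARD_BIAS = +1.0   # forward bias: the column emits infrared light
ZERO_BIAS = 0.0       # zero (or negative) bias: the column detects infrared light

def interleaved_biases(columns: int = 10) -> dict[int, float]:
    """Columns 1, 3, 5, ... emit; columns 2, 4, 6, ... detect, within one TFT scan."""
    return {col: (FORWARD_BIAS if col % 2 == 1 else ZERO_BIAS)
            for col in range(1, columns + 1)}

# Example: plan the biases for a 10-column infrared photodiode array.
for col, bias in interleaved_biases(10).items():
    print(f"column {col}: bias {bias:+.1f}")
```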
  • In other embodiments, the first trigger signal can be triggered by applying a zero bias or a negative bias across the p-type/i-type/n-type infrared photodiode, and the second trigger signal can be triggered by applying a forward bias across the p-type/i-type/n-type infrared photodiode.
  • Alternatively, a forward bias, a zero bias, or a negative bias may be applied alternately across the p-type/i-type/n-type infrared photodiodes to trigger the first trigger signal or the second trigger signal.
  • For example, suppose an array of infrared photodiodes has 10 columns of pixels. A forward bias is applied to the p-type/i-type/n-type infrared photodiodes in a first period, so that all 10 columns of pixels are in the infrared light emitting state; a zero or negative bias is applied in a second period, so that the 10 columns of pixels are in the infrared light detection state, capturing the infrared light reflected by the iris of the user's eye and generating a corresponding infrared image output; a forward bias is applied again in a third period, so that all 10 columns of pixels are again in the infrared light emitting state; and so on, alternately.
  • the time interval between adjacent periods may be set according to actual needs.
  • For example, the time interval may be set to the time required for the TFT array to scan each frame of the infrared photodiode array so that at least one complete image signal is received; a sketch of this time-multiplexed drive follows.
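The time-multiplexed drive described above, in which the whole array alternates between an emitting period (forward bias) and a detecting period (zero or negative bias), can be sketched as a simple scheduler. The driver hooks (set_column_bias, read_frame) are assumptions for illustration, not an interface defined by the patent.

```python
import time

FORWARD_BIAS = +1.0   # emit infrared light
ZERO_BIAS = 0.0       # detect infrared light (a negative bias serves the same purpose)

def drive_array(driver, columns: int = 10, frame_time: float = 0.02, frames: int = 100):
    """Alternate emit/detect periods across all columns of the infrared photodiode array."""
    for frame in range(frames):
        emitting = (frame % 2 == 0)                   # even periods emit, odd periods detect
        bias = FORWARD_BIAS if emitting else ZERO_BIAS
        for col in range(columns):
            driver.set_column_bias(col, bias)         # applied via TFT scanning
        time.sleep(frame_time)                        # at least one full scan per period
        if not emitting:
            yield driver.read_frame()                 # infrared image reflected by the iris
```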
  • In some embodiments, the operation instruction is a screen switching instruction, and the step of "determining whether the captured iris information matches the preset iris information in the operation configuration information, and if so, executing the operation instruction corresponding to the iris information, otherwise not executing the operation instruction" includes: determining whether the captured iris information matches the iris information corresponding to the screen switching instruction, and if so, switching the screen, otherwise not switching the screen. Since video stream data is composed of frames of image pictures, the method of this embodiment is also applicable to video stream data.
  • In some embodiments, the operation instruction is a payment instruction, and the same determining step includes: determining whether the captured iris information matches the iris information corresponding to the payment instruction, and if so, the payment succeeds, otherwise the payment fails. Linking the payment instruction with iris recognition effectively enhances the security of transaction payment and avoids losses to the owner caused by other users.
  • In some embodiments, the operation instruction is a user identity information login instruction, and the same determining step includes: determining whether the captured iris information matches the iris information corresponding to the user identity information login instruction, and if so, the user identity information is logged in successfully, otherwise the login fails. Tying user identity login to iris recognition effectively enhances the security of the login process.
  • the step of “determining whether the captured iris information matches the preset iris information in the operation configuration information” specifically includes: calculating a feature value according to the captured iris information, and preset with the operation configuration information. The characteristic values of the iris information are compared; when the error is less than the preset value, it is determined to be matched, otherwise it is determined to be a mismatch.
  • the method further comprises the step of: issuing a prompt message when it is determined that there is no preset iris information in the operational configuration information that matches the captured iris information.
  • the prompt information includes one or more of voice prompt information, image prompt information, light prompt information, and video prompt information.
  • For example, the voice prompt information includes a voice prompt telling the user that iris recognition has failed; the image prompt information includes a pop-up prompt telling the user that iris recognition has failed; the video prompt information includes a video prompt indicating that iris recognition has failed; and the light prompt information includes changing the brightness of the screen or letting the display emit light of different colors.
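Putting the matching rule and the failure prompt together, a match-and-dispatch routine could be sketched as below. The distance metric, the threshold value, and the device/config hooks are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def feature_distance(a, b) -> float:
    """Illustrative error metric between two iris feature vectors."""
    return float(np.mean(np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))))

def handle_capture(captured_features, config, device, threshold: float = 0.32) -> bool:
    """Execute the bound operation instruction on a match, otherwise issue prompt information."""
    for entry in config.entries.values():
        if feature_distance(captured_features, entry.template) < threshold:  # error < preset value
            for instruction in entry.instructions:
                device.execute(instruction)       # e.g. switch screen, pay, log in
            return True
    # No preset iris information matched the captured iris information.
    device.prompt("Iris recognition failed", kinds=("voice", "image", "light", "video"))
    return False
```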
  • When the display unit is an LCD liquid crystal display, a backlight unit is further disposed under the sensing unit, and the sensing unit is disposed between the backlight unit and the LCD liquid crystal display. Since the LCD liquid crystal display is not a self-illuminating element, a backlight unit must be added below the sensing unit during installation.
  • the backlight unit may be an LCD backlight module or other electronic components having a self-luminous function.
  • the display unit is an AMOLED display screen, since the OLED display screen is a self-luminous element, there is no need to provide a backlight unit.
  • the iris recognition area includes a plurality of iris recognition sub-areas, and a sensing unit is disposed under each iris recognition sub-area.
  • The device further includes a sensing unit control circuit, and the method further includes: receiving the user's activation instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns on the sensing unit below that iris recognition sub-area; and receiving the user's closing instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns off the sensing unit below that iris recognition sub-area.
  • The two iris recognition sub-areas may be evenly distributed on the screen, one above and one below or one on the left and one on the right, or may be distributed on the screen in other arrangements.
  • Preferably, the iris recognition sub-areas together cover the entire display screen, which ensures that when both iris recognition sub-areas are turned on, the imaging projection of the user's eyeball is always within the range of a sensing unit, effectively improving the capture of the user's eye characteristics and enhancing the user experience.
  • In other embodiments, the two iris recognition sub-areas may also occupy 2/3, 3/4, or another fraction of the entire display screen area; it is only necessary that the iris recognition sub-areas do not deviate from the optical axis of eyeball imaging.
  • the user can also set one iris recognition sub-area to be turned on according to his own preference, and another iris recognition sub-area is turned off. It is also possible to set both identification sub-areas to the off state when no operation is required on the device.
  • the number of iris recognition sub-regions may also be other values, which may be set according to actual needs.
  • the sensing unit under each iris recognition sub-area is turned on or off, and can be set according to the user's own preferences.
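A small sketch of the sub-area switching just described: each iris recognition sub-area has its own sensing unit, and the control circuit turns a unit on or off in response to the user's instructions. The class and method names are illustrative assumptions.

```python
class SensingUnitControl:
    """Tracks which iris recognition sub-areas currently have their sensing unit turned on."""

    def __init__(self, sub_area_count: int = 2):
        self.enabled = [False] * sub_area_count

    def handle_instruction(self, sub_area: int, turn_on: bool) -> None:
        """Activation instruction turns the sub-area's sensing unit on; closing instruction turns it off."""
        self.enabled[sub_area] = turn_on

    def active_sub_areas(self) -> list[int]:
        return [i for i, on in enumerate(self.enabled) if on]

# Example: per the user's preference, turn on the upper sub-area and turn off the lower one.
control = SensingUnitControl(sub_area_count=2)
control.handle_instruction(0, turn_on=True)
control.handle_instruction(1, turn_on=False)
print(control.active_sub_areas())   # [0]
```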
  • FIG. 3 is a schematic diagram of an apparatus for iris recognition according to an embodiment of the present invention.
  • The device is an electronic device with a touch display screen, such as a smart mobile device (a mobile phone, a tablet computer, or a personal digital assistant), or an electronic device such as a personal computer or an industrial equipment computer.
  • The device can also be combined with an optical imaging device disposed between the display unit and the user's eyes. The user's eyeball is first imaged in the optical imaging device, and the resulting projection falls within the iris recognition area on the display unit and is captured by the sensing unit below the iris recognition area; the effect of simulating a VR device can be achieved through the cooperation between the optical imaging device and the display unit.
  • The apparatus is an iris recognition device and includes a display unit 101 and a sensing unit 102.
  • the display unit 101 is provided with an iris recognition area, and the sensing unit 102 is located below the iris recognition area.
  • the sensing unit 102 includes an infrared sensing layer; the infrared sensing layer is configured to emit infrared light when receiving the first trigger signal, and to detect the infrared light state when the second trigger signal is received, And sensing the infrared light signal reflected by the user's iris to capture the user's iris information;
  • the device further includes an operation information setting unit 104, a determining unit 108, and a processing unit 106;
  • the operation information setting unit 104 is configured to preset operation configuration information, where the operation configuration information includes a correspondence relationship between the iris information and the operation instruction.
  • the operation instruction includes one or more of a text operation instruction, an image operation instruction, a video operation instruction, and an application operation instruction.
  • the text operation instruction includes a selected text instruction, a delete text instruction, a copy text instruction, and the like;
  • the image operation instruction includes a selected image instruction, a copy image instruction, a cut image instruction, an image deletion instruction, a switching image screen, and the like;
  • the video operation instructions include intercepting, pausing, saving, deleting, fast forwarding, rewinding, zooming, and volume adjustment of the video;
  • the application operation instructions include starting, deleting, selecting, and moving a software application (such as a mobile phone app).
  • The iris information in the operation configuration information is iris information that the user has entered and stored in advance, and each piece of iris information can be matched with a plurality of operation instructions, so that after the captured user iris information passes authentication, the user can perform multiple operations on the device.
  • The operation configuration information may be stored in the storage unit 107 of the device, such as the memory of a mobile phone or the hard disk of a computer, or may be stored in a storage unit of a server. When the operation configuration information needs to be acquired, the device only needs to establish a communication connection with the server and obtain the pre-stored operation configuration information from it; the communication connection may be wired or wireless.
  • the sensing unit 102 is configured to capture iris information of the user on the iris recognition area.
  • the coverage of the sensing unit is adapted to the size of the display unit.
  • In this embodiment, the sensing unit is rectangular and is located at the center of the display unit, so that it does not deviate from the optical axis of eyeball imaging. This ensures that as long as the user's eyes face the display unit, the sensing unit can accurately and quickly acquire the user's iris information regardless of how the user's eyes move.
  • the determining unit 108 is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if yes, the processing unit 106 is configured to execute an operation command corresponding to the iris information, otherwise the processing unit 106 does not perform the Operation instructions.
  • The iris information comparison can be realized by an iris feature recognition algorithm stored in the storage unit of the device. When the infrared sensing layer of the sensing unit acquires iris information, the processor of the device calls the iris feature recognition algorithm from the storage unit and compares the acquired iris information with the iris information in the preset operation configuration information to determine whether the two match.
  • The iris feature recognition algorithm includes steps such as preprocessing of the iris feature information, feature extraction, feature matching, and iris recognition; these can be implemented by various well-established existing algorithms that have already been applied in many fields, so the details are not repeated here.
  • In some embodiments, the apparatus includes an operation instruction receiving unit 105, and "the operation information setting unit is configured to preset operation configuration information" includes: the operation instruction receiving unit is configured to receive a user setting command, and the display unit is configured to display the iris recognition area.
  • the setting instruction can be triggered by the user clicking a button in the setting column on the screen, and after receiving the setting instruction, the device will display the iris recognition area, so that the user can input the iris information.
  • displaying the iris recognition area may include: increasing the brightness of the iris recognition area or displaying a prompt input box on the iris recognition area.
  • Before receiving the user setting instruction, the method may further include receiving user account information, where the account information includes a user ID and a password.
  • The user needs to input the correct user ID and password by means of voice control, eye control, or key password control, and the setting command can be triggered only after the user account is logged in; this improves the security of setting the operation configuration information.
  • the sensing unit is configured to capture and save the iris information of the user.
  • the collected iris information is preset iris information, which can be stored in the storage unit.
  • In some embodiments, the step of "capturing the iris information of the user and saving it" includes: determining whether the iris information entered in the user setting process has already been stored in the storage unit; when the determination is yes, prompting the user that the iris information has already been entered; when the determination is no, saving the iris information to the storage unit. This effectively avoids repeated entry of iris information.
  • the display unit is configured to display a list of operation instruction identifiers
  • the operation instruction receiving unit is configured to receive a selection instruction of the operation instruction identifier by the user
  • the processing unit is configured to establish a correspondence between the operation instruction corresponding to the selected operation instruction identifier and the captured iris information, and save it to the operation configuration information.
  • the operation instruction identifier list includes identifiers corresponding to one or more operation instructions, and each operation instruction identifier corresponds to an operation instruction.
  • the operation instruction identifier can be displayed in the form of text or a picture, and the selection instruction can be triggered by the user clicking a check, double clicking, or the like.
  • In some embodiments, the operation instruction is a screen switching instruction, and "the determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information, and if so, the processing unit is configured to execute the operation instruction corresponding to the iris information, otherwise the processing unit does not execute the operation instruction" includes: the determining unit is configured to determine whether the captured iris information of the user matches the iris information corresponding to the screen switching instruction, and if so, the processing unit switches the screen, otherwise the processing unit does not switch the screen. Since video stream data is composed of frames of image pictures, this embodiment is also applicable to video stream data.
  • In some embodiments, the operation instruction is a payment instruction, and the same determination includes: the determining unit is configured to determine whether the captured iris information matches the iris information corresponding to the payment instruction, and if so, the processing unit executes the payment instruction and the payment succeeds, otherwise the processing unit does not execute the payment instruction and the payment fails. Linking the payment instruction with iris recognition effectively enhances the security of transaction payment and avoids losses to the owner caused by other users.
  • In some embodiments, the operation instruction is a user identity information login instruction, and the same determination includes: the determining unit is configured to determine whether the captured iris information matches the iris information corresponding to the user identity information login instruction, and if so, the processing unit executes the user identity login instruction and the login succeeds, otherwise the processing unit does not execute the user identity login instruction and the login fails. Tying user identity login to iris recognition effectively enhances the security of the login process.
  • the sensing unit comprises a TFT image sensing array film
  • the infrared sensing layer comprises an array of infrared photodiodes.
  • "The determining unit is configured to determine whether the captured iris information matches the preset iris information in the operation configuration information" specifically includes: the determining unit is configured to calculate a feature value from the captured iris information and compare it with the feature value of the iris information preset in the operation configuration information; when the error is less than a preset value, a match is determined, otherwise a mismatch is determined.
  • the processing unit is further configured to issue the prompt information when the determining unit determines that the preset iris information does not match the captured iris information in the operation configuration information.
  • the prompt information includes one or more of voice prompt information, image prompt information, light prompt information, and video prompt information.
  • For example, the voice prompt information includes a voice prompt telling the user that iris recognition has failed; the image prompt information includes a pop-up prompt telling the user that iris recognition has failed; the video prompt information includes a video prompt indicating that iris recognition has failed; and the light prompt information includes changing the brightness of the screen or letting the display emit light of different colors.
  • When the display unit is an LCD liquid crystal display, a backlight unit 103 is disposed under the sensing unit, and the sensing unit is disposed between the backlight unit and the LCD liquid crystal display. Since the LCD liquid crystal display is not a self-illuminating element, a backlight unit must be added below the sensing unit during installation.
  • the backlight unit may be an LCD backlight module or other electronic components having a self-luminous function.
  • the display unit is an AMOLED display screen, since the OLED display screen is a self-luminous element, there is no need to provide a backlight unit.
  • the iris recognition area includes a plurality of iris recognition sub-areas, and a sensing unit is disposed under each iris recognition sub-area.
  • The device further includes a sensing unit control circuit, and the method further includes: receiving the user's activation instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns on the sensing unit below that iris recognition sub-area; and receiving the user's closing instruction for an iris recognition sub-area, whereupon the sensing unit control circuit turns off the sensing unit below that iris recognition sub-area.
  • The two iris recognition sub-areas may be evenly distributed on the screen, one above and one below or one on the left and one on the right, or may be distributed on the screen in other arrangements.
  • Preferably, the iris recognition sub-areas together cover the entire display screen, which ensures that when both iris recognition sub-areas are turned on, the imaging projection of the user's eyeball is always within the range of a sensing unit, effectively improving the capture of the user's eye characteristics and enhancing the user experience.
  • In other embodiments, the two iris recognition sub-areas may also occupy 2/3, 3/4, or another fraction of the entire display screen area; it is only necessary that the iris recognition sub-areas do not deviate from the optical axis of eyeball imaging.
  • the user can also set one iris recognition sub-area to open according to his own preference, and another iris recognition sub-area is closed. It is also possible to set both identification sub-areas to the off state when no operation is required on the device.
  • the number of iris recognition sub-regions may also be other values, which may be set according to actual needs.
  • whether the sensing unit under each iris recognition sub-area is turned on or off can be set according to the user's own preferences.
  • the present invention has the following advantages: by providing a sensing unit under the iris recognition area of the display unit, the projection of the user's iris imaged by the optical device is located on the iris recognition area, and the center of the sensing unit can be disposed at the optical axis of eyeball imaging or at a paraxial position. Compared with a structure in which the camera is disposed at an edge position outside the display screen, the present invention can capture the user's iris characteristic information in time, compare it with the preset iris information, and execute the operation instruction corresponding to that iris information, effectively enhancing the user experience.
  • upon receiving different trigger signals, the sensing unit can either emit infrared light toward the iris of the user's eyeball or receive the infrared light signal reflected by the iris of the user's eyeball to capture the iris characteristic information, so that the device can capture the user's iris characteristic information without an additional infrared light source (such as an infrared LED device).
  • the sensing unit is disposed under the display unit, which can effectively reduce the overall thickness of the mobile device compared with a structure in which the camera protrudes independently from the display screen, so that the wearable device or mobile device is lighter and thinner, is better suited to flexible wearable devices or mobile devices, and meets market demand.
  • the computer device includes, but is not limited to: a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, a smart mobile terminal, a smart home device, a wearable smart device, a vehicle-mounted smart device, and the like;
  • the storage medium includes, but is not limited to, a RAM, a ROM, a magnetic disk, a magnetic tape, an optical disk, a flash memory, a USB flash drive, a mobile hard disk, a memory card, a memory stick, a network server storage, a network cloud storage, and the like.
  • the computer program instructions can also be stored in a computer-device-readable memory that can direct the computer device to operate in a particular manner, such that the instructions stored in the computer-device-readable memory produce an article of manufacture comprising an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer device, such that a series of operational steps are performed on the computer device to produce computer-implemented processing, so that the instructions executed on the computer device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
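The feature-value comparison and the mismatch prompt described in the list above can be summarized with a short sketch. This is a hypothetical illustration rather than the patent's concrete algorithm: the binary iris-code encoding, the Hamming-style error measure, the 0.32 threshold, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def match_operation(captured_code, operation_config, max_error=0.32):
    """Return the operation bound to the best-matching preset iris code.

    operation_config maps an operation-instruction name to its preset iris
    code (a binary feature vector recorded during setup). A match is declared
    only when the smallest error falls below the preset threshold.
    """
    best_op, best_error = None, float("inf")
    for op_name, preset_code in operation_config.items():
        error = np.mean(captured_code != preset_code)  # fraction of differing bits
        if error < best_error:
            best_op, best_error = op_name, error
    return best_op if best_error < max_error else None

# If no preset iris information matches, the processing unit would issue a
# prompt (voice, pop-up image, light, or video) that iris recognition failed.
config = {"switch_picture": np.random.randint(0, 2, 2048),
          "confirm_payment": np.random.randint(0, 2, 2048)}
captured = np.random.randint(0, 2, 2048)
operation = match_operation(captured, config)
print(operation or "recognition failed - issue prompt")
```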
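The per-sub-area activation and closing commands described above could be wired to the sensing unit control circuit roughly as follows. This is a minimal sketch under assumed names (SensingUnitControl, handle_command); the patent only specifies that the control circuit turns the sensing unit under a sub-area on or off in response to the user's commands.

```python
class SensingUnitControl:
    """Toy model of the sensing unit control circuit for N sub-areas."""

    def __init__(self, num_subareas: int = 2):
        self.enabled = [False] * num_subareas  # all sensing units start off

    def handle_command(self, subarea: int, command: str) -> None:
        if command == "activate":
            self.enabled[subarea] = True    # turn on the sensor under this sub-area
        elif command == "close":
            self.enabled[subarea] = False   # turn off the sensor under this sub-area

ctrl = SensingUnitControl()
ctrl.handle_command(0, "activate")  # e.g. the user enables the upper sub-area
ctrl.handle_command(1, "close")     # and keeps the lower sub-area off
print(ctrl.enabled)                 # [True, False]
```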

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Image Input (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention provides an iris recognition method and device. By disposing a sensing unit below the iris recognition area of a display unit, the projection of the user's eyeball is located on the iris recognition area. Compared with a structure in which a camera is disposed at an edge position outside the display screen, the present invention can capture the iris information of the user's eyeball in time, compare it with preset iris information, and execute the operation instruction corresponding to that iris information, effectively improving the accuracy of iris recognition. In addition, when receiving different trigger signals, the sensing unit can either emit infrared light to illuminate the iris of the user's eyeball or receive the infrared light signal reflected by the iris of the user's eyeball to capture iris characteristic information. Compared with a structure in which a camera protrudes independently from the display screen, the overall thickness of the mobile device can be effectively reduced, making the mobile device lighter and thinner and meeting market demand.

Description

一种虹膜识别的方法和装置 技术领域
本发明涉及电子设备控制领域,特别涉及一种虹膜识别的方法和装置。
背景技术
随着科技的发展和技术的进步,触控显示面板已经广泛应用在需要进行人机交互界面的装置中,如工业计算机的操作屏幕、平板计算机、智能手机的触控屏幕等等。然而以穿戴式电子装置而言,人机交互界面技术仍有众多进步空间。以虚拟现实或增强现实(VR/AR)装置为例,符合用户感官体验的身分识别方式尚不如移动设备之生物特征识别技术,例如:指纹识别技术来得成熟。身分识别通常会将用户的生物特征信息与操作指令相结合,来达到通过生物特征识别进行此类操作装置的目的,而虹膜识别就是其中重要一项。
虹膜是人体眼球中有色部分的肌肉纤维组织纹理,虹膜识别指对捕捉眼球的虹膜特征信息并加以识别,预测用户的身分状态和需求,并进行响应,达到通过识别虹膜特征信息来控制设备的目的。目前,一般采用移动设备上显示屏外的红外摄像头来捕捉眼球特征变化信息,由于移动设备上显示屏外的红外摄像头往往独立设置于设备的边缘位置(如设置于手机的顶部),在穿戴式电子产品应用上偏离了眼球成像的光轴,采用现有的装置结构无法精准捕捉到用户的虹膜信息,存在着识别效率低、识别准确度差等问题。
发明内容
本发明所要解决的技术问题是提供一种虹膜识别的技术方案,用以解决由于穿戴式设备或是移动设备显示屏外的摄像头设置位置偏离光轴、无法精准且及时捕捉到用户眼球虹膜的特征信息,导致虹膜特征信息无法完整捕捉、识别效率低、用户感官体验差等问题。
为解决上述技术问题,本发明采取的技术方案为:一种虹膜识别的方法,所述方法应用于虹膜识别的装置,所述装置包括显示单元和传感单元;所述显示单元上设置有虹膜识别区,所述传感单元位于所述虹膜识别区的下方,所述传感单元包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息;所述方法包括以下步骤:
预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系;
捕捉用户在虹膜识别区上的虹膜信息,判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令。
进一步地,所述步骤“预先设置操作配置信息”包括:
接收用户设置命令,显示虹膜识别区;
捕捉用户的虹膜信息并保存;
显示一操作指令标识列表,所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令;
接收用户对操作指令标识的选择指令,建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。
进一步地,所述操作指令为画面切换指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则对画面进行切换,否则不对画面进行切换。
进一步地,所述操作指令为支付指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则支付成功,否则支付失败。
进一步地,所述操作指令为用户身份信息登录指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是用户身份信息登录成功,否则用户身份信息登录失败。
进一步地,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。
进一步地,步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:
根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
进一步地,所述方法还包括步骤:
当判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。
进一步地,所述提示信息包括声音提示信息、图像提示信息、光线提示信息、视频提示信息中的一种或多种。
进一步地,所述显示单元包括AMOLED显示屏或LCD液晶显示屏。
进一步地,当所述显示单元为LCD液晶显示屏时,所述传感单元的下方还设置有背光单元,所述传感单元设置于背光单元和LCD液晶显示屏之间。
进一步地,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。
进一步地,所述装置还包括传感单元控制电路,所述方法还包括:
接收用户对虹膜识别子区域的启动指令,传感单元控制电路开启所述虹膜识别子区域的下方的传感单元,以及接收用户对虹膜识别子区域的关闭指令, 传感单元控制电路关闭所述虹膜识别子区域的下方的传感单元。
发明人提供了一种虹膜识别的装置,所述方法应用于虹膜识别的装置,所述装置包括显示单元和传感单元;所述显示单元上设置有虹膜识别区,所述传感单元位于所述虹膜识别区的下方,所述传感单元包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息;所述装置还包括操作信息设置单元、判断单元和处理单元;
所述操作信息设置单元用于预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系;
所述传感单元用于捕捉用户在虹膜识别区上的虹膜信息,所述判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令。
进一步地,所述装置包括操作指令接收单元,所述“操作信息设置单元用于预先设置操作配置信息”包括:
所述操作指令接收单元用于接收用户设置命令,所述显示单元用于显示虹膜识别区;
所述传感单元用于捕捉用户的虹膜信息并保存;
所述显示单元用于显示一操作指令标识列表;所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令;
所述操作指令接收单元用于接收用户对操作指令标识的选择指令,处理单元用于建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。
进一步地,所述操作指令为画面切换指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
判断单元用于判断捕捉到的用户的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则处理单元用于对画面进行切换,否则处理单元不对画面进行切换。
进一步地,所述操作指令为支付指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
判断单元用于判断判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则处理单元执行所述支付指令,支付成功,否则处理单元不执行所述支付指令,支付失败。
进一步地,所述操作指令为用户身份信息登录指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
判断单元用于判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是处理单元执行所述用户信息身份登录指令,登录成功,否则处理单元不执行所述用户信息身份登录指令,登录失败。
进一步地,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。
进一步地,所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:
判断单元用于根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
进一步地,所述处理单元还用于在判断单元判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。
进一步地,所述提示信息包括声音提示信息、图像提示信息、光线提示信 息、视频提示信息中的一种或多种。
进一步地,所述显示单元包括AMOLED显示屏或LCD液晶显示屏。
进一步地,当所述显示单元为LCD液晶显示屏时,所述传感单元的下方还设置有背光单元,所述传感单元设置于背光单元和LCD液晶显示屏之间。
进一步地,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。
进一步地,所述装置还包括传感单元控制电路和操作指令接收单元,所述操作指令接收单元用于接收用户对虹膜识别子区域的启动指令,所述传感单元控制电路用于开启所述虹膜识别子区域的下方的传感单元,以及所述操作指令接收单元用于接收用户对虹膜识别子区域的关闭指令,所述传感单元控制电路用于关闭所述虹膜识别子区域的下方的传感单元。
采用以上技术方案后的有益效果为:通过在显示单元的虹膜识别区下方设置传感单元,用户虹膜通过光学器件成像的投影位于所述虹膜识别区上,传感单元的中心可设置于眼球成像光轴位置或是近轴位置,相较于摄像头独立于显示屏外设置在边缘位置的结构,本发明可以及时捕捉到用户虹膜特征信息,进而与预设的虹膜信息进行比对,执行该虹膜信息对应的操作指令,有效提高虹膜信息的精准度,提升用户体验。此外,传感单元在接收到不同的触发信号时,既可以发出红外光投射于用户眼球虹膜,也可以接收用户眼球虹膜反射回的红外光信号以捕捉虹膜特征信息,使得装置无需额外设置红外光源(如红外LED器件),即可捕捉到用户的虹膜特征信息。同时,传感单元设置于显示单元的下方,相较于摄像头独立突出设置于显示屏区域外的结构,可以有效缩小移动设备的整体厚度,使得穿戴式设备或是移动设备更加轻薄、更适用于柔性穿戴式设备或是移动设备、满足市场的需求。
附图说明
图1为本发明一实施方式涉及的虹膜识别的方法的流程图;
图2为本发明另一实施方式涉及的虹膜识别的方法的示意图;
图3为本发明一实施方式涉及的虹膜识别的装置的流程图;
图4为本发明另一实施方式涉及的虹膜识别的装置的流程图;
图5为本发明另一实施方式涉及的虹膜识别的装置应用场景的流程图;
图6为现有的传感单元的示意图;
图7为本发明一实施方式涉及的传感单元的示意图。
标号说明:
101、显示单元;
102、传感单元;
103、背光单元;
104、操作信息设置单元;
105、操作指令接收单元;
106、处理单元;
107、存储单元;
108、判断单元。
具体实施方式
为详细说明本发明的技术内容、构造特征、所实现目的及效果,以下结合实施方式并配合附图详予说明。
请参阅图1,为本发明一实施方式涉及的虹膜识别的方法的流程图。所述方法应用于虹膜识别的装置,所述装置为具有显示屏或触摸显示屏的电子设备,如是手机、平板电脑、个人数字助理等智能移动设备,还可以是个人计算机、工业装备用计算机等电子设备。当然所述装置还可以与光学成像器件相结合,光学成像器件设置于所述显示单元与用户眼睛之间(即显示屏的上方),如图5所示,用户眼球投影先在光学成像器件中成像,成像的投影位于显示单元上虹膜识别区范围内,使得用户眼球的虹膜信息可以被虹膜识别区下方的传感单元 捕捉。通过光学成像器件与显示单元之间的配合,可以达到模拟VR/AR设备的效果。
所述装置包括显示单元和传感单元。所述方法应用于虹膜识别的装置,所述装置包括显示单元和传感单元;所述显示单元上设置有虹膜识别区,所述传感单元位于所述虹膜识别区的下方,所述传感单元包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息。在本实施方式中,所述显示单元包括AMOLED显示屏或LCD液晶显示屏;在其他实施方式中,显示单元也可以为其他具有显示功能的电子元件。所述方法包括以下步骤:
首先进入步骤S101预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系。本实施方式中,所述操作指令包括文字操作指令、图像操作指令、视频操作指令、应用操作指令中的一种或多种。所述文字操作指令包括选中文字指令、删除文字指令、复制文字指令等;所述图像操作指令包括选中图像指令、复制图像指令、截取图像指令、删除图像指令、切换图像画面等;所述视频操作指令包括对视频进行截取、暂停、保存、删除、快进、快退、缩放画面、音量调整等;所述应用操作指令包括对软件应用程序(如手机APP)进行启动、删除、选中、移动等。
操作配置信息中的虹膜信息即为用户事先录入存储的虹膜信息,每一虹膜信息可以与多个操作指令相匹配,使得捕捉到的用户虹膜信息在通过认证后,用户可以对设备执行多项操作。操作配置信息可以存储于装置的存储单元,如手机的内存、计算机的硬盘中,也可以存储于服务器的存储单元中,当需要获取操作配置信息时,只需让装置与服务器建立通讯连接,而后再从服务器获取到事先存储的操作配置信息,所述通讯连接包括有线通讯连接或无线通信连接。
而后进入步骤S102捕捉用户在虹膜识别区上的虹膜信息。在本实施方式中,传感单元的覆盖的范围与显示单元的大小相适配,优选的,传感单元的形状为 矩形,矩形的大小位于显示单元的中心,保证不偏移眼球、成像的光轴。这样可以保证只要用户眼睛对准显示单元,无论用户眼球如何活动,传感单元都能精确、快速地采集到用户的虹膜信息。
而后进入步骤S103判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则进入步骤S104执行该虹膜信息对应的操作命令,否则进入步骤S105不执行所述操作指令。虹膜信息的比对可以通过虹膜特征识别算法来实现,虹膜特征识别算法可以实现存储于装置的存储单元中,当传感单元的红外感应层获取到虹膜信息后,装置的处理器将调用存储单元中的虹膜特征识别算法,将所获取的虹膜信息与预设的操作配置信息中的虹膜信息进行比对,判断两者是否匹配。虹膜特征识别算法包括对虹膜特征信息进行预处理、数据特征提取、特征匹配、虹膜识别等步骤,可以用多种算法来实现,这些算法都是成熟的现有技术,现已被应用于各个领域中,此处不再展开赘述。
请参阅图2,为本发明另一实施方式涉及的虹膜识别的方法的流程图。所述步骤“预先设置操作配置信息”包括:
首先进入步骤S201接收用户设置命令,显示虹膜识别区。设置指令可以通过用户点击屏幕上设置栏中的某一按钮触发,装置接收到设置指令后,将对虹膜识别区进行显示,便于用户输入虹膜信息。在本实施方式中,显示虹膜识别区可以包括:提高虹膜识别区的亮度或在虹膜识别区上显示一提示输入框。在某些实施例中,在接收用户设置指令之前,还包括接收用户的账号信息,所述账号信息包括用户ID及密码。用户需要以语音控制、眼球控制、或是按键密码控制等方式输入正确的用户ID及密码,登录用户账号后,才可触发所述设置指令,这样一方面可以提高操作配置信息设置的安全性,另一方面也可以达到在一个装置上区分不同用户、保存不同的虹膜信息以及与之相对应的操作指令的效果。
而后进入步骤S202捕捉用户的虹膜信息并保存。所采集到的虹膜信息即为预设虹膜信息,可以将其存储于存储单元中。本实施方式中,所述步骤“捕捉 用户的虹膜信息并保存”包括:判断用户设置过程中的虹膜信息是否已存储于存储单元,当判定为是时提示用户该虹膜信息已录入,并发出提示信息提醒用户;当判定为否时将该虹膜信息保存至存储单元。这样可以有效避免虹膜信息的重复录入。
而后进入步骤S203显示一操作指令标识列表,接收用户对操作指令标识的选择指令,建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令。操作指令标识可以以文字或图片的形式予以显示,选择指令可以通过用户点击勾选、双击等方式触发。
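The setup flow of steps S201-S203 just described (receive the setting command, capture and deduplicate the iris information, then bind it to an operation instruction chosen from the identifier list) can be sketched as below. The function names, the dictionary used as the operation configuration store, and the duplicate check are illustrative assumptions, not the patent's concrete implementation.

```python
import numpy as np

def setup_operation_config(capture_iris, show_instruction_list, storage):
    """Hypothetical S201-S203 flow: enrol an iris and bind it to an instruction.

    capture_iris          -- callable returning the captured iris feature code
    show_instruction_list -- callable returning the instruction picked by the user
    storage               -- dict acting as the operation configuration store
    """
    iris_code = capture_iris()                       # S202: capture iris information
    if any(np.array_equal(iris_code, saved) for saved in storage.values()):
        return "already enrolled - prompt the user"  # avoid duplicate enrolment
    instruction = show_instruction_list()            # S203: user picks an instruction
    storage[instruction] = iris_code                 # save the correspondence
    return "saved"

store = {}
result = setup_operation_config(
    capture_iris=lambda: np.random.randint(0, 2, 2048),  # stand-in sensor read
    show_instruction_list=lambda: "switch_picture",      # stand-in user choice
    storage=store)
print(result, list(store))
```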
如图6所示,在本实施方式中,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。现有的液晶显示(LCD)面板或有机发光二极管(OLED)显示面板,皆是以TFT结构驱动扫描单一像素,以实现面板上像素阵列的显示功能。形成TFT开关功能的主要结构为金属氧化物半导体场效晶体管(MOSFET),其中熟知的半导体层材料主要有非晶硅、多晶硅、氧化铟镓锌(IGZO)、或是混有碳纳米材料之有机化合物等等。由于光感测二极管的结构亦可采用此类半导体材料制备,且生产设备也兼容于TFT阵列的生产设备,因此近年来TFT光侦测二极管开始以TFT阵列制备方式进行生产。本实施方式所述的TFT影像感测阵列薄膜即为上述提到的TFT光侦测二极管(如图6中的光感测二极管区域部分),具体结构可以参考美国专利US6943070B2、中华人民共和国专利CN204808361U中对传感单元结构的描述。TFT影像感测阵列薄膜的生产工艺与显示面板TFT结构不同的是:原本在显示面板的像素开口区域,在生产工艺上改为光感测区域。其TFT制备方式可以采用薄型玻璃为基材,亦可采用耐高温塑性材料为基材,如美国专利US6943070B2所述。
图6所示的传感单元易受周围环境光或者显示屏像素所发出的可见光的反射、折射等因素影响,造成光学干扰,严重影响内嵌于显示面板下方的TFT影 像感测阵列薄膜的信号噪声比(SNR),为了提高信号噪声比,本发明的传感单元在图6所示的传感单元的基础上做了进一步改进,使得TFT影像感测阵列薄膜可以侦测识别用户眼球虹膜反射回的红外信号。
如图7所示,在某些实施例中,所述红外感应层为红外光敏二级管所形成的阵列。为了将TFT影像感测阵列薄膜能够识别的光信号波长从可见光范围扩展至红外光范围,对图6的TFT影像感测阵列薄膜进行改进,具体是采用红外光敏二级管替换图6中TFT影像感测阵列薄膜的光二极管层,红外光敏二极管包括微晶硅光电二极管或非晶硅光电二极管。
实施例一:将非结晶硅p型/i型/n型光电二极管结构(图6中的光二极管层)改由微晶硅p型/i型/n型光电二极管结构。在此实施例中,光电二极管的微结晶程度主要是在化学气象沉积过程中,以适当氢气浓度混入气体硅烷(SiH4)去控制氢原子键结非晶硅之悬空键(dangling bond),以实现微晶硅p型/i型/n型光电二极管结构之镀膜。藉由调整化学气象沉积的氢气浓度,微晶光电二极管的操作波长范围可以扩展到光波长600nm到1000nm的范围。
在采用微晶光电二极管之实施例中,为了进一步地提高光电转换之量子效率,微晶硅光电二极管也可采用双结以上p型/i型/n型结构串接形成。该光电二极管第一结层p型/i型/n型材料仍然为非结晶结构,第二结层以上p型/i型/n型材料可以为微晶结构、多晶结构。
实施例二:将非结晶硅p型/i型/n型光电二极管结构(图6中的光二极管层)改为掺有可扩展光敏波长范围之非结晶硅化合物之p型/i型/n型光电二极管结构,优选之化合物实施例为非晶硅化锗。在此实施例中,光电二极管的本质层(i型)在以化学气象沉积镀膜过程中,通以气体锗烷(GeH4)混入硅烷(SiH4),以实现非结晶硅化锗p型/i型/n型光电二极管之光敏范围达到光波长600nm到1000nm的范围。
在采用非结晶硅化合物光电二极管之实施例中,为了提高光电转换之量子效率,非晶硅光电二极管也可采用双结以上p型/i型/n型结构串接形成。该光 电二极管第一结层p型/i型/n型材料仍然为非晶硅结构,第二结层以上p型/i型/n型材料可以为微晶结构、多晶结构或是掺有可扩展光敏波长范围之化合物材料。
当红外感应层为红外光敏二级管所形成的阵列时,在实际应用过程中,可藉由TFT作扫描驱动外加一偏压(包括正向偏压,或零偏压或负偏压)在p型/i型/n型光电二极管之间,实现TFT影像感测阵列薄膜发出红外光功能,
在某些实施例中,第一触发信号可以通过在p型/i型/n型红外光敏二极管之间施加正向偏压触发,第二触发信号可以通过在p型/i型/n型红外光敏二极管之间施加零偏压或负偏压触发。例如红外光敏二极管所形成的阵列假设有10列(假设编号为1-10),那么可以对编号为奇数的像素点阵列施加正向偏压,以使得奇数列像素点阵列可发出红外光信号,并对编号为偶数的像素点阵施加零偏压或负偏压,以使得偶数列像素点阵列处于侦测红外光状态,以捕捉用户眼球虹膜反射回的红外光并转换为红外图像加以输出。当然,在另一些实施例中,第一触发信号可以通过在p型/i型/n型红外光敏二极管之间施加零偏压或负偏压触发,第二触发信号可以通过在p型/i型/n型红外光敏二极管之间施加正向偏压触发。
在某些实施例中,还可以交替在p型/i型/n型红外光敏二极管之间施加正向偏压,或零偏压或负偏压,以触发所述第一触发信号或第二触发信号。同样以红外光敏二极管所形成的阵列有10列像素点阵为例,在第一周期内对p型/i型/n型红外光敏二极管施加正向偏压,使得10列像素点阵均处于发出红外光状态;在第二周期内对p型/i型/n型红外光敏二极管施加零偏压或负偏压,使得10列像素点阵均处于红外光侦测状态,用于捕捉用户眼球虹膜反射回的红外光信息,并生成相应的红外图像输出;在第三周期内又对p型/i型/n型红外光敏二极管施加正向偏压,使得10列像素点阵均处于发出红外光状态,反复交替,以此类推。相邻的周期之间的时间间隔可以根据实际需要而设置,优选时间间隔可以设置为TFT阵列驱动扫描每一帧(Frame)红外光敏二极管阵列至少能接 收到一帧完整的影像信号所需的时间。
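As a rough illustration of the two drive schemes just described (the odd/even column split, and whole-array alternation between an emitting period and a detecting period), the following sketch shows how bias values might be assigned to the photodiode columns in each scan period. The bias labels and helper functions are assumptions introduced here for illustration; the patent specifies only that forward bias triggers infrared emission and that zero or negative bias puts the array in the infrared-detecting state.

```python
FORWARD, ZERO = "forward_bias", "zero_or_negative_bias"

def column_split_scheme(num_columns=10):
    """Odd-numbered columns emit infrared light, even-numbered columns detect it."""
    return [FORWARD if (col % 2 == 1) else ZERO
            for col in range(1, num_columns + 1)]

def alternating_frame_scheme(num_columns=10, num_periods=4):
    """Whole array alternates between an emitting period and a detecting period."""
    schedule = []
    for period in range(num_periods):
        bias = FORWARD if period % 2 == 0 else ZERO
        schedule.append([bias] * num_columns)  # same bias on every column this period
    return schedule

print(column_split_scheme())          # per-column biases in the split scheme
print(alternating_frame_scheme()[0])  # first period: all columns emit infrared
```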
在某些实施例中,所述操作指令为画面切换指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则对画面进行切换,否则不对画面进行切换。由于视频流数据是有一帧帧图像画面构成的,因而本实施例的方法同样也适用于对视频流数据的判断。
在某些实施例中,所述操作指令为支付指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则支付成功,否则支付失败。将支付指令与用户虹膜信息识别相挂钩,可以有效增强交易支付的安全性,同时避免其他用户误操作给户主带来不必要的损失。
在某些实施例中,所述操作指令为用户身份信息登录指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是用户身份信息登录成功,否则用户身份信息登录失败。将用户身份信息登录与与用户虹膜信息识别相挂钩,可以有效增强用户身份登录过程的安全性。
在某些实施例中,步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
在某些实施例中,所述方法还包括步骤:当判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。所述提示信息包括声音提示信息、图像提示信息、光线提示信息、视频提示信息中的一种或多种。 所述声音提示信息包括提示用户虹膜识别失败的语音提示信息,所述图像提示信息包括提示用户虹膜识别失败的的弹窗提示信息,所述视频提示信息包括提示虹膜识别失败的提示信息,光线提示信息包括改变屏幕亮度或者让显示屏发出不同颜色的光线等。
如图4所示,在某些实施例中,当所述显示单元为LCD液晶显示屏时,所述传感单元的下方还设置有背光单元,所述传感单元设置于背光单元和LCD液晶显示屏之间。由于LCD液晶显示屏不属于自发光元件,因而在安装时需要在传感单元的下方增加背光单元。背光单元可以为LCD背光模组,也可以为其他具有自发光功能的电子元件。在另一些实施例中,当所述显示单元为AMOLED显示屏时,由于OLED显示屏属于自发光元件,因而无需设置背光单元。通过上述两种方案的设置,可以有效满足不同厂家的生产需求,提高装置的适用范围。
在本实施方式中,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。所述装置还包括传感单元控制电路,所述方法还包括:接收用户对虹膜识别子区域的启动指令,传感单元控制电路开启所述虹膜识别子区域的下方的传感单元,以及接收用户对虹膜识别子区域的关闭指令,传感单元控制电路关闭所述虹膜识别子区域的下方的传感单元。
以虹膜识别子区域的数量为两个为例,两个虹膜识别子区域可以一上一下或一左一右均匀分布于屏幕中,也可以以其他排列方式分布于屏幕中。下面对具有两个虹膜识别子区域的装置的应用过程做具体说明:在使用过程中,用户通过启动指令,开启将两个虹膜识别子区域都设置成开启状态,优选的实施例中,两个虹膜识别子区域构成的范围覆盖了整个显示屏,这样可以保证当两个虹膜识别子区域都设置成开启状态时,用户眼球的成像投影始终位于传感单元范围内,有效提高对用户眼球特征的捕捉,提升用户体验。在其他实施例中,两个虹膜识别子区域构成的范围也可以占整个显示屏面积的2/3、3/4等,只需满足虹膜识别子区域的中心不偏离眼球成像的光轴即可。当然,用户也可以根 据自身喜好,设置某一个虹膜识别子区域开启,另一个虹膜识别子区域关闭。在不需要对装置进行操作时,还可以将两个识别子区域均设置为关闭状态。
在其他实施例中,虹膜识别子区域的数量还可以为其他数值,可以根据实际需要进行设置。各个虹膜识别子区域下方的传感单元处于开启或关闭,可以根据用户自身喜好进行设置。
请参阅图3,为本发明一实施方式涉及的眼球追踪操作的装置的示意图。所述装置为具有触摸显示屏的电子设备,如是手机、平板电脑、个人数字助理等智能移动设备,还可以是个人计算机、工业装备用计算机等电子设备。当然所述装置还可以与光学成像器件相结合,光学成像器件设置于所述显示单元与用户眼睛之间,如图5所示,用户眼球投影先在光学成像器件中成像,成像的投影位于显示单元上虹膜识别区范围内,进而被虹膜识别下方的传感单元捕捉,通过光学成像器件与显示单元之间的配合,可以达到模拟VR设备的效果。
所述方法应用于虹膜识别的装置,所述装置包括显示单元101和传感单元102;所述显示单元101上设置有虹膜识别区,所述传感单元102位于所述虹膜识别区的下方,所述传感单元102包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息;所述装置还包括操作信息设置单元104、判断单元108和处理单元106;
所述操作信息设置单元104用于预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系。本实施方式中,所述操作指令包括文字操作指令、图像操作指令、视频操作指令、应用操作指令中的一种或多种。所述文字操作指令包括选中文字指令、删除文字指令、复制文字指令等;所述图像操作指令包括选中图像指令、复制图像指令、截取图像指令、删除图像指令、切换图像画面等;所述视频操作指令包括对视频进行截取、暂停、保存、删除、快进、快退、缩放画面、音量调整等;所述应用操作指令包括对软件应用程序(如手机APP)进行启动、删除、选中、移动等。
操作配置信息中的虹膜信息即为用户事先录入存储的虹膜信息,每一虹膜信息可以与多个操作指令相匹配,使得捕捉到的用户虹膜信息在通过认证后,用户可以对设备执行多项操作。操作配置信息可以存储于装置的存储单元107,如手机的内存、计算机的硬盘中,也可以存储于服务器的存储单元中,当需要获取操作配置信息时,只需让装置与服务器建立通讯连接,而后再从服务器获取到事先存储的操作配置信息,所述通讯连接包括有线通讯连接或无线通信连接。
所述传感单元102用于捕捉用户在虹膜识别区上的虹膜信息。在本实施方式中,传感单元的覆盖的范围与显示单元的大小相适配,优选的,传感单元的形状为矩形,矩形的大小位于显示单元的中心,保证不偏移眼球、成像的光轴。这样可以保证只要用户眼睛对准显示单元,无论用户眼球如何活动,传感单元都能精确、快速地采集到用户的虹膜信息。
所述判断单元108用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元106用于执行该虹膜信息对应的操作命令,否则处理单元106不执行所述操作指令。虹膜信息的比对可以通过虹膜特征识别算法来实现,虹膜特征识别算法可以实现存储于装置的存储单元中,当传感单元的红外感应层获取到虹膜信息后,装置的处理器将调用存储单元中的虹膜特征识别算法,将所获取的虹膜信息与预设的操作配置信息中的虹膜信息进行比对,判断两者是否匹配。虹膜特征识别算法包括对虹膜特征信息进行预处理、数据特征提取、特征匹配、虹膜识别等步骤,可以用多种算法来实现,这些算法都是成熟的现有技术,现已被应用于各个领域中,此处不再展开赘述。
在某些实施例中,所述装置包括操作指令接收单元105,所述“操作信息设置单元用于预先设置操作配置信息”包括:所述操作指令接收单元用于接收用户设置命令,所述显示单元用于显示虹膜识别区。设置指令可以通过用户点击屏幕上设置栏中的某一按钮触发,装置接收到设置指令后,将对虹膜识别区进行显示,便于用户输入虹膜信息。在本实施方式中,显示虹膜识别区可以包 括:提高虹膜识别区的亮度或在虹膜识别区上显示一提示输入框。在某些实施例中,在接收用户设置指令之前,还包括接收用户的账号信息,所述账号信息包括用户ID及密码。用户需要以语音控制、眼球控制、或是按键密码控制等方式输入正确的用户ID及密码,登录用户账号后,才可触发所述设置指令,这样一方面可以提高操作配置信息设置的安全性,另一方面也可以达到在一个装置上区分不同用户、保存不同的虹膜信息以及与之相对应的操作指令的效果。
所述传感单元用于捕捉用户的虹膜信息并保存。所采集到的虹膜信息即为预设虹膜信息,可以将其存储于存储单元中。本实施方式中,所述步骤“捕捉用户的虹膜信息并保存”包括:判断用户设置过程中的虹膜信息是否已存储于存储单元,当判定为是时提示用户该虹膜信息已录入,并发出提示信息提醒用户;当判定为否时将该虹膜信息保存至存储单元。这样可以有效避免虹膜信息的重复录入。
所述显示单元用于显示一操作指令标识列表,所述操作指令接收单元用于接收用户对操作指令标识的选择指令,处理单元用于建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令。操作指令标识可以以文字或图片的形式予以显示,选择指令可以通过用户点击勾选、双击等方式触发。
在某些实施例中,所述操作指令为画面切换指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:判断单元用于判断捕捉到的用户的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则处理单元用于对画面进行切换,否则处理单元不对画面进行切换。由于视频流数据是有一帧帧图像画面构成的,因而本实施例的方法同样也适用于对视频流数据的判断。
在某些实施例中,所述操作指令为支付指令;所述“判断单元用于判断所 捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:判断单元用于判断判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则处理单元执行所述支付指令,支付成功,否则处理单元不执行所述支付指令,支付失败。将支付指令与用户虹膜信息识别相挂钩,可以有效增强交易支付的安全性,同时避免其他用户误操作给户主带来不必要的损失。
在某些实施例中,所述操作指令为用户身份信息登录指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:判断单元用于判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是处理单元执行所述用户信息身份登录指令,登录成功,否则处理单元不执行所述用户信息身份登录指令,登录失败。将用户身份信息登录与与用户虹膜信息识别相挂钩,可以有效增强用户身份登录过程的安全性。
在某些实施例中,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:判断单元用于根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
在某些实施例中,所述处理单元还用于在判断单元判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。所述提示信息包括声音提示信息、图像提示信息、光线提示信息、视频提示信息中的一种或多种。所述声音提示信息包括提示用户虹膜识别失败的语音提示信息,所述图像提示信息包括提示用户虹膜识别失败的的弹窗提示信息,所述视频提示信息包括提示虹膜识别失败的提示信息,光线提示信息包括改变屏幕亮度或者让显示屏发出不同颜色的光线等。
在某些实施例中,当所述显示单元为LCD液晶显示屏时,所述传感单元的 下方还设置有背光单元103,所述传感单元设置于背光单元和LCD液晶显示屏之间。由于LCD液晶显示屏不属于自发光元件,因而在安装时需要在传感单元的下方增加背光单元。背光单元可以为LCD背光模组,也可以为其他具有自发光功能的电子元件。在另一些实施例中,当所述显示单元为AMOLED显示屏时,由于OLED显示屏属于自发光元件,因而无需设置背光单元。通过上述两种方案的设置,可以有效满足不同厂家的生产需求,提高装置的适用范围。
在本实施方式中,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。所述装置还包括传感单元控制电路,所述方法还包括:接收用户对虹膜识别子区域的启动指令,传感单元控制电路开启所述虹膜识别子区域的下方的传感单元,以及接收用户对虹膜识别子区域的关闭指令,传感单元控制电路关闭所述虹膜识别子区域的下方的传感单元。
以虹膜识别子区域的数量为两个为例,两个虹膜识别子区域可以一上一下或一左一右均匀分布于屏幕中,也可以以其他排列方式分布于屏幕中。下面对具有两个虹膜识别子区域的装置的应用过程做具体说明:在使用过程中,用户通过启动指令,开启将两个虹膜识别子区域都设置成开启状态,优选的实施例中,两个虹膜识别子区域构成的范围覆盖了整个显示屏,这样可以保证当两个虹膜识别子区域都设置成开启状态时,用户眼球的成像投影始终位于传感单元范围内,有效提高对用户眼球特征的捕捉,提升用户体验。在其他实施例中,两个虹膜识别子区域构成的范围也可以占整个显示屏面积的2/3、3/4等,只需满足虹膜识别子区域的中心不偏离眼球成像的光轴即可。当然,用户也可以根据自身喜好,设置某一个虹膜识别子区域开启,另一个虹膜识别子区域关闭。在不需要对装置进行操作时,还可以将两个识别子区域均设置为关闭状态。
在其他实施例中,虹膜识别子区域的数量还可以为其他数值,可以根据实际需要进行设置。各个虹膜识别子区域下方的传感单元处于开启或关闭,可以根据用户自身喜好进行设置。
本发明具有以下优点:通过在显示单元的虹膜识别区下方设置传感单元, 用户虹膜通过光学器件成像的投影位于所述虹膜识别区上,传感单元的中心可设置于眼球成像光轴位置或是近轴位置,相较于摄像头独立于显示屏外区域外设置在边缘位置的结构,本发明可以及时捕捉到用户虹膜特征信息,进而与预设的虹膜信息进行比对,执行该虹膜信息对应的操作指令,有效提升用户体验。此外,传感单元在接收到不同的触发信号时,既可以发出红外光投射于用户眼球虹膜,也可以接收用户眼球虹膜反射回的红外光信号以捕捉虹膜特征信息,使得装置无需额外设置红外光源(如红外LED器件),即可捕捉到用户的虹膜特征信息。同时,传感单元设置于显示单元的下方,相较于摄像头独立突出设置于显示屏的结构,可以有效缩小移动设备的整体厚度,使得穿戴式设备或是移动设备更加轻薄、更适用于柔性穿戴式设备或是移动设备、满足市场的需求。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者终端设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者终端设备所固有的要素。在没有更多限制的情况下,由语句“包括……”或“包含……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者终端设备中还存在另外的要素。此外,在本文中,“大于”、“小于”、“超过”等理解为不包括本数;“以上”、“以下”、“以内”等理解为包括本数。
本领域内的技术人员应明白,上述各实施例可提供为方法、装置、或计算机程序产品。这些实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。上述各实施例涉及的方法中的全部或部分步骤可以通过程序来指令相关的硬件来完成,所述的程序可以存储于计算机设备可读取的存储介质中,用于执行上述各实施例方法所述的全部或部分步骤。所述计算机设备,包括但不限于:个人计算机、服务器、通用计算机、专用计算机、 网络设备、嵌入式设备、可编程设备、智能移动终端、智能家居设备、穿戴式智能设备、车载智能设备等;所述的存储介质,包括但不限于:RAM、ROM、磁碟、磁带、光盘、闪存、U盘、移动硬盘、存储卡、记忆棒、网络服务器存储、网络云存储等。
上述各实施例是参照根据实施例所述的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到计算机设备的处理器以产生一个机器,使得通过计算机设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机设备以特定方式工作的计算机设备可读存储器中,使得存储在该计算机设备可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机设备上,使得在计算机设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已经对上述各实施例进行了描述,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改,所以以上所述仅为本发明的实施例,并非因此限制本发明的专利保护范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围之内。

Claims (26)

  1. 一种虹膜识别的方法,其特征在于,所述方法应用于虹膜识别的装置,所述装置包括显示单元和传感单元;所述显示单元上设置有虹膜识别区,所述传感单元位于所述虹膜识别区的下方,所述传感单元包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息;所述方法包括以下步骤:
    预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系;
    捕捉用户在虹膜识别区上的虹膜信息,判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令。
  2. 如权利要求1所述的虹膜识别的方法,其特征在于,所述步骤“预先设置操作配置信息”包括:
    接收用户设置命令,显示虹膜识别区;
    捕捉用户的虹膜信息并保存;
    显示一操作指令标识列表,所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令;
    接收用户对操作指令标识的选择指令,建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。
  3. 如权利要求1所述的虹膜识别的方法,其特征在于,所述操作指令为画面切换指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
    判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则对画面进行切换,否则不对画面进行切换。
  4. 如权利要求1所述的虹膜识别的方法,其特征在于,所述操作指令为支 付指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
    判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则支付成功,否则支付失败。
  5. 如权利要求1所述的虹膜识别的方法,其特征在于,所述操作指令为用户身份信息登录指令;步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则执行该虹膜信息对应的操作命令,否则不执行所述操作指令”包括:
    判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是用户身份信息登录成功,否则用户身份信息登录失败。
  6. 如权利要求1或4所述的虹膜识别的方法,其特征在于,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。
  7. 如权利要求1所述的虹膜识别的方法,其特征在于,步骤“判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:
    根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
  8. 如权利要求1或7所述的虹膜识别的方法,其特征在于,所述方法还包括步骤:
    当判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。
  9. 如权利要求8所述的虹膜识别的方法,其特征在于,所述提示信息包括声音提示信息、图像提示信息、光线提示信息、视频提示信息中的一种或多种。
  10. 如权利要求1所述的虹膜识别的方法,其特征在于,所述显示单元包括AMOLED显示屏或LCD液晶显示屏。
  11. 如权利要求10所述的虹膜识别的方法,其特征在于,当所述显示单元为LCD液晶显示屏时,所述传感单元的下方还设置有背光单元,所述传感单元设置于背光单元和LCD液晶显示屏之间。
  12. 如权利要求1所述的虹膜识别的方法,其特征在于,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。
  13. 如权利要求12所述的虹膜识别的方法,其特征在于,所述装置还包括传感单元控制电路,所述方法还包括:
    接收用户对虹膜识别子区域的启动指令,传感单元控制电路开启所述虹膜识别子区域的下方的传感单元,以及接收用户对虹膜识别子区域的关闭指令,传感单元控制电路关闭所述虹膜识别子区域的下方的传感单元。
  14. 一种虹膜识别的装置,其特征在于,所述方法应用于虹膜识别的装置,所述装置包括显示单元和传感单元;所述显示单元上设置有虹膜识别区,所述传感单元位于所述虹膜识别区的下方,所述传感单元包括红外感应层;所述红外感应层用于在接收到第一触发信号时,发出红外光,以及用于在接收到第二触发信号时,处于侦测红外光状态,并感知用户虹膜反射的红外光信号以捕捉用户的虹膜信息;所述装置还包括操作信息设置单元、判断单元和处理单元;
    所述操作信息设置单元用于预先设置操作配置信息,所述操作配置信息包括虹膜信息与操作指令的对应关系;
    所述传感单元用于捕捉用户在虹膜识别区上的虹膜信息,所述判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令。
  15. 如权利要求14所述的虹膜识别的装置,其特征在于,所述装置包括操作指令接收单元,所述“操作信息设置单元用于预先设置操作配置信息”包括:
    所述操作指令接收单元用于接收用户设置命令,所述显示单元用于显示虹膜识别区;
    所述传感单元用于捕捉用户的虹膜信息并保存;
    所述显示单元用于显示一操作指令标识列表;所述操作指令标识列表中包含着一个或多个操作指令对应的标识,每一操作指令标识对应一操作指令;
    所述操作指令接收单元用于接收用户对操作指令标识的选择指令,处理单元用于建立所选中的操作指令标识对应的操作指令与所捕捉的虹膜信息的对应关系,并保存至操作配置信息。
  16. 如权利要求14所述的虹膜识别的装置,其特征在于,所述操作指令为画面切换指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
    判断单元用于判断捕捉到的用户的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是则处理单元用于对画面进行切换,否则处理单元不对画面进行切换。
  17. 如权利要求14所述的虹膜识别的装置,其特征在于,所述操作指令为支付指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
    判断单元用于判断判断捕捉到的虹膜信息与支付指令对应的虹膜信息是否匹配,若是则处理单元执行所述支付指令,支付成功,否则处理单元不执行所述支付指令,支付失败。
  18. 如权利要求14所述的虹膜识别的装置,其特征在于,所述操作指令为用户身份信息登录指令;所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配,若是则处理单元用于执行该虹膜信息对应的操作命令,否则处理单元不执行所述操作指令”包括:
    判断单元用于判断捕捉到的虹膜信息与画面切换指令对应的虹膜信息是否匹配,若是处理单元执行所述用户信息身份登录指令,登录成功,否则处理单 元不执行所述用户信息身份登录指令,登录失败。
  19. 如权利要求14所述的虹膜识别的装置,其特征在于,所述传感单元包括TFT影像感测阵列薄膜,所述红外感应层包括红外光敏二极管所形成的阵列。
  20. 如权利要求14所述的虹膜识别的装置,其特征在于,所述“判断单元用于判断所捕捉的虹膜信息与操作配置信息中的预设虹膜信息是否匹配”具体包括:
    判断单元用于根据捕捉到虹膜信息计算其特征值,并与操作配置信息中预设的虹膜信息的特征值进行对比;当误差小于预设值时,判定为相匹配,否则判定为不匹配。
  21. 如权利要求14或20所述的虹膜识别的装置,其特征在于,所述处理单元还用于在判断单元判定操作配置信息中没有与所捕捉的虹膜信息相匹配的预设虹膜信息时,发出提示信息。
  22. 如权利要求21所述的虹膜识别的装置,其特征在于,所述提示信息包括声音提示信息、图像提示信息、光线提示信息、视频提示信息中的一种或多种。
  23. 如权利要求14所述的虹膜识别的装置,其特征在于,所述显示单元包括AMOLED显示屏或LCD液晶显示屏。
  24. 如权利要求23所述的虹膜识别的装置,其特征在于,当所述显示单元为LCD液晶显示屏时,所述传感单元的下方还设置有背光单元,所述传感单元设置于背光单元和LCD液晶显示屏之间。
  25. 如权利要求14所述的虹膜识别的装置,其特征在于,所述虹膜识别区包括多个虹膜识别子区域,每一虹膜识别子区域的下方对应设置一传感单元。
  26. 如权利要求14所述的虹膜识别的装置,其特征在于,所述装置还包括传感单元控制电路和操作指令接收单元,所述操作指令接收单元用于接收用户对虹膜识别子区域的启动指令,所述传感单元控制电路用于开启所述虹膜识别子区域的下方的传感单元,以及所述操作指令接收单元用于接收用户对虹膜识 别子区域的关闭指令,所述传感单元控制电路用于关闭所述虹膜识别子区域的下方的传感单元。
PCT/CN2018/078092 2017-04-20 2018-03-06 一种虹膜识别的方法和装置 WO2018192313A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710260323.8 2017-04-20
CN201710260323.8A CN108734064A (zh) 2017-04-20 2017-04-20 一种虹膜识别的方法和装置

Publications (1)

Publication Number Publication Date
WO2018192313A1 true WO2018192313A1 (zh) 2018-10-25

Family

ID=63856156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078092 WO2018192313A1 (zh) 2017-04-20 2018-03-06 一种虹膜识别的方法和装置

Country Status (3)

Country Link
CN (1) CN108734064A (zh)
TW (1) TWI715832B (zh)
WO (1) WO2018192313A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262345A (zh) * 2019-06-24 2019-09-20 天地科技股份有限公司上海分公司 基于虹膜识别的采煤机开机控制系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116798107B (zh) * 2023-06-16 2024-05-14 北京万里红科技有限公司 一种用于比对虹膜图像的可视化处理方法及装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204791066U (zh) * 2015-05-21 2015-11-18 北京中科虹霸科技有限公司 一种用于移动终端的虹膜识别装置及包含其的移动终端
US20160026246A1 (en) * 2014-04-10 2016-01-28 Samsung Electronics Co., Ltd. Eye gaze tracking method and apparatus and computer-readable recording medium
CN105975933A (zh) * 2016-05-05 2016-09-28 上海聚虹光电科技有限公司 基于透明屏幕的虹膜识别系统
CN106019953A (zh) * 2016-05-19 2016-10-12 捷开通讯(深圳)有限公司 移动终端以及基于虹膜识别进行红外控制的方法
US20160358029A1 (en) * 2010-10-26 2016-12-08 Bi2 Technologies, LLC Mobile wireless hand-held identification system and breathalyzer
CN106485118A (zh) * 2016-09-19 2017-03-08 信利光电股份有限公司 电子设备及其识别系统、解密方法
CN106503514A (zh) * 2016-09-28 2017-03-15 北京用友政务软件有限公司 基于虹膜识别的电子终端设备的解锁方法及系统
CN106557737A (zh) * 2016-09-27 2017-04-05 北京无线电计量测试研究所 一种便携式虹膜识别装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394311B (zh) * 2014-09-15 2015-08-05 贵阳科安科技有限公司 用于移动终端的虹膜识别成像模组及图像获取方法
CN106407881B (zh) * 2015-07-29 2020-07-31 财团法人工业技术研究院 生物辨识装置及方法与穿戴式载体

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160358029A1 (en) * 2010-10-26 2016-12-08 Bi2 Technologies, LLC Mobile wireless hand-held identification system and breathalyzer
US20160026246A1 (en) * 2014-04-10 2016-01-28 Samsung Electronics Co., Ltd. Eye gaze tracking method and apparatus and computer-readable recording medium
CN204791066U (zh) * 2015-05-21 2015-11-18 北京中科虹霸科技有限公司 一种用于移动终端的虹膜识别装置及包含其的移动终端
CN105975933A (zh) * 2016-05-05 2016-09-28 上海聚虹光电科技有限公司 基于透明屏幕的虹膜识别系统
CN106019953A (zh) * 2016-05-19 2016-10-12 捷开通讯(深圳)有限公司 移动终端以及基于虹膜识别进行红外控制的方法
CN106485118A (zh) * 2016-09-19 2017-03-08 信利光电股份有限公司 电子设备及其识别系统、解密方法
CN106557737A (zh) * 2016-09-27 2017-04-05 北京无线电计量测试研究所 一种便携式虹膜识别装置
CN106503514A (zh) * 2016-09-28 2017-03-15 北京用友政务软件有限公司 基于虹膜识别的电子终端设备的解锁方法及系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262345A (zh) * 2019-06-24 2019-09-20 天地科技股份有限公司上海分公司 基于虹膜识别的采煤机开机控制系统

Also Published As

Publication number Publication date
CN108734064A (zh) 2018-11-02
TW201839660A (zh) 2018-11-01
TWI715832B (zh) 2021-01-11

Similar Documents

Publication Publication Date Title
TWI606387B (zh) 用於多層級命令感測之方法及設備
US11227134B2 (en) Method and device for synchronously collecting fingerprint information
US11137845B2 (en) Method and device for recognizing contact of foldable display screen
US11314962B2 (en) Electronic device and method for controlling fingerprint recognition-based electronic device
WO2018228010A1 (zh) 一种终端及终端显示亮度调节的方法
TW202004542A (zh) 一種同步驗證指紋資訊的螢幕解鎖方法和裝置
TWI720484B (zh) 一種同步驗證指紋資訊的觸控元件操作方法和裝置
WO2018192311A1 (zh) 一种眼球追踪操作的方法和装置
WO2018192313A1 (zh) 一种虹膜识别的方法和装置
WO2018192312A1 (zh) 一种眼球追踪操作的方法和装置
WO2018192308A1 (zh) 一种虹膜识别的方法和装置
CN109359640B (zh) 一种指纹识别显示面板及指纹识别方法
US20220075473A1 (en) Method and device for biometric recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18787442

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18787442

Country of ref document: EP

Kind code of ref document: A1