WO2018072179A1 - An image preview method and apparatus based on iris recognition - Google Patents

An image preview method and apparatus based on iris recognition

Info

Publication number
WO2018072179A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
infrared
lens
infrared lens
iris
Prior art date
Application number
PCT/CN2016/102724
Other languages
English (en)
French (fr)
Inventor
骆磊
Original Assignee
深圳达闼科技控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳达闼科技控股有限公司
Priority to CN201680006897.6A (patent CN107223255B)
Priority to PCT/CN2016/102724 (WO2018072179A1)
Publication of WO2018072179A1

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/24 — Aligning, centring, orientation detection or correction of the image
    • G06V10/242 — Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/19 — Sensors therefor

Definitions

  • the present invention relates to the field of iris recognition technology, and in particular, to an image preview method and apparatus based on iris recognition.
  • Iris recognition technology identifies a person based on the iris of the eye, the ring-shaped part between the black pupil and the white sclera. Each person's iris stops changing after the fetal development stage, and it contains many detailed features such as interlaced spots, filaments, crowns, stripes, and crypts. Each person's iris is therefore unique, which ensures the uniqueness of identification performed with iris recognition technology.
  • Existing iris recognition technology requires an additional infrared camera on the hardware side (it can be physically merged with the normal front camera, but this is still equivalent to two optical paths and two technologies) and an infrared fill light that illuminates the face and eyes with infrared light. The user's eyeball image is then collected through the infrared camera, and the captured image is processed into a template for storage or comparison.
  • As a result, when previewing with the iris camera, the user sees a black-and-white physiological image of a certain area of the face (including the eyes), which causes the user to feel uncomfortable.
  • An embodiment of the present invention provides an image preview method and apparatus based on iris recognition, which can solve the prior-art problem that, during image previewing for iris registration or recognition, the preview interface displays in real time the iris pattern captured by the iris camera.
  • an image preview method based on iris recognition including:
  • the first non-infrared image includes a replacement pattern of the iris pattern, wherein the position of the replacement pattern in the first non-infrared image is determined by the position of the iris pattern in the infrared image;
  • the first non-infrared image is displayed within the image preview area.
  • an image preview apparatus based on iris recognition including:
  • the acquiring module is further configured to: when the infrared image includes an iris pattern, acquire a first non-infrared image corresponding to the infrared image, where the first non-infrared image includes a replacement pattern of the iris pattern, The position of the replacement pattern in the first non-infrared image is determined by the position of the iris pattern in the infrared image;
  • control module configured to display the first non-infrared image acquired by the acquiring module in an image preview area.
  • a control module configured to display the non-infrared image acquired by the acquiring module in the image preview area.
  • an iris recognition-based image preview apparatus comprising a processor configured to support the apparatus in performing the corresponding functions in the above method.
  • the apparatus can also include a memory for coupling with the processor to store computer software code for use in the iris recognition based image preview device, comprising a program designed to perform the above aspects.
  • a computer storage medium for storing computer software instructions for use in an iris recognition based image preview device, comprising program code designed to perform the iris recognition based image preview method of the first aspect.
  • a computer program is provided that can be directly loaded into the internal memory of a computer and includes software code; when loaded and executed by the computer, the program implements the iris recognition-based image preview method according to the first aspect.
  • a robot comprising the iris recognition based image preview device of the third aspect.
  • When the infrared image captured by the infrared lens includes an iris pattern, the solution provided by the embodiment of the present invention obtains a non-infrared image corresponding to the infrared image, where the non-infrared image includes a replacement pattern of the iris pattern; the non-infrared image is then displayed in the image preview area, preventing the user from viewing the infrared image that contains the iris pattern.
  • Moreover, the position of the replacement pattern in the non-infrared image is determined by the position of the iris pattern in the infrared image; that is, the replacement pattern moves within the non-infrared image as the iris pattern moves within the infrared image. This enables the user, by viewing the position of the pattern in the non-infrared image in the preview area, to adjust the position of the subject's eyeball (or face) or to adjust the infrared lens in real time.
  • The solution thus reduces the user's resistance to the infrared iris pattern during the preview process while ensuring that the user can normally complete iris registration, recognition, and unlocking.
  • FIG. 1 is a flowchart of a method for previewing an image based on iris recognition according to an embodiment of the present invention
  • FIG. 2a is a schematic diagram of a non-infrared image including a replacement pattern of an iris pattern displayed in an image preview area according to an embodiment of the present invention
  • FIG. 2b is a schematic diagram of another non-infrared image including a replacement pattern of an iris pattern displayed in an image preview area according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of an arrangement of an infrared lens and a non-infrared lens in a terminal according to an embodiment of the present invention
  • FIG. 4 is a flowchart of another method for image previewing based on iris recognition according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a field of view of an infrared lens and a non-infrared lens according to an embodiment of the present invention
  • FIG. 6a is a schematic diagram of a field of view of a lens according to an embodiment of the present invention.
  • FIG. 6b is a schematic diagram of a field of view of another lens according to an embodiment of the present invention.
  • FIG. 7 is a top view of an actual optical path of an infrared lens and a non-infrared lens in a terminal according to an embodiment of the present invention.
  • FIG. 8 is a top view of an overlapping area of an infrared lens and a non-infrared lens in a terminal according to an embodiment of the present invention.
  • FIG. 9 is a side view of an overlapping area of an infrared lens and a non-infrared lens in a terminal according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of cutting a non-infrared image according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of still another method for image previewing based on iris recognition according to an embodiment of the present invention.
  • FIG. 12a is a schematic diagram of displaying a non-infrared image in an image preview area in a puzzle manner according to an embodiment of the present invention
  • FIG. 12b is a schematic diagram of displaying a non-infrared image in an image preview area by an eyeball capture technology according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of an image preview apparatus based on iris recognition according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of another image preview apparatus based on iris recognition according to an embodiment of the present invention.
  • FIG. 15 is a schematic structural diagram of still another image preview apparatus based on iris recognition according to an embodiment of the present invention.
  • The iris is a fabric-like annular part of the eye between the black pupil and the white sclera; it has a rich and distinctive texture pattern.
  • The diameter of the human iris is generally between 11 mm and 12 mm, and each person's iris does not change after it is formed in the fetal development stage.
  • The iris contains many intertwined detail features such as crowns, crystals, filaments, spots, structures, crypts, rays, wrinkles, and stripes, which determine the uniqueness of the iris's features and thus the uniqueness of identification.
  • Iris recognition identifies a person based on the iris of the eye. It usually takes four steps: iris pattern acquisition, image preprocessing, iris feature extraction, and feature preservation (during registration) or feature matching (during recognition). It is used in security equipment (such as access control) and in other settings with high security requirements.
  • the iris pattern acquisition refers to using an infrared lens to capture the entire iris of the human to obtain an infrared iris pattern;
  • the image preprocessing refers to performing preprocessing operations including image smoothing, edge detection, image separation, etc. on the captured iris pattern;
  • Feature extraction refers to extracting and encoding unique feature points of the iris from the iris pattern by a specific algorithm.
  • Feature matching refers to comparing and verifying the feature codes of the extracted iris detail features against the feature codes of iris patterns stored in advance, to achieve recognition.
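  • The four-step flow above, and in particular the feature-matching step, can be sketched as follows. This is an illustrative sketch only: every name in it (`hamming_distance`, `recognize`, the toy 8-bit codes) is a hypothetical placeholder, and the 0.32 threshold is merely a commonly cited value in the iris literature, not part of the disclosed method.

```python
# Illustrative sketch of the feature-matching step of the iris recognition
# pipeline described above. All names and values are hypothetical.

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length iris codes."""
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def recognize(candidate_code, enrolled_codes, threshold=0.32):
    """Feature matching: compare a freshly extracted iris code against the
    enrolled templates; a sufficiently small Hamming distance is a match."""
    for user_id, stored_code in enrolled_codes.items():
        if hamming_distance(candidate_code, stored_code) < threshold:
            return user_id
    return None

# Example with toy 8-bit "iris codes":
enrolled = {"alice": [1, 0, 1, 1, 0, 0, 1, 0]}
probe = [1, 0, 1, 1, 0, 1, 1, 0]   # 1 differing bit -> distance 0.125
print(recognize(probe, enrolled))  # -> alice
```

In a real system the codes would be the binary feature encodings produced by the extraction step, and the threshold would be tuned per device.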
  • An infrared lens, i.e., an infrared camera, usually appears in pairs with an infrared light generator (for example, an infrared fill light).
  • The infrared light generator generates infrared light to illuminate the face and eyes of the subject at which the infrared image is directed; the infrared lens then photographs the illuminated face and eyes, obtaining an infrared image of the subject's eyeball, usually a black-and-white or greenish image.
  • A non-infrared lens is any lens other than an infrared camera, for example a conventional front camera in a smartphone; the present invention is not limited in this respect.
  • The basic principle of the technical solution provided by the embodiment of the present invention is: during image previewing for iris registration, recognition, unlocking, and so on, if the infrared image collected by the infrared lens includes an iris pattern, a replacement pattern (a non-infrared image) is selected for the iris pattern, and the replacement pattern is displayed in the image preview area according to the position of the iris pattern in the infrared image. This prevents the subject from viewing the infrared image that includes the iris pattern, so that iris registration and recognition can be completed with a better experience and less user resistance.
  • the execution body of the iris recognition-based image preview method provided by the embodiment of the present invention may be an iris recognition-based image preview device or a terminal device that can be used to execute the above-described iris recognition-based image preview method.
  • The iris recognition-based image preview device may be the central processing unit (CPU) in the terminal device, a combination of the CPU and a memory, or another control unit or module in the terminal device.
  • The terminal device may be a personal computer (PC), a netbook, a personal digital assistant (PDA), a server, or the like that analyzes the infrared image collected by the infrared lens by using the method provided by the embodiment of the present invention; alternatively, the terminal device may be a PC, a server, or the like on which a software client, software system, or software application capable of processing the infrared image with this method is installed.
  • The specific hardware implementation environment may take the form of a general-purpose computer, an ASIC, an FPGA, or a programmable extension platform such as Tensilica's Xtensa platform.
  • an embodiment of the present invention provides an image preview method based on iris recognition. As shown in FIG. 1 , the method includes the following steps:
  • The application can control the infrared lens to be turned on, and can further control the infrared fill light to be turned on. The infrared lens can then capture infrared images of the subject at which the infrared image is directed in real time.
  • the first non-infrared image described above includes a replacement pattern of the iris pattern.
  • the iris pattern of the infrared image in the embodiment of the present invention is a partial image of the infrared image, and the partial image includes an iris pattern or an eyeball pattern.
  • the replacement pattern of the iris pattern includes a pattern capable of indicating the position of the user's eyeball, thereby facilitating the user to move based on the pattern, and further displaying the user's iris pattern at a suitable position in the image preview area.
  • The position of the replacement pattern of the iris pattern in the non-infrared image is determined by the position of the iris pattern in the infrared image. When the subject at which the infrared image is directed moves, its position in the image capture area of the infrared lens changes, so the replacement pattern moves within the non-infrared image along with the subject. The subject can therefore complete the eyeball alignment, which ensures that the user completes iris registration and recognition while no infrared iris pattern appears in the image preview area, reducing the user's resistance.
  • For example, the position of the replacement pattern b1 of the iris pattern a1 in the non-infrared image b is the same as the position of the iris pattern a1 in the infrared image a.
  • the replacement pattern of the iris pattern in the embodiment of the present invention may be a pre-stored non-infrared image (for example, a cartoon pattern or other warm tone pattern) or a non-infrared iris pattern collected by the non-infrared lens when the infrared lens captures the infrared image.
  • the present invention can obtain the first non-infrared image of the infrared image by two different implementation manners, specifically:
  • Embodiment 1:
  • In this implementation, the iris recognition-based image preview apparatus is applied to a terminal that includes an infrared lens and a non-infrared lens facing the same side.
  • The terminal includes infrared lenses 21 (such as 21a and 21b in FIG. 3), non-infrared lenses 22 (such as 22a and 22b in FIG. 3), and infrared fill lights 23 (such as 23a and 23b in FIG. 3), all facing the same side.
  • Facing the same side means that the lenses face the same direction, for example all facing the front of the terminal.
  • For the layout of the infrared lens 21 and the non-infrared lens 22, reference may be made to FIG. 3a.
  • The infrared lens 21 and the non-infrared lens 22 in the terminal may be arranged horizontally, or may be arranged vertically as shown in FIG. 3b; there is no limitation here.
  • The basic principle of the technical solution provided by this embodiment is: when the terminal device includes both an infrared lens and a non-infrared lens, the position area that the infrared image captured by the infrared lens occupies in the non-infrared image captured at the same time by the non-infrared lens is determined in advance according to the positional relationship and the angular relationship between the two lenses; during iris registration and recognition, that area of the non-infrared image is then cut out and used for the image preview.
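  • Once the blank ratios on each side are known for the assembled hardware, cutting the preview region out of each live non-infrared frame is a simple slice. The following is a minimal sketch; the function and parameter names are illustrative, not the patent's.

```python
def crop_preview(frame, left_ratio, right_ratio, top_ratio, bottom_ratio):
    """Cut out of a non-infrared frame the sub-image corresponding to the
    infrared lens's capture area. `frame` is a list of pixel rows; the
    ratios are the blank fractions on each side, which are fixed once the
    two lenses are assembled."""
    h, w = len(frame), len(frame[0])
    x0, x1 = round(w * left_ratio), round(w * (1 - right_ratio))
    y0, y1 = round(h * top_ratio), round(h * (1 - bottom_ratio))
    return [row[x0:x1] for row in frame[y0:y1]]

# A 10 x 10 dummy frame with 20% blank on every side -> 6 x 6 preview crop.
frame = [[(r, c) for c in range(10)] for r in range(10)]
crop = crop_preview(frame, 0.2, 0.2, 0.2, 0.2)
print(len(crop), len(crop[0]))  # -> 6 6
```

In a real implementation the frame would be a camera buffer and the slice would be performed per preview frame.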
  • step 101 the method further includes the following steps:
  • step 102 specifically includes the following steps:
  • 102A1. Determining the image area occupied, in the second non-infrared image, by the subject at which the infrared image is directed.
  • The field of view S1 of the infrared lens 21 (the capture range of the infrared lens 21) and the field of view S2 of the non-infrared lens 22 have an overlapping region.
  • The collection area D1 of the infrared lens falls completely within the field of view of the non-infrared lens; that is, everything the infrared lens can capture also falls within the field of view of the non-infrared lens.
  • The portion of the non-infrared image in which the subject of the infrared image is located lies within the collection area D1.
  • In this embodiment, the area may be determined from the positional relationship and the angular relationship between the infrared lens and the non-infrared lens of the terminal device; that is, step 102A1 can be specifically implemented by the following step:
  • 102A1'. Determining, according to the angle of view of the infrared lens, the angle of view of the non-infrared lens, the relative position between the infrared lens and the non-infrared lens, and the resolution of the non-infrared lens, the image area occupied by the subject of the infrared image in the second non-infrared image.
  • The angle of view of a lens refers to the angle formed, with the lens as the apex, by the two edges of the maximum range of the lens's shooting area.
  • the angle of view of the lens includes a horizontal field of view and a vertical field of view, wherein the horizontal and vertical directions are perpendicular to each other.
  • The horizontal direction and the vertical direction may respectively be the directions of the two sides of the mobile phone screen; for example, the direction of the short side of the screen is the horizontal direction and the direction of the long side is the vertical direction, as in the examples below, though the reverse is also possible.
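  • From the definition of the angle of view above, the width a lens covers at a given distance follows directly: the two edge rays sit at plus and minus half the angle of view from the optical axis. A minimal sketch (the function name is illustrative):

```python
import math

def capture_width(distance, fov_deg):
    """Width of the area a lens covers at `distance`, from the angle-of-view
    definition above: the two edge rays are at +/- fov/2 from the optical
    axis, so the covered width is 2 * distance * tan(fov/2)."""
    return 2 * distance * math.tan(math.radians(fov_deg) / 2)

# At 30 cm, a 90-degree angle of view covers 2 * 30 * tan(45 deg) = 60 cm.
print(round(capture_width(30, 90), 3))  # -> 60.0
```

The same relation, applied separately to the horizontal and vertical angles of view, underlies the boundary calculations in this embodiment.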
  • a sensor is installed in the lens to detect whether the lens is currently in landscape orientation (the captured image is a landscape image) or portrait orientation (the captured image is a portrait image).
  • FIG. 6a shows a front view, top view, and side view of a terminal device with a lens; in FIG. 6a, the horizontal field of view of the lens (top view) is g1 and the vertical field of view of the lens (side view) is g2.
  • FIG. 6b likewise shows the terminal device with a lens; in FIG. 6b, the horizontal field of view of the lens (top view) is g2 and the vertical field of view of the lens (side view) is g1.
  • Here the viewing angle g1 is larger than the viewing angle g2.
  • The infrared lens's horizontal field of view (assumed to be α) is small, so when the infrared lens is installed in the terminal it is usually tilted by an angle (assumed to be θ) toward the vertical centerline of the terminal screen in order to better align with the two eyes. Other non-infrared lenses in the terminal usually have a higher resolution (8 megapixels or above) and a larger horizontal field of view (assumed to be β; a wide-angle lens), and are usually mounted without tilt. Viewed from the top of the terminal, the actual optical paths of the infrared lens and the non-infrared lens are as shown in FIG. 7.
  • the infrared lens and the non-infrared lens in the terminal used in the embodiment of the present invention are horizontally distributed, and the infrared lens has a horizontal tilt angle;
  • Step 102A1' can be specifically implemented as follows: determine the horizontal boundary of the image area occupied by the subject in the second non-infrared image from the horizontal field of view of the infrared lens, the horizontal field of view of the non-infrared lens, the horizontal tilt angle of the infrared lens, the distance between the two lenses, and the resolution of the non-infrared lens; and determine the vertical boundary of the image area from the vertical field of view of the infrared lens, the vertical field of view of the non-infrared lens, and the resolution of the non-infrared lens.
  • This example uses the non-infrared lens instead of the infrared lens for the real-time preview. Based on the horizontal width and position, and the vertical height and position, that the infrared lens's capture area occupies in the non-infrared image, the real-time non-infrared image collected by the non-infrared lens is cut (the infrared lens and the infrared light generator still work, but the user sees the image acquired by the non-infrared lens). This cutting method keeps the position of the iris in the preview as close as possible to its position in the infrared image captured by the infrared lens.
  • When determining the horizontal boundary of the image area occupied by the subject in the second non-infrared image, the device may specifically: calculate, according to the horizontal field of view of the infrared lens, the horizontal field of view of the non-infrared lens, the horizontal tilt angle of the infrared lens, and the distance between the two lenses, the ratios of the left and right blank widths of the image area to the horizontal width of the second non-infrared image; and then obtain the horizontal boundary of the image area from these ratios and the resolution of the non-infrared lens.
  • When determining the vertical boundary of the image area according to the vertical field of view of the infrared lens, the vertical field of view of the non-infrared lens, and the resolution of the non-infrared lens, the device may specifically: obtain, from the two vertical fields of view, the ratios of the upper and lower blank heights of the image area to the vertical height of the second non-infrared image; and then obtain the vertical boundary of the image area from these ratios and the resolution of the non-infrared lens.
  • Since the infrared lens and the non-infrared lens disposed in the terminal are horizontally distributed, assume that the distance between the infrared lens and the non-infrared lens is d, the vertical distance between the human eye and the front surface of the terminal is l, the horizontal tilt angle of the infrared lens is θ, and the horizontal angle of view of the infrared lens is α1. The relative position between the two lenses in the terminal and the actual optical path are as shown in FIG. 7.
  • The overlap of the horizontal areas imaged by the two lenses is divided into the three parts x, y, and z shown in FIG. 8, where y is the horizontal width imaged by the infrared lens at distance l, x+y+z is the horizontal width imaged by the non-infrared lens at that distance, N is the horizontal left boundary of the image area, and N' is the horizontal right boundary of the image area.
  • From the positions of the points A and B of the two lenses (i.e., the horizontal boundaries of the image acquired by the infrared lens), two distances y1 and y2 are obtained, from which the boundary positions and the proportions of the infrared image's horizontal range within the non-infrared image at distance l can be determined.
  • y/(x+y+z), i.e., the ratio of the horizontal width imaged by the infrared lens at distance l to the horizontal width imaged by the non-infrared lens at the same distance, is a constant value independent of both d and l; that is, once the terminal hardware is assembled, this ratio is fixed at any distance. By contrast, x/(x+y+z) and z/(x+y+z) change as l changes.
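  • The constancy of y/(x+y+z) can be checked with elementary trigonometry. The reconstruction below is an assumption consistent with the stated conclusion (the original formulas for y1 and y2 are not given above): it takes α1 as the infrared lens's horizontal angle of view, θ as its tilt toward the screen centerline, and β1 as the non-infrared lens's horizontal angle of view.

```latex
% Assumed horizontal geometry: the infrared lens's two edge rays make angles
% \alpha_1/2 + \theta and \alpha_1/2 - \theta with the normal of the terminal.
\begin{aligned}
y &= y_1 + y_2
   = l\left[\tan\!\left(\tfrac{\alpha_1}{2}+\theta\right)
          + \tan\!\left(\tfrac{\alpha_1}{2}-\theta\right)\right],\\
x + y + z &= 2\,l\,\tan\!\left(\tfrac{\beta_1}{2}\right),\\
\frac{y}{x+y+z} &= \frac{\tan\!\left(\tfrac{\alpha_1}{2}+\theta\right)
                       + \tan\!\left(\tfrac{\alpha_1}{2}-\theta\right)}
                        {2\tan\!\left(\tfrac{\beta_1}{2}\right)}.
\end{aligned}
```

Since l cancels from the ratio, y/(x+y+z) depends only on the assembled angles, while x and z individually also depend on d, which is why x/(x+y+z) and z/(x+y+z) vary with l.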
  • Vertically, the infrared lens provided in the terminal has no tilt angle and is level with the non-infrared lens. Assume that the vertical distance between the human eye and the front surface of the terminal is l, the vertical field of view of the infrared lens is α2, and the vertical field of view of the non-infrared lens is β2. Viewed from the side of the terminal, the actual optical paths of the infrared lens and the non-infrared lens are as shown in FIG. 9.
  • The overlap of the vertical areas imaged by the two lenses is divided into the three parts a, b, and c shown in FIG. 9, where b is the vertical height imaged by the infrared lens at this distance, a+b+c is the vertical height imaged by the non-infrared lens at this distance, M is the vertical lower boundary of the image area, and M' is the vertical upper boundary of the image area.
  • a/(a+b+c) = (tan(β2/2) − tan(α2/2))/(2·tan(β2/2));
  • a/(a+b+c), b/(a+b+c), and c/(a+b+c) are all constant values independent of l; that is, after the device hardware is assembled, these ratios are fixed at any distance.
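  • The vertical blank ratio in the formula above depends only on the two vertical angles of view, so it can be computed once per device. A minimal sketch; the 40- and 70-degree values are invented for illustration:

```python
import math

def vertical_blank_ratio(ir_fov_deg, non_ir_fov_deg):
    """a/(a+b+c) from the formula above: the fraction of the non-infrared
    image's vertical height lying above (or below) the infrared lens's
    capture area, with both lenses level and at the same height."""
    t_ir = math.tan(math.radians(ir_fov_deg) / 2)
    t_non = math.tan(math.radians(non_ir_fov_deg) / 2)
    return (t_non - t_ir) / (2 * t_non)

# Example: infrared vertical FOV 40 deg, non-infrared 70 deg (invented).
ratio = vertical_blank_ratio(40, 70)
print(round(ratio, 4))  # -> 0.2401
```

Because the ratio contains no distance term, it can be baked into firmware at assembly time, exactly as the text states.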
  • In iris recognition the working distance is relatively fixed, about 25–35 cm, so l can be taken directly as a fixed value, such as 30 cm.
  • Alternatively, the current value of l can be approximated, which can be done with current iris recognition algorithms or eyeball capture algorithms.
  • Whether the value of l is accurate only affects whether the position of the human eye captured by the iris camera strictly coincides with the position photographed by the front camera; in fact, iris registration and recognition allow considerable latitude in the position of the human eye, and still work normally after the eye moves up or down a certain distance. An inaccurate value of l is therefore not a serious problem.
  • The first and simplest method above can meet the requirements of most scenarios.
  • For example, assume the horizontal left blank ratio is 0.4, the horizontal right blank ratio is 0.3, the vertical upper and lower blank ratio is 0.3, and the non-infrared lens resolution is 1800*3200.
  • Based on the above values, the horizontal left boundary of the image area occupied by the subject in the second non-infrared image is the 720th pixel of the non-infrared image counted from left to right, the horizontal right boundary is the 1260th pixel counted from left to right, the vertical lower boundary is the 960th pixel counted from bottom to top, and the vertical upper boundary is the 2240th pixel counted from bottom to top; specifically, the image can be cut along the cutting lines shown in FIG. 10.
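  • The pixel boundaries follow from the blank ratios by straight multiplication with the sensor resolution. A sketch, assuming ratios of 0.4 left, 0.3 right, and 0.3 top and bottom with the 1800 x 3200 resolution quoted above; the function name is illustrative:

```python
def pixel_boundaries(width_px, height_px, left_ratio, right_ratio, tb_ratio):
    """Convert the fixed blank ratios into pixel cut lines on the
    non-infrared image: each boundary is simply ratio * resolution."""
    left = round(width_px * left_ratio)           # horizontal left boundary
    right = round(width_px * (1 - right_ratio))   # horizontal right boundary
    bottom = round(height_px * tb_ratio)          # vertical lower boundary
    top = round(height_px * (1 - tb_ratio))       # vertical upper boundary
    return left, right, bottom, top

print(pixel_boundaries(1800, 3200, 0.4, 0.3, 0.3))  # -> (720, 1260, 960, 2240)
```

Because the ratios are fixed at assembly time, these four pixel values can be computed once and reused for every preview frame.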
  • Embodiment 2:
  • Embodiment 1 above, which cuts the non-infrared image collected by the non-infrared lens to preview the image during iris registration and recognition, works well when light is sufficient. When light is insufficient (the light intensity can be obtained from the light sensor, with a threshold set to judge whether light is sufficient), the picture captured by the non-infrared lens may be dark, perhaps too dark to see anything, and the method of Embodiment 1 then works poorly.
  • The technical solution provided by this embodiment therefore does not use a non-infrared lens to acquire a non-infrared image; instead, it directly replaces the infrared preview image with a pre-stored non-infrared image (such as a cartoon pattern or another warm-tone pattern).
  • In this case, the process in step 102 of obtaining the non-infrared image that includes the replacement pattern of the iris pattern specifically includes the following steps:
  • the background image and the replacement pattern in this embodiment are both non-infrared images.
  • step 103 specifically includes:
  • The position of the iris pattern in the infrared image may be the coordinates of the center point of the iris pattern in the infrared image, or the coordinates of at least one edge point (preferably a plurality of edge points) of the iris pattern in the infrared image.
  • When the replacement pattern is displayed at the position to be displayed, it may be directly superimposed on the upper layer of the background image, i.e., one image floats above another; or the replacement pattern may replace the portion of the background image corresponding to the position to be displayed, with the replacement pattern and the background image merged into a new image that is then displayed.
  • Alternatively, this embodiment may first combine the replacement pattern with the background image into one image and then directly display the combined image in the preview area; there is no limitation here.
  • further, before step 103B2, the method further includes:
  • C1. Obtain a first ratio, the first ratio being the ratio of the area of the iris pattern to the area of the infrared image.
  • C2. Determine the size of the replacement pattern of the iris pattern according to the first ratio, such that the ratio of the area of the replacement pattern of the iris pattern to the area of the background image equals the first ratio.
  • based on step C2, step 103B2 may specifically be implemented by displaying the replacement pattern of the iris pattern, at the size determined for it, at the to-be-displayed position in a layer above the background image. In this way, the user can see the size of the iris pattern on the screen and thereby perceive how far away the terminal device's screen is, so as to adjust the distance between the terminal device and the subject captured in the infrared image.
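Steps C1 and C2 together fix the on-screen size of the replacement pattern. A minimal sketch, assuming rectangular patterns given as (width, height) tuples and preserving the iris pattern's aspect ratio (the aspect-ratio choice is an assumption; the patent only constrains the area ratio):

```python
import math

def replacement_size(iris_wh, ir_wh, bg_wh):
    """Scale the replacement pattern so that its share of the background
    image's area equals the iris pattern's share of the infrared image's
    area (the "first ratio" of steps C1/C2)."""
    ratio = (iris_wh[0] * iris_wh[1]) / (ir_wh[0] * ir_wh[1])  # step C1
    target_area = ratio * bg_wh[0] * bg_wh[1]                  # step C2
    aspect = iris_wh[0] / iris_wh[1]          # keep the iris proportions
    height = math.sqrt(target_area / aspect)
    return round(aspect * height), round(height)
```

For example, an iris occupying 100x100 pixels of a 400x400 infrared frame takes 1/16 of it, so on an 800x800 background the replacement pattern is drawn at 200x200.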
  • the background image has a positioning identifier
  • the position of the positioning identifier in the background image is a position where the iris recognition program expects the iris
  • the positioning identifier may be a specific pattern, for example a black dot, or a pattern with the same shape as the replacement pattern, and the like.
  • from the user's point of view, the replacement pattern of the iris pattern gradually moves toward the positioning identifier as the user performs the eyeball alignment operation, so that an infrared image with clearer iris detail features can be obtained through this alignment.
  • the iris detail feature is extracted from the infrared image, and the iris detail feature is saved to complete registration, or the iris detail feature is used for feature comparison to complete the recognition process.
  • further, when the recognition result is a successful match, the terminal device may be unlocked, certain applications in the terminal device may be launched, or a specific function of certain applications in the terminal device (such as payment) may be activated.
  • illustratively, after the background image is displayed in the image preview area, the replacement pattern of the infrared eyeball image can be brought to the position of the indicator image by adjusting the user's distance and up/down/left/right orientation, which is equivalent to aligning the eyes to the correct position within the iris pattern.
  • for example, as shown in FIG. 12a, the background image of the image preview area is a cartoon image missing two circles (only an example; the size, shape, and pattern of the image are not limited here), and the replacement pattern of the infrared eyeball image matches the images corresponding to those two circles; the user can adjust his or her distance and position until the replacement pattern of the iris pattern is displayed in the two circles missing from the cartoon image.
  • alternatively, as shown in FIG. 12b, the background image of the image preview area is a non-infrared image showing two star icons (again only an example; the size, shape, and pattern of the image are not limited here), and the user can adjust his or her distance and position until the replacement pattern of the iris pattern is displayed at the position of the star icons; to facilitate eyeball alignment, the replacement pattern of the infrared iris pattern may further be set to the star-icon pattern.
  • the cartoon puzzle method is mainly used to guide the user's eyes to the most ideal recognition position.
  • the solution provided in this embodiment can be used whether the light is sufficient or not; however, since seeing one's own face during registration and recognition is more intuitive, it is recommended to preview with the scheme of Embodiment 1 when the light is sufficient and with the scheme of Embodiment 2 when the light is insufficient.
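The recommendation above amounts to a simple switch driven by the ambient light sensor. The threshold value and mode names in this sketch are illustrative assumptions; the patent only says a threshold separates "sufficient" from "insufficient" light:

```python
LUX_THRESHOLD = 50  # illustrative cut-off; the patent leaves the value open

def choose_preview_mode(ambient_lux):
    """Pick the preview strategy from the light-sensor reading: crop the
    real non-infrared frame when light suffices (Embodiment 1), otherwise
    show the pre-stored cartoon/warm-tone preview (Embodiment 2)."""
    return "crop-non-infrared" if ambient_lux >= LUX_THRESHOLD else "stored-pattern"
```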
  • the solution acquires the real-time position of the human eye through the eyeball capture technology (through the infrared image), and then does not display the real-time infrared preview image on the image preview area of the screen, but displays the non-infrared image including the infrared iris pattern.
  • so as to complete the eyeball alignment step of the iris registration or recognition process; because the user no longer has to face a self-portrait image captured under infrared light, the user's resistance is avoided and the user experience is greatly improved.
  • the foregoing has mainly introduced the solutions provided by the embodiments of the present invention from the perspective of the iris-recognition-based image preview device and the terminal to which the device is applied.
  • the device includes corresponding hardware structures and/or software modules for performing various functions in order to implement the above functions.
  • combining the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is executed in hardware or in computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of the present invention.
  • in the embodiments of the present invention, the iris-recognition-based image preview device may be divided into function modules according to the foregoing method examples.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • in the case of dividing function modules according to each function, FIG. 13 shows a possible schematic structural diagram of the iris-recognition-based image preview apparatus involved in the above embodiments.
  • the iris-recognition-based image preview apparatus 4 includes an acquisition module 41, a control module 42, and a determination module 43.
  • the acquisition module 41 is configured to support the iris-recognition-based image preview device in performing steps 101 and 102 in FIG. 1; the control module 42 is configured to support the iris-recognition-based image preview device in performing step 103 in FIG. 1.
  • further, the acquisition module 41 is specifically configured to support the iris-recognition-based image preview device in performing step 101A and steps 102A1 and 102A2 in FIG. 4; still further, the acquisition module 41 is specifically configured to support the iris-recognition-based image preview device in performing step 102A1' of Embodiment 1.
  • further, the acquisition module 41 is specifically configured to support the iris-recognition-based image preview device in performing steps 102B, 103B1, and 103B2 in FIG. 11; still further, the acquisition module 41 is specifically configured to support the iris-recognition-based image preview device in performing step C1 above.
  • the determination module 43 is specifically configured to support the iris-recognition-based image preview device in performing step C2 above. For all related content of the steps involved in the foregoing method embodiments, reference may be made to the function descriptions of the corresponding function modules; details are not repeated here.
  • in a hardware implementation, the foregoing acquisition module 41, control module 42, and determination module 43 may be a processor.
  • the programs corresponding to the actions performed by the above iris-recognition-based image preview device may be stored in software form in a memory of the iris-recognition-based image preview device, so that the processor can invoke and execute the operations corresponding to each of the above modules.
  • FIG. 14 shows a possible structural diagram of the iris recognition-based image preview apparatus involved in the above embodiment.
  • the iris recognition based image preview device 5 includes a processor 51, a memory 52, a system bus 53, and a communication interface 54.
  • the memory 52 is used to store computer execution code
  • the processor 51 is connected to the memory 52 through the system bus 53.
  • when the apparatus runs, the processor 51 is configured to execute the computer-executable code stored in the memory 52 so as to perform any one of the iris-recognition-based image preview methods provided by the embodiments of the present invention; for example, the processor 51 is configured to support the iris-recognition-based image preview device in performing all of the steps in FIGS. 1, 4, and 11, and/or other processes of the techniques described herein.
  • for the specific iris-recognition-based image preview method, reference may be made to the related descriptions herein and in the accompanying drawings; details are not repeated here.
  • the embodiment of the invention further provides a storage medium, which may include a memory 52.
  • the embodiment of the invention further provides a computer program, which can be directly loaded into the memory 52 and contains software code, and the computer program can be loaded and executed by the computer to implement the above-mentioned iris recognition-based image preview method.
  • the processor 51 can be a processor or a collective name for a plurality of processing elements.
  • the processor 51 can be a central processing unit (CPU).
  • the processor 51 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, which can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the processor 51 may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio frequency processing chip, and the like.
  • the processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the dedicated processor may also include a chip having other specialized processing functions of the device.
  • the steps of the method described in connection with the present disclosure may be implemented in a hardware manner, or may be implemented by a processor executing software instructions.
  • the software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in the terminal device.
  • the processor and the storage medium can also exist as discrete components in the terminal device.
  • the system bus 53 may include a data bus, a power bus, a control bus, a signal status bus, and so on; for clarity of description in this embodiment, the various buses are all illustrated as the system bus 53 in FIG. 14.
  • the Communication interface 54 may specifically be a transceiver on the device.
  • the transceiver can be a wireless transceiver.
  • the wireless transceiver can be an antenna or the like of the device.
  • the processor 51 communicates with other devices through the communication interface 54; for example, if the apparatus is a module or component of the terminal device, the apparatus is used for data interaction with other modules in the terminal device; for instance, the apparatus exchanges data with the display module of the terminal device and controls the display module to display the non-infrared image or the infrared image in the image preview area.
  • when the iris-recognition-based image preview apparatus involved in the above embodiments is a terminal device, FIG. 15 shows a possible schematic structural diagram of that apparatus.
  • the apparatus includes a processor 61, a memory 62, a system bus 63, a communication interface 64, an infrared lens 65, a non-infrared lens 66, and a display unit 67; the processor 61 is connected to the infrared lens 65 and the non-infrared lens 66 so as to obtain, for image processing, the infrared image collected by the infrared lens 65 and the non-infrared image collected by the non-infrared lens 66.
  • the processor 61 is also connected to the display unit 67 and is configured to control the display unit 67 to display images.
  • the display unit 67 can be used to display information input by the user or information provided to the user as well as various menus of the terminal.
  • the display unit 67 may include a display panel.
  • the display panel may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • the touch screen may cover the display panel, and when the touch screen detects a touch operation on or near it, it is transmitted to the processor 61 to determine the type of the touch event, and then the processor 61 provides corresponding on the display panel according to the type of the touch event. Visual output.
  • an embodiment of the present invention further provides a robot, which includes the iris-recognition-based image preview apparatus corresponding to FIGS. 14 and 15.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Input (AREA)

Abstract

Embodiments of the present invention provide an image preview method and apparatus based on iris recognition, relating to the field of iris recognition, and used to solve the prior-art problem that, during processes requiring image preview such as iris registration or recognition, the iris pattern displayed in real time on the preview interface is a black-and-white physiological image of the eyeball captured by the iris camera. The method comprises: acquiring an infrared image collected by an infrared lens; when the infrared image contains an iris pattern, acquiring a first non-infrared image of the infrared image, the first non-infrared image containing a replacement pattern of the iris pattern, the position of the replacement pattern in the non-infrared image being determined by the position of the iris pattern in the infrared image; and displaying the first non-infrared image in an image preview area. The present invention is applied to iris recognition.

Description

一种基于虹膜识别的图像预览方法及装置 技术领域
本发明涉及虹膜识别技术领域,尤其涉及一种基于虹膜识别的图像预览方法及装置。
背景技术
目前随着大家对身份认证的安全性和实用性的重视程度逐渐提高,越来越多的电子设备(尤其是便携式电子设备)均开始配备虹膜识别功能。虹膜识别技术是基于眼睛中的虹膜进行身份识别,而虹膜是位于黑色瞳孔和白色巩膜之间的圆环状部分,由于每个人的虹膜在胎儿发育阶段形成后便不会发生变化,且虹膜中包含有很多相互交错的斑点、细丝、冠状、条纹、隐窝等的细节特征,因此,每个人的虹膜具备唯一性,也就决定了采用虹膜识别技术进行身份识别的唯一性。
现有的虹膜识别技术在硬件上需要额外的一颗红外摄像头(也可以和普通前向摄像头二合一,但从光路和技术上来说依然相当于两颗,只是物理结构上的合并)和一颗红外补光灯(用红外光照亮面部和眼部),然后通过该红外摄像头采集用户的眼球图像,将拍摄到的眼球图像用算法处理成模板进行保存或比对。然而,在虹膜注册或识别或解锁等需要进行图像预览的过程中,由于虹膜摄像头预览时用户看到的是面部一定区域(包括眼部)内的黑白生理图像,从而使得用户产生不适感。
发明内容
本发明的实施例提供一种基于虹膜识别的图像预览方法及装置,可用以解决现有技术在进行虹膜注册或识别等需要进行图像预览的过程中,预览界面实时显示的虹膜图案为虹膜摄像头拍摄到的面部一定区域(包括眼部)内黑白生理图像的问题。
为达到上述目的,本发明的实施例采用如下技术方案:
第一方面,提供一种基于虹膜识别的图像预览方法,包括:
获取由红外镜头采集的红外图像;
在所述红外图像中包含虹膜图案的情况下,获取所述红外图像对应的第一非红外图像;所述第一非红外图像包含所述虹膜图案的替换图案,所述替换图案在所述第一非红外图像中的位置由所述虹膜图案在所述红外图像中的位置决定;
将所述第一非红外图像在图像预览区域内显示。
第二方面,提供一种基于虹膜识别的图像预览装置,包括:
获取模块,用于获取由红外镜头采集的红外图像;
所述获取模块,还用于在所述红外图像中包含虹膜图案的情况下,获取所述红外图像对应的第一非红外图像,所述第一非红外图像包含所述虹膜图案的替换图案,所述替换图案在所述第一非红外图像中的位置由所述虹膜图案在所述红外图像中的位置决定;
控制模块，用于将所述获取模块获取的所述第一非红外图像在图像预览区域内显示。
第三方面,提供一种基于虹膜识别的图像预览装置,该装置的结构中包括处理器,该处理器被配置为支持该装置执行上述方法中相应的功能。该装置还可以包括存储器,该存储器用于与处理器耦合,其储存上述基于虹膜识别的图像预览装置所用的计算机软件代码,其包含用于执行上述方面所设计的程序。
第四方面,提供一种计算机存储介质,用于储存为基于虹膜识别的图像预览装置所用的计算机软件指令,其包含执行第一方面所述的基于虹膜识别的图像预览方法所设计的程序代码。
第五方面,提供一种计算机程序,可直接加载到计算机的内部存储器中,并含有软件代码,该计算机程序经由计算机载入并执行后能够实现第一方面所述的基于虹膜识别的图像预览方法。
第六方面,提供一种机器人,该机器人包括第三方面所述的基于虹膜识别的图像预览装置。
本发明实施例提供的方案在确定由红外镜头采集的红外图像中包含虹膜图案的情况下,获取该红外图像的非红外图像,该非红外图像中包含该虹膜图案的替换图案,然后将该非红外图像显示在该图像预览区域中,从而避免用户观看到包含虹膜图案的红外图像。同时,由于该虹膜图案的替换图案在该非红外图像中的位置由该虹膜图案在红外图像中的位置决定,也即在一时间段内,替换图案在非红外图像中的位置会随着虹膜图案在红外图像中的位置的移动而移动;使得用户能够通过观看预览区域的非红外图像中替换图案的位置,实时调整被拍摄者的眼球(也可说是人脸)、或者调整红外镜头的拍摄方向,以完成眼球对正,进而使得红外镜头能够采集到在虹膜注册、识别、解锁等过程中需使用的红外图像。因此,该方案在保证用户正常完成虹膜注册、识别、解锁等需求的前提下,减少用户在预览过程中对于红外虹膜图案的抵触情绪。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的一种基于虹膜识别的图像预览方法的方法流程图;
图2a为本发明实施例提供的一种包含虹膜图案的替换图案的非红外图像在图像预览区域内显示的示意图;
图2b为本发明实施例提供的另一种包含虹膜图案的替换图案的非红外图像在图像预览区域内显示的示意图;
图3为本发明实施例提供的一种终端中红外镜头和非红外镜头的排布示意图;
图4为本发明实施例提供的另一种基于虹膜识别的图像预览方法的方法流程图;
图5为本发明实施例提供的一种红外镜头与非红外镜头的视场范围示意图;
图6a为本发明实施例提供的一种镜头的视场角示意图;
图6b为本发明实施例提供的另一种镜头的视场角示意图;
图7为本发明实施例提供的一种终端中的红外镜头与非红外镜头的实际光路的俯视图;
图8为本发明实施例提供的一种终端中的红外镜头与非红外镜头的拍摄重叠区域的俯视图;
图9为本发明实施例提供的一种终端中的红外镜头与非红外镜头的拍摄重叠区域的侧视图;
图10为本发明实施例提供的非红外图像的切割示意图;
图11为本发明实施例提供的又一种基于虹膜识别的图像预览方法的方法流程图;
图12a为本发明实施例提供的一种通过拼图方式在图像预览区域内显示非红外图像的示意图;
图12b为本发明实施例提供的一种通过眼球捕获技术在图像预览区域内显示非红外图像的示意图;
图13为本发明实施例提供的一种基于虹膜识别的图像预览装置的结构示意图;
图14为本发明实施例提供的另一种基于虹膜识别的图像预览装置的结构示意图;
图15为本发明实施例提供的又一种基于虹膜识别的图像预览装置的结构示意图。
具体实施方式
下面对本申请中所涉及的部分术语进行解释,以方便读者理解:
“虹膜”,为位于眼睛的黑色瞳孔和白色巩膜之间呈织物状各色圆环状部分,具有丰富且不同的纹理图案,人的虹膜直径一般在11毫米到12毫米之间,且每个人的虹膜在胎儿发育阶段形成后便不会发生变化,虹膜中包含有很多相互交错的像冠、水晶体、细丝、斑 点、结构、隐窝、射线、皱纹和条纹等的细节特征,这些细节特征决定了虹膜特征的唯一性,同时也决定了身份识别的唯一性。
“虹膜识别”,是基于眼睛中的虹膜进行身份识别,通常需要四个步骤,分为虹膜图案获取、图像预处理、虹膜特征提取以及特征保存(注册时)或特征匹配(识别时),主要应用于安防设备(如门禁等)以及有高度保密需求的场所。其中,虹膜图案获取是指使用红外镜头对人的整个眼部进行拍摄得到红外虹膜图案;图像预处理是指对拍摄到的虹膜图案进行包括图像平滑、边缘检测、图像分离等预处理操作;虹膜特征提取是指通过特定算法从虹膜图案中提取出虹膜的独特细节特征点,并对其进行编码;特征匹配是指根据提取出的虹膜细节特征的特征编码与数据库中事先存储的虹膜图案的虹膜细节特征的特征编码进行比对、验证,从而达到识别的目的。
“红外镜头”,为红外摄像头,通常与红外光发生器(例如,红外补光灯)成对出现,红外光发生器用于产生红外光来照亮红外图像所针对的拍摄对象的面部和眼部,再经红外镜头对经红外光照射的红外图像所针对的拍摄对象的面部和眼部进行拍摄,从而得到包含红外图像所针对的拍摄对象的眼球图像的红外图像,通常为黑白或偏绿色图像。
“非红外镜头”,为除红外摄像头以外的其他镜头,例如,智能手机中的普通前置摄像头,本发明对此不作限定。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。如果不加说明,本文中的“多个”是指两个或两个以上。
需要说明的是,本发明实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本发明实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者 “例如”等词旨在以具体方式呈现相关概念。
需要说明的是,本发明实施例中,除非另有说明,“多个”的含义是指两个或两个以上。
需要说明的是,本发明实施例中,“的(英文:of)”,“相应的(英文:corresponding,relevant)”和“对应的(英文:corresponding)”有时可以混用,应当指出的是,在不强调其区别时,其所要表达的含义是一致的。
下面将结合本发明实施例的说明书附图,对本发明实施例提供的技术方案进行说明。显然,所描述的是本发明的一部分实施例,而不是全部的实施例。需要说明的是,下文所提供的任意多个技术方案中的部分或全部技术特征在不冲突的情况下,可以结合使用,形成新的技术方案。
本发明实施例所提供的技术方案的基本原理为:在进行虹膜注册、识别、解锁等需要进行图像预览的过程中,若红外镜头采集到的红外图像包含虹膜图案,则为该虹膜图案选择一个替换图案(为非红外图像),并按照该虹膜图案在该红外图像中的位置,将该替换图案显示在该图像预览区域中,从而避免红外图像所针对的拍摄对象观看到包含虹膜图案的红外图像,以更好的体验完成虹膜注册和识别过程,减少用户的抵触情绪。
本发明实施例提供的基于虹膜识别的图像预览方法的执行主体可以为基于虹膜识别的图像预览装置或者可以用于执行上述基于虹膜识别的图像预览方法的终端设备。其中,基于虹膜识别的图像预览装置可以为上述终端设备中的中央处理器(Central Processing Unit,CPU)、CPU与存储器等硬件的组合、或者可以为上述终端设备中的其他控制单元或者模块。
示例性的,上述终端设备可以为采用本发明实施例提供的方法对红外镜头采集的红外图像进行分析的个人计算机((personal computer,PC)、上网本、个人数字助理(英文:Personal Digital Assistant,简称:PDA)、服务器等,或者上述终端设备可以为安装 有可以采用本发明实施例提供的方法对红外镜头采集红外图像进行处理的软件客户端或软件系统或软件应用的PC、服务器等,具体的硬件实现环境可以通用计算机形式,或者是ASIC的方式,也可以是FPGA,或者是一些可编程的扩展平台例如Tensilica的Xtensa平台等等。
基于上述内容,本发明的实施例提供一种基于虹膜识别的图像预览方法,如图1所示,该方法包括如下步骤:
101、获取由红外镜头采集的红外图像。
示例性的,在开启终端设备中基于虹膜识别的应用后,该应用可以控制打开红外镜头、进一步的可控制打开红外补光灯,此时,红外镜头便可针对红外图像所针对的拍摄对象实时采集红外图像。
102、在红外图像中包含虹膜图案的情况下,获取红外图像的第一非红外图像。
其中,上述的第一非红外图像包含虹膜图案的替换图案。本发明实施例中的红外图像的虹膜图案为该红外图像的部分图像,该部分图像中包含虹膜图案或眼球图案。而该虹膜图案的替换图案中包含能够指示用户眼球位置的图案,从而方便用户基于该图案进行移动,进一步的可将用户的虹膜图案显示在该图像预览区域的合适位置。
在本实施例中,由于红外图像中的虹膜图案的替换图案在该非红外图像中的位置由该虹膜图案在红外图像中的位置决定,因此,当红外图像所针对的拍摄对象移动时,由于红外图像所针对的拍摄对象在红外镜头的图像采集区域内的位置发生变化,而使得虹膜图案的替换图案在非红外图像中的位置随着红外图像所针对的拍摄对象而移动,从而使得红外图像所针对的拍摄对象能够完成眼球对正,进而在保证图像预览区域内不出现红外虹膜图案的同时,还可以保证用户完成虹膜注册和识别的过程,减少用户的抵触情绪。
参考图2a、图2b所示的虹膜图案的替换图案在非红外图像中的位置示意图，红外图像a中的虹膜图案a1的替换图案b1在非红外图像b中的位置，与该虹膜图案a1在红外图像a中的位置相同。
示例性的,本发明实施例中虹膜图案的替换图案可以为预存的非红外图像(例如,卡通图案或其他暖色调图案)或非红外镜头在红外镜头采集红外图像时采集的非红外虹膜图案。
103、将第一非红外图像在图像预览区域内显示。
在不同的应用场景下,本发明可以通过两种不同的实现方式来获取该红外图像的第一非红外图像,具体的:
实施例一:
本实施例提供的基于虹膜识别的图像预览方法的执行主体基于虹膜识别的图像预览装置应用于包含朝向同侧的红外镜头和非红外镜头的终端,如图3a、3b所示,该终端包括:朝向同侧的红外镜头21(如图3中的21a和21b)、非红外镜头22(如图3中的22a和22b)以及红外补光灯23(如图3中的23a和23b)。所谓朝向同侧是指同为前置或同为后置。具体的,上述的红外镜头21与非红外镜头22的布局具体可以参照图3a,该终端中的红外镜头21和非红外镜头22可以水平排布,也可以参考图3b为二者垂直排布,这里并不做限定。
本实施例所提供的技术方案的基本原理为:当该终端设备包含红外镜头和非红外镜头两种镜头时,根据终端设备的红外镜头和非红外镜头的位置关系和角度关系,在虹膜注册识别等程序中提前确定红外镜头拍摄的红外图像中的红外图像所针对的拍摄对象在同一时间非红外镜头拍摄的非红外图像中的位置区域,然后将非红外镜头拍摄的非红外图像中该区域的图像进行预览。
示例性的,如图4所示,在实现步骤101的同时,该方法还包括如下步骤:
101A、获取由红外镜头采集的红外图像,并在获取由红外镜头采集的红外图像时,获取由非红外镜头采集的第二非红外图像。
进一步的,基于步骤101A,上述的步骤102具体包括如下步骤:
102A1、确定红外图像所针对的拍摄对象在该第二非红外图像中所占据的图像区域。
一般而言,红外镜头21的视场范围S1(可拍摄的范围)与非红外镜头22的视场范围S2存在交叠区域。参考附图5,通常在距离镜头大于阈值d的位置处,例如图5中在距离镜头(终端)d’(d’大于d)的位置处,红外镜头的采集区域D1完全落入非红外镜头的采集区域D2内。也就意味着,当红外图像所针对的拍摄对象距离终端d’时,红外镜头所能采集到的部分会完全落入非红外镜头的视场范围内。
102A2、从第二非红外图像中提取出位于该图像区域的图像,作为第一非红外图像。
具体的,参考图5,当红外图像所针对的拍摄对象距离终端d’时,从非红外图像中,将红外图像所针对的拍摄对象位于采集区域D1的部分提取出来。
可选的,本实施例在确定红外图像所针对的拍摄对象在第二非红外图像中所占据的图像区域时,可以基于终端设备的红外镜头和非红外镜头间的位置关系和角度关系来确定,即上述的步骤102A1具体可以通过如下步骤实现:
102A1’、根据红外镜头的视场角、非红外镜头的视场角、红外镜头与非红外镜头间的相对位置以及非红外镜头的分辨率,确定红外图像所针对的拍摄对象在第二非红外图像中所占据的图像区域。
示例性的,上述镜头的视场角可以指以该镜头为顶点,该镜头的拍摄区域的最大范围的两条边缘构成的夹角。其中,镜头的视场角包括水平视场角和垂直视场角,其中水平和垂直两个方向相互垂直,以手机为例,水平方向和垂直方向分别可以是手机屏两个边的方向,例如:手机屏短边的方向作为水平方向,长边的方向作为垂直方向,以下示例中以此为例,当然,反之也是可以的。
一般的,镜头中都会安装传感器来检测该镜头当前是横向拍摄(拍摄出的图像为横向图像)还是纵向拍摄(拍摄出的图像为纵向图像)。示例性的,以具有镜头的终端为例,当镜头采集到的虹膜图 案为纵向图像时,由于该终端的短屏端与该纵向图像的短边平齐,则镜头的水平视场角小于该镜头的垂直视场角,如图6a所示,图6a示出了具有镜头的终端设备的正视图、俯视图和侧视图,参考图6a的俯视图镜头的水平视场角为g1,参考图6a的侧视图镜头的垂直视场角为g2;当镜头采集到的虹膜图案为横向图像,由于该终端的短屏端与该纵向图像的长边平齐,则镜头的水平视场角大于该镜头的垂直视场角,图6b示出了具有镜头的终端设备的正视图、俯视图和侧视图,参考图6b的侧视图镜头的水平视场角为g2,参考图6b的俯视图镜头的垂直视场角为g1。其中,上述的视场角g1大于视场角g2。
通常情况下,由于红外镜头分辨率较低(200-500MP),水平视场角(假设为β)较小,因此当终端内部设置有红外镜头时,为了更好的对正两眼,通常会采用一个倾角(假设为α),将红外镜头向终端设备屏幕的垂直中线倾斜,而终端内部设置的其他非红外镜头分辨率通常较高(800MP以上),水平视场角(假设为γ)较大(广角镜头),通常会水平放置,若以终端顶部俯视来说明,该终端中的红外镜头与非红外镜头的实际光路如图7所示。
因此,进一步的,若本发明实施例所应用的终端中的红外镜头与非红外镜头水平分布,且该红外镜头具有水平倾角时;
步骤102A1’具体可以通过如下过程来实现:根据红外镜头的水平视场角、非红外镜头的水平视场角、红外镜头的水平倾角、红外镜头与非红外镜头间的距离以及非红外镜头的分辨率,确定红外图像所针对的拍摄对象在第二非红外图像中所占据的图像区域的水平边界,以及根据红外镜头的垂直视场角、非红外镜头的垂直视场角以及非红外镜头的分辨率,确定图像区域的垂直边界。
示例性的,在进行虹膜注册或识别等需要进行图像预览的过程中,本示例使用非红外镜头替代红外镜头来进行实时的预览,则可以取如上红外镜头的红外图像所针对的拍摄对象在非红外镜头采集的非红外图像中的水平宽度和位置,以及红外镜头的红外图像所针 对的拍摄对象在非红外镜头采集的非红外图像中垂直高度和位置,在非红外镜头采集到的实时非红外图像中进行切割(红外镜头和红外光发生器依然工作,只不过用户看到的图像是非红外镜头采集到的,此切割方法能够尽量保持切割后图像和红外镜头拍到的红外图像中虹膜位置重合)。
进一步的,该装置在确定红外图像所针对的拍摄对象在第二非红外图像中所占据的图像区域的水平边界时,具体可以:根据红外镜头的水平视场角、非红外镜头的水平视场角、红外镜头的水平倾角以及红外镜头与非红外镜头间的距离,求取该图像区域的两侧留空宽度在第二非红外图像的水平宽度中的宽度占比,然后根据该宽度占比以及非红外镜头的分辨率,得到图像区域的水平边界。当该装置在根据红外镜头的垂直视场角、所述非红外镜头的垂直视场角以及非红外镜头的分辨率,确定图像区域的垂直边界时,具体可以:根据红外镜头的垂直视场角、所述非红外镜头的垂直视场角,求取该图像区域的上下留空高度在第二非红外图像的垂直高度中的高度占比,然后,根据该高度占比以及非红外镜头的分辨率,得到图像区域的垂直边界。
示例性的,若终端内设置的红外镜头与非红外镜头水平分布,假设红外镜头和非红外镜头间的间距为d,人眼到终端前表面间的垂直距离为l,红外镜头的水平倾角α,水平视场角为β1,非红外镜头的水平视场角为为γ1时,若以终端顶部俯视来说明,该终端中的红外镜头与非红外镜头间的相对位置以及实际光路如图8所示。
如图8所示,两镜头拍摄的图像的水平区域的交叠中具体分割成如图8中的x,y,z三个部分,其中,y为红外镜头在此l距离时成像的水平宽度,x+y+z为非红外镜头在此l距离时成像的水平宽度,N为该图像区域的水平左边界,N’为该图像区域的水平右边界。此时,延两镜头的AB点(即红外镜头采集的图像的水平边界)位置分别做相距l距离平面的垂线,得到y1和y2两个距离,如下所示:
根据几何原理,可以得到如下几个等式:
x+y1+d=l*tan(γ1/2);
y1=l*tan(β1/2-α);
y2+d=l*tan(β1/2+α);
y2+z=l*tan(γ1/2);
y=y1+y2+d;
通过如上几个等式,可以推导出:
x=l*tan(γ1/2)-l*tan(β1/2-α)–d;
y=l*tan(β1/2-α)+l*tan(β1/2+α);
z=l*tan(γ1/2)-l*tan(β1/2+α)+d;
进一步,可确定红外镜头中红外图像所针对的拍摄对象于l距离处在非红外镜头中图像的水平范围的边界位置和关系:
x/(x+y+z)=(tan(γ1/2)-tan(β1/2-α)-d/l)/(2*tan(γ1/2));
y/(x+y+z)=(tan(β1/2-α)+tan(β1/2+α))/(2*tan(γ1/2));
z/(x+y+z)=(tan(γ1/2)-tan(β1/2+α)+d/l)/(2*tan(γ1/2));
可见,y/(x+y+z),也就是红外镜头在l距离处拍摄的红外图像的水平宽度与非红外镜头在l距离处拍摄的非红外图像的水平宽度的比值为一个恒定数值,与d和l无关,也就是说,终端硬件组装完成后此比值在任何距离上都是固定值。而x/(x+y+z)和z/(x+y+z)会随着l的变化而变化。在l较小时(距离较近时),z/(x+y+z)较大,x/(x+y+z)较小,也就是红外镜头的水平拍摄区域在非红外镜头的水平拍摄范围中比较靠右(以拍摄到的实际场景视角为准,如果红外镜头和非红外镜头的摆放位置相反,则此处为靠左)。在l增大时(距离拉远),z/(x+y+z)会减小,x/(x+y+z)会增大,也就是红外镜头的水平拍摄范围在非红外镜头的水平拍摄范围中会逐渐往左靠,当l为无穷远时,达到x/(x+y+z)的最大 极值和z/(x+y+z)的最小极值。
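The horizontal split derived above can be checked numerically. The sketch below implements the closed-form x, y, z shares (angles in radians; alpha is the infrared lens tilt, beta1 and gamma1 the horizontal fields of view of the infrared and non-infrared lenses, d the lens spacing, l the subject distance) and confirms that the overlap share y/(x+y+z) does not depend on l or d:

```python
import math

def horizontal_ratios(l, d, alpha, beta1, gamma1):
    """Left-gap, overlap, and right-gap shares of the non-infrared frame's
    horizontal extent at distance l, per the x, y, z formulas above."""
    t = math.tan
    x = l * t(gamma1 / 2) - l * t(beta1 / 2 - alpha) - d
    y = l * t(beta1 / 2 - alpha) + l * t(beta1 / 2 + alpha)
    z = l * t(gamma1 / 2) - l * t(beta1 / 2 + alpha) + d
    total = x + y + z                 # algebraically 2 * l * tan(gamma1 / 2)
    return x / total, y / total, z / total
```

Because y contains no d term and the total reduces to 2*l*tan(gamma1/2), the middle share is fixed once the hardware is assembled, which is why the crop width is constant at any distance while the left and right gaps shift with l.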
示例性的,在图像的垂直高度上,由于终端内设置的红外镜头无水平倾角,且红外镜头与非红外镜头等高。因此,假设人眼到终端前表面间的垂直距离为l,红外镜头的垂直视场角为β2,非红外镜头的垂直视场角为γ2时,若以终端侧方侧视来说明,该终端中的红外镜头与非红外镜头的实际光路如图8所示。
如图9所示,两镜头拍摄的图像的垂直区域的交叠中具体分割成如图9中的a,b,c三个部分,其中,b为红外镜头在此l距离时成像的垂直高度,M为该图像区域的垂直下边界,M’为该图像区域的垂直上边界,a+b+c为非红外镜头在此l距离时成像的垂直高度。如下所示:
根据几何关系可以得到:
a=l*tan(γ2/2)–l*tan(β2/2);
b=2*l*tan(β2/2);
c=l*tan(γ2/2)–l*tan(β2/2);
进一步可确定虹膜摄像头在前向摄像头于l距离处的图像纵向范围上下位置和关系:
a/(a+b+c)=(tan(γ2/2)–tan(β2/2))/(2*tan(γ2/2));
b/(a+b+c)=tan(β2/2)/tan(γ2/2);
c/(a+b+c)=(tan(γ2/2)–tan(β2/2))/(2*tan(γ2/2));
可见,a/(a+b+c),b/(a+b+c),c/(a+b+c)都为一个恒定数值,与l无关,也就是说,设备硬件组装完成后此比值在任何距离上都是固定值。
而至于l值的获取,有如下几种方式:
1)因为一般虹膜注册、识别等过程中,距离范围都是比较固定的,大约在25-35cm,因此可以直接将l取为定值,如30cm。
2)通过判定两瞳孔的距离,近似得到当前的l值。当前虹膜识别算法或者眼球捕获算法都可以做到这一点。
3）通过加入测距模块（例如，红外测距模块），获取精确的l值。好处是l的实时值非常精确。
4)通过具备AF(自动对焦)功能的前向摄像头获取准确l值。
因为l值是否精确只会影响到虹膜摄像头拍摄到的人眼位置和前向摄像头拍摄到的人眼位置是否严格重合,而实际上虹膜注册和识别时对人眼的位置都提供了很大的宽容度,向左右上下移动一定距离后依然可以正常的使用,因此l值是否精确并不是很严重的问题,如上的第一种最简便的方法其实就可以满足绝大多数场景要求了。
当然,若是想要将虹膜图案的替换图案在非红外镜头采集的非红外图像上的位置与虹膜图案在红外图像中位置严格重合,提高精准度,可以通过增加一个测距模块,来精确测量出人眼与终端屏幕正前方的垂直距离。
假如计算得到水平左侧留空占比为0.4,水平右侧留空占比为0.3,垂直上下留空占比为0.3,且非红外镜头的分辨率为1800*3200时,便可基于上述数值,确定出红外图像所针对的拍摄对象在第二非红外图像中所占据的图像区域的水平边界和垂直边界,具体的:
图像区域水平左边界的像素点位置为：0.4*1800=720；
水平右边界的像素点位置为：(1-0.3)*1800=1260；
垂直上边界的像素点位置为：0.3*3200=960；
垂直下边界的像素点位置为：(1-0.3)*3200=2240；
即上述红外图像所针对的拍摄对象在第二非红外图像中的水平左边界为该非红外镜头采集的非红外图像的自左向右第720个像素点，水平右边界为自左向右第1260个像素点，垂直上边界为自上向下第960个像素点，垂直下边界为自上向下第2240个像素点，具体的，可以参照图10所示的切割线进行切割。
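The worked example reduces to mapping the four blank-space shares to pixel indices on the non-infrared frame. A minimal sketch, with all boundaries counted from the frame's top-left corner:

```python
def crop_box(gaps, resolution):
    """Pixel boundaries of the infrared lens's footprint inside the
    non-infrared frame. `gaps` holds the blank-space shares
    (left, right, top, bottom); `resolution` is (width, height)."""
    left_gap, right_gap, top_gap, bottom_gap = gaps
    width, height = resolution
    left   = round(left_gap * width)           # pixels skipped on the left
    right  = round((1 - right_gap) * width)    # counted from the left edge
    top    = round(top_gap * height)
    bottom = round((1 - bottom_gap) * height)
    return left, top, right, bottom
```

With the gap shares of the example (0.4 left, 0.3 right, 0.3 top and bottom) on an 1800x3200 frame, this yields the box (720, 960, 1260, 2240), and the region between those boundaries is the part of the non-infrared image that is cut out for preview.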
实施例二:
上述的实例一以非红外镜头采集的非红外图像进行剪裁作为虹膜注册和识别时预览画面的方法在光线比较充足的情况下效果较好，当光线不足时（可以通过光线传感器来获取光线强度，并设定一个阈值作为光线充足与光线不足的判定），由于非红外镜头拍摄到的画面可能会比较暗，甚至漆黑一片什么都看不到，此时若是再使用实例一的方案，效果不好。
这样本实施例提供的技术方案,通过结合眼球捕获技术,无需使用非红外镜头实施采集非红外图像,直接使用预存的非红外图像(如卡通图案或其他暖色调图案)来替换红外预览图像,来完成眼球的对正过程,从而以更好的体验完成虹膜注册和识别的方法,减少用户的抵触情绪。
示例性的,如图11所示,步骤102中的获取包含虹膜图案的替换图案的非红外图像的过程具体包具体的包括如下步骤:
102B、获取图像预览区域的背景图像,并获取虹膜图案的替换图案。
本实施例中的背景图像与替换图案均为非红外图像。
基于步骤102B,步骤103具体包括:
103B1、获取虹膜图案在红外图像中的位置,作为虹膜图案的替换图案在预览区域中的待显示位置。示例的,虹膜图案在红外图像中的位置,可以是虹膜图案的中心点在红外图像中的坐标;虹膜图案的至少一个边缘点(最好是多个)在红外图像中的坐标。
103B2、将背景图像显示在预览区域中,并将替换图案在待显示位置、于背景图像的上层显示。
本实施例在控制替换图案在待显示位置显示时,既可以直接在背景图像上直接叠加显示,即采用一图像在另一个图像上悬浮显示的技术;也可以利用替换图案替代背景图像中对应待显示位置的部分,并将显示替代后的背景图像,即利用替换图案替代背景图像中对应待显示位置的部分,将替换图案和背景图像合成新的图像,再显示该新的图像即可。当然,本实施例也可以在显示背景图像与该替换图案之前,将该替换图案与该背景图像进行组合形成一幅图像 后,将组合后的图像直接显示在该预览区域内,这里并不做限制。
进一步的,在步骤102B2之前,该方法还包括:
C1、获取第一比值,该第一比值为虹膜图案的面积与红外图像的面积的比值。
C2、根据该第一比值确定虹膜图案的替换图案的大小,使虹膜图案的替换图案的面积与所述背景图像的面积的比值为第一比值。
基于步骤C2,步骤102B2具体可以通过下述过程实现:将虹膜图案的替换图案按照确定出的虹膜图案的替换图案的大小在待显示位置、于背景图像的上层显示。这样,用户可以通过屏幕看出虹膜图案的大小,从而感知到与终端设备屏幕的远近,以调整终端设备与红外图像所针对的拍摄对象之间的距离。
示例性的,该背景图像中具有定位标识,上述的定位标识在背景图像中的位置为虹膜识别程序期望虹膜所在的位置,该定位标识可以是某一个特定的图案,例如可以是黑点、或与替换图案形状相同的图案等。从用户观看的角度,虹膜图案的替换图案会在用户的眼球对准操作下,逐渐向该定位标识移动,这样经过上述的眼球对准操作,便可获取到虹膜细节特征更为清晰的红外图像,然后,从该红外图像中提取出虹膜细节特征,并对该虹膜细节特征进行保存以完成注册,或者用该虹膜细节特征进行特征比对以完成识别过程。进一步的,当识别结果为匹配成功时,则可以进一步控制终端设备的解锁、启动终端设备中某些应用,或启动终端设备中某些应用的某特定功能(例如支付)等。
示例性的,当该背景图像显示在该图像预览区域后,便可通过调整用户的远近和上下左右方位,将红外眼球图像的替换图案显示在该指示图像所在位置,也就相当于在虹膜图案中将眼睛对到正确位置。例如,参照图12a所示,图像预览区域的背景图像为一个卡通图像(这里仅作示例,这里对图像的大小、形状、图案不做限定)中少了两个圆,而红外眼球图像的替换图案与这两个圆对应图像相同,用户可以通过调整自身的远近及位置,直到将该虹膜图案的替 换图案显示至该卡通图像所缺失的这两个圆中;或者,参照图12b所示,图像预览区域的背景图像为一张非红外图像中显示有两个星星图标(这里仅作示例,这里对图像的大小、形状、图案不做限定),用户可以通过调整自身的远近及位置,直到将该虹膜图案的替换图案显示至该星星图标所在位置,为了方便进行眼球对正,进一步的可以将该红外虹膜图案的替换图案设置成该星星图标图案。
当然,因虹膜识别过程对远近距离和上下左右方位有一定的包容度,有可能在完全对正之前就已经完成了虹膜认证。卡通拼图方式主要用于引导用户的眼睛到达最理想的识别位置。
当然,除了上述的拼图或对准方式,还可以有很多不同的呈现方式,其本质是通过眼球捕获技术得到眼球的实时位置,并引导用户将眼睛的位置移动到虹膜程序期望的位置区域。
需要说明的是,本实施例所提供的方案在光线充足或者不充足时都可使用,但考虑用户如果看到自己的脸部在注册和识别时会比较直观,所以建议光线充足时采用实例一所提供的方案进行图像预览,在光线不足时采用实例二所提供的方案进行图像预览。
由此可见,本方案通过眼球捕获技术(通过红外图像)获取人眼的实时位置,然后在屏幕的图像预览区域上不显示实时的红外预览图像,而是显示包含红外虹膜图案的非红外图像,从而完成虹膜注册或识别过程中的眼球对正过程,因为不必再面对红外光下的自拍图像,避免了用户的抵触情绪,使用户体验得到较大提升。
上述主要从基于虹膜识别的图像预览装置、以及该装置所应用的终端角度对本发明实施例提供的方案进行了介绍。可以理解的是,该装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本发明能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方 法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
本发明实施例可以根据上述方法示例对基于虹膜识别的图像预览装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
下面说明本发明实施例提供的与上文所提供的方法实施例相对应的装置实施例。需要说明的是,下述装置实施例中相关内容的解释,均可以参考上述方法实施例。
在采用对应各个功能划分各个功能模块的情况下，图13示出了上述实施例中所涉及的基于虹膜识别的图像预览装置的一种可能的结构示意图，基于虹膜识别的图像预览装置4包括：获取模块41、控制模块42和确定模块43。获取模块41用于支持基于虹膜识别的图像预览装置执行图1中的步骤101、102；控制模块42用于支持基于虹膜识别的图像预览装置执行图1中的步骤103。进一步的，获取模块41具体用于支持基于虹膜识别的图像预览装置执行图4中的步骤101A以及步骤102A1、102A2；更进一步的，获取模块41具体用于支持基于虹膜识别的图像预览装置执行实例一中的步骤102A1’。进一步的，获取模块41具体用于支持基于虹膜识别的图像预览装置执行图11中的步骤102B、103B1以及103B2，更进一步的，获取模块41具体用于支持基于虹膜识别的图像预览装置执行上文中的步骤C1，确定模块43具体用于支持基于虹膜识别的图像预览装置执行上文中的步骤C2。其中，上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述，在此不再赘述。
在硬件实现上,上述的获取模块41、控制模块42、确定模块43可以是处理器。上述基于虹膜识别的图像预览装置所执行的动作 所对应的程序均可以以软件形式存储于基于虹膜识别的图像预览装置的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在采用集成的单元的情况下,图14示出了上述实施例中所涉及的基于虹膜识别的图像预览装置的一种可能的结构示意图。基于虹膜识别的图像预览装置5包括:处理器51、存储器52、系统总线53和通信接口54。存储器52用于存储计算机执行代码,处理器51与存储器52通过系统总线53连接,当装置运行时,处理器51用于执行存储器52存储的计算机执行代码,以执行本发明实施例提供的任意一种基于虹膜识别的图像预览方法,如,处理器51用于支持基于虹膜识别的图像预览装置执行图1、4、11中的全部步骤,和/或用于本文所描述的技术的其它过程,具体的基于虹膜识别的图像预览方法可参考下文及附图中的相关描述,此处不再赘述。
本发明实施例还提供一种存储介质,该存储介质可以包括存储器52。
本发明实施例还提供一种计算机程序,该计算机程序可直接加载到存储器52中,并含有软件代码,该计算机程序经由计算机载入并执行后能够实现上述的基于虹膜识别的图像预览方法。
处理器51可以是一个处理器,也可以是多个处理元件的统称。例如,处理器51可以为中央处理器(central processing unit,CPU)。处理器51也可以为其他通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,模块和电路。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。处理器51还可以为专用处理器,该专用处理器可以包括基带处理芯片、射频处理芯片等中的至少一个。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。进一步地, 该专用处理器还可以包括具有该装置其他专用处理功能的芯片。
结合本发明公开内容所描述的方法的步骤可以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(英文:random access memory,缩写:RAM)、闪存、只读存储器(英文:read only memory,缩写:ROM)、可擦除可编程只读存储器(英文:erasable programmable ROM,缩写:EPROM)、电可擦可编程只读存储器(英文:electrically EPROM,缩写:EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于终端设备中。当然,处理器和存储介质也可以作为分立组件存在于终端设备中。
系统总线53可以包括数据总线、电源总线、控制总线和信号状态总线等。本实施例中为了清楚说明,在图14中将各种总线都示意为系统总线53。
通信接口54具体可以是该装置上的收发器。该收发器可以为无线收发器。例如,无线收发器可以是该装置的天线等。处理器51通过通信接口54与其他设备,例如,若该装置为该终端设备中的一个模块或组件时,该装置用于与该终端设备中的其他模块之间进行数据交互,如,该装置与该终端设备的显示模块进行数据交互,控制该显示模块将非红外图像或红外图像在图像预览区域内显示。
若上述实施例中所涉及的基于虹膜识别的图像预览装置为终端设备时,图15示出了上述实施例中所涉及的基于虹膜识别的图像预览装置的一种可能的结构示意图。该装置包括:处理器61、存储器62、系统总线63、通信接口64、红外镜头65、非红外镜头66以及显示单元67,其中,上述的处理器61与红外镜头65和非红外镜头66相连,从而获取由红外镜头65采集的红外图像以及由非红外镜 头66采集的非红外图像进行图像处理,同时,上述的处理器61还与显示单元67相连,用于控制显示单元67显示图像。
需要说明的时,本实施例中的处理器61、存储器62、系统总线63、通信接口64的介绍,可以参照图13对应实施例中的相关描述,这里不再赘述。
显示单元67可用于显示由用户输入的信息或提供给用户的信息以及终端的各种菜单。显示单元67可包括显示面板,可选的,可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等形式来配置显示面板。进一步的,触摸屏可覆盖显示面板,当触摸屏检测到在其上或附近的触摸操作后,传送给处理器61以确定触摸事件的类型,随后处理器61根据触摸事件的类型在显示面板上提供相应的视觉输出。
本发明实施例还提供一种机器人,该机器人包括图14、15对应的基于虹膜识别的图像预览装置。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本发明所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
最后应说明的是:以上所述的具体实施方式,对本发明的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已,并不用于限定本发明的保护范围,凡在本发明的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本发明的保护范围之内。

Claims (20)

  1. An image preview method based on iris recognition, characterized by comprising:
    acquiring an infrared image captured by an infrared lens;
    in a case where the infrared image contains an iris pattern, acquiring a first non-infrared image corresponding to the infrared image, wherein the first non-infrared image contains a replacement pattern for the iris pattern, and the position of the replacement pattern in the first non-infrared image is determined by the position of the iris pattern in the infrared image;
    displaying the first non-infrared image in an image preview area.
  2. The method according to claim 1, characterized in that the method further comprises:
    when acquiring the infrared image captured by the infrared lens, acquiring a second non-infrared image captured by a non-infrared lens;
    wherein acquiring the first non-infrared image of the infrared image specifically comprises:
    determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image;
    extracting, from the second non-infrared image, the image located in the image region as the first non-infrared image.
  3. The method according to claim 2, characterized in that determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image specifically comprises:
    determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image according to the field of view of the infrared lens, the field of view of the non-infrared lens, the relative position between the infrared lens and the non-infrared lens, and the resolution of the non-infrared lens.
  4. The method according to claim 2 or 3, characterized in that determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image specifically comprises:
    determining the horizontal boundaries of the image region occupied, in the second non-infrared image, by the subject captured in the infrared image according to the horizontal field of view of the infrared lens, the horizontal field of view of the non-infrared lens, the horizontal tilt angle of the infrared lens, the distance between the infrared lens and the non-infrared lens, and the resolution of the non-infrared lens;
    and determining the vertical boundaries of the image region according to the vertical field of view of the infrared lens, the vertical field of view of the non-infrared lens, and the resolution of the non-infrared lens.
  5. The method according to claim 4, characterized in that determining the horizontal boundaries of the image region occupied, in the second non-infrared image, by the subject captured in the infrared image specifically comprises:
    calculating, according to the horizontal fields of view of the infrared lens and the non-infrared lens, the horizontal tilt angle of the infrared lens, and the distance between the infrared lens and the non-infrared lens, the proportion of the blank widths on both sides of the image region within the horizontal width of the second non-infrared image;
    obtaining the horizontal boundaries of the image region according to the width proportion and the resolution of the non-infrared lens.
  6. The method according to claim 4 or 5, characterized in that determining the vertical boundaries of the image region according to the vertical fields of view of the infrared lens and the non-infrared lens and the resolution of the non-infrared lens specifically comprises:
    calculating, according to the vertical fields of view of the infrared lens and the non-infrared lens, the proportion of the blank heights above and below the image region within the vertical height of the second non-infrared image;
    obtaining the vertical boundaries of the image region according to the height proportion and the resolution of the non-infrared lens.
  7. The method according to claim 1, characterized in that acquiring the first non-infrared image of the infrared image specifically comprises:
    acquiring a background image for the image preview area and acquiring a replacement pattern for the iris pattern, wherein both the background image and the replacement pattern are non-infrared images;
    and displaying the first non-infrared image in the image preview area specifically comprises:
    acquiring the position of the iris pattern in the infrared image as the display position of the replacement pattern for the iris pattern in the preview area;
    displaying the background image in the image preview area, and displaying the replacement pattern at the display position, overlaid on the background image.
  8. The method according to claim 7, characterized in that, before displaying the replacement pattern for the iris pattern at the display position, overlaid on the background image, the method further comprises:
    acquiring a first ratio, the first ratio being the ratio of the area of the iris pattern to the area of the infrared image;
    determining the size of the replacement pattern for the iris pattern according to the first ratio, so that the ratio of the area of the replacement pattern to the area of the background image equals the first ratio.
  9. An image preview apparatus based on iris recognition, characterized by comprising:
    an acquisition module, configured to acquire an infrared image captured by an infrared lens;
    the acquisition module being further configured to, in a case where the infrared image contains an iris pattern, acquire a first non-infrared image corresponding to the infrared image, wherein the first non-infrared image contains a replacement pattern for the iris pattern, and the position of the replacement pattern in the first non-infrared image is determined by the position of the iris pattern in the infrared image;
    a control module, configured to display, in an image preview area, the first non-infrared image acquired by the acquisition module.
  10. The apparatus according to claim 9, characterized in that the acquisition module is further configured to:
    when acquiring the infrared image captured by the infrared lens, acquire a second non-infrared image captured by a non-infrared lens;
    and, when acquiring the first non-infrared image of the infrared image, the acquisition module is specifically configured to:
    determine the image region occupied, in the second non-infrared image, by the subject captured in the infrared image;
    extract, from the second non-infrared image, the image located in the image region as the first non-infrared image.
  11. The apparatus according to claim 10, characterized in that, when determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image, the acquisition module is specifically configured to:
    determine the image region occupied, in the second non-infrared image, by the subject captured in the infrared image according to the field of view of the infrared lens, the field of view of the non-infrared lens, the relative position between the infrared lens and the non-infrared lens, and the resolution of the non-infrared lens.
  12. The apparatus according to claim 10 or 11, characterized in that, when determining the image region occupied, in the second non-infrared image, by the subject captured in the infrared image, the acquisition module is specifically configured to:
    determine the horizontal boundaries of the image region occupied, in the second non-infrared image, by the subject captured in the infrared image according to the horizontal field of view of the infrared lens, the horizontal field of view of the non-infrared lens, the horizontal tilt angle of the infrared lens, the distance between the infrared lens and the non-infrared lens, and the resolution of the non-infrared lens;
    and determine the vertical boundaries of the image region according to the vertical field of view of the infrared lens, the vertical field of view of the non-infrared lens, and the resolution of the non-infrared lens.
  13. The apparatus according to claim 12, characterized in that, when determining the horizontal boundaries of the image region occupied, in the second non-infrared image, by the subject captured in the infrared image, the acquisition module is specifically configured to:
    calculate, according to the horizontal field of view of the infrared lens, the horizontal field of view of the non-infrared lens, the horizontal tilt angle of the infrared lens, and the distance between the infrared lens and the non-infrared lens, the proportion of the blank widths on both sides of the image region within the horizontal width of the second non-infrared image;
    obtain the horizontal boundaries of the image region according to the width proportion and the resolution of the non-infrared lens.
  14. The apparatus according to claim 12 or 13, characterized in that, when determining the vertical boundaries of the image region according to the vertical field of view of the infrared lens, the vertical field of view of the non-infrared lens, and the resolution of the non-infrared lens, the acquisition module is specifically configured to:
    calculate, according to the vertical fields of view of the infrared lens and the non-infrared lens, the proportion of the blank heights above and below the image region within the vertical height of the second non-infrared image;
    obtain the vertical boundaries of the image region according to the height proportion and the resolution of the non-infrared lens.
  15. The apparatus according to claim 9, characterized in that the acquisition module is specifically configured to:
    acquire a background image for the image preview area and acquire a replacement pattern for the iris pattern, wherein both the background image and the replacement pattern are non-infrared images;
    and the control module is specifically configured to:
    acquire the position of the iris pattern in the infrared image as the display position of the replacement pattern for the iris pattern in the preview area;
    display the background image in the preview area, and display the replacement pattern at the display position, overlaid on the background image.
  16. The apparatus according to claim 15, characterized in that:
    the acquisition module is further configured to acquire a first ratio, the first ratio being the ratio of the area of the iris pattern to the area of the infrared image;
    and the apparatus further comprises a determination module, configured to determine the size of the replacement pattern for the iris pattern according to the first ratio acquired by the acquisition module, so that the ratio of the area of the replacement pattern to the area of the background image equals the first ratio.
  17. An image preview apparatus based on iris recognition, characterized in that the apparatus comprises a memory and a processor, the memory being configured to store computer-executable code, and the computer-executable code being used to control the processor to perform the iris-recognition-based image preview method according to any one of claims 1 to 8.
  18. A computer storage medium, characterized by storing computer software instructions used by an iris-recognition-based image preview apparatus, the instructions comprising program code designed to perform the iris-recognition-based image preview method according to any one of claims 1 to 8.
  19. A computer program, characterized in that it can be loaded directly into the internal memory of a computer and contains software code which, when loaded and executed by the computer, implements the iris-recognition-based image preview method according to any one of claims 1 to 8.
  20. A robot, characterized by comprising the iris-recognition-based image preview apparatus according to claim 17.
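The region geometry recited in claims 4 to 6 and the pattern scaling of claim 8 can be sketched as follows. This is an illustrative pinhole-camera sketch only, not the patent's own formulas: the assumed subject-plane distance `subject_dist_m` and every function and parameter name are introduced here for illustration.

```python
import math


def vertical_bounds(ir_vfov_deg, rgb_vfov_deg, rgb_height_px):
    """Claim 6 sketch: with both optical axes level, the IR lens covers a
    centered strip of the RGB frame whose height fraction is
    tan(ir_vfov/2) / tan(rgb_vfov/2); the blank bands split the remainder
    equally above and below."""
    ratio = (math.tan(math.radians(ir_vfov_deg) / 2)
             / math.tan(math.radians(rgb_vfov_deg) / 2))
    margin = (1.0 - ratio) / 2.0  # blank-height fraction above / below
    top = int(round(rgb_height_px * margin))
    bottom = int(round(rgb_height_px * (1.0 - margin)))
    return top, bottom


def horizontal_bounds(ir_hfov_deg, rgb_hfov_deg, tilt_deg,
                      lens_gap_m, subject_dist_m, rgb_width_px):
    """Claims 4-5 sketch: the IR lens sits lens_gap_m from the RGB lens and
    is tilted horizontally by tilt_deg, so its footprint on a subject plane
    at subject_dist_m (an assumed working distance) is shifted before being
    mapped into RGB pixel columns."""
    half_rgb = subject_dist_m * math.tan(math.radians(rgb_hfov_deg) / 2)
    left = lens_gap_m + subject_dist_m * math.tan(
        math.radians(tilt_deg - ir_hfov_deg / 2))
    right = lens_gap_m + subject_dist_m * math.tan(
        math.radians(tilt_deg + ir_hfov_deg / 2))
    # Map a subject-plane x coordinate (relative to the RGB optical axis)
    # into an RGB pixel column.
    to_px = lambda x: int(round((x + half_rgb) / (2 * half_rgb) * rgb_width_px))
    return to_px(left), to_px(right)


def replacement_area(iris_area_px, ir_image_area_px, background_area_px):
    """Claim 8 sketch: size the replacement pattern so that its share of the
    background equals the iris pattern's share of the infrared image."""
    return iris_area_px / ir_image_area_px * background_area_px
```

For example, with a 40° IR and 60° RGB vertical field of view on a 1080-pixel-high frame, the IR strip occupies roughly the middle 63% of the frame height, with the blank bands split between top and bottom.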
PCT/CN2016/102724 2016-10-20 2016-10-20 An image preview method and apparatus based on iris recognition WO2018072179A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680006897.6A CN107223255B (zh) 2016-10-20 2016-10-20 An image preview method and apparatus based on iris recognition
PCT/CN2016/102724 WO2018072179A1 (zh) 2016-10-20 2016-10-20 An image preview method and apparatus based on iris recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102724 WO2018072179A1 (zh) 2016-10-20 2016-10-20 An image preview method and apparatus based on iris recognition

Publications (1)

Publication Number Publication Date
WO2018072179A1 true WO2018072179A1 (zh) 2018-04-26

Family

ID=59927875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102724 WO2018072179A1 (zh) 2016-10-20 2016-10-20 An image preview method and apparatus based on iris recognition

Country Status (2)

Country Link
CN (1) CN107223255B (zh)
WO (1) WO2018072179A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519540A (zh) * 2019-08-29 2019-11-29 深圳市道通智能航空技术有限公司 An image processing method, apparatus, device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002796B (zh) 2018-07-16 2020-08-04 阿里巴巴集团控股有限公司 An image capture method, apparatus and system, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101051349A (zh) * 2007-05-18 2007-10-10 北京中科虹霸科技有限公司 Multi-camera iris capture device using active visual feedback
CN101369311A (zh) * 2008-09-26 2009-02-18 北京中科虹霸科技有限公司 A miniaturized iris recognition module using active visual feedback
CN103106393A (zh) * 2012-12-12 2013-05-15 袁培江 An embedded face recognition intelligent identity authentication system based on a robot platform
CN105138996A (zh) * 2015-09-01 2015-12-09 北京上古视觉科技有限公司 An iris recognition system with liveness detection
CN105956538A (zh) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Image presentation apparatus and method based on an RGB camera and an iris camera

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007201671A (ja) * 2006-01-25 2007-08-09 Pentax Corp Digital single-lens reflex camera
CN204129314U (zh) * 2014-05-22 2015-01-28 宁波舜宇光电信息有限公司 A camera optical lens assembly and iris camera module
KR102287751B1 (ko) * 2014-09-25 2021-08-09 삼성전자 주식회사 Method and apparatus for iris recognition in an electronic device
CN105118055B (zh) * 2015-08-11 2017-12-15 北京电影学院 Camera positioning correction and calibration method and system


Also Published As

Publication number Publication date
CN107223255B (zh) 2020-11-17
CN107223255A (zh) 2017-09-29

Similar Documents

Publication Publication Date Title
CN107105130B (zh) Electronic device and operating method thereof
US10928904B1 (en) User recognition and gaze tracking in a video system
US11163995B2 (en) User recognition and gaze tracking in a video system
JP5949319B2 (ja) Gaze detection apparatus and gaze detection method
US9589325B2 (en) Method for determining display mode of screen, and terminal device
US9058519B2 (en) System and method for passive live person verification using real-time eye reflection
JP6052399B2 (ja) Image processing program, image processing method, and information terminal
WO2015180659A1 (zh) Image processing method and image processing apparatus
US20170091550A1 (en) Multispectral eye analysis for identity authentication
US20160019420A1 (en) Multispectral eye analysis for identity authentication
CN104104867A (zh) Method and apparatus for controlling an imaging device to capture images
WO2016010724A1 (en) Multispectral eye analysis for identity authentication
JP5655644B2 (ja) Gaze detection apparatus and gaze detection method
JP2005334402A (ja) Authentication method and authentication apparatus
US20140313230A1 (en) Transformation of image data based on user position
WO2014084249A1 (ja) Face authentication apparatus, authentication method and program therefor, and information device
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
JP2018205819A (ja) Computer program for gaze position detection, gaze position detection apparatus, and gaze position detection method
WO2018072179A1 (zh) An image preview method and apparatus based on iris recognition
JP7226477B2 (ja) Information processing system, information processing method, and storage medium
JP5416489B2 (ja) Three-dimensional fingertip position detection method, three-dimensional fingertip position detection apparatus, and program
JP7223303B2 (ja) Information processing apparatus, information processing system, information processing method, and program
WO2018072178A1 (zh) An image preview method and apparatus based on iris recognition
JP2007236668A (ja) Imaging apparatus, authentication apparatus, and imaging method
JP2005334403A (ja) Authentication method and authentication apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919063

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 04.09.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16919063

Country of ref document: EP

Kind code of ref document: A1